Mirror of https://github.com/SpudGunMan/meshing-around.git (synced 2026-03-28 17:32:36 +01:00)
# Compare commits

22 Commits (the author and date columns were not preserved in this mirror):

| SHA1 |
|---|
| eb3bbdd3c5 |
| 1ac816ca37 |
| 33cf18cde5 |
| 0c0d53dd78 |
| 1959ee7560 |
| ee13401b5a |
| 78b1cf4af5 |
| 0599260e31 |
| 08dd921088 |
| e66e938d7d |
| b5b7d2a9d2 |
| 46298d555b |
| 8fb34b5fde |
| 28f8986837 |
| e968173f61 |
| f703a8868b |
| 0a29e5f156 |
| c5c28ee042 |
| 44ca43399d |
| 13a47d822d |
| 5621cd90bb |
| 9f7055ffd2 |
## Dockerfile (new file, 18 lines)
```diff
@@ -0,0 +1,18 @@
+FROM python:3.10-slim
+ENV PYTHONUNBUFFERED=1
+
+RUN apt-get update && apt-get install -y gettext && rm -rf /var/lib/apt/lists/*
+
+
+WORKDIR /app
+COPY . /app
+COPY requirements.txt .
+
+RUN pip install -r requirements.txt
+COPY . .
+
+COPY config.ini /app/config.ini
+COPY entrypoint.sh /app/entrypoint.sh
+
+RUN chmod +x /app/entrypoint.sh
+ENTRYPOINT ["/app/entrypoint.sh"]
```
## README.md (20 lines changed)
````diff
@@ -10,7 +10,7 @@ Along with network testing, this bot has a lot of other fun features, like simpl
 
 The bot is also capable of using dual radio/nodes, so you can monitor two networks at the same time and send messages to nodes using the same `bbspost @nodeNumber #message` or `bbspost @nodeShortName #message` function. There is a small message board to fit in the constraints of Meshtastic for posting bulletin messages with `bbspost $subject #message`.
 
-Look up data using wiki results or interact with the [Ollama](https://ollama.com) LLM AI; see the [OllamaDocs](https://github.com/ollama/ollama/tree/main/docs). If Ollama is enabled you can DM the bot directly.
+Look up data using wiki results or interact with the [Ollama](https://ollama.com) LLM AI; see the [OllamaDocs](https://github.com/ollama/ollama/tree/main/docs). If Ollama is enabled you can DM the bot directly. The default model for mesh-bot is currently `gemma2:2b`.
 
 The bot will report on anyone who is getting close to the configured lat/long, if in a remote location.
 
@@ -63,6 +63,11 @@ Optionally:
 - `install.sh` will automate optional venv and requirements installation.
 - `launch.sh` will activate and launch the app in the venv if built.
+
+For Docker:
+- `git clone https://github.com/spudgunman/meshing-around`
+- `cd meshing-around && docker build -t meshing-around`
+- `docker run meshing-around`
 
 ### Configurations
 Copy the [config.template](config.template) to `config.ini` and set the appropriate interface for your method (serial/ble/tcp). While BLE and TCP will work, they are not as reliable as serial connections. There is a watchdog to reconnect TCP if possible. To get the BLE MAC, run `meshtastic --ble-scan`. **NOTE** I have only tested with a single BLE device, and the code is written to allow only one interface to be a BLE port.
 
@@ -148,6 +153,15 @@ signalHoldTime = 10
 signalCooldown = 5
 signalCycleLimit = 5
 ```
+Ollama Settings: for Ollama to work, the command line `ollama run 'model'` needs to work properly. Check that you have enough RAM and that your GPU is working as expected. The default model for this project is set to `gemma2:2b` (run `ollama pull gemma2:2b` on the command line to download and set it up); I have found gemma2:2b to be lighter, faster, and better overall vs llama3.1 (`ollama pull llama3.1`).
+- From the command terminal of your system with mesh-bot, download the default model for mesh-bot, which is currently `ollama pull gemma2:2b`
+
+```
+# Enable ollama LLM see more at https://ollama.com
+ollama = True
+# Ollama model to use (defaults to llama3.1)
+ollamaModel = gemma2:2b
+```
+
 Logging messages to disk or Syslog to disk uses the Python native logging module. Take a look at [/modules/log.py](/modules/log.py): you can set the file logger for syslog to INFO, for example, to not log DEBUG messages to the file log, or modify the stdOut level.
 ```
````
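The per-handler level split described above (file log at INFO, console at DEBUG) can be sketched with the standard `logging` module. This is a minimal illustration, not the actual setup in `modules/log.py`; the logger and file names are hypothetical:

```python
import logging

# Two handlers on one logger, mirroring the file/stdout split described
# above: the file handler filters out DEBUG, the console handler keeps it.
logger = logging.getLogger("meshbot-demo")
logger.setLevel(logging.DEBUG)

file_handler = logging.FileHandler("/tmp/meshbot-demo.log", mode="w")
file_handler.setLevel(logging.INFO)      # file log skips DEBUG messages

stdout_handler = logging.StreamHandler()
stdout_handler.setLevel(logging.DEBUG)   # console still shows everything

logger.addHandler(file_handler)
logger.addHandler(stdout_handler)

logger.debug("not written to file")      # below the file handler's level
logger.info("written to both")
```

Raising `file_handler` to INFO is the same idea as the syslog-to-INFO change the README suggests: the logger still emits DEBUG records, but the file handler drops them.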
````diff
@@ -173,7 +187,7 @@ The Scheduler is enabled in the [settings.py](modules/settings.py) by setting `s
 #schedule.every().wednesday.at("19:00").do(lambda: send_message("Net Starting Now", 2, 0, 1))
 ```
 # requirements
-Python 3.4 and likely higher is needed, developed on latest release.
+Python 3.10 minimally is needed, developed on latest release.
 
 The following can also be installed with `pip install -r requirements.txt` or using the install.sh script for venv and automation
````
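The commented `schedule` rule above fires every Wednesday at 19:00. If you want the same trigger time without the `schedule` dependency, the next-run time can be computed with the standard library; this is a sketch of the idea, not code from the bot:

```python
from datetime import datetime, timedelta

def next_weekly(after: datetime, weekday: int, hour: int, minute: int) -> datetime:
    """Return the next occurrence of weekday (Mon=0) at hour:minute after 'after'."""
    days_ahead = (weekday - after.weekday()) % 7
    candidate = (after + timedelta(days=days_ahead)).replace(
        hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= after:
        candidate += timedelta(days=7)   # already past this week's slot
    return candidate

# Example: next Wednesday (weekday 2) at 19:00 after a fixed reference time
ref = datetime(2024, 8, 5, 12, 0)        # a Monday
print(next_weekly(ref, 2, 19, 0))        # 2024-08-07 19:00:00
```

A loop could sleep until that timestamp and then call the bot's `send_message`, which is roughly what `schedule.run_pending()` does for you.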
```diff
@@ -215,6 +229,6 @@ I used ideas and snippets from other responder bots and want to call them out!
 - https://github.com/pdxlocations/meshtastic-Python-Examples
 - https://github.com/geoffwhittington/meshtastic-matrix-relay
 
-GitHub user PiDiBi looking at test functions and other suggestions like wxc, CPU use, and alerting ideas
+GitHub user mrpatrick1991 for Docker configs; PiDiBi looking at test functions and other suggestions like wxc, CPU use, and alerting ideas
 Discord and Mesh user Cisien, and github Hailo1999, for testing and ideas!
```
## config.template (file header not preserved in mirror)

```diff
@@ -38,6 +38,8 @@ spaceWeather = True
 wikipedia = True
 # Enable ollama LLM see more at https://ollama.com
 ollama = False
+# Ollama model to use (defaults to gemma2:2b)
+# ollamaModel = llama3.1
 # StoreForward Enabled and Limits
 StoreForward = True
 StoreLimit = 3
@@ -104,4 +106,4 @@ signalDetectionThreshold = -10
 signalHoldTime = 10
 # the following are combined to reset the monitor
 signalCooldown = 5
-signalCycleLimit = 5
+signalCycleLimit = 5
```
## entrypoint.sh (new file, 6 lines)
```diff
@@ -0,0 +1,6 @@
+#!/bin/bash
+
+# Substitute environment variables in the config file
+envsubst < /app/config.ini > /app/config.tmp && mv /app/config.tmp /app/config.ini
+
+exec python /app/mesh_bot.py
```
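`envsubst` (from gettext, which the Dockerfile installs) expands `$VAR` and `${VAR}` references in `config.ini` from the container's environment at startup. Python's `string.Template` gives a rough stand-in for that substitution; this is only an illustration of the behavior, and the variable names are hypothetical:

```python
import os
from string import Template

os.environ["MESH_INTERFACE"] = "serial"   # value that `docker run -e` would inject

config_text = "type = ${MESH_INTERFACE}\nport = ${MESH_PORT}\n"

# Rough stand-in for: envsubst < config.ini > config.tmp
# (unlike envsubst, safe_substitute leaves undefined variables in place
# rather than replacing them with an empty string)
rendered = Template(config_text).safe_substitute(os.environ)
print(rendered)
```

This is what lets one baked-in `config.ini` serve many deployments: the template is rewritten in place before `mesh_bot.py` starts.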
## mesh_bot.py (file header not preserved in mirror)

```diff
@@ -96,6 +96,7 @@ def handle_wxalert(message_from_id, deviceID, message):
     return weatherAlert
 
 def handle_wiki(message):
+    # location = get_node_location(message_from_id, deviceID)
     if "wiki:" in message.lower():
         search = message.split(":")[1]
         search = search.strip()
@@ -327,6 +328,8 @@ def onReceive(packet, interface):
         send_message(message, channel_number, message_from_id, rxNode)
 
     # check for a message packet and process it
+    snr = 0
+    rssi = 0
     try:
         if 'decoded' in packet and packet['decoded']['portnum'] == 'TEXT_MESSAGE_APP':
             message_bytes = packet['decoded']['payload']
@@ -456,9 +459,9 @@ def onReceive(packet, interface):
 async def start_rx():
     print (CustomFormatter.bold_white + f"\nMeshtastic Autoresponder Bot CTL+C to exit\n" + CustomFormatter.reset)
     if llm_enabled:
-        logger.debug(f"System: Ollama LLM Enabled, loading model please wait")
+        logger.debug(f"System: Ollama LLM Enabled, loading model {llmModel} please wait")
         llm_query(" ", myNodeNum1)
-        logger.debug(f"System: LLM model loaded")
+        logger.debug(f"System: LLM model {llmModel} loaded")
     # Start the receive subscriber using pubsub via meshtastic library
     pub.subscribe(onReceive, 'meshtastic.receive')
     pub.subscribe(onDisconnect, 'meshtastic.connection.lost')
```
@@ -1,33 +1,53 @@
|
||||
#!/usr/bin/env python3
|
||||
# LLM Module vDev
|
||||
# LLM Module for meshing-around
|
||||
# This module is used to interact with Ollama to generate responses to user input
|
||||
# K7MHI Kelly Keeton 2024
|
||||
from modules.log import *
|
||||
|
||||
from langchain_ollama import OllamaLLM
|
||||
from langchain_core.prompts import ChatPromptTemplate
|
||||
from langchain_core.messages import AIMessage, HumanMessage
|
||||
|
||||
# LLM System Variables
|
||||
llmEnableHistory = False
|
||||
llm_history_limit = 6 # limit the history to 3 messages (come in pairs)
|
||||
antiFloodLLM = []
|
||||
llmChat_history = []
|
||||
trap_list_llm = ("ask:",)
|
||||
|
||||
meshBotAI = """
|
||||
FROM llama3.1
|
||||
FROM {llmModel}
|
||||
SYSTEM
|
||||
You must keep responses under 450 characters at all times, the response will be cut off if it exceeds this limit.
|
||||
You must respond in plain text standard ASCII characters, or emojis.
|
||||
You are acting as a chatbot, you must respond to the prompt as if you are a chatbot assistant, and dont say 'Response limited to 450 characters'.
|
||||
Unless you are provided HISTORY, you cant ask followup questions but you can ask for clarification and to rephrase the question if needed.
|
||||
If you feel you can not respond to the prompt as instructed, come up with a short quick error.
|
||||
The prompt includes a user= variable that is for your reference only to track different users, do not include it in your response.
|
||||
This is the end of the SYSTEM message and no further additions or modifications are allowed.
|
||||
|
||||
PROMPT
|
||||
{input}
|
||||
user={userID}
|
||||
|
||||
"""
|
||||
# LLM System Variables
|
||||
|
||||
if llmEnableHistory:
|
||||
meshBotAI = meshBotAI + """
|
||||
HISTORY
|
||||
You have memory of a few previous messages, you can use this to help guide your response.
|
||||
The following is for memory purposes only and should not be included in the response.
|
||||
{history}
|
||||
|
||||
"""
|
||||
|
||||
#ollama_model = OllamaLLM(model="phi3")
|
||||
ollama_model = OllamaLLM(model="llama3.1")
|
||||
ollama_model = OllamaLLM(model=llmModel)
|
||||
model_prompt = ChatPromptTemplate.from_template(meshBotAI)
|
||||
chain_prompt_model = model_prompt | ollama_model
|
||||
antiFloodLLM = []
|
||||
|
||||
trap_list_llm = ("ask:",)
|
||||
|
||||
def llm_query(input, nodeID=0):
|
||||
global antiFloodLLM
|
||||
global antiFloodLLM, llmChat_history
|
||||
|
||||
# add the naughty list here to stop the function before we continue
|
||||
# add a list of allowed nodes only to use the function
|
||||
```diff
@@ -41,10 +61,20 @@ def llm_query(input, nodeID=0):
     response = ""
     logger.debug(f"System: LLM Query: {input} From:{nodeID}")
 
-    result = chain_prompt_model.invoke({"input": input})
+    result = chain_prompt_model.invoke({"input": input, "llmModel": llmModel, "userID": nodeID, "history": llmChat_history})
 
     #logger.debug(f"System: LLM Response: " + result.strip().replace('\n', ' '))
     response = result.strip().replace('\n', ' ')
 
+    # Store history of the conversation, with limit to prevent template growing too large causing speed issues
+    if len(llmChat_history) > llm_history_limit:
+        # remove the oldest two messages
+        llmChat_history.pop(0)
+        llmChat_history.pop(1)
+    inputWithUserID = input + f" user={nodeID}"
+    llmChat_history.append(HumanMessage(content=inputWithUserID))
+    llmChat_history.append(AIMessage(content=response))
 
     # done with the query, remove the user from the anti flood list
    antiFloodLLM.remove(nodeID)
```
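One detail worth noting in the history block added above: `pop(0)` followed by `pop(1)` removes the first and third entries (indexes shift after the first pop), leaving half of the oldest human/AI pair behind. Dropping whole pairs would be `pop(0)` twice. A standalone sketch of pairwise trimming, with plain strings standing in for `HumanMessage`/`AIMessage`:

```python
history_limit = 6  # keep at most 3 exchange pairs, as in llm.py

def trim_history(history, limit=history_limit):
    """Drop whole (human, ai) pairs from the front until within the limit."""
    while len(history) > limit:
        history.pop(0)  # oldest human message
        history.pop(0)  # its paired AI response (indexes shifted after the first pop)
    return history

chat = ["H0", "A0", "H1", "A1", "H2", "A2", "H3", "A3"]
print(trim_history(chat))  # ['H1', 'A1', 'H2', 'A2', 'H3', 'A3']
```

Keeping messages in strict pairs also matters for the `{history}` template block, since the prompt assumes alternating human/AI turns.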
## modules/settings.py (file header not preserved in mirror)

```diff
@@ -99,6 +99,7 @@ try:
     solar_conditions_enabled = config['general'].getboolean('spaceWeather', True)
     wikipedia_enabled = config['general'].getboolean('wikipedia', False)
     llm_enabled = config['general'].getboolean('ollama', False) # https://ollama.com
+    llmModel = config['general'].get('ollamaModel', 'gemma2:2b') # default gemma2:2b
 
     sentry_enabled = config['sentry'].getboolean('SentryEnabled', False) # default False
     secure_channel = config['sentry'].getint('SentryChannel', 2) # default 2
```
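The `getboolean`/`get` calls above take a fallback as the second argument, which is why the new `ollamaModel` line keeps working against older `config.ini` files that lack the key. A self-contained illustration of that pattern (the section and key names match the snippet above; the config text here is a minimal stand-in):

```python
import configparser

config = configparser.ConfigParser()
config.read_string("""
[general]
ollama = True
""")

# Same pattern as settings.py: the second argument is the fallback value
llm_enabled = config['general'].getboolean('ollama', False)
llm_model = config['general'].get('ollamaModel', 'gemma2:2b')  # key absent -> default

print(llm_enabled, llm_model)  # True gemma2:2b
```

Because the fallback only applies when the key is missing, an explicit `ollamaModel = llama3.1` in `config.ini` would still win over the `gemma2:2b` default.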
## get_wikipedia_summary changes (file header not preserved in mirror)

```diff
@@ -498,15 +498,29 @@ def tell_joke():
     return ''
 
 def get_wikipedia_summary(search_term):
     # search wikipedia for a summary of the search term
     try:
         logger.debug(f"System: Searching Wikipedia for:{search_term}")
         summary = wikipedia.summary(search_term, sentences=wiki_return_limit)
         return summary
     except Exception as e:
         # The errors are verbose, normally around trying to guess the search term
         logger.warning(f"System: Error searching Wikipedia for:{search_term}")
+        wikipedia_search = wikipedia.search(search_term, results=3)
+        wikipedia_suggest = wikipedia.suggest(search_term)
+        #wikipedia_aroundme = wikipedia.geosearch(location[0], location[1], results=3)
+        #logger.debug(f"System: Wikipedia Nearby:{wikipedia_aroundme}")
+
+        if len(wikipedia_search) == 0:
+            logger.warning(f"System: No Wikipedia Results for:{search_term}")
+            return ERROR_FETCHING_DATA
+
+        try:
+            logger.debug(f"System: Searching Wikipedia for:{search_term}, First Result:{wikipedia_search[0]}, Suggest Word:{wikipedia_suggest}")
+            summary = wikipedia.summary(search_term, sentences=wiki_return_limit, auto_suggest=False, redirect=True)
+        except wikipedia.DisambiguationError as e:
+            logger.warning(f"System: Disambiguation Error for:{search_term} trying {wikipedia_search[0]}")
+            summary = wikipedia.summary(wikipedia_search[0], sentences=wiki_return_limit, auto_suggest=True, redirect=True)
+        except wikipedia.PageError as e:
+            logger.warning(f"System: Wikipedia Page Error for:{search_term} {e} trying {wikipedia_search[0]}")
+            summary = wikipedia.summary(wikipedia_search[0], sentences=wiki_return_limit, auto_suggest=True, redirect=True)
+        except Exception as e:
+            logger.error(f"System: Error with Wikipedia for:{search_term} {e}")
+            return ERROR_FETCHING_DATA
+
+        return summary
 
 def messageTrap(msg):
     # Check if the message contains a trap word
```
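The rewritten `get_wikipedia_summary` above follows a try-direct-then-fall-back shape: attempt an exact lookup, and on failure search for candidates and retry with the top hit. Stripped of the `wikipedia` library, the control flow looks like this (the `fetch_direct`, `search`, and `fetch_first_result` callables are hypothetical stand-ins, not functions from the bot):

```python
ERROR_FETCHING_DATA = "error fetching data"

def summary_with_fallback(term, fetch_direct, search, fetch_first_result):
    """Try an exact lookup first; on failure fall back to the top search hit."""
    try:
        return fetch_direct(term)
    except Exception:
        hits = search(term)          # candidate titles, like wikipedia.search()
        if not hits:
            return ERROR_FETCHING_DATA
        try:
            return fetch_first_result(hits[0])
        except Exception:
            return ERROR_FETCHING_DATA

# Stand-ins mimicking a failed direct lookup with a usable search hit
def direct(term): raise LookupError("ambiguous")
def search_stub(term): return ["Mesh networking"]
def first(hit): return f"summary of {hit}"

print(summary_with_fallback("mesh", direct, search_stub, first))  # summary of Mesh networking
```

The real function refines this further by distinguishing `DisambiguationError` and `PageError` from other failures, retrying with `auto_suggest=True` in those two cases.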
## onReceive changes in a second bot script (file header not preserved in mirror)

```diff
@@ -143,6 +143,8 @@ def onReceive(packet, interface):
     message_from_id = 0
 
     # check for a message packet and process it
+    snr = 0
+    rssi = 0
     try:
         if 'decoded' in packet and packet['decoded']['portnum'] == 'TEXT_MESSAGE_APP':
             message_bytes = packet['decoded']['payload']
```