Compare commits

...

33 Commits

Author SHA1 Message Date
SpudGunMan
37a9fc2eb0 Update system.py 2024-09-01 01:12:52 -07:00
SpudGunMan
923325874c Update README.md 2024-09-01 01:10:31 -07:00
SpudGunMan
7ca0c4d744 Update README.md 2024-09-01 01:10:02 -07:00
Kelly
a584a71429 Merge pull request #52 from SpudGunMan/llm
Ollama Module
2024-09-01 01:06:44 -07:00
SpudGunMan
70f47635b4 Update system.py 2024-09-01 01:04:47 -07:00
SpudGunMan
8e35d77e07 Update system.py 2024-09-01 01:00:33 -07:00
SpudGunMan
7024f2d472 Update system.py 2024-09-01 00:58:52 -07:00
SpudGunMan
7e2dd4c7ff Update mesh_bot.py 2024-09-01 00:55:34 -07:00
SpudGunMan
f20d83ca8c Update README.md 2024-09-01 00:48:45 -07:00
SpudGunMan
f31f920137 Update system.py 2024-09-01 00:43:20 -07:00
SpudGunMan
0f428438a3 Update mesh_bot.py 2024-09-01 00:28:28 -07:00
SpudGunMan
b7882b0322 Update mesh_bot.py 2024-09-01 00:17:11 -07:00
SpudGunMan
3a417a9281 Update mesh_bot.py 2024-09-01 00:11:37 -07:00
SpudGunMan
748085c2be Update mesh_bot.py 2024-09-01 00:09:51 -07:00
SpudGunMan
6a3f56f95f enhance 2024-08-31 23:56:55 -07:00
SpudGunMan
f6d6fb7185 enhance 2024-08-31 23:55:33 -07:00
SpudGunMan
7865263c1c Update mesh_bot.py 2024-08-31 23:46:12 -07:00
SpudGunMan
2cf51d5a09 Update system.py 2024-08-31 23:37:23 -07:00
SpudGunMan
f993be950f LLM module 2024-08-31 23:35:03 -07:00
SpudGunMan
52c4c49bab enhance 2024-08-31 23:29:41 -07:00
SpudGunMan
60fdc7b7ea Update system.py 2024-08-31 22:57:37 -07:00
SpudGunMan
a330cff3e5 Update system.py 2024-08-31 22:56:05 -07:00
SpudGunMan
9ffbac7420 Update system.py
random fix
2024-08-31 22:55:12 -07:00
SpudGunMan
7909707894 config enable llm 2024-08-31 22:41:43 -07:00
SpudGunMan
8d8014b157 Update bbstools.py 2024-08-31 22:20:27 -07:00
SpudGunMan
a459b7a393 R&R 2024-08-31 22:11:39 -07:00
SpudGunMan
7d405dc0c2 Update settings.py 2024-08-29 02:42:17 -07:00
SpudGunMan
3decf8749b Update settings.py 2024-08-29 02:41:06 -07:00
SpudGunMan
ba6869ec76 Update system.py 2024-08-28 23:31:32 -07:00
SpudGunMan
33cb70ea17 Update mesh_bot.py 2024-08-28 23:25:21 -07:00
SpudGunMan
69f1b7471f Update mesh_bot.py 2024-08-28 23:22:34 -07:00
SpudGunMan
76a7d1dba7 wikipedia
is this needed? who knows its meshing about!
2024-08-28 23:10:36 -07:00
SpudGunMan
9f0d3c9d3b Update README.md 2024-08-28 12:54:08 -07:00
8 changed files with 180 additions and 25 deletions

View File

@@ -10,6 +10,8 @@ Along with network testing, this bot has a lot of other fun features, like simpl
The bot is also capable of using dual radio/nodes, so you can monitor two networks at the same time and send messages to nodes using the same `bbspost @nodeNumber #message` or `bbspost @nodeShportName #message` function. There is a small message board to fit in the constraints of Meshtastic for posting bulletin messages with `bbspost $subject #message`.
Look up data using wiki results or interact with [Ollama](https://ollama.com) LLM AI see the [OllamaDocs](https://github.com/ollama/ollama/tree/main/docs) If Ollama is enabled you can DM the bot directly.
The bot will report on anyone who is getting close to the configured lat/long, if in a remote location.
Store and forward-like message re-play with `messages`, and there is a repeater module for dual radio bots to cross post messages. Messages are also logged locally to disk.
@@ -18,7 +20,9 @@ The bot can also be used to monitor a radio frequency and let you know when high
Any messages that are over 160 characters are chunked into 160 message bytes to help traverse hops, in testing, this keeps delivery success higher.
Full list of commands for the bot.
[Donate$](https://www.paypal.com/donate?token=ZpiU7zDh-AQDyK76nWmWPQLf04iOm-Iyr3f85lpubt37NWGRYtfe11UyC0LmY1wdcC20UubWo4Kec-_G) via PayPal if you like the project!
## Full list of commands for the bot
- Various solar details for radio propagation (spaceWeather module)
- `sun` and `moon` return info on rise and set local time
@@ -36,6 +40,8 @@ Full list of commands for the bot.
- `wx` and `wxc` returns local weather forecast, (wxc is metric value), NOAA or Open Meteo for weather forecasting.
- `wxa` and `wxalert` return NOAA alerts. Short title or expanded details
- `joke` tells a joke
- `wiki: ` search wikipedia, return the first few sentances of first result if a match `wiki: lora radio`
- `ask: ` ask Ollama LLM AI for a response `ask: what temp do I cook chicken`
- `messages` Replay the last messages heard, like Store and Forward
- `motd` or to set the message `motd $New Message Of the day`
- `lheard` returns the last 5 heard nodes with SNR, can also use `sitrep`
@@ -51,6 +57,7 @@ The project is written on Linux on a Pi and should work anywhere [Meshtastic](ht
Clone the project with `git clone https://github.com/spudgunman/meshing-around`
code is under a lot of development, so check back often with `git pull`
Copy [config.template](config.template) to `config.ini` and edit for your needs.
`pip install -r requirements.txt`
Optionally:
- `install.sh` will automate optional venv and requirements installation.
@@ -185,6 +192,7 @@ pip install beautifulsoup4
pip install dadjokes
pip install geopy
pip install schedule
pip install wikipedia
```
The following is needed for open-meteo use
```
@@ -192,6 +200,12 @@ pip install openmeteo_requests
pip install retry_requests
pip install numpy
```
The following is for the Ollama LLM
```
pip install langchain
pip install langchain-ollama
pip install ollama
```
To enable emoji in the Debian console, install the fonts `sudo apt-get install fonts-noto-color-emoji`

View File

@@ -34,6 +34,10 @@ welcome_message = MeshBot, here for you like a friend who is not. Try sending: p
DadJokes = True
# enable or disable the Solar module
spaceWeather = True
# enable or disable the wikipedia search module
wikipedia = True
# Enable ollama LLM see more at https://ollama.com
ollama = False
# StoreForward Enabled and Limits
StoreForward = True
StoreLimit = 3
@@ -46,7 +50,6 @@ LogMessagesToFile = False
# Logging of system messages to file
SyslogToFile = False
[sentry]
# detect anyone close to the bot
SentryEnabled = True

View File

@@ -22,6 +22,8 @@ def auto_response(message, snr, rssi, hop, message_from_id, channel_number, devi
"wxa": lambda: handle_wxalert(message_from_id, deviceID, message),
"wxc": lambda: handle_wxc(message_from_id, deviceID, 'wxc'),
"wx": lambda: handle_wxc(message_from_id, deviceID, 'wx'),
"wiki:": lambda: handle_wiki(message),
"ask:": lambda: handle_llm(message_from_id, channel_number, deviceID, message, publicChannel),
"joke": tell_joke,
"bbslist": bbs_list_messages,
"bbspost": lambda: handle_bbspost(message, message_from_id, deviceID),
@@ -93,6 +95,53 @@ def handle_wxalert(message_from_id, deviceID, message):
return weatherAlert
def handle_wiki(message):
if "wiki:" in message.lower():
search = message.split(":")[1]
search = search.strip()
return get_wikipedia_summary(search)
else:
return "Please add a search term example:wiki: travelling gnome"
def handle_llm(message_from_id, channel_number, deviceID, message, publicChannel):
global llmRunCounter, llmTotalRuntime
if "ask:" in message.lower():
user_input = message.split(":")[1]
user_input = user_input.strip()
else:
user_input = message
if len(user_input) < 1:
return "Please ask a question"
# information for the user on how long the query will take on average
if llmRunCounter > 0:
averageRuntime = sum(llmTotalRuntime) / len(llmTotalRuntime)
if averageRuntime > 25:
msg = f"Please wait, average query time is: {int(averageRuntime)} seconds"
if channel_number == publicChannel:
send_message(msg, channel_number, message_from_id, deviceID)
else:
send_message(msg, channel_number, 0, deviceID)
else:
msg = "Please wait, response could take 3+ minutes. Fund the SysOp's GPU budget!"
if channel_number == publicChannel:
send_message(msg, channel_number, message_from_id, deviceID)
else:
send_message(msg, channel_number, 0, deviceID)
start = time.time()
#response = asyncio.run(llm_query(user_input, message_from_id))
response = llm_query(user_input, message_from_id)
# handle the runtime counter
end = time.time()
llmRunCounter += 1
llmTotalRuntime.append(end - start)
return response
def handle_wxc(message_from_id, deviceID, cmd):
location = get_node_location(message_from_id, deviceID)
if use_meteo_wxApi and not "wxc" in cmd and not use_metric:
@@ -335,10 +384,16 @@ def onReceive(packet, interface):
"From: " + CustomFormatter.white + f"{get_name_from_number(message_from_id, 'long', rxNode)}")
# respond with DM
send_message(auto_response(message_string, snr, rssi, hop, message_from_id, channel_number, rxNode), channel_number, message_from_id, rxNode)
else:
# respond with welcome message on DM
logger.warning(f"Device:{rxNode} Ignoring DM: {message_string} From: {get_name_from_number(message_from_id, 'long', rxNode)}")
send_message(welcome_message, channel_number, message_from_id, rxNode)
else:
if llm_enabled:
llm = handle_llm(message_from_id, channel_number, rxNode, message_string, publicChannel)
send_message(llm, channel_number, message_from_id, rxNode)
else:
# respond with welcome message on DM
logger.warning(f"Device:{rxNode} Ignoring DM: {message_string} From: {get_name_from_number(message_from_id, 'long', rxNode)}")
send_message(welcome_message, channel_number, message_from_id, rxNode)
# log the message to the message log
msgLogger.info(f"Device:{rxNode} Channel:{channel_number} | {get_name_from_number(message_from_id, 'long', rxNode)} | " + message_string.replace('\n', '-nl-'))
else:
# message is on a channel
@@ -400,6 +455,10 @@ def onReceive(packet, interface):
async def start_rx():
print (CustomFormatter.bold_white + f"\nMeshtastic Autoresponder Bot CTL+C to exit\n" + CustomFormatter.reset)
if llm_enabled:
logger.debug(f"System: Ollama LLM Enabled, loading model please wait")
llm_query(" ", myNodeNum1)
logger.debug(f"System: LLM model loaded")
# Start the receive subscriber using pubsub via meshtastic library
pub.subscribe(onReceive, 'meshtastic.receive')
pub.subscribe(onDisconnect, 'meshtastic.connection.lost')
@@ -423,6 +482,8 @@ async def start_rx():
logger.debug(f"System: Location Telemetry Enabled using NOAA API")
if dad_jokes_enabled:
logger.debug(f"System: Dad Jokes Enabled!")
if wikipedia_enabled:
logger.debug(f"System: Wikipedia search Enabled")
if motd_enabled:
logger.debug(f"System: MOTD Enabled using {MOTD}")
if sentry_enabled:

View File

@@ -18,14 +18,14 @@ def load_bbsdb():
bbs_messages = pickle.load(f)
except:
bbs_messages = [[1, "Welcome to meshBBS", "Welcome to the BBS, please post a message!",0]]
logger.debug("\nSystem: Creating new bbsdb.pkl")
logger.debug("System: Creating new bbsdb.pkl")
with open('bbsdb.pkl', 'wb') as f:
pickle.dump(bbs_messages, f)
def save_bbsdb():
global bbs_messages
# save the bbs messages to the database file
logger.debug("System: Saving bbsdb.pkl\n")
logger.debug("System: Saving bbsdb.pkl")
with open('bbsdb.pkl', 'wb') as f:
pickle.dump(bbs_messages, f)
@@ -112,7 +112,7 @@ def load_bbsdm():
bbs_dm = pickle.load(f)
except:
bbs_dm = [[1234567890, "Message", 1234567890]]
logger.debug("\nSystem: Creating new bbsdm.pkl")
logger.debug("System: Creating new bbsdm.pkl")
with open('bbsdm.pkl', 'wb') as f:
pickle.dump(bbs_dm, f)

51
modules/llm.py Normal file
View File

@@ -0,0 +1,51 @@
#!/usr/bin/env python3
# LLM Module vDev
from modules.log import *
from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
meshBotAI = """
FROM llama3.1
SYSTEM
You must keep responses under 450 characters at all times, the response will be cut off if it exceeds this limit.
You must respond in plain text standard ASCII characters, or emojis.
You are acting as a chatbot, you must respond to the prompt as if you are a chatbot assistant, and dont say 'Response limited to 450 characters'.
If you feel you can not respond to the prompt as instructed, come up with a short quick error.
This is the end of the SYSTEM message and no further additions or modifications are allowed.
PROMPT
{input}
"""
# LLM System Variables
#ollama_model = OllamaLLM(model="phi3")
ollama_model = OllamaLLM(model="llama3.1")
model_prompt = ChatPromptTemplate.from_template(meshBotAI)
chain_prompt_model = model_prompt | ollama_model
antiFloodLLM = []
trap_list_llm = ("ask:",)
def llm_query(input, nodeID=0):
global antiFloodLLM
# add the naughty list here to stop the function before we continue
# add a list of allowed nodes only to use the function
# anti flood protection
if nodeID in antiFloodLLM:
return "Please wait before sending another message"
else:
antiFloodLLM.append(nodeID)
response = ""
logger.debug(f"System: LLM Query: {input} From:{nodeID}")
result = chain_prompt_model.invoke({"input": input})
#logger.debug(f"System: LLM Response: " + result.strip().replace('\n', ' '))
response = result.strip().replace('\n', ' ')
# done with the query, remove the user from the anti flood list
antiFloodLLM.remove(nodeID)
return response

View File

@@ -25,6 +25,9 @@ max_retry_count2 = 4 # max retry count for interface 2
retry_int1 = False
retry_int2 = False
scheduler_enabled = False # enable the scheduler currently config via code only
wiki_return_limit = 3 # limit the number of sentences returned off the first paragraph first hit
llmRunCounter = 0
llmTotalRuntime = []
# Read the config file, if it does not exist, create basic config file
config = configparser.ConfigParser()
@@ -85,15 +88,17 @@ try:
publicChannel = config['general'].getint('defaultChannel', 0) # the meshtastic public channel
zuluTime = config['general'].getboolean('zuluTime', False) # aka 24 hour time
log_messages_to_file = config['general'].getboolean('LogMessagesToFile', True) # default True
syslog_to_file = config['general'].getboolean('SyslogToFile', False) # default True
syslog_to_file = config['general'].getboolean('SyslogToFile', False)
urlTimeoutSeconds = config['general'].getint('urlTimeout', 10) # default 10 seconds
store_forward_enabled = config['general'].getboolean('StoreForward', True) # default False
store_forward_enabled = config['general'].getboolean('StoreForward', True)
storeFlimit = config['general'].getint('StoreLimit', 3) # default 3 messages for S&F
welcome_message = config['general'].get(f'welcome_message', WELCOME_MSG)
welcome_message = (f"{welcome_message}").replace('\\n', '\n') # allow for newlines in the welcome message
motd_enabled = config['general'].getboolean('motdEnabled', True)
dad_jokes_enabled = config['general'].getboolean('DadJokes', True)
solar_conditions_enabled = config['general'].getboolean('spaceWeather', True)
dad_jokes_enabled = config['general'].getboolean('DadJokes', False)
solar_conditions_enabled = config['general'].getboolean('spaceWeather', True)
wikipedia_enabled = config['general'].getboolean('wikipedia', False)
llm_enabled = config['general'].getboolean('ollama', False) # https://ollama.com
sentry_enabled = config['sentry'].getboolean('SentryEnabled', False) # default False
secure_channel = config['sentry'].getint('SentryChannel', 2) # default 2

View File

@@ -64,6 +64,18 @@ if dad_jokes_enabled:
trap_list = trap_list + ("joke",)
help_message = help_message + ", joke"
# Wikipedia Search Configuration
if wikipedia_enabled:
import wikipedia # pip install wikipedia
trap_list = trap_list + ("wiki:",)
help_message = help_message + ", wiki:"
# LLM Configuration
if llm_enabled:
from modules.llm import * # from the spudgunman/meshing-around repo
trap_list = trap_list + trap_list_llm # items ask:
help_message = help_message + ", ask:"
# Scheduled Broadcast Configuration
if scheduler_enabled:
import schedule # pip install schedule
@@ -91,6 +103,7 @@ if interface1_type == 'ble' and interface2_type == 'ble':
# Interface1 Configuration
try:
logger.debug(f"System: Initalizing Interface1")
if interface1_type == 'serial':
interface1 = meshtastic.serial_interface.SerialInterface(port1)
elif interface1_type == 'tcp':
@@ -106,6 +119,7 @@ except Exception as e:
# Interface2 Configuration
if interface2_enabled:
logger.debug(f"System: Initalizing Interface2")
try:
if interface2_type == 'serial':
interface2 = meshtastic.serial_interface.SerialInterface(port2)
@@ -414,7 +428,7 @@ def get_closest_nodes(nodeInt=1,returnCount=3):
return ERROR_FETCHING_DATA
def send_message(message, ch, nodeid=0, nodeInt=1):
if message == "":
if message == "" or message == None or len(message) == 0:
return
# if message over MESSAGE_CHUNK_SIZE characters, split it into multiple messages
if len(message) > MESSAGE_CHUNK_SIZE:
@@ -483,14 +497,16 @@ def tell_joke():
else:
return ''
def messageTrap(msg):
# Check if the message contains a trap word
message_list=msg.split(" ")
for m in message_list:
for t in trap_list:
if t.lower() == m.lower():
return True
return False
def get_wikipedia_summary(search_term):
# search wikipedia for a summary of the search term
try:
logger.debug(f"System: Searching Wikipedia for:{search_term}")
summary = wikipedia.summary(search_term, sentences=wiki_return_limit)
return summary
except Exception as e:
# The errors are vebose, normallly around trying to guess the search term
logger.warning(f"System: Error searching Wikipedia for:{search_term}")
return ERROR_FETCHING_DATA
def messageTrap(msg):
# Check if the message contains a trap word
@@ -503,7 +519,7 @@ def messageTrap(msg):
def exit_handler():
# Close the interface and save the BBS messages
logger.debug(f"\nSystem: Closing Autoresponder\n")
logger.debug(f"System: Closing Autoresponder")
try:
interface1.close()
logger.debug(f"System: Interface1 Closed")
@@ -546,14 +562,14 @@ async def handleSignalWatcher():
if interface2_enabled:
send_message(msg, int(ch), 0, 2)
else:
logger.error(f"System: antiSpam prevented Alert from Hamlib {msg}")
logger.warning(f"System: antiSpam prevented Alert from Hamlib {msg}")
else:
if antiSpam and sigWatchBroadcastCh != publicChannel:
send_message(msg, int(sigWatchBroadcastCh), 0, 1)
if interface2_enabled:
send_message(msg, int(sigWatchBroadcastCh), 0, 2)
else:
logger.error(f"System: antiSpam prevented Alert from Hamlib {msg}")
logger.warning(f"System: antiSpam prevented Alert from Hamlib {msg}")
await asyncio.sleep(1)
pass

View File

@@ -12,3 +12,8 @@ retry_requests
numpy
geopy
schedule
wikipedia
langchain
langchain-ollama
ollama