Compare commits

...

47 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| SpudGunMan | 71733de05f | enhance DM/Channel logic for llm: fix logic for DM or Channel Sending | 2024-09-10 12:59:00 -07:00 |
| SpudGunMan | d5e48a3e36 | Update README.md | 2024-09-10 00:56:16 -07:00 |
| SpudGunMan | e36755a21d | Update llm.py: fix typo | 2024-09-10 00:52:20 -07:00 |
| SpudGunMan | 9620164884 | Update locationdata.py: enhance for LLM | 2024-09-10 00:46:43 -07:00 |
| SpudGunMan | 711844cc83 | enhanceMessages: fixes a few things I didnt wrap up and also enhances this suggestion https://github.com/SpudGunMan/meshing-around/issues/59 | 2024-09-10 00:46:32 -07:00 |
| SpudGunMan | 6eb82b26a7 | Update mesh_bot.py | 2024-09-09 23:27:12 -07:00 |
| SpudGunMan | b12cf6219a | responseDelay | 2024-09-06 09:31:30 -07:00 |
| SpudGunMan | 7e46305277 | responseDelay | 2024-09-06 09:30:07 -07:00 |
| SpudGunMan | 819c37bdec | networkErrorFix | 2024-09-05 23:01:46 -07:00 |
| SpudGunMan | c80690d66e | Update README.md | 2024-09-05 09:38:56 -07:00 |
| SpudGunMan | 084f879537 | Update settings.py | 2024-09-05 09:09:38 -07:00 |
| SpudGunMan | 7afd6bbbe9 | Update README.md | 2024-09-05 00:54:32 -07:00 |
| SpudGunMan | 46641e8a86 | Update README.md | 2024-09-05 00:51:01 -07:00 |
| SpudGunMan | 010b386ce1 | Update README.md | 2024-09-05 00:43:02 -07:00 |
| SpudGunMan | a73b320715 | Update llm.py | 2024-09-05 00:22:33 -07:00 |
| SpudGunMan | 5f822a6230 | Update mesh_bot.py | 2024-09-04 22:27:15 -07:00 |
| SpudGunMan | 024fac90cd | Update install.sh | 2024-09-04 19:50:49 -07:00 |
| SpudGunMan | 133bc36cca | Update install.sh | 2024-09-04 19:22:21 -07:00 |
| SpudGunMan | 51a7ff2820 | checkPip | 2024-09-04 19:17:37 -07:00 |
| SpudGunMan | bc96f8df49 | installOllama | 2024-09-04 18:42:04 -07:00 |
| Kelly | 979f197476 | Merge pull request #57 from SpudGunMan/llmLocationAware: LLM location aware enhancement | 2024-09-04 17:18:03 -07:00 |
| SpudGunMan | 1677b69363 | comments# | 2024-09-04 15:10:38 -07:00 |
| SpudGunMan | d627f694df | typo | 2024-09-04 15:05:15 -07:00 |
| SpudGunMan | 4c52cba21f | SpErr | 2024-09-04 15:00:44 -07:00 |
| SpudGunMan | 597fdd1695 | Update llm.py | 2024-09-04 14:53:33 -07:00 |
| SpudGunMan | 9031704b9b | Update llm.py | 2024-09-04 14:51:50 -07:00 |
| SpudGunMan | 510a5c5007 | Update llm.py | 2024-09-04 14:42:23 -07:00 |
| SpudGunMan | 469e76c50b | Update llm.py | 2024-09-04 13:14:28 -07:00 |
| SpudGunMan | f6c6c58c17 | Update README.md | 2024-09-04 11:06:09 -07:00 |
| SpudGunMan | e546866f78 | Update llm.py | 2024-09-04 09:40:57 -07:00 |
| SpudGunMan | 081566b5d9 | lower value to speed up query | 2024-09-04 09:40:04 -07:00 |
| SpudGunMan | ec078666ae | saveSomeAPIcalls | 2024-09-04 00:50:57 -07:00 |
| SpudGunMan | 1ce394c7a1 | Update for LLM | 2024-09-04 00:20:06 -07:00 |
| SpudGunMan | 2fc3930b43 | Update mesh_bot.py | 2024-09-03 23:22:02 -07:00 |
| SpudGunMan | 9fa9da5e74 | Update mesh_bot.py | 2024-09-03 23:20:23 -07:00 |
| SpudGunMan | d6ad0b5e94 | Update mesh_bot.py | 2024-09-03 23:20:13 -07:00 |
| SpudGunMan | 15dc50804f | Update mesh_bot.py | 2024-09-03 23:17:34 -07:00 |
| SpudGunMan | 63c3e35064 | Update llm.py | 2024-09-03 23:12:32 -07:00 |
| SpudGunMan | 297930c4d1 | Update llm.py | 2024-09-03 23:09:45 -07:00 |
| SpudGunMan | 098c344047 | Update llm.py | 2024-09-03 23:08:37 -07:00 |
| SpudGunMan | 4f74677d14 | Update mesh_bot.py | 2024-09-03 23:05:34 -07:00 |
| SpudGunMan | 0869b19408 | addTimeAware: include the current date in the awareness of location | 2024-09-03 23:05:00 -07:00 |
| SpudGunMan | 9b02611700 | LocationAware | 2024-09-03 23:02:04 -07:00 |
| SpudGunMan | 5daa71e6c1 | llmLocationAware: enhance with local data to the AI | 2024-09-03 22:52:27 -07:00 |
| SpudGunMan | aa5f2f66f8 | Update llm.py | 2024-09-03 21:10:11 -07:00 |
| SpudGunMan | 92d04f81c3 | contextFromGoogle | 2024-09-03 21:08:06 -07:00 |
| SpudGunMan | 5d53db4211 | enhance | 2024-09-03 17:13:04 -07:00 |
9 changed files with 253 additions and 74 deletions

View File: README.md

@@ -8,7 +8,7 @@ The feature-rich bot requires the internet for full functionality. These respond
Along with network testing, this bot has a lot of other fun features, like simple mail messaging you can leave for another device; when that device is next seen, the bot can deliver the mail as a DM. There is also a scheduler to send weather reports or a weekly reminder for the VHF net.
The bot is also capable of using dual radio/nodes, so you can monitor two networks at the same time and send messages to nodes using the same `bbspost @nodeNumber #message` or `bbspost @nodeShportName #message` function. There is a small message board to fit in the constraints of Meshtastic for posting bulletin messages with `bbspost $subject #message`.
The bot is also capable of using dual radio/nodes, so you can monitor two networks at the same time and send messages to nodes using the same `bbspost @nodeNumber #message` or `bbspost @nodeShortName #message` function. There is a small message board to fit in the constraints of Meshtastic for posting bulletin messages with `bbspost $subject #message`.
Look up data using wiki results, or interact with the [Ollama](https://ollama.com) LLM AI (see the [OllamaDocs](https://github.com/ollama/ollama/tree/main/docs)). If Ollama is enabled you can DM the bot directly. The default model for mesh-bot is currently `gemma2:2b`.
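The `bbspost` syntax follows a simple sigil convention: `$` marks a board subject, `@` a node target, and `#` the message body. A minimal, hypothetical sketch of parsing that convention (this is not the bot's actual parser, just an illustration of the syntax):
```python
# Hypothetical sketch of the bbspost sigil convention; not the bot's real parser.
def parse_bbspost(message: str):
    body = message.removeprefix("bbspost").strip()
    if body.startswith("$"):   # bbspost $subject #message -> public bulletin board
        subject, _, text = body[1:].partition("#")
        return ("board", subject.strip(), text.strip())
    if body.startswith("@"):   # bbspost @node #message -> mail for another node
        target, _, text = body[1:].partition("#")
        return ("mail", target.strip(), text.strip())
    return None

print(parse_bbspost("bbspost $VHF net #moved to 7pm Thursday"))  # ('board', 'VHF net', 'moved to 7pm Thursday')
print(parse_bbspost("bbspost @1234 #see you on the net"))        # ('mail', '1234', 'see you on the net')
```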
@@ -32,7 +32,7 @@ Any messages that are over 160 characters are chunked into 160 message bytes to
- `bbshelp` returns the following
- `bbslist` list the messages by ID and subject
- `bbsread` read a message example use: `bbsread #1`
- `bbspost` post a message to public board or send a DM example use: `bbspost $subject #message, or bbspost @nodeNumber #message or bbspost @nodeShportName #message`
- `bbspost` post a message to public board or send a DM example use: `bbspost $subject #message, or bbspost @nodeNumber #message or bbspost @nodeShortName #message`
- `bbsdelete` delete a message example use: `bbsdelete #4`
- Other functions
- `whereami` returns the address of the sender's location, if known
@@ -40,8 +40,8 @@ Any messages that are over 160 characters are chunked into 160 message bytes to
- `wx` and `wxc` return the local weather forecast (`wxc` is metric), using NOAA or Open Meteo for forecasting.
- `wxa` and `wxalert` return NOAA alerts, as a short title or expanded details
- `joke` tells a joke
- `wiki: ` search wikipedia, return the first few sentences of the first result if a match `wiki: lora radio`
- `ask: ` ask Ollama LLM AI for a response `ask: what temp do I cook chicken`
- `wiki: ` will search wikipedia and return the first few sentences of the first result if a match `wiki: lora radio`
- `askai` and `ask:` will ask Ollama LLM AI for a response `askai what temp do I cook chicken`
- `messages` Replay the last messages heard, like Store and Forward
- `motd` or to set the message `motd $New Message Of the day`
- `lheard` returns the last 5 heard nodes with SNR, can also use `sitrep`
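The note above about messages over 160 characters being chunked can be illustrated with a minimal sketch (the bot's real splitter may count bytes or respect word boundaries; this is only the idea):
```python
# Minimal sketch of 160-character chunking; illustration only.
def chunk_message(text: str, size: int = 160):
    return [text[i:i + size] for i in range(0, len(text), size)]

for part in chunk_message("x" * 400):
    print(len(part))  # 160, 160, 80
```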
@@ -64,6 +64,7 @@ Optionally:
- `launch.sh` will activate and launch the app in the venv if built.
For Docker:
Check that you have the serial port properly shared, and the GPU if using the LLM with [NVidia](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html)
- `git clone https://github.com/spudgunman/meshing-around`
- `cd meshing-around && docker build -t meshing-around .`
- `docker run meshing-around`
@@ -156,11 +157,25 @@ signalCycleLimit = 5
Ollama Settings: for Ollama to work, the command line `ollama run 'model'` needs to work properly. Check that you have enough RAM and that your GPU is working as expected. The default model for this project is set to `gemma2:2b` (run `ollama pull gemma2:2b` on the command line to download and set it up); I have found gemma2:2b to be lighter, faster, and better overall vs llama3.1 (`ollama pull llama3.1`)
- From the command terminal of your system with mesh-bot, download the default model for mesh-bot, currently `gemma2:2b`, with `ollama pull gemma2:2b`
To enable history (set via code), see the Ollama config in [Settings](https://github.com/SpudGunMan/meshing-around?tab=readme-ov-file#configurations) and [llm.py](https://github.com/SpudGunMan/meshing-around/blob/eb3bbdd3c5e0f16fe3c465bea30c781bd132d2d3/modules/llm.py#L12)
Tested models are `llama3.1, gemma2 (and variants), phi3.5, mistral`; other models may not handle the template as well.
```
# Enable ollama LLM see more at https://ollama.com
ollama = True
# Ollama model to use (defaults to llama3.1)
ollamaModel = gemma2:2b
# Ollama model to use (defaults to gemma2:2b)
ollamaModel = gemma2
#ollamaModel = llama3.1
```
Also see llm.py for changing the defaults of:
```
# LLM System Variables
llmEnableHistory = False # enable history for the LLM model to use in responses; adds to compute time
llmContext_fromGoogle = True # enable context from google search results; adds to compute time but really helps response accuracy
googleSearchResults = 3 # number of google search results to include in the context; more results = more compute time
llm_history_limit = 6 # limit the history to 6 entries (3 exchanges, messages come in pairs); more history = more compute time
```
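As a quick sanity check outside the bot, the same langchain-ollama pattern llm.py uses can be exercised standalone (assuming the Ollama service is running and the model has been pulled):
```python
# Standalone sketch of the langchain-ollama pattern used in llm.py;
# assumes `ollama serve` is up and `ollama pull gemma2:2b` has completed.
from langchain_ollama import OllamaLLM                  # pip install langchain-ollama
from langchain_core.prompts import ChatPromptTemplate   # pip install langchain

model = OllamaLLM(model="gemma2:2b")
prompt = ChatPromptTemplate.from_template("Answer in under 450 characters: {input}")
chain = prompt | model
print(chain.invoke({"input": "what temp do I cook chicken"}))
```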
Logging messages to disk or syslog uses the Python native logging module. Take a look at [/modules/log.py](/modules/log.py); you can set the file logger for syslog to INFO, for example, to keep DEBUG messages out of the file log, or modify the stdOut level.
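A hedged sketch of that kind of adjustment (handler names here are illustrative; the real setup lives in /modules/log.py):
```python
# Illustrative only; see /modules/log.py for the actual handler setup.
import logging

logger = logging.getLogger("mesh-bot")         # hypothetical logger name
file_handler = logging.FileHandler("meshbot.log")
file_handler.setLevel(logging.INFO)            # keep DEBUG out of the file log
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)                 # a stdout handler could stay at DEBUG
```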
@@ -219,6 +234,7 @@ The following is for the Ollama LLM
pip install langchain
pip install langchain-ollama
pip install ollama
pip install googlesearch-python
```
To enable emoji in the Debian console, install the fonts `sudo apt-get install fonts-noto-color-emoji`

View File: install.sh

@@ -3,10 +3,23 @@
# install.sh
cd "$(dirname "$0")"
printf "\nMeshing Around Installer\n"
# add user to groups for serial access
printf "\nAdding user to dialout and tty groups for serial access\n"
sudo usermod -a -G dialout $USER
sudo usermod -a -G tty $USER
# check for pip
if ! command -v pip &> /dev/null
then
printf "pip not found, please install pip with your OS\n"
sudo apt-get install python3-pip
else
printf "python pip found\n"
fi
# generate config file, check if it exists
if [ -f config.ini ]; then
printf "\nConfig file already exists, moving to backup config.old\n"
@@ -97,6 +110,25 @@ if [ $bot == "n" ]; then
fi
fi
printf "\nOptionally if you want to install the LLM Ollama compnents we will execute the following commands\n"
printf "\ncurl -fsSL https://ollama.com/install.sh | sh\n"
# ask if the user wants to install the LLM Ollama components
echo "Do you want to install the LLM Ollama components? (y/n)"
read ollama
if [ $ollama == "y" ]; then
curl -fsSL https://ollama.com/install.sh | sh
# ask if want to install gemma2:2b
printf "\n Ollama install done now we can install the Gemma2:2b components, multi GB download\n"
echo "Do you want to install the Gemma2:2b components? (y/n)"
read gemma
if [ $gemma == "y" ]; then
ollama pull gemma2:2b
fi
fi
printf "\nGoodbye!"
exit 0

View File: mesh_bot.py

@@ -8,6 +8,8 @@ from pubsub import pub # pip install pypubsub
from modules.log import *
from modules.system import *
responseDelay = 0.7 # delay in seconds for response to avoid message collision
def auto_response(message, snr, rssi, hop, message_from_id, channel_number, deviceID):
#Auto response to messages
message_lower = message.lower()
@@ -24,6 +26,7 @@ def auto_response(message, snr, rssi, hop, message_from_id, channel_number, devi
"wx": lambda: handle_wxc(message_from_id, deviceID, 'wx'),
"wiki:": lambda: handle_wiki(message),
"ask:": lambda: handle_llm(message_from_id, channel_number, deviceID, message, publicChannel),
"askai": lambda: handle_llm(message_from_id, channel_number, deviceID, message, publicChannel),
"joke": tell_joke,
"bbslist": bbs_list_messages,
"bbspost": lambda: handle_bbspost(message, message_from_id, deviceID),
@@ -56,8 +59,8 @@ def auto_response(message, snr, rssi, hop, message_from_id, channel_number, devi
# run the first command after sorting
bot_response = command_handler[cmds[0]['cmd']]()
# wait a 700ms to avoid message collision from lora-ack
time.sleep(0.7)
# wait a responseDelay to avoid message collision from lora-ack
time.sleep(responseDelay)
return bot_response
@@ -103,14 +106,50 @@ def handle_wiki(message):
return get_wikipedia_summary(search)
else:
return "Please add a search term example:wiki: travelling gnome"
llmRunCounter = 0
llmTotalRuntime = []
llmLocationTable = {}
def handle_llm(message_from_id, channel_number, deviceID, message, publicChannel):
global llmRunCounter, llmTotalRuntime
global llmRunCounter, llmTotalRuntime, llmLocationTable
if location_enabled:
location = get_node_location(message_from_id, deviceID)
# if message_from_id is in the llmLocationTable use the location from the table to save on API calls
if message_from_id in llmLocationTable:
location_name = llmLocationTable[message_from_id]
else:
location_name = where_am_i(str(location[0]), str(location[1]), short = True)
if NO_DATA_NOGPS in location_name:
location_name = "no location provided "
else:
location_name = "no location provided"
if "ask:" in message.lower():
user_input = message.split(":")[1]
user_input = user_input.strip()
elif "askai" in message.lower():
user_input = message.replace("askai", "")
else:
# likely a DM
user_input = message
# if the message_from_id is not in the llmLocationTable send the welcome message
if not message_from_id in llmLocationTable:
if (channel_number == publicChannel and antiSpam) or useDMForResponse:
# send via DM
send_message(welcome_message, channel_number, message_from_id, deviceID)
time.sleep(responseDelay)
else:
# send via channel
send_message(welcome_message, channel_number, 0, deviceID)
time.sleep(responseDelay)
# add the node to the llmLocationTable for future use
llmLocationTable[message_from_id] = location_name
user_input = user_input.strip()
if len(user_input) < 1:
return "Please ask a question"
@@ -120,21 +159,29 @@ def handle_llm(message_from_id, channel_number, deviceID, message, publicChannel
averageRuntime = sum(llmTotalRuntime) / len(llmTotalRuntime)
if averageRuntime > 25:
msg = f"Please wait, average query time is: {int(averageRuntime)} seconds"
if channel_number == publicChannel:
if (channel_number == publicChannel and antiSpam) or useDMForResponse:
# send via DM
send_message(msg, channel_number, message_from_id, deviceID)
time.sleep(responseDelay)
else:
# send via channel
send_message(msg, channel_number, 0, deviceID)
time.sleep(responseDelay)
else:
msg = "Please wait, response could take 3+ minutes. Fund the SysOp's GPU budget!"
if channel_number == publicChannel:
send_message(msg, channel_number, message_from_id, deviceID)
else:
send_message(msg, channel_number, 0, deviceID)
msg = "Please wait, response could take 30+ seconds. Fund the SysOp's GPU budget!"
if (channel_number == publicChannel and antiSpam) or useDMForResponse:
# send via DM
send_message(msg, channel_number, message_from_id, deviceID)
time.sleep(responseDelay)
else:
# send via channel
send_message(msg, channel_number, 0, deviceID)
time.sleep(responseDelay)
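# Summary of the DM-vs-channel rule applied above: reply by DM when the query
# arrived on the public channel with antiSpam enabled, or when useDMForResponse
# is set; otherwise reply on the channel (destination 0).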
start = time.time()
#response = asyncio.run(llm_query(user_input, message_from_id))
response = llm_query(user_input, message_from_id)
response = llm_query(user_input, message_from_id, location_name)
# handle the runtime counter
end = time.time()
@@ -320,8 +367,8 @@ def onReceive(packet, interface):
msg = bbs_check_dm(message_from_id)
if msg:
# wait a 700ms to avoid message collision from lora-ack.
time.sleep(0.7)
# wait a responseDelay to avoid message collision from lora-ack.
time.sleep(responseDelay)
logger.info(f"System: BBS DM Found: {msg[1]} For: {get_name_from_number(message_from_id, 'long', rxNode)}")
message = "Mail: " + msg[1] + " From: " + get_name_from_number(msg[2], 'long', rxNode)
bbs_delete_dm(msg[0], msg[1])
@@ -440,8 +487,8 @@ def onReceive(packet, interface):
# repeat the message on the other device
if repeater_enabled and interface2_enabled:
# wait a 700ms to avoid message collision from lora-ack.
time.sleep(0.7)
# wait a responseDelay to avoid message collision from lora-ack.
time.sleep(responseDelay)
rMsg = (f"{message_string} From:{get_name_from_number(message_from_id, 'short', rxNode)}")
# if channel found in the repeater list repeat the message
if str(channel_number) in repeater_channels:

View File: modules/llm.py

@@ -4,40 +4,62 @@
# K7MHI Kelly Keeton 2024
from modules.log import *
from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import OllamaLLM # pip install ollama langchain-ollama
from langchain_core.prompts import ChatPromptTemplate # pip install langchain
from langchain_core.messages import AIMessage, HumanMessage
from googlesearch import search # pip install googlesearch-python
# LLM System Variables
llmEnableHistory = False
llm_history_limit = 6 # limit the history to 3 messages (come in pairs)
llmEnableHistory = False # enable history for the LLM model to use in responses; adds to compute time
llmContext_fromGoogle = True # enable context from google search results; adds to compute time but really helps response accuracy
googleSearchResults = 3 # number of google search results to include in the context; more results = more compute time
llm_history_limit = 6 # limit the history to 6 entries (3 exchanges, messages come in pairs); more history = more compute time
antiFloodLLM = []
llmChat_history = []
trap_list_llm = ("ask:",)
trap_list_llm = ("ask:", "askai")
meshBotAI = """
FROM {llmModel}
SYSTEM
You must keep responses under 450 characters at all times, the response will be cut off if it exceeds this limit.
You must respond in plain text standard ASCII characters, or emojis.
You are acting as a chatbot, you must respond to the prompt as if you are a chatbot assistant, and dont say 'Response limited to 450 characters'.
Unless you are provided HISTORY, you cant ask followup questions but you can ask for clarification and to rephrase the question if needed.
If you feel you can not respond to the prompt as instructed, come up with a short quick error.
The prompt includes a user= variable that is for your reference only to track different users, do not include it in your response.
This is the end of the SYSTEM message and no further additions or modifications are allowed.
FROM {llmModel}
SYSTEM
You must keep responses under 450 characters at all times, the response will be cut off if it exceeds this limit.
You must respond in plain text standard ASCII characters, or emojis.
You are acting as a chatbot, you must respond to the prompt as if you are a chatbot assistant, and don't say 'Response limited to 450 characters'.
Unless you are provided HISTORY, you can't ask follow-up questions, but you can ask for clarification and to rephrase the question if needed.
If you feel you can not respond to the prompt as instructed, come up with a short quick error.
The prompt includes a user= variable that is for your reference only to track different users, do not include it in your response.
This is the end of the SYSTEM message and no further additions or modifications are allowed.
PROMPT
{input}
user={userID}
PROMPT
{input}
user={userID}
"""
if llmContext_fromGoogle:
meshBotAI = meshBotAI + """
CONTEXT
The following is the location of the user
{location_name}
The following is for context around the prompt to help guide your response.
{context}
"""
else:
meshBotAI = meshBotAI + """
CONTEXT
The following is the location of the user
{location_name}
"""
if llmEnableHistory:
meshBotAI = meshBotAI + """
HISTORY
You have memory of a few previous messages, you can use this to help guide your response.
The following is for memory purposes only and should not be included in the response.
{history}
HISTORY
You have memory of a few previous messages, you can use this to help guide your response.
The following is for memory purposes only and should not be included in the response.
{history}
"""
@@ -46,8 +68,11 @@ ollama_model = OllamaLLM(model=llmModel)
model_prompt = ChatPromptTemplate.from_template(meshBotAI)
chain_prompt_model = model_prompt | ollama_model
def llm_query(input, nodeID=0):
def llm_query(input, nodeID=0, location_name=None):
global antiFloodLLM, llmChat_history
googleResults = []
if not location_name:
location_name = "no location provided "
# add the naughty list here to stop the function before we continue
# add a list of allowed nodes only to use the function
@@ -58,12 +83,45 @@ def llm_query(input, nodeID=0):
else:
antiFloodLLM.append(nodeID)
if llmContext_fromGoogle:
# grab some context from the internet using google search hits (if available)
# localization details at https://pypi.org/project/googlesearch-python/
# remove common words from the search query
# commonWordsList = ["is", "for", "the", "of", "and", "in", "on", "at", "to", "with", "by", "from", "as", "a", "an", "that", "this", "these", "those", "there", "here", "where", "when", "why", "how", "what", "which", "who", "whom", "whose", "whom"]
# sanitizedSearch = ' '.join([word for word in input.split() if word.lower() not in commonWordsList])
try:
googleSearch = search(input, advanced=True, num_results=googleSearchResults)
if googleSearch:
for result in googleSearch:
# SearchResult object has url= title= description= just grab title and description
googleResults.append(f"{result.title} {result.description}")
else:
googleResults = ['no other context provided']
except Exception as e:
logger.debug(f"System: LLM Query: context gathering failed, likely due to network issues")
googleResults = ['no other context provided']
if googleResults:
logger.debug(f"System: LLM Query: {input} From:{nodeID} with context from google")
else:
logger.debug(f"System: LLM Query: {input} From:{nodeID}")
response = ""
logger.debug(f"System: LLM Query: {input} From:{nodeID}")
result = ""
location_name += f" at the current time of {datetime.now().strftime('%Y-%m-%d %H:%M:%S %Z')}"
result = chain_prompt_model.invoke({"input": input, "llmModel": llmModel, "userID": nodeID, "history": llmChat_history})
try:
result = chain_prompt_model.invoke({"input": input, "llmModel": llmModel, "userID": nodeID, \
"history": llmChat_history, "context": googleResults, "location_name": location_name})
#logger.debug(f"System: LLM Response: " + result.strip().replace('\n', ' '))
except Exception as e:
logger.warning(f"System: LLM failure: {e}")
return "I am having trouble processing your request, please try again later."
#logger.debug(f"System: LLM Response: " + result.strip().replace('\n', ' '))
response = result.strip().replace('\n', ' ')
# Store history of the conversation, with limit to prevent template growing too large causing speed issues
@@ -79,3 +137,15 @@ def llm_query(input, nodeID=0):
antiFloodLLM.remove(nodeID)
return response
# import subprocess
# def get_ollama_cpu():
# try:
# psOutput = subprocess.run(['ollama', 'ps'], capture_output=True, text=True)
# if "GPU" in psOutput.stdout:
# logger.debug(f"System: Ollama process with GPU")
# else:
# logger.debug(f"System: Ollama process with CPU, query time will be slower")
# except Exception as e:
# logger.debug(f"System: Ollama process not found, {e}")
# return False

View File: modules/locationdata.py

@@ -11,7 +11,7 @@ from modules.log import *
trap_list_location = ("whereami", "tide", "moon", "wx", "wxc", "wxa", "wxalert")
def where_am_i(lat=0, lon=0):
def where_am_i(lat=0, lon=0, short=False):
whereIam = ""
grid = mh.to_maiden(float(lat), float(lon))
@@ -22,22 +22,33 @@ def where_am_i(lat=0, lon=0):
# initialize Nominatim API
geolocator = Nominatim(user_agent="mesh-bot")
# Nominatim API call to get address
if float(lat) == latitudeValue and float(lon) == longitudeValue:
# redacted address when no GPS and using default location
location = geolocator.reverse(lat + ", " + lon)
address = location.raw['address']
address_components = ['city', 'state', 'postcode', 'county', 'country']
whereIam += ' '.join([address.get(component, '') for component in address_components if component in address])
whereIam += " Grid: " + grid
return whereIam
else:
location = geolocator.reverse(lat + ", " + lon)
address = location.raw['address']
address_components = ['house_number', 'road', 'city', 'state', 'postcode', 'county', 'country']
whereIam += ' '.join([address.get(component, '') for component in address_components if component in address])
whereIam += " Grid: " + grid
try:
# Nominatim API call to get address
if short:
location = geolocator.reverse(lat + ", " + lon)
address = location.raw['address']
address_components = ['city', 'state', 'county', 'country']
whereIam = f"City: {address.get('city', '')}. State: {address.get('state', '')}. County: {address.get('county', '')}. Country: {address.get('country', '')}."
return whereIam
if float(lat) == latitudeValue and float(lon) == longitudeValue:
# redacted address when no GPS and using default location
location = geolocator.reverse(lat + ", " + lon)
address = location.raw['address']
address_components = ['city', 'state', 'postcode', 'county', 'country']
whereIam += ' '.join([address.get(component, '') for component in address_components if component in address])
whereIam += " Grid: " + grid
else:
location = geolocator.reverse(lat + ", " + lon)
address = location.raw['address']
address_components = ['house_number', 'road', 'city', 'state', 'postcode', 'county', 'country']
whereIam += ' '.join([address.get(component, '') for component in address_components if component in address])
whereIam += " Grid: " + grid
return whereIam
except Exception as e:
logger.debug("Location:Error fetching location data with whereami, likely network error")
return ERROR_FETCHING_DATA
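# Illustrative usage of the new short form; the coordinates and output below are
# hypothetical and depend on what Nominatim returns for the point:
# where_am_i("47.60", "-122.33", short=True)
# -> "City: Seattle. State: Washington. County: King County. Country: United States."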
def get_tide(lat=0, lon=0):
station_id = ""

View File: modules/settings.py

@@ -28,6 +28,7 @@ scheduler_enabled = False # enable the scheduler currently config via code only
wiki_return_limit = 3 # limit the number of sentences returned from the first paragraph of the first hit
llmRunCounter = 0
llmTotalRuntime = []
llmLocationTable = {}
# Read the config file, if it does not exist, create basic config file
config = configparser.ConfigParser()
@@ -92,9 +93,10 @@ try:
urlTimeoutSeconds = config['general'].getint('urlTimeout', 10) # default 10 seconds
store_forward_enabled = config['general'].getboolean('StoreForward', True)
storeFlimit = config['general'].getint('StoreLimit', 3) # default 3 messages for S&F
welcome_message = config['general'].get(f'welcome_message', WELCOME_MSG)
welcome_message = config['general'].get('welcome_message', WELCOME_MSG)
welcome_message = (f"{welcome_message}").replace('\\n', '\n') # allow for newlines in the welcome message
motd_enabled = config['general'].getboolean('motdEnabled', True)
MOTD = config['general'].get('motd', MOTD)
dad_jokes_enabled = config['general'].getboolean('DadJokes', False)
solar_conditions_enabled = config['general'].getboolean('spaceWeather', True)
wikipedia_enabled = config['general'].getboolean('wikipedia', False)

View File: modules/system.py

@@ -74,7 +74,7 @@ if wikipedia_enabled:
if llm_enabled:
from modules.llm import * # from the spudgunman/meshing-around repo
trap_list = trap_list + trap_list_llm # items ask:, askai
help_message = help_message + ", ask:"
help_message = help_message + ", askai"
# Scheduled Broadcast Configuration
if scheduler_enabled:
@@ -103,7 +103,7 @@ if interface1_type == 'ble' and interface2_type == 'ble':
# Interface1 Configuration
try:
logger.debug(f"System: Initalizing Interface1")
logger.debug(f"System: Initializing Interface1")
if interface1_type == 'serial':
interface1 = meshtastic.serial_interface.SerialInterface(port1)
elif interface1_type == 'tcp':
@@ -114,12 +114,12 @@ try:
logger.critical(f"System: Interface Type: {interface1_type} not supported. Validate your config against config.template Exiting")
exit()
except Exception as e:
logger.critical(f"System: script abort. Initalizing Interface1 {e}")
logger.critical(f"System: script abort. Initializing Interface1 {e}")
exit()
# Interface2 Configuration
if interface2_enabled:
logger.debug(f"System: Initalizing Interface2")
logger.debug(f"System: Initializing Interface2")
try:
if interface2_type == 'serial':
interface2 = meshtastic.serial_interface.SerialInterface(port2)
@@ -131,7 +131,7 @@ if interface2_enabled:
logger.critical(f"System: Interface Type: {interface2_type} not supported. Validate your config against config.template Exiting")
exit()
except Exception as e:
logger.critical(f"System: script abort. Initalizing Interface2 {e}")
logger.critical(f"System: script abort. Initializing Interface2 {e}")
exit()
#Get the node number of the device, check if the device is connected
@@ -670,7 +670,6 @@ async def watchdog():
with contextlib.redirect_stdout(None):
interface1.localNode.getMetadata()
print(f"System: if you see this upgrade python to >3.4")
#if "device_state_version:" not in meta:
except Exception as e:
logger.error(f"System: communicating with interface1, trying to reconnect: {e}")
retry_int1 = True

View File: pong_bot.py

@@ -8,6 +8,8 @@ from pubsub import pub # pip install pypubsub
from modules.log import *
from modules.system import *
responseDelay = 0.7 # delay in seconds for response to avoid message collision
def auto_response(message, snr, rssi, hop, message_from_id, channel_number, deviceID):
# Auto response to messages
message_lower = message.lower()
@@ -37,8 +39,8 @@ def auto_response(message, snr, rssi, hop, message_from_id, channel_number, devi
# run the first command after sorting
bot_response = command_handler[cmds[0]['cmd']]()
# wait a 700ms to avoid message collision from lora-ack
time.sleep(0.7)
# wait a responseDelay to avoid message collision from lora-ack
time.sleep(responseDelay)
return bot_response
@@ -252,8 +254,8 @@ def onReceive(packet, interface):
# repeat the message on the other device
if repeater_enabled and interface2_enabled:
# wait a 700ms to avoid message collision from lora-ack.
time.sleep(0.7)
# wait a responseDelay to avoid message collision from lora-ack.
time.sleep(responseDelay)
rMsg = (f"{message_string} From:{get_name_from_number(message_from_id, 'short', rxNode)}")
# if channel found in the repeater list repeat the message
if str(channel_number) in repeater_channels:

View File: requirements.txt

@@ -16,4 +16,4 @@ wikipedia
langchain
langchain-ollama
ollama
googlesearch-python