Compare commits

...

36 Commits

Author SHA1 Message Date
Kelly
c5ba56a656 Merge pull request #178 from SpudGunMan/lab
Gemma3 LLM
2025-08-26 11:00:31 -07:00
SpudGunMan
50c3249edc explicitCmd
I think @NomDeTom mentioned this a long time ago, and well... here is a change to help with the phases of the moon and tide.
2025-08-26 10:44:30 -07:00
SpudGunMan
80897f7a82 defaults to gemma3 raw input
This change looks to remove Google lookups via the python module. If this change impacts you, please let me know in [general]; add `rawLLMQuery = False` to revert.
2025-08-26 10:26:33 -07:00
SpudGunMan
d311832a92 truncation 2025-08-20 11:55:44 -07:00
SpudGunMan
56af59345d Update settings.py 2025-08-19 06:19:07 -07:00
SpudGunMan
c1adca7db0 Update mesh_bot.py 2025-08-18 18:55:23 -07:00
SpudGunMan
4c7fe55b43 hopFix 2025-08-18 18:54:02 -07:00
SpudGunMan
df6a1cfb66 Update pong_bot.py
better MQTT handler
2025-08-17 20:25:40 -07:00
SpudGunMan
9994446510 better MQTT handler 2025-08-17 20:24:09 -07:00
SpudGunMan
9272218815 Update system.py
i declare
2025-08-17 20:05:41 -07:00
SpudGunMan
388d862fc9 Update llm.py 2025-08-16 21:24:46 -07:00
SpudGunMan
ac33f8a02b Update llm.py 2025-08-16 21:16:44 -07:00
SpudGunMan
f04392a81c Update llm.py 2025-08-16 20:25:53 -07:00
SpudGunMan
d0097c092b Update llm.py 2025-08-16 17:48:04 -07:00
SpudGunMan
92ff166260 Update llm.py
adding these back, the token limit just has no bounds
2025-08-15 21:48:30 -07:00
Kelly
bfe0d219f9 Merge pull request #175 from SpudGunMan/main
fix daylight
2025-08-15 21:44:24 -07:00
SpudGunMan
85a2d90cff fix daylight 2025-08-15 21:42:45 -07:00
SpudGunMan
e15232875c token limit 2025-08-15 21:33:40 -07:00
SpudGunMan
d1a87f161b updateOllama
remember to update to the latest ollama binaries
2025-08-15 21:17:58 -07:00
SpudGunMan
626ac59b4e gemma3
LLM rewrite, with the removal of RAG to keep things clean.

This changes the default as well, sending input directly to the LLM; further testing is needed, as the new LLM prompting is different.
2025-08-15 21:11:09 -07:00
SpudGunMan
835a9e5f89 Update space.py 2025-08-15 06:52:55 -07:00
SpudGunMan
3ae928dd66 more light on the sun 2025-08-15 06:51:51 -07:00
SpudGunMan
3973406783 formatting of sun
trying this out vs the old way
2025-08-15 06:32:15 -07:00
SpudGunMan
4fbdd42837 Update space.py 2025-08-15 05:40:33 -07:00
SpudGunMan
04378efdd8 Update space.py 2025-08-14 20:33:21 -07:00
SpudGunMan
0d19a40ed6 Update space.py 2025-08-14 19:47:53 -07:00
SpudGunMan
75ac3c974a Update space.py 2025-08-14 19:47:25 -07:00
SpudGunMan
7e0eb348ae 🌝 2025-08-14 19:46:53 -07:00
SpudGunMan
af6ea2a512 Update space.py 2025-08-14 19:41:12 -07:00
SpudGunMan
6665ea7dcd moon refactor 2025-08-14 19:40:32 -07:00
SpudGunMan
3212661ee8 enhance sun and moon
add position data when visible
2025-08-14 19:35:13 -07:00
SpudGunMan
0675132171 up a river
without help
2025-08-13 20:41:42 -07:00
Kelly
fdb7897963 Merge pull request #173 from dludwig/typofix
typo fix deteted -> detected
2025-08-13 20:16:18 -07:00
dludwig
8ff7a0bf3c typo fix detetec -> detected 2025-08-13 15:18:34 -07:00
SpudGunMan
c210534543 Update README.md 2025-08-13 08:58:31 -07:00
SpudGunMan
ea7574a868 Update locationdata.py
remove the $$ end marker
2025-08-12 13:40:26 -07:00
10 changed files with 197 additions and 164 deletions

View File: README.md

@@ -86,13 +86,13 @@ git clone https://github.com/spudgunman/meshing-around
### Networking
| Command | Description | ✅ Works Off-Grid |
|---------|-------------|-------------------|
| `ping`, `ack` | Return data for signal. Example: `ping 15 #DrivingI5` (activates auto-ping every 20 seconds for count 15) | ✅ |
| `ping`, `ack` | Return data for signal. Example: `ping 15 #DrivingI5` (activates auto-ping every 20 seconds for count 15 via DM only) | ✅ |
| `cmd` | Returns the list of commands (the help message) | ✅ |
| `history` | Returns the last commands run by user(s) | ✅ |
| `lheard` | Returns the last 5 heard nodes with SNR. Can also use `sitrep` | ✅ |
| `motd` | Displays the message of the day or sets it. Example: `motd $New Message Of the day` | ✅ |
| `sysinfo` | Returns the bot node telemetry info | ✅ |
| `test` | used to test the limits of data transfer `test 4` sends data to the maxBuffer limit (default 220) | ✅ |
| `test` | used to test the limits of data transfer `test 4` sends data to the maxBuffer limit (default 220) via DM only | ✅ |
| `whereami` | Returns the address of the sender's location if known |
| `whoami` | Returns details of the node asking, also returned when position exchanged 📍 | ✅ |
| `whois` | Returns details known about node, more data with bbsadmin node | ✅ |
@@ -144,7 +144,7 @@ git clone https://github.com/spudgunman/meshing-around
| `checkout` | Checkout the node in the checklist database, checkout all from node | ✅ |
| `checklist` | Display the checklist database, with note | ✅ |
### Games (via DM)
### Games (via DM only)
| Command | Description | |
|---------|-------------|-------------------|
| `blackjack` | Plays Blackjack (Casino 21) | ✅ |
@@ -211,6 +211,7 @@ defaultChannel = 0
ignoreDefaultChannel = False # ignoreDefaultChannel, the bot will ignore the default channel set above
ignoreChannels = # ignoreChannels is a comma separated list of channels to ignore, e.g. 4,5
cmdBang = False # require ! to be the first character in a command
explicitCmd = True # require explicit command: the message is only processed if it starts with a command word; disable to get more activity
```
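A rough sketch of what the new `explicitCmd` gate implies for matching (hypothetical helper; the real logic lives in `messageTrap`, diffed below):
```python
# Hypothetical mini-matcher illustrating the explicitCmd setting.
TRAP_LIST = ("ping", "cmd", "sysinfo")  # assumed sample of the bot's trap list

def matches_command(message: str, explicit_cmd: bool = True) -> bool:
    words = message.lower().split()
    if not words:
        return False
    if explicit_cmd:
        # Only process the message if it *starts* with a command word.
        return words[0] in TRAP_LIST
    # Legacy behavior: any word in the message may trigger a command.
    return any(w in TRAP_LIST for w in words)

print(matches_command("ping 15"))          # True: command is the first word
print(matches_command("can you ping me"))  # False with explicitCmd on
```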
### Location Settings
@@ -227,7 +228,7 @@ coastalEnabled = False # NOAA Coastal Data Enable NOAA Coastal Waters Forecasts
# Find the correct coastal weather directory at https://tgftp.nws.noaa.gov/data/forecasts/marine/coastal/
# this map can help https://www.weather.gov/marine select location and then look at the 'Forecast-by-Zone Map'
myCoastalZone = https://tgftp.nws.noaa.gov/data/forecasts/marine/coastal/pz/pzz135.txt # myCoastalZone is the .txt file with the forecast data
castalForecastDays = 3 # number of data points to return, default is 3
coastalForecastDays = 3 # number of data points to return, default is 3
```
### Module Settings
@@ -330,7 +331,7 @@ Volcano Alerts use lat/long to determine ~1000km radius
```ini
[location]
# USGS Hydrology unique identifiers, LID or USGS ID https://waterdata.usgs.gov
riverListDefault = 14144700
riverList = 14144700 # example Mouth of Columbia River
# USGS Volcano alerts Enable USGS Volcano Alert Broadcast
volcanoAlertBroadcastEnabled = False
@@ -347,12 +348,12 @@ repeater_channels = [2, 3]
```
### Ollama (LLM/AI) Settings
For Ollama to work, the command line `ollama run 'model'` needs to work properly. Ensure you have enough RAM and your GPU is working as expected. The default model for this project is set to `gemma2:2b`. Ollama can be remote ([Ollama Server](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)); it works on a Pi 5 8GB with a 40-second or less response time.
For Ollama to work, the command line `ollama run 'model'` needs to work properly. Ensure you have enough RAM and your GPU is working as expected. The default model for this project is set to `gemma3:270m`. Ollama can be remote ([Ollama Server](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)); it works on a Pi 5 8GB with a 40-second or less response time.
```ini
# Enable ollama LLM see more at https://ollama.com
ollama = True # Ollama model to use (defaults to gemma2:2b)
ollamaModel = gemma2 #ollamaModel = llama3.1
ollamaModel = gemma3:latest # Ollama model to use (defaults to gemma3:270m)
ollamaHostName = http://localhost:11434 # server instance to use (defaults to local machine install)
```
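A quick way to sanity-check the configured host and model before enabling the bot, mirroring the `/api/generate` call `llm.py` makes (a sketch; host and model are the config values above):
```python
# Minimal connectivity check against the Ollama generate API (a sketch).
import json
import requests

ollama_host = "http://localhost:11434"  # ollamaHostName from the config
model = "gemma3:latest"                 # ollamaModel from the config

resp = requests.post(
    f"{ollama_host}/api/generate",
    data=json.dumps({"model": model, "prompt": "ping", "stream": False}),
    timeout=60,
)
resp.raise_for_status()
print(resp.json().get("response", ""))
```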
@@ -360,6 +361,9 @@ Also see `llm.py` for changing the defaults of:
```ini
# LLM System Variables
rawQuery = True # if True, the input is sent raw to the LLM if False, it is processed by the meshBotAI template
# Used in the meshBotAI template (legacy)
llmEnableHistory = True # enable history for the LLM model to use in responses adds to compute time
llmContext_fromGoogle = True # enable context from google search results helps with responses accuracy
googleSearchResults = 3 # number of google search results to include in the context more results = more compute time

View File: config.template

@@ -36,6 +36,8 @@ ignoreDefaultChannel = False
ignoreChannels =
# require ! to be the first character in a command
cmdBang = False
# require explicit command, the message will only be processed if it starts with a command word
explicitCmd = True
# motd is reset to this value on boot
motd = Thanks for using MeshBOT! Have a good day!
@@ -56,13 +58,15 @@ wikipedia = True
# Enable ollama LLM see more at https://ollama.com
ollama = False
# Ollama model to use (defaults to gemma2:2b)
# ollamaModel = llama3.1
# Ollama model to use (defaults to gemma3:270m)
# ollamaModel = gemma3:latest
# server instance to use (defaults to local machine install)
ollamaHostName = http://localhost:11434
# Produce LLM replies to messages that aren't commands?
# If False, the LLM only replies to the "ask:" and "askai" commands.
llmReplyToNonCommands = True
# if True, the input is sent raw to the LLM, if False uses legacy template query
rawLLMQuery = True
# StoreForward Enabled and Limits
StoreForward = True
@@ -160,8 +164,8 @@ myCoastalZone = https://tgftp.nws.noaa.gov/data/forecasts/marine/coastal/pz/pzz1
# number of data points to return, default is 3
coastalForecastDays = 3
# USGS Hydrology unique identifiers, LID or USGS ID https://waterdata.usgs.gov
riverListDefault =
# NOAA USGS Hydrology river identifiers, LID or USGS ID https://waterdata.usgs.gov
riverList =
# NOAA EAS Alert Broadcast
wxAlertBroadcastEnabled = False
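The `rawLLMQuery` switch above decides whether user text reaches the model verbatim or wrapped in the legacy meshBotAI template; a minimal sketch, with field names assumed from `llm.py`:
```python
# Sketch of the rawLLMQuery switch (template and field names assumed from llm.py).
def build_prompt(user_input: str, raw_query: bool, template: str,
                 context: str = "", location_name: str = "") -> str:
    if raw_query:
        # Raw path: the message is the prompt, no wrapping.
        return user_input
    # Legacy path: wrap the message in the meshBotAI template.
    return template.format(input=user_input, context=context,
                           location_name=location_name)
```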

View File: install.sh

@@ -250,7 +250,7 @@ if [[ $(echo "${embedded}" | grep -i "^n") ]]; then
printf "\nOptionally if you want to install the multi gig LLM Ollama compnents we will execute the following commands\n"
printf "\ncurl -fsSL https://ollama.com/install.sh | sh\n"
printf "ollama pull gemma2:2b\n"
printf "ollama pull gemma3:latest\n"
printf "Total download is multi GB, recomend pi5/8GB or better for this\n"
# ask if the user wants to install the LLM Ollama components
printf "\nDo you want to install the LLM Ollama components? (y/n)"
@@ -258,12 +258,12 @@ if [[ $(echo "${embedded}" | grep -i "^n") ]]; then
if [[ $(echo "${ollama}" | grep -i "^y") ]]; then
curl -fsSL https://ollama.com/install.sh | sh
# ask if want to install gemma2:2b
printf "\n Ollama install done now we can install the Gemma2:2b components\n"
echo "Do you want to install the Gemma2:2b components? (y/n)"
# ask if want to install gemma3:latest
printf "\n Ollama install done now we can install the gemma3:latest components\n"
echo "Do you want to install the gemma3:latest components? (y/n)"
read gemma
if [[ $(echo "${gemma}" | grep -i "^y") ]]; then
ollama pull gemma2:2b
ollama pull gemma3:latest
fi
fi

View File: mesh_bot.py

@@ -1111,7 +1111,7 @@ def onReceive(packet, interface):
elif multiple_interface and port7 in rxInterface: rxNode = 7
elif multiple_interface and port8 in rxInterface: rxNode = 8
elif multiple_interface and port9 in rxInterface: rxNode = 9
if rxType == 'TCPInterface':
rxHost = interface.__dict__.get('hostname', 'unknown')
if rxHost and hostname1 in rxHost and interface1_type == 'tcp': rxNode = 1
@@ -1162,10 +1162,12 @@ def onReceive(packet, interface):
if 'decoded' in packet and packet['decoded']['portnum'] == 'TEXT_MESSAGE_APP':
message_bytes = packet['decoded']['payload']
message_string = message_bytes.decode('utf-8')
via_mqtt = packet['decoded'].get('viaMqtt', False)
rx_time = packet['decoded'].get('rxTime', time.time())
# check if the packet is from us
if message_from_id in [myNodeNum1, myNodeNum2, myNodeNum3, myNodeNum4, myNodeNum5, myNodeNum6, myNodeNum7, myNodeNum8, myNodeNum9]:
logger.warning(f"System: Packet from self {message_from_id} loop or traffic replay deteted")
logger.warning(f"System: Packet from self {message_from_id} loop or traffic replay detected")
# get the signal strength and snr if available
if packet.get('rxSnr') or packet.get('rxRssi'):
@@ -1201,13 +1203,15 @@ def onReceive(packet, interface):
if enableHopLogs:
logger.debug(f"System: Packet HopDebugger: hop_away:{hop_away} hop_limit:{hop_limit} hop_start:{hop_start}")
if hop_away == 0 and hop_limit == 0 and hop_start == 0:
logger.debug(f"System: Packet HopDebugger: No hop count found in PACKET {packet} END PACKET")
if hop_away == 0 and hop_limit == 0 and hop_start == 0:
hop = "Last Hop"
hop_count = 0
if hop_start == hop_limit:
hop = "Direct"
hop_count = 0
elif hop_start == 0 and hop_limit > 0:
elif hop_start == 0 and hop_limit > 0 or via_mqtt:
hop = "MQTT"
hop_count = 0
else:
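Distilled, the hop branch above (with the new `via_mqtt` check) classifies packets roughly like this (a sketch; the in-tree code mutates `hop`/`hop_count` in place):
```python
# Sketch of the hop classification from the diff above.
def classify_hop(hop_start: int, hop_limit: int, hop_away: int,
                 via_mqtt: bool) -> tuple[str, int]:
    # Note: packets with no hop data at all (all three zero) also land in
    # "Direct" in the bot, since 0 == 0 satisfies the first test.
    if hop_start == hop_limit:
        return "Direct", 0                      # never relayed over RF
    if (hop_start == 0 and hop_limit > 0) or via_mqtt:
        return "MQTT", 0                        # bridged in over MQTT
    count = hop_away if hop_away > 0 else hop_start - hop_limit
    return f"{count} hops", count
```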

View File: modules/llm.py

@@ -8,32 +8,34 @@ from modules.log import *
# https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server
import requests
import json
from googlesearch import search # pip install googlesearch-python
# This is my attempt at a simple RAG implementation it will require some setup
# you will need to have the RAG data in a folder named rag in the data directory (../data/rag)
# This is lighter weight and can be used in a standalone environment, needs chromadb
# "chat with a file" is the use concept here, the file is the RAG data
# is anyone using this please let me know if you are Dec62024 -kelly
ragDEV = False
if ragDEV:
import os
import ollama # pip install ollama
import chromadb # pip install chromadb
from ollama import Client as OllamaClient
ollamaClient = OllamaClient(host=ollamaHostName)
if not rawLLMQuery:
# this may be removed in the future
from googlesearch import search # pip install googlesearch-python
# LLM System Variables
ollamaAPI = ollamaHostName + "/api/generate"
tokens = 450 # max characters for the LLM response, this is the max length of the response also used in prompts
requestTruncation = True # if True, the LLM "will" truncate the response
openaiAPI = "https://api.openai.com/v1/completions" # not used, if you do push a enhancement!
# Used in the meshBotAI template
llmEnableHistory = True # enable last message history for the LLM model
llmContext_fromGoogle = True # enable context from google search results adds to compute time but really helps with responses accuracy
googleSearchResults = 3 # number of google search results to include in the context more results = more compute time
antiFloodLLM = []
llmChat_history = {}
trap_list_llm = ("ask:", "askai")
meshbotAIinit = """
keep responses as short as possible. chatbot assistant no follow-up questions, no asking for clarification.
You must respond in plain text standard ASCII characters or emojis.
"""
truncatePrompt = f"truncate this as short as possible:\n"
meshBotAI = """
FROM {llmModel}
SYSTEM
@@ -74,76 +76,16 @@ if llmEnableHistory:
"""
def llm_readTextFiles():
# read .txt files in ../data/rag
try:
text = []
directory = "../data/rag"
for filename in os.listdir(directory):
if filename.endswith(".txt"):
filepath = os.path.join(directory, filename)
with open(filepath, 'r') as f:
text.append(f.read())
return text
except Exception as e:
logger.debug(f"System: LLM readTextFiles: {e}")
return False
def store_text_embedding(text):
try:
# store each document in a vector embedding database
for i, d in enumerate(text):
response = ollama.embeddings(model="mxbai-embed-large", prompt=d)
embedding = response["embedding"]
collection.add(
ids=[str(i)],
embeddings=[embedding],
documents=[d]
)
except Exception as e:
logger.debug(f"System: Embedding failed: {e}")
return False
## INITALIZATION of RAG
if ragDEV:
try:
chromaHostname = "localhost:8000"
# connect to the chromaDB
chromaHost = chromaHostname.split(":")[0]
chromaPort = chromaHostname.split(":")[1]
if chromaHost == "localhost" and chromaPort == "8000":
# create a client using local python Client
chromaClient = chromadb.Client()
else:
# create a client using the remote python Client
# this isnt tested yet please test and report back
chromaClient = chromadb.Client(host=chromaHost, port=chromaPort)
clearCollection = False
if "meshBotAI" in chromaClient.list_collections() and clearCollection:
logger.debug(f"System: LLM: Clearing RAG files from chromaDB")
chromaClient.delete_collection("meshBotAI")
# create a new collection
collection = chromaClient.create_collection("meshBotAI")
logger.debug(f"System: LLM: Cataloging RAG data")
store_text_embedding(llm_readTextFiles())
except Exception as e:
logger.debug(f"System: LLM: RAG Initalization failed: {e}")
def query_collection(prompt):
# generate an embedding for the prompt and retrieve the most relevant doc
response = ollama.embeddings(prompt=prompt, model="mxbai-embed-large")
results = collection.query(query_embeddings=[response["embedding"]], n_results=1)
data = results['documents'][0][0]
return data
def llm_query(input, nodeID=0, location_name=None):
global antiFloodLLM, llmChat_history
googleResults = []
# if this is the first initialization of the LLM, the query of " " should bring meshbotAIinit; OTA shouldn't reach this?
# This is for LLM like gemma and others now?
if input == " " and rawLLMQuery:
logger.warning("System: These LLM models lack a traditional system prompt, they can be verbose and not very helpful be advised.")
input = meshbotAIinit
if not location_name:
location_name = "no location provided "
@@ -162,7 +104,7 @@ def llm_query(input, nodeID=0, location_name=None):
else:
antiFloodLLM.append(nodeID)
if llmContext_fromGoogle:
if llmContext_fromGoogle and not rawLLMQuery:
# grab some context from the internet using google search hits (if available)
# localization details at https://pypi.org/project/googlesearch-python/
@@ -193,36 +135,29 @@ def llm_query(input, nodeID=0, location_name=None):
location_name += f" at the current time of {datetime.now().strftime('%Y-%m-%d %H:%M:%S %Z')}"
try:
# RAG context inclusion testing
ragContext = False
if ragDEV:
ragContext = query_collection(input)
if ragContext:
ragContextGooogle = ragContext + '\n'.join(googleResults)
# Build the query from the template
modelPrompt = meshBotAI.format(input=input, context=ragContext, location_name=location_name, llmModel=llmModel, history=history)
# Query the model with RAG context
result = ollamaClient.generate(model=llmModel, prompt=modelPrompt)
# Condense the result to just needed
if isinstance(result, dict):
result = result.get("response")
if rawLLMQuery:
# sanitize the input to remove tool call syntax
if '```' in input:
logger.warning("System: LLM Query: Code markdown detected, removing for raw query")
input = input.replace('```bash', '').replace('```python', '').replace('```', '')
modelPrompt = input
else:
# Build the query from the template
modelPrompt = meshBotAI.format(input=input, context='\n'.join(googleResults), location_name=location_name, llmModel=llmModel, history=history)
llmQuery = {"model": llmModel, "prompt": modelPrompt, "stream": False}
# Query the model via Ollama web API
result = requests.post(ollamaAPI, data=json.dumps(llmQuery))
# Condense the result to just needed
if result.status_code == 200:
result_json = result.json()
result = result_json.get("response", "")
llmQuery = {"model": llmModel, "prompt": modelPrompt, "stream": False, "max_tokens": tokens}
# Query the model via Ollama web API
result = requests.post(ollamaAPI, data=json.dumps(llmQuery))
# Condense the result to just needed
if result.status_code == 200:
result_json = result.json()
result = result_json.get("response", "")
# deepseek-r1 has added <think> </think> tags to the response
if "<think>" in result:
result = result.split("</think>")[1]
else:
raise Exception(f"HTTP Error: {result.status_code}")
# deepseek-r1 has added <think> </think> tags to the response
if "<think>" in result:
result = result.split("</think>")[1]
else:
raise Exception(f"HTTP Error: {result.status_code}")
#logger.debug(f"System: LLM Response: " + result.strip().replace('\n', ' '))
except Exception as e:
@@ -231,6 +166,23 @@ def llm_query(input, nodeID=0, location_name=None):
# cleanup for message output
response = result.strip().replace('\n', ' ')
if rawLLMQuery and requestTruncation and len(response) > 450:
# retry loop to truncate the response
logger.warning(f"System: LLM Query: Response exceeded {tokens} characters, requesting truncation")
truncateQuery = {"model": llmModel, "prompt": truncatePrompt + response, "stream": False, "max_tokens": tokens}
truncateResult = requests.post(ollamaAPI, data=json.dumps(truncateQuery))
if truncateResult.status_code == 200:
truncate_json = truncateResult.json()
result = truncate_json.get("response", "")
else:
#use the original result if truncation fails
logger.warning("System: LLM Query: Truncation failed, using original response")
# cleanup for message output
response = result.strip().replace('\n', ' ')
# done with the query, remove the user from the anti flood list
antiFloodLLM.remove(nodeID)
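The truncation retry at the end boils down to a second `/api/generate` call; a minimal standalone sketch (note: to my knowledge Ollama's generate API has no top-level `max_tokens` field as used in the diff — it reads limits from `options.num_predict` — so treat the cap below as an assumption):
```python
# Standalone sketch of the truncation retry (a sketch, not the in-tree code).
import json
import requests

def truncate_response(api: str, model: str, text: str, limit: int = 450) -> str:
    if len(text) <= limit:
        return text
    query = {
        "model": model,
        "prompt": "truncate this as short as possible:\n" + text,
        "stream": False,
        # Assumption: Ollama reads length limits from options.num_predict.
        "options": {"num_predict": limit},
    }
    reply = requests.post(api, data=json.dumps(query))
    if reply.status_code == 200:
        return reply.json().get("response", text)
    return text  # fall back to the original response on failure
```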

View File: modules/locationdata.py

@@ -703,19 +703,19 @@ def get_volcano_usgs(lat=0, lon=0):
def get_nws_marine(zone, days=3):
# forecast from NWS coastal products
try:
marine_pzz_data = requests.get(zone, timeout=urlTimeoutSeconds)
if not marine_pzz_data.ok:
marine_pz_data = requests.get(zone, timeout=urlTimeoutSeconds)
if not marine_pz_data.ok:
logger.warning("Location:Error fetching NWS Marine PZ data")
return ERROR_FETCHING_DATA
except (requests.exceptions.RequestException):
logger.warning("Location:Error fetching NWS Marine PZ data")
return ERROR_FETCHING_DATA
marine_pzz_data = marine_pzz_data.text
marine_pz_data = marine_pz_data.text
#validate data
todayDate = today.strftime("%Y%m%d")
if marine_pzz_data.startswith("Expires:"):
expires = marine_pzz_data.split(";;")[0].split(":")[1]
if marine_pz_data.startswith("Expires:"):
expires = marine_pz_data.split(";;")[0].split(":")[1]
expires_date = expires[:8]
if expires_date < todayDate:
logger.debug("Location: NWS Marine PZ data expired")
@@ -725,8 +725,8 @@ def get_nws_marine(zone, days=3):
return ERROR_FETCHING_DATA
# process the marine forecast data
marine_pzz_lines = marine_pzz_data.split("\n")
marine_pzz_report = ""
marine_pzz_lines = marine_pz_data.split("\n")
marine_pz_report = ""
day_blocks = []
current_block = ""
in_forecast = False
@@ -743,17 +743,21 @@ def get_nws_marine(zone, days=3):
if current_block:
day_blocks.append(current_block.strip())
# Only keep up to pzzDays blocks
# Only keep up to pzDays blocks
for block in day_blocks[:days]:
marine_pzz_report += block + "\n"
marine_pz_report += block + "\n"
# remove last newline
if marine_pzz_report.endswith("\n"):
marine_pzz_report = marine_pzz_report[:-1]
if marine_pz_report.endswith("\n"):
marine_pz_report = marine_pz_report[:-1]
# remove NOAA EOF $$
if marine_pz_report.endswith("$$"):
marine_pz_report = marine_pz_report[:-2].strip()
# abbreviate the report
marine_pzz_report = abbreviate_noaa(marine_pzz_report)
if marine_pzz_report == "":
marine_pz_report = abbreviate_noaa(marine_pz_report)
if marine_pz_report == "":
return NO_DATA_NOGPS
return marine_pzz_report
return marine_pz_report
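For reference, the header and end-of-file handling above reduce to a couple of string checks (a sketch; the NOAA product starts with an `Expires:YYYYMMDDhhmm;;` stamp and ends with `$$`):
```python
# Sketch of the NOAA coastal-text checks from the diff above.
from datetime import date

def coastal_text_usable(raw: str) -> bool:
    if not raw.startswith("Expires:"):
        return False
    expires = raw.split(";;")[0].split(":")[1]
    # Fixed-width YYYYMMDD strings compare correctly as plain strings.
    return expires[:8] >= date.today().strftime("%Y%m%d")

def strip_noaa_eof(report: str) -> str:
    # Drop the trailing "$$" end-of-product marker, as the new code does.
    return report[:-2].strip() if report.endswith("$$") else report
```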

View File: modules/settings.py

@@ -197,6 +197,7 @@ try:
ignoreChannels = config['general'].get('ignoreChannels', '').split(',') # ignore these channels
ignoreDefaultChannel = config['general'].getboolean('ignoreDefaultChannel', False)
cmdBang = config['general'].getboolean('cmdBang', False) # default off
explicitCmd = config['general'].getboolean('explicitCmd', True) # default on
zuluTime = config['general'].getboolean('zuluTime', False) # aka 24 hour time
log_messages_to_file = config['general'].getboolean('LogMessagesToFile', False) # default off
log_backup_count = config['general'].getint('LogBackupCount', 32) # default 32 days
@@ -219,8 +220,9 @@ try:
solar_conditions_enabled = config['general'].getboolean('spaceWeather', True)
wikipedia_enabled = config['general'].getboolean('wikipedia', False)
llm_enabled = config['general'].getboolean('ollama', False) # https://ollama.com
llmModel = config['general'].get('ollamaModel', 'gemma2:2b') # default gemma2:2b
ollamaHostName = config['general'].get('ollamaHostName', 'http://localhost:11434') # default localhost
llmModel = config['general'].get('ollamaModel', 'gemma3:270m') # default gemma3:270m
rawLLMQuery = config['general'].getboolean('rawLLMQuery', True) #default True
llmReplyToNonCommands = config['general'].getboolean('llmReplyToNonCommands', True)
dont_retry_disconnect = config['general'].getboolean('dont_retry_disconnect', False) # default False, retry on disconnect
# emergency response
@@ -250,7 +252,7 @@ try:
repeater_lookup = config['location'].get('repeaterLookup', 'rbook') # default repeater lookup source
n2yoAPIKey = config['location'].get('n2yoAPIKey', '') # default empty
satListConfig = config['location'].get('satList', '25544').split(',') # default 25544 ISS
riverListDefault = config['location'].get('riverList', '').split(',') # default 12061500 Skagit River
riverListDefault = config['location'].get('riverList', '').split(',') # default None
coastalEnabled = config['location'].getboolean('coastalEnabled', False) # default False
myCoastalZone = config['location'].get('myCoastalZone', None) # default None
coastalForecastDays = config['location'].getint('coastalForecastDays', 3) # default 3 days
@@ -361,7 +363,7 @@ try:
splitDelay = config['messagingSettings'].getfloat('splitDelay', 0) # default 0
MESSAGE_CHUNK_SIZE = config['messagingSettings'].getint('MESSAGE_CHUNK_SIZE', 160) # default 160
wantAck = config['messagingSettings'].getboolean('wantAck', False) # default False
maxBuffer = config['messagingSettings'].getint('maxBuffer', 220) # default 220
maxBuffer = config['messagingSettings'].getint('maxBuffer', 200) # default 200
enableHopLogs = config['messagingSettings'].getboolean('enableHopLogs', False) # default False
except KeyError as e:
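The new keys resolve the same way as the existing ones: a `configparser` lookup with a fallback default. A minimal sketch of the additions (file name assumed to be the repo's generated `config.ini`):
```python
# Sketch of how the new settings resolve (key names from the diff above).
import configparser

config = configparser.ConfigParser()
config.read("config.ini")  # assumption: the repo's generated config file

explicitCmd = config["general"].getboolean("explicitCmd", True)  # default on
rawLLMQuery = config["general"].getboolean("rawLLMQuery", True)  # default True
llmModel = config["general"].get("ollamaModel", "gemma3:270m")
maxBuffer = config["messagingSettings"].getint("maxBuffer", 200)
```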

View File: modules/space.py

@@ -6,7 +6,7 @@ import requests # pip install requests
import xml.dom.minidom
from datetime import datetime
import ephem # pip install pyephem
from datetime import timedelta
from datetime import timezone
from modules.log import *
trap_list_solarconditions = ("sun", "moon", "solar", "hfcond", "satpass")
@@ -63,7 +63,7 @@ def drap_xray_conditions():
def get_sun(lat=0, lon=0):
# get sunrise and sunset times using callers location or default
obs = ephem.Observer()
obs.date = datetime.now()
obs.date = datetime.now(timezone.utc)
sun = ephem.Sun()
if lat != 0 and lon != 0:
obs.lat = str(lat)
@@ -74,9 +74,17 @@ def get_sun(lat=0, lon=0):
sun.compute(obs)
sun_table = {}
# get the sun azimuth and altitude
sun_table['azimuth'] = sun.az
sun_table['altitude'] = sun.alt
# sun is up include altitude
if sun_table['altitude'] > 0:
sun_table['altitude'] = sun.alt
else:
sun_table['altitude'] = 0
# get the next rise and set times
local_sunrise = ephem.localtime(obs.next_rising(sun))
local_sunset = ephem.localtime(obs.next_setting(sun))
@@ -86,19 +94,25 @@ def get_sun(lat=0, lon=0):
else:
sun_table['rise_time'] = local_sunrise.strftime('%a %d %I:%M%p')
sun_table['set_time'] = local_sunset.strftime('%a %d %I:%M%p')
# if sunset is before sunrise, then it's tomorrow
# if sunset is before sunrise, then data will be for tomorrow format sunset first and sunrise second
if local_sunset < local_sunrise:
local_sunset = ephem.localtime(obs.next_setting(sun)) + timedelta(1)
if zuluTime:
sun_table['set_time'] = local_sunset.strftime('%a %d %H:%M')
else:
sun_table['set_time'] = local_sunset.strftime('%a %d %I:%M%p')
sun_data = "SunRise: " + sun_table['rise_time'] + "\nSet: " + sun_table['set_time']
sun_data = "SunSet: " + sun_table['set_time'] + "\nRise: " + sun_table['rise_time']
else:
sun_data = "SunRise: " + sun_table['rise_time'] + "\nSet: " + sun_table['set_time']
sun_data += "\nDaylight: " + str((local_sunset - local_sunrise).seconds // 3600) + "h " + str(((local_sunset - local_sunrise).seconds // 60) % 60) + "m"
if sun_table['altitude'] > 0:
sun_data += "\nRemaining: " + str((local_sunset - datetime.now()).seconds // 3600) + "h " + str(((local_sunset - datetime.now()).seconds // 60) % 60) + "m"
sun_data += "\nAzimuth: " + str('{0:.2f}'.format(sun_table['azimuth'] * 180 / ephem.pi)) + "°"
if sun_table['altitude'] > 0:
sun_data += "\nAltitude: " + str('{0:.2f}'.format(sun_table['altitude'] * 180 / ephem.pi)) + "°"
return sun_data
def get_moon(lat=0, lon=0):
# get moon phase and rise/set times using callers location or default
# the phase calculation might not be accurate (follow up later)
obs = ephem.Observer()
moon = ephem.Moon()
if lat != 0 and lon != 0:
@@ -108,10 +122,28 @@ def get_moon(lat=0, lon=0):
obs.lat = str(latitudeValue)
obs.lon = str(longitudeValue)
obs.date = datetime.now()
obs.date = datetime.now(timezone.utc)
moon.compute(obs)
moon_table = {}
moon_phase = ['NewMoon', 'Waxing Crescent', 'First Quarter', 'Waxing Gibbous', 'FullMoon', 'Waning Gibbous', 'Last Quarter', 'Waning Crescent'][round(moon.phase / (2 * ephem.pi) * 8) % 8]
illum = moon.phase # 0 = new, 50 = first/last quarter, 100 = full
if illum < 1.0:
moon_phase = 'New Moon🌑'
elif illum < 49:
moon_phase = 'Waxing Crescent🌒'
elif 49 <= illum < 51:
moon_phase = 'First Quarter🌓'
elif illum < 99:
moon_phase = 'Waxing Gibbous🌔'
elif illum >= 99:
moon_phase = 'Full Moon🌕'
elif illum > 51:
moon_phase = 'Waning Gibbous🌖'
elif 51 >= illum > 49:
moon_phase = 'Last Quarter🌗'
else:
moon_phase = 'Waning Crescent🌘'
moon_table['phase'] = moon_phase
moon_table['illumination'] = moon.phase
moon_table['azimuth'] = moon.az
@@ -139,6 +171,11 @@ def get_moon(lat=0, lon=0):
"\nPhase:" + moon_table['phase'] + " @:" + str('{0:.2f}'.format(moon_table['illumination'])) + "%" \
+ "\nFullMoon:" + moon_table['next_full_moon'] + "\nNewMoon:" + moon_table['next_new_moon']
# if moon is in the sky, add azimuth and altitude
if moon_table['altitude'] > 0:
moon_data += "\nAz: " + str('{0:.2f}'.format(moon_table['azimuth'] * 180 / ephem.pi)) + "°" + \
"\nAlt: " + str('{0:.2f}'.format(moon_table['altitude'] * 180 / ephem.pi)) + "°"
return moon_data
def getNextSatellitePass(satellite, lat=0, lon=0):
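One caveat in the new phase ladder: illumination percent alone cannot distinguish waxing from waning, and the `Waning Gibbous`/`Last Quarter` branches sit after the `< 99`/`>= 99` tests, so they never run. A hedged sketch of one way to disambiguate with pyephem (check which of the next full or new moon comes first):
```python
# Sketch: disambiguate waxing vs waning via the next full/new moon dates.
import ephem

def moon_phase_name(obs: ephem.Observer) -> str:
    moon = ephem.Moon()
    moon.compute(obs)
    illum = moon.phase  # percent of the moon illuminated, 0..100
    # The moon is waxing when the next full moon precedes the next new moon.
    waxing = ephem.next_full_moon(obs.date) < ephem.next_new_moon(obs.date)
    if illum < 1.0:
        return "New Moon🌑"
    if illum > 99.0:
        return "Full Moon🌕"
    if 49.0 <= illum <= 51.0:
        return "First Quarter🌓" if waxing else "Last Quarter🌗"
    if illum < 49.0:
        return "Waxing Crescent🌒" if waxing else "Waning Crescent🌘"
    return "Waxing Gibbous🌔" if waxing else "Waning Gibbous🌖"
```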

View File: modules/system.py

@@ -94,6 +94,10 @@ if location_enabled:
# NOAA only features
help_message = help_message + ", wxa"
# USGS riverFlow Configuration
if riverListDefault != ['']:
help_message = help_message + ", riverflow"
# NOAA alerts needs location module
if wxAlertBroadcastEnabled or emergencyAlertBrodcastEnabled or volcanoAlertBroadcastEnabled:
from modules.locationdata import * # from the spudgunman/meshing-around repo
@@ -264,6 +268,7 @@ if ble_count > 1:
logger.debug(f"System: Initializing Interfaces")
interface1 = interface2 = interface3 = interface4 = interface5 = interface6 = interface7 = interface8 = interface9 = None
retry_int1 = retry_int2 = retry_int3 = retry_int4 = retry_int5 = retry_int6 = retry_int7 = retry_int8 = retry_int9 = False
myNodeNum1 = myNodeNum2 = myNodeNum3 = myNodeNum4 = myNodeNum5 = myNodeNum6 = myNodeNum7 = myNodeNum8 = myNodeNum9 = 777
max_retry_count1 = max_retry_count2 = max_retry_count3 = max_retry_count4 = max_retry_count5 = max_retry_count6 = max_retry_count7 = max_retry_count8 = max_retry_count9 = interface_retry_count
for i in range(1, 10):
interface_type = globals().get(f'interface{i}_type')
@@ -682,11 +687,24 @@ def messageTrap(msg):
message_list=msg.split(" ")
for m in message_list:
for t in trap_list:
# if word in message is in the trap list, return True
if t.lower() == m.lower():
return True
if cmdBang and m.startswith("!"):
return True
if not explicitCmd:
# if word in message is in the trap list, return True
if t.lower() == m.lower():
if cmdBang:
if m.startswith('!'):
return True
else:
continue
return True
else:
# if the index 0 of the message is a word in the trap list, return True
if t.lower() == m.lower() and message_list.index(m) == 0:
if cmdBang:
if m.startswith('!'):
return True
else:
continue
return True
# if no trap words found, run a search for near misses like ping? or cmd?
for m in message_list:
for t in range(len(trap_list)):
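Net behavior of the reworked matcher, in miniature (a hypothetical mini version; the real function also scans for near misses like `ping?`):
```python
# Hypothetical mini version of messageTrap's new matching rules.
def traps(msg: str, trap_list=("ping", "cmd"), explicit=True, bang=False) -> bool:
    for i, w in enumerate(msg.split()):
        word = w.lstrip("!").lower()
        if word in trap_list:
            if explicit and i != 0:
                continue  # explicitCmd: only the first word may trigger
            if bang and not w.startswith("!"):
                continue  # cmdBang: the command must be prefixed with !
            return True
    return False

assert traps("ping 15")                          # first word is a command
assert not traps("please ping me")               # blocked by explicitCmd
assert traps("please ping me", explicit=False)   # legacy any-word matching
assert traps("!ping", bang=True) and not traps("ping", bang=True)
```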

View File: pong_bot.py

@@ -254,6 +254,7 @@ def onReceive(packet, interface):
if 'decoded' in packet and packet['decoded']['portnum'] == 'TEXT_MESSAGE_APP':
message_bytes = packet['decoded']['payload']
message_string = message_bytes.decode('utf-8')
via_mqtt = packet['decoded'].get('viaMqtt', False)
# check if the packet is from us
if message_from_id == myNodeNum1 or message_from_id == myNodeNum2:
@@ -283,10 +284,17 @@ def onReceive(packet, interface):
else:
hop_start = 0
if enableHopLogs:
logger.debug(f"System: Packet HopDebugger: hop_away:{hop_away} hop_limit:{hop_limit} hop_start:{hop_start}")
if hop_away == 0 and hop_limit == 0 and hop_start == 0:
hop = "Last Hop"
hop_count = 0
if hop_start == hop_limit:
hop = "Direct"
hop_count = 0
elif hop_start == 0 and hop_limit > 0:
elif hop_start == 0 and hop_limit > 0 or via_mqtt:
hop = "MQTT"
hop_count = 0
else: