Compare commits


84 Commits

Author SHA1 Message Date
SpudGunMan 0c6fcf10ef Update README.md 2025-11-01 09:39:08 -07:00
SpudGunMan 647ae92649 cleanup 2025-11-01 00:49:33 -07:00
SpudGunMan 254eef4be9 fix No Data 2025-10-31 20:30:38 -07:00
SpudGunMan bd0a94e2a1 refactor ✈️
add better altitude detector
2025-10-31 20:06:56 -07:00
SpudGunMan 2d8256d9f7 Update test_bot.py 2025-10-31 17:25:20 -07:00
SpudGunMan 1f9b81865e Update test_bot.py 2025-10-31 17:15:30 -07:00
SpudGunMan 17221cf37f enhance auto-block
with string protectors
2025-10-31 16:19:13 -07:00
SpudGunMan 47dd75bfb3 autoBlock, enhance ban list and such
https://github.com/SpudGunMan/meshing-around/issues/252

# Enable or disable automatic banning of nodes
autoBanEnabled = False
2025-10-31 16:02:14 -07:00
SpudGunMan d4773705ce echo welcome! 2025-10-31 13:24:44 -07:00
SpudGunMan 4f46e659d9 TOC updated 2025-10-31 12:41:52 -07:00
SpudGunMan 404f84f39c echo motd 2025-10-31 12:39:19 -07:00
SpudGunMan c07ec534a7 Update mesh_bot.py 2025-10-31 12:33:38 -07:00
SpudGunMan 4d88aed0d8 🐬
enhance echo with admin functions
2025-10-31 12:04:52 -07:00
SpudGunMan b1946608f4 FIX👀
how the heck did I miss this!
2025-10-31 11:42:33 -07:00
SpudGunMan b92cf48fd0 fix update errors
removing this logic for now
2025-10-31 07:24:07 -07:00
SpudGunMan 227ffc94e6 Update update.sh 2025-10-31 07:22:58 -07:00
SpudGunMan b9f5a0c7f9 refactor
https://github.com/SpudGunMan/meshing-around/issues/249
2025-10-31 07:04:21 -07:00
SpudGunMan d56c1380c3 fix overload
floods node otherwise
2025-10-30 23:03:28 -07:00
SpudGunMan e8a8eefcc2 leaderboard enhancment
dont count messages to bot
2025-10-30 22:47:28 -07:00
SpudGunMan 5738e8d306 fix 2025-10-30 20:14:29 -07:00
SpudGunMan 11359e4016 cleanup 2025-10-30 20:09:16 -07:00
SpudGunMan 7bb31af1d2 Update mesh_bot.py 2025-10-30 19:48:53 -07:00
SpudGunMan fd115916f5 cleanup 2025-10-30 18:50:54 -07:00
SpudGunMan 32b60297c8 Update README.md 2025-10-30 17:00:01 -07:00
SpudGunMan f15a871967 fix alerting 2025-10-30 16:59:55 -07:00
SpudGunMan a346354dbc add reporting service back in
let me know if errors
2025-10-30 13:00:44 -07:00
SpudGunMan 3d8007bbf6 docs 2025-10-30 10:32:35 -07:00
SpudGunMan bb254474d0 fix config.ini
ownership issues my fault for not having this done a long time ago
2025-10-30 10:23:46 -07:00
SpudGunMan 37e3790ee4 Update update.sh 2025-10-30 09:03:19 -07:00
SpudGunMan 0ec380931a Update README.md 2025-10-30 07:42:46 -07:00
SpudGunMan 9cfd1bc670 Update README.md 2025-10-30 07:39:12 -07:00
SpudGunMan a672c94303 Update README.md 2025-10-30 07:38:32 -07:00
SpudGunMan 92b3574c22 news sort by 2025-10-30 07:24:54 -07:00
SpudGunMan 27d8e198ae Update rss.py 2025-10-30 05:52:52 -07:00
SpudGunMan 11eeaa445a Update rss.py 2025-10-30 05:51:20 -07:00
SpudGunMan 57efc8a69b more
🐄🫑
2025-10-30 00:16:49 -07:00
SpudGunMan 7442ce11b4 Update rss.py 2025-10-29 23:58:47 -07:00
SpudGunMan 8bb6ba4d8e Update rss.py 2025-10-29 23:48:24 -07:00
SpudGunMan da10af8d93 Update rss.py 2025-10-29 23:37:18 -07:00
SpudGunMan 46a33178f6 Update rss.py 2025-10-29 23:35:12 -07:00
SpudGunMan e07c5a923e headline
headline command which uses NewsAPI.org
2025-10-29 23:28:14 -07:00
SpudGunMan d330f3e0d6 patchAlerts 2025-10-29 21:51:57 -07:00
SpudGunMan eddb2fe08c patch alerting 2025-10-29 21:49:14 -07:00
SpudGunMan ebe729cf13 leaderboardFix 2025-10-29 21:36:05 -07:00
SpudGunMan 41a45c6e9c Update README.md 2025-10-29 21:22:15 -07:00
SpudGunMan 4224579f79 Update checklist.md 2025-10-29 21:16:32 -07:00
SpudGunMan aa43d4acad auto Approve
approval is needed to alarm
2025-10-29 21:12:54 -07:00
SpudGunMan 4406f2b86f it SPEAKS
KittenML/KittenTTS
2025-10-29 20:52:14 -07:00
SpudGunMan 649c959304 Update radio.py 2025-10-29 19:28:41 -07:00
SpudGunMan 3529e40743 Update radio.py 2025-10-29 19:05:00 -07:00
SpudGunMan f5c2dfa5e4 cleanup 2025-10-29 12:19:09 -07:00
SpudGunMan 1fb144ae1e docs 2025-10-29 11:43:33 -07:00
SpudGunMan 7e66ffc3a0 docs 2025-10-29 11:42:48 -07:00
SpudGunMan d7371fae98 change approve 2025-10-29 11:39:47 -07:00
SpudGunMan e4c51c97a1 Update checklist.py 2025-10-29 11:37:19 -07:00
SpudGunMan 70f072d222 default is nonApproved 2025-10-29 11:29:33 -07:00
SpudGunMan 8bb587cc7a Update checklist.py 2025-10-29 11:22:32 -07:00
SpudGunMan 313c313412 cleanup admin checklist 2025-10-29 11:15:02 -07:00
SpudGunMan e5e8fbd0b5 Update checklist.py 2025-10-29 10:55:44 -07:00
SpudGunMan 2ef96f3ae3 Update checklist.py 2025-10-29 10:52:30 -07:00
SpudGunMan a58605aba3 Update system.py 2025-10-29 10:47:29 -07:00
SpudGunMan ffdd3a1ea9 enhance 2025-10-29 10:44:42 -07:00
SpudGunMan 185de28139 Update system.py 2025-10-29 10:30:26 -07:00
SpudGunMan 0eea36fba2 Update system.py 2025-10-29 10:16:20 -07:00
SpudGunMan cb9e62894d Update system.py 2025-10-29 10:11:11 -07:00
SpudGunMan 9443d5fb0a Update system.py 2025-10-29 10:06:53 -07:00
SpudGunMan 1751648b12 Update system.py 2025-10-29 10:03:26 -07:00
SpudGunMan 8823d415c3 Update mesh_bot.py 2025-10-29 09:58:44 -07:00
SpudGunMan 55a1d951a7 Update mesh_bot.py 2025-10-29 09:57:42 -07:00
SpudGunMan c8096107a0 Update mesh_bot.py 2025-10-29 09:55:56 -07:00
SpudGunMan 5bdf1a9d6c Update mesh_bot.py 2025-10-29 09:54:56 -07:00
SpudGunMan 85344db27e Update settings.py 2025-10-29 09:50:51 -07:00
SpudGunMan 5990a859d9 Update settings.py 2025-10-29 09:49:23 -07:00
SpudGunMan ad6a55b9cd Update checklist.py 2025-10-29 09:47:50 -07:00
SpudGunMan 6fcd981eae Update mesh_bot.py 2025-10-29 09:47:11 -07:00
SpudGunMan 9564c92cc8 Update checklist.py 2025-10-29 09:46:08 -07:00
SpudGunMan 149dc10df6 Create test_checklist.py 2025-10-29 09:44:27 -07:00
SpudGunMan e211efca4e Update mesh_bot.py 2025-10-29 09:35:37 -07:00
SpudGunMan a974de790b refactor Alerts 2025-10-29 09:31:21 -07:00
SpudGunMan 777c423f17 refactor 2025-10-29 09:30:59 -07:00
SpudGunMan dbcb93eabb refactor Alerts 🚨 2025-10-29 08:29:20 -07:00
SpudGunMan 69518ea317 enhance 2025-10-29 08:21:09 -07:00
SpudGunMan 11faea2b4e purge 2025-10-29 00:48:59 -07:00
SpudGunMan acb0e870d6 cleanup 2025-10-29 00:15:46 -07:00
28 changed files with 1349 additions and 1142 deletions
+24 -1
@@ -196,4 +196,27 @@ From your project root, run one of the following commands:
- The script requires a Python virtual environment (`venv`) to be present in the project directory.
- If `venv` is missing, the script will exit with an error message.
- Always provide an argument (`mesh`, `pong`, `html`, `html5`, or `add`) to specify what you want to launch.
## Troubleshooting
### Permissions Issues
If you encounter errors related to file or directory permissions (e.g., "Permission denied" or services failing to start):
- Ensure you are running installation scripts with sufficient privileges (use `sudo` if needed).
- The `logs`, `data`, and `config.ini` files must be owned by the user running the bot (often `meshbot` or your current user).
- You can manually reset permissions using the provided script:
```sh
sudo bash etc/set-permissions.sh meshbot
```
- If you moved the project directory, re-run the permissions script to update ownership.
- For systemd service issues, check logs with:
```sh
sudo journalctl -u mesh_bot.service
```
If problems persist, double-check that the user specified in your service files matches the owner of the project files and directories.
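The ownership check described above can be scripted. The following is a minimal sketch (the helper name `owner_of` and the checked paths are illustrative, not part of the project) for verifying which user owns the bot's files on a POSIX system:

```python
import os
import pwd

def owner_of(path):
    """Return the username that owns `path` (POSIX only)."""
    return pwd.getpwuid(os.stat(path).st_uid).pw_name

if __name__ == "__main__":
    # Adjust these paths to your project layout; only existing paths are checked.
    for p in ("config.ini", "logs", "data"):
        if os.path.exists(p):
            print(f"{p} is owned by {owner_of(p)}")
```

If the reported owner differs from the `User=` in your systemd service file, re-run the permissions script shown above.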
+16 -15
@@ -40,11 +40,12 @@ Mesh Bot is a feature-rich Python bot designed to enhance your [Meshtastic](http
- **New Node Greetings**: Automatically greet new nodes via text.
### Interactive AI and Data Lookup
- **Weather, Earthquake, River, and Tide Data**: Get local alerts and info from NOAA/USGS; uses Open-Meteo for areas outside NOAA coverage. Global tide predictions available via tidepredict library for worldwide locations.
- **Wikipedia Search**: Retrieve summaries from Wikipedia.
- **Weather, Earthquake, River, and Tide Data**: Get local alerts and info from NOAA/USGS; uses Open-Meteo for areas outside NOAA coverage.
- **Wikipedia Search**: Retrieve summaries from Wikipedia and Kiwix
- **OpenWebUI, Ollama LLM Integration**: Query the [Ollama](https://github.com/ollama/ollama/tree/main/docs) AI for advanced responses. Supports RAG (Retrieval Augmented Generation) with Wikipedia/Kiwix context and [OpenWebUI](https://github.com/open-webui/open-webui) integration for enhanced AI capabilities. [LLM Readme](modules/llm.md)
- **Satellite Passes**: Find upcoming satellite passes for your location.
- **GeoMeasuring Tools**: Calculate distances and midpoints using collected GPS data; supports Fox & Hound direction finding.
- **RSS & News Feeds**: Receive news and data from multiple sources directly on the mesh.
### Proximity Alerts
- **Location-Based Alerts**: Get notified when members arrive at a configured latitude/longitude—ideal for campsites, geo-fences, or remote locations. Optionally, trigger scripts, send emails, or automate actions (e.g., change node config, turn on lights, or drop an `alert.txt` file to start a survey or game).
@@ -52,12 +53,25 @@ Mesh Bot is a feature-rich Python bot designed to enhance your [Meshtastic](http
- **High Flying Alerts**: Receive notifications when nodes with high altitude are detected on the mesh.
- **Voice/Command Triggers**: Activate bot functions using keywords or voice commands (see [Voice Commands](#voice-commands-vox) for "Hey Chirpy!" support).
### EAS Alerts
- **FEMA iPAWS/EAS Alerts**: Receive Emergency Alerts from FEMA via API on internet-connected nodes.
- **NOAA EAS Alerts**: Get Emergency Alerts from NOAA via API.
- **USGS Volcano Alerts**: Receive volcano alerts from USGS via API.
- **NINA Alerts (Germany)**: Receive emergency alerts from the xrepository.de feed for Germany.
- **Offline EAS Alerts**: Report EAS alerts over the mesh using external tools, even without internet.
### File Monitor Alerts
- **File Monitoring**: Watch a text file for changes and broadcast updates to the mesh channel.
- **News File Access**: Retrieve the contents of a news file on request; supports multiple news sources or files.
- **Shell Command Access**: Execute shell commands via DM with replay protection (admin only).
#### Radio Frequency Monitoring
- **SNR RF Activity Alerts**: Monitor radio frequencies and receive alerts when high SNR (Signal-to-Noise Ratio) activity is detected.
- **Hamlib Integration**: Use Hamlib (rigctld) to monitor the S meter on a connected radio.
- **Speech-to-Text Broadcasting**: Convert received audio to text using [Vosk](https://alphacephei.com/vosk/models) and broadcast it to the mesh.
- **WSJT-X Integration**: Monitor WSJT-X (FT8, FT4, WSPR, etc.) decode messages and forward them to the mesh network with optional callsign filtering.
- **JS8Call Integration**: Monitor JS8Call messages and forward them to the mesh network with optional callsign filtering.
- **Meshages TTS**: The bot can speak mesh messages aloud using [KittenTTS](https://github.com/KittenML/KittenTTS). Enable this feature to have important alerts and messages read out loud on your device—ideal for hands-free operation or accessibility. See [radio.md](modules/radio.md) for setup instructions.
### Asset Tracking, Check-In/Check-Out, and Inventory Management
Advanced check-in/check-out and asset tracking for people and equipment—ideal for accountability, safety monitoring, and logistics (e.g., Radio-Net, FEMA, trailhead groups). Admin approval workflows, GPS location capture, and overdue alerts. The integrated inventory and point-of-sale (POS) system enables item management, sales tracking, cart-based transactions, and daily reporting, for swaps, emergency supply management, and field operations, maker-places.
@@ -79,21 +93,8 @@ Advanced check-in/check-out and asset tracking for people and equipment—ideal
- **User Feedback**: Users participate via DM; responses are logged for review.
- **Reporting**: Retrieve survey results with `survey report` or `survey report <surveyname>`.
### EAS Alerts
- **FEMA iPAWS/EAS Alerts**: Receive Emergency Alerts from FEMA via API on internet-connected nodes.
- **NOAA EAS Alerts**: Get Emergency Alerts from NOAA via API.
- **USGS Volcano Alerts**: Receive volcano alerts from USGS via API.
- **Offline EAS Alerts**: Report EAS alerts over the mesh using external tools, even without internet.
- **NINA Alerts (Germany)**: Receive emergency alerts from the xrepository.de feed for Germany.
### File Monitor Alerts
- **File Monitoring**: Watch a text file for changes and broadcast updates to the mesh channel.
- **News File Access**: Retrieve the contents of a news file on request; supports multiple news sources or files.
- **Shell Command Access**: Execute shell commands via DM with replay protection (admin only).
### Data Reporting
- **HTML Reports**: Visualize bot traffic and data flows with a built-in HTML generator. See [data reporting](logs/README.md) for details.
- **RSS & News Feeds**: Receive news and data from multiple sources directly on the mesh.
### Robust Message Handling
- **Automatic Message Chunking**: Messages over 160 characters are automatically split to ensure reliable delivery across multiple hops.
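The chunking behavior described in the last bullet can be sketched as follows. This is an illustrative implementation, not the bot's actual code; it splits long text into pieces no longer than the 160-character limit, preferring whitespace boundaries:

```python
def chunk_message(text, limit=160):
    """Split `text` into pieces no longer than `limit` characters,
    breaking on whitespace when possible so words stay intact."""
    chunks = []
    while len(text) > limit:
        # look for the last space within the allowed window
        cut = text.rfind(" ", 1, limit + 1)
        if cut <= 0:
            cut = limit  # no space found: hard-split at the limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks
```

Each chunk can then be transmitted separately and reassembled in order on the receiving side.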
+43 -30
@@ -62,6 +62,12 @@ rssFeedURL = http://www.hackaday.com/rss.xml,http://rss.slashdot.org/Slashdot/sl
rssFeedNames = default,slashdot,mesh
rssMaxItems = 3
rssTruncate = 100
# enable or disable the headline command which uses NewsAPI.org key at https://newsapi.org/register
enableNewsAPI = False
newsAPI_KEY =
newsAPIregion = us
# could also be 'relevancy' or 'popularity' or 'publishedAt'
sort_by = relevancy
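For reference, a request against NewsAPI.org using the key and region configured above might be built like this. This is a hedged sketch, not the bot's implementation; the parameter names (`apiKey`, `country`, `pageSize`) follow NewsAPI's public documentation, and the network call only runs when a key is supplied:

```python
import json
import os
import urllib.parse
import urllib.request

def build_headlines_url(api_key, country="us", page_size=5):
    # NewsAPI /v2/top-headlines endpoint; parameter names per newsapi.org docs
    params = {"country": country, "pageSize": page_size, "apiKey": api_key}
    return "https://newsapi.org/v2/top-headlines?" + urllib.parse.urlencode(params)

if __name__ == "__main__":
    key = os.environ.get("NEWSAPI_KEY", "")  # hypothetical env var for this sketch
    if key:  # only hit the network when a key is configured
        with urllib.request.urlopen(build_headlines_url(key)) as resp:
            data = json.load(resp)
        for article in data.get("articles", []):
            print(article.get("title"))
```

Note that the `sort_by` setting above maps to NewsAPI's `sortBy` parameter, which applies to its search endpoint rather than top headlines.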
# enable or disable the wikipedia search module
wikipedia = True
@@ -203,18 +209,27 @@ useMetric = False
# repeaterList lookup location (rbook / artsci / False)
repeaterLookup = rbook
# Satellite Pass Prediction
# Register for free API https://www.n2yo.com/login/ personal data page at bottom 'Are you developer?'
n2yoAPIKey =
# NORAD list https://www.n2yo.com/satellites/
satList = 25544,7530
# use Open-Meteo API for weather data not NOAA useful for non US locations
UseMeteoWxAPI = False
# NOAA weather forecast days
NOAAforecastDuration = 3
# number of weather alerts to display
NOAAalertCount = 2
# use Open-Meteo API for weather data not NOAA useful for non US locations
UseMeteoWxAPI = False
# Global Tide Prediction using tidepredict (for non-US locations or offline use)
# When enabled, uses tidepredict library for global tide predictions instead of NOAA API
# tidepredict uses University of Hawaii's Research Quality Dataset for worldwide coverage
useTidePredict = False
# NOAA Weather EAS Alert Broadcast
wxAlertBroadcastEnabled = False
# Enable ignoring of any message that includes the following word list
ignoreEASenable = False
ignoreEASwords = test,advisory
# Add extra location to the weather alert
enableExtraLocationWx = False
# NOAA Coastal Data Enable NOAA Coastal Waters Forecasts and Tide
coastalEnabled = False
@@ -230,52 +245,40 @@ coastalForecastDays = 3
# for multiple rivers use comma separated list e.g. 12484500,14105700
riverList =
# NOAA EAS Alert Broadcast
wxAlertBroadcastEnabled = False
# Enable ignoring of any message that includes the following word list
ignoreEASenable = False
ignoreEASwords = test,advisory
# EAS Alert Broadcast Channels
wxAlertBroadcastCh = 2
# Add extra location to the weather alert
enableExtraLocationWx = False
# Government Alert Broadcast defaults to FEMA IPAWS
eAlertBroadcastEnabled = False
# USA FEMA IPAWS alerts
ipawsAlertEnabled = True
# comma separated list of FIPS codes to trigger local alert. find your FIPS codes at https://en.wikipedia.org/wiki/Federal_Information_Processing_Standard_state_code
myFIPSList = 57,58,53
# Find your SAME codes at https://www.weather.gov/nwr/counties; comma separated list of SAME codes to further refine local alerts.
mySAMEList = 053029,053073
# Government Alert Broadcast Channels
eAlertBroadcastCh = 2
# Enable ignoring of any headline that includes the following word list
ignoreFEMAenable = True
ignoreFEMAwords = test,exercise
# USGS Volcano alerts Enable USGS Volcano Alert Broadcast
volcanoAlertBroadcastEnabled = False
volcanoAlertBroadcastCh = 2
# Enable ignoring of any message that includes the following word list
ignoreUSGSEnable = False
ignoreUSGSWords = test,advisory
# Use DE Alert Broadcast Data
# Use Germany/DE Alert Broadcast Data
enableDEalerts = False
# comma separated list of regional codes trigger local alert.
# find your regional code at https://www.xrepository.de/api/xrepository/urn:de:bund:destatis:bevoelkerungsstatistik:schluessel:rs_2021-07-31/download/Regionalschl_ssel_2021-07-31.json
myRegionalKeysDE = 110000000000,120510000000
# Satellite Pass Prediction
# Register for free API https://www.n2yo.com/login/ personal data page at bottom 'Are you developer?'
n2yoAPIKey =
# NORAD list https://www.n2yo.com/satellites/
satList = 25544,7530
# Alerts are sent to the emergency_handler interface and channel; duplicate messages are sent here if set
eAlertBroadcastCh =
# CheckList Checkin/Checkout
[checklist]
enabled = False
checklist_db = data/checklist.db
reverse_in_out = False
# Auto approve new checklists
auto_approve = True
# Check-in reminder interval is 5min
# Checkin broadcast interface and channel is emergency_handler interface and channel
# Inventory and Point of Sale System
[inventory]
@@ -360,6 +363,10 @@ voxTrapList = chirpy
# allow use of 'weather' and 'joke' commands via VOX
voxEnableCmd = True
# Meshages Text-to-Speech (TTS) for incoming messages and DM
meshagesTTS = False
ttsChannels = 2
# WSJT-X UDP monitoring - listens for decode messages from WSJT-X, FT8/FT4/WSPR etc.
wsjtxDetectionEnabled = False
# UDP address and port where WSJT-X broadcasts (default: 127.0.0.1:2237)
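WSJT-X broadcasts binary datagrams on the address configured above; each begins with a 12-byte header (magic number, schema version, message type, all big-endian uint32 per WSJT-X's NetworkMessage protocol). A minimal parsing sketch, with the function and variable names being illustrative:

```python
import os
import struct

WSJTX_MAGIC = 0xADBCCBDA  # magic number from WSJT-X's NetworkMessage protocol

def parse_header(datagram):
    """Parse the WSJT-X UDP header; return (schema, msg_type) or None
    if the datagram is too short or the magic number does not match."""
    if len(datagram) < 12:
        return None
    magic, schema, msg_type = struct.unpack(">III", datagram[:12])
    if magic != WSJTX_MAGIC:
        return None
    return schema, msg_type

if __name__ == "__main__" and os.environ.get("WSJTX_LISTEN"):
    import socket
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 2237))  # default WSJT-X UDP target
    while True:
        data, _addr = sock.recvfrom(4096)
        header = parse_header(data)
        if header:
            print(f"schema={header[0]} message_type={header[1]}")
```

Decode messages (message type 2) carry the FT8/FT4/WSPR text that would be forwarded to the mesh; full payload decoding requires walking the remaining Qt-serialized fields.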
@@ -394,9 +401,9 @@ enable_runShellCmd = False
# direct shell command handler the x: command in DMs
allowXcmd = False
# Enable 2 factor authentication for x: commands
2factor_enabled = True
twoFactor_enabled = True
# time in seconds to wait for the correct 2FA answer
2factor_timeout = 100
twoFactor_timeout = 100
[smtp]
# enable or disable the SMTP module
@@ -474,3 +481,9 @@ DEBUGpacket = False
# metaPacket detailed logging, the filter negates the port ID
debugMetadata = False
metadataFilter = TELEMETRY_APP,POSITION_APP
# Enable or disable automatic banning of nodes
autoBanEnabled = False
# Number of offenses before auto-ban
autoBanThreshold = 5
# Timeframe for offenses (in seconds)
autoBanTimeframe = 3600
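The `autoBanThreshold` / `autoBanTimeframe` pair describes a sliding-window policy: ban a node once it accumulates the threshold number of offenses within the timeframe. A sketch of that policy (class and method names here are illustrative, not the bot's internals):

```python
import time
from collections import defaultdict, deque

class AutoBanTracker:
    """Sliding-window offense tracker matching the autoBan* settings."""

    def __init__(self, threshold=5, timeframe=3600):
        self.threshold = threshold    # offenses before ban
        self.timeframe = timeframe    # window in seconds
        self.offenses = defaultdict(deque)

    def record_offense(self, node_id, now=None):
        """Log an offense; return True if the node should now be banned."""
        now = time.monotonic() if now is None else now
        window = self.offenses[node_id]
        window.append(now)
        # drop offenses that fell outside the timeframe
        while window and now - window[0] > self.timeframe:
            window.popleft()
        return len(window) >= self.threshold
```

With the defaults above, five offenses within one hour would trigger the ban; older offenses age out of the window automatically.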
+1 -1
@@ -22,7 +22,7 @@ Environment=SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
Environment=PYTHONUNBUFFERED=1
Restart=on-failure
# try Type=simple if any problems
Type=notify
[Install]
WantedBy=default.target
-1
@@ -23,7 +23,6 @@ ExecStop=pkill -f report_generator5.py
Environment=PYTHONUNBUFFERED=1
Restart=on-failure
# try Type=simple if any problems
Type=notify
[Install]
WantedBy=timers.target
-1
@@ -22,7 +22,6 @@ Environment=SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
Environment=PYTHONUNBUFFERED=1
Restart=on-failure
# try Type=simple if any problems
Type=notify
[Install]
WantedBy=default.target
+3 -3
@@ -2,7 +2,7 @@
# # Simulate meshing-around de K7MHI 2024
from modules.log import logger, getPrettyTime # Import the logger; ### --> If you are reading this put the script in the project root <-- ###
import time
import datetime
from datetime import datetime
import random
# Initialize the tool
@@ -51,8 +51,8 @@ def example_handler(message, nodeID, deviceID):
msg = f"Hello {get_name_from_number(nodeID)}, simulator ready for testing {projectName} project! on device {deviceID}"
msg += f" Your location is {location}"
msg += f" you said: {message}"
# Add timestamp
msg += f" [Time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}]"
return msg
+20 -20
@@ -285,15 +285,6 @@ sudo usermod -a -G tty "$whoami"
sudo usermod -a -G bluetooth "$whoami"
echo "Added user $whoami to dialout, tty, and bluetooth groups"
sudo chown -R "$whoami:$whoami" "$program_path/logs"
sudo chown -R "$whoami:$whoami" "$program_path/data"
sudo chown "$whoami:$whoami" "$program_path/config.ini"
sudo chmod 640 "$program_path/config.ini"
echo "Permissions set for meshbot on config.ini"
sudo chmod 750 "$program_path/logs"
sudo chmod 750 "$program_path/data"
echo "Permissions set for meshbot on logs and data directories"
# check and see if some sort of NTP is running
if ! systemctl is-active --quiet ntp.service && \
! systemctl is-active --quiet systemd-timesyncd.service && \
@@ -321,17 +312,17 @@ if [[ $(echo "${bot}" | grep -i "^m") ]]; then
fi
# install mesh_bot_reporting timer to run daily at 4:20 am
# echo ""
# echo "Installing mesh_bot_reporting.timer to run mesh_bot_reporting daily at 4:20 am..."
# sudo cp etc/mesh_bot_reporting.service /etc/systemd/system/
# sudo cp etc/mesh_bot_reporting.timer /etc/systemd/system/
# sudo systemctl daemon-reload
# sudo systemctl enable mesh_bot_reporting.timer
# sudo systemctl start mesh_bot_reporting.timer
# echo "mesh_bot_reporting.timer installed and enabled"
# echo "Check timer status with: systemctl status mesh_bot_reporting.timer"
# echo "List all timers with: systemctl list-timers"
# echo ""
echo ""
echo "Installing mesh_bot_reporting.timer to run mesh_bot_reporting daily at 4:20 am..."
sudo cp etc/mesh_bot_reporting.service /etc/systemd/system/
sudo cp etc/mesh_bot_reporting.timer /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable mesh_bot_reporting.timer
sudo systemctl start mesh_bot_reporting.timer
echo "mesh_bot_reporting.timer installed and enabled"
echo "Check timer status with: systemctl status mesh_bot_reporting.timer"
echo "List all timers with: systemctl list-timers"
echo ""
# # install mesh_bot_w3_server service
# echo "Installing mesh_bot_w3_server.service to run the web3 server..."
@@ -468,6 +459,15 @@ else
printf "*** Stay Up to date using 'bash update.sh' ***\n" >> install_notes.txt
fi
sudo chown -R "$whoami:$whoami" "$program_path/logs"
sudo chown -R "$whoami:$whoami" "$program_path/data"
sudo chown "$whoami:$whoami" "$program_path/config.ini"
sudo chmod 640 "$program_path/config.ini"
echo "Permissions set for meshbot on config.ini"
sudo chmod 750 "$program_path/logs"
sudo chmod 750 "$program_path/data"
echo "Permissions set for meshbot on logs and data directories"
printf "\nInstallation complete?\n"
exit 0
+123 -59
@@ -40,10 +40,10 @@ def auto_response(message, snr, rssi, hop, pkiStatus, message_from_id, channel_n
"bbspost": lambda: handle_bbspost(message, message_from_id, deviceID),
"bbsread": lambda: handle_bbsread(message),
"blackjack": lambda: handleBlackJack(message, message_from_id, deviceID),
"approvecl": lambda: handle_checklist(message, message_from_id, deviceID),
"denycl": lambda: handle_checklist(message, message_from_id, deviceID),
"checkin": lambda: handle_checklist(message, message_from_id, deviceID),
"checklist": lambda: handle_checklist(message, message_from_id, deviceID),
"checklistapprove": lambda: handle_checklist(message, message_from_id, deviceID),
"checklistdeny": lambda: handle_checklist(message, message_from_id, deviceID),
"checkout": lambda: handle_checklist(message, message_from_id, deviceID),
"chess": lambda: handle_gTnW(chess=True),
"clearsms": lambda: handle_sms(message_from_id, message),
@@ -84,6 +84,7 @@ def auto_response(message, snr, rssi, hop, pkiStatus, message_from_id, channel_n
"cartremove": lambda: handle_inventory(message, message_from_id, deviceID),
"cartsell": lambda: handle_inventory(message, message_from_id, deviceID),
"joke": lambda: tell_joke(message_from_id),
"latest": lambda: get_newsAPI(message),
"leaderboard": lambda: get_mesh_leaderboard(message, message_from_id, deviceID),
"lemonstand": lambda: handleLemonade(message, message_from_id, deviceID),
"lheard": lambda: handle_lheard(message, message_from_id, deviceID, isDM),
@@ -96,8 +97,6 @@ def auto_response(message, snr, rssi, hop, pkiStatus, message_from_id, channel_n
"ping": lambda: handle_ping(message_from_id, deviceID, message, hop, snr, rssi, isDM, channel_number),
"pinging": lambda: handle_ping(message_from_id, deviceID, message, hop, snr, rssi, isDM, channel_number),
"pong": lambda: "🏓PING!!🛜",
"purgein": lambda: handle_checklist(message, message_from_id, deviceID),
"purgeout": lambda: handle_checklist(message, message_from_id, deviceID),
"q:": lambda: quizHandler(message, message_from_id, deviceID),
"quiz": lambda: quizHandler(message, message_from_id, deviceID),
"readnews": lambda: handleNews(message_from_id, deviceID, message, isDM),
@@ -250,7 +249,11 @@ def handle_ping(message_from_id, deviceID, message, hop, snr, rssi, isDM, chann
global multiPing
myNodeNum = globals().get(f'myNodeNum{deviceID}', 777)
if "?" in message and isDM:
return message.split("?")[0].title() + " command returns SNR and RSSI, or hopcount from your message. Try adding e.g. @place or #tag"
pingHelp = "🤖Ping Command Help:\n" \
"🏓 Send 'ping' or 'ack' or 'test' to get a response.\n" \
"🏓 Send 'ping <number>' to get multiple pings in DM\n" \
"🏓 ping @USERID to send a Joke from the bot"
return pingHelp
msg = ""
type = ''
@@ -331,8 +334,11 @@ def handle_ping(message_from_id, deviceID, message, hop, snr, rssi, isDM, chann
# no autoping in channels
pingCount = 1
if pingCount > 51:
if pingCount > 51 and pingCount <= 101:
pingCount = 50
if pingCount > 800:
ban_hammer(message_from_id, deviceID, reason="Excessive auto-ping request")
return "🚫⛔️auto-ping request denied."
except ValueError:
pingCount = -1
@@ -359,7 +365,8 @@ def handle_emergency(message_from_id, deviceID, message):
# if user in bbs_ban_list return
if str(message_from_id) in my_settings.bbs_ban_list:
# silent discard
logger.warning(f"System: {message_from_id} on spam list, no emergency responder alert sent")
hammer_value = ban_hammer(message_from_id, deviceID, reason="Emergency Alert from banned node")
logger.warning(f"System: {message_from_id} on spam list, no emergency responder alert sent. Ban hammer value: {hammer_value}")
return ''
# trigger alert to emergency_responder_alert_channel
if message_from_id != 0:
@@ -391,11 +398,42 @@ def handle_motd(message, message_from_id, isDM):
return msg
def handle_echo(message, message_from_id, deviceID, isDM, channel_number):
# Check if user is admin
isAdmin = isNodeAdmin(message_from_id)
# Admin extended syntax: echo <string> c=<channel> d=<device>
if isAdmin and message.strip().lower().startswith("echo ") and not message.strip().endswith("?"):
msg_to_echo = message.split(" ", 1)[1]
target_channel = channel_number
target_device = deviceID
# Split into words to find c= and d=, but preserve spaces in message
words = msg_to_echo.split()
new_words = []
for w in words:
if w.startswith("c=") and w[2:].isdigit():
target_channel = int(w[2:])
elif w.startswith("d=") and w[2:].isdigit():
target_device = int(w[2:])
else:
new_words.append(w)
msg_to_echo = " ".join(new_words).strip()
# Replace motd/MOTD with the current MOTD from settings
msg_to_echo = " ".join(my_settings.MOTD if w.lower() == "motd" else w for w in msg_to_echo.split())
# Replace welcome! with the current welcome_message from settings
msg_to_echo = " ".join(my_settings.welcome_message if w.lower() == "welcome!" else w for w in msg_to_echo.split())
# Send echo to specified channel/device
logger.debug(f"System: Admin Echo to channel {target_channel} device {target_device} message: {msg_to_echo}")
time.sleep(splitDelay) # throttle for 2x send
send_message(msg_to_echo, target_channel, 0, target_device)
time.sleep(splitDelay) # throttle for 2x send
return f"🐬echoed to channel {target_channel} device {target_device}"
# dev echoBinary off
echoBinary = False
if echoBinary:
try:
#send_raw_bytes echo the data to the channel with synch word:
port_num = 256
synch_word = b"echo:"
parts = message.split("echo ", 1)
@@ -404,25 +442,29 @@ def handle_echo(message, message_from_id, deviceID, isDM, channel_number):
raw_bytes = synch_word + msg_to_echo.encode('utf-8')
send_raw_bytes(message_from_id, raw_bytes, nodeInt=deviceID, channel=channel_number, portnum=port_num)
return f"Sent binary echo message to {message_from_id} to {port_num} on channel {channel_number} device {deviceID}"
else:
return "Please provide a message to echo back to you. Example:echo Hello World"
except Exception as e:
logger.error(f"System: Echo Exception {e}")
return f"Sent binary echo message to {message_from_id} to {port_num} on channel {channel_number} device {deviceID}"
if "?" in message.lower():
return "command returns your message back to you. Example:echo Hello World"
elif "echo " in message.lower():
parts = message.lower().split("echo ", 1)
if "?" in message:
isAdmin = isNodeAdmin(message_from_id)
if isAdmin:
return (
"Admin usage: echo <message> c=<channel> d=<device>\n"
"Example: echo Hello world c=1 d=2"
)
return "command returns your message back to you. Example: echo Hello World"
# process normal echo back to user
elif message.strip().lower().startswith("echo "):
parts = message.split("echo ", 1)
if len(parts) > 1 and parts[1].strip() != "":
echo_msg = parts[1]
if channel_number != my_settings.echoChannel and not isDM:
echo_msg = "@" + get_name_from_number(message_from_id, 'short', deviceID) + " " + echo_msg
return echo_msg
else:
return "Please provide a message to echo back to you. Example:echo Hello World"
else:
return "Please provide a message to echo back to you. Example:echo Hello World"
return "Please provide a message to echo back to you. Example: echo Hello World"
return "🐬echo.."
def handle_wxalert(message_from_id, deviceID, message):
if my_settings.use_meteo_wxApi:
@@ -1428,21 +1470,10 @@ def handle_repeaterQuery(message_from_id, deviceID, channel_number):
return "Repeater lookup not enabled"
def handle_tide(message_from_id, deviceID, channel_number, vox=False):
# Check if tidepredict (xtide) is enabled
if vox:
return get_NOAAtide(str(my_settings.latitudeValue), str(my_settings.longitudeValue))
location = get_node_location(message_from_id, deviceID, channel_number)
lat = str(location[0])
lon = str(location[1])
if lat == "0.0" or lon == "0.0":
lat = str(my_settings.latitudeValue)
lon = str(my_settings.longitudeValue)
if my_settings.useTidePredict:
logger.debug("System: Location: Using tidepredict")
return xtide.get_tide_predictions(lat, lon)
else:
# Fallback to NOAA tide data
logger.debug("System: Location: Using NOAA")
return get_NOAAtide(str(location[0]), str(location[1]))
return get_NOAAtide(str(location[0]), str(location[1]))
def handle_moon(message_from_id, deviceID, channel_number, vox=False):
if vox:
@@ -1552,6 +1583,9 @@ def handle_boot(mesh=True):
if my_settings.solar_conditions_enabled:
logger.debug("System: Celestial Telemetry Enabled")
if my_settings.meshagesTTS:
logger.debug("System: Meshages TTS Text-to-Speech Enabled")
if my_settings.location_enabled:
if my_settings.use_meteo_wxApi:
@@ -1564,23 +1598,23 @@ def handle_boot(mesh=True):
if my_settings.coastalEnabled:
logger.debug("System: Coastal Forecast and Tide Enabled!")
if my_settings.useTidePredict:
logger.debug("System: Using Local TidePredict for Tide Data")
if games_enabled:
logger.debug("System: Games Enabled!")
if my_settings.wikipedia_enabled:
if my_settings.use_kiwix_server:
logger.debug(f"System: Wikipedia search Enabled using Kiwix server at {kiwix_url}")
logger.debug(f"System: Wikipedia search Enabled using Kiwix server at {my_settings.kiwix_url}")
else:
logger.debug("System: Wikipedia search Enabled")
if my_settings.rssEnable:
logger.debug(f"System: RSS Feed Reader Enabled for feeds: {rssFeedNames}")
logger.debug(f"System: RSS Feed Reader Enabled for feeds: {my_settings.rssFeedNames}")
if my_settings.enable_headlines:
logger.debug("System: News Headlines Enabled from NewsAPI.org")
if my_settings.radio_detection_enabled:
logger.debug(f"System: Radio Detection Enabled using rigctld at {my_settings.rigControlServerAddress} broadcasting to channels: {my_settings.sigWatchBroadcastCh} for {get_freq_common_name(get_hamlib('f'))}")
logger.debug(f"System: Radio Detection Enabled using rigctld at {my_settings.rigControlServerAddress} broadcasting to channels: {my_settings.sigWatchBroadcastCh}")
if my_settings.file_monitor_enabled:
logger.warning(f"System: File Monitor Enabled for {my_settings.file_monitor_file_path}, broadcasting to channels: {my_settings.file_monitor_broadcastCh}")
@@ -1591,21 +1625,21 @@ def handle_boot(mesh=True):
if my_settings.read_news_enabled:
logger.debug(f"System: File Monitor News Reader Enabled for {my_settings.news_file_path}")
if my_settings.bee_enabled:
logger.debug("System: File Monitor Bee Monitor Enabled for bee.txt")
if my_settings.wxAlertBroadcastEnabled:
logger.debug(f"System: Weather Alert Broadcast Enabled on channels {my_settings.wxAlertBroadcastChannel}")
if my_settings.emergencyAlertBrodcastEnabled:
logger.debug(f"System: Emergency Alert Broadcast Enabled on channels {my_settings.emergencyAlertBroadcastCh} for FIPS codes {my_settings.myStateFIPSList}")
if my_settings.myStateFIPSList == ['']:
logger.warning("System: No FIPS codes set for iPAWS Alerts")
if my_settings.emergency_responder_enabled:
logger.debug(f"System: Emergency Responder Enabled on channels {my_settings.emergency_responder_alert_channel} for interface {my_settings.emergency_responder_alert_interface}")
logger.debug("System: File Monitor Bee Monitor Enabled for 🐝bee.txt")
if my_settings.usAlerts:
logger.debug(f"System: Emergency Alert Broadcast Enabled on channel {my_settings.emergency_responder_alert_channel} for interface {my_settings.emergency_responder_alert_interface}")
if my_settings.enableDEalerts:
logger.debug(f"System: NINA Alerts Enabled with counties {my_settings.myRegionalKeysDE}")
if my_settings.volcanoAlertBroadcastEnabled:
logger.debug(f"System: Volcano Alert Broadcast Enabled on channels {my_settings.volcanoAlertBroadcastChannel}")
logger.debug(f"System: Volcano Alert Broadcast Enabled on channels {my_settings.emergency_responder_alert_channel} ignoreUSGSWords {my_settings.ignoreUSGSWords}")
if my_settings.ipawsAlertEnabled:
logger.debug(f"System: iPAWS Alerts Enabled with FIPS codes {my_settings.myStateFIPSList} ignorelist {my_settings.ignoreFEMAwords}")
if my_settings.enableDEalerts:
logger.debug(f"System: NINA Alerts Enabled with counties {my_settings.myRegionalKeysDE}")
if my_settings.wxAlertBroadcastEnabled:
logger.debug(f"System: Weather Alert Broadcast Enabled on channels {my_settings.emergency_responder_alert_channel} ignoreEASwords {my_settings.ignoreEASwords}")
if my_settings.emergency_responder_enabled:
logger.debug(f"System: Emergency Responder Enabled on channels {my_settings.emergency_responder_alert_channel}")
if my_settings.qrz_hello_enabled:
if my_settings.train_qrz:
@@ -1623,6 +1657,10 @@ def handle_boot(mesh=True):
if my_settings.useDMForResponse:
logger.debug("System: Respond by DM only")
if my_settings.autoBanEnabled:
logger.debug(f"System: Auto-Ban Enabled for {my_settings.autoBanThreshold} messages in {my_settings.autoBanTimeframe} seconds")
load_bbsBanList()
if my_settings.log_messages_to_file:
logger.debug("System: Logging Messages to disk")
if my_settings.syslog_to_file:
@@ -1755,9 +1793,14 @@ def onReceive(packet, interface):
message_from_id = packet['from']
# if message_from_id is not in the seenNodes list add it
if not any(node['nodeID'] == message_from_id for node in seenNodes):
seenNodes.append({'nodeID': message_from_id, 'rxInterface': rxNode, 'channel': channel_number, 'welcome': False, 'lastSeen': time.time()})
if not any(node.get('nodeID') == message_from_id for node in seenNodes):
seenNodes.append({'nodeID': message_from_id, 'rxInterface': rxNode, 'channel': channel_number, 'welcome': False, 'first_seen': time.time(), 'lastSeen': time.time()})
else:
# update lastSeen time
for node in seenNodes:
if node.get('nodeID') == message_from_id:
node['lastSeen'] = time.time()
break
# BBS DM MAIL CHECKER
if bbs_enabled and 'decoded' in packet:
msg = bbs_check_dm(message_from_id)
@@ -1766,7 +1809,12 @@ def onReceive(packet, interface):
message = "Mail: " + msg[1] + " From: " + get_name_from_number(msg[2], 'long', rxNode)
bbs_delete_dm(msg[0], msg[1])
send_message(message, channel_number, message_from_id, rxNode)
# CHECK with ban_hammer() if the node is banned
if str(message_from_id) in my_settings.bbs_ban_list or str(message_from_id) in my_settings.autoBanlist:
logger.warning(f"System: Banned Node {message_from_id} tried to send a message. Ignored. Consider adding it to the node firmware blocklist")
return
# handle TEXT_MESSAGE_APP
try:
if 'decoded' in packet and packet['decoded']['portnum'] == 'TEXT_MESSAGE_APP':
@@ -1846,7 +1894,7 @@ def onReceive(packet, interface):
logger.debug(f"System: Packet HopDebugger: hop_away:{hop_away} hop_limit:{hop_limit} hop_start:{hop_start} calculated_hop_count:{hop_count} final_hop_value:{hop} via_mqtt:{via_mqtt} transport_mechanism:{transport_mechanism} Hostname:{rxNodeHostName}")
# check with stringSafeCheck if the message is safe
if stringSafeCheck(message_string) is False:
if stringSafeCheck(message_string, message_from_id) is False:
logger.warning(f"System: Possibly Unsafe Message from {get_name_from_number(message_from_id, 'long', rxNode)}")
if help_message in message_string or welcome_message in message_string or "CMD?:" in message_string:
@@ -1902,7 +1950,13 @@ def onReceive(packet, interface):
else:
# respond with help message on DM
send_message(help_message, channel_number, message_from_id, rxNode)
# add message to tts queue
if meshagesTTS:
# add to the tts_read_queue
readMe = f"DM from {get_name_from_number(message_from_id, 'short', rxNode)}: {message_string}"
tts_read_queue.append(readMe)
# log the message to the message log
if log_messages_to_file:
msgLogger.info(f"Device:{rxNode} Channel:{channel_number} | {get_name_from_number(message_from_id, 'long', rxNode)} | DM | " + message_string.replace('\n', '-nl-'))
@@ -1999,13 +2053,19 @@ def onReceive(packet, interface):
msg = f"🎉 {get_name_from_number(message_from_id, 'long', rxNode)} found the Word of the Day🎊:\n {wordWas}, {metaWas}"
send_message(msg, channel_number, 0, rxNode)
if bingo_win:
msg = f"🎉 {get_name_from_number(message_from_id, 'long', rxNode)} scored BINGO!🥳 {bingo_message}"
msg = f"🎉 {get_name_from_number(message_from_id, 'long', rxNode)} scored word-search-BINGO!🥳 {bingo_message}"
send_message(msg, channel_number, 0, rxNode)
slotMachine = theWordOfTheDay.emojiMiniGame(message_string, emojiSeen=emojiSeen, nodeID=message_from_id, nodeInt=rxNode)
if slotMachine:
msg = f"🎉 {get_name_from_number(message_from_id, 'long', rxNode)} played the Slot Machine and got: {slotMachine} 🥳"
msg = f"🎉 {get_name_from_number(message_from_id, 'long', rxNode)} played the emote-Fruit-Machine and got: {slotMachine} 🥳"
send_message(msg, channel_number, 0, rxNode)
# add message to tts queue
if my_settings.meshagesTTS and channel_number == my_settings.ttsChannels:
# add to the tts_read_queue
readMe = f"DM from {get_name_from_number(message_from_id, 'short', rxNode)}: {message_string}"
tts_read_queue.append(readMe)
else:
# Evaluate non TEXT_MESSAGE_APP packets
consumeMetadata(packet, rxNode, channel_number)
@@ -2057,7 +2117,11 @@ async def main():
tasks.append(asyncio.create_task(handleSignalWatcher(), name="hamlib"))
if my_settings.voxDetectionEnabled:
from modules.radio import voxMonitor
tasks.append(asyncio.create_task(voxMonitor(), name="vox_detection"))
if my_settings.meshagesTTS:
tasks.append(asyncio.create_task(handleTTS(), name="tts_handler"))
if my_settings.wsjtx_detection_enabled:
tasks.append(asyncio.create_task(handleWsjtxWatcher(), name="wsjtx_monitor"))
@@ -9,10 +9,7 @@ This document provides an overview of all modules available in the Mesh-Bot proj
- [Networking](#networking)
- [Games](#games)
- [BBS (Bulletin Board System)](#bbs-bulletin-board-system)
- [Checklist](#checklist)
- [Inventory & Point of Sale](#inventory--point-of-sale)
- [Location & Weather](#location--weather)
- [Map Command](#map-command)
- [EAS & Emergency Alerts](#eas--emergency-alerts)
- [File Monitoring & News](#file-monitoring--news)
- [Radio Monitoring](#radio-monitoring)
@@ -20,8 +17,10 @@ This document provides an overview of all modules available in the Mesh-Bot proj
- [Ollama LLM/AI](#ollama-llmai)
- [Wikipedia Search](#wikipedia-search)
- [DX Spotter Module](#dx-spotter-module)
- [Mesh Bot Scheduler User Guide](#mesh-bot-scheduler-user-guide)
- [Other Utilities](#other-utilities)
- [Checklist](#checklist)
- [Inventory & Point of Sale](#inventory--point-of-sale)
- [Echo Command](#echo-command)
- [Messaging Settings](#messaging-settings)
- [Troubleshooting](#troubleshooting)
- [Configuration Guide](#configuration-guide)
@@ -139,8 +138,8 @@ The checklist module provides asset tracking and accountability features with sa
| `checkin` | Check in a node/asset |
| `checkout` | Check out a node/asset |
| `checklist` | Show active check-ins |
| `purgein` | Delete your check-in record |
| `purgeout` | Delete your check-out record |
| `approvecl` | Admin Approve id |
| `denycl` | Admin Remove id |
#### Advanced Features
@@ -150,8 +149,8 @@ The checklist module provides asset tracking and accountability features with sa
- Ideal for solo activities, remote work, or safety accountability
- **Approval Workflow**
- `checklistapprove <id>` - Approve a pending check-in (admin)
- `checklistdeny <id>` - Deny/remove a check-in (admin)
- `approvecl <id>` - Approve a pending check-in (admin)
- `denycl <id>` - Deny/remove a check-in (admin)
more at [modules/checklist.md](modules/checklist.md)
@@ -287,7 +286,7 @@ The system uses SQLite with four tables:
| `wxa` | NOAA alerts |
| `wxalert` | NOAA alerts (expanded) |
| `mwx` | NOAA Coastal Marine Forecast |
| `tide` | Tide info (NOAA/tidepredict for global) |
| `tide` | NOAA tide info |
| `riverflow` | NOAA river flow info |
| `earthquake` | USGS earthquake info |
| `valert` | USGS volcano alerts |
@@ -296,11 +295,10 @@ The system uses SQLite with four tables:
| `howfar` | Distance traveled since last check |
| `howtall` | Calculate height using sun angle |
| `whereami` | Show current location |
| `map` | Location data/map.csv |
Configure in `[location]` section of `config.ini`.
**Note**: For global tide predictions outside the US, enable `useTidePredict = True` in `config.ini`. See [xtide.md](xtide.md) for setup details.
---
@@ -343,7 +341,6 @@ The `map` command allows you to log your current GPS location with a custom desc
|--------------|-----------------------------------------------|
| `ea`/`ealert`| FEMA iPAWS/EAS alerts (USA/DE) |
Enable in `[eas]` section of `config.ini`.
---
@@ -365,10 +362,6 @@ The Radio Monitoring module provides several ways to integrate amateur radio sof
### Hamlib Integration
| Command | Description |
|--------------|-----------------------------------------------|
| `radio` | Monitor radio SNR via Hamlib |
Monitors signal strength (S-meter) from a connected radio via Hamlib's `rigctld` daemon. When the signal exceeds a configured threshold, it broadcasts an alert to the mesh network with frequency and signal strength information.
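The rigctld query itself is a simple newline-terminated text exchange. A minimal sketch (helper names are illustrative, not the bot's actual implementation; `l STRENGTH` and the default TCP port 4532 are from Hamlib's documented rigctld protocol):

```python
import socket

def query_rigctld(command, host="localhost", port=4532):
    # rigctld speaks a newline-terminated text protocol (default TCP 4532)
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(command.encode() + b"\n")
        return s.recv(64).decode()

def parse_strength(reply):
    # 'l STRENGTH' replies with a dB value relative to S9 (e.g. '-10\n')
    return int(reply.strip())

def over_threshold(reply, threshold_db=-10):
    # True when the S-meter reading meets or exceeds the alert threshold
    return parse_strength(reply) >= threshold_db
```

When `over_threshold()` is true, the bot would broadcast its alert to the configured channels.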
### WSJT-X Integration
@@ -459,9 +452,6 @@ Enable and configure VOX features in the `[vox]` section of `config.ini`.
| Command | Description |
|--------------|-----------------------------------------------|
| `askai` | Ask Ollama LLM AI |
| `ask:` | Ask Ollama LLM AI (raw) |
Configure in `[ollama]` section of `config.ini`.
More at [LLM Readme](llm.md)
@@ -473,11 +463,66 @@ More at [LLM Readme](llm.md)
|--------------|-----------------------------------------------|
| `wiki` | Search Wikipedia or local Kiwix server |
Configure in `[wikipedia]` section of `config.ini`.
Configure in `[general]` section of `config.ini`.
---
## News & Headlines (`latest` Command)
The `latest` command allows you to fetch current news headlines or articles on any topic using the NewsAPI integration. This is useful for quickly checking the latest developments on a subject, even from the mesh.
### Usage
- **Get the latest headlines on a topic:**
```
latest <topic>
```
Example:
```
latest meshtastic
```
This will return the most recent news articles about "meshtastic".
- **General latest news:**
```
latest
```
Returns the latest general news headlines.
### How It Works
- The bot queries NewsAPI.org for the most recent articles matching your topic.
- Each result includes the article title and a short description.
You need to register for a developer API key and read the terms of use.
```ini
# enable or disable the headline command which uses NewsAPI.org
enableNewsAPI = True
newsAPI_KEY = key at https://newsapi.org/register
newsAPIregion = us
```
### Example Output
```
🗞️:📰Meshtastic project launches new firmware
The open-source mesh radio project Meshtastic has released a major firmware update...
📰How Meshtastic is changing off-grid communication
A look at how Meshtastic devices are being used for emergency response...
📰Meshtastic featured at DEF CON 2025
The Meshtastic team presented new features at DEF CON, drawing large crowds...
```
### Notes
- You can search for any topic, e.g., `latest wildfire`, `latest ham radio`, etc.
- The number of results can be adjusted in the configuration.
- Requires internet access for the bot to fetch news.
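A rough sketch of how the query URL and the 🗞️/📰 output shown above could be produced (helper names are illustrative; the `/v2/everything` endpoint and `q`/`pageSize`/`apiKey` parameters come from NewsAPI's public REST API):

```python
import urllib.parse

def build_news_url(topic, api_key, page_size=3):
    # Build a NewsAPI /v2/everything query for the given topic
    params = urllib.parse.urlencode(
        {"q": topic, "pageSize": page_size, "apiKey": api_key})
    return f"https://newsapi.org/v2/everything?{params}"

def format_headlines(articles):
    # Render parsed NewsAPI articles in the bot's 🗞️/📰 style shown above
    out = ""
    for i, article in enumerate(articles):
        prefix = "🗞️:" if i == 0 else ""
        out += f"{prefix}📰{article.get('title', '')}\n"
        if article.get("description"):
            out += article["description"] + "\n"
    return out.rstrip()
```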
___
## DX Spotter Module
The DX Spotter module allows you to fetch and display recent DX cluster spots from [spothole.app](https://spothole.app) directly in your mesh-bot.
@@ -690,6 +735,73 @@ You can use any of these options to schedule messages on specific days:
- `history` — Command history
- `cmd`/`cmd?` — Show help message (the bot avoids using the word "help")
| Command | Description | ✅ Works Off-Grid |
|--------------|-------------|------------------|
| `echo` | Echo string back. Admins can use `echo <message> c=<channel> d=<device>` to send to any channel/device. | ✅ |
---
### Echo Command
The `echo` command returns your message back to you.
**Admins** can use an extended syntax to send a message to any channel and device.
#### Usage
- **Basic Echo (all users):**
```
echo Hello World
```
Response:
```
Hello World
```
- **Admin Extended Syntax:**
```
echo <message> c=<channel> d=<device>
```
Example:
```
echo Hello world c=1 d=2
```
This will send "Hello world" to channel 1, device 2.
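Parsing the admin overrides can be sketched as follows (function name and return shape are illustrative, not the bot's actual implementation):

```python
import re

def parse_echo_args(text):
    # Extract optional c=<channel> and d=<device> admin overrides;
    # returns (message, channel, device) with None when not specified.
    channel = device = None
    m = re.search(r"\bc=(\d+)\b", text)
    if m:
        channel = int(m.group(1))
        text = text.replace(m.group(0), "")
    m = re.search(r"\bd=(\d+)\b", text)
    if m:
        device = int(m.group(1))
        text = text.replace(m.group(0), "")
    return text.strip(), channel, device
```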
#### Special Keyword Substitution
- In admin echo, any standalone `motd` (case-insensitive) is replaced with the current Message of the Day.
- Any `welcome!` (case-insensitive) is replaced with the current Welcome Message from your configuration.
- Example:
```
echo Today's message is motd c=1 d=2
```
If the MOTD is "Potatoes Are Cool!", the message sent will be:
```
Today's message is Potatoes Are Cool!
```
#### Notes
- Only admins can use the `c=<channel>` and `d=<device>` override.
- If you omit `c=<channel>` and `d=<device>`, the message is echoed back to your current channel/device.
- MOTD substitution works for any standalone `motd` or `MOTD` in the message.
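The substitution logic reduces to a word-boundary, case-insensitive replace. A minimal sketch (function and parameter names are illustrative):

```python
import re

def substitute_keywords(message, motd, welcome_message):
    # Replace any standalone 'motd' (case-insensitive) with the MOTD text;
    # word boundaries keep substrings inside other words untouched.
    message = re.sub(r"\bmotd\b", lambda _: motd, message, flags=re.IGNORECASE)
    # Replace 'welcome!' (case-insensitive) with the configured welcome message.
    message = re.sub(r"\bwelcome!", lambda _: welcome_message, message,
                     flags=re.IGNORECASE)
    return message
```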
#### Help
- Send `echo?` for usage instructions.
- Admins will see this help message:
```
Admin usage: echo <message> c=<channel> d=<device>
Example: echo Hello world c=1 d=2
```
---
## Configuration
@@ -974,7 +1086,6 @@ This uses USA: SAME, FIPS, to locate the alerts in the feed. By default ignoring
```ini
eAlertBroadcastEnabled = False # Government IPAWS/CAP Alert Broadcast
eAlertBroadcastCh = 2,3 # Government Emergency IPAWS/CAP Alert Broadcast Channels
ignoreFEMAenable = True # Ignore any headline that includes the following word list
ignoreFEMAwords = test,exercise
# comma separated list of FIPS codes to trigger local alert. find your FIPS codes at https://en.wikipedia.org/wiki/Federal_Information_Processing_Standard_state_code
@@ -1142,6 +1253,4 @@ enabled = True # QRZ Hello to new nodes
qrz_hello_string = "send CMD or DM me for more info." # will be sent to all heard nodes once
training = True # Training mode will not send the hello message to new nodes, use this to build up database
```
Happy meshing!
@@ -26,7 +26,6 @@ The enhanced checklist module provides asset tracking and accountability feature
### 📍 Location Tracking
- Automatic GPS location capture when checking in/out
- View last known location in checklist
- Track movement over time
- **Time Window Monitoring**: Check-in with safety intervals (e.g., `checkin 60 Hunting in tree stand`)
- Tracks if users don't check in within expected timeframe
@@ -34,20 +33,65 @@ The enhanced checklist module provides asset tracking and accountability feature
- Provides `get_overdue_checkins()` function for alert integration
- **Approval Workflow**:
- `checklistapprove <id>` - Approve pending check-ins (admin)
- `checklistdeny <id>` - Deny/remove check-ins (admin)
- `approvecl <id>` - Approve pending check-ins (admin)
- `denycl <id>` - Deny/remove check-ins (admin)
- Support for approval-based workflows
- **Enhanced Database Schema**:
- Added `approved` field for approval workflows
- Added `expected_checkin_interval` field for safety monitoring
- Automatic migration for existing databases
#### New Commands:
- `checklistapprove <id>` - Approve a check-in
- `checklistdeny <id>` - Deny a check-in
- `approvecl <id>` - Approve a check-in
- `denycl <id>` - Deny a check-in
- Enhanced `checkin [interval] [note]` - Now supports interval parameter
### Enhanced Check Out Options
You can now check out in three ways:
#### 1. Check Out the Most Recent Active Check-in
```
checkout [notes]
```
Checks out your most recent active check-in.
*Example:*
```
checkout Heading back to camp
```
#### 2. Check Out All Active Check-ins
```
checkout all [notes]
```
Checks out **all** of your active check-ins at once.
*Example:*
```
checkout all Done for the day
```
*Response:*
```
Checked out 2 check-ins for Hunter1. Durations: 01:23:45, 00:15:30
```
#### 3. Check Out a Specific Check-in by ID
```
checkout <checkin_id> [notes]
```
Checks out a specific check-in using its ID (as shown in the `checklist` command).
*Example:*
```
checkout 123 Leaving early
```
*Response:*
```
Checked out check-in ID 123 for Hunter1. Duration: 00:45:12
```
**Tip:**
- Use `checklist` to see your current check-in IDs and durations.
- You can always add a note to any checkout command for context.
---
These options allow you to manage your check-ins more flexibly, whether you want to check out everything at once or just a specific session.
## Configuration
Add to your `config.ini`:
@@ -106,38 +150,31 @@ ID: Hunter1 checked-In for 01:23:45📝Solo hunting
ID: Tech2 checked-In for 00:15:30📝Equipment repair
```
#### Purge Records
```
purgein # Delete your check-in record
purgeout # Delete your check-out record
```
Use these to manually remove your records if needed.
### Admin Commands
#### Approve Check-in
```
checklistapprove <checkin_id>
approvecl <checkin_id>
```
Approve a pending check-in (requires admin privileges).
**Example:**
```
checklistapprove 123
approvecl 123
```
#### Deny Check-in
```
checklistdeny <checkin_id>
denycl <checkin_id>
```
Deny and remove a check-in (requires admin privileges).
**Example:**
```
checklistdeny 456
denycl 456
```
## Safety Monitoring Feature
@@ -153,7 +190,7 @@ checkin 60 Hunting in remote area
This tells the system:
- You're checking in now
- You expect to check in again or check out within 60 minutes
- If 60 minutes pass without activity, you'll be marked as overdue
- If 60 minutes pass without activity, you'll be marked overdue and an alert will be raised
### Use Cases for Time Intervals
@@ -174,14 +211,17 @@ This tells the system:
4. **Check-in Points**: Regular status updates during long operations
```
checkin 15 Descending cliff face
checkin 15 Descending cliff
```
5. **Check-in Reminders**: Set a reminder to check on something, like a pot roast
```
checkin 30 🍠🍖
```
### Overdue Check-ins
The system tracks all check-ins with time intervals and can identify who is overdue. The module provides the `get_overdue_checkins()` function that returns a list of overdue users.
**Note**: Automatic alerts for overdue check-ins require integration with the bot's scheduler or alert system. The checklist module provides the detection capability, but sending notifications must be configured separately through the main bot's alert features.
The system tracks all check-ins with time intervals and can identify who is overdue. The module provides the `get_overdue_checkins()` function that returns a list of overdue users. Alerts are sent by the bot's 20-minute watchdog.
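The overdue logic reduces to an elapsed-time test plus a minutes-to-"Xh Ym" formatter. A standalone sketch (helper names are illustrative; field names follow the `get_overdue_checkins()` entries used in this module):

```python
import time

def is_overdue(checkin_epoch, interval_minutes, now=None):
    # A check-in is overdue once more than interval_minutes have elapsed
    # with no checkout or fresh check-in; interval 0 disables monitoring.
    if interval_minutes <= 0:
        return False
    now = time.time() if now is None else now
    return (now - checkin_epoch) > interval_minutes * 60

def overdue_line(entry):
    # Format one alert line from an overdue entry
    # (dict with 'name', 'overdue_minutes', optional 'checkin_notes').
    hours, minutes = divmod(entry["overdue_minutes"], 60)
    line = (f"{entry['name']}: {hours}h {minutes}m overdue" if hours
            else f"{entry['name']}: {minutes}m overdue")
    if entry.get("checkin_notes"):
        line += f" 📝{entry['checkin_notes']}"
    return line
```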
## Practical Examples
@@ -258,15 +298,12 @@ checkin 45 Site survey tower location 2
The checklist system automatically captures GPS coordinates when available. This can be used for:
- Tracking last known position
- Geo-fencing applications
- Emergency response coordination
- Asset location management
### Alert Systems
The overdue check-in feature can trigger:
- Notifications to supervisors
- Emergency alerts
- Automated messages to response teams
- Email/SMS notifications (if configured)
@@ -274,9 +311,7 @@ The overdue check-in feature can trigger:
Combine with the scheduler module to:
- Send reminders to check in
- Automatically generate reports
- Schedule periodic check-in requirements
- Send daily summaries
## Best Practices
@@ -306,6 +341,17 @@ Combine with the scheduler module to:
checklist
```
The list will show ✅ for approved and ☑️ for unapproved check-ins.
The alarm will only alert on approved check-ins.
in config.ini
```ini
# Auto approve new checklists
auto_approve = True
# Check-in reminder interval is 5min
# Checkin broadcast interface and channel is emergency_handler interface and channel
```
2. **Respond to Overdue Situations**: Act on overdue check-ins promptly
3. **Set Clear Policies**: Establish when and how to use the system
@@ -3,69 +3,50 @@
import sqlite3
from modules.log import logger
from modules.settings import checklist_db, reverse_in_out, bbs_ban_list
from modules.settings import checklist_db, reverse_in_out, bbs_ban_list, bbs_admin_list, checklist_auto_approve
import time
trap_list_checklist = ("checkin", "checkout", "checklist", "purgein", "purgeout",
"checklistapprove", "checklistdeny", "checklistadd", "checklistremove")
trap_list_checklist = ("checkin", "checkout", "checklist", "approvecl", "denycl",)
def initialize_checklist_database():
try:
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
# Check if the checkin table exists, and create it if it doesn't
logger.debug("System: Checklist: Initializing database...")
c.execute('''CREATE TABLE IF NOT EXISTS checkin
(checkin_id INTEGER PRIMARY KEY, checkin_name TEXT, checkin_date TEXT,
checkin_time TEXT, location TEXT, checkin_notes TEXT,
approved INTEGER DEFAULT 1, expected_checkin_interval INTEGER DEFAULT 0)''')
# Check if the checkout table exists, and create it if it doesn't
approved INTEGER DEFAULT 1, expected_checkin_interval INTEGER DEFAULT 0,
removed INTEGER DEFAULT 0)''')
c.execute('''CREATE TABLE IF NOT EXISTS checkout
(checkout_id INTEGER PRIMARY KEY, checkout_name TEXT, checkout_date TEXT,
checkout_time TEXT, location TEXT, checkout_notes TEXT)''')
# Add new columns if they don't exist (for migration)
try:
c.execute("ALTER TABLE checkin ADD COLUMN approved INTEGER DEFAULT 1")
except sqlite3.OperationalError:
pass # Column already exists
try:
c.execute("ALTER TABLE checkin ADD COLUMN expected_checkin_interval INTEGER DEFAULT 0")
except sqlite3.OperationalError:
pass # Column already exists
try:
c.execute("ALTER TABLE checkin ADD COLUMN removed INTEGER DEFAULT 0")
except sqlite3.OperationalError:
pass # Column already exists
# Add this to your DB init (if not already present)
try:
c.execute("ALTER TABLE checkout ADD COLUMN removed INTEGER DEFAULT 0")
except sqlite3.OperationalError:
pass # Column already exists
checkout_time TEXT, location TEXT, checkout_notes TEXT,
checkin_id INTEGER, removed INTEGER DEFAULT 0)''')
conn.commit()
conn.close()
return True
except Exception as e:
logger.error(f"Checklist: Failed to initialize database: {e}")
logger.error(f"Checklist: Failed to initialize database: {e}. Please delete the old checklist database file: rm data/checklist.db")
return False
def checkin(name, date, time, location, notes):
location = ", ".join(map(str, location))
# checkin a user
# Auto-approve if setting is enabled
approved_value = 1 if checklist_auto_approve else 0
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
try:
c.execute("INSERT INTO checkin (checkin_name, checkin_date, checkin_time, location, checkin_notes) VALUES (?, ?, ?, ?, ?)", (name, date, time, location, notes))
# # remove any checkouts that are older than the checkin
# c.execute("DELETE FROM checkout WHERE checkout_date < ? OR (checkout_date = ? AND checkout_time < ?)", (date, date, time))
c.execute(
"INSERT INTO checkin (checkin_name, checkin_date, checkin_time, location, checkin_notes, removed, approved) VALUES (?, ?, ?, ?, ?, 0, ?)",
(name, date, time, location, notes, approved_value)
)
except sqlite3.OperationalError as e:
if "no such table" in str(e):
initialize_checklist_database()
c.execute("INSERT INTO checkin (checkin_name, checkin_date, checkin_time, location, checkin_notes) VALUES (?, ?, ?, ?, ?)", (name, date, time, location, notes))
c.execute(
"INSERT INTO checkin (checkin_name, checkin_date, checkin_time, location, checkin_notes, removed, approved) VALUES (?, ?, ?, ?, ?, 0, ?)",
(name, date, time, location, notes, approved_value)
)
else:
raise
conn.commit()
@@ -75,71 +56,90 @@ def checkin(name, date, time, location, notes):
else:
return "Checked✅In: " + str(name)
def delete_checkin(checkin_id):
# delete a checkin
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
c.execute("DELETE FROM checkin WHERE checkin_id = ?", (checkin_id,))
conn.commit()
conn.close()
return "Checkin deleted." + str(checkin_id)
def checkout(name, date, time_str, location, notes):
def checkout(name, date, time_str, location, notes, all=False, checkin_id=None):
location = ", ".join(map(str, location))
checkin_record = None # Ensure variable is always defined
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
checked_out_ids = []
durations = []
try:
# Check if the user has a checkin before checking out
c.execute("""
SELECT checkin_id FROM checkin
WHERE checkin_name = ?
AND NOT EXISTS (
SELECT 1 FROM checkout
WHERE checkout_name = checkin_name
AND (checkout_date > checkin_date OR (checkout_date = checkin_date AND checkout_time > checkin_time))
)
ORDER BY checkin_date DESC, checkin_time DESC
LIMIT 1
""", (name,))
checkin_record = c.fetchone()
if checkin_record:
c.execute("INSERT INTO checkout (checkout_name, checkout_date, checkout_time, location, checkout_notes) VALUES (?, ?, ?, ?, ?)", (name, date, time_str, location, notes))
# calculate length of time checked in
c.execute("SELECT checkin_time, checkin_date FROM checkin WHERE checkin_id = ?", (checkin_record[0],))
checkin_time, checkin_date = c.fetchone()
checkin_datetime = time.strptime(checkin_date + " " + checkin_time, "%Y-%m-%d %H:%M:%S")
time_checked_in_seconds = time.time() - time.mktime(checkin_datetime)
timeCheckedIn = time.strftime("%H:%M:%S", time.gmtime(time_checked_in_seconds))
# # remove the checkin record older than the checkout
# c.execute("DELETE FROM checkin WHERE checkin_date < ? OR (checkin_date = ? AND checkin_time < ?)", (date, date, time_str))
if checkin_id is not None:
# Check out a specific check-in by ID
c.execute("""
SELECT checkin_id, checkin_time, checkin_date FROM checkin
WHERE checkin_id = ? AND checkin_name = ?
""", (checkin_id, name))
row = c.fetchone()
if row:
c.execute("INSERT INTO checkout (checkout_name, checkout_date, checkout_time, location, checkout_notes, checkin_id) VALUES (?, ?, ?, ?, ?, ?)",
(name, date, time_str, location, notes, row[0]))
checkin_time, checkin_date = row[1], row[2]
checkin_datetime = time.strptime(checkin_date + " " + checkin_time, "%Y-%m-%d %H:%M:%S")
time_checked_in_seconds = time.time() - time.mktime(checkin_datetime)
durations.append(time.strftime("%H:%M:%S", time.gmtime(time_checked_in_seconds)))
checked_out_ids.append(row[0])
elif all:
# Check out all active check-ins for this user
c.execute("""
SELECT checkin_id, checkin_time, checkin_date FROM checkin
WHERE checkin_name = ?
AND removed = 0
AND checkin_id NOT IN (
SELECT checkin_id FROM checkout WHERE checkin_id IS NOT NULL
)
""", (name,))
rows = c.fetchall()
for row in rows:
c.execute("INSERT INTO checkout (checkout_name, checkout_date, checkout_time, location, checkout_notes, checkin_id) VALUES (?, ?, ?, ?, ?, ?)",
(name, date, time_str, location, notes, row[0]))
checkin_time, checkin_date = row[1], row[2]
checkin_datetime = time.strptime(checkin_date + " " + checkin_time, "%Y-%m-%d %H:%M:%S")
time_checked_in_seconds = time.time() - time.mktime(checkin_datetime)
durations.append(time.strftime("%H:%M:%S", time.gmtime(time_checked_in_seconds)))
checked_out_ids.append(row[0])
else:
# Default: check out the most recent active check-in
c.execute("""
SELECT checkin_id, checkin_time, checkin_date FROM checkin
WHERE checkin_name = ?
AND removed = 0
AND checkin_id NOT IN (
SELECT checkin_id FROM checkout WHERE checkin_id IS NOT NULL
)
ORDER BY checkin_date DESC, checkin_time DESC
LIMIT 1
""", (name,))
row = c.fetchone()
if row:
c.execute("INSERT INTO checkout (checkout_name, checkout_date, checkout_time, location, checkout_notes, checkin_id) VALUES (?, ?, ?, ?, ?, ?)",
(name, date, time_str, location, notes, row[0]))
checkin_time, checkin_date = row[1], row[2]
checkin_datetime = time.strptime(checkin_date + " " + checkin_time, "%Y-%m-%d %H:%M:%S")
time_checked_in_seconds = time.time() - time.mktime(checkin_datetime)
durations.append(time.strftime("%H:%M:%S", time.gmtime(time_checked_in_seconds)))
checked_out_ids.append(row[0])
except sqlite3.OperationalError as e:
if "no such table" in str(e):
conn.close()
initialize_checklist_database()
# Try again after initializing
return checkout(name, date, time_str, location, notes)
return checkout(name, date, time_str, location, notes, all=all, checkin_id=checkin_id)
else:
conn.close()
raise
conn.commit()
conn.close()
if checkin_record:
if reverse_in_out:
return "Checked⌛️In: " + str(name) + " duration " + timeCheckedIn
if checked_out_ids:
if all:
return f"Checked out {len(checked_out_ids)} check-ins for {name}. Durations: {', '.join(durations)}"
elif checkin_id is not None:
return f"Checked out check-in ID {checkin_id} for {name}. Duration: {durations[0]}"
else:
return "Checked⌛️Out: " + str(name) + " duration " + timeCheckedIn
if reverse_in_out:
return f"Checked⌛️In: {name} duration {durations[0]}"
else:
return f"Checked⌛️Out: {name} duration {durations[0]}"
else:
return "None found for " + str(name)
def delete_checkout(checkout_id):
# delete a checkout
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
c.execute("DELETE FROM checkout WHERE checkout_id = ?", (checkout_id,))
conn.commit()
conn.close()
return "Checkout deleted." + str(checkout_id)
return f"None found for {name}"
def approve_checkin(checkin_id):
"""Approve a pending check-in"""
@@ -254,25 +254,27 @@ def get_overdue_checkins():
return []
def format_overdue_alert():
header = "⚠️ OVERDUE CHECK-INS:\a\n"
alert = ""
try:
"""Format overdue check-ins as an alert message"""
overdue = get_overdue_checkins()
logger.debug(f"Overdue check-ins: {overdue}")
if not overdue:
return None
alert = "⚠️ OVERDUE CHECK-INS:\n"
for entry in overdue:
hours = entry['overdue_minutes'] // 60
minutes = entry['overdue_minutes'] % 60
alert += f"{entry['name']}: {hours}h {minutes}m overdue"
if hours > 0:
alert += f"{entry['name']}: {hours}h {minutes}m overdue"
else:
alert += f"{entry['name']}: {minutes}m overdue"
# if entry['location']:
# alert += f" @ {entry['location']}"
if entry['checkin_notes']:
alert += f" 📝{entry['checkin_notes']}"
alert += "\n"
return alert.rstrip()
if alert:
return header + alert.rstrip()
except Exception as e:
logger.error(f"Checklist: Error formatting overdue alert: {e}")
return None
@@ -285,9 +287,9 @@ def list_checkin():
c.execute("""
SELECT * FROM checkin
WHERE removed = 0
AND checkin_id NOT IN (
SELECT checkin_id FROM checkout
WHERE checkout_date > checkin_date OR (checkout_date = checkin_date AND checkout_time > checkin_time)
AND NOT EXISTS (
SELECT 1 FROM checkout
WHERE checkout.checkin_id = checkin.checkin_id
)
""")
rows = c.fetchall()
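The `NOT EXISTS` rewrite in this hunk can be exercised in isolation. A minimal sketch with a throwaway in-memory schema (table and column names simplified from the original checklist database): only check-ins with no matching checkout row are reported as still open.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE checkin (checkin_id INTEGER PRIMARY KEY, name TEXT, removed INTEGER DEFAULT 0)")
c.execute("CREATE TABLE checkout (checkout_id INTEGER PRIMARY KEY, checkin_id INTEGER)")
c.executemany("INSERT INTO checkin (checkin_id, name) VALUES (?, ?)",
              [(1, "alice"), (2, "bob"), (3, "carol")])
c.execute("INSERT INTO checkout (checkin_id) VALUES (2)")  # bob has checked out

# Correlated NOT EXISTS: a check-in is open if no checkout row references it
c.execute("""
    SELECT name FROM checkin
    WHERE removed = 0
      AND NOT EXISTS (
          SELECT 1 FROM checkout
          WHERE checkout.checkin_id = checkin.checkin_id
      )
""")
open_names = [row[0] for row in c.fetchall()]
conn.close()
print(open_names)  # alice and carol remain open
```

Unlike the old `NOT IN` subquery, the correlated form ties each checkout row to its own check-in, so date/time comparisons are not needed to pair the rows.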
@@ -298,12 +300,16 @@ def list_checkin():
return list_checkin()
else:
conn.close()
logger.error(f"Checklist: Error listing checkins: {e}")
initialize_checklist_database()
return "Error listing checkins."
conn.close()
timeCheckedIn = ""
# Get overdue info
overdue = {entry['id']: entry for entry in get_overdue_checkins()}
checkin_list = ""
for row in rows:
checkin_id = row[0]
# Calculate length of time checked in, including days
total_seconds = time.time() - time.mktime(time.strptime(row[2] + " " + row[3], "%Y-%m-%d %H:%M:%S"))
days = int(total_seconds // 86400)
@@ -314,9 +320,31 @@ def list_checkin():
timeCheckedIn = f"{days}d {hours:02}:{minutes:02}:{seconds:02}"
else:
timeCheckedIn = f"{hours:02}:{minutes:02}:{seconds:02}"
checkin_list += "ID: " + str(row[0]) + " " + row[1] + " checked-In for " + timeCheckedIn
# Add ⏰ if routine check-ins are required
routine = ""
if len(row) > 7 and row[7] and int(row[7]) > 0:
routine = f" ⏰({row[7]}m)"
# Indicate approval status
approved_marker = "" if row[6] == 1 else "☑️"
# Check if overdue
if checkin_id in overdue:
overdue_minutes = overdue[checkin_id]['overdue_minutes']
overdue_hours = overdue_minutes // 60
overdue_mins = overdue_minutes % 60
if overdue_hours > 0:
overdue_str = f"overdue by {overdue_hours}h {overdue_mins}m"
else:
overdue_str = f"overdue by {overdue_mins}m"
status = f"{row[1]} {overdue_str}{routine}"
else:
status = f"{row[1]} checked-In for {timeCheckedIn}{routine}"
checkin_list += f"ID: {checkin_id} {approved_marker} {status}"
if row[5] != "":
checkin_list += "📝" + row[5]
checkin_list += " 📝" + row[5]
if row != rows[-1]:
checkin_list += "\n"
# if empty list
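The day/hour/minute split used for `timeCheckedIn` above can be factored into a small helper (`format_duration` is a hypothetical name, not in the original):

```python
def format_duration(total_seconds: float) -> str:
    """Format elapsed seconds as 'Xd HH:MM:SS', dropping the day part when zero."""
    total_seconds = int(total_seconds)
    days, rem = divmod(total_seconds, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    if days > 0:
        return f"{days}d {hours:02}:{minutes:02}:{seconds:02}"
    return f"{hours:02}:{minutes:02}:{seconds:02}"

print(format_duration(3661))   # 01:01:01
print(format_duration(90061))  # 1d 01:01:01
```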
@@ -331,6 +359,9 @@ def process_checklist_command(nodeID, message, name="none", location="none"):
if str(nodeID) in bbs_ban_list:
logger.warning("System: Checklist attempt from the ban list")
return "unable to process command"
is_admin = False
if str(nodeID) in bbs_admin_list:
is_admin = True
message_lower = message.lower()
parts = message.split()
@@ -359,22 +390,44 @@ def process_checklist_command(nodeID, message, name="none", location="none"):
return result
elif ("checkout" in message_lower and not reverse_in_out) or ("checkin" in message_lower and reverse_in_out):
return checkout(name, current_date, current_time, location, comment)
# Support: checkout all, checkout <id>, or checkout [note]
all_flag = False
checkin_id = None
actual_comment = comment
elif "purgein" in message_lower:
return mark_checkin_removed_by_name(name)
# Split the command into parts after the keyword
checkout_args = parts[1:] if len(parts) > 1 else []
elif "purgeout" in message_lower:
return mark_checkout_removed_by_name(name)
if checkout_args:
if checkout_args[0].lower() == "all":
all_flag = True
actual_comment = " ".join(checkout_args[1:]) if len(checkout_args) > 1 else ""
elif checkout_args[0].isdigit():
checkin_id = int(checkout_args[0])
actual_comment = " ".join(checkout_args[1:]) if len(checkout_args) > 1 else ""
else:
actual_comment = " ".join(checkout_args)
elif message_lower.startswith("checklistapprove "):
return checkout(name, current_date, current_time, location, actual_comment, all=all_flag, checkin_id=checkin_id)
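The `checkout [all|<id>] [note]` argument handling above reduces to a small pure function. A sketch (`parse_checkout_args` is a hypothetical helper, not in the original) returning `(all_flag, checkin_id, note)`:

```python
def parse_checkout_args(args):
    """Parse trailing args of a checkout command: 'all', a numeric ID, or a free-text note."""
    if not args:
        return False, None, ""
    head, rest = args[0], " ".join(args[1:])
    if head.lower() == "all":
        return True, None, rest        # check out every open check-in
    if head.isdigit():
        return False, int(head), rest  # check out one specific check-in
    return False, None, " ".join(args)  # whole remainder is the note

print(parse_checkout_args(["all", "heading", "home"]))  # (True, None, 'heading home')
print(parse_checkout_args(["3", "done"]))               # (False, 3, 'done')
print(parse_checkout_args(["back", "at", "camp"]))      # (False, None, 'back at camp')
```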
# elif "purgein" in message_lower:
# return mark_checkin_removed_by_name(name)
# elif "purgeout" in message_lower:
# return mark_checkout_removed_by_name(name)
elif "approvecl " in message_lower:
if not is_admin:
return "You do not have permission to approve check-ins."
try:
checkin_id = int(parts[1])
return approve_checkin(checkin_id)
except (ValueError, IndexError):
return "Usage: approvecl <checkin_id>"
elif message_lower.startswith("checklistdeny "):
elif "denycl " in message_lower:
if not is_admin:
return "You do not have permission to deny check-ins."
try:
checkin_id = int(parts[1])
return deny_checkin(checkin_id)
@@ -385,21 +438,15 @@ def process_checklist_command(nodeID, message, name="none", location="none"):
if not reverse_in_out:
return ("Command: checklist followed by\n"
"checkin [interval] [note]\n"
"checkout [note]\n"
"purgein - delete your checkin\n"
"purgeout - delete your checkout\n"
"checklistapprove <id> - approve checkin\n"
"checklistdeny <id> - deny checkin\n"
"Example: checkin 60 Hunting in tree stand")
"checkout [all] [note]\n"
"Example: checkin 60 Leaving for a hike")
else:
return ("Command: checklist followed by\n"
"checkout [interval] [note]\n"
"checkout [all] [interval] [note]\n"
"checkin [note]\n"
"purgeout - delete your checkout\n"
"purgein - delete your checkin\n"
"Example: checkout 60 Leaving park")
"Example: checkout 60 Leaving for a hike")
elif "checklist" in message_lower:
elif message_lower.strip() == "checklist":
return list_checkin()
else:
+3 -4
@@ -2,7 +2,7 @@
# Fetches DX spots from Spothole API based on user commands
# 2025 K7MHI Kelly Keeton
import requests
import datetime
from datetime import datetime, timedelta
from modules.log import logger
from modules.settings import latitudeValue, longitudeValue
@@ -69,7 +69,6 @@ def get_spothole_spots(source=None, band=None, mode=None, date=None, dx_call=Non
url = "https://spothole.app/api/v1/spots"
params = {}
fetched_count = 0
# Add administrative filters if provided
qrt = False # Always fetch active spots
@@ -83,7 +82,7 @@ def get_spothole_spots(source=None, band=None, mode=None, date=None, dx_call=Non
params["needs_sig"] = str(needs_sig).lower()
params["needs_sig_ref"] = 'true'
# Only get spots from last 9 hours
received_since_dt = datetime.datetime.utcnow() - datetime.timedelta(hours=9)
received_since_dt = datetime.utcnow() - timedelta(hours=9)
received_since = int(received_since_dt.timestamp())
params["received_since"] = received_since
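The 9-hour cutoff above can also be written with a timezone-aware datetime, since `datetime.utcnow()` is deprecated as of Python 3.12. A minimal equivalent sketch:

```python
from datetime import datetime, timedelta, timezone

# Same 9-hour lookback as above, using an aware UTC datetime
received_since = int((datetime.now(timezone.utc) - timedelta(hours=9)).timestamp())

# Sanity check: the cutoff sits about 9 hours (32400 s) behind "now"
now = int(datetime.now(timezone.utc).timestamp())
print(now - received_since)  # ~32400
```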
@@ -170,7 +169,7 @@ def get_spothole_spots(source=None, band=None, mode=None, date=None, dx_call=Non
return spots
def handle_post_dxspot():
time = int(datetime.datetime.utcnow().timestamp())
time = int(datetime.utcnow().timestamp())
freq = 14200000 # 14 MHz
comment = "Test spot please ignore"
de_spot = "N0CALL"
+31 -1
@@ -718,4 +718,34 @@ This module implements a survey system for the Meshtastic mesh-bot.
---
**Written for Meshtastic mesh-bot by K7MHI Kelly Keeton 2025**
**Written for Meshtastic mesh-bot by K7MHI Kelly Keeton 2025**
___
Pay no attention to the..
'pygame - Community Edition' ('pygame-ce' for short) is a fork of the original 'pygame' library by former 'pygame' core contributors.
It offers many new features and optimizations, receives much better maintenance and runs under a better governance model, while being highly compatible with code written for upstream pygame (`import pygame` still works).
**Details**
- [Initial announcement on Reddit](<https://www.reddit.com/r/pygame/comments/1112q10/pygame_community_edition_announcement/>) (or https://discord.com/channels/772505616680878080/772506385304649738/1074593440148500540)
- [Why the forking happened](<https://www.reddit.com/r/pygame/comments/18xy7nf/what_was_the_disagreement_that_led_to_pygamece/>)
**Helpful Links**
- https://discord.com/channels/772505616680878080/772506385304649738
- [Our GitHub releases](<https://github.com/pygame-community/pygame-ce/releases>)
- [Our docs](https://pyga.me/docs/)
**Installation**
```sh
pip uninstall pygame # Uninstall pygame first since it would conflict with pygame-ce
pip install pygame-ce
```
-# Because 'pygame' installs to the same location as 'pygame-ce', it must first be uninstalled.
-# Note that the `import pygame` syntax has not changed with pygame-ce.
---
+45 -27
@@ -175,7 +175,6 @@ def getArtSciRepeaters(lat=0, lon=0):
return msg
def get_NOAAtide(lat=0, lon=0):
# get tide data from NOAA for lat/lon
station_id = ""
location = lat,lon
if float(lat) == 0 and float(lon) == 0:
@@ -463,7 +462,6 @@ def alertBrodcastNOAA():
# broadcast the alerts send to wxBrodcastCh
elif currentAlert[0] not in wxAlertCacheNOAA:
# Check if the current alert is not in the weather alert cache
logger.debug("Location:Broadcasting weather alerts")
wxAlertCacheNOAA = currentAlert[0]
return currentAlert
@@ -1006,39 +1004,42 @@ def distance(lat=0,lon=0,nodeID=0, reset=False):
return msg
def get_openskynetwork(lat=0, lon=0):
# get the latest aircraft data from OpenSky Network in the area
def get_openskynetwork(lat=0, lon=0, altitude=0, node_altitude=0, altitude_window=1000):
"""
Returns the aircraft dict from OpenSky Network closest in altitude (within altitude_window meters)
to the given node_altitude. If no aircraft is found, returns False.
"""
if lat == 0 and lon == 0:
return my_settings.NO_ALERTS
# setup a bounding box of 50km around the lat/lon
box_size = 0.45 # approx 50km
# return limits for aircraft search
search_limit = 3
return False
box_size = 0.45 # approx 50km
lamin = lat - box_size
lamax = lat + box_size
lomin = lon - box_size
lomax = lon + box_size
# fetch the aircraft data from OpenSky Network
opensky_url = f"https://opensky-network.org/api/states/all?lamin={lamin}&lomin={lomin}&lamax={lamax}&lomax={lomax}"
opensky_url = (
f"https://opensky-network.org/api/states/all?lamin={lamin}&lomin={lomin}"
f"&lamax={lamax}&lomax={lomax}"
)
try:
aircraft_data = requests.get(opensky_url, timeout=my_settings.urlTimeoutSeconds)
if not aircraft_data.ok:
logger.warning("Location:Error fetching aircraft data from OpenSky Network")
return my_settings.ERROR_FETCHING_DATA
return False
except (requests.exceptions.RequestException):
logger.warning("Location:Error fetching aircraft data from OpenSky Network")
return my_settings.ERROR_FETCHING_DATA
return False
aircraft_json = aircraft_data.json()
if 'states' not in aircraft_json or not aircraft_json['states']:
return my_settings.NO_ALERTS
return False
aircraft_list = aircraft_json['states']
aircraft_report = ""
logger.debug(f"Location: OpenSky Network: Found {len(aircraft_list)} possible aircraft in area")
closest = None
min_diff = float('inf')
for aircraft in aircraft_list:
if len(aircraft_report.split("\n")) >= search_limit:
break
# extract values from JSON
try:
callsign = aircraft[1].strip() if aircraft[1] else "N/A"
origin_country = aircraft[2]
@@ -1046,20 +1047,37 @@ def get_openskynetwork(lat=0, lon=0):
true_track = aircraft[10]
vertical_rate = aircraft[11]
sensors = aircraft[12]
baro_altitude = aircraft[7]
geo_altitude = aircraft[13]
squawk = aircraft[14] if len(aircraft) > 14 else "N/A"
except Exception as e:
logger.debug("Location:Error extracting aircraft data from OpenSky Network")
continue
# format the aircraft data
aircraft_report += f"{callsign} Alt:{int(geo_altitude) if geo_altitude else 'N/A'}m Vel:{int(velocity) if velocity else 'N/A'}m/s Heading:{int(true_track) if true_track else 'N/A'}°\n"
# remove last newline
if aircraft_report.endswith("\n"):
aircraft_report = aircraft_report[:-1]
aircraft_report = abbreviate_noaa(aircraft_report)
return aircraft_report if aircraft_report else my_settings.NO_ALERTS
# Prefer geo_altitude, fallback to baro_altitude
plane_alt = geo_altitude if geo_altitude is not None else baro_altitude
if plane_alt is None or node_altitude == 0:
continue
diff = abs(plane_alt - node_altitude)
if diff <= altitude_window and diff < min_diff:
min_diff = diff
closest = {
"callsign": callsign,
"origin_country": origin_country,
"velocity": velocity,
"true_track": true_track,
"vertical_rate": vertical_rate,
"sensors": sensors,
"altitude": baro_altitude,
"geo_altitude": geo_altitude,
"squawk": squawk,
}
if closest:
return closest
else:
return False
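The altitude-window selection above can be exercised on fabricated state vectors (the data below is invented; indices follow the OpenSky `/states/all` layout the code uses: callsign at [1], barometric altitude at [7], geometric altitude at [13]):

```python
def closest_in_altitude(states, node_altitude, altitude_window=1000):
    """Pick the state vector nearest node_altitude within altitude_window meters.
    Prefers geometric altitude ([13]); falls back to barometric ([7])."""
    closest, min_diff = None, float("inf")
    for s in states:
        plane_alt = s[13] if s[13] is not None else s[7]
        if plane_alt is None or node_altitude == 0:
            continue
        diff = abs(plane_alt - node_altitude)
        if diff <= altitude_window and diff < min_diff:
            min_diff, closest = diff, s
    return closest

# Fabricated vectors: only callsign [1], baro alt [7], geo alt [13] matter here
a = [None, "UAL123"] + [None] * 5 + [9000] + [None] * 5 + [9100]
b = [None, "DAL456"] + [None] * 5 + [1200] + [None] * 5 + [None]  # baro fallback
picked = closest_in_altitude([a, b], node_altitude=1000)
print(picked[1])  # DAL456: 1200 m baro is within 1000 m of the node
```

Note that a node altitude of 0 (the default) disables the match entirely, mirroring the guard in the diff.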
def log_locationData_toMap(userID, location, message):
"""
+55
@@ -0,0 +1,55 @@
# Radio Module: Meshages TTS (Text-to-Speech) Setup
The radio module supports audible mesh messages using the [KittenTTS](https://github.com/KittenML/KittenTTS) engine. This allows the bot to generate and play speech from text, making mesh alerts and messages audible on your device.
## Features
- Converts mesh messages to speech using KittenTTS.
## Installation
1. **Install Python dependencies:**
- `kittentts` is the TTS engine.
`pip install https://github.com/KittenML/KittenTTS/releases/download/0.1/kittentts-0.1.0-py3-none-any.whl`
2. **Install PortAudio (required for sounddevice):**
- **macOS:**
```sh
brew install portaudio
```
- **Linux (Debian/Ubuntu):**
```sh
sudo apt-get install portaudio19-dev
```
- **Windows:**
No extra step needed; `sounddevice` will use the default audio driver.
## Configuration
- Enable TTS in your `config.ini`:
```ini
[radioMon]
meshagesTTS = True
```
## Usage
When enabled, the bot will generate and play speech for mesh messages using the selected voice.
No additional user action is required.
## Troubleshooting
- If you see errors about missing `sounddevice` or `portaudio`, ensure you have installed the dependencies above.
- On macOS, you may need to allow microphone/audio access for your terminal.
- If you have audio issues, check your system's default output device.
## References
- [KittenTTS GitHub](https://github.com/KittenML/KittenTTS)
- [KittenTTS Model on HuggingFace](https://huggingface.co/KittenML/kitten-tts-nano-0.2)
- [sounddevice documentation](https://python-sounddevice.readthedocs.io/)
---
+114 -81
@@ -16,6 +16,9 @@ import struct
import json
from modules.log import logger
# verbose debug logging for trap words function
debugVoxTmsg = False
from modules.settings import (
radio_detection_enabled,
rigControlServerAddress,
@@ -31,14 +34,52 @@ from modules.settings import (
voxTrapList,
voxOnTrapList,
voxEnableCmd,
ERROR_FETCHING_DATA
ERROR_FETCHING_DATA,
meshagesTTS,
)
# module global variables
previousStrength = -40
signalCycle = 0
# verbose debug logging for trap words function
debugVoxTmsg = False
FREQ_NAME_MAP = {
462562500: "GMRS CH1",
462587500: "GMRS CH2",
462612500: "GMRS CH3",
462637500: "GMRS CH4",
462662500: "GMRS CH5",
462687500: "GMRS CH6",
462712500: "GMRS CH7",
467562500: "GMRS CH8",
467587500: "GMRS CH9",
467612500: "GMRS CH10",
467637500: "GMRS CH11",
467662500: "GMRS CH12",
467687500: "GMRS CH13",
467712500: "GMRS CH14",
467737500: "GMRS CH15",
462550000: "GMRS CH16",
462575000: "GMRS CH17",
462600000: "GMRS CH18",
462625000: "GMRS CH19",
462675000: "GMRS CH20",
462670000: "GMRS CH21",
462725000: "GMRS CH22",
462725500: "GMRS CH23",
467575000: "GMRS CH24",
467600000: "GMRS CH25",
467625000: "GMRS CH26",
467650000: "GMRS CH27",
467675000: "GMRS CH28",
467700000: "FRS CH1",
462650000: "FRS CH5",
462700000: "FRS CH7",
462737500: "FRS CH16",
146520000: "2M Simplex Calling",
446000000: "70cm Simplex Calling",
156800000: "Marine CH16",
# Add more as needed
}
# --- WSJT-X and JS8Call Settings Initialization ---
wsjtxMsgQueue = [] # Queue for WSJT-X detected messages
@@ -100,9 +141,9 @@ try:
watched_callsigns = list({cs.upper() for cs in callsigns})
except ImportError:
logger.debug("RadioMon: WSJT-X/JS8Call settings not configured")
logger.debug("System: RadioMon: WSJT-X/JS8Call settings not configured")
except Exception as e:
logger.warning(f"RadioMon: Error loading WSJT-X/JS8Call settings: {e}")
logger.warning(f"System: RadioMon: Error loading WSJT-X/JS8Call settings: {e}")
if radio_detection_enabled:
@@ -136,51 +177,43 @@ if voxDetectionEnabled:
voxModel = Model(lang=voxLanguage) # use built in model for specified language
except Exception as e:
print(f"RadioMon: Error importing VOX dependencies: {e}")
print(f"System: RadioMon: Error importing VOX dependencies: {e}")
print(f"To use VOX detection please install the vosk and sounddevice python modules")
print(f"pip install vosk sounddevice")
print("sounddevice needs PortAudio: apt-get install portaudio19-dev")
voxDetectionEnabled = False
logger.error(f"RadioMon: VOX detection disabled due to import error")
logger.error(f"System: RadioMon: VOX detection disabled due to import error")
FREQ_NAME_MAP = {
462562500: "GMRS CH1",
462587500: "GMRS CH2",
462612500: "GMRS CH3",
462637500: "GMRS CH4",
462662500: "GMRS CH5",
462687500: "GMRS CH6",
462712500: "GMRS CH7",
467562500: "GMRS CH8",
467587500: "GMRS CH9",
467612500: "GMRS CH10",
467637500: "GMRS CH11",
467662500: "GMRS CH12",
467687500: "GMRS CH13",
467712500: "GMRS CH14",
467737500: "GMRS CH15",
462550000: "GMRS CH16",
462575000: "GMRS CH17",
462600000: "GMRS CH18",
462625000: "GMRS CH19",
462675000: "GMRS CH20",
462670000: "GMRS CH21",
462725000: "GMRS CH22",
462725500: "GMRS CH23",
467575000: "GMRS CH24",
467600000: "GMRS CH25",
467625000: "GMRS CH26",
467650000: "GMRS CH27",
467675000: "GMRS CH28",
467700000: "FRS CH1",
462650000: "FRS CH5",
462700000: "FRS CH7",
462737500: "FRS CH16",
146520000: "2M Simplex Calling",
446000000: "70cm Simplex Calling",
156800000: "Marine CH16",
# Add more as needed
}
if meshagesTTS:
try:
# TTS for meshages imports
logger.debug("System: RadioMon: Initializing TTS model for audible meshages")
import sounddevice as sd
from kittentts import KittenTTS
ttsModel = KittenTTS("KittenML/kitten-tts-nano-0.2")
available_voices = [
'expr-voice-2-m', 'expr-voice-2-f', 'expr-voice-3-m', 'expr-voice-3-f',
'expr-voice-4-m', 'expr-voice-4-f', 'expr-voice-5-m', 'expr-voice-5-f'
]
except Exception as e:
logger.error(f"System: RadioMon: Meshages TTS disabled ({e}); see radio.md for setup instructions.")
meshagesTTS = False
async def generate_and_play_tts(text, voice, samplerate=24000):
"""Async: Generate speech and play audio."""
text = text.strip()
if not text:
return
try:
logger.debug(f"System: RadioMon: Generating TTS for text: {text} with voice: {voice}")
audio = await asyncio.to_thread(ttsModel.generate, text, voice=voice)
if audio is None or len(audio) == 0:
return
await asyncio.to_thread(sd.play, audio, samplerate)
await asyncio.to_thread(sd.wait)
del audio
except Exception as e:
logger.warning(f"System: RadioMon: Error in generate_and_play_tts: {e}")
def get_freq_common_name(freq):
freq = int(freq)
@@ -194,14 +227,14 @@ def get_freq_common_name(freq):
def get_hamlib(msg="f"):
# get data from rigctld server
if "socket" not in globals():
logger.warning("RadioMon: 'socket' module not imported. Hamlib disabled.")
logger.warning("System: RadioMon: 'socket' module not imported. Hamlib disabled.")
return ERROR_FETCHING_DATA
try:
rigControlSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rigControlSocket.settimeout(2)
rigControlSocket.connect((rigControlServerAddress.split(":")[0],int(rigControlServerAddress.split(":")[1])))
except Exception as e:
logger.error(f"RadioMon: Error connecting to rigctld: {e}")
logger.error(f"System: RadioMon: Error connecting to rigctld: {e}")
return ERROR_FETCHING_DATA
try:
@@ -215,7 +248,7 @@ def get_hamlib(msg="f"):
data = data.replace(b'\n',b'')
return data.decode("utf-8").rstrip()
except Exception as e:
logger.error(f"RadioMon: Error fetching data from rigctld: {e}")
logger.error(f"System: RadioMon: Error fetching data from rigctld: {e}")
return ERROR_FETCHING_DATA
def get_sig_strength():
@@ -225,7 +258,7 @@ def get_sig_strength():
def checkVoxTrapWords(text):
try:
if not voxOnTrapList:
logger.debug(f"RadioMon: VOX detected: {text}")
logger.debug(f"System: RadioMon: VOX detected: {text}")
return text
if text:
traps = [voxTrapList] if isinstance(voxTrapList, str) else voxTrapList
@@ -235,27 +268,27 @@ def checkVoxTrapWords(text):
trap_lower = trap_clean.lower()
idx = text_lower.find(trap_lower)
if debugVoxTmsg:
logger.debug(f"RadioMon: VOX checking for trap word '{trap_lower}' in: '{text}' (index: {idx})")
logger.debug(f"System: RadioMon: VOX checking for trap word '{trap_lower}' in: '{text}' (index: {idx})")
if idx != -1:
new_text = text[idx + len(trap_clean):].strip()
if debugVoxTmsg:
logger.debug(f"RadioMon: VOX detected trap word '{trap_lower}' in: '{text}' (remaining: '{new_text}')")
logger.debug(f"System: RadioMon: VOX detected trap word '{trap_lower}' in: '{text}' (remaining: '{new_text}')")
new_words = new_text.split()
if voxEnableCmd:
for word in new_words:
if word in botMethods:
logger.info(f"RadioMon: VOX action '{word}' with '{new_text}'")
logger.info(f"System: RadioMon: VOX action '{word}' with '{new_text}'")
if word == "joke":
return botMethods[word](vox=True)
else:
return botMethods[word](None, None, None, vox=True)
logger.debug(f"RadioMon: VOX returning text after trap word '{trap_lower}': '{new_text}'")
logger.debug(f"System: RadioMon: VOX returning text after trap word '{trap_lower}': '{new_text}'")
return new_text
if debugVoxTmsg:
logger.debug(f"RadioMon: VOX no trap word found in: '{text}'")
logger.debug(f"System: RadioMon: VOX no trap word found in: '{text}'")
return None
except Exception as e:
logger.debug(f"RadioMon: Error in checkVoxTrapWords: {e}")
logger.debug(f"System: RadioMon: Error in checkVoxTrapWords: {e}")
return None
async def signalWatcher():
@@ -265,7 +298,7 @@ async def signalWatcher():
signalStrength = int(get_sig_strength())
if signalStrength >= previousStrength and signalStrength > signalDetectionThreshold:
message = f"Detected {get_freq_common_name(get_hamlib('f'))} active. S-Meter:{signalStrength}dBm"
logger.debug(f"RadioMon: {message}. Waiting for {signalHoldTime} seconds")
logger.debug(f"System: RadioMon: {message}. Waiting for {signalHoldTime} seconds")
previousStrength = signalStrength
signalCycle = 0
await asyncio.sleep(signalHoldTime)
@@ -285,7 +318,7 @@ async def signalWatcher():
async def make_vox_callback(loop, q):
def vox_callback(indata, frames, time, status):
if status:
logger.warning(f"RadioMon: VOX input status: {status}")
logger.warning(f"System: RadioMon: VOX input status: {status}")
try:
loop.call_soon_threadsafe(q.put_nowait, bytes(indata))
except asyncio.QueueFull:
@@ -298,7 +331,7 @@ async def make_vox_callback(loop, q):
loop.call_soon_threadsafe(q.put_nowait, bytes(indata))
except asyncio.QueueFull:
# If still full, just drop this frame
logger.debug("RadioMon: VOX queue full, dropping audio frame")
logger.debug("System: RadioMon: VOX queue full, dropping audio frame")
except RuntimeError:
# Loop may be closed
pass
@@ -310,7 +343,7 @@ async def voxMonitor():
model = voxModel
device_info = sd.query_devices(voxInputDevice, 'input')
samplerate = 16000
logger.debug(f"RadioMon: VOX monitor started on device {device_info['name']} with samplerate {samplerate} using trap words: {voxTrapList if voxOnTrapList else 'none'}")
logger.debug(f"System: RadioMon: VOX monitor started on device {device_info['name']} with samplerate {samplerate} using trap words: {voxTrapList if voxOnTrapList else 'none'}")
rec = KaldiRecognizer(model, samplerate)
loop = asyncio.get_running_loop()
callback = await make_vox_callback(loop, q)
@@ -337,7 +370,7 @@ async def voxMonitor():
await asyncio.sleep(0.1)
except Exception as e:
logger.error(f"RadioMon: Error in VOX monitor: {e}")
logger.warning(f"System: RadioMon: Error in VOX monitor: {e}")
def decode_wsjtx_packet(data):
"""Decode WSJT-X UDP packet according to the protocol specification"""
@@ -439,7 +472,7 @@ def decode_wsjtx_packet(data):
return None
except Exception as e:
logger.debug(f"RadioMon: Error decoding WSJT-X packet: {e}")
logger.debug(f"System: RadioMon: Error decoding WSJT-X packet: {e}")
return None
def check_callsign_match(message, callsigns):
@@ -481,7 +514,7 @@ def check_callsign_match(message, callsigns):
async def wsjtxMonitor():
"""Monitor WSJT-X UDP broadcasts for decode messages"""
if not wsjtx_enabled:
logger.warning("RadioMon: WSJT-X monitoring called but not enabled")
logger.warning("System: RadioMon: WSJT-X monitoring called but not enabled")
return
try:
@@ -490,9 +523,9 @@ async def wsjtxMonitor():
sock.bind((wsjtx_udp_address, wsjtx_udp_port))
sock.setblocking(False)
logger.info(f"RadioMon: WSJT-X UDP listener started on {wsjtx_udp_address}:{wsjtx_udp_port}")
logger.info(f"System: RadioMon: WSJT-X UDP listener started on {wsjtx_udp_address}:{wsjtx_udp_port}")
if watched_callsigns:
logger.info(f"RadioMon: Watching for callsigns: {', '.join(watched_callsigns)}")
logger.info(f"System: RadioMon: Watching for callsigns: {', '.join(watched_callsigns)}")
while True:
try:
@@ -507,29 +540,29 @@ async def wsjtxMonitor():
# Check if message contains watched callsigns
if check_callsign_match(message, watched_callsigns):
msg_text = f"WSJT-X {mode}: {message} (SNR: {snr:+d}dB)"
logger.info(f"RadioMon: {msg_text}")
logger.info(f"System: RadioMon: {msg_text}")
wsjtxMsgQueue.append(msg_text)
except BlockingIOError:
# No data available
await asyncio.sleep(0.1)
except Exception as e:
logger.debug(f"RadioMon: Error in WSJT-X monitor loop: {e}")
logger.debug(f"System: RadioMon: Error in WSJT-X monitor loop: {e}")
await asyncio.sleep(1)
except Exception as e:
logger.error(f"RadioMon: Error starting WSJT-X monitor: {e}")
logger.warning(f"System: RadioMon: Error starting WSJT-X monitor: {e}")
async def js8callMonitor():
"""Monitor JS8Call TCP API for messages"""
if not js8call_enabled:
logger.warning("RadioMon: JS8Call monitoring called but not enabled")
logger.warning("System: RadioMon: JS8Call monitoring called but not enabled")
return
try:
logger.info(f"RadioMon: JS8Call TCP listener connecting to {js8call_tcp_address}:{js8call_tcp_port}")
logger.info(f"System: RadioMon: JS8Call TCP listener connecting to {js8call_tcp_address}:{js8call_tcp_port}")
if watched_callsigns:
logger.info(f"RadioMon: Watching for callsigns: {', '.join(watched_callsigns)}")
logger.info(f"System: RadioMon: Watching for callsigns: {', '.join(watched_callsigns)}")
while True:
try:
@@ -539,14 +572,14 @@ async def js8callMonitor():
sock.connect((js8call_tcp_address, js8call_tcp_port))
sock.setblocking(False)
logger.info("RadioMon: Connected to JS8Call API")
logger.info("System: RadioMon: Connected to JS8Call API")
buffer = ""
while True:
try:
data = sock.recv(4096)
if not data:
logger.warning("RadioMon: JS8Call connection closed")
logger.warning("System: RadioMon: JS8Call connection closed")
break
buffer += data.decode('utf-8', errors='ignore')
@@ -570,34 +603,34 @@ async def js8callMonitor():
if text and check_callsign_match(text, watched_callsigns):
msg_text = f"JS8Call from {from_call}: {text} (SNR: {snr:+d}dB)"
logger.info(f"RadioMon: {msg_text}")
logger.info(f"System: RadioMon: {msg_text}")
js8callMsgQueue.append(msg_text)
except json.JSONDecodeError:
logger.debug(f"RadioMon: Invalid JSON from JS8Call: {line[:100]}")
logger.debug(f"System: RadioMon: Invalid JSON from JS8Call: {line[:100]}")
except Exception as e:
logger.debug(f"RadioMon: Error processing JS8Call message: {e}")
logger.debug(f"System: RadioMon: Error processing JS8Call message: {e}")
except BlockingIOError:
await asyncio.sleep(0.1)
except socket.timeout:
await asyncio.sleep(0.1)
except Exception as e:
logger.debug(f"RadioMon: Error in JS8Call receive loop: {e}")
logger.debug(f"System: RadioMon: Error in JS8Call receive loop: {e}")
break
sock.close()
logger.warning("RadioMon: JS8Call connection lost, reconnecting in 5s...")
logger.warning("System: RadioMon: JS8Call connection lost, reconnecting in 5s...")
await asyncio.sleep(5)
except socket.timeout:
logger.warning("RadioMon: JS8Call connection timeout, retrying in 5s...")
logger.warning("System: RadioMon: JS8Call connection timeout, retrying in 5s...")
await asyncio.sleep(5)
except Exception as e:
logger.warning(f"RadioMon: Error connecting to JS8Call: {e}")
logger.warning(f"System: RadioMon: Error connecting to JS8Call: {e}")
await asyncio.sleep(10)
except Exception as e:
logger.error(f"RadioMon: Error starting JS8Call monitor: {e}")
logger.warning(f"System: RadioMon: Error starting JS8Call monitor: {e}")
# end of file
+41 -1
@@ -1,11 +1,13 @@
# rss feed module for meshing-around 2025
from modules.log import logger
from modules.settings import rssFeedURL, rssFeedNames, rssMaxItems, rssTruncate, urlTimeoutSeconds, ERROR_FETCHING_DATA
from modules.settings import rssFeedURL, rssFeedNames, rssMaxItems, rssTruncate, urlTimeoutSeconds, ERROR_FETCHING_DATA, newsAPI_KEY, newsAPIsort
import urllib.request
import xml.etree.ElementTree as ET
import html
from html.parser import HTMLParser
import bs4 as bs
import requests
from datetime import datetime, timedelta
# Common User-Agent for all RSS requests
COMMON_USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
@@ -136,3 +138,41 @@ def get_rss_feed(msg):
logger.error(f"Error fetching RSS feed from {feed_url}: {e}")
return ERROR_FETCHING_DATA
def get_newsAPI(user_search="meshtastic"):
# Fetch news from NewsAPI.org
user_search = user_search.strip()
if user_search.lower().startswith("latest"):
user_search = user_search[6:].strip()
if not user_search:
user_search = "meshtastic"
try:
last_week = datetime.now() - timedelta(days=7)
newsAPIurl = (
f"https://newsapi.org/v2/everything?"
f"q={user_search}&language=en&from={last_week.strftime('%Y-%m-%d')}&sortBy={newsAPIsort}&pageSize=5&apiKey={newsAPI_KEY}"
)
response = requests.get(newsAPIurl, headers={"User-Agent": COMMON_USER_AGENT}, timeout=urlTimeoutSeconds)
news_data = response.json()
if news_data.get("status") != "ok":
error_message = news_data.get("message", "Unknown error")
logger.error(f"NewsAPI error: {error_message}")
return ERROR_FETCHING_DATA
logger.debug(f"System: NewsAPI Searching for '{user_search}' got {news_data.get('totalResults', 0)} results")
articles = news_data.get("articles", [])[:3]
news_list = []
for article in articles:
title = article.get("title", "No Title")
url = article.get("url", "")
description = article.get("description", '')
news_list.append(f"📰{title}\n{description}")
# Make a nice newspaper style output
msg = f"🗞️:"
for item in news_list:
msg += item + "\n\n"
return msg.strip()
except Exception as e:
logger.error(f"System: NewsAPI fetching news: {e}")
return ERROR_FETCHING_DATA
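Building the query string with `urllib.parse.urlencode` avoids splice bugs in hand-assembled URLs like the `sortBy` parameter above. A network-free sketch of the same request (the `newsAPIsort` value here is a stand-in for the config setting; the API key is omitted):

```python
from datetime import datetime, timedelta
from urllib.parse import urlencode

newsAPIsort = "relevancy"  # stand-in for the configured sort order
last_week = datetime.now() - timedelta(days=7)
params = {
    "q": "meshtastic",
    "language": "en",
    "from": last_week.strftime("%Y-%m-%d"),
    "sortBy": newsAPIsort,
    "pageSize": 5,
}
url = "https://newsapi.org/v2/everything?" + urlencode(params)
print("sortBy=relevancy" in url)  # True
```

`requests.get(base_url, params=params)` would accomplish the same encoding in the live code path.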
+46 -21
@@ -32,8 +32,10 @@ cmdHistory = [] # list to hold the command history for lheard and history comman
msg_history = [] # list to hold the message history for the messages command
max_bytes = 200 # Meshtastic has ~237 byte limit, use conservative 200 bytes for message content
voxMsgQueue = [] # queue for VOX detected messages
tts_read_queue = [] # queue for TTS messages
wsjtxMsgQueue = [] # queue for WSJT-X detected messages
js8callMsgQueue = [] # queue for JS8Call detected messages
autoBanlist = [] # list of nodes to autoban for repeated offenses
# Game trackers
surveyTracker = [] # Survey game tracker
tictactoeTracker = [] # TicTacToe game tracker
@@ -80,7 +82,7 @@ if 'sentry' not in config:
config.write(open(config_file, 'w'))
if 'location' not in config:
config['location'] = {'enabled': 'True', 'lat': '48.50', 'lon': '-123.0', 'UseMeteoWxAPI': 'False', 'useMetric': 'False', 'NOAAforecastDuration': '4', 'NOAAalertCount': '2', 'NOAAalertsEnabled': 'True', 'wxAlertBroadcastEnabled': 'False', 'wxAlertBroadcastChannel': '2', 'repeaterLookup': 'rbook'}
config['location'] = {'enabled': 'True', 'lat': '48.50', 'lon': '-123.0', 'fuzzConfigLocation': 'True',}
config.write(open(config_file, 'w'))
if 'bbs' not in config:
@@ -275,12 +277,10 @@ try:
rssMaxItems = config['general'].getint('rssMaxItems', 3) # default 3 items
rssTruncate = config['general'].getint('rssTruncate', 100) # default 100 characters
rssFeedNames = config['general'].get('rssFeedNames', 'default,arrl').split(',')
# emergency response
emergency_responder_enabled = config['emergencyHandler'].getboolean('enabled', False)
emergency_responder_alert_channel = config['emergencyHandler'].getint('alert_channel', 2) # default 2
emergency_responder_alert_interface = config['emergencyHandler'].getint('alert_interface', 1) # default 1
emergency_responder_email = config['emergencyHandler'].get('email', '').split(',')
newsAPI_KEY = config['general'].get('newsAPI_KEY', '') # default empty
newsAPIregion = config['general'].get('newsAPIregion', 'us') # default us
enable_headlines = config['general'].getboolean('enableNewsAPI', False) # default False
newsAPIsort = config['general'].get('sort_by', 'relevancy') # default relevancy
# sentry
sentry_enabled = config['sentry'].getboolean('SentryEnabled', False) # default False
@@ -315,34 +315,52 @@ try:
n2yoAPIKey = config['location'].get('n2yoAPIKey', '') # default empty
satListConfig = config['location'].get('satList', '25544').split(',') # default 25544 ISS
riverListDefault = config['location'].get('riverList', '').split(',') # default None
useTidePredict = config['location'].getboolean('useTidePredict', False) # default False use NOAA
coastalEnabled = config['location'].getboolean('coastalEnabled', False) # default False
myCoastalZone = config['location'].get('myCoastalZone', None) # default None
coastalForecastDays = config['location'].getint('coastalForecastDays', 3) # default 3 days
# location alerts
emergencyAlertBrodcastEnabled = config['location'].getboolean('eAlertBroadcastEnabled', False) # default False
eAlertBroadcastEnabled = config['location'].getboolean('eAlertBroadcastEnabled', False) # old deprecated name
ipawsAlertEnabled = config['location'].getboolean('ipawsAlertEnabled', False) # default False, replaces eAlertBroadcastEnabled
# Keep both in sync for backward compatibility
if eAlertBroadcastEnabled or ipawsAlertEnabled:
eAlertBroadcastEnabled = True
ipawsAlertEnabled = True
wxAlertBroadcastEnabled = config['location'].getboolean('wxAlertBroadcastEnabled', False) # default False
volcanoAlertBroadcastEnabled = config['location'].getboolean('volcanoAlertBroadcastEnabled', False) # default False
enableGBalerts = config['location'].getboolean('enableGBalerts', False) # default False
enableDEalerts = config['location'].getboolean('enableDEalerts', False) # default False
wxAlertsEnabled = config['location'].getboolean('NOAAalertsEnabled', True) # default True
ignoreEASenable = config['location'].getboolean('ignoreEASenable', False) # default False
ignoreEASwords = config['location'].get('ignoreEASwords', 'test,advisory').split(',') # default test,advisory
myRegionalKeysDE = config['location'].get('myRegionalKeysDE', '110000000000').split(',') # default city Berlin
ignoreFEMAenable = config['location'].getboolean('ignoreFEMAenable', True) # default True
ignoreFEMAwords = config['location'].get('ignoreFEMAwords', 'test,exercise').split(',') # default test,exercise
ignoreUSGSEnable = config['location'].getboolean('ignoreVolcanoEnable', False) # default False
ignoreUSGSWords = config['location'].get('ignoreVolcanoWords', 'test,advisory').split(',') # default test,advisory
forecastDuration = config['location'].getint('NOAAforecastDuration', 4) # NOAA forecast days
numWxAlerts = config['location'].getint('NOAAalertCount', 2) # default 2 alerts
enableExtraLocationWx = config['location'].getboolean('enableExtraLocationWx', False) # default False
myStateFIPSList = config['location'].get('myFIPSList', '').split(',') # default empty
mySAMEList = config['location'].get('mySAMEList', '').split(',') # default empty
ignoreFEMAenable = config['location'].getboolean('ignoreFEMAenable', True) # default True
ignoreFEMAwords = config['location'].get('ignoreFEMAwords', 'test,exercise').split(',') # default test,exercise
wxAlertBroadcastChannel = config['location'].get('wxAlertBroadcastCh', '2').split(',') # default Channel 2
emergencyAlertBroadcastCh = config['location'].get('eAlertBroadcastCh', '2').split(',') # default Channel 2
volcanoAlertBroadcastEnabled = config['location'].getboolean('volcanoAlertBroadcastEnabled', False) # default False
volcanoAlertBroadcastChannel = config['location'].get('volcanoAlertBroadcastCh', '2').split(',') # default Channel 2
ignoreUSGSEnable = config['location'].getboolean('ignoreVolcanoEnable', False) # default False
ignoreUSGSWords = config['location'].get('ignoreVolcanoWords', 'test,advisory').split(',') # default test,advisory
myRegionalKeysDE = config['location'].get('myRegionalKeysDE', '110000000000').split(',') # default city Berlin
eAlertBroadcastChannel = config['location'].get('eAlertBroadcastCh', '').split(',') # default empty
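These channel options are comma-separated strings split into lists. One footgun worth noting: splitting an empty default yields `['']`, not `[]`, which is why the broadcast loop later strips entries before use. A minimal configparser sketch (section and key names are illustrative):

```python
import configparser

config = configparser.ConfigParser()
config.read_string("""
[location]
wxAlertBroadcastCh = 2,3
eAlertBroadcastCh =
""")

# .get() with a fallback, then split on commas
wx_channels = config['location'].get('wxAlertBroadcastCh', '2').split(',')
# an empty value splits to [''], so filter blank entries before converting to int
e_channels = [ch.strip() for ch in config['location'].get('eAlertBroadcastCh', '').split(',') if ch.strip()]
print(wx_channels)  # ['2', '3']
print(e_channels)   # []
```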
# any US alerts enabled
usAlerts = (
ipawsAlertEnabled or
wxAlertBroadcastEnabled or
volcanoAlertBroadcastEnabled or
eAlertBroadcastEnabled
)
# emergency response
emergency_responder_enabled = config['emergencyHandler'].getboolean('enabled', False)
emergency_responder_alert_channel = config['emergencyHandler'].getint('alert_channel', 2) # default 2
emergency_responder_alert_interface = config['emergencyHandler'].getint('alert_interface', 1) # default 1
emergency_responder_email = config['emergencyHandler'].get('email', '').split(',')
# bbs
bbs_enabled = config['bbs'].getboolean('enabled', False)
bbsdb = config['bbs'].get('bbsdb', 'data/bbsdb.pkl')
@@ -356,6 +374,7 @@ try:
checklist_enabled = config['checklist'].getboolean('enabled', False)
checklist_db = config['checklist'].get('checklist_db', 'data/checklist.db')
reverse_in_out = config['checklist'].getboolean('reverse_in_out', False)
checklist_auto_approve = config['checklist'].getboolean('auto_approve', True) # default True
# qrz hello
qrz_hello_enabled = config['qrz'].getboolean('enabled', False)
@@ -418,6 +437,9 @@ try:
voxOnTrapList = config['radioMon'].getboolean('voxOnTrapList', False) # default False
voxTrapList = config['radioMon'].get('voxTrapList', 'chirpy').split(',') # default chirpy
voxEnableCmd = config['radioMon'].getboolean('voxEnableCmd', True) # default True
meshagesTTS = config['radioMon'].getboolean('meshagesTTS', False) # default False
ttsChannels = config['radioMon'].get('ttsChannels', '2').split(',') # default Channel 2
ttsnoWelcome = config['radioMon'].getboolean('ttsnoWelcome', False) # default False
# WSJT-X and JS8Call monitoring
wsjtx_detection_enabled = config['radioMon'].getboolean('wsjtxDetectionEnabled', False) # default WSJT-X detection disabled
@@ -436,8 +458,8 @@ try:
news_random_line_only = config['fileMon'].getboolean('news_random_line', False) # default False
enable_runShellCmd = config['fileMon'].getboolean('enable_runShellCmd', False) # default False
allowXcmd = config['fileMon'].getboolean('allowXcmd', False) # default False
xCmd2factorEnabled = config['fileMon'].getboolean('2factor_enabled', True) # default True
xCmd2factor_timeout = config['fileMon'].getint('2factor_timeout', 100) # default 100 seconds
xCmd2factorEnabled = config['fileMon'].getboolean('twoFactor_enabled', True) # default True
xCmd2factor_timeout = config['fileMon'].getint('twoFactor_timeout', 100) # default 100 seconds
# games
game_hop_limit = config['games'].getint('game_hop_limit', 5) # default 5 hops
@@ -471,6 +493,9 @@ try:
noisyNodeLogging = config['messagingSettings'].getboolean('noisyNodeLogging', False) # default False
logMetaStats = config['messagingSettings'].getboolean('logMetaStats', True) # default True
noisyTelemetryLimit = config['messagingSettings'].getint('noisyTelemetryLimit', 5) # default 5 packets
autoBanEnabled = config['messagingSettings'].getboolean('autoBanEnabled', False) # default False
autoBanThreshold = config['messagingSettings'].getint('autoBanThreshold', 5) # default 5 offenses
autoBanTimeframe = config['messagingSettings'].getint('autoBanTimeframe', 3600) # default 1 hour in seconds
except Exception as e:
print(f"System: Error reading config file: {e}")
print("System: Check the config.ini against config.template file for missing sections or values.")
@@ -114,7 +114,7 @@ if location_enabled:
help_message = help_message + ", howtall"
# NOAA alerts needs location module
if wxAlertBroadcastEnabled or emergencyAlertBrodcastEnabled or volcanoAlertBroadcastEnabled:
if wxAlertBroadcastEnabled or ipawsAlertEnabled or volcanoAlertBroadcastEnabled or eAlertBroadcastEnabled: # eAlertBroadcastEnabled deprecated
from modules.locationdata import * # from the spudgunman/meshing-around repo
# limited subset, this should be done better but eh..
trap_list = trap_list + ("wx", "wxa", "wxalert", "ea", "ealert", "valert")
@@ -125,10 +125,6 @@ if coastalEnabled:
from modules.locationdata import * # from the spudgunman/meshing-around repo
trap_list = trap_list + ("mwx","tide",)
help_message = help_message + ", mwx, tide"
if useTidePredict:
from modules import xtide
trap_list = trap_list + ("tide",)
help_message = help_message + ", tide"
# BBS Configuration
if bbs_enabled:
@@ -157,10 +153,14 @@ if wikipedia_enabled or use_kiwix_server:
help_message = help_message + ", wiki"
# RSS Feed Configuration
if rssEnable:
if rssEnable or enable_headlines:
from modules.rss import * # from the spudgunman/meshing-around repo
trap_list = trap_list + ("readrss",)
help_message = help_message + ", readrss"
if rssEnable:
trap_list = trap_list + ("readrss",)
help_message = help_message + ", readrss"
if enable_headlines:
trap_list = trap_list + ("latest",)
help_message = help_message + ", latest"
# LLM Configuration
if llm_enabled:
@@ -292,13 +292,6 @@ if inventory_enabled:
trap_list = trap_list + trap_list_inventory # items item, itemlist, itemsell, etc.
help_message = help_message + ", item, cart"
# Radio Monitor Configuration
if radio_detection_enabled:
from modules.radio import * # from the spudgunman/meshing-around repo
if voxDetectionEnabled:
from modules.radio import * # from the spudgunman/meshing-around repo
# File Monitor Configuration
if file_monitor_enabled or read_news_enabled or bee_enabled or enable_runShellCmd or cmdShellSentryAlerts:
from modules.filemon import * # from the spudgunman/meshing-around repo
@@ -383,6 +376,9 @@ for i in range(1, 10):
logger.critical(f"System: abort. Initializing Interface{i} {e}")
exit()
# Get my node numbers for global use
my_node_ids = [globals().get(f'myNodeNum{i}') for i in range(1, 10)]
# Get the node number of the devices, check if the devices are connected meshtastic devices
for i in range(1, 10):
if globals().get(f'interface{i}') and globals().get(f'interface{i}_enabled'):
@@ -666,7 +662,7 @@ async def get_closest_nodes(nodeInt=1,returnCount=3, channel=publicChannel):
distance = round(geopy.distance.geodesic((latitudeValue, longitudeValue), (latitude, longitude)).m, 2)
if (distance < sentry_radius):
if (nodeID not in [globals().get(f'myNodeNum{i}') for i in range(1, 10)]) and str(nodeID) not in sentryIgnoreList:
if (nodeID not in my_node_ids) and str(nodeID) not in sentryIgnoreList:
node_list.append({'id': nodeID, 'latitude': latitude, 'longitude': longitude, 'distance': distance})
except Exception as e:
@@ -678,7 +674,7 @@ async def get_closest_nodes(nodeInt=1,returnCount=3, channel=publicChannel):
try:
logger.debug(f"System: Requesting location data for {node['id']}, lastHeard: {node.get('lastHeard', 'N/A')}")
# if not an interface node
if node['num'] in [globals().get(f'myNodeNum{i}') for i in range(1, 10)]:
if node['num'] in my_node_ids:
ignore = True
else:
# one idea: send a ping to the node to request location data now, then ask again later
@@ -955,21 +951,94 @@ def messageTrap(msg):
return True
return False
def stringSafeCheck(s):
def stringSafeCheck(s, fromID=0):
# Check if a string is safe to use, no control characters or non-printable characters
soFarSoGood = True
if not all(c.isprintable() or c.isspace() for c in s):
return False
ban_hammer(fromID, reason="Non-printable character in message")
return False # non-printable characters found
if any(ord(c) < 32 and c not in '\n\r\t' for c in s):
return False
ban_hammer(fromID, reason="Control character in message")
return False # control characters found
if any(c in s for c in ['\x0b', '\x0c', '\x1b']):
return False
return False # vertical tab, form feed, escape characters found
if len(s) > 1000:
return False
injection_chars = [';', '|', '../']
if any(char in s for char in injection_chars):
# Check for single-character injections
single_injection_chars = [';', '|', '}', '>', ')']
if any(c in s for c in single_injection_chars):
return False # injection character found
# Check for multi-character patterns
multi_injection_patterns = ['../', '||']
if any(pattern in s for pattern in multi_injection_patterns):
return False
return soFarSoGood
return True
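The checks above can be exercised in isolation; a standalone sketch of the same validation (the ban_hammer side effects omitted):

```python
def string_safe_check(s: str) -> bool:
    """Reject control/non-printable characters, oversized input, and injection patterns."""
    if not all(c.isprintable() or c.isspace() for c in s):
        return False  # non-printable characters found
    if any(ord(c) < 32 and c not in '\n\r\t' for c in s):
        return False  # control characters found
    if len(s) > 1000:
        return False  # oversized payload
    if any(c in s for c in (';', '|', '}', '>', ')')):
        return False  # single-character injection
    if any(p in s for p in ('../', '||')):
        return False  # multi-character injection patterns
    return True

print(string_safe_check("hello mesh"))      # True
print(string_safe_check("ping; rm -rf /"))  # False: ';' injection
print(string_safe_check("\x1b[31mred"))     # False: escape character
```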
def ban_hammer(node_id, rxInterface=None, channel=None, reason=""):
"""
Auto-ban nodes that exceed the message threshold within the timeframe.
Returns True if the node is (or becomes) banned, False otherwise.
"""
global autoBanlist, seenNodes, bbs_ban_list
current_time = time.time()
node_id_str = str(node_id)
if isNodeAdmin(node_id_str):
return False # Do not ban admin nodes
# Check if the node is already banned
if node_id_str in bbs_ban_list or node_id_str in autoBanlist:
return True # Node is already banned
# if no reason was provided, don't ban; just report the ban state checked above
if reason == "":
return False
# Find or create the seenNodes entry (patched for missing 'node_id')
node_entry = next((entry for entry in seenNodes if entry.get('node_id') == node_id_str), None)
if node_entry:
# Update interface and channel if provided
if rxInterface is not None:
node_entry['rxInterface'] = rxInterface
if channel is not None:
node_entry['channel'] = channel
# Check if the timeframe has expired
if (current_time - node_entry['lastSeen']) > autoBanTimeframe:
node_entry['auto_ban_count'] = 1
node_entry['lastSeen'] = current_time
else:
node_entry['auto_ban_count'] += 1
node_entry['lastSeen'] = current_time
else:
# node not found, create a new entry
entry = {
'node_id': node_id_str,
'first_seen': current_time,
'lastSeen': current_time,
'auto_ban_count': 3, # start at 3 to trigger ban faster
'rxInterface': rxInterface,
'channel': channel,
'welcome': False
}
seenNodes.append(entry)
node_entry = entry
# Check if the node has exceeded the ban threshold
if node_entry['auto_ban_count'] < autoBanThreshold:
logger.debug(f"System: Node {node_id_str} auto-ban count: {node_entry['auto_ban_count']}")
return False # No ban applied
# If the node has exceeded the ban threshold within the time window
autoBanlist.append(node_id_str)
logger.info(f"System: Node {node_id_str} exceeded auto-ban threshold with {node_entry['auto_ban_count']} messages")
if autoBanEnabled:
logger.warning(f"System: Auto-banned node {node_id_str} Reason: {reason}")
if node_id_str not in bbs_ban_list:
bbs_ban_list.append(node_id_str)
save_bbsBanList()
return True # Node is now banned
return False # No ban applied
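Stripped of the admin check, persistence, and interface bookkeeping, the windowed counting above reduces to a small sketch. Names and values here are illustrative, mirroring the autoBanThreshold/autoBanTimeframe defaults:

```python
import time

AUTO_BAN_THRESHOLD = 5      # offenses before a ban
AUTO_BAN_TIMEFRAME = 3600   # rolling window in seconds

seen = {}      # node_id -> {"count": int, "last": float}
banned = set()

def record_offense(node_id, now=None):
    # Count offenses per node; reset the count once the window expires
    now = time.time() if now is None else now
    entry = seen.setdefault(node_id, {"count": 0, "last": now})
    if now - entry["last"] > AUTO_BAN_TIMEFRAME:
        entry["count"] = 0  # window expired, start over
    entry["count"] += 1
    entry["last"] = now
    if entry["count"] >= AUTO_BAN_THRESHOLD:
        banned.add(node_id)
        return True
    return False
```

Five offenses inside one window ban a node; two offenses an hour apart never accumulate.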
def save_bbsBanList():
# save the bbs_ban_list to file
@@ -987,7 +1056,7 @@ def load_bbsBanList():
try:
with open('data/bbs_ban_list.txt', 'r') as f:
loaded_list = [line.strip() for line in f if line.strip()]
logger.debug("System: BBS ban list loaded from file")
logger.debug(f"System: BBS ban list now has {len(loaded_list)} entries loaded from file")
except FileNotFoundError:
config_val = config['bbs'].get('bbs_ban_list', '')
if config_val:
@@ -1007,8 +1076,6 @@ def isNodeAdmin(nodeID):
for admin in bbs_admin_list:
if str(nodeID) == admin:
return True
else:
return True
return False
def isNodeBanned(nodeID):
@@ -1019,6 +1086,7 @@ def isNodeBanned(nodeID):
return False
def handle_bbsban(message, message_from_id, isDM):
global bbs_ban_list
msg = ""
if not isDM:
return "🤖only available in a Direct Message📵"
@@ -1115,136 +1183,76 @@ def handleMultiPing(nodeID=0, deviceID=1):
multiPingList.pop(j)
break
priorVolcanoAlert = ""
priorEmergencyAlert = ""
priorWxAlert = ""
# Alert broadcasting initialization
last_alerts = {
"overdue": {"time": 0, "message": ""},
"fema": {"time": 0, "message": ""},
"uk": {"time": 0, "message": ""},
"de": {"time": 0, "message": ""},
"wx": {"time": 0, "message": ""},
"volcano": {"time": 0, "message": ""},
}
def should_send_alert(alert_type, new_message, min_interval=1):
now = time.time()
last = last_alerts[alert_type]
# Only send if enough time has passed AND the message is different
if (now - last["time"]) > min_interval and new_message != last["message"]:
last_alerts[alert_type]["time"] = now
last_alerts[alert_type]["message"] = new_message
return True
return False
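The helper gates sends on both elapsed time and changed text; a quick demonstration (min_interval forced negative so only the message-change check matters in this demo):

```python
import time

last_alerts = {"wx": {"time": 0.0, "message": ""}}

def should_send_alert(alert_type, new_message, min_interval=1):
    # Send only when min_interval has elapsed AND the text differs from the last send
    now = time.time()
    last = last_alerts[alert_type]
    if (now - last["time"]) > min_interval and new_message != last["message"]:
        last["time"] = now
        last["message"] = new_message
        return True
    return False

print(should_send_alert("wx", "FLOOD WATCH", min_interval=-1))    # True: first occurrence
print(should_send_alert("wx", "FLOOD WATCH", min_interval=-1))    # False: duplicate suppressed
print(should_send_alert("wx", "FLOOD WARNING", min_interval=-1))  # True: text changed
```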
def handleAlertBroadcast(deviceID=1):
try:
global priorVolcanoAlert, priorEmergencyAlert, priorWxAlert
alertUk = NO_ALERTS
alertDe = NO_ALERTS
alertFema = NO_ALERTS
wxAlert = NO_ALERTS
volcanoAlert = NO_ALERTS
overdueAlerts = NO_ALERTS
alertUk = alertDe = alertFema = wxAlert = volcanoAlert = overdueAlerts = NO_ALERTS
alertWx = False
# only allow API call every 20 minutes
# the watchdog will call this function 3 times; possible throttling observed on the API
clock = datetime.now()
if clock.minute % 20 != 0:
return False
if clock.second > 17:
return False
# check for alerts
if wxAlertBroadcastEnabled:
alertWx = alertBrodcastNOAA()
if emergencyAlertBrodcastEnabled:
if enableDEalerts:
alertDe = get_nina_alerts()
if enableGBalerts:
alertUk = get_govUK_alerts()
else:
# default USA alerts
alertFema = getIpawsAlert(latitudeValue,longitudeValue, shortAlerts=True)
# Overdue check-in alert
if checklist_enabled:
overdueAlerts = format_overdue_alert()
# format alert
if alertWx:
wxAlert = f"🚨 {alertWx[1]} EAS-WX ALERT: {alertWx[0]}"
else:
wxAlert = False
if overdueAlerts:
if should_send_alert("overdue", overdueAlerts, min_interval=300): # 5 minutes interval for overdue alerts
send_message(overdueAlerts, emergency_responder_alert_channel, 0, emergency_responder_alert_interface)
femaAlert = alertFema
ukAlert = alertUk
deAlert = alertDe
if overdueAlerts != NO_ALERTS and overdueAlerts is not None:
logger.debug("System: Adding overdue checkin to emergency alerts")
if femaAlert and NO_ALERTS not in femaAlert and ERROR_FETCHING_DATA not in femaAlert:
femaAlert += "\n\n" + overdueAlerts
elif ukAlert and NO_ALERTS not in ukAlert and ERROR_FETCHING_DATA not in ukAlert:
ukAlert += "\n\n" + overdueAlerts
elif deAlert and NO_ALERTS not in deAlert and ERROR_FETCHING_DATA not in deAlert:
deAlert += "\n\n" + overdueAlerts
else:
# only overdue alerts to send
if overdueAlerts != "" and overdueAlerts is not None and overdueAlerts != NO_ALERTS:
if overdueAlerts != priorEmergencyAlert:
priorEmergencyAlert = overdueAlerts
else:
return False
if isinstance(emergencyAlertBroadcastCh, list):
for channel in emergencyAlertBroadcastCh:
send_message(overdueAlerts, int(channel), 0, deviceID)
else:
send_message(overdueAlerts, emergencyAlertBroadcastCh, 0, deviceID)
return True
if emergencyAlertBrodcastEnabled:
if NO_ALERTS not in femaAlert and ERROR_FETCHING_DATA not in femaAlert:
if femaAlert != priorEmergencyAlert:
priorEmergencyAlert = femaAlert
else:
return False
if isinstance(emergencyAlertBroadcastCh, list):
for channel in emergencyAlertBroadcastCh:
send_message(femaAlert, int(channel), 0, deviceID)
else:
send_message(femaAlert, emergencyAlertBroadcastCh, 0, deviceID)
return True
if NO_ALERTS not in ukAlert:
if ukAlert != priorEmergencyAlert:
priorEmergencyAlert = ukAlert
else:
return False
if isinstance(emergencyAlertBroadcastCh, list):
for channel in emergencyAlertBroadcastCh:
send_message(ukAlert, int(channel), 0, deviceID)
else:
send_message(ukAlert, emergencyAlertBroadcastCh, 0, deviceID)
return True
if NO_ALERTS not in alertDe:
if deAlert != priorEmergencyAlert:
priorEmergencyAlert = deAlert
else:
return False
if isinstance(emergencyAlertBroadcastCh, list):
for channel in emergencyAlertBroadcastCh:
send_message(deAlert, int(channel), 0, deviceID)
else:
send_message(deAlert, emergencyAlertBroadcastCh, 0, deviceID)
return True
# Only allow API call every 20 minutes
if not (clock.minute % 20 == 0 and clock.second <= 17):
return False
# Collect alerts
if wxAlertBroadcastEnabled:
if wxAlert:
if wxAlert != priorWxAlert:
priorWxAlert = wxAlert
else:
return False
if isinstance(wxAlertBroadcastChannel, list):
for channel in wxAlertBroadcastChannel:
send_message(wxAlert, int(channel), 0, deviceID)
else:
send_message(wxAlert, wxAlertBroadcastChannel, 0, deviceID)
return True
alertWx = alertBrodcastNOAA()
if alertWx:
wxAlert = f"🚨 {alertWx[1]} EAS-WX ALERT: {alertWx[0]}"
if eAlertBroadcastEnabled or ipawsAlertEnabled:
alertFema = getIpawsAlert(latitudeValue, longitudeValue, shortAlerts=True)
if volcanoAlertBroadcastEnabled:
volcanoAlert = get_volcano_usgs(latitudeValue, longitudeValue)
if volcanoAlert and NO_ALERTS not in volcanoAlert and ERROR_FETCHING_DATA not in volcanoAlert:
# check if the alert is different from the last one
if volcanoAlert != priorVolcanoAlert:
priorVolcanoAlert = volcanoAlert
if isinstance(volcanoAlertBroadcastChannel, list):
for channel in volcanoAlertBroadcastChannel:
send_message(volcanoAlert, int(channel), 0, deviceID)
else:
send_message(volcanoAlert, volcanoAlertBroadcastChannel, 0, deviceID)
return True
if enableDEalerts:
deAlerts = get_nina_alerts()
alert_types = []
if usAlerts:
alert_types += [
("fema", alertFema, ipawsAlertEnabled),
("wx", wxAlert, wxAlertBroadcastEnabled),
("volcano", volcanoAlert, volcanoAlertBroadcastEnabled),]
if enableDEalerts:
alert_types += [("de", deAlerts, enableDEalerts)]
for alert_type, alert_msg, enabled in alert_types:
if enabled and alert_msg and NO_ALERTS not in alert_msg and ERROR_FETCHING_DATA not in alert_msg:
if should_send_alert(alert_type, alert_msg):
logger.debug(f"System: Sending {alert_type} alert to emergency responder channel {emergency_responder_alert_channel}")
send_message(alert_msg, emergency_responder_alert_channel, 0, emergency_responder_alert_interface)
if eAlertBroadcastChannel:
for ch in eAlertBroadcastChannel:
ch = ch.strip()
if ch:
logger.debug(f"System: Sending {alert_type} alert to aux channel {ch}")
time.sleep(splitDelay)
send_message(alert_msg, int(ch), 0, emergency_responder_alert_interface)
except Exception as e:
logger.error(f"System: Error in handleAlertBroadcast: {e}")
return False
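The rewritten broadcast path drives everything off a (name, message, enabled) table instead of the old per-region if-chains. A standalone sketch of that fan-out (send_message, channel numbers, and the NO_ALERTS sentinel are stand-ins):

```python
NO_ALERTS = "No Alerts"
sent = []

def send_message(msg, channel):
    sent.append((channel, msg))  # stand-in for the mesh send

def dispatch_alerts(alert_types, primary_channel=2, aux_channels=("3",)):
    # Fan each enabled, non-sentinel alert out to the primary and any aux channels
    for name, msg, enabled in alert_types:
        if enabled and msg and NO_ALERTS not in msg:
            send_message(msg, primary_channel)
            for ch in aux_channels:
                if ch.strip():
                    send_message(msg, int(ch))

dispatch_alerts([
    ("wx", "EAS-WX: Flood Watch", True),
    ("volcano", NO_ALERTS, True),       # skipped: sentinel text
    ("fema", "IPAWS: Shelter", False),  # skipped: disabled
])
print(sent)  # [(2, 'EAS-WX: Flood Watch'), (3, 'EAS-WX: Flood Watch')]
```

Adding a new alert source becomes one more tuple in the table rather than another copy of the send loop.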
@@ -1444,11 +1452,13 @@ def consumeMetadata(packet, rxNode=0, channel=-1):
# Meta for most Messages leaderboard
if packet_type == 'TEXT_MESSAGE':
message_count = meshLeaderboard.get('nodeMessageCounts', {})
message_count[nodeID] = message_count.get(nodeID, 0) + 1
meshLeaderboard['nodeMessageCounts'] = message_count
if message_count[nodeID] > meshLeaderboard['mostMessages']['value']:
meshLeaderboard['mostMessages'] = {'nodeID': nodeID, 'value': message_count[nodeID], 'timestamp': time.time()}
# only count packets that are not addressed to one of the bot's own nodes
if packet.get('to') not in my_node_ids:
message_count = meshLeaderboard.get('nodeMessageCounts', {})
message_count[nodeID] = message_count.get(nodeID, 0) + 1
meshLeaderboard['nodeMessageCounts'] = message_count
if message_count[nodeID] > meshLeaderboard['mostMessages']['value']:
meshLeaderboard['mostMessages'] = {'nodeID': nodeID, 'value': message_count[nodeID], 'timestamp': time.time()}
else:
tmessage_count = meshLeaderboard.get('nodeTMessageCounts', {})
tmessage_count[nodeID] = tmessage_count.get(nodeID, 0) + 1
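The leaderboard change skips messages addressed to the bot itself and keeps a running maximum; a reduced sketch (node numbers hypothetical, timestamps omitted):

```python
my_node_ids = {111}  # hypothetical node number of the bot's own interface
leaderboard = {"nodeMessageCounts": {}, "mostMessages": {"nodeID": None, "value": 0}}

def count_message(from_id, to_id):
    # DMs to the bot don't count toward the leaderboard
    if to_id in my_node_ids:
        return
    counts = leaderboard["nodeMessageCounts"]
    counts[from_id] = counts.get(from_id, 0) + 1
    # running max: update the record holder only when the count exceeds it
    if counts[from_id] > leaderboard["mostMessages"]["value"]:
        leaderboard["mostMessages"] = {"nodeID": from_id, "value": counts[from_id]}

count_message(42, 999)  # counted: other destination
count_message(42, 999)  # counted
count_message(7, 111)   # skipped: addressed to the bot
print(leaderboard["mostMessages"])  # {'nodeID': 42, 'value': 2}
```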
@@ -1554,10 +1564,11 @@ def consumeMetadata(packet, rxNode=0, channel=-1):
# Track highest altitude 🚀 (also log if over highfly_altitude threshold)
if position_data.get('altitude') is not None:
altitude = position_data['altitude']
if altitude > meshLeaderboard['highestAltitude']['value']:
meshLeaderboard['highestAltitude'] = {'nodeID': nodeID, 'value': altitude, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: 🚀 New altitude record: {altitude}m from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
if altitude > highfly_altitude:
if altitude > meshLeaderboard['highestAltitude']['value']:
meshLeaderboard['highestAltitude'] = {'nodeID': nodeID, 'value': altitude, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: 🚀 New altitude record: {altitude}m from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
# Track tallest node 🪜 (under the highfly_altitude limit by 100m)
if position_data.get('altitude') is not None:
altitude = position_data['altitude']
@@ -1579,25 +1590,26 @@ def consumeMetadata(packet, rxNode=0, channel=-1):
if current_time - last_alert_time < 1800:
return False # less than 30 minutes since last alert
positionMetadata[nodeID]['lastHighFlyAlert'] = current_time
if highfly_check_openskynetwork:
# check get_openskynetwork to see if the node is an aircraft
if 'latitude' in position_data and 'longitude' in position_data:
flight_info = get_openskynetwork(position_data.get('latitude', 0), position_data.get('longitude', 0))
# Only show plane if within altitude
if (
flight_info
and NO_ALERTS not in flight_info
and ERROR_FETCHING_DATA not in flight_info
and isinstance(flight_info, dict)
and 'altitude' in flight_info
):
plane_alt = flight_info['altitude']
node_alt = position_data.get('altitude', 0)
if abs(node_alt - plane_alt) <= 1000: # within 1000 meters
msg += f"\n✈️Detected near:\n{flight_info}"
send_message(msg, highfly_channel, 0, highfly_interface)
try:
if highfly_check_openskynetwork:
if 'latitude' in position_data and 'longitude' in position_data and 'altitude' in position_data:
flight_info = get_openskynetwork(
position_data.get('latitude', 0),
position_data.get('longitude', 0),
node_altitude=position_data.get('altitude', 0)
)
if flight_info and isinstance(flight_info, dict):
msg += (
f"\n✈️Detected near:\n"
f"{flight_info.get('callsign', 'N/A')} "
f"Alt:{int(flight_info.get('geo_altitude', 0)) if flight_info.get('geo_altitude') else 'N/A'}m "
f"Vel:{int(flight_info.get('velocity', 0)) if flight_info.get('velocity') else 'N/A'}m/s "
f"Heading:{int(flight_info.get('true_track', 0)) if flight_info.get('true_track') else 'N/A'}°\n"
f"From:{flight_info.get('origin_country', 'N/A')}"
)
send_message(msg, highfly_channel, 0, highfly_interface)
except Exception as e:
logger.debug(f"System: Highfly: error: {e}")
# Keep the positionMetadata dictionary at a maximum size
if len(positionMetadata) > MAX_SEEN_NODES:
# Remove the oldest entry
@@ -1986,7 +1998,8 @@ def get_sysinfo(nodeID=0, deviceID=1):
return sysinfo
async def handleSignalWatcher():
global lastHamLibAlert
from modules.radio import signalWatcher
from modules.settings import sigWatchBroadcastCh, sigWatchBroadcastInterface, lastHamLibAlert
# monitor rigctld for signal strength and frequency
while True:
msg = await signalWatcher()
@@ -2212,17 +2225,40 @@ async def handleSentinel(deviceID):
handleSentinel_loop = 0 # Reset if nothing detected
async def process_vox_queue():
# process the voxMsgQueue
global voxMsgQueue
items_to_process = voxMsgQueue[:]
voxMsgQueue.clear()
if len(items_to_process) > 0:
logger.debug(f"System: Processing {len(items_to_process)} items in voxMsgQueue")
for item in items_to_process:
message = item
for channel in sigWatchBroadcastCh:
if antiSpam and int(channel) != publicChannel:
send_message(message, int(channel), 0, sigWatchBroadcastInterface)
# process the voxMsgQueue
from modules.settings import sigWatchBroadcastCh, sigWatchBroadcastInterface, voxMsgQueue
items_to_process = voxMsgQueue[:]
voxMsgQueue.clear()
if len(items_to_process) > 0:
logger.debug(f"System: Processing {len(items_to_process)} items in voxMsgQueue")
for item in items_to_process:
message = item
for channel in sigWatchBroadcastCh:
if antiSpam and int(channel) != publicChannel:
send_message(message, int(channel), 0, sigWatchBroadcastInterface)
async def handleTTS():
from modules.radio import generate_and_play_tts, available_voices
from modules.settings import ttsnoWelcome, tts_read_queue
logger.debug("System: Handle TTS started")
if not ttsnoWelcome:
logger.debug("System: Playing TTS welcome message; to disable, set 'ttsnoWelcome = True' in config.ini")
await generate_and_play_tts("Hey it's Cheerpy! Thanks for using Meshing-Around on Meshtastic!", available_voices[0])
try:
while True:
if tts_read_queue:
tts_read = tts_read_queue.pop(0)
voice = available_voices[0]
# ensure the tts_read ends with a punctuation mark
if not tts_read.endswith(('.', '!', '?')):
tts_read += '.'
try:
await generate_and_play_tts(tts_read, voice)
except Exception as e:
logger.error(f"System: TTShandler error: {e}")
await asyncio.sleep(1)
except Exception as e:
logger.critical(f"System: handleTTS crashed: {e}")
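The TTS loop is a pop-one-per-tick queue consumer; a runnable sketch with the playback call stubbed out (the real loop sleeps one second per tick):

```python
import asyncio

tts_read_queue = []  # producers append strings; the consumer drains one per tick
spoken = []

async def speak(text):
    spoken.append(text)  # stand-in for generate_and_play_tts

async def tts_consumer(ticks=3):
    # Pop one message per tick, adding terminal punctuation so TTS phrasing sounds natural
    for _ in range(ticks):
        if tts_read_queue:
            text = tts_read_queue.pop(0)
            if not text.endswith(('.', '!', '?')):
                text += '.'
            await speak(text)
        await asyncio.sleep(0)  # yield to the event loop; real code sleeps 1s

tts_read_queue.extend(["hello mesh", "alert!"])
asyncio.run(tts_consumer())
print(spoken)  # ['hello mesh.', 'alert!']
```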
async def watchdog():
global localTelemetryData, retry_int1, retry_int2, retry_int3, retry_int4, retry_int5, retry_int6, retry_int7, retry_int8, retry_int9
@@ -2256,7 +2292,7 @@ async def watchdog():
handleMultiPing(0, i)
if wxAlertBroadcastEnabled or emergencyAlertBrodcastEnabled or volcanoAlertBroadcastEnabled or checklist_enabled:
if usAlerts or checklist_enabled or enableDEalerts:
handleAlertBroadcast(i)
intData = displayNodeTelemetry(0, i)
@@ -28,7 +28,7 @@ if os.path.isfile(checkall_path):
# List of module names to exclude
exclude = ['test_bot','udp', 'system', 'log', 'gpio', 'web','test_xtide',]
exclude = ['test_bot','udp', 'system', 'log', 'gpio', 'web',]
available_modules = [
m.name for m in pkgutil.iter_modules([modules_path])
if m.name not in exclude]
@@ -77,6 +77,13 @@ class TestBot(unittest.TestCase):
self.assertTrue(result)
self.assertIsInstance(result1, str)
def test_initialize_inventory_database(self):
from inventory import initialize_inventory_database, process_inventory_command
result = initialize_inventory_database()
result1 = process_inventory_command(0, 'inventory', name="none")
self.assertTrue(result)
self.assertIsInstance(result1, str)
def test_init_news_sources(self):
from filemon import initNewsSources
result = initNewsSources()
@@ -87,11 +94,6 @@ class TestBot(unittest.TestCase):
alerts = get_nina_alerts()
self.assertIsInstance(alerts, str)
def test_llmTool_get_google(self):
from llm import llmTool_get_google
result = llmTool_get_google("What is 2+2?", 1)
self.assertIsInstance(result, list)
def test_send_ollama_query(self):
from llm import send_ollama_query
response = send_ollama_query("Hello, Ollama!")
@@ -150,10 +152,13 @@ class TestBot(unittest.TestCase):
result = initalize_qrz_database()
self.assertTrue(result)
def test_get_hamlib(self):
from radio import get_hamlib
frequency = get_hamlib('f')
self.assertIsInstance(frequency, str)
def test_import_radio_module(self):
try:
import radio
#frequency = get_hamlib('f')
#self.assertIsInstance(frequency, str)
except Exception as e:
self.fail(f"Importing radio module failed: {e}")
def test_get_rss_feed(self):
from rss import get_rss_feed
@@ -0,0 +1,78 @@
# modules/test_checklist.py
import os
import sys
# Add the parent directory to sys.path to allow module imports
parent_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
sys.path.insert(0, parent_path)
import unittest
from unittest.mock import patch
from modules.checklist import process_checklist_command, initialize_checklist_database
import time

class TestProcessChecklistCommand(unittest.TestCase):
    def setUp(self):
        # Always start with a fresh DB
        initialize_checklist_database()
        # Patch settings for consistent test behavior; the targets must name
        # the module the code under test reads them from (modules.checklist)
        patcher1 = patch('modules.checklist.reverse_in_out', False)
        patcher2 = patch('modules.checklist.bbs_ban_list', [])
        patcher3 = patch('modules.checklist.bbs_admin_list', ['999'])
        self.mock_reverse = patcher1.start()
        self.mock_ban = patcher2.start()
        self.mock_admin = patcher3.start()
        self.addCleanup(patcher1.stop)
        self.addCleanup(patcher2.stop)
        self.addCleanup(patcher3.stop)

    def test_checkin_command(self):
        result = process_checklist_command(1, "checkin test note", name="TESTUSER", location=["loc"])
        self.assertIn("Checked✅In: TESTUSER", result)

    def test_checkout_command(self):
        # First checkin
        process_checklist_command(1, "checkin test note", name="TESTUSER", location=["loc"])
        # Then checkout
        result = process_checklist_command(1, "checkout", name="TESTUSER", location=["loc"])
        self.assertIn("Checked⌛️Out: TESTUSER", result)

    def test_checkin_with_interval(self):
        result = process_checklist_command(1, "checkin 15 hiking", name="TESTUSER", location=["loc"])
        self.assertIn("monitoring every 15min", result)

    def test_checkout_all(self):
        # Multiple checkins
        process_checklist_command(1, "checkin note1", name="TESTUSER", location=["loc"])
        process_checklist_command(1, "checkin note2", name="TESTUSER", location=["loc"])
        result = process_checklist_command(1, "checkout all", name="TESTUSER", location=["loc"])
        self.assertIn("Checked out", result)
        self.assertIn("check-ins for TESTUSER", result)

    def test_checklistapprove_nonadmin(self):
        process_checklist_command(1, "checkin foo", name="FOO", location=["loc"])
        result = process_checklist_command(2, "checklistapprove 1", name="NOTADMIN", location=["loc"])
        self.assertNotIn("approved", result)

    def test_checklistdeny_nonadmin(self):
        process_checklist_command(1, "checkin foo", name="FOO", location=["loc"])
        result = process_checklist_command(2, "checklistdeny 1", name="NOTADMIN", location=["loc"])
        self.assertNotIn("denied", result)

    def test_help_command(self):
        result = process_checklist_command(1, "checklist ?", name="TESTUSER", location=["loc"])
        self.assertIn("Command: checklist", result)

    def test_checklist_listing(self):
        process_checklist_command(1, "checkin foo", name="FOO", location=["loc"])
        result = process_checklist_command(1, "checklist", name="FOO", location=["loc"])
        self.assertIsInstance(result, str)
        self.assertIn("checked-In", result)

    def test_invalid_command(self):
        result = process_checklist_command(1, "foobar", name="FOO", location=["loc"])
        self.assertEqual(result, "Invalid command.")

if __name__ == "__main__":
    unittest.main()
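A reminder of how `unittest.mock.patch` targets resolve: the string must name the module where the value is looked up at call time, not where it was defined. A standalone sketch of the mechanics (the `demo_settings` module here is made up for illustration, not project code):

```python
import sys
import types
import unittest
from unittest.mock import patch

# Build a throwaway module so the example is self-contained.
demo = types.ModuleType("demo_settings")
demo.reverse_in_out = True
sys.modules["demo_settings"] = demo

class TestPatchTarget(unittest.TestCase):
    def test_patch_resolves_by_import_path(self):
        # The patch target must match the path the code under test uses.
        with patch("demo_settings.reverse_in_out", False):
            self.assertFalse(sys.modules["demo_settings"].reverse_in_out)
        # Outside the context manager, the original value is restored.
        self.assertTrue(demo.reverse_in_out)
```

This is why the `setUp` patches above target `modules.checklist.*`: patching a different path would leave the real settings untouched.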
@@ -1,135 +0,0 @@
#!/usr/bin/env python3
"""
Test script for xtide module
Tests both NOAA (disabled) and tidepredict (when available) tide predictions
"""
import sys
import os

# Add parent directory to path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

def test_xtide_import():
    """Test that xtide module can be imported"""
    print("Testing xtide module import...")
    try:
        from modules import xtide
        print("✓ xtide module imported successfully")
        print(f"  - tidepredict available: {xtide.TIDEPREDICT_AVAILABLE}")
        return True
    except Exception as e:
        print(f"✗ Failed to import xtide: {e}")
        return False

def test_locationdata_import():
    """Test that modified locationdata can be imported"""
    print("\nTesting locationdata module import...")
    try:
        from modules import locationdata
        print("✓ locationdata module imported successfully")
        return True
    except Exception as e:
        print(f"✗ Failed to import locationdata: {e}")
        return False

def test_settings():
    """Test that settings has useTidePredict option"""
    print("\nTesting settings configuration...")
    try:
        from modules import settings as my_settings
        has_setting = hasattr(my_settings, 'useTidePredict')
        print("✓ settings module loaded")
        print(f"  - useTidePredict setting available: {has_setting}")
        if has_setting:
            print(f"  - useTidePredict value: {my_settings.useTidePredict}")
        return True
    except Exception as e:
        print(f"✗ Failed to load settings: {e}")
        return False

def test_noaa_fallback():
    """Test NOAA API fallback (without enabling tidepredict)"""
    print("\nTesting NOAA API (default mode)...")
    try:
        from modules import locationdata
        from modules import settings as my_settings
        # Test with Seattle coordinates (should use NOAA)
        lat = 47.6062
        lon = -122.3321
        print(f"  Testing with Seattle coordinates: {lat}, {lon}")
        print(f"  useTidePredict = {my_settings.useTidePredict}")
        # Note: This will fail if we can't reach NOAA, but that's expected
        result = locationdata.get_NOAAtide(str(lat), str(lon))
        if result and "Error" not in result:
            print("✓ NOAA API returned data")
            print(f"  First 100 chars: {result[:100]}")
            return True
        else:
            print(f"⚠ NOAA API returned: {result[:100]}")
            return True  # Still pass as network might not be available
    except Exception as e:
        print(f"⚠ NOAA test encountered expected issue: {e}")
        return True  # Expected in test environment

def test_parse_coords():
    """Test coordinate parsing function"""
    print("\nTesting coordinate parsing...")
    try:
        from modules.xtide import parse_station_coords
        test_cases = [
            (("43-36S", "172-43E"), (-43.6, 172.71666666666667)),
            (("02-45N", "072-21E"), (2.75, 72.35)),
            (("02-45S", "072-21W"), (-2.75, -72.35)),
        ]
        all_passed = True
        for (lat_str, lon_str), (expected_lat, expected_lon) in test_cases:
            result_lat, result_lon = parse_station_coords(lat_str, lon_str)
            if abs(result_lat - expected_lat) < 0.01 and abs(result_lon - expected_lon) < 0.01:
                print(f"✓ {lat_str}, {lon_str} -> {result_lat:.2f}, {result_lon:.2f}")
            else:
                print(f"✗ {lat_str}, {lon_str} -> expected {expected_lat}, {expected_lon}, got {result_lat}, {result_lon}")
                all_passed = False
        return all_passed
    except Exception as e:
        print(f"✗ Coordinate parsing test failed: {e}")
        import traceback
        traceback.print_exc()
        return False

def main():
    """Run all tests"""
    print("=" * 60)
    print("xtide Module Test Suite")
    print("=" * 60)
    results = []
    results.append(("Import xtide", test_xtide_import()))
    results.append(("Import locationdata", test_locationdata_import()))
    results.append(("Settings configuration", test_settings()))
    results.append(("Parse coordinates", test_parse_coords()))
    results.append(("NOAA fallback", test_noaa_fallback()))
    print("\n" + "=" * 60)
    print("Test Results Summary")
    print("=" * 60)
    passed = sum(1 for _, result in results if result)
    total = len(results)
    for test_name, result in results:
        status = "✓ PASS" if result else "✗ FAIL"
        print(f"{status}: {test_name}")
    print(f"\n{passed}/{total} tests passed")
    return passed == total

if __name__ == "__main__":
    success = main()
    sys.exit(0 if success else 1)
@@ -1,129 +0,0 @@
# xtide Module - Global Tide Predictions
This module provides global tide prediction capabilities using the [tidepredict](https://github.com/windcrusader/tidepredict) library, which uses the University of Hawaii's Research Quality Dataset for worldwide tide station coverage.
## Features
- Global tide predictions (not limited to US locations like NOAA)
- Offline predictions once station data is initialized
- Automatic selection of nearest tide station
- Compatible with existing tide command interface
## Installation
1. Install tidepredict library:
Note: this takes roughly 300–500 MB of disk space.
```bash
pip install tidepredict
```
Note: if you see a warning about externally managed system packages on Debian-based systems, the override to install it anyway is:
```bash
pip install tidepredict --break-system-packages
```
2. Enable in `config.ini`:
```ini
[location]
useTidePredict = True
```
## First-Time Setup
On first use, tidepredict needs to download station data from the University of Hawaii FTP server. This requires internet access and happens automatically when you:
1. Run the tide command for the first time with `useTidePredict = True`
2. Or manually initialize with:
```bash
python3 -m tidepredict -l <location> -genharm
```
The station data is cached locally in `~/.tidepredict/` for offline use afterward.
No other downloads happen automatically; after setup, predictions work offline.
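A quick way to check whether first-time setup has already run is to look for a populated cache directory. A minimal sketch, assuming the `~/.tidepredict/` path described above (the function name is illustrative, not part of the module):

```python
from pathlib import Path
from typing import Optional

def tidepredict_cache_ready(home: Optional[str] = None) -> bool:
    """Return True if the tidepredict station cache appears populated."""
    base = Path(home) if home else Path.home()
    cache = base / ".tidepredict"
    # A directory that exists but is empty still needs initialization.
    return cache.is_dir() and any(cache.iterdir())
```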
## Usage
Once enabled, the existing `tide` command will automatically use tidepredict for global locations:
```
tide
```
The module will:
1. Find the nearest tide station to your GPS coordinates
2. Load harmonic constituents for that station
3. Calculate tide predictions for today
4. Format output compatible with mesh display
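Step 1 (nearest-station selection) uses a simple degree-space distance rather than a true great-circle calculation; it is good enough for picking a nearby station. A self-contained sketch of that idea (the station list here is made up for illustration):

```python
def nearest_station(lat, lon, stations):
    """Pick the station minimizing a crude degree-space distance.

    `stations` is a list of (name, lat, lon) tuples. This mirrors the
    module's approach: adequate for choosing a nearby station, not for
    measuring real-world distances.
    """
    def dist2(st):
        _, s_lat, s_lon = st
        return (lat - s_lat) ** 2 + (lon - s_lon) ** 2
    return min(stations, key=dist2)

# Hypothetical stations for illustration only.
stations = [
    ("Seattle", 47.60, -122.33),
    ("Sydney", -33.86, 151.21),
]
```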
## Configuration
### config.ini Options
```ini
[location]
# Enable global tide predictions using tidepredict
useTidePredict = True
# Standard location settings still apply
lat = 48.50
lon = -123.0
useMetric = False
```
## Fallback Behavior
If tidepredict is not available or encounters errors, the module will automatically fall back to the NOAA API for US locations.
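The fallback amounts to "try the global predictor; on any failure, use NOAA." A generic sketch with stand-in callables (the real module wires in its own `get_tide_predictions` and the NOAA lookup; the helper and stub names here are hypothetical):

```python
def tide_with_fallback(primary, fallback, lat, lon):
    """Call primary(lat, lon); on failure or empty result, fall back."""
    try:
        result = primary(lat, lon)
        if result:
            return result
    except Exception:
        pass  # fall through to the fallback source
    return fallback(lat, lon)

# Stand-ins for demonstration:
def broken_predictor(lat, lon):
    raise RuntimeError("tidepredict unavailable")

def noaa_stub(lat, lon):
    return f"NOAA tides for {lat},{lon}"
```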
## Limitations
- First-time setup requires internet access to download station database
- Station coverage depends on University of Hawaii's dataset
- Predictions may be less accurate for locations far from tide stations
## Troubleshooting
### "Station database not initialized" error
This means the station data hasn't been downloaded yet. Ensure internet access and:
```bash
# Test station download
python3 -m tidepredict -l Sydney
# Or manually run initialization
python3 -c "from tidepredict import process_station_list; process_station_list.create_station_dataframe()"
```
### "No tide station found nearby"
The module couldn't find a nearby station. This may happen if:
- You're in a location without nearby tide monitoring stations
- The station database hasn't been initialized
- Network issues prevented loading the station list
**Tide Station Map:** [https://uhslc.soest.hawaii.edu/network/](https://uhslc.soest.hawaii.edu/network/)
- Click on "Tide Gauges"
- Find yourself on the map
- Locate the closest gauge and note its name (typically the city name)
To manually download data for a station, first locate the needed station name:
- `python -m tidepredict -l "Port Angeles"` finds a station
- `python -m tidepredict -l "Port Angeles" -genharm` downloads that datafile
## Data Source
Tide predictions are based on harmonic analysis of historical tide data from:
- University of Hawaii Sea Level Center (UHSLC)
- Research Quality Dataset
- Global coverage with 600+ stations
## References
- [tidepredict GitHub](https://github.com/windcrusader/tidepredict)
- [UHSLC Data](https://uhslc.soest.hawaii.edu/)
- [pytides](https://github.com/sam-cox/pytides) - Underlying tide calculation library
@@ -1,202 +0,0 @@
# xtide.py - Global tide prediction using tidepredict library
# K7MHI Kelly Keeton 2025
import json
from datetime import datetime, timedelta
from modules.log import logger
import modules.settings as my_settings

try:
    from tidepredict import processdata, process_station_list, constants, timefunc
    from tidepredict.tide import Tide
    import pandas as pd
    TIDEPREDICT_AVAILABLE = True
except ImportError:
    TIDEPREDICT_AVAILABLE = False
    logger.error("xtide: tidepredict module not installed. Install with: pip install tidepredict")

def get_nearest_station(lat, lon):
    """
    Find the nearest tide station to the given lat/lon coordinates.
    Returns a tuple of (station_code, station_name, country) or None if not found.
    """
    if not TIDEPREDICT_AVAILABLE:
        return None
    try:
        # Read the station list
        try:
            stations = pd.read_csv(constants.STATIONFILE)
        except FileNotFoundError:
            # If station file doesn't exist, create it (requires network)
            logger.info("xtide: Creating station database from online source (requires network)")
            try:
                stations = process_station_list.create_station_dataframe()
            except Exception as net_error:
                logger.error(f"xtide: Failed to download station database: {net_error}")
                return None
        if stations.empty:
            logger.error("xtide: No stations found in database")
            return None

        # Calculate distance to each station
        # Using simple haversine-like calculation
        def calc_distance(row):
            try:
                # Parse lat/lon from the format like "43-36S", "172-43E"
                station_lat, station_lon = parse_station_coords(row['Lat'], row['Lon'])
                # Simple distance calculation (not precise but good enough)
                dlat = lat - station_lat
                dlon = lon - station_lon
                return (dlat**2 + dlon**2)**0.5
            except Exception:
                return float('inf')

        stations['distance'] = stations.apply(calc_distance, axis=1)
        # Find the nearest station
        nearest = stations.loc[stations['distance'].idxmin()]
        if nearest['distance'] > 10:  # More than ~10 degrees away, might be too far
            logger.warning(f"xtide: Nearest station is {nearest['distance']:.1f}° away at {nearest['loc_name']}")
        station_code = "h" + nearest['stat_idx'].lower()
        logger.debug(f"xtide: Found nearest station: {nearest['loc_name']} ({station_code}) at {nearest['distance']:.2f}° away")
        return station_code, nearest['loc_name'], nearest['country']
    except Exception as e:
        logger.error(f"xtide: Error finding nearest station: {e}")
        return None

def parse_station_coords(lat_str, lon_str):
    """
    Parse station coordinates from format like "43-36S", "172-43E"
    Returns tuple of (latitude, longitude) as floats
    """
    try:
        # Parse latitude
        lat_parts = lat_str.split('-')
        lat_deg = float(lat_parts[0])
        lat_min = float(lat_parts[1][:-1])  # Remove N/S
        lat_dir = lat_parts[1][-1]  # Get N/S
        lat_val = lat_deg + lat_min / 60.0
        if lat_dir == 'S':
            lat_val = -lat_val
        # Parse longitude
        lon_parts = lon_str.split('-')
        lon_deg = float(lon_parts[0])
        lon_min = float(lon_parts[1][:-1])  # Remove E/W
        lon_dir = lon_parts[1][-1]  # Get E/W
        lon_val = lon_deg + lon_min / 60.0
        if lon_dir == 'W':
            lon_val = -lon_val
        return lat_val, lon_val
    except Exception as e:
        logger.debug(f"xtide: Error parsing coordinates {lat_str}, {lon_str}: {e}")
        return 0.0, 0.0

def get_tide_predictions(lat=0, lon=0, days=1):
    """
    Get tide predictions for the given location using tidepredict library.
    Parameters:
    - lat: Latitude
    - lon: Longitude
    - days: Number of days to predict (default: 1)
    Returns:
    - Formatted string with tide predictions or error message
    """
    if not TIDEPREDICT_AVAILABLE:
        return "module not installed, see logs for more ⚓️"
    if float(lat) == 0 and float(lon) == 0:
        return "No GPS data for tide prediction"
    try:
        # Find nearest station
        station_info = get_nearest_station(float(lat), float(lon))
        if not station_info:
            return "No tide station found nearby. Network may be required to download station data."
        station_code, station_name, station_country = station_info
        # Load station data
        station_dict, harmfileloc = process_station_list.read_station_info_file()
        # Check if harmonic data exists for this station
        if station_code not in station_dict:
            logger.warning(f"xtide: No harmonic data. python -m tidepredict -l \"{station_name}\" -genharm")
            return f"Tide data not available for {station_name}. Station database may need initialization."
        # Reconstruct tide model
        tide = processdata.reconstruct_tide_model(station_dict, station_code)
        if tide is None:
            return f"Tide model unavailable for {station_name}"
        # Set up time range (today only by default)
        now = datetime.now()
        start_time = now.strftime("%Y-%m-%d 00:00")
        end_time = (now + timedelta(days=days)).strftime("%Y-%m-%d 00:00")
        # Create time object
        timeobj = timefunc.Tidetime(
            st_time=start_time,
            en_time=end_time,
            station_tz=station_dict[station_code].get('tzone', 'UTC')
        )
        # Get predictions
        predictions = processdata.predict_plain(tide, station_dict[station_code], 't', timeobj)
        # Format output for mesh
        lines = predictions.strip().split('\n')
        if len(lines) > 2:
            # Skip the header lines and format for mesh display
            result = f"Tide: {station_name}\n"
            tide_lines = lines[2:]  # Skip first 2 header lines
            # Format each tide prediction
            for line in tide_lines[:8]:  # Limit to 8 entries
                parts = line.split()
                if len(parts) >= 4:
                    date_str = parts[0]
                    time_str = parts[1]
                    height = parts[3]
                    tide_type = ' '.join(parts[4:])
                    # Convert to 12-hour format if not using zulu time
                    if not my_settings.zuluTime:
                        try:
                            time_obj = datetime.strptime(time_str, "%H%M")
                            hour = time_obj.hour
                            minute = time_obj.minute
                            if hour >= 12:
                                time_str = f"{hour - 12 if hour > 12 else 12}:{minute:02d} PM"
                            else:
                                time_str = f"{hour if hour > 0 else 12}:{minute:02d} AM"
                        except ValueError:
                            pass
                    result += f"{tide_type} {time_str}, {height}\n"
            return result.strip()
        else:
            return predictions
    except FileNotFoundError as e:
        logger.error(f"xtide: Station data file not found: {e}")
        return "Tide station database not initialized. Network access required for first-time setup."
    except Exception as e:
        logger.error(f"xtide: Error getting tide predictions: {e}")
        return f"Error getting tide data: {str(e)}"

def is_enabled():
    """Check if xtide/tidepredict is enabled in config"""
    return getattr(my_settings, 'useTidePredict', False) and TIDEPREDICT_AVAILABLE
@@ -65,7 +65,11 @@ def handle_cmd(message, message_from_id, deviceID):
def handle_ping(message_from_id, deviceID, message, hop, snr, rssi, isDM, channel_number):
global multiPing
if "?" in message and isDM:
return message.split("?")[0].title() + " command returns SNR and RSSI, or hopcount from your message. Try adding e.g. @place or #tag"
pingHelp = "🤖Ping Command Help:\n" \
    "🏓 Send 'ping' or 'ack' or 'test' to get a response.\n" \
    "🏓 Send 'ping <number>' to get multiple pings in DM\n" \
    "🏓 ping @USERID to send a Joke from the bot"
return pingHelp
msg = ""
type = ''
@@ -303,10 +307,21 @@ def onReceive(packet, interface):
    # set the message_from_id
    message_from_id = packet['from']
    # check if the packet has a channel flag use it
    if packet.get('channel'):
        channel_number = packet.get('channel', 0)
    # if message_from_id is not in the seenNodes list add it
    if not any(node.get('nodeID') == message_from_id for node in seenNodes):
        seenNodes.append({'nodeID': message_from_id, 'rxInterface': rxNode, 'channel': channel_number, 'welcome': False, 'first_seen': time.time(), 'lastSeen': time.time()})
    else:
        # update lastSeen time
        for node in seenNodes:
            if node.get('nodeID') == message_from_id:
                node['lastSeen'] = time.time()
                break
    # CHECK with ban_hammer() if the node is banned
    if str(message_from_id) in my_settings.bbs_ban_list or str(message_from_id) in my_settings.autoBanlist:
        logger.warning(f"System: Banned Node {message_from_id} tried to send a message. Ignored. Try adding to node firmware-blocklist")
        return
    # handle TEXT_MESSAGE_APP
    try:
        if 'decoded' in packet and packet['decoded']['portnum'] == 'TEXT_MESSAGE_APP':
@@ -379,7 +394,7 @@ def onReceive(packet, interface):
logger.debug(f"System: Packet HopDebugger: hop_away:{hop_away} hop_limit:{hop_limit} hop_start:{hop_start} calculated_hop_count:{hop_count} final_hop_value:{hop} via_mqtt:{via_mqtt} transport_mechanism:{transport_mechanism} Hostname:{rxNodeHostName}")
# check with stringSafeChecker if the message is safe
if stringSafeCheck(message_string) is False:
if stringSafeCheck(message_string, message_from_id) is False:
logger.warning(f"System: Possibly Unsafe Message from {get_name_from_number(message_from_id, 'long', rxNode)}")
if help_message in message_string or welcome_message in message_string or "CMD?:" in message_string:
@@ -574,6 +589,10 @@ def handle_boot(mesh=True):
    if my_settings.useDMForResponse:
        logger.debug("System: Respond by DM only")
    if my_settings.autoBanEnabled:
        logger.debug(f"System: Auto-Ban Enabled for {my_settings.autoBanThreshold} messages in {my_settings.autoBanTimeframe} seconds")
        load_bbsBanList()
    if my_settings.log_messages_to_file:
        logger.debug("System: Logging Messages to disk")
    if my_settings.syslog_to_file:
@@ -1,22 +1,4 @@
## script/runShell.sh
**Purpose:**
`runShell.sh` is a simple demo shell script for the Mesh Bot project. It demonstrates how to execute shell commands within the project's scripting environment.
**Usage:**
Run this script from the terminal to see a basic example of shell scripting in the project context.
```sh
bash script/runShell.sh
```
**What it does:**
- Changes the working directory to the script's location.
- Prints the current directory path and a message indicating the script is running.
- Serves as a template for creating additional shell scripts or automating tasks related to the project.
**Note:**
You can modify this script to add more shell commands or automation steps as needed for your workflow.
## script/runShell.sh
@@ -57,4 +39,64 @@ bash script/sysEnv.sh
- Designed to work on Linux systems, with special handling for Raspberry Pi hardware.
**Note:**
You can expand or modify this script to include additional telemetry or environment checks as needed for your deployment.
## script/configMerge.py
**Purpose:**
`configMerge.py` is a Python script that merges your user configuration (`config.ini`) with the default template (`config.template`). This helps you keep your settings up to date when the default configuration changes, while preserving your customizations.
**Usage:**
Run this script from the project root or the `script/` directory:
```sh
python3 script/configMerge.py
```
**What it does:**
- Backs up your current `config.ini` to `config.bak`.
- Merges new or updated settings from `config.template` into your `config.ini`.
- Saves the merged result as `config_new.ini`.
- Shows a summary of changes between your config and the merged version.
**Note:**
After reviewing the changes, you can replace your `config.ini` with the merged version:
```sh
cp config_new.ini config.ini
```
This script is useful for safely updating your configuration when new options are added upstream.
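The merge step can be sketched with the standard library's `configparser`: read the template first, then overlay the user file, so user values win while new template keys survive. A simplified illustration under those assumptions, not the script itself:

```python
import configparser

def merge_config(template_text: str, user_text: str) -> configparser.ConfigParser:
    """Overlay user settings on top of template defaults."""
    merged = configparser.ConfigParser()
    merged.read_string(template_text)  # new defaults from upstream
    merged.read_string(user_text)      # user values override matching keys
    return merged
```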
## script/addFav.py
**Purpose:**
`addFav.py` is a Python script to help manage and add favorite nodes to all interfaces using data from `config.ini`. It supports both bot and roof (client_base) node workflows, making it easier to retain DM keys and manage node lists across devices.
**Usage:**
Run this script from the main repo directory:
```sh
python3 script/addFav.py
```
- To print the contents of `roofNodeList.pkl` and exit, use:
```sh
# note it is not production ready
python3 script/addFav.py -p
```
**What it does:**
- Interactively asks if you are running on a roof (client_base) node or a bot.
- On the bot:
- Compiles a list of favorite nodes and saves it to `roofNodeList.pkl` for later use on the roof node.
- On the roof node:
- Loads the node list from `roofNodeList.pkl`.
- Shows which favorite nodes will be added and asks for confirmation.
- Adds favorite nodes to the appropriate devices, handling API rate limits.
- Logs actions and errors for troubleshooting.
**Note:**
- Always run this script from the main repo directory to ensure module imports work.
- After running on the bot, copy `roofNodeList.pkl` to the roof node and rerun the script there to complete the process.
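The `roofNodeList.pkl` handoff described above is a straightforward pickle round trip: save the node list on the bot, copy the file, and load it on the roof node. A minimal sketch (the file name comes from the text above; the node fields are illustrative):

```python
import pickle

def save_node_list(nodes, path="roofNodeList.pkl"):
    """Serialize the favorite-node list for transfer to the roof node."""
    with open(path, "wb") as f:
        pickle.dump(nodes, f)

def load_node_list(path="roofNodeList.pkl"):
    """Load the favorite-node list saved on the bot."""
    with open(path, "rb") as f:
        return pickle.load(f)
```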
@@ -28,10 +28,18 @@ fi
# Fetch latest changes from GitHub
echo "Fetching latest changes from GitHub..."
if ! git fetch origin; then
echo "Error: Failed to fetch from GitHub, check your network connection."
echo "Error: Failed to fetch from GitHub. Check your network connection; this script expects to be run inside a git repository."
exit 1
fi
# Check for detached HEAD state
if [[ $(git symbolic-ref --short -q HEAD) == "" ]]; then
echo "WARNING: You are in a detached HEAD state."
echo "You may not be on a branch. To return to the main branch, run:"
echo " git checkout main"
echo "Proceed with caution; changes may not be saved to a branch."
fi
# git pull with rebase to avoid unnecessary merge commits
echo "Pulling latest changes from GitHub..."
if ! git pull origin main --rebase; then
@@ -63,23 +71,7 @@ if [[ -f "modules/custom_scheduler.py" ]]; then
echo "Including custom_scheduler.py in backup..."
cp modules/custom_scheduler.py data/
fi
# Check config.ini ownership and permissions
if [[ -f "config.ini" ]]; then
owner=$(stat -f "%Su" config.ini)
perms=$(stat -f "%A" config.ini)
echo "config.ini is owned by: $owner"
echo "config.ini permissions: $perms"
if [[ "$owner" == "root" ]]; then
echo "Warning: config.ini is owned by root check out the etc/set-permissions.sh script"
fi
if [[ $(stat -f "%Lp" config.ini) =~ .*[7,6,2]$ ]]; then
echo "Warning: config.ini is world-writable or world-readable! check out the etc/set-permissions.sh script"
fi
echo "Including config.ini in backup..."
cp config.ini data/config.backup
fi
#create the tar.gz backup
tar -czf "$backup_file" "$path2backup"
if [ $? -ne 0 ]; then