Compare commits

..

6 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| SpudGunMan | 17bfb8ec3e | Update xtide.md | 2025-10-29 11:56:24 -07:00 |
| SpudGunMan | 0cfe4a39ed | refactor | 2025-10-28 22:14:34 -07:00 |
| copilot-swe-agent[bot] | fc5476b5dd | Update documentation for global tide prediction support (Co-authored-by: SpudGunMan <12676665+SpudGunMan@users.noreply.github.com>) | 2025-10-29 03:58:26 +00:00 |
| copilot-swe-agent[bot] | f40d5b24f6 | Add comprehensive error handling and documentation for xtide module (Co-authored-by: SpudGunMan <12676665+SpudGunMan@users.noreply.github.com>) | 2025-10-29 03:57:04 +00:00 |
| copilot-swe-agent[bot] | f8782de291 | Add tidepredict support for global tide predictions (Co-authored-by: SpudGunMan <12676665+SpudGunMan@users.noreply.github.com>) | 2025-10-29 03:53:07 +00:00 |
| copilot-swe-agent[bot] | 74f4cd284c | Initial plan | 2025-10-29 03:46:26 +00:00 |
24 changed files with 1004 additions and 942 deletions

View File

@@ -196,27 +196,4 @@ From your project root, run one of the following commands:
- The script requires a Python virtual environment (`venv`) to be present in the project directory.
- If `venv` is missing, the script will exit with an error message.
- Always provide an argument (`mesh`, `pong`, `html`, `html5`, or `add`) to specify what you want to launch.
## Troubleshooting
### Permissions Issues
If you encounter errors related to file or directory permissions (e.g., "Permission denied" or services failing to start):
- Ensure you are running installation scripts with sufficient privileges (use `sudo` if needed).
- The `logs`, `data`, and `config.ini` files must be owned by the user running the bot (often `meshbot` or your current user).
- You can manually reset permissions using the provided script:
```sh
sudo bash etc/set-permissions.sh meshbot
```
- If you moved the project directory, re-run the permissions script to update ownership.
- For systemd service issues, check logs with:
```sh
sudo journalctl -u mesh_bot.service
```
If problems persist, double-check that the user specified in your service files matches the owner of the project files and directories.
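As a quick way to confirm the ownership requirement above, a small helper like this (hypothetical, not part of the repo) can compare a file's owner against the expected service user before resorting to the permissions script:

```shell
# Hypothetical helper: succeed only when a path is owned by the expected
# user (e.g. meshbot). Uses GNU stat, as found on Raspberry Pi OS / Debian.
check_owner() {
  local path="$1" expected="$2" actual
  actual=$(stat -c '%U' "$path") || return 2
  [ "$actual" = "$expected" ]
}

# Example: verify config.ini before starting the service
# check_owner config.ini meshbot || sudo bash etc/set-permissions.sh meshbot
```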

View File

@@ -40,12 +40,11 @@ Mesh Bot is a feature-rich Python bot designed to enhance your [Meshtastic](http
- **New Node Greetings**: Automatically greet new nodes via text.
### Interactive AI and Data Lookup
- **Weather, Earthquake, River, and Tide Data**: Get local alerts and info from NOAA/USGS; uses Open-Meteo for areas outside NOAA coverage.
- **Wikipedia Search**: Retrieve summaries from Wikipedia and Kiwix
- **Weather, Earthquake, River, and Tide Data**: Get local alerts and info from NOAA/USGS; uses Open-Meteo for areas outside NOAA coverage. Global tide predictions available via tidepredict library for worldwide locations.
- **Wikipedia Search**: Retrieve summaries from Wikipedia.
- **OpenWebUI, Ollama LLM Integration**: Query the [Ollama](https://github.com/ollama/ollama/tree/main/docs) AI for advanced responses. Supports RAG (Retrieval Augmented Generation) with Wikipedia/Kiwix context and [OpenWebUI](https://github.com/open-webui/open-webui) integration for enhanced AI capabilities. [LLM Readme](modules/llm.md)
- **Satellite Passes**: Find upcoming satellite passes for your location.
- **GeoMeasuring Tools**: Calculate distances and midpoints using collected GPS data; supports Fox & Hound direction finding.
- **RSS & News Feeds**: Receive news and data from multiple sources directly on the mesh.
### Proximity Alerts
- **Location-Based Alerts**: Get notified when members arrive at a configured latitude/longitude—ideal for campsites, geo-fences, or remote locations. Optionally, trigger scripts, send emails, or automate actions (e.g., change node config, turn on lights, or drop an `alert.txt` file to start a survey or game).
@@ -53,25 +52,12 @@ Mesh Bot is a feature-rich Python bot designed to enhance your [Meshtastic](http
- **High Flying Alerts**: Receive notifications when nodes with high altitude are detected on the mesh.
- **Voice/Command Triggers**: Activate bot functions using keywords or voice commands (see [Voice Commands](#voice-commands-vox) for "Hey Chirpy!" support).
### EAS Alerts
- **FEMA iPAWS/EAS Alerts**: Receive Emergency Alerts from FEMA via API on internet-connected nodes.
- **NOAA EAS Alerts**: Get Emergency Alerts from NOAA via API.
- **USGS Volcano Alerts**: Receive volcano alerts from USGS via API.
- **NINA Alerts (Germany)**: Receive emergency alerts from the xrepository.de feed for Germany.
- **Offline EAS Alerts**: Report EAS alerts over the mesh using external tools, even without internet.
### File Monitor Alerts
- **File Monitoring**: Watch a text file for changes and broadcast updates to the mesh channel.
- **News File Access**: Retrieve the contents of a news file on request; supports multiple news sources or files.
- **Shell Command Access**: Execute shell commands via DM with replay protection (admin only).
#### Radio Frequency Monitoring
- **SNR RF Activity Alerts**: Monitor radio frequencies and receive alerts when high SNR (Signal-to-Noise Ratio) activity is detected.
- **Hamlib Integration**: Use Hamlib (rigctld) to monitor the S meter on a connected radio.
- **Speech-to-Text Broadcasting**: Convert received audio to text using [Vosk](https://alphacephei.com/vosk/models) and broadcast it to the mesh.
- **WSJT-X Integration**: Monitor WSJT-X (FT8, FT4, WSPR, etc.) decode messages and forward them to the mesh network with optional callsign filtering.
- **JS8Call Integration**: Monitor JS8Call messages and forward them to the mesh network with optional callsign filtering.
- **Meshages TTS**: The bot can speak mesh messages aloud using [KittenTTS](https://github.com/KittenML/KittenTTS). Enable this feature to have important alerts and messages read out loud on your device—ideal for hands-free operation or accessibility. See [radio.md](modules/radio.md) for setup instructions.
### Asset Tracking, Check-In/Check-Out, and Inventory Management
Advanced check-in/check-out and asset tracking for people and equipment—ideal for accountability, safety monitoring, and logistics (e.g., Radio-Net, FEMA, trailhead groups). Includes admin approval workflows, GPS location capture, and overdue alerts. The integrated inventory and point-of-sale (POS) system enables item management, sales tracking, cart-based transactions, and daily reporting for swaps, emergency supply management, field operations, and maker-spaces.
@@ -93,8 +79,21 @@ Advanced check-in/check-out and asset tracking for people and equipment—ideal
- **User Feedback**: Users participate via DM; responses are logged for review.
- **Reporting**: Retrieve survey results with `survey report` or `survey report <surveyname>`.
### EAS Alerts
- **FEMA iPAWS/EAS Alerts**: Receive Emergency Alerts from FEMA via API on internet-connected nodes.
- **NOAA EAS Alerts**: Get Emergency Alerts from NOAA via API.
- **USGS Volcano Alerts**: Receive volcano alerts from USGS via API.
- **Offline EAS Alerts**: Report EAS alerts over the mesh using external tools, even without internet.
- **NINA Alerts (Germany)**: Receive emergency alerts from the xrepository.de feed for Germany.
### File Monitor Alerts
- **File Monitoring**: Watch a text file for changes and broadcast updates to the mesh channel.
- **News File Access**: Retrieve the contents of a news file on request; supports multiple news sources or files.
- **Shell Command Access**: Execute shell commands via DM with replay protection (admin only).
### Data Reporting
- **HTML Reports**: Visualize bot traffic and data flows with a built-in HTML generator. See [data reporting](logs/README.md) for details.
- **RSS & News Feeds**: Receive news and data from multiple sources directly on the mesh.
### Robust Message Handling
- **Automatic Message Chunking**: Messages over 160 characters are automatically split to ensure reliable delivery across multiple hops.
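The chunking behavior described above can be sketched roughly like this (an illustrative split function, not the bot's actual implementation, which may split on word boundaries):

```python
def chunk_message(text: str, limit: int = 160) -> list[str]:
    """Split a message into chunks no longer than `limit` characters."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

# A 200-character message becomes two chunks: 160 + 40 characters.
```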

View File

@@ -62,12 +62,6 @@ rssFeedURL = http://www.hackaday.com/rss.xml,http://rss.slashdot.org/Slashdot/sl
rssFeedNames = default,slashdot,mesh
rssMaxItems = 3
rssTruncate = 100
# enable or disable the headline command which uses NewsAPI.org key at https://newsapi.org/register
enableNewsAPI = False
newsAPI_KEY =
newsAPIregion = us
# could also be 'relevancy' or 'popularity' or 'publishedAt'
sort_by = relevancy
# enable or disable the wikipedia search module
wikipedia = True
@@ -209,27 +203,18 @@ useMetric = False
# repeaterList lookup location (rbook / artsci / False)
repeaterLookup = rbook
# Satellite Pass Prediction
# Register for free API https://www.n2yo.com/login/ personal data page at bottom 'Are you developer?'
n2yoAPIKey =
# NORAD list https://www.n2yo.com/satellites/
satList = 25544,7530
# use Open-Meteo API for weather data not NOAA useful for non US locations
UseMeteoWxAPI = False
# NOAA weather forecast days
NOAAforecastDuration = 3
# number of weather alerts to display
NOAAalertCount = 2
# NOAA Weather EAS Alert Broadcast
wxAlertBroadcastEnabled = False
# Enable Ignore any message that includes following word list
ignoreEASenable = False
ignoreEASwords = test,advisory
# Add extra location to the weather alert
enableExtraLocationWx = False
# use Open-Meteo API for weather data not NOAA useful for non US locations
UseMeteoWxAPI = False
# Global Tide Prediction using tidepredict (for non-US locations or offline use)
# When enabled, uses tidepredict library for global tide predictions instead of NOAA API
# tidepredict uses University of Hawaii's Research Quality Dataset for worldwide coverage
useTidePredict = False
# NOAA Coastal Data Enable NOAA Coastal Waters Forecasts and Tide
coastalEnabled = False
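The new `useTidePredict` flag is a standard boolean option; a consumer might read it with `configparser` like so (a minimal sketch, not the bot's actual settings loader):

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[location]
useTidePredict = True
coastalEnabled = False
""")

# getboolean accepts True/False, yes/no, on/off, 1/0
use_tidepredict = cfg.getboolean("location", "useTidePredict", fallback=False)
```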
@@ -245,40 +230,52 @@ coastalForecastDays = 3
# for multiple rivers use comma separated list e.g. 12484500,14105700
riverList =
# USA FEMA IPAWS alerts
ipawsAlertEnabled = True
# NOAA EAS Alert Broadcast
wxAlertBroadcastEnabled = False
# Enable Ignore any message that includes following word list
ignoreEASenable = False
ignoreEASwords = test,advisory
# EAS Alert Broadcast Channels
wxAlertBroadcastCh = 2
# Add extra location to the weather alert
enableExtraLocationWx = False
# Government Alert Broadcast defaults to FEMA IPAWS
eAlertBroadcastEnabled = False
# comma separated list of FIPS codes to trigger local alert. find your FIPS codes at https://en.wikipedia.org/wiki/Federal_Information_Processing_Standard_state_code
myFIPSList = 57,58,53
# find your SAME https://www.weather.gov/nwr/counties comma separated list of SAME code to further refine local alert.
mySAMEList = 053029,053073
# Government Alert Broadcast Channels
eAlertBroadcastCh = 2
# Enable Ignore, headline that includes following word list
ignoreFEMAenable = True
ignoreFEMAwords = test,exercise
# USGS Volcano alerts Enable USGS Volcano Alert Broadcast
volcanoAlertBroadcastEnabled = False
volcanoAlertBroadcastCh = 2
# Enable Ignore any message that includes following word list
ignoreUSGSEnable = False
ignoreUSGSWords = test,advisory
# Use Germany/DE Alert Broadcast Data
# Use DE Alert Broadcast Data
enableDEalerts = False
# comma separated list of regional codes that trigger a local alert.
# find your regional code at https://www.xrepository.de/api/xrepository/urn:de:bund:destatis:bevoelkerungsstatistik:schluessel:rs_2021-07-31/download/Regionalschl_ssel_2021-07-31.json
myRegionalKeysDE = 110000000000,120510000000
# Alerts are sent to the emergency_handler interface and channel; duplicate messages are sent here if set
eAlertBroadcastCh =
# Satellite Pass Prediction
# Register for free API https://www.n2yo.com/login/ personal data page at bottom 'Are you developer?'
n2yoAPIKey =
# NORAD list https://www.n2yo.com/satellites/
satList = 25544,7530
# CheckList Checkin/Checkout
[checklist]
enabled = False
checklist_db = data/checklist.db
reverse_in_out = False
# Auto approve new checklists
auto_approve = True
# Check-in reminder interval is 5min
# Checkin broadcast interface and channel is emergency_handler interface and channel
# Inventory and Point of Sale System
[inventory]
@@ -363,10 +360,6 @@ voxTrapList = chirpy
# allow use of 'weather' and 'joke' commands via VOX
voxEnableCmd = True
# Meshages Text-to-Speech (TTS) for incoming messages and DM
meshagesTTS = False
ttsChannels = 2
# WSJT-X UDP monitoring - listens for decode messages from WSJT-X, FT8/FT4/WSPR etc.
wsjtxDetectionEnabled = False
# UDP address and port where WSJT-X broadcasts (default: 127.0.0.1:2237)

View File

@@ -22,7 +22,7 @@ Environment=SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
Environment=PYTHONUNBUFFERED=1
Restart=on-failure
Type=notify #try simple if any problems
[Install]
WantedBy=default.target
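One caution on the unit above: systemd only recognizes `#` comments at the start of a line, so the trailing `#try simple if any problems` is parsed as part of the `Type=` value. If the service fails to start with a parse error, a safer form is:

```ini
Type=notify
# try Type=simple if any problems
```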

View File

@@ -23,6 +23,7 @@ ExecStop=pkill -f report_generator5.py
Environment=PYTHONUNBUFFERED=1
Restart=on-failure
Type=notify #try simple if any problems
[Install]
WantedBy=timers.target

View File

@@ -22,6 +22,7 @@ Environment=SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
Environment=PYTHONUNBUFFERED=1
Restart=on-failure
Type=notify #try simple if any problems
[Install]
WantedBy=default.target

View File

@@ -285,6 +285,15 @@ sudo usermod -a -G tty "$whoami"
sudo usermod -a -G bluetooth "$whoami"
echo "Added user $whoami to dialout, tty, and bluetooth groups"
sudo chown -R "$whoami:$whoami" "$program_path/logs"
sudo chown -R "$whoami:$whoami" "$program_path/data"
sudo chown "$whoami:$whoami" "$program_path/config.ini"
sudo chmod 640 "$program_path/config.ini"
echo "Permissions set for meshbot on config.ini"
sudo chmod 750 "$program_path/logs"
sudo chmod 750 "$program_path/data"
echo "Permissions set for meshbot on logs and data directories"
# check and see if some sort of NTP is running
if ! systemctl is-active --quiet ntp.service && \
! systemctl is-active --quiet systemd-timesyncd.service && \
@@ -312,17 +321,17 @@ if [[ $(echo "${bot}" | grep -i "^m") ]]; then
fi
# install mesh_bot_reporting timer to run daily at 4:20 am
echo ""
echo "Installing mesh_bot_reporting.timer to run mesh_bot_reporting daily at 4:20 am..."
sudo cp etc/mesh_bot_reporting.service /etc/systemd/system/
sudo cp etc/mesh_bot_reporting.timer /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable mesh_bot_reporting.timer
sudo systemctl start mesh_bot_reporting.timer
echo "mesh_bot_reporting.timer installed and enabled"
echo "Check timer status with: systemctl status mesh_bot_reporting.timer"
echo "List all timers with: systemctl list-timers"
echo ""
# echo ""
# echo "Installing mesh_bot_reporting.timer to run mesh_bot_reporting daily at 4:20 am..."
# sudo cp etc/mesh_bot_reporting.service /etc/systemd/system/
# sudo cp etc/mesh_bot_reporting.timer /etc/systemd/system/
# sudo systemctl daemon-reload
# sudo systemctl enable mesh_bot_reporting.timer
# sudo systemctl start mesh_bot_reporting.timer
# echo "mesh_bot_reporting.timer installed and enabled"
# echo "Check timer status with: systemctl status mesh_bot_reporting.timer"
# echo "List all timers with: systemctl list-timers"
# echo ""
# # install mesh_bot_w3_server service
# echo "Installing mesh_bot_w3_server.service to run the web3 server..."
@@ -459,15 +468,6 @@ else
printf "*** Stay Up to date using 'bash update.sh' ***\n" >> install_notes.txt
fi
sudo chown -R "$whoami:$whoami" "$program_path/logs"
sudo chown -R "$whoami:$whoami" "$program_path/data"
sudo chown "$whoami:$whoami" "$program_path/config.ini"
sudo chmod 640 "$program_path/config.ini"
echo "Permissions set for meshbot on config.ini"
sudo chmod 750 "$program_path/logs"
sudo chmod 750 "$program_path/data"
echo "Permissions set for meshbot on logs and data directories"
printf "\nInstallation complete!\n"
exit 0

View File

@@ -40,10 +40,10 @@ def auto_response(message, snr, rssi, hop, pkiStatus, message_from_id, channel_n
"bbspost": lambda: handle_bbspost(message, message_from_id, deviceID),
"bbsread": lambda: handle_bbsread(message),
"blackjack": lambda: handleBlackJack(message, message_from_id, deviceID),
"approvecl": lambda: handle_checklist(message, message_from_id, deviceID),
"denycl": lambda: handle_checklist(message, message_from_id, deviceID),
"checkin": lambda: handle_checklist(message, message_from_id, deviceID),
"checklist": lambda: handle_checklist(message, message_from_id, deviceID),
"checklistapprove": lambda: handle_checklist(message, message_from_id, deviceID),
"checklistdeny": lambda: handle_checklist(message, message_from_id, deviceID),
"checkout": lambda: handle_checklist(message, message_from_id, deviceID),
"chess": lambda: handle_gTnW(chess=True),
"clearsms": lambda: handle_sms(message_from_id, message),
@@ -84,7 +84,6 @@ def auto_response(message, snr, rssi, hop, pkiStatus, message_from_id, channel_n
"cartremove": lambda: handle_inventory(message, message_from_id, deviceID),
"cartsell": lambda: handle_inventory(message, message_from_id, deviceID),
"joke": lambda: tell_joke(message_from_id),
"latest": lambda: get_newsAPI(message),
"leaderboard": lambda: get_mesh_leaderboard(message, message_from_id, deviceID),
"lemonstand": lambda: handleLemonade(message, message_from_id, deviceID),
"lheard": lambda: handle_lheard(message, message_from_id, deviceID, isDM),
@@ -97,6 +96,8 @@ def auto_response(message, snr, rssi, hop, pkiStatus, message_from_id, channel_n
"ping": lambda: handle_ping(message_from_id, deviceID, message, hop, snr, rssi, isDM, channel_number),
"pinging": lambda: handle_ping(message_from_id, deviceID, message, hop, snr, rssi, isDM, channel_number),
"pong": lambda: "🏓PING!!🛜",
"purgein": lambda: handle_checklist(message, message_from_id, deviceID),
"purgeout": lambda: handle_checklist(message, message_from_id, deviceID),
"q:": lambda: quizHandler(message, message_from_id, deviceID),
"quiz": lambda: quizHandler(message, message_from_id, deviceID),
"readnews": lambda: handleNews(message_from_id, deviceID, message, isDM),
@@ -1427,10 +1428,21 @@ def handle_repeaterQuery(message_from_id, deviceID, channel_number):
return "Repeater lookup not enabled"
def handle_tide(message_from_id, deviceID, channel_number, vox=False):
if vox:
return get_NOAAtide(str(my_settings.latitudeValue), str(my_settings.longitudeValue))
# Check if tidepredict (xtide) is enabled
location = get_node_location(message_from_id, deviceID, channel_number)
return get_NOAAtide(str(location[0]), str(location[1]))
lat = str(location[0])
lon = str(location[1])
if lat == "0.0" or lon == "0.0":
lat = str(my_settings.latitudeValue)
lon = str(my_settings.longitudeValue)
if my_settings.useTidePredict:
logger.debug("System: Location: Using tidepredict")
return xtide.get_tide_predictions(lat, lon)
else:
# Fallback to NOAA tide data
logger.debug("System: Location: Using NOAA")
return get_NOAAtide(str(location[0]), str(location[1]))
def handle_moon(message_from_id, deviceID, channel_number, vox=False):
if vox:
@@ -1540,9 +1552,6 @@ def handle_boot(mesh=True):
if my_settings.solar_conditions_enabled:
logger.debug("System: Celestial Telemetry Enabled")
if my_settings.meshagesTTS:
logger.debug("System: Meshages TTS Text-to-Speech Enabled")
if my_settings.location_enabled:
if my_settings.use_meteo_wxApi:
@@ -1555,23 +1564,23 @@ def handle_boot(mesh=True):
if my_settings.coastalEnabled:
logger.debug("System: Coastal Forecast and Tide Enabled!")
if my_settings.useTidePredict:
logger.debug("System: Using Local TidePredict for Tide Data")
if games_enabled:
logger.debug("System: Games Enabled!")
if my_settings.wikipedia_enabled:
if my_settings.use_kiwix_server:
logger.debug(f"System: Wikipedia search Enabled using Kiwix server at {my_settings.kiwix_url}")
logger.debug(f"System: Wikipedia search Enabled using Kiwix server at {kiwix_url}")
else:
logger.debug("System: Wikipedia search Enabled")
if my_settings.rssEnable:
logger.debug(f"System: RSS Feed Reader Enabled for feeds: {my_settings.rssFeedNames}")
if my_settings.enable_headlines:
logger.debug("System: News Headlines Enabled from NewsAPI.org")
logger.debug(f"System: RSS Feed Reader Enabled for feeds: {rssFeedNames}")
if my_settings.radio_detection_enabled:
logger.debug(f"System: Radio Detection Enabled using rigctld at {my_settings.rigControlServerAddress} broadcasting to channels: {my_settings.sigWatchBroadcastCh}")
logger.debug(f"System: Radio Detection Enabled using rigctld at {my_settings.rigControlServerAddress} broadcasting to channels: {my_settings.sigWatchBroadcastCh} for {get_freq_common_name(get_hamlib('f'))}")
if my_settings.file_monitor_enabled:
logger.warning(f"System: File Monitor Enabled for {my_settings.file_monitor_file_path}, broadcasting to channels: {my_settings.file_monitor_broadcastCh}")
@@ -1582,21 +1591,21 @@ def handle_boot(mesh=True):
if my_settings.read_news_enabled:
logger.debug(f"System: File Monitor News Reader Enabled for {my_settings.news_file_path}")
if my_settings.bee_enabled:
logger.debug("System: File Monitor Bee Monitor Enabled for 🐝bee.txt")
if my_settings.usAlerts:
logger.debug(f"System: Emergency Alert Broadcast Enabled on channel {my_settings.emergency_responder_alert_channel} for interface {my_settings.emergency_responder_alert_interface}")
if my_settings.enableDEalerts:
logger.debug(f"System: NINA Alerts Enabled with counties {my_settings.myRegionalKeysDE}")
if my_settings.volcanoAlertBroadcastEnabled:
logger.debug(f"System: Volcano Alert Broadcast Enabled on channels {my_settings.emergency_responder_alert_channel} ignoreUSGSWords {my_settings.ignoreUSGSWords}")
if my_settings.ipawsAlertEnabled:
logger.debug(f"System: iPAWS Alerts Enabled with FIPS codes {my_settings.myStateFIPSList} ignorelist {my_settings.ignoreFEMAwords}")
if my_settings.enableDEalerts:
logger.debug(f"System: NINA Alerts Enabled with counties {my_settings.myRegionalKeysDE}")
logger.debug("System: File Monitor Bee Monitor Enabled for bee.txt")
if my_settings.wxAlertBroadcastEnabled:
logger.debug(f"System: Weather Alert Broadcast Enabled on channels {my_settings.emergency_responder_alert_channel} ignoreEASwords {my_settings.ignoreEASwords}")
logger.debug(f"System: Weather Alert Broadcast Enabled on channels {my_settings.wxAlertBroadcastChannel}")
if my_settings.emergencyAlertBrodcastEnabled:
logger.debug(f"System: Emergency Alert Broadcast Enabled on channels {my_settings.emergencyAlertBroadcastCh} for FIPS codes {my_settings.myStateFIPSList}")
if my_settings.myStateFIPSList == ['']:
logger.warning("System: No FIPS codes set for iPAWS Alerts")
if my_settings.emergency_responder_enabled:
logger.debug(f"System: Emergency Responder Enabled on channels {my_settings.emergency_responder_alert_channel}")
logger.debug(f"System: Emergency Responder Enabled on channels {my_settings.emergency_responder_alert_channel} for interface {my_settings.emergency_responder_alert_interface}")
if my_settings.volcanoAlertBroadcastEnabled:
logger.debug(f"System: Volcano Alert Broadcast Enabled on channels {my_settings.volcanoAlertBroadcastChannel}")
if my_settings.qrz_hello_enabled:
if my_settings.train_qrz:
@@ -1893,13 +1902,7 @@ def onReceive(packet, interface):
else:
# respond with help message on DM
send_message(help_message, channel_number, message_from_id, rxNode)
# add message to tts queue
if meshagesTTS:
# add to the tts_read_queue
readMe = f"DM from {get_name_from_number(message_from_id, 'short', rxNode)}: {message_string}"
tts_read_queue.append(readMe)
# log the message to the message log
if log_messages_to_file:
msgLogger.info(f"Device:{rxNode} Channel:{channel_number} | {get_name_from_number(message_from_id, 'long', rxNode)} | DM | " + message_string.replace('\n', '-nl-'))
@@ -2003,12 +2006,6 @@ def onReceive(packet, interface):
if slotMachine:
msg = f"🎉 {get_name_from_number(message_from_id, 'long', rxNode)} played the Slot Machine and got: {slotMachine} 🥳"
send_message(msg, channel_number, 0, rxNode)
# add message to tts queue
if my_settings.meshagesTTS and channel_number == my_settings.ttsChannels:
# add to the tts_read_queue
readMe = f"DM from {get_name_from_number(message_from_id, 'short', rxNode)}: {message_string}"
tts_read_queue.append(readMe)
else:
# Evaluate non TEXT_MESSAGE_APP packets
consumeMetadata(packet, rxNode, channel_number)
@@ -2060,11 +2057,7 @@ async def main():
tasks.append(asyncio.create_task(handleSignalWatcher(), name="hamlib"))
if my_settings.voxDetectionEnabled:
from modules.radio import voxMonitor
tasks.append(asyncio.create_task(voxMonitor(), name="vox_detection"))
if my_settings.meshagesTTS:
tasks.append(asyncio.create_task(handleTTS(), name="tts_handler"))
if my_settings.wsjtx_detection_enabled:
tasks.append(asyncio.create_task(handleWsjtxWatcher(), name="wsjtx_monitor"))
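The tide-source selection in `handle_tide` boils down to two checks; as a standalone sketch (names here are assumptions, not the bot's API):

```python
def pick_tide_source(lat: str, lon: str,
                     default_lat: str, default_lon: str,
                     use_tidepredict: bool) -> tuple[str, str, str]:
    # A node with no position fix reports 0.0/0.0; fall back to the
    # bot's configured location in that case.
    if lat == "0.0" or lon == "0.0":
        lat, lon = default_lat, default_lon
    # Prefer tidepredict for global coverage when enabled, else NOAA.
    source = "tidepredict" if use_tidepredict else "noaa"
    return source, lat, lon
```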

View File

@@ -139,8 +139,8 @@ The checklist module provides asset tracking and accountability features with sa
| `checkin` | Check in a node/asset |
| `checkout` | Check out a node/asset |
| `checklist` | Show active check-ins |
| `approvecl` | Admin Approve id |
| `denycl` | Admin Remove id |
| `purgein` | Delete your check-in record |
| `purgeout` | Delete your check-out record |
#### Advanced Features
@@ -150,8 +150,8 @@ The checklist module provides asset tracking and accountability features with sa
- Ideal for solo activities, remote work, or safety accountability
- **Approval Workflow**
- `approvecl <id>` - Approve a pending check-in (admin)
- `denycl <id>` - Deny/remove a check-in (admin)
- `checklistapprove <id>` - Approve a pending check-in (admin)
- `checklistdeny <id>` - Deny/remove a check-in (admin)
more at [modules/checklist.md](modules/checklist.md)
@@ -287,7 +287,7 @@ The system uses SQLite with four tables:
| `wxa` | NOAA alerts |
| `wxalert` | NOAA alerts (expanded) |
| `mwx` | NOAA Coastal Marine Forecast |
| `tide` | NOAA tide info |
| `tide` | Tide info (NOAA/tidepredict for global) |
| `riverflow` | NOAA river flow info |
| `earthquake` | USGS earthquake info |
| `valert` | USGS volcano alerts |
@@ -299,6 +299,8 @@ The system uses SQLite with four tables:
Configure in `[location]` section of `config.ini`.
**Note**: For global tide predictions outside the US, enable `useTidePredict = True` in `config.ini`. See [xtide.md](xtide.md) for setup details.
---
@@ -475,62 +477,7 @@ Configure in `[wikipedia]` section of `config.ini`.
---
## News & Headlines (`latest` Command)
The `latest` command allows you to fetch current news headlines or articles on any topic using the NewsAPI integration. This is useful for quickly checking the latest developments on a subject, even from the mesh.
### Usage
- **Get the latest headlines on a topic:**
```
latest <topic>
```
Example:
```
latest meshtastic
```
This will return the most recent news articles about "meshtastic".
- **General latest news:**
```
latest
```
Returns the latest general news headlines.
### How It Works
- The bot queries NewsAPI.org for the most recent articles matching your topic.
- Each result includes the article title and a short description.
You need to register for a developer key and read the terms of use.
```ini
# enable or disable the headline command which uses NewsAPI.org
enableNewsAPI = True
newsAPI_KEY = key at https://newsapi.org/register
newsAPIregion = us
```
### Example Output
```
🗞️:📰Meshtastic project launches new firmware
The open-source mesh radio project Meshtastic has released a major firmware update...
📰How Meshtastic is changing off-grid communication
A look at how Meshtastic devices are being used for emergency response...
📰Meshtastic featured at DEF CON 2025
The Meshtastic team presented new features at DEF CON, drawing large crowds...
```
### Notes
- You can search for any topic, e.g., `latest wildfire`, `latest ham radio`, etc.
- The number of results can be adjusted in the configuration.
- Requires internet access for the bot to fetch news.
___
## DX Spotter Module
The DX Spotter module allows you to fetch and display recent DX cluster spots from [spothole.app](https://spothole.app) directly in your mesh-bot.
@@ -1027,6 +974,7 @@ This uses USA: SAME, FIPS, to locate the alerts in the feed. By default ignoring
```ini
eAlertBroadcastEnabled = False # Government IPAWS/CAP Alert Broadcast
eAlertBroadcastCh = 2,3 # Government Emergency IPAWS/CAP Alert Broadcast Channels
ignoreFEMAenable = True # Ignore any headline that includes following word list
ignoreFEMAwords = test,exercise
# comma separated list of FIPS codes to trigger local alert. find your FIPS codes at https://en.wikipedia.org/wiki/Federal_Information_Processing_Standard_state_code

View File

@@ -26,6 +26,7 @@ The enhanced checklist module provides asset tracking and accountability feature
### 📍 Location Tracking
- Automatic GPS location capture when checking in/out
- View last known location in checklist
- Track movement over time
- **Time Window Monitoring**: Check-in with safety intervals (e.g., `checkin 60 Hunting in tree stand`)
- Tracks if users don't check in within expected timeframe
@@ -33,65 +34,20 @@ The enhanced checklist module provides asset tracking and accountability feature
- Provides `get_overdue_checkins()` function for alert integration
- **Approval Workflow**:
- `clok <id>` - Approve pending check-ins (admin)
- `denycl <id>` - Deny/remove check-ins (admin)
- `checklistapprove <id>` - Approve pending check-ins (admin)
- `checklistdeny <id>` - Deny/remove check-ins (admin)
- Support for approval-based workflows
- **Enhanced Database Schema**:
- Added `approved` field for approval workflows
- Added `expected_checkin_interval` field for safety monitoring
- Automatic migration for existing databases
#### New Commands:
- `clok <id>` - Approve a check-in
- `denycl <id>` - Deny a check-in
- `checklistapprove <id>` - Approve a check-in
- `checklistdeny <id>` - Deny a check-in
- Enhanced `checkin [interval] [note]` - Now supports interval parameter
### Enhanced Check Out Options
You can now check out in three ways:
#### 1. Check Out the Most Recent Active Check-in
```
checkout [notes]
```
Checks out your most recent active check-in.
*Example:*
```
checkout Heading back to camp
```
#### 2. Check Out All Active Check-ins
```
checkout all [notes]
```
Checks out **all** of your active check-ins at once.
*Example:*
```
checkout all Done for the day
```
*Response:*
```
Checked out 2 check-ins for Hunter1. Durations: 01:23:45, 00:15:30
```
#### 3. Check Out a Specific Check-in by ID
```
checkout <checkin_id> [notes]
```
Checks out a specific check-in using its ID (as shown in the `checklist` command).
*Example:*
```
checkout 123 Leaving early
```
*Response:*
```
Checked out check-in ID 123 for Hunter1. Duration: 00:45:12
```
**Tip:**
- Use `checklist` to see your current check-in IDs and durations.
- You can always add a note to any checkout command for context.
---
These options allow you to manage your check-ins more flexibly, whether you want to check out everything at once or just a specific session.
## Configuration
Add to your `config.ini`:
@@ -150,31 +106,38 @@ ID: Hunter1 checked-In for 01:23:45📝Solo hunting
ID: Tech2 checked-In for 00:15:30📝Equipment repair
```
#### Purge Records
```
purgein # Delete your check-in record
purgeout # Delete your check-out record
```
Use these to manually remove your records if needed.
### Admin Commands
#### Approve Check-in
```
approvecl <checkin_id>
checklistapprove <checkin_id>
```
Approve a pending check-in (requires admin privileges).
**Example:**
```
approvecl 123
checklistapprove 123
```
#### Deny Check-in
```
denycl <checkin_id>
checklistdeny <checkin_id>
```
Deny and remove a check-in (requires admin privileges).
**Example:**
```
denycl 456
checklistdeny 456
```
## Safety Monitoring Feature
@@ -190,7 +153,7 @@ checkin 60 Hunting in remote area
This tells the system:
- You're checking in now
- You expect to check in again or check out within 60 minutes
- If 60 minutes pass without activity, you'll be marked as overdue alert
- If 60 minutes pass without activity, you'll be marked as overdue
### Use Cases for Time Intervals
@@ -211,17 +174,14 @@ This tells the system:
4. **Check-in Points**: Regular status updates during long operations
```
checkin 15 Descending cliff
```
5. **Check-in Reminders**: Set a reminder to check back on something, like a pot roast
```
checkin 30 🍠🍖
checkin 15 Descending cliff face
```
### Overdue Check-ins
The system tracks all check-ins with time intervals and can identify who is overdue. The module provides the `get_overdue_checkins()` function that returns a list of overdue users. It alerts on the 20min watchdog.
The system tracks all check-ins with time intervals and can identify who is overdue. The module provides the `get_overdue_checkins()` function that returns a list of overdue users.
**Note**: Automatic alerts for overdue check-ins require integration with the bot's scheduler or alert system. The checklist module provides the detection capability, but sending notifications must be configured separately through the main bot's alert features.
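Formatting the output of `get_overdue_checkins()` into alert lines can be sketched as follows; the dictionary keys `name` and `overdue_minutes` match those used by the module, while the function itself is illustrative:

```python
def format_overdue_lines(overdue):
    """Turn get_overdue_checkins() results into one readable line per user."""
    lines = []
    for entry in overdue:
        hours, minutes = divmod(entry["overdue_minutes"], 60)
        if hours > 0:
            lines.append(f"{entry['name']}: {hours}h {minutes}m overdue")
        else:
            lines.append(f"{entry['name']}: {minutes}m overdue")
    return lines

print(format_overdue_lines([{"name": "Hunter1", "overdue_minutes": 95}]))
# → ['Hunter1: 1h 35m overdue']
```

A scheduled job could call this periodically and forward any non-empty result through the bot's alert channel.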
## Practical Examples
@@ -298,12 +258,15 @@ checkin 45 Site survey tower location 2
The checklist system automatically captures GPS coordinates when available. This can be used for:
- Tracking last known position
- Geo-fencing applications
- Emergency response coordination
- Asset location management
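A simple geo-fence check on the captured coordinates can use the haversine distance; the base point and the 25 km radius below are made-up example values:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_fence(lat, lon, base=(47.6062, -122.3321), radius_km=25.0):
    """True if a check-in position is within radius_km of the base point."""
    return haversine_km(lat, lon, *base) <= radius_km
```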
### Alert Systems
The overdue check-in feature can trigger:
- Notifications to supervisors
- Emergency alerts
- Automated messages to response teams
- Email/SMS notifications (if configured)
@@ -311,7 +274,9 @@ The overdue check-in feature can trigger:
Combine with the scheduler module to:
- Send reminders to check in
- Automatically generate reports
- Schedule periodic check-in requirements
- Send daily summaries
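One way to sketch that periodic wiring is with a tiny stand-in scheduler; the bot's real scheduler module will differ, and the intervals here are examples:

```python
import time

class SimpleScheduler:
    """Minimal illustrative scheduler: runs each job when its interval elapses."""
    def __init__(self):
        self.jobs = []  # each job: [interval_s, last_run, fn]

    def every(self, interval_s, fn):
        self.jobs.append([interval_s, 0.0, fn])

    def run_pending(self, now=None):
        now = time.monotonic() if now is None else now
        for job in self.jobs:
            interval_s, last_run, fn = job
            if now - last_run >= interval_s:
                fn()
                job[1] = now

sched = SimpleScheduler()
sched.every(1800, lambda: print("Reminder: check in"))       # every 30 min
sched.every(86400, lambda: print("Daily check-in summary"))  # once a day
# The bot's main loop would call sched.run_pending() on each tick.
```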
## Best Practices
@@ -341,17 +306,6 @@ Combine with the scheduler module to:
checklist
```
The list will show ✅ for approved and ☑️ for unapproved check-ins.
The alarm will only alert on approved check-ins.
In `config.ini`:
```ini
# Auto approve new checklists
auto_approve = True
# The check-in reminder interval is 5 minutes
# Check-in broadcasts use the emergency_handler interface and channel
```
2. **Respond to Overdue Situations**: Act on overdue check-ins promptly
3. **Set Clear Policies**: Establish when and how to use the system


@@ -3,50 +3,69 @@
import sqlite3
from modules.log import logger
from modules.settings import checklist_db, reverse_in_out, bbs_ban_list, bbs_admin_list, checklist_auto_approve
from modules.settings import checklist_db, reverse_in_out, bbs_ban_list
import time
trap_list_checklist = ("checkin", "checkout", "checklist", "approvecl", "denycl",)
trap_list_checklist = ("checkin", "checkout", "checklist", "purgein", "purgeout",
"checklistapprove", "checklistdeny", "checklistadd", "checklistremove")
def initialize_checklist_database():
try:
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
# Check if the checkin table exists, and create it if it doesn't
logger.debug("System: Checklist: Initializing database...")
c.execute('''CREATE TABLE IF NOT EXISTS checkin
(checkin_id INTEGER PRIMARY KEY, checkin_name TEXT, checkin_date TEXT,
checkin_time TEXT, location TEXT, checkin_notes TEXT,
approved INTEGER DEFAULT 1, expected_checkin_interval INTEGER DEFAULT 0,
removed INTEGER DEFAULT 0)''')
approved INTEGER DEFAULT 1, expected_checkin_interval INTEGER DEFAULT 0)''')
# Check if the checkout table exists, and create it if it doesn't
c.execute('''CREATE TABLE IF NOT EXISTS checkout
(checkout_id INTEGER PRIMARY KEY, checkout_name TEXT, checkout_date TEXT,
checkout_time TEXT, location TEXT, checkout_notes TEXT,
checkin_id INTEGER, removed INTEGER DEFAULT 0)''')
checkout_time TEXT, location TEXT, checkout_notes TEXT)''')
# Add new columns if they don't exist (for migration)
try:
c.execute("ALTER TABLE checkin ADD COLUMN approved INTEGER DEFAULT 1")
except sqlite3.OperationalError:
pass # Column already exists
try:
c.execute("ALTER TABLE checkin ADD COLUMN expected_checkin_interval INTEGER DEFAULT 0")
except sqlite3.OperationalError:
pass # Column already exists
try:
c.execute("ALTER TABLE checkin ADD COLUMN removed INTEGER DEFAULT 0")
except sqlite3.OperationalError:
pass # Column already exists
# Add this to your DB init (if not already present)
try:
c.execute("ALTER TABLE checkout ADD COLUMN removed INTEGER DEFAULT 0")
except sqlite3.OperationalError:
pass # Column already exists
conn.commit()
conn.close()
return True
except Exception as e:
logger.error(f"Checklist: Failed to initialize database: {e} Please delete old checklist database file. rm data/checklist.db")
logger.error(f"Checklist: Failed to initialize database: {e}")
return False
def checkin(name, date, time, location, notes):
location = ", ".join(map(str, location))
# Auto-approve if setting is enabled
approved_value = 1 if checklist_auto_approve else 0
# checkin a user
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
try:
c.execute(
"INSERT INTO checkin (checkin_name, checkin_date, checkin_time, location, checkin_notes, removed, approved) VALUES (?, ?, ?, ?, ?, 0, ?)",
(name, date, time, location, notes, approved_value)
)
c.execute("INSERT INTO checkin (checkin_name, checkin_date, checkin_time, location, checkin_notes) VALUES (?, ?, ?, ?, ?)", (name, date, time, location, notes))
# # remove any checkouts that are older than the checkin
# c.execute("DELETE FROM checkout WHERE checkout_date < ? OR (checkout_date = ? AND checkout_time < ?)", (date, date, time))
except sqlite3.OperationalError as e:
if "no such table" in str(e):
initialize_checklist_database()
c.execute(
"INSERT INTO checkin (checkin_name, checkin_date, checkin_time, location, checkin_notes, removed, approved) VALUES (?, ?, ?, ?, ?, 0, ?)",
(name, date, time, location, notes, approved_value)
)
c.execute("INSERT INTO checkin (checkin_name, checkin_date, checkin_time, location, checkin_notes) VALUES (?, ?, ?, ?, ?)", (name, date, time, location, notes))
else:
raise
conn.commit()
@@ -56,90 +75,71 @@ def checkin(name, date, time, location, notes):
else:
return "Checked✅In: " + str(name)
def checkout(name, date, time_str, location, notes, all=False, checkin_id=None):
location = ", ".join(map(str, location))
def delete_checkin(checkin_id):
# delete a checkin
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
c.execute("DELETE FROM checkin WHERE checkin_id = ?", (checkin_id,))
conn.commit()
conn.close()
return "Checkin deleted." + str(checkin_id)
def checkout(name, date, time_str, location, notes):
location = ", ".join(map(str, location))
checkin_record = None # Ensure variable is always defined
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
checked_out_ids = []
durations = []
try:
if checkin_id is not None:
# Check out a specific check-in by ID
c.execute("""
SELECT checkin_id, checkin_time, checkin_date FROM checkin
WHERE checkin_id = ? AND checkin_name = ?
""", (checkin_id, name))
row = c.fetchone()
if row:
c.execute("INSERT INTO checkout (checkout_name, checkout_date, checkout_time, location, checkout_notes, checkin_id) VALUES (?, ?, ?, ?, ?, ?)",
(name, date, time_str, location, notes, row[0]))
checkin_time, checkin_date = row[1], row[2]
checkin_datetime = time.strptime(checkin_date + " " + checkin_time, "%Y-%m-%d %H:%M:%S")
time_checked_in_seconds = time.time() - time.mktime(checkin_datetime)
durations.append(time.strftime("%H:%M:%S", time.gmtime(time_checked_in_seconds)))
checked_out_ids.append(row[0])
elif all:
# Check out all active check-ins for this user
c.execute("""
SELECT checkin_id, checkin_time, checkin_date FROM checkin
WHERE checkin_name = ?
AND removed = 0
AND checkin_id NOT IN (
SELECT checkin_id FROM checkout WHERE checkin_id IS NOT NULL
)
""", (name,))
rows = c.fetchall()
for row in rows:
c.execute("INSERT INTO checkout (checkout_name, checkout_date, checkout_time, location, checkout_notes, checkin_id) VALUES (?, ?, ?, ?, ?, ?)",
(name, date, time_str, location, notes, row[0]))
checkin_time, checkin_date = row[1], row[2]
checkin_datetime = time.strptime(checkin_date + " " + checkin_time, "%Y-%m-%d %H:%M:%S")
time_checked_in_seconds = time.time() - time.mktime(checkin_datetime)
durations.append(time.strftime("%H:%M:%S", time.gmtime(time_checked_in_seconds)))
checked_out_ids.append(row[0])
else:
# Default: check out the most recent active check-in
c.execute("""
SELECT checkin_id, checkin_time, checkin_date FROM checkin
WHERE checkin_name = ?
AND removed = 0
AND checkin_id NOT IN (
SELECT checkin_id FROM checkout WHERE checkin_id IS NOT NULL
)
ORDER BY checkin_date DESC, checkin_time DESC
LIMIT 1
""", (name,))
row = c.fetchone()
if row:
c.execute("INSERT INTO checkout (checkout_name, checkout_date, checkout_time, location, checkout_notes, checkin_id) VALUES (?, ?, ?, ?, ?, ?)",
(name, date, time_str, location, notes, row[0]))
checkin_time, checkin_date = row[1], row[2]
checkin_datetime = time.strptime(checkin_date + " " + checkin_time, "%Y-%m-%d %H:%M:%S")
time_checked_in_seconds = time.time() - time.mktime(checkin_datetime)
durations.append(time.strftime("%H:%M:%S", time.gmtime(time_checked_in_seconds)))
checked_out_ids.append(row[0])
# Check if the user has a checkin before checking out
c.execute("""
SELECT checkin_id FROM checkin
WHERE checkin_name = ?
AND NOT EXISTS (
SELECT 1 FROM checkout
WHERE checkout_name = checkin_name
AND (checkout_date > checkin_date OR (checkout_date = checkin_date AND checkout_time > checkin_time))
)
ORDER BY checkin_date DESC, checkin_time DESC
LIMIT 1
""", (name,))
checkin_record = c.fetchone()
if checkin_record:
c.execute("INSERT INTO checkout (checkout_name, checkout_date, checkout_time, location, checkout_notes) VALUES (?, ?, ?, ?, ?)", (name, date, time_str, location, notes))
# calculate length of time checked in
c.execute("SELECT checkin_time, checkin_date FROM checkin WHERE checkin_id = ?", (checkin_record[0],))
checkin_time, checkin_date = c.fetchone()
checkin_datetime = time.strptime(checkin_date + " " + checkin_time, "%Y-%m-%d %H:%M:%S")
time_checked_in_seconds = time.time() - time.mktime(checkin_datetime)
timeCheckedIn = time.strftime("%H:%M:%S", time.gmtime(time_checked_in_seconds))
# # remove the checkin record older than the checkout
# c.execute("DELETE FROM checkin WHERE checkin_date < ? OR (checkin_date = ? AND checkin_time < ?)", (date, date, time_str))
except sqlite3.OperationalError as e:
if "no such table" in str(e):
conn.close()
initialize_checklist_database()
return checkout(name, date, time_str, location, notes, all=all, checkin_id=checkin_id)
# Try again after initializing
return checkout(name, date, time_str, location, notes)
else:
conn.close()
raise
conn.commit()
conn.close()
if checked_out_ids:
if all:
return f"Checked out {len(checked_out_ids)} check-ins for {name}. Durations: {', '.join(durations)}"
elif checkin_id is not None:
return f"Checked out check-in ID {checkin_id} for {name}. Duration: {durations[0]}"
if checkin_record:
if reverse_in_out:
return "CheckedIn: " + str(name) + " duration " + timeCheckedIn
else:
if reverse_in_out:
return f"Checked⌛In: {name} duration {durations[0]}"
else:
return f"Checked⌛Out: {name} duration {durations[0]}"
return "Checked⌛Out: " + str(name) + " duration " + timeCheckedIn
else:
return f"None found for {name}"
return "None found for " + str(name)
def delete_checkout(checkout_id):
# delete a checkout
conn = sqlite3.connect(checklist_db)
c = conn.cursor()
c.execute("DELETE FROM checkout WHERE checkout_id = ?", (checkout_id,))
conn.commit()
conn.close()
return "Checkout deleted." + str(checkout_id)
def approve_checkin(checkin_id):
"""Approve a pending check-in"""
@@ -254,27 +254,25 @@ def get_overdue_checkins():
return []
def format_overdue_alert():
header = "⚠️ OVERDUE CHECK-INS:\a\n"
alert = ""
try:
"""Format overdue check-ins as an alert message"""
overdue = get_overdue_checkins()
logger.debug(f"Overdue check-ins: {overdue}")
if not overdue:
return None
alert = "⚠️ OVERDUE CHECK-INS:\n"
for entry in overdue:
hours = entry['overdue_minutes'] // 60
minutes = entry['overdue_minutes'] % 60
if hours > 0:
alert += f"{entry['name']}: {hours}h {minutes}m overdue"
else:
alert += f"{entry['name']}: {minutes}m overdue"
alert += f"{entry['name']}: {hours}h {minutes}m overdue"
# if entry['location']:
# alert += f" @ {entry['location']}"
if entry['checkin_notes']:
alert += f" 📝{entry['checkin_notes']}"
alert += "\n"
if alert:
return header + alert.rstrip()
return alert.rstrip()
except Exception as e:
logger.error(f"Checklist: Error formatting overdue alert: {e}")
return None
@@ -287,9 +285,9 @@ def list_checkin():
c.execute("""
SELECT * FROM checkin
WHERE removed = 0
AND NOT EXISTS (
SELECT 1 FROM checkout
WHERE checkout.checkin_id = checkin.checkin_id
AND checkin_id NOT IN (
SELECT checkin_id FROM checkout
WHERE checkout_date > checkin_date OR (checkout_date = checkin_date AND checkout_time > checkin_time)
)
""")
rows = c.fetchall()
@@ -300,16 +298,12 @@ def list_checkin():
return list_checkin()
else:
conn.close()
initialize_checklist_database()
logger.error(f"Checklist: Error listing checkins: {e}")
return "Error listing checkins."
conn.close()
# Get overdue info
overdue = {entry['id']: entry for entry in get_overdue_checkins()}
timeCheckedIn = ""
checkin_list = ""
for row in rows:
checkin_id = row[0]
# Calculate length of time checked in, including days
total_seconds = time.time() - time.mktime(time.strptime(row[2] + " " + row[3], "%Y-%m-%d %H:%M:%S"))
days = int(total_seconds // 86400)
@@ -320,31 +314,9 @@ def list_checkin():
timeCheckedIn = f"{days}d {hours:02}:{minutes:02}:{seconds:02}"
else:
timeCheckedIn = f"{hours:02}:{minutes:02}:{seconds:02}"
# Add ⏰ if routine check-ins are required
routine = ""
if len(row) > 7 and row[7] and int(row[7]) > 0:
routine = f" ⏰({row[7]}m)"
# Indicate approval status
approved_marker = "" if row[6] == 1 else "☑️"
# Check if overdue
if checkin_id in overdue:
overdue_minutes = overdue[checkin_id]['overdue_minutes']
overdue_hours = overdue_minutes // 60
overdue_mins = overdue_minutes % 60
if overdue_hours > 0:
overdue_str = f"overdue by {overdue_hours}h {overdue_mins}m"
else:
overdue_str = f"overdue by {overdue_mins}m"
status = f"{row[1]} {overdue_str}{routine}"
else:
status = f"{row[1]} checked-In for {timeCheckedIn}{routine}"
checkin_list += f"ID: {checkin_id} {approved_marker} {status}"
checkin_list += "ID: " + str(row[0]) + " " + row[1] + " checked-In for " + timeCheckedIn
if row[5] != "":
checkin_list += " 📝" + row[5]
checkin_list += "📝" + row[5]
if row != rows[-1]:
checkin_list += "\n"
# if empty list
@@ -359,9 +331,6 @@ def process_checklist_command(nodeID, message, name="none", location="none"):
if str(nodeID) in bbs_ban_list:
logger.warning("System: Checklist attempt from the ban list")
return "unable to process command"
is_admin = False
if str(nodeID) in bbs_admin_list:
is_admin = True
message_lower = message.lower()
parts = message.split()
@@ -390,44 +359,22 @@ def process_checklist_command(nodeID, message, name="none", location="none"):
return result
elif ("checkout" in message_lower and not reverse_in_out) or ("checkin" in message_lower and reverse_in_out):
# Support: checkout all, checkout <id>, or checkout [note]
all_flag = False
checkin_id = None
actual_comment = comment
return checkout(name, current_date, current_time, location, comment)
# Split the command into parts after the keyword
checkout_args = parts[1:] if len(parts) > 1 else []
elif "purgein" in message_lower:
return mark_checkin_removed_by_name(name)
if checkout_args:
if checkout_args[0].lower() == "all":
all_flag = True
actual_comment = " ".join(checkout_args[1:]) if len(checkout_args) > 1 else ""
elif checkout_args[0].isdigit():
checkin_id = int(checkout_args[0])
actual_comment = " ".join(checkout_args[1:]) if len(checkout_args) > 1 else ""
else:
actual_comment = " ".join(checkout_args)
elif "purgeout" in message_lower:
return mark_checkout_removed_by_name(name)
return checkout(name, current_date, current_time, location, actual_comment, all=all_flag, checkin_id=checkin_id)
# elif "purgein" in message_lower:
# return mark_checkin_removed_by_name(name)
# elif "purgeout" in message_lower:
# return mark_checkout_removed_by_name(name)
elif "approvecl " in message_lower:
if not is_admin:
return "You do not have permission to approve check-ins."
elif message_lower.startswith("checklistapprove "):
try:
checkin_id = int(parts[1])
return approve_checkin(checkin_id)
except (ValueError, IndexError):
return "Usage: checklistapprove <checkin_id>"
elif "denycl " in message_lower:
if not is_admin:
return "You do not have permission to deny check-ins."
elif message_lower.startswith("checklistdeny "):
try:
checkin_id = int(parts[1])
return deny_checkin(checkin_id)
@@ -438,15 +385,21 @@ def process_checklist_command(nodeID, message, name="none", location="none"):
if not reverse_in_out:
return ("Command: checklist followed by\n"
"checkin [interval] [note]\n"
"checkout [all] [note]\n"
"Example: checkin 60 Leaving for a hike")
"checkout [note]\n"
"purgein - delete your checkin\n"
"purgeout - delete your checkout\n"
"checklistapprove <id> - approve checkin\n"
"checklistdeny <id> - deny checkin\n"
"Example: checkin 60 Hunting in tree stand")
else:
return ("Command: checklist followed by\n"
"checkout [all] [interval] [note]\n"
"checkout [interval] [note]\n"
"checkin [note]\n"
"Example: checkout 60 Leaving for a hike")
"purgeout - delete your checkout\n"
"purgein - delete your checkin\n"
"Example: checkout 60 Leaving park")
elif message_lower.strip() == "checklist":
elif "checklist" in message_lower:
return list_checkin()
else:


@@ -175,6 +175,7 @@ def getArtSciRepeaters(lat=0, lon=0):
return msg
def get_NOAAtide(lat=0, lon=0):
# get tide data from NOAA for lat/lon
station_id = ""
location = lat,lon
if float(lat) == 0 and float(lon) == 0:


@@ -1,55 +0,0 @@
# Radio Module: Meshages TTS (Text-to-Speech) Setup
The radio module supports audible mesh messages using the [KittenTTS](https://github.com/KittenML/KittenTTS) engine. This allows the bot to generate and play speech from text, making mesh alerts and messages audible on your device.
## Features
- Converts mesh messages to speech using KittenTTS.
## Installation
1. **Install Python dependencies:**
- `kittentts` is the TTS engine.
`pip install https://github.com/KittenML/KittenTTS/releases/download/0.1/kittentts-0.1.0-py3-none-any.whl`
2. **Install PortAudio (required for sounddevice):**
- **macOS:**
```sh
brew install portaudio
```
- **Linux (Debian/Ubuntu):**
```sh
sudo apt-get install portaudio19-dev
```
- **Windows:**
No extra step needed; `sounddevice` will use the default audio driver.
## Configuration
- Enable TTS in your `config.ini`:
```ini
[radioMon]
meshagesTTS = True
```
## Usage
When enabled, the bot will generate and play speech for mesh messages using the selected voice.
No additional user action is required.
## Troubleshooting
- If you see errors about missing `sounddevice` or `portaudio`, ensure you have installed the dependencies above.
- On macOS, you may need to allow microphone/audio access for your terminal.
- If you have audio issues, check your system's default output device.
## References
- [KittenTTS GitHub](https://github.com/KittenML/KittenTTS)
- [KittenTTS Model on HuggingFace](https://huggingface.co/KittenML/kitten-tts-nano-0.2)
- [sounddevice documentation](https://python-sounddevice.readthedocs.io/)
---


@@ -16,9 +16,6 @@ import struct
import json
from modules.log import logger
# verbose debug logging for trap words function
debugVoxTmsg = False
from modules.settings import (
radio_detection_enabled,
rigControlServerAddress,
@@ -34,52 +31,14 @@ from modules.settings import (
voxTrapList,
voxOnTrapList,
voxEnableCmd,
ERROR_FETCHING_DATA,
meshagesTTS,
ERROR_FETCHING_DATA
)
# module global variables
previousStrength = -40
signalCycle = 0
FREQ_NAME_MAP = {
462562500: "GMRS CH1",
462587500: "GMRS CH2",
462612500: "GMRS CH3",
462637500: "GMRS CH4",
462662500: "GMRS CH5",
462687500: "GMRS CH6",
462712500: "GMRS CH7",
467562500: "GMRS CH8",
467587500: "GMRS CH9",
467612500: "GMRS CH10",
467637500: "GMRS CH11",
467662500: "GMRS CH12",
467687500: "GMRS CH13",
467712500: "GMRS CH14",
467737500: "GMRS CH15",
462550000: "GMRS CH16",
462575000: "GMRS CH17",
462600000: "GMRS CH18",
462625000: "GMRS CH19",
462675000: "GMRS CH20",
462670000: "GMRS CH21",
462725000: "GMRS CH22",
462725500: "GMRS CH23",
467575000: "GMRS CH24",
467600000: "GMRS CH25",
467625000: "GMRS CH26",
467650000: "GMRS CH27",
467675000: "GMRS CH28",
467700000: "FRS CH1",
462650000: "FRS CH5",
462700000: "FRS CH7",
462737500: "FRS CH16",
146520000: "2M Simplex Calling",
446000000: "70cm Simplex Calling",
156800000: "Marine CH16",
# Add more as needed
}
# verbose debug logging for trap words function
debugVoxTmsg = False
# --- WSJT-X and JS8Call Settings Initialization ---
wsjtxMsgQueue = [] # Queue for WSJT-X detected messages
@@ -141,9 +100,9 @@ try:
watched_callsigns = list({cs.upper() for cs in callsigns})
except ImportError:
logger.debug("System: RadioMon: WSJT-X/JS8Call settings not configured")
logger.debug("RadioMon: WSJT-X/JS8Call settings not configured")
except Exception as e:
logger.warning(f"System: RadioMon: Error loading WSJT-X/JS8Call settings: {e}")
logger.warning(f"RadioMon: Error loading WSJT-X/JS8Call settings: {e}")
if radio_detection_enabled:
@@ -177,43 +136,51 @@ if voxDetectionEnabled:
voxModel = Model(lang=voxLanguage) # use built in model for specified language
except Exception as e:
print(f"System: RadioMon: Error importing VOX dependencies: {e}")
print(f"RadioMon: Error importing VOX dependencies: {e}")
print(f"To use VOX detection please install the vosk and sounddevice python modules")
print(f"pip install vosk sounddevice")
print(f"sounddevice needs pulseaudio, apt-get install portaudio19-dev")
voxDetectionEnabled = False
logger.error(f"System: RadioMon: VOX detection disabled due to import error")
logger.error(f"RadioMon: VOX detection disabled due to import error")
if meshagesTTS:
try:
# TTS for meshages imports
logger.debug("System: RadioMon: Initializing TTS model for audible meshages")
import sounddevice as sd
from kittentts import KittenTTS
ttsModel = KittenTTS("KittenML/kitten-tts-nano-0.2")
available_voices = [
'expr-voice-2-m', 'expr-voice-2-f', 'expr-voice-3-m', 'expr-voice-3-f',
'expr-voice-4-m', 'expr-voice-4-f', 'expr-voice-5-m', 'expr-voice-5-f'
]
except Exception as e:
logger.error(f"To use Meshages TTS please review the radio.md documentation for setup instructions.")
meshagesTTS = False
async def generate_and_play_tts(text, voice, samplerate=24000):
"""Async: Generate speech and play audio."""
text = text.strip()
if not text:
return
try:
logger.debug(f"System: RadioMon: Generating TTS for text: {text} with voice: {voice}")
audio = await asyncio.to_thread(ttsModel.generate, text, voice=voice)
if audio is None or len(audio) == 0:
return
await asyncio.to_thread(sd.play, audio, samplerate)
await asyncio.to_thread(sd.wait)
del audio
except Exception as e:
logger.warning(f"System: RadioMon: Error in generate_and_play_tts: {e}")
FREQ_NAME_MAP = {
462562500: "GMRS CH1",
462587500: "GMRS CH2",
462612500: "GMRS CH3",
462637500: "GMRS CH4",
462662500: "GMRS CH5",
462687500: "GMRS CH6",
462712500: "GMRS CH7",
467562500: "GMRS CH8",
467587500: "GMRS CH9",
467612500: "GMRS CH10",
467637500: "GMRS CH11",
467662500: "GMRS CH12",
467687500: "GMRS CH13",
467712500: "GMRS CH14",
467737500: "GMRS CH15",
462550000: "GMRS CH16",
462575000: "GMRS CH17",
462600000: "GMRS CH18",
462625000: "GMRS CH19",
462675000: "GMRS CH20",
462670000: "GMRS CH21",
462725000: "GMRS CH22",
462725500: "GMRS CH23",
467575000: "GMRS CH24",
467600000: "GMRS CH25",
467625000: "GMRS CH26",
467650000: "GMRS CH27",
467675000: "GMRS CH28",
467700000: "FRS CH1",
462650000: "FRS CH5",
462700000: "FRS CH7",
462737500: "FRS CH16",
146520000: "2M Simplex Calling",
446000000: "70cm Simplex Calling",
156800000: "Marine CH16",
# Add more as needed
}
def get_freq_common_name(freq):
freq = int(freq)
@@ -227,14 +194,14 @@ def get_freq_common_name(freq):
def get_hamlib(msg="f"):
# get data from rigctld server
if "socket" not in globals():
logger.warning("System: RadioMon: 'socket' module not imported. Hamlib disabled.")
logger.warning("RadioMon: 'socket' module not imported. Hamlib disabled.")
return ERROR_FETCHING_DATA
try:
rigControlSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rigControlSocket.settimeout(2)
rigControlSocket.connect((rigControlServerAddress.split(":")[0],int(rigControlServerAddress.split(":")[1])))
except Exception as e:
logger.error(f"System: RadioMon: Error connecting to rigctld: {e}")
logger.error(f"RadioMon: Error connecting to rigctld: {e}")
return ERROR_FETCHING_DATA
try:
@@ -248,7 +215,7 @@ def get_hamlib(msg="f"):
data = data.replace(b'\n',b'')
return data.decode("utf-8").rstrip()
except Exception as e:
logger.error(f"System: RadioMon: Error fetching data from rigctld: {e}")
logger.error(f"RadioMon: Error fetching data from rigctld: {e}")
return ERROR_FETCHING_DATA
def get_sig_strength():
@@ -258,7 +225,7 @@ def get_sig_strength():
def checkVoxTrapWords(text):
try:
if not voxOnTrapList:
logger.debug(f"System: RadioMon: VOX detected: {text}")
logger.debug(f"RadioMon: VOX detected: {text}")
return text
if text:
traps = [voxTrapList] if isinstance(voxTrapList, str) else voxTrapList
@@ -268,27 +235,27 @@ def checkVoxTrapWords(text):
trap_lower = trap_clean.lower()
idx = text_lower.find(trap_lower)
if debugVoxTmsg:
logger.debug(f"System: RadioMon: VOX checking for trap word '{trap_lower}' in: '{text}' (index: {idx})")
logger.debug(f"RadioMon: VOX checking for trap word '{trap_lower}' in: '{text}' (index: {idx})")
if idx != -1:
new_text = text[idx + len(trap_clean):].strip()
if debugVoxTmsg:
logger.debug(f"System: RadioMon: VOX detected trap word '{trap_lower}' in: '{text}' (remaining: '{new_text}')")
logger.debug(f"RadioMon: VOX detected trap word '{trap_lower}' in: '{text}' (remaining: '{new_text}')")
new_words = new_text.split()
if voxEnableCmd:
for word in new_words:
if word in botMethods:
logger.info(f"System: RadioMon: VOX action '{word}' with '{new_text}'")
logger.info(f"RadioMon: VOX action '{word}' with '{new_text}'")
if word == "joke":
return botMethods[word](vox=True)
else:
return botMethods[word](None, None, None, vox=True)
logger.debug(f"System: RadioMon: VOX returning text after trap word '{trap_lower}': '{new_text}'")
logger.debug(f"RadioMon: VOX returning text after trap word '{trap_lower}': '{new_text}'")
return new_text
if debugVoxTmsg:
logger.debug(f"System: RadioMon: VOX no trap word found in: '{text}'")
logger.debug(f"RadioMon: VOX no trap word found in: '{text}'")
return None
except Exception as e:
logger.debug(f"System: RadioMon: Error in checkVoxTrapWords: {e}")
logger.debug(f"RadioMon: Error in checkVoxTrapWords: {e}")
return None
async def signalWatcher():
@@ -298,7 +265,7 @@ async def signalWatcher():
signalStrength = int(get_sig_strength())
if signalStrength >= previousStrength and signalStrength > signalDetectionThreshold:
message = f"Detected {get_freq_common_name(get_hamlib('f'))} active. S-Meter:{signalStrength}dBm"
logger.debug(f"System: RadioMon: {message}. Waiting for {signalHoldTime} seconds")
logger.debug(f"RadioMon: {message}. Waiting for {signalHoldTime} seconds")
previousStrength = signalStrength
signalCycle = 0
await asyncio.sleep(signalHoldTime)
@@ -318,7 +285,7 @@ async def signalWatcher():
async def make_vox_callback(loop, q):
def vox_callback(indata, frames, time, status):
if status:
logger.warning(f"System: RadioMon: VOX input status: {status}")
logger.warning(f"RadioMon: VOX input status: {status}")
try:
loop.call_soon_threadsafe(q.put_nowait, bytes(indata))
except asyncio.QueueFull:
@@ -331,7 +298,7 @@ async def make_vox_callback(loop, q):
loop.call_soon_threadsafe(q.put_nowait, bytes(indata))
except asyncio.QueueFull:
# If still full, just drop this frame
logger.debug("System: RadioMon: VOX queue full, dropping audio frame")
logger.debug("RadioMon: VOX queue full, dropping audio frame")
except RuntimeError:
# Loop may be closed
pass
@@ -343,7 +310,7 @@ async def voxMonitor():
model = voxModel
device_info = sd.query_devices(voxInputDevice, 'input')
samplerate = 16000
logger.debug(f"System: RadioMon: VOX monitor started on device {device_info['name']} with samplerate {samplerate} using trap words: {voxTrapList if voxOnTrapList else 'none'}")
logger.debug(f"RadioMon: VOX monitor started on device {device_info['name']} with samplerate {samplerate} using trap words: {voxTrapList if voxOnTrapList else 'none'}")
rec = KaldiRecognizer(model, samplerate)
loop = asyncio.get_running_loop()
callback = await make_vox_callback(loop, q)
@@ -370,7 +337,7 @@ async def voxMonitor():
await asyncio.sleep(0.1)
except Exception as e:
logger.warning(f"System: RadioMon: Error in VOX monitor: {e}")
logger.error(f"RadioMon: Error in VOX monitor: {e}")
def decode_wsjtx_packet(data):
"""Decode WSJT-X UDP packet according to the protocol specification"""
@@ -472,7 +439,7 @@ def decode_wsjtx_packet(data):
return None
except Exception as e:
logger.debug(f"System: RadioMon: Error decoding WSJT-X packet: {e}")
logger.debug(f"RadioMon: Error decoding WSJT-X packet: {e}")
return None
def check_callsign_match(message, callsigns):
@@ -514,7 +481,7 @@ def check_callsign_match(message, callsigns):
async def wsjtxMonitor():
"""Monitor WSJT-X UDP broadcasts for decode messages"""
if not wsjtx_enabled:
logger.warning("System: RadioMon: WSJT-X monitoring called but not enabled")
logger.warning("RadioMon: WSJT-X monitoring called but not enabled")
return
try:
@@ -523,9 +490,9 @@ async def wsjtxMonitor():
sock.bind((wsjtx_udp_address, wsjtx_udp_port))
sock.setblocking(False)
logger.info(f"System: RadioMon: WSJT-X UDP listener started on {wsjtx_udp_address}:{wsjtx_udp_port}")
logger.info(f"RadioMon: WSJT-X UDP listener started on {wsjtx_udp_address}:{wsjtx_udp_port}")
if watched_callsigns:
logger.info(f"System: RadioMon: Watching for callsigns: {', '.join(watched_callsigns)}")
logger.info(f"RadioMon: Watching for callsigns: {', '.join(watched_callsigns)}")
while True:
try:
@@ -540,29 +507,29 @@ async def wsjtxMonitor():
# Check if message contains watched callsigns
if check_callsign_match(message, watched_callsigns):
msg_text = f"WSJT-X {mode}: {message} (SNR: {snr:+d}dB)"
logger.info(f"System: RadioMon: {msg_text}")
logger.info(f"RadioMon: {msg_text}")
wsjtxMsgQueue.append(msg_text)
except BlockingIOError:
# No data available
await asyncio.sleep(0.1)
except Exception as e:
logger.debug(f"System: RadioMon: Error in WSJT-X monitor loop: {e}")
logger.debug(f"RadioMon: Error in WSJT-X monitor loop: {e}")
await asyncio.sleep(1)
except Exception as e:
logger.warning(f"System: RadioMon: Error starting WSJT-X monitor: {e}")
logger.error(f"RadioMon: Error starting WSJT-X monitor: {e}")
async def js8callMonitor():
"""Monitor JS8Call TCP API for messages"""
if not js8call_enabled:
logger.warning("System: RadioMon: JS8Call monitoring called but not enabled")
logger.warning("RadioMon: JS8Call monitoring called but not enabled")
return
try:
logger.info(f"System: RadioMon: JS8Call TCP listener connecting to {js8call_tcp_address}:{js8call_tcp_port}")
logger.info(f"RadioMon: JS8Call TCP listener connecting to {js8call_tcp_address}:{js8call_tcp_port}")
if watched_callsigns:
logger.info(f"System: RadioMon: Watching for callsigns: {', '.join(watched_callsigns)}")
logger.info(f"RadioMon: Watching for callsigns: {', '.join(watched_callsigns)}")
while True:
try:
@@ -572,14 +539,14 @@ async def js8callMonitor():
sock.connect((js8call_tcp_address, js8call_tcp_port))
sock.setblocking(False)
logger.info("System: RadioMon: Connected to JS8Call API")
logger.info("RadioMon: Connected to JS8Call API")
buffer = ""
while True:
try:
data = sock.recv(4096)
if not data:
logger.warning("System: RadioMon: JS8Call connection closed")
logger.warning("RadioMon: JS8Call connection closed")
break
buffer += data.decode('utf-8', errors='ignore')
@@ -603,34 +570,34 @@ async def js8callMonitor():
if text and check_callsign_match(text, watched_callsigns):
msg_text = f"JS8Call from {from_call}: {text} (SNR: {snr:+d}dB)"
logger.info(f"System: RadioMon: {msg_text}")
logger.info(f"RadioMon: {msg_text}")
js8callMsgQueue.append(msg_text)
except json.JSONDecodeError:
logger.debug(f"System: RadioMon: Invalid JSON from JS8Call: {line[:100]}")
logger.debug(f"RadioMon: Invalid JSON from JS8Call: {line[:100]}")
except Exception as e:
logger.debug(f"System: RadioMon: Error processing JS8Call message: {e}")
logger.debug(f"RadioMon: Error processing JS8Call message: {e}")
except BlockingIOError:
await asyncio.sleep(0.1)
except socket.timeout:
await asyncio.sleep(0.1)
except Exception as e:
logger.debug(f"System: RadioMon: Error in JS8Call receive loop: {e}")
logger.debug(f"RadioMon: Error in JS8Call receive loop: {e}")
break
sock.close()
logger.warning("System: RadioMon: JS8Call connection lost, reconnecting in 5s...")
logger.warning("RadioMon: JS8Call connection lost, reconnecting in 5s...")
await asyncio.sleep(5)
except socket.timeout:
logger.warning("System: RadioMon: JS8Call connection timeout, retrying in 5s...")
logger.warning("RadioMon: JS8Call connection timeout, retrying in 5s...")
await asyncio.sleep(5)
except Exception as e:
logger.warning(f"System: RadioMon: Error connecting to JS8Call: {e}")
logger.warning(f"RadioMon: Error connecting to JS8Call: {e}")
await asyncio.sleep(10)
except Exception as e:
logger.warning(f"System: RadioMon: Error starting JS8Call monitor: {e}")
logger.error(f"RadioMon: Error starting JS8Call monitor: {e}")
# end of file


@@ -1,13 +1,11 @@
# rss feed module for meshing-around 2025
from modules.log import logger
from modules.settings import rssFeedURL, rssFeedNames, rssMaxItems, rssTruncate, urlTimeoutSeconds, ERROR_FETCHING_DATA, newsAPI_KEY, newsAPIsort
from modules.settings import rssFeedURL, rssFeedNames, rssMaxItems, rssTruncate, urlTimeoutSeconds, ERROR_FETCHING_DATA
import urllib.request
import xml.etree.ElementTree as ET
import html
from html.parser import HTMLParser
import bs4 as bs
import requests
import datetime
# Common User-Agent for all RSS requests
COMMON_USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
@@ -138,41 +136,3 @@ def get_rss_feed(msg):
logger.error(f"Error fetching RSS feed from {feed_url}: {e}")
return ERROR_FETCHING_DATA
def get_newsAPI(user_search="meshtastic"):
# Fetch news from NewsAPI.org
user_search = user_search.strip()
if user_search.lower().startswith("latest"):
user_search = user_search[6:].strip()
if not user_search:
user_search = "meshtastic"
try:
last_week = datetime.datetime.now() - datetime.timedelta(days=7)
newsAPIurl = (
f"https://newsapi.org/v2/everything?"
f"q={user_search}&language=en&from={last_week.strftime('%Y-%m-%d')}&sortBy={newsAPIsort}&pageSize=5&apiKey={newsAPI_KEY}"
)
response = requests.get(newsAPIurl, headers={"User-Agent": COMMON_USER_AGENT}, timeout=urlTimeoutSeconds)
news_data = response.json()
if news_data.get("status") != "ok":
error_message = news_data.get("message", "Unknown error")
logger.error(f"NewsAPI error: {error_message}")
return ERROR_FETCHING_DATA
logger.debug(f"System: NewsAPI Searching for '{user_search}' got {news_data.get('totalResults', 0)} results")
articles = news_data.get("articles", [])[:3]
news_list = []
for article in articles:
title = article.get("title", "No Title")
url = article.get("url", "")
description = article.get("description", '')
news_list.append(f"📰{title}\n{description}")
# Make a nice newspaper style output
msg = f"🗞️:"
for item in news_list:
msg += item + "\n\n"
return msg.strip()
except Exception as e:
logger.error(f"System: NewsAPI fetching news: {e}")
return ERROR_FETCHING_DATA


@@ -32,7 +32,6 @@ cmdHistory = [] # list to hold the command history for lheard and history comman
msg_history = [] # list to hold the message history for the messages command
max_bytes = 200 # Meshtastic has ~237 byte limit, use conservative 200 bytes for message content
voxMsgQueue = [] # queue for VOX detected messages
tts_read_queue = [] # queue for TTS messages
wsjtxMsgQueue = [] # queue for WSJT-X detected messages
js8callMsgQueue = [] # queue for JS8Call detected messages
# Game trackers
@@ -276,10 +275,12 @@ try:
rssMaxItems = config['general'].getint('rssMaxItems', 3) # default 3 items
rssTruncate = config['general'].getint('rssTruncate', 100) # default 100 characters
rssFeedNames = config['general'].get('rssFeedNames', 'default,arrl').split(',')
newsAPI_KEY = config['general'].get('newsAPI_KEY', '') # default empty
newsAPIregion = config['general'].get('newsAPIregion', 'us') # default us
enable_headlines = config['general'].getboolean('enableNewsAPI', False) # default False
newsAPIsort = config['general'].get('sort_by', 'relevancy') # default relevancy
# emergency response
emergency_responder_enabled = config['emergencyHandler'].getboolean('enabled', False)
emergency_responder_alert_channel = config['emergencyHandler'].getint('alert_channel', 2) # default 2
emergency_responder_alert_interface = config['emergencyHandler'].getint('alert_interface', 1) # default 1
emergency_responder_email = config['emergencyHandler'].get('email', '').split(',')
# sentry
sentry_enabled = config['sentry'].getboolean('SentryEnabled', False) # default False
@@ -314,54 +315,34 @@ try:
n2yoAPIKey = config['location'].get('n2yoAPIKey', '') # default empty
satListConfig = config['location'].get('satList', '25544').split(',') # default 25544 ISS
riverListDefault = config['location'].get('riverList', '').split(',') # default None
useTidePredict = config['location'].getboolean('useTidePredict', False) # default False use NOAA
coastalEnabled = config['location'].getboolean('coastalEnabled', False) # default False
myCoastalZone = config['location'].get('myCoastalZone', None) # default None
coastalForecastDays = config['location'].getint('coastalForecastDays', 3) # default 3 days
# location alerts
eAlertBroadcastEnabled = config['location'].getboolean('eAlertBroadcastEnabled', False) # old deprecated name
ipawsAlertEnabled = config['location'].getboolean('ipawsAlertEnabled', False) # default False new ^
# Keep both in sync for backward compatibility
if eAlertBroadcastEnabled or ipawsAlertEnabled:
eAlertBroadcastEnabled = True
ipawsAlertEnabled = True
wxAlertsEnabled = config['location'].getboolean('NOAAalertsEnabled', True) # default True
emergencyAlertBrodcastEnabled = config['location'].getboolean('eAlertBroadcastEnabled', False) # default False
wxAlertBroadcastEnabled = config['location'].getboolean('wxAlertBroadcastEnabled', False) # default False
volcanoAlertBroadcastEnabled = config['location'].getboolean('volcanoAlertBroadcastEnabled', False) # default False
enableGBalerts = config['location'].getboolean('enableGBalerts', False) # default False
enableDEalerts = config['location'].getboolean('enableDEalerts', False) # default False
wxAlertsEnabled = config['location'].getboolean('NOAAalertsEnabled', True) # default True
ignoreEASenable = config['location'].getboolean('ignoreEASenable', False) # default False
ignoreEASwords = config['location'].get('ignoreEASwords', 'test,advisory').split(',') # default test,advisory
ignoreFEMAenable = config['location'].getboolean('ignoreFEMAenable', True) # default True
ignoreFEMAwords = config['location'].get('ignoreFEMAwords', 'test,exercise').split(',') # default test,exercise
ignoreUSGSEnable = config['location'].getboolean('ignoreVolcanoEnable', False) # default False
ignoreUSGSWords = config['location'].get('ignoreVolcanoWords', 'test,advisory').split(',') # default test,advisory
myRegionalKeysDE = config['location'].get('myRegionalKeysDE', '110000000000').split(',') # default city Berlin
forecastDuration = config['location'].getint('NOAAforecastDuration', 4) # NOAA forecast days
numWxAlerts = config['location'].getint('NOAAalertCount', 2) # default 2 alerts
enableExtraLocationWx = config['location'].getboolean('enableExtraLocationWx', False) # default False
myStateFIPSList = config['location'].get('myFIPSList', '').split(',') # default empty
mySAMEList = config['location'].get('mySAMEList', '').split(',') # default empty
myRegionalKeysDE = config['location'].get('myRegionalKeysDE', '110000000000').split(',') # default city Berlin
eAlertBroadcastChannel = config['location'].getint('eAlertBroadcastChannel', '') # default empty
# any US alerts enabled
usAlerts = (
ipawsAlertEnabled or
wxAlertBroadcastEnabled or
volcanoAlertBroadcastEnabled or
wxAlertsEnabled or
eAlertBroadcastEnabled
)
ignoreFEMAenable = config['location'].getboolean('ignoreFEMAenable', True) # default True
ignoreFEMAwords = config['location'].get('ignoreFEMAwords', 'test,exercise').split(',') # default test,exercise
wxAlertBroadcastChannel = config['location'].get('wxAlertBroadcastCh', '2').split(',') # default Channel 2
emergencyAlertBroadcastCh = config['location'].get('eAlertBroadcastCh', '2').split(',') # default Channel 2
volcanoAlertBroadcastEnabled = config['location'].getboolean('volcanoAlertBroadcastEnabled', False) # default False
volcanoAlertBroadcastChannel = config['location'].get('volcanoAlertBroadcastCh', '2').split(',') # default Channel 2
ignoreUSGSEnable = config['location'].getboolean('ignoreVolcanoEnable', False) # default False
ignoreUSGSWords = config['location'].get('ignoreVolcanoWords', 'test,advisory').split(',') # default test,advisory
# emergency response
emergency_responder_enabled = config['emergencyHandler'].getboolean('enabled', False)
emergency_responder_alert_channel = config['emergencyHandler'].getint('alert_channel', 2) # default 2
emergency_responder_alert_interface = config['emergencyHandler'].getint('alert_interface', 1) # default 1
emergency_responder_email = config['emergencyHandler'].get('email', '').split(',')
# bbs
bbs_enabled = config['bbs'].getboolean('enabled', False)
bbsdb = config['bbs'].get('bbsdb', 'data/bbsdb.pkl')
@@ -375,7 +356,6 @@ try:
checklist_enabled = config['checklist'].getboolean('enabled', False)
checklist_db = config['checklist'].get('checklist_db', 'data/checklist.db')
reverse_in_out = config['checklist'].getboolean('reverse_in_out', False)
checklist_auto_approve = config['checklist'].getboolean('auto_approve', True) # default True
# qrz hello
qrz_hello_enabled = config['qrz'].getboolean('enabled', False)
@@ -438,9 +418,6 @@ try:
voxOnTrapList = config['radioMon'].getboolean('voxOnTrapList', False) # default False
voxTrapList = config['radioMon'].get('voxTrapList', 'chirpy').split(',') # default chirpy
voxEnableCmd = config['radioMon'].getboolean('voxEnableCmd', True) # default True
meshagesTTS = config['radioMon'].getboolean('meshagesTTS', False) # default False
ttsChannels = config['radioMon'].get('ttsChannels', '2').split(',') # default Channel 2
ttsnoWelcome = config['radioMon'].getboolean('ttsnoWelcome', False) # default False
# WSJT-X and JS8Call monitoring
wsjtx_detection_enabled = config['radioMon'].getboolean('wsjtxDetectionEnabled', False) # default WSJT-X detection disabled


@@ -114,7 +114,7 @@ if location_enabled:
help_message = help_message + ", howtall"
# NOAA alerts needs location module
if wxAlertBroadcastEnabled or ipawsAlertEnabled or volcanoAlertBroadcastEnabled or eAlertBroadcastEnabled: # eAlertBroadcastEnabled deprecated
if wxAlertBroadcastEnabled or emergencyAlertBrodcastEnabled or volcanoAlertBroadcastEnabled:
from modules.locationdata import * # from the spudgunman/meshing-around repo
# limited subset, this should be done better but eh..
trap_list = trap_list + ("wx", "wxa", "wxalert", "ea", "ealert", "valert")
@@ -125,6 +125,10 @@ if coastalEnabled:
from modules.locationdata import * # from the spudgunman/meshing-around repo
trap_list = trap_list + ("mwx","tide",)
help_message = help_message + ", mwx, tide"
if useTidePredict:
from modules import xtide
trap_list = trap_list + ("tide",)
help_message = help_message + ", tide"
# BBS Configuration
if bbs_enabled:
@@ -153,14 +157,10 @@ if wikipedia_enabled or use_kiwix_server:
help_message = help_message + ", wiki"
# RSS Feed Configuration
if rssEnable or enable_headlines:
if rssEnable:
from modules.rss import * # from the spudgunman/meshing-around repo
if rssEnable:
trap_list = trap_list + ("readrss",)
help_message = help_message + ", readrss"
if enable_headlines:
trap_list = trap_list + ("latest",)
help_message = help_message + ", latest"
trap_list = trap_list + ("readrss",)
help_message = help_message + ", readrss"
# LLM Configuration
if llm_enabled:
@@ -292,6 +292,13 @@ if inventory_enabled:
trap_list = trap_list + trap_list_inventory # items item, itemlist, itemsell, etc.
help_message = help_message + ", item, cart"
# Radio Monitor Configuration
if radio_detection_enabled:
from modules.radio import * # from the spudgunman/meshing-around repo
if voxDetectionEnabled:
from modules.radio import * # from the spudgunman/meshing-around repo
# File Monitor Configuration
if file_monitor_enabled or read_news_enabled or bee_enabled or enable_runShellCmd or cmdShellSentryAlerts:
from modules.filemon import * # from the spudgunman/meshing-around repo
@@ -1108,70 +1115,136 @@ def handleMultiPing(nodeID=0, deviceID=1):
multiPingList.pop(j)
break
# Alert broadcasting initialization
last_alerts = {
"overdue": {"time": 0, "message": ""},
"fema": {"time": 0, "message": ""},
"uk": {"time": 0, "message": ""},
"de": {"time": 0, "message": ""},
"wx": {"time": 0, "message": ""},
"volcano": {"time": 0, "message": ""},
}
def should_send_alert(alert_type, new_message, min_interval=1):
now = time.time()
last = last_alerts[alert_type]
# Only send if enough time has passed AND the message is different
if (now - last["time"]) > min_interval and new_message != last["message"]:
last_alerts[alert_type]["time"] = now
last_alerts[alert_type]["message"] = new_message
return True
return False
priorVolcanoAlert = ""
priorEmergencyAlert = ""
priorWxAlert = ""
def handleAlertBroadcast(deviceID=1):
try:
alertUk = alertDe = alertFema = wxAlert = volcanoAlert = overdueAlerts = NO_ALERTS
global priorVolcanoAlert, priorEmergencyAlert, priorWxAlert
alertUk = NO_ALERTS
alertDe = NO_ALERTS
alertFema = NO_ALERTS
wxAlert = NO_ALERTS
volcanoAlert = NO_ALERTS
overdueAlerts = NO_ALERTS
alertWx = False
# only allow API call every 20 minutes
# the watchdog will call this function 3 times, seeing possible throttling on the API
clock = datetime.now()
# Overdue check-in alert
if checklist_enabled:
overdueAlerts = format_overdue_alert()
if overdueAlerts:
if should_send_alert("overdue", overdueAlerts, min_interval=300): # 5 minutes interval for overdue alerts
send_message(overdueAlerts, emergency_responder_alert_channel, 0, emergency_responder_alert_interface)
# Only allow API call every 20 minutes
if not (clock.minute % 20 == 0 and clock.second <= 17):
if clock.minute % 20 != 0:
return False
# Collect alerts
if clock.second > 17:
return False
# check for alerts
if wxAlertBroadcastEnabled:
alertWx = alertBrodcastNOAA()
if alertWx:
wxAlert = f"🚨 {alertWx[1]} EAS-WX ALERT: {alertWx[0]}"
if eAlertBroadcastEnabled or ipawsAlertEnabled:
alertFema = getIpawsAlert(latitudeValue, longitudeValue, shortAlerts=True)
if emergencyAlertBrodcastEnabled:
if enableDEalerts:
alertDe = get_nina_alerts()
if enableGBalerts:
alertUk = get_govUK_alerts()
else:
# default USA alerts
alertFema = getIpawsAlert(latitudeValue,longitudeValue, shortAlerts=True)
if checklist_enabled:
overdueAlerts = format_overdue_alert()
# format alert
if alertWx:
wxAlert = f"🚨 {alertWx[1]} EAS-WX ALERT: {alertWx[0]}"
else:
wxAlert = False
femaAlert = alertFema
ukAlert = alertUk
deAlert = alertDe
if overdueAlerts != NO_ALERTS and overdueAlerts != None:
logger.debug("System: Adding overdue checkin to emergency alerts")
if femaAlert and NO_ALERTS not in femaAlert and ERROR_FETCHING_DATA not in femaAlert:
femaAlert += "\n\n" + overdueAlerts
elif ukAlert and NO_ALERTS not in ukAlert and ERROR_FETCHING_DATA not in ukAlert:
ukAlert += "\n\n" + overdueAlerts
elif deAlert and NO_ALERTS not in deAlert and ERROR_FETCHING_DATA not in deAlert:
deAlert += "\n\n" + overdueAlerts
else:
# only overdue alerts to send
if overdueAlerts != "" and overdueAlerts is not None and overdueAlerts != NO_ALERTS:
if overdueAlerts != priorEmergencyAlert:
priorEmergencyAlert = overdueAlerts
else:
return False
if isinstance(emergencyAlertBroadcastCh, list):
for channel in emergencyAlertBroadcastCh:
send_message(overdueAlerts, int(channel), 0, deviceID)
else:
send_message(overdueAlerts, emergencyAlertBroadcastCh, 0, deviceID)
return True
if emergencyAlertBrodcastEnabled:
if NO_ALERTS not in femaAlert and ERROR_FETCHING_DATA not in femaAlert:
if femaAlert != priorEmergencyAlert:
priorEmergencyAlert = femaAlert
else:
return False
if isinstance(emergencyAlertBroadcastCh, list):
for channel in emergencyAlertBroadcastCh:
send_message(femaAlert, int(channel), 0, deviceID)
else:
send_message(femaAlert, emergencyAlertBroadcastCh, 0, deviceID)
return True
if NO_ALERTS not in ukAlert:
if ukAlert != priorEmergencyAlert:
priorEmergencyAlert = ukAlert
else:
return False
if isinstance(emergencyAlertBroadcastCh, list):
for channel in emergencyAlertBroadcastCh:
send_message(ukAlert, int(channel), 0, deviceID)
else:
send_message(ukAlert, emergencyAlertBroadcastCh, 0, deviceID)
return True
if NO_ALERTS not in alertDe:
if deAlert != priorEmergencyAlert:
priorEmergencyAlert = deAlert
else:
return False
if isinstance(emergencyAlertBroadcastCh, list):
for channel in emergencyAlertBroadcastCh:
send_message(deAlert, int(channel), 0, deviceID)
else:
send_message(deAlert, emergencyAlertBroadcastCh, 0, deviceID)
return True
if wxAlertBroadcastEnabled:
if wxAlert:
if wxAlert != priorWxAlert:
priorWxAlert = wxAlert
else:
return False
if isinstance(wxAlertBroadcastChannel, list):
for channel in wxAlertBroadcastChannel:
send_message(wxAlert, int(channel), 0, deviceID)
else:
send_message(wxAlert, wxAlertBroadcastChannel, 0, deviceID)
return True
if volcanoAlertBroadcastEnabled:
volcanoAlert = get_volcano_usgs(latitudeValue, longitudeValue)
if enableDEalerts:
deAlerts = get_nina_alerts()
if usAlerts:
alert_types = [
("fema", alertFema, ipawsAlertEnabled),
("wx", wxAlert, wxAlertBroadcastEnabled),
("volcano", volcanoAlert, volcanoAlertBroadcastEnabled),]
if enableDEalerts:
alert_types = [("de", deAlerts, enableDEalerts)]
for alert_type, alert_msg, enabled in alert_types:
if enabled and alert_msg and NO_ALERTS not in alert_msg and ERROR_FETCHING_DATA not in alert_msg:
if should_send_alert(alert_type, alert_msg):
send_message(alert_msg, emergency_responder_alert_channel, 0, emergency_responder_alert_interface)
if eAlertBroadcastChannel != '':
send_message(alert_msg, eAlertBroadcastChannel, 0, emergency_responder_alert_interface)
if volcanoAlert and NO_ALERTS not in volcanoAlert and ERROR_FETCHING_DATA not in volcanoAlert:
# check if the alert is different from the last one
if volcanoAlert != priorVolcanoAlert:
priorVolcanoAlert = volcanoAlert
if isinstance(volcanoAlertBroadcastChannel, list):
for channel in volcanoAlertBroadcastChannel:
send_message(volcanoAlert, int(channel), 0, deviceID)
else:
send_message(volcanoAlert, volcanoAlertBroadcastChannel, 0, deviceID)
return True
except Exception as e:
logger.error(f"System: Error in handleAlertBroadcast: {e}")
return False
@@ -1481,11 +1554,10 @@ def consumeMetadata(packet, rxNode=0, channel=-1):
# Track highest altitude 🚀 (also log if over highfly_altitude threshold)
if position_data.get('altitude') is not None:
altitude = position_data['altitude']
if altitude > highfly_altitude:
if altitude > meshLeaderboard['highestAltitude']['value']:
meshLeaderboard['highestAltitude'] = {'nodeID': nodeID, 'value': altitude, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: 🚀 New altitude record: {altitude}m from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
if altitude > meshLeaderboard['highestAltitude']['value']:
meshLeaderboard['highestAltitude'] = {'nodeID': nodeID, 'value': altitude, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: 🚀 New altitude record: {altitude}m from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
# Track tallest node 🪜 (under the highfly_altitude limit by 100m)
if position_data.get('altitude') is not None:
altitude = position_data['altitude']
@@ -1914,8 +1986,7 @@ def get_sysinfo(nodeID=0, deviceID=1):
return sysinfo
async def handleSignalWatcher():
from modules.radio import signalWatcher
from modules.settings import sigWatchBroadcastCh, sigWatchBroadcastInterface, lastHamLibAlert
global lastHamLibAlert
# monitor rigctld for signal strength and frequency
while True:
msg = await signalWatcher()
@@ -2141,40 +2212,17 @@ async def handleSentinel(deviceID):
handleSentinel_loop = 0 # Reset if nothing detected
async def process_vox_queue():
# process the voxMsgQueue
from modules.settings import sigWatchBroadcastCh, sigWatchBroadcastInterface, voxMsgQueue
items_to_process = voxMsgQueue[:]
voxMsgQueue.clear()
if len(items_to_process) > 0:
logger.debug(f"System: Processing {len(items_to_process)} items in voxMsgQueue")
for item in items_to_process:
message = item
for channel in sigWatchBroadcastCh:
if antiSpam and int(channel) != publicChannel:
send_message(message, int(channel), 0, sigWatchBroadcastInterface)
async def handleTTS():
from modules.radio import generate_and_play_tts, available_voices
from modules.settings import ttsnoWelcome, tts_read_queue
logger.debug("System: Handle TTS started")
if not ttsnoWelcome:
logger.debug("System: Playing TTS welcome message to disable set 'ttsnoWelcome = True' in settings.ini")
await generate_and_play_tts("Hey it's Cheerpy! Thanks for using Meshing-Around on Meshtastic!", available_voices[0])
try:
while True:
if tts_read_queue:
tts_read = tts_read_queue.pop(0)
voice = available_voices[0]
# ensure the tts_read ends with a punctuation mark
if not tts_read.endswith(('.', '!', '?')):
tts_read += '.'
try:
await generate_and_play_tts(tts_read, voice)
except Exception as e:
logger.error(f"System: TTShandler error: {e}")
await asyncio.sleep(1)
except Exception as e:
logger.critical(f"System: handleTTS crashed: {e}")
# process the voxMsgQueue
global voxMsgQueue
items_to_process = voxMsgQueue[:]
voxMsgQueue.clear()
if len(items_to_process) > 0:
logger.debug(f"System: Processing {len(items_to_process)} items in voxMsgQueue")
for item in items_to_process:
message = item
for channel in sigWatchBroadcastCh:
if antiSpam and int(channel) != publicChannel:
send_message(message, int(channel), 0, sigWatchBroadcastInterface)
async def watchdog():
global localTelemetryData, retry_int1, retry_int2, retry_int3, retry_int4, retry_int5, retry_int6, retry_int7, retry_int8, retry_int9
@@ -2208,7 +2256,7 @@ async def watchdog():
handleMultiPing(0, i)
if anyAlertBroadcastEnabled or checklist_enabled:
if wxAlertBroadcastEnabled or emergencyAlertBrodcastEnabled or volcanoAlertBroadcastEnabled or checklist_enabled:
handleAlertBroadcast(i)
intData = displayNodeTelemetry(0, i)


@@ -28,7 +28,7 @@ if os.path.isfile(checkall_path):
# List of module names to exclude
exclude = ['test_bot','udp', 'system', 'log', 'gpio', 'web',]
exclude = ['test_bot','udp', 'system', 'log', 'gpio', 'web','test_xtide',]
available_modules = [
m.name for m in pkgutil.iter_modules([modules_path])
if m.name not in exclude]


@@ -1,78 +0,0 @@
# modules/test_checklist.py
import os
import sys
# Add the parent directory to sys.path to allow module imports
parent_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
sys.path.insert(0, parent_path)
import unittest
from unittest.mock import patch
from checklist import process_checklist_command, initialize_checklist_database
import time
class TestProcessChecklistCommand(unittest.TestCase):
def setUp(self):
# Always start with a fresh DB
initialize_checklist_database()
# Patch settings for consistent test behavior
patcher1 = patch('modules.checklist.reverse_in_out', False)
patcher2 = patch('modules.checklist.bbs_ban_list', [])
patcher3 = patch('modules.checklist.bbs_admin_list', ['999'])
self.mock_reverse = patcher1.start()
self.mock_ban = patcher2.start()
self.mock_admin = patcher3.start()
self.addCleanup(patcher1.stop)
self.addCleanup(patcher2.stop)
self.addCleanup(patcher3.stop)
def test_checkin_command(self):
result = process_checklist_command(1, "checkin test note", name="TESTUSER", location=["loc"])
self.assertIn("Checked✅In: TESTUSER", result)
def test_checkout_command(self):
# First checkin
process_checklist_command(1, "checkin test note", name="TESTUSER", location=["loc"])
# Then checkout
result = process_checklist_command(1, "checkout", name="TESTUSER", location=["loc"])
self.assertIn("Checked⌛Out: TESTUSER", result)
def test_checkin_with_interval(self):
result = process_checklist_command(1, "checkin 15 hiking", name="TESTUSER", location=["loc"])
self.assertIn("monitoring every 15min", result)
def test_checkout_all(self):
# Multiple checkins
process_checklist_command(1, "checkin note1", name="TESTUSER", location=["loc"])
process_checklist_command(1, "checkin note2", name="TESTUSER", location=["loc"])
result = process_checklist_command(1, "checkout all", name="TESTUSER", location=["loc"])
self.assertIn("Checked out", result)
self.assertIn("check-ins for TESTUSER", result)
def test_checklistapprove_nonadmin(self):
process_checklist_command(1, "checkin foo", name="FOO", location=["loc"])
result = process_checklist_command(2, "checklistapprove 1", name="NOTADMIN", location=["loc"])
self.assertNotIn("approved", result)
def test_checklistdeny_nonadmin(self):
process_checklist_command(1, "checkin foo", name="FOO", location=["loc"])
result = process_checklist_command(2, "checklistdeny 1", name="NOTADMIN", location=["loc"])
self.assertNotIn("denied", result)
def test_help_command(self):
result = process_checklist_command(1, "checklist ?", name="TESTUSER", location=["loc"])
self.assertIn("Command: checklist", result)
def test_checklist_listing(self):
process_checklist_command(1, "checkin foo", name="FOO", location=["loc"])
result = process_checklist_command(1, "checklist", name="FOO", location=["loc"])
self.assertIsInstance(result, str)
self.assertIn("checked-In", result)
def test_invalid_command(self):
result = process_checklist_command(1, "foobar", name="FOO", location=["loc"])
self.assertEqual(result, "Invalid command.")
if __name__ == "__main__":
unittest.main()

modules/test_xtide.py (new file, 135 lines)

@@ -0,0 +1,135 @@
#!/usr/bin/env python3
"""
Test script for xtide module
Tests both NOAA (disabled) and tidepredict (when available) tide predictions
"""
import sys
import os
# Add parent directory to path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
def test_xtide_import():
"""Test that xtide module can be imported"""
print("Testing xtide module import...")
try:
from modules import xtide
print(f"✓ xtide module imported successfully")
print(f" - tidepredict available: {xtide.TIDEPREDICT_AVAILABLE}")
return True
except Exception as e:
print(f"✗ Failed to import xtide: {e}")
return False
def test_locationdata_import():
"""Test that modified locationdata can be imported"""
print("\nTesting locationdata module import...")
try:
from modules import locationdata
print(f"✓ locationdata module imported successfully")
return True
except Exception as e:
print(f"✗ Failed to import locationdata: {e}")
return False
def test_settings():
"""Test that settings has useTidePredict option"""
print("\nTesting settings configuration...")
try:
from modules import settings as my_settings
has_setting = hasattr(my_settings, 'useTidePredict')
print(f"✓ settings module loaded")
print(f" - useTidePredict setting available: {has_setting}")
if has_setting:
print(f" - useTidePredict value: {my_settings.useTidePredict}")
return True
except Exception as e:
print(f"✗ Failed to load settings: {e}")
return False
def test_noaa_fallback():
"""Test NOAA API fallback (without enabling tidepredict)"""
print("\nTesting NOAA API (default mode)...")
try:
from modules import locationdata
from modules import settings as my_settings
# Test with Seattle coordinates (should use NOAA)
lat = 47.6062
lon = -122.3321
print(f" Testing with Seattle coordinates: {lat}, {lon}")
print(f" useTidePredict = {my_settings.useTidePredict}")
# Note: This will fail if we can't reach NOAA, but that's expected
result = locationdata.get_NOAAtide(str(lat), str(lon))
if result and "Error" not in result:
print(f"✓ NOAA API returned data")
print(f" First 100 chars: {result[:100]}")
return True
else:
print(f"⚠ NOAA API returned: {result[:100]}")
return True # Still pass as network might not be available
except Exception as e:
print(f"⚠ NOAA test encountered expected issue: {e}")
return True # Expected in test environment
def test_parse_coords():
"""Test coordinate parsing function"""
print("\nTesting coordinate parsing...")
try:
from modules.xtide import parse_station_coords
test_cases = [
(("43-36S", "172-43E"), (-43.6, 172.71666666666667)),
(("02-45N", "072-21E"), (2.75, 72.35)),
(("02-45S", "072-21W"), (-2.75, -72.35)),
]
all_passed = True
for (lat_str, lon_str), (expected_lat, expected_lon) in test_cases:
result_lat, result_lon = parse_station_coords(lat_str, lon_str)
if abs(result_lat - expected_lat) < 0.01 and abs(result_lon - expected_lon) < 0.01:
print(f"✓ {lat_str}, {lon_str} -> {result_lat:.2f}, {result_lon:.2f}")
else:
print(f"✗ {lat_str}, {lon_str} -> expected {expected_lat}, {expected_lon}, got {result_lat}, {result_lon}")
all_passed = False
return all_passed
except Exception as e:
print(f"✗ Coordinate parsing test failed: {e}")
import traceback
traceback.print_exc()
return False
def main():
"""Run all tests"""
print("=" * 60)
print("xtide Module Test Suite")
print("=" * 60)
results = []
results.append(("Import xtide", test_xtide_import()))
results.append(("Import locationdata", test_locationdata_import()))
results.append(("Settings configuration", test_settings()))
results.append(("Parse coordinates", test_parse_coords()))
results.append(("NOAA fallback", test_noaa_fallback()))
print("\n" + "=" * 60)
print("Test Results Summary")
print("=" * 60)
passed = sum(1 for _, result in results if result)
total = len(results)
for test_name, result in results:
status = "✓ PASS" if result else "✗ FAIL"
print(f"{status}: {test_name}")
print(f"\n{passed}/{total} tests passed")
return passed == total
if __name__ == "__main__":
success = main()
sys.exit(0 if success else 1)

modules/xtide.md (new file, 129 lines)

@@ -0,0 +1,129 @@
# xtide Module - Global Tide Predictions
This module provides global tide prediction capabilities using the [tidepredict](https://github.com/windcrusader/tidepredict) library, which uses the University of Hawaii's Research Quality Dataset for worldwide tide station coverage.
## Features
- Global tide predictions (not limited to US locations like NOAA)
- Offline predictions once station data is initialized
- Automatic selection of nearest tide station
- Compatible with existing tide command interface
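The "nearest tide station" feature reduces to a great-circle distance search over the station list. A minimal sketch using the haversine formula (the station names and coordinates below are illustrative placeholders, not the module's actual data):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_station(lat, lon, stations):
    """Return the station dict closest to (lat, lon)."""
    return min(stations, key=lambda s: haversine_km(lat, lon, s["lat"], s["lon"]))

# Hypothetical station list for illustration
stations = [
    {"name": "Seattle", "lat": 47.60, "lon": -122.33},
    {"name": "Port Angeles", "lat": 48.12, "lon": -123.44},
]
print(nearest_station(48.50, -123.0, stations)["name"])  # Port Angeles
```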
## Installation
1. Install the tidepredict library (this takes roughly 300-500 MB of disk space):
```bash
pip install tidepredict
```
Note: if pip warns about an externally managed environment (common on Debian-based systems), you can override it and install anyway with:
```bash
pip install tidepredict --break-system-packages
```
2. Enable in `config.ini`:
```ini
[location]
useTidePredict = True
```
## First-Time Setup
On first use, tidepredict needs to download station data from the University of Hawaii FTP server. This requires internet access and happens automatically when you:
1. Run the tide command for the first time with `useTidePredict = True`
2. Or manually initialize with:
```bash
python3 -m tidepredict -l <location> -genharm
```
The station data is cached locally in `~/.tidepredict/` for offline use afterward.
No other downloads happen automatically; after initialization, predictions run fully offline.
## Usage
Once enabled, the existing `tide` command will automatically use tidepredict for global locations:
```
tide
```
The module will:
1. Find the nearest tide station to your GPS coordinates
2. Load harmonic constituents for that station
3. Calculate tide predictions for today
4. Format output compatible with mesh display
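Step 1 above (finding the nearest station) can be sketched with the same rough squared-degree distance the module uses; the station names and coordinates below are illustrative, not entries from the real UHSLC database:

```python
# Minimal sketch of nearest-station selection using a rough
# squared-degree distance (imprecise near the poles, but adequate
# for picking the closest gauge). Station entries are illustrative.
stations = [
    ("Seattle", 47.60, -122.33),
    ("Port Angeles", 48.12, -123.44),
    ("Honolulu", 21.31, -157.87),
]

def nearest_station(lat, lon):
    # Return the station tuple minimizing (dlat^2 + dlon^2)
    return min(stations, key=lambda s: (lat - s[1]) ** 2 + (lon - s[2]) ** 2)

print(nearest_station(48.50, -123.0)[0])  # → Port Angeles
```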
## Configuration
### config.ini Options
```ini
[location]
# Enable global tide predictions using tidepredict
useTidePredict = True
# Standard location settings still apply
lat = 48.50
lon = -123.0
useMetric = False
```
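A minimal sketch of reading these options with Python's `configparser` (how the bot consumes them is an assumption; the section and option names match the example above):

```python
import configparser

# Sample config matching the [location] section documented above
sample = """
[location]
useTidePredict = True
lat = 48.50
lon = -123.0
useMetric = False
"""

config = configparser.ConfigParser()
config.read_string(sample)

# getboolean() accepts True/False, yes/no, on/off, 1/0
use_tidepredict = config.getboolean("location", "useTidePredict", fallback=False)
lat = config.getfloat("location", "lat")
print(use_tidepredict, lat)  # → True 48.5
```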
## Fallback Behavior
If tidepredict is not available or encounters errors, the module will automatically fall back to the NOAA API for US locations.
## Limitations
- First-time setup requires internet access to download station database
- Station coverage depends on University of Hawaii's dataset
- Predictions may be less accurate for locations far from tide stations
## Troubleshooting
### "Station database not initialized" error
This means the station data hasn't been downloaded yet. Ensure internet access and:
```bash
# Test station download
python3 -m tidepredict -l Sydney
# Or manually run initialization
python3 -c "from tidepredict import process_station_list; process_station_list.create_station_dataframe()"
```
### "No tide station found nearby"
The module couldn't find a nearby station. This may happen if:
- You're in a location without nearby tide monitoring stations
- The station database hasn't been initialized
- Network issues prevented loading the station list
Tide Station Map: [https://uhslc.soest.hawaii.edu/network/](https://uhslc.soest.hawaii.edu/network/)
- Click on "Tide Gauges"
- Find your location on the map
- Locate the closest gauge and note its name (typically the city name)
To manually download data for a station, first locate the needed station name:
- `python -m tidepredict -l "Port Angeles"` finds a station
- `python -m tidepredict -l "Port Angeles" -genharm` downloads that station's data file
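The station list records coordinates in a degrees-minutes format like "43-36S" or "172-43E" (the format `parse_station_coords` in `modules/xtide.py` handles). A small standalone parser in the same style, as a sketch rather than the library's API:

```python
def parse_coord(s: str) -> float:
    # "43-36S" means 43 degrees 36 minutes South -> -43.6
    deg, rest = s.split('-')
    minutes, hemisphere = rest[:-1], rest[-1]
    value = float(deg) + float(minutes) / 60.0
    # South and West hemispheres are negative
    return -value if hemisphere in ('S', 'W') else value

print(parse_coord("43-36S"))   # → -43.6
print(parse_coord("172-43E"))
```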
## Data Source
Tide predictions are based on harmonic analysis of historical tide data from:
- University of Hawaii Sea Level Center (UHSLC)
- Research Quality Dataset
- Global coverage with 600+ stations
## References
- [tidepredict GitHub](https://github.com/windcrusader/tidepredict)
- [UHSLC Data](https://uhslc.soest.hawaii.edu/)
- [pytides](https://github.com/sam-cox/pytides) - Underlying tide calculation library

modules/xtide.py Normal file

@@ -0,0 +1,202 @@
# xtide.py - Global tide prediction using tidepredict library
# K7MHI Kelly Keeton 2025
import json
from datetime import datetime, timedelta
from modules.log import logger
import modules.settings as my_settings
try:
from tidepredict import processdata, process_station_list, constants, timefunc
from tidepredict.tide import Tide
import pandas as pd
TIDEPREDICT_AVAILABLE = True
except ImportError:
TIDEPREDICT_AVAILABLE = False
logger.error("xtide: tidepredict module not installed. Install with: pip install tidepredict")
def get_nearest_station(lat, lon):
"""
Find the nearest tide station to the given lat/lon coordinates.
Returns a tuple (station_code, station_name, country), where station_code looks like 'h001a', or None if not found.
"""
if not TIDEPREDICT_AVAILABLE:
return None
try:
# Read the station list
try:
stations = pd.read_csv(constants.STATIONFILE)
except FileNotFoundError:
# If station file doesn't exist, create it (requires network)
logger.info("xtide: Creating station database from online source (requires network)")
try:
stations = process_station_list.create_station_dataframe()
except Exception as net_error:
logger.error(f"xtide: Failed to download station database: {net_error}")
return None
if stations.empty:
logger.error("xtide: No stations found in database")
return None
# Calculate distance to each station
# Using simple haversine-like calculation
def calc_distance(row):
try:
# Parse lat/lon from the format like "43-36S", "172-43E"
station_lat, station_lon = parse_station_coords(row['Lat'], row['Lon'])
# Simple distance calculation (not precise but good enough)
dlat = lat - station_lat
dlon = lon - station_lon
return (dlat**2 + dlon**2)**0.5
except Exception:
return float('inf')
stations['distance'] = stations.apply(calc_distance, axis=1)
# Find the nearest station
nearest = stations.loc[stations['distance'].idxmin()]
if nearest['distance'] > 10: # More than ~10 degrees away, might be too far
logger.warning(f"xtide: Nearest station is {nearest['distance']:.1f}° away at {nearest['loc_name']}")
station_code = "h" + nearest['stat_idx'].lower()
logger.debug(f"xtide: Found nearest station: {nearest['loc_name']} ({station_code}) at {nearest['distance']:.2f}° away")
return station_code, nearest['loc_name'], nearest['country']
except Exception as e:
logger.error(f"xtide: Error finding nearest station: {e}")
return None
def parse_station_coords(lat_str, lon_str):
"""
Parse station coordinates from format like "43-36S", "172-43E"
Returns tuple of (latitude, longitude) as floats
"""
try:
# Parse latitude
lat_parts = lat_str.split('-')
lat_deg = float(lat_parts[0])
lat_min = float(lat_parts[1][:-1]) # Remove N/S
lat_dir = lat_parts[1][-1] # Get N/S
lat_val = lat_deg + lat_min/60.0
if lat_dir == 'S':
lat_val = -lat_val
# Parse longitude
lon_parts = lon_str.split('-')
lon_deg = float(lon_parts[0])
lon_min = float(lon_parts[1][:-1]) # Remove E/W
lon_dir = lon_parts[1][-1] # Get E/W
lon_val = lon_deg + lon_min/60.0
if lon_dir == 'W':
lon_val = -lon_val
return lat_val, lon_val
except Exception as e:
logger.debug(f"xtide: Error parsing coordinates {lat_str}, {lon_str}: {e}")
return 0.0, 0.0
def get_tide_predictions(lat=0, lon=0, days=1):
"""
Get tide predictions for the given location using tidepredict library.
Returns formatted string with tide predictions.
Parameters:
- lat: Latitude
- lon: Longitude
- days: Number of days to predict (default: 1)
Returns:
- Formatted string with tide predictions or error message
"""
if not TIDEPREDICT_AVAILABLE:
return "module not installed, see logs for more ⚓️"
if float(lat) == 0 and float(lon) == 0:
return "No GPS data for tide prediction"
try:
# Find nearest station
station_info = get_nearest_station(float(lat), float(lon))
if not station_info:
return "No tide station found nearby. Network may be required to download station data."
station_code, station_name, station_country = station_info
# Load station data
station_dict, harmfileloc = process_station_list.read_station_info_file()
# Check if harmonic data exists for this station
if station_code not in station_dict:
logger.warning(f"xtide: No harmonic data. python -m tidepredict -l \"{station_name}\" -genharm")
return f"Tide data not available for {station_name}. Station database may need initialization."
# Reconstruct tide model
tide = processdata.reconstruct_tide_model(station_dict, station_code)
if tide is None:
return f"Tide model unavailable for {station_name}"
# Set up time range (today only)
now = datetime.now()
start_time = now.strftime("%Y-%m-%d 00:00")
end_time = (now + timedelta(days=days)).strftime("%Y-%m-%d 00:00")
# Create time object
timeobj = timefunc.Tidetime(
st_time=start_time,
en_time=end_time,
station_tz=station_dict[station_code].get('tzone', 'UTC')
)
# Get predictions
predictions = processdata.predict_plain(tide, station_dict[station_code], 't', timeobj)
# Format output for mesh
lines = predictions.strip().split('\n')
if len(lines) > 2:
# Skip the header lines and format for mesh display
result = f"Tide: {station_name}\n"
tide_lines = lines[2:] # Skip first 2 header lines
# Format each tide prediction
for line in tide_lines[:8]: # Limit to 8 entries
parts = line.split()
if len(parts) >= 4:
date_str = parts[0]
time_str = parts[1]
height = parts[3]
tide_type = ' '.join(parts[4:])
# Convert to 12-hour format if not using zulu time
if not my_settings.zuluTime:
try:
time_obj = datetime.strptime(time_str, "%H%M")
hour = time_obj.hour
minute = time_obj.minute
if hour >= 12:
time_str = f"{hour-12 if hour > 12 else 12}:{minute:02d} PM"
else:
time_str = f"{hour if hour > 0 else 12}:{minute:02d} AM"
except ValueError:
pass
result += f"{tide_type} {time_str}, {height}\n"
return result.strip()
else:
return predictions
except FileNotFoundError as e:
logger.error(f"xtide: Station data file not found: {e}")
return "Tide station database not initialized. Network access required for first-time setup."
except Exception as e:
logger.error(f"xtide: Error getting tide predictions: {e}")
return f"Error getting tide data: {str(e)}"
def is_enabled():
"""Check if xtide/tidepredict is enabled in config"""
return getattr(my_settings, 'useTidePredict', False) and TIDEPREDICT_AVAILABLE


@@ -1,4 +1,22 @@
## script/runShell.sh
**Purpose:**
`runShell.sh` is a simple demo shell script for the Mesh Bot project. It demonstrates how to execute shell commands within the project's scripting environment.
**Usage:**
Run this script from the terminal to see a basic example of shell scripting in the project context.
```sh
bash script/runShell.sh
```
**What it does:**
- Changes the working directory to the script's location.
- Prints the current directory path and a message indicating the script is running.
- Serves as a template for creating additional shell scripts or automating tasks related to the project.
**Note:**
You can modify this script to add more shell commands or automation steps as needed for your workflow.
## script/runShell.sh
@@ -39,64 +57,4 @@ bash script/sysEnv.sh
- Designed to work on Linux systems, with special handling for Raspberry Pi hardware.
**Note:**
You can expand or modify this script to include additional telemetry or environment checks as needed for your deployment.
## script/configMerge.py
**Purpose:**
`configMerge.py` is a Python script that merges your user configuration (`config.ini`) with the default template (`config.template`). This helps you keep your settings up to date when the default configuration changes, while preserving your customizations.
**Usage:**
Run this script from the project root or the `script/` directory:
```sh
python3 script/configMerge.py
```
**What it does:**
- Backs up your current `config.ini` to `config.bak`.
- Merges new or updated settings from `config.template` into your `config.ini`.
- Saves the merged result as `config_new.ini`.
- Shows a summary of changes between your config and the merged version.
**Note:**
After reviewing the changes, you can replace your `config.ini` with the merged version:
```sh
cp config_new.ini config.ini
```
This script is useful for safely updating your configuration when new options are added upstream.
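The merge step can be sketched with `configparser`: read the template first, then overlay the user's config so customizations win. This is a sketch of the general technique, not the script's actual implementation:

```python
import configparser

# Illustrative stand-ins for config.template and config.ini
template = "[general]\nnewOption = default\nexisting = tmpl\n"
user = "[general]\nexisting = mine\n"

merged = configparser.ConfigParser()
merged.read_string(template)   # defaults from config.template
merged.read_string(user)       # user's config.ini overrides on re-read

print(merged["general"]["existing"])   # → mine
print(merged["general"]["newOption"])  # → default
```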
## script/addFav.py
**Purpose:**
`addFav.py` is a Python script to help manage and add favorite nodes to all interfaces using data from `config.ini`. It supports both bot and roof (client_base) node workflows, making it easier to retain DM keys and manage node lists across devices.
**Usage:**
Run this script from the main repo directory:
```sh
python3 script/addFav.py
```
- To print the contents of `roofNodeList.pkl` and exit, use:
```sh
# note it is not production ready
python3 script/addFav.py -p
```
**What it does:**
- Interactively asks if you are running on a roof (client_base) node or a bot.
- On the bot:
- Compiles a list of favorite nodes and saves it to `roofNodeList.pkl` for later use on the roof node.
- On the roof node:
- Loads the node list from `roofNodeList.pkl`.
- Shows which favorite nodes will be added and asks for confirmation.
- Adds favorite nodes to the appropriate devices, handling API rate limits.
- Logs actions and errors for troubleshooting.
**Note:**
- Always run this script from the main repo directory to ensure module imports work.
- After running on the bot, copy `roofNodeList.pkl` to the roof node and rerun the script there to complete the process.
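The hand-off via `roofNodeList.pkl` can be sketched with the standard `pickle` module (the node IDs here are illustrative, and the real script stores more metadata):

```python
import os
import pickle
import tempfile

favorites = ["!a1b2c3d4", "!deadbeef"]  # illustrative node IDs

# On the bot: save the favorites list for transfer to the roof node
path = os.path.join(tempfile.gettempdir(), "roofNodeList.pkl")
with open(path, "wb") as f:
    pickle.dump(favorites, f)

# On the roof node: load it back and confirm the round trip
with open(path, "rb") as f:
    loaded = pickle.load(f)
print(loaded == favorites)  # → True
```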


@@ -67,13 +67,12 @@ fi
if [[ -f "config.ini" ]]; then
owner=$(stat -f "%Su" config.ini)
perms=$(stat -f "%A" config.ini)
echo "config.ini is owned by: $owner"
echo "config.ini permissions: $perms"
if [[ "$owner" == "root" ]]; then
echo "config.ini is owned by: $owner"
echo "Warning: config.ini is owned by root check out the etc/set-permissions.sh script"
fi
if [[ $(stat -f "%Lp" config.ini) =~ [762]$ ]]; then
echo "config.ini permissions: $perms"
echo "Warning: config.ini is world-writable or world-readable! check out the etc/set-permissions.sh script"
fi
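Note that the `stat -f` flags above are BSD/macOS syntax; GNU coreutils on Linux uses `stat -c` with different format codes. A portable sketch of the same owner/permissions check, run against a throwaway file rather than `config.ini`:

```shell
# Create a throwaway file and inspect owner + permissions.
# GNU stat uses -c; BSD/macOS stat uses -f with different codes.
f=$(mktemp)
if stat -c "%U" "$f" >/dev/null 2>&1; then
    owner=$(stat -c "%U" "$f")   # GNU coreutils (Linux)
    perms=$(stat -c "%a" "$f")
else
    owner=$(stat -f "%Su" "$f")  # BSD / macOS
    perms=$(stat -f "%Lp" "$f")
fi
echo "owner=$owner perms=$perms"
case "$perms" in
    *[762]) echo "Warning: file is world-writable or world-readable" ;;
esac
rm -f "$f"
```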