mirror of
https://github.com/rightup/pyMC_Repeater.git
synced 2026-05-15 05:46:07 +02:00
+17
@@ -7,6 +7,8 @@ __pycache__/
*.so
.Python
build/
.pybuild/
repeater/_version.py
develop-eggs/
dist/
downloads/
@@ -23,9 +25,16 @@ share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
DEBIAN/
debian/files
debian/.debhelper/
debian/pymc-repeater/
debian/pymc-repeater.debhelper.log
debian/pymc-repeater.substvars

# Virtual environments
.venv/
.venv_new/
env/
ENV/

@@ -43,8 +52,16 @@ htmlcov/

# Config
config.yaml
config.yaml.backup
identity.json

# Data
data/

# Logs
*.log
.DS_Store
syncpi.sh

# Docker
/data
@@ -30,18 +30,31 @@ The repeater daemon runs continuously as a background process, forwarding LoRa p

## Supported Hardware (Out of the Box)

The repeater supports two radio backends:

- **SX1262 (SPI)** — Direct connection to LoRa modules (HATs, etc.) as listed below.
- **KISS modem** — Serial TNC using the KISS protocol. Set `radio_type: kiss` in config and configure `kiss.port` and `kiss.baud_rate`.
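For the KISS backend, a minimal configuration would look something like this (a sketch using the keys named above; the port and baud rate values match the commented example in the bundled `config.yaml.example` and will differ per setup):

```yaml
radio_type: kiss

kiss:
  port: "/dev/ttyUSB0"
  baud_rate: 9600
```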
> [!CAUTION]
> ## Compatibility
>
> ### Supported Radio Interfaces
>
> | Interface | Supported |
> |------------|------------|
> | Native SPI radio SX1262 | ✅ Yes |
> | USB–SPI bridge (CH341F) | ✅ Yes |
> | UART-based HATs | ❌ No |
> | SX1302 concentrator boards | ❌ No |
> | SX1303 concentrator boards | ❌ No |
>
> This project supports **single-radio SPI transceivers only**, either:
> - Connected directly via SPI
> - Connected via a CH341F USB–SPI adapter
> - Connected using hardware that runs the MeshCore KISS modem firmware

The following hardware is currently supported out of the box:

Waveshare LoRaWAN/GNSS HAT (SPI Version Only)

Hardware: Waveshare SX1262 LoRa HAT (SPI interface - UART version not supported)
Platform: Raspberry Pi (or compatible single-board computer)
Frequency: 868MHz (EU) or 915MHz (US)
TX Power: Up to 22dBm
SPI Bus: SPI0
GPIO Pins: CS=21, Reset=18, Busy=20, IRQ=16
Note: Only the SPI version is supported. The UART version will not work.

HackerGadgets uConsole

Hardware: uConsole RTL-SDR/LoRa/GPS/RTC/USB Hub
@@ -79,6 +92,27 @@ HT-RA62 module
SPI Bus: SPI0
GPIO Pins: CS=21, Reset=18, Busy=20, IRQ=16, use_dio3_tcxo=True, use_dio2_rf=True
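Each `GPIO Pins:` line above corresponds to keys in the `sx1262:` section of `config.yaml`. For the HT-RA62 pins, that would look roughly like this (key names follow the bundled `config.yaml.example`; the IRQ key name is an assumption, as that line falls outside the excerpt):

```yaml
sx1262:
  cs_pin: 21       # BCM numbering
  reset_pin: 18
  busy_pin: 20
  irq_pin: 16      # assumed key name
  use_dio3_tcxo: true
  use_dio2_rf: true
```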

Zindello Industries UltraPeater

Hardware: EBYTE E22/P 1W Module
Platform: Luckfox Pico Ultra/W (NOT a Raspberry Pi device)
Frequency: 868MHz (EU) or 915MHz (US/AU)
TX Power: Up to 30dBm
SPI Bus: SPI0
GPIO Pins: CS=16, Reset=22, Busy=11, IRQ=10, TXEN=20, RXEN=21 (E22 only), EN=21 (E22P only), TXLED=9, RXLED=1, use_dio2_rf=False, use_dio3_tcxo=True, use_gpiod_backend=True, gpio_chip=1

Waveshare LoRaWAN/GNSS HAT (SPI Version Only)

NO LONGER RECOMMENDED
Note: May experience issues on "Narrow" (62.5kHz) settings due to a lack of TCXO
Hardware: Waveshare SX1262 LoRa HAT (SPI interface - UART version not supported)
Platform: Raspberry Pi (or compatible single-board computer)
Frequency: 868MHz (EU) or 915MHz (US)
TX Power: Up to 22dBm
SPI Bus: SPI0
GPIO Pins: CS=21, Reset=18, Busy=20, IRQ=16
Note: Only the SPI version is supported. The UART version will not work.

...

## Screenshots
@@ -166,6 +200,18 @@ The configuration file is created and configured during installation at:
/etc/pymc_repeater/config.yaml
```

### Optional pyMC_Glass integration
The repeater now supports an additive `glass` config section for central control-plane integration.
When enabled, it sends periodic `/inform` payloads to pyMC_Glass, receives queued commands, and reports command results on the next inform cycle.
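The interval handling can be sketched as follows. This is only an illustration of the documented behaviour (the backend may override the cadence via a `noop.interval` field in its response); the nesting of `noop.interval` as a dict path is an assumption, and the function name is hypothetical:

```python
def next_inform_interval(response: dict, default_seconds: int = 30) -> int:
    """Return how long to wait before the next /inform.

    Starts from the configured default; if the backend's response
    carries a noop.interval override, that value wins.
    """
    noop = response.get("noop") or {}
    interval = noop.get("interval", default_seconds)
    return max(1, int(interval))  # never busy-loop on a zero interval

# Backend said nothing -> keep the configured default
print(next_inform_interval({}, 30))                           # 30
# Backend overrides the cadence
print(next_inform_interval({"noop": {"interval": 120}}, 30))  # 120
```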

Minimal example:
```yaml
glass:
  enabled: true
  base_url: "http://localhost:8080"
  inform_interval_seconds: 30
```

To reconfigure radio and hardware settings after installation, run:
```bash
sudo bash setup-radio-config.sh /etc/pymc_repeater
@@ -194,6 +240,91 @@ The upgrade script will:
- Restart the service automatically
- Preserve your existing configuration

---

## Installing on Proxmox (LXC Container)

pyMC Repeater can run inside a Proxmox LXC container using a **CH341 USB-to-SPI adapter** for radio communication. This is ideal for headless, always-on deployments without dedicating a full Raspberry Pi.

### Requirements

- **Proxmox VE 7.x or 8.x** host
- **CH341 USB-to-SPI adapter** (VID `1a86`, PID `5512`) connected to the Proxmox host
- **SX1262-based LoRa module** (e.g. Ebyte E22-900M30S) wired to the CH341 adapter
- Internet connectivity for the container

### One-Line Install

Run this on the **Proxmox host** (not inside a container):

```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/rightup/pyMC_Repeater/feat/newRadios/scripts/proxmox-install.sh)"
```

> **Tip:** Replace `feat/newRadios` in the URL with whichever branch you want to install.

The installer will interactively prompt you for container settings (hostname, RAM, disk, bridge, etc.) and then:

1. Download a Debian 12 LXC template
2. Create a **privileged** container with USB passthrough
3. Install a host-side udev rule for the CH341 device
4. Clone the repository and pre-seed the config with CH341 GPIO pin mappings
5. Run `manage.sh install` inside the container
6. Display the dashboard URL when finished

### Default Container Settings

| Setting   | Default         |
|-----------|-----------------|
| Hostname  | `pymc-repeater` |
| RAM       | 1024 MB         |
| Disk      | 4 GB            |
| CPU cores | 2               |
| Bridge    | `vmbr0`         |
| Storage   | `local-lvm`     |
| Password  | `pymc`          |

### After Installation

```bash
# Enter the container
pct enter <CTID>

# View service logs
journalctl -u pymc-repeater -f

# Access web dashboard
http://<container-ip>:8000

# Manage the repeater
cd /opt/pymc_repeater && bash manage.sh
```

### CH341 GPIO Pin Mapping

The installer pre-configures the CH341 GPIO pins for an E22 module. These differ from the Raspberry Pi BCM pin numbers:

| Function | CH341 GPIO | Pi BCM (default) |
|----------|------------|------------------|
| CS       | 0          | 21               |
| RXEN     | 1          | -1               |
| Reset    | 2          | 18               |
| Busy     | 4          | 20               |
| IRQ      | 6          | 16               |

The installer also enables `use_dio3_tcxo` and `use_dio2_rf` for E22 modules.
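Applied to `config.yaml`, the mapping above would come out roughly like this (a sketch, not the literal installer output; key names follow the bundled `config.yaml.example`, and the IRQ/RXEN key names are assumptions):

```yaml
radio_type: sx1262_ch341

sx1262:
  # CH341 GPIO numbers (0-7), not Pi BCM numbers
  cs_pin: 0
  rxen_pin: 1    # assumed key name
  reset_pin: 2
  busy_pin: 4
  irq_pin: 6     # assumed key name
  use_dio3_tcxo: true
  use_dio2_rf: true
```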
### Troubleshooting (Proxmox)

- **USB device not found**: Make sure the CH341 is plugged into the Proxmox host and shows up with `lsusb -d 1a86:5512`
- **Permission denied on USB**: The installer creates a host udev rule (`/etc/udev/rules.d/99-ch341.rules`). Run `udevadm trigger` on the host if needed
- **Container can't see USB**: Verify USB passthrough lines exist in `/etc/pve/lxc/<CTID>.conf`:
  ```
  lxc.cgroup2.devices.allow: c 189:* rwm
  lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir 0 0
  ```
- **NoBackendError (libusb)**: The installer installs `libusb-1.0-0` automatically. If you see this error, run `apt-get install libusb-1.0-0` inside the container

@@ -212,6 +343,34 @@ This script will:
The script will prompt you for each optional removal step.

## Docker Compose

You can now run pyMC Repeater inside a [Docker container](https://www.docker.com/). Check out the example [Docker Compose](./docker-compose.yml) file before you get started. It will need some configuration changes depending on your hardware (USB vs SPI); look at the commented-out lines to see which hardware needs which lines, and enable only what you need.

Here is what you'll need to do to get the container running:

1. Copy `config.yaml.example` to `config.yaml`:

   ```bash
   cp ./config.yaml.example ./config.yaml
   ```

2. Run the configuration script and follow the prompts:

   ```bash
   sudo bash ./setup-radio-config.sh
   ```

3. Edit `config.yaml` and set a unique web UI password. This lets you bypass the `/setup` page when logging in for the first time. The value lives under `repeater.security.admin_password`; change it to _anything_ besides the default of `admin123`.

4. Adapt the [docker compose](./docker-compose.yml) file to your specific hardware and file paths. Be sure to comment out or delete lines that aren't required for your hardware, and note that your hardware devices might be at different paths than those listed in the compose file.

5. Build and start the container:

   ```bash
   docker compose up -d --force-recreate --build
   ```

## Roadmap / Planned Features

- [ ] **Public Map Integration** - Submit repeater location and details to public map for discovery
@@ -259,8 +418,6 @@ Pre-commit hooks will automatically:
- Lint with flake8
- Fix trailing whitespace and other file issues

## Support

- [Core Lib Documentation](https://rightup.github.io/pyMC_core/)
@@ -286,7 +443,3 @@ This software is intended for educational and experimental purposes. Always test

## License

This project is licensed under the MIT License - see the LICENSE file for details.

+1286
File diff suppressed because it is too large
+238
-41
@@ -1,9 +1,15 @@
# Default Repeater Configuration
# radio_type: sx1262 | kiss (use kiss for serial KISS TNC modem)
radio_type: sx1262

repeater:
  # Node name for logging and identification
  node_name: "mesh-repeater-01"

  # TX mode: forward | monitor | no_tx (default: forward)
  # forward = repeat on; monitor = no repeat but companions/tenants can send; no_tx = all TX off
  # mode: forward

  # Geographic location (optional)
  # Latitude in decimal degrees (-90 to 90)
  latitude: 0.0
@@ -14,8 +20,20 @@ repeater:
  # If not specified, a new identity will be generated
  identity_file: null

  # Identity key (alternative to identity_file)
  # Store the private key directly in config as binary (set by convert_firmware_key.sh)
  # If both identity_file and identity_key are set, identity_key takes precedence
  # identity_key: null

  # Owner information (shown to clients requesting owner info)
  owner_info: ""

  # Duplicate packet cache TTL in seconds
  cache_ttl: 3600

  # Maximum number of hops a flood packet may have already traversed before
  # this repeater forwards it.
  max_flood_hops: 64

  # Score-based transmission filtering
  # Enable quality-based packet filtering and adaptive delays
@@ -39,12 +57,149 @@ repeater:
  # with its node information (node type 2 - repeater)
  allow_discovery: true

  # Incoming advert rate limiter (per advert public key)
  # Uses a token bucket to smooth bursts.
  advert_rate_limit:
    # Master switch for token bucket limiting
    enabled: false
    # Max burst size allowed immediately per pubkey
    # Keep this small for long advert intervals.
    bucket_capacity: 2
    # Number of tokens added each refill interval
    refill_tokens: 1
    # Refill interval in seconds (10 hours)
    refill_interval_seconds: 36000
    # Optional hard minimum spacing between adverts from the same pubkey
    # Set 0 to disable (recommended - mesh retransmissions are normal in active networks)
    min_interval_seconds: 0

  # Penalty box for repeat advert limit violations (per pubkey)
  advert_penalty_box:
    # Master switch for escalating cooldowns
    enabled: false
    # Number of violations within decay window before cooldown starts
    violation_threshold: 2
    # Reset violation count if pubkey stays quiet for this long
    violation_decay_seconds: 43200
    # First penalty duration in seconds
    base_penalty_seconds: 21600
    # Exponential growth factor for repeated violations
    penalty_multiplier: 2.0
    # Maximum penalty duration cap
    max_penalty_seconds: 86400

  # Adaptive rate limiting based on mesh activity
  # Rate limits scale with mesh busyness: quiet mesh = lenient, busy mesh = strict
  advert_adaptive:
    # Master switch for adaptive scaling
    enabled: false
    # EWMA smoothing factor (0.0-1.0, higher = faster response)
    ewma_alpha: 0.1
    # Seconds without metrics change before a tier change takes effect (hysteresis)
    hysteresis_seconds: 300
    # Tier thresholds based on adverts-per-minute EWMA
    thresholds:
      quiet_max: 0.05   # Below this = QUIET tier (no limiting)
      normal_max: 0.20  # Below this = NORMAL tier (1x limits)
      busy_max: 0.50    # Below this = BUSY tier (0.5x capacity)
      # Above busy_max = CONGESTED tier (0.25x capacity)

  # Security settings for login/authentication (shared across all identities)
  security:
    # Maximum number of authenticated clients (across all identities)
    max_clients: 1

    # Admin password for full access
    admin_password: "admin123"

    # Guest password for limited access
    guest_password: "guest123"

    # Allow read-only access for clients without a password / not in the ACL
    allow_read_only: false

    # JWT secret key for signing tokens (auto-generated if not provided)
    # Generate with: python -c "import secrets; print(secrets.token_hex(32))"
    jwt_secret: ""

    # JWT token expiry time in minutes (default: 60 minutes / 1 hour)
    # Controls how long users stay logged in before needing to re-authenticate
    jwt_expiry_minutes: 60

# Mesh Network Configuration
mesh:
  # Global flood policy - controls whether the repeater allows or denies flooding by default
  # true = allow flooding globally, false = deny flooding globally
  # Individual transport keys can override this setting
  global_flood_allow: true
  # Unscoped flood policy - controls whether the repeater allows or denies unscoped flooding
  # true = allow unscoped flooding, false = deny unscoped flooding
  unscoped_flood_allow: true

  # Path hash mode for flood packets (0-hop): per-hop hash size in path encoding
  # 0 = 1-byte hashes (legacy), 1 = 2-byte, 2 = 3-byte. Must match mesh convention.
  # Affects originated adverts and any other flood packets sent by the repeater.
  path_hash_mode: 0

  # Flood loop detection mode
  # off = disabled, minimal = allow up to 3 self-hashes, moderate = allow up to 1, strict = allow 0
  loop_detect: minimal

# Multiple Identity Configuration (Optional)
# Define additional identities for the repeater to manage
# Each identity operates independently with its own key pair and configuration
identities:
  # Room Server Identities
  # Each room server acts as a separate logical node on the mesh
  room_servers:
    # Example room server configuration (commented out by default)
    # - name: "TestBBS"
    #   identity_key: "your_room_identity_key_hex_here"
    #   type: "room_server"
    #
    #   # Room-specific settings
    #   settings:
    #     node_name: "Test BBS Room"
    #     latitude: 0.0
    #     longitude: 0.0
    #     admin_password: "room_admin_password"
    #     guest_password: "room_guest_password"
    # Add more room servers as needed
    # - name: "SocialHub"
    #   identity_key: "another_identity_key_hex_here"
    #   type: "room_server"
    #   settings:
    #     node_name: "Social Hub"
    #     latitude: 0.0
    #     longitude: 0.0
    #     admin_password: "social_admin_123"
    #     guest_password: "social_guest_123"

  # Companion Identities
  # Each companion exposes the MeshCore frame protocol over TCP for standard clients.
  # One TCP client per companion at a time. Clients connect to repeater-ip:tcp_port.
  companions:
    # - name: "RepeaterCompanion"
    #   identity_key: "your_companion_identity_key_hex_here"
    #   settings:
    #     node_name: "RepeaterCompanion"
    #     tcp_port: 5000
    #     bind_address: "0.0.0.0"
    #     tcp_timeout: 120 # seconds; default 120 when omitted; 0 = disable (no timeout)
    # - name: "BotCompanion"
    #   identity_key: "another_companion_identity_key_hex"
    #   settings:
    #     node_name: "meshcore-bot"
    #     tcp_port: 5001
    #     tcp_timeout: 120 # seconds; default 120 when omitted; 0 = disable (no timeout)

# Radio hardware type
# Supported:
#   - sx1262 (Linux spidev + system GPIO)
#   - sx1262_ch341 (CH341 USB-to-SPI + CH341 GPIO 0-7)
radio_type: sx1262

# CH341 USB-to-SPI adapter settings (only used when radio_type: sx1262_ch341)
# NOTE: VID/PID are integers. Hex is also accepted in YAML, e.g. 0x1A86.
ch341:
  vid: 6790   # 0x1A86
  pid: 21778  # 0x5512

radio:
  # Frequency in Hz (869.618 MHz for EU)
@@ -68,19 +223,25 @@ radio:
  # Sync word (LoRa network ID)
  sync_word: 13380

  # Enable CRC checking
  crc_enabled: true

  # Use implicit header mode
  implicit_header: false

# KISS modem (when radio_type: kiss). Requires pyMC_core with KISS support.
# kiss:
#   port: "/dev/ttyUSB0"
#   baud_rate: 9600

# SX1262 Hardware Configuration
# NOTE:
# - When radio_type: sx1262, these pins are BCM GPIO numbers.
# - When radio_type: sx1262_ch341, these pins are CH341 GPIO numbers (0-7).
sx1262:
  # SPI bus and chip select
  # NOTE: For CH341 these are not used but are still required parameters.
  bus_id: 0
  cs_id: 0

  # GPIO pins
  cs_pin: 21
  reset_pin: 18
  busy_pin: 20
@@ -95,6 +256,8 @@ sx1262:
  rxled_pin: -1

  use_dio3_tcxo: false
  dio3_tcxo_voltage: 1.8
  use_dio2_rf: false

  # Waveshare hardware flag
  is_waveshare: false
@@ -114,56 +277,48 @@ duty_cycle:
  # Maximum airtime per minute in milliseconds
  max_airtime_per_minute: 3600


# Storage Configuration
storage:
  # Directory for persistent storage files (SQLite, RRD).
  # Use a writable path for local/dev (e.g. "./var/pymc_repeater" or "~/var/pymc_repeater").
  storage_dir: "/var/lib/pymc_repeater"

# MQTT publishing configuration (optional)
mqtt:
  # Enable/disable MQTT publishing
  enabled: false

  # MQTT broker settings
  broker: "localhost"
  port: 1883

  # Authentication (optional)
  username: null
  password: null

  # Base topic for publishing
  # Messages will be published to: {base_topic}/{node_name}/{packet|advert}
  base_topic: "meshcore/repeater"

# Data retention settings
retention:
  # Clean up SQLite records older than this many days
  sqlite_cleanup_days: 31

# RRD archives are managed automatically:
# - 1 minute resolution for 1 week
# - 5 minute resolution for 1 month
# - 1 hour resolution for 1 year


letsmesh:
  enabled: false
  mqtt:
    iata_code: "Test" # e.g., "SFO", "LHR", "Test"
    broker_index: 0 # Which LetsMesh broker (0=EU, 1=US West)
    status_interval: 300 # How often a status message is sent (in seconds)
    owner: ""
    email: ""

    brokers: []

    # Below is the broker object schema:
    # enabled: true|false # Enable this specific mqtt broker
    # name: "" # Internal name for this broker
    # host: "" # hostname or ip of mqtt endpoints
    # port: # Typically 443 for websocket endpoints or 1883 for tcp
    # transport: "tcp" or "websockets"
    # audience: "" # For JWT-auth'd endpoints, this is usually the host unless otherwise stated by endpoint owners
    # use_jwt_auth: true|false # Does this endpoint require JWT auth
    # username: "" # Username for basic auth. If empty or missing, uses anonymous access
    # password: "" # Password for basic auth. Required if username is set
    # format: letsmesh|mqtt
    # retain_status: true|false # Sets MQTT "retain" on status messages so they remain on the broker when disconnected. Also enforces a QoS of 1 (guaranteed delivery)

    # Block specific packet types from being published to the MQTT endpoint
    # If not specified or empty list, all types are published
    # Available types: REQ, RESPONSE, TXT_MSG, ACK, ADVERT, GRP_TXT,
    #                  GRP_DATA, ANON_REQ, PATH, TRACE, RAW_CUSTOM
    # disallowed_packet_types: []
    #   - REQ # Don't publish requests
    #   - RESPONSE # Don't publish responses
    #   - TXT_MSG # Don't publish text messages
@@ -176,6 +331,48 @@ letsmesh:
    #   - TRACE # Don't publish trace packets
    #   - RAW_CUSTOM # Don't publish custom raw packets

# Example of using the US and EU LetsMesh endpoints
# brokers:
#   - name: US West (LetsMesh v1)
#     host: mqtt-us-v1.letsmesh.net
#     port: 443
#     audience: mqtt-us-v1.letsmesh.net
#     use_jwt_auth: true
#     enabled: true

#   - name: Europe (LetsMesh v1)
#     host: mqtt-eu-v1.letsmesh.net
#     port: 443
#     audience: mqtt-eu-v1.letsmesh.net
#     use_jwt_auth: true
#     enabled: true

# pyMC_Glass control-plane integration (optional)
glass:
  # Enable repeater -> pyMC_Glass /inform loop
  enabled: false

  # Base URL of Glass backend
  # Example local dev: "http://localhost:8080"
  # Example production: "https://glass.example.com"
  base_url: "http://localhost:8080"

  # Inform interval in seconds (used as initial/default interval;
  # backend may override via noop.interval response)
  inform_interval_seconds: 30

  # HTTP timeout per inform request
  request_timeout_seconds: 10

  # Verify TLS certificates when using HTTPS
  verify_tls: true

  # Optional bearer token for future authenticated inform endpoints
  api_token: ""

  # Where cert_renewal payloads are written
  cert_store_dir: "/etc/pymc_repeater/glass"

logging:
  # Log level: DEBUG, INFO, WARNING, ERROR
  level: INFO

Executable
+299
@@ -0,0 +1,299 @@
#!/bin/bash
# Convert MeshCore firmware 64-byte private key to pyMC_Repeater format
#
# Usage: sudo ./convert_firmware_key.sh <64-byte-hex-key> [--output-format=<yaml|identity>] [config-path]
# Example: sudo ./convert_firmware_key.sh 987BDA619630197351F2B3040FD19B2EE0DEE357DD69BBEEE295786FA78A4D5F298B0BF1B7DE73CBC23257CDB2C562F5033DF58C232916432948B0F6BA4448F2

set -e

if [ $# -eq 0 ]; then
    echo "Error: No key provided"
    echo ""
    echo "Usage: sudo $0 <64-byte-hex-key> [--output-format=<yaml|identity>] [config-path]"
    echo ""
    echo "This script imports a 64-byte MeshCore firmware private key into"
    echo "pyMC_Repeater for full identity compatibility."
    echo ""
    echo "The 64-byte key format: [32-byte scalar][32-byte nonce]"
    echo "  - Enables same node address as firmware device"
    echo "  - Supports signing using MeshCore/orlp ed25519 algorithm"
    echo "  - Fully compatible with pyMC_core LocalIdentity"
    echo ""
    echo "Arguments:"
    echo "  --output-format: Optional output format (yaml|identity, default: yaml)"
    echo "                   yaml     - Store in config.yaml (embedded binary)"
    echo "                   identity - Save to identity.key file (base64 encoded)"
    echo "  config-path:     Optional path to config.yaml (default: /etc/pymc_repeater/config.yaml)"
    echo ""
    echo "Examples:"
    echo "  # Save to config.yaml (default)"
    echo "  sudo $0 987BDA619630197351F2B3040FD19B2EE0DEE357DD69BBEEE295786FA78A4D5F298B0BF1B7DE73CBC23257CDB2C562F5033DF58C232916432948B0F6BA4448F2"
    echo ""
    echo "  # Save to identity.key file"
    echo "  sudo $0 987BDA619630197351F2B3040FD19B2EE0DEE357DD69BBEEE295786FA78A4D5F298B0BF1B7DE73CBC23257CDB2C562F5033DF58C232916432948B0F6BA4448F2 --output-format=identity"
    exit 1
fi

# Check if running with sudo/root
if [ "$EUID" -ne 0 ]; then
    echo "Error: This script must be run with sudo to update config.yaml"
    echo "Usage: sudo $0 <64-byte-hex-key>"
    exit 1
fi

FULL_KEY="$1"
OUTPUT_FORMAT="yaml" # Default format
CONFIG_PATH=""

# Parse arguments
shift # Remove the key argument
while [ $# -gt 0 ]; do
    case "$1" in
        --output-format=*)
            OUTPUT_FORMAT="${1#*=}"
            ;;
        *)
            CONFIG_PATH="$1"
            ;;
    esac
    shift
done

# Validate output format
if [ "$OUTPUT_FORMAT" != "yaml" ] && [ "$OUTPUT_FORMAT" != "identity" ]; then
    echo "Error: Invalid output format '$OUTPUT_FORMAT'. Must be 'yaml' or 'identity'"
    exit 1
fi

# Set default config path if not provided
if [ -z "$CONFIG_PATH" ]; then
    CONFIG_PATH="/etc/pymc_repeater/config.yaml"
fi

# Validate hex string
if ! [[ "$FULL_KEY" =~ ^[0-9a-fA-F]+$ ]]; then
    echo "Error: Key must be a hexadecimal string"
    exit 1
fi

KEY_LEN=${#FULL_KEY}

if [ "$KEY_LEN" -ne 128 ]; then
    echo "Error: Key must be 64 bytes (128 hex characters), got $KEY_LEN characters"
    exit 1
fi

# Check if config/identity file location exists (only for yaml format or if saving identity.key)
if [ "$OUTPUT_FORMAT" = "yaml" ]; then
    # Check if config exists
    if [ ! -f "$CONFIG_PATH" ]; then
        echo "Error: Config file not found: $CONFIG_PATH"
        exit 1
    fi
else
    # For identity format, use system-wide location matching config.yaml
    IDENTITY_DIR="/etc/pymc_repeater"
    IDENTITY_PATH="$IDENTITY_DIR/identity.key"
fi

echo "=== MeshCore Firmware Key Import ==="
echo ""
echo "Output format: $OUTPUT_FORMAT"
if [ "$OUTPUT_FORMAT" = "yaml" ]; then
    echo "Target file: $CONFIG_PATH"
else
    echo "Target file: $IDENTITY_PATH"
fi
echo ""
echo "Input (64-byte firmware key):"
echo "  $FULL_KEY"
echo ""

# Verify public key derivation and import key using Python with safe YAML handling
python3 <<EOF
import sys
import yaml
import base64
import hashlib
import os
import shutil
from pathlib import Path

# Import the key
key_hex = "$FULL_KEY"
key_bytes = bytes.fromhex(key_hex)
output_format = "$OUTPUT_FORMAT"

# Verify with pyMC if available
try:
    from nacl.bindings import crypto_scalarmult_ed25519_base_noclamp

    scalar = key_bytes[:32]
    pubkey = crypto_scalarmult_ed25519_base_noclamp(scalar)

    print(f"Derived public key: {pubkey.hex()}")

    # Calculate address (MeshCore uses the first byte of the pubkey directly, not SHA256)
    address = pubkey[0]
    print(f"Node address: 0x{address:02x}")
    print()

except ImportError:
    print("Warning: PyNaCl not available, skipping verification")
    print()

if output_format == "yaml":
    # Save to config.yaml
    config_path = Path("$CONFIG_PATH")
    try:
        with open(config_path, 'r') as f:
            config = yaml.safe_load(f) or {}
    except Exception as e:
        print(f"Error loading config: {e}")
        sys.exit(1)

    # Check for existing key
    if 'repeater' in config and 'identity_key' in config['repeater']:
        existing = config['repeater']['identity_key']
        if isinstance(existing, bytes):
            print(f"WARNING: Existing identity_key found ({len(existing)} bytes)")
        else:
            print("WARNING: Existing identity_key found")
        print()

    # Ensure repeater section exists
    if 'repeater' not in config:
        config['repeater'] = {}

    # Store the full 64-byte key
    config['repeater']['identity_key'] = key_bytes

    # Back up the current config, then rewrite it
    backup_path = f"{config_path}.backup.{Path(config_path).stat().st_mtime_ns}"
    shutil.copy2(config_path, backup_path)
    print(f"Created backup: {backup_path}")

    try:
        with open(config_path, 'w') as f:
            yaml.safe_dump(config, f, default_flow_style=False, allow_unicode=True)
        print(f"✓ Successfully updated {config_path}")
        print()
    except Exception as e:
        print(f"Error writing config: {e}")
        shutil.copy2(backup_path, config_path)
        print("Restored from backup")
        sys.exit(1)

else:
    # Save to identity.key file
    identity_path = Path("$IDENTITY_PATH")

    # Create directory if it doesn't exist
    identity_path.parent.mkdir(parents=True, exist_ok=True)

    # Check for existing identity.key
    if identity_path.exists():
        print(f"WARNING: Existing identity.key found at {identity_path}")
        backup_path = identity_path.with_suffix('.key.backup')
        shutil.copy2(identity_path, backup_path)
        print(f"Created backup: {backup_path}")
        print()

    # Save as base64-encoded
    try:
        with open(identity_path, 'wb') as f:
            f.write(base64.b64encode(key_bytes))
        os.chmod(identity_path, 0o600)  # Restrict permissions
        print(f"✓ Successfully saved to {identity_path}")
        print("✓ File permissions set to 0600 (owner read/write only)")
        print()
    except Exception as e:
        print(f"Error writing identity.key: {e}")
        sys.exit(1)

    # Update config.yaml to remove embedded identity_key so it uses the file
    config_path = Path("$CONFIG_PATH")
    if config_path.exists():
        try:
            with open(config_path, 'r') as f:
                config = yaml.safe_load(f) or {}

            # Check if identity_key exists in config
            if 'repeater' in config and 'identity_key' in config['repeater']:
                print(f"Updating {config_path} to use identity.key file...")

                # Create backup
                backup_path = f"{config_path}.backup.{Path(config_path).stat().st_mtime_ns}"
                shutil.copy2(config_path, backup_path)
                print(f"Created backup: {backup_path}")

                # Remove identity_key from config
                del config['repeater']['identity_key']

                # Save updated config
                with open(config_path, 'w') as f:
                    yaml.safe_dump(config, f, default_flow_style=False, allow_unicode=True)

                print(f"✓ Removed embedded identity_key from {config_path}")
                print(f"✓ Config will now use {identity_path}")
                print()
            else:
                print("✓ Config file already configured to use identity.key file (no repeater.identity_key found)")
                print()

        except Exception as e:
            print(f"Warning: Could not update config.yaml: {e}")
            print(f"You may need to manually remove 'identity_key' from {config_path}")
            print()
    else:
        print(f"Note: Config file not found at {config_path}")
        print(f"      Identity will be loaded from {identity_path}")
        print()

EOF
|
||||
if [ $? -ne 0 ]; then
|
||||
echo "Error: Python script failed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Offer to restart service (only relevant for yaml format)
|
||||
if [ "$OUTPUT_FORMAT" = "yaml" ]; then
|
||||
if systemctl is-active --quiet pymc-repeater 2>/dev/null; then
|
||||
read -p "Restart pymc-repeater service now? (yes/no): " RESTART
|
||||
if [ "$RESTART" = "yes" ]; then
|
||||
systemctl restart pymc-repeater
|
||||
echo "✓ Service restarted"
|
||||
echo ""
|
||||
echo "Check logs for new identity:"
|
||||
echo " sudo journalctl -u pymc-repeater -f | grep -i 'identity\|hash'"
|
||||
else
|
||||
echo "Remember to restart the service:"
|
||||
echo " sudo systemctl restart pymc-repeater"
|
||||
fi
|
||||
else
|
||||
echo "Note: pymc-repeater service is not running"
|
||||
echo "Start it with: sudo systemctl start pymc-repeater"
|
||||
fi
|
||||
else
|
||||
echo "Identity key saved to file."
|
||||
echo ""
|
||||
if systemctl is-active --quiet pymc-repeater 2>/dev/null; then
|
||||
read -p "Restart pymc-repeater service now? (yes/no): " RESTART
|
||||
if [ "$RESTART" = "yes" ]; then
|
||||
systemctl restart pymc-repeater
|
||||
echo "✓ Service restarted"
|
||||
echo ""
|
||||
echo "Check logs for new identity:"
|
||||
echo " sudo journalctl -u pymc-repeater -f | grep -i 'identity\|hash'"
|
||||
else
|
||||
echo "Remember to restart the service:"
|
||||
echo " sudo systemctl restart pymc-repeater"
|
||||
fi
|
||||
else
|
||||
echo "Note: pymc-repeater service is not running"
|
||||
echo "Start it with: sudo systemctl start pymc-repeater"
|
||||
fi
|
||||
fi
|
||||
@@ -0,0 +1,6 @@
*.debhelper
*.debhelper.log
*.substvars
.debhelper/
files
pymc-repeater/
+6
@@ -0,0 +1,6 @@
pymc-repeater (1.0.5~dev0) unstable; urgency=medium

  * Development build from git commit 7112da9
  * Version: 1.0.5.post0

 -- Lloyd <lloyd@rightup.co.uk>  Tue, 30 Dec 2025 12:55:47 +0000
+43
@@ -0,0 +1,43 @@
Source: pymc-repeater
Section: net
Priority: optional
Maintainer: Lloyd <lloyd@rightup.co.uk>
Build-Depends: debhelper-compat (= 13),
               dh-python,
               python3-all,
               python3-setuptools,
               python3-setuptools-scm,
               python3-wheel,
               python3-pip,
               python3-yaml,
               python3-cherrypy3,
               python3-paho-mqtt,
               python3-psutil,
               git
Standards-Version: 4.6.2
Homepage: https://github.com/rightup/pyMC_Repeater
X-Python3-Version: >= 3.8

Package: pymc-repeater
Architecture: all
Depends: ${python3:Depends},
         ${misc:Depends},
         python3-yaml,
         python3-cherrypy3,
         python3-paho-mqtt,
         python3-psutil,
         python3-jwt,
         python3-pip,
         python3-rrdtool,
         libffi-dev,
         jq
Recommends: python3-periphery,
            python3-spidev
Description: PyMC Repeater Daemon
 A mesh networking repeater daemon for LoRa devices.
 .
 This package provides the pymc-repeater service for managing
 mesh network repeater functionality with a web interface.
 .
 Note: This package will install pymc_core, cherrypy-cors, and ws4py
 from PyPI during postinst as they are not available in Debian repos.
+1
@@ -0,0 +1 @@
pymc-repeater
+3
@@ -0,0 +1,3 @@
etc/pymc_repeater
var/log/pymc_repeater
usr/share/pymc_repeater
+3
@@ -0,0 +1,3 @@
config.yaml.example usr/share/pymc_repeater/
radio-presets.json usr/share/pymc_repeater/
radio-settings.json usr/share/pymc_repeater/
+57
@@ -0,0 +1,57 @@
#!/bin/sh
set -e

case "$1" in
    configure)
        # Create system user
        if ! getent passwd pymc-repeater >/dev/null; then
            adduser --system --group --home /var/lib/pymc-repeater \
                --gecos "PyMC Repeater Service" pymc-repeater
        fi

        # Add user to gpio and spi groups for hardware access
        if getent group gpio >/dev/null; then
            usermod -a -G gpio pymc-repeater
        fi
        if getent group spi >/dev/null; then
            usermod -a -G spi pymc-repeater
        fi

        # Create and set permissions on data directory
        mkdir -p /var/lib/pymc_repeater
        chown -R pymc-repeater:pymc-repeater /var/lib/pymc_repeater
        chmod 750 /var/lib/pymc_repeater

        # Set permissions (paths match debian/dirs, which uses underscores)
        chown -R pymc-repeater:pymc-repeater /etc/pymc_repeater
        chown -R pymc-repeater:pymc-repeater /var/log/pymc_repeater
        chmod 750 /etc/pymc_repeater
        chmod 750 /var/log/pymc_repeater

        # Copy example config if no config exists
        if [ ! -f /etc/pymc_repeater/config.yaml ]; then
            cp /usr/share/pymc_repeater/config.yaml.example /etc/pymc_repeater/config.yaml
            chown pymc-repeater:pymc-repeater /etc/pymc_repeater/config.yaml
            chmod 640 /etc/pymc_repeater/config.yaml
        fi

        # Install pymc_core from PyPI if not already installed
        if ! python3 -c "import pymc_core" 2>/dev/null; then
            echo "Installing pymc_core[hardware] from PyPI..."
            python3 -m pip install --break-system-packages 'pymc_core[hardware]>=1.0.7' || true
        fi

        # Install packages not available in Debian repos
        if ! python3 -c "import cherrypy_cors" 2>/dev/null; then
            echo "Installing cherrypy-cors from PyPI..."
            python3 -m pip install --break-system-packages 'cherrypy-cors==1.7.0' || true
        fi

        if ! python3 -c "import ws4py" 2>/dev/null; then
            echo "Installing ws4py from PyPI..."
            python3 -m pip install --break-system-packages 'ws4py>=0.5.1' || true
        fi
        ;;
esac

#DEBHELPER#

exit 0
+18
@@ -0,0 +1,18 @@
#!/bin/sh
set -e

case "$1" in
    purge)
        # Remove user and directories (both the hyphenated home dir and the
        # underscore config/log/data dirs created by dirs + postinst)
        if getent passwd pymc-repeater >/dev/null; then
            deluser --system pymc-repeater || true
        fi
        rm -rf /etc/pymc_repeater
        rm -rf /var/log/pymc_repeater
        rm -rf /var/lib/pymc-repeater
        rm -rf /var/lib/pymc_repeater
        ;;
esac

#DEBHELPER#

exit 0
+15
@@ -0,0 +1,15 @@
[Unit]
Description=PyMC Repeater Daemon
After=network.target

[Service]
Type=simple
User=pymc-repeater
Group=pymc-repeater
WorkingDirectory=/etc/pymc_repeater
ExecStart=/usr/bin/pymc-repeater
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
+22
@@ -0,0 +1,22 @@
#!/usr/bin/make -f
# -*- makefile -*-

export PYBUILD_NAME=pymc-repeater
export DH_VERBOSE=1

%:
	dh $@ --with python3 --buildsystem=pybuild

override_dh_auto_test:
	# Skip tests - cherrypy-cors not available in Debian repos
	# Tests pass in development with: pip install cherrypy-cors

override_dh_auto_clean:
	dh_auto_clean
	rm -rf build/
	rm -rf *.egg-info/
	rm -rf .pybuild/
	rm -f repeater/_version.py

override_dh_installsystemd:
	dh_installsystemd --name=pymc-repeater
+1
@@ -0,0 +1 @@
3.0 (native)
@@ -0,0 +1,22 @@
services:
  pymc-repeater:
    build: .
    container_name: pymc-repeater
    restart: unless-stopped
    ports:
      - 8000:8000
    devices:
      # SPI DEVICES (Your path may differ)
      - /dev/spidev0.0
      - /dev/gpiochip0
      # USB DEVICES (Your path may differ)
      - /dev/bus/usb/002:/dev/bus/usb/002
    # SPI DEVICE PERMISSIONS
    cap_add:
      - SYS_RAWIO
    # USB DEVICE PERMISSIONS
    group_add:
      - plugdev
    volumes:
      - ./config.yaml:/etc/pymc_repeater/config.yaml
      - ./data:/var/lib/pymc_repeater
+38
@@ -0,0 +1,38 @@
FROM python:3.12-slim-bookworm

ENV INSTALL_DIR=/opt/pymc_repeater \
    CONFIG_DIR=/etc/pymc_repeater \
    DATA_DIR=/var/lib/pymc_repeater \
    PYTHONUNBUFFERED=1 \
    SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYMC_REPEATER=1.0.5

# Install system dependencies (runtime libraries plus build tools
# needed by pip-installed extensions)
RUN apt-get update && apt-get install -y \
    libffi-dev \
    python3-rrdtool \
    jq \
    wget \
    libusb-1.0-0 \
    swig \
    git \
    build-essential \
    python3-dev \
    && rm -rf /var/lib/apt/lists/*

# Create runtime directories
RUN mkdir -p ${INSTALL_DIR} ${CONFIG_DIR} ${DATA_DIR}

WORKDIR ${INSTALL_DIR}

# Copy source
COPY repeater ./repeater
COPY pyproject.toml .
COPY radio-presets.json .
COPY radio-settings.json .

# Install package
RUN pip install --no-cache-dir .

EXPOSE 8000

ENTRYPOINT ["python3", "-m", "repeater.main", "--config", "/etc/pymc_repeater/config.yaml"]
@@ -0,0 +1,346 @@
# PR: Compute Packet Hash Once Per Forwarded Packet

**Branch:** `perf/hash-once`
**Base:** `rightup/fix-perfom-speed`
**Files changed:** `repeater/engine.py` (1 file, ~51 lines net)

---

## Problem

`packet.calculate_packet_hash()` runs a SHA-256 digest over the full serialised
packet bytes, converts the result to a hex string, and uppercases it. Before
this change the hot forwarding path triggered this computation **three times per
packet**:

| Call site | Where | When |
|-----------|-------|------|
| `__call__` line 162 | `pkt_hash_full = packet.calculate_packet_hash()...` | Every received packet |
| `flood_forward` / `direct_forward` via `is_duplicate` | `pkt_hash = packet.calculate_packet_hash()...` | Every packet that reaches the forward check |
| `flood_forward` / `direct_forward` via `mark_seen` | `pkt_hash = packet_hash or packet.calculate_packet_hash()...` | Every packet that passes the duplicate check |

And on the drop path, a fourth computation:

| Call site | Where | When |
|-----------|-------|------|
| `_get_drop_reason` → `is_duplicate` | `pkt_hash = packet.calculate_packet_hash()...` | Every dropped packet |

The hash computed in `__call__` was already available as `pkt_hash_full` but was
never passed into `process_packet`, `flood_forward`, `direct_forward`,
`is_duplicate`, `mark_seen`, or `_get_drop_reason`. Each of those methods
recomputed it independently.

---

## Root Cause

The `packet_hash` optional parameter existed on `mark_seen` but not on
`is_duplicate`, `flood_forward`, `direct_forward`, `process_packet`, or
`_get_drop_reason`. The call chain therefore had no way to propagate the
already-computed hash.

---

## Solution

Thread the pre-computed `pkt_hash_full` from `__call__` down through the call
chain as an optional `packet_hash: Optional[str] = None` parameter. Each method
uses the provided hash if present, or falls back to computing it — preserving
backward compatibility for any caller that doesn't have a pre-computed hash.

```
Before:
__call__ → calculate_packet_hash()  #1
  → process_packet
    → flood_forward
      → is_duplicate → calculate_packet_hash()  #2
      → mark_seen    → calculate_packet_hash()  #3
  (drop path)
  → _get_drop_reason
    → is_duplicate → calculate_packet_hash()  #4

After:
__call__ → calculate_packet_hash()  #1 (only computation)
  → process_packet(packet_hash=pkt_hash_full)
    → flood_forward(packet_hash=pkt_hash_full)
      → is_duplicate(packet_hash=pkt_hash_full)  uses provided hash ✓
      → mark_seen(packet_hash=pkt_hash_full)     uses provided hash ✓
  (drop path)
  → _get_drop_reason(packet_hash=pkt_hash_full)
    → is_duplicate(packet_hash=pkt_hash_full)  uses provided hash ✓
```
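The pattern reads the same in isolation. Below is a minimal runnable sketch of compute-once-then-thread; the class and method names are illustrative, not the real `engine.py` API:

```python
import hashlib
from typing import Optional

class Engine:
    """Toy model of the hash-threading pattern (illustrative names only)."""

    def __init__(self) -> None:
        self.seen: set[str] = set()
        self.hash_calls = 0  # instrumentation: count digest computations

    def compute_hash(self, payload: bytes) -> str:
        self.hash_calls += 1
        return hashlib.sha256(payload).hexdigest().upper()

    def is_duplicate(self, payload: bytes, packet_hash: Optional[str] = None) -> bool:
        # Use the caller-provided hash if present, else fall back to computing it
        h = packet_hash or self.compute_hash(payload)
        return h in self.seen

    def mark_seen(self, payload: bytes, packet_hash: Optional[str] = None) -> None:
        self.seen.add(packet_hash or self.compute_hash(payload))

    def forward(self, payload: bytes) -> bool:
        # __call__-equivalent: compute once, thread everywhere
        h = self.compute_hash(payload)
        if self.is_duplicate(payload, packet_hash=h):
            return False
        self.mark_seen(payload, packet_hash=h)
        return True

engine = Engine()
assert engine.forward(b"pkt") is True    # first copy forwarded
assert engine.forward(b"pkt") is False   # second copy dropped as duplicate
assert engine.hash_calls == 2            # one digest per forward() call, not three
```

Callers without a pre-computed hash simply omit the argument and hit the fallback, which is the backward-compatibility property the PR relies on.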

---

## Methods Changed

### `is_duplicate(packet, packet_hash=None)`

```python
# Before
def is_duplicate(self, packet: Packet) -> bool:
    pkt_hash = packet.calculate_packet_hash().hex().upper()  # always recomputed
    if pkt_hash in self.seen_packets:
        return True
    return False

# After
def is_duplicate(self, packet: Packet, packet_hash: Optional[str] = None) -> bool:
    """...
    INVARIANT: purely synchronous — no await points. The caller relies on
    is_duplicate + mark_seen being atomic within the asyncio event loop.
    Do NOT add any await here without revisiting that invariant.
    """
    pkt_hash = packet_hash or packet.calculate_packet_hash().hex().upper()
    return pkt_hash in self.seen_packets
```

### `_get_drop_reason(packet, packet_hash=None)`

```python
# Before
def _get_drop_reason(self, packet: Packet) -> str:
    if self.is_duplicate(packet): ...  # recomputes hash

# After
def _get_drop_reason(self, packet: Packet, packet_hash: Optional[str] = None) -> str:
    if self.is_duplicate(packet, packet_hash=packet_hash): ...  # propagates hash
```

### `flood_forward(packet, packet_hash=None)`

```python
# Before
def flood_forward(self, packet: Packet) -> Optional[Packet]:
    ...
    if self.is_duplicate(packet): ...  # recomputes
    self.mark_seen(packet)  # recomputes

# After
def flood_forward(self, packet: Packet, packet_hash: Optional[str] = None) -> Optional[Packet]:
    """...
    INVARIANT: purely synchronous — no await points.
    """
    ...
    if self.is_duplicate(packet, packet_hash=packet_hash): ...  # propagates
    self.mark_seen(packet, packet_hash=packet_hash)  # propagates
```

### `direct_forward(packet, packet_hash=None)` — same pattern as `flood_forward`

### `process_packet(packet, snr=0.0, packet_hash=None)`

```python
# Before
def process_packet(self, packet, snr=0.0):
    fwd_pkt = self.flood_forward(packet)  # no hash

# After
def process_packet(self, packet, snr=0.0, packet_hash=None):
    """...
    packet_hash: pre-computed SHA-256 hex from __call__; eliminates 2 SHA-256
    calls per forwarded packet by propagating the hash through the call chain.
    """
    fwd_pkt = self.flood_forward(packet, packet_hash=packet_hash)
```

### `__call__` — two call-site changes

```python
# Before
result = (None if ... else self.process_packet(processed_packet, snr))
...
drop_reason = processed_packet.drop_reason or self._get_drop_reason(processed_packet)

# After
result = (None if ... else self.process_packet(processed_packet, snr, packet_hash=pkt_hash_full))
...
drop_reason = processed_packet.drop_reason or self._get_drop_reason(
    processed_packet, packet_hash=pkt_hash_full
)
```

---

## What Was Not Changed

`record_packet_only` (line 446) and `record_duplicate` (line 486) each compute
the hash independently. These are separate recording paths (called from the
inject path and from the raw-packet subscriber, respectively) that have no
`pkt_hash_full` from `__call__` in scope. Changing them would require a larger
refactor with no benefit to the forwarding hot path, so they are left unchanged.

The fallback `packet_hash or packet.calculate_packet_hash()...` pattern in
`is_duplicate`, `mark_seen`, and `_build_packet_record` ensures external callers
(e.g. `TraceHelper.is_duplicate(packet)` from trace processing) continue to work
without any change.

---

## Invariant Comments Added

`flood_forward`, `direct_forward`, and `is_duplicate` now carry explicit docstring
invariants:

> **INVARIANT:** purely synchronous — no await points. The is_duplicate +
> mark_seen pair is atomic within the asyncio event loop. Do NOT add any await
> here without revisiting that invariant in `__call__` / `process_packet`.

These invariants were implicit before. Making them explicit means a future
contributor adding an `await` inside these methods will see the warning and
understand the consequence: the duplicate-check and mark-seen can no longer be
guaranteed atomic, allowing the same packet to be forwarded twice under concurrent
task dispatch.

---

## Quantification

On a Raspberry Pi running CPython 3.13, `hashlib.sha256` on a 50–200 byte
LoRa payload takes approximately 1–3 µs. The `.hex().upper()` string conversion
adds another ~0.5 µs. Savings per forwarded packet: ~3–8 µs.
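These timings are easy to re-check on the target host. A hedged microbenchmark follows; the payload size and loop count are arbitrary, and absolute numbers depend entirely on the CPU:

```python
import hashlib
import timeit

payload = bytes(range(120))  # representative mid-size LoRa packet

def hash_once() -> str:
    # Same shape of work as calculate_packet_hash().hex().upper()
    return hashlib.sha256(payload).digest().hex().upper()

n = 100_000
seconds = timeit.timeit(hash_once, number=n)
per_call_us = seconds / n * 1e6
print(f"sha256 + hex + upper on {len(payload)} bytes: {per_call_us:.2f} µs/call")

assert len(hash_once()) == 64  # 32-byte digest → 64 hex characters
```

Multiply the printed per-call figure by the number of redundant calls removed (2–3 per packet) to reproduce the savings estimate above for your hardware.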

At 3 packets/second sustained forwarding rate this saves ~10–25 µs/second, which
is negligible in absolute terms. The more significant benefit is correctness and
clarity:

- One canonical hash value per packet in the forwarding path.
- No possibility of the hash changing between the `is_duplicate` check and the
  `mark_seen` call if `calculate_packet_hash` had any mutable state (it doesn't,
  but the pattern is now provably correct).
- Explicit invariant documentation closes a latent trap for future contributors.

---

## Test Plan

### Unit tests (no hardware)

**T1 — Hash computed exactly once per forwarded packet**

```python
async def test_hash_computed_once_for_flood():
    call_count = 0
    original = Packet.calculate_packet_hash

    def counting_hash(self):
        nonlocal call_count
        call_count += 1
        return original(self)

    with patch.object(Packet, "calculate_packet_hash", counting_hash):
        await engine(flood_packet, metadata={})

    assert call_count == 1, f"Expected 1 hash computation, got {call_count}"
```

**T2 — Hash computed exactly once per dropped (duplicate) packet**

```python
async def test_hash_computed_once_for_duplicate():
    # Mark packet seen first
    engine.seen_packets[packet.calculate_packet_hash().hex().upper()] = time.time()

    call_count = 0
    original = Packet.calculate_packet_hash

    def counting_hash(self):
        nonlocal call_count
        call_count += 1
        return original(self)

    with patch.object(Packet, "calculate_packet_hash", counting_hash):
        await engine(packet, metadata={})

    # One computation in __call__ for pkt_hash_full; should not trigger again
    # in process_packet → flood_forward → is_duplicate (drop path via _get_drop_reason)
    assert call_count == 1
```

**T3 — External callers of `is_duplicate` without hash still work**

```python
def test_is_duplicate_without_hash():
    """TraceHelper and other external callers pass no hash — must still work."""
    pkt = make_test_packet()
    engine.seen_packets[pkt.calculate_packet_hash().hex().upper()] = time.time()

    assert engine.is_duplicate(pkt) is True  # no packet_hash arg
    assert engine.is_duplicate(pkt, packet_hash="WRONGHASH") is False
```

**T4 — mark_seen / is_duplicate agree on the same hash**

```python
def test_mark_then_is_duplicate_consistent():
    pkt = make_test_packet()
    pkt_hash = pkt.calculate_packet_hash().hex().upper()

    assert engine.is_duplicate(pkt, packet_hash=pkt_hash) is False
    engine.mark_seen(pkt, packet_hash=pkt_hash)
    assert engine.is_duplicate(pkt, packet_hash=pkt_hash) is True
    # Same result without the pre-computed hash (fallback path)
    assert engine.is_duplicate(pkt) is True
```

**T5 — flood_forward / direct_forward signatures are backward compatible**

```python
def test_flood_forward_no_hash_arg():
    """Callers that don't pass packet_hash must still work (fallback compute)."""
    pkt = make_flood_packet()
    result = engine.flood_forward(pkt)  # no packet_hash — must not raise
    assert result is not None or pkt.drop_reason is not None
```

### Integration / field tests (with hardware)

**T6 — Forwarding throughput unchanged**

1. Forward 100 packets at maximum duty-cycle budget.
2. Verify all eligible packets are forwarded (same count as before change).
3. Verify no `Duplicate` drops that were not present before.

**T7 — Duplicate detection unchanged**

1. Send the same packet twice within 1 second.
2. Verify the first is forwarded and the second is logged as `"Duplicate"`.

**T8 — CPU profile shows reduced `calculate_packet_hash` calls**

1. Enable Python profiling (`cProfile`) on the repeater for 60 seconds.
2. Compare `calculate_packet_hash` call count before and after.

**Expected:** call count approximately halved for workloads where most packets
are forwarded (≤ 1 call per forwarded packet vs ≥ 3 before).
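T8 can be scripted with the stdlib profiler. A sketch follows; `run_workload` is a hypothetical stand-in for driving the repeater under load, here replaced by a loop of bare SHA-256 calls so the snippet is self-contained:

```python
import cProfile
import io
import pstats

def run_workload() -> None:
    # Hypothetical stand-in for 60 s of packet processing: in a real run this
    # would be the repeater's receive loop, not this toy loop.
    import hashlib
    for _ in range(1000):
        hashlib.sha256(b"packet").hexdigest().upper()

profiler = cProfile.Profile()
profiler.enable()
run_workload()
profiler.disable()

# Dump call counts so a before/after pair of runs can be diffed on ncalls
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("ncalls")
stats.print_stats("sha256")  # restrict output to hash-related entries
report = stream.getvalue()
print(report)
```

For the repeater itself, filter on `calculate_packet_hash` instead of `sha256` and compare the `ncalls` column between the base branch and this PR.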

---

## Proof of Correctness

### Why the fallback `packet_hash or packet.calculate_packet_hash()` is safe

`packet_hash` is either the correct hash (passed from `__call__`) or `None`.
If it is `None`, the fallback computes the hash fresh — identical to the old
behaviour. There is no case where a wrong hash is used: the only source of a
non-None `packet_hash` is `pkt_hash_full = packet.calculate_packet_hash()...`
in `__call__`, computed over the same `processed_packet` (a deep copy of the
received packet, unchanged between hash computation and the call to
`process_packet`).

### Why passing the hash through a deep-copied packet is correct

`processed_packet = copy.deepcopy(packet)` (line 178) happens before
`pkt_hash_full` is passed to `process_packet`. The deep copy does not change
the packet's wire representation — `calculate_packet_hash()` calls
`packet.write_to()` which serialises the packet's fields. The copy has the
same fields, so `deepcopy(packet).calculate_packet_hash() == packet.calculate_packet_hash()`.
Passing the hash computed from the original to the copy is correct.

### Why the invariant is critical

asyncio only yields execution at `await` points. `flood_forward` and
`direct_forward` have no `await`, so they run atomically from the event loop's
perspective. The `is_duplicate` check and the `mark_seen` call inside them
cannot be interleaved with another coroutine. If a future change added an
`await` between them, two concurrent `_route_packet` tasks could both pass the
duplicate check for the same packet before either marked it seen — sending the
same packet twice. The invariant comment documents this so the risk is visible
at the point where it could be broken.
@@ -0,0 +1,349 @@
# PR: Bounded In-Flight Task Counter + Simplified Route Task Management

**Branch:** `perf/in-flight-cap`
**Base:** `rightup/fix-perfom-speed`
**Files changed:** `repeater/packet_router.py` (1 file, ~33 lines net)

---

## Background

The queue loop dispatches each incoming packet as an `asyncio.create_task` so TX
delay timers run concurrently — this is correct behaviour. The previous
implementation tracked these tasks in a `set[asyncio.Task]` (`_route_tasks`) for
two reasons:

1. **Error surfacing** — the done-callback read `task.result()` to log exceptions.
2. **Shutdown cancellation** — `stop()` cancelled and awaited all tasks in the set.

This PR replaces the set with a simple integer counter and tightens the companion
deduplication prune threshold.

---

## Problems

### Problem 1 — Unbounded task accumulation

LoRa airtime naturally limits steady-state throughput to a handful of in-flight
tasks at any time. But burst arrivals can spike the count temporarily:

- **Multi-hop flood amplification**: a single source packet is forwarded by every
  repeater in range, each of which re-broadcasts it. A node at a mesh junction
  may receive 5–10 copies within 100 ms, each scheduling a separate `delayed_send`
  task.
- **Collision retries**: hardware-level collisions produce duplicate RF bursts that
  all arrive within the same RX window.
- **Bridge nodes**: high-traffic gateway nodes connect multiple mesh segments and
  forward both directions simultaneously.

Under these conditions `_route_tasks` can accumulate dozens of sleeping tasks.
Each holds a reference to the packet, the forwarded packet copy, a closure over
`delayed_send`, and associated asyncio task overhead. There is no cap; the set
grows until the duty-cycle gate finally fires for each task.

### Problem 2 — `_route_tasks` set adds O(1) cost on every packet but O(n) cost on shutdown

Every packet adds one entry to `_route_tasks` and removes it in the done-callback.
This is O(1) per operation, but the `stop()` shutdown path iterates the entire set
to cancel and gather all tasks — O(n) where n is however many tasks happen to be
in-flight at shutdown time. On a busy node this could delay clean shutdown.

### Problem 3 — `_COMPANION_DEDUPE_PRUNE_THRESHOLD = 1000` is too high

The companion delivery deduplication dict prunes itself only when it exceeds 1000
entries. With a 60-second TTL, each PATH/protocol-response packet adds one entry.
On a busy mesh with 50+ nodes sending adverts and PATH packets, the dict can grow
to hundreds of entries before a prune is triggered — keeping stale entries in
memory long past their 60-second TTL.

---

## Solution

### Replace `_route_tasks` set with `_in_flight` counter

An integer counter provides the same protection (tasks complete; done-callback
fires) without holding strong references to each task object:

```python
# __init__
self._in_flight: int = 0
self._max_in_flight: int = 30

# _process_queue — drop early if cap reached
if self._in_flight >= self._max_in_flight:
    logger.warning("In-flight task cap reached (%d/%d), dropping packet", ...)
    continue
self._in_flight += 1
task = asyncio.create_task(self._route_packet(packet))
task.add_done_callback(self._on_route_done)

# done-callback
def _on_route_done(self, task):
    self._in_flight -= 1
    if not task.cancelled() and task.exception():
        logger.error("_route_packet raised: %s", task.exception(), ...)
```

### Cap at 30 concurrent in-flight tasks

30 is chosen as a ceiling that is:

- **Never reached in normal operation**: LoRa airtime at SF8/125 kHz limits
  throughput to ~2–3 packets per second; with delays of 0.5–5 s each, the
  steady-state in-flight count is at most 5–15 tasks.
- **High enough not to drop legitimate traffic**: a burst of 30 nearly-simultaneous
  packets would require every node in a large mesh to transmit within 1 second.
- **Low enough to protect against pathological scenarios**: a misconfigured node
  flooding the channel or a software bug causing infinite re-queuing.
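Reduced to a self-contained sketch, the capped fire-and-forget dispatch looks like this; class and attribute names are simplified from the PR, and the cap of 5 is only to make the burst visible:

```python
import asyncio

class Router:
    """Minimal model of counter-capped task dispatch (illustrative names)."""

    def __init__(self, max_in_flight: int = 30) -> None:
        self._in_flight = 0
        self._max_in_flight = max_in_flight
        self.dropped = 0

    def dispatch(self, delay: float) -> None:
        # Synchronous drop decision: no awaiting for a slot to open
        if self._in_flight >= self._max_in_flight:
            self.dropped += 1
            return
        self._in_flight += 1
        task = asyncio.get_running_loop().create_task(asyncio.sleep(delay))
        task.add_done_callback(self._on_done)

    def _on_done(self, task: asyncio.Task) -> None:
        self._in_flight -= 1
        # Preserve error surfacing without holding a reference to the task
        if not task.cancelled() and task.exception():
            print("route task raised:", task.exception())

async def main() -> None:
    router = Router(max_in_flight=5)
    for _ in range(8):             # burst of 8 packets against a cap of 5
        router.dispatch(delay=0.01)
    assert router.dropped == 3     # 3 dropped immediately at the cap
    await asyncio.sleep(0.05)      # let the 5 in-flight tasks finish
    assert router._in_flight == 0  # counter drained by done-callbacks

asyncio.run(main())
```

Note the drop decision happens inline in the queue loop, so the loop never stalls the way an `async with semaphore` would.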

### Tighten companion dedup prune threshold to 200

200 entries at 60 s TTL means a sweep is triggered after ~200 unique PATH/response
packets arrive without any expiry. This is far more than a typical companion
session (which sees a handful of active connections) but prevents multi-hour
accumulation on a busy mesh.
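The prune-at-threshold behaviour can be sketched as follows; the function and variable names are illustrative, not the real `packet_router.py` internals, and only the `200` threshold and 60 s TTL mirror the PR text:

```python
import time

_DEDUPE_TTL_S = 60.0
_PRUNE_THRESHOLD = 200  # sweep only once the dict has grown past this size

recently_delivered: dict[str, float] = {}  # packet hash -> time first delivered

def should_deliver(pkt_hash: str, now: float) -> bool:
    # Amortised cleanup: the O(n) sweep runs only when the dict exceeds the
    # threshold, so most calls pay O(1).
    if len(recently_delivered) > _PRUNE_THRESHOLD:
        cutoff = now - _DEDUPE_TTL_S
        stale = [h for h, t in recently_delivered.items() if t < cutoff]
        for h in stale:
            del recently_delivered[h]
    if pkt_hash in recently_delivered:
        return False  # duplicate delivery within the TTL window
    recently_delivered[pkt_hash] = now
    return True

t0 = time.monotonic()
assert should_deliver("ABC", t0) is True
assert should_deliver("ABC", t0 + 1.0) is False  # duplicate inside TTL
```

With the threshold at 200 instead of 1000, the sweep fires five times as often, bounding the dict's stale tail accordingly.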
|
||||
---
|
||||
|
||||
## Trade-off: Shutdown Cancellation
|
||||
|
||||
The previous `_route_tasks` set allowed `stop()` to explicitly cancel and await
|
||||
all in-flight tasks on shutdown. The counter approach does not.
|
||||
|
||||
**Why this is acceptable:**
|
||||
|
||||
1. In-flight `_route_packet` tasks are sleeping inside `delayed_send` (waiting for
|
||||
their TX delay timer). When the event loop is shut down — whether via
|
||||
`asyncio.run()` completing, `loop.stop()`, or `SIGTERM` handling — Python
|
||||
cancels all pending tasks automatically.
|
||||
|
||||
2. Even under the old approach, cancelling a sleeping `delayed_send` means the
|
||||
packet is not transmitted. The result is the same whether cancellation happens
|
||||
explicitly in `stop()` or implicitly when the event loop closes.
|
||||
|
||||
3. For a graceful shutdown where we want to *wait* for in-flight packets to
|
||||
complete transmission, the right mechanism is `stop()` awaiting the queue to
|
||||
drain *before* cancelling the router task — not cancelling sleeping tasks.
|
||||
Neither the old code nor this PR implements that, so no regression.

---

## Why This Is the Right Approach

### Alternative A — Keep `_route_tasks` set, add a size cap

```python
if len(self._route_tasks) >= 30:
    logger.warning(...)
    continue
```

Works, but the set still holds a strong reference to every Task object for the
duration of its sleep. The counter holds an integer. Task objects in Python 3.12+
are already strongly referenced by the event loop scheduler; the set reference is
redundant as protection against premature garbage collection.

### Alternative B — `asyncio.Semaphore`

```python
self._sem = asyncio.Semaphore(30)
async with self._sem:
    await self._route_packet(packet)
```

Correct but changes the queue loop from fire-and-forget to blocking: the loop
would wait at `async with self._sem` for a slot to open, stalling packet reads
while all slots are occupied. That reintroduces the queue freeze the concurrent
dispatch was designed to prevent. A semaphore is the right tool for
*rate-limiting* producers; a counter cap at the dispatch site is the right tool
for bounding *background* tasks.

### Alternative C — Integer counter (this PR)

- O(1) increment and decrement.
- No strong reference to task objects beyond the event loop's own reference.
- Drop decision is synchronous and immediate — no sleeping on a semaphore.
- Error logging preserved in `_on_route_done`.
- Simpler code, easier to reason about.
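
As a self-contained sketch of the chosen approach: the attribute and callback
names (`_in_flight`, `_max_in_flight`, `_on_route_done`) follow this PR, but the
class body, `dispatch` helper, and the stand-in `_route_packet` are invented
here for illustration; the real router's plumbing differs.

```python
import asyncio
import logging

logger = logging.getLogger("packet_router")

class PacketRouter:
    """Dispatch-site cap: an integer counter bounds background route tasks."""

    def __init__(self, max_in_flight: int = 30):
        self._in_flight = 0
        self._max_in_flight = max_in_flight

    def _on_route_done(self, task: asyncio.Task) -> None:
        # Done-callback: decrement the counter and surface errors.
        self._in_flight -= 1
        if not task.cancelled() and task.exception() is not None:
            logger.error("route task failed: %r", task.exception())

    def dispatch(self, packet) -> bool:
        """Fire-and-forget dispatch; returns False if the packet was dropped."""
        if self._in_flight >= self._max_in_flight:
            logger.warning("In-flight task cap reached; dropping packet")
            return False
        self._in_flight += 1
        task = asyncio.create_task(self._route_packet(packet))
        task.add_done_callback(self._on_route_done)
        return True

    async def _route_packet(self, packet) -> None:
        await asyncio.sleep(0.01)  # stands in for the TX delay + send
```

The drop decision is a plain integer comparison before `create_task`, so the
queue loop never blocks; the done-callback keeps the counter honest on success,
failure, and cancellation alike.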

---

## Changes — `repeater/packet_router.py` only

| Location | Change | Reason |
|----------|--------|--------|
| Module level | Remove `_COMPANION_DEDUPE_PRUNE_THRESHOLD = 1000` | Replaced with inline literal `200`; no need for a named constant for a single usage site |
| `__init__` | Remove `self._route_tasks = set()`; add `self._in_flight = 0`, `self._max_in_flight = 30` | Replace set-based tracking with counter |
| `stop()` | Remove `_route_tasks` cancellation block | Tasks complete or are cancelled by event loop shutdown; explicit cancellation not needed |
| `_on_route_task_done` → `_on_route_done` | Simpler done-callback: decrement counter + log exceptions | Error logging preserved; set management removed |
| `_should_deliver_path_to_companions` | `> _COMPANION_DEDUPE_PRUNE_THRESHOLD` → `> 200` with explanatory comment | Lower threshold; comment explains the sizing rationale |
| `_process_queue` | Check `_in_flight >= _max_in_flight` before `create_task`; increment `_in_flight`; use `_on_route_done` | Cap accumulation; counter tracks live task count |

---

## Test Plan

### Unit tests (no hardware)

**T1 — Counter increments and decrements correctly**

```python
async def test_in_flight_counter():
    router = PacketRouter(mock_daemon)
    await router.start()

    assert router._in_flight == 0

    # Enqueue a packet that takes time to process
    async def slow_route(pkt):
        await asyncio.sleep(0.1)

    router._route_packet = slow_route
    await router.enqueue(make_test_packet())
    await asyncio.sleep(0.01)  # let queue loop run

    assert router._in_flight == 1  # task is sleeping

    await asyncio.sleep(0.15)  # task finishes
    assert router._in_flight == 0  # counter decremented by done-callback
```

**T2 — Cap enforced: packet dropped when at limit**

```python
async def test_cap_drops_packet_at_limit():
    router = PacketRouter(mock_daemon)
    router._max_in_flight = 2
    router._in_flight = 2  # simulate cap reached

    dropped = []
    original_create_task = asyncio.create_task
    asyncio.create_task = lambda coro: dropped.append(coro)

    await router._process_queue_once(make_test_packet())

    assert dropped == [], "create_task must not be called when cap is reached"
    asyncio.create_task = original_create_task
```

**T3 — Exceptions in `_route_packet` are logged, not swallowed**

```python
async def test_exception_logged():
    router = PacketRouter(mock_daemon)

    async def failing_route(pkt):
        raise ValueError("simulated error")

    router._route_packet = failing_route
    with patch("repeater.packet_router.logger") as mock_log:
        task = asyncio.create_task(failing_route(make_test_packet()))
        router._in_flight = 1
        task.add_done_callback(router._on_route_done)
        await asyncio.gather(task, return_exceptions=True)
        mock_log.error.assert_called_once()

    assert router._in_flight == 0
```

**T4 — Companion dedup dict pruned at 200, not 1000**

```python
def test_companion_dedup_prune_threshold():
    router = PacketRouter(mock_daemon)
    future_time = time.time() + 999

    # Fill with 199 entries (all unexpired) — no prune
    router._companion_delivered = {f"key{i}": future_time for i in range(199)}
    pkt = make_path_packet()
    router._should_deliver_path_to_companions(pkt)
    assert len(router._companion_delivered) == 200  # added one, no prune yet

    # Inserting directly bypasses the method, so no prune runs here
    router._companion_delivered["key_extra"] = future_time
    assert len(router._companion_delivered) == 201

    # Force prune by making all existing entries expired
    past_time = time.time() - 1
    router._companion_delivered = {f"key{i}": past_time for i in range(201)}
    router._should_deliver_path_to_companions(pkt)
    # All expired entries pruned; only the new entry remains
    assert len(router._companion_delivered) == 1
```

### Integration / field tests (with hardware)

**T5 — Burst flood: verify the cap is never hit under pathological load**

1. Configure a test mesh with 4+ nodes all in range of the repeater.
2. Have all nodes send a flood packet simultaneously.
3. Observe repeater logs.

**Expected:** `_in_flight` peaks in low single digits (LoRa airtime prevents
large bursts); no `"In-flight task cap reached"` warning fires under normal
conditions, confirming the cap is never a bottleneck in practice.

**T6 — Counter reaches zero after all packets processed**

1. Send a burst of 10 packets.
2. Wait 10 seconds (longer than max TX delay of 5 s).
3. Query `router._in_flight` from a debug endpoint or log.

**Expected:** `_in_flight == 0` after all delays expire and packets transmit.

**T7 — Error in `_route_packet` is logged and counter is decremented**

1. Temporarily introduce a deliberate exception in `_route_packet`.
2. Send a packet.
3. Check logs for the error message and verify the repeater continues operating
   (counter decremented, queue still draining).

**T8 — Normal forwarding throughput unchanged**

1. Send packets at a steady rate of 1 every 10 seconds for 5 minutes.
2. Verify all packets are forwarded with no warnings or errors.
3. Confirm `_in_flight` never exceeds 3–4 during normal operation.

---

## Proof of Correctness

### Counter vs set: why the counter is sufficient

The `_route_tasks` set solved two problems:

1. **GC protection**: In Python < 3.12, a task with no strong references other
   than the event loop's internal weakref could be garbage collected before
   completing. Python 3.12+ strengthened task references in the event loop.
   However, even in earlier versions, the set was unnecessary once `create_task`
   returns — the caller holds the reference, and the done-callback fires reliably
   because the event loop holds the task alive until completion.

2. **Explicit shutdown cancellation**: The counter loses this. As argued above,
   the outcome is identical — sleeping tasks are cancelled either explicitly by
   `stop()` or implicitly by the event loop at shutdown — and no packet that
   hasn't been transmitted yet can complete its send after the radio is shut down
   anyway.

### Why `_on_route_done` is a done-callback and not a `try/finally` inside `_route_packet`

A `try/finally` block inside `_route_packet` would also decrement the counter.
Done-callbacks are preferable because:

- They fire even if the task is externally cancelled (e.g. by event loop shutdown),
  without relying on the coroutine body handling `CancelledError` itself.
- They decouple counter management from `_route_packet` logic — `_route_packet`
  has no knowledge of or dependency on the cap mechanism.
- They keep the pattern consistent with the rest of the codebase's use of
  `add_done_callback` for task lifecycle management.
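
The first bullet is easy to verify in isolation. This standalone snippet (not
project code; all names are local to the demo) shows a done-callback observing
an external cancellation of a sleeping task:

```python
import asyncio

async def demo():
    events = []

    async def sleeper():
        await asyncio.sleep(10)  # stands in for a long TX delay

    task = asyncio.create_task(sleeper())
    # The callback fires however the task finishes, including external
    # cancellation; the coroutine body never catches anything itself.
    task.add_done_callback(lambda t: events.append(("done", t.cancelled())))

    await asyncio.sleep(0)  # let the task start sleeping
    task.cancel()           # external cancellation, e.g. loop shutdown
    try:
        await task
    except asyncio.CancelledError:
        pass
    await asyncio.sleep(0)  # done-callbacks run via call_soon; give the loop a turn
    return events

events = asyncio.run(demo())
```

Note that done-callbacks are scheduled with `call_soon`, so the demo yields to
the loop once more before inspecting `events`.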

### Why 30 and not a smaller number like 10

At SF8, 125 kHz bandwidth, a 30-byte payload takes ~111 ms airtime and produces
a TX delay of roughly 0.5–3 s. With a 60-second duty-cycle window and 3.6 s
max airtime, the node can forward at most ~32 packets per minute at full budget.
If all 32 arrive within one second (they cannot physically, but as an upper
bound), 32 tasks would be in-flight simultaneously. A cap of 30 is aggressive
enough to protect against unbounded growth but not so low that it would drop
legitimate traffic under any realistic burst scenario.
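
The airtime figure above can be sanity-checked with the standard Semtech
time-on-air formula. Coding rate 4/5, an 8-symbol preamble, explicit header,
CRC on, and no low-data-rate optimisation are assumptions here (the PR does not
state them), which is why this sketch lands near, not exactly on, the quoted
~111 ms.

```python
import math

def lora_airtime_ms(payload_bytes: int, sf: int, bw_hz: float,
                    cr: int = 1, preamble: int = 8,
                    explicit_header: bool = True, crc: bool = True,
                    low_dr_optimize: bool = False) -> float:
    """Semtech time-on-air formula for SX127x/SX126x-class radios.

    `cr` is the coding-rate index (1 => 4/5 ... 4 => 4/8).
    """
    t_sym = (2 ** sf) / bw_hz * 1000.0  # symbol time in ms
    de = 1 if low_dr_optimize else 0
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

# SF8 / BW 125 kHz / 30-byte payload -> ~123 ms with these assumptions;
# slightly different preamble/CR choices yield the ~111 ms quoted above.
airtime = lora_airtime_ms(30, sf=8, bw_hz=125_000)
```

Dividing the 3.6 s per-minute budget by this per-packet airtime gives the
roughly 29–32 forwards per minute the sizing argument relies on.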

@@ -0,0 +1,395 @@

# PR: Serialise Radio TX and Close Duty-Cycle TOCTOU Race

**Branch:** `fix/tx-serialization`
**Base:** `rightup/fix-perfom-speed`
**Files changed:** `repeater/engine.py` (1 file, ~30 lines net)

---

## Problem

Two separate bugs share the same root cause: concurrent `delayed_send` coroutines
racing each other at transmission time.

### Bug 1 — Interleaved SPI/serial commands to the radio

The queue loop (added in an earlier commit) dispatches each incoming packet as an
`asyncio.create_task`, so multiple `delayed_send` coroutines can have their sleep
timers running concurrently. That is correct and intentional — it mirrors how
firmware nodes use a hardware timer so the radio keeps listening during a TX delay.

However, the LoRa radio is **half-duplex**: it can only transmit one packet at a
time. When two delay timers expire at nearly the same moment, both coroutines call
`dispatcher.send_packet` simultaneously. `send_packet` issues a sequence of
SPI/serial register writes to the radio; two tasks interleaving these writes
produces undefined radio state, and neither packet is reliably transmitted.

### Bug 2 — TOCTOU gap in duty-cycle enforcement

`__call__` calls `can_transmit()` before scheduling a task:

```python
# __call__ (before this fix)
can_tx, wait_time = self.airtime_mgr.can_transmit(airtime_ms)
if not can_tx:
    ...  # drop or defer
tx_task = await self.schedule_retransmit(fwd_pkt, delay, airtime_ms, ...)
```

`record_tx()` is only called later, inside `delayed_send`, after the sleep
completes. Between the check and the debit there is a window that spans the
entire TX delay (up to several seconds). Two packets that both pass the check
before either has slept and recorded its airtime will **both** be transmitted even
if transmitting both would exceed the duty-cycle budget.

Under normal single-packet conditions this window is harmless. Under burst
conditions — multi-hop amplification, collision retries, or a busy mesh segment
where several packets arrive within the same delay window — multiple tasks pass
the advisory check simultaneously, and the duty-cycle limit is exceeded.

---

## Root Cause

There is no mutual exclusion around the radio send path. Each `delayed_send`
coroutine independently checks duty-cycle, sleeps, and transmits without
coordinating with any other concurrent coroutine doing the same thing.

---

## Solution

Add `self._tx_lock = asyncio.Lock()` (initialised in `__init__`) and acquire it
inside `delayed_send` **after** the sleep completes:

```
Delay timers run concurrently (unchanged):
Task A: sleep(0.9s) ──────► acquire _tx_lock → check → TX A → release
Task B: sleep(1.2s) ──────────► acquire _tx_lock (waits) ─────► check → TX B → release
Task C: sleep(2.1s) ──────────────────────────────────────────────────────► ...

Radio: one packet at a time, duty-cycle state always stable inside the lock.
```

Inside the lock, a **second** `can_transmit()` call is made immediately before
sending. Because only one task holds the lock at a time, airtime state is stable
at this point and `record_tx()` follows on success — check and debit are
effectively atomic. This closes the TOCTOU window completely.

The upfront `can_transmit()` in `__call__` is retained as an **advisory** fast
path: it still drops or defers packets that are obviously over budget before a
delay task is even scheduled, avoiding unnecessary sleep timers. It is no longer
the enforcement point.
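
The check-and-debit atomicity can be demonstrated with a runnable toy. The
`AirtimeBudget` class and `delayed_send` helper below are hypothetical
stand-ins for the engine's real objects; the point is only the shape of the
lock-guarded gate.

```python
import asyncio

class AirtimeBudget:
    """Toy duty-cycle budget: check-then-debit must be atomic."""

    def __init__(self, budget_ms: float):
        self.remaining = budget_ms

    def can_transmit(self, airtime_ms: float) -> bool:
        return airtime_ms <= self.remaining

    def record_tx(self, airtime_ms: float) -> None:
        self.remaining -= airtime_ms

async def delayed_send(budget, lock, airtime_ms, sent):
    await asyncio.sleep(0.01)          # TX delay timers run concurrently
    async with lock:                   # serialise the radio send path
        if not budget.can_transmit(airtime_ms):
            return                     # authoritative gate: drop at TX time
        await asyncio.sleep(0)         # stands in for send_packet
        budget.record_tx(airtime_ms)   # debit only after a successful send
        sent.append(airtime_ms)

async def demo():
    budget = AirtimeBudget(budget_ms=111)   # room for exactly one packet
    lock = asyncio.Lock()
    sent = []
    await asyncio.gather(*(delayed_send(budget, lock, 111, sent)
                           for _ in range(2)))
    return sent

sent = asyncio.run(demo())
```

Without the lock, both coroutines could pass `can_transmit` at the `await`
between check and debit; with it, exactly one packet spends the budget.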

---

## Why This Is the Right Approach

### Alternative A — Move `record_tx()` before the sleep

```python
# hypothetical
self.airtime_mgr.record_tx(airtime_ms)  # reserve before sleeping
await asyncio.sleep(delay)
await self.dispatcher.send_packet(...)  # actual TX
```

Records airtime even if the send fails (exception, LBT busy, radio error) —
the budget is debited for a packet that was never transmitted. Over time this
inflates the apparent airtime, causing the node to throttle legitimate traffic
it actually has budget for. Requires a compensating `release_airtime()` on
every failure path, creating new complexity and failure modes.

### Alternative B — A single global advisory check (status quo before this PR)

Already demonstrated to fail under burst conditions (two tasks both pass before
either records its airtime).

### Alternative C — asyncio.Lock (this PR)

- Delay timers remain concurrent — no regression on the primary non-blocking TX
  improvement.
- The check-and-debit pair is atomic within the lock — no TOCTOU window.
- No phantom airtime on send failure — `record_tx()` is only called on success.
- One `asyncio.Lock` object, no new state machines or compensating paths.
- The lock is `async`, so it only blocks other TX tasks, not the event loop or
  the packet RX queue.

### Why `asyncio.Lock` rather than `threading.Lock`

The entire repeater runs on a single asyncio event loop. `asyncio.Lock` only
yields at `await` points; it does not involve OS threads or context switches.
A `threading.Lock` would work but is semantically wrong here (this is not a
thread-safety problem) and would block the event loop thread if held across an
`await`.
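
That last property is worth seeing concretely. In this standalone demo (names
are local to the demo, not engine code), one coroutine holds an `asyncio.Lock`
across a 100 ms `await` while an unrelated receive loop keeps running on the
same event loop, which a `threading.Lock` held in the loop thread would stall.

```python
import asyncio
import time

async def demo():
    lock = asyncio.Lock()
    ticks = []

    async def tx():
        # Hold the lock across an await, as delayed_send does during the send.
        async with lock:
            await asyncio.sleep(0.1)

    async def rx_loop():
        # Unrelated event-loop work: keeps running while the lock is held.
        for _ in range(5):
            ticks.append(time.monotonic())
            await asyncio.sleep(0.01)

    await asyncio.gather(tx(), rx_loop())
    return ticks

ticks = asyncio.run(demo())
```

All five receive-loop iterations complete well inside the 100 ms the lock is
held, because `asyncio.Lock` only suspends coroutines that try to acquire it.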

---

## Changes

### `repeater/engine.py`

**1. Move `import random` to module level**

```python
# before (inside _calculate_tx_delay):
def _calculate_tx_delay(self, packet, snr=0.0):
    import random
    ...

# after (top of file, with other stdlib imports):
import random
```

This is a housekeeping fix bundled with this PR because `random` is a stdlib
module that should never be imported inside a hot-path function — Python caches
the import after the first call, but the attribute lookup and cache check still
run on every call. Moving it to module level is the standard pattern.

**2. Add `self._tx_lock` to `__init__`**

```python
# Serialise all radio TX calls.
#
# Background: since the queue loop dispatches each packet as an
# asyncio.create_task, multiple _route_packet coroutines can have their
# TX delay timers running concurrently — which is the intended behaviour
# (firmware nodes do the same with a hardware timer). However, the
# LoRa radio is half-duplex: it can only transmit one packet at a time.
# Without serialisation, two tasks whose delay timers expire near-
# simultaneously both call dispatcher.send_packet, interleaving SPI/serial
# commands to the radio and both passing the LBT check before either has
# actually transmitted.
#
# _tx_lock is acquired after each delay sleep and held for the entire
# send_packet call. Delays still run concurrently; only the radio
# access is serialised. This also eliminates the TOCTOU gap in duty-cycle
# enforcement — see schedule_retransmit / delayed_send for details.
self._tx_lock = asyncio.Lock()
```

**3. Acquire lock inside `delayed_send`, add authoritative duty-cycle gate**

```python
async def delayed_send():
    await asyncio.sleep(delay)

    # Acquire the TX lock *after* the delay so that delay timers for
    # multiple packets still run concurrently (matching firmware). Only
    # one coroutine enters the radio send path at a time.
    async with self._tx_lock:
        # ── Authoritative duty-cycle gate ─────────────────────────────
        # The upfront can_transmit() call in __call__ is advisory: it
        # avoids scheduling packets that are obviously over budget, but
        # it cannot prevent a race between two tasks whose delay timers
        # expire at almost the same moment. Both tasks pass the advisory
        # check before either has recorded its airtime, then both try to
        # transmit.
        #
        # Inside _tx_lock only one task runs at a time, so airtime state
        # is stable here. The check and the subsequent record_tx() are
        # effectively atomic — no TOCTOU window.
        if airtime_ms > 0:
            can_tx_now, _ = self.airtime_mgr.can_transmit(airtime_ms)
            if not can_tx_now:
                logger.warning(
                    "Packet dropped at TX time: duty-cycle exceeded "
                    "(airtime=%.1fms)", airtime_ms,
                )
                return

        last_error = None
        for attempt in range(2 if local_transmission else 1):
            try:
                await self.dispatcher.send_packet(fwd_pkt, wait_for_ack=False)
                self._record_packet_sent(fwd_pkt)
                if airtime_ms > 0:
                    self.airtime_mgr.record_tx(airtime_ms)
                ...
```

---

## Invariants Maintained

| Property | Before | After |
|----------|--------|-------|
| Delay timers run concurrently | ✅ | ✅ |
| Radio accessed by one task at a time | ❌ | ✅ |
| Duty-cycle check and debit atomic | ❌ | ✅ |
| Airtime recorded only on TX success | ✅ | ✅ |
| Event loop not blocked by lock | ✅ | ✅ (asyncio.Lock) |

---

## Test Plan

### Unit tests (can run without hardware)

**T1 — Serial TX ordering**

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock, patch

async def test_tx_serialized():
    """Two tasks whose delays expire simultaneously must not interleave."""
    send_order = []
    send_lock = asyncio.Lock()

    async def mock_send(pkt, **kw):
        # Confirm the _tx_lock is already held when we enter send_packet
        assert send_lock.locked(), "send_packet called without _tx_lock held"
        send_order.append(pkt)
        await asyncio.sleep(0)  # yield; a second task must not enter here

    engine._tx_lock = send_lock  # replace with the mock lock reference
    engine.dispatcher.send_packet = mock_send

    t1 = asyncio.create_task(engine.schedule_retransmit(pkt_a, delay=0.01, airtime_ms=100))
    t2 = asyncio.create_task(engine.schedule_retransmit(pkt_b, delay=0.01, airtime_ms=100))
    await asyncio.gather(t1, t2)

    assert len(send_order) == 2  # both transmitted
    assert send_order[0] is not send_order[1]  # different packets
```

**T2 — Authoritative duty-cycle gate blocks over-budget second packet**

```python
async def test_second_packet_dropped_when_over_budget():
    """When first TX fills the budget, second task must be dropped inside the lock."""
    # Set a tiny budget: 50ms per minute
    engine.airtime_mgr.max_airtime_per_minute = 50

    sent = []

    async def mock_send(pkt, **kw):
        sent.append(pkt)

    engine.dispatcher.send_packet = mock_send

    # Each packet costs ~111ms (SF8, BW125, 30-byte payload) — first passes, second must not
    t1 = asyncio.create_task(engine.schedule_retransmit(pkt_a, delay=0.01, airtime_ms=111))
    t2 = asyncio.create_task(engine.schedule_retransmit(pkt_b, delay=0.01, airtime_ms=111))
    await asyncio.gather(t1, t2)

    assert len(sent) == 1, f"Expected 1 TX, got {len(sent)}"
```

**T3 — Airtime not debited on TX failure**

```python
import pytest

async def test_airtime_not_recorded_on_send_failure():
    before = engine.airtime_mgr.total_airtime_ms

    async def failing_send(pkt, **kw):
        raise RuntimeError("radio error")

    engine.dispatcher.send_packet = failing_send

    with pytest.raises(RuntimeError):
        await engine.schedule_retransmit(pkt, delay=0, airtime_ms=100)

    assert engine.airtime_mgr.total_airtime_ms == before, \
        "Airtime must not be recorded when send raises"
```

**T4 — Advisory check still drops before scheduling (fast path not regressed)**

```python
async def test_advisory_check_still_drops_obvious_overage():
    """__call__ should not even schedule a task when clearly over budget."""
    engine.airtime_mgr.max_airtime_per_minute = 0  # budget exhausted

    tasks_created = []
    original = asyncio.create_task
    asyncio.create_task = lambda coro: tasks_created.append(coro) or original(coro)
    try:
        await engine(over_budget_packet, metadata={})
    finally:
        asyncio.create_task = original  # always restore the global

    assert not tasks_created, "No task should be created when advisory check fails"
```

### Integration / field tests (with hardware)

**T5 — Burst scenario: 5 packets arrive within the same delay window**

1. Connect the repeater to a radio.
2. Using a second node, send 5 FLOOD packets in quick succession (< 100 ms apart)
   with a low RSSI score so the repeater's delay is ~1–2 s for all of them.
3. Monitor the radio with a spectrum analyser or a third node running in monitor
   mode.

**Expected (after this fix):**
- Transmissions are sequential — no overlapping on-air signals.
- `Retransmitted packet` log lines appear one after another, each with a non-zero
  airtime value.
- No `Retransmit failed` errors in the log.
- Duty-cycle log shows airtime accumulating correctly.

**Expected (before this fix, to confirm the bug existed):**
- Occasional `Retransmit failed` errors under burst load.
- Airtime tracking diverging from actual on-air time (double-counted or missed).

**T6 — Duty-cycle enforcement under burst**

1. Set `max_airtime_per_minute` to a low value (e.g. 500 ms) in config.
2. Send 10 packets rapidly so the repeater tries to forward all 10.
3. Observe logs.

**Expected:**
- First N packets transmitted (total airtime ≤ 500 ms).
- Subsequent packets log `"Packet dropped at TX time: duty-cycle exceeded"` from
  inside `delayed_send` (not just the advisory drop).
- `airtime_mgr.get_stats()["utilization_percent"]` reads ≤ 100%.

**T7 — Normal single-packet forwarding not regressed**

1. Send one packet every 5 seconds (well within duty-cycle budget).
2. Verify each packet is forwarded with correct airtime logged.
3. Verify no lock contention warnings in the log.

**T8 — Local TX retry path (local_transmission=True) still works**

1. Send a command that triggers a local transmission (e.g. a ping reply).
2. Briefly block the radio (simulate with a mock) so the first attempt fails.
3. Verify the retry fires after 1 s and the packet is eventually transmitted.

---

## Proof of Correctness

### Why `asyncio.Lock` is sufficient (no OS-level synchronisation needed)

Python's asyncio event loop is **single-threaded**. All coroutines share one
thread and only yield execution at `await` points. Between two consecutive
`await` calls in a coroutine, the event loop does not switch to another coroutine.

`asyncio.Lock.acquire()` suspends the current coroutine if the lock is held,
returning control to the event loop. `asyncio.Lock.release()` wakes the next
waiter. Because `send_packet` is awaited inside the lock, no other TX task can
run until the current one releases the lock and the event loop gets a chance to
schedule the next waiter.

There is no possibility of the race seen with `threading.Lock` where an OS thread
can be preempted mid-instruction.

### Why the advisory check in `__call__` cannot be removed

The advisory check is still necessary as a fast path. If it were removed, every
incoming packet — even when the node is clearly at 100% duty-cycle — would
schedule a `delayed_send` task that would sleep for the full TX delay (up to
several seconds) before the lock drops it. Under a sustained flood of incoming
packets this wastes memory and CPU. The advisory check prunes the queue early at
negligible cost.

### Why `record_tx()` must be inside the lock (not before or after)

- **Before the send:** records airtime for a packet that may never be transmitted
  (send could fail, LBT could reject it). Budget is overcounted.
- **After releasing the lock:** a second task could pass the authoritative
  `can_transmit()` check between `send_packet` returning and `record_tx()` being
  called — the TOCTOU window reopens at a smaller scale.
- **Inside the lock, after a successful send:** the budget is debited exactly once
  for exactly the packets that were actually transmitted. The lock ensures no
  other task reads airtime state between the check and the debit.
+11
-11
@@ -1,7 +1,5 @@
"""
Systemd service file template for Py MC - Meshcore Repeater Daemon.
Install as /etc/systemd/system/pymc-repeater.service
"""
#Systemd service file template for Py MC - Meshcore Repeater Daemon.
#Install as /etc/systemd/system/pymc-repeater.service

[Unit]
Description=pyMC Repeater Daemon
@@ -12,27 +10,29 @@ Wants=network-online.target
Type=simple
User=repeater
Group=repeater
WorkingDirectory=/opt/pymc_repeater
Environment="PYTHONPATH=/opt/pymc_repeater"
WorkingDirectory=/var/lib/pymc_repeater

# Start command - use python module directly with proper path
ExecStart=/usr/bin/python3 -m repeater.main --config /etc/pymc_repeater/config.yaml
# Start command - use venv python to avoid system package conflicts
ExecStart=/opt/pymc_repeater/venv/bin/python -m repeater.main --config /etc/pymc_repeater/config.yaml

# Restart on failure
Restart=on-failure
RestartSec=5

# Allow up to 10s for graceful shutdown before SIGKILL
TimeoutStopSec=10

# Resource limits
MemoryLimit=256M
MemoryHigh=256M

# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=pymc-repeater

# Security (relaxed for proper operation)
NoNewPrivileges=true
# Security (relaxed for service self-restart via sudo)
ReadWritePaths=/var/log/pymc_repeater /var/lib/pymc_repeater /etc/pymc_repeater
SupplementaryGroups=plugdev dialout

[Install]
WantedBy=multi-user.target
+31
-6
@@ -1,10 +1,10 @@
[build-system]
requires = ["setuptools>=61.0", "wheel"]
requires = ["setuptools>=61.0", "wheel", "setuptools_scm>=8.0"]
build-backend = "setuptools.build_meta"

[project]
name = "pymc_repeater"
version = "1.0.5"
dynamic = ["version"]
authors = [
    {name = "Lloyd", email = "lloyd@rightup.co.uk"},
]
@@ -29,19 +29,28 @@ classifiers = [
keywords = ["mesh", "networking", "lora", "repeater", "daemon", "iot"]


dependencies = [
    "pymc_core[hardware]",
    "pymc_core[hardware]==1.0.10",
    "pyyaml>=6.0.0",
    "cherrypy>=18.0.0",
    "paho-mqtt>=1.6.0",
    "cherrypy-cors==1.7.0",
    "psutil>=5.9.0",
    "pyjwt>=2.8.0",
    "ws4py>=0.6.0",
]


[project.optional-dependencies]
# SX1262/SPI support (Linux only; required for Raspberry Pi HATs)
hardware = [
    "pymc_core[hardware]",
]
# RRD metrics (Performance Metrics chart); system librrd required (e.g. apt install rrdtool)
rrd = [
    "rrdtool",
]
dev = [
    "pytest>=7.4.0",
    "pytest-asyncio>=0.21.0",
@@ -52,9 +61,20 @@ dev = [

[project.scripts]
pymc-repeater = "repeater.main:main"
pymc-cli = "repeater.local_cli:main"

[tool.setuptools]
packages = ["repeater"]
[tool.setuptools.packages.find]
where = ["."]
include = ["repeater*"]

[tool.setuptools.package-data]
repeater = [
    "web/html/*.html",
    "web/html/*.ico",
    "web/html/assets/**/*",
    "web/*.yaml",
    "web/*.html",
]

[tool.black]
line-length = 100
@@ -63,3 +83,8 @@ target-version = ['py38', 'py39', 'py310', 'py311', 'py312']
[tool.isort]
profile = "black"
line_length = 100

[tool.setuptools_scm]
version_scheme = "guess-next-dev"
local_scheme = "no-local-version"
version_file = "repeater/_version.py"
+159
-1
@@ -1 +1,159 @@
{"config":{"connect_screen":{"info_message":"The default pin for devices without a screen is 123456. Trouble pairing? Forget the bluetooth device in system settings."},"remote_management":{"repeaters":{"guest_login_enabled":true,"guest_login_disabled_message":"Guest login has been temporarily disabled. Please try again later.","guest_login_passwords":[""],"flood_routed_guest_login_enabled":true,"flood_routed_guest_login_disabled_message":"To avoid overwhelming the mesh with flood packets, please set a path to log in to a repeater as a guest."}},"suggested_radio_settings":{"info_message":"These radio settings have been suggested by the community.","entries":[{"title":"Australia","description":"915.800MHz / SF10 / BW250 / CR5","frequency":"915.800","spreading_factor":"10","bandwidth":"250","coding_rate":"5"},{"title":"Australia: Victoria","description":"916.575MHz / SF7 / BW62.5 / CR8","frequency":"916.575","spreading_factor":"7","bandwidth":"62.5","coding_rate":"8"},{"title":"EU/UK (Narrow)","description":"869.618MHz / SF8 / BW62.5 / CR8","frequency":"869.618","spreading_factor":"8","bandwidth":"62.5","coding_rate":"8"},{"title":"EU/UK (Long Range)","description":"869.525MHz / SF11 / BW250 / CR5","frequency":"869.525","spreading_factor":"11","bandwidth":"250","coding_rate":"5"},{"title":"EU/UK (Medium Range)","description":"869.525MHz / SF10 / BW250 / CR5","frequency":"869.525","spreading_factor":"10","bandwidth":"250","coding_rate":"5"},{"title":"Czech Republic (Narrow)","description":"869.525MHz / SF7 / BW62.5 / CR5","frequency":"869.525","spreading_factor":"7","bandwidth":"62.5","coding_rate":"5"},{"title":"EU 433MHz (Long Range)","description":"433.650MHz / SF11 / BW250 / CR5","frequency":"433.650","spreading_factor":"11","bandwidth":"250","coding_rate":"5"},{"title":"New Zealand","description":"917.375MHz / SF11 / BW250 / CR5","frequency":"917.375","spreading_factor":"11","bandwidth":"250","coding_rate":"5"},{"title":"New Zealand (Narrow)","description":"917.375MHz / SF7 / BW62.5 / CR5","frequency":"917.375","spreading_factor":"7","bandwidth":"62.5","coding_rate":"5"},{"title":"Portugal 433","description":"433.375MHz / SF9 / BW62.5 / CR6","frequency":"433.375","spreading_factor":"9","bandwidth":"62.5","coding_rate":"6"},{"title":"Portugal 868","description":"869.618MHz / SF7 / BW62.5 / CR6","frequency":"869.618","spreading_factor":"7","bandwidth":"62.5","coding_rate":"6"},{"title":"Switzerland","description":"869.618MHz / SF8 / BW62.5 / CR8","frequency":"869.618","spreading_factor":"8","bandwidth":"62.5","coding_rate":"8"},{"title":"USA/Canada (Recommended)","description":"910.525MHz / SF7 / BW62.5 / CR5","frequency":"910.525","spreading_factor":"7","bandwidth":"62.5","coding_rate":"5"},{"title":"USA/Canada (Alternate)","description":"910.525MHz / SF11 / BW250 / CR5","frequency":"910.525","spreading_factor":"11","bandwidth":"250","coding_rate":"5"},{"title":"Vietnam","description":"920.250MHz / SF11 / BW250 / CR5","frequency":"920.250","spreading_factor":"11","bandwidth":"250","coding_rate":"5"}]}}}
{
  "config": {
    "connect_screen": {
      "info_message": "The default pin for devices without a screen is 123456. Trouble pairing? Forget the bluetooth device in system settings."
    },
    "remote_management": {
      "repeaters": {
        "guest_login_enabled": true,
        "guest_login_disabled_message": "Guest login has been temporarily disabled. Please try again later.",
        "guest_login_passwords": [
          ""
        ],
        "flood_routed_guest_login_enabled": true,
        "flood_routed_guest_login_disabled_message": "To avoid overwhelming the mesh with flood packets, please set a path to log in to a repeater as a guest."
      }
    },
    "suggested_radio_settings": {
      "info_message": "These radio settings have been suggested by the community.",
      "entries": [
        {
          "title": "Australia",
          "description": "915.800MHz / SF10 / BW250 / CR5",
          "frequency": "915.800",
          "spreading_factor": "10",
          "bandwidth": "250",
          "coding_rate": "5"
        },
        {
          "title": "Australia: NSW (Wide)",
          "description": "915.800MHz / SF11 / BW250 / CR5",
          "frequency": "915.800",
          "spreading_factor": "11",
          "bandwidth": "250",
          "coding_rate": "5"
        },
        {
          "title": "Australia (Narrow)",
          "description": "916.575MHz / SF7 / BW62.5 / CR8",
          "frequency": "916.575",
          "spreading_factor": "7",
          "bandwidth": "62.5",
          "coding_rate": "8"
        },
        {
          "title": "Australia: SA, WA, QLD",
          "description": "923.125MHz / SF8 / BW62.5 / CR8",
          "frequency": "923.125",
          "spreading_factor": "8",
          "bandwidth": "62.5",
          "coding_rate": "8"
        },
        {
          "title": "EU/UK (Narrow)",
          "description": "869.618MHz / SF8 / BW62.5 / CR8",
          "frequency": "869.618",
          "spreading_factor": "8",
          "bandwidth": "62.5",
          "coding_rate": "8"
        },
        {
          "title": "EU/UK (Long Range)",
          "description": "869.525MHz / SF11 / BW250 / CR5",
          "frequency": "869.525",
          "spreading_factor": "11",
          "bandwidth": "250",
          "coding_rate": "5"
        },
        {
          "title": "EU/UK (Medium Range)",
          "description": "869.525MHz / SF10 / BW250 / CR5",
          "frequency": "869.525",
          "spreading_factor": "10",
          "bandwidth": "250",
          "coding_rate": "5"
        },
        {
          "title": "Czech Republic (Narrow)",
          "description": "869.525MHz / SF7 / BW62.5 / CR5",
          "frequency": "869.525",
          "spreading_factor": "7",
          "bandwidth": "62.5",
          "coding_rate": "5"
        },
        {
          "title": "EU 433MHz (Long Range)",
          "description": "433.650MHz / SF11 / BW250 / CR5",
          "frequency": "433.650",
          "spreading_factor": "11",
          "bandwidth": "250",
          "coding_rate": "5"
        },
        {
          "title": "New Zealand",
          "description": "917.375MHz / SF11 / BW250 / CR5",
          "frequency": "917.375",
          "spreading_factor": "11",
          "bandwidth": "250",
          "coding_rate": "5"
        },
        {
          "title": "New Zealand (Narrow)",
          "description": "917.375MHz / SF7 / BW62.5 / CR5",
          "frequency": "917.375",
          "spreading_factor": "7",
          "bandwidth": "62.5",
          "coding_rate": "5"
        },
        {
          "title": "Portugal 433",
          "description": "433.375MHz / SF9 / BW62.5 / CR6",
          "frequency": "433.375",
          "spreading_factor": "9",
          "bandwidth": "62.5",
          "coding_rate": "6"
        },
        {
          "title": "Portugal 868",
          "description": "869.618MHz / SF7 / BW62.5 / CR6",
          "frequency": "869.618",
          "spreading_factor": "7",
          "bandwidth": "62.5",
          "coding_rate": "6"
        },
        {
          "title": "Switzerland",
          "description": "869.618MHz / SF8 / BW62.5 / CR8",
          "frequency": "869.618",
          "spreading_factor": "8",
          "bandwidth": "62.5",
          "coding_rate": "8"
        },
        {
          "title": "USA/Canada (Recommended)",
          "description": "910.525MHz / SF7 / BW62.5 / CR5",
          "frequency": "910.525",
          "spreading_factor": "7",
          "bandwidth": "62.5",
          "coding_rate": "5"
        },
        {
          "title": "USA/Canada (Alternate)",
          "description": "910.525MHz / SF11 / BW250 / CR5",
          "frequency": "910.525",
          "spreading_factor": "11",
          "bandwidth": "250",
          "coding_rate": "5"
        },
        {
          "title": "Vietnam",
          "description": "920.250MHz / SF11 / BW250 / CR5",
          "frequency": "920.250",
          "spreading_factor": "11",
          "bandwidth": "250",
          "coding_rate": "5"
        }
      ]
    }
  }
}
@@ -0,0 +1,76 @@
{
  "default_board": "luckfox-pimesh-v2",
  "default_radio_preset": "USA/Canada (Recommended)",
  "buildroot_hardware": {
    "luckfox-pimesh-v2": {
      "name": "Luckfox PiMesh V2",
      "description": "Luckfox Pico Pi with PiMesh-1W V2 / E22P wiring",
      "hardware_id": "pimesh-1w-v2",
      "tx_power": 22,
      "aliases": [
        "1",
        "v2",
        "pimesh-v2",
        "pimesh-1w-v2"
      ],
      "sx1262_overrides": {
        "cs_pin": -1,
        "reset_pin": 54,
        "busy_pin": 122,
        "irq_pin": 121,
        "en_pin": 0,
        "txen_pin": -1,
        "rxen_pin": -1,
        "use_dio2_rf": true,
        "use_dio3_tcxo": true,
        "dio3_tcxo_voltage": 1.8
      }
    },
    "luckfox-pimesh-v1": {
      "name": "Luckfox PiMesh V1",
      "description": "Luckfox Pico Pi with PiMesh-1W V1 wiring",
      "hardware_id": "pimesh-1w-v1",
      "tx_power": 22,
      "aliases": [
        "2",
        "v1",
        "pimesh-v1",
        "pimesh-1w-v1"
      ],
      "sx1262_overrides": {
        "cs_pin": 145,
        "reset_pin": 54,
        "busy_pin": 123,
        "irq_pin": 55,
        "en_pin": -1,
        "txen_pin": 52,
        "rxen_pin": 53,
        "use_dio2_rf": false,
        "use_dio3_tcxo": true,
        "dio3_tcxo_voltage": 1.8
      }
    },
    "luckfox-meshadv": {
      "name": "Luckfox MeshAdv",
      "description": "Luckfox Pico Pi with MeshAdv wiring",
      "hardware_id": "meshadv",
      "tx_power": 22,
      "aliases": [
        "3",
        "meshadv"
      ],
      "sx1262_overrides": {
        "cs_pin": 145,
        "reset_pin": 54,
        "busy_pin": 123,
        "irq_pin": 55,
        "en_pin": -1,
        "txen_pin": 52,
        "rxen_pin": 53,
        "use_dio2_rf": false,
        "use_dio3_tcxo": true,
        "dio3_tcxo_voltage": 1.8
      }
    }
  }
}
+167
-16
@@ -16,8 +16,8 @@
      "preamble_length": 17,
      "is_waveshare": true
    },
    "uconsole": {
      "name": "uConsole LoRa Module",
    "uconsole_aiov1": {
      "name": "uConsole LoRa Module aio v1",
      "bus_id": 1,
      "cs_id": 0,
      "cs_pin": -1,
@@ -31,24 +31,25 @@
      "tx_power": 22,
      "preamble_length": 17
    },
    "pimesh-1w-usa": {
      "name": "PiMesh-1W (USA)",
      "bus_id": 0,
    "uconsole_aio_v2": {
      "name": "uConsole LoRa Module aio v2",
      "bus_id": 1,
      "cs_id": 0,
      "cs_pin": 21,
      "reset_pin": 18,
      "busy_pin": 20,
      "irq_pin": 16,
      "txen_pin": 13,
      "rxen_pin": 12,
      "cs_pin": -1,
      "reset_pin": 25,
      "busy_pin": 24,
      "irq_pin": 26,
      "txen_pin": -1,
      "rxen_pin": -1,
      "txled_pin": -1,
      "rxled_pin": -1,
      "tx_power": 30,
      "tx_power": 22,
      "preamble_length": 17,
      "use_dio3_tcxo": true,
      "preamble_length": 17
      "use_dio2_rf": true
    },
    "pimesh-1w-uk": {
      "name": "PiMesh-1W (UK)",
    "pimesh-1w-v1": {
      "name": "PiMesh-1W (V1)",
      "bus_id": 0,
      "cs_id": 0,
      "cs_pin": 21,
@@ -63,6 +64,24 @@
      "use_dio3_tcxo": true,
      "preamble_length": 17
    },
    "pimesh-1w-v2": {
      "name": "PiMesh-1W (V2)",
      "bus_id": 0,
      "cs_id": 0,
      "cs_pin": -1,
      "reset_pin": 18,
      "busy_pin": 5,
      "irq_pin": 6,
      "txen_pin": -1,
      "rxen_pin": -1,
      "txled_pin": -1,
      "rxled_pin": -1,
      "en_pin": 26,
      "tx_power": 22,
      "use_dio3_tcxo": true,
      "use_dio2_rf": true,
      "preamble_length": 17
    },
    "meshadv-mini": {
      "name": "MeshAdv Mini",
      "bus_id": 0,
@@ -78,7 +97,7 @@
      "tx_power": 22,
      "preamble_length": 17
    },
    "meshadv": {
    "meshadv": {
      "name": "MeshAdv",
      "bus_id": 0,
      "cs_id": 0,
@@ -93,6 +112,138 @@
      "tx_power": 22,
      "use_dio3_tcxo": true,
      "preamble_length": 17
    },
    "zebra": {
      "name": "ZebraHat-1W",
      "bus_id": 0,
      "cs_id": 0,
      "cs_pin": 24,
      "reset_pin": 17,
      "busy_pin": 27,
      "irq_pin": 22,
      "txen_pin": -1,
      "rxen_pin": -1,
      "txled_pin": -1,
      "rxled_pin": -1,
      "tx_power": 18,
      "use_dio3_tcxo": true,
      "use_dio2_rf": true,
      "preamble_length": 17
    },
    "femtofox-1W-SX": {
      "name": "FemtoFox SX1262 (1W)",
      "bus_id": 0,
      "cs_id": 0,
      "cs_pin": 16,
      "gpio_chip": 1,
      "use_gpiod_backend": true,
      "reset_pin": 25,
      "busy_pin": 22,
      "irq_pin": 23,
      "txen_pin": -1,
      "rxen_pin": 24,
      "txled_pin": -1,
      "rxled_pin": -1,
      "tx_power": 30,
      "use_dio3_tcxo": true,
      "preamble_length": 17
    },
    "femtofox-2W-SX": {
      "name": "FemtoFox SX1262 (2W)",
      "bus_id": 0,
      "cs_id": 0,
      "cs_pin": 16,
      "gpio_chip": 1,
      "use_gpiod_backend": true,
      "reset_pin": 25,
      "busy_pin": 22,
      "irq_pin": 23,
      "txen_pin": -1,
      "rxen_pin": 24,
      "txled_pin": -1,
      "rxled_pin": -1,
      "tx_power": 8,
      "use_dio2_rf": true,
      "use_dio3_tcxo": true
    },
    "nebrahat": {
      "name": "NebraHat-2W",
      "bus_id": 0,
      "cs_id": 0,
      "cs_pin": 8,
      "reset_pin": 18,
      "busy_pin": 4,
      "irq_pin": 22,
      "txen_pin": -1,
      "rxen_pin": 25,
      "txled_pin": -1,
      "rxled_pin": -1,
      "tx_power": 8,
      "use_dio3_tcxo": true,
      "use_dio2_rf": true,
      "preamble_length": 17
    },
    "ch341-usb-sx1262": {
      "name": "CH341 USB-SPI + SX1262 (example)",
      "description": "SX1262 via CH341 USB-to-SPI adapter. NOTE: pin numbers are CH341 GPIO 0-7, not BCM.",
      "radio_type": "sx1262_ch341",
      "vid": 6790,
      "pid": 21778,
      "bus_id": 0,
      "cs_id": 0,
      "cs_pin": 0,
      "reset_pin": 2,
      "busy_pin": 4,
      "irq_pin": 6,
      "txen_pin": -1,
      "rxen_pin": 1,
      "txled_pin": -1,
      "rxled_pin": -1,
      "tx_power": 22,
      "use_dio2_rf": true,
      "use_dio3_tcxo": true,
      "dio3_tcxo_voltage": 1.8,
      "preamble_length": 17,
      "is_waveshare": false
    },
    "ultrapeater-e22": {
      "name": "Zindello Industries UltraPeater E22",
      "bus_id": 0,
      "cs_id": 0,
      "cs_pin": 16,
      "reset_pin": 22,
      "busy_pin": 11,
      "irq_pin": 10,
      "txen_pin": 20,
      "rxen_pin": 21,
      "txled_pin": 8,
      "rxled_pin": 1,
      "tx_power": 22,
      "use_dio2_rf": false,
      "use_dio3_tcxo": true,
      "preamble_length": 17,
      "use_gpiod_backend": true,
      "gpio_chip": 1
    },
    "ultrapeater-e22p": {
      "name": "Zindello Industries UltraPeater E22P",
      "bus_id": 0,
      "cs_id": 0,
      "cs_pin": 16,
      "reset_pin": 22,
      "busy_pin": 11,
      "irq_pin": 10,
      "txen_pin": 20,
      "rxen_pin": -1,
      "en_pin": 21,
      "txled_pin": 8,
      "rxled_pin": 1,
      "tx_power": 22,
      "use_dio2_rf": false,
      "use_dio3_tcxo": true,
      "preamble_length": 17,
      "use_gpiod_backend": true,
      "gpio_chip": 1
    }
  }
}
@@ -1 +1,9 @@
__version__ = "1.0.5"
try:
    from ._version import version as __version__
except ImportError:
    try:
        from importlib.metadata import version

        __version__ = version("pymc_repeater")
    except Exception:
        __version__ = "unknown"
+62
-9
@@ -1,4 +1,5 @@
import logging
import math
import time
from typing import Tuple

@@ -8,30 +9,77 @@ logger = logging.getLogger("AirtimeManager")

class AirtimeManager:
    def __init__(self, config: dict):
        self.config = config
        self.radio_config = config.get("radio", {})
        self.max_airtime_per_minute = config.get("duty_cycle", {}).get(
            "max_airtime_per_minute", 3600
        )

        # Store radio settings for airtime calculations
        self.spreading_factor = self.radio_config.get("spreading_factor", 7)
        self.bandwidth = self.radio_config.get("bandwidth", 125000)
        self.coding_rate = self.radio_config.get("coding_rate", 5)
        self.preamble_length = self.radio_config.get("preamble_length", 8)

        # Track airtime in rolling window
        self.tx_history = []  # [(timestamp, airtime_ms), ...]
        self.window_size = 60  # seconds
        self.total_airtime_ms = 0
        self.total_rx_airtime_ms = 0

    def calculate_airtime(
        self,
        payload_len: int,
        spreading_factor: int = 7,
        bandwidth_hz: int = 125000,
        spreading_factor: int = None,
        bandwidth_hz: int = None,
        coding_rate: int = None,
        preamble_len: int = None,
        crc_enabled: bool = True,
        explicit_header: bool = True,
    ) -> float:
        """
        Calculate LoRa packet airtime using the Semtech reference formula.

        bw_khz = bandwidth_hz / 1000
        symbol_time = (2**spreading_factor) / bw_khz
        preamble_time = 8 * symbol_time
        payload_symbols = (payload_len + 4.25) * 8
        payload_time = payload_symbols * symbol_time
        Reference: https://www.semtech.com/design-support/lora-calculator

        total_ms = preamble_time + payload_time
        return total_ms
        Args:
            payload_len: Payload length in bytes
            spreading_factor: SF7-SF12 (uses config value if None)
            bandwidth_hz: Bandwidth in Hz (uses config value if None)
            coding_rate: CR denominator, 5=4/5, 6=4/6, 7=4/7, 8=4/8 (uses config value if None)
            preamble_len: Preamble symbols (uses config value if None)
            crc_enabled: Whether CRC is enabled (default: True)
            explicit_header: Whether explicit header mode is used (default: True)

        Returns:
            Airtime in milliseconds
        """
        sf = spreading_factor or self.spreading_factor
        bw_hz = (bandwidth_hz or self.bandwidth)
        cr = coding_rate or self.coding_rate
        preamble_len = preamble_len or self.preamble_length
        crc = 1 if crc_enabled else 0
        h = 0 if explicit_header else 1  # H=0 for explicit, H=1 for implicit

        # Low data rate optimization: required for SF11/SF12 at 125kHz
        de = 1 if (sf >= 11 and bw_hz <= 125000) else 0

        # Symbol time in milliseconds: T_sym = 2^SF / BW_kHz
        t_sym = (2 ** sf) / (bw_hz / 1000)

        # Preamble time: T_preamble = (n_preamble + 4.25) * T_sym
        t_preamble = (preamble_len + 4.25) * t_sym

        # Payload symbol calculation (Semtech formula):
        # n_payload = 8 + ceil(max(8*PL - 4*SF + 28 + 16*CRC - 20*H, 0) / (4*(SF - 2*DE))) * CR
        numerator = max(8 * payload_len - 4 * sf + 28 + 16 * crc - 20 * h, 0)
        denominator = 4 * (sf - 2 * de)
        n_payload = 8 + math.ceil(numerator / denominator) * cr

        # Payload time
        t_payload = n_payload * t_sym

        # Total packet airtime
        return t_preamble + t_payload

    def can_transmit(self, airtime_ms: float) -> Tuple[bool, float]:
        enforcement_enabled = self.config.get("duty_cycle", {}).get("enforcement_enabled", True)
@@ -63,6 +111,10 @@ class AirtimeManager:
        self.total_airtime_ms += airtime_ms
        logger.debug(f"TX recorded: {airtime_ms: .1f}ms (total: {self.total_airtime_ms: .0f}ms)")

    def record_rx(self, airtime_ms: float):
        """Record received packet airtime (for total RX airtime stats)."""
        self.total_rx_airtime_ms += airtime_ms

    def get_stats(self) -> dict:
        now = time.time()
        self.tx_history = [(ts, at) for ts, at in self.tx_history if now - ts < self.window_size]
@@ -75,4 +127,5 @@ class AirtimeManager:
            "max_airtime_ms": self.max_airtime_per_minute,
            "utilization_percent": utilization,
            "total_airtime_ms": self.total_airtime_ms,
            "total_rx_airtime_ms": self.total_rx_airtime_ms,
        }
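The payload-symbol formula added above can be sanity-checked in isolation. The sketch below extracts the same Semtech time-on-air calculation into a free function (the name `lora_airtime_ms` is illustrative, not the repeater's API) and evaluates it at the SF7 / 125 kHz / CR 4-5 defaults:

```python
import math

def lora_airtime_ms(payload_len, sf=7, bw_hz=125000, cr=5, preamble_len=8,
                    crc_enabled=True, explicit_header=True):
    """LoRa time-on-air per the Semtech reference formula, in milliseconds."""
    crc = 1 if crc_enabled else 0
    h = 0 if explicit_header else 1                   # H=0 explicit, H=1 implicit
    de = 1 if (sf >= 11 and bw_hz <= 125000) else 0   # low data rate optimization
    t_sym = (2 ** sf) / (bw_hz / 1000)                # symbol time in ms
    t_preamble = (preamble_len + 4.25) * t_sym
    numerator = max(8 * payload_len - 4 * sf + 28 + 16 * crc - 20 * h, 0)
    n_payload = 8 + math.ceil(numerator / (4 * (sf - 2 * de))) * cr
    return t_preamble + n_payload * t_sym

# A 20-byte payload at SF7/125kHz/CR4-5 with an 8-symbol preamble:
# t_sym = 1.024 ms, 12.25 preamble symbols + 43 payload symbols -> ~56.6 ms
airtime = lora_airtime_ms(20)
```

Note how `de` flips to 1 at SF11+ with 125 kHz bandwidth, which shrinks the payload-symbol denominator exactly as in the class method above.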
@@ -0,0 +1,30 @@
"""Companion identity support for pyMC Repeater.

Exposes the MeshCore companion frame protocol over TCP for standard clients.
"""

from .bridge import RepeaterCompanionBridge
from .constants import (
    CMD_APP_START,
    CMD_GET_CONTACTS,
    CMD_SEND_LOGIN,
    CMD_SEND_TXT_MSG,
    CMD_SYNC_NEXT_MESSAGE,
    PUSH_CODE_MSG_WAITING,
    RESP_CODE_ERR,
    RESP_CODE_OK,
)
from .frame_server import CompanionFrameServer

__all__ = [
    "CompanionFrameServer",
    "RepeaterCompanionBridge",
    "CMD_APP_START",
    "CMD_GET_CONTACTS",
    "CMD_SEND_TXT_MSG",
    "CMD_SYNC_NEXT_MESSAGE",
    "CMD_SEND_LOGIN",
    "RESP_CODE_OK",
    "RESP_CODE_ERR",
    "PUSH_CODE_MSG_WAITING",
]
@@ -0,0 +1,122 @@
"""
Repeater CompanionBridge with SQLite-backed preference persistence.

Persists full NodePrefs as a JSON blob so companion settings (including
auto-add config) survive repeater restarts. Merge-on-load supports
schema evolution when NodePrefs gains or loses fields.
"""

from __future__ import annotations

import dataclasses
import logging
from enum import Enum
from typing import Any, Callable, Optional

from pymc_core.companion import CompanionBridge

logger = logging.getLogger("RepeaterCompanionBridge")


def _to_json_safe(value: Any) -> Any:
    """Convert a value to a JSON-serializable form (avoids TypeError from enums, bytes, etc.)."""
    if value is None or isinstance(value, (bool, int, float, str)):
        return value
    if isinstance(value, Enum):
        return value.value
    if isinstance(value, bytes):
        return value.hex()
    if isinstance(value, (list, tuple)):
        return [_to_json_safe(v) for v in value]
    if isinstance(value, dict):
        return {k: _to_json_safe(v) for k, v in value.items()}
    if dataclasses.is_dataclass(value) and not isinstance(value, type):
        return {f.name: _to_json_safe(getattr(value, f.name)) for f in dataclasses.fields(value)}
    return value


class RepeaterCompanionBridge(CompanionBridge):
    """CompanionBridge that persists and loads prefs (full NodePrefs) via SQLite JSON blob."""

    def __init__(
        self,
        identity,
        packet_injector: Callable[..., Any],
        node_name: str = "pyMC",
        adv_type: int = 1,
        max_contacts: int = 1000,
        max_channels: int = 40,
        offline_queue_size: int = 512,
        radio_config: Optional[dict] = None,
        authenticate_callback: Optional[Callable[..., tuple[bool, int]]] = None,
        initial_contacts: Optional[Any] = None,
        *,
        sqlite_handler=None,
        companion_hash: str = "",
        on_prefs_saved: Optional[Callable[[str], None]] = None,
    ) -> None:
        self._sqlite_handler = sqlite_handler
        self._companion_hash = companion_hash
        self._on_prefs_saved = on_prefs_saved
        super().__init__(
            identity=identity,
            packet_injector=packet_injector,
            node_name=node_name,
            adv_type=adv_type,
            max_contacts=max_contacts,
            max_channels=max_channels,
            offline_queue_size=offline_queue_size,
            radio_config=radio_config,
            authenticate_callback=authenticate_callback,
            initial_contacts=initial_contacts,
        )
        # Load persisted prefs (e.g. node_name) from SQLite so matching uses last-saved name
        self._load_prefs()

    def _save_prefs(self) -> None:
        """Persist full NodePrefs as JSON to SQLite."""
        if not self._sqlite_handler or not self._companion_hash:
            return
        try:
            prefs_dict = dataclasses.asdict(self.prefs)
            prefs_safe = _to_json_safe(prefs_dict)
            self._sqlite_handler.companion_save_prefs(
                str(self._companion_hash), prefs_safe
            )
            if self._on_prefs_saved:
                try:
                    self._on_prefs_saved(self.prefs.node_name)
                except Exception as e:
                    logger.warning("Failed to sync node_name to config: %s", e)
        except Exception as e:
            logger.warning("Failed to persist companion prefs: %s", e)

    def _load_prefs(self) -> None:
        """Load prefs from SQLite JSON and merge into self.prefs (only known keys)."""
        if not self._sqlite_handler or not self._companion_hash:
            return
        try:
            stored = self._sqlite_handler.companion_load_prefs(self._companion_hash)
            if not stored or not isinstance(stored, dict):
                return
            for key, value in stored.items():
                if not hasattr(self.prefs, key):
                    continue
                current = getattr(self.prefs, key)
                try:
                    if value is None:
                        continue
                    if isinstance(current, bool):
                        setattr(self.prefs, key, bool(value))
                    elif isinstance(current, int):
                        setattr(self.prefs, key, int(value))
                    elif isinstance(current, float):
                        setattr(self.prefs, key, float(value))
                    elif isinstance(current, str):
                        setattr(self.prefs, key, str(value))
                    else:
                        setattr(self.prefs, key, value)
                except (TypeError, ValueError) as e:
                    logger.debug("Skip prefs key %r: %s", key, e)
        except Exception as e:
            logger.warning("Failed to load companion prefs: %s", e)
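`_to_json_safe` exists because `dataclasses.asdict` alone leaves `Enum` and `bytes` values in the tree, which `json.dumps` rejects. A self-contained sketch of the same conversion, using hypothetical `Prefs` and `Mode` types for illustration (the real NodePrefs fields live in pyMC_core):

```python
import dataclasses
import json
from enum import Enum

def to_json_safe(value):
    """Recursively convert enums, bytes, and dataclasses to JSON-serializable forms."""
    if value is None or isinstance(value, (bool, int, float, str)):
        return value
    if isinstance(value, Enum):
        return value.value                 # enum -> underlying value
    if isinstance(value, bytes):
        return value.hex()                 # bytes -> hex string
    if isinstance(value, (list, tuple)):
        return [to_json_safe(v) for v in value]
    if isinstance(value, dict):
        return {k: to_json_safe(v) for k, v in value.items()}
    if dataclasses.is_dataclass(value) and not isinstance(value, type):
        return {f.name: to_json_safe(getattr(value, f.name))
                for f in dataclasses.fields(value)}
    return value

class Mode(Enum):
    DENY = 0

@dataclasses.dataclass
class Prefs:           # stand-in for NodePrefs
    node_name: str
    psk: bytes
    telem_mode: Mode

# json.dumps would raise TypeError on the raw dataclass; the safe form serializes cleanly
blob = json.dumps(to_json_safe(Prefs("pyMC", b"\x01\x02", Mode.DENY)))
```

The hex encoding for `bytes` is lossy only in type, not content: the loader can restore it with `bytes.fromhex` when the target field is known to be binary.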
@@ -0,0 +1,150 @@
"""Companion frame protocol constants — re-exported from pyMC_core.

All protocol constants now live in :mod:`pymc_core.companion.constants`.
This module re-exports them so existing repeater imports continue to work.
"""

# Re-exports; F401 ignored for re-exported names.
from pymc_core.companion.constants import (  # noqa: F401
    ADV_TYPE_CHAT,
    ADV_TYPE_REPEATER,
    ADV_TYPE_ROOM,
    ADV_TYPE_SENSOR,
    ADVERT_LOC_NONE,
    ADVERT_LOC_SHARE,
    AUTOADD_CHAT,
    AUTOADD_OVERWRITE_OLDEST,
    AUTOADD_REPEATER,
    AUTOADD_ROOM,
    AUTOADD_SENSOR,
    CMD_ADD_UPDATE_CONTACT,
    CMD_APP_START,
    CMD_DEVICE_QUERY,
    CMD_EXPORT_CONTACT,
    CMD_EXPORT_PRIVATE_KEY,
    CMD_FACTORY_RESET,
    CMD_GET_ADVERT_PATH,
    CMD_GET_AUTOADD_CONFIG,
    CMD_GET_BATT_AND_STORAGE,
    CMD_GET_CHANNEL,
    CMD_GET_CONTACT_BY_KEY,
    CMD_GET_CONTACTS,
    CMD_GET_CUSTOM_VARS,
    CMD_GET_DEVICE_TIME,
    CMD_GET_STATS,
    CMD_GET_TUNING_PARAMS,
    CMD_HAS_CONNECTION,
    CMD_IMPORT_CONTACT,
    CMD_IMPORT_PRIVATE_KEY,
    CMD_LOGOUT,
    CMD_REBOOT,
    CMD_REMOVE_CONTACT,
    CMD_RESET_PATH,
    CMD_SEND_ANON_REQ,
    CMD_SEND_BINARY_REQ,
    CMD_SEND_CHANNEL_TXT_MSG,
    CMD_SEND_CONTROL_DATA,
    CMD_SEND_LOGIN,
    CMD_SEND_PATH_DISCOVERY_REQ,
    CMD_SEND_RAW_DATA,
    CMD_SEND_SELF_ADVERT,
    CMD_SEND_STATUS_REQ,
    CMD_SEND_TELEMETRY_REQ,
    CMD_SEND_TRACE_PATH,
    CMD_SEND_TXT_MSG,
    CMD_SET_ADVERT_LATLON,
    CMD_SET_ADVERT_NAME,
    CMD_SET_AUTOADD_CONFIG,
    CMD_SET_CHANNEL,
    CMD_SET_CUSTOM_VAR,
    CMD_SET_DEVICE_PIN,
    CMD_SET_DEVICE_TIME,
    CMD_SET_FLOOD_SCOPE,
    CMD_SET_OTHER_PARAMS,
    CMD_SET_RADIO_PARAMS,
    CMD_SET_RADIO_TX_POWER,
    CMD_SET_TUNING_PARAMS,
    CMD_SHARE_CONTACT,
    CMD_SIGN_DATA,
    CMD_SIGN_FINISH,
    CMD_SIGN_START,
    CMD_SYNC_NEXT_MESSAGE,
    CONTACT_NAME_SIZE,
    DEFAULT_MAX_CHANNELS,
    DEFAULT_MAX_CONTACTS,
    DEFAULT_OFFLINE_QUEUE_SIZE,
    DEFAULT_PUBLIC_CHANNEL_SECRET,
    DEFAULT_RESPONSE_TIMEOUT_MS,
    ERR_CODE_BAD_STATE,
    ERR_CODE_FILE_IO_ERROR,
    ERR_CODE_ILLEGAL_ARG,
    ERR_CODE_NOT_FOUND,
    ERR_CODE_TABLE_FULL,
    ERR_CODE_UNSUPPORTED_CMD,
    FRAME_INBOUND_PREFIX,
    FRAME_OUTBOUND_PREFIX,
    MAX_FRAME_SIZE,
    MAX_PATH_SIZE,
    MAX_SIGN_DATA_SIZE,
    MSG_SEND_FAILED,
    MSG_SEND_SENT_DIRECT,
    MSG_SEND_SENT_FLOOD,
    PROTOCOL_CODE_ANON_REQ,
    PROTOCOL_CODE_BINARY_REQ,
    PROTOCOL_CODE_RAW_DATA,
    PUB_KEY_SIZE,
    PUBLIC_GROUP_PSK,
    PUSH_CODE_ADVERT,
    PUSH_CODE_BINARY_RESPONSE,
    PUSH_CODE_CONTACT_DELETED,
    PUSH_CODE_CONTACTS_FULL,
    PUSH_CODE_CONTROL_DATA,
    PUSH_CODE_LOG_RX_DATA,
    PUSH_CODE_LOGIN_FAIL,
    PUSH_CODE_LOGIN_SUCCESS,
    PUSH_CODE_MSG_WAITING,
    PUSH_CODE_NEW_ADVERT,
    PUSH_CODE_PATH_DISCOVERY_RESPONSE,
    PUSH_CODE_PATH_UPDATED,
    PUSH_CODE_RAW_DATA,
    PUSH_CODE_SEND_CONFIRMED,
    PUSH_CODE_STATUS_RESPONSE,
    PUSH_CODE_TELEMETRY_RESPONSE,
    PUSH_CODE_TRACE_DATA,
    RESP_CODE_ADVERT_PATH,
    RESP_CODE_AUTOADD_CONFIG,
    RESP_CODE_BATT_AND_STORAGE,
    RESP_CODE_CHANNEL_INFO,
    RESP_CODE_CHANNEL_MSG_RECV,
    RESP_CODE_CHANNEL_MSG_RECV_V3,
    RESP_CODE_CONTACT,
    RESP_CODE_CONTACT_MSG_RECV,
    RESP_CODE_CONTACT_MSG_RECV_V3,
    RESP_CODE_CONTACTS_START,
    RESP_CODE_CURR_TIME,
    RESP_CODE_CUSTOM_VARS,
    RESP_CODE_DEVICE_INFO,
    RESP_CODE_DISABLED,
    RESP_CODE_END_OF_CONTACTS,
    RESP_CODE_ERR,
    RESP_CODE_EXPORT_CONTACT,
    RESP_CODE_NO_MORE_MESSAGES,
    RESP_CODE_OK,
    RESP_CODE_PRIVATE_KEY,
    RESP_CODE_SELF_INFO,
    RESP_CODE_SENT,
    RESP_CODE_SIGN_START,
    RESP_CODE_SIGNATURE,
    RESP_CODE_STATS,
    RESP_CODE_TUNING_PARAMS,
    STATS_TYPE_CORE,
    STATS_TYPE_PACKETS,
    STATS_TYPE_RADIO,
    TELEM_MODE_ALLOW_ALL,
    TELEM_MODE_ALLOW_FLAGS,
    TELEM_MODE_DENY,
    TXT_TYPE_CLI_DATA,
    TXT_TYPE_PLAIN,
    TXT_TYPE_SIGNED_PLAIN,
    BinaryReqType,
)
@@ -0,0 +1,178 @@
"""
Repeater-specific CompanionFrameServer with SQLite persistence.

Thin subclass of :class:`pymc_core.companion.frame_server.CompanionFrameServer`
that adds SQLite-backed message, contact, and channel persistence via a
``sqlite_handler`` dependency.
"""

from __future__ import annotations

import asyncio
import logging
from typing import Optional

from pymc_core.companion.constants import RESP_CODE_NO_MORE_MESSAGES
from pymc_core.companion.frame_server import CompanionFrameServer as _BaseFrameServer
from pymc_core.companion.models import QueuedMessage

logger = logging.getLogger("CompanionFrameServer")


class CompanionFrameServer(_BaseFrameServer):
    """Adds SQLite persistence for messages, contacts, and channels.

    Constructor signature is intentionally kept compatible with the
    previous monolithic implementation so ``main.py`` call-sites need
    zero changes.
    """

    def __init__(
        self,
        bridge,
        companion_hash: str,
        port: int = 5000,
        bind_address: str = "0.0.0.0",
        client_idle_timeout_sec: Optional[int] = 8 * 60 * 60,  # 8 hours
        sqlite_handler=None,
        local_hash: Optional[int] = None,
        stats_getter=None,
        control_handler=None,
    ):
        super().__init__(
            bridge=bridge,
            companion_hash=companion_hash,
            port=port,
            bind_address=bind_address,
            client_idle_timeout_sec=client_idle_timeout_sec,
            device_model="pyMC-Repeater-Companion",
            device_version=None,  # use FIRMWARE_VER_CODE from pyMC_core
            build_date="13 Feb 2026",
            local_hash=local_hash,
            stats_getter=stats_getter,
            control_handler=control_handler,
        )
        self.sqlite_handler = sqlite_handler

    # -----------------------------------------------------------------
    # Persistence hook overrides
    # -----------------------------------------------------------------

    async def _persist_companion_message(self, msg_dict: dict) -> None:
        """Persist message to SQLite and pop from bridge queue."""
        if not self.sqlite_handler:
            return
        await asyncio.to_thread(
            self.sqlite_handler.companion_push_message,
            self.companion_hash,
            msg_dict,
        )
        self.bridge.message_queue.pop_last()

    def _sync_next_from_persistence(self) -> Optional[QueuedMessage]:
        """Retrieve next message from SQLite when bridge queue is empty."""
        if not self.sqlite_handler:
            return None
        msg_dict = self.sqlite_handler.companion_pop_message(self.companion_hash)
        if not msg_dict:
            return None
        return QueuedMessage(
            sender_key=msg_dict.get("sender_key", b""),
            txt_type=msg_dict.get("txt_type", 0),
            timestamp=msg_dict.get("timestamp", 0),
            text=msg_dict.get("text", ""),
            is_channel=bool(msg_dict.get("is_channel", False)),
            channel_idx=msg_dict.get("channel_idx", 0),
            path_len=msg_dict.get("path_len", 0),
        )

    # -----------------------------------------------------------------
    # Non-blocking command overrides (keep event loop responsive)
    # -----------------------------------------------------------------

    async def _cmd_sync_next_message(self, data: bytes) -> None:
        """Sync next message; run persistence read in thread so SQLite does not block."""
        msg = self.bridge.sync_next_message()
        if msg is None:
            msg = await asyncio.to_thread(self._sync_next_from_persistence)
        if msg is None:
            self._write_frame(bytes([RESP_CODE_NO_MORE_MESSAGES]))
            return
        self._write_frame(self._build_message_frame(msg))

    @staticmethod
    def _contact_to_dict(c) -> dict:
        """Convert a Contact object to a persistence dict."""
        pk = c.public_key if isinstance(c.public_key, bytes) else bytes.fromhex(c.public_key)
        return {
            "pubkey": pk,
            "name": c.name,
            "adv_type": c.adv_type,
            "flags": c.flags,
            "out_path_len": c.out_path_len,
            "out_path": (
                c.out_path
                if isinstance(c.out_path, bytes)
                else (bytes.fromhex(c.out_path) if c.out_path else b"")
            ),
            "last_advert_timestamp": c.last_advert_timestamp,
            "lastmod": c.lastmod,
            "gps_lat": c.gps_lat,
            "gps_lon": c.gps_lon,
            "sync_since": c.sync_since,
        }

    async def _persist_contact(self, contact) -> None:
        """Upsert a single contact to SQLite (non-blocking)."""
        if not self.sqlite_handler:
            return
        contact_dict = self._contact_to_dict(contact)
        await asyncio.to_thread(
            self.sqlite_handler.companion_upsert_contact,
            self.companion_hash,
            contact_dict,
        )

    async def _save_contacts(self) -> None:
        """Persist all contacts to SQLite (non-blocking)."""
        if not self.sqlite_handler:
            return
        contacts = self.bridge.get_contacts()
        dicts = [self._contact_to_dict(c) for c in contacts]
        await asyncio.to_thread(
            self.sqlite_handler.companion_save_contacts,
            self.companion_hash,
            dicts,
        )

    async def _save_channels(self) -> None:
        """Persist channels to SQLite (non-blocking)."""
        if not self.sqlite_handler:
            return
        channels = []
        max_ch = getattr(getattr(self.bridge, "channels", None), "max_channels", 40)
        for idx in range(max_ch):
            ch = self.bridge.get_channel(idx)
            if ch is not None:
                channels.append(
                    {
                        "channel_idx": idx,
                        "name": ch.name,
                        "secret": ch.secret,
                    }
                )
        await asyncio.to_thread(
            self.sqlite_handler.companion_save_channels,
            self.companion_hash,
            channels,
        )
|
||||
async def stop(self) -> None:
|
||||
"""Persist contacts and channels before stopping (so they survive daemon restart)."""
|
||||
if self.sqlite_handler:
|
||||
try:
|
||||
await self._save_contacts()
|
||||
await self._save_channels()
|
||||
except Exception as e:
|
||||
logger.warning("Failed to persist contacts/channels on stop: %s", e)
|
||||
await super().stop()
|
||||
@@ -0,0 +1,190 @@
"""Resolve companion config rows by registration name, identity key, or public key prefix."""

from __future__ import annotations

import logging
from typing import Any, List, Optional, Set, Tuple

from repeater.companion.utils import normalize_companion_identity_key

logger = logging.getLogger(__name__)

# Minimum hex chars for identity_key / public_key prefix disambiguation (4 bytes)
_MIN_PREFIX_HEX_LEN = 8


def _companion_registration_name(entry: dict) -> str:
    n = entry.get("name")
    if n is None:
        return ""
    return str(n).strip()


def identity_key_bytes_from_config(identity_key: Any) -> Optional[bytes]:
    """Parse companion identity_key from YAML (str hex or raw bytes)."""
    if identity_key is None:
        return None
    if isinstance(identity_key, (bytes, bytearray, memoryview)):
        raw = bytes(identity_key)
        return raw if len(raw) in (32, 64) else None
    if isinstance(identity_key, str):
        try:
            raw = bytes.fromhex(normalize_companion_identity_key(identity_key))
        except ValueError:
            return None
        return raw if len(raw) in (32, 64) else None
    return None


def identity_key_hex_normalized(identity_key: Any) -> Optional[str]:
    """Lowercase hex string of the raw key bytes (64 or 128 chars), or None."""
    raw = identity_key_bytes_from_config(identity_key)
    if raw is None:
        return None
    return raw.hex().lower()


def derive_companion_public_key_hex(identity_key: Any) -> Optional[str]:
    """Return ed25519 public key hex for a companion seed, or None if invalid."""
    raw = identity_key_bytes_from_config(identity_key)
    if raw is None:
        return None
    try:
        from pymc_core import LocalIdentity

        identity = LocalIdentity(seed=raw)
        return identity.get_public_key().hex()
    except Exception as e:
        logger.debug("derive_companion_public_key_hex failed: %s", e)
        return None


def suggest_companion_name_from_pubkey(pubkey_hex: str, prefix_len: int = 8) -> str:
    """Stable default registration name: companion_<first prefix_len hex chars of pubkey>."""
    p = pubkey_hex.strip().lower()
    if p.startswith("0x"):
        p = p[2:]
    if len(p) < prefix_len:
        prefix = p
    else:
        prefix = p[:prefix_len]
    return f"companion_{prefix}"


def unique_suggested_name(
    pubkey_hex: str,
    existing_names: set,
    prefix_len: int = 8,
) -> str:
    """Like suggest_companion_name_from_pubkey but appends -2, -3, ... if the name collides."""
    base = suggest_companion_name_from_pubkey(pubkey_hex, prefix_len=prefix_len)
    if base not in existing_names:
        return base
    n = 2
    while f"{base}-{n}" in existing_names:
        n += 1
    return f"{base}-{n}"
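Taken together, the two naming helpers above yield deterministic, collision-free default registration names. A standalone sketch (the functions are re-inlined here so it runs outside the repo; the short-key branch is collapsed into a slice, which behaves identically because slicing clamps):

```python
def suggest_companion_name_from_pubkey(pubkey_hex: str, prefix_len: int = 8) -> str:
    # Normalize: strip whitespace, lowercase, drop optional 0x prefix
    p = pubkey_hex.strip().lower()
    if p.startswith("0x"):
        p = p[2:]
    return f"companion_{p[:prefix_len]}"

def unique_suggested_name(pubkey_hex: str, existing_names: set, prefix_len: int = 8) -> str:
    base = suggest_companion_name_from_pubkey(pubkey_hex, prefix_len=prefix_len)
    if base not in existing_names:
        return base
    n = 2
    while f"{base}-{n}" in existing_names:
        n += 1
    return f"{base}-{n}"

print(suggest_companion_name_from_pubkey("0xDEADBEEF00112233"))    # companion_deadbeef
print(unique_suggested_name("0xDEADBEEF00112233", {"companion_deadbeef"}))  # companion_deadbeef-2
```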
def find_companion_index(
    companions: List[dict],
    *,
    name: Optional[str] = None,
    identity_key: Optional[str] = None,
    public_key_prefix: Optional[str] = None,
) -> Tuple[Optional[int], Optional[str]]:
    """
    Find a single companion list index.

    Lookup priority when multiple fields are set:
    1) name (non-empty after strip)
    2) identity_key (full hex or unique prefix)
    3) public_key_prefix (unique prefix of derived public key hex)

    Returns (index, None) on success, or (None, error_message) on failure.
    """
    name_s = str(name).strip() if name is not None else ""
    idk = str(identity_key).strip() if identity_key is not None else ""
    pkp = str(public_key_prefix).strip() if public_key_prefix is not None else ""
    if pkp.lower().startswith("0x"):
        pkp = pkp[2:].strip()
    pkp = pkp.lower()

    if idk:
        idk = normalize_companion_identity_key(idk).lower()

    if name_s:
        matches = [i for i, c in enumerate(companions) if _companion_registration_name(c) == name_s]
        if len(matches) == 1:
            return matches[0], None
        if len(matches) == 0:
            return None, f"Companion '{name_s}' not found"
        return None, f"Multiple companions named '{name_s}'"

    if idk:
        if len(idk) < _MIN_PREFIX_HEX_LEN:
            return None, (
                f"identity_key lookup must be at least {_MIN_PREFIX_HEX_LEN} hex characters"
            )
        exact: List[int] = []
        prefix_matches: List[int] = []
        for i, c in enumerate(companions):
            h = identity_key_hex_normalized(c.get("identity_key"))
            if not h:
                continue
            if h == idk:
                exact.append(i)
            elif h.startswith(idk):
                prefix_matches.append(i)
        if len(exact) == 1:
            return exact[0], None
        if len(exact) > 1:
            return None, "Multiple companions match identity_key (ambiguous)"
        if len(prefix_matches) == 1:
            return prefix_matches[0], None
        if len(prefix_matches) == 0:
            return None, "No companion matches identity_key"
        return None, "Multiple companions match identity_key prefix (ambiguous)"

    if pkp:
        if len(pkp) < _MIN_PREFIX_HEX_LEN:
            return None, (
                f"public_key_prefix must be at least {_MIN_PREFIX_HEX_LEN} hex characters"
            )
        matches: List[int] = []
        for i, c in enumerate(companions):
            pub = derive_companion_public_key_hex(c.get("identity_key"))
            if pub and pub.lower().startswith(pkp):
                matches.append(i)
        if len(matches) == 1:
            return matches[0], None
        if len(matches) == 0:
            return None, "No companion matches public_key_prefix"
        return None, "Multiple companions match public_key_prefix (ambiguous)"

    return None, "Missing companion lookup: provide name, identity_key, or public_key_prefix"


def heal_companion_empty_names(companions: List[dict]) -> bool:
    """
    Assign companion_<pubkeyPrefix> names to entries with missing/blank registration names.
    Mutates companions in place. Returns True if any entry was updated.
    """
    names_in_use: Set[str] = set()
    for c in companions:
        n = _companion_registration_name(c)
        if n:
            names_in_use.add(n)
    changed = False
    for entry in companions:
        if _companion_registration_name(entry):
            continue
        pk = derive_companion_public_key_hex(entry.get("identity_key"))
        if not pk:
            logger.warning("Skipping companion name heal: invalid or missing identity_key")
            continue
        new_name = unique_suggested_name(pk, names_in_use)
        entry["name"] = new_name
        names_in_use.add(new_name)
        changed = True
    return changed
@@ -0,0 +1,25 @@
"""Shared utilities for Companion (e.g. validation for config sync)."""

_INVALID_NODE_NAME_CHARS = "\n\r\x00"


def normalize_companion_identity_key(identity_key: str) -> str:
    """Strip whitespace and remove optional 0x prefix so fromhex() is consistent across installs."""
    s = identity_key.strip()
    if s.lower().startswith("0x"):
        s = s[2:].strip()
    return s


def validate_companion_node_name(value: str) -> str:
    """Validate node_name for config sync: non-empty, max 31 bytes UTF-8, no control chars."""
    if not isinstance(value, str):
        raise ValueError("node_name must be a string")
    s = value.strip()
    if not s:
        raise ValueError("node_name cannot be empty")
    if len(s.encode("utf-8")) > 31:
        raise ValueError("node_name too long (max 31 bytes UTF-8)")
    if any(c in s for c in _INVALID_NODE_NAME_CHARS):
        raise ValueError("node_name contains invalid characters")
    return s
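Note that the 31-byte limit above is enforced on the encoded UTF-8 length, not the character count, so multi-byte names hit it sooner. A standalone copy of the validator, shown with both paths:

```python
_INVALID_NODE_NAME_CHARS = "\n\r\x00"

def validate_companion_node_name(value: str) -> str:
    # Same checks as the module above: string, non-empty, <= 31 UTF-8 bytes, no control chars
    if not isinstance(value, str):
        raise ValueError("node_name must be a string")
    s = value.strip()
    if not s:
        raise ValueError("node_name cannot be empty")
    if len(s.encode("utf-8")) > 31:
        raise ValueError("node_name too long (max 31 bytes UTF-8)")
    if any(c in s for c in _INVALID_NODE_NAME_CHARS):
        raise ValueError("node_name contains invalid characters")
    return s

print(validate_companion_node_name("  Repeater-1  "))  # Repeater-1
try:
    # 16 characters, but each "é" is 2 bytes in UTF-8 -> 32 bytes, over the limit
    validate_companion_node_name("é" * 16)
except ValueError as e:
    print(e)  # node_name too long (max 31 bytes UTF-8)
```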
+174 -54
@@ -11,13 +11,13 @@ logger = logging.getLogger("Config")
 
 def get_node_info(config: Dict[str, Any]) -> Dict[str, Any]:
     """
-    Extract node name, radio configuration, and LetsMesh settings from config.
+    Extract node name, radio configuration, and MQTT settings from config.
 
     Args:
         config: Configuration dictionary
 
     Returns:
-        Dictionary with node_name, radio_config, and LetsMesh configuration
+        Dictionary with node_name, radio_config, and MQTT configuration
     """
     node_name = config.get("repeater", {}).get("node_name", "PyMC-Repeater")
     radio_config = config.get("radio", {})
@@ -30,26 +30,17 @@ def get_node_info(config: Dict[str, Any]) -> Dict[str, Any]:
     radio_bw_khz = radio_bw / 1_000
     radio_config_str = f"{radio_freq_mhz},{radio_bw_khz},{radio_sf},{radio_cr}"
 
-    letsmesh_config = config.get("letsmesh", {})
-
-    from pymc_core.protocol.utils import PAYLOAD_TYPES
-
-    disallowed_types = letsmesh_config.get("disallowed_packet_types", [])
-    type_name_map = {name: code for code, name in PAYLOAD_TYPES.items()}
-
-    disallowed_hex = [type_name_map.get(name.upper(), None) for name in disallowed_types]
-    disallowed_hex = [val for val in disallowed_hex if val is not None]  # Filter out invalid names
+    # Handle getting the config from mqtt brokers, falling back to letsmesh if it doesn't exist
+    mqtt_config = config.get("mqtt_brokers", config.get("letsmesh", {}))
 
     return {
         "node_name": node_name,
         "radio_config": radio_config_str,
-        "iata_code": letsmesh_config.get("iata_code", "TEST"),
-        "broker_index": letsmesh_config.get("broker_index", 0),
-        "status_interval": letsmesh_config.get("status_interval", 60),
-        "model": letsmesh_config.get("model", "PyMC-Repeater"),
-        "disallowed_packet_types": disallowed_hex,
-        "email": letsmesh_config.get("email", ""),
-        "owner": letsmesh_config.get("owner", "")
+        "iata_code": mqtt_config.get("iata_code", "TEST"),
+        "status_interval": mqtt_config.get("status_interval", 60),
+        "model": mqtt_config.get("model", "PyMC-Repeater"),
+        "email": mqtt_config.get("email", ""),
+        "owner": mqtt_config.get("owner", ""),
     }
@@ -77,9 +68,42 @@ def load_config(config_path: Optional[str] = None) -> Dict[str, Any]:
     if "mesh" not in config:
         config["mesh"] = {}
 
-    # Only auto-generate identity_key if not provided
-    if "identity_key" not in config["mesh"]:
-        config["mesh"]["identity_key"] = _load_or_create_identity_key()
+    if "glass" not in config:
+        config["glass"] = {
+            "enabled": False,
+            "base_url": "http://localhost:8080",
+            "inform_interval_seconds": 30,
+            "request_timeout_seconds": 10,
+            "verify_tls": True,
+            "api_token": "",
+            "cert_store_dir": "/etc/pymc_repeater/glass",
+        }
+
+    # Ensure repeater.security exists with defaults for upgrades from older configs
+    if "repeater" not in config:
+        config["repeater"] = {}
+    if "security" not in config["repeater"]:
+        logger.warning(
+            "No 'security' section found under 'repeater' in config. "
+            "Adding defaults — please review and update passwords."
+        )
+        config["repeater"]["security"] = {
+            "max_clients": 1,
+            "admin_password": "admin123",
+            "guest_password": "guest123",
+            "allow_read_only": False,
+            "jwt_secret": "",
+            "jwt_expiry_minutes": 60,
+        }
+
+    # Only auto-generate identity_key if not provided under repeater section
+    if "identity_key" not in config["repeater"]:
+        # Check if identity_file is specified
+        identity_file = config["repeater"].get("identity_file")
+        if identity_file:
+            config["repeater"]["identity_key"] = _load_or_create_identity_key(path=identity_file)
+        else:
+            config["repeater"]["identity_key"] = _load_or_create_identity_key()
 
     if os.getenv("PYMC_REPEATER_LOG_LEVEL"):
         if "logging" not in config:
@@ -107,14 +131,21 @@ def save_config(config_data: Dict[str, Any], config_path: Optional[str] = None)
     # Create backup of existing config
     config_file = Path(config_path)
     if config_file.exists():
-        backup_path = config_file.with_suffix('.yaml.backup')
+        backup_path = config_file.with_suffix(".yaml.backup")
         config_file.rename(backup_path)
         logger.info(f"Created backup at {backup_path}")
 
-    # Save new config
-    with open(config_path, 'w') as f:
-        yaml.safe_dump(config_data, f, default_flow_style=False, sort_keys=False)
-
+    # Save new config (allow_unicode=True so emojis etc. are not escaped as \U0001F47E)
+    with open(config_path, "w", encoding="utf-8") as f:
+        yaml.safe_dump(
+            config_data,
+            f,
+            default_flow_style=False,
+            sort_keys=False,
+            allow_unicode=True,
+            width=1000000,
+        )
 
     logger.info(f"Saved configuration to {config_path}")
     return True
@@ -123,12 +154,12 @@ def save_config(config_data: Dict[str, Any], config_path: Optional[str] = None)
         return False
 
 
-def update_global_flood_policy(allow: bool, config_path: Optional[str] = None) -> bool:
+def update_unscoped_flood_policy(allow: bool, config_path: Optional[str] = None) -> bool:
     """
-    Update the global flood policy in the configuration.
+    Update the unscoped flood policy in the configuration.
 
     Args:
-        allow: True to allow flooding globally, False to deny
+        allow: True to allow unscoped flooding, False to deny
         config_path: Path to config file (uses default if None)
 
     Returns:
@@ -144,25 +175,31 @@ def update_global_flood_policy(allow: bool, config_path: Optional[str] = None) -
 
         # Set global flood policy
-        config["mesh"]["global_flood_allow"] = allow
+        config["mesh"]["unscoped_flood_allow"] = allow
 
         # Save updated config
         return save_config(config, config_path)
 
     except Exception as e:
-        logger.error(f"Failed to update global flood policy: {e}")
+        logger.error(f"Failed to update unscoped flood policy: {e}")
         return False
 
 
 def _load_or_create_identity_key(path: Optional[str] = None) -> bytes:
 
     if path is None:
-        # Follow XDG spec
-        xdg_config_home = os.environ.get("XDG_CONFIG_HOME")
-        if xdg_config_home:
-            config_dir = Path(xdg_config_home) / "pymc_repeater"
+        # Check system-wide location first (matches config.yaml location)
+        system_key_path = Path("/etc/pymc_repeater/identity.key")
+        if system_key_path.exists():
+            key_path = system_key_path
         else:
-            config_dir = Path.home() / ".config" / "pymc_repeater"
-        key_path = config_dir / "identity.key"
+            # Follow XDG spec
+            xdg_config_home = os.environ.get("XDG_CONFIG_HOME")
+            if xdg_config_home:
+                config_dir = Path(xdg_config_home) / "pymc_repeater"
+            else:
+                config_dir = Path.home() / ".config" / "pymc_repeater"
+            key_path = config_dir / "identity.key"
     else:
         key_path = Path(path)
@@ -173,8 +210,8 @@ def _load_or_create_identity_key(path: Optional[str] = None) -> bytes:
         with open(key_path, "rb") as f:
             encoded = f.read()
         key = base64.b64decode(encoded)
-        if len(key) != 32:
-            raise ValueError(f"Invalid key length: {len(key)}, expected 32")
+        if len(key) not in (32, 64):
+            raise ValueError(f"Invalid key length: {len(key)}, expected 32 or 64")
         logger.info(f"Loaded existing identity key from {key_path}")
         return key
     except Exception as e:
@@ -197,9 +234,20 @@ def _load_or_create_identity_key(path: Optional[str] = None) -> bytes:
 
 def get_radio_for_board(board_config: dict):
 
-    radio_type = board_config.get("radio_type", "sx1262").lower()
+    def _parse_int(value, *, default=None) -> int:
+        if value is None:
+            return default
+        if isinstance(value, int):
+            return value
+        if isinstance(value, str):
+            return int(value.strip().rstrip(','), 0)
+        raise ValueError(f"Invalid int value type: {type(value)}")
 
-    if radio_type == "sx1262":
+    radio_type = board_config.get("radio_type", "sx1262").lower().strip()
+    if radio_type == "kiss-modem":
+        radio_type = "kiss"
+
+    if radio_type in ("sx1262", "sx1262_ch341"):
         from pymc_core.hardware.sx1262_wrapper import SX1262Radio
 
         # Get radio and SPI configuration - all settings must be in config file
@@ -211,19 +259,37 @@ def get_radio_for_board(board_config: dict):
         if not radio_config:
             raise ValueError("Missing 'radio' section in configuration file")
 
-        # Build config with required fields - no defaults
+        # CH341 integration: swap SPI transport + GPIO backend to CH341
+        if radio_type == "sx1262_ch341":
+            ch341_cfg = board_config.get("ch341")
+            if not ch341_cfg:
+                raise ValueError("Missing 'ch341' section in configuration file")
+
+            from pymc_core.hardware.lora.LoRaRF.SX126x import set_spi_transport
+            from pymc_core.hardware.transports.ch341_spi_transport import CH341SPITransport
+
+            vid = _parse_int(ch341_cfg.get("vid"), default=0x1A86)
+            pid = _parse_int(ch341_cfg.get("pid"), default=0x5512)
+
+            # Create CH341 transport (also configures CH341 GPIO manager globally)
+            ch341_spi = CH341SPITransport(vid=vid, pid=pid, auto_setup_gpio=True)
+            set_spi_transport(ch341_spi)
+
         combined_config = {
-            "bus_id": spi_config["bus_id"],
-            "cs_id": spi_config["cs_id"],
-            "cs_pin": spi_config["cs_pin"],
-            "reset_pin": spi_config["reset_pin"],
-            "busy_pin": spi_config["busy_pin"],
-            "irq_pin": spi_config["irq_pin"],
-            "txen_pin": spi_config["txen_pin"],
-            "rxen_pin": spi_config["rxen_pin"],
-            "txled_pin": spi_config.get("txled_pin", -1),
-            "rxled_pin": spi_config.get("rxled_pin", -1),
+            "bus_id": _parse_int(spi_config["bus_id"]),
+            "cs_id": _parse_int(spi_config["cs_id"]),
+            "cs_pin": _parse_int(spi_config["cs_pin"]),
+            "reset_pin": _parse_int(spi_config["reset_pin"]),
+            "busy_pin": _parse_int(spi_config["busy_pin"]),
+            "irq_pin": _parse_int(spi_config["irq_pin"]),
+            "txen_pin": _parse_int(spi_config["txen_pin"]),
+            "rxen_pin": _parse_int(spi_config["rxen_pin"]),
+            "txled_pin": _parse_int(spi_config.get("txled_pin", -1), default=-1),
+            "rxled_pin": _parse_int(spi_config.get("rxled_pin", -1), default=-1),
+            "en_pin": _parse_int(spi_config.get("en_pin", -1), default=-1),
             "use_dio3_tcxo": spi_config.get("use_dio3_tcxo", False),
             "dio3_tcxo_voltage": float(spi_config.get("dio3_tcxo_voltage", 1.8)),
             "use_dio2_rf": spi_config.get("use_dio2_rf", False),
             "is_waveshare": spi_config.get("is_waveshare", False),
             "frequency": int(radio_config["frequency"]),
             "tx_power": radio_config["tx_power"],
@@ -234,6 +300,13 @@ def get_radio_for_board(board_config: dict):
             "sync_word": radio_config["sync_word"],
         }
 
+        # Add optional GPIO parameters if specified in config
+        # These won't be supported by older versions of pymc_core
+        if "gpio_chip" in spi_config:
+            combined_config["gpio_chip"] = _parse_int(spi_config["gpio_chip"], default=0)
+        if "use_gpiod_backend" in spi_config:
+            combined_config["use_gpiod_backend"] = spi_config["use_gpiod_backend"]
+
         radio = SX1262Radio.get_instance(**combined_config)
 
         if hasattr(radio, "_initialized") and not radio._initialized:
@@ -244,5 +317,52 @@ def get_radio_for_board(board_config: dict):
 
         return radio
 
-    else:
-        raise RuntimeError(f"Unknown radio type: {radio_type}. Supported: sx1262")
+    elif radio_type == "kiss":
+        try:
+            from pymc_core.hardware.kiss_modem_wrapper import KissModemWrapper
+        except ImportError:
+            try:
+                from pymc_core.hardware.kiss_serial_wrapper import (
+                    KissSerialWrapper as KissModemWrapper,
+                )
+            except ImportError:
+                raise RuntimeError(
+                    "KISS modem support requires pyMC_core with KISS support. "
+                    "Install your fork with: pip install -e /path/to/pyMC_core"
+                ) from None
+
+        kiss_config = board_config.get("kiss")
+        if not kiss_config:
+            raise ValueError("Missing 'kiss' section in configuration file for radio_type: kiss")
+
+        port = kiss_config.get("port")
+        if not port:
+            raise ValueError("Missing 'port' in 'kiss' section (e.g. /dev/ttyUSB0)")
+
+        baudrate = int(kiss_config.get("baud_rate", 115200))
+        radio_cfg = board_config.get("radio") or {}
+        radio_config = {
+            "frequency": int(radio_cfg.get("frequency", 869618000)),
+            "bandwidth": int(radio_cfg.get("bandwidth", 62500)),
+            "spreading_factor": int(radio_cfg.get("spreading_factor", 8)),
+            "coding_rate": int(radio_cfg.get("coding_rate", 8)),
+            "tx_power": int(radio_cfg.get("tx_power", 14)),
+        }
+        radio = KissModemWrapper(
+            port=port,
+            baudrate=baudrate,
+            radio_config=radio_config,
+            auto_configure=True,
+        )
+
+        if hasattr(radio, "begin"):
+            try:
+                radio.begin()
+            except Exception as e:
+                raise RuntimeError(f"Failed to initialize KISS modem: {e}") from e
+
+        return radio
+
+    raise RuntimeError(
+        f"Unknown radio type: {radio_type}. Supported: sx1262, sx1262_ch341, kiss (or kiss-modem)"
+    )
@@ -0,0 +1,240 @@
import logging
import os
import yaml
from typing import Optional, Dict, Any, List

logger = logging.getLogger("ConfigManager")


class ConfigManager:
    """Manages configuration persistence and live updates to the daemon."""

    def __init__(self, config_path: str, config: dict, daemon_instance=None):
        """
        Initialize ConfigManager.

        Args:
            config_path: Path to the YAML config file
            config: Reference to the config dictionary
            daemon_instance: Optional reference to the daemon for live updates
        """
        self.config_path = config_path
        self.config = config
        self.daemon = daemon_instance

    def save_to_file(self) -> bool:
        """
        Save current config to YAML file.

        Returns:
            True if successful, False otherwise
        """
        try:
            os.makedirs(os.path.dirname(self.config_path), exist_ok=True)
            with open(self.config_path, 'w') as f:
                # Use safe_dump with explicit width to prevent line wrapping
                # A very large width prevents truncation of long strings like identity keys
                yaml.safe_dump(
                    self.config,
                    f,
                    default_flow_style=False,
                    indent=2,
                    width=1000000,  # Very large width to prevent any line wrapping
                    sort_keys=False,
                    allow_unicode=True
                )
            logger.info(f"Configuration saved to {self.config_path}")
            return True
        except Exception as e:
            logger.error(f"Failed to save config to {self.config_path}: {e}", exc_info=True)
            return False

    def live_update_daemon(self, sections: Optional[List[str]] = None) -> bool:
        """
        Apply configuration changes to the running daemon's in-memory config.

        Args:
            sections: List of config sections to update (e.g., ['repeater', 'delays']).
                If None, updates all common sections.

        Returns:
            True if live update was successful, False otherwise
        """
        if not self.daemon or not hasattr(self.daemon, 'config'):
            logger.warning("Daemon not available for live update")
            return False

        try:
            daemon_config = self.daemon.config

            # Default sections to update if not specified
            if sections is None:
                sections = ['repeater', 'delays', 'radio', 'acl', 'identities', 'glass']

            # Update each section
            for section in sections:
                if section in self.config:
                    if section not in daemon_config:
                        daemon_config[section] = {}

                    # Merge the section into the daemon's config (shallow update)
                    if isinstance(self.config[section], dict):
                        daemon_config[section].update(self.config[section])
                    else:
                        daemon_config[section] = self.config[section]

                    logger.debug(f"Live updated daemon config section: {section}")

            logger.info(f"Live updated daemon config sections: {', '.join(sections)}")

            # Also reload runtime config in RepeaterHandler if delays or repeater sections changed
            if self.daemon and hasattr(self.daemon, 'repeater_handler'):
                if any(s in ['delays', 'repeater'] for s in sections):
                    if hasattr(self.daemon.repeater_handler, 'reload_runtime_config'):
                        self.daemon.repeater_handler.reload_runtime_config()
                        logger.info("Reloaded RepeaterHandler runtime config")

            # Also reload advert_helper config if repeater section changed
            if self.daemon and hasattr(self.daemon, 'advert_helper') and self.daemon.advert_helper:
                if 'repeater' in sections:
                    if hasattr(self.daemon.advert_helper, 'reload_config'):
                        self.daemon.advert_helper.reload_config()
                        logger.info("Reloaded AdvertHelper config")

            # Re-apply dispatcher path hash mode when mesh section changed
            if 'mesh' in sections and self.daemon and hasattr(self.daemon, 'dispatcher'):
                mesh_cfg = self.daemon.config.get("mesh", {})
                path_hash_mode = mesh_cfg.get("path_hash_mode", 0)
                if path_hash_mode not in (0, 1, 2):
                    logger.warning(
                        f"Invalid mesh.path_hash_mode={path_hash_mode}, must be 0/1/2; using 0"
                    )
                    path_hash_mode = 0
                self.daemon.dispatcher.set_default_path_hash_mode(path_hash_mode)
                logger.info(f"Reloaded path hash mode: mesh.path_hash_mode={path_hash_mode}")

            return True

        except Exception as e:
            logger.error(f"Failed to live update daemon config: {e}", exc_info=True)
            return False

    def update_and_save(self,
                        updates: Dict[str, Any],
                        live_update: bool = True,
                        live_update_sections: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Apply updates to config, save to file, and optionally live update daemon.

        This is the main method that should be used by both mesh_cli and api_endpoints.

        Args:
            updates: Dictionary of config updates in nested format.
                Example: {"repeater": {"node_name": "NewName"}, "delays": {"tx_delay_factor": 1.5}}
            live_update: Whether to apply changes to running daemon immediately
            live_update_sections: Specific sections to live update. If None, auto-detects from updates.

        Returns:
            Dict with keys:
            - success: bool - Whether operation succeeded
            - saved: bool - Whether config was saved to file
            - live_updated: bool - Whether daemon was live updated
            - error: str (optional) - Error message if failed
        """
        result = {
            "success": False,
            "saved": False,
            "live_updated": False
        }

        try:
            # Apply updates to config
            for section, values in updates.items():
                if section not in self.config:
                    self.config[section] = {}

                if isinstance(values, dict):
                    self.config[section].update(values)
                else:
                    self.config[section] = values

            # Save to file
            result["saved"] = self.save_to_file()

            if not result["saved"]:
                result["error"] = "Failed to save config to file"
                return result

            # Live update daemon if requested
            if live_update:
                # Auto-detect sections if not specified
                if live_update_sections is None:
                    live_update_sections = list(updates.keys())

                result["live_updated"] = self.live_update_daemon(live_update_sections)

            result["success"] = result["saved"]
            return result

        except Exception as e:
            logger.error(f"Error in update_and_save: {e}", exc_info=True)
            result["error"] = str(e)
            return result

    def update_nested(self, path: str, value: Any, live_update: bool = True) -> Dict[str, Any]:
        """
        Update a nested config value using dot notation.

        Convenience method for simple updates like "repeater.node_name" = "NewName"

        Args:
            path: Dot-separated path to config value (e.g., "repeater.node_name")
            value: Value to set
            live_update: Whether to apply changes to running daemon

        Returns:
            Result dict from update_and_save
        """
        parts = path.split('.')

        if len(parts) == 1:
            # Top-level key
            updates = {parts[0]: value}
        elif len(parts) == 2:
            # Nested one level (most common case)
            updates = {parts[0]: {parts[1]: value}}
        else:
            # Build nested dict for deeper paths
            updates = {}
            current = updates
            for part in parts[:-1]:
                current[part] = {}
                current = current[part]
            current[parts[-1]] = value

        # Determine which section to live update
        section = parts[0]

        return self.update_and_save(
            updates=updates,
            live_update=live_update,
            live_update_sections=[section] if live_update else None
        )
def get_status(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get status information about the ConfigManager.
|
||||
|
||||
Returns:
|
||||
Dict with config file path, existence, daemon availability
|
||||
"""
|
||||
return {
|
||||
"config_path": self.config_path,
|
||||
"config_exists": os.path.exists(self.config_path),
|
||||
"daemon_available": self.daemon is not None and hasattr(self.daemon, 'config'),
|
||||
"config_sections": list(self.config.keys()) if self.config else []
|
||||
}
|
||||
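A standalone sketch of the dot-notation expansion performed by `update_nested` above, shown outside the class (the `expand_path` name is introduced here for illustration only): a path like `"repeater.node_name"` is expanded into the nested dict that `update_and_save` expects.

```python
# Sketch of the dot-notation-to-nested-dict expansion used by
# update_nested; expand_path is a hypothetical helper name.
def expand_path(path: str, value):
    parts = path.split(".")
    updates = current = {}
    # Walk every segment but the last, creating nested dicts as we go
    for part in parts[:-1]:
        current[part] = {}
        current = current[part]
    current[parts[-1]] = value
    return updates

print(expand_path("repeater.node_name", "NewName"))
# → {'repeater': {'node_name': 'NewName'}}
```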

@@ -1,6 +1,5 @@
from .glass_handler import GlassHandler
from .rrdtool_handler import RRDToolHandler
from .sqlite_handler import SQLiteHandler
from .rrdtool_handler import RRDToolHandler
from .mqtt_handler import MQTTHandler
from .storage_collector import StorageCollector

__all__ = ['SQLiteHandler', 'RRDToolHandler', 'MQTTHandler', 'StorageCollector']
__all__ = ["SQLiteHandler", "RRDToolHandler", "StorageCollector", "GlassHandler"]

@@ -0,0 +1,957 @@
import asyncio
import hashlib
import json
import logging
import os
import ssl
import time
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
from urllib.parse import urlparse
from urllib import error, request

import psutil
try:
    import paho.mqtt.client as mqtt
except ImportError:
    mqtt = None

from repeater import __version__
from repeater.service_utils import restart_service

logger = logging.getLogger("GlassHandler")
_SENSITIVE_KEY_MARKERS = (
    "password",
    "passphrase",
    "secret",
    "token",
    "private_key",
    "identity_key",
    "client_key",
    "api_key",
)
_SENSITIVE_KEY_EXCEPTIONS = ("pubkey", "public_key")


class GlassHandler:
    def __init__(self, config: dict, daemon_instance=None, config_manager=None):
        self.config = config
        self.daemon_instance = daemon_instance
        self.config_manager = config_manager

        self.enabled = False
        self.base_url = "http://localhost:8080"
        self.request_timeout_seconds = 10
        self.verify_tls = True
        self.api_token = ""
        self.inform_interval_seconds = 30
        self.cert_store_dir = "/etc/pymc_repeater/glass"
        self._cert_expires_at: Optional[str] = None
        self.mqtt_enabled = False
        self.mqtt_broker_host = "localhost"
        self.mqtt_broker_port = 1883
        self.mqtt_base_topic = "glass"
        self.mqtt_tls_enabled = False
        self.mqtt_username: Optional[str] = None
        self.mqtt_password: Optional[str] = None
        self.client_cert_path: Optional[str] = None
        self.client_key_path: Optional[str] = None
        self.ca_cert_path: Optional[str] = None
        self._mqtt_client = None
        self._mqtt_ready = False
        self._mqtt_runtime_signature: Optional[
            Tuple[
                str,
                int,
                str,
                bool,
                bool,
                Optional[str],
                Optional[str],
                Optional[str],
                Optional[str],
                Optional[str],
            ]
        ] = None
        self._managed_settings_filename = "managed.json"

        self._task: Optional[asyncio.Task] = None
        self._stop_event: Optional[asyncio.Event] = None
        self._pending_command_results: List[Dict[str, Any]] = []
        self._pending_lock = asyncio.Lock()

        self._reload_runtime_settings()

    async def start(self) -> None:
        self._reload_runtime_settings()
        if not self.enabled:
            logger.info("Glass integration disabled")
            self._close_mqtt_publisher()
            return

        if self._task and not self._task.done():
            return
        self._sync_mqtt_publisher()

        self._stop_event = asyncio.Event()
        self._task = asyncio.create_task(self._run_loop(), name="glass-inform-loop")
        logger.info(
            "Glass integration started (base_url=%s, inform_interval=%ss)",
            self.base_url,
            self.inform_interval_seconds,
        )

    async def stop(self) -> None:
        if self._task:
            if self._stop_event:
                self._stop_event.set()

            try:
                await self._task
            except Exception as exc:
                logger.debug("Glass task stop ignored exception: %s", exc)
            finally:
                self._task = None
                self._stop_event = None

        self._close_mqtt_publisher()

    def _reload_runtime_settings(self) -> None:
        glass_cfg = self.config.get("glass", {})
        self.enabled = bool(glass_cfg.get("enabled", False))

        base_url = str(glass_cfg.get("base_url", "http://localhost:8080")).strip()
        self.base_url = base_url.rstrip("/") if base_url else "http://localhost:8080"

        self.request_timeout_seconds = max(3, int(glass_cfg.get("request_timeout_seconds", 10)))
        self.verify_tls = bool(glass_cfg.get("verify_tls", True))
        self.api_token = str(glass_cfg.get("api_token", "") or "").strip()
        self.inform_interval_seconds = self._clamp_interval(
            int(glass_cfg.get("inform_interval_seconds", self.inform_interval_seconds))
        )
        self.cert_store_dir = str(
            glass_cfg.get("cert_store_dir", "/etc/pymc_repeater/glass") or "/etc/pymc_repeater/glass"
        )
        self.client_cert_path = (
            str(glass_cfg.get("client_cert_path")).strip()
            if glass_cfg.get("client_cert_path")
            else None
        )
        self.client_key_path = (
            str(glass_cfg.get("client_key_path")).strip()
            if glass_cfg.get("client_key_path")
            else None
        )
        self.ca_cert_path = (
            str(glass_cfg.get("ca_cert_path")).strip()
            if glass_cfg.get("ca_cert_path")
            else None
        )
        managed_cfg = self._load_managed_settings()
        parsed_base_url = urlparse(self.base_url)
        default_host = parsed_base_url.hostname or "localhost"

        self.mqtt_enabled = bool(managed_cfg.get("mqtt_enabled", False))
        host_value = managed_cfg.get("mqtt_broker_host", default_host)
        self.mqtt_broker_host = str(host_value or default_host).strip() or default_host
        try:
            self.mqtt_broker_port = max(1, int(managed_cfg.get("mqtt_broker_port", 1883)))
        except (TypeError, ValueError):
            self.mqtt_broker_port = 1883
        topic_value = managed_cfg.get("mqtt_base_topic", "glass")
        self.mqtt_base_topic = str(topic_value or "glass").strip("/")
        self.mqtt_tls_enabled = bool(managed_cfg.get("mqtt_tls_enabled", False))
        username = managed_cfg.get("mqtt_username")
        password = managed_cfg.get("mqtt_password")
        self.mqtt_username = str(username).strip() if isinstance(username, str) and username else None
        self.mqtt_password = str(password) if isinstance(password, str) and password else None

    def _managed_settings_path(self) -> Path:
        return Path(self.cert_store_dir) / self._managed_settings_filename

    def _load_managed_settings(self) -> Dict[str, Any]:
        path = self._managed_settings_path()
        if not path.exists():
            return {}
        try:
            raw = json.loads(path.read_text(encoding="utf-8"))
        except Exception as exc:
            logger.warning("Invalid Glass managed settings file at %s: %s", path, exc)
            return {}
        if not isinstance(raw, dict):
            logger.warning("Ignoring non-object Glass managed settings file at %s", path)
            return {}
        return raw

    def _save_managed_settings(self, updates: Dict[str, Any], *, replace: bool) -> Tuple[bool, str]:
        if not isinstance(updates, dict):
            return False, "glass_managed must be an object"

        path = self._managed_settings_path()
        path.parent.mkdir(parents=True, exist_ok=True)
        current = {} if replace else self._load_managed_settings()
        if not isinstance(current, dict):
            current = {}
        merged = dict(current)
        merged.update(updates)
        try:
            path.write_text(
                json.dumps(merged, indent=2, sort_keys=True),
                encoding="utf-8",
            )
            os.chmod(path, 0o600)
            return True, "Managed settings updated"
        except Exception as exc:
            return False, f"Failed writing managed settings: {exc}"

    async def _run_loop(self) -> None:
        while self._stop_event and not self._stop_event.is_set():
            self._reload_runtime_settings()
            self._sync_mqtt_publisher()
            try:
                interval = await self._inform_once()
            except Exception as exc:
                logger.warning("Glass inform failed: %s", exc)
                interval = self.inform_interval_seconds

            wait_seconds = self._clamp_interval(interval)
            if not self._stop_event:
                break
            try:
                await asyncio.wait_for(self._stop_event.wait(), timeout=wait_seconds)
            except asyncio.TimeoutError:
                continue

    async def _inform_once(self) -> int:
        self._reload_runtime_settings()
        if not self.enabled:
            return self.inform_interval_seconds

        payload = await self._build_inform_payload()
        response = await self._post_inform(payload)

        if payload.get("command_results"):
            async with self._pending_lock:
                self._pending_command_results = []

        response_type = str(response.get("type", "noop"))
        response_interval = response.get("interval")

        if response_type == "command":
            await self._handle_command_response(response)
        elif response_type == "config_update":
            ok, message = self._apply_config_update(
                response.get("config", {}),
                str(response.get("merge_mode", "patch")),
            )
            if ok:
                logger.info("Applied Glass config update")
            else:
                logger.warning("Failed to apply Glass config update: %s", message)
        elif response_type == "cert_renewal":
            ok, message = self._apply_cert_renewal(response)
            if ok:
                logger.info("Applied Glass certificate renewal")
            else:
                logger.warning("Failed to apply Glass certificate renewal: %s", message)
        elif response_type == "upgrade":
            logger.warning("Glass upgrade action received but not implemented on repeater")
        elif response_type != "noop":
            logger.warning("Unknown Glass response type: %s", response_type)

        if isinstance(response_interval, int):
            self.inform_interval_seconds = self._clamp_interval(response_interval)
        return self.inform_interval_seconds

    async def _build_inform_payload(self) -> Dict[str, Any]:
        if not self.daemon_instance or not getattr(self.daemon_instance, "local_identity", None):
            raise RuntimeError("Local identity not available for Glass inform")

        stats = self.daemon_instance.get_stats() if self.daemon_instance else {}
        local_identity = self.daemon_instance.local_identity
        public_key = bytes(local_identity.get_public_key()).hex()
        node_name = self.config.get("repeater", {}).get("node_name", "unknown-repeater")

        uptime_seconds = int(stats.get("uptime_seconds", 0))
        if uptime_seconds <= 0:
            repeater_handler = getattr(self.daemon_instance, "repeater_handler", None)
            if repeater_handler and getattr(repeater_handler, "start_time", None):
                uptime_seconds = max(0, int(time.time() - repeater_handler.start_time))

        tx_total = int(stats.get("sent_flood_count", 0)) + int(stats.get("sent_direct_count", 0))
        if tx_total <= 0:
            tx_total = int(stats.get("forwarded_count", 0))

        command_results = await self._get_pending_command_results()
        settings_snapshot = self._build_settings_snapshot()
        location = self._extract_location_from_settings(settings_snapshot)

        return {
            "type": "inform",
            "version": 1,
            "node_name": node_name,
            "pubkey": f"0x{public_key}",
            "software_version": __version__,
            "state": self.config.get("repeater", {}).get("mode", "forward"),
            "location": location,
            "uptime_seconds": uptime_seconds,
            "config_hash": self._compute_config_hash(self.config),
            "cert_expires_at": self._cert_expires_at,
            "system": self._collect_system_stats(),
            "radio": {
                "frequency": int(self.config.get("radio", {}).get("frequency", 0)),
                "spreading_factor": int(self.config.get("radio", {}).get("spreading_factor", 7)),
                "bandwidth": int(self.config.get("radio", {}).get("bandwidth", 0)),
                "tx_power": int(self.config.get("radio", {}).get("tx_power", 0)),
                "noise_floor_dbm": stats.get("noise_floor_dbm"),
                "mode": self.config.get("repeater", {}).get("mode", "forward"),
            },
            "counters": {
                "rx_total": int(stats.get("rx_count", 0)),
                "tx_total": max(0, tx_total),
                "forwarded": int(stats.get("forwarded_count", 0)),
                "dropped": int(stats.get("dropped_count", 0)),
                "duplicates": int(stats.get("flood_dup_count", 0))
                + int(stats.get("direct_dup_count", 0)),
                "airtime_percent": float(stats.get("utilization_percent", 0.0)),
            },
            "settings": settings_snapshot,
            "command_results": command_results,
        }

    def _build_settings_snapshot(self) -> Dict[str, Any]:
        normalized = self._normalize_for_hash(self.config)
        sanitized = self._sanitize_settings_for_export(normalized)
        if isinstance(sanitized, dict):
            return sanitized
        return {}

    def _sanitize_settings_for_export(self, value: Any, key_name: Optional[str] = None) -> Any:
        if isinstance(value, dict):
            output: Dict[str, Any] = {}
            for child_key, child_value in value.items():
                if self._is_sensitive_key(child_key):
                    output[child_key] = "<redacted>"
                    continue
                output[child_key] = self._sanitize_settings_for_export(child_value, child_key)
            return output
        if isinstance(value, list):
            return [self._sanitize_settings_for_export(item, key_name) for item in value]
        return value

    @staticmethod
    def _is_sensitive_key(key: str) -> bool:
        lowered = str(key).lower()
        if any(exception in lowered for exception in _SENSITIVE_KEY_EXCEPTIONS):
            return False
        return any(marker in lowered for marker in _SENSITIVE_KEY_MARKERS)
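A standalone sketch of the marker-based redaction that `_sanitize_settings_for_export` and `_is_sensitive_key` perform above, using a reduced marker set for brevity: any key containing a sensitive substring is replaced with `<redacted>` recursively, while exception substrings like `public_key` pass through.

```python
# Illustration-only reimplementation of the sensitive-key redaction;
# the real handler uses the fuller _SENSITIVE_KEY_MARKERS tuple.
MARKERS = ("password", "secret", "token", "private_key", "api_key")
EXCEPTIONS = ("pubkey", "public_key")

def is_sensitive(key: str) -> bool:
    lowered = key.lower()
    if any(exc in lowered for exc in EXCEPTIONS):
        return False  # public keys are safe to export
    return any(m in lowered for m in MARKERS)

def sanitize(value):
    # Recurse into dicts and lists, redacting values under sensitive keys
    if isinstance(value, dict):
        return {k: "<redacted>" if is_sensitive(k) else sanitize(v)
                for k, v in value.items()}
    if isinstance(value, list):
        return [sanitize(item) for item in value]
    return value
```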

    @staticmethod
    def _normalize_location(value: Any) -> Optional[str]:
        if isinstance(value, str):
            text = value.strip()
            if not text:
                return None
            parts = [part.strip() for part in text.split(",")]
            if len(parts) != 2:
                return None
            try:
                lat = float(parts[0])
                lng = float(parts[1])
            except ValueError:
                return None
        elif isinstance(value, dict):
            lat = value.get("lat", value.get("latitude"))
            lng = value.get("lng", value.get("longitude"))
            try:
                if lat is None or lng is None:
                    return None
                lat = float(lat)
                lng = float(lng)
            except (TypeError, ValueError):
                return None
        elif isinstance(value, (list, tuple)) and len(value) == 2:
            try:
                lat = float(value[0])
                lng = float(value[1])
            except (TypeError, ValueError):
                return None
        else:
            return None

        if lat < -90 or lat > 90 or lng < -180 or lng > 180:
            return None
        return f"{lat:.6f},{lng:.6f}"

    def _extract_location_from_settings(self, settings: Dict[str, Any]) -> Optional[str]:
        repeater_settings = settings.get("repeater")
        repeater_dict = repeater_settings if isinstance(repeater_settings, dict) else {}
        candidates = [
            settings.get("location"),
            repeater_dict.get("location"),
            settings.get("gps"),
            repeater_dict.get("gps"),
            {
                "lat": repeater_dict.get("latitude"),
                "lng": repeater_dict.get("longitude"),
            },
        ]
        for candidate in candidates:
            location = self._normalize_location(candidate)
            if location:
                return location
        return None
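A standalone sketch of the coordinate normalisation that `_normalize_location` implements above: strings, dicts, and 2-element sequences all collapse to a `"lat,lng"` string with six decimal places, with `None` returned for malformed or out-of-range input (this simplified version omits the `latitude`/`longitude` key aliases the real method also accepts).

```python
# Simplified illustration of the handler's location normalisation.
def normalize_location(value):
    if isinstance(value, str):
        parts = [p.strip() for p in value.split(",")]
        if len(parts) != 2:
            return None
        try:
            lat, lng = float(parts[0]), float(parts[1])
        except ValueError:
            return None
    elif isinstance(value, dict):
        lat, lng = value.get("lat"), value.get("lng")
        if lat is None or lng is None:
            return None
        lat, lng = float(lat), float(lng)
    elif isinstance(value, (list, tuple)) and len(value) == 2:
        lat, lng = float(value[0]), float(value[1])
    else:
        return None
    # Reject coordinates outside the valid WGS84 range
    if not (-90 <= lat <= 90 and -180 <= lng <= 180):
        return None
    return f"{lat:.6f},{lng:.6f}"
```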

    def _collect_system_stats(self) -> Dict[str, Any]:
        temperature_c = None
        try:
            temperatures = psutil.sensors_temperatures() if hasattr(psutil, "sensors_temperatures") else {}
            if temperatures:
                for values in temperatures.values():
                    if values:
                        temperature_c = values[0].current
                        break
        except Exception:
            temperature_c = None

        load_avg_1m = None
        try:
            if hasattr(os, "getloadavg"):
                load_avg_1m = float(os.getloadavg()[0])
        except Exception:
            load_avg_1m = None

        return {
            "cpu_percent": float(psutil.cpu_percent(interval=None)),
            "memory_percent": float(psutil.virtual_memory().percent),
            "disk_percent": float(psutil.disk_usage("/").percent),
            "temperature_c": temperature_c,
            "load_avg_1m": load_avg_1m,
        }

    async def _post_inform(self, payload: Dict[str, Any]) -> Dict[str, Any]:
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, self._post_inform_sync, payload)

    def _post_inform_sync(self, payload: Dict[str, Any]) -> Dict[str, Any]:
        url = f"{self.base_url}/inform"
        headers = {"Content-Type": "application/json"}
        if self.api_token:
            headers["Authorization"] = f"Bearer {self.api_token}"

        body = json.dumps(payload).encode("utf-8")
        req = request.Request(url=url, data=body, method="POST", headers=headers)
        ssl_context = self._build_ssl_context(url)

        try:
            with request.urlopen(
                req,
                timeout=self.request_timeout_seconds,
                context=ssl_context,
            ) as response:
                response_bytes = response.read()
        except error.HTTPError as exc:
            details = ""
            try:
                details = exc.read().decode("utf-8")
            except Exception:
                details = str(exc)
            raise RuntimeError(f"HTTP {exc.code}: {details}") from exc
        except error.URLError as exc:
            raise RuntimeError(f"Connection error: {exc}") from exc

        if not response_bytes:
            return {"type": "noop", "interval": self.inform_interval_seconds}

        try:
            response_payload = json.loads(response_bytes.decode("utf-8"))
        except Exception as exc:
            raise RuntimeError("Invalid JSON response from Glass backend") from exc

        if not isinstance(response_payload, dict):
            raise RuntimeError("Invalid response payload from Glass backend")
        return response_payload

    def _build_ssl_context(self, url: str) -> Optional[ssl.SSLContext]:
        if not str(url).startswith("https"):
            return None

        if self.verify_tls:
            if self.ca_cert_path:
                ca_path = self._require_ssl_file(self.ca_cert_path, "ca_cert_path")
                context = ssl.create_default_context(cafile=ca_path)
            else:
                context = ssl.create_default_context()
        else:
            context = ssl._create_unverified_context()

        if self.client_cert_path or self.client_key_path:
            cert_path = self._require_ssl_file(self.client_cert_path, "client_cert_path")
            key_path = self._require_ssl_file(self.client_key_path, "client_key_path")
            context.load_cert_chain(certfile=cert_path, keyfile=key_path)

        return context

    @staticmethod
    def _require_ssl_file(path_value: Optional[str], field_name: str) -> str:
        if not path_value or not str(path_value).strip():
            raise RuntimeError(f"Missing {field_name} for Glass TLS configuration")
        normalized = str(path_value).strip()
        if not Path(normalized).exists():
            raise RuntimeError(f"Configured {field_name} does not exist: {normalized}")
        return normalized

    async def _handle_command_response(self, response: Dict[str, Any]) -> None:
        command_id = str(response.get("command_id", "")).strip()
        action = str(response.get("action", "")).strip()
        params = response.get("params", {})

        if not command_id or not action:
            logger.warning("Glass command response missing command_id or action")
            return

        success = False
        message = "Action failed"
        details: Optional[Dict[str, Any]] = None
        try:
            success, message, details = await self._execute_command_action(action, params)
        except Exception as exc:
            success = False
            message = f"Exception executing action: {exc}"
            details = None

        await self._queue_command_result(
            command_id=command_id,
            status="success" if success else "failed",
            message=message,
            details=details,
        )

    async def _execute_command_action(
        self,
        action: str,
        params: Any,
    ) -> Tuple[bool, str, Optional[Dict[str, Any]]]:
        params = params if isinstance(params, dict) else {}

        if action == "restart_service":
            success, message = restart_service()
            return success, message, None

        if action == "send_advert":
            if not self.daemon_instance or not hasattr(self.daemon_instance, "send_advert"):
                return False, "send_advert unavailable", None
            success = await self.daemon_instance.send_advert()
            return success, "Advert sent" if success else "Failed to send advert", None

        if action == "set_mode":
            mode = str(params.get("mode", "")).strip()
            if mode not in ("forward", "monitor", "no_tx"):
                return False, "Invalid mode parameter", None
            success, message = self._apply_config_update(
                {"repeater": {"mode": mode}},
                merge_mode="patch",
            )
            return success, message, None

        if action == "set_inform_interval":
            interval = params.get("interval_seconds", params.get("interval"))
            if not isinstance(interval, int):
                return False, "interval_seconds must be an integer", None
            interval = self._clamp_interval(interval)
            self.inform_interval_seconds = interval
            success, message = self._apply_config_update(
                {"glass": {"inform_interval_seconds": interval}},
                merge_mode="patch",
            )
            return success, message, None

        if action == "rotate_cert":
            return True, "Certificate rotation requested", None

        if action == "config_update":
            config_patch = params.get("config", params)
            merge_mode = str(params.get("merge_mode", "patch"))
            success, message = self._apply_config_update(config_patch, merge_mode=merge_mode)
            return success, message, None

        if action == "transport_keys_sync":
            success, message, details = self._apply_transport_keys_sync(params)
            return success, message, details

        if action == "set_radio":
            radio_values = params.get("radio", params)
            if not isinstance(radio_values, dict):
                return False, "radio settings must be an object", None
            success, message = self._apply_config_update({"radio": radio_values}, merge_mode="patch")
            return success, message, None

        if action == "run_diagnostic":
            stats = self.daemon_instance.get_stats() if self.daemon_instance else {}
            return True, (
                f"rx={int(stats.get('rx_count', 0))}, "
                f"tx={int(stats.get('forwarded_count', 0))}, "
                f"dropped={int(stats.get('dropped_count', 0))}"
            ), None

        if action == "export_config":
            normalized_config = self._normalize_for_hash(self.config)
            return (
                True,
                "Configuration exported",
                {
                    "config": normalized_config,
                    "config_hash": self._compute_config_hash(self.config),
                },
            )

        return False, f"Unsupported action: {action}", None

    def _apply_config_update(self, updates: Any, merge_mode: str = "patch") -> Tuple[bool, str]:
        if not isinstance(updates, dict) or not updates:
            return False, "Config update payload must be a non-empty object"
        merge_mode = merge_mode.lower().strip()

        if merge_mode not in ("patch", "replace"):
            return False, f"Unsupported merge_mode: {merge_mode}"
        updates_to_apply = dict(updates)
        managed_updates = updates_to_apply.pop("glass_managed", None)
        if managed_updates is not None:
            managed_ok, managed_message = self._save_managed_settings(
                managed_updates,
                replace=merge_mode == "replace",
            )
            if not managed_ok:
                return False, managed_message
            self._reload_runtime_settings()
            self._sync_mqtt_publisher()

        if not updates_to_apply:
            return True, "Managed settings updated"

        sections = list(updates_to_apply.keys())

        if merge_mode == "replace":
            for section, value in updates_to_apply.items():
                self.config[section] = value
            if self.config_manager:
                saved = self.config_manager.save_to_file()
                live_updated = self.config_manager.live_update_daemon(sections)
                return (
                    bool(saved and live_updated),
                    "Config replaced" if saved and live_updated else "Failed to persist replace update",
                )
            return True, "Config replaced"

        # patch mode
        if self.config_manager:
            result = self.config_manager.update_and_save(
                updates=updates_to_apply,
                live_update=True,
                live_update_sections=sections,
            )
            if result.get("success"):
                if "glass" in sections:
                    self._reload_runtime_settings()
                    self._sync_mqtt_publisher()
                return True, "Config patched"
            return False, str(result.get("error", "Failed to patch config"))
        self._deep_merge(self.config, updates_to_apply)
        if "glass" in sections:
            self._reload_runtime_settings()
            self._sync_mqtt_publisher()
        return True, "Config patched"

    def _get_sqlite_handler(self):
        if not self.daemon_instance:
            return None
        repeater_handler = getattr(self.daemon_instance, "repeater_handler", None)
        storage = getattr(repeater_handler, "storage", None)
        return getattr(storage, "sqlite_handler", None)

    def _apply_transport_keys_sync(
        self,
        params: Dict[str, Any],
    ) -> Tuple[bool, str, Optional[Dict[str, Any]]]:
        if not isinstance(params, dict):
            return False, "transport_keys_sync params must be an object", None
        entries = params.get("transport_keys")
        if not isinstance(entries, list):
            return False, "transport_keys_sync payload must include a transport_keys list", None
        sqlite_handler = self._get_sqlite_handler()
        if sqlite_handler is None:
            return False, "SQLite handler unavailable for transport key sync", None
        try:
            result = sqlite_handler.sync_transport_keys(entries)
        except Exception as exc:
            return False, f"Transport key sync failed: {exc}", None
        payload_hash = params.get("payload_hash")
        details: Dict[str, Any] = {
            "applied_nodes": int(result.get("applied_nodes", 0)),
            "generated_keys": int(result.get("generated_keys", 0)),
        }
        if isinstance(payload_hash, str) and payload_hash.strip():
            details["payload_hash"] = payload_hash
        return True, f"Applied transport key sync ({details['applied_nodes']} nodes)", details

    def _apply_cert_renewal(self, response: Dict[str, Any]) -> Tuple[bool, str]:
        client_cert = response.get("client_cert")
        client_key = response.get("client_key")
        ca_cert = response.get("ca_cert")

        if not all(isinstance(item, str) and item.strip() for item in (client_cert, client_key, ca_cert)):
            return False, "Missing certificate payload values"

        cert_dir = Path(self.cert_store_dir)
        cert_dir.mkdir(parents=True, exist_ok=True)

        client_cert_path = cert_dir / "glass-client.crt"
        client_key_path = cert_dir / "glass-client.key"
        ca_cert_path = cert_dir / "glass-ca.crt"

        client_cert_path.write_text(client_cert, encoding="utf-8")
        client_key_path.write_text(client_key, encoding="utf-8")
        ca_cert_path.write_text(ca_cert, encoding="utf-8")
        os.chmod(client_key_path, 0o600)

        return self._apply_config_update(
            {
                "glass": {
                    "client_cert_path": str(client_cert_path),
                    "client_key_path": str(client_key_path),
                    "ca_cert_path": str(ca_cert_path),
                }
            },
            merge_mode="patch",
        )

    async def _get_pending_command_results(self) -> List[Dict[str, Any]]:
        async with self._pending_lock:
            return list(self._pending_command_results)

    async def _queue_command_result(
        self,
        command_id: str,
        status: str,
        message: str,
        details: Optional[Dict[str, Any]] = None,
    ) -> None:
        completed_at = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
        result = {
            "command_id": command_id,
            "status": status,
            "message": message[:1024] if message else "",
            "completed_at": completed_at,
        }
        if details:
            result["details"] = details
        async with self._pending_lock:
            self._pending_command_results.append(result)

    def publish_telemetry(self, record_type: str, record: Dict[str, Any]) -> None:
        if not self.enabled or not self.mqtt_enabled or not self._mqtt_ready:
            return
        if not self._mqtt_client:
            return

        node_name = self.config.get("repeater", {}).get("node_name", "unknown-repeater")
        event_type = "event"
        event_name: Optional[str] = record_type
        if record_type in ("packet", "advert"):
            event_type = record_type
            event_name = None

        topic = self._mqtt_topic_for_record(node_name=node_name, record_type=record_type)
        timestamp = self._to_rfc3339_timestamp(record.get("timestamp"))
        payload = self._normalize_for_hash(record)

        envelope: Dict[str, Any] = {
            "version": 1,
            "type": event_type,
            "topic": topic,
            "node_name": node_name,
            "timestamp": timestamp,
            "payload": payload,
        }
        if event_type == "event" and event_name:
            envelope["event_name"] = event_name

        try:
            message = json.dumps(envelope, separators=(",", ":"), sort_keys=True, default=str)
            self._mqtt_client.publish(topic, message, qos=0, retain=False)
        except Exception as exc:
            logger.debug("Failed publishing Glass telemetry MQTT message: %s", exc)

    def _mqtt_topic_for_record(self, *, node_name: str, record_type: str) -> str:
        base = self.mqtt_base_topic.strip("/") or "glass"
        if record_type in ("packet", "advert"):
            return f"{base}/{node_name}/{record_type}"
        return f"{base}/{node_name}/event/{record_type}"
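A standalone sketch of the topic scheme implemented by `_mqtt_topic_for_record` above: `packet` and `advert` records publish under `<base>/<node>/<type>`, while every other record type lands under `<base>/<node>/event/<type>` (the `topic_for` name is introduced here for illustration only).

```python
# Illustration-only copy of the handler's MQTT topic scheme.
def topic_for(base: str, node_name: str, record_type: str) -> str:
    base = base.strip("/") or "glass"  # fall back to the default base topic
    if record_type in ("packet", "advert"):
        return f"{base}/{node_name}/{record_type}"
    return f"{base}/{node_name}/event/{record_type}"
```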

    def _to_rfc3339_timestamp(self, value: Any) -> str:
        if isinstance(value, (int, float)):
            dt = datetime.fromtimestamp(float(value), timezone.utc)
        elif isinstance(value, str):
            normalized = value.strip()
            if normalized.endswith("Z"):
                return normalized
            try:
                dt = datetime.fromisoformat(normalized)
            except ValueError:
                dt = datetime.now(timezone.utc)
        elif isinstance(value, datetime):
            dt = value
        else:
            dt = datetime.now(timezone.utc)

        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)
        else:
            dt = dt.astimezone(timezone.utc)
        return dt.isoformat().replace("+00:00", "Z")
def _init_mqtt_publisher(self) -> None:
|
||||
if not self.mqtt_enabled:
|
||||
self._close_mqtt_publisher()
|
||||
return
|
||||
if mqtt is None:
|
||||
logger.warning("Glass MQTT telemetry publishing enabled but paho-mqtt is unavailable")
|
||||
self._close_mqtt_publisher()
|
||||
return
|
||||
if self._mqtt_client is not None:
|
||||
return
|
||||
|
||||
try:
|
||||
client = mqtt.Client()
|
||||
if self.mqtt_username:
|
||||
client.username_pw_set(self.mqtt_username, self.mqtt_password)
|
||||
if self.mqtt_tls_enabled:
|
||||
ca_certs = self._require_ssl_file(self.ca_cert_path, "ca_cert_path") if self.ca_cert_path else None
|
||||
certfile = None
|
||||
keyfile = None
|
||||
if self.client_cert_path or self.client_key_path:
|
||||
certfile = self._require_ssl_file(self.client_cert_path, "client_cert_path")
|
||||
keyfile = self._require_ssl_file(self.client_key_path, "client_key_path")
|
||||
cert_reqs = ssl.CERT_REQUIRED if self.verify_tls else ssl.CERT_NONE
|
||||
client.tls_set(
|
||||
ca_certs=ca_certs,
|
||||
certfile=certfile,
|
||||
keyfile=keyfile,
|
||||
cert_reqs=cert_reqs,
|
||||
tls_version=ssl.PROTOCOL_TLS_CLIENT,
|
||||
)
|
||||
if not self.verify_tls:
|
||||
client.tls_insecure_set(True)
|
||||
client.on_connect = self._on_mqtt_connect
|
||||
client.on_disconnect = self._on_mqtt_disconnect
|
||||
client.connect_async(self.mqtt_broker_host, self.mqtt_broker_port, 60)
|
||||
client.loop_start()
|
||||
self._mqtt_client = client
|
||||
self._mqtt_runtime_signature = self._current_mqtt_signature()
|
||||
logger.info(
|
||||
"Glass MQTT telemetry publisher started (%s:%s, base_topic=%s)",
|
||||
self.mqtt_broker_host,
|
||||
self.mqtt_broker_port,
|
||||
self.mqtt_base_topic,
|
||||
)
|
||||
except Exception as exc:
|
||||
self._mqtt_client = None
|
||||
self._mqtt_ready = False
|
||||
self._mqtt_runtime_signature = None
|
||||
logger.warning("Failed to start Glass MQTT telemetry publisher: %s", exc)
|
||||
|
||||
def _close_mqtt_publisher(self) -> None:
|
||||
client = self._mqtt_client
|
||||
self._mqtt_client = None
|
||||
self._mqtt_ready = False
|
||||
self._mqtt_runtime_signature = None
|
||||
if client is None:
|
||||
return
|
||||
try:
|
||||
client.loop_stop()
|
||||
client.disconnect()
|
||||
except Exception as exc:
|
||||
logger.debug("Error stopping Glass MQTT telemetry publisher: %s", exc)
|
||||
|
||||
def _on_mqtt_connect(self, _client, _userdata, _flags, reason_code, _properties=None) -> None:
|
||||
rc = getattr(reason_code, "value", reason_code)
|
||||
if rc == 0:
|
||||
self._mqtt_ready = True
|
||||
logger.info("Glass MQTT telemetry publisher connected")
|
||||
return
|
||||
self._mqtt_ready = False
|
||||
logger.warning("Glass MQTT telemetry publisher connect failed (code=%s)", rc)
|
||||
|
||||
def _on_mqtt_disconnect(self, _client, _userdata, reason_code, _properties=None) -> None:
|
||||
self._mqtt_ready = False
|
||||
rc = getattr(reason_code, "value", reason_code)
|
||||
if rc:
|
||||
logger.warning("Glass MQTT telemetry publisher disconnected (code=%s)", rc)
|
||||
|
||||
def _current_mqtt_signature(
|
||||
self,
|
||||
) -> Tuple[str, int, str, bool, bool, Optional[str], Optional[str], Optional[str], Optional[str], Optional[str]]:
|
||||
return (
|
||||
self.mqtt_broker_host,
|
||||
self.mqtt_broker_port,
|
||||
self.mqtt_base_topic,
|
||||
self.mqtt_tls_enabled,
|
||||
self.verify_tls,
|
||||
self.ca_cert_path,
|
||||
self.client_cert_path,
|
||||
self.client_key_path,
|
||||
self.mqtt_username,
|
||||
self.mqtt_password,
|
||||
)
|
||||
|
||||
def _sync_mqtt_publisher(self) -> None:
|
||||
if not self.enabled or not self.mqtt_enabled:
|
||||
self._close_mqtt_publisher()
|
||||
return
|
||||
if mqtt is None:
|
||||
self._close_mqtt_publisher()
|
||||
return
|
||||
|
||||
signature = self._current_mqtt_signature()
|
||||
if self._mqtt_client is None:
|
||||
self._init_mqtt_publisher()
|
||||
return
|
||||
if self._mqtt_runtime_signature != signature:
|
||||
self._close_mqtt_publisher()
|
||||
self._init_mqtt_publisher()
|
||||
|
||||
@staticmethod
|
||||
def _deep_merge(target: Dict[str, Any], source: Dict[str, Any]) -> None:
|
||||
for key, value in source.items():
|
||||
if (
|
||||
isinstance(value, dict)
|
||||
and isinstance(target.get(key), dict)
|
||||
):
|
||||
GlassHandler._deep_merge(target[key], value)
|
||||
else:
|
||||
target[key] = value
|
||||
|
||||
@staticmethod
|
||||
def _normalize_for_hash(value: Any) -> Any:
|
||||
if isinstance(value, bytes):
|
||||
return value.hex()
|
||||
if isinstance(value, dict):
|
||||
return {k: GlassHandler._normalize_for_hash(v) for k, v in value.items()}
|
||||
if isinstance(value, list):
|
||||
return [GlassHandler._normalize_for_hash(v) for v in value]
|
||||
return value
|
||||
|
||||
@staticmethod
|
||||
def _compute_config_hash(config: dict) -> str:
|
||||
normalized = GlassHandler._normalize_for_hash(config)
|
||||
encoded = json.dumps(normalized, sort_keys=True, separators=(",", ":")).encode("utf-8")
|
||||
digest = hashlib.sha256(encoded).hexdigest()
|
||||
return f"sha256:{digest}"
|
||||
|
||||
@staticmethod
|
||||
def _clamp_interval(interval_seconds: int) -> int:
|
||||
if interval_seconds < 5:
|
||||
return 5
|
||||
if interval_seconds > 3600:
|
||||
return 3600
|
||||
return interval_seconds
|
||||
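The timestamp normalization above accepts epoch numbers, ISO-8601 strings, and `datetime` objects, and always emits RFC 3339 UTC with a `Z` suffix. A standalone sketch of the same conversion (the function name here is illustrative, not part of the diff):

```python
from datetime import datetime, timezone

def to_rfc3339(value):
    """Normalize an epoch number, ISO-8601 string, or datetime to RFC 3339 UTC."""
    if isinstance(value, (int, float)):
        dt = datetime.fromtimestamp(float(value), timezone.utc)
    elif isinstance(value, str):
        s = value.strip()
        if s.endswith("Z"):
            return s  # already RFC 3339 with Z suffix
        try:
            dt = datetime.fromisoformat(s)
        except ValueError:
            dt = datetime.now(timezone.utc)
    elif isinstance(value, datetime):
        dt = value
    else:
        dt = datetime.now(timezone.utc)
    # Assume naive datetimes are UTC; convert aware ones to UTC.
    dt = dt.replace(tzinfo=timezone.utc) if dt.tzinfo is None else dt.astimezone(timezone.utc)
    return dt.isoformat().replace("+00:00", "Z")

print(to_rfc3339(0))  # 1970-01-01T00:00:00Z
```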
@@ -5,13 +5,14 @@ KISS - Keep It Simple Stupid approach.

try:
    import psutil

    PSUTIL_AVAILABLE = True
except ImportError:
    PSUTIL_AVAILABLE = False
    psutil = None

import logging
import time

logger = logging.getLogger("HardwareStats")
@@ -26,10 +27,8 @@ class HardwareStatsCollector:

        if not PSUTIL_AVAILABLE:
            logger.error("psutil not available - cannot collect hardware stats")
            return {"error": "psutil library not available - cannot collect hardware statistics"}

        try:
            # Get current timestamp
            now = time.time()
@@ -42,10 +41,10 @@ class HardwareStatsCollector:

            # Memory stats
            memory = psutil.virtual_memory()

            # Disk stats
            disk = psutil.disk_usage("/")

            # Network stats (total across all interfaces)
            net_io = psutil.net_io_counters()
@@ -79,48 +78,39 @@ class HardwareStatsCollector:
                    "usage_percent": cpu_percent,
                    "count": cpu_count,
                    "frequency": cpu_freq.current if cpu_freq else 0,
                    "load_avg": {"1min": load_avg[0], "5min": load_avg[1], "15min": load_avg[2]},
                },
                "memory": {
                    "total": memory.total,
                    "available": memory.available,
                    "used": memory.used,
                    "usage_percent": memory.percent,
                },
                "disk": {
                    "total": disk.total,
                    "used": disk.used,
                    "free": disk.free,
                    "usage_percent": round((disk.used / disk.total) * 100, 1),
                },
                "network": {
                    "bytes_sent": net_io.bytes_sent,
                    "bytes_recv": net_io.bytes_recv,
                    "packets_sent": net_io.packets_sent,
                    "packets_recv": net_io.packets_recv,
                },
                "system": {"uptime": system_uptime, "boot_time": boot_time},
            }

            # Add temperatures if available
            if temperatures:
                stats["temperatures"] = temperatures

            return stats

        except Exception as e:
            logger.error(f"Error collecting hardware stats: {e}")
            return {"error": str(e)}

    def get_processes_summary(self, limit=10):
        """
        Get top processes by CPU and memory usage.
@@ -131,44 +121,39 @@ class HardwareStatsCollector:
            return {
                "processes": [],
                "total_processes": 0,
                "error": "psutil library not available - cannot collect process statistics",
            }

        try:
            processes = []

            # Get all processes
            for proc in psutil.process_iter(
                ["pid", "name", "cpu_percent", "memory_percent", "memory_info"]
            ):
                try:
                    pinfo = proc.info
                    # Calculate memory in MB
                    memory_mb = 0
                    if pinfo["memory_info"]:
                        memory_mb = pinfo["memory_info"].rss / 1024 / 1024  # RSS in MB

                    process_data = {
                        "pid": pinfo["pid"],
                        "name": pinfo["name"] or "Unknown",
                        "cpu_percent": pinfo["cpu_percent"] or 0.0,
                        "memory_percent": pinfo["memory_percent"] or 0.0,
                        "memory_mb": round(memory_mb, 1),
                    }
                    processes.append(process_data)
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    pass

            # Sort by CPU usage and get top processes
            top_processes = sorted(processes, key=lambda x: x["cpu_percent"], reverse=True)[:limit]

            return {"processes": top_processes, "total_processes": len(processes)}

        except Exception as e:
            logger.error(f"Error collecting process stats: {e}")
            return {"processes": [], "total_processes": 0, "error": str(e)}
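The process summary keeps only the top `limit` entries by CPU usage. The selection step, separated from psutil so it can be shown standalone (sample data is made up):

```python
def top_by_cpu(processes, limit=10):
    """Return the `limit` process dicts with the highest cpu_percent."""
    return sorted(processes, key=lambda p: p["cpu_percent"], reverse=True)[:limit]

sample = [
    {"pid": 1, "name": "init", "cpu_percent": 0.1},
    {"pid": 42, "name": "repeater", "cpu_percent": 7.5},
    {"pid": 99, "name": "sshd", "cpu_percent": 1.2},
]
print([p["name"] for p in top_by_cpu(sample, limit=2)])  # ['repeater', 'sshd']
```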
@@ -1,13 +1,32 @@
import base64
import binascii
import json
import logging
import threading
from datetime import datetime, timedelta
from typing import Callable, Dict, List, Optional

import paho.mqtt.client as mqtt
from nacl.signing import SigningKey

# Try to import datetime.UTC (Python 3.11+) otherwise fallback to timezone.utc
try:
    from datetime import UTC
except Exception:
    from datetime import timezone

    UTC = timezone.utc

from repeater import __version__

# Try to import paho-mqtt error code mappings
try:
    from paho.mqtt.reasoncodes import ReasonCode

    HAS_REASON_CODES = True
except ImportError:
    HAS_REASON_CODES = False

logger = logging.getLogger("LetsMeshHandler")


# --------------------------------------------------------------------
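The token assembly below calls a `b64url` helper that is not shown in this diff; it is presumably URL-safe base64 without trailing `=` padding, the usual JWT segment encoding. A minimal sketch under that assumption:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """URL-safe base64 without trailing '=' padding, as used for JWT segments."""
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

# Compact JSON (no spaces) matches the separators=(",", ":") calls in the diff.
header = {"alg": "Ed25519", "typ": "JWT"}
segment = b64url(json.dumps(header, separators=(",", ":")).encode())
print(segment)
```

Decoding requires re-adding padding, e.g. `base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))`.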
@@ -37,65 +56,54 @@ LETSMESH_BROKERS = [


# ====================================================================
# Single Broker Connection Manager
# ====================================================================
class _BrokerConnection:
    """
    Manages a single MQTT broker connection with independent lifecycle.
    Internal class - not exposed publicly.
    """

    def __init__(
        self,
        broker: dict,
        local_identity,
        public_key: str,
        iata_code: str,
        jwt_expiry_minutes: int,
        use_tls: bool,
        email: str,
        owner: str,
        broker_index: int = 0,
        on_connect_callback: Optional[Callable] = None,
        on_disconnect_callback: Optional[Callable] = None,
    ):
        self.broker = broker
        self.local_identity = local_identity
        self.public_key = public_key.upper()
        self.iata_code = iata_code
        self.jwt_expiry_minutes = jwt_expiry_minutes
        self.broker_index = broker_index
        self.use_tls = use_tls
        self.email = email
        self.owner = owner
        self._on_connect_callback = on_connect_callback
        self._on_disconnect_callback = on_disconnect_callback
        self._connect_time = None
        self._tls_verified = False

        self._running = False
        self._reconnect_attempts = 0
        self._reconnect_timer = None
        self._max_reconnect_delay = 300  # 5 minutes max
        self._jwt_refresh_timer = None
        self._shutdown_requested = False
        client_id = f"meshcore_{self.public_key}_{broker['host']}"
        self.client = mqtt.Client(client_id=client_id, transport="websockets")
        self.client.on_connect = self._on_connect
        self.client.on_disconnect = self._on_disconnect

    # ----------------------------------------------------------------
    # MeshCore-style Ed25519 token generator
    # ----------------------------------------------------------------
    def _generate_jwt(self) -> str:
        """Generate MeshCore-style Ed25519 JWT token"""
        now = datetime.now(UTC)

        header = {"alg": "Ed25519", "typ": "JWT"}
@@ -106,126 +114,427 @@ class MeshCoreToMqttJwtPusher:
            "iat": int(now.timestamp()),
            "exp": int((now + timedelta(minutes=self.jwt_expiry_minutes)).timestamp()),
        }

        # Only include email/owner for verified TLS connections
        if self.use_tls and self._tls_verified and (self.email or self.owner):
            payload["email"] = self.email
            payload["owner"] = self.owner
            logging.debug("JWT includes email/owner (TLS verified)")
        else:
            payload["email"] = ""
            payload["owner"] = ""
            if not self.use_tls:
                logging.debug("JWT excludes email/owner (TLS disabled)")
            elif not self._tls_verified:
                logging.debug("JWT excludes email/owner (TLS not verified yet)")
            else:
                logging.debug("JWT excludes email/owner (email/owner not configured)")

        # Encode header and payload (compact JSON - no spaces)
        header_b64 = b64url(json.dumps(header, separators=(",", ":")).encode())
        payload_b64 = b64url(json.dumps(payload, separators=(",", ":")).encode())

        signing_input = f"{header_b64}.{payload_b64}".encode()
        # Sign using LocalIdentity (supports both standard and firmware keys)
        try:
            signature = self.local_identity.sign(signing_input)
        except Exception as e:
            logger.error(f"JWT signing failed for {self.broker['name']}: {e}")
            logger.error(f" - public_key: {self.public_key}")
            logger.error(f" - signing_input length: {len(signing_input)}")
            raise

        signature_hex = binascii.hexlify(signature).decode()
        token = f"{header_b64}.{payload_b64}.{signature_hex}"

        logger.debug(f"JWT token generated for {self.broker['name']}: {token[:50]}...")

        return token

    # ----------------------------------------------------------------
    # MQTT setup
    # ----------------------------------------------------------------
    def _on_connect(self, client, userdata, flags, rc):
        """MQTT connection callback"""
        if rc == 0:
            logger.info(f"Connected to {self.broker['name']}")
            self._running = True

            self._reconnect_attempts = 0  # Reset counter on success
            self._schedule_jwt_refresh()  # Schedule proactive JWT refresh
            if self._on_connect_callback:
                self._on_connect_callback(self.broker["name"])
        else:
            error_msg = get_mqtt_error_message(rc, is_disconnect=False)
            logger.error(f"Failed to connect to {self.broker['name']}: {error_msg}")
            self._schedule_reconnect()

    def _on_disconnect(self, client, userdata, rc):
        """MQTT disconnection callback"""
        was_running = self._running
        self._running = False

        if self._shutdown_requested:
            logger.info(f"Clean disconnect from {self.broker['name']}")
            if self._on_disconnect_callback:
                self._on_disconnect_callback(self.broker["name"])
            return

        if rc != 0:  # Unexpected disconnect
            error_msg = get_mqtt_error_message(rc, is_disconnect=True)
            logger.warning(f"Disconnected from {self.broker['name']} (rc={rc}): {error_msg}")
            if was_running:  # Only reconnect if we were intentionally connected
                self._schedule_reconnect(reason=error_msg)
        else:
            logger.info(f"Clean disconnect from {self.broker['name']}")

        if self._on_disconnect_callback:
            self._on_disconnect_callback(self.broker["name"])

    def _schedule_reconnect(self, reason: str = "connection lost"):
        """Schedule reconnection with exponential backoff"""
        if self._shutdown_requested:
            return

        if self._reconnect_timer:
            self._reconnect_timer.cancel()

        # Exponential backoff: 5s, 10s, 20s, 40s, 80s, up to max
        delay = min(5 * (2**self._reconnect_attempts), self._max_reconnect_delay)
        self._reconnect_attempts += 1

        logger.info(
            f"Scheduling reconnect to {self.broker['name']} in {delay}s (attempt {self._reconnect_attempts}, reason: {reason})"
        )
        self._reconnect_timer = threading.Timer(delay, lambda: self._attempt_reconnect(reason))
        self._reconnect_timer.daemon = True
        self._reconnect_timer.start()

    def _attempt_reconnect(self, reason: str = "connection lost"):
        """Attempt to reconnect to broker with fresh JWT"""
        if self._shutdown_requested:
            return

        try:
            logger.info(f"Attempting reconnection to {self.broker['name']} (reason: {reason})...")

            # Stop the loop if it's still running (websocket mode requires clean restart)
            try:
                self.client.loop_stop()
            except Exception:
                pass

            self._set_jwt_credentials()

            # Reconnect and restart loop
            self.client.connect(self.broker["host"], self.broker["port"], keepalive=60)
            self.client.loop_start()
            self._loop_running = True
        except Exception as e:
            logger.error(f"Reconnection failed for {self.broker['name']}: {e}")
            self._schedule_reconnect()  # Try again later

    def _set_jwt_credentials(self):
        """Set JWT token credentials before connecting (CONNECT handshake only)"""
        try:
            token = self._generate_jwt()
            username = f"v1_{self.public_key}"
            self.client.username_pw_set(username=username, password=token)
            self._connect_time = datetime.now(UTC)
            logger.debug(f"JWT credentials set for {self.broker['name']}")
            logger.debug(f"Using username: {username}")
            logger.debug(f"Public key: {self.public_key[:16]}...{self.public_key[-16:]}")
        except Exception as e:
            logger.error(f"Failed to set JWT credentials for {self.broker['name']}: {e}")
            raise

    # ----------------------------------------------------------------
    # Connect using WebSockets + TLS + MeshCore token auth
    # ----------------------------------------------------------------
    def connect(self):
        """Establish connection to broker"""
        self._shutdown_requested = False

        # Conditional TLS setup
        if self.use_tls:
            self.client.tls_set(cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLS_CLIENT)
            self.client.tls_insecure_set(False)
            self._tls_verified = True
            if self.email or self.owner:
                logging.info("TLS enabled with certificate verification - email/owner will be included")
            protocol = "wss"
        else:
            protocol = "ws"

        # Set JWT credentials before CONNECT handshake
        self._set_jwt_credentials()

        logger.info(
            f"Connecting to {self.broker['name']} "
            f"({protocol}://{self.broker['host']}:{self.broker['port']}) ..."
        )

        # Must use raw hostname without wss://
        self.client.connect(self.broker["host"], self.broker["port"], keepalive=60)
        self.client.loop_start()
        self._connect_time = datetime.now(UTC)
        self._loop_running = True

    def disconnect(self):
        """Disconnect from broker"""
        self._shutdown_requested = True
        self._running = False
        self._loop_running = False

        # Cancel any pending timers
        if self._reconnect_timer:
            self._reconnect_timer.cancel()
            self._reconnect_timer = None
        if self._jwt_refresh_timer:
            self._jwt_refresh_timer.cancel()
            self._jwt_refresh_timer = None

        self.client.loop_stop()
        self.client.disconnect()
        logger.info(f"Disconnected from {self.broker['name']}")

    def publish(self, topic: str, payload: str, retain: bool = False):
        """Publish message to broker"""
        if self._running:
            result = self.client.publish(topic, payload, retain=retain)
            return result
        return None

    def is_connected(self) -> bool:
        """Check if connection is active"""
        return self._running

    def has_pending_reconnect(self) -> bool:
        """Check if a reconnection is scheduled"""
        return self._reconnect_timer is not None and self._reconnect_timer.is_alive()

    def should_reconnect_for_token_expiry(self) -> bool:
        """Check if connection should be reconnected due to JWT expiry (at 80% of lifetime)"""
        if not self._connect_time:
            return False
        elapsed = (datetime.now(UTC) - self._connect_time).total_seconds()
        expiry_seconds = self.jwt_expiry_minutes * 60
        # Stagger refresh by 5% per broker to prevent simultaneous disconnects
        # Broker 0: 80%, Broker 1: 85%, Broker 2: 90%, etc.
        stagger_offset = self.broker_index * 0.05
        refresh_threshold = 0.80 + stagger_offset
        return elapsed >= expiry_seconds * refresh_threshold

    def _schedule_jwt_refresh(self):
        """Schedule proactive JWT refresh before token expires"""
        if self._jwt_refresh_timer:
            self._jwt_refresh_timer.cancel()

        expiry_seconds = self.jwt_expiry_minutes * 60
        # Stagger refresh by 5% per broker to prevent simultaneous disconnects
        # Broker 0: 80%, Broker 1: 85%, Broker 2: 90%, etc.
        stagger_offset = self.broker_index * 0.05
        refresh_threshold = 0.80 + stagger_offset
        refresh_delay = expiry_seconds * refresh_threshold

        logger.info(
            f"JWT refresh scheduled for {self.broker['name']} in {refresh_delay:.0f}s "
            f"({refresh_threshold*100:.0f}% of {self.jwt_expiry_minutes}min token lifetime)"
        )
        self._jwt_refresh_timer = threading.Timer(refresh_delay, self.reconnect_for_token_expiry)
        self._jwt_refresh_timer.daemon = True
        self._jwt_refresh_timer.start()

    def reconnect_for_token_expiry(self):
        """Proactively reconnect with new JWT before current one expires"""
        if not self._running:
            return

        logger.info(f"JWT token expiring soon for {self.broker['name']}, refreshing...")
        self._running = False
        self._jwt_refresh_timer = None

        self._schedule_reconnect(reason="JWT token expiry")
        self.client.disconnect()


# ====================================================================
# MeshCore → MQTT Publisher with Ed25519 auth token
# ====================================================================
class MeshCoreToMqttJwtPusher:

    def __init__(
        self,
        local_identity,
        config: dict,
        jwt_expiry_minutes: int = 10,
        use_tls: bool = True,
        stats_provider: Optional[Callable[[], dict]] = None,
    ):
        # Store local identity and get public key
        self.local_identity = local_identity
        public_key = local_identity.get_public_key().hex().upper()

        # Extract values from config
        from ..config import get_node_info

        node_info = get_node_info(config)

        iata_code = node_info["iata_code"]
        broker_index = node_info.get("broker_index")
        self.email = node_info.get("email", "")
        self.owner = node_info.get("owner", "")
        status_interval = node_info["status_interval"]
        node_name = node_info["node_name"]
        radio_config = node_info["radio_config"]

        # Get additional brokers from config (optional)
        letsmesh_config = config.get("letsmesh", {})
        additional_brokers = letsmesh_config.get("additional_brokers", [])

        # Determine which brokers to connect to
        if broker_index == -2:
            # Custom brokers only - no built-in brokers
            self.brokers = []
            logger.info("Custom broker mode: using only user-defined brokers")
        elif broker_index is None or broker_index == -1:
            # Connect to all built-in brokers + additional ones
            self.brokers = LETSMESH_BROKERS.copy()
            logger.info(
                f"Multi-broker mode: connecting to all {len(LETSMESH_BROKERS)} built-in brokers"
            )
        else:
            if broker_index >= len(LETSMESH_BROKERS):
                raise ValueError(f"Invalid broker_index {broker_index}")
            self.brokers = [LETSMESH_BROKERS[broker_index]]
            logger.info(f"Single broker mode: connecting to {self.brokers[0]['name']}")

        # Add additional brokers from config
        if additional_brokers:
            for broker_config in additional_brokers:
                if all(k in broker_config for k in ["name", "host", "port", "audience"]):
                    self.brokers.append(broker_config)
                    logger.info(f"Added custom broker: {broker_config['name']}")
                else:
                    logger.warning(f"Skipping invalid broker config: {broker_config}")

        # Validate that we have at least one broker
        if not self.brokers:
            raise ValueError(
                "No brokers configured. Either set broker_index to a valid value "
                "or provide additional_brokers in config."
            )

        self.public_key = public_key
        self.iata_code = iata_code
        self.jwt_expiry_minutes = jwt_expiry_minutes
        self.use_tls = use_tls
        self.status_interval = status_interval
        self.app_version = __version__
        self.node_name = node_name
        self.radio_config = radio_config
        self.stats_provider = stats_provider
        self._status_task = None
        self._running = False
        self._shutdown_requested = False
        self._lock = threading.Lock()
        self._connect_timers: List[threading.Timer] = []

        # Create broker connections
        self.connections: List[_BrokerConnection] = []
        for idx, broker in enumerate(self.brokers):
            conn = _BrokerConnection(
                broker=broker,
                local_identity=self.local_identity,
                public_key=self.public_key,
                iata_code=self.iata_code,
                jwt_expiry_minutes=self.jwt_expiry_minutes,
                use_tls=self.use_tls,
                email=self.email,
                owner=self.owner,
                broker_index=idx,
                on_connect_callback=self._on_broker_connected,
                on_disconnect_callback=self._on_broker_disconnected,
            )
            self.connections.append(conn)

        logger.info(f"Initialized with {len(self.connections)} broker connection(s)")

    def _on_broker_connected(self, broker_name: str):
        """Callback when a broker connects"""
        if self._shutdown_requested:
            return

        # Publish initial status on first connection
        if not self._status_task and self.status_interval > 0:
            self._running = True
            self.publish_status(
                state="online", origin=self.node_name, radio_config=self.radio_config
            )
            # Start heartbeat thread
            self._status_task = threading.Thread(target=self._status_heartbeat_loop, daemon=True)
            self._status_task.start()
            logger.info(f"Started status heartbeat (interval: {self.status_interval}s)")

    def _on_broker_disconnected(self, broker_name: str):
        """Callback when a broker disconnects"""
        # Check if all connections are down AND none have pending reconnects
        all_down = all(not conn.is_connected() for conn in self.connections)
        any_reconnecting = any(conn.has_pending_reconnect() for conn in self.connections)

        if all_down and not any_reconnecting:
            logger.warning("All broker connections lost with no pending reconnects")
        elif all_down:
            logger.info("All brokers temporarily disconnected, reconnects pending")

    def connect(self):
        """Establish connections to all configured brokers"""
        self._shutdown_requested = False
        self._connect_timers = []

        for idx, conn in enumerate(self.connections):
            try:
                if idx == 0:
                    # Connect first broker immediately
                    conn.connect()
                else:
                    # Stagger additional brokers using background timers
                    delay = idx * 30
                    logger.info(f"Staggering connection to {conn.broker['name']} by {delay}s")
                    timer = threading.Timer(delay, lambda c=conn: self._delayed_connect(c))
|
||||
timer.daemon = True
|
||||
timer.start()
|
||||
self._connect_timers.append(timer)
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to connect to {conn.broker['name']}: {e}")
|
||||
|
||||
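The connect-first-then-stagger pattern above can be sketched in isolation. This is a minimal standalone sketch, not the project's API: `connect_staggered`, `delayed_connect`, and the broker names are invented for the demo, and the real 30-second spacing is shrunk so the sketch runs quickly.

```python
import threading

connected = []

def delayed_connect(name: str) -> None:
    # Stand-in for _BrokerConnection.connect(); just records the broker name.
    connected.append(name)

def connect_staggered(brokers, spacing_s: float):
    """Connect broker 0 immediately; stagger the rest by index * spacing."""
    timers = []
    for idx, name in enumerate(brokers):
        if idx == 0:
            delayed_connect(name)
        else:
            t = threading.Timer(idx * spacing_s, delayed_connect, args=(name,))
            t.daemon = True
            t.start()
            timers.append(t)
    # Keep the timer references so a shutdown path can cancel pending connects.
    return timers

timers = connect_staggered(["primary", "backup-1", "backup-2"], spacing_s=0.05)
for t in timers:
    t.join()  # in the demo, wait for the staggered connects to fire
print(connected)  # → ['primary', 'backup-1', 'backup-2']
```

Keeping the `Timer` objects around mirrors `self._connect_timers` above: without the references, `disconnect()` could not cancel a connect that has not fired yet.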
    def _delayed_connect(self, conn):
        """Connect a broker after a delay (called by timer)"""
        if self._shutdown_requested:
            return

        try:
            conn.connect()
        except Exception as e:
            logger.error(f"Failed to connect to {conn.broker['name']}: {e}")

    def disconnect(self):
        """Disconnect from all brokers"""
        self._shutdown_requested = True

        # Cancel any delayed connect timers first.
        for timer in self._connect_timers:
            try:
                timer.cancel()
            except Exception:
                pass
        self._connect_timers = []

        # Stop the heartbeat loop
        self._running = False

        # Publish offline status before disconnecting
        try:
            self.publish_status(state="offline", origin=self.node_name, radio_config=self.radio_config)
        except Exception:
            pass

        # Disconnect all brokers
        for conn in self.connections:
            try:
                conn.disconnect()
            except Exception as e:
                logger.error(f"Error disconnecting from {conn.broker['name']}: {e}")

        self._status_task = None
        logger.info("Disconnected from all brokers")

    def _status_heartbeat_loop(self):
        """Background thread that publishes periodic status updates"""
@@ -233,20 +542,15 @@ class MeshCoreToMqttJwtPusher:

        while self._running:
            try:
                # Publish status (JWT refresh now handled by individual broker timers)
                self.publish_status(
                    state="online", origin=self.node_name, radio_config=self.radio_config
                )
                logger.debug(f"Status heartbeat sent (next in {self.status_interval}s)")

                time.sleep(self.status_interval)
            except Exception as e:
                logger.error(f"Status heartbeat error: {e}")
                time.sleep(self.status_interval)

    # ----------------------------------------------------------------
@@ -307,9 +611,122 @@ class MeshCoreToMqttJwtPusher:
        return self.publish("status", status, retain=False)

    def publish(self, subtopic: str, payload: dict, retain: bool = False):
        """Publish message to all connected brokers"""
        topic = self._topic(subtopic)
        message = json.dumps(payload)

        results = []
        with self._lock:
            for conn in self.connections:
                if conn.is_connected():
                    result = conn.publish(topic, message, retain=retain)
                    results.append((conn.broker["name"], result))
                    logger.debug(f"Published to {conn.broker['name']}/{topic}")

        if not results:
            logger.warning(f"No active broker connections for publishing to {topic}")

        return results
# ====================================================================
# Helper Functions
# ====================================================================


def get_mqtt_error_message(rc: int, is_disconnect: bool = False) -> str:
    """
    Get human-readable MQTT error message.

    Args:
        rc: Return code from paho-mqtt
        is_disconnect: True if from on_disconnect, False if from on_connect

    Returns:
        Human-readable error message
    """
    if HAS_REASON_CODES:
        try:
            # ReasonCode object has getName() method and value property
            reason = ReasonCode(mqtt.CONNACK if not is_disconnect else mqtt.DISCONNECT, identifier=rc)
            name = reason.getName() if hasattr(reason, "getName") else str(reason)
            return f"{name} (code {rc})"
        except Exception as e:
            # Log the exception for debugging
            logger.debug(f"Could not decode reason code {rc}: {e}")

    # Fallback to manual mappings - extended with MQTT v5 codes
    connect_errors = {
        0: "Connection accepted",
        1: "Incorrect protocol version",
        2: "Invalid client identifier",
        3: "Server unavailable",
        4: "Bad username or password (JWT invalid)",
        5: "Not authorized (JWT signature/format invalid)",
        # MQTT v5 codes
        128: "Unspecified error",
        129: "Malformed packet",
        130: "Protocol error",
        131: "Implementation specific error",
        132: "Unsupported protocol version",
        133: "Client identifier not valid",
        134: "Bad username or password",
        135: "Not authorized",
        136: "Server unavailable",
        137: "Server busy",
        138: "Banned",
        140: "Bad authentication method",
        144: "Topic name invalid",
        149: "Packet too large",
        151: "Quota exceeded",
        153: "Payload format invalid",
        154: "Retain not supported",
        155: "QoS not supported",
        156: "Use another server",
        157: "Server moved",
        159: "Connection rate exceeded",
    }

    disconnect_errors = {
        0: "Normal disconnect",
        1: "Unacceptable protocol version",
        2: "Identifier rejected",
        3: "Server unavailable",
        # Code 4 is also MQTT v5 "Disconnect with Will message"; listing it
        # twice in one dict literal would silently keep only the last entry,
        # so both meanings are merged here.
        4: "Bad username or password / Disconnect with Will message (v5)",
        5: "Not authorized",
        7: "Connection lost / network error",
        16: "Connection lost / protocol error",
        17: "Client timeout",
        # MQTT v5 codes
        128: "Unspecified error",
        129: "Malformed packet",
        130: "Protocol error",
        131: "Implementation specific error",
        135: "Not authorized",
        137: "Server busy",
        139: "Server shutting down",
        141: "Keep alive timeout",
        142: "Session taken over",
        143: "Topic filter invalid",
        144: "Topic name invalid",
        147: "Receive maximum exceeded",
        148: "Topic alias invalid",
        149: "Packet too large",
        150: "Message rate too high",
        151: "Quota exceeded",
        152: "Administrative action",
        153: "Payload format invalid",
        154: "Retain not supported",
        155: "QoS not supported",
        156: "Use another server",
        157: "Server moved",
        158: "Shared subscriptions not supported",
        159: "Connection rate exceeded",
        160: "Maximum connect time",
        161: "Subscription identifiers not supported",
        162: "Wildcard subscriptions not supported",
    }

    error_dict = disconnect_errors if is_disconnect else connect_errors
    return error_dict.get(rc, f"Unknown error code {rc}")
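The collision on disconnect code 4 is easy to reproduce: Python accepts duplicate keys in a dict literal and silently keeps only the last value, so merging the v3.1.1 and v5 code tables into one literal can drop entries without any warning. A small standalone demonstration (the `v3`/`v5` names are just for the demo):

```python
# Duplicate keys in a dict literal are legal Python; only the last survives.
merged_literal = {4: "Bad username or password", 4: "Disconnect with Will message"}
print(merged_literal[4])  # → Disconnect with Will message

# Detecting the collision explicitly before merging avoids the silent drop:
v3 = {4: "Bad username or password", 5: "Not authorized"}
v5 = {4: "Disconnect with Will message", 128: "Unspecified error"}
collisions = v3.keys() & v5.keys()
print(sorted(collisions))  # → [4]
```

Checking `v3.keys() & v5.keys()` before building a combined table makes every overlap an explicit decision rather than an accident of literal ordering.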
File diff suppressed because it is too large
@@ -1,10 +1,11 @@
import logging
import time
from pathlib import Path
from typing import Any, Dict, Optional

try:
    import rrdtool

    RRDTOOL_AVAILABLE = True
except ImportError:
    RRDTOOL_AVAILABLE = False

@@ -18,22 +19,31 @@ class RRDToolHandler:
        self.rrd_path = self.storage_dir / "metrics.rrd"
        self.available = RRDTOOL_AVAILABLE
        self._init_rrd()
        # Timestamp of the last successful rrdtool.update() call (unix seconds,
        # aligned to the 60-second RRD step). Used to skip writes whose period
        # has already been committed — no rrdtool.info() call needed.
        self._last_rrd_update: int = 0
        # Read-side cache: rrdtool.fetch() returns 24 h of data and is a
        # blocking disk read. Cache the result for 60 s — matching the RRD
        # step size — so repeated dashboard refreshes don't hammer the SD card.
        self._get_data_cache: tuple = (0.0, None)  # (fetched_at, result)
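The `(fetched_at, result)` tuple above is a one-slot read-through cache with a fixed TTL. A standalone sketch of the same idea (the `ReadCache` class and `slow_fetch` stand-in are invented for this demo; the handler inlines the logic rather than using a class):

```python
import time

class ReadCache:
    """Cache one expensive read for a fixed TTL (here, the 60 s RRD step)."""

    def __init__(self, ttl_s: float = 60.0):
        self.ttl_s = ttl_s
        self._cache = (0.0, None)  # (fetched_at, result)
        self.misses = 0

    def get(self, fetch):
        now = time.monotonic()
        fetched_at, result = self._cache
        if result is not None and now - fetched_at < self.ttl_s:
            return result  # cache hit: skip the blocking disk read entirely
        self.misses += 1
        result = fetch()
        self._cache = (now, result)
        return result

cache = ReadCache(ttl_s=60.0)

def slow_fetch():
    # Stand-in for rrdtool.fetch(): pretend this is a blocking disk read.
    return {"rows": 1440}

a = cache.get(slow_fetch)
b = cache.get(slow_fetch)  # served from cache; fetch is not called again
print(cache.misses)  # → 1
```

The TTL equals the RRD step, so within one step a cached result is byte-identical to a live fetch, which is what makes this cache lossless rather than merely approximate.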
    def _init_rrd(self):
        if not self.available:
            logger.warning("RRDTool not available - skipping RRD initialization")
            return

        if self.rrd_path.exists():
            logger.info(f"RRD database exists: {self.rrd_path}")
            return

        try:
            rrdtool.create(
                str(self.rrd_path),
                "--step",
                "60",
                "--start",
                str(int(time.time() - 60)),
                "DS:rx_count:COUNTER:120:0:U",
                "DS:tx_count:COUNTER:120:0:U",
                "DS:drop_count:COUNTER:120:0:U",
@@ -42,7 +52,6 @@ class RRDToolHandler:
                "DS:avg_length:GAUGE:120:0:256",
                "DS:avg_score:GAUGE:120:0:1",
                "DS:neighbor_count:GAUGE:120:0:U",
                "DS:type_0:COUNTER:120:0:U",
                "DS:type_1:COUNTER:120:0:U",
                "DS:type_2:COUNTER:120:0:U",
@@ -60,121 +69,157 @@ class RRDToolHandler:
                "DS:type_14:COUNTER:120:0:U",
                "DS:type_15:COUNTER:120:0:U",
                "DS:type_other:COUNTER:120:0:U",
                "RRA:AVERAGE:0.5:1:10080",
                "RRA:AVERAGE:0.5:5:8640",
                "RRA:AVERAGE:0.5:60:8760",
                "RRA:MAX:0.5:1:10080",
                "RRA:MIN:0.5:1:10080",
            )
            logger.info(f"RRD database created: {self.rrd_path}")

        except Exception as e:
            logger.error(f"Failed to create RRD database: {e}")
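The `COUNTER` data sources above are fed cumulative totals, from which RRD derives a per-second rate between consecutive samples. The arithmetic (not the rrdtool API) can be sketched like this; note that real RRD `COUNTER` handling also deals with counter wraparound, which this sketch omits:

```python
def counter_rate(prev_total: int, cur_total: int, step_s: int) -> float:
    """Per-second rate derived from two cumulative COUNTER samples."""
    return (cur_total - prev_total) / step_s

# Two rx_count samples taken one 60-second step apart:
rate = counter_rate(prev_total=1200, cur_total=1320, step_s=60)
print(rate)  # → 2.0 packets per second
```

This is why the handler stores `rx_total`/`tx_total` rather than per-interval deltas: the database turns the monotonic totals into rates itself.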
    def update_packet_metrics(self, record: dict, cumulative_counts: dict):
        """Write packet metrics to RRD, throttled to once per 60-second step.

        RRD enforces a 60-second minimum step between updates. We track the
        last written timestamp ourselves — no rrdtool.info() call needed, which
        previously allocated thousands of Python objects per call.
        """
        if not self.available or not self.rrd_path.exists():
            return

        try:
            timestamp = int(record.get("timestamp", time.time()))

            # Skip if this packet falls in the same 60-second period we already wrote.
            if timestamp <= self._last_rrd_update:
                return

            # Build update string from cumulative counts
            rx_total = cumulative_counts.get("rx_total", 0)
            tx_total = cumulative_counts.get("tx_total", 0)
            drop_total = cumulative_counts.get("drop_total", 0)
            type_counts = cumulative_counts.get("type_counts", {})

            type_values = []
            for i in range(16):
                type_values.append(str(type_counts.get(f"type_{i}", 0)))
            type_values.append(str(type_counts.get("type_other", 0)))

            rssi = record.get("rssi")
            snr = record.get("snr")
            score = record.get("score")

            rssi_val = "U" if rssi is None else str(rssi)
            snr_val = "U" if snr is None else str(snr)
            score_val = "U" if score is None else str(score)
            length_val = str(record.get("length", 0))

            basic_values = (
                f"{timestamp}:{rx_total}:{tx_total}:{drop_total}:"
                f"{rssi_val}:{snr_val}:{length_val}:{score_val}:"
                f"U"
            )

            type_values_str = ":".join(type_values)
            values = f"{basic_values}:{type_values_str}"

            rrdtool.update(str(self.rrd_path), values)

            self._last_rrd_update = timestamp

        except Exception as e:
            logger.error(f"Failed to update RRD packet metrics: {e}")
            logger.debug(f"RRD packet update failed - record: {record}")
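The write-side throttle above boils down to one comparison against a locally tracked timestamp. A standalone sketch of that bookkeeping (the `StepThrottle` class is invented for the demo; the handler keeps the state inline as `_last_rrd_update`, and relies on RRD's own 60-second step to space the accepted timestamps):

```python
class StepThrottle:
    """Accept at most one write per period, tracked locally with no
    round-trip to the database (no rrdtool.info() call)."""

    def __init__(self):
        self.last_written = 0
        self.writes = 0

    def maybe_write(self, timestamp: int) -> bool:
        if timestamp <= self.last_written:
            return False  # this period was already committed
        self.writes += 1
        self.last_written = timestamp
        return True

t = StepThrottle()
results = [t.maybe_write(ts) for ts in (100, 100, 130, 160, 160)]
print(results)  # → [True, False, True, True, False]
```

Tracking `last_written` locally is the whole optimisation: the old code asked `rrdtool.info()` for `last_update` on every packet, paying an allocation-heavy call just to learn a value the process had written itself.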
    def get_data(
        self,
        start_time: Optional[int] = None,
        end_time: Optional[int] = None,
        resolution: str = "average",
    ) -> Optional[dict]:
        if not self.available or not self.rrd_path.exists():
            logger.error(
                f"RRD not available: available={self.available}, rrd_path exists={self.rrd_path.exists()}"
            )
            return None

        # Serve from cache if result is still fresh. RRD step is 60 s, so
        # anything newer than that is guaranteed to be identical to a live fetch.
        # Only the default (full 24-hour, no explicit bounds) call is cached —
        # explicit start/end requests always bypass the cache.
        now = time.time()
        use_cache = start_time is None and end_time is None
        if use_cache:
            cache_fetched_at, cache_result = self._get_data_cache
            if now - cache_fetched_at < 60.0 and cache_result is not None:
                return cache_result

        try:
            if end_time is None:
                end_time = int(now)
            if start_time is None:
                start_time = end_time - (24 * 3600)

            fetch_result = rrdtool.fetch(
                str(self.rrd_path),
                resolution.upper(),
                "--start",
                str(start_time),
                "--end",
                str(end_time),
            )

            if not fetch_result:
                logger.error("RRD fetch returned None")
                return None

            (start, end, step), data_sources, data_points = fetch_result

            if not data_points:
                logger.warning("No data points returned from RRD fetch")

            result = {
                "start_time": start,
                "end_time": end,
                "step": step,
                "data_sources": data_sources,
                "packet_types": {},
                "metrics": {},
            }

            timestamps = []
            current_time = start

            for ds in data_sources:
                if ds.startswith("type_"):
                    result["packet_types"][ds] = []
                else:
                    result["metrics"][ds] = []

            for point in data_points:
                timestamps.append(current_time)

                for i, value in enumerate(point):
                    ds_name = data_sources[i]
                    if ds_name.startswith("type_"):
                        result["packet_types"][ds_name].append(value)
                    else:
                        result["metrics"][ds_name].append(value)

                current_time += step

            result["timestamps"] = timestamps

            # Populate read cache for default (unconstrained) calls only.
            if use_cache:
                self._get_data_cache = (now, result)

            return result

        except Exception as e:
            logger.error(f"Failed to get RRD data: {e}")
            return None
@@ -183,65 +228,65 @@ class RRDToolHandler:
        try:
            end_time = int(time.time())
            start_time = end_time - (hours * 3600)

            rrd_data = self.get_data(start_time, end_time)
            if not rrd_data or "packet_types" not in rrd_data:
                logger.warning("No RRD data available")
                return None

            type_totals = {}
            packet_type_names = {
                "type_0": "Request (REQ)",
                "type_1": "Response (RESPONSE)",
                "type_2": "Plain Text Message (TXT_MSG)",
                "type_3": "Acknowledgment (ACK)",
                "type_4": "Node Advertisement (ADVERT)",
                "type_5": "Group Text Message (GRP_TXT)",
                "type_6": "Group Datagram (GRP_DATA)",
                "type_7": "Anonymous Request (ANON_REQ)",
                "type_8": "Returned Path (PATH)",
                "type_9": "Trace (TRACE)",
                "type_10": "Multi-part Packet (MULTIPART)",
                "type_11": "Control (CONTROL)",
                "type_12": "Reserved Type 12",
                "type_13": "Reserved Type 13",
                "type_14": "Reserved Type 14",
                "type_15": "Custom Packet (RAW_CUSTOM)",
                "type_other": "Other Types (>15)",
            }

            total_valid_points = 0
            for type_key, data_points in rrd_data["packet_types"].items():
                valid_points = [p for p in data_points if p is not None]
                total_valid_points += len(valid_points)

            if total_valid_points < 10:
                logger.warning(f"RRD data too sparse ({total_valid_points} valid points)")
                return None

            for type_key, data_points in rrd_data["packet_types"].items():
                valid_points = [p for p in data_points if p is not None]

                if len(valid_points) >= 2:
                    total = max(valid_points) - min(valid_points)
                elif len(valid_points) == 1:
                    total = valid_points[0]
                else:
                    total = 0

                type_name = packet_type_names.get(type_key, type_key)
                type_totals[type_name] = max(0, total or 0)

            result = {
                "hours": hours,
                "packet_type_totals": type_totals,
                "total_packets": sum(type_totals.values()),
                "period": f"{hours} hours",
                "data_source": "rrd",
            }

            return result

        except Exception as e:
            logger.error(f"Failed to get packet type stats from RRD: {e}")
            return None
File diff suppressed because it is too large
@@ -1,17 +1,16 @@
import asyncio
import json
import logging
import time
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, Optional

from .mqtt_handler import MeshCoreToMqttPusher
from .rrdtool_handler import RRDToolHandler
from .sqlite_handler import SQLiteHandler
from .storage_utils import PacketRecord


logger = logging.getLogger("StorageCollector")
@@ -19,130 +18,296 @@ class StorageCollector:
    def __init__(self, config: dict, local_identity=None, repeater_handler=None):
        self.config = config
        self.repeater_handler = repeater_handler
        self.glass_publish_callback = None
        self._pending_tasks = set()

        node_name = config.get("repeater", {}).get("node_name", "unknown")
        storage_dir_cfg = (
            config.get("storage", {}).get("storage_dir")
            or config.get("storage_dir")
            or "/var/lib/pymc_repeater"
        )
        self.storage_dir = Path(storage_dir_cfg)
        self.storage_dir.mkdir(parents=True, exist_ok=True)

        self.sqlite_handler = SQLiteHandler(self.storage_dir)
        self.rrd_handler = RRDToolHandler(self.storage_dir)

        # Initialize MQTT handler if configured
        self.mqtt_handler = None
        if (config.get("mqtt_brokers", {}) or config.get("letsmesh", {}) or config.get("mqtt", {})) and local_identity:
            try:
                # Pass local_identity directly (supports both standard and firmware keys)
                self.mqtt_handler = MeshCoreToMqttPusher(
                    local_identity=local_identity,
                    config=config,
                    stats_provider=self._get_live_stats,
                )
                self.mqtt_handler.connect()

                public_key_hex = local_identity.get_public_key().hex()
                logger.info(
                    f"MQTT handler initialized with public key: {public_key_hex[:16]}..."
                )
            except Exception as e:
                logger.error(f"Failed to initialize MQTT handler: {e}")
                self.mqtt_handler = None

        # Initialize hardware stats collector
        from .hardware_stats import HardwareStatsCollector

        self.hardware_stats = HardwareStatsCollector()
        logger.info("Hardware stats collector initialized")

        # Initialize WebSocket handler for real-time updates
        self.websocket_available = False
        self.websocket_has_connected_clients = lambda: False
        self._last_ws_stats_broadcast: float = 0.0
        self._ws_stats_broadcast_interval_sec: float = 5.0
        try:
            from .websocket_handler import (
                broadcast_packet,
                broadcast_stats,
                has_connected_clients,
            )

            self.websocket_broadcast_packet = broadcast_packet
            self.websocket_broadcast_stats = broadcast_stats
            self.websocket_has_connected_clients = has_connected_clients
            self.websocket_available = True
            logger.info("WebSocket handler initialized for real-time updates")
        except ImportError:
            logger.debug("WebSocket handler not available")
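The `_last_ws_stats_broadcast` / `_ws_stats_broadcast_interval_sec` pair above implements an interval gate on the monotonic clock: stats go out at most once per interval no matter how many packets arrive. A standalone sketch (the `IntervalGate` class is invented for the demo, and the 5-second interval is shrunk so it runs quickly):

```python
import time

class IntervalGate:
    """Allow an action at most once per interval, using the monotonic clock
    so wall-clock adjustments cannot stall or double-fire the gate."""

    def __init__(self, interval_s: float):
        self.interval_s = interval_s
        self._last = float("-inf")  # so the very first check fires

    def ready(self) -> bool:
        now = time.monotonic()
        if now - self._last >= self.interval_s:
            self._last = now
            return True
        return False

gate = IntervalGate(interval_s=0.05)
first = gate.ready()    # fires immediately
second = gate.ready()   # suppressed: interval not yet elapsed
time.sleep(0.06)
third = gate.ready()    # fires again after the interval
print(first, second, third)  # → True False True
```

Using `time.monotonic()` rather than `time.time()` matters on a repeater that may NTP-step its clock: the gate keys off elapsed time, not wall-clock time.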
    def _track_task(self, task: asyncio.Task):
        """Track background task for lifecycle management and error handling."""
        self._pending_tasks.add(task)

        def on_done(t: asyncio.Task):
            self._pending_tasks.discard(t)
            try:
                t.result()
            except asyncio.CancelledError:
                pass
            except Exception as e:
                logger.error(f"Background task error: {e}", exc_info=True)

        task.add_done_callback(on_done)

    def _schedule_background(self, coro_factory, *args, sync_fallback=None):
        """Schedule a coroutine if a loop exists; otherwise run sync fallback."""
        try:
            loop = asyncio.get_running_loop()
        except RuntimeError:
            if sync_fallback is not None:
                sync_fallback(*args)
            return

        task = loop.create_task(coro_factory(*args))
        self._track_task(task)
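The schedule-or-fall-back shape of `_schedule_background` can be demonstrated standalone. This sketch keeps the same structure but drops the task-tracking; the function and callback names here are invented for the demo:

```python
import asyncio

calls = []

def schedule_background(coro_factory, *args, sync_fallback=None):
    """Run as a task if an event loop is active; otherwise run the sync path."""
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # No loop in this thread: degrade gracefully to a blocking call.
        if sync_fallback is not None:
            sync_fallback(*args)
        return None
    return loop.create_task(coro_factory(*args))

async def publish_async(record):
    calls.append(("async", record))

def publish_sync(record):
    calls.append(("sync", record))

# Called with no running loop: the sync fallback executes inline.
schedule_background(publish_async, "pkt-1", sync_fallback=publish_sync)

# Called from inside a loop: the coroutine is scheduled as a task instead.
async def main():
    task = schedule_background(publish_async, "pkt-2", sync_fallback=publish_sync)
    await task

asyncio.run(main())
print(calls)  # → [('sync', 'pkt-1'), ('async', 'pkt-2')]
```

`asyncio.get_running_loop()` raising `RuntimeError` is the documented way to probe for a loop from arbitrary code, which is what lets the same `record_packet` call site work both under the async web server and in synchronous tests.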
    def _get_live_stats(self) -> dict:
        """Get live stats from RepeaterHandler"""
        if not self.repeater_handler:
            return {
                "uptime_secs": 0,
                "packets_sent": 0,
                "packets_received": 0,
                "errors": 0,
                "queue_len": 0,
            }

        uptime_secs = int(time.time() - self.repeater_handler.start_time)

        # Get airtime stats
        airtime_stats = self.repeater_handler.airtime_mgr.get_stats()

        # Get latest noise floor from database
        noise_floor = None
        try:
            recent_noise = self.sqlite_handler.get_noise_floor_history(hours=0.5, limit=1)
            if recent_noise and len(recent_noise) > 0:
                noise_floor = recent_noise[-1].get("noise_floor_dbm")
        except Exception as e:
            logger.debug(f"Could not fetch noise floor: {e}")

        stats = {
            "uptime_secs": uptime_secs,
            "packets_sent": self.repeater_handler.forwarded_count,
            "packets_received": self.repeater_handler.rx_count,
            "errors": 0,
            "queue_len": 0,  # N/A for Python repeater
        }

        # Add airtime stats
        if airtime_stats:
            stats["tx_air_secs"] = airtime_stats["total_airtime_ms"] / 1000
            stats["current_airtime_ms"] = airtime_stats["current_airtime_ms"]
            stats["utilization_percent"] = airtime_stats["utilization_percent"]

        # Add noise floor if available
        if noise_floor is not None:
            stats["noise_floor"] = noise_floor

        return stats

    def record_packet(self, packet_record: dict, skip_mqtt_if_invalid: bool = True):
        """Record packet to storage and publish to MQTT

        Args:
            packet_record: Dictionary containing packet information
            skip_mqtt_if_invalid: If True, don't publish packets with drop_reason to mqtt
        """
        logger.debug(
            f"Recording packet: type={packet_record.get('type')}, "
            f"transmitted={packet_record.get('transmitted')}"
        )

        # HOT PATH: Store to local databases only (fast, non-blocking)
        self.sqlite_handler.store_packet(packet_record)
        cumulative_counts = self.sqlite_handler.get_cumulative_counts()
        self.rrd_handler.update_packet_metrics(packet_record, cumulative_counts)

        # DEFERRED: Publish to network sinks and WebSocket in background tasks
        # This prevents network latency from blocking packet processing
        self._schedule_background(
            self._deferred_publish,
            packet_record,
            skip_mqtt_if_invalid,
            sync_fallback=self._publish_packet_sync,
        )
|
||||
"""Publish packet to LetsMesh broker if enabled and allowed"""
|
||||
if not self.letsmesh_handler:
|
||||
async def _deferred_publish(self, packet_record: dict, skip_mqtt: bool):
|
||||
"""Deferred background task for all network publishing operations."""
|
||||
try:
|
||||
self._publish_packet_sync(packet_record, skip_mqtt)
|
||||
except Exception as e:
|
||||
logger.error(f"Deferred publish failed: {e}", exc_info=True)
|
||||
|
||||
    def _publish_packet_sync(self, packet_record: dict, skip_mqtt: bool):
        """Publish packet updates synchronously (used when no asyncio loop is active)."""
        self._publish_to_glass(packet_record, "packet")

        if self.websocket_available:
            try:
                self.websocket_broadcast_packet(packet_record)
                if self.websocket_has_connected_clients():
                    now_mono = time.monotonic()
                    if (
                        now_mono - self._last_ws_stats_broadcast
                        >= self._ws_stats_broadcast_interval_sec
                    ):
                        self._last_ws_stats_broadcast = now_mono
                        packet_stats_24h = self.sqlite_handler.get_packet_stats(hours=24)
                        uptime_seconds = (
                            time.time() - self.repeater_handler.start_time if self.repeater_handler else 0
                        )
                        self.websocket_broadcast_stats(
                            {
                                "packet_stats": packet_stats_24h,
                                "system_stats": {"uptime_seconds": uptime_seconds},
                            }
                        )
            except Exception as e:
                logger.debug(f"WebSocket broadcast failed: {e}")

        self._publish_packet_to_mqtt(packet_record)

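The stats broadcast above is throttled with `time.monotonic()` rather than wall-clock time, so NTP corrections can't suppress or double-fire broadcasts. The throttle condition can be isolated into a small sketch (names here are illustrative, not the repo's):

```python
import time


class Throttle:
    """Sketch of the interval gate used for the WebSocket stats broadcast:
    fire at most once per interval, measured on the monotonic clock."""

    def __init__(self, interval_sec: float):
        self.interval = interval_sec
        self._last = float("-inf")  # first call always fires

    def ready(self) -> bool:
        now = time.monotonic()
        if now - self._last >= self.interval:
            self._last = now
            return True
        return False
```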
    def _publish_packet_to_mqtt(self, packet_record: dict):
        """Publish packet to mqtt broker if enabled and allowed"""
        if not self.mqtt_handler:
            return

        try:
            packet_type = packet_record.get("type")
            if packet_type is None:
                logger.error("Cannot publish to mqtt: packet_record missing 'type' field")
                return

            if packet_type in self.disallowed_packet_types:
                logger.debug(f"Skipped publishing packet type 0x{packet_type:02X} (disallowed)")
                return

            node_name = self.config.get("repeater", {}).get("node_name", "Unknown")
            packet = PacketRecord.from_packet_record(
                packet_record, origin=node_name, origin_id=self.mqtt_handler.public_key
            )

            if packet:
                self.mqtt_handler.publish_packet(packet.to_dict())
                logger.debug(f"Published packet type 0x{packet_type:02X} to mqtt")
            else:
                logger.debug("Skipped mqtt publish: packet missing raw_packet data")

        except Exception as e:
            logger.error(f"Failed to publish packet to mqtt: {e}", exc_info=True)

    def record_advert(self, advert_record: dict):
        """Record advert to storage and defer network publishing to background tasks."""
        self.sqlite_handler.store_advert(advert_record)
        self._schedule_background(
            self._deferred_publish_advert,
            advert_record,
            sync_fallback=self._publish_advert_sync,
        )

    async def _deferred_publish_advert(self, advert_record: dict):
        """Deferred background task for advert publishing."""
        try:
            self._publish_advert_sync(advert_record)
        except Exception as e:
            logger.error(f"Deferred advert publish failed: {e}", exc_info=True)

    def _publish_advert_sync(self, advert_record: dict):
        if self.mqtt_handler:
            self.mqtt_handler.publish_mqtt(advert_record, "advert")
        self._publish_to_glass(advert_record, "advert")

    def record_noise_floor(self, noise_floor_dbm: float):
        """Record noise floor to storage and defer network publishing to background tasks."""
        noise_record = {"timestamp": time.time(), "noise_floor_dbm": noise_floor_dbm}
        self.sqlite_handler.store_noise_floor(noise_record)
        self._schedule_background(
            self._deferred_publish_noise_floor,
            noise_record,
            sync_fallback=self._publish_noise_floor_sync,
        )

    async def _deferred_publish_noise_floor(self, noise_record: dict):
        """Deferred background task for noise floor publishing."""
        try:
            self._publish_noise_floor_sync(noise_record)
        except Exception as e:
            logger.error(f"Deferred noise floor publish failed: {e}", exc_info=True)

    def _publish_noise_floor_sync(self, noise_record: dict):
        if self.mqtt_handler:
            self.mqtt_handler.publish_mqtt(noise_record, "noise_floor")
        self._publish_to_glass(noise_record, "noise_floor")

    def record_crc_errors(self, count: int):
        """Record a batch of CRC errors detected since last poll and defer publishing."""
        crc_record = {"timestamp": time.time(), "count": count}
        self.sqlite_handler.store_crc_errors(crc_record)
        self._schedule_background(
            self._deferred_publish_crc_errors,
            crc_record,
            sync_fallback=self._publish_crc_errors_sync,
        )

    async def _deferred_publish_crc_errors(self, crc_record: dict):
        """Deferred background task for CRC error publishing."""
        try:
            self._publish_crc_errors_sync(crc_record)
        except Exception as e:
            logger.error(f"Deferred CRC errors publish failed: {e}", exc_info=True)

    def _publish_crc_errors_sync(self, crc_record: dict):
        if self.mqtt_handler:
            self.mqtt_handler.publish_mqtt(crc_record, "crc_errors")
        self._publish_to_glass(crc_record, "crc_errors")

    def get_crc_error_count(self, hours: int = 24) -> int:
        return self.sqlite_handler.get_crc_error_count(hours)

    def get_crc_error_history(self, hours: int = 24, limit: int = None) -> list:
        return self.sqlite_handler.get_crc_error_history(hours, limit)

    def get_packet_stats(self, hours: int = 24) -> dict:
        return self.sqlite_handler.get_packet_stats(hours)
@@ -157,9 +322,32 @@ class StorageCollector:
        start_timestamp: Optional[float] = None,
        end_timestamp: Optional[float] = None,
        limit: int = 1000,
        offset: int = 0,
    ) -> list:
        return self.sqlite_handler.get_filtered_packets(
            packet_type, route, start_timestamp, end_timestamp, limit, offset
        )

    def get_airtime_data(
        self,
        start_timestamp: Optional[float] = None,
        end_timestamp: Optional[float] = None,
        limit: int = 50000,
    ) -> list:
        return self.sqlite_handler.get_airtime_data(start_timestamp, end_timestamp, limit)

    def get_airtime_buckets(
        self,
        start_timestamp: float,
        end_timestamp: float,
        bucket_seconds: int = 60,
        sf: int = 9,
        bw_hz: int = 62500,
        cr: int = 5,
        preamble: int = 17,
    ) -> dict:
        return self.sqlite_handler.get_airtime_buckets(
            start_timestamp, end_timestamp, bucket_seconds, sf, bw_hz, cr, preamble
        )

    def get_packet_by_hash(self, packet_hash: str) -> Optional[dict]:
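`get_airtime_buckets` takes LoRa PHY parameters (spreading factor, bandwidth, coding rate, preamble length), which suggests airtime is computed with the standard Semtech time-on-air formula. A self-contained sketch follows; the parameter names mirror the method's signature, and the mapping `cr=5` → coding rate 4/5 (i.e. `cr` is the denominator) is an assumption, as are the header/CRC defaults:

```python
import math


def lora_airtime_ms(payload_bytes: int, sf: int = 9, bw_hz: int = 62500,
                    cr: int = 5, preamble: int = 17,
                    explicit_header: bool = True, crc: bool = True,
                    low_dr_optimize: bool = False) -> float:
    """Semtech LoRa time-on-air sketch. cr is assumed to be the coding-rate
    denominator (5 => 4/5), so it multiplies the payload symbol groups."""
    t_sym_ms = (2 ** sf) / bw_hz * 1000.0  # symbol duration in ms
    de = 1 if low_dr_optimize else 0
    ih = 0 if explicit_header else 1
    payload_symbols = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * cr,
        0,
    )
    # Preamble adds 4.25 symbols of sync overhead
    total_symbols = preamble + 4.25 + payload_symbols
    return total_symbols * t_sym_ms
```

With the defaults above (SF9, 62.5 kHz, 4/5, 17-symbol preamble), a 20-byte payload works out to roughly 444 ms of airtime, which is why per-bucket airtime accounting matters on a shared channel.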
@@ -187,23 +375,61 @@ class StorageCollector:
    def get_neighbors(self) -> dict:
        return self.sqlite_handler.get_neighbors()

    def get_node_name_by_pubkey(self, pubkey: str) -> Optional[str]:
        """
        Lookup node name from adverts table by public key.

        Args:
            pubkey: Public key in hex string format

        Returns:
            Node name if found, None otherwise
        """
        try:
            import sqlite3

            with sqlite3.connect(self.sqlite_handler.sqlite_path) as conn:
                result = conn.execute(
                    "SELECT node_name FROM adverts WHERE pubkey = ? AND node_name IS NOT NULL ORDER BY last_seen DESC LIMIT 1",
                    (pubkey,),
                ).fetchone()
            return result[0] if result else None
        except Exception as e:
            logger.debug(f"Could not lookup node name for {pubkey[:8] if pubkey else 'None'}: {e}")
            return None

    def cleanup_old_data(self, days: int = 7):
        self.sqlite_handler.cleanup_old_data(days)

    def get_noise_floor_history(self, hours: int = 24, limit: int = None) -> list:
        return self.sqlite_handler.get_noise_floor_history(hours, limit)

    def get_noise_floor_stats(self, hours: int = 24) -> dict:
        return self.sqlite_handler.get_noise_floor_stats(hours)

    def close(self):
        # Cancel all pending background tasks
        for task in self._pending_tasks:
            if not task.done():
                task.cancel()

        if self.mqtt_handler:
            try:
                self.mqtt_handler.disconnect()
                logger.info("MQTT handler disconnected")
            except Exception as e:
                logger.error(f"Error disconnecting MQTT handler: {e}")

    def set_glass_publisher(self, publish_callback):
        self.glass_publish_callback = publish_callback

    def _publish_to_glass(self, record: dict, record_type: str):
        if not self.glass_publish_callback:
            return
        try:
            self.glass_publish_callback(record_type, record)
        except Exception as e:
            logger.debug(f"Failed to publish telemetry to Glass MQTT: {e}")

    def create_transport_key(
        self,
@@ -1,6 +1,6 @@
"""Storage utility classes and functions for data acquisition."""

from dataclasses import asdict, dataclass
from datetime import datetime
from typing import Optional

@@ -10,7 +10,7 @@ class PacketRecord:
    """
    Data class for packet record format.
    Converts internal packet_record format to standardized publish format.
    Reusable across MQTT and other handlers.
    """

    origin: str
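Only the `origin` field of `PacketRecord` is visible in this hunk. A minimal sketch of the dataclass pattern it describes (convert an internal dict to a standardized publish format, skip records without raw packet data) could look like the following; every field besides `origin` and the method bodies are illustrative placeholders, not the real schema:

```python
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class PacketRecord:
    """Sketch only: `origin` comes from the diff, the rest is assumed."""

    origin: str
    origin_id: str
    payload_hex: Optional[str] = None

    @classmethod
    def from_packet_record(cls, record: dict, origin: str, origin_id: str):
        raw = record.get("raw_packet")
        if raw is None:
            # Matches the "packet missing raw_packet data" skip in the collector
            return None
        return cls(origin=origin, origin_id=origin_id, payload_hex=raw)

    def to_dict(self) -> dict:
        return asdict(self)
```

The `from_packet_record -> None` contract is what lets the collector's `if packet:` branch decide between publishing and logging a skip.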
@@ -0,0 +1,168 @@
"""
WebSocket handler for real-time packet updates - simple ws4py implementation
"""

import json
import logging
import threading
import time
from urllib.parse import parse_qs

import cherrypy
from ws4py.server.cherrypyserver import WebSocketPlugin, WebSocketTool
from ws4py.websocket import WebSocket

logger = logging.getLogger("WebSocket")

# Suppress noisy ws4py error logs for normal disconnections (ConnectionResetError, etc.)
logging.getLogger("ws4py").setLevel(logging.CRITICAL)

# Global set of connected clients
_connected_clients = set()

# Heartbeat configuration
PING_INTERVAL = 30  # seconds
_heartbeat_thread = None
_heartbeat_running = False


class PacketWebSocket(WebSocket):

    def opened(self):
        """Called when a WebSocket connection is established"""
        # Authenticate using JWT provided as query parameter (token=)
        jwt_handler = cherrypy.config.get("jwt_handler")

        # Get query string from environ
        qs = ""
        if hasattr(self, "environ"):
            qs = self.environ.get("QUERY_STRING", "")

        params = parse_qs(qs)
        token = params.get("token", [None])[0]
        client_id = params.get("client_id", [None])[0]

        if not jwt_handler:
            logger.warning("WebSocket connection rejected: no JWT handler configured")
            self.close(code=1011, reason="server configuration error")
            return

        if not token:
            logger.warning("WebSocket connection rejected: missing token")
            self.close(code=1008, reason="unauthorized")
            return

        try:
            payload = jwt_handler.verify_jwt(token)
            if not payload:
                logger.warning("WebSocket connection rejected: invalid token")
                self.close(code=1008, reason="unauthorized")
                return
        except Exception as e:
            logger.warning(f"WebSocket auth error: {e}")
            self.close(code=1008, reason="unauthorized")
            return

        if client_id and payload.get("client_id") and payload.get("client_id") != client_id:
            logger.warning("WebSocket connection rejected: client_id mismatch")
            self.close(code=1008, reason="unauthorized")
            return

        # Auth success - store user and add to connected clients
        self.user = payload.get("sub")  # type: ignore[attr-defined]
        _connected_clients.add(self)
        logger.info(
            f"WebSocket connected ({self.user or 'unknown user'}). Total clients: {len(_connected_clients)}"
        )

    def closed(self, code, reason=None):
        """Called when a WebSocket connection is closed"""
        _connected_clients.discard(self)
        user = getattr(self, "user", "unknown")
        logger.info(
            f"WebSocket disconnected (user: {user}, code: {code}, reason: {reason}). Total clients: {len(_connected_clients)}"
        )

    def received_message(self, message):
        """Handle messages from client"""
        try:
            data = json.loads(str(message))
            if data.get("type") == "ping":
                self.send(json.dumps({"type": "pong"}))
            elif data.get("type") == "pong":
                # Client responded to our ping
                pass
        except Exception:
            pass


def broadcast_packet(packet_data: dict):
    """Broadcast a packet record to all connected clients."""
    if not _connected_clients:
        return

    message = json.dumps({"type": "packet", "data": packet_data})

    for client in list(_connected_clients):
        try:
            client.send(message)
        except Exception as e:
            logger.error(f"WebSocket send error: {e}")
            _connected_clients.discard(client)


def broadcast_stats(stats_data: dict):
    """Broadcast aggregate stats to all connected clients."""
    if not _connected_clients:
        return

    message = json.dumps({"type": "stats", "data": stats_data})

    for client in list(_connected_clients):
        try:
            client.send(message)
        except Exception as e:
            logger.error(f"WebSocket send error: {e}")
            _connected_clients.discard(client)


def has_connected_clients() -> bool:
    """Return True when at least one authenticated websocket client is connected."""
    return bool(_connected_clients)


def _heartbeat_loop():
    """Background thread to send periodic pings to all connected clients"""
    global _heartbeat_running

    while _heartbeat_running:
        time.sleep(PING_INTERVAL)

        if not _connected_clients:
            continue

        ping_message = json.dumps({"type": "ping"})

        for client in list(_connected_clients):
            try:
                client.send(ping_message)
            except Exception as e:
                logger.debug(f"Heartbeat ping failed: {e}")
                _connected_clients.discard(client)


def init_websocket():
    """Initialize WebSocket plugin and start heartbeat"""
    global _heartbeat_thread, _heartbeat_running

    WebSocketPlugin(cherrypy.engine).subscribe()
    cherrypy.tools.websocket = WebSocketTool()

    # Start heartbeat thread
    if not _heartbeat_running:
        _heartbeat_running = True
        _heartbeat_thread = threading.Thread(target=_heartbeat_loop, daemon=True)
        _heartbeat_thread.start()
        logger.info(f"WebSocket initialized with {PING_INTERVAL}s heartbeat")
    else:
        logger.info("WebSocket initialized")
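The server above defines a small JSON message protocol: `{"type": "ping"}` / `{"type": "pong"}` for liveness, plus `packet` and `stats` payloads. A client-side counterpart to the ping handling can be sketched in a few lines (function name is illustrative, not part of the project):

```python
import json
from typing import Optional


def handle_ws_message(raw: str) -> Optional[str]:
    """Sketch of the client side of the ping/pong protocol:
    answer server pings, ignore everything else (including bad JSON)."""
    try:
        msg = json.loads(raw)
    except ValueError:
        return None
    if isinstance(msg, dict) and msg.get("type") == "ping":
        return json.dumps({"type": "pong"})
    return None
```

A client that fails to answer pings will eventually hit a send error in `_heartbeat_loop` and be discarded from `_connected_clients`.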
(+838 −237) File diff suppressed because it is too large
@@ -1,7 +1,19 @@
"""Handler helper modules for pyMC Repeater."""

from .advert import AdvertHelper
from .discovery import DiscoveryHelper
from .login import LoginHelper
from .path import PathHelper
from .protocol_request import ProtocolRequestHelper
from .text import TextHelper
from .trace import TraceHelper

__all__ = [
    "TraceHelper",
    "DiscoveryHelper",
    "AdvertHelper",
    "LoginHelper",
    "TextHelper",
    "PathHelper",
    "ProtocolRequestHelper",
]
@@ -0,0 +1,179 @@
import logging
import time
from typing import Dict, Optional

from pymc_core.protocol import Identity
from pymc_core.protocol.constants import PUB_KEY_SIZE

logger = logging.getLogger("ACL")

PERM_ACL_GUEST = 0x01
PERM_ACL_ADMIN = 0x02
PERM_ACL_READ_WRITE = 0x01
PERM_ACL_ROLE_MASK = 0x03


class ClientInfo:
    """Represents an authenticated client in the access control list."""

    def __init__(self, identity: Identity, permissions: int = 0):
        self.id = identity
        self.permissions = permissions
        self.shared_secret = b""
        self.last_timestamp = 0
        self.last_activity = 0
        self.last_login_success = 0
        self.out_path_len = -1
        self.out_path = bytearray()
        self.sync_since = 0  # For room servers - timestamp of last synced message

    def is_admin(self) -> bool:
        return (self.permissions & PERM_ACL_ROLE_MASK) == PERM_ACL_ADMIN

    def is_guest(self) -> bool:
        return (self.permissions & PERM_ACL_ROLE_MASK) == PERM_ACL_GUEST


class ACL:

    def __init__(
        self,
        max_clients: int = 50,
        admin_password: str = "admin123",
        guest_password: str = "guest123",
        allow_read_only: bool = True,
    ):
        self.max_clients = max_clients
        self.admin_password = admin_password
        self.guest_password = guest_password
        self.allow_read_only = allow_read_only
        self.clients: Dict[bytes, ClientInfo] = {}

    def authenticate_client(
        self,
        client_identity: Identity,
        shared_secret: bytes,
        password: str,
        timestamp: int,
        sync_since: int = None,
        target_identity_hash: int = None,
        target_identity_name: str = None,
        target_identity_config: dict = None,
    ) -> tuple[bool, int]:

        target_identity_config = target_identity_config or {}

        # Check for identity-specific passwords (required for room servers)
        identity_settings = target_identity_config.get("settings", {})

        # Determine if this is a room server by checking the type field
        identity_type = target_identity_config.get("type", "")
        is_room_server = identity_type == "room_server"

        # Log sync_since if provided (room server format)
        if sync_since is not None:
            logger.debug(f"Client sync_since timestamp: {sync_since}")

        if is_room_server:
            # Room servers use passwords from their settings section only
            # Empty strings are treated as "not set"
            admin_pwd = identity_settings.get("admin_password") or None
            guest_pwd = identity_settings.get("guest_password") or None

            if not admin_pwd and not guest_pwd:
                logger.error(
                    f"Room server '{target_identity_name}' has no passwords configured! Set admin_password and/or guest_password in settings."
                )
                return False, 0
        else:
            # Repeater uses global passwords from its own security section
            admin_pwd = self.admin_password
            guest_pwd = self.guest_password
            logger.debug(
                f"Repeater passwords - admin: {'SET' if admin_pwd else 'NONE'}, "
                f"guest: {'SET' if guest_pwd else 'NONE'}"
            )

        if target_identity_name:
            logger.debug(
                f"Authenticating for identity '{target_identity_name}' (room_server={is_room_server})"
            )

        pub_key = client_identity.get_public_key()[:PUB_KEY_SIZE]

        if not password:
            client = self.clients.get(pub_key)
            if client is None:
                if self.allow_read_only:
                    logger.info("Blank password, allowing read-only guest access")
                    return True, PERM_ACL_GUEST
                else:
                    logger.info("Blank password, sender not in ACL and read-only disabled")
                    return False, 0
            logger.info(f"ACL-based login for {pub_key[:6].hex()}...")
            return True, client.permissions

        permissions = 0
        logger.debug(f"Comparing password (len={len(password)}) against admin/guest")
        logger.debug(
            f"Admin pwd len={len(admin_pwd) if admin_pwd else 0}, Guest pwd len={len(guest_pwd) if guest_pwd else 0}"
        )
        if admin_pwd and password == admin_pwd:
            permissions = PERM_ACL_ADMIN
            logger.info(f"Admin password validated for '{target_identity_name or 'unknown'}'")
        elif guest_pwd and password == guest_pwd:
            permissions = PERM_ACL_READ_WRITE
            logger.info(f"Guest password validated for '{target_identity_name or 'unknown'}'")
        else:
            logger.info(f"Invalid password for '{target_identity_name or 'unknown'}'")
            return False, 0

        client = self.clients.get(pub_key)
        if client is None:
            if len(self.clients) >= self.max_clients:
                logger.warning("ACL full, cannot add client")
                return False, 0

            client = ClientInfo(client_identity, 0)
            self.clients[pub_key] = client
            logger.info(f"Added new client {pub_key[:6].hex()}...")

        if timestamp <= client.last_timestamp:
            logger.warning(
                f"Possible replay attack! timestamp={timestamp}, last={client.last_timestamp}"
            )
            return False, 0

        client.last_timestamp = timestamp
        client.last_activity = int(time.time())
        client.last_login_success = int(time.time())
        client.permissions &= ~PERM_ACL_ROLE_MASK
        client.permissions |= permissions
        client.shared_secret = shared_secret

        # Store sync_since for room server clients
        if sync_since is not None:
            client.sync_since = sync_since
            logger.debug(f"Stored sync_since={sync_since} for client")

        logger.info(f"Login success! Permissions: {'ADMIN' if client.is_admin() else 'GUEST'}")
        return True, client.permissions

    def get_client(self, pub_key: bytes) -> Optional[ClientInfo]:
        return self.clients.get(pub_key[:PUB_KEY_SIZE])

    def get_num_clients(self) -> int:
        return len(self.clients)

    def get_all_clients(self):
        return list(self.clients.values())

    def remove_client(self, pub_key: bytes) -> bool:
        key = pub_key[:PUB_KEY_SIZE]
        if key in self.clients:
            del self.clients[key]
            return True
        return False
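The replay check in `authenticate_client` relies on one invariant: each client's login timestamp must be strictly greater than the last one accepted for that public key. That check can be isolated into a tiny self-contained sketch (the class name is illustrative):

```python
class ReplayGuard:
    """Sketch of the monotonically-increasing-timestamp rule used by
    ACL.authenticate_client to reject replayed login packets."""

    def __init__(self):
        self._last: dict = {}  # pub_key -> last accepted timestamp

    def accept(self, pub_key: bytes, timestamp: int) -> bool:
        if timestamp <= self._last.get(pub_key, 0):
            return False  # replayed or stale packet
        self._last[pub_key] = timestamp
        return True
```

Note the trade-off: a client whose clock steps backwards is locked out until its timestamps pass the stored high-water mark, which is why the real code logs it as only a *possible* replay attack.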
@@ -2,20 +2,43 @@
Advertisement packet handling helper for pyMC Repeater.

This module processes advertisement packets for neighbor tracking and discovery.
Includes adaptive rate limiting based on mesh activity.
"""

import asyncio
import itertools
import logging
import time
from collections import OrderedDict, deque
from enum import Enum
from typing import Dict, Optional, Tuple

from pymc_core.node.handlers.advert import AdvertHandler

logger = logging.getLogger("AdvertHelper")


class MeshActivityTier(Enum):
    """Mesh activity levels for adaptive rate limiting."""
    QUIET = "quiet"
    NORMAL = "normal"
    BUSY = "busy"
    CONGESTED = "congested"


# Tier multipliers for rate limit scaling
TIER_MULTIPLIERS = {
    MeshActivityTier.QUIET: 0.0,  # No rate limiting
    MeshActivityTier.NORMAL: 0.5,  # Light limiting
    MeshActivityTier.BUSY: 1.0,  # Standard limiting
    MeshActivityTier.CONGESTED: 2.0,  # Aggressive limiting
}


class AdvertHelper:
    """Helper class for processing advertisement packets in the repeater."""

    def __init__(self, local_identity, storage, config=None, log_fn=None):
        """
        Initialize the advert helper.

@@ -26,6 +49,7 @@ class AdvertHelper:
        """
        self.local_identity = local_identity
        self.storage = storage
        self.config = config or {}

        # Create AdvertHandler internally as a parsing utility
        self.advert_handler = AdvertHandler(log_fn=log_fn or logger.info)
@@ -33,6 +57,467 @@ class AdvertHelper:
        # Cache for tracking known neighbors (avoid repeated database queries)
        self._known_neighbors = set()

        repeater_cfg = self.config.get("repeater", {})

        # --- Adaptive mode config ---
        adaptive_cfg = repeater_cfg.get("advert_adaptive", {})
        self._adaptive_enabled = bool(adaptive_cfg.get("enabled", True))
        self._ewma_alpha = max(0.01, min(1.0, float(adaptive_cfg.get("ewma_alpha", 0.1))))
        self._tier_hysteresis_seconds = max(0.0, float(adaptive_cfg.get("hysteresis_seconds", 300.0)))

        # Tier thresholds (packets per minute)
        thresholds = adaptive_cfg.get("thresholds", {})
        self._threshold_normal = float(thresholds.get("normal", 1.0))
        self._threshold_busy = float(thresholds.get("busy", 5.0))
        self._threshold_congested = float(thresholds.get("congested", 15.0))

        # --- Base rate limit config (scaled by tier) ---
        rate_cfg = repeater_cfg.get("advert_rate_limit", {})
        self._rate_limit_enabled = bool(rate_cfg.get("enabled", True))
        self._base_bucket_capacity = max(1.0, float(rate_cfg.get("bucket_capacity", 2)))
        self._base_refill_tokens = max(0.1, float(rate_cfg.get("refill_tokens", 1.0)))
        self._base_refill_interval = max(1.0, float(rate_cfg.get("refill_interval_seconds", 36000.0)))
        self._base_min_interval = max(0.0, float(rate_cfg.get("min_interval_seconds", 3600.0)))

        # --- Penalty box config ---
        penalty_cfg = repeater_cfg.get("advert_penalty_box", {})
        self._penalty_enabled = bool(penalty_cfg.get("enabled", True))
        self._penalty_violation_threshold = max(1, int(penalty_cfg.get("violation_threshold", 2)))
        self._penalty_decay_seconds = max(1.0, float(penalty_cfg.get("violation_decay_seconds", 43200.0)))
        self._penalty_base_seconds = max(1.0, float(penalty_cfg.get("base_penalty_seconds", 21600.0)))
        self._penalty_multiplier = max(1.0, float(penalty_cfg.get("penalty_multiplier", 2.0)))
        self._penalty_max_seconds = max(
            self._penalty_base_seconds,
            float(penalty_cfg.get("max_penalty_seconds", 86400.0)),
        )

        # --- Advert dedupe config ---
        dedupe_cfg = repeater_cfg.get("advert_dedupe", {})
        self._advert_dedupe_ttl_seconds = max(1.0, float(dedupe_cfg.get("ttl_seconds", 120.0)))
        self._advert_dedupe_max_hashes = max(100, int(dedupe_cfg.get("max_hashes", 10000)))

        # --- Per-pubkey state ---
        self._bucket_state: Dict[str, dict] = {}
        self._penalty_until: Dict[str, float] = {}
        self._violation_state: Dict[str, dict] = {}
        self._recent_advert_hashes: OrderedDict[str, float] = OrderedDict()

        # --- Adaptive metrics state ---
        self._adverts_ewma = 0.0  # EWMA of adverts per minute
        self._packets_ewma = 0.0  # EWMA of total packets per minute
        self._duplicates_ewma = 0.0  # EWMA of duplicate ratio
        self._last_metrics_update = time.time()
        self._metrics_window_seconds = 60.0
        self._adverts_in_window = 0
        self._packets_in_window = 0
        self._duplicates_in_window = 0

        # Current activity tier with hysteresis
        self._current_tier = MeshActivityTier.NORMAL
        self._tier_since = time.time()
        self._pending_tier: Optional[MeshActivityTier] = None
        self._pending_tier_since = 0.0

        # Stats counters
        self._stats_adverts_allowed = 0
        self._stats_adverts_dropped = 0
        self._stats_advert_duplicates = 0
        self._stats_tier_changes = 0

        # Recent drops tracking — bounded deque so append is O(1) and the
        # oldest entry is evicted automatically (no pop(0) O(n) shift needed).
        self._recent_drops: deque = deque(maxlen=20)

        # Memory management
        self._last_cleanup = time.time()
        self._cleanup_interval_seconds = 3600.0  # Clean up every hour
        self._bucket_state_retention_seconds = 604800.0  # Keep inactive pubkeys for 7 days
        self._max_tracked_pubkeys = 10000  # Hard limit on tracked pubkeys

        logger.info(
            f"Advert limiter: adaptive={self._adaptive_enabled}, "
            f"rate_limit={self._rate_limit_enabled}, "
            f"bucket={self._base_bucket_capacity:.1f}, "
            f"penalty={self._penalty_enabled}, "
            f"dedupe=True"
        )

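The `advert_rate_limit` config above describes a per-pubkey token bucket whose behavior is scaled by the tier multiplier. The exact scaling rule isn't shown in this hunk; one plausible sketch, treating the tier multiplier as the token cost of forwarding an advert (so `QUIET`'s 0.0 disables limiting and `CONGESTED`'s 2.0 doubles the cost), is:

```python
import time


class TokenBucket:
    """Sketch of a per-pubkey advert limiter. Parameter names mirror the
    advert_rate_limit config keys; the cost-equals-tier-multiplier rule is
    an assumption, not the repo's actual scaling."""

    def __init__(self, bucket_capacity=2.0, refill_tokens=1.0,
                 refill_interval_seconds=36000.0):
        self.capacity = bucket_capacity
        self.tokens = bucket_capacity  # start full
        self.rate = refill_tokens / refill_interval_seconds  # tokens/second
        self.last = time.monotonic()

    def allow(self, tier_multiplier: float = 1.0) -> bool:
        now = time.monotonic()
        # Continuous refill, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        cost = tier_multiplier
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

With the defaults (capacity 2, one token per 10 hours) a node gets a small burst allowance, then roughly one forwarded advert per refill interval when the mesh is BUSY.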
# -------------------------------------------------------------------------
|
||||
# Memory management
|
||||
# -------------------------------------------------------------------------
|
||||
|
||||
def _cleanup_old_state(self, now: float) -> None:
|
||||
"""Clean up old/expired entries to prevent unbounded memory growth."""
|
||||
while self._recent_advert_hashes:
|
||||
oldest_hash, expires_at = next(iter(self._recent_advert_hashes.items()))
|
||||
if expires_at > now:
|
||||
break
|
||||
self._recent_advert_hashes.pop(oldest_hash, None)
|
||||
|
||||
while len(self._recent_advert_hashes) > self._advert_dedupe_max_hashes:
|
||||
self._recent_advert_hashes.popitem(last=False)
|
||||
|
||||
|
||||
expired_penalties = [pk for pk, until in self._penalty_until.items() if until < now]
|
||||
for pk in expired_penalties:
|
||||
del self._penalty_until[pk]
|
||||
|
||||
|
||||
inactive_pubkeys = [
|
||||
pk for pk, state in self._bucket_state.items()
|
||||
if now - state.get("last_seen", 0) > self._bucket_state_retention_seconds
|
||||
]
|
||||
for pk in inactive_pubkeys:
|
||||
del self._bucket_state[pk]
|
||||
if pk in self._violation_state:
|
||||
del self._violation_state[pk]
|
||||
|
||||
# 3. Decay old violations based on decay time
|
||||
for pk, vstate in list(self._violation_state.items()):
|
||||
last_violation = vstate.get("last_violation", 0)
|
||||
if now - last_violation > self._penalty_decay_seconds:
|
||||
# Reset violation count after decay period
|
||||
vstate["count"] = 0
|
||||
|
||||
        if len(self._bucket_state) > self._max_tracked_pubkeys:
            # Sort by last_seen and remove oldest 10%
            sorted_pubkeys = sorted(
                self._bucket_state.items(),
                key=lambda x: x[1].get("last_seen", 0),
            )
            to_remove = int(len(sorted_pubkeys) * 0.1)
            for pk, _ in sorted_pubkeys[:to_remove]:
                del self._bucket_state[pk]
                if pk in self._violation_state:
                    del self._violation_state[pk]
                if pk in self._penalty_until:
                    del self._penalty_until[pk]

        # 5. Limit known neighbors set to prevent unbounded growth
        if len(self._known_neighbors) > 1000:
            # itertools.islice avoids materialising the full list first (O(n) → O(k))
            self._known_neighbors = set(itertools.islice(self._known_neighbors, 500))

        if expired_penalties or inactive_pubkeys:
            logger.debug(
                f"Cleaned up {len(expired_penalties)} expired penalties, "
                f"{len(inactive_pubkeys)} inactive pubkeys. "
                f"Tracking: {len(self._bucket_state)} buckets, "
                f"{len(self._penalty_until)} penalties, "
                f"{len(self._known_neighbors)} neighbors, "
                f"{len(self._recent_advert_hashes)} advert hashes"
            )

    def _dedupe_advert_packet_hash(self, packet, now: float) -> bool:
        """Return True when advert packet hash was already seen recently."""
        try:
            pkt_hash = packet.calculate_packet_hash().hex().upper()
        except Exception:
            return False

        expires_at = self._recent_advert_hashes.get(pkt_hash)
        if expires_at and expires_at > now:
            # Move to end so hot hashes remain least likely to be evicted
            self._recent_advert_hashes.move_to_end(pkt_hash)
            return True

        # Track first-seen (or expired hash re-seen)
        self._recent_advert_hashes[pkt_hash] = now + self._advert_dedupe_ttl_seconds
        self._recent_advert_hashes.move_to_end(pkt_hash)

        # Opportunistic cleanup to keep memory bounded between scheduled cleanup runs
        while len(self._recent_advert_hashes) > self._advert_dedupe_max_hashes:
            self._recent_advert_hashes.popitem(last=False)

        return False

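The dedupe method above combines a TTL with LRU-style eviction on an `OrderedDict`. The pattern can be exercised standalone; this is a minimal sketch with hypothetical names (`TtlDedupe`, `seen_recently`), not the repeater's actual class:

```python
from collections import OrderedDict


class TtlDedupe:
    """Sketch of the OrderedDict TTL-dedupe pattern: insertion order doubles
    as eviction order, and hot keys are moved to the end to survive longest."""

    def __init__(self, ttl: float = 120.0, max_entries: int = 2):
        self._seen: "OrderedDict[str, float]" = OrderedDict()
        self._ttl = ttl
        self._max = max_entries

    def seen_recently(self, key: str, now: float) -> bool:
        expires_at = self._seen.get(key)
        if expires_at and expires_at > now:
            self._seen.move_to_end(key)  # keep hot keys away from eviction
            return True
        self._seen[key] = now + self._ttl  # first sighting (or expired re-arm)
        self._seen.move_to_end(key)
        while len(self._seen) > self._max:
            self._seen.popitem(last=False)  # evict the coldest entry
        return False


d = TtlDedupe(ttl=10.0, max_entries=2)
t = 100.0
assert d.seen_recently("A", t) is False       # first sighting
assert d.seen_recently("A", t + 1) is True    # duplicate within TTL
assert d.seen_recently("A", t + 20) is False  # TTL expired, re-armed
d.seen_recently("B", t + 20)
d.seen_recently("C", t + 20)                  # exceeds max_entries, evicts "A"
assert "A" not in d._seen
```

Because every hit calls `move_to_end`, the `popitem(last=False)` size cap always evicts the entry that has gone longest without a lookup.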
    # -------------------------------------------------------------------------
    # Adaptive tier calculation
    # -------------------------------------------------------------------------

    def _update_metrics_window(self, now: float, is_advert: bool = True, is_duplicate: bool = False) -> None:
        """Update rolling metrics window and EWMA."""
        elapsed = now - self._last_metrics_update

        if elapsed >= self._metrics_window_seconds:
            # Calculate rates for window
            adverts_per_min = (self._adverts_in_window / elapsed) * 60.0
            packets_per_min = (self._packets_in_window / elapsed) * 60.0
            dup_ratio = (
                self._duplicates_in_window / max(1, self._packets_in_window)
            )

            # Update EWMA
            alpha = self._ewma_alpha
            self._adverts_ewma = alpha * adverts_per_min + (1 - alpha) * self._adverts_ewma
            self._packets_ewma = alpha * packets_per_min + (1 - alpha) * self._packets_ewma
            self._duplicates_ewma = alpha * dup_ratio + (1 - alpha) * self._duplicates_ewma

            # Reset window
            self._adverts_in_window = 0
            self._packets_in_window = 0
            self._duplicates_in_window = 0
            self._last_metrics_update = now

        # Periodic cleanup
        if now - self._last_cleanup >= self._cleanup_interval_seconds:
            self._cleanup_old_state(now)
            self._last_cleanup = now

        # Count this event
        if is_advert:
            self._adverts_in_window += 1
        self._packets_in_window += 1
        if is_duplicate:
            self._duplicates_in_window += 1

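Each closed window produces a per-minute rate, which is then folded into an exponentially weighted moving average so one noisy window cannot flip the tier. A standalone sketch of that smoothing step (the helper name `ewma_update` and the sample values are illustrative, not from the repeater):

```python
def ewma_update(prev: float, sample: float, alpha: float = 0.1) -> float:
    """One EWMA step: alpha weights the newest window's rate."""
    return alpha * sample + (1 - alpha) * prev


rate = 0.0
for sample in [6.0, 6.0, 6.0]:  # three consecutive windows at 6 adverts/min
    rate = ewma_update(rate, sample)

# Converges toward 6: after three updates, 6 * (1 - 0.9**3) = 1.626
assert abs(rate - 1.626) < 1e-6
```

With the default `alpha = 0.1`, a sustained burst needs many windows before the smoothed rate crosses a tier threshold, which is exactly the inertia the hysteresis logic relies on.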
    def _calculate_target_tier(self) -> MeshActivityTier:
        """Determine target tier based on current EWMA metrics."""
        # Combined activity score (adverts + packets weighted)
        activity = self._adverts_ewma + (self._packets_ewma * 0.1)

        if activity >= self._threshold_congested:
            return MeshActivityTier.CONGESTED
        elif activity >= self._threshold_busy:
            return MeshActivityTier.BUSY
        elif activity >= self._threshold_normal:
            return MeshActivityTier.NORMAL
        else:
            return MeshActivityTier.QUIET

    def _update_tier(self, now: float) -> None:
        """Update current tier with hysteresis to prevent flapping."""
        if not self._adaptive_enabled:
            return

        target = self._calculate_target_tier()

        if target == self._current_tier:
            # Stable, clear pending
            self._pending_tier = None
            return

        if self._pending_tier != target:
            # New pending tier
            self._pending_tier = target
            self._pending_tier_since = now
            return

        # Check hysteresis
        if (now - self._pending_tier_since) >= self._tier_hysteresis_seconds:
            old_tier = self._current_tier
            self._current_tier = target
            self._tier_since = now
            self._pending_tier = None
            self._stats_tier_changes += 1
            logger.info(f"Mesh activity tier changed: {old_tier.value} → {target.value}")

    def get_current_tier(self) -> MeshActivityTier:
        """Get current mesh activity tier."""
        return self._current_tier

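The hysteresis in `_update_tier` means a new target tier must hold steady for the full hold window before it is committed; any flap back to the current tier resets the clock. A minimal sketch of that state machine, using plain strings instead of `MeshActivityTier` (class and method names here are illustrative):

```python
class Hysteresis:
    """Commit a tier change only after `hold` seconds of stable disagreement."""

    def __init__(self, initial: str, hold: float):
        self.current = initial
        self.hold = hold
        self._pending = None
        self._pending_since = 0.0

    def update(self, target: str, now: float) -> None:
        if target == self.current:
            self._pending = None          # stable: cancel any pending change
            return
        if self._pending != target:
            self._pending = target        # new candidate: start the clock
            self._pending_since = now
            return
        if now - self._pending_since >= self.hold:
            self.current = target         # held long enough: commit
            self._pending = None


h = Hysteresis("quiet", hold=300.0)
h.update("busy", now=0.0)     # becomes pending
assert h.current == "quiet"
h.update("busy", now=100.0)   # still inside the hold window
assert h.current == "quiet"
h.update("quiet", now=150.0)  # target flapped back: pending cleared
h.update("busy", now=200.0)   # pending restarts from 200
h.update("busy", now=501.0)   # 301s elapsed >= hold: committed
assert h.current == "busy"
```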
    def _get_effective_limits(self) -> Tuple[float, float, float, float]:
        """Get effective rate limits scaled by current tier."""
        if not self._adaptive_enabled:
            return (
                self._base_bucket_capacity,
                self._base_refill_tokens,
                self._base_refill_interval,
                self._base_min_interval,
            )

        multiplier = TIER_MULTIPLIERS.get(self._current_tier, 1.0)

        if multiplier == 0.0:
            # QUIET mode: effectively disable rate limiting
            return (100.0, 100.0, 1.0, 0.0)

        # Scale intervals UP (stricter) as multiplier increases
        return (
            self._base_bucket_capacity,
            self._base_refill_tokens,
            self._base_refill_interval * multiplier,
            self._base_min_interval * multiplier,
        )

    def _refill_tokens_if_needed(self, pubkey: str, now: float) -> dict:
        """Refill token bucket using effective (tier-scaled) limits."""
        bucket_cap, refill_tokens, refill_interval, _ = self._get_effective_limits()

        state = self._bucket_state.get(pubkey)
        if state is None:
            state = {
                "tokens": bucket_cap,
                "last_refill": now,
                "last_seen": 0.0,
            }
            self._bucket_state[pubkey] = state
            return state

        elapsed = now - state["last_refill"]
        if elapsed <= 0:
            return state

        refill_steps = elapsed / refill_interval
        if refill_steps > 0:
            state["tokens"] = min(
                bucket_cap,
                state["tokens"] + (refill_steps * refill_tokens),
            )
            state["last_refill"] = now
        return state

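The refill above is a classic continuous token bucket: elapsed time is converted to a fractional number of refill steps, and the balance is capped at capacity. A self-contained sketch of just that arithmetic (function name and the numbers are illustrative; they happen to match the config defaults of 2 tokens per 10 h, but that is an assumption):

```python
def refill(tokens: float, last_refill: float, now: float,
           capacity: float, refill_tokens: float, interval: float):
    """Continuous token-bucket refill: fractional steps, capped at capacity."""
    elapsed = now - last_refill
    if elapsed <= 0:
        return tokens, last_refill
    steps = elapsed / interval  # may be fractional, just like the code above
    return min(capacity, tokens + steps * refill_tokens), now


tokens, last = 0.0, 0.0
tokens, last = refill(tokens, last, now=18000.0,
                      capacity=2.0, refill_tokens=1.0, interval=36000.0)
assert tokens == 0.5   # half an interval elapsed → half a token earned

tokens, last = refill(tokens, last, now=200000.0,
                      capacity=2.0, refill_tokens=1.0, interval=36000.0)
assert tokens == 2.0   # a long idle period caps at capacity, never above
```

The cap is what makes the limiter burst-tolerant but not burst-exploitable: a node silent for a week still earns at most `capacity` adverts in quick succession.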
    def _record_violation_and_maybe_penalize(self, pubkey: str, now: float) -> None:
        if not self._penalty_enabled:
            return

        state = self._violation_state.get(pubkey)
        if state is None:
            state = {"count": 0, "last_violation": 0.0}
            self._violation_state[pubkey] = state

        if (now - state["last_violation"]) > self._penalty_decay_seconds:
            state["count"] = 0

        state["count"] += 1
        state["last_violation"] = now

        if state["count"] < self._penalty_violation_threshold:
            return

        level = state["count"] - self._penalty_violation_threshold
        penalty_seconds = min(
            self._penalty_max_seconds,
            self._penalty_base_seconds * (self._penalty_multiplier**level),
        )
        new_until = now + penalty_seconds
        old_until = self._penalty_until.get(pubkey, 0.0)

        if new_until > old_until:
            self._penalty_until[pubkey] = new_until
            logger.warning(
                f"Advert penalty activated for {pubkey[:16]}... "
                f"({penalty_seconds:.1f}s, violations={state['count']})"
            )

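The escalation schedule above is exponential with a floor and a cap: nothing until the violation threshold, then `base * multiplier^level` capped at the maximum. A sketch of just the schedule, with the defaults from `reload_config` (6 h base, doubling, 24 h cap) plugged in as illustrative values:

```python
def penalty_seconds(violations: int, threshold: int = 2,
                    base: float = 21600.0, multiplier: float = 2.0,
                    max_seconds: float = 86400.0) -> float:
    """Exponential penalty: each violation past the threshold multiplies
    the timeout, capped at max_seconds."""
    if violations < threshold:
        return 0.0
    level = violations - threshold
    return min(max_seconds, base * (multiplier ** level))


assert penalty_seconds(1) == 0.0        # below threshold: no penalty yet
assert penalty_seconds(2) == 21600.0    # first penalized violation: 6 h
assert penalty_seconds(3) == 43200.0    # doubles to 12 h
assert penalty_seconds(4) == 86400.0    # 24 h, now at the cap
assert penalty_seconds(10) == 86400.0   # stays capped thereafter
```

Combined with the decay window, a well-behaved node that stops misbehaving for `violation_decay_seconds` restarts from zero rather than staying at the cap.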
    def _allow_advert(self, pubkey: str, now: float) -> Tuple[bool, str]:
        """Check if advert is allowed using adaptive tier-scaled limits."""
        # Update metrics and tier
        self._update_metrics_window(now, is_advert=True)
        self._update_tier(now)

        if not self._rate_limit_enabled:
            self._stats_adverts_allowed += 1
            return True, ""

        # QUIET tier bypasses rate limiting
        if self._adaptive_enabled and self._current_tier == MeshActivityTier.QUIET:
            self._stats_adverts_allowed += 1
            return True, ""

        penalty_until = self._penalty_until.get(pubkey, 0.0)
        if now < penalty_until:
            remaining = penalty_until - now
            self._stats_adverts_dropped += 1
            return False, f"advert penalty box active ({remaining:.1f}s remaining)"

        state = self._refill_tokens_if_needed(pubkey, now)
        _, _, _, min_interval = self._get_effective_limits()

        last_seen = float(state.get("last_seen", 0.0))
        if min_interval > 0 and last_seen > 0:
            since_last = now - last_seen
            if since_last < min_interval:
                self._record_violation_and_maybe_penalize(pubkey, now)
                self._stats_adverts_dropped += 1
                return (
                    False,
                    f"advert min-interval hit ({since_last:.2f}s < {min_interval:.2f}s)",
                )

        if state["tokens"] < 1.0:
            self._record_violation_and_maybe_penalize(pubkey, now)
            self._stats_adverts_dropped += 1
            return False, "advert rate limit exceeded"

        state["tokens"] -= 1.0
        state["last_seen"] = now
        self._stats_adverts_allowed += 1
        return True, ""

    def record_packet_seen(self, is_duplicate: bool = False) -> None:
        """Record a packet seen for metrics (called by router for non-advert packets)."""
        now = time.time()
        self._update_metrics_window(now, is_advert=False, is_duplicate=is_duplicate)

    def get_rate_limit_stats(self) -> dict:
        """Get comprehensive rate limiting and adaptive tier statistics."""
        now = time.time()
        bucket_cap, refill_tokens, refill_interval, min_interval = self._get_effective_limits()

        # Active penalties
        active_penalties = {
            pk[:16]: round(until - now, 1)
            for pk, until in self._penalty_until.items()
            if until > now
        }

        # Per-pubkey bucket states
        bucket_summary = {}
        for pk, state in self._bucket_state.items():
            bucket_summary[pk[:16]] = {
                "tokens": round(state["tokens"], 2),
                "last_seen_ago": round(now - state["last_seen"], 1) if state["last_seen"] > 0 else None,
            }

        return {
            "adaptive": {
                "enabled": self._adaptive_enabled,
                "current_tier": self._current_tier.value,
                "tier_since": round(now - self._tier_since, 1),
                "pending_tier": self._pending_tier.value if self._pending_tier else None,
                "tier_changes": self._stats_tier_changes,
            },
            "metrics": {
                "adverts_per_min_ewma": round(self._adverts_ewma, 2),
                "packets_per_min_ewma": round(self._packets_ewma, 2),
                "duplicate_ratio_ewma": round(self._duplicates_ewma, 3),
            },
            "effective_limits": {
                "bucket_capacity": bucket_cap,
                "refill_tokens": refill_tokens,
                "refill_interval_seconds": round(refill_interval, 1),
                "min_interval_seconds": round(min_interval, 1),
            },
            "stats": {
                "adverts_allowed": self._stats_adverts_allowed,
                "adverts_dropped": self._stats_adverts_dropped,
                "adverts_duplicate_reheard": self._stats_advert_duplicates,
                "drop_rate": round(
                    self._stats_adverts_dropped / max(1, self._stats_adverts_allowed + self._stats_adverts_dropped),
                    3,
                ),
            },
            "dedupe": {
                "enabled": True,
                "ttl_seconds": self._advert_dedupe_ttl_seconds,
                "tracked_hashes": len(self._recent_advert_hashes),
                "max_hashes": self._advert_dedupe_max_hashes,
            },
            "active_penalties": active_penalties,
            "tracked_pubkeys": len(self._bucket_state),
            "bucket_states": bucket_summary,
            "recent_drops": [
                {
                    "pubkey": drop["pubkey"],
                    "name": drop["name"],
                    "reason": drop["reason"],
                    "seconds_ago": round(now - drop["timestamp"], 1),
                }
                for drop in reversed(self._recent_drops)  # Most recent first
            ],
        }

    async def process_advert_packet(self, packet, rssi: int, snr: float) -> None:
        """
        Process an incoming advertisement packet.
@@ -63,6 +548,45 @@ class AdvertHelper:
            pubkey = advert_data["public_key"]
            node_name = advert_data["name"]
            contact_type = advert_data["contact_type"]

            now = time.time()

            # Re-heard duplicates should be measured but not consume limiter tokens.
            if self._dedupe_advert_packet_hash(packet, now):
                self._stats_advert_duplicates += 1
                self._update_metrics_window(now, is_advert=False, is_duplicate=True)
                logger.debug(
                    "Duplicate advert re-heard from '%s' (%s...), skipping limiter/storage",
                    node_name,
                    pubkey[:16],
                )
                return

            # Per-pubkey rate limiting (token bucket + penalty box)
            allowed, reason = self._allow_advert(pubkey, now)
            if not allowed:
                logger.warning(f"Dropping advert from '{node_name}' ({pubkey[:16]}...): {reason}")
                packet.mark_do_not_retransmit()
                packet.drop_reason = reason

                # Track recent drop (deduplicate by pubkey)
                pubkey_short = pubkey[:16]

                # Remove any existing entry for this pubkey, then append the
                # updated record. Rebuilding as a deque preserves maxlen so
                # the oldest entry is evicted automatically — no pop(0) needed.
                self._recent_drops = deque(
                    (d for d in self._recent_drops if d["pubkey"] != pubkey_short),
                    maxlen=20,
                )
                self._recent_drops.append({
                    "pubkey": pubkey_short,
                    "name": node_name,
                    "reason": reason,
                    "timestamp": now,
                })

                return

            # Skip our own adverts
            if self.local_identity:
@@ -70,16 +594,22 @@
                if pubkey == local_pubkey:
                    logger.debug("Ignoring own advert in neighbor tracking")
                    return

            # Get route type from packet header
            from pymc_core.protocol.constants import PH_ROUTE_MASK

            route_type = packet.header & PH_ROUTE_MASK

            # Check if this is a new neighbor (run DB read in thread to avoid blocking event loop)
            current_time = now
            if pubkey not in self._known_neighbors:
                # Only check database if not in cache
                if self.storage:
                    current_neighbors = await asyncio.to_thread(
                        self.storage.get_neighbors
                    )
                else:
                    current_neighbors = {}
                is_new_neighbor = pubkey not in current_neighbors

                if is_new_neighbor:
@@ -88,6 +618,11 @@
            else:
                is_new_neighbor = False

            # Determine zero-hop: the packet was heard directly when its
            # routing path is empty (no intermediate hops recorded)
            path_len = len(packet.path) if packet.path else 0
            zero_hop = path_len == 0

            # Build advert record
            advert_record = {
                "timestamp": current_time,
@@ -101,15 +636,68 @@
                "rssi": rssi,
                "snr": snr,
                "is_new_neighbor": is_new_neighbor,
                "zero_hop": zero_hop,
            }

            # Store to database (run in thread so event loop stays responsive;
            # blocking here can cause companion TCP clients to disconnect)
            if self.storage:
                try:
                    await asyncio.to_thread(
                        self.storage.record_advert,
                        advert_record,
                    )
                except Exception as e:
                    logger.error(f"Failed to store advert record: {e}")

        except Exception as e:
            logger.error(f"Error processing advert packet: {e}", exc_info=True)

    def reload_config(self) -> None:
        """Reload rate limiting configuration from self.config (called after live config updates)."""
        try:
            repeater_cfg = self.config.get("repeater", {})

            # Adaptive mode config
            adaptive_cfg = repeater_cfg.get("advert_adaptive", {})
            self._adaptive_enabled = bool(adaptive_cfg.get("enabled", True))
            self._ewma_alpha = max(0.01, min(1.0, float(adaptive_cfg.get("ewma_alpha", 0.1))))
            self._tier_hysteresis_seconds = max(0.0, float(adaptive_cfg.get("hysteresis_seconds", 300.0)))

            thresholds = adaptive_cfg.get("thresholds", {})
            self._threshold_normal = float(thresholds.get("normal", 1.0))
            self._threshold_busy = float(thresholds.get("busy", 5.0))
            self._threshold_congested = float(thresholds.get("congested", 15.0))

            # Base rate limit config
            rate_cfg = repeater_cfg.get("advert_rate_limit", {})
            self._rate_limit_enabled = bool(rate_cfg.get("enabled", True))
            self._base_bucket_capacity = max(1.0, float(rate_cfg.get("bucket_capacity", 2)))
            self._base_refill_tokens = max(0.1, float(rate_cfg.get("refill_tokens", 1.0)))
            self._base_refill_interval = max(1.0, float(rate_cfg.get("refill_interval_seconds", 36000.0)))
            self._base_min_interval = max(0.0, float(rate_cfg.get("min_interval_seconds", 3600.0)))

            # Penalty box config
            penalty_cfg = repeater_cfg.get("advert_penalty_box", {})
            self._penalty_enabled = bool(penalty_cfg.get("enabled", True))
            self._penalty_violation_threshold = max(1, int(penalty_cfg.get("violation_threshold", 2)))
            self._penalty_decay_seconds = max(1.0, float(penalty_cfg.get("violation_decay_seconds", 43200.0)))
            self._penalty_base_seconds = max(1.0, float(penalty_cfg.get("base_penalty_seconds", 21600.0)))
            self._penalty_multiplier = max(1.0, float(penalty_cfg.get("penalty_multiplier", 2.0)))
            self._penalty_max_seconds = max(
                self._penalty_base_seconds,
                float(penalty_cfg.get("max_penalty_seconds", 86400.0)),
            )

            # Advert dedupe config
            dedupe_cfg = repeater_cfg.get("advert_dedupe", {})
            self._advert_dedupe_ttl_seconds = max(1.0, float(dedupe_cfg.get("ttl_seconds", 120.0)))
            self._advert_dedupe_max_hashes = max(100, int(dedupe_cfg.get("max_hashes", 10000)))

            logger.info(
                f"Advert limiter config reloaded: adaptive={self._adaptive_enabled}, "
                f"rate_limit={self._rate_limit_enabled}, bucket={self._base_bucket_capacity:.1f}, "
                f"dedupe=True"
            )
        except Exception as e:
            logger.error(f"Error reloading advert limiter config: {e}")

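For reference, the nested structure `reload_config` expects can be written out explicitly. This is a hypothetical fragment expressed as a Python dict (the real file is YAML under a top-level `repeater:` key); the values shown are simply the code's own defaults, and the final lines demonstrate the clamping applied on load:

```python
config = {
    "repeater": {
        "advert_adaptive": {
            "enabled": True,
            "ewma_alpha": 0.1,
            "hysteresis_seconds": 300.0,
            "thresholds": {"normal": 1.0, "busy": 5.0, "congested": 15.0},
        },
        "advert_rate_limit": {
            "enabled": True,
            "bucket_capacity": 2,
            "refill_tokens": 1.0,
            "refill_interval_seconds": 36000.0,   # 10 h per refill step
            "min_interval_seconds": 3600.0,       # 1 h floor between adverts
        },
        "advert_penalty_box": {
            "enabled": True,
            "violation_threshold": 2,
            "violation_decay_seconds": 43200.0,   # 12 h
            "base_penalty_seconds": 21600.0,      # 6 h
            "penalty_multiplier": 2.0,
            "max_penalty_seconds": 86400.0,       # 24 h cap
        },
        "advert_dedupe": {"ttl_seconds": 120.0, "max_hashes": 10000},
    }
}

# reload_config clamps values on load, e.g. ewma_alpha is forced into [0.01, 1.0]:
adaptive = config["repeater"]["advert_adaptive"]
alpha = max(0.01, min(1.0, float(adaptive["ewma_alpha"])))
assert alpha == 0.1
```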
@@ -7,6 +7,7 @@ allowing other nodes to discover repeaters on the mesh network.

import asyncio
import logging

from pymc_core.node.handlers.control import ControlHandler

logger = logging.getLogger("DiscoveryHelper")
@@ -21,6 +22,7 @@ class DiscoveryHelper:
        packet_injector=None,
        node_type: int = 2,
        log_fn=None,
        debug_log_fn=None,
    ):
        """
        Initialize the discovery helper.
@@ -30,18 +32,38 @@ class DiscoveryHelper:
            packet_injector: Callable to inject new packets into the router for sending
            node_type: Node type identifier (2 = Repeater)
            log_fn: Optional logging function for ControlHandler
            debug_log_fn: Optional logging function for verbose ControlHandler messages
                (e.g. callback presence). Pass logger.debug to avoid INFO noise when
                forwarding to companions.
        """
        self.local_identity = local_identity
        self.packet_injector = packet_injector  # Function to inject packets into router
        self.node_type = node_type

        # Create ControlHandler internally as a parsing utility
        self.control_handler = ControlHandler(
            log_fn=log_fn or logger.info,
            debug_log_fn=debug_log_fn,
        )
        self._pending_tasks = set()

        # Set up the request callback
        self.control_handler.set_request_callback(self._on_discovery_request)
        logger.debug("Discovery handler initialized")

    def _track_task(self, task: asyncio.Task) -> None:
        self._pending_tasks.add(task)

        def _on_done(done_task: asyncio.Task) -> None:
            self._pending_tasks.discard(done_task)
            try:
                done_task.result()
            except asyncio.CancelledError:
                pass
            except Exception as e:
                logger.error(f"Background discovery task failed: {e}", exc_info=True)

        task.add_done_callback(_on_done)

    def _on_discovery_request(self, request_data: dict) -> None:
        """
        Handle incoming discovery request.
@@ -108,7 +130,8 @@ class DiscoveryHelper:

        # Send response via router injection
        if self.packet_injector:
            task = asyncio.create_task(self._send_packet_async(response_packet, tag))
            self._track_task(task)
        else:
            logger.warning("No packet injector available - discovery response not sent")

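The `_track_task` pattern above exists because the event loop keeps only a weak reference to tasks created with `asyncio.create_task`, so a fire-and-forget task with no other reference can be garbage-collected before it finishes. A runnable sketch of the same pattern (the `pending`/`track` names are illustrative):

```python
import asyncio

pending = set()  # strong references keep in-flight tasks alive


def track(task: asyncio.Task) -> None:
    pending.add(task)
    # discard receives the finished task as its argument
    task.add_done_callback(pending.discard)


async def main():
    async def work():
        await asyncio.sleep(0)
        return 42

    task = asyncio.create_task(work())
    track(task)
    assert task in pending        # strong ref held while running
    result = await task
    assert result == 42
    await asyncio.sleep(0)        # let the done callback run
    assert task not in pending    # callback dropped the reference


asyncio.run(main())
```

The production version additionally calls `done_task.result()` inside the done callback so any exception raised by the background task is logged instead of silently discarded.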
@@ -0,0 +1,191 @@
"""
Login/ANON_REQ packet handling helper for pyMC Repeater.

This module processes login requests and manages authentication for all identities.
"""

import asyncio
import logging

from pymc_core.node.handlers.login_server import LoginServerHandler
from pymc_core.protocol.constants import PAYLOAD_TYPE_ANON_REQ

logger = logging.getLogger("LoginHelper")


class LoginHelper:
    def __init__(self, identity_manager, packet_injector=None, log_fn=None):
        self.identity_manager = identity_manager
        self.packet_injector = packet_injector
        self.log_fn = log_fn or logger.info

        self.handlers = {}
        self.acls = {}  # Per-identity ACLs keyed by hash_byte
        self._pending_tasks = set()

    def _track_task(self, task: asyncio.Task) -> None:
        self._pending_tasks.add(task)

        def _on_done(done_task: asyncio.Task) -> None:
            self._pending_tasks.discard(done_task)
            try:
                done_task.result()
            except asyncio.CancelledError:
                pass
            except Exception as e:
                logger.error(f"Background login task failed: {e}", exc_info=True)

        task.add_done_callback(_on_done)

    def register_identity(
        self, name: str, identity, identity_type: str = "room_server", config: dict = None
    ):
        config = config or {}

        hash_byte = identity.get_public_key()[0]

        from repeater.handler_helpers.acl import ACL

        # Get security config for this identity
        if identity_type == "room_server":
            # Room servers use passwords from their settings section only
            settings = config.get("settings", {})

            # Empty strings ('') are treated as "not set" by using 'or None'
            admin_password = settings.get("admin_password") or None
            guest_password = settings.get("guest_password") or None

            # Validate room servers have passwords configured
            if not admin_password and not guest_password:
                logger.error(
                    f"Room server '{name}' MUST have admin_password or guest_password configured. "
                    f"Add them to 'settings' section. Skipping registration."
                )
                return

            # Use configured passwords from settings
            final_security = {
                "max_clients": settings.get("max_clients", 50),
                "admin_password": admin_password,
                "guest_password": guest_password,
                "allow_read_only": settings.get("allow_read_only", True),
            }
        else:
            # Repeater uses security from repeater.security in config
            security = config.get("repeater", {}).get("security", {})
            final_security = {
                "max_clients": security.get("max_clients", 10),
                "admin_password": security.get("admin_password", "admin123"),
                "guest_password": security.get("guest_password", "guest123"),
                "allow_read_only": security.get("allow_read_only", True),
            }
            logger.debug(
                f"Repeater security config: admin_pw={'SET' if final_security['admin_password'] else 'NONE'}, "
                f"guest_pw={'SET' if final_security['guest_password'] else 'NONE'}, "
                f"max_clients={final_security['max_clients']}"
            )

        # Create ACL for this identity
        identity_acl = ACL(
            max_clients=final_security["max_clients"],
            admin_password=final_security["admin_password"],
            guest_password=final_security["guest_password"],
            allow_read_only=final_security["allow_read_only"],
        )

        self.acls[hash_byte] = identity_acl
        logger.info(f"Created ACL for {identity_type} '{name}': hash=0x{hash_byte:02X}")

        # Create auth callback that uses this identity's ACL
        def auth_callback_with_context(
            client_identity, shared_secret, password, timestamp, sync_since=None
        ):
            return identity_acl.authenticate_client(
                client_identity=client_identity,
                shared_secret=shared_secret,
                password=password,
                timestamp=timestamp,
                sync_since=sync_since,
                target_identity_hash=hash_byte,
                target_identity_name=name,
                target_identity_config=config,
            )

        handler = LoginServerHandler(
            local_identity=identity,
            log_fn=self.log_fn,
            authenticate_callback=auth_callback_with_context,
            is_room_server=(identity_type == "room_server"),
        )

        handler.set_send_packet_callback(self._send_packet_with_delay)

        self.handlers[hash_byte] = handler

        logger.info(f"Registered {identity_type} '{name}' login handler: hash=0x{hash_byte:02X}")

    async def process_login_packet(self, packet):
        try:
            if len(packet.payload) < 1:
                return False

            dest_hash = packet.payload[0]

            handler = self.handlers.get(dest_hash)
            if handler:
                logger.debug(f"Routing login to identity: hash=0x{dest_hash:02X}")
                await handler(packet)
                packet.mark_do_not_retransmit()
                return True
            else:
                # ANON_REQ to other nodes (e.g. owner-info to firmware) is normal; skip log to avoid spam
                ptype = getattr(packet, "get_payload_type", lambda: None)()
                if ptype != PAYLOAD_TYPE_ANON_REQ:
                    logger.debug(
                        f"No login handler registered for hash 0x{dest_hash:02X}, allowing forward"
                    )
                return False

        except Exception as e:
            logger.error(f"Error processing login packet: {e}")
            return False

    def _send_packet_with_delay(self, packet, delay_ms: int):
        if self.packet_injector:
            task = asyncio.create_task(self._delayed_send(packet, delay_ms))
            self._track_task(task)
        else:
            logger.error("No packet injector configured, cannot send login response")

    async def _delayed_send(self, packet, delay_ms: int):
        await asyncio.sleep(delay_ms / 1000.0)
        try:
            await self.packet_injector(packet, wait_for_ack=False)
            logger.debug(f"Sent login response after {delay_ms}ms delay")
        except Exception as e:
            logger.error(f"Error sending login response: {e}")

    def get_acl_dict(self):
        """Return dictionary of ACLs keyed by identity hash."""
        return self.acls

    def get_acl_for_identity(self, hash_byte: int):
        """Get ACL for a specific identity."""
        return self.acls.get(hash_byte)

    def list_authenticated_clients(self, hash_byte: int = None):
        """List authenticated clients for a specific identity or all identities."""
        if hash_byte is not None:
            acl = self.acls.get(hash_byte)
            return acl.get_all_clients() if acl else []

        # Return clients from all ACLs
        all_clients = []
        for acl in self.acls.values():
            all_clients.extend(acl.get_all_clients())
        return all_clients

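`process_login_packet` dispatches on the first payload byte, which is the destination identity's hash byte (the first byte of its public key). The lookup can be isolated into a tiny sketch; the hash values and handler labels below are illustrative placeholders, not real identities:

```python
# Hypothetical registry: hash byte of each local identity → its handler
handlers = {0x3A: "room-server-A", 0x7F: "repeater"}


def route(payload: bytes):
    """Pick the handler whose hash byte matches the first payload byte,
    or None when the packet is addressed to some other node."""
    if len(payload) < 1:
        return None
    return handlers.get(payload[0])


assert route(b"\x3a rest-of-anon-req") == "room-server-A"
assert route(b"\x7fX") == "repeater"
assert route(b"\x01unknown") is None  # no local handler: packet is forwarded
assert route(b"") is None             # too short to carry a destination
```

Returning `None`/`False` for unknown hashes is what lets ANON_REQ packets aimed at other nodes flow through the repeater unmodified.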
@@ -0,0 +1,769 @@
import asyncio
import logging
import time
from pathlib import Path
from typing import Any, Callable, Dict, Optional

import yaml

logger = logging.getLogger(__name__)


class MeshCLI:
    def __init__(
        self,
        config_path: str,
        config: Dict[str, Any],
        config_manager,  # ConfigManager instance for save & live updates
        identity_type: str = "repeater",
        enable_regions: bool = True,
        send_advert_callback: Optional[Callable] = None,
        identity=None,
        storage_handler=None,
    ):
        self.config_path = Path(config_path)
        self.config = config
        self.config_manager = config_manager
        self.identity_type = identity_type
        self.enable_regions = enable_regions
        self.send_advert_callback = send_advert_callback
        self.identity = identity
        self.storage_handler = storage_handler

        # Store event loop reference for thread-safe scheduling
        try:
            self._event_loop = asyncio.get_running_loop()
        except RuntimeError:
            self._event_loop = None

        # Get repeater config shortcut
        self.repeater_config = config.get("repeater", {})

    def handle_command(self, sender_pubkey: bytes, command: str, is_admin: bool) -> str:
        # Check admin permission first
        if not is_admin:
            return "Error: Admin permission required"

        logger.debug(f"handle_command received: '{command}' (len={len(command)})")

        # Extract optional sequence prefix (XX|)
        prefix = ""
        if len(command) > 4 and command[2] == "|":
            prefix = command[:3]
            command = command[3:]
            logger.debug(f"Extracted prefix: '{prefix}', remaining command: '{command}'")

        # Strip leading/trailing whitespace
        command = command.strip()
        logger.debug(f"After strip: '{command}'")

        # Route to appropriate handler
        reply = self._route_command(command)

        # Add prefix back to reply if present
        if prefix:
            return prefix + reply
        return reply

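The `XX|` sequence-prefix handling above lets a companion app tag each command with a two-character id that is stripped before routing and re-attached to the reply, so responses can be correlated with requests. A standalone sketch of the split (the helper name `split_prefix` is illustrative):

```python
def split_prefix(command: str):
    """Split an optional 'XX|' sequence prefix off a CLI command.
    Mirrors the check above: 3rd char is '|' and there is a payload after it."""
    if len(command) > 4 and command[2] == "|":
        return command[:3], command[3:]
    return "", command


prefix, cmd = split_prefix("07|get freq")
assert (prefix, cmd) == ("07|", "get freq")

prefix, cmd = split_prefix("neighbors")  # no prefix: passed through untouched
assert (prefix, cmd) == ("", "neighbors")
```

Note the `len(command) > 4` guard: a prefixed command must carry at least two characters of payload, so very short strings such as `"ab|x"` are treated as plain commands rather than prefixed ones.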
def _route_command(self, command: str) -> str:
|
||||
|
||||
# Help
|
||||
if command == "help" or command.startswith("help "):
|
||||
return self._cmd_help(command)
|
||||
|
||||
# System commands
|
||||
elif command == "reboot":
|
||||
return self._cmd_reboot()
|
||||
elif command == "advert":
|
||||
return self._cmd_advert()
|
||||
elif command.startswith("clock"):
|
||||
return self._cmd_clock(command)
|
||||
elif command.startswith("time "):
|
||||
return self._cmd_time(command)
|
||||
elif command == "start ota":
|
||||
return "Error: OTA not supported in Python repeater"
|
||||
elif command.startswith("password "):
|
||||
return self._cmd_password(command)
|
||||
elif command == "clear stats":
|
||||
return self._cmd_clear_stats()
|
||||
elif command == "ver":
|
||||
return self._cmd_version()
|
||||
|
||||
# Get commands
|
||||
elif command.startswith("get "):
|
||||
return self._cmd_get(command[4:])
|
||||
|
||||
# Set commands
|
||||
elif command.startswith("set "):
|
||||
return self._cmd_set(command[4:])
|
||||
|
||||
# ACL commands
|
||||
elif command.startswith("setperm "):
|
||||
return self._cmd_setperm(command)
|
||||
elif command == "get acl":
|
||||
return "Error: Use 'get acl' via serial console only"
|
||||
|
||||
# Region commands (repeaters only)
|
||||
elif command.startswith("region"):
|
||||
if self.enable_regions:
|
||||
return self._cmd_region(command)
|
||||
else:
|
||||
return "Error: Region commands not available for room servers"
|
||||
|
||||
# Neighbor commands
|
||||
elif command == "neighbors":
|
||||
return self._cmd_neighbors()
|
||||
elif command.startswith("neighbor.remove "):
|
||||
return self._cmd_neighbor_remove(command)
|
||||
|
||||
# Temporary radio params
|
||||
elif command.startswith("tempradio "):
|
||||
return self._cmd_tempradio(command)
|
||||
|
||||
# Sensor commands
|
||||
elif command.startswith("sensor "):
|
||||
return "Error: Sensor commands not implemented in Python repeater"
|
||||
|
||||
# GPS commands
|
||||
elif command.startswith("gps"):
|
||||
return "Error: GPS commands not implemented in Python repeater"
|
||||
|
||||
# Logging commands
|
||||
elif command.startswith("log "):
|
||||
return self._cmd_log(command)
|
||||
|
||||
# Statistics commands
|
||||
elif command.startswith("stats-"):
|
||||
return "Error: Stats commands not fully implemented yet"
|
||||
|
||||
else:
|
||||
return "Unknown command"
|
||||
|
||||
    # ==================== Help Command ====================

    def _cmd_help(self, command: str) -> str:
        """Show available commands or detailed help for a specific command."""
        parts = command.split(None, 1)
        if len(parts) == 2:
            return self._help_detail(parts[1])

        lines = [
            "=== pyMC CLI Commands ===",
            "",
            "System:",
            "  reboot                     Restart the repeater service",
            "  advert                     Send self advertisement",
            "  clock                      Show current UTC time",
            "  clock sync                 Sync clock (no-op, uses system time)",
            "  ver                        Show version info",
            "  password <pw>              Change admin password",
            "  clear stats                Clear statistics",
            "",
            "Get:",
            "  get name                   Node name",
            "  get radio                  Radio params (freq,bw,sf,cr)",
            "  get freq                   Frequency (MHz)",
            "  get tx                     TX power",
            "  get af                     Airtime factor",
            "  get repeat                 Repeat mode (on/off)",
            "  get lat / get lon          GPS coordinates",
            "  get role                   Identity role",
            "  get guest.password         Guest password",
            "  get allow.read.only        Read-only access setting",
            "  get advert.interval        Advert interval (minutes)",
            "  get flood.advert.interval  Flood advert interval (hours)",
            "  get flood.max              Max flood hops",
            "  get rxdelay                RX delay base",
            "  get txdelay                TX delay factor",
            "  get direct.txdelay         Direct TX delay factor",
            "  get multi.acks             Multi-ack count",
            "  get int.thresh             Interference threshold",
            "  get agc.reset.interval     AGC reset interval",
            "",
            "Set: (use 'help set' for details)",
            "  set <param> <value>",
            "",
            "Other:",
            "  neighbors                  List neighbors",
            "  neighbor.remove <key>      Remove neighbor by pubkey",
            "  tempradio <freq> <bw> <sf> <cr> <timeout_mins>",
            "  setperm <pubkey> <perm>    Set ACL permissions",
            "  log start|stop|erase       Logging control",
        ]
        if self.enable_regions:
            lines.append("  region ...                 Region commands")
        lines += ["", "Type 'help <command>' for details on a specific command."]
        return "\n".join(lines)

    def _help_detail(self, topic: str) -> str:
        """Return detailed help for a specific command topic."""
        topic = topic.strip()
        details = {
            "set": (
                "Set commands \u2014 set <param> <value>:\n"
                "  set name <name>                Set node name\n"
                "  set radio <f> <bw> <sf> <cr>   Set radio (restart required)\n"
                "  set freq <mhz>                 Set frequency (restart required)\n"
                "  set tx <power>                 Set TX power\n"
                "  set af <factor>                Airtime factor\n"
                "  set repeat on|off              Enable/disable repeating\n"
                "  set lat <deg>                  Latitude\n"
                "  set lon <deg>                  Longitude\n"
                "  set guest.password <pw>        Guest password\n"
                "  set allow.read.only on|off     Read-only access\n"
                "  set advert.interval <min>      60-240 minutes\n"
                "  set flood.advert.interval <hr> 3-48 hours\n"
                "  set flood.max <hops>           Max flood hops (max 64)\n"
                "  set rxdelay <val>              RX delay base (>=0)\n"
                "  set txdelay <val>              TX delay factor (>=0)\n"
                "  set direct.txdelay <val>       Direct TX delay (>=0)\n"
                "  set multi.acks <n>             Multi-ack count\n"
                "  set int.thresh <dbm>           Interference threshold\n"
                "  set agc.reset.interval <n>     AGC reset (rounded to x4)"
            ),
            "get": "Get commands \u2014 type 'help' to see all 'get' parameters.",
            "reboot": "Restart the repeater service via systemd.",
            "advert": "Trigger a self-advertisement flood packet.",
            "clock": "'clock' shows UTC time. 'clock sync' is a no-op (system time used).",
            "ver": "Show repeater version and identity type.",
            "password": "password <new_password> \u2014 Change the admin password.",
            "tempradio": (
                "tempradio <freq_mhz> <bw_khz> <sf> <cr> <timeout_mins>\n"
                "  Apply temporary radio parameters that revert after timeout.\n"
                "  freq: 300-2500 MHz, bw: 7-500 kHz, sf: 5-12, cr: 5-8"
            ),
            "neighbors": "List known neighbor nodes from the routing table.",
            "setperm": "setperm <pubkey_hex> <permission_int> \u2014 Set ACL permissions for a node.",
            "log": "log start|stop|erase \u2014 Control logging.",
        }
        return details.get(topic, f"No detailed help for '{topic}'. Type 'help' for command list.")

    # ==================== System Commands ====================

    def _cmd_reboot(self) -> str:
        """Reboot the repeater process."""
        from repeater.service_utils import restart_service

        logger.warning("Reboot command received via mesh CLI")
        success, message = restart_service()

        if success:
            return f"OK - {message}"
        else:
            return f"Error: {message}"

    def _cmd_advert(self) -> str:
        """Send self advertisement."""
        if not self.send_advert_callback:
            logger.warning("Advert command received but no callback configured")
            return "Error: Advert functionality not configured"

        try:
            import asyncio

            async def delayed_advert():
                """Delay advert to let CLI response send first (matches C++ 1500ms delay)."""
                await asyncio.sleep(1.5)
                await self.send_advert_callback()

            if self._event_loop and self._event_loop.is_running():
                asyncio.run_coroutine_threadsafe(delayed_advert(), self._event_loop)
            else:
                return "Error: Event loop not available"

            logger.info("Advert scheduled for sending (1.5s delay)")
            return "OK - Advert sent"
        except Exception as e:
            logger.error(f"Failed to schedule advert: {e}", exc_info=True)
            return f"Error: {e}"

    def _cmd_clock(self, command: str) -> str:
        """Handle clock commands."""
        if command == "clock":
            # Display current time
            import datetime

            dt = datetime.datetime.utcnow()
            return f"{dt.hour:02d}:{dt.minute:02d} - {dt.day}/{dt.month}/{dt.year} UTC"
        elif command == "clock sync":
            # Clock sync happens automatically via sender_timestamp in protocol
            return "OK - clock sync not needed (system time used)"
        else:
            return "Unknown clock command"

    def _cmd_time(self, command: str) -> str:
        """Set time - not supported in Python (use system time)."""
        return "Error: Time setting not supported (system time is used)"

    def _cmd_password(self, command: str) -> str:
        """Change admin password."""
        new_password = command[9:].strip()

        if not new_password:
            return "Error: Password cannot be empty"

        # Update security config
        if "security" not in self.config:
            self.config["security"] = {}

        self.config["security"]["password"] = new_password

        # Save config and live update
        try:
            saved, err = self.config_manager.save_to_file()
            if not saved:
                logger.error(f"Failed to save password: {err}")
                return f"Error: Failed to save config: {err}"
            self.config_manager.live_update_daemon(["security"])
            return f"password now: {new_password}"
        except Exception as e:
            logger.error(f"Failed to save password: {e}")
            return "Error: Failed to save password"

    def _cmd_clear_stats(self) -> str:
        """Clear statistics."""
        # TODO: Implement stats clearing
        return "Error: Not yet implemented"

    def _cmd_version(self) -> str:
        """Get version information."""
        role = "room_server" if self.identity_type == "room_server" else "repeater"
        version = self.config.get("version", "1.0.0")
        return f"pyMC_{role} v{version}"

    # ==================== Get Commands ====================

    def _cmd_get(self, param: str) -> str:
        """Handle get commands."""
        param = param.strip()
        logger.debug(f"_cmd_get called with param: '{param}' (len={len(param)})")

        if param == "af":
            af = self.repeater_config.get("airtime_factor", 1.0)
            return f"> {af}"

        elif param == "name":
            name = self.repeater_config.get("name", "Unknown")
            return f"> {name}"

        elif param == "repeat":
            mode = self.repeater_config.get("mode", "forward")
            return f"> {'on' if mode == 'forward' else 'off'}"

        elif param == "lat":
            lat = self.repeater_config.get("latitude", 0.0)
            return f"> {lat}"

        elif param == "lon":
            lon = self.repeater_config.get("longitude", 0.0)
            return f"> {lon}"

        elif param == "radio":
            radio = self.config.get("radio", {})
            freq_hz = radio.get("frequency", 915000000)
            bw_hz = radio.get("bandwidth", 125000)
            sf = radio.get("spreading_factor", 7)
            cr = radio.get("coding_rate", 5)
            # Convert Hz to MHz for freq, Hz to kHz for bandwidth (match C++ ftoa output)
            freq_mhz = freq_hz / 1_000_000.0
            bw_khz = bw_hz / 1_000.0
            return f"> {freq_mhz},{bw_khz},{sf},{cr}"

        elif param == "freq":
            freq_hz = self.config.get("radio", {}).get("frequency", 915000000)
            freq_mhz = freq_hz / 1_000_000.0
            return f"> {freq_mhz}"

        elif param == "tx":
            power = self.config.get("radio", {}).get("tx_power", 20)
            return f"> {power}"

        elif param == "public.key":
            if not self.identity:
                return "Error: Identity not available"
            try:
                pubkey = self.identity.get_public_key()
                pubkey_hex = pubkey.hex()
                return f"> {pubkey_hex}"
            except Exception as e:
                logger.error(f"Failed to get public key: {e}")
                return f"Error: {e}"

        elif param == "role":
            role = "room_server" if self.identity_type == "room_server" else "repeater"
            return f"> {role}"

        elif param == "guest.password":
            guest_pw = self.config.get("security", {}).get("guest_password", "")
            return f"> {guest_pw}"

        elif param == "allow.read.only":
            allow = self.config.get("security", {}).get("allow_read_only", False)
            return f"> {'on' if allow else 'off'}"

        elif param == "advert.interval":
            interval = self.repeater_config.get("advert_interval_minutes", 120)
            return f"> {interval}"

        elif param == "flood.advert.interval":
            interval = self.repeater_config.get("flood_advert_interval_hours", 24)
            return f"> {interval}"

        elif param == "flood.max":
            max_flood = self.repeater_config.get("max_flood_hops", 64)
            return f"> {max_flood}"

        elif param == "rxdelay":
            delay = self.repeater_config.get("rx_delay_base", 0.0)
            return f"> {delay}"

        elif param == "txdelay":
            delay = self.repeater_config.get("tx_delay_factor", 1.0)
            return f"> {delay}"

        elif param == "direct.txdelay":
            delay = self.repeater_config.get("direct_tx_delay_factor", 0.5)
            return f"> {delay}"

        elif param == "multi.acks":
            acks = self.repeater_config.get("multi_acks", 0)
            return f"> {acks}"

        elif param == "int.thresh":
            thresh = self.repeater_config.get("interference_threshold", -120)
            return f"> {thresh}"

        elif param == "agc.reset.interval":
            interval = self.repeater_config.get("agc_reset_interval", 0)
            return f"> {interval}"

        else:
            return f"??: {param}"

    # ==================== Set Commands ====================

    def _cmd_set(self, param: str) -> str:
        """Handle set commands."""
        parts = param.split(None, 1)
        if len(parts) < 2:
            return "Error: Missing value"

        key, value = parts[0], parts[1]

        try:
            if key == "af":
                self.repeater_config["airtime_factor"] = float(value)
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return "OK"

            elif key == "name":
                self.repeater_config["node_name"] = value
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return "OK"

            elif key == "repeat":
                self.repeater_config["mode"] = "forward" if value.lower() == "on" else "monitor"
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return f"OK - repeat is now {'ON' if self.repeater_config['mode'] == 'forward' else 'OFF'}"

            elif key == "lat":
                self.repeater_config["latitude"] = float(value)
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return "OK"

            elif key == "lon":
                self.repeater_config["longitude"] = float(value)
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return "OK"

            elif key == "radio":
                # Format: freq bw sf cr
                radio_parts = value.split()
                if len(radio_parts) != 4:
                    return "Error: Expected freq bw sf cr"

                if "radio" not in self.config:
                    self.config["radio"] = {}

                self.config["radio"]["frequency"] = float(radio_parts[0])
                self.config["radio"]["bandwidth"] = float(radio_parts[1])
                self.config["radio"]["spreading_factor"] = int(radio_parts[2])
                self.config["radio"]["coding_rate"] = int(radio_parts[3])
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["radio"])
                return "OK - restart repeater to apply"

            elif key == "freq":
                if "radio" not in self.config:
                    self.config["radio"] = {}
                self.config["radio"]["frequency"] = float(value)
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["radio"])
                return "OK - restart repeater to apply"

            elif key == "tx":
                if "radio" not in self.config:
                    self.config["radio"] = {}
                self.config["radio"]["tx_power"] = int(value)
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["radio"])
                return "OK"

            elif key == "guest.password":
                if "security" not in self.config:
                    self.config["security"] = {}
                self.config["security"]["guest_password"] = value
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["security"])
                return "OK"

            elif key == "allow.read.only":
                if "security" not in self.config:
                    self.config["security"] = {}
                self.config["security"]["allow_read_only"] = value.lower() == "on"
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["security"])
                return "OK"

            elif key == "advert.interval":
                mins = int(value)
                if mins > 0 and (mins < 60 or mins > 240):
                    return "Error: interval range is 60-240 minutes"
                self.repeater_config["advert_interval_minutes"] = mins
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return "OK"

            elif key == "flood.advert.interval":
                hours = int(value)
                if (hours > 0 and hours < 3) or hours > 48:
                    return "Error: interval range is 3-48 hours"
                self.repeater_config["flood_advert_interval_hours"] = hours
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return "OK"

            elif key == "flood.max":
                max_val = int(value)
                if max_val > 64:
                    return "Error: max 64"
                self.repeater_config["max_flood_hops"] = max_val
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return "OK"

            elif key == "rxdelay":
                delay = float(value)
                if delay < 0:
                    return "Error: cannot be negative"
                self.repeater_config["rx_delay_base"] = delay
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater", "delays"])
                return "OK"

            elif key == "txdelay":
                delay = float(value)
                if delay < 0:
                    return "Error: cannot be negative"
                self.repeater_config["tx_delay_factor"] = delay
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater", "delays"])
                return "OK"

            elif key == "direct.txdelay":
                delay = float(value)
                if delay < 0:
                    return "Error: cannot be negative"
                self.repeater_config["direct_tx_delay_factor"] = delay
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater", "delays"])
                return "OK"

            elif key == "multi.acks":
                self.repeater_config["multi_acks"] = int(value)
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return "OK"

            elif key == "int.thresh":
                self.repeater_config["interference_threshold"] = int(value)
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return "OK"

            elif key == "agc.reset.interval":
                interval = int(value)
                # Round down to the nearest multiple of 4
                rounded = (interval // 4) * 4
                self.repeater_config["agc_reset_interval"] = rounded
                saved, _ = self.config_manager.save_to_file()
                self.config_manager.live_update_daemon(["repeater"])
                return f"OK - interval rounded to {rounded}"

            else:
                return f"unknown config: {key}"

        except ValueError as e:
            return f"Error: invalid value - {e}"
        except Exception as e:
            logger.error(f"Set command error: {e}")
            return f"Error: {e}"

    # ==================== ACL Commands ====================

    def _cmd_setperm(self, command: str) -> str:
        """Set permissions for a public key."""
        # Format: setperm {pubkey-hex} {permissions-int}
        parts = command[8:].split()
        if len(parts) < 2:
            return "Err - bad params"

        pubkey_hex = parts[0]
        try:
            permissions = int(parts[1])
        except ValueError:
            return "Err - invalid permissions"

        # TODO: Apply permissions via ACL
        logger.info(f"setperm command: {pubkey_hex} -> {permissions}")
        return "Error: Not yet implemented - use config file"

    # ==================== Region Commands ====================

    def _cmd_region(self, command: str) -> str:
        """Handle region commands."""
        parts = command.split()

        if len(parts) == 1:
            return "Error: Region commands not implemented in Python repeater"

        subcommand = parts[1]

        if subcommand == "load":
            return "Error: Region commands not implemented"
        elif subcommand == "save":
            return "Error: Region commands not implemented"
        elif subcommand in ("allowf", "denyf", "get", "home", "put", "remove"):
            return "Error: Region commands not implemented"
        else:
            return "Err - ??"

    # ==================== Neighbor Commands ====================

    def _cmd_neighbors(self) -> str:
        """List neighbors."""
        if not self.storage_handler:
            return "Error: Storage not available"

        try:
            neighbors = self.storage_handler.get_neighbors()

            if not neighbors:
                return "No neighbors discovered yet"

            # Filter to only show repeaters and zero hop nodes
            filtered_neighbors = {
                pubkey: info
                for pubkey, info in neighbors.items()
                if info.get("is_repeater", False) or info.get("zero_hop", False)
            }

            if not filtered_neighbors:
                return "No repeaters or zero hop neighbors discovered yet"

            # Format output similar to C++ version
            # Format: "<pubkey_prefix> heard Xs ago"
            import time

            current_time = int(time.time())

            lines = []
            for pubkey, info in filtered_neighbors.items():
                last_seen = info.get("last_seen", 0)
                seconds_ago = int(current_time - last_seen)

                # Get first 4 bytes of pubkey as hex (match C++ format)
                pubkey_short = pubkey[:8] if len(pubkey) >= 8 else pubkey
                snr = info.get("snr", 0) or 0

                # Format: <4byte_hex>:<seconds_ago>:<snr> (matches C++ format)
                lines.append(f"{pubkey_short}:{seconds_ago}:{int(snr)}")

            return "\n".join(lines)

        except Exception as e:
            logger.error(f"Failed to list neighbors: {e}", exc_info=True)
            return f"Error: {e}"

    def _cmd_neighbor_remove(self, command: str) -> str:
        """Remove a neighbor."""
        pubkey_hex = command[16:].strip()

        if not pubkey_hex:
            return "ERR: Missing pubkey"

        # TODO: Remove neighbor from routing table
        logger.info(f"neighbor.remove: {pubkey_hex}")
        return "Error: Not yet implemented"

    # ==================== Temporary Radio Commands ====================

    def _cmd_tempradio(self, command: str) -> str:
        """Apply temporary radio parameters."""
        # Format: tempradio {freq} {bw} {sf} {cr} {timeout_mins}
        parts = command[10:].split()

        if len(parts) < 5:
            return "Error: Expected freq bw sf cr timeout_mins"

        try:
            freq = float(parts[0])
            bw = float(parts[1])
            sf = int(parts[2])
            cr = int(parts[3])
            timeout_mins = int(parts[4])

            # Validate
            if not (300.0 <= freq <= 2500.0):
                return "Error: invalid frequency"
            if not (7.0 <= bw <= 500.0):
                return "Error: invalid bandwidth"
            if not (5 <= sf <= 12):
                return "Error: invalid spreading factor"
            if not (5 <= cr <= 8):
                return "Error: invalid coding rate"
            if timeout_mins <= 0:
                return "Error: invalid timeout"

            # TODO: Apply temporary radio parameters
            logger.info(f"tempradio: {freq}MHz {bw}kHz SF{sf} CR4/{cr} for {timeout_mins}min")
            return "Error: Not yet implemented"

        except ValueError:
            return "Error, invalid params"

    # ==================== Logging Commands ====================

    def _cmd_log(self, command: str) -> str:
        """Handle log commands."""
        if command == "log start":
            # TODO: Enable logging
            return "Error: Not yet implemented"
        elif command == "log stop":
            # TODO: Disable logging
            return "Error: Not yet implemented"
        elif command == "log erase":
            # TODO: Clear log file
            return "Error: Not yet implemented"
        elif command == "log":
            return "Error: Use journalctl to view logs"
        else:
            return "Unknown log command"

@@ -0,0 +1,92 @@
import logging
import time

logger = logging.getLogger("PathHelper")


class PathHelper:
    def __init__(self, acl_dict=None, log_fn=None):
        self.acl_dict = acl_dict or {}
        self.log_fn = log_fn or logger.info

    async def process_path_packet(self, packet):
        from pymc_core.protocol.crypto import CryptoUtils

        try:
            if len(packet.payload) < 2:
                return False

            dest_hash = packet.payload[0]
            src_hash = packet.payload[1]

            # Get the ACL for this destination identity
            identity_acl = self.acl_dict.get(dest_hash)
            if not identity_acl:
                logger.debug(f"No ACL for dest 0x{dest_hash:02X}, allowing forward")
                return False

            # Find the client by source hash
            client = None
            for client_info in identity_acl.get_all_clients():
                pubkey = client_info.id.get_public_key()
                if pubkey[0] == src_hash:
                    client = client_info
                    break

            if not client:
                logger.debug(f"PATH packet from unknown client 0x{src_hash:02X}, allowing forward")
                return False

            # Get shared secret for decryption
            shared_secret = client.shared_secret
            if not shared_secret or len(shared_secret) == 0:
                logger.debug(f"No shared secret for client 0x{src_hash:02X}, cannot decrypt PATH")
                return False

            # Decrypt the PATH packet payload
            # Payload format: dest_hash(1) + src_hash(1) + mac(2) + encrypted_data
            if len(packet.payload) < 4:
                logger.debug(f"PATH packet too short: {len(packet.payload)} bytes")
                return False

            mac_and_data = packet.payload[2:]  # Skip dest_hash and src_hash
            aes_key = shared_secret[:16]
            decrypted = CryptoUtils.mac_then_decrypt(aes_key, shared_secret, mac_and_data)

            if not decrypted:
                logger.debug(f"Failed to decrypt PATH packet from 0x{src_hash:02X}")
                return False

            # Parse decrypted PATH data
            # Format: path_len(1) + path[path_len] + extra_type(1) + extra[...]
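            # Worked example of this layout (hypothetical bytes, for
            # illustration only, not from a real packet):
            #   decrypted = b"\x02\xaa\xbb\x01" parses as
            #   path_len = 2, path = [0xAA, 0xBB], extra_type = 0x01 (no extra)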
            if len(decrypted) < 1:
                logger.debug("Decrypted PATH data too short")
                return False

            path_len = decrypted[0]
            if len(decrypted) < 1 + path_len:
                logger.debug(
                    f"PATH data truncated: need {1 + path_len} bytes, got {len(decrypted)}"
                )
                return False

            path_data = decrypted[1 : 1 + path_len]

            # Update client's out_path (same as C++ memcpy)
            client.out_path = bytearray(path_data)
            client.out_path_len = path_len
            client.last_activity = int(time.time())

            logger.info(
                f"Updated out_path for client 0x{src_hash:02X} -> 0x{dest_hash:02X}: "
                f"path_len={path_len}, path={[hex(b) for b in path_data]}"
            )

            # Don't mark as do_not_retransmit - let it forward normally
            return False

        except Exception as e:
            logger.error(f"Error processing PATH packet: {e}", exc_info=True)
            return False
@@ -0,0 +1,370 @@
"""
Protocol request (REQ) handling helper for pyMC Repeater.

Provides repeater-specific callbacks for status and telemetry requests.
"""

import asyncio
import logging
import struct
import time

from pymc_core.node.handlers.protocol_request import (
    REQ_TYPE_GET_ACCESS_LIST,
    REQ_TYPE_GET_NEIGHBOURS,
    REQ_TYPE_GET_OWNER_INFO,
    REQ_TYPE_GET_STATUS,
    REQ_TYPE_GET_TELEMETRY_DATA,
    SERVER_RESPONSE_DELAY_MS,
    ProtocolRequestHandler,
)

logger = logging.getLogger("ProtocolRequestHelper")


class ProtocolRequestHelper:
    """Provides repeater-specific protocol request handlers."""

    def __init__(
        self,
        identity_manager,
        packet_injector=None,
        acl_dict=None,
        radio=None,
        engine=None,
        neighbor_tracker=None,
        config=None,
    ):
        self.identity_manager = identity_manager
        self.packet_injector = packet_injector
        self.acl_dict = acl_dict or {}
        self.radio = radio
        self.engine = engine
        self.neighbor_tracker = neighbor_tracker
        self.config = config or {}

        # Dictionary of core handlers keyed by dest_hash
        self.handlers = {}

    def register_identity(self, name: str, identity, identity_type: str = "repeater"):
        hash_byte = identity.get_public_key()[0]

        # Get ACL for this identity
        identity_acl = self.acl_dict.get(hash_byte)
        if not identity_acl:
            logger.warning(f"Cannot register identity '{name}': no ACL for hash 0x{hash_byte:02X}")
            return

        # Create ACL contacts wrapper
        acl_contacts = self._create_acl_contacts_wrapper(identity_acl)

        # Build request handlers dict
        request_handlers = {
            REQ_TYPE_GET_STATUS: self._handle_get_status,
            REQ_TYPE_GET_ACCESS_LIST: self._make_handle_get_access_list(identity_acl),
            REQ_TYPE_GET_NEIGHBOURS: self._handle_get_neighbours,
            REQ_TYPE_GET_OWNER_INFO: self._handle_get_owner_info,
        }

        # Create core handler
        handler = ProtocolRequestHandler(
            local_identity=identity,
            contacts=acl_contacts,
            get_client_fn=lambda src_hash: self._get_client_from_acl(identity_acl, src_hash),
            request_handlers=request_handlers,
            log_fn=logger.info,
        )

        self.handlers[hash_byte] = {
            "handler": handler,
            "identity": identity,
            "name": name,
            "type": identity_type,
        }

        logger.info(f"Registered protocol request handler for '{name}': hash=0x{hash_byte:02X}")

    def _create_acl_contacts_wrapper(self, acl):
        """Create contacts wrapper from ACL."""

        class ACLContactsWrapper:
            def __init__(self, identity_acl):
                self._acl = identity_acl

            @property
            def contacts(self):
                return self._acl.get_all_clients()

        return ACLContactsWrapper(acl)

    def _get_client_from_acl(self, acl, src_hash: int):
        """Get client from ACL by source hash."""
        for client_info in acl.get_all_clients():
            if client_info.id.get_public_key()[0] == src_hash:
                return client_info
        return None

    async def process_request_packet(self, packet):
        try:
            if len(packet.payload) < 2:
                return False

            dest_hash = packet.payload[0]

            handler_info = self.handlers.get(dest_hash)
            if not handler_info:
                return False

            # Let core handler build response
            response_packet = await handler_info["handler"](packet)

            # Send response after delay
            if response_packet and self.packet_injector:
                await asyncio.sleep(SERVER_RESPONSE_DELAY_MS / 1000.0)
                await self.packet_injector(response_packet, wait_for_ack=False)

            packet.mark_do_not_retransmit()
            return True

        except Exception as e:
            logger.error(f"Error processing protocol request: {e}", exc_info=True)
            return False

    def _handle_get_status(self, client, timestamp: int, req_data: bytes):
        """Build 56-byte RepeaterStats (firmware layout from MeshCore simple_repeater/MyMesh.h)."""
        # RepeaterStats: uint16 batt, uint16 curr_tx_queue_len, int16 noise_floor, int16 last_rssi,
        # uint32 n_packets_recv, n_packets_sent, total_air_time_secs, total_up_time_secs,
        # n_sent_flood, n_sent_direct, n_recv_flood, n_recv_direct,
        # uint16 err_events, int16 last_snr (×4), uint16 n_direct_dups, n_flood_dups,
        # uint32 total_rx_air_time_secs, n_recv_errors → 56 bytes
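        # Sanity check (added as an aid; the layout comment above is assumed
        # authoritative): the format string used for struct.pack below must
        # describe exactly 56 bytes (2+2+2+2 + 8*4 + 2+2+2+2 + 4+4).
        assert struct.calcsize("<HHhhIIIIIIIIHhHHII") == 56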

        # Uptime: use engine start_time when available (fixes wrong "20521 days" from time.time())
        if self.engine and hasattr(self.engine, "start_time"):
            total_up_time_secs = int(time.time() - self.engine.start_time)
        else:
            total_up_time_secs = 0

        # Radio: noise floor, last RSSI, last SNR (firmware stores SNR × 4)
        if self.radio:
            noise_floor = int(getattr(self.radio, "get_noise_floor", lambda: 0)() or 0)
            if callable(getattr(self.radio, "get_last_rssi", None)):
                last_rssi = int(self.radio.get_last_rssi() or -120)
            else:
                last_rssi = int(getattr(self.radio, "last_rssi", -120) or -120)
            if callable(getattr(self.radio, "get_last_snr", None)):
                last_snr = int((self.radio.get_last_snr() or 0) * 4)
            else:
                last_snr = int((getattr(self.radio, "last_snr", 0) or 0) * 4)
        else:
            noise_floor = 0
            last_rssi = -120
            last_snr = 0

        # Packet counts: prefer engine (rx_count, forwarded_count); fall back to radio if present
        if self.engine:
            n_packets_recv = getattr(self.engine, "rx_count", 0)
            n_packets_sent = getattr(self.engine, "forwarded_count", 0)
        elif self.radio:
            n_packets_recv = getattr(self.radio, "packets_received", 0) or 0
            n_packets_sent = getattr(self.radio, "packets_sent", 0) or 0
        else:
            n_packets_recv = 0
            n_packets_sent = 0

        # Airtime (AirtimeManager uses total_airtime_ms for TX; total_rx_airtime_ms if we track RX)
        total_air_time_secs = 0
        total_rx_air_time_secs = 0
        if self.engine:
            am = getattr(self.engine, "airtime_mgr", None) or getattr(
                self.engine, "airtime_manager", None
            )
            if am is not None:
                total_air_time_secs = int(getattr(am, "total_airtime_ms", 0) or 0) // 1000
                total_rx_air_time_secs = int(getattr(am, "total_rx_airtime_ms", 0) or 0) // 1000

        # Routing stats (flood/direct and dups - from engine when available)
        n_sent_flood = getattr(self.engine, "sent_flood_count", 0) if self.engine else 0
        n_sent_direct = getattr(self.engine, "sent_direct_count", 0) if self.engine else 0
        n_recv_flood = getattr(self.engine, "recv_flood_count", 0) if self.engine else 0
        n_recv_direct = getattr(self.engine, "recv_direct_count", 0) if self.engine else 0
        n_direct_dups = getattr(self.engine, "direct_dup_count", 0) if self.engine else 0
        n_flood_dups = getattr(self.engine, "flood_dup_count", 0) if self.engine else 0
        n_recv_errors = (
            int(getattr(self.radio, "crc_error_count", 0) or 0) if self.radio else 0
        )

        # Pack 56-byte RepeaterStats (layout matches firmware)
        stats = struct.pack(
            "<HHhhIIIIIIIIHhHHII",
            0,  # batt_milli_volts (not available on Pi)
            0,  # curr_tx_queue_len (TODO)
            noise_floor,
            last_rssi,
            n_packets_recv,
            n_packets_sent,
            total_air_time_secs,
            total_up_time_secs,
            n_sent_flood,
            n_sent_direct,
            n_recv_flood,
            n_recv_direct,
            0,  # err_events
            last_snr,
            n_direct_dups,
            n_flood_dups,
            total_rx_air_time_secs,
            n_recv_errors,
        )

        logger.debug(
||||
"GET_STATUS: uptime=%ds, noise=%ddBm, rssi=%ddBm, snr=%.1fdB, rx=%s, tx=%s",
|
||||
total_up_time_secs,
|
||||
noise_floor,
|
||||
last_rssi,
|
||||
last_snr / 4.0,
|
||||
n_packets_recv,
|
||||
n_packets_sent,
|
||||
)
|
||||
|
||||
return stats
|
||||

    def _make_handle_get_access_list(self, identity_acl):
        """Create a closure for GET_ACCESS_LIST bound to a specific identity ACL."""
        def _handler(client, timestamp: int, req_data: bytes):
            return self._handle_get_access_list(client, timestamp, req_data, identity_acl)
        return _handler

    def _handle_get_access_list(self, client, timestamp: int, req_data: bytes, identity_acl):
        """Return ACL entries: [pub_key_prefix(6) + permissions(1)] per client.

        Admin-only. Matches C++ simple_repeater handleRequest REQ_TYPE_GET_ACCESS_LIST.
        """
        if not hasattr(client, "is_admin") or not client.is_admin():
            logger.debug("GET_ACCESS_LIST rejected: client is not admin")
            return None

        # req_data[0] and req_data[1] are reserved bytes; must both be 0
        if len(req_data) >= 2 and (req_data[0] != 0 or req_data[1] != 0):
            logger.debug("GET_ACCESS_LIST: reserved bytes non-zero, ignoring")
            return None

        result = bytearray()
        for ci in identity_acl.get_all_clients():
            if ci.permissions == 0:
                continue  # skip deleted entries
            pubkey = ci.id.get_public_key()
            result.extend(pubkey[:6])  # 6-byte pub_key prefix
            result.append(ci.permissions & 0xFF)

        logger.debug("GET_ACCESS_LIST: returning %d entries", len(result) // 7)
        return bytes(result)

    def _handle_get_neighbours(self, client, timestamp: int, req_data: bytes):
        """Return paginated, sorted neighbour list.

        Matches C++ simple_repeater handleRequest REQ_TYPE_GET_NEIGHBOURS.
        Request: version(1) + count(1) + offset(2 LE) + order_by(1) + pubkey_prefix_len(1) + random(4)
        Response: total_count(2 LE) + results_count(2 LE) + entries
        Each entry: pubkey_prefix(N) + heard_seconds_ago(4 LE) + snr(1 signed)
        """
        if len(req_data) < 7:
            logger.debug("GET_NEIGHBOURS: req_data too short (%d bytes)", len(req_data))
            return None

        request_version = req_data[0]
        if request_version != 0:
            logger.debug("GET_NEIGHBOURS: unsupported version %d", request_version)
            return None

        count = req_data[1]
        offset = struct.unpack_from("<H", req_data, 2)[0]
        order_by = req_data[4]
        pubkey_prefix_len = min(req_data[5], 32)

        # Fetch neighbours from storage
        storage = getattr(self.neighbor_tracker, "storage", None) if self.neighbor_tracker else None
        if not storage or not hasattr(storage, "get_neighbors"):
            logger.debug("GET_NEIGHBOURS: no storage available")
            # Return empty result
            return struct.pack("<HH", 0, 0)

        raw_neighbors = storage.get_neighbors()
        now = time.time()

        # Build sortable list: (pubkey_hex, heard_seconds_ago, snr)
        entries = []
        for pubkey_hex, info in raw_neighbors.items():
            last_seen = info.get("last_seen", 0) or 0
            heard_ago = max(0, int(now - last_seen))
            snr_raw = info.get("snr", 0) or 0
            # Store SNR as int8 (firmware stores snr * 4 as int8)
            snr_int = max(-128, min(127, int(snr_raw * 4)))
            entries.append((pubkey_hex, heard_ago, snr_int))

        # Sort (matches C++ order_by values)
        if order_by == 0:
            entries.sort(key=lambda e: e[1])  # newest first (smallest heard_ago)
        elif order_by == 1:
            entries.sort(key=lambda e: e[1], reverse=True)  # oldest first
        elif order_by == 2:
            entries.sort(key=lambda e: e[2], reverse=True)  # strongest SNR first
        elif order_by == 3:
            entries.sort(key=lambda e: e[2])  # weakest SNR first

        total_count = len(entries)

        # Paginate
        entry_size = pubkey_prefix_len + 4 + 1
        max_results_bytes = 130  # firmware buffer limit
        results = bytearray()
        results_count = 0

        for i in range(count):
            idx = i + offset
            if idx >= total_count:
                break
            if len(results) + entry_size > max_results_bytes:
                break

            pubkey_hex, heard_ago, snr_int = entries[idx]
            try:
                pubkey_bytes = bytes.fromhex(pubkey_hex)
            except (ValueError, TypeError):
                continue
            results.extend(pubkey_bytes[:pubkey_prefix_len])
            results.extend(struct.pack("<I", heard_ago))
            results.append(snr_int & 0xFF)
            results_count += 1

        header = struct.pack("<HH", total_count, results_count)

        logger.debug(
            "GET_NEIGHBOURS: total=%d, returned=%d, offset=%d, order=%d",
            total_count, results_count, offset, order_by,
        )
        return header + bytes(results)

    def _handle_get_owner_info(self, client, timestamp: int, req_data: bytes):
        """Return firmware version, node name, and owner info.

        Matches C++ simple_repeater: sprintf("%s\\n%s\\n%s", FIRMWARE_VERSION, node_name, owner_info)
        """
        repeater_cfg = self.config.get("repeater", {})
        node_name = repeater_cfg.get("node_name", "pyMC_Repeater")
        owner_info = repeater_cfg.get("owner_info", "")

        # Version: use package version if available, fallback to "pyMC"
        try:
            from importlib.metadata import version as pkg_version
            fw_version = pkg_version("pymc-repeater")
        except Exception:
            fw_version = "pyMC"

        result = f"{fw_version}\n{node_name}\n{owner_info}".encode("utf-8")
        logger.debug("GET_OWNER_INFO: %s", result.decode("utf-8", errors="replace"))
        return result
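

# Example (sketch, not part of the original module): decoding the 56-byte
# RepeaterStats blob built by _handle_get_status() above. Field names follow
# the firmware layout comment; the helper name is illustrative.
import struct


def unpack_repeater_stats(blob: bytes) -> dict:
    """Unpack a 56-byte RepeaterStats buffer into named fields."""
    names = (
        "batt_milli_volts", "curr_tx_queue_len", "noise_floor", "last_rssi",
        "n_packets_recv", "n_packets_sent", "total_air_time_secs",
        "total_up_time_secs", "n_sent_flood", "n_sent_direct", "n_recv_flood",
        "n_recv_direct", "err_events", "last_snr_x4", "n_direct_dups",
        "n_flood_dups", "total_rx_air_time_secs", "n_recv_errors",
    )
    return dict(zip(names, struct.unpack("<HHhhIIIIIIIIHhHHII", blob)))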
@@ -0,0 +1,702 @@
"""
Mesh CLI Handler
Handles administrative commands sent to repeaters and room servers via TXT_MSG packets.
Only users with admin permissions (via ACL) can execute these commands.
"""

import logging
import time
from pathlib import Path
from typing import Any, Callable, Dict, Optional

import yaml

logger = logging.getLogger(__name__)


class MeshCLI:
    """
    CLI command handler for mesh node administration (repeaters and room servers).
    Commands follow the format: XX|command params
    where XX is an optional sequence number that gets echoed in the reply.
    """

    def __init__(
        self,
        config_path: str,
        config: Dict[str, Any],
        save_config_callback: Callable,
        identity_type: str = "repeater",
        enable_regions: bool = True,
    ):
        """
        Initialize the CLI handler.

        Args:
            config_path: Path to the config.yaml file
            config: Current configuration dictionary
            save_config_callback: Callback to save config changes
            identity_type: Type of identity ('repeater' or 'room_server')
            enable_regions: Whether to enable region commands (only for repeaters)
        """
        self.config_path = Path(config_path)
        self.config = config
        self.save_config = save_config_callback
        self.identity_type = identity_type
        self.enable_regions = enable_regions

        # Get repeater config shortcut
        self.repeater_config = config.get("repeater", {})

    def handle_command(self, sender_pubkey: bytes, command: str, is_admin: bool) -> str:
        """
        Handle an incoming command from a client.

        Args:
            sender_pubkey: Public key of sender
            command: Command string (may include XX| prefix)
            is_admin: Whether sender has admin permissions

        Returns:
            Reply string to send back to sender
        """
        # Check admin permission first
        if not is_admin:
            return "Error: Admin permission required"

        logger.debug(f"handle_command received: '{command}' (len={len(command)})")

        # Extract optional sequence prefix (XX|)
        prefix = ""
        if len(command) > 4 and command[2] == "|":
            prefix = command[:3]
            command = command[3:]
            logger.debug(f"Extracted prefix: '{prefix}', remaining command: '{command}'")

        # Strip leading/trailing whitespace
        command = command.strip()
        logger.debug(f"After strip: '{command}'")

        # Route to appropriate handler
        reply = self._route_command(command)

        # Add prefix back to reply if present
        if prefix:
            return prefix + reply
        return reply

    def _route_command(self, command: str) -> str:
        """Route command to appropriate handler method."""

        # Help
        if command == "help" or command.startswith("help "):
            return self._cmd_help(command)

        # System commands
        elif command == "reboot":
            return self._cmd_reboot()
        elif command == "advert":
            return self._cmd_advert()
        elif command.startswith("clock"):
            return self._cmd_clock(command)
        elif command.startswith("time "):
            return self._cmd_time(command)
        elif command == "start ota":
            return "Error: OTA not supported in Python repeater"
        elif command.startswith("password "):
            return self._cmd_password(command)
        elif command == "clear stats":
            return self._cmd_clear_stats()
        elif command == "ver":
            return self._cmd_version()

        # ACL commands ('get acl' must be matched before the generic 'get ' dispatch)
        elif command == "get acl":
            return "Error: Use 'get acl' via serial console only"
        elif command.startswith("setperm "):
            return self._cmd_setperm(command)

        # Get commands
        elif command.startswith("get "):
            return self._cmd_get(command[4:])

        # Set commands
        elif command.startswith("set "):
            return self._cmd_set(command[4:])

        # Region commands (repeaters only)
        elif command.startswith("region"):
            if self.enable_regions:
                return self._cmd_region(command)
            else:
                return "Error: Region commands not available for room servers"

        # Neighbor commands
        elif command == "neighbors":
            return self._cmd_neighbors()
        elif command.startswith("neighbor.remove "):
            return self._cmd_neighbor_remove(command)

        # Temporary radio params
        elif command.startswith("tempradio "):
            return self._cmd_tempradio(command)

        # Sensor commands
        elif command.startswith("sensor "):
            return "Error: Sensor commands not implemented in Python repeater"

        # GPS commands
        elif command.startswith("gps"):
            return "Error: GPS commands not implemented in Python repeater"

        # Logging commands (bare 'log' is handled inside _cmd_log)
        elif command == "log" or command.startswith("log "):
            return self._cmd_log(command)

        # Statistics commands
        elif command.startswith("stats-"):
            return "Error: Stats commands not fully implemented yet"

        else:
            return "Unknown command"

    # ==================== Help Command ====================

    def _cmd_help(self, command: str) -> str:
        """Show available commands or detailed help for a specific command."""
        parts = command.split(None, 1)
        if len(parts) == 2:
            return self._help_detail(parts[1])

        lines = [
            "=== pyMC CLI Commands ===",
            "",
            "System:",
            "  reboot                Restart the repeater service",
            "  advert                Send self advertisement",
            "  clock                 Show current UTC time",
            "  clock sync            Sync clock (no-op, uses system time)",
            "  ver                   Show version info",
            "  password <pw>         Change admin password",
            "  clear stats           Clear statistics",
            "",
            "Get:",
            "  get name              Node name",
            "  get radio             Radio params (freq,bw,sf,cr)",
            "  get freq              Frequency (MHz)",
            "  get tx                TX power",
            "  get af                Airtime factor",
            "  get repeat            Repeat mode (on/off)",
            "  get lat / get lon     GPS coordinates",
            "  get role              Identity role",
            "  get guest.password    Guest password",
            "  get allow.read.only   Read-only access setting",
            "  get advert.interval   Advert interval (minutes)",
            "  get flood.advert.interval  Flood advert interval (hours)",
            "  get flood.max         Max flood hops",
            "  get rxdelay           RX delay base",
            "  get txdelay           TX delay factor",
            "  get direct.txdelay    Direct TX delay factor",
            "  get multi.acks        Multi-ack count",
            "  get int.thresh        Interference threshold",
            "  get agc.reset.interval  AGC reset interval",
            "",
            "Set: (use 'help set' for details)",
            "  set <param> <value>",
            "",
            "Other:",
            "  neighbors             List neighbors",
            "  neighbor.remove <key> Remove neighbor by pubkey",
            "  tempradio <freq> <bw> <sf> <cr> <timeout_mins>",
            "  setperm <pubkey> <perm>  Set ACL permissions",
            "  log start|stop|erase  Logging control",
        ]
        if self.enable_regions:
            lines.append("  region ...            Region commands")
        lines += ["", "Type 'help <command>' for details on a specific command."]
        return "\n".join(lines)

    def _help_detail(self, topic: str) -> str:
        """Return detailed help for a specific command topic."""
        topic = topic.strip()
        details = {
            "set": (
                "Set commands — set <param> <value>:\n"
                "  set name <name>               Set node name\n"
                "  set radio <f> <bw> <sf> <cr>  Set radio (restart required)\n"
                "  set freq <mhz>                Set frequency (restart required)\n"
                "  set tx <power>                Set TX power\n"
                "  set af <factor>               Airtime factor\n"
                "  set repeat on|off             Enable/disable repeating\n"
                "  set lat <deg>                 Latitude\n"
                "  set lon <deg>                 Longitude\n"
                "  set guest.password <pw>       Guest password\n"
                "  set allow.read.only on|off    Read-only access\n"
                "  set advert.interval <min>     60-240 minutes\n"
                "  set flood.advert.interval <hr>  3-48 hours\n"
                "  set flood.max <hops>          Max flood hops (max 64)\n"
                "  set rxdelay <val>             RX delay base (>=0)\n"
                "  set txdelay <val>             TX delay factor (>=0)\n"
                "  set direct.txdelay <val>      Direct TX delay (>=0)\n"
                "  set multi.acks <n>            Multi-ack count\n"
                "  set int.thresh <dbm>          Interference threshold\n"
                "  set agc.reset.interval <n>    AGC reset (rounded to x4)"
            ),
            "get": "Get commands — type 'help' to see all 'get' parameters.",
            "reboot": "Restart the repeater service via systemd.",
            "advert": "Trigger a self-advertisement flood packet.",
            "clock": "'clock' shows UTC time. 'clock sync' is a no-op (system time used).",
            "ver": "Show repeater version and identity type.",
            "password": "password <new_password> — Change the admin password.",
            "tempradio": (
                "tempradio <freq_mhz> <bw_khz> <sf> <cr> <timeout_mins>\n"
                "  Apply temporary radio parameters that revert after timeout.\n"
                "  freq: 300-2500 MHz, bw: 7-500 kHz, sf: 5-12, cr: 5-8"
            ),
            "neighbors": "List known neighbor nodes from the routing table.",
            "setperm": "setperm <pubkey_hex> <permission_int> — Set ACL permissions for a node.",
            "log": "log start|stop|erase — Control logging.",
        }
        return details.get(topic, f"No detailed help for '{topic}'. Type 'help' for command list.")

    # ==================== System Commands ====================

    def _cmd_reboot(self) -> str:
        """Reboot the repeater process."""
        from repeater.service_utils import restart_service

        logger.warning("Reboot command received via repeater CLI")
        success, message = restart_service()

        if success:
            return f"OK - {message}"
        else:
            return f"Error: {message}"

    def _cmd_advert(self) -> str:
        """Send self advertisement."""
        logger.info("Advert command received")
        # TODO: Trigger advertisement through packet handler
        return "Error: Not yet implemented"

    def _cmd_clock(self, command: str) -> str:
        """Handle clock commands."""
        if command == "clock":
            # Display current time (timezone-aware; utcnow() is deprecated)
            import datetime

            dt = datetime.datetime.now(datetime.timezone.utc)
            return f"{dt.hour:02d}:{dt.minute:02d} - {dt.day}/{dt.month}/{dt.year} UTC"
        elif command == "clock sync":
            # Clock sync happens automatically via sender_timestamp in protocol
            return "OK - clock sync not needed (system time used)"
        else:
            return "Unknown clock command"

    def _cmd_time(self, command: str) -> str:
        """Set time - not supported in Python (use system time)."""
        return "Error: Time setting not supported (system time is used)"

    def _cmd_password(self, command: str) -> str:
        """Change admin password."""
        new_password = command[9:].strip()

        if not new_password:
            return "Error: Password cannot be empty"

        # Update security config
        if "security" not in self.config:
            self.config["security"] = {}

        self.config["security"]["password"] = new_password

        # Save config
        try:
            self.save_config()
            return f"password now: {new_password}"
        except Exception as e:
            logger.error(f"Failed to save password: {e}")
            return "Error: Failed to save password"

    def _cmd_clear_stats(self) -> str:
        """Clear statistics."""
        # TODO: Implement stats clearing
        return "Error: Not yet implemented"

    def _cmd_version(self) -> str:
        """Get version information."""
        role = "room_server" if self.identity_type == "room_server" else "repeater"
        version = self.config.get("version", "1.0.0")
        return f"pyMC_{role} v{version}"

    # ==================== Get Commands ====================

    def _cmd_get(self, param: str) -> str:
        """Handle get commands."""
        param = param.strip()
        logger.debug(f"_cmd_get called with param: '{param}' (len={len(param)})")

        if param == "af":
            af = self.repeater_config.get("airtime_factor", 1.0)
            return f"> {af}"

        elif param == "name":
            name = self.repeater_config.get("name", "Unknown")
            return f"> {name}"

        elif param == "repeat":
            mode = self.repeater_config.get("mode", "forward")
            return f"> {'on' if mode == 'forward' else 'off'}"

        elif param == "lat":
            lat = self.repeater_config.get("latitude", 0.0)
            return f"> {lat}"

        elif param == "lon":
            lon = self.repeater_config.get("longitude", 0.0)
            return f"> {lon}"

        elif param == "radio":
            radio = self.config.get("radio", {})
            freq_hz = radio.get("frequency", 915000000)
            bw_hz = radio.get("bandwidth", 125000)
            sf = radio.get("spreading_factor", 7)
            cr = radio.get("coding_rate", 5)
            # Convert Hz to MHz for freq, Hz to kHz for bandwidth (match C++ ftoa output)
            freq_mhz = freq_hz / 1_000_000.0
            bw_khz = bw_hz / 1_000.0
            return f"> {freq_mhz},{bw_khz},{sf},{cr}"

        elif param == "freq":
            freq_hz = self.config.get("radio", {}).get("frequency", 915000000)
            freq_mhz = freq_hz / 1_000_000.0
            return f"> {freq_mhz}"

        elif param == "tx":
            power = self.config.get("radio", {}).get("tx_power", 20)
            return f"> {power}"

        elif param == "public.key":
            # TODO: Get from identity
            return "Error: Not yet implemented"

        elif param == "role":
            role = "room_server" if self.identity_type == "room_server" else "repeater"
            return f"> {role}"

        elif param == "guest.password":
            guest_pw = self.config.get("security", {}).get("guest_password", "")
            return f"> {guest_pw}"

        elif param == "allow.read.only":
            allow = self.config.get("security", {}).get("allow_read_only", False)
            return f"> {'on' if allow else 'off'}"

        elif param == "advert.interval":
            interval = self.repeater_config.get("advert_interval_minutes", 120)
            return f"> {interval}"

        elif param == "flood.advert.interval":
            interval = self.repeater_config.get("flood_advert_interval_hours", 24)
            return f"> {interval}"

        elif param == "flood.max":
            max_flood = self.repeater_config.get("max_flood_hops", 64)
            return f"> {max_flood}"

        elif param == "rxdelay":
            delay = self.repeater_config.get("rx_delay_base", 0.0)
            return f"> {delay}"

        elif param == "txdelay":
            delay = self.repeater_config.get("tx_delay_factor", 1.0)
            return f"> {delay}"

        elif param == "direct.txdelay":
            delay = self.repeater_config.get("direct_tx_delay_factor", 0.5)
            return f"> {delay}"

        elif param == "multi.acks":
            acks = self.repeater_config.get("multi_acks", 0)
            return f"> {acks}"

        elif param == "int.thresh":
            thresh = self.repeater_config.get("interference_threshold", -120)
            return f"> {thresh}"

        elif param == "agc.reset.interval":
            interval = self.repeater_config.get("agc_reset_interval", 0)
            return f"> {interval}"

        else:
            return f"??: {param}"

    # ==================== Set Commands ====================

    def _cmd_set(self, param: str) -> str:
        """Handle set commands."""
        parts = param.split(None, 1)
        if len(parts) < 2:
            return "Error: Missing value"

        key, value = parts[0], parts[1]

        try:
            if key == "af":
                self.repeater_config["airtime_factor"] = float(value)
                self.save_config()
                return "OK"

            elif key == "name":
                self.repeater_config["name"] = value
                self.save_config()
                return "OK"

            elif key == "repeat":
                self.repeater_config["mode"] = "forward" if value.lower() == "on" else "monitor"
                self.save_config()
                return f"OK - repeat is now {'ON' if self.repeater_config['mode'] == 'forward' else 'OFF'}"

            elif key == "lat":
                self.repeater_config["latitude"] = float(value)
                self.save_config()
                return "OK"

            elif key == "lon":
                self.repeater_config["longitude"] = float(value)
                self.save_config()
                return "OK"

            elif key == "radio":
                # Format: freq bw sf cr (freq in MHz, bw in kHz)
                radio_parts = value.split()
                if len(radio_parts) != 4:
                    return "Error: Expected freq bw sf cr"

                if "radio" not in self.config:
                    self.config["radio"] = {}

                # Config stores frequency and bandwidth in Hz ('get radio'
                # converts back to MHz/kHz), so convert the CLI values here
                self.config["radio"]["frequency"] = int(float(radio_parts[0]) * 1_000_000)
                self.config["radio"]["bandwidth"] = int(float(radio_parts[1]) * 1_000)
                self.config["radio"]["spreading_factor"] = int(radio_parts[2])
                self.config["radio"]["coding_rate"] = int(radio_parts[3])
                self.save_config()
                return "OK - restart repeater to apply"

            elif key == "freq":
                if "radio" not in self.config:
                    self.config["radio"] = {}
                # CLI value is MHz; config stores Hz
                self.config["radio"]["frequency"] = int(float(value) * 1_000_000)
                self.save_config()
                return "OK - restart repeater to apply"

            elif key == "tx":
                if "radio" not in self.config:
                    self.config["radio"] = {}
                self.config["radio"]["tx_power"] = int(value)
                self.save_config()
                return "OK"

            elif key == "guest.password":
                if "security" not in self.config:
                    self.config["security"] = {}
                self.config["security"]["guest_password"] = value
                self.save_config()
                return "OK"

            elif key == "allow.read.only":
                if "security" not in self.config:
                    self.config["security"] = {}
                self.config["security"]["allow_read_only"] = value.lower() == "on"
                self.save_config()
                return "OK"

            elif key == "advert.interval":
                mins = int(value)
                if mins > 0 and (mins < 60 or mins > 240):
                    return "Error: interval range is 60-240 minutes"
                self.repeater_config["advert_interval_minutes"] = mins
                self.save_config()
                return "OK"

            elif key == "flood.advert.interval":
                hours = int(value)
                if (hours > 0 and hours < 3) or hours > 48:
                    return "Error: interval range is 3-48 hours"
                self.repeater_config["flood_advert_interval_hours"] = hours
                self.save_config()
                return "OK"

            elif key == "flood.max":
                max_val = int(value)
                if max_val > 64:
                    return "Error: max 64"
                self.repeater_config["max_flood_hops"] = max_val
                self.save_config()
                return "OK"

            elif key == "rxdelay":
                delay = float(value)
                if delay < 0:
                    return "Error: cannot be negative"
                self.repeater_config["rx_delay_base"] = delay
                self.save_config()
                return "OK"

            elif key == "txdelay":
                delay = float(value)
                if delay < 0:
                    return "Error: cannot be negative"
                self.repeater_config["tx_delay_factor"] = delay
                self.save_config()
                return "OK"

            elif key == "direct.txdelay":
                delay = float(value)
                if delay < 0:
                    return "Error: cannot be negative"
                self.repeater_config["direct_tx_delay_factor"] = delay
                self.save_config()
                return "OK"

            elif key == "multi.acks":
                self.repeater_config["multi_acks"] = int(value)
                self.save_config()
                return "OK"

            elif key == "int.thresh":
                self.repeater_config["interference_threshold"] = int(value)
                self.save_config()
                return "OK"

            elif key == "agc.reset.interval":
                interval = int(value)
                # Round down to a multiple of 4
                rounded = (interval // 4) * 4
                self.repeater_config["agc_reset_interval"] = rounded
                self.save_config()
                return f"OK - interval rounded to {rounded}"

            else:
                return f"unknown config: {key}"

        except ValueError as e:
            return f"Error: invalid value - {e}"
        except Exception as e:
            logger.error(f"Set command error: {e}")
            return f"Error: {e}"

    # ==================== ACL Commands ====================

    def _cmd_setperm(self, command: str) -> str:
        """Set permissions for a public key."""
        # Format: setperm {pubkey-hex} {permissions-int}
        parts = command[8:].split()
        if len(parts) < 2:
            return "Err - bad params"

        pubkey_hex = parts[0]
        try:
            permissions = int(parts[1])
        except ValueError:
            return "Err - invalid permissions"

        # TODO: Apply permissions via ACL
        logger.info(f"setperm command: {pubkey_hex} -> {permissions}")
        return "Error: Not yet implemented - use config file"

    # ==================== Region Commands ====================

    def _cmd_region(self, command: str) -> str:
        """Handle region commands."""
        parts = command.split()

        if len(parts) == 1:
            return "Error: Region commands not implemented in Python repeater"

        subcommand = parts[1]

        if subcommand == "load":
            return "Error: Region commands not implemented"
        elif subcommand == "save":
            return "Error: Region commands not implemented"
        elif subcommand in ("allowf", "denyf", "get", "home", "put", "remove"):
            return "Error: Region commands not implemented"
        else:
            return "Err - ??"

    # ==================== Neighbor Commands ====================

    def _cmd_neighbors(self) -> str:
        """List neighbors."""
        # TODO: Get neighbors from routing table
        return "Error: Not yet implemented"

    def _cmd_neighbor_remove(self, command: str) -> str:
        """Remove a neighbor."""
        pubkey_hex = command[16:].strip()

        if not pubkey_hex:
            return "ERR: Missing pubkey"

        # TODO: Remove neighbor from routing table
        logger.info(f"neighbor.remove: {pubkey_hex}")
        return "Error: Not yet implemented"

    # ==================== Temporary Radio Commands ====================

    def _cmd_tempradio(self, command: str) -> str:
        """Apply temporary radio parameters."""
        # Format: tempradio {freq} {bw} {sf} {cr} {timeout_mins}
        parts = command[10:].split()

        if len(parts) < 5:
            return "Error: Expected freq bw sf cr timeout_mins"

        try:
            freq = float(parts[0])
            bw = float(parts[1])
            sf = int(parts[2])
            cr = int(parts[3])
            timeout_mins = int(parts[4])

            # Validate
            if not (300.0 <= freq <= 2500.0):
                return "Error: invalid frequency"
            if not (7.0 <= bw <= 500.0):
                return "Error: invalid bandwidth"
            if not (5 <= sf <= 12):
                return "Error: invalid spreading factor"
            if not (5 <= cr <= 8):
                return "Error: invalid coding rate"
            if timeout_mins <= 0:
                return "Error: invalid timeout"

            # TODO: Apply temporary radio parameters
            logger.info(f"tempradio: {freq}MHz {bw}kHz SF{sf} CR4/{cr} for {timeout_mins}min")
            return "Error: Not yet implemented"

        except ValueError:
            return "Error: invalid params"

    # ==================== Logging Commands ====================

    def _cmd_log(self, command: str) -> str:
        """Handle log commands."""
        if command == "log start":
            # TODO: Enable logging
            return "Error: Not yet implemented"
        elif command == "log stop":
            # TODO: Disable logging
            return "Error: Not yet implemented"
        elif command == "log erase":
            # TODO: Clear log file
            return "Error: Not yet implemented"
        elif command == "log":
            return "Error: Use journalctl to view logs"
        else:
            return "Unknown log command"


# Backward compatibility alias
RepeaterCLI = MeshCLI
|
||||
@@ -0,0 +1,729 @@
|
||||
import asyncio
import logging
import time
from typing import Dict, Optional

from pymc_core.protocol import CryptoUtils, PacketBuilder
from pymc_core.protocol.constants import PAYLOAD_TYPE_TXT_MSG

logger = logging.getLogger("RoomServer")

# Hard limit from C++ simple_room_server
MAX_UNSYNCED_POSTS = 32

# Text message type constants
TXT_TYPE_PLAIN = 0x00
TXT_TYPE_CLI_DATA = 0x01
TXT_TYPE_SIGNED_PLAIN = 0x02

# Push timing constants (from C++ simple_room_server)
PUSH_NOTIFY_DELAY_MS = 2000
SYNC_PUSH_INTERVAL_MS = 1200
POST_SYNC_DELAY_SECS = 6
PUSH_ACK_TIMEOUT_FLOOD_MS = 12000
PUSH_TIMEOUT_BASE_MS = 4000
PUSH_ACK_TIMEOUT_FACTOR_MS = 2000

# Safety limits and protections
MAX_MESSAGE_LENGTH = 160  # Match C++ MAX_POST_TEXT_LEN (151 bytes for text)
MAX_POSTS_PER_CLIENT_PER_MINUTE = 10  # Prevent spam
MAX_CLIENTS_PER_ROOM = 50  # From ACL default
MAX_PUSH_FAILURES = 3  # Evict after this many consecutive failures
INACTIVE_CLIENT_TIMEOUT = 3600  # Evict after 1 hour of inactivity (seconds)
MAX_CONSECUTIVE_SYNC_ERRORS = 10  # Circuit breaker threshold
DB_ERROR_RETRY_DELAY = 60  # Wait 1 minute on DB error (seconds)

# Backoff schedule for failed pushes (seconds)
RETRY_BACKOFF_SCHEDULE = [0, 30, 300, 3600]  # 0s, 30s, 5min, 1hr

# Note: Server/system messages now use the room server's actual public key.
# This allows clients to identify which room server sent the message.

# Global rate limiter (shared across all rooms)
_global_push_limiter = None
_global_push_lock = asyncio.Lock()
GLOBAL_MIN_GAP_BETWEEN_MESSAGES = 1.1  # 1.1s minimum gap between transmissions


class GlobalRateLimiter:
    """Paces radio transmissions by enforcing a minimum gap between them."""

    def __init__(self, min_gap_seconds: float = 0.1):
        self.min_gap = min_gap_seconds  # Minimum gap between consecutive messages
        self.lock = asyncio.Lock()
        self.last_release_time = 0

    async def acquire(self):
        """Wait until at least ``min_gap`` has elapsed since the last release.

        Note: the lock only serializes the gap check itself; it is released
        when this coroutine returns, so callers are throttled rather than
        mutually excluded for the whole transmission.
        """
        async with self.lock:
            # Enforce minimum gap between consecutive transmissions
            now = time.time()
            time_since_last = now - self.last_release_time
            if time_since_last < self.min_gap:
                wait_time = self.min_gap - time_since_last
                logger.debug(f"Global rate limiter: waiting {wait_time*1000:.0f}ms")
                await asyncio.sleep(wait_time)

    def release(self):
        """Record the end of a transmission; the next acquire() waits from here."""
        self.last_release_time = time.time()


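The `GlobalRateLimiter` above throttles pushes by enforcing a minimum gap between `release()` and the next `acquire()`. A minimal standalone sketch (a simplified re-implementation for illustration, not the module's own class) shows the pacing behavior:

```python
import asyncio
import time


class MinGapLimiter:
    """Simplified illustration of the min-gap throttle used above."""

    def __init__(self, min_gap_seconds: float):
        self.min_gap = min_gap_seconds
        self.lock = asyncio.Lock()
        self.last_release_time = 0.0

    async def acquire(self):
        # Serialize the gap check; sleep out the remainder of the gap
        async with self.lock:
            wait = self.last_release_time + self.min_gap - time.time()
            if wait > 0:
                await asyncio.sleep(wait)

    def release(self):
        self.last_release_time = time.time()


async def demo() -> float:
    limiter = MinGapLimiter(min_gap_seconds=0.2)
    start = time.time()
    for _ in range(3):
        await limiter.acquire()
        # ... transmit one packet here ...
        limiter.release()
    return time.time() - start


elapsed = asyncio.run(demo())
# Three back-to-back "transmissions": the 2nd and 3rd each wait ~0.2s
assert elapsed >= 0.4
```

With the real 1.1s `GLOBAL_MIN_GAP_BETWEEN_MESSAGES`, this caps all rooms combined at roughly one outgoing push per second.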
class RoomServer:
    """Stores room posts and pushes them to authenticated clients
    round-robin, with ACK tracking, backoff, and eviction."""

    def __init__(
        self,
        room_hash: int,
        room_name: str,
        local_identity,
        sqlite_handler,
        packet_injector,
        acl,
        max_posts: int = 32,
        config_path: str = None,
        config: dict = None,
        config_manager=None,
        send_advert_callback=None,
    ):
        self.room_hash = room_hash
        self.room_name = room_name
        self.local_identity = local_identity
        self.db = sqlite_handler
        self.packet_injector = packet_injector
        self.acl = acl

        # Create send_advert callback for this room server
        async def send_room_advert():
            """Send an advertisement for this specific room server."""
            if not packet_injector or not local_identity:
                logger.error(
                    f"Room '{room_name}': Cannot send advert - missing injector or identity"
                )
                return False

            try:
                from pymc_core.protocol import PacketBuilder
                from pymc_core.protocol.constants import (
                    ADVERT_FLAG_HAS_NAME,
                    ADVERT_FLAG_IS_ROOM_SERVER,
                )

                # Get room config
                room_config = config.get("identities", {}).get("room_servers", [])
                room_settings = {}
                for rs in room_config:
                    if rs.get("name") == room_name:
                        room_settings = rs.get("settings", {})
                        break

                # Use room-specific name and location
                node_name = room_settings.get("room_name", room_name)
                latitude = room_settings.get("latitude", 0.0)
                longitude = room_settings.get("longitude", 0.0)

                flags = ADVERT_FLAG_IS_ROOM_SERVER | ADVERT_FLAG_HAS_NAME

                packet = PacketBuilder.create_advert(
                    local_identity=local_identity,
                    name=node_name,
                    lat=latitude,
                    lon=longitude,
                    feature1=0,
                    feature2=0,
                    flags=flags,
                    route_type="flood",
                )

                # Send via packet injector
                await packet_injector(packet, wait_for_ack=False)

                logger.info(
                    f"Room '{room_name}': Sent flood advert '{node_name}' "
                    f"at ({latitude:.6f}, {longitude:.6f})"
                )
                return True

            except Exception as e:
                logger.error(f"Room '{room_name}': Failed to send advert: {e}", exc_info=True)
                return False

        # Initialize CLI handler for room server commands
        self.cli = None
        if config_path and config and config_manager:
            from .mesh_cli import MeshCLI

            self.cli = MeshCLI(
                config_path,
                config,
                config_manager,
                identity_type="room_server",
                enable_regions=False,  # Room servers don't support region commands
                send_advert_callback=send_room_advert,
                identity=local_identity,
                storage_handler=sqlite_handler,
            )
            logger.info(f"Room '{room_name}': Initialized CLI handler with identity and storage")

        # Enforce hard limit (match C++ MAX_UNSYNCED_POSTS)
        if max_posts > MAX_UNSYNCED_POSTS:
            logger.warning(
                f"Room '{room_name}': max_posts={max_posts} exceeds hard limit "
                f"of {MAX_UNSYNCED_POSTS}, capping to {MAX_UNSYNCED_POSTS}"
            )
            max_posts = MAX_UNSYNCED_POSTS
        self.max_posts = max_posts

        # Round-robin state
        self.next_client_idx = 0
        self.next_push_time = 0

        # Cleanup tracking
        self.last_cleanup_time = time.time()
        self.cleanup_interval = 600  # Cleanup every 10 minutes

        # Safety and monitoring
        self.client_post_times = {}  # Recent post times per client, for rate limiting
        self.consecutive_sync_errors = 0  # Circuit breaker counter
        self.last_eviction_check = time.time()
        self.eviction_check_interval = 300  # Check every 5 minutes

        # Initialize global rate limiter (singleton)
        global _global_push_limiter
        if _global_push_limiter is None:
            _global_push_limiter = GlobalRateLimiter(GLOBAL_MIN_GAP_BETWEEN_MESSAGES)
        self.global_limiter = _global_push_limiter

        # Background task handle
        self._sync_task = None
        self._running = False

        logger.info(
            f"RoomServer initialized: name='{room_name}', "
            f"hash=0x{room_hash:02X}, max_posts={max_posts}"
        )

    async def start(self):
        """Start the background sync loop."""
        if self._running:
            logger.warning(f"Room '{self.room_name}' sync loop already running")
            return

        self._running = True
        self._sync_task = asyncio.create_task(self._sync_loop())
        logger.info(f"Room '{self.room_name}' sync loop started")

    async def stop(self):
        """Cancel the background sync loop and wait for it to finish."""
        self._running = False
        if self._sync_task:
            self._sync_task.cancel()
            try:
                await self._sync_task
            except asyncio.CancelledError:
                pass
        logger.info(f"Room '{self.room_name}' sync loop stopped")

    async def add_post(
        self,
        client_pubkey: bytes,
        message_text: str,
        sender_timestamp: int,
        txt_type: int = TXT_TYPE_PLAIN,
        allow_server_author: bool = False,
    ) -> bool:
        """Validate, rate-limit, and store a new post; return True on success."""
        try:
            # SAFETY: Validate message length
            if len(message_text) > MAX_MESSAGE_LENGTH:
                logger.warning(
                    f"Room '{self.room_name}': Message from {client_pubkey[:4].hex()} "
                    f"exceeds max length ({len(message_text)} > {MAX_MESSAGE_LENGTH}), truncating"
                )
                message_text = message_text[:MAX_MESSAGE_LENGTH]

            # SAFETY: Rate limit per client
            client_key = client_pubkey.hex()
            now = time.time()

            if client_key not in self.client_post_times:
                self.client_post_times[client_key] = []

            # Remove timestamps older than 1 minute
            self.client_post_times[client_key] = [
                t for t in self.client_post_times[client_key] if now - t < 60
            ]

            # Check rate limit
            if len(self.client_post_times[client_key]) >= MAX_POSTS_PER_CLIENT_PER_MINUTE:
                logger.warning(
                    f"Room '{self.room_name}': Client {client_pubkey[:4].hex()} "
                    f"exceeded rate limit ({MAX_POSTS_PER_CLIENT_PER_MINUTE} posts/min), "
                    f"dropping message"
                )
                return False

            # Record this post time
            self.client_post_times[client_key].append(now)

            # Use our RTC time for post_timestamp
            post_timestamp = time.time()

            # Store to database
            msg_id = self.db.insert_room_message(
                room_hash=f"0x{self.room_hash:02X}",
                author_pubkey=client_pubkey.hex(),
                message_text=message_text,
                post_timestamp=post_timestamp,
                sender_timestamp=sender_timestamp,
                txt_type=txt_type,
            )

            if msg_id:
                logger.info(
                    f"Room '{self.room_name}': New post #{msg_id} from "
                    f"{client_pubkey[:4].hex()}: {message_text[:50]}"
                )

                # Log authenticated clients count for debugging distribution
                all_clients = self.acl.get_all_clients()
                logger.info(
                    f"Room '{self.room_name}': Message stored, will distribute to "
                    f"{len(all_clients)} authenticated client(s)"
                )

                # Update the author's sync_since to this message's timestamp so
                # they don't receive their own message back, and refresh their
                # activity timestamp (they're clearly active if posting).
                logger.debug(
                    f"Room '{self.room_name}': Updating author's sync_since to "
                    f"{post_timestamp} to prevent echo"
                )
                self.db.upsert_client_sync(
                    room_hash=f"0x{self.room_hash:02X}",
                    client_pubkey=client_pubkey.hex(),
                    sync_since=post_timestamp,  # Don't send this message back to author
                    last_activity=time.time(),
                )

                # Trigger push notification
                self.next_push_time = time.time() + (PUSH_NOTIFY_DELAY_MS / 1000.0)

                return True
            else:
                logger.error("Failed to store message to database")
                return False

        except Exception as e:
            logger.error(f"Error adding post: {e}", exc_info=True)
            return False

    async def push_post_to_client(self, client_info, post: Dict) -> bool:
        """Push a single post to one client and wait for its ACK."""
        try:
            # SAFETY: Check client failure backoff before taking the radio slot
            sync_state = self.db.get_client_sync(
                room_hash=f"0x{self.room_hash:02X}",
                client_pubkey=client_info.id.get_public_key().hex(),
            )

            if sync_state:
                failures = sync_state.get("push_failures", 0)
                if failures > 0:
                    # Apply exponential backoff
                    backoff_idx = min(failures, len(RETRY_BACKOFF_SCHEDULE) - 1)
                    backoff_delay = RETRY_BACKOFF_SCHEDULE[backoff_idx]
                    last_failure_time = sync_state.get("updated_at", 0)
                    time_since_failure = time.time() - last_failure_time

                    if time_since_failure < backoff_delay:
                        wait_time = backoff_delay - time_since_failure
                        logger.debug(
                            f"Room '{self.room_name}': Client "
                            f"0x{client_info.id.get_public_key()[0]:02X} "
                            f"in backoff (failure {failures}), waiting {wait_time:.0f}s"
                        )
                        return False  # Skip this client for now

            # SAFETY: Global transmission throttle. This is critical because
            # LoRa is serial (0.5-9s airtime per message).
            await self.global_limiter.acquire()

            # Build message payload
            timestamp = int(time.time())
            flags = TXT_TYPE_SIGNED_PLAIN << 2  # Include author prefix

            # Author prefix (first 4 bytes of pubkey)
            author_pubkey = bytes.fromhex(post["author_pubkey"])
            author_prefix = author_pubkey[:4]

            # Plaintext: timestamp(4) + flags(1) + author_prefix(4) + text
            message_bytes = post["message_text"].encode("utf-8")
            plaintext = (
                timestamp.to_bytes(4, "little") + bytes([flags]) + author_prefix + message_bytes
            )

            # Calculate expected ACK (same algorithm as pymc_core)
            attempt = 0
            pack_data = PacketBuilder._pack_timestamp_data(timestamp, attempt, message_bytes)
            ack_hash = CryptoUtils.sha256(pack_data + client_info.id.get_public_key())[:4]
            expected_ack_crc = int.from_bytes(ack_hash, "little")

            # Determine routing based on stored out_path
            route_type = "flood" if client_info.out_path_len < 0 else "direct"

            # Create datagram
            packet = PacketBuilder.create_datagram(
                ptype=PAYLOAD_TYPE_TXT_MSG,
                dest=client_info.id,
                local_identity=self.local_identity,
                secret=client_info.shared_secret,
                plaintext=plaintext,
                route_type=route_type,
            )

            # Add stored path for direct routing
            if route_type == "direct" and len(client_info.out_path) > 0:
                packet.path = bytearray(client_info.out_path[: client_info.out_path_len])
                packet.path_len = client_info.out_path_len

            # Calculate ACK timeout
            if route_type == "flood":
                ack_timeout = PUSH_ACK_TIMEOUT_FLOOD_MS / 1000.0
            else:
                path_len = client_info.out_path_len if client_info.out_path_len >= 0 else 0
                ack_timeout = (
                    PUSH_TIMEOUT_BASE_MS + PUSH_ACK_TIMEOUT_FACTOR_MS * (path_len + 1)
                ) / 1000.0

            # Update client sync state with pending ACK
            self.db.upsert_client_sync(
                room_hash=f"0x{self.room_hash:02X}",
                client_pubkey=client_info.id.get_public_key().hex(),
                pending_ack_crc=expected_ack_crc,
                push_post_timestamp=post["post_timestamp"],
                ack_timeout_time=time.time() + ack_timeout,
            )

            # Send packet (dispatcher will track ACK automatically).
            # This blocks for the entire transmission duration (0.5-9 seconds).
            try:
                success = await self.packet_injector(packet, wait_for_ack=True)
            finally:
                # SAFETY: Record the transmission time even if the send raised
                self.global_limiter.release()

            if success:
                # ACK received! Update sync state
                await self._handle_ack_received(
                    client_info.id.get_public_key(), post["post_timestamp"]
                )
                logger.info(
                    f"Room '{self.room_name}': Pushed post to "
                    f"0x{client_info.id.get_public_key()[0]:02X} via {route_type.upper()}, "
                    f"ACK received"
                )
            else:
                # ACK timeout
                await self._handle_ack_timeout(client_info.id.get_public_key())
                logger.warning(
                    f"Room '{self.room_name}': Push to "
                    f"0x{client_info.id.get_public_key()[0]:02X} timed out"
                )

            return success

        except Exception as e:
            logger.error(f"Error pushing post to client: {e}", exc_info=True)
            return False

    async def _handle_ack_received(self, client_pubkey: bytes, post_timestamp: float):
        """Advance the client's sync cursor after a successful push."""
        try:
            # Update sync state: advance sync_since, clear pending_ack, reset failures
            self.db.upsert_client_sync(
                room_hash=f"0x{self.room_hash:02X}",
                client_pubkey=client_pubkey.hex(),
                sync_since=post_timestamp,
                pending_ack_crc=0,
                push_failures=0,
                last_activity=time.time(),
            )
        except Exception as e:
            logger.error(f"Error handling ACK received: {e}")

    async def _handle_ack_timeout(self, client_pubkey: bytes):
        """Record a failed push against the client."""
        try:
            # Get current sync state
            sync_state = self.db.get_client_sync(
                room_hash=f"0x{self.room_hash:02X}", client_pubkey=client_pubkey.hex()
            )

            if sync_state:
                # Increment failure counter, clear pending_ack
                failures = sync_state.get("push_failures", 0) + 1
                self.db.upsert_client_sync(
                    room_hash=f"0x{self.room_hash:02X}",
                    client_pubkey=client_pubkey.hex(),
                    push_failures=failures,
                    pending_ack_crc=0,
                )

                if failures >= MAX_PUSH_FAILURES:
                    logger.warning(
                        f"Room '{self.room_name}': Client 0x{client_pubkey[0]:02X} "
                        f"has {failures} consecutive failures"
                    )
        except Exception as e:
            logger.error(f"Error handling ACK timeout: {e}")

    def get_unsynced_count(self, client_pubkey: bytes) -> int:
        """Return how many posts the client has not yet received."""
        try:
            # Get client's sync state
            sync_state = self.db.get_client_sync(
                room_hash=f"0x{self.room_hash:02X}", client_pubkey=client_pubkey.hex()
            )

            sync_since = sync_state["sync_since"] if sync_state else 0

            return self.db.get_unsynced_count(
                room_hash=f"0x{self.room_hash:02X}",
                client_pubkey=client_pubkey.hex(),
                sync_since=sync_since,
            )
        except Exception as e:
            logger.error(f"Error getting unsynced count: {e}")
            return 0

    async def _evict_failed_clients(self):
        """Evict clients that keep failing pushes or have gone inactive."""
        try:
            now = time.time()
            all_sync_states = self.db.get_all_room_clients(f"0x{self.room_hash:02X}")

            for sync_state in all_sync_states:
                client_pubkey_hex = sync_state["client_pubkey"]
                push_failures = sync_state.get("push_failures", 0)
                last_activity = sync_state.get("last_activity", 0)

                # Skip already-evicted clients (marked with last_activity=0)
                if last_activity == 0:
                    continue

                evict = False
                reason = ""

                # Check max failures
                if push_failures >= MAX_PUSH_FAILURES:
                    evict = True
                    reason = f"max failures ({push_failures})"

                # Check inactivity timeout
                elif now - last_activity > INACTIVE_CLIENT_TIMEOUT:
                    evict = True
                    reason = f"inactive for {(now - last_activity) / 60:.0f} minutes"

                if evict:
                    # Mark as evicted in the database
                    self.db.upsert_client_sync(
                        room_hash=f"0x{self.room_hash:02X}",
                        client_pubkey=client_pubkey_hex,
                        last_activity=0,  # Mark as evicted
                    )

                    # Remove from ACL
                    client_pubkey = bytes.fromhex(client_pubkey_hex)
                    self.acl.remove_client(client_pubkey)

                    logger.info(
                        f"Room '{self.room_name}': Evicted client "
                        f"0x{client_pubkey[0]:02X} ({reason})"
                    )

        except Exception as e:
            logger.error(f"Error evicting failed clients: {e}", exc_info=True)

    async def _sync_loop(self):
        """Round-robin push loop: one unsynced post to one client per tick."""
        # SAFETY: Stagger room startup to prevent a thundering herd
        import random

        startup_delay = random.uniform(0, 5)  # 0-5 second random delay
        await asyncio.sleep(startup_delay)

        logger.info(f"Room '{self.room_name}' sync loop starting (delayed {startup_delay:.1f}s)")

        while self._running:
            try:
                await asyncio.sleep(SYNC_PUSH_INTERVAL_MS / 1000.0)

                # SAFETY: Circuit breaker - pause if too many consecutive errors
                if self.consecutive_sync_errors >= MAX_CONSECUTIVE_SYNC_ERRORS:
                    logger.error(
                        f"Room '{self.room_name}': Circuit breaker tripped! "
                        f"{self.consecutive_sync_errors} consecutive errors. "
                        f"Pausing for {DB_ERROR_RETRY_DELAY}s"
                    )
                    await asyncio.sleep(DB_ERROR_RETRY_DELAY)
                    self.consecutive_sync_errors = 0  # Reset after pause
                    continue

                # SAFETY: Periodic eviction check (every 5 minutes)
                if time.time() - self.last_eviction_check > self.eviction_check_interval:
                    await self._evict_failed_clients()
                    self.last_eviction_check = time.time()

                # Periodic cleanup check (every 10 minutes)
                if time.time() - self.last_cleanup_time > self.cleanup_interval:
                    await self._cleanup_old_messages()
                    self.last_cleanup_time = time.time()

                # Check if it's time to push
                if time.time() < self.next_push_time:
                    continue

                # Get all clients for this room
                all_clients = self.acl.get_all_clients()
                if not all_clients:
                    # Room is idle; re-check shortly without logging,
                    # to avoid log spam
                    self.next_push_time = time.time() + 1.0  # Check again in 1 second
                    continue

                # SAFETY: Limit number of clients
                if len(all_clients) > MAX_CLIENTS_PER_ROOM:
                    logger.warning(
                        f"Room '{self.room_name}': Too many clients "
                        f"({len(all_clients)} > {MAX_CLIENTS_PER_ROOM})"
                    )
                    all_clients = all_clients[:MAX_CLIENTS_PER_ROOM]

                # Check for ACK timeouts first
                await self._check_ack_timeouts()

                # Track how many clients we've checked in this iteration
                clients_checked = 0
                max_checks = len(all_clients)

                # Round-robin: find next active client
                while clients_checked < max_checks:
                    # Get next client
                    if self.next_client_idx >= len(all_clients):
                        self.next_client_idx = 0

                    client = all_clients[self.next_client_idx]
                    self.next_client_idx = (self.next_client_idx + 1) % len(all_clients)
                    clients_checked += 1

                    # Get client sync state
                    sync_state = self.db.get_client_sync(
                        room_hash=f"0x{self.room_hash:02X}",
                        client_pubkey=client.id.get_public_key().hex(),
                    )

                    # Skip if already waiting for ACK, evicted, or at max failures
                    if sync_state:
                        pending_ack = sync_state.get("pending_ack_crc", 0)
                        last_activity = sync_state.get("last_activity", 0)
                        push_failures = sync_state.get("push_failures", 0)

                        if pending_ack != 0:
                            logger.debug(
                                f"Skipping client 0x{client.id.get_public_key()[0]:02X} "
                                f"(waiting for ACK)"
                            )
                            continue

                        if last_activity == 0:
                            logger.debug(
                                f"Skipping client 0x{client.id.get_public_key()[0]:02X} (evicted)"
                            )
                            continue

                        if push_failures >= MAX_PUSH_FAILURES:
                            logger.debug(
                                f"Skipping client 0x{client.id.get_public_key()[0]:02X} "
                                f"(max failures)"
                            )
                            continue

                        sync_since = sync_state.get("sync_since", 0)
                    else:
                        # Initialize sync state for a new client, using the
                        # sync_since sent during login (via the ACL) if available
                        sync_since = client.sync_since if hasattr(client, "sync_since") else 0
                        logger.info(
                            f"Room '{self.room_name}': Initializing client "
                            f"0x{client.id.get_public_key()[0]:02X} with sync_since={sync_since}"
                        )
                        self.db.upsert_client_sync(
                            room_hash=f"0x{self.room_hash:02X}",
                            client_pubkey=client.id.get_public_key().hex(),
                            sync_since=sync_since,
                            last_activity=time.time(),
                        )

                    # Find next unsynced message for this client
                    unsynced = self.db.get_unsynced_messages(
                        room_hash=f"0x{self.room_hash:02X}",
                        client_pubkey=client.id.get_public_key().hex(),
                        sync_since=sync_since,
                        limit=1,
                    )

                    if unsynced:
                        post = unsynced[0]
                        logger.debug(
                            f"Room '{self.room_name}': Client "
                            f"0x{client.id.get_public_key()[0]:02X} has unsynced message "
                            f"#{post['id']}, post_timestamp={post['post_timestamp']:.1f}"
                        )
                        # Check if enough time has passed since post creation
                        now = time.time()
                        if now >= post["post_timestamp"] + POST_SYNC_DELAY_SECS:
                            # Push this post
                            await self.push_post_to_client(client, post)
                            self.next_push_time = time.time() + (SYNC_PUSH_INTERVAL_MS / 1000.0)
                            break  # Exit the while loop
                        else:
                            # Not ready yet, check sooner
                            self.next_push_time = time.time() + (SYNC_PUSH_INTERVAL_MS / 8000.0)
                            break  # Exit the while loop
                    else:
                        # No unsynced posts for this client, try next client
                        continue

                # If we checked all clients and none were active or ready,
                # wait longer before the next check
                if clients_checked >= max_checks:
                    self.next_push_time = time.time() + 5.0  # Wait 5 seconds

                # SAFETY: Reset error counter on successful iteration
                self.consecutive_sync_errors = 0

            except asyncio.CancelledError:
                break
            except Exception as e:
                # SAFETY: Track consecutive errors for circuit breaker
                self.consecutive_sync_errors += 1
                logger.error(
                    f"Room '{self.room_name}': Sync loop error "
                    f"#{self.consecutive_sync_errors}: {e}",
                    exc_info=True,
                )

                # SAFETY: Back off on errors
                backoff = min(self.consecutive_sync_errors, 10)  # Cap at 10 seconds
                await asyncio.sleep(backoff)

        logger.info(f"Room '{self.room_name}' sync loop stopped")

    async def _check_ack_timeouts(self):
        """Count pushes whose ACK deadline has passed as failures."""
        try:
            now = time.time()
            all_sync_states = self.db.get_all_room_clients(f"0x{self.room_hash:02X}")

            for sync_state in all_sync_states:
                if sync_state["pending_ack_crc"] != 0:
                    timeout_time = sync_state.get("ack_timeout_time", 0)
                    if now >= timeout_time:
                        # ACK timeout
                        client_pubkey = bytes.fromhex(sync_state["client_pubkey"])
                        await self._handle_ack_timeout(client_pubkey)
        except Exception as e:
            logger.error(f"Error checking ACK timeouts: {e}")

    async def _cleanup_old_messages(self):
        """Trim stored messages down to max_posts for this room."""
        try:
            deleted = self.db.cleanup_old_messages(
                room_hash=f"0x{self.room_hash:02X}", keep_count=self.max_posts
            )
            if deleted > 0:
                logger.info(f"Room '{self.room_name}': Cleaned up {deleted} old messages")
        except Exception as e:
            logger.error(f"Error cleaning up old messages: {e}")
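The backoff logic in `push_post_to_client` indexes `RETRY_BACKOFF_SCHEDULE` with a clamped failure count, so repeated failures saturate at the last entry instead of overflowing the list. A tiny sketch of that lookup (the helper function name is mine, for illustration):

```python
RETRY_BACKOFF_SCHEDULE = [0, 30, 300, 3600]  # seconds: 0s, 30s, 5min, 1hr


def backoff_delay(consecutive_failures: int) -> int:
    """Clamp the failure count into the schedule; further failures
    keep returning the final (1 hour) delay."""
    idx = min(consecutive_failures, len(RETRY_BACKOFF_SCHEDULE) - 1)
    return RETRY_BACKOFF_SCHEDULE[idx]


assert backoff_delay(0) == 0
assert backoff_delay(1) == 30
assert backoff_delay(2) == 300
assert backoff_delay(99) == 3600
```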
@@ -0,0 +1,574 @@
"""
Text message (TXT_MSG) handling helper for pyMC Repeater.

This module processes incoming text messages for all managed identities
(the repeater identity plus identity-manager identities).
It also handles CLI commands from admin users on the repeater identity.
"""

import asyncio
import logging
import struct
import time

from pymc_core.node.handlers.text import TextMessageHandler

from .mesh_cli import MeshCLI
from .room_server import RoomServer

logger = logging.getLogger("TextHelper")

# Text message type flags
TXT_TYPE_PLAIN = 0x00
TXT_TYPE_CLI_DATA = 0x01


class TextHelper:
    """Routes decrypted TXT_MSG packets to per-identity handlers."""

    def __init__(
        self,
        identity_manager,
        packet_injector=None,
        acl_dict=None,
        log_fn=None,
        config_path: str = None,
        config: dict = None,
        config_manager=None,
        sqlite_handler=None,
        send_advert_callback=None,
    ):
        self.identity_manager = identity_manager
        self.packet_injector = packet_injector
        self.log_fn = log_fn or logger.info
        self.acl_dict = acl_dict or {}  # Per-identity ACLs keyed by hash_byte
        self.sqlite_handler = sqlite_handler  # For room server database operations
        self.send_advert_callback = send_advert_callback  # Callback to send repeater advert

        # Dictionary of handlers keyed by dest_hash
        self.handlers = {}

        # Dictionary of room servers keyed by dest_hash
        self.room_servers = {}

        # Track repeater identity for CLI commands
        self.repeater_hash = None

        # Store config for later CLI initialization (needs identity and storage)
        self.config_path = config_path
        self.config = config
        self.config_manager = config_manager

        # Initialize CLI handler later, once the repeater identity is registered
        self.cli = None
        self._pending_tasks = set()

    def _track_task(self, task: asyncio.Task) -> None:
        """Keep a strong reference to a background task and log its failures."""
        self._pending_tasks.add(task)

        def _on_done(done_task: asyncio.Task) -> None:
            self._pending_tasks.discard(done_task)
            try:
                done_task.result()
            except asyncio.CancelledError:
                pass
            except Exception as e:
                logger.error(f"Background text task failed: {e}", exc_info=True)

        task.add_done_callback(_on_done)

    def register_identity(
        self, name: str, identity, identity_type: str = "room_server", radio_config=None
    ):
        """Create and register a text handler (and a RoomServer, if applicable)
        for an identity, keyed by the first byte of its public key."""
        hash_byte = identity.get_public_key()[0]

        # Get ACL for this identity
        identity_acl = self.acl_dict.get(hash_byte)
        if not identity_acl:
            logger.warning(f"Cannot register identity '{name}': no ACL for hash 0x{hash_byte:02X}")
            return

        # Create a contacts wrapper from this identity's ACL
        acl_contacts = self._create_acl_contacts_wrapper(identity_acl)

        # Create TextMessageHandler for this identity
        handler = TextMessageHandler(
            local_identity=identity,
            contacts=acl_contacts,
            log_fn=self.log_fn,
            send_packet_fn=self._send_packet,
            radio_config=radio_config,
        )

        # Register by dest hash
        self.handlers[hash_byte] = {
            "handler": handler,
            "identity": identity,
            "name": name,
            "type": identity_type,
        }

        # Track repeater identity for CLI commands
        if identity_type == "repeater":
            self.repeater_hash = hash_byte
            logger.info(f"Set repeater hash for CLI: 0x{hash_byte:02X}")

            # Initialize CLI handler now that we have the repeater identity
            if self.config_path and self.config and self.config_manager:
                self.cli = MeshCLI(
                    self.config_path,
                    self.config,
                    self.config_manager,
                    identity_type="repeater",
                    enable_regions=True,
                    send_advert_callback=self.send_advert_callback,
                    identity=identity,
                    storage_handler=self.sqlite_handler,
                )
                logger.info(
                    "Initialized CLI handler for repeater commands with identity and storage"
                )

        # Create RoomServer instance for room_server identities
        if identity_type == "room_server" and self.sqlite_handler:
            try:
                from .room_server import MAX_UNSYNCED_POSTS

                room_config = radio_config or {}
                max_posts = room_config.get("max_posts", MAX_UNSYNCED_POSTS)

                # Enforce hard limit
                if max_posts > MAX_UNSYNCED_POSTS:
                    logger.warning(
                        f"Room '{name}': Configured max_posts={max_posts} exceeds hard limit "
                        f"of {MAX_UNSYNCED_POSTS}, capping to {MAX_UNSYNCED_POSTS}"
                    )
                    max_posts = MAX_UNSYNCED_POSTS

                room_server = RoomServer(
                    room_hash=hash_byte,
                    room_name=name,
                    local_identity=identity,
                    sqlite_handler=self.sqlite_handler,
                    packet_injector=self.packet_injector,
                    acl=identity_acl,
                    max_posts=max_posts,
                    config_path=self.config_path,
                    config=self.config,
                    config_manager=self.config_manager,
                )

                self.room_servers[hash_byte] = room_server

                # Start sync loop
                start_task = asyncio.create_task(room_server.start())
                self._track_task(start_task)

                logger.info(
                    f"Registered room server '{name}': hash=0x{hash_byte:02X}, "
                    f"max_posts={max_posts}"
                )
            except Exception as e:
                logger.error(f"Failed to create room server '{name}': {e}", exc_info=True)

        logger.info(f"Registered {identity_type} '{name}' text handler: hash=0x{hash_byte:02X}")

    def _create_acl_contacts_wrapper(self, acl):
        """Adapt an identity ACL to the contacts interface TextMessageHandler expects."""

        class ACLContactsWrapper:
            def __init__(self, identity_acl):
                self._acl = identity_acl

            @property
            def contacts(self):
                contact_list = []
                for client_info in self._acl.get_all_clients():
                    # Create the minimal contact object that TextMessageHandler needs
                    class ContactProxy:
                        def __init__(self, client):
                            self.public_key = client.id.get_public_key().hex()
                            self.name = f"client_{self.public_key[:8]}"

                    contact_list.append(ContactProxy(client_info))
                return contact_list

        return ACLContactsWrapper(acl)

    async def process_text_packet(self, packet):
        """Route an incoming TXT_MSG packet to the handler for its dest hash."""
        try:
            if len(packet.payload) < 2:
                return False

            dest_hash = packet.payload[0]
            src_hash = packet.payload[1]

            handler_info = self.handlers.get(dest_hash)
            if handler_info:
                logger.debug(
                    f"Routing text message to '{handler_info['name']}': "
                    f"dest=0x{dest_hash:02X}, src=0x{src_hash:02X}"
                )

                # Let the handler decrypt the message first
                await handler_info["handler"](packet)

                # Call placeholder for custom processing
                await self._on_message_received(
                    identity_name=handler_info["name"],
                    identity_type=handler_info["type"],
                    packet=packet,
                    dest_hash=dest_hash,
                    src_hash=src_hash,
                )

                # Mark packet as handled
                packet.mark_do_not_retransmit()
|
||||
return True
|
||||
else:
|
||||
logger.debug(f"No text handler for hash 0x{dest_hash:02X}, allowing forward")
|
||||
return False
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing text packet: {e}")
|
||||
return False
|
||||
|
||||
async def _on_message_received(
|
||||
self,
|
||||
identity_name: str,
|
||||
identity_type: str,
|
||||
packet,
|
||||
dest_hash: int,
|
||||
src_hash: int,
|
||||
):
|
||||
|
||||
# Placeholder - can be overridden or callback can be added
|
||||
logger.debug(
|
||||
f"Message received for {identity_type} '{identity_name}' " f"from 0x{src_hash:02X}"
|
||||
)
|
||||
|
||||
# Extract decrypted message if available
|
||||
if hasattr(packet, "decrypted") and packet.decrypted:
|
||||
message_text = packet.decrypted.get("text", "<unknown>")
|
||||
|
||||
# Clean message text - remove null bytes and trailing whitespace
|
||||
message_text = message_text.rstrip("\x00").rstrip()
|
||||
|
||||
logger.info(f"[{identity_type}:{identity_name}] Message: {message_text}")
|
||||
|
||||
# Handle room server messages
|
||||
if identity_type == "room_server" and dest_hash in self.room_servers:
|
||||
room_server = self.room_servers[dest_hash]
|
||||
|
||||
# Check if this is a CLI command FIRST (before storing as post)
|
||||
if self._is_cli_command(message_text):
|
||||
# Handle CLI command - do NOT store as post
|
||||
if room_server and room_server.cli:
|
||||
try:
|
||||
# Check admin permission
|
||||
is_admin = self._check_admin_permission_for_identity(
|
||||
src_hash, dest_hash
|
||||
)
|
||||
|
||||
if not is_admin:
|
||||
logger.warning(
|
||||
f"Room '{identity_name}': CLI command denied from 0x{src_hash:02X} (not admin)"
|
||||
)
|
||||
return
|
||||
|
||||
# Get sender's full pubkey
|
||||
identity_acl = self.acl_dict.get(dest_hash)
|
||||
sender_pubkey = bytes([src_hash]) + b"\x00" * 31 # Default
|
||||
if identity_acl:
|
||||
for client_info in identity_acl.get_all_clients():
|
||||
if client_info.id.get_public_key()[0] == src_hash:
|
||||
sender_pubkey = client_info.id.get_public_key()
|
||||
break
|
||||
|
||||
# Handle CLI command
|
||||
reply = room_server.cli.handle_command(
|
||||
sender_pubkey=sender_pubkey, command=message_text, is_admin=is_admin
|
||||
)
|
||||
|
||||
logger.info(
|
||||
f"Room '{identity_name}': CLI command from 0x{src_hash:02X}: {message_text[:50]} -> {reply[:100]}"
|
||||
)
|
||||
|
||||
# Send reply back to sender
|
||||
handler_info = self.handlers.get(dest_hash)
|
||||
if handler_info:
|
||||
await self._send_cli_reply(packet, reply, handler_info)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(
|
||||
f"Error processing room server CLI command: {e}", exc_info=True
|
||||
)
|
||||
|
||||
# CLI command handled, don't store as post
|
||||
return
|
||||
|
||||
# NOT a CLI command - store as regular room post
|
||||
try:
|
||||
# Get sender's full pubkey
|
||||
identity_acl = self.acl_dict.get(dest_hash)
|
||||
sender_pubkey = bytes([src_hash]) + b"\x00" * 31 # Default
|
||||
if identity_acl:
|
||||
for client_info in identity_acl.get_all_clients():
|
||||
if client_info.id.get_public_key()[0] == src_hash:
|
||||
sender_pubkey = client_info.id.get_public_key()
|
||||
break
|
||||
|
||||
# Store message as post
|
||||
sender_timestamp = int(time.time())
|
||||
success = await room_server.add_post(
|
||||
client_pubkey=sender_pubkey,
|
||||
message_text=message_text,
|
||||
sender_timestamp=sender_timestamp,
|
||||
txt_type=TXT_TYPE_PLAIN,
|
||||
)
|
||||
|
||||
if success:
|
||||
logger.info(
|
||||
f"Room '{identity_name}': New post from {sender_pubkey[:4].hex()}: {message_text[:50]}"
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error storing room post: {e}", exc_info=True)
|
||||
|
||||
return
|
||||
|
||||
# Check if this is a CLI command to the repeater (AFTER decryption)
|
||||
if dest_hash == self.repeater_hash and self.cli and self._is_cli_command(message_text):
|
||||
try:
|
||||
# Check admin permission
|
||||
is_admin = self._check_admin_permission_for_identity(
|
||||
src_hash, self.repeater_hash
|
||||
)
|
||||
|
||||
# If not admin, log and return without sending reply
|
||||
if not is_admin:
|
||||
logger.warning(
|
||||
f"CLI command denied from 0x{src_hash:02X} (not admin): {message_text[:50]}"
|
||||
)
|
||||
return
|
||||
|
||||
# Get client for full public key
|
||||
repeater_acl = self.acl_dict.get(self.repeater_hash)
|
||||
sender_pubkey = bytes([src_hash]) + b"\x00" * 31 # Default
|
||||
if repeater_acl:
|
||||
for client_info in repeater_acl.get_all_clients():
|
||||
if client_info.id.get_public_key()[0] == src_hash:
|
||||
sender_pubkey = client_info.id.get_public_key()
|
||||
break
|
||||
|
||||
# Handle CLI command
|
||||
reply = self.cli.handle_command(
|
||||
sender_pubkey=sender_pubkey, command=message_text, is_admin=is_admin
|
||||
)
|
||||
|
||||
logger.info(
|
||||
f"CLI command from 0x{src_hash:02X}: {message_text[:50]} -> {reply[:100]}"
|
||||
)
|
||||
|
||||
# Send reply back to sender
|
||||
handler_info = self.handlers.get(dest_hash)
|
||||
if handler_info:
|
||||
await self._send_cli_reply(packet, reply, handler_info)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing CLI command: {e}", exc_info=True)
|
||||
|
||||
async def _send_packet(self, packet, wait_for_ack: bool = False):
|
||||
|
||||
if self.packet_injector:
|
||||
try:
|
||||
return await self.packet_injector(packet, wait_for_ack=wait_for_ack)
|
||||
except Exception as e:
|
||||
logger.error(f"Error sending packet: {e}")
|
||||
return False
|
||||
else:
|
||||
logger.error("No packet injector configured, cannot send packet")
|
||||
return False
|
||||
|
||||
def set_message_callback(self, callback):
|
||||
|
||||
self._message_callback = callback
|
||||
|
||||
def list_registered_identities(self):
|
||||
|
||||
return [
|
||||
{
|
||||
"hash": hash_byte,
|
||||
"name": info["name"],
|
||||
"type": info["type"],
|
||||
}
|
||||
for hash_byte, info in self.handlers.items()
|
||||
]
|
||||
|
||||
async def cleanup(self):
|
||||
"""Cleanup room servers and handlers."""
|
||||
# Stop all room server sync loops
|
||||
for room_server in self.room_servers.values():
|
||||
try:
|
||||
await room_server.stop()
|
||||
except Exception as e:
|
||||
logger.error(f"Error stopping room server: {e}")
|
||||
|
||||
logger.info("TextHelper cleanup complete")
|
||||
|
||||
def _is_cli_command(self, message: str) -> bool:
|
||||
"""Check if message looks like a CLI command."""
|
||||
# Strip optional sequence prefix (XX|)
|
||||
if len(message) > 4 and message[2] == "|":
|
||||
message = message[3:].strip()
|
||||
|
||||
# Check for known command prefixes
|
||||
command_prefixes = [
|
||||
"get ",
|
||||
"set ",
|
||||
"reboot",
|
||||
"advert",
|
||||
"clock",
|
||||
"time ",
|
||||
"password ",
|
||||
"clear ",
|
||||
"ver",
|
||||
"board",
|
||||
"neighbors",
|
||||
"neighbor.",
|
||||
"tempradio ",
|
||||
"setperm ",
|
||||
"region",
|
||||
"sensor ",
|
||||
"gps",
|
||||
"log ",
|
||||
"stats-",
|
||||
"start ota",
|
||||
]
|
||||
|
||||
return any(message.startswith(prefix) for prefix in command_prefixes)
|
||||
|
||||
def _check_admin_permission(self, src_hash: int) -> bool:
|
||||
"""Check if sender has admin permissions for repeater (legacy method)."""
|
||||
return self._check_admin_permission_for_identity(src_hash, self.repeater_hash)
|
||||
|
||||
def _check_admin_permission_for_identity(self, src_hash: int, identity_hash: int) -> bool:
|
||||
"""Check if sender has admin permissions (bit 0x02) for a specific identity."""
|
||||
# Get the identity's ACL
|
||||
identity_acl = self.acl_dict.get(identity_hash)
|
||||
if not identity_acl:
|
||||
return False
|
||||
|
||||
# Get client by hash byte
|
||||
clients = identity_acl.get_all_clients()
|
||||
for client_info in clients:
|
||||
pubkey = client_info.id.get_public_key()
|
||||
if pubkey[0] == src_hash:
|
||||
# Check admin bit (0x02 = PERM_ACL_ADMIN)
|
||||
permissions = getattr(client_info, "permissions", 0)
|
||||
PERM_ACL_ADMIN = 0x02
|
||||
return (permissions & 0x02) == PERM_ACL_ADMIN
|
||||
|
||||
return False
|
||||
|
||||
async def _send_cli_reply(self, original_packet, reply_text: str, handler_info: dict):
|
||||
"""
|
||||
Send CLI reply back to sender using TXT_MSG datagram.
|
||||
|
||||
Follows the C++ pattern (lines 603-609 in MyMesh.cpp):
|
||||
- Creates TXT_MSG datagram with TXT_TYPE_CLI_DATA flag
|
||||
- Encrypts with shared secret from ACL client
|
||||
- Uses client->out_path_len to decide routing:
|
||||
* if out_path_len < 0: sendFlood()
|
||||
* else: sendDirect() with stored out_path
|
||||
"""
|
||||
import time
|
||||
|
||||
from pymc_core.protocol import Identity, PacketBuilder
|
||||
from pymc_core.protocol.constants import PAYLOAD_TYPE_TXT_MSG
|
||||
|
||||
try:
|
||||
src_hash = original_packet.payload[1]
|
||||
dest_hash = original_packet.payload[0]
|
||||
|
||||
incoming_route = original_packet.get_route_type()
|
||||
logger.debug(
|
||||
f"CLI reply: original packet dest=0x{dest_hash:02X}, src=0x{src_hash:02X}, incoming_route={incoming_route}"
|
||||
)
|
||||
|
||||
# Find the client in the DESTINATION identity's ACL (not always repeater!)
|
||||
# dest_hash is the identity that received the command (repeater OR room server)
|
||||
identity_acl = self.acl_dict.get(dest_hash)
|
||||
if not identity_acl:
|
||||
logger.error(f"No ACL found for identity 0x{dest_hash:02X} for CLI reply")
|
||||
return
|
||||
|
||||
client = None
|
||||
for client_info in identity_acl.get_all_clients():
|
||||
pubkey = client_info.id.get_public_key()
|
||||
if pubkey[0] == src_hash:
|
||||
client = client_info
|
||||
break
|
||||
|
||||
if not client:
|
||||
logger.error(
|
||||
f"Client 0x{src_hash:02X} not found in identity 0x{dest_hash:02X} ACL for CLI reply"
|
||||
)
|
||||
return
|
||||
|
||||
# Get shared secret from client
|
||||
shared_secret = client.shared_secret
|
||||
if not shared_secret or len(shared_secret) == 0:
|
||||
logger.error(f"No shared secret for client 0x{src_hash:02X}")
|
||||
return
|
||||
|
||||
# Build reply packet payload
|
||||
# Format: timestamp(4) + flags(1) + reply_text
|
||||
timestamp = int(time.time())
|
||||
TXT_TYPE_CLI_DATA = 0x01
|
||||
flags = TXT_TYPE_CLI_DATA << 2 # Upper 6 bits are txt_type
|
||||
|
||||
reply_bytes = reply_text.encode("utf-8")
|
||||
plaintext = timestamp.to_bytes(4, "little") + bytes([flags]) + reply_bytes
|
||||
|
||||
# Decide routing based on client->out_path_len (C++ pattern)
|
||||
# out_path is populated by PATH packets, NOT from incoming text message route
|
||||
route_type = "flood" if client.out_path_len < 0 else "direct"
|
||||
logger.debug(
|
||||
f"CLI reply: client.out_path_len={client.out_path_len}, using route_type={route_type}"
|
||||
)
|
||||
|
||||
reply_packet = PacketBuilder.create_datagram(
|
||||
ptype=PAYLOAD_TYPE_TXT_MSG,
|
||||
dest=client.id,
|
||||
local_identity=handler_info["identity"],
|
||||
secret=shared_secret,
|
||||
plaintext=plaintext,
|
||||
route_type=route_type,
|
||||
)
|
||||
|
||||
# Add path for direct routing if available from PATH packets
|
||||
if client.out_path_len >= 0 and len(client.out_path) > 0:
|
||||
reply_packet.path = bytearray(client.out_path[: client.out_path_len])
|
||||
reply_packet.path_len = client.out_path_len
|
||||
logger.debug(
|
||||
f"CLI reply: Added stored out_path - path_len={reply_packet.path_len}, path={[hex(b) for b in reply_packet.path]}"
|
||||
)
|
||||
|
||||
# Send with delay (CLI_REPLY_DELAY_MILLIS = 600ms in C++)
|
||||
CLI_REPLY_DELAY_MS = 600
|
||||
await asyncio.sleep(CLI_REPLY_DELAY_MS / 1000.0)
|
||||
|
||||
await self._send_packet(reply_packet, wait_for_ack=False)
|
||||
logger.info(
|
||||
f"CLI reply sent to 0x{src_hash:02X} via {route_type.upper()}: {reply_text[:50]}"
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error sending CLI reply: {e}", exc_info=True)
|
||||
@@ -6,13 +6,15 @@ which are used for network diagnostics to track the path and SNR
|
||||
of packets through the mesh network.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import logging
|
||||
import time
|
||||
from typing import Dict, Any
|
||||
from typing import Any, Dict, List
|
||||
|
||||
from pymc_core.hardware.signal_utils import snr_register_to_db
|
||||
from pymc_core.node.handlers.trace import TraceHandler
|
||||
from pymc_core.protocol.constants import MAX_PATH_SIZE, ROUTE_TYPE_DIRECT
|
||||
from pymc_core.protocol.packet_utils import PathUtils
|
||||
|
||||
logger = logging.getLogger("TraceHelper")
|
||||
|
||||
@@ -20,23 +22,52 @@ logger = logging.getLogger("TraceHelper")
|
||||
class TraceHelper:
|
||||
"""Helper class for processing trace packets in the repeater."""
|
||||
|
||||
def __init__(self, local_hash: int, repeater_handler, packet_injector=None, log_fn=None):
|
||||
def __init__(
|
||||
self,
|
||||
local_hash: int,
|
||||
repeater_handler,
|
||||
packet_injector=None,
|
||||
log_fn=None,
|
||||
local_identity=None,
|
||||
):
|
||||
"""
|
||||
Initialize the trace helper.
|
||||
|
||||
Args:
|
||||
local_hash: The local node's hash identifier
|
||||
local_hash: The local node's 1-byte hash (first byte of pubkey); legacy
|
||||
repeater_handler: The RepeaterHandler instance
|
||||
packet_injector: Callable to inject new packets into the router for sending
|
||||
log_fn: Optional logging function for TraceHandler
|
||||
local_identity: LocalIdentity (or any object with get_public_key()) for
|
||||
multibyte TRACE path matching (Mesh.cpp isHashMatch with 1<<path_sz bytes)
|
||||
"""
|
||||
self.local_hash = local_hash
|
||||
self.local_identity = local_identity
|
||||
self._pubkey_bytes: bytes = b""
|
||||
if local_identity is not None and hasattr(local_identity, "get_public_key"):
|
||||
try:
|
||||
self._pubkey_bytes = bytes(local_identity.get_public_key())
|
||||
except Exception:
|
||||
self._pubkey_bytes = b""
|
||||
self.repeater_handler = repeater_handler
|
||||
self.packet_injector = packet_injector # Function to inject packets into router
|
||||
|
||||
|
||||
# Ping callback system - track pending ping requests by tag
|
||||
self.pending_pings = (
|
||||
{}
|
||||
) # {tag: {'event': asyncio.Event(), 'result': dict, 'target': int, 'sent_at': float}}
|
||||
|
||||
# Optional: when trace reaches final node, call this (packet, parsed_data) to push 0x89 to companions
|
||||
self.on_trace_complete = None # async (packet, parsed_data) -> None
|
||||
|
||||
# Create TraceHandler internally as a parsing utility
|
||||
self.trace_handler = TraceHandler(log_fn=log_fn or logger.info)
|
||||
|
||||
def _pubkey_prefix(self, width: int) -> bytes:
|
||||
if width <= 0 or not self._pubkey_bytes:
|
||||
return b""
|
||||
return self._pubkey_bytes[:width]
|
||||
|
||||
async def process_trace_packet(self, packet) -> None:
|
||||
"""
|
||||
Process an incoming trace packet.
|
||||
@@ -48,29 +79,55 @@ class TraceHelper:
|
||||
packet: The trace packet to process
|
||||
"""
|
||||
try:
|
||||
# Only process direct route trace packets
|
||||
if packet.get_route_type() != ROUTE_TYPE_DIRECT or packet.path_len >= MAX_PATH_SIZE:
|
||||
# Only process direct route trace packets (SNR path uses len(packet.path))
|
||||
if packet.get_route_type() != ROUTE_TYPE_DIRECT or len(packet.path) >= MAX_PATH_SIZE:
|
||||
return
|
||||
|
||||
# Parse the trace payload
|
||||
parsed_data = self.trace_handler._parse_trace_payload(packet.payload)
|
||||
|
||||
if not parsed_data.get("valid", False):
|
||||
logger.warning(
|
||||
f"Invalid trace packet: {parsed_data.get('error', 'Unknown error')}"
|
||||
)
|
||||
logger.warning(f"Invalid trace packet: {parsed_data.get('error', 'Unknown error')}")
|
||||
return
|
||||
|
||||
trace_path = parsed_data["trace_path"]
|
||||
trace_path_len = len(trace_path)
|
||||
trace_bytes: bytes = parsed_data.get("trace_path_bytes") or b""
|
||||
flags = parsed_data.get("flags", 0)
|
||||
hash_width = PathUtils.trace_payload_hash_width(flags)
|
||||
trace_hops: List[bytes] = parsed_data.get("trace_hops") or []
|
||||
num_hops = len(trace_hops)
|
||||
legacy_trace_path = parsed_data.get("trace_path") or []
|
||||
|
||||
# Check if this is a response to one of our pings
|
||||
trace_tag = parsed_data.get("tag")
|
||||
if trace_tag in self.pending_pings:
|
||||
rssi_val = getattr(packet, "rssi", 0)
|
||||
if rssi_val == 0:
|
||||
logger.warning(
|
||||
f"Ignoring trace response for tag {trace_tag} "
|
||||
"with RSSI=0 (no signal data)"
|
||||
)
|
||||
return # wait for a valid response or let timeout handle it
|
||||
ping_info = self.pending_pings[trace_tag]
|
||||
# Store response data (legacy path list + structured hops)
|
||||
ping_info["result"] = {
|
||||
"path": legacy_trace_path,
|
||||
"trace_hops": trace_hops,
|
||||
"trace_path_bytes": trace_bytes,
|
||||
"snr": packet.get_snr(),
|
||||
"rssi": rssi_val,
|
||||
"received_at": time.time(),
|
||||
}
|
||||
# Signal the waiting coroutine
|
||||
ping_info["event"].set()
|
||||
logger.info(f"Ping response received for tag {trace_tag}")
|
||||
|
||||
# Record the trace packet for dashboard/statistics
|
||||
if self.repeater_handler:
|
||||
packet_record = self._create_trace_record(packet, trace_path, parsed_data)
|
||||
packet_record = self._create_trace_record(packet, parsed_data)
|
||||
self.repeater_handler.log_trace_record(packet_record)
|
||||
|
||||
# Extract and log path SNRs and hashes
|
||||
path_snrs, path_hashes = self._extract_path_info(packet, trace_path)
|
||||
path_snrs, path_hashes = self._extract_path_info(packet, parsed_data)
|
||||
|
||||
# Add packet metadata for logging
|
||||
parsed_data["snr"] = packet.get_snr()
|
||||
@@ -80,68 +137,104 @@ class TraceHelper:
|
||||
logger.info(f"{formatted_response}")
|
||||
logger.info(f"Path SNRs: [{', '.join(path_snrs)}], Hashes: [{', '.join(path_hashes)}]")
|
||||
|
||||
# Check if we should forward this trace packet
|
||||
should_forward = self._should_forward_trace(packet, trace_path, trace_path_len)
|
||||
should_forward = self._should_forward_trace(packet, trace_bytes, flags, hash_width)
|
||||
|
||||
if should_forward:
|
||||
await self._forward_trace_packet(packet, trace_path_len)
|
||||
await self._forward_trace_packet(packet, num_hops)
|
||||
else:
|
||||
# This is the final destination or can't forward - just log and record
|
||||
self._log_no_forward_reason(packet, trace_path, trace_path_len)
|
||||
self._log_no_forward_reason(packet, trace_bytes, hash_width)
|
||||
if (
|
||||
self.on_trace_complete
|
||||
and self._is_trace_complete(packet, trace_bytes, hash_width)
|
||||
and self.repeater_handler
|
||||
and not self.repeater_handler.is_duplicate(packet)
|
||||
):
|
||||
try:
|
||||
await self.on_trace_complete(packet, parsed_data)
|
||||
except Exception as e:
|
||||
logger.debug("on_trace_complete error: %s", e)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing trace packet: {e}")
|
||||
|
||||
def _create_trace_record(self, packet, trace_path: list, parsed_data: dict) -> Dict[str, Any]:
|
||||
def _is_trace_complete(self, packet, trace_bytes: bytes, hash_width: int) -> bool:
|
||||
"""Mirror Mesh.cpp: offset = path_len<<path_sz >= len(trace hash bytes)."""
|
||||
if not trace_bytes or hash_width <= 0:
|
||||
return False
|
||||
snr_count = len(packet.path)
|
||||
return snr_count * hash_width >= len(trace_bytes)
|
||||
|
||||
def _create_trace_record(self, packet, parsed_data: dict) -> Dict[str, Any]:
|
||||
"""
|
||||
Create a packet record for trace packets to log to statistics.
|
||||
|
||||
Args:
|
||||
packet: The trace packet
|
||||
trace_path: The parsed trace path from the payload
|
||||
parsed_data: The parsed trace data
|
||||
parsed_data: Full parse result from TraceHandler
|
||||
|
||||
Returns:
|
||||
A dictionary containing the packet record
|
||||
"""
|
||||
# Format trace path for display
|
||||
trace_path_bytes = [f"{h:02X}" for h in trace_path[:8]]
|
||||
if len(trace_path) > 8:
|
||||
trace_hops: List[bytes] = parsed_data.get("trace_hops") or []
|
||||
legacy = parsed_data.get("trace_path") or []
|
||||
|
||||
trace_path_bytes = [h.hex().upper() for h in trace_hops[:8]]
|
||||
if len(trace_hops) > 8:
|
||||
trace_path_bytes.append("...")
|
||||
path_hash = "[" + ", ".join(trace_path_bytes) + "]"
|
||||
|
||||
# Extract SNR information from the path
|
||||
# Extract SNR information from the path (one SNR byte per hop along trace)
|
||||
path_snrs = []
|
||||
path_snr_details = []
|
||||
for i in range(packet.path_len):
|
||||
if i < len(packet.path):
|
||||
snr_val = packet.path[i]
|
||||
snr_db = snr_register_to_db(snr_val)
|
||||
path_snrs.append(f"{snr_val}({snr_db:.1f}dB)")
|
||||
for i in range(len(packet.path)):
|
||||
snr_val = packet.path[i]
|
||||
snr_db = snr_register_to_db(snr_val)
|
||||
path_snrs.append(f"{snr_val}({snr_db:.1f}dB)")
|
||||
|
||||
# Add detailed SNR info if we have the corresponding hash
|
||||
if i < len(trace_path):
|
||||
path_snr_details.append({
|
||||
"hash": f"{trace_path[i]:02X}",
|
||||
if i < len(trace_hops):
|
||||
path_snr_details.append(
|
||||
{
|
||||
"hash": trace_hops[i].hex().upper(),
|
||||
"snr_raw": snr_val,
|
||||
"snr_db": snr_db
|
||||
})
|
||||
"snr_db": snr_db,
|
||||
}
|
||||
)
|
||||
elif i < len(legacy):
|
||||
path_snr_details.append(
|
||||
{
|
||||
"hash": f"{legacy[i]:02X}",
|
||||
"snr_raw": snr_val,
|
||||
"snr_db": snr_db,
|
||||
}
|
||||
)
|
||||
|
||||
return {
|
||||
"timestamp": time.time(),
|
||||
"header": f"0x{packet.header:02X}" if hasattr(packet, "header") and packet.header is not None else None,
|
||||
"payload": packet.payload.hex() if hasattr(packet, "payload") and packet.payload else None,
|
||||
"payload_length": len(packet.payload) if hasattr(packet, "payload") and packet.payload else 0,
|
||||
"header": (
|
||||
f"0x{packet.header:02X}"
|
||||
if hasattr(packet, "header") and packet.header is not None
|
||||
else None
|
||||
),
|
||||
"payload": (
|
||||
packet.payload.hex() if hasattr(packet, "payload") and packet.payload else None
|
||||
),
|
||||
"payload_length": (
|
||||
len(packet.payload) if hasattr(packet, "payload") and packet.payload else 0
|
||||
),
|
||||
"type": packet.get_payload_type(), # 0x09 for trace
|
||||
"route": packet.get_route_type(), # Should be direct (1)
|
||||
"route": packet.get_route_type(), # Should be direct (1)
|
||||
"length": len(packet.payload or b""),
|
||||
"rssi": getattr(packet, "rssi", 0),
|
||||
"snr": getattr(packet, "snr", 0.0),
|
||||
"score": self.repeater_handler.calculate_packet_score(
|
||||
getattr(packet, "snr", 0.0),
|
||||
len(packet.payload or b""),
|
||||
self.repeater_handler.radio_config.get("spreading_factor", 8)
|
||||
) if self.repeater_handler else 0.0,
|
||||
"score": (
|
||||
self.repeater_handler.calculate_packet_score(
|
||||
getattr(packet, "snr", 0.0),
|
||||
len(packet.payload or b""),
|
||||
self.repeater_handler.radio_config.get("spreading_factor", 8),
|
||||
)
|
||||
if self.repeater_handler
|
||||
else 0.0
|
||||
),
|
||||
"tx_delay_ms": 0,
|
||||
"transmitted": False,
|
||||
"is_duplicate": False,
|
||||
@@ -150,69 +243,77 @@ class TraceHelper:
|
||||
"path_hash": path_hash,
|
||||
"src_hash": None,
|
||||
"dst_hash": None,
|
||||
"original_path": [f"{h:02X}" for h in trace_path],
|
||||
"original_path": [h.hex() for h in trace_hops],
|
||||
"forwarded_path": None,
|
||||
# Add trace-specific SNR path information
|
||||
"path_snrs": path_snrs, # ["58(14.5dB)", "19(4.8dB)"]
|
||||
"path_snr_details": path_snr_details, # [{"hash": "29", "snr_raw": 58, "snr_db": 14.5}]
|
||||
"path_snr_details": path_snr_details,
|
||||
"is_trace": True,
|
||||
"raw_packet": packet.write_to().hex() if hasattr(packet, "write_to") else None,
|
||||
}
|
||||
|
||||
def _extract_path_info(self, packet, trace_path: list) -> tuple:
|
||||
def _extract_path_info(self, packet, parsed_data: dict) -> tuple:
|
||||
"""
|
||||
Extract SNR and hash information from the packet path.
|
||||
|
||||
Args:
|
||||
packet: The trace packet
|
||||
trace_path: The parsed trace path from the payload
|
||||
|
||||
Returns:
|
||||
A tuple of (path_snrs, path_hashes) lists
|
||||
A tuple of (path_snrs, path_hashes) display lists
|
||||
"""
|
||||
trace_hops: List[bytes] = parsed_data.get("trace_hops") or []
|
||||
path_snrs = []
|
||||
path_hashes = []
|
||||
|
||||
for i in range(packet.path_len):
|
||||
for i in range(len(packet.path)):
|
||||
if i < len(packet.path):
|
||||
snr_val = packet.path[i]
|
||||
snr_db = snr_register_to_db(snr_val)
|
||||
path_snrs.append(f"{snr_val}({snr_db:.1f}dB)")
|
||||
|
||||
if i < len(trace_path):
|
||||
path_hashes.append(f"0x{trace_path[i]:02x}")
|
||||
if i < len(trace_hops):
|
||||
path_hashes.append(f"0x{trace_hops[i].hex()}")
|
||||
|
||||
return path_snrs, path_hashes
|
||||
|
||||
def _should_forward_trace(self, packet, trace_path: list, trace_path_len: int) -> bool:
|
||||
def _should_forward_trace(
|
||||
self, packet, trace_bytes: bytes, flags: int, hash_width: int
|
||||
) -> bool:
|
||||
"""
|
||||
Determine if this node should forward the trace packet.
|
||||
Uses the same logic as the original working implementation.
|
||||
|
||||
Args:
|
||||
packet: The trace packet
|
||||
trace_path: The parsed trace path from the payload
|
||||
trace_path_len: The length of the trace path
|
||||
|
||||
Returns:
|
||||
True if the packet should be forwarded, False otherwise
|
||||
Mesh.cpp TRACE branch: forward if offset < len and next hash matches identity.
|
||||
offset = pkt->path_len<<path_sz uses SNR count in packet.path (len(packet.path)).
|
||||
"""
|
||||
# Use the exact logic from the original working code
|
||||
return (packet.path_len < trace_path_len and
|
||||
len(trace_path) > packet.path_len and
|
||||
trace_path[packet.path_len] == self.local_hash and
|
||||
self.repeater_handler and not self.repeater_handler.is_duplicate(packet))
|
||||
if not trace_bytes or hash_width <= 0:
|
||||
return False
|
||||
snr_count = len(packet.path)
|
||||
byte_off = snr_count * hash_width
|
||||
if byte_off >= len(trace_bytes):
|
||||
return False
|
||||
|
||||
async def _forward_trace_packet(self, packet, trace_path_len: int) -> None:
|
||||
next_hop = trace_bytes[byte_off : byte_off + hash_width]
|
||||
if len(next_hop) != hash_width:
|
||||
return False
|
||||
|
||||
pubkey_pfx = self._pubkey_prefix(hash_width)
|
||||
if len(pubkey_pfx) >= hash_width:
|
||||
match = next_hop == pubkey_pfx[:hash_width]
|
||||
else:
|
||||
match = hash_width == 1 and next_hop[0] == (self.local_hash & 0xFF)
|
||||
|
||||
if not match:
|
||||
return False
|
||||
if not self.repeater_handler:
|
||||
return False
|
||||
return not self.repeater_handler.is_duplicate(packet)
|
||||
|
||||
async def _forward_trace_packet(self, packet, num_hops: int) -> None:
|
||||
"""
|
||||
Forward a trace packet by appending SNR and sending via injection.
|
||||
|
||||
|
||||
Args:
|
||||
packet: The trace packet to forward
|
||||
trace_path_len: The length of the trace path
|
||||
num_hops: Total hops in trace path (for logging)
|
||||
"""
|
||||
# Update the packet record to show it will be transmitted
|
||||
if self.repeater_handler and hasattr(self.repeater_handler, 'recent_packets'):
|
||||
if self.repeater_handler and hasattr(self.repeater_handler, "recent_packets"):
|
||||
packet_hash = packet.calculate_packet_hash().hex().upper()[:16]
|
||||
for record in reversed(self.repeater_handler.recent_packets):
|
||||
if record.get("packet_hash") == packet_hash:
|
||||
@@ -242,7 +343,8 @@ class TraceHelper:
|
||||
packet.path_len += 1
|
||||
|
||||
logger.info(
|
||||
f"Forwarding trace, stored SNR {current_snr:.1f}dB at position {packet.path_len - 1}"
|
||||
f"Forwarding trace ({num_hops} hop path), stored SNR {current_snr:.1f}dB "
|
||||
f"at SNR index {packet.path_len - 1}"
|
||||
)
|
||||
|
||||
# Inject packet into router for proper routing and transmission
|
||||
@@ -251,21 +353,69 @@ class TraceHelper:
        else:
            logger.warning("No packet injector available - trace packet not forwarded")

    def _log_no_forward_reason(self, packet, trace_bytes: bytes, hash_width: int) -> None:
        """Log the reason why this node did not forward the trace."""
        if self.repeater_handler and self.repeater_handler.is_duplicate(packet):
            logger.info("Duplicate packet, ignoring")
            return

        snr_count = len(packet.path)
        if not trace_bytes or hash_width <= 0:
            logger.info("Trace: empty path or invalid hash width")
            return

        if snr_count * hash_width >= len(trace_bytes):
            logger.info("Trace completed (reached end of path)")
            return

        byte_off = snr_count * hash_width
        next_hop = trace_bytes[byte_off : byte_off + hash_width]
        pubkey_pfx = self._pubkey_prefix(hash_width)
        if len(next_hop) == hash_width and len(pubkey_pfx) >= hash_width:
            if next_hop != pubkey_pfx[:hash_width]:
                logger.info(f"Not our turn (next hop: 0x{next_hop.hex()})")
                return
        elif hash_width == 1 and next_hop:
            if (next_hop[0] & 0xFF) != (self.local_hash & 0xFF):
                logger.info(f"Not our turn (next hop: 0x{next_hop.hex()})")
                return

        logger.info("Trace: not forwarded (internal)")

    def register_ping(self, tag: int, target_hash: int) -> asyncio.Event:
        """Register a ping request and return an event to wait on.

        Args:
            tag: The unique trace tag for this ping
            target_hash: The hash of the target node

        Returns:
            asyncio.Event that will be set when response is received
        """
        event = asyncio.Event()
        self.pending_pings[tag] = {
            "event": event,
            "result": None,
            "target": target_hash,
            "sent_at": time.time(),
        }
        logger.debug(f"Registered ping with tag {tag} for target 0x{target_hash:02x}")
        return event

    def cleanup_stale_pings(self, max_age_seconds: int = 30):
        """Remove pending pings older than max_age_seconds.

        Args:
            max_age_seconds: Maximum age in seconds before a ping is considered stale
        """
        current_time = time.time()
        stale_tags = [
            tag
            for tag, info in self.pending_pings.items()
            if current_time - info["sent_at"] > max_age_seconds
        ]
        for tag in stale_tags:
            self.pending_pings.pop(tag)
            logger.debug(f"Cleaned up stale ping with tag {tag}")
        if stale_tags:
            logger.info(f"Cleaned up {len(stale_tags)} stale ping(s)")
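The next-hop check above indexes the trace payload by how many SNR bytes have already been appended (one per hop, times the hash width). A standalone sketch of that offset arithmetic, using made-up byte values rather than real trace payloads:

```python
def next_hop_bytes(trace_bytes: bytes, snr_count: int, hash_width: int) -> bytes:
    """Return the hash of the node expected to forward next, or b"" at end of path."""
    if not trace_bytes or hash_width <= 0:
        return b""
    byte_off = snr_count * hash_width
    if byte_off >= len(trace_bytes):
        return b""  # trace completed: every listed hop already appended its SNR
    return trace_bytes[byte_off : byte_off + hash_width]


# Two hops already recorded (snr_count=2) with 1-byte hashes:
path = bytes([0x3A, 0x7F, 0xC2])
assert next_hop_bytes(path, 2, 1) == bytes([0xC2])  # third hop is next
assert next_hop_bytes(path, 3, 1) == b""            # path exhausted
```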
@@ -0,0 +1,67 @@
import logging
from typing import Any, Dict, Optional, Tuple

logger = logging.getLogger("IdentityManager")


class IdentityManager:

    def __init__(self, config: dict):
        self.config = config
        self.identities: Dict[int, Tuple[Any, dict, str]] = {}
        self.named_identities: Dict[str, Tuple[Any, dict, str]] = {}
        self.registered_hashes: Dict[int, str] = {}

    def register_identity(self, name: str, identity, config: dict, identity_type: str):
        hash_byte = identity.get_public_key()[0]

        if hash_byte in self.identities:
            existing_name = self.registered_hashes.get(hash_byte, "unknown")
            logger.error(
                f"Hash collision! Identity '{name}' (hash=0x{hash_byte:02X}) "
                f"conflicts with existing identity '{existing_name}'"
            )
            return False

        self.identities[hash_byte] = (identity, config, identity_type)
        self.named_identities[name] = (identity, config, identity_type)
        self.registered_hashes[hash_byte] = f"{identity_type}:{name}"

        logger.info(
            f"Identity registered: name={name}, hash=0x{hash_byte:02X}, type={identity_type}"
        )
        return True

    def get_identity_by_hash(self, hash_byte: int) -> Optional[Tuple[Any, dict, str]]:
        return self.identities.get(hash_byte)

    def get_identity_by_name(self, name: str) -> Optional[Tuple[Any, dict, str]]:
        return self.named_identities.get(name)

    def has_identity(self, hash_byte: int) -> bool:
        return hash_byte in self.identities

    def list_identities(self) -> list:
        identities = []
        for hash_byte, (identity, config, id_type) in self.identities.items():
            name = self.registered_hashes.get(hash_byte, "unknown")
            identities.append(
                {
                    "hash": f"0x{hash_byte:02X}",
                    "name": name,
                    "type": id_type,
                    "address": identity.get_address_bytes().hex() if identity else "N/A",
                    "public_key": identity.get_public_key().hex() if identity else None,
                }
            )
        return identities

    def has_identity_type(self, identity_type: str) -> bool:
        return any(id_type == identity_type for _, _, id_type in self.identities.values())

    def get_identities_by_type(self, identity_type: str) -> list:
        results = []
        for name, (identity, config, id_type) in self.named_identities.items():
            if id_type == identity_type:
                results.append((name, identity, config))
        return results
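`register_identity` keys collisions on the first public-key byte (the 1-byte node hash). A self-contained sketch of just that rule, with a bare dict standing in for the full manager:

```python
class HashRegistry:
    """Mirror of IdentityManager's collision rule: first pubkey byte is the node hash."""

    def __init__(self):
        self.by_hash: dict[int, str] = {}

    def register(self, name: str, public_key: bytes) -> bool:
        hash_byte = public_key[0]
        if hash_byte in self.by_hash:
            return False  # collision with an existing identity
        self.by_hash[hash_byte] = name
        return True


reg = HashRegistry()
assert reg.register("repeater", b"\x42" + b"\x00" * 31)
assert not reg.register("room", b"\x42" + b"\x11" * 31)  # same 0x42 hash byte collides
assert reg.register("chat", b"\x7f" + b"\x00" * 31)
```

With a 1-byte hash there are only 256 slots, which is why the manager logs collisions loudly instead of silently replacing an identity.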
@@ -0,0 +1,69 @@
"""
MeshCore-compatible Ed25519 vanity key generator.

Generates Ed25519 keys whose public key hex starts with a user-chosen prefix.
Algorithm matches MeshCore's custom scalar clamping (see meshcore-keygen).

Requires: PyNaCl (pip install PyNaCl)
"""

import hashlib
import secrets
from typing import Optional, Tuple

from nacl.bindings import crypto_scalarmult_ed25519_base_noclamp


def generate_meshcore_keypair() -> Tuple[bytes, bytes]:
    """Generate a MeshCore-compatible Ed25519 keypair.

    Returns:
        (public_key, private_key) as raw bytes.
        public_key is 32 bytes, private_key is 64 bytes.
    """
    # 1. Random 32-byte seed
    seed = secrets.token_bytes(32)

    # 2. SHA-512 hash
    digest = hashlib.sha512(seed).digest()

    # 3. Ed25519 scalar clamping on first 32 bytes
    clamped = bytearray(digest[:32])
    clamped[0] &= 248  # Clear bottom 3 bits
    clamped[31] &= 63  # Clear top 2 bits
    clamped[31] |= 64  # Set bit 6

    # 4. Derive public key
    public_key = crypto_scalarmult_ed25519_base_noclamp(bytes(clamped))

    # 5. Private key = [clamped_scalar][sha512_upper_half]
    private_key = bytes(clamped) + digest[32:64]

    return public_key, private_key


def generate_vanity_key(
    prefix: str,
    max_iterations: int = 5_000_000,
) -> Optional[dict]:
    """Generate a MeshCore keypair whose public key hex starts with *prefix*.

    Args:
        prefix: Hex prefix (1-4 chars, case-insensitive).
        max_iterations: Safety cap to avoid infinite loops.

    Returns:
        Dict with public_hex, private_hex, attempts on success; None if cap hit.
    """
    target = prefix.upper()

    for attempt in range(1, max_iterations + 1):
        pub, priv = generate_meshcore_keypair()
        if pub.hex().upper().startswith(target):
            return {
                "public_hex": pub.hex(),
                "private_hex": priv.hex(),
                "attempts": attempt,
            }

    return None
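The clamping step can be checked in isolation: after clamping, the scalar's low three bits are zero, the top bit of the last byte is clear, and bit 6 of the last byte is set. A pure-stdlib sketch of just that step (the public-key derivation itself still needs PyNaCl):

```python
import hashlib
import secrets


def clamp_scalar(seed: bytes) -> bytes:
    """Ed25519-style clamping of the lower 32 bytes of SHA-512(seed)."""
    digest = hashlib.sha512(seed).digest()
    clamped = bytearray(digest[:32])
    clamped[0] &= 248   # clear bottom 3 bits (cofactor)
    clamped[31] &= 63   # clear top 2 bits of last byte
    clamped[31] |= 64   # set bit 6 of last byte
    return bytes(clamped)


s = clamp_scalar(secrets.token_bytes(32))
assert len(s) == 32
assert s[0] & 0b0000_0111 == 0   # low 3 bits cleared
assert s[31] & 0b1000_0000 == 0  # top bit clear
assert s[31] & 0b0100_0000 != 0  # bit 6 set
```

These invariants hold for any seed, which is what makes the vanity search above a simple generate-and-test loop: every candidate is already a valid scalar.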
@@ -0,0 +1,146 @@
"""
CLI client for pyMC Repeater.
Connects to an already-running repeater daemon via its HTTP API.
Reads admin password and HTTP port from the local config.yaml automatically.
"""

import sys


CONFIG_PATHS = [
    "/etc/pymc_repeater/config.yaml",
    "config.yaml",
]


def _load_config(config_path=None):
    """Load repeater config.yaml, trying common paths."""
    import yaml
    from pathlib import Path

    paths = [config_path] if config_path else CONFIG_PATHS
    for p in paths:
        path = Path(p)
        if path.is_file():
            with open(path) as f:
                return yaml.safe_load(f) or {}
    return {}


def run_client_cli(host: str = "127.0.0.1", port: int = 8000, password: str = ""):
    """
    Standalone CLI client that connects to a running repeater's HTTP API.
    """
    import urllib.request
    import urllib.error
    import json

    base_url = f"http://{host}:{port}"

    # Authenticate to get JWT token
    token = None
    if password:
        try:
            auth_data = json.dumps({
                "username": "admin",
                "password": password,
                "client_id": "pymc-cli",
            }).encode()
            req = urllib.request.Request(
                f"{base_url}/auth/login",
                data=auth_data,
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(req, timeout=5) as resp:
                result = json.loads(resp.read())
                token = result.get("token") or result.get("data", {}).get("token")
        except urllib.error.URLError as e:
            print(f"Error: Cannot connect to repeater at {base_url} — {e.reason}")
            sys.exit(1)
        except Exception as e:
            print(f"Authentication failed: {e}")
            sys.exit(1)

    if not token:
        print("Error: Authentication failed. Check password or repeater status.")
        sys.exit(1)

    print(f"\npyMC Repeater CLI (connected to {base_url})")
    print("Type 'help' for available commands, 'exit' to quit.\n")

    while True:
        try:
            command = input(">> ").strip()
        except (EOFError, KeyboardInterrupt):
            print()
            break

        if not command:
            continue
        if command in ("exit", "quit"):
            break

        try:
            payload = json.dumps({"command": command}).encode()
            req = urllib.request.Request(
                f"{base_url}/api/cli",
                data=payload,
                headers={
                    "Content-Type": "application/json",
                    "Authorization": f"Bearer {token}",
                },
                method="POST",
            )
            with urllib.request.urlopen(req, timeout=10) as resp:
                result = json.loads(resp.read())
                if result.get("success"):
                    print(result["data"]["reply"])
                else:
                    print(f"Error: {result.get('error', 'Unknown error')}")
        except urllib.error.URLError as e:
            print(f"Connection error: {e.reason}")
        except Exception as e:
            print(f"Error: {e}")


def main():
    """Entry point for pymc-cli command."""
    import argparse

    parser = argparse.ArgumentParser(
        description="Connect to a running pyMC Repeater and issue CLI commands"
    )
    parser.add_argument(
        "--config", default=None,
        help="Path to config.yaml (auto-detected if not set)",
    )
    parser.add_argument(
        "--host", default=None,
        help="Repeater HTTP host (default: 127.0.0.1)",
    )
    parser.add_argument(
        "--port", type=int, default=None,
        help="Repeater HTTP port (default: from config or 8000)",
    )
    args = parser.parse_args()

    # Load config to get password and port automatically
    config = _load_config(args.config)
    repeater_cfg = config.get("repeater", {})
    security_cfg = repeater_cfg.get("security", {})
    password = security_cfg.get("admin_password", "")

    if not password:
        print("Error: No admin_password found in config.yaml.")
        print("Searched: " + ", ".join(CONFIG_PATHS))
        sys.exit(1)

    host = args.host or "127.0.0.1"
    port = args.port or config.get("http", {}).get("port", 8000)

    run_client_cli(host=host, port=port, password=password)


if __name__ == "__main__":
    main()
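The login round-trip can be sanity-checked without a running daemon by building the request object and inspecting it before it is ever sent. A stdlib-only sketch; the endpoint and field names are copied from `run_client_cli` above, while `build_login_request` itself is a hypothetical helper for illustration:

```python
import json
import urllib.request


def build_login_request(base_url: str, password: str) -> urllib.request.Request:
    """Construct (but do not send) the POST that run_client_cli uses to authenticate."""
    auth_data = json.dumps({
        "username": "admin",
        "password": password,
        "client_id": "pymc-cli",
    }).encode()
    return urllib.request.Request(
        f"{base_url}/auth/login",
        data=auth_data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_login_request("http://127.0.0.1:8000", "secret")
assert req.get_method() == "POST"
assert req.full_url.endswith("/auth/login")
assert json.loads(req.data)["username"] == "admin"
```

Since `urllib.request.Request` only describes the request, nothing touches the network until `urlopen` is called, which makes this pattern handy for unit-testing the client.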
+999 -69 (file diff suppressed because it is too large)
+381 -55
@@ -1,41 +1,82 @@
"""
Packet router for pyMC Repeater.

This module provides a simple router that routes packets to appropriate handlers
based on payload type. All statistics, queuing, and processing logic is handled
by the repeater engine for better separation of concerns.
"""

import asyncio
import logging
import time

from pymc_core.node.handlers.ack import AckHandler
from pymc_core.node.handlers.advert import AdvertHandler
from pymc_core.node.handlers.control import ControlHandler
from pymc_core.node.handlers.group_text import GroupTextHandler
from pymc_core.node.handlers.login_response import LoginResponseHandler
from pymc_core.node.handlers.login_server import LoginServerHandler
from pymc_core.node.handlers.path import PathHandler
from pymc_core.node.handlers.protocol_request import ProtocolRequestHandler
from pymc_core.node.handlers.protocol_response import ProtocolResponseHandler
from pymc_core.node.handlers.text import TextMessageHandler
from pymc_core.node.handlers.trace import TraceHandler
from pymc_core.protocol.constants import (
    PH_ROUTE_MASK,
    ROUTE_TYPE_DIRECT,
    ROUTE_TYPE_TRANSPORT_DIRECT,
)

logger = logging.getLogger("PacketRouter")

# Deliver PATH and protocol-response (PATH) to companion at most once per logical packet
# so the client is not spammed with duplicate telemetry when the mesh delivers multiple copies.
_COMPANION_DEDUPE_TTL_SEC = 60.0


def _companion_dedup_key(packet) -> str | None:
    """Return a stable key for companion delivery deduplication, or None if not available."""
    try:
        return packet.calculate_packet_hash().hex().upper()
    except Exception:
        return None


def _is_direct_final_hop(packet) -> bool:
    """True if packet is DIRECT (or TRANSPORT_DIRECT) with empty path — we're the final destination."""
    route = getattr(packet, "header", 0) & PH_ROUTE_MASK
    if route != ROUTE_TYPE_DIRECT and route != ROUTE_TYPE_TRANSPORT_DIRECT:
        return False
    path = getattr(packet, "path", None)
    return not path or len(path) == 0

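The final-hop test masks the route bits out of the header and then checks for an exhausted path. A standalone sketch of that logic — the constant values below are hypothetical placeholders for illustration only; the real ones live in `pymc_core.protocol.constants`:

```python
# Hypothetical constant values, NOT the real pymc_core definitions:
PH_ROUTE_MASK = 0x03
ROUTE_TYPE_DIRECT = 0x01
ROUTE_TYPE_TRANSPORT_DIRECT = 0x03


def is_direct_final_hop(header: int, path: bytes) -> bool:
    """We are the final destination iff the route is direct and no hops remain."""
    route = header & PH_ROUTE_MASK
    if route not in (ROUTE_TYPE_DIRECT, ROUTE_TYPE_TRANSPORT_DIRECT):
        return False
    return len(path) == 0


assert is_direct_final_hop(0x01, b"")          # DIRECT, path exhausted
assert not is_direct_final_hop(0x01, b"\x42")  # DIRECT but hops remain
assert not is_direct_final_hop(0x00, b"")      # not a direct route type
```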
class PacketRouter:
    """
    Simple router that processes packets through handlers sequentially.
    All statistics and processing decisions are handled by the engine.
    """

    def __init__(self, daemon_instance):
        self.daemon = daemon_instance
        self.queue = asyncio.Queue(maxsize=500)
        self.running = False
        self.router_task = None

        # Serialize injects so one local TX completes before the next is processed
        self._inject_lock = asyncio.Lock()
        # Hash -> expiry time; skip delivering same PATH/protocol-response to companions more than once
        self._companion_delivered = {}
        # Safety valve: cap the number of _route_packet tasks sleeping concurrently.
        # LoRa's airtime budget naturally limits throughput, but burst arrivals
        # (multi-hop amplification, collision retries) can stack many sleeping
        # delay tasks before the duty-cycle gate fires. 30 is very generous for
        # any realistic LoRa network but protects against pathological scenarios
        # (e.g. a busy bridge node during a mesh-wide flood) exhausting memory or
        # starving the event loop.
        self._in_flight: int = 0
        self._max_in_flight: int = 30
        # Live set of in-flight tasks — kept in sync with _in_flight via the
        # done-callback. Used exclusively for shutdown drain; the integer
        # counter is used for the cap check (faster, single source of truth).
        self._route_tasks: set = set()
        # Total packets dropped because the cap was reached. Exposed in logs
        # at shutdown so operators know whether the cap is actually firing.
        self._cap_drop_count: int = 0

    async def start(self):
        """Start the router processing task."""
        self.running = True
        self.router_task = asyncio.create_task(self._process_queue())
        logger.info("Packet router started")

    async def stop(self):
        """Stop the router processing task."""
        self.running = False
        if self.router_task:
            self.router_task.cancel()
@@ -43,89 +84,374 @@ class PacketRouter:
                await self.router_task
            except asyncio.CancelledError:
                pass

        # Drain in-flight tasks gracefully, then cancel any that outlast the
        # timeout. This mirrors what the old _route_tasks set enabled and gives
        # in-progress packets a fair chance to finish (e.g. their TX delay sleep
        # + send) before the process exits.
        if self._route_tasks:
            pending_snapshot = set(self._route_tasks)
            logger.info(
                "Draining %d in-flight route task(s) (5 s timeout)...",
                len(pending_snapshot),
            )
            _, still_pending = await asyncio.wait(pending_snapshot, timeout=5.0)
            if still_pending:
                logger.warning(
                    "Cancelling %d route task(s) that did not finish within the shutdown timeout",
                    len(still_pending),
                )
                for task in still_pending:
                    task.cancel()
                await asyncio.gather(*still_pending, return_exceptions=True)

        if self._cap_drop_count:
            logger.warning(
                "In-flight cap dropped %d packet(s) during this session — "
                "consider raising _max_in_flight if this is frequent",
                self._cap_drop_count,
            )
        logger.info("Packet router stopped")

    def _on_route_done(self, task: asyncio.Task) -> None:
        """Done-callback for _route_packet tasks: decrement counter and surface errors."""
        self._in_flight -= 1
        self._route_tasks.discard(task)
        if not task.cancelled():
            exc = task.exception()
            if exc is not None:
                logger.error("_route_packet raised: %s", exc, exc_info=exc)
    def _should_deliver_path_to_companions(self, packet) -> bool:
        """Return True if this PATH/protocol-response should be delivered to companions (first of duplicates)."""
        key = _companion_dedup_key(packet)
        if not key:
            return True
        now = time.time()
        # Prune expired entries only when the dict grows large, avoiding a full
        # dict comprehension on every packet. 200 entries × 60 s TTL means a
        # sweep only triggers after ~200 unique PATH packets with no expiry — far
        # more than any realistic companion session, and well below the 1000-entry
        # threshold that could accumulate over hours without pruning.
        if len(self._companion_delivered) > 200:
            self._companion_delivered = {
                k: v for k, v in self._companion_delivered.items() if v > now
            }
        if key in self._companion_delivered:
            return False
        self._companion_delivered[key] = now + _COMPANION_DEDUPE_TTL_SEC
        return True

    def _record_for_ui(self, packet, metadata: dict) -> None:
        """Record an injection-only packet for the web UI (storage + recent_packets)."""
        handler = getattr(self.daemon, "repeater_handler", None)
        if handler and getattr(handler, "storage", None):
            try:
                handler.record_packet_only(packet, metadata)
            except Exception as e:
                logger.debug("Record for UI failed: %s", e)
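The first-copy-wins filter with lazy pruning used for companion delivery can be sketched standalone. This mirrors the method above but takes `now` explicitly and lowers the sweep threshold from 200 to 3 so the prune is visible in a few lines:

```python
class TTLDedup:
    """First-copy-wins filter: duplicates within ttl seconds are rejected;
    expired keys are only swept once the dict grows past max_size."""

    def __init__(self, ttl: float = 60.0, max_size: int = 3):
        self.ttl = ttl
        self.max_size = max_size
        self._seen: dict[str, float] = {}  # key -> expiry timestamp

    def first_copy(self, key: str, now: float) -> bool:
        if len(self._seen) > self.max_size:
            # Lazy sweep: drop everything already expired
            self._seen = {k: v for k, v in self._seen.items() if v > now}
        if key in self._seen:
            return False
        self._seen[key] = now + self.ttl
        return True


d = TTLDedup(ttl=60.0, max_size=3)
assert d.first_copy("A", now=0.0)
assert not d.first_copy("A", now=1.0)   # duplicate inside the TTL window
assert d.first_copy("B", now=2.0)
assert d.first_copy("C", now=3.0)
assert d.first_copy("D", now=4.0)       # dict now holds 4 entries
assert d.first_copy("A", now=100.0)     # sweep fires; expired "A" was pruned
```

The sweep-on-growth trade-off keeps the per-packet cost at a dict lookup while still bounding memory, at the price of expired keys lingering until the threshold is crossed.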
    async def enqueue(self, packet):
        """Add packet to router queue."""
        if self.queue.full():
            logger.warning("Packet router queue full (%d), dropping oldest", self.queue.maxsize)
            try:
                self.queue.get_nowait()
            except asyncio.QueueEmpty:
                pass
        await self.queue.put(packet)

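The drop-oldest behaviour of `enqueue` can be exercised with a bare `asyncio.Queue`: when full, the oldest entry is evicted so the newest packet is never lost.

```python
import asyncio


async def enqueue_drop_oldest(queue: asyncio.Queue, item) -> None:
    """When the queue is full, evict the oldest entry instead of blocking."""
    if queue.full():
        try:
            queue.get_nowait()
        except asyncio.QueueEmpty:
            pass
    await queue.put(item)


async def demo():
    q = asyncio.Queue(maxsize=2)
    for pkt in ("p1", "p2", "p3"):
        await enqueue_drop_oldest(q, pkt)
    return [q.get_nowait() for _ in range(q.qsize())]


assert asyncio.run(demo()) == ["p2", "p3"]  # p1 was evicted when p3 arrived
```

This biases the router toward fresh traffic, which suits a repeater: a stale packet that sat through a 500-deep backlog is usually no longer worth forwarding.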
    async def inject_packet(self, packet, wait_for_ack: bool = False):
        """
        Inject a new packet into the system for transmission through the engine.

        This method uses the engine's main packet handler with the local_transmission
        flag to bypass forwarding logic while maintaining proper statistics and airtime.

        Args:
            packet: The packet to send
            wait_for_ack: Whether to wait for acknowledgment

        Returns:
            True if packet was sent successfully, False otherwise
        """
        try:
            metadata = {
                "rssi": getattr(packet, "rssi", 0),
                "snr": getattr(packet, "snr", 0.0),
                "timestamp": getattr(packet, "timestamp", 0),
            }

            # Serialize injects so one local TX completes before the next runs
            # (avoids duty-cycle or dispatcher races where a later packet goes out first)
            async with self._inject_lock:
                # Use local_transmission=True to bypass forwarding logic
                await self.daemon.repeater_handler(
                    packet, metadata, local_transmission=True
                )

            # Mark so when this packet is dequeued we don't pass to engine again (avoid double-send / double-count)
            packet._injected_for_tx = True

            # Enqueue so router can deliver to companion(s): TXT_MSG -> dest bridge, ACK -> all bridges (sender sees ACK)
            await self.enqueue(packet)

            packet_len = len(packet.payload) if packet.payload else 0
            logger.debug(
                f"Injected packet processed by engine as local transmission ({packet_len} bytes)"
            )
            # Log protocol REQ (e.g. status/telemetry) so we can confirm target node
            ptype = getattr(packet, "get_payload_type", lambda: None)()
            if ptype == ProtocolRequestHandler.payload_type() and packet.payload and packet_len >= 1:
                logger.info(
                    "Injected protocol REQ: dest=0x%02x, payload=%d bytes",
                    packet.payload[0],
                    packet_len,
                )
            return True

        except Exception as e:
            logger.error(f"Error injecting packet through engine: {e}")
            return False

    async def _process_queue(self):
        """Process packets through the router queue."""
        while self.running:
            try:
                packet = await asyncio.wait_for(self.queue.get(), timeout=0.1)
                # Drop early if the in-flight cap is reached. This is a last-resort
                # safety valve — under normal operation LoRa airtime and the duty-cycle
                # gate keep _in_flight well below _max_in_flight.
                if self._in_flight >= self._max_in_flight:
                    self._cap_drop_count += 1
                    logger.warning(
                        "In-flight task cap reached (%d/%d), dropping packet "
                        "(session total dropped: %d)",
                        self._in_flight, self._max_in_flight, self._cap_drop_count,
                    )
                    continue
                self._in_flight += 1
                task = asyncio.create_task(self._route_packet(packet))
                self._route_tasks.add(task)
                task.add_done_callback(self._on_route_done)
            except asyncio.TimeoutError:
                continue
            except Exception as e:
                logger.error(f"Router error: {e}", exc_info=True)

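The counter-plus-done-callback pattern in `_process_queue` and `_on_route_done` can be sketched in isolation. `InFlightCap` here is a hypothetical minimal mirror of the router's safety valve, not its actual implementation: once the cap is reached, new work is dropped rather than queued, and the done-callback keeps the counter honest.

```python
import asyncio


class InFlightCap:
    """Drop new work once max_in_flight tasks are still running."""

    def __init__(self, max_in_flight: int = 3):
        self.max_in_flight = max_in_flight
        self.in_flight = 0
        self.dropped = 0

    def submit(self, coro):
        if self.in_flight >= self.max_in_flight:
            self.dropped += 1
            coro.close()  # drop: close the coroutine so it is never awaited
            return None
        self.in_flight += 1
        task = asyncio.create_task(coro)
        # Done-callback decrements whether the task succeeds, fails, or is cancelled.
        task.add_done_callback(lambda _t: setattr(self, "in_flight", self.in_flight - 1))
        return task


async def demo():
    cap = InFlightCap(max_in_flight=3)

    async def work():
        await asyncio.sleep(0.01)

    tasks = [cap.submit(work()) for _ in range(5)]  # last 2 exceed the cap
    await asyncio.gather(*[t for t in tasks if t is not None])
    await asyncio.sleep(0)  # let remaining done-callbacks run
    return cap.dropped, cap.in_flight


assert asyncio.run(demo()) == (2, 0)
```

Dropping instead of queueing is deliberate: the queue already buffers bursts, so anything that overflows the task cap is traffic the node could not have transmitted in time anyway.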
    async def _route_packet(self, packet):
        """
        Route a packet to appropriate handlers based on payload type.

        Simple routing logic:
        1. Route to specific handlers for parsing
        2. Pass to repeater engine for all processing decisions
        """

        payload_type = packet.get_payload_type()
        processed_by_injection = False

        metadata = {
            "rssi": getattr(packet, "rssi", 0),
            "snr": getattr(packet, "snr", 0.0),
            "timestamp": getattr(packet, "timestamp", 0),
        }

        # Route to specific handlers for parsing only
        if payload_type == TraceHandler.payload_type():
            # Locally injected TRACE requests are TX-only and re-enter the router so
            # companion delivery can still happen. They are not inbound RF responses,
            # so skip TraceHelper parsing to avoid matching pending ping tags against
            # zeroed local metadata.
            if getattr(packet, "_injected_for_tx", False):
                processed_by_injection = True
            elif self.daemon.trace_helper:
                await self.daemon.trace_helper.process_trace_packet(packet)
                # Skip engine processing for trace packets - they're handled by trace helper
                processed_by_injection = True
            # Do not call _record_for_ui: TraceHelper.log_trace_record already persists the
            # trace path from the payload. record_packet_only would treat packet.path (SNR bytes)
            # as routing hashes and log bogus duplicate rows.

        elif payload_type == ControlHandler.payload_type():
            # Process control/discovery packet
            if self.daemon.discovery_helper:
                await self.daemon.discovery_helper.control_handler(packet)
                packet.mark_do_not_retransmit()

            # Deliver to companions via daemon (frame servers push PUSH_CODE_CONTROL_DATA 0x8E)
            deliver = getattr(self.daemon, "deliver_control_data", None)
            if deliver:
                snr = getattr(packet, "_snr", None) or getattr(packet, "snr", 0.0)
                rssi = getattr(packet, "_rssi", None) or getattr(packet, "rssi", 0)
                path_len = getattr(packet, "path_len", 0) or 0
                path_bytes = (
                    bytes(getattr(packet, "path", []))
                    if getattr(packet, "path", None) is not None
                    else b""
                )[:path_len]
                payload_bytes = bytes(packet.payload) if packet.payload else b""
                await deliver(snr, rssi, path_len, path_bytes, payload_bytes)

        elif payload_type == AdvertHandler.payload_type():
            # Process advertisement packet for neighbor tracking
            if self.daemon.advert_helper:
                rssi = getattr(packet, "rssi", 0)
                snr = getattr(packet, "snr", 0.0)
                await self.daemon.advert_helper.process_advert_packet(packet, rssi, snr)

            # Also feed adverts to companion bridges (for contact/path updates)
            for bridge in getattr(self.daemon, "companion_bridges", {}).values():
                try:
                    await bridge.process_received_packet(packet)
                except Exception as e:
                    logger.debug(f"Companion bridge advert error: {e}")

        elif payload_type == LoginServerHandler.payload_type():
            # Route to companion if dest is a companion; else to login_helper (for logging into this repeater).
            # When dest is remote (not handled), pass to engine so DIRECT/FLOOD ANON_REQ can be forwarded.
            # Our own injected ANON_REQ is suppressed by the engine's duplicate (mark_seen) check.
            dest_hash = packet.payload[0] if packet.payload else None
            companion_bridges = getattr(self.daemon, "companion_bridges", {})
            if dest_hash is not None and dest_hash in companion_bridges:
                await companion_bridges[dest_hash].process_received_packet(packet)
                processed_by_injection = True
            elif self.daemon.login_helper:
                handled = await self.daemon.login_helper.process_login_packet(packet)
                if handled:
                    processed_by_injection = True
            if processed_by_injection:
                self._record_for_ui(packet, metadata)

        elif payload_type == AckHandler.payload_type():
            # ACK has no dest in payload (4-byte CRC only); deliver to all bridges so sender sees send_confirmed.
            # Do not set processed_by_injection so packet also reaches engine for DIRECT forwarding when we're a middle hop.
            companion_bridges = getattr(self.daemon, "companion_bridges", {})
            for bridge in companion_bridges.values():
                try:
                    await bridge.process_received_packet(packet)
                except Exception as e:
                    logger.debug(f"Companion bridge ACK error: {e}")

        elif payload_type == TextMessageHandler.payload_type():
            dest_hash = packet.payload[0] if packet.payload else None
            companion_bridges = getattr(self.daemon, "companion_bridges", {})
            if dest_hash is not None and dest_hash in companion_bridges:
                await companion_bridges[dest_hash].process_received_packet(packet)
                processed_by_injection = True
                self._record_for_ui(packet, metadata)
            elif self.daemon.text_helper:
                handled = await self.daemon.text_helper.process_text_packet(packet)
                if handled:
                    processed_by_injection = True
                    self._record_for_ui(packet, metadata)

        elif payload_type == PathHandler.payload_type():
            dest_hash = packet.payload[0] if packet.payload else None
            companion_bridges = getattr(self.daemon, "companion_bridges", {})
            if dest_hash is not None and dest_hash in companion_bridges:
                if self._should_deliver_path_to_companions(packet):
                    await companion_bridges[dest_hash].process_received_packet(packet)
                # Do not set processed_by_injection so packet also reaches engine for DIRECT forwarding when we're a middle hop.
            elif companion_bridges and self._should_deliver_path_to_companions(packet):
                # Dest not in bridges: path-return with ephemeral dest (e.g. multi-hop login).
                # Deliver to all bridges; each will try to decrypt and ignore if not relevant.
                for bridge in companion_bridges.values():
                    try:
                        await bridge.process_received_packet(packet)
                    except Exception as e:
                        logger.debug(f"Companion bridge PATH error: {e}")
                logger.debug(
                    "PATH dest=0x%02x (anon) delivered to %d bridge(s) for matching",
                    dest_hash or 0,
                    len(companion_bridges),
                )
                # Do not set processed_by_injection so packet also reaches engine for DIRECT forwarding when we're a middle hop.
            elif self.daemon.path_helper:
                await self.daemon.path_helper.process_path_packet(packet)

        elif payload_type == LoginResponseHandler.payload_type():
            # PAYLOAD_TYPE_RESPONSE (0x01): payload is dest_hash(1)+src_hash(1)+encrypted.
            # Deliver to the bridge that is the destination, or to all bridges when the
            # response is addressed to this repeater (path-based reply: firmware sends
            # to first hop instead of original requester).
            # Do not set processed_by_injection so packet also reaches engine for DIRECT forwarding when we're a middle hop.
            dest_hash = packet.payload[0] if packet.payload and len(packet.payload) >= 1 else None
            companion_bridges = getattr(self.daemon, "companion_bridges", {})
            local_hash = getattr(self.daemon, "local_hash", None)
            if dest_hash is not None and dest_hash in companion_bridges:
                try:
                    await companion_bridges[dest_hash].process_received_packet(packet)
                    logger.info(
                        "RESPONSE dest=0x%02x delivered to companion bridge",
                        dest_hash,
                    )
                except Exception as e:
                    logger.debug(f"Companion bridge RESPONSE error: {e}")
            elif dest_hash == local_hash and companion_bridges:
                # Response addressed to this repeater (e.g. path-based reply to first hop)
                for bridge in companion_bridges.values():
                    try:
                        await bridge.process_received_packet(packet)
                    except Exception as e:
                        logger.debug(f"Companion bridge RESPONSE error: {e}")
                logger.info(
                    "RESPONSE dest=0x%02x (local) delivered to %d companion bridge(s)",
                    dest_hash,
                    len(companion_bridges),
                )
            elif companion_bridges:
                # Dest not in bridges and not local: likely ANON_REQ response (dest = ephemeral
                # sender hash). Deliver to all bridges; each will try to decrypt and ignore if
                # not relevant (firmware-like behavior, works with multiple companion bridges).
                for bridge in companion_bridges.values():
                    try:
                        await bridge.process_received_packet(packet)
                    except Exception as e:
                        logger.debug(f"Companion bridge RESPONSE error: {e}")
                logger.debug(
                    "RESPONSE dest=0x%02x (anon) delivered to %d bridge(s) for matching",
                    dest_hash or 0,
                    len(companion_bridges),
                )
            if companion_bridges and _is_direct_final_hop(packet):
                # DIRECT with empty path: we're the final hop; don't pass to engine (it would drop with "Direct: no path")
                processed_by_injection = True
                self._record_for_ui(packet, metadata)

        elif payload_type == ProtocolResponseHandler.payload_type():
            # PAYLOAD_TYPE_PATH (0x08): protocol responses (telemetry, binary, etc.).
            # Deliver at most once per logical packet so the client is not spammed with duplicates.
            # Do not set processed_by_injection so packet also reaches engine for DIRECT forwarding when we're a middle hop.
            companion_bridges = getattr(self.daemon, "companion_bridges", {})
            if companion_bridges and self._should_deliver_path_to_companions(packet):
                for bridge in companion_bridges.values():
                    try:
                        await bridge.process_received_packet(packet)
                    except Exception as e:
                        logger.debug(f"Companion bridge RESPONSE error: {e}")
            if companion_bridges and _is_direct_final_hop(packet):
                # DIRECT with empty path: we're the final hop; ensure delivery to all bridges (anon)
                if not self._should_deliver_path_to_companions(packet):
                    for bridge in companion_bridges.values():
                        try:
                            await bridge.process_received_packet(packet)
                        except Exception as e:
                            logger.debug(f"Companion bridge RESPONSE (final hop) error: {e}")
                processed_by_injection = True
                self._record_for_ui(packet, metadata)

elif payload_type == ProtocolRequestHandler.payload_type():
|
||||
dest_hash = packet.payload[0] if packet.payload else None
|
||||
companion_bridges = getattr(self.daemon, "companion_bridges", {})
|
||||
if dest_hash is not None and dest_hash in companion_bridges:
|
||||
await companion_bridges[dest_hash].process_received_packet(packet)
|
||||
processed_by_injection = True
|
||||
self._record_for_ui(packet, metadata)
|
||||
elif self.daemon.protocol_request_helper:
|
||||
handled = await self.daemon.protocol_request_helper.process_request_packet(packet)
|
||||
if handled:
|
||||
processed_by_injection = True
|
||||
self._record_for_ui(packet, metadata)
|
||||
elif companion_bridges and _is_direct_final_hop(packet):
|
||||
# DIRECT with empty path: we're the final hop; deliver to all bridges for anon matching
|
||||
for bridge in companion_bridges.values():
|
||||
try:
|
||||
await bridge.process_received_packet(packet)
|
||||
except Exception as e:
|
||||
logger.debug(f"Companion bridge REQ (final hop) error: {e}")
|
||||
processed_by_injection = True
|
||||
self._record_for_ui(packet, metadata)
|
||||
|
||||
elif payload_type == GroupTextHandler.payload_type():
|
||||
# GRP_TXT: pass to all companions (they filter by channel); still forward
|
||||
companion_bridges = getattr(self.daemon, "companion_bridges", {})
|
||||
for bridge in companion_bridges.values():
|
||||
try:
|
||||
await bridge.process_received_packet(packet)
|
||||
except Exception as e:
|
||||
logger.debug(f"Companion bridge GRP_TXT error: {e}")
|
||||
|
||||
# Only pass to repeater engine if not already processed by injection
|
||||
# Skip engine for packets we injected for TX (already sent; avoid double-send/double-count)
|
||||
if getattr(packet, "_injected_for_tx", False):
|
||||
processed_by_injection = True
|
||||
if self.daemon.repeater_handler and not processed_by_injection:
|
||||
metadata = {
|
||||
"rssi": getattr(packet, "rssi", 0),
|
||||
|
||||
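The delivery loops above repeat one pattern: fan a packet out to every companion bridge, let each bridge decide whether it is relevant, and never let one bridge's failure block the others. A minimal stand-alone sketch of that pattern (the `FakeBridge` class and `deliver_to_bridges` helper are illustrative names, not part of the repo):

```python
import asyncio


async def deliver_to_bridges(bridges: dict, packet) -> int:
    """Fan a packet out to every companion bridge; each bridge decides relevance.

    Mirrors the loops above: an exception in one bridge (e.g. a failed decrypt)
    is swallowed so the remaining bridges still receive the packet.
    """
    delivered = 0
    for bridge in bridges.values():
        try:
            await bridge.process_received_packet(packet)
            delivered += 1
        except Exception:
            pass  # this bridge couldn't handle it; keep going
    return delivered


class FakeBridge:
    """Hypothetical stand-in for a companion bridge."""

    def __init__(self, fail: bool = False):
        self.fail = fail

    async def process_received_packet(self, packet):
        if self.fail:
            raise RuntimeError("decrypt failed")


# One healthy bridge, one that raises: only the healthy one counts.
print(asyncio.run(deliver_to_bridges({"a": FakeBridge(), "b": FakeBridge(fail=True)}, b"\x00")))  # 1
```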
@@ -0,0 +1,110 @@
"""
Service management utilities for pyMC Repeater.

Provides functions for service control operations like restart.
"""

import logging
import os
import subprocess
from typing import Tuple

logger = logging.getLogger("ServiceUtils")
INIT_SCRIPT = "/etc/init.d/S80pymc-repeater"


def is_buildroot() -> bool:
    if os.path.exists("/etc/pymc-image-build-id"):
        return True
    if os.path.exists("/etc/os-release"):
        try:
            with open("/etc/os-release", "r", encoding="utf-8") as handle:
                return any(line.strip() == "ID=buildroot" for line in handle)
        except OSError:
            return False
    return False


def restart_service() -> Tuple[bool, str]:
    """
    Restart the pymc-repeater service.

    On Buildroot/Luckfox, use the shipped init script directly.
    On systemd hosts, try polkit-based restart first (plain systemctl), then
    fall back to sudo-based restart (requires sudoers.d rule installed by
    manage.sh).

    Returns:
        Tuple[bool, str]: (success, message)
    """
    if is_buildroot():
        if not os.path.exists(INIT_SCRIPT):
            logger.error("Buildroot init script not found: %s", INIT_SCRIPT)
            return False, f"init script not found: {INIT_SCRIPT}"

        try:
            subprocess.Popen(
                ["/bin/sh", "-c", f"sleep 1; exec {INIT_SCRIPT} restart >/dev/null 2>&1"],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
                stdin=subprocess.DEVNULL,
                start_new_session=True,
            )
            logger.info("Service restart scheduled via Buildroot init script")
            return True, "Service restart initiated"
        except Exception as exc:
            logger.error(f"Buildroot restart failed: {exc}")
            return False, f"Restart failed: {exc}"

    # Try polkit-based restart first (works on bare metal / VMs with polkit running)
    try:
        result = subprocess.run(
            ["systemctl", "restart", "pymc-repeater"], capture_output=True, text=True, timeout=5
        )

        if result.returncode == 0:
            logger.info("Service restart via polkit succeeded")
            return True, "Service restart initiated"

        stderr = result.stderr or ""
        if "Access denied" in stderr or "authorization" in stderr.lower():
            logger.info("Polkit denied restart, trying sudo fallback...")
        else:
            # Some other error, still try sudo
            logger.warning(f"systemctl restart failed ({result.returncode}): {stderr.strip()}")

    except subprocess.TimeoutExpired:
        # Timeout likely means it's restarting - that's success
        logger.warning("Service restart command timed out (service may be restarting)")
        return True, "Service restart initiated (timeout - likely restarting)"
    except FileNotFoundError:
        logger.error("systemctl not found")
        return False, "systemctl not available"
    except Exception as e:
        logger.warning(f"Polkit restart attempt failed: {e}")

    # Fallback: use sudo (requires /etc/sudoers.d/pymc-repeater rule)
    try:
        result = subprocess.run(
            ["sudo", "--non-interactive", "systemctl", "restart", "pymc-repeater"],
            capture_output=True,
            text=True,
            timeout=5,
        )

        if result.returncode == 0:
            logger.info("Service restart via sudo succeeded")
            return True, "Service restart initiated"
        else:
            error_msg = result.stderr or "Unknown error"
            logger.error(f"Service restart via sudo failed: {error_msg}")
            return False, f"Restart failed: {error_msg}"

    except subprocess.TimeoutExpired:
        logger.warning("Sudo restart timed out (service likely restarting)")
        return True, "Service restart initiated (timeout - likely restarting)"
    except FileNotFoundError:
        logger.error("sudo not found - cannot restart service")
        return False, "Neither polkit nor sudo available for service restart"
    except Exception as e:
        logger.error(f"Error executing sudo restart: {e}")
        return False, f"Restart command failed: {str(e)}"
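The Buildroot detection above boils down to an exact-match scan of `/etc/os-release` for `ID=buildroot`. A tiny sketch of the same check applied to file contents directly (the `id_is_buildroot` name is hypothetical, introduced here only for illustration):

```python
def id_is_buildroot(os_release_text: str) -> bool:
    # Same predicate as is_buildroot() above: an exact `ID=buildroot` line,
    # so `ID=debian` or `VERSION_ID=buildroot` never match.
    return any(line.strip() == "ID=buildroot" for line in os_release_text.splitlines())


print(id_is_buildroot("NAME=Buildroot\nID=buildroot\n"))        # True
print(id_is_buildroot('NAME="Debian GNU/Linux"\nID=debian\n'))  # False
```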
@@ -1,12 +1,14 @@
-from .http_server import HTTPStatsServer, StatsApp, LogBuffer, _log_buffer
 from .api_endpoints import APIEndpoints
 from .cad_calibration_engine import CADCalibrationEngine
+from .http_server import HTTPStatsServer, LogBuffer, StatsApp, _log_buffer
+from .update_endpoints import UpdateAPIEndpoints

 __all__ = [
-    'HTTPStatsServer',
-    'StatsApp',
-    'LogBuffer',
-    'APIEndpoints',
-    'CADCalibrationEngine',
-    '_log_buffer'
-]
+    "HTTPStatsServer",
+    "StatsApp",
+    "LogBuffer",
+    "APIEndpoints",
+    "CADCalibrationEngine",
+    "UpdateAPIEndpoints",
+    "_log_buffer",
+]
+4368 -285
File diff suppressed because it is too large
@@ -0,0 +1,5 @@
from .api_tokens import APITokenManager
from .jwt_handler import JWTHandler
from .middleware import require_auth

__all__ = ["JWTHandler", "APITokenManager", "require_auth"]
@@ -0,0 +1,44 @@
import hashlib
import hmac
import logging
import secrets
from typing import Dict, List, Optional

logger = logging.getLogger(__name__)


class APITokenManager:
    def __init__(self, sqlite_handler, secret_key: str):
        self.db = sqlite_handler
        self.secret_key = secret_key.encode("utf-8")

    def generate_api_token(self) -> str:
        return secrets.token_hex(32)

    def hash_token(self, token: str) -> str:
        return hmac.new(self.secret_key, token.encode("utf-8"), hashlib.sha256).hexdigest()

    def create_token(self, name: str) -> tuple[int, str]:
        plaintext_token = self.generate_api_token()
        token_hash = self.hash_token(plaintext_token)

        token_id = self.db.create_api_token(name, token_hash)

        logger.info(f"Created API token '{name}' with ID {token_id}")
        return token_id, plaintext_token

    def verify_token(self, token: str) -> Optional[Dict]:
        token_hash = self.hash_token(token)
        return self.db.verify_api_token(token_hash)

    def revoke_token(self, token_id: int) -> bool:
        deleted = self.db.revoke_api_token(token_id)

        if deleted:
            logger.info(f"Revoked API token ID {token_id}")

        return deleted

    def list_tokens(self) -> List[Dict]:
        return self.db.list_api_tokens()
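`APITokenManager` never stores the plaintext token: it stores a keyed HMAC-SHA256 digest, and verification simply recomputes the digest. A self-contained sketch of that round trip (the `secret_key` value here is a placeholder standing in for the manager's server-side secret):

```python
import hashlib
import hmac
import secrets

secret_key = b"server-side-secret"  # stand-in for APITokenManager.secret_key

token = secrets.token_hex(32)  # what generate_api_token() returns: 32 random bytes as hex
stored = hmac.new(secret_key, token.encode("utf-8"), hashlib.sha256).hexdigest()

# Only `stored` goes to the database; the plaintext is shown to the user once.
# Verification recomputes the HMAC over the presented token and compares digests.
presented = hmac.new(secret_key, token.encode("utf-8"), hashlib.sha256).hexdigest()
print(hmac.compare_digest(stored, presented))  # True
print(len(token))                              # 64 (hex characters)
```

Using a keyed HMAC rather than a plain hash means a leaked database alone is not enough to forge tokens; the attacker would also need the server-side secret.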
@@ -0,0 +1,84 @@
import logging

import cherrypy

logger = logging.getLogger("HTTPServer")


def check_auth():
    """
    CherryPy tool to check authentication before processing a request.

    Checks for either a JWT in the Authorization header, an API token in the
    X-API-Key header, or a JWT in a query parameter (for EventSource/SSE
    connections).
    Sets cherrypy.request.user on success.
    Returns a 401 JSON response on failure.
    """
    # Skip auth check for OPTIONS requests (CORS preflight)
    if cherrypy.request.method == "OPTIONS":
        return

    # Skip auth check for the /auth/login endpoint
    if cherrypy.request.path_info == "/auth/login":
        return

    # Get auth handlers from config
    jwt_handler = cherrypy.config.get("jwt_handler")
    token_manager = cherrypy.config.get("token_manager")

    if not jwt_handler or not token_manager:
        logger.error("Auth handlers not initialized in cherrypy.config")
        cherrypy.response.status = 500
        return {"success": False, "error": "Authentication system not configured"}

    # Check for a JWT in the Authorization header first
    auth_header = cherrypy.request.headers.get("Authorization", "")
    if auth_header.startswith("Bearer "):
        token = auth_header[7:]  # Remove "Bearer " prefix
        payload = jwt_handler.verify_jwt(token)

        if payload:
            cherrypy.request.user = {
                "username": payload.get("sub"),
                "client_id": payload.get("client_id"),
                "auth_type": "jwt",
            }
            return

    # Check for a JWT in a query parameter (for EventSource/SSE).
    # EventSource doesn't support custom headers, so we use a query param.
    query_token = cherrypy.request.params.get("token")
    if query_token:
        payload = jwt_handler.verify_jwt(query_token)

        if payload:
            cherrypy.request.user = {
                "username": payload.get("sub"),
                "client_id": payload.get("client_id"),
                "auth_type": "jwt_query",
            }
            # Remove the token from params to avoid exposing it in logs
            del cherrypy.request.params["token"]
            return

    # Check for an API token in the X-API-Key header
    api_key = cherrypy.request.headers.get("X-API-Key", "")
    if api_key:
        token_info = token_manager.verify_token(api_key)

        if token_info:
            cherrypy.request.user = {
                "token_id": token_info["id"],
                "token_name": token_info["name"],
                "auth_type": "api_token",
            }
            return

    # No valid authentication found
    logger.warning(f"Unauthorized access attempt to {cherrypy.request.path_info}")
    raise cherrypy.HTTPError(401, "Unauthorized - Valid JWT or API token required")


# Register the tool
cherrypy.tools.require_auth = cherrypy.Tool("before_handler", check_auth)
logger.info("CherryPy require_auth tool registered")
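The tool above checks credentials in a fixed order: Bearer JWT, then the `?token=` query parameter (for SSE), then the `X-API-Key` header. The precedence can be isolated as a pure function, sketched here without CherryPy (`resolve_auth` is an illustrative name; the real code verifies each credential rather than just classifying it):

```python
def resolve_auth(headers: dict, params: dict):
    """Classify which credential check_auth would try first.

    Returns an (auth_type, credential) pair, or (None, None) when nothing
    usable is present. Order mirrors check_auth: Bearer JWT, query token
    (EventSource/SSE can't set custom headers), then X-API-Key.
    """
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return ("jwt", auth[7:])
    if params.get("token"):
        return ("jwt_query", params["token"])
    if headers.get("X-API-Key"):
        return ("api_token", headers["X-API-Key"])
    return (None, None)


print(resolve_auth({"Authorization": "Bearer abc"}, {}))  # ('jwt', 'abc')
print(resolve_auth({}, {"token": "xyz"}))                 # ('jwt_query', 'xyz')
print(resolve_auth({"X-API-Key": "k1"}, {}))              # ('api_token', 'k1')
```

Note that a Bearer header wins even when a query token is also present, matching the early returns in `check_auth`.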
@@ -0,0 +1,35 @@
import logging
import time
from typing import Dict, Optional

import jwt

logger = logging.getLogger(__name__)


class JWTHandler:
    def __init__(self, secret: str, expiry_minutes: int = 15):
        self.secret = secret
        self.expiry_minutes = expiry_minutes

    def create_jwt(self, username: str, client_id: str) -> str:
        now = int(time.time())
        expiry = now + (self.expiry_minutes * 60)

        payload = {"sub": username, "exp": expiry, "iat": now, "client_id": client_id}

        token = jwt.encode(payload, self.secret, algorithm="HS256")
        logger.info(f"Created JWT for user '{username}' with client_id '{client_id[:8]}...'")
        return token

    def verify_jwt(self, token: str) -> Optional[Dict]:
        try:
            payload = jwt.decode(token, self.secret, algorithms=["HS256"])
            return payload
        except jwt.ExpiredSignatureError:
            logger.warning("JWT token expired")
            return None
        except jwt.InvalidTokenError as e:
            logger.warning(f"Invalid JWT token: {e}")
            return None
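`JWTHandler` delegates signing to PyJWT, but the HS256 token format itself is simple: two base64url-encoded JSON segments plus an HMAC-SHA256 signature over them. A stdlib-only sketch of what `jwt.encode(..., algorithm="HS256")` produces (the `b64url`/`sign_hs256` helpers are illustrative, not part of the repo or of PyJWT):

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def sign_hs256(payload: dict, secret: str) -> str:
    """Build header.payload.signature, HMAC-SHA256 over the first two parts."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)


token = sign_hs256({"sub": "admin", "exp": int(time.time()) + 900}, "change-me")
print(token.count("."))  # 2: header.payload.signature
```

Verification (as `jwt.decode` does) recomputes the signature with the shared secret and additionally rejects tokens whose `exp` claim is in the past, which is why `verify_jwt` catches `ExpiredSignatureError` separately.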
@@ -0,0 +1,66 @@
import logging
from functools import wraps

import cherrypy

logger = logging.getLogger(__name__)


def require_auth(func):

    @wraps(func)
    def wrapper(*args, **kwargs):
        # Skip authentication for OPTIONS requests (CORS preflight)
        if cherrypy.request.method == "OPTIONS":
            return func(*args, **kwargs)

        # Get auth handlers from global cherrypy config (not app config)
        jwt_handler = cherrypy.config.get("jwt_handler")
        token_manager = cherrypy.config.get("token_manager")

        if not jwt_handler or not token_manager:
            logger.error("Auth handlers not configured")
            raise cherrypy.HTTPError(500, "Authentication not configured")

        # Try JWT authentication first
        auth_header = cherrypy.request.headers.get("Authorization", "")
        if auth_header.startswith("Bearer "):
            token = auth_header[7:]  # Remove 'Bearer ' prefix
            payload = jwt_handler.verify_jwt(token)

            if payload:
                # JWT is valid
                cherrypy.request.user = {
                    "username": payload["sub"],
                    "client_id": payload["client_id"],
                    "auth_type": "jwt",
                }
                return func(*args, **kwargs)
            else:
                logger.warning("Invalid or expired JWT token")

        # Try API token authentication
        api_key = cherrypy.request.headers.get("X-API-Key", "")
        if api_key:
            token_info = token_manager.verify_token(api_key)

            if token_info:
                # API token is valid
                cherrypy.request.user = {
                    "username": "api_token",
                    "token_name": token_info["name"],
                    "token_id": token_info["id"],
                    "auth_type": "api_token",
                }
                return func(*args, **kwargs)
            else:
                logger.warning("Invalid API token")

        # No valid authentication found
        logger.warning(f"Unauthorized access attempt to {cherrypy.request.path_info}")

        cherrypy.response.status = 401
        cherrypy.response.headers["Content-Type"] = "application/json"
        return {"success": False, "error": "Unauthorized - Valid JWT or API token required"}

    return wrapper
@@ -0,0 +1,463 @@
"""
Authentication endpoints for login and token management
"""
import cherrypy
import logging
from .auth.middleware import require_auth

logger = logging.getLogger(__name__)


class AuthAPIEndpoints:
    """Nested endpoint for /api/auth/* RESTful routes"""

    def __init__(self):
        # Create tokens nested endpoint for /api/auth/tokens
        self.tokens = TokensAPIEndpoint()


class TokensAPIEndpoint:
    """RESTful token management endpoints for /api/auth/tokens"""

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def index(self):
        # Handle CORS preflight
        if cherrypy.request.method == 'OPTIONS':
            return {}

        # Get token manager from cherrypy config
        token_manager = cherrypy.config.get('token_manager')
        if not token_manager:
            cherrypy.response.status = 500
            return {'success': False, 'error': 'Token manager not available'}

        if cherrypy.request.method == 'GET':
            try:
                tokens = token_manager.list_tokens()
                return {'success': True, 'tokens': tokens}
            except Exception as e:
                logger.error(f"Token list error: {e}")
                cherrypy.response.status = 500
                return {'success': False, 'error': 'Failed to list tokens'}

        elif cherrypy.request.method == 'POST':
            try:
                import json
                body = cherrypy.request.body.read().decode('utf-8')
                data = json.loads(body) if body else {}
                name = data.get('name', '').strip()

                if not name:
                    cherrypy.response.status = 400
                    return {'success': False, 'error': 'Token name is required'}

                # Create the token
                token_id, plaintext_token = token_manager.create_token(name)

                logger.info(f"Generated API token '{name}' (ID: {token_id}) by user {cherrypy.request.user['username']}")

                return {
                    'success': True,
                    'token': plaintext_token,
                    'token_id': token_id,
                    'name': name,
                    'warning': 'Save this token securely - it will not be shown again'
                }

            except Exception as e:
                logger.error(f"Token generation error: {e}")
                cherrypy.response.status = 500
                return {'success': False, 'error': 'Failed to generate token'}
        else:
            raise cherrypy.HTTPError(405, "Method not allowed")

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def default(self, token_id=None):
        # Handle CORS preflight
        if cherrypy.request.method == 'OPTIONS':
            return {}

        # Get token manager from cherrypy config
        token_manager = cherrypy.config.get('token_manager')
        if not token_manager:
            cherrypy.response.status = 500
            return {'success': False, 'error': 'Token manager not available'}

        if cherrypy.request.method == 'DELETE':
            try:
                if not token_id:
                    cherrypy.response.status = 400
                    return {'success': False, 'error': 'Token ID is required'}

                # Convert to int
                try:
                    token_id_int = int(token_id)
                except ValueError:
                    cherrypy.response.status = 400
                    return {'success': False, 'error': 'Invalid token ID'}

                # Revoke the token
                success = token_manager.revoke_token(token_id_int)

                if success:
                    logger.info(f"Revoked API token ID {token_id_int} by user {cherrypy.request.user['username']}")
                    return {'success': True, 'message': 'Token revoked successfully'}
                else:
                    cherrypy.response.status = 404
                    return {'success': False, 'error': 'Token not found'}

            except Exception as e:
                logger.error(f"Token revocation error: {e}")
                cherrypy.response.status = 500
                return {'success': False, 'error': 'Failed to revoke token'}
        else:
            raise cherrypy.HTTPError(405, "Method not allowed")


class AuthEndpoints:

    def __init__(self, config, jwt_handler, token_manager, config_manager=None):
        self.config = config
        self.jwt_handler = jwt_handler
        self.token_manager = token_manager
        self.config_manager = config_manager

    @cherrypy.expose
    def login(self, **kwargs):

        cherrypy.response.headers['Content-Type'] = 'application/json'

        # Handle CORS preflight
        if cherrypy.request.method == 'OPTIONS':
            cherrypy.response.headers['Access-Control-Allow-Methods'] = 'POST, OPTIONS'
            cherrypy.response.headers['Access-Control-Allow-Headers'] = 'Content-Type, Authorization, X-API-Key'
            return b''

        if cherrypy.request.method != 'POST':
            raise cherrypy.HTTPError(405, "Method not allowed")

        try:
            # Parse JSON body manually since we can't use the json_in decorator with OPTIONS
            import json
            body = cherrypy.request.body.read().decode('utf-8')
            data = json.loads(body) if body else {}

            username = data.get('username', '').strip()
            password = data.get('password', '')
            client_id = data.get('client_id', '').strip()

            if not username or not password or not client_id:
                return json.dumps({
                    'success': False,
                    'error': 'Missing required fields: username, password, client_id'
                }).encode('utf-8')

            # Validate credentials against config:
            # the username must be 'admin' and the password must match config
            repeater_config = self.config.get('repeater', {})
            security_config = repeater_config.get('security', {})
            config_password = security_config.get('admin_password', '')

            # Don't allow login with an empty or unconfigured password
            if not config_password:
                logger.warning("Login attempt rejected - password not configured")
                return json.dumps({
                    'success': False,
                    'error': 'System not configured. Please complete setup wizard.'
                }).encode('utf-8')

            if username == 'admin' and password == config_password:
                # Create JWT token
                token = self.jwt_handler.create_jwt(username, client_id)

                logger.info(f"Successful login for user '{username}' from client '{client_id[:8]}...'")

                return json.dumps({
                    'success': True,
                    'token': token,
                    'expires_in': self.jwt_handler.expiry_minutes * 60,
                    'username': username
                }).encode('utf-8')
            else:
                logger.warning(f"Failed login attempt for user '{username}'")

                # Don't reveal which part was wrong
                return json.dumps({
                    'success': False,
                    'error': 'Invalid username or password'
                }).encode('utf-8')

        except Exception as e:
            logger.error(f"Login error: {e}")
            return json.dumps({
                'success': False,
                'error': 'Internal server error'
            }).encode('utf-8')

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def verify(self):
        if cherrypy.request.method != 'GET':
            raise cherrypy.HTTPError(405, "Method not allowed")

        return {
            'success': True,
            'authenticated': True,
            'user': cherrypy.request.user
        }

    @cherrypy.expose
    def refresh(self, **kwargs):

        cherrypy.response.headers['Content-Type'] = 'application/json'

        # Handle CORS preflight
        if cherrypy.request.method == 'OPTIONS':
            cherrypy.response.headers['Access-Control-Allow-Methods'] = 'POST, OPTIONS'
            cherrypy.response.headers['Access-Control-Allow-Headers'] = 'Content-Type, Authorization, X-API-Key'
            return b''

        if cherrypy.request.method != 'POST':
            raise cherrypy.HTTPError(405, "Method not allowed")

        try:
            import json

            # Manual authentication check (can't use @require_auth since we need to handle OPTIONS)
            auth_header = cherrypy.request.headers.get('Authorization', '')
            api_key = cherrypy.request.headers.get('X-API-Key', '')

            jwt_handler = cherrypy.config.get('jwt_handler')
            token_manager = cherrypy.config.get('token_manager')

            user_info = None

            # Check JWT first
            if auth_header.startswith('Bearer '):
                token = auth_header[7:]
                payload = jwt_handler.verify_jwt(token)
                if payload:
                    user_info = {
                        'username': payload['sub'],
                        'client_id': payload.get('client_id'),
                        'auth_method': 'jwt'
                    }

            # Check API token
            if not user_info and api_key:
                token_data = token_manager.verify_token(api_key)
                if token_data:
                    user_info = {
                        'username': 'admin',
                        'token_id': token_data['id'],
                        'auth_method': 'api_token'
                    }

            if not user_info:
                return json.dumps({
                    'success': False,
                    'error': 'Unauthorized - Valid JWT or API token required'
                }).encode('utf-8')

            # Parse request body
            body = cherrypy.request.body.read().decode('utf-8')
            data = json.loads(body) if body else {}

            client_id = data.get('client_id', user_info.get('client_id', '')).strip()

            if not client_id:
                return json.dumps({
                    'success': False,
                    'error': 'Client ID is required'
                }).encode('utf-8')

            # Create a new JWT token (refreshes expiry time)
            new_token = self.jwt_handler.create_jwt(user_info['username'], client_id)

            logger.info(f"Token refreshed for user '{user_info['username']}' from client '{client_id[:8]}...'")

            return json.dumps({
                'success': True,
                'token': new_token,
                'expires_in': self.jwt_handler.expiry_minutes * 60,
                'username': user_info['username']
            }).encode('utf-8')

        except Exception as e:
            logger.error(f"Token refresh error: {e}")
            return json.dumps({
                'success': False,
                'error': 'Failed to refresh token'
            }).encode('utf-8')

    @cherrypy.expose
    def change_password(self):

        import json

        cherrypy.response.headers['Content-Type'] = 'application/json'

        # Handle CORS preflight
        if cherrypy.request.method == 'OPTIONS':
            cherrypy.response.headers['Access-Control-Allow-Methods'] = 'POST, OPTIONS'
            cherrypy.response.headers['Access-Control-Allow-Headers'] = 'Content-Type, Authorization, X-API-Key'
            return b''

        if cherrypy.request.method != 'POST':
            raise cherrypy.HTTPError(405, "Method not allowed")

        # Require authentication for POST.
        # Get auth handlers from the global cherrypy config
        jwt_handler = cherrypy.config.get('jwt_handler')
        token_manager = cherrypy.config.get('token_manager')

        if not jwt_handler or not token_manager:
            logger.error("Auth handlers not configured")
            raise cherrypy.HTTPError(500, "Authentication not configured")

        # Try JWT authentication first
        auth_header = cherrypy.request.headers.get('Authorization', '')
        user = None

        if auth_header.startswith('Bearer '):
            token = auth_header[7:]  # Remove 'Bearer ' prefix
            payload = jwt_handler.verify_jwt(token)

            if payload:
                user = {
                    'username': payload['sub'],
                    'client_id': payload['client_id'],
                    'auth_type': 'jwt'
                }

        # Try API token authentication if JWT failed
        if not user:
            api_key = cherrypy.request.headers.get('X-API-Key', '')
            if api_key:
                token_info = token_manager.verify_token(api_key)

                if token_info:
                    user = {
                        'username': 'api_token',
                        'token_name': token_info['name'],
                        'token_id': token_info['id'],
                        'auth_type': 'api_token'
                    }

        if not user:
            cherrypy.response.status = 401
            return json.dumps({
                'success': False,
                'error': 'Unauthorized - Valid JWT or API token required'
            }).encode('utf-8')

        try:
            # Parse JSON body manually
            body = cherrypy.request.body.read().decode('utf-8')
            data = json.loads(body) if body else {}

            current_password = data.get('current_password', '')
            new_password = data.get('new_password', '')

            if not current_password or not new_password:
                cherrypy.response.status = 400
                return json.dumps({
                    'success': False,
                    'error': 'Both current_password and new_password are required'
                }).encode('utf-8')

            # Validate new password strength
            if len(new_password) < 8:
                cherrypy.response.status = 400
                return json.dumps({
                    'success': False,
                    'error': 'New password must be at least 8 characters long'
                }).encode('utf-8')

            # Verify current password
            repeater_config = self.config.get('repeater', {})
            security_config = repeater_config.get('security', {})
            config_password = security_config.get('admin_password', '')

            if not config_password:
                cherrypy.response.status = 500
                return json.dumps({
                    'success': False,
                    'error': 'System configuration error'
                }).encode('utf-8')

            if current_password != config_password:
                cherrypy.response.status = 401
                return json.dumps({
                    'success': False,
                    'error': 'Current password is incorrect'
                }).encode('utf-8')

            # Update password in config
            if 'repeater' not in self.config:
                self.config['repeater'] = {}
            if 'security' not in self.config['repeater']:
                self.config['repeater']['security'] = {}

            self.config['repeater']['security']['admin_password'] = new_password

            # Save to the config file using ConfigManager
            if self.config_manager:
                if self.config_manager.save_to_file():
                    logger.info(f"Admin password changed successfully by user {user['username']}")
                    return json.dumps({
                        'success': True,
                        'message': 'Password changed successfully. Please log in again with your new password.'
                    }).encode('utf-8')
                else:
                    cherrypy.response.status = 500
                    return json.dumps({
                        'success': False,
                        'error': 'Failed to save password to config file'
                    }).encode('utf-8')
            else:
                cherrypy.response.status = 500
                return json.dumps({
                    'success': False,
                    'error': 'Config manager not available'
                }).encode('utf-8')

        except Exception as e:
            logger.error(f"Password change error: {e}")
            cherrypy.response.status = 500
            return json.dumps({
                'success': False,
                'error': 'Failed to change password'
            }).encode('utf-8')
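The login endpoint enforces a small JSON contract: `username`, `password`, and `client_id` are all required, with whitespace stripped from the identifiers. A framework-free sketch of just that validation step (the `parse_login_request` helper is an illustrative name, not the real handler, which goes on to verify credentials and mint a JWT):

```python
import json


def parse_login_request(body: str) -> dict:
    """Mimic the login endpoint's required-field validation on a raw JSON body."""
    data = json.loads(body) if body else {}
    username = data.get("username", "").strip()
    password = data.get("password", "")
    client_id = data.get("client_id", "").strip()
    if not username or not password or not client_id:
        return {
            "success": False,
            "error": "Missing required fields: username, password, client_id",
        }
    return {"success": True, "username": username, "client_id": client_id}


print(parse_login_request("{}")["success"])  # False: everything missing
ok = parse_login_request(json.dumps({"username": " admin ", "password": "pw", "client_id": "abc"}))
print(ok["success"], ok["username"])         # True admin
```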
@@ -3,13 +3,13 @@ import logging
|
||||
import random
|
||||
import threading
|
||||
import time
|
||||
from typing import Dict, Any, Optional
|
||||
from typing import Any, Dict, Optional
|
||||
|
||||
logger = logging.getLogger("HTTPServer")
|
||||
|
||||
|
||||
class CADCalibrationEngine:
|
||||
|
||||
|
||||
def __init__(self, daemon_instance=None, event_loop=None):
|
||||
self.daemon_instance = daemon_instance
|
||||
self.event_loop = event_loop
|
||||
@@ -19,26 +19,28 @@ class CADCalibrationEngine:
|
||||
self.progress = {"current": 0, "total": 0}
|
||||
self.clients = set() # SSE clients
|
||||
self.calibration_thread = None
|
||||
|
||||
|
||||
def get_test_ranges(self, spreading_factor: int):
|
||||
"""Get CAD test ranges"""
|
||||
# Higher values = less sensitive, lower values = more sensitive
|
||||
# Test from LESS sensitive to MORE sensitive to find the sweet spot
|
||||
sf_ranges = {
|
||||
7: (range(22, 30, 1), range(12, 20, 1)),
|
||||
8: (range(22, 30, 1), range(12, 20, 1)),
|
||||
9: (range(24, 32, 1), range(14, 22, 1)),
|
||||
10: (range(26, 34, 1), range(16, 24, 1)),
|
||||
11: (range(28, 36, 1), range(18, 26, 1)),
|
||||
12: (range(30, 38, 1), range(20, 28, 1)),
|
||||
7: (range(22, 30, 1), range(12, 20, 1)),
|
||||
8: (range(22, 30, 1), range(12, 20, 1)),
|
||||
9: (range(24, 32, 1), range(14, 22, 1)),
|
||||
10: (range(26, 34, 1), range(16, 24, 1)),
|
||||
11: (range(28, 36, 1), range(18, 26, 1)),
|
||||
12: (range(30, 38, 1), range(20, 28, 1)),
|
||||
}
|
||||
return sf_ranges.get(spreading_factor, sf_ranges[8])
|
||||
|
-    async def test_cad_config(self, radio, det_peak: int, det_min: int, samples: int = 20) -> Dict[str, Any]:
+    async def test_cad_config(
+        self, radio, det_peak: int, det_min: int, samples: int = 20
+    ) -> Dict[str, Any]:

        detections = 0
        baseline_detections = 0

        # First, get baseline with very insensitive settings (should detect nothing)
        baseline_samples = 5
        for _ in range(baseline_samples):
@@ -50,10 +52,10 @@ class CADCalibrationEngine:
            except Exception:
                pass
            await asyncio.sleep(0.1)  # 100ms between baseline samples

        # Wait before actual test
        await asyncio.sleep(0.5)

        # Now test the actual configuration
        for i in range(samples):
            try:
@@ -62,226 +64,247 @@ class CADCalibrationEngine:
                    detections += 1
            except Exception:
                pass

            # Variable delay to avoid sampling artifacts
            delay = 0.05 + (i % 3) * 0.05  # 50ms, 100ms, 150ms rotation
            await asyncio.sleep(delay)

        # Calculate adjusted detection rate
        baseline_rate = (baseline_detections / baseline_samples) * 100
        detection_rate = (detections / samples) * 100

        # Subtract baseline noise
        adjusted_rate = max(0, detection_rate - baseline_rate)

        return {
-            'det_peak': det_peak,
-            'det_min': det_min,
-            'samples': samples,
-            'detections': detections,
-            'detection_rate': detection_rate,
-            'baseline_rate': baseline_rate,
-            'adjusted_rate': adjusted_rate,  # This is the useful metric
-            'sensitivity_score': self._calculate_sensitivity_score(det_peak, det_min, adjusted_rate)
+            "det_peak": det_peak,
+            "det_min": det_min,
+            "samples": samples,
+            "detections": detections,
+            "detection_rate": detection_rate,
+            "baseline_rate": baseline_rate,
+            "adjusted_rate": adjusted_rate,  # This is the useful metric
+            "sensitivity_score": self._calculate_sensitivity_score(
+                det_peak, det_min, adjusted_rate
+            ),
        }

-    def _calculate_sensitivity_score(self, det_peak: int, det_min: int, adjusted_rate: float) -> float:
+    def _calculate_sensitivity_score(
+        self, det_peak: int, det_min: int, adjusted_rate: float
+    ) -> float:

        # Ideal detection rate is around 10-30% for good sensitivity without false positives
        ideal_rate = 20.0
        rate_penalty = abs(adjusted_rate - ideal_rate) / ideal_rate

        # Prefer moderate sensitivity settings (not too extreme)
        sensitivity_penalty = (abs(det_peak - 25) + abs(det_min - 15)) / 20.0

        # Lower penalty = higher score
        score = max(0, 100 - (rate_penalty * 50) - (sensitivity_penalty * 20))
        return score

    def broadcast_to_clients(self, data):

        # Store the message for clients to pick up
        self.last_message = data
        # Also store in a queue for clients to consume
-        if not hasattr(self, 'message_queue'):
+        if not hasattr(self, "message_queue"):
            self.message_queue = []
        self.message_queue.append(data)
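The scoring heuristic above can be exercised on its own; this sketch restates `_calculate_sensitivity_score` as a pure function, using the constants (ideal rate 20%, centre settings peak=25, min=15) taken from the code above:

```python
def sensitivity_score(det_peak: int, det_min: int, adjusted_rate: float) -> float:
    """Pure restatement of the calibrator's scoring heuristic."""
    ideal_rate = 20.0  # target adjusted detection rate, in percent
    rate_penalty = abs(adjusted_rate - ideal_rate) / ideal_rate
    # Penalise extreme register settings relative to the moderate centre (25, 15)
    sensitivity_penalty = (abs(det_peak - 25) + abs(det_min - 15)) / 20.0
    return max(0, 100 - (rate_penalty * 50) - (sensitivity_penalty * 20))

# A configuration sitting exactly at the "sweet spot" scores the maximum
print(sensitivity_score(25, 15, 20.0))  # → 100.0
```

Scores fall off linearly as either the detection rate or the register settings drift from the centre, which is why the worker picks the maximum-score result rather than the maximum detection rate.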
    def calibration_worker(self, samples: int, delay_ms: int):

        try:
            # Get radio from daemon instance
            if not self.daemon_instance:
-                self.broadcast_to_clients({"type": "error", "message": "No daemon instance available"})
+                self.broadcast_to_clients(
+                    {"type": "error", "message": "No daemon instance available"}
+                )
                return

-            radio = getattr(self.daemon_instance, 'radio', None)
+            radio = getattr(self.daemon_instance, "radio", None)
            if not radio:
-                self.broadcast_to_clients({"type": "error", "message": "Radio instance not available"})
+                self.broadcast_to_clients(
+                    {"type": "error", "message": "Radio instance not available"}
+                )
                return
-            if not hasattr(radio, 'perform_cad'):
-                self.broadcast_to_clients({"type": "error", "message": "Radio does not support CAD"})
+            if not hasattr(radio, "perform_cad"):
+                self.broadcast_to_clients(
+                    {"type": "error", "message": "Radio does not support CAD"}
+                )
                return

            # Get spreading factor from daemon instance
-            config = getattr(self.daemon_instance, 'config', {})
+            config = getattr(self.daemon_instance, "config", {})
            radio_config = config.get("radio", {})
            sf = radio_config.get("spreading_factor", 8)

            # Get test ranges
            peak_range, min_range = self.get_test_ranges(sf)

            total_tests = len(peak_range) * len(min_range)
            self.progress = {"current": 0, "total": total_tests}

-            self.broadcast_to_clients({
-                "type": "status",
-                "message": f"Starting calibration: SF{sf}, {total_tests} tests",
-                "test_ranges": {
-                    "peak_min": min(peak_range),
-                    "peak_max": max(peak_range),
-                    "min_min": min(min_range),
-                    "min_max": max(min_range),
-                    "spreading_factor": sf,
-                    "total_tests": total_tests
-                }
-            })
+            self.broadcast_to_clients(
+                {
+                    "type": "status",
+                    "message": f"Starting calibration: SF{sf}, {total_tests} tests",
+                    "test_ranges": {
+                        "peak_min": min(peak_range),
+                        "peak_max": max(peak_range),
+                        "min_min": min(min_range),
+                        "min_max": max(min_range),
+                        "spreading_factor": sf,
+                        "total_tests": total_tests,
+                    },
+                }
+            )

            current = 0

            peak_list = list(peak_range)
            min_list = list(min_range)

            # Create all test combinations
            test_combinations = []
            for det_peak in peak_list:
                for det_min in min_list:
                    test_combinations.append((det_peak, det_min))

            # Sort by distance from center for center-out pattern
            peak_center = (max(peak_list) + min(peak_list)) / 2
            min_center = (max(min_list) + min(min_list)) / 2

            def distance_from_center(combo):
                peak, min_val = combo
                return ((peak - peak_center) ** 2 + (min_val - min_center) ** 2) ** 0.5

            # Sort by distance from center
            test_combinations.sort(key=distance_from_center)

            # Randomize within bands for better coverage
            band_size = max(1, len(test_combinations) // 8)  # Create 8 bands
            randomized_combinations = []

            for i in range(0, len(test_combinations), band_size):
-                band = test_combinations[i:i + band_size]
+                band = test_combinations[i : i + band_size]
                random.shuffle(band)  # Randomize within each band
                randomized_combinations.extend(band)

            # Run calibration in event loop with center-out randomized pattern
            if self.event_loop:
                for det_peak, det_min in randomized_combinations:
                    if not self.running:
                        break

                    current += 1
                    self.progress["current"] = current

                    # Update progress
-                    self.broadcast_to_clients({
-                        "type": "progress",
-                        "current": current,
-                        "total": total_tests,
-                        "peak": det_peak,
-                        "min": det_min
-                    })
+                    self.broadcast_to_clients(
+                        {
+                            "type": "progress",
+                            "current": current,
+                            "total": total_tests,
+                            "peak": det_peak,
+                            "min": det_min,
+                        }
+                    )

                    # Run the test
                    future = asyncio.run_coroutine_threadsafe(
-                        self.test_cad_config(radio, det_peak, det_min, samples),
-                        self.event_loop
+                        self.test_cad_config(radio, det_peak, det_min, samples), self.event_loop
                    )

                    try:
                        result = future.result(timeout=30)  # 30 second timeout per test

                        # Store result
                        key = f"{det_peak}-{det_min}"
                        self.results[key] = result

                        # Send result to clients
-                        self.broadcast_to_clients({
-                            "type": "result",
-                            **result
-                        })
+                        self.broadcast_to_clients({"type": "result", **result})
                    except Exception as e:
                        logger.error(f"CAD test failed for peak={det_peak}, min={det_min}: {e}")

                    # Delay between tests
                    if self.running and delay_ms > 0:
                        time.sleep(delay_ms / 1000.0)

            if self.running:
                # Find best result based on sensitivity score (not just detection rate)
                best_result = None
                recommended_result = None
                if self.results:
                    # Find result with highest sensitivity score (best balance)
-                    best_result = max(self.results.values(), key=lambda x: x.get('sensitivity_score', 0))
+                    best_result = max(
+                        self.results.values(), key=lambda x: x.get("sensitivity_score", 0)
+                    )

                    # Also find result with ideal adjusted detection rate (10-30%)
-                    ideal_results = [r for r in self.results.values() if 10 <= r.get('adjusted_rate', 0) <= 30]
+                    ideal_results = [
+                        r for r in self.results.values() if 10 <= r.get("adjusted_rate", 0) <= 30
+                    ]
                    if ideal_results:
                        # Among ideal results, pick the one with best sensitivity score
-                        recommended_result = max(ideal_results, key=lambda x: x.get('sensitivity_score', 0))
+                        recommended_result = max(
+                            ideal_results, key=lambda x: x.get("sensitivity_score", 0)
+                        )
                    else:
                        recommended_result = best_result

-                self.broadcast_to_clients({
-                    "type": "completed",
-                    "message": "Calibration completed",
-                    "results": {
-                        "best": best_result,
-                        "recommended": recommended_result,
-                        "total_tests": len(self.results)
-                    } if best_result else None
-                })
+                self.broadcast_to_clients(
+                    {
+                        "type": "completed",
+                        "message": "Calibration completed",
+                        "results": (
+                            {
+                                "best": best_result,
+                                "recommended": recommended_result,
+                                "total_tests": len(self.results),
+                            }
+                            if best_result
+                            else None
+                        ),
+                    }
+                )
            else:
                self.broadcast_to_clients({"type": "status", "message": "Calibration stopped"})

        except Exception as e:
            logger.error(f"Calibration worker error: {e}")
            self.broadcast_to_clients({"type": "error", "message": str(e)})
        finally:
            self.running = False
    def start_calibration(self, samples: int = 8, delay_ms: int = 100):

        if self.running:
            return False

        self.running = True
        self.results.clear()
        self.progress = {"current": 0, "total": 0}
        self.clear_message_queue()  # Clear any old messages

        # Start calibration in separate thread
        self.calibration_thread = threading.Thread(
-            target=self.calibration_worker,
-            args=(samples, delay_ms)
+            target=self.calibration_worker, args=(samples, delay_ms)
        )
        self.calibration_thread.daemon = True
        self.calibration_thread.start()

        return True

    def stop_calibration(self):

        self.running = False
        if self.calibration_thread:
            self.calibration_thread.join(timeout=2)

    def clear_message_queue(self):

-        if hasattr(self, 'message_queue'):
-            self.message_queue.clear()
+        if hasattr(self, "message_queue"):
+            self.message_queue.clear()
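The centre-out, band-randomised test ordering that `calibration_worker` uses can be sketched in isolation. This is a standalone restatement of that ordering (8 bands, as in the code above), not the repo's own helper:

```python
import random

def order_tests(peak_list, min_list, bands=8):
    """Sort (det_peak, det_min) pairs centre-out, then shuffle within bands."""
    combos = [(p, m) for p in peak_list for m in min_list]
    peak_c = (max(peak_list) + min(peak_list)) / 2
    min_c = (max(min_list) + min(min_list)) / 2
    # Nearest-to-centre combinations are tested first
    combos.sort(key=lambda c: ((c[0] - peak_c) ** 2 + (c[1] - min_c) ** 2) ** 0.5)
    band_size = max(1, len(combos) // bands)
    ordered = []
    for i in range(0, len(combos), band_size):
        band = combos[i : i + band_size]
        random.shuffle(band)  # randomise within each distance band
        ordered.extend(band)
    return ordered

tests = order_tests(list(range(22, 30)), list(range(12, 20)))
```

The result is a permutation of all combinations: likely sweet-spot settings get sampled early, while shuffling within bands avoids sweeping the registers in a strictly monotonic order.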
@@ -0,0 +1,724 @@
"""
Companion Bridge REST API and SSE event stream endpoints.

Mounted as a nested CherryPy object at /api/companion/ via APIEndpoints.
Provides browser-accessible REST endpoints that proxy into the CompanionBridge
async methods, plus a Server-Sent Events stream for real-time push callbacks.
"""

import asyncio
import json
import logging
import queue
import threading
import time
from typing import Optional

import cherrypy

from repeater.companion.utils import validate_companion_node_name

from .auth.middleware import require_auth

logger = logging.getLogger("CompanionAPI")


class CompanionAPIEndpoints:
    """REST + SSE endpoints for a companion bridge.

    CherryPy auto-mounts this at ``/api/companion/`` when assigned as
    ``APIEndpoints.companion``. All async bridge calls are dispatched
    to the daemon's event loop via ``asyncio.run_coroutine_threadsafe``.
    """
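The dispatch pattern the class docstring names, synchronous CherryPy handler threads handing coroutines to a long-lived asyncio loop, can be demonstrated minimally. The names here are illustrative, not from the repo:

```python
import asyncio
import threading

# A long-lived event loop running in a background thread, as the daemon does
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def bridge_call(x):
    await asyncio.sleep(0)  # stand-in for real async bridge work
    return x * 2

# From a synchronous request-handler thread: schedule on the loop, block on a
# concurrent.futures.Future for the result (this blocks the handler thread only)
future = asyncio.run_coroutine_threadsafe(bridge_call(21), loop)
result = future.result(timeout=5)
loop.call_soon_threadsafe(loop.stop)
```

This is exactly the shape of the `_run_async` helper defined further down: the loop thread stays free to serve radio traffic while each HTTP worker waits on its own future.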
    def __init__(self, daemon_instance=None, event_loop=None, config=None, config_manager=None):
        self.daemon_instance = daemon_instance
        self.event_loop = event_loop
        self.config = config or {}
        self.config_manager = config_manager

        http_cfg = self.config.get("http", {}) if isinstance(self.config, dict) else {}
        self._sse_queue_maxsize = max(32, int(http_cfg.get("sse_queue_maxsize", 64)))
        self._sse_keepalive_sec = max(5, int(http_cfg.get("sse_keepalive_sec", 15)))

        # SSE clients: each gets a thread-safe queue
        self._sse_clients: list[queue.Queue] = []
        self._sse_lock = threading.Lock()

        # Flag: have we registered push callbacks yet?
        self._callbacks_registered = False

    # ------------------------------------------------------------------
    # Helpers
    # ------------------------------------------------------------------

    def _get_bridge(self, name: Optional[str] = None, companion_hash: Optional[int] = None):
        """Return the companion bridge, or raise 503/404 if unavailable.

        Resolution order (mirrors room-server pattern):
        1. *name* — look up via identity_manager by registered name.
        2. *companion_hash* — direct lookup in ``companion_bridges`` dict.
        3. Neither — return the first (and typically only) bridge.
        """
        if not self.daemon_instance:
            raise cherrypy.HTTPError(503, "Daemon not initialized")
        bridges = getattr(self.daemon_instance, "companion_bridges", {})
        if not bridges:
            raise cherrypy.HTTPError(503, "No companion bridges configured")

        # --- resolve by name via identity_manager (same pattern as room servers) ---
        if name is not None:
            identity_manager = getattr(self.daemon_instance, "identity_manager", None)
            if identity_manager:
                for reg_name, identity, _cfg in identity_manager.get_identities_by_type(
                    "companion"
                ):
                    if reg_name == name:
                        hash_byte = identity.get_public_key()[0]
                        bridge = bridges.get(hash_byte)
                        if bridge:
                            return bridge
            raise cherrypy.HTTPError(404, f"Companion '{name}' not found")

        # --- resolve by hash (fallback) ---
        if companion_hash is not None:
            bridge = bridges.get(companion_hash)
            if not bridge:
                msg = f"Companion 0x{companion_hash:02X} not found"  # noqa: E231
                raise cherrypy.HTTPError(404, msg)
            return bridge

        # --- default: first bridge ---
        return next(iter(bridges.values()))

    def _resolve_bridge_params(self, params) -> dict:
        """Extract optional companion name/hash from request params.

        Returns kwargs suitable for ``_get_bridge(**result)``.
        Follows the room-server convention: ``companion_name`` is the
        primary selector, ``companion_hash`` is the fallback.
        """
        name = params.get("companion_name")
        raw_hash = params.get("companion_hash")
        result: dict = {}
        if name is not None:
            result["name"] = str(name)
        elif raw_hash is not None:
            try:
                result["companion_hash"] = int(str(raw_hash), 0)
            except (ValueError, TypeError):
                raise cherrypy.HTTPError(400, "Invalid companion_hash")
        return result

    def _run_async(self, coro, timeout: float = 30.0):
        """Run an async coroutine on the daemon event loop and return result."""
        if self.event_loop is None:
            raise cherrypy.HTTPError(503, "Event loop not available")
        future = asyncio.run_coroutine_threadsafe(coro, self.event_loop)
        return future.result(timeout=timeout)

    @staticmethod
    def _success(data, **kwargs):
        result = {"success": True, "data": data}
        result.update(kwargs)
        return result

    @staticmethod
    def _error(msg):
        return {"success": False, "error": str(msg)}

    def _require_post(self):
        if cherrypy.request.method != "POST":
            cherrypy.response.headers["Allow"] = "POST"
            raise cherrypy.HTTPError(405, "Method not allowed. Use POST.")

    def _get_json_body(self) -> dict:
        """Read and parse the JSON request body."""
        try:
            raw = cherrypy.request.body.read()
            return json.loads(raw) if raw else {}
        except (json.JSONDecodeError, ValueError) as exc:
            raise cherrypy.HTTPError(400, f"Invalid JSON body: {exc}")

    def _pub_key_from_hex(self, hex_str: str) -> bytes:
        """Decode a hex public key, raising 400 on error."""
        try:
            key = bytes.fromhex(hex_str)
            if len(key) != 32:
                raise ValueError("Expected 32-byte key")
            return key
        except (ValueError, TypeError) as exc:
            raise cherrypy.HTTPError(400, f"Invalid public key: {exc}")

    def _get_sqlite_handler(self):
        """Return the repeater's sqlite_handler, or raise 503 if unavailable."""
        if not self.daemon_instance:
            raise cherrypy.HTTPError(503, "Daemon not initialized")
        if (
            not hasattr(self.daemon_instance, "repeater_handler")
            or not self.daemon_instance.repeater_handler
        ):
            raise cherrypy.HTTPError(503, "Repeater handler not initialized")
        storage = getattr(self.daemon_instance.repeater_handler, "storage", None)
        if not storage:
            raise cherrypy.HTTPError(503, "Storage not initialized")
        sqlite_handler = getattr(storage, "sqlite_handler", None)
        if not sqlite_handler:
            raise cherrypy.HTTPError(503, "SQLite storage not available")
        return sqlite_handler
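Because `_resolve_bridge_params` parses `companion_hash` with `int(value, 0)`, clients may send the hash either as a decimal string or with a `0x` prefix; base 0 infers the radix from the prefix. A minimal sketch of that parsing rule:

```python
def parse_hash(raw):
    """Accept '47' or '0x2F': int(..., 0) infers the base from the prefix."""
    return int(str(raw), 0)
```

Note that a bare hex string without the `0x` prefix (e.g. `"2F"`) raises `ValueError`, which the endpoint maps to HTTP 400.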
    # ------------------------------------------------------------------
    # SSE push-event plumbing
    # ------------------------------------------------------------------

    def _ensure_callbacks(self):
        """Register push callbacks on the bridge (once)."""
        if self._callbacks_registered:
            return
        try:
            bridge = self._get_bridge()
        except cherrypy.HTTPError:
            return  # bridge not yet available

        def _make_cb(event_name):
            """Create a callback that serialises event data for SSE clients."""

            def _cb(*args, **kwargs):
                payload = self._serialise_event(event_name, args, kwargs)
                self._broadcast_sse(payload)

            return _cb

        callback_names = [
            "message_received",
            "channel_message_received",
            "advert_received",
            "contact_path_updated",
            "send_confirmed",
            "login_result",
        ]
        for name in callback_names:
            register_fn = getattr(bridge, f"on_{name}", None)
            if register_fn:
                register_fn(_make_cb(name))

        self._callbacks_registered = True

    @staticmethod
    def _serialise_event(event_name: str, args: tuple, kwargs: dict) -> dict:
        """Convert callback arguments to a JSON-safe dict."""
        data: dict = {"event": event_name, "timestamp": int(time.time())}
        for i, arg in enumerate(args):
            data[f"arg{i}"] = _to_json_safe(arg)
        for k, v in kwargs.items():
            data[k] = _to_json_safe(v)
        return data

    def _broadcast_sse(self, payload: dict):
        """Put *payload* into every active SSE client queue."""
        with self._sse_lock:
            dead = []
            for q in self._sse_clients:
                try:
                    q.put_nowait(payload)
                except queue.Full:
                    dead.append(q)
            for q in dead:
                self._sse_clients.remove(q)
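`_broadcast_sse` evicts clients whose queues are full rather than blocking the bridge callback thread. A minimal reproduction of that policy, outside the class:

```python
import queue
import threading

clients = [queue.Queue(maxsize=2), queue.Queue(maxsize=1)]
clients[1].put_nowait({"event": "old"})  # this client's queue is already full
lock = threading.Lock()

def broadcast(payload):
    with lock:
        dead = []
        for q in clients:
            try:
                q.put_nowait(payload)
            except queue.Full:
                dead.append(q)  # slow consumer: evict instead of blocking
        for q in dead:
            clients.remove(q)

broadcast({"event": "advert_received"})
```

After the broadcast only the keeping-up client remains; a slow SSE consumer loses its stream instead of stalling event delivery for everyone else.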
    # ==================================================================
    # REST Endpoints
    # ==================================================================

    # ----- Index / listing -----

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def index(self, **kwargs):
        """GET /api/companion/ — list configured companions."""
        bridges = getattr(self.daemon_instance, "companion_bridges", {})
        identity_manager = getattr(self.daemon_instance, "identity_manager", None)

        # Build name lookup from identity_manager (same pattern as room servers)
        name_by_hash: dict[int, str] = {}
        if identity_manager:
            for reg_name, identity, _cfg in identity_manager.get_identities_by_type("companion"):
                name_by_hash[identity.get_public_key()[0]] = reg_name

        items = []
        for h, b in bridges.items():
            items.append(
                {
                    "companion_name": name_by_hash.get(h, ""),
                    "companion_hash": f"0x{h:02X}",  # noqa: E231
                    "node_name": b.prefs.node_name,
                    "public_key": b.get_public_key().hex(),
                    "is_running": b.is_running,
                    "contacts_count": b.contacts.get_count(),
                    "channels_count": b.channels.get_count(),
                }
            )
        return self._success(items)

    # ----- Identity -----

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def self_info(self, **kwargs):
        """GET /api/companion/self_info — node identity and preferences."""
        bridge = self._get_bridge(**self._resolve_bridge_params(kwargs))
        prefs = bridge.get_self_info()
        return self._success(
            {
                "public_key": bridge.get_public_key().hex(),
                "node_name": prefs.node_name,
                "adv_type": prefs.adv_type,
                "tx_power_dbm": prefs.tx_power_dbm,
                "frequency_hz": prefs.frequency_hz,
                "bandwidth_hz": prefs.bandwidth_hz,
                "spreading_factor": prefs.spreading_factor,
                "coding_rate": prefs.coding_rate,
                "latitude": prefs.latitude,
                "longitude": prefs.longitude,
            }
        )

    # ----- Contacts -----

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def contacts(self, **kwargs):
        """GET /api/companion/contacts — list all contacts."""
        bridge = self._get_bridge(**self._resolve_bridge_params(kwargs))
        since = int(kwargs.get("since", 0))
        contacts = bridge.get_contacts(since=since)
        items = []
        for c in contacts:
            items.append(
                {
                    "public_key": (
                        c.public_key.hex() if isinstance(c.public_key, bytes) else c.public_key
                    ),
                    "name": c.name,
                    "adv_type": c.adv_type,
                    "flags": c.flags,
                    "out_path_len": c.out_path_len,
                    "last_advert_timestamp": c.last_advert_timestamp,
                    "lastmod": c.lastmod,
                    "gps_lat": c.gps_lat,
                    "gps_lon": c.gps_lon,
                }
            )
        return self._success(items)
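The `pub_key` parameter taken by the single-contact and messaging endpoints must decode to exactly 32 bytes (a 64-character hex string). The check `_pub_key_from_hex` applies can be exercised on its own, here restated as a plain function without the CherryPy error mapping:

```python
def pub_key_from_hex(hex_str: str) -> bytes:
    """Decode a 64-character hex string into a 32-byte public key."""
    key = bytes.fromhex(hex_str)
    if len(key) != 32:
        raise ValueError("Expected 32-byte key")
    return key

key = pub_key_from_hex("ab" * 32)  # 64 hex chars → 32 bytes
```

In the endpoint itself both the `bytes.fromhex` failure and the length failure are caught and re-raised as HTTP 400.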
    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def contact(self, **kwargs):
        """GET /api/companion/contact?pub_key=<hex> — get single contact."""
        bridge = self._get_bridge(**self._resolve_bridge_params(kwargs))
        pk_hex = kwargs.get("pub_key")
        if not pk_hex:
            raise cherrypy.HTTPError(400, "pub_key required")
        pub_key = self._pub_key_from_hex(pk_hex)
        c = bridge.get_contact_by_key(pub_key)
        if not c:
            raise cherrypy.HTTPError(404, "Contact not found")
        return self._success(
            {
                "public_key": (
                    c.public_key.hex() if isinstance(c.public_key, bytes) else c.public_key
                ),
                "name": c.name,
                "adv_type": c.adv_type,
                "flags": c.flags,
                "out_path_len": c.out_path_len,
                "out_path": c.out_path.hex() if isinstance(c.out_path, bytes) else "",
                "last_advert_timestamp": c.last_advert_timestamp,
                "lastmod": c.lastmod,
                "gps_lat": c.gps_lat,
                "gps_lon": c.gps_lon,
            }
        )

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def import_repeater_contacts(self, **kwargs):
        """POST /api/companion/import_repeater_contacts {companion_name, contact_types?, hours?, limit?}

        Import repeater adverts into this companion's contact store (one-time seed).
        Optional: contact_types (list), hours (only adverts seen in last N hours),
        limit (max contacts to import, capped by companion max_contacts).
        Results are sorted by last_seen DESC. After import, contacts are hot-reloaded.
        """
        self._require_post()
        body = self._get_json_body()
        companion_name = body.get("companion_name")
        if not companion_name:
            raise cherrypy.HTTPError(400, "companion_name required")
        contact_types = body.get("contact_types")
        if contact_types is not None:
            if not isinstance(contact_types, list):
                raise cherrypy.HTTPError(400, "contact_types must be a list")
            allowed = {"companion", "repeater", "room_server", "sensor"}
            for t in contact_types:
                if not isinstance(t, str) or t not in allowed:
                    raise cherrypy.HTTPError(
                        400,
                        f"contact_types must contain only: companion, repeater, room_server, sensor (got {t!r})",
                    )
            if not contact_types:
                contact_types = None
        hours = body.get("hours")
        if hours is not None:
            try:
                hours = int(hours)
            except (TypeError, ValueError):
                raise cherrypy.HTTPError(400, "hours must be a positive integer")
            if hours < 1:
                raise cherrypy.HTTPError(400, "hours must be a positive integer")
        limit = body.get("limit")
        if limit is not None:
            try:
                limit = int(limit)
            except (TypeError, ValueError):
                raise cherrypy.HTTPError(400, "limit must be a positive integer")
            if limit < 1:
                raise cherrypy.HTTPError(400, "limit must be a positive integer")
        bridge = self._get_bridge(**self._resolve_bridge_params(body))
        if limit is not None:
            max_contacts = getattr(bridge, "max_contacts", 1000)
            limit = min(limit, max_contacts)
        companion_hash = getattr(bridge, "_companion_hash", None)
        if not companion_hash:
            raise cherrypy.HTTPError(503, "Companion hash not available")
        sqlite_handler = self._get_sqlite_handler()
        count = sqlite_handler.companion_import_repeater_contacts(
            companion_hash,
            contact_types=contact_types,
            hours=hours,
            limit=limit,
        )
        contact_rows = sqlite_handler.companion_load_contacts(companion_hash)
        if contact_rows:
            records = []
            for row in contact_rows:
                d = dict(row)
                d["public_key"] = d.pop("pubkey", d.get("public_key", b""))
                records.append(d)
            bridge.contacts.load_from_dicts(records)
        return self._success({"imported": count})
    # ----- Channels -----

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def channels(self, **kwargs):
        """GET /api/companion/channels — list configured channels."""
        try:
            bridge = self._get_bridge(**self._resolve_bridge_params(kwargs))
            items = []
            for idx in range(bridge.channels.max_channels):
                ch = bridge.channels.get(idx)
                if ch:
                    items.append(
                        {
                            "index": idx,
                            "name": ch.name,
                            # Don't expose the PSK secret over REST
                        }
                    )
            return self._success(items)
        except cherrypy.HTTPError:
            raise
        except Exception as exc:
            logger.error(f"channels endpoint error: {exc}", exc_info=True)
            return self._error(str(exc))

    # ----- Statistics -----

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def stats(self, **kwargs):
        """GET /api/companion/stats?type=packets — local companion stats."""
        bridge = self._get_bridge(**self._resolve_bridge_params(kwargs))
        stats_type_map = {"core": 0, "radio": 1, "packets": 2}
        stype = stats_type_map.get(kwargs.get("type", "packets"), 2)
        return self._success(bridge.get_stats(stype))

    # ----- Messaging -----

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def send_text(self, **kwargs):
        """POST /api/companion/send_text {pub_key, text, txt_type?, companion_name?}"""
        self._require_post()
        body = self._get_json_body()
        bridge = self._get_bridge(**self._resolve_bridge_params(body))
        pub_key = self._pub_key_from_hex(body.get("pub_key", ""))
        text = body.get("text", "")
        if not text:
            raise cherrypy.HTTPError(400, "text required")
        txt_type = int(body.get("txt_type", 0))
        result = self._run_async(bridge.send_text_message(pub_key, text, txt_type=txt_type))
        return self._success(
            {
                "sent": result.success,
                "is_flood": result.is_flood,
                "expected_ack": result.expected_ack,
            }
        )

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def send_channel_message(self, **kwargs):
        """POST /api/companion/send_channel_message {channel_idx, text, companion_name?}"""
        self._require_post()
        body = self._get_json_body()
        bridge = self._get_bridge(**self._resolve_bridge_params(body))
        channel_idx = int(body.get("channel_idx", 0))
        text = body.get("text", "")
        if not text:
            raise cherrypy.HTTPError(400, "text required")
        success = self._run_async(bridge.send_channel_message(channel_idx, text))
        return self._success({"sent": success})
||||
# ----- Login -----
|
||||
|
||||
@cherrypy.expose
|
||||
@cherrypy.tools.json_out()
|
||||
@require_auth
|
||||
def login(self, **kwargs):
|
||||
"""POST /api/companion/login {pub_key, password?, companion_name?}"""
|
||||
self._require_post()
|
||||
body = self._get_json_body()
|
||||
bridge = self._get_bridge(**self._resolve_bridge_params(body))
|
||||
pub_key = self._pub_key_from_hex(body.get("pub_key", ""))
|
||||
password = body.get("password", "")
|
||||
result = self._run_async(bridge.send_login(pub_key, password), timeout=15.0)
|
||||
return self._success(_to_json_safe(result))
|
||||
|
||||
# ----- Status / Telemetry Requests -----
|
||||
|
||||
    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def request_status(self, **kwargs):
        """POST /api/companion/request_status {pub_key, timeout?, companion_name?}"""
        self._require_post()
        body = self._get_json_body()
        bridge = self._get_bridge(**self._resolve_bridge_params(body))
        pub_key = self._pub_key_from_hex(body.get("pub_key", ""))
        timeout = float(body.get("timeout", 15.0))
        result = self._run_async(
            bridge.send_status_request(pub_key, timeout=timeout),
            timeout=timeout + 5.0,
        )
        return self._success(_to_json_safe(result))

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def request_telemetry(self, **kwargs):
        """POST /api/companion/request_telemetry.

        Body: pub_key, want_base?, want_location?, want_environment?,
        timeout?, companion_name?

        On success, telemetry_data includes raw_bytes (LPP hex), sensors (parsed),
        and frame_bytes (hex): companion-style frame 0x8B + 0 + 6B pubkey prefix + LPP.
        """
        self._require_post()
        try:
            body = self._get_json_body()
            bridge = self._get_bridge(**self._resolve_bridge_params(body))
            pub_key = self._pub_key_from_hex(body.get("pub_key", ""))
            timeout = float(body.get("timeout", 20.0))
            result = self._run_async(
                bridge.send_telemetry_request(
                    pub_key,
                    want_base=bool(body.get("want_base", True)),
                    want_location=bool(body.get("want_location", True)),
                    want_environment=bool(body.get("want_environment", True)),
                    timeout=timeout,
                ),
                timeout=timeout + 5.0,
            )
            # Ensure all values are JSON-serialisable (telemetry may contain bytes)
            return self._success(_to_json_safe(result))
        except cherrypy.HTTPError:
            raise
        except Exception as exc:
            logger.error(f"request_telemetry endpoint error: {exc}", exc_info=True)
            return self._error(str(exc))

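The frame layout named in the docstring above (a 0x8B type byte, a zero byte, the first six bytes of the public key, then the LPP payload) can be sketched as below. The helper name and sample values are hypothetical, for illustration only:

```python
def build_companion_frame(pub_key: bytes, lpp_payload: bytes) -> bytes:
    # Illustrative sketch (hypothetical helper, not part of the codebase):
    # frame = 0x8B + 0x00 + 6-byte pubkey prefix + LPP payload.
    return bytes([0x8B, 0x00]) + pub_key[:6] + lpp_payload


# Sample 32-byte pubkey and a 4-byte LPP payload (both made up for the demo).
frame = build_companion_frame(bytes(range(32)), bytes.fromhex("0167ffd7"))
print(frame.hex())  # → 8b000001020304050167ffd7
```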
    # ----- Repeater Commands -----

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def send_command(self, **kwargs):
        """POST /api/companion/send_command {pub_key, command, parameters?, companion_name?}"""
        self._require_post()
        body = self._get_json_body()
        bridge = self._get_bridge(**self._resolve_bridge_params(body))
        pub_key = self._pub_key_from_hex(body.get("pub_key", ""))
        command = body.get("command", "")
        if not command:
            raise cherrypy.HTTPError(400, "command required")
        parameters = body.get("parameters")
        result = self._run_async(
            bridge.send_repeater_command(pub_key, command, parameters),
            timeout=20.0,
        )
        return self._success(_to_json_safe(result))

    # ----- Path / Routing -----

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def reset_path(self, **kwargs):
        """POST /api/companion/reset_path {pub_key, companion_name?}"""
        self._require_post()
        body = self._get_json_body()
        bridge = self._get_bridge(**self._resolve_bridge_params(body))
        pub_key = self._pub_key_from_hex(body.get("pub_key", ""))
        ok = bridge.reset_path(pub_key)
        return self._success({"reset": ok})

    # ----- Device Configuration -----

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def set_advert_name(self, **kwargs):
        """POST /api/companion/set_advert_name {advert_name, companion_name?}"""
        self._require_post()
        body = self._get_json_body()
        bridge = self._get_bridge(**self._resolve_bridge_params(body))
        name = body.get("advert_name", body.get("name", ""))
        if not name:
            raise cherrypy.HTTPError(400, "name required")
        try:
            validated_name = validate_companion_node_name(name)
        except ValueError as e:
            raise cherrypy.HTTPError(400, str(e)) from e
        bridge.set_advert_name(validated_name)
        # Optionally sync node_name to config.yaml so it survives restart
        companion_name = body.get("companion_name")
        if companion_name is None and getattr(self.daemon_instance, "identity_manager", None):
            pubkey = bridge.get_public_key()
            for reg_name, identity, _ in self.daemon_instance.identity_manager.get_identities_by_type(
                "companion"
            ):
                if identity.get_public_key() == pubkey:
                    companion_name = reg_name
                    break
        if companion_name and self.config_manager:
            companions = (self.config.get("identities") or {}).get("companions") or []
            for entry in companions:
                if entry.get("name") == companion_name:
                    if "settings" not in entry:
                        entry["settings"] = {}
                    entry["settings"]["node_name"] = validated_name
                    try:
                        if not self.config_manager.save_to_file():
                            logger.warning("Failed to save config after set_advert_name")
                    except Exception as e:
                        logger.warning("Error saving config after set_advert_name: %s", e)
                    break
        return self._success({"name": bridge.prefs.node_name})

    @cherrypy.expose
    @cherrypy.tools.json_out()
    @require_auth
    def set_advert_location(self, **kwargs):
        """POST /api/companion/set_advert_location {latitude, longitude, companion_name?}"""
        self._require_post()
        body = self._get_json_body()
        bridge = self._get_bridge(**self._resolve_bridge_params(body))
        lat = float(body.get("latitude", 0.0))
        lon = float(body.get("longitude", 0.0))
        bridge.set_advert_latlon(lat, lon)
        return self._success({"latitude": lat, "longitude": lon})

    # ==================================================================
    # SSE Event Stream
    # ==================================================================

    @cherrypy.expose
    def events(self, **kwargs):
        """GET /api/companion/events — Server-Sent Events stream for push callbacks.

        Connect with ``EventSource('/api/companion/events?token=JWT')``.
        Auth is handled by the CherryPy tool-level require_auth (supports
        query-param JWT tokens needed by the browser EventSource API).
        """
        self._ensure_callbacks()

        cherrypy.response.headers["Content-Type"] = "text/event-stream"
        cherrypy.response.headers["Cache-Control"] = "no-cache"
        cherrypy.response.headers["Connection"] = "keep-alive"
        cherrypy.response.headers["X-Accel-Buffering"] = "no"

        client_queue: queue.Queue = queue.Queue(maxsize=self._sse_queue_maxsize)
        with self._sse_lock:
            self._sse_clients.append(client_queue)

        def generate():
            try:
                payload = {"event": "connected", "timestamp": int(time.time())}
                yield f"data: {json.dumps(payload)}\n\n"

                while True:
                    try:
                        item = client_queue.get(timeout=float(self._sse_keepalive_sec))
                        yield f"data: {json.dumps(item)}\n\n"
                    except queue.Empty:
                        # Keep-alive comment frame keeps EventSource connected
                        # without allocating additional JSON payload objects.
                        yield ": keepalive\n\n"
            except GeneratorExit:
                pass
            except Exception as exc:
                logger.debug(f"SSE stream ended: {exc}")
            finally:
                with self._sse_lock:
                    if client_queue in self._sse_clients:
                        self._sse_clients.remove(client_queue)

        return generate()

    events._cp_config = {"response.stream": True}

# ======================================================================
# Utility: make arbitrary objects JSON-serialisable for SSE events
# ======================================================================


def _to_json_safe(obj):
    """Convert common companion objects to JSON-safe dicts/values."""
    if obj is None or isinstance(obj, (bool, int, float, str)):
        return obj
    if isinstance(obj, bytes):
        return obj.hex()
    if isinstance(obj, bytearray):
        return bytes(obj).hex()
    if isinstance(obj, dict):
        return {k: _to_json_safe(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [_to_json_safe(v) for v in obj]
    # Dataclass / namedtuple with __dict__
    if hasattr(obj, "__dict__"):
        return {k: _to_json_safe(v) for k, v in obj.__dict__.items() if not k.startswith("_")}
    return str(obj)

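The recursive conversion in `_to_json_safe` can be exercised with the standalone mirror below (reproduced here so it runs on its own; the sample input is made up):

```python
def to_json_safe(obj):
    # Standalone mirror of _to_json_safe: bytes become hex strings,
    # tuples become lists, objects with __dict__ become plain dicts.
    if obj is None or isinstance(obj, (bool, int, float, str)):
        return obj
    if isinstance(obj, (bytes, bytearray)):
        return bytes(obj).hex()
    if isinstance(obj, dict):
        return {k: to_json_safe(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_json_safe(v) for v in obj]
    if hasattr(obj, "__dict__"):
        return {k: to_json_safe(v) for k, v in obj.__dict__.items() if not k.startswith("_")}
    return str(obj)


print(to_json_safe({"raw": b"\x01\x02", "vals": (1, 2)}))
# → {'raw': '0102', 'vals': [1, 2]}
```

The result round-trips through `json.dumps` cleanly, which is the whole point of running it before pushing telemetry results out over SSE.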
@@ -0,0 +1,230 @@
"""
WebSocket proxy for the companion frame protocol.

Bridges browser WebSocket to the companion TCP frame server.
Raw byte pipe — no parsing, all protocol logic lives in the client.
"""

import logging
import socket
import threading
from urllib.parse import parse_qs

import cherrypy
from ws4py.websocket import WebSocket

logger = logging.getLogger("CompanionWSProxy")

# Set by http_server.py before CherryPy starts
_daemon = None


def set_daemon(instance):
    global _daemon
    _daemon = instance


class CompanionFrameWebSocket(WebSocket):

    def opened(self):
        """Authenticate, resolve companion, open TCP socket, start reader."""
        # JWT auth — same pattern as PacketWebSocket
        jwt_handler = cherrypy.config.get("jwt_handler")

        qs = ""
        if hasattr(self, "environ"):
            qs = self.environ.get("QUERY_STRING", "")

        params = parse_qs(qs)
        token = params.get("token", [None])[0]
        companion_name = params.get("companion_name", [None])[0]

        if not jwt_handler:
            logger.warning("Connection rejected: no JWT handler configured")
            self.close(code=1011, reason="server configuration error")
            return

        if not token:
            logger.warning("Connection rejected: missing token")
            self.close(code=1008, reason="unauthorized")
            return

        try:
            payload = jwt_handler.verify_jwt(token)
            if not payload:
                logger.warning("Connection rejected: invalid token")
                self.close(code=1008, reason="unauthorized")
                return
        except Exception as e:
            logger.warning(f"Auth error: {e}")
            self.close(code=1008, reason="unauthorized")
            return

        if not companion_name:
            logger.warning("Connection rejected: missing companion_name")
            self.close(code=1008, reason="missing companion_name")
            return

        # Resolve companion TCP port + bind address from config
        resolved = self._resolve_tcp_endpoint(companion_name)
        if resolved is None:
            logger.warning(f"Connection rejected: companion '{companion_name}' not found")
            self.close(code=1008, reason="companion not found")
            return

        tcp_host, tcp_port = resolved

        # Open TCP socket to the companion frame server
        try:
            self._tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self._tcp.settimeout(5.0)
            self._tcp.connect((tcp_host, tcp_port))
            self._tcp.settimeout(None)
            logger.debug(f"TCP connected to {tcp_host}:{tcp_port} for '{companion_name}'")
        except Exception as e:
            logger.error(f"TCP connect failed for '{companion_name}' {tcp_host}:{tcp_port}: {e}")
            self._tcp = None
            self.close(code=1011, reason="TCP connect failed")
            return

        self._closing = False
        self._companion_name = companion_name
        self._reader = threading.Thread(
            target=self._tcp_to_ws, daemon=True, name=f"ws-tcp-{companion_name}"
        )
        self._reader.start()

        user = payload.get("sub", "unknown")
        logger.info(f"Companion WS opened: user={user}, companion={companion_name}, tcp={tcp_host}:{tcp_port}")

    def received_message(self, message):
        """WS → TCP"""
        tcp = getattr(self, "_tcp", None)
        if tcp is None or getattr(self, "_closing", True):
            return
        try:
            data = message.data
            if isinstance(data, str):
                data = data.encode("latin-1")
            tcp.sendall(data)
        except Exception as e:
            name = getattr(self, "_companion_name", "?")
            logger.warning(f"WS→TCP send failed for '{name}': {e}")
            self._teardown()

    def closed(self, code, reason=None):
        name = getattr(self, "_companion_name", "?")
        logger.info(f"Companion WS closed: companion={name}, code={code}, reason={reason}")
        self._teardown()

    # ── internal ─────────────────────────────────────────────────────────

    def _resolve_tcp_endpoint(self, companion_name):
        """Look up companion TCP host + port from daemon config.

        Returns ``(host, port)`` tuple or ``None`` if the companion can't be
        resolved. When ``bind_address`` is ``0.0.0.0`` (all interfaces) we
        connect via ``127.0.0.1``; otherwise we use the configured address.
        """
        if not _daemon:
            logger.warning("_resolve_tcp_endpoint: daemon not set")
            return None

        identity_manager = getattr(_daemon, "identity_manager", None)
        bridges = getattr(_daemon, "companion_bridges", {})

        if not identity_manager:
            logger.warning("_resolve_tcp_endpoint: no identity_manager")
            return None
        if not bridges:
            logger.warning("_resolve_tcp_endpoint: no companion_bridges (dict empty or missing)")
            return None

        # Find the companion identity by name and verify its bridge is running
        found = False
        for name, identity, _cfg in identity_manager.get_identities_by_type("companion"):
            if name == companion_name:
                h = identity.get_public_key()[0]
                if h in bridges:
                    found = True
                else:
                    logger.warning(
                        f"_resolve_tcp_endpoint: companion '{companion_name}' identity found "
                        f"(hash=0x{h:02x}) but no bridge registered for that hash. "
                        f"Known bridge hashes: {[f'0x{k:02x}' for k in bridges.keys()]}"
                    )
                break
        else:
            # Loop completed without finding the name
            known = [n for n, _, _ in identity_manager.get_identities_by_type("companion")]
            logger.warning(
                f"_resolve_tcp_endpoint: companion '{companion_name}' not in identity_manager. "
                f"Known companions: {known}"
            )

        if not found:
            return None

        # Look up TCP port + bind address from config
        companions = _daemon.config.get("identities", {}).get("companions") or []
        for entry in companions:
            if entry.get("name") == companion_name:
                settings = entry.get("settings") or {}
                port = settings.get("tcp_port", 5000)
                bind = settings.get("bind_address", "0.0.0.0")
                # 0.0.0.0 = all interfaces — connect via loopback
                host = "127.0.0.1" if bind == "0.0.0.0" else bind
                logger.debug(f"_resolve_tcp_endpoint: '{companion_name}' → {host}:{port}")
                return (host, port)

        logger.warning(
            f"_resolve_tcp_endpoint: '{companion_name}' found in identity_manager but missing from config"
        )
        return None

    def _tcp_to_ws(self):
        """TCP → WS reader loop"""
        name = getattr(self, "_companion_name", "?")
        tcp = getattr(self, "_tcp", None)
        if tcp is None:
            return
        try:
            while not getattr(self, "_closing", True):
                data = tcp.recv(4096)
                if not data:
                    logger.info(f"TCP→WS: frame server closed connection for '{name}'")
                    break
                try:
                    self.send(data, binary=True)
                except Exception as e:
                    logger.warning(f"TCP→WS: WS send failed for '{name}': {e}")
                    break
        except OSError as e:
            # Socket error (connection reset, etc.) — normal during teardown
            if not getattr(self, "_closing", True):
                logger.warning(f"TCP→WS: socket error for '{name}': {e}")
        except Exception as e:
            logger.warning(f"TCP→WS: unexpected error for '{name}': {e}")
        finally:
            self._teardown()

    def _teardown(self):
        if getattr(self, "_closing", True):
            return
        self._closing = True

        name = getattr(self, "_companion_name", "?")
        logger.debug(f"Tearing down WS proxy for '{name}'")

        tcp = getattr(self, "_tcp", None)
        if tcp:
            try:
                tcp.close()
            except Exception:
                pass
            self._tcp = None

        try:
            self.close()
        except Exception:
            pass
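The endpoint lookup in `_resolve_tcp_endpoint` applies two small rules: a missing `tcp_port` defaults to 5000, and a `bind_address` of `0.0.0.0` is translated to loopback for the outbound connect (a server listening on all interfaces is reachable locally via `127.0.0.1`). A standalone sketch of just that resolution logic (the helper name is hypothetical):

```python
def resolve_endpoint(settings):
    # Mirrors the config lookup in _resolve_tcp_endpoint: port defaults
    # to 5000, bind defaults to 0.0.0.0, and an all-interfaces bind is
    # translated to loopback for the outbound TCP connect.
    port = settings.get("tcp_port", 5000)
    bind = settings.get("bind_address", "0.0.0.0")
    host = "127.0.0.1" if bind == "0.0.0.0" else bind
    return host, port


print(resolve_endpoint({}))  # → ('127.0.0.1', 5000)
print(resolve_endpoint({"bind_address": "192.168.1.5", "tcp_port": 6000}))
```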
File diff suppressed because one or more lines are too long
@@ -0,0 +1 @@
.glass-card[data-v-60d82848]{background:var(--color-glass-bg);-webkit-backdrop-filter:blur(10px);backdrop-filter:blur(10px);border:1px solid var(--color-glass-border);box-shadow:var(--color-glass-shadow)}
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -0,0 +1 @@
import{dt as e,g as t,l as n,pt as r,s as i,u as a,w as o}from"./runtime-core.esm-bundler-HnidnMFy.js";import{h as s}from"./index-BFltqMtv.js";var c={class:`flex items-center justify-between mb-4`},l={class:`text-xl font-semibold text-content-primary dark:text-content-primary`},u={class:`mb-6`},d={key:0,class:`w-6 h-6`,fill:`none`,stroke:`currentColor`,viewBox:`0 0 24 24`},f={key:1,class:`w-6 h-6`,fill:`none`,stroke:`currentColor`,viewBox:`0 0 24 24`},p={key:2,class:`w-6 h-6`,fill:`none`,stroke:`currentColor`,viewBox:`0 0 24 24`},m={class:`text-content-secondary dark:text-content-primary/80 text-base leading-relaxed`},h={class:`flex gap-3`},g=t({__name:`ConfirmDialog`,props:{show:{type:Boolean},title:{default:`Confirm Action`},message:{},confirmText:{default:`Confirm`},cancelText:{default:`Cancel`},variant:{default:`warning`}},emits:[`close`,`confirm`],setup(t,{emit:g}){let _=t,v=g,y=e=>{e.target===e.currentTarget&&v(`close`)},b={danger:`bg-red-100 dark:bg-red-500/20 border-red-500/30 text-red-600 dark:text-red-400`,warning:`bg-yellow-100 dark:bg-yellow-500/20 border-yellow-500/30 text-yellow-600 dark:text-yellow-400`,info:`bg-blue-500/20 border-blue-500/30 text-blue-600 dark:text-blue-400`},x={danger:`bg-red-500 hover:bg-red-600`,warning:`bg-yellow-500 hover:bg-yellow-600`,info:`bg-blue-500 hover:bg-blue-600`};return(t,g)=>_.show?(o(),a(`div`,{key:0,onClick:y,class:`fixed inset-0 bg-black/40 backdrop-blur-lg z-[99999] flex items-center justify-center p-4`,style:{"backdrop-filter":`blur(8px) saturate(180%)`,position:`fixed`,top:`0`,left:`0`,right:`0`,bottom:`0`}},[i(`div`,{class:`bg-white dark:bg-surface-elevated backdrop-blur-xl rounded-[20px] p-6 w-full max-w-md border border-stroke-subtle dark:border-white/10`,onClick:g[3]||=s(()=>{},[`stop`])},[i(`div`,c,[i(`h3`,l,r(_.title),1),i(`button`,{onClick:g[0]||=e=>v(`close`),class:`text-content-secondary dark:text-content-muted hover:text-content-primary dark:hover:text-content-primary 
transition-colors`},[...g[4]||=[i(`svg`,{class:`w-6 h-6`,fill:`none`,stroke:`currentColor`,viewBox:`0 0 24 24`},[i(`path`,{"stroke-linecap":`round`,"stroke-linejoin":`round`,"stroke-width":`2`,d:`M6 18L18 6M6 6l12 12`})],-1)]])]),i(`div`,u,[i(`div`,{class:e([`inline-flex p-3 rounded-xl mb-4`,b[_.variant]])},[_.variant===`danger`?(o(),a(`svg`,d,[...g[5]||=[i(`path`,{"stroke-linecap":`round`,"stroke-linejoin":`round`,"stroke-width":`2`,d:`M12 9v2m0 4h.01m-6.938 4h13.856c1.54 0 2.502-1.667 1.732-3L13.732 4c-.77-1.333-2.694-1.333-3.464 0L3.34 16c-.77 1.333.192 3 1.732 3z`},null,-1)]])):_.variant===`warning`?(o(),a(`svg`,f,[...g[6]||=[i(`path`,{"stroke-linecap":`round`,"stroke-linejoin":`round`,"stroke-width":`2`,d:`M12 9v2m0 4h.01m-6.938 4h13.856c1.54 0 2.502-1.667 1.732-3L13.732 4c-.77-1.333-2.694-1.333-3.464 0L3.34 16c-.77 1.333.192 3 1.732 3z`},null,-1)]])):(o(),a(`svg`,p,[...g[7]||=[i(`path`,{"stroke-linecap":`round`,"stroke-linejoin":`round`,"stroke-width":`2`,d:`M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z`},null,-1)]]))],2),i(`p`,m,r(_.message),1)]),i(`div`,h,[i(`button`,{onClick:g[1]||=e=>v(`close`),class:`flex-1 px-4 py-3 rounded-xl bg-background-mute dark:bg-white/5 hover:bg-stroke-subtle dark:hover:bg-white/10 text-content-primary dark:text-content-primary transition-all duration-200 border border-stroke-subtle dark:border-stroke/10`},r(_.cancelText),1),i(`button`,{onClick:g[2]||=e=>v(`confirm`),class:e([`flex-1 px-4 py-3 rounded-xl text-white transition-all duration-200`,x[_.variant]])},r(_.confirmText),3)])])])):n(``,!0)}});export{g as t};
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -0,0 +1 @@
import{f as e,g as t,u as n,w as r}from"./runtime-core.esm-bundler-HnidnMFy.js";var i=t({name:`HelpView`,__name:`Help`,setup(t){return(t,i)=>(r(),n(`div`,null,[...i[0]||=[e(`<div class="glass-card backdrop-blur border border-stroke-subtle dark:border-white/10 rounded-[15px] p-8"><h1 class="text-content-primary dark:text-content-primary text-2xl font-semibold mb-6"> Help & Documentation </h1><div class="text-center py-12"><div class="text-primary mb-6"><svg class="w-20 h-20 mx-auto mb-4" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 6.253v13m0-13C10.832 5.477 9.246 5 7.5 5S4.168 5.477 3 6.253v13C4.168 18.477 5.754 18 7.5 18s3.332.477 4.5 1.253m0-13C13.168 5.477 14.754 5 16.5 5c1.746 0 3.332.477 4.5 1.253v13C19.832 18.477 18.246 18 16.5 18c-1.746 0-3.332.477-4.5 1.253"></path></svg></div><h2 class="text-content-primary dark:text-content-primary text-xl font-medium mb-3"> pyMC Repeater Wiki </h2><p class="text-content-secondary dark:text-content-muted mb-8 max-w-md mx-auto"> Access documentation, setup guides, troubleshooting tips, and community resources on our official wiki. </p><a href="https://github.com/rightup/pyMC_Repeater/wiki" target="_blank" rel="noopener noreferrer" class="inline-flex items-center gap-2 bg-primary hover:bg-primary/80 text-white dark:text-background font-medium py-3 px-6 rounded-xl transition-colors duration-200"><svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M10 6H6a2 2 0 00-2 2v10a2 2 0 002 2h10a2 2 0 002-2v-4M14 4h6m0 0v6m0-6L10 14"></path></svg> Visit Wiki Documentation </a><div class="mt-8 text-xs text-content-muted dark:text-content-muted"> Opens in a new tab </div></div></div>`,1)]]))}});export{i as default};
@@ -0,0 +1 @@
.bg-gradient-light[data-v-fec81ee3]{background:linear-gradient(#0ea5e966,#06b6d44d)}.bg-gradient-dark[data-v-fec81ee3]{background:linear-gradient(#67e8f94d,#a5f3fc26)}.login-card[data-v-fec81ee3]{-webkit-backdrop-filter:blur(40px)saturate(180%);background:#ffffffb3}.dark .login-card[data-v-fec81ee3]{background:#11191c66}.input-glass[data-v-fec81ee3]{-webkit-backdrop-filter:blur(20px);background:#ffffffe6;border:1px solid #d1d5db}.dark .input-glass[data-v-fec81ee3]{background:#ffffff0d;border-color:#ffffff1a}.input-glass[data-v-fec81ee3]:focus{background:#fff}.dark .input-glass[data-v-fec81ee3]:focus{background:#ffffff1a}.input-glass[data-v-fec81ee3]:focus{box-shadow:0 0 0 1px #aae8e833,0 0 20px #aae8e826,inset 0 1px #ffffff1a}.input-glow[data-v-fec81ee3]{opacity:0;transition:opacity .3s;box-shadow:inset 0 1px #ffffff0d}.input-glass:focus+.input-glow[data-v-fec81ee3]{opacity:1;box-shadow:0 0 20px #aae8e833,inset 0 1px #ffffff1a}.button-glass[data-v-fec81ee3]{-webkit-backdrop-filter:blur(20px);position:relative}.button-glass[data-v-fec81ee3]:before{content:"";-webkit-mask-composite:xor;background:linear-gradient(90deg,#0000 0%,#aae8e84d 50%,#0000 100%);border-radius:12px;padding:1px;transition:transform 1s;position:absolute;inset:0;transform:translate(-100%);-webkit-mask-image:linear-gradient(#fff 0 0),linear-gradient(#fff 0 0);-webkit-mask-position:0 0,0 0;-webkit-mask-size:auto,auto;-webkit-mask-repeat:repeat,repeat;-webkit-mask-clip:content-box,border-box;-webkit-mask-origin:content-box,border-box;-webkit-mask-composite:xor;mask-composite:exclude;-webkit-mask-source-type:auto,auto;mask-mode:match-source,match-source}.button-glass[data-v-fec81ee3]:hover:not(:disabled):before{transform:translate(100%)}.button-glass[data-v-fec81ee3]{box-shadow:0 0 0 1px #aae8e833,0 4px 16px #0003,inset 0 1px #ffffff1a}.button-glass[data-v-fec81ee3]:hover:not(:disabled){box-shadow:0 0 0 1px #aae8e866,0 0 30px #aae8e84d,0 4px 20px #0000004d,inset 0 1px 
#ffffff26}.login-content:has(.button-glass:hover:not(:disabled)) .logo-image[data-v-fec81ee3]{filter:brightness(1.4)drop-shadow(0 0 12px #aae8e8b3);transform:scale(1.02)}.login-content:has(.button-glass:hover:not(:disabled)) .logo-glow[data-v-fec81ee3]{opacity:.6;transform:scale(1.15)}.logo-glow[data-v-fec81ee3]{opacity:0}.dark .logo-glow[data-v-fec81ee3]{opacity:1}@keyframes float-fec81ee3{0%,to{transform:translateY(0)}50%{transform:translateY(-10px)}}@keyframes pulse-slow-fec81ee3{0%,to{opacity:.8;transform:scale(1)}50%{opacity:.6;transform:scale(1.05)}}@keyframes pulse-slower-fec81ee3{0%,to{opacity:.75;transform:scale(1)}50%{opacity:.5;transform:scale(1.08)}}@keyframes pulse-slowest-fec81ee3{0%,to{opacity:.8;transform:scale(1)}50%{opacity:.6;transform:scale(1.06)}}.animate-pulse-slow[data-v-fec81ee3]{animation:8s ease-in-out infinite pulse-slow-fec81ee3}.animate-pulse-slower[data-v-fec81ee3]{animation:10s ease-in-out infinite pulse-slower-fec81ee3}.animate-pulse-slowest[data-v-fec81ee3]{animation:12s ease-in-out infinite pulse-slowest-fec81ee3}@keyframes shake-fec81ee3{0%,to{transform:translate(0)}10%,30%,50%,70%,90%{transform:translate(-5px)}20%,40%,60%,80%{transform:translate(5px)}}.animate-shake[data-v-fec81ee3]{animation:.5s ease-in-out shake-fec81ee3}@keyframes logo-aura-cycle-fec81ee3{0%,to{filter:brightness()saturate()drop-shadow(0 0 7px #38bdf873)}25%{filter:brightness(1.02)saturate(1.05)drop-shadow(0 0 10px #6366f16b)}50%{filter:brightness()saturate(1.03)drop-shadow(0 0 8px #22d3ee73)}75%{filter:brightness(1.02)saturate(1.05)drop-shadow(0 0 10px #34d3996b)}}.logo-image-animated[data-v-fec81ee3]{will-change:filter;animation:6s ease-in-out infinite logo-aura-cycle-fec81ee3}.form-group[data-v-fec81ee3]{position:relative}.form-group:hover label[data-v-fec81ee3]{color:#aae8e8e6;transition:color .3s}
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -0,0 +1 @@
import{dt as e,g as t,l as n,pt as r,s as i,u as a,w as o}from"./runtime-core.esm-bundler-HnidnMFy.js";import{h as s}from"./index-BFltqMtv.js";var c={class:`mb-6`},l={key:0,class:`w-6 h-6`,fill:`none`,stroke:`currentColor`,viewBox:`0 0 24 24`},u={key:1,class:`w-6 h-6`,fill:`none`,stroke:`currentColor`,viewBox:`0 0 24 24`},d={key:2,class:`w-6 h-6`,fill:`none`,stroke:`currentColor`,viewBox:`0 0 24 24`},f={class:`text-content-secondary dark:text-content-primary/80 text-base leading-relaxed`},p={class:`flex`},m=t({__name:`MessageDialog`,props:{show:{type:Boolean},message:{},variant:{default:`success`}},emits:[`close`],setup(t,{emit:m}){let h=t,g=m,_=e=>{e.target===e.currentTarget&&g(`close`)},v={success:`bg-green-100 dark:bg-green-500/20 border-green-600/40 dark:border-green-500/30 text-green-600 dark:text-green-400`,error:`bg-red-100 dark:bg-red-500/20 border-red-500/30 text-red-600 dark:text-red-400`,info:`bg-blue-500/20 border-blue-500/30 text-blue-600 dark:text-blue-400`},y={success:`bg-green-500 hover:bg-green-600`,error:`bg-red-500 hover:bg-red-600`,info:`bg-blue-500 hover:bg-blue-600`};return(t,m)=>h.show?(o(),a(`div`,{key:0,onClick:_,class:`fixed inset-0 bg-black/40 backdrop-blur-lg z-[99999] flex items-center justify-center p-4`,style:{"backdrop-filter":`blur(8px) saturate(180%)`,position:`fixed`,top:`0`,left:`0`,right:`0`,bottom:`0`}},[i(`div`,{class:`bg-white dark:bg-surface-elevated backdrop-blur-xl rounded-[20px] p-6 w-full max-w-md border border-stroke-subtle dark:border-white/10`,onClick:m[1]||=s(()=>{},[`stop`])},[i(`div`,c,[i(`div`,{class:e([`inline-flex p-3 rounded-xl mb-4`,v[h.variant]])},[h.variant===`success`?(o(),a(`svg`,l,[...m[2]||=[i(`path`,{"stroke-linecap":`round`,"stroke-linejoin":`round`,"stroke-width":`2`,d:`M5 13l4 4L19 7`},null,-1)]])):h.variant===`error`?(o(),a(`svg`,u,[...m[3]||=[i(`path`,{"stroke-linecap":`round`,"stroke-linejoin":`round`,"stroke-width":`2`,d:`M6 18L18 6M6 6l12 
12`},null,-1)]])):(o(),a(`svg`,d,[...m[4]||=[i(`path`,{"stroke-linecap":`round`,"stroke-linejoin":`round`,"stroke-width":`2`,d:`M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z`},null,-1)]]))],2),i(`p`,f,r(h.message),1)]),i(`div`,p,[i(`button`,{onClick:m[0]||=e=>g(`close`),class:e([`flex-1 px-4 py-3 rounded-xl text-white transition-all duration-200`,y[h.variant]])},` OK `,2)])])])):n(``,!0)}});export{m as t};
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -0,0 +1 @@
import{n as e}from"./index-BFltqMtv.js";export{e as default};
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -0,0 +1 @@
.glass-card[data-v-a201f2f2]{-webkit-backdrop-filter:blur(10px);backdrop-filter:blur(10px);background:#ffffff0d;border:1px solid #ffffff1a}.modal-enter-active[data-v-a201f2f2],.modal-leave-active[data-v-a201f2f2]{transition:opacity .3s}.modal-enter-from[data-v-a201f2f2],.modal-leave-to[data-v-a201f2f2]{opacity:0}.modal-enter-active .glass-card[data-v-a201f2f2],.modal-leave-active .glass-card[data-v-a201f2f2]{transition:transform .3s}.modal-enter-from .glass-card[data-v-a201f2f2],.modal-leave-to .glass-card[data-v-a201f2f2]{transform:scale(.9)}.slide-enter-active[data-v-a201f2f2],.slide-leave-active[data-v-a201f2f2]{transition:all .3s}.slide-enter-from[data-v-a201f2f2],.slide-leave-to[data-v-a201f2f2]{opacity:0;transform:translateY(-10px)}@keyframes float-slow-a201f2f2{0%,to{opacity:.8;transform:translate(0)scale(1)rotate(-24.22deg)}50%{opacity:.6;transform:translate(20px,-20px)scale(1.05)rotate(-24.22deg)}}@keyframes float-slower-a201f2f2{0%,to{opacity:.75;transform:translate(0)scale(1)rotate(-24.22deg)}50%{opacity:.5;transform:translate(-30px,20px)scale(1.08)rotate(-24.22deg)}}@keyframes float-slowest-a201f2f2{0%,to{opacity:.8;transform:translate(0)scale(1)rotate(-24.22deg)}50%{opacity:.55;transform:translate(25px,25px)scale(1.1)rotate(-24.22deg)}}.animate-pulse-slow[data-v-a201f2f2]{will-change:transform, opacity;animation:15s ease-in-out infinite float-slow-a201f2f2}.animate-pulse-slower[data-v-a201f2f2]{will-change:transform, opacity;animation:18s ease-in-out infinite float-slower-a201f2f2}.animate-pulse-slowest[data-v-a201f2f2]{will-change:transform, opacity;animation:20s ease-in-out infinite float-slowest-a201f2f2}
File diff suppressed because one or more lines are too long
@@ -0,0 +1 @@
.plotly-chart[data-v-54d032e1]{background:0 0!important}
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long