Compare commits


79 Commits

Author SHA1 Message Date
l5y
96a3bb86e9 Add telemetry formatting module and overlay metrics (#387) 2025-10-19 12:13:32 +02:00
l5y
6775de3cca Prune blank values from API responses (#386) 2025-10-18 20:16:14 +02:00
l5y
8143fbd8f7 Add full support to telemetry schema and API (#385)
* feat: auto-upgrade telemetry schema

* Ensure numeric metrics fallback to valid values

* Format data processing numeric metric lookup
2025-10-18 15:19:33 +02:00
l5y
cf3949ef95 Respect PORT environment override (#384) 2025-10-18 13:01:48 +02:00
l5y
32d9da2865 Add instance selector dropdown for federation deployments (#382)
* Add instance selector for federation regions

* Avoid HTML insertion when seeding instance selector
2025-10-18 10:53:26 +02:00
l5y
61e8c92f62 Harden federation announcements (#381) 2025-10-18 10:38:28 +02:00
l5y
d954df6294 Ensure private mode disables federation (#380) 2025-10-18 09:48:40 +02:00
l5y
30d535bd43 Ensure private mode disables chat messaging (#378) 2025-10-17 22:47:54 +02:00
l5y
d06aa42ab2 Respect FEDERATION flag for federation endpoints (#379) 2025-10-17 22:47:41 +02:00
l5y
108fc93ca1 Expose PRIVATE environment configuration (#377) 2025-10-17 22:43:42 +02:00
l5y
427479c1e6 Fix frontend coverage export for Codecov (#376)
* fix: export frontend coverage for codecov

* Merge V8 file coverages across workers
2025-10-17 22:43:23 +02:00
l5y
ee05f312e8 Restrict instance API to recent updates (#374) 2025-10-17 22:17:49 +02:00
l5y
c4193e38dc Document and expose federation configuration (#375) 2025-10-17 22:17:32 +02:00
l5y
cb9b081606 Chore: bump version to 0.5.3 (#372) 2025-10-17 19:47:18 +00:00
l5y
cc8fec6d05 Align theme and info controls (#371)
* Align theme and info controls

* design tweaks
2025-10-17 19:27:14 +00:00
l5y
01665b6e3a Fixes POST request 403 errors on instances behind Cloudflare proxy (#368)
* Add full headers to ingestor POST requests to avoid CF bans

* run black

* Guard Authorization header when token absent

---------

Co-authored-by: varna9000 <milen@aeroisk.com>
2025-10-16 22:29:04 +02:00
l5y
1898a99789 Delay initial federation announcements (#366) 2025-10-16 21:50:43 +02:00
l5y
3eefda9205 Ensure well-known document stays in sync (#365) 2025-10-16 21:43:11 +02:00
l5y
a6ba9a8227 Guard federation DNS resolution against restricted networks (#362)
* Guard federation DNS resolution against restricted networks

* Pin federation HTTP clients to vetted IPs
2025-10-16 21:15:34 +02:00
l5y
7055444c4b Add federation ingestion limits and tests (#364) 2025-10-16 21:15:18 +02:00
l5y
4bfc0e25cb Prefer reported primary channel names (#363) 2025-10-16 20:35:24 +02:00
l5y
81335cbf7b Decouple messages API from node joins (#360) 2025-10-16 13:19:29 +02:00
l5y
76b57c08c6 Fix ingestor reconnection detection (#361) 2025-10-16 13:06:32 +02:00
l5y
926b5591b0 Harden instance domain validation (#359) 2025-10-16 10:51:34 +02:00
l5y
957e597004 Ensure INSTANCE_DOMAIN propagates to containers (#358) 2025-10-15 23:22:46 +02:00
l5y
68cfbf139f chore: bump version to 0.5.2 (#356)
Co-authored-by: l5yth <d220195275+l5yth@users.noreply.github.com>
2025-10-15 23:16:30 +02:00
l5y
b2f4fcaaa5 Gracefully retry federation announcements over HTTP (#355) 2025-10-15 23:11:59 +02:00
l5y
dc2fa9d247 Recursively ingest federated instances (#353)
* Recursively ingest federated instances

* Keep absent is_private nil during signature verification
2025-10-15 21:35:37 +02:00
l5y
a32125996c Remove federation timeout environment overrides (#352) 2025-10-15 20:04:19 +02:00
l5y
506a1ab5f6 Close unrelated short info overlays when opening short info (#351)
* Close unrelated overlays when opening short info

* Ensure map overlays respect nested short overlay closing
2025-10-15 16:35:38 +00:00
l5y
db7b67d859 Improve federation instance error diagnostics (#350) 2025-10-15 18:35:22 +02:00
l5y
49f08a7f75 Harden federation domain validation and tests (#347)
* Harden federation domain validation and tests

* Preserve domain casing for signature verification

* Forward sanitize helper keyword argument

* Handle mixed-case domains during signature verification
2025-10-15 18:14:31 +02:00
l5y
b2d35d3edf Handle malformed instance records (#348) 2025-10-15 17:08:24 +02:00
l5y
a9d618cdbc Fix ingestor device mounting for non-serial connections (#346)
* Adjust ingestor device handling

* Restore serial device permissions for ingestor
2025-10-15 16:52:37 +02:00
l5y
6a65abd2e3 Persist instance config assets across Docker restarts (#345) 2025-10-15 16:14:59 +02:00
l5y
a3aef8cadd Add modem preset display to node overlay (#340)
* Add modem metadata line to node overlays

* Ensure modem metadata loads for all overlays
2025-10-14 20:59:47 +02:00
l5y
cff89a8c88 Display message frequency and channel in chat log (#339)
* Display message frequency and channel in chat log

* Ensure chat prefixes display consistent metadata brackets

* Ensure chat prefixes show non-breaking frequency placeholder

* Adjust chat channel tag placement
2025-10-14 20:56:42 +02:00
l5y
26c1366412 Bump fallback version to v0.5.1 (#338) 2025-10-14 16:51:04 +00:00
l5y
28f5b49f4d docs: update changelog for 0.5.0 (#337) 2025-10-14 16:48:36 +00:00
l5y
a46da284e5 Fix ingestor package layout in Docker image (#336) 2025-10-14 18:47:54 +02:00
l5y
22a31b6c80 Ensure node overlays appear above fullscreen map (#333)
* Increase overlay z-index to surface node info

* Ensure short info overlays attach to fullscreen host

* Ensure info overlay participates in fullscreen mode
2025-10-14 15:52:26 +02:00
l5y
b7ef0bbfcd Adjust node table columns responsively (#332) 2025-10-14 14:59:47 +02:00
l5y
03b5a10fe4 Add LoRa metadata fields to nodes and messages (#331)
* Add LoRa metadata fields to nodes and messages

* Filter numeric SQLite keys from message rows
2025-10-14 14:51:28 +02:00
l5y
e97498d09f Add channel metadata capture for message tagging (#329) 2025-10-13 23:10:01 +02:00
l5y
7db76ec2fc Capture radio metadata for ingestor payloads (#327)
* Capture radio metadata and tag ingestor payloads

* Log captured LoRa metadata when initializing radio config
2025-10-13 22:35:06 +02:00
l5y
63beb2ea6b Avoid mutating frozen node query results (#324) 2025-10-13 17:22:34 +02:00
l5y
ffad84f18a Ensure frontend reports git-aware version strings (#321)
* Ensure frontend reports git-aware version strings

* Keep footer fixed across viewport widths
2025-10-13 16:26:57 +02:00
l5y
2642ff7a95 Fix web Docker image to include application code (#322) 2025-10-13 16:25:44 +02:00
l5y
40b6eda096 Refine stacked short info overlays on the map (#319)
* Refine map overlays to use stacked short info panels

* Allow stacked overlays to pass neighbor clicks
2025-10-13 14:53:43 +02:00
l5y
dee6ad7e4a Refine environment configuration defaults (#318) 2025-10-13 14:06:14 +02:00
l5y
ea9c633eff Fix legacy configuration migration to XDG directories (#317)
* Handle legacy config migration for XDG assets

* Ensure legacy key migration precedes identity load

* Apply rufo formatting to identity module
2025-10-13 14:02:17 +02:00
l5y
9c73fceea7 Adopt XDG base directories for app data and config (#316)
* Support XDG base directories

* Keep Docker MESH_DB on persistent volume
2025-10-13 12:29:56 +02:00
l5y
5133e9d498 refactor: streamline ingestor environment variables (#314)
* refactor: streamline ingestor environment variables

* fix: set connection env var in docker test
2025-10-13 11:02:33 +02:00
l5y
b63e5328b1 Reduce auto-fit padding and increase default zoom (#315) 2025-10-13 10:57:54 +02:00
l5y
d66b09ddee Ensure APIs filter stale data and refresh node details from latest sources (#312)
* Ensure fresh API data and richer node refresh details

* Refresh map markers with latest node data
2025-10-13 10:54:47 +02:00
l5y
009965f2fb Handle offline tile layer creation failures (#307) 2025-10-13 09:27:03 +02:00
l5y
51e6479ab6 Handle offline tile rendering failures (#306) 2025-10-13 09:26:49 +02:00
l5y
874c8fd73c Fix map auto-fit handling and add controller (#311) 2025-10-13 09:26:35 +02:00
l5y
e4c48682b0 Fix map initialization bounds and add coverage (#305)
* Fix map initialization bounds and add coverage

* Handle antimeridian bounds when clustering map points

* Fix dateline-aware map bounds
2025-10-12 19:22:17 +02:00
l5y
00444f7611 test: expand config and sanitizer coverage (#303) 2025-10-12 14:41:20 +02:00
l5y
511e6d377c Add comprehensive theme and background front-end tests (#302) 2025-10-12 14:35:53 +02:00
l5y
e6974a683a Document sanitization and helper modules (#301) 2025-10-12 10:09:42 +02:00
l5y
c0d68b23d4 Add protobuf stubs for mesh tests (#300) 2025-10-12 10:09:13 +02:00
l5y
ee904633a8 Handle CRL lookup failures during federation TLS (#299) 2025-10-12 09:56:53 +02:00
l5y
4329605e6f Ensure JavaScript workflow runs tests with output (#298) 2025-10-12 09:46:42 +02:00
l5y
772c5888c3 Fix ingestor debug timestamps for structured logging (#296) 2025-10-12 09:40:57 +02:00
l5y
f04e917cd9 Add Apache license headers to missing sources (#297) 2025-10-12 09:38:04 +02:00
l5y
9e939194ba Update workflows for ingestor, sinatra, and frontend (#295) 2025-10-12 09:36:02 +02:00
l5y
e328a20929 Fix IPv6 instance domain canonicalization (#294) 2025-10-12 09:33:03 +02:00
l5y
aba94b197d Handle federation HTTPS CRL verification failures (#293) 2025-10-12 09:22:54 +02:00
l5y
80f2bbdb25 Adjust federation announcement cadence (#292) 2025-10-12 09:08:50 +02:00
l5y
522213c040 Restore modular app functionality (#291)
* Restore modular app functionality

* Fix federation thread settings and add coverage

* Use Sinatra set for federation threads

* Restore 41447 as default web port
2025-10-12 08:54:11 +02:00
l5y
58998ba274 Refactor config and metadata helpers into PotatoMesh modules (#290) 2025-10-11 23:19:25 +02:00
l5y
4ad718e164 Update default site configuration environment values (#288) 2025-10-11 21:20:36 +02:00
l5y
707786e222 Add test for draining queue with concurrent enqueue (#287) 2025-10-11 20:38:55 +02:00
l5y
868bf08fd1 Ensure config directories exist in web image (#286) 2025-10-11 20:38:47 +02:00
l5y
1316d4f2d1 Clarify network target parsing (#285) 2025-10-11 20:38:40 +02:00
l5y
9be390ee09 Ensure queue deactivates when empty (#284) 2025-10-11 20:38:27 +02:00
l5y
d9ed006b4c Clarify BLE connection phrasing (#283) 2025-10-11 20:31:12 +02:00
99 changed files with 18930 additions and 4651 deletions

View File

@@ -9,12 +9,14 @@
# Generate a secure token: openssl rand -hex 32
API_TOKEN=your-secure-api-token-here
# Meshtastic device path (required for ingestor)
# Common paths:
# Meshtastic connection target (required for ingestor)
# Common serial paths:
# - Linux: /dev/ttyACM0, /dev/ttyUSB0
# - macOS: /dev/cu.usbserial-*
# - Windows (WSL): /dev/ttyS*
MESH_SERIAL=/dev/ttyACM0
# You may also provide an IP:PORT pair (e.g. 192.168.1.20:4403) or a
# Bluetooth address (e.g. ED:4D:9E:95:CF:60).
CONNECTION=/dev/ttyACM0
# =============================================================================
# SITE CUSTOMIZATION
@@ -24,29 +26,34 @@ MESH_SERIAL=/dev/ttyACM0
SITE_NAME=My Meshtastic Network
# Default Meshtastic channel
DEFAULT_CHANNEL=#MediumFast
CHANNEL=#LongFast
# Default frequency for your region
# Common frequencies: 868MHz (Europe), 915MHz (US), 433MHz (Worldwide)
DEFAULT_FREQUENCY=868MHz
FREQUENCY=915MHz
# Map center coordinates (latitude, longitude)
# Berlin, Germany: 52.502889, 13.404194
# Denver, Colorado: 39.7392, -104.9903
# London, UK: 51.5074, -0.1278
MAP_CENTER_LAT=52.502889
MAP_CENTER_LON=13.404194
MAP_CENTER="38.761944,-27.090833"
# Maximum distance to show nodes (kilometers)
MAX_NODE_DISTANCE_KM=50
MAX_DISTANCE=42
# =============================================================================
# OPTIONAL INTEGRATIONS
# =============================================================================
# Matrix chat room for your community (optional)
# Format: !roomid:matrix.org
MATRIX_ROOM='#meshtastic-berlin:matrix.org'
# Community chat link or Matrix room for your community (optional)
# Matrix aliases (e.g. #meshtastic-berlin:matrix.org) will be linked via matrix.to automatically.
CONTACT_LINK='#potatomesh:dod.ngo'
# Enable or disable PotatoMesh federation features (1=enabled, 0=disabled)
FEDERATION=1
# Hide public mesh messages from unauthenticated visitors (1=hidden, 0=public)
PRIVATE=0
# =============================================================================
@@ -56,6 +63,11 @@ MATRIX_ROOM='#meshtastic-berlin:matrix.org'
# Debug mode (0=off, 1=on)
DEBUG=0
# Public domain name for this PotatoMesh instance
# Provide a hostname (with optional port) that resolves to the web service.
# Example: mesh.example.org or mesh.example.org:41447
INSTANCE_DOMAIN=mesh.example.org
# Docker image architecture (linux-amd64, linux-arm64, linux-armv7)
POTATOMESH_IMAGE_ARCH=linux-amd64
@@ -65,16 +77,6 @@ POTATOMESH_IMAGE_ARCH=linux-amd64
# is unavailable.
# COMPOSE_PROFILES=bridge
# Meshtastic snapshot interval (seconds)
MESH_SNAPSHOT_SECS=60
# Meshtastic channel index (0=primary, 1=secondary, etc.)
MESH_CHANNEL_INDEX=0
CHANNEL_INDEX=0
# Database settings
DB_BUSY_TIMEOUT_MS=5000
DB_BUSY_MAX_RETRIES=5
DB_BUSY_RETRY_DELAY=0.05
# Application settings
MAX_JSON_BODY_BYTES=1048576
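The renamed `CONNECTION` setting above accepts three target shapes: a serial device path, an `IP:PORT` pair, or a Bluetooth address. A minimal sketch of how an ingestor might classify such a value (illustrative only — the function name and exact heuristics are assumptions, not the project's actual parsing code):

```python
import re

def classify_connection(target: str) -> str:
    """Guess whether a CONNECTION value names a serial device,
    a TCP endpoint, or a Bluetooth address (illustrative heuristic)."""
    # Bluetooth MAC: six colon-separated hex pairs, e.g. ED:4D:9E:95:CF:60
    if re.fullmatch(r"(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}", target):
        return "bluetooth"
    # Network endpoint: host plus a numeric port, e.g. 192.168.1.20:4403
    host, sep, port = target.rpartition(":")
    if sep and port.isdigit():
        return "tcp"
    # Anything else is treated as a serial device path, e.g. /dev/ttyACM0
    return "serial"

print(classify_connection("/dev/ttyACM0"))        # serial
print(classify_connection("192.168.1.20:4403"))   # tcp
print(classify_connection("ED:4D:9E:95:CF:60"))   # bluetooth
```

The Bluetooth check must run before the `host:port` check, since a MAC address also ends in a colon-separated numeric group.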

View File

@@ -4,8 +4,9 @@
- **`docker.yml`** - Build and push Docker images to GHCR
- **`codeql.yml`** - Security scanning
- **`python.yml`** - Python testing
- **`ruby.yml`** - Ruby testing
- **`python.yml`** - Python ingestor pipeline
- **`ruby.yml`** - Ruby Sinatra app testing
- **`javascript.yml`** - Frontend test suite
## Usage

View File

@@ -131,7 +131,7 @@ jobs:
docker run --rm --name ingestor-test \
-e POTATOMESH_INSTANCE=http://localhost:41447 \
-e API_TOKEN=test-token \
-e MESH_SERIAL=mock \
-e CONNECTION=mock \
-e DEBUG=1 \
${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:${{ steps.version.outputs.version }} &
sleep 5

View File

@@ -10,28 +10,29 @@ permissions:
contents: read
jobs:
test:
frontend:
runs-on: ubuntu-latest
defaults:
run:
working-directory: web
steps:
- uses: actions/checkout@v5
- name: Set up Node.js 20
- name: Set up Node.js 22
uses: actions/setup-node@v4
with:
node-version: '20'
node-version: '22'
- name: Install dependencies
run: npm install
working-directory: web
run: npm ci
- name: Run JavaScript tests
run: npm test
working-directory: web
- name: Upload coverage to Codecov
if: always()
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: web/reports/javascript-coverage.json
flags: javascript
name: javascript
flags: frontend
name: frontend
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
- name: Upload test results to Codecov
@@ -39,4 +40,4 @@ jobs:
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: web/reports/javascript-junit.xml
flags: javascript
flags: frontend

View File

@@ -10,12 +10,12 @@ permissions:
contents: read
jobs:
test:
ingestor:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: Set up Python 3.13
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: "3.13"
- name: Install dependencies

View File

@@ -10,7 +10,7 @@ permissions:
contents: read
jobs:
test:
sinatra:
defaults:
run:
working-directory: ./web
@@ -42,13 +42,13 @@ jobs:
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./web/tmp/test-results/rspec.xml
flags: ruby-${{ matrix.ruby-version }}
flags: sinatra-${{ matrix.ruby-version }}
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
fail_ci_if_error: false
flags: ruby-${{ matrix.ruby-version }}
flags: sinatra-${{ matrix.ruby-version }}
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
- name: Run rufo

View File

@@ -1,8 +1,111 @@
# CHANGELOG
## v0.5.2
* Align theme and info controls by @l5yth in <https://github.com/l5yth/potato-mesh/pull/371>
* Fixes POST request 403 errors on instances behind Cloudflare proxy by @varna9000 in <https://github.com/l5yth/potato-mesh/pull/368>
* Delay initial federation announcements by @l5yth in <https://github.com/l5yth/potato-mesh/pull/366>
* Ensure well-known document stays in sync on startup by @l5yth in <https://github.com/l5yth/potato-mesh/pull/365>
* Guard federation DNS resolution against restricted networks by @l5yth in <https://github.com/l5yth/potato-mesh/pull/362>
* Add federation ingestion limits and tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/364>
* Prefer reported primary channel names by @l5yth in <https://github.com/l5yth/potato-mesh/pull/363>
* Decouple message API node hydration by @l5yth in <https://github.com/l5yth/potato-mesh/pull/360>
* Fix ingestor reconnection detection by @l5yth in <https://github.com/l5yth/potato-mesh/pull/361>
* Harden instance domain validation by @l5yth in <https://github.com/l5yth/potato-mesh/pull/359>
* Ensure INSTANCE_DOMAIN propagates to containers by @l5yth in <https://github.com/l5yth/potato-mesh/pull/358>
* Chore: bump version to 0.5.2 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/356>
* Gracefully retry federation announcements over HTTP by @l5yth in <https://github.com/l5yth/potato-mesh/pull/355>
## v0.5.1
* Recursively ingest federated instances by @l5yth in <https://github.com/l5yth/potato-mesh/pull/353>
* Remove federation timeout environment overrides by @l5yth in <https://github.com/l5yth/potato-mesh/pull/352>
* Close unrelated short info overlays when opening short info by @l5yth in <https://github.com/l5yth/potato-mesh/pull/351>
* Improve federation instance error diagnostics by @l5yth in <https://github.com/l5yth/potato-mesh/pull/350>
* Harden federation domain validation and tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/347>
* Handle malformed instance records gracefully by @l5yth in <https://github.com/l5yth/potato-mesh/pull/348>
* Fix ingestor device mounting for non-serial connections by @l5yth in <https://github.com/l5yth/potato-mesh/pull/346>
* Ensure Docker deployments persist keyfile and well-known assets by @l5yth in <https://github.com/l5yth/potato-mesh/pull/345>
* Add modem preset display to node overlay by @l5yth in <https://github.com/l5yth/potato-mesh/pull/340>
* Display message frequency and channel in chat log by @l5yth in <https://github.com/l5yth/potato-mesh/pull/339>
* Bump fallback version string to v0.5.1 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/338>
* Docs: update changelog for 0.5.0 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/337>
* Fix ingestor docker import path by @l5yth in <https://github.com/l5yth/potato-mesh/pull/336>
## v0.5.0
* Add JavaScript configuration tests and coverage workflow
* Ensure node overlays appear above fullscreen map by @l5yth in <https://github.com/l5yth/potato-mesh/pull/333>
* Adjust node table columns responsively by @l5yth in <https://github.com/l5yth/potato-mesh/pull/332>
* Add LoRa metadata fields to nodes and messages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/331>
* Add channel metadata capture for message tagging by @l5yth in <https://github.com/l5yth/potato-mesh/pull/329>
* Capture radio metadata for ingestor payloads by @l5yth in <https://github.com/l5yth/potato-mesh/pull/327>
* Fix FrozenError when filtering node query results by @l5yth in <https://github.com/l5yth/potato-mesh/pull/324>
* Ensure frontend reports git-aware version strings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/321>
* Ensure web Docker image ships application sources by @l5yth in <https://github.com/l5yth/potato-mesh/pull/322>
* Refine stacked short info overlays on the map by @l5yth in <https://github.com/l5yth/potato-mesh/pull/319>
* Refine environment configuration defaults by @l5yth in <https://github.com/l5yth/potato-mesh/pull/318>
* Fix legacy configuration migration to XDG directories by @l5yth in <https://github.com/l5yth/potato-mesh/pull/317>
* Adopt XDG base directories for app data and config by @l5yth in <https://github.com/l5yth/potato-mesh/pull/316>
* Refactor: streamline ingestor environment variables by @l5yth in <https://github.com/l5yth/potato-mesh/pull/314>
* Adjust map auto-fit padding and default zoom by @l5yth in <https://github.com/l5yth/potato-mesh/pull/315>
* Ensure APIs filter stale data and refresh node details from latest sources by @l5yth in <https://github.com/l5yth/potato-mesh/pull/312>
* Improve offline tile fallback initialization by @l5yth in <https://github.com/l5yth/potato-mesh/pull/307>
* Add fallback for offline tile rendering errors by @l5yth in <https://github.com/l5yth/potato-mesh/pull/306>
* Fix map auto-fit handling and add controller by @l5yth in <https://github.com/l5yth/potato-mesh/pull/311>
* Fix map initialization bounds and add coverage by @l5yth in <https://github.com/l5yth/potato-mesh/pull/305>
* Increase coverage for configuration and sanitizer helpers by @l5yth in <https://github.com/l5yth/potato-mesh/pull/303>
* Add comprehensive theme and background front-end tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/302>
* Document sanitization and helper modules by @l5yth in <https://github.com/l5yth/potato-mesh/pull/301>
* Add in-repo Meshtastic protobuf stubs for tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/300>
* Handle CRL lookup failures during federation TLS by @l5yth in <https://github.com/l5yth/potato-mesh/pull/299>
* Ensure JavaScript workflow runs frontend tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/298>
* Unify structured logging across application and ingestor by @l5yth in <https://github.com/l5yth/potato-mesh/pull/296>
* Add Apache license headers to missing sources by @l5yth in <https://github.com/l5yth/potato-mesh/pull/297>
* Update workflows for ingestor, sinatra, and frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/295>
* Fix IPv6 instance domain canonicalization by @l5yth in <https://github.com/l5yth/potato-mesh/pull/294>
* Handle federation HTTPS CRL verification failures by @l5yth in <https://github.com/l5yth/potato-mesh/pull/293>
* Adjust federation announcement interval to eight hours by @l5yth in <https://github.com/l5yth/potato-mesh/pull/292>
* Restore modular app functionality by @l5yth in <https://github.com/l5yth/potato-mesh/pull/291>
* Refactor config and metadata helpers into PotatoMesh modules by @l5yth in <https://github.com/l5yth/potato-mesh/pull/290>
* Update default site configuration defaults by @l5yth in <https://github.com/l5yth/potato-mesh/pull/288>
* Add regression test for queue drain concurrency by @l5yth in <https://github.com/l5yth/potato-mesh/pull/287>
* Ensure Docker config directories are created for non-root user by @l5yth in <https://github.com/l5yth/potato-mesh/pull/286>
* Clarify numeric address requirement for network target parsing by @l5yth in <https://github.com/l5yth/potato-mesh/pull/285>
* Ensure mesh ingestor queue resets active flag when idle by @l5yth in <https://github.com/l5yth/potato-mesh/pull/284>
* Clarify BLE connection description in README by @l5yth in <https://github.com/l5yth/potato-mesh/pull/283>
* Configure web container for production mode by @l5yth in <https://github.com/l5yth/potato-mesh/pull/282>
* Normalize INSTANCE_DOMAIN configuration to require hostnames by @l5yth in <https://github.com/l5yth/potato-mesh/pull/280>
* Avoid blocking startup on federation announcements by @l5yth in <https://github.com/l5yth/potato-mesh/pull/281>
* Fix production Docker builds for web and ingestor images by @l5yth in <https://github.com/l5yth/potato-mesh/pull/279>
* Improve instance domain detection logic by @l5yth in <https://github.com/l5yth/potato-mesh/pull/278>
* Implement federation announcements and instances API by @l5yth in <https://github.com/l5yth/potato-mesh/pull/277>
* Fix federation signature handling and IP guard by @l5yth in <https://github.com/l5yth/potato-mesh/pull/276>
* Add persistent federation metadata endpoint by @l5yth in <https://github.com/l5yth/potato-mesh/pull/274>
* Add configurable instance domain with reverse DNS fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/272>
* Document production deployment configuration by @l5yth in <https://github.com/l5yth/potato-mesh/pull/273>
* Add targeted API endpoints and expose version metadata by @l5yth in <https://github.com/l5yth/potato-mesh/pull/271>
* Prometheus metrics updates on startup and for position/telemetry by @nicjansma in <https://github.com/l5yth/potato-mesh/pull/270>
* Add hourly reconnect handling for inactive mesh interface by @l5yth in <https://github.com/l5yth/potato-mesh/pull/267>
* Dockerfile fixes by @nicjansma in <https://github.com/l5yth/potato-mesh/pull/268>
* Added prometheus /metrics endpoint by @nicjansma in <https://github.com/l5yth/potato-mesh/pull/262>
* Add fullscreen toggle to map view by @l5yth in <https://github.com/l5yth/potato-mesh/pull/263>
* Relocate JS coverage export script into web directory by @l5yth in <https://github.com/l5yth/potato-mesh/pull/266>
* V0.4.0 version string in web UI by @nicjansma in <https://github.com/l5yth/potato-mesh/pull/265>
* Add energy saving cycle to ingestor daemon by @l5yth in <https://github.com/l5yth/potato-mesh/pull/256>
* Chore: restore apache headers by @l5yth in <https://github.com/l5yth/potato-mesh/pull/260>
* Docs: add matrix to readme by @l5yth in <https://github.com/l5yth/potato-mesh/pull/259>
* Force dark theme default based on sanitized cookie by @l5yth in <https://github.com/l5yth/potato-mesh/pull/252>
* Document mesh ingestor modules with PDoc-style docstrings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/255>
* Handle missing node IDs in Meshtastic nodeinfo packets by @l5yth in <https://github.com/l5yth/potato-mesh/pull/251>
* Document Ruby helper methods with RDoc comments by @l5yth in <https://github.com/l5yth/potato-mesh/pull/254>
* Add JSDoc documentation across client scripts by @l5yth in <https://github.com/l5yth/potato-mesh/pull/253>
* Fix mesh ingestor telemetry and neighbor handling by @l5yth in <https://github.com/l5yth/potato-mesh/pull/249>
* Refactor front-end assets into external modules by @l5yth in <https://github.com/l5yth/potato-mesh/pull/245>
* Add tests for helper utilities and asset routes by @l5yth in <https://github.com/l5yth/potato-mesh/pull/243>
* Docs: add ingestor inline docstrings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/244>
* Add comprehensive coverage tests for mesh ingestor by @l5yth in <https://github.com/l5yth/potato-mesh/pull/241>
* Add inline documentation to config helpers and frontend scripts by @l5yth in <https://github.com/l5yth/potato-mesh/pull/240>
* Update changelog by @l5yth in <https://github.com/l5yth/potato-mesh/pull/238>
## v0.4.0

View File

@@ -29,27 +29,39 @@ against the web API.
```env
API_TOKEN=replace-with-a-strong-token
SITE_NAME=My Meshtastic Network
MESH_SERIAL=/dev/ttyACM0
SITE_NAME=PotatoMesh Demo
CONNECTION=/dev/ttyACM0
INSTANCE_DOMAIN=mesh.example.org
```
Additional environment variables are optional:
- `DEFAULT_CHANNEL`, `DEFAULT_FREQUENCY`, `MAP_CENTER_LAT`, `MAP_CENTER_LON`,
`MAX_NODE_DISTANCE_KM`, and `MATRIX_ROOM` customise the UI.
- `CHANNEL`, `FREQUENCY`, `MAP_CENTER`, `MAX_DISTANCE`, and `CONTACT_LINK`
customise the UI.
- `POTATOMESH_INSTANCE` (defaults to `http://web:41447`) lets the ingestor post
to a remote PotatoMesh instance if you do not run both services together.
- `MESH_CHANNEL_INDEX`, `MESH_SNAPSHOT_SECS`, and `DEBUG` adjust ingestor
behaviour.
- `CONNECTION` overrides the default serial device or network endpoint used by
the ingestor.
- `CHANNEL_INDEX` selects the LoRa channel when using serial or Bluetooth
connections.
- `INSTANCE_DOMAIN` pins the public hostname advertised by the web UI and API
responses, bypassing reverse DNS detection when set.
- `DEBUG` enables verbose logging across the stack.
## Docker Compose file
Use the `docker-compose.yml` file provided in the repository (or download the
[raw file from GitHub](https://raw.githubusercontent.com/l5yth/potato-mesh/main/docker-compose.yml)).
It already references the published GHCR images, defines persistent volumes for
data and logs, and includes optional bridge-profile services for environments
that require classic port mapping. Place this file in the same directory as
your `.env` file so Compose can pick up both.
data, configuration, and logs, and includes optional bridge-profile services for
environments that require classic port mapping. Place this file in the same
directory as your `.env` file so Compose can pick up both.
The dedicated configuration volume binds to `/app/.config/potato-mesh` inside
the container. This path stores the instance private key and staged
`/.well-known/potato-mesh` documents. Because the volume persists independently
of container lifecycle events, generated credentials are not replaced on reboot
or re-deploy.
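An illustrative Compose fragment showing how such a configuration volume might be declared (service, image, and volume names here are assumptions — the repository's actual `docker-compose.yml` is authoritative):

```yaml
services:
  web:
    image: ghcr.io/l5yth/potato-mesh-web-linux-amd64:latest
    env_file: .env
    volumes:
      # Persists the instance private key and staged
      # /.well-known/potato-mesh documents across restarts.
      - potatomesh-config:/app/.config/potato-mesh
      - potatomesh-data:/app/data

volumes:
  potatomesh-config:
  potatomesh-data:
```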
## Start the stack

View File

@@ -54,8 +54,8 @@ COPY --chown=potatomesh:potatomesh web/views/ ./views/
COPY --chown=potatomesh:potatomesh data/*.sql /data/
# Create data directory for SQLite database
RUN mkdir -p /app/data && \
chown -R potatomesh:potatomesh /app/data
RUN mkdir -p /app/data /app/.local/share/potato-mesh && \
chown -R potatomesh:potatomesh /app/data /app/.local
# Switch to non-root user
USER potatomesh
@@ -65,18 +65,13 @@ EXPOSE 41447
# Default environment variables (can be overridden by host)
ENV APP_ENV=production \
MESH_DB=/app/data/mesh.db \
DB_BUSY_TIMEOUT_MS=5000 \
DB_BUSY_MAX_RETRIES=5 \
DB_BUSY_RETRY_DELAY=0.05 \
MAX_JSON_BODY_BYTES=1048576 \
SITE_NAME="Berlin Mesh Network" \
DEFAULT_CHANNEL="#MediumFast" \
DEFAULT_FREQUENCY="868MHz" \
MAP_CENTER_LAT=52.502889 \
MAP_CENTER_LON=13.404194 \
MAX_NODE_DISTANCE_KM=50 \
MATRIX_ROOM="" \
RACK_ENV=production \
SITE_NAME="PotatoMesh Demo" \
CHANNEL="#LongFast" \
FREQUENCY="915MHz" \
MAP_CENTER="38.761944,-27.090833" \
MAX_DISTANCE=42 \
CONTACT_LINK="#potatomesh:dod.ngo" \
DEBUG=0
# Start the application

View File

@@ -1,10 +1,11 @@
# 🥔 PotatoMesh
[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/l5yth/potato-mesh/ruby.yml?branch=main)](https://github.com/l5yth/potato-mesh/actions)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/l5yth/potato-mesh)](https://github.com/l5yth/potato-mesh/releases)
[![GitHub release](https://img.shields.io/github/v/release/l5yth/potato-mesh)](https://github.com/l5yth/potato-mesh/releases)
[![codecov](https://codecov.io/gh/l5yth/potato-mesh/branch/main/graph/badge.svg?token=FS7252JVZT)](https://codecov.io/gh/l5yth/potato-mesh)
[![Open-Source License](https://img.shields.io/github/license/l5yth/potato-mesh)](LICENSE)
[![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/l5yth/potato-mesh/issues)
[![Matrix Chat](https://img.shields.io/badge/matrix-%23potatomesh:dod.ngo-blue)](https://matrix.to/#/#potatomesh:dod.ngo)
A simple Meshtastic-powered node dashboard for your local community. _No MQTT clutter, just local LoRa aether._
@@ -24,7 +25,7 @@ Requires Ruby for the Sinatra web app and SQLite3 for the app's database.
```bash
pacman -S ruby sqlite3
gem install sinatra sqlite3 rackup puma rspec rack-test rufo
gem install sinatra sqlite3 rackup puma rspec rack-test rufo prometheus-client
cd ./web
bundle install
```
@@ -69,14 +70,15 @@ exec ruby app.rb -p 41447 -o 0.0.0.0
The web app can be configured with environment variables (defaults shown):
* `SITE_NAME` - title and header shown in the ui (default: "Meshtastic Berlin")
* `DEFAULT_CHANNEL` - default channel shown in the ui (default: "#MediumFast")
* `DEFAULT_FREQUENCY` - default channel shown in the ui (default: "868MHz")
* `MAP_CENTER_LAT` / `MAP_CENTER_LON` - default map center coordinates (default: `52.502889` / `13.404194`)
* `MAX_NODE_DISTANCE_KM` - hide nodes farther than this distance from the center (default: `137`)
* `MATRIX_ROOM` - matrix room id for a footer link (default: `#meshtastic-berlin:matrix.org`)
* `SITE_NAME` - title and header shown in the UI (default: "PotatoMesh Demo")
* `CHANNEL` - default channel shown in the UI (default: "#LongFast")
* `FREQUENCY` - default frequency shown in the UI (default: "915MHz")
* `MAP_CENTER` - default map center coordinates (default: `38.761944,-27.090833`)
* `MAX_DISTANCE` - hide nodes farther than this distance from the center (default: `42`)
* `CONTACT_LINK` - chat link or Matrix alias for footer and overlay (default: `#potatomesh:dod.ngo`)
* `PRIVATE` - set to `1` to hide the chat UI, disable message APIs, and exclude hidden clients (default: unset)
* `PROM_REPORT_IDS` - comma-separated list of node IDs to report in Prometheus metrics, `*` for all (default: unset)
* `INSTANCE_DOMAIN` - public hostname (optionally with port) used for metadata, federation, and API links (default: auto-detected)
* `FEDERATION` - set to `1` to announce your instance and crawl peers, or `0` to disable federation (default: `1`)
The application derives SEO-friendly document titles, descriptions, and social
preview tags from these existing configuration values and reuses the bundled
@@ -85,9 +87,32 @@ logo for Open Graph and Twitter cards.
Example:
```bash
SITE_NAME="Meshtastic Berlin" MAP_CENTER_LAT=52.502889 MAP_CENTER_LON=13.404194 MAX_NODE_DISTANCE_KM=137 MATRIX_ROOM="#meshtastic-berlin:matrix.org" ./app.sh
SITE_NAME="PotatoMesh Demo" MAP_CENTER=38.761944,-27.090833 MAX_DISTANCE=42 CONTACT_LINK="#potatomesh:dod.ngo" ./app.sh
```
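The combined `MAP_CENTER` value replaces the old split latitude/longitude variables. As a minimal sketch, a `lat,lon` string could be split like this (illustrative only, not the app's actual parser):

```python
def parse_map_center(value: str) -> tuple[float, float]:
    """Split a 'lat,lon' MAP_CENTER string into float coordinates."""
    lat_str, lon_str = (part.strip() for part in value.split(",", 1))
    return float(lat_str), float(lon_str)

# e.g. the default from the README
center = parse_map_center("38.761944,-27.090833")
```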
### Configuration & Storage
PotatoMesh stores its runtime assets using the XDG base directory specification.
When XDG directories are not provided, the application falls back
to the repository root.
The key is written to `$XDG_CONFIG_HOME/potato-mesh/keyfile` and the
well-known document is staged in
`$XDG_CONFIG_HOME/potato-mesh/well-known/potato-mesh`.
The database can be found in `$XDG_DATA_HOME/potato-mesh`.
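As a rough illustration, the XDG fallback behaviour described above could be modelled like this (a sketch; `storage_dir` and `REPO_ROOT` are hypothetical names, not PotatoMesh's actual code):

```python
import os
from pathlib import Path

REPO_ROOT = Path(".")  # stand-in for the repository-root fallback

def storage_dir(xdg_var: str) -> Path:
    """Resolve a PotatoMesh directory under an XDG base dir, else the repo root."""
    base = os.environ.get(xdg_var, "").strip()
    return Path(base) / "potato-mesh" if base else REPO_ROOT

# keyfile and well-known document live under the config dir, the DB under data
keyfile = storage_dir("XDG_CONFIG_HOME") / "keyfile"
database_dir = storage_dir("XDG_DATA_HOME")
```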
### Federation
PotatoMesh instances can optionally federate by publishing signed metadata and
discovering peers. Federation is enabled by default and controlled with the
`FEDERATION` environment variable. Set `FEDERATION=1` (default) to announce your
instance, respond to remote crawlers, and crawl the wider network. Set
`FEDERATION=0` to keep your deployment isolated—federation requests will be
ignored and the ingestor will skip discovery tasks. Private mode still takes
precedence; when `PRIVATE=1`, federation features remain disabled regardless of
the `FEDERATION` value.
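The precedence rules above can be sketched in a few lines (illustrative Python; the real logic lives in the web app):

```python
import os

def federation_enabled() -> bool:
    """Federation defaults to on, but PRIVATE=1 always wins (illustrative)."""
    if os.environ.get("PRIVATE") == "1":
        return False  # private mode disables federation unconditionally
    return os.environ.get("FEDERATION", "1") != "0"
```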
### API
The web app exposes an API:
@@ -97,7 +122,9 @@ The web app contains an API:
* GET `/api/messages?limit=100` - returns the latest 100 messages (disabled when `PRIVATE=1`)
* GET `/api/telemetry?limit=100` - returns the latest 100 telemetry records
* GET `/api/neighbors?limit=100` - returns the latest 100 neighbor tuples
* GET `/metrics`- prometheus endpoint
* GET `/api/instances` - returns known potato-mesh instances in other locations
* GET `/metrics` - Prometheus metrics endpoint
* GET `/version` - information about the potato-mesh instance
* POST `/api/nodes` - upserts nodes provided as JSON object mapping node ids to node data (requires `Authorization: Bearer <API_TOKEN>`)
* POST `/api/positions` - appends positions provided as a JSON object or array (requires `Authorization: Bearer <API_TOKEN>`)
* POST `/api/messages` - appends messages provided as a JSON object or array (requires `Authorization: Bearer <API_TOKEN>`; disabled when `PRIVATE=1`)
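As a sketch of an authorized ingest call, a `POST /api/nodes` request could be assembled with Python's standard library like this (token and payload are placeholders, not real credentials):

```python
import json
import urllib.request

API_TOKEN = "1eb140fd-cab4-40be-b862-41c607762246"  # placeholder; use your own API_TOKEN
payload = {"!849b7154": {"shortName": "7154"}}  # JSON object mapping node ids to node data

req = urllib.request.Request(
    "http://127.0.0.1:41447/api/nodes",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually submit the upsert
```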
@@ -132,28 +159,26 @@ to the configured potato-mesh instance.
Check out the `mesh.sh` ingestor script in the `./data` directory.
```bash
POTATOMESH_INSTANCE=http://127.0.0.1:41447 API_TOKEN=1eb140fd-cab4-40be-b862-41c607762246 MESH_SERIAL=/dev/ttyACM0 DEBUG=1 ./mesh.sh
Mesh daemon: nodes+messages → http://127.0.0.1 | port=41447 | channel=0
POTATOMESH_INSTANCE=http://127.0.0.1:41447 API_TOKEN=1eb140fd-cab4-40be-b862-41c607762246 CONNECTION=/dev/ttyACM0 DEBUG=1 ./mesh.sh
[2025-02-20T12:34:56.789012Z] [potato-mesh] [info] channel=0 context=daemon.main port='41447' target='http://127.0.0.1' Mesh daemon starting
[...]
[debug] upserted node !849b7154 shortName='7154'
[debug] upserted node !ba653ae8 shortName='3ae8'
[debug] upserted node !16ced364 shortName='Pat'
[debug] stored message from '!9ee71c38' to '^all' ch=0 text='Guten Morgen!'
[2025-02-20T12:34:57.012345Z] [potato-mesh] [debug] context=handlers.upsert_node node_id=!849b7154 short_name='7154' long_name='7154' Queued node upsert payload
[2025-02-20T12:34:57.456789Z] [potato-mesh] [debug] context=handlers.upsert_node node_id=!ba653ae8 short_name='3ae8' long_name='3ae8' Queued node upsert payload
[2025-02-20T12:34:58.001122Z] [potato-mesh] [debug] context=handlers.store_packet_dict channel=0 from_id='!9ee71c38' payload='Guten Morgen!' to_id='^all' Queued message payload
```
Run the script with `POTATOMESH_INSTANCE` and `API_TOKEN` to keep updating
node records and parsing new incoming messages. Enable debug output with `DEBUG=1`,
specify the serial port with `MESH_SERIAL` (default `/dev/ttyACM0`) or set it to an IP
address (for example `192.168.1.20:4403`) to use the Meshtastic TCP interface.
`MESH_SERIAL` also accepts Bluetooth device addresses (e.g., `ED:4D:9E:95:CF:60`)
and attempts an BLE connection if available.
specify the connection target with `CONNECTION` (default `/dev/ttyACM0`) or set it to
an IP address (for example `192.168.1.20:4403`) to use the Meshtastic TCP
interface. `CONNECTION` also accepts Bluetooth device addresses (e.g.,
`ED:4D:9E:95:CF:60`) and the script attempts a BLE connection if available.
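A rough sketch of how a `CONNECTION` value might be classified into a serial, TCP, or BLE target (illustrative heuristics, not the ingestor's actual logic):

```python
import re

def classify_connection(target: str) -> str:
    """Guess the Meshtastic interface type for a CONNECTION value (illustrative)."""
    # Bluetooth MAC address, e.g. ED:4D:9E:95:CF:60
    if re.fullmatch(r"([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}", target):
        return "ble"
    # IPv4 address with optional port, e.g. 192.168.1.20:4403
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", target):
        return "tcp"
    # everything else is treated as a serial device path
    return "serial"
```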
## Demos
* <https://potatomesh.net/>
* <https://vrs.kdd2105.ru/>
* <https://potatomesh.stratospire.com/>
* <https://es1tem.uk/>
Post your nodes here:
* <https://github.com/l5yth/potato-mesh/discussions/258>
## Docker
@@ -170,5 +195,5 @@ See the [Docker guide](DOCKER.md) for more details and custom deployment instru
Apache v2.0, Contact <COM0@l5y.tech>
Join our Matrix to discuss the dashboard or ask for technical support:
Join our community chat to discuss the dashboard or ask for technical support:
[#potatomesh:dod.ngo](https://matrix.to/#/#potatomesh:dod.ngo)


@@ -67,33 +67,48 @@ update_env() {
}
# Get current values from .env if they exist
SITE_NAME=$(grep "^SITE_NAME=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "My Meshtastic Network")
DEFAULT_CHANNEL=$(grep "^DEFAULT_CHANNEL=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "#MediumFast")
DEFAULT_FREQUENCY=$(grep "^DEFAULT_FREQUENCY=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "868MHz")
MAP_CENTER_LAT=$(grep "^MAP_CENTER_LAT=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "52.502889")
MAP_CENTER_LON=$(grep "^MAP_CENTER_LON=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "13.404194")
MAX_NODE_DISTANCE_KM=$(grep "^MAX_NODE_DISTANCE_KM=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "50")
MATRIX_ROOM=$(grep "^MATRIX_ROOM=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
SITE_NAME=$(grep "^SITE_NAME=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "PotatoMesh Demo")
CHANNEL=$(grep "^CHANNEL=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "#LongFast")
FREQUENCY=$(grep "^FREQUENCY=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "915MHz")
FEDERATION=$(grep "^FEDERATION=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "1")
PRIVATE=$(grep "^PRIVATE=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "0")
MAP_CENTER=$(grep "^MAP_CENTER=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "38.761944,-27.090833")
MAX_DISTANCE=$(grep "^MAX_DISTANCE=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "42")
CONTACT_LINK=$(grep "^CONTACT_LINK=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "#potatomesh:dod.ngo")
API_TOKEN=$(grep "^API_TOKEN=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
POTATOMESH_IMAGE_ARCH=$(grep "^POTATOMESH_IMAGE_ARCH=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "linux-amd64")
INSTANCE_DOMAIN=$(grep "^INSTANCE_DOMAIN=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
echo "📍 Location Settings"
echo "-------------------"
read_with_default "Site Name (your mesh network name)" "$SITE_NAME" SITE_NAME
read_with_default "Map Center Latitude" "$MAP_CENTER_LAT" MAP_CENTER_LAT
read_with_default "Map Center Longitude" "$MAP_CENTER_LON" MAP_CENTER_LON
read_with_default "Max Node Distance (km)" "$MAX_NODE_DISTANCE_KM" MAX_NODE_DISTANCE_KM
read_with_default "Map Center (lat,lon)" "$MAP_CENTER" MAP_CENTER
read_with_default "Max Distance (km)" "$MAX_DISTANCE" MAX_DISTANCE
echo ""
echo "📡 Meshtastic Settings"
echo "---------------------"
read_with_default "Default Channel" "$DEFAULT_CHANNEL" DEFAULT_CHANNEL
read_with_default "Default Frequency (868MHz, 915MHz, etc.)" "$DEFAULT_FREQUENCY" DEFAULT_FREQUENCY
read_with_default "Channel" "$CHANNEL" CHANNEL
read_with_default "Frequency (868MHz, 915MHz, etc.)" "$FREQUENCY" FREQUENCY
echo ""
echo "💬 Optional Settings"
echo "-------------------"
read_with_default "Matrix Room (optional, e.g., #meshtastic-berlin:matrix.org)" "$MATRIX_ROOM" MATRIX_ROOM
read_with_default "Chat link or Matrix room (optional)" "$CONTACT_LINK" CONTACT_LINK
echo ""
echo "🤝 Federation Settings"
echo "----------------------"
echo "Federation shares instance metadata with other PotatoMesh deployments."
echo "Set to 1 to enable discovery or 0 to keep your instance isolated."
read_with_default "Enable federation (1=yes, 0=no)" "$FEDERATION" FEDERATION
echo ""
echo "🙈 Privacy Settings"
echo "-------------------"
echo "Private mode hides public mesh messages from unauthenticated visitors."
echo "Set to 1 to hide public feeds or 0 to keep them visible."
read_with_default "Enable private mode (1=yes, 0=no)" "$PRIVATE" PRIVATE
echo ""
echo "🛠 Docker Settings"
@@ -101,6 +116,13 @@ echo "------------------"
echo "Specify the Docker image architecture for your host (linux-amd64, linux-arm64, linux-armv7)."
read_with_default "Docker image architecture" "$POTATOMESH_IMAGE_ARCH" POTATOMESH_IMAGE_ARCH
echo ""
echo "🌐 Domain Settings"
echo "------------------"
echo "Provide the public hostname that clients should use to reach this PotatoMesh instance."
echo "Leave blank to allow automatic detection via reverse DNS."
read_with_default "Instance domain (e.g. mesh.example.org)" "$INSTANCE_DOMAIN" INSTANCE_DOMAIN
echo ""
echo "🔐 Security Settings"
echo "-------------------"
@@ -137,18 +159,33 @@ echo "📝 Updating .env file..."
# Update .env file
update_env "SITE_NAME" "\"$SITE_NAME\""
update_env "DEFAULT_CHANNEL" "\"$DEFAULT_CHANNEL\""
update_env "DEFAULT_FREQUENCY" "\"$DEFAULT_FREQUENCY\""
update_env "MAP_CENTER_LAT" "$MAP_CENTER_LAT"
update_env "MAP_CENTER_LON" "$MAP_CENTER_LON"
update_env "MAX_NODE_DISTANCE_KM" "$MAX_NODE_DISTANCE_KM"
update_env "MATRIX_ROOM" "\"$MATRIX_ROOM\""
update_env "CHANNEL" "\"$CHANNEL\""
update_env "FREQUENCY" "\"$FREQUENCY\""
update_env "MAP_CENTER" "\"$MAP_CENTER\""
update_env "MAX_DISTANCE" "$MAX_DISTANCE"
update_env "CONTACT_LINK" "\"$CONTACT_LINK\""
update_env "API_TOKEN" "$API_TOKEN"
update_env "POTATOMESH_IMAGE_ARCH" "$POTATOMESH_IMAGE_ARCH"
update_env "FEDERATION" "$FEDERATION"
update_env "PRIVATE" "$PRIVATE"
if [ -n "$INSTANCE_DOMAIN" ]; then
update_env "INSTANCE_DOMAIN" "$INSTANCE_DOMAIN"
else
sed -i.bak '/^INSTANCE_DOMAIN=.*/d' .env
fi
# Add other common settings if they don't exist
if ! grep -q "^MESH_SERIAL=" .env; then
echo "MESH_SERIAL=/dev/ttyACM0" >> .env
# Migrate legacy connection settings and ensure defaults exist
if grep -q "^MESH_SERIAL=" .env; then
legacy_connection=$(grep "^MESH_SERIAL=" .env | head -n1 | cut -d'=' -f2-)
if [ -n "$legacy_connection" ] && ! grep -q "^CONNECTION=" .env; then
echo "♻️ Migrating legacy MESH_SERIAL value to CONNECTION"
update_env "CONNECTION" "$legacy_connection"
fi
sed -i.bak '/^MESH_SERIAL=.*/d' .env
fi
if ! grep -q "^CONNECTION=" .env; then
echo "CONNECTION=/dev/ttyACM0" >> .env
fi
if ! grep -q "^DEBUG=" .env; then
@@ -163,13 +200,20 @@ echo "✅ Configuration complete!"
echo ""
echo "📋 Your settings:"
echo " Site Name: $SITE_NAME"
echo " Map Center: $MAP_CENTER_LAT, $MAP_CENTER_LON"
echo " Max Distance: ${MAX_NODE_DISTANCE_KM}km"
echo " Channel: $DEFAULT_CHANNEL"
echo " Frequency: $DEFAULT_FREQUENCY"
echo " Matrix Room: ${MATRIX_ROOM:-'Not set'}"
echo " Map Center: $MAP_CENTER"
echo " Max Distance: ${MAX_DISTANCE}km"
echo " Channel: $CHANNEL"
echo " Frequency: $FREQUENCY"
echo " Chat: ${CONTACT_LINK:-'Not set'}"
echo " API Token: ${API_TOKEN:0:8}..."
echo " Docker Image Arch: $POTATOMESH_IMAGE_ARCH"
echo " Private Mode: ${PRIVATE}"
echo " Instance Domain: ${INSTANCE_DOMAIN:-'Auto-detected'}"
if [ "${FEDERATION:-1}" = "0" ]; then
echo " Federation: Disabled"
else
echo " Federation: Enabled"
fi
echo ""
echo "🚀 You can now start PotatoMesh with:"
echo " docker-compose up -d"


@@ -26,7 +26,7 @@ RUN set -eux; \
python -m pip install --no-cache-dir -r requirements.txt; \
apk del .build-deps
COPY data/ .
COPY data /app/data
RUN addgroup -S potatomesh && \
adduser -S potatomesh -G potatomesh && \
adduser potatomesh dialout && \
@@ -34,14 +34,13 @@ RUN addgroup -S potatomesh && \
USER potatomesh
ENV MESH_SERIAL=/dev/ttyACM0 \
MESH_SNAPSHOT_SECS=60 \
MESH_CHANNEL_INDEX=0 \
ENV CONNECTION=/dev/ttyACM0 \
CHANNEL_INDEX=0 \
DEBUG=0 \
POTATOMESH_INSTANCE="" \
API_TOKEN=""
CMD ["python", "mesh.py"]
CMD ["python", "-m", "data.mesh"]
# Windows production image
FROM python:${PYTHON_VERSION}-windowsservercore-ltsc2022 AS production-windows
@@ -56,17 +55,16 @@ WORKDIR /app
COPY data/requirements.txt ./
RUN python -m pip install --no-cache-dir -r requirements.txt
COPY data/ .
COPY data /app/data
USER ContainerUser
ENV MESH_SERIAL=/dev/ttyACM0 \
MESH_SNAPSHOT_SECS=60 \
MESH_CHANNEL_INDEX=0 \
ENV CONNECTION=/dev/ttyACM0 \
CHANNEL_INDEX=0 \
DEBUG=0 \
POTATOMESH_INSTANCE="" \
API_TOKEN=""
CMD ["python", "mesh.py"]
CMD ["python", "-m", "data.mesh"]
FROM production-${TARGETOS} AS production


@@ -21,7 +21,7 @@ import threading as threading # re-exported for compatibility
import sys
import types
from . import config, daemon, handlers, interfaces, queue, serialization
from . import channels, config, daemon, handlers, interfaces, queue, serialization
__all__: list[str] = []
@@ -40,24 +40,29 @@ def _export_constants() -> None:
__all__.extend(["json", "urllib", "glob", "threading", "signal"])
for _module in (daemon, handlers, interfaces, queue, serialization):
for _module in (channels, daemon, handlers, interfaces, queue, serialization):
_reexport(_module)
_export_constants()
_CONFIG_ATTRS = {
"PORT",
"CONNECTION",
"SNAPSHOT_SECS",
"CHANNEL_INDEX",
"DEBUG",
"INSTANCE",
"API_TOKEN",
"LORA_FREQ",
"MODEM_PRESET",
"_RECONNECT_INITIAL_DELAY_SECS",
"_RECONNECT_MAX_DELAY_SECS",
"_CLOSE_TIMEOUT_SECS",
"_debug_log",
}
# Legacy export maintained for backwards compatibility.
_CONFIG_ATTRS.add("PORT")
_INTERFACE_ATTRS = {"BLEInterface", "SerialInterface", "TCPInterface"}
_QUEUE_ATTRS = set(queue.__all__)


@@ -0,0 +1,238 @@
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Helpers for capturing and exposing mesh channel metadata."""
from __future__ import annotations
import os
from typing import Any, Iterable, Iterator
from . import config
try: # pragma: no cover - optional dependency for enum introspection
from meshtastic.protobuf import channel_pb2
except Exception: # pragma: no cover - exercised in environments without protobufs
channel_pb2 = None # type: ignore[assignment]
_ROLE_PRIMARY = 1
_ROLE_SECONDARY = 2
if channel_pb2 is not None: # pragma: no branch - evaluated once at import time
try:
_ROLE_PRIMARY = int(channel_pb2.Channel.Role.PRIMARY)
_ROLE_SECONDARY = int(channel_pb2.Channel.Role.SECONDARY)
except Exception: # pragma: no cover - defensive, version specific
_ROLE_PRIMARY = 1
_ROLE_SECONDARY = 2
_CHANNEL_MAPPINGS: tuple[tuple[int, str], ...] = ()
_CHANNEL_LOOKUP: dict[int, str] = {}
def _iter_channel_objects(channels_obj: Any) -> Iterator[Any]:
"""Yield channel descriptors from ``channels_obj``.
The real Meshtastic API exposes channels via protobuf containers that are
list-like. This helper converts the container into a deterministic iterator
while avoiding runtime errors if an unexpected type is supplied.
"""
if channels_obj is None:
return iter(())
if isinstance(channels_obj, dict):
return iter(channels_obj.values())
if isinstance(channels_obj, Iterable):
return iter(list(channels_obj))
length_fn = getattr(channels_obj, "__len__", None)
getitem = getattr(channels_obj, "__getitem__", None)
if callable(length_fn) and callable(getitem):
try:
length = int(length_fn())
except Exception: # pragma: no cover - defensive only
length = None
if length is not None and length >= 0:
snapshot = []
for index in range(length):
try:
snapshot.append(getitem(index))
except Exception: # pragma: no cover - best effort copy
break
return iter(snapshot)
return iter(())
def _primary_channel_name() -> str | None:
"""Return the fallback name to use for the primary channel when needed."""
preset = getattr(config, "MODEM_PRESET", None)
if isinstance(preset, str) and preset.strip():
return preset.strip()
env_name = os.environ.get("CHANNEL", "").strip()
if env_name:
return env_name
return None
def _extract_channel_name(settings_obj: Any) -> str | None:
"""Normalise the configured channel name extracted from ``settings_obj``."""
if settings_obj is None:
return None
if isinstance(settings_obj, dict):
candidate = settings_obj.get("name")
else:
candidate = getattr(settings_obj, "name", None)
if isinstance(candidate, str):
candidate = candidate.strip()
if candidate:
return candidate
return None
def _normalize_role(role: Any) -> int | None:
"""Convert a channel role descriptor into an integer value."""
if isinstance(role, int):
return role
if isinstance(role, str):
value = role.strip().upper()
if value == "PRIMARY":
return _ROLE_PRIMARY
if value == "SECONDARY":
return _ROLE_SECONDARY
try:
return int(value)
except ValueError:
return None
name_attr = getattr(role, "name", None)
if isinstance(name_attr, str):
return _normalize_role(name_attr)
value_attr = getattr(role, "value", None)
if isinstance(value_attr, int):
return value_attr
try:
return int(role) # type: ignore[arg-type]
except Exception:
return None
def _channel_tuple(channel_obj: Any) -> tuple[int, str] | None:
"""Return ``(index, name)`` for ``channel_obj`` when resolvable."""
role_value = _normalize_role(getattr(channel_obj, "role", None))
if role_value == _ROLE_PRIMARY:
channel_index = 0
channel_name = _extract_channel_name(getattr(channel_obj, "settings", None))
if channel_name is None:
channel_name = _primary_channel_name()
elif role_value == _ROLE_SECONDARY:
raw_index = getattr(channel_obj, "index", None)
try:
channel_index = int(raw_index)
except Exception:
channel_index = None
channel_name = _extract_channel_name(getattr(channel_obj, "settings", None))
else:
return None
if not isinstance(channel_index, int):
return None
if not isinstance(channel_name, str) or not channel_name:
return None
return channel_index, channel_name
def capture_from_interface(iface: Any) -> None:
"""Populate the channel cache by inspecting ``iface`` when possible."""
global _CHANNEL_MAPPINGS, _CHANNEL_LOOKUP
if iface is None or _CHANNEL_MAPPINGS:
return
try:
wait_for_config = getattr(iface, "waitForConfig", None)
if callable(wait_for_config):
wait_for_config()
except Exception: # pragma: no cover - hardware dependent safeguard
pass
local_node = getattr(iface, "localNode", None)
channels_obj = getattr(local_node, "channels", None) if local_node else None
channel_entries: list[tuple[int, str]] = []
seen_indices: set[int] = set()
for candidate in _iter_channel_objects(channels_obj):
result = _channel_tuple(candidate)
if result is None:
continue
index, name = result
if index in seen_indices:
continue
channel_entries.append((index, name))
seen_indices.add(index)
if not channel_entries:
return
_CHANNEL_MAPPINGS = tuple(channel_entries)
_CHANNEL_LOOKUP = {index: name for index, name in _CHANNEL_MAPPINGS}
config._debug_log(
"Captured channel metadata",
context="channels.capture",
severity="info",
always=True,
channels=_CHANNEL_MAPPINGS,
)
def channel_mappings() -> tuple[tuple[int, str], ...]:
"""Return the cached ``(index, name)`` channel tuples."""
return _CHANNEL_MAPPINGS
def channel_name(channel_index: int | None) -> str | None:
"""Return the channel name for ``channel_index`` when known."""
if channel_index is None:
return None
return _CHANNEL_LOOKUP.get(int(channel_index))
def _reset_channel_cache() -> None:
"""Clear cached channel data. Intended for use in tests only."""
global _CHANNEL_MAPPINGS, _CHANNEL_LOOKUP
_CHANNEL_MAPPINGS = ()
_CHANNEL_LOOKUP = {}
__all__ = [
"capture_from_interface",
"channel_mappings",
"channel_name",
"_reset_channel_cache",
]


@@ -17,50 +17,116 @@
from __future__ import annotations
import os
import time
import sys
from datetime import datetime, timezone
from types import ModuleType
from typing import Any
DEFAULT_SNAPSHOT_SECS = 60
"""Default interval, in seconds, between state snapshot uploads."""
DEFAULT_CHANNEL_INDEX = 0
"""Default LoRa channel index used when none is specified."""
DEFAULT_RECONNECT_INITIAL_DELAY_SECS = 5.0
"""Initial reconnection delay applied after connection loss."""
DEFAULT_RECONNECT_MAX_DELAY_SECS = 60.0
"""Maximum reconnection backoff delay applied by the ingestor."""
DEFAULT_CLOSE_TIMEOUT_SECS = 5.0
"""Grace period for interface shutdown routines to complete."""
DEFAULT_INACTIVITY_RECONNECT_SECS = float(60 * 60)
"""Interval before forcing a reconnect when no packets are observed."""
DEFAULT_ENERGY_ONLINE_DURATION_SECS = 300.0
"""Duration to stay online before entering a low-power sleep cycle."""
DEFAULT_ENERGY_SLEEP_SECS = float(6 * 60 * 60)
"""Sleep duration used when energy saving mode is active."""
CONNECTION = os.environ.get("CONNECTION") or os.environ.get("MESH_SERIAL")
"""Optional connection target for the mesh interface.
When unset, platform-specific defaults will be inferred by the interface
implementations. The legacy :envvar:`MESH_SERIAL` environment variable is still
accepted for backwards compatibility.
"""
SNAPSHOT_SECS = DEFAULT_SNAPSHOT_SECS
"""Interval, in seconds, between state snapshot uploads."""
CHANNEL_INDEX = int(os.environ.get("CHANNEL_INDEX", str(DEFAULT_CHANNEL_INDEX)))
"""Index of the LoRa channel to select when connecting."""
PORT = os.environ.get("MESH_SERIAL")
SNAPSHOT_SECS = int(os.environ.get("MESH_SNAPSHOT_SECS", "60"))
CHANNEL_INDEX = int(os.environ.get("MESH_CHANNEL_INDEX", "0"))
DEBUG = os.environ.get("DEBUG") == "1"
INSTANCE = os.environ.get("POTATOMESH_INSTANCE", "").rstrip("/")
API_TOKEN = os.environ.get("API_TOKEN", "")
ENERGY_SAVING = os.environ.get("ENERGY_SAVING") == "1"
"""When ``True``, enables the ingestor's energy saving mode."""
_RECONNECT_INITIAL_DELAY_SECS = float(os.environ.get("MESH_RECONNECT_INITIAL", "5"))
_RECONNECT_MAX_DELAY_SECS = float(os.environ.get("MESH_RECONNECT_MAX", "60"))
_CLOSE_TIMEOUT_SECS = float(os.environ.get("MESH_CLOSE_TIMEOUT", "5"))
_INACTIVITY_RECONNECT_SECS = float(
os.environ.get("MESH_INACTIVITY_RECONNECT_SECS", str(60 * 60))
)
_ENERGY_ONLINE_DURATION_SECS = float(
os.environ.get("ENERGY_ONLINE_DURATION_SECS", "300")
)
_ENERGY_SLEEP_SECS = float(os.environ.get("ENERGY_SLEEP_SECS", str(6 * 60 * 60)))
LORA_FREQ: int | None = None
"""Frequency of the local node's configured LoRa region in MHz."""
MODEM_PRESET: str | None = None
"""CamelCase modem preset name reported by the local node."""
_RECONNECT_INITIAL_DELAY_SECS = DEFAULT_RECONNECT_INITIAL_DELAY_SECS
_RECONNECT_MAX_DELAY_SECS = DEFAULT_RECONNECT_MAX_DELAY_SECS
_CLOSE_TIMEOUT_SECS = DEFAULT_CLOSE_TIMEOUT_SECS
_INACTIVITY_RECONNECT_SECS = DEFAULT_INACTIVITY_RECONNECT_SECS
_ENERGY_ONLINE_DURATION_SECS = DEFAULT_ENERGY_ONLINE_DURATION_SECS
_ENERGY_SLEEP_SECS = DEFAULT_ENERGY_SLEEP_SECS
# Backwards compatibility shim for legacy imports.
PORT = CONNECTION
def _debug_log(message: str) -> None:
def _debug_log(
message: str,
*,
context: str | None = None,
severity: str = "debug",
always: bool = False,
**metadata: Any,
) -> None:
"""Print ``message`` with a UTC timestamp when ``DEBUG`` is enabled.
Parameters:
message: Text to display when debug logging is active.
context: Optional logical component emitting the message.
severity: Log level label to embed in the formatted output.
always: When ``True``, bypasses the :data:`DEBUG` guard.
**metadata: Additional structured log metadata.
"""
if not DEBUG:
normalized_severity = severity.lower()
if not DEBUG and not always and normalized_severity == "debug":
return
timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
print(f"[{timestamp}] [debug] {message}")
timestamp = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
timestamp = timestamp.replace("+00:00", "Z")
parts = [f"[{timestamp}]", "[potato-mesh]", f"[{normalized_severity}]"]
if context:
parts.append(f"context={context}")
for key, value in sorted(metadata.items()):
parts.append(f"{key}={value!r}")
parts.append(message)
print(" ".join(parts))
__all__ = [
"PORT",
"CONNECTION",
"SNAPSHOT_SECS",
"CHANNEL_INDEX",
"DEBUG",
"INSTANCE",
"API_TOKEN",
"ENERGY_SAVING",
"LORA_FREQ",
"MODEM_PRESET",
"_RECONNECT_INITIAL_DELAY_SECS",
"_RECONNECT_MAX_DELAY_SECS",
"_CLOSE_TIMEOUT_SECS",
@@ -69,3 +135,19 @@ __all__ = [
"_ENERGY_SLEEP_SECS",
"_debug_log",
]
class _ConfigModule(ModuleType):
"""Module proxy that keeps connection aliases synchronised."""
def __setattr__(self, name: str, value: Any) -> None: # type: ignore[override]
"""Propagate CONNECTION/PORT assignments to both attributes."""
if name in {"CONNECTION", "PORT"}:
super().__setattr__("CONNECTION", value)
super().__setattr__("PORT", value)
return
super().__setattr__(name, value)
sys.modules[__name__].__class__ = _ConfigModule


@@ -131,7 +131,13 @@ def _close_interface(iface_obj) -> None:
iface_obj.close()
except Exception as exc: # pragma: no cover
if config.DEBUG:
config._debug_log(f"error while closing mesh interface: {exc}")
config._debug_log(
"Error closing mesh interface",
context="daemon.close",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if config._CLOSE_TIMEOUT_SECS <= 0 or not _event_wait_allows_default_timeout():
_do_close()
@@ -141,9 +147,11 @@ def _close_interface(iface_obj) -> None:
close_thread.start()
close_thread.join(config._CLOSE_TIMEOUT_SECS)
if close_thread.is_alive():
print(
"[warn] mesh interface did not close within "
f"{config._CLOSE_TIMEOUT_SECS:g}s; continuing shutdown"
config._debug_log(
"Mesh interface close timed out",
context="daemon.close",
severity="warn",
timeout_seconds=config._CLOSE_TIMEOUT_SECS,
)
@@ -159,12 +167,56 @@ def _is_ble_interface(iface_obj) -> bool:
return "ble_interface" in module_name
def _connected_state(candidate) -> bool | None:
"""Return the connection state advertised by ``candidate``.
Parameters:
candidate: Attribute returned from ``iface.isConnected`` on a
Meshtastic interface. The value may be a boolean, a callable that
yields a boolean, or a :class:`threading.Event` instance.
Returns:
``True`` when the interface is believed to be connected, ``False``
when it appears disconnected, and ``None`` when the state cannot be
determined from the provided attribute.
"""
if candidate is None:
return None
if isinstance(candidate, threading.Event):
return candidate.is_set()
is_set_method = getattr(candidate, "is_set", None)
if callable(is_set_method):
try:
return bool(is_set_method())
except Exception:
return None
if callable(candidate):
try:
return bool(candidate())
except Exception:
return None
try:
return bool(candidate)
except Exception: # pragma: no cover - defensive guard
return None
def main() -> None:
"""Run the mesh ingestion daemon until interrupted."""
subscribed = _subscribe_receive_topics()
if config.DEBUG and subscribed:
config._debug_log(f"subscribed to receive topics: {', '.join(subscribed)}")
if subscribed:
config._debug_log(
"Subscribed to receive topics",
context="daemon.subscribe",
severity="info",
topics=subscribed,
)
iface = None
resolved_target = None
@@ -206,11 +258,16 @@ def main() -> None:
signal.signal(signal.SIGTERM, handle_sigterm)
target = config.INSTANCE or "(no POTATOMESH_INSTANCE)"
configured_port = config.PORT
configured_port = config.CONNECTION
active_candidate = configured_port
announced_target = False
print(
f"Mesh daemon: nodes+messages → {target} | port={configured_port or 'auto'} | channel={config.CHANNEL_INDEX}"
config._debug_log(
"Mesh daemon starting",
context="daemon.main",
severity="info",
target=target,
port=configured_port or "auto",
channel=config.CHANNEL_INDEX,
)
try:
while not stop.is_set():
@@ -223,10 +280,17 @@ def main() -> None:
else:
iface, resolved_target = interfaces._create_default_interface()
active_candidate = resolved_target
interfaces._ensure_radio_metadata(iface)
interfaces._ensure_channel_metadata(iface)
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
initial_snapshot_sent = False
if not announced_target and resolved_target:
print(f"[info] using mesh interface: {resolved_target}")
config._debug_log(
"Using mesh interface",
context="daemon.interface",
severity="info",
target=resolved_target,
)
announced_target = True
if energy_saving_enabled and energy_online_secs > 0:
energy_session_deadline = time.monotonic() + energy_online_secs
@@ -239,13 +303,23 @@ def main() -> None:
last_seen_packet_monotonic = iface_connected_at
last_inactivity_reconnect = None
except interfaces.NoAvailableMeshInterface as exc:
print(f"[error] {exc}")
config._debug_log(
"No mesh interface available",
context="daemon.interface",
severity="error",
error_message=str(exc),
)
_close_interface(iface)
raise SystemExit(1) from exc
except Exception as exc:
candidate_desc = active_candidate or "auto"
print(
f"[warn] failed to create mesh interface ({candidate_desc}): {exc}"
config._debug_log(
"Failed to create mesh interface",
context="daemon.interface",
severity="warn",
candidate=candidate_desc,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if configured_port is None:
active_candidate = None
@@ -267,7 +341,11 @@ def main() -> None:
energy_session_deadline is not None
and time.monotonic() >= energy_session_deadline
):
print("[info] energy saving: disconnecting mesh interface")
config._debug_log(
"Energy saving disconnect",
context="daemon.energy",
severity="info",
)
_close_interface(iface)
iface = None
announced_target = False
@@ -279,8 +357,10 @@ def main() -> None:
_is_ble_interface(iface)
and getattr(iface, "client", object()) is None
):
print(
"[info] energy saving: BLE client disconnected; sleeping before retry"
config._debug_log(
"Energy saving BLE disconnect",
context="daemon.energy",
severity="info",
)
_close_interface(iface)
iface = None
@@ -296,7 +376,8 @@ def main() -> None:
node_items = _node_items_snapshot(nodes)
if node_items is None:
config._debug_log(
"skipping node snapshot; nodes changed during iteration"
"Skipping node snapshot due to concurrent modification",
context="daemon.snapshot",
)
else:
processed_snapshot_item = False
@@ -305,15 +386,30 @@ def main() -> None:
try:
handlers.upsert_node(node_id, node)
except Exception as exc:
print(
f"[warn] failed to update node snapshot for {node_id}: {exc}"
config._debug_log(
"Failed to update node snapshot",
context="daemon.snapshot",
severity="warn",
node_id=node_id,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if config.DEBUG:
config._debug_log(f"node object: {node!r}")
config._debug_log(
"Snapshot node payload",
context="daemon.snapshot",
node=node,
)
if processed_snapshot_item:
initial_snapshot_sent = True
except Exception as exc:
print(f"[warn] failed to update node snapshot: {exc}")
config._debug_log(
"Snapshot refresh failed",
context="daemon.snapshot",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
_close_interface(iface)
iface = None
stop.wait(retry_delay)
@@ -354,13 +450,20 @@ def main() -> None:
connected_attr = getattr(iface, "isConnected", None)
believed_disconnected = False
if callable(connected_attr):
try:
believed_disconnected = not bool(connected_attr())
except Exception:
believed_disconnected = False
elif connected_attr is not None:
believed_disconnected = not bool(connected_attr)
connected_state = _connected_state(connected_attr)
if connected_state is None:
if callable(connected_attr):
try:
believed_disconnected = not bool(connected_attr())
except Exception:
believed_disconnected = False
elif connected_attr is not None:
try:
believed_disconnected = not bool(connected_attr)
except Exception: # pragma: no cover - defensive guard
believed_disconnected = False
else:
believed_disconnected = not connected_state
should_reconnect = believed_disconnected or (
inactivity_elapsed >= inactivity_reconnect_secs
@@ -377,9 +480,11 @@ def main() -> None:
if believed_disconnected
else f"no data for {inactivity_elapsed:.0f}s"
)
print(
"[warn] mesh interface inactivity detected "
f"({reason}); reconnecting"
config._debug_log(
"Mesh interface inactivity detected",
context="daemon.interface",
severity="warn",
reason=reason,
)
last_inactivity_reconnect = now_monotonic
_close_interface(iface)
@@ -393,7 +498,11 @@ def main() -> None:
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
stop.wait(config.SNAPSHOT_SECS)
except KeyboardInterrupt: # pragma: no cover - interactive only
config._debug_log("received KeyboardInterrupt; shutting down")
config._debug_log(
"Received KeyboardInterrupt; shutting down",
context="daemon.main",
severity="info",
)
stop.set()
finally:
_close_interface(iface)
@@ -405,5 +514,6 @@ __all__ = [
"_node_items_snapshot",
"_subscribe_receive_topics",
"_is_ble_interface",
"_connected_state",
"main",
]

View File

@@ -21,7 +21,7 @@ import json
import time
from collections.abc import Mapping
from . import config, queue
from . import channels, config, queue
from .serialization import (
_canonical_node_id,
_coerce_float,
@@ -42,6 +42,55 @@ from .serialization import (
)
def _radio_metadata_fields() -> dict[str, object]:
"""Return the shared radio metadata fields for payload enrichment."""
metadata: dict[str, object] = {}
freq = getattr(config, "LORA_FREQ", None)
if freq is not None:
metadata["lora_freq"] = freq
preset = getattr(config, "MODEM_PRESET", None)
if preset is not None:
metadata["modem_preset"] = preset
return metadata
def _apply_radio_metadata(payload: dict) -> dict:
"""Augment ``payload`` with radio metadata when available."""
metadata = _radio_metadata_fields()
if metadata:
payload.update(metadata)
return payload
def _is_encrypted_flag(value) -> bool:
"""Return ``True`` when ``value`` represents an encrypted payload."""
if isinstance(value, bool):
return value
if isinstance(value, (int, float)):
return value != 0
if isinstance(value, str):
normalized = value.strip().lower()
if normalized in {"", "0", "false", "no"}:
return False
return True
return bool(value)
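The coercion above treats common falsy strings as "not encrypted" while any other non-empty string stays truthy. Restating the helper standalone to check the edge cases:

```python
def is_encrypted_flag(value) -> bool:
    """Mirror of the helper above, restated for a self-contained check."""
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return value != 0
    if isinstance(value, str):
        normalized = value.strip().lower()
        if normalized in {"", "0", "false", "no"}:
            return False
        return True
    return bool(value)

assert is_encrypted_flag("  NO ") is False   # whitespace and case are normalized
assert is_encrypted_flag("yes") is True
assert is_encrypted_flag(0) is False
assert is_encrypted_flag(None) is False
```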
def _apply_radio_metadata_to_nodes(payload: dict) -> dict:
"""Attach radio metadata to each node entry stored in ``payload``."""
metadata = _radio_metadata_fields()
if not metadata:
return payload
for value in payload.values():
if isinstance(value, dict):
value.update(metadata)
return payload
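With `LORA_FREQ` and `MODEM_PRESET` cached on `config`, the enrichment step amounts to merging a small metadata dict into every dict-valued node entry. A self-contained sketch, with hard-coded stand-ins replacing the config module:

```python
def radio_metadata(lora_freq=868, modem_preset="LongFast") -> dict:
    """Build the metadata dict; unset values are simply omitted."""
    meta = {}
    if lora_freq is not None:
        meta["lora_freq"] = lora_freq
    if modem_preset is not None:
        meta["modem_preset"] = modem_preset
    return meta

nodes = {"!a1b2c3d4": {"shortName": "AB"}, "note": "not-a-node"}
meta = radio_metadata()
for value in nodes.values():
    if isinstance(value, dict):  # skip non-node entries, as the helper does
        value.update(meta)

assert nodes["!a1b2c3d4"]["lora_freq"] == 868
assert nodes["note"] == "not-a-node"
```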
def upsert_node(node_id, node) -> None:
"""Schedule an upsert for a single node.
@@ -53,7 +102,7 @@ def upsert_node(node_id, node) -> None:
``None``. The payload is forwarded to the shared HTTP queue.
"""
payload = upsert_payload(node_id, node)
payload = _apply_radio_metadata_to_nodes(upsert_payload(node_id, node))
_queue_post_json("/api/nodes", payload, priority=queue._NODE_POST_PRIORITY)
if config.DEBUG:
@@ -61,7 +110,11 @@ def upsert_node(node_id, node) -> None:
short = _get(user, "shortName")
long = _get(user, "longName")
config._debug_log(
f"upserted node {node_id} shortName={short!r} longName={long!r}"
"Queued node upsert payload",
context="handlers.upsert_node",
node_id=node_id,
short_name=short,
long_name=long,
)
@@ -239,12 +292,19 @@ def store_position_packet(packet: Mapping, decoded: Mapping) -> None:
position_payload["raw"] = raw_payload
_queue_post_json(
"/api/positions", position_payload, priority=queue._POSITION_POST_PRIORITY
"/api/positions",
_apply_radio_metadata(position_payload),
priority=queue._POSITION_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
f"stored position for {node_id} lat={latitude!r} lon={longitude!r}"
"Queued position payload",
context="handlers.store_position",
node_id=node_id,
latitude=latitude,
longitude=longitude,
position_time=position_time,
)
@@ -400,6 +460,189 @@ def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
)
)
current = _coerce_float(
_first(
telemetry_section,
"current",
"deviceMetrics.current",
"deviceMetrics.current_ma",
"deviceMetrics.currentMa",
"environmentMetrics.current",
default=None,
)
)
gas_resistance = _coerce_float(
_first(
telemetry_section,
"gasResistance",
"gas_resistance",
"environmentMetrics.gasResistance",
"environmentMetrics.gas_resistance",
default=None,
)
)
iaq = _coerce_int(
_first(
telemetry_section,
"iaq",
"environmentMetrics.iaq",
"environmentMetrics.iaqIndex",
"environmentMetrics.iaq_index",
default=None,
)
)
distance = _coerce_float(
_first(
telemetry_section,
"distance",
"environmentMetrics.distance",
"environmentMetrics.range",
"environmentMetrics.rangeMeters",
default=None,
)
)
lux = _coerce_float(
_first(
telemetry_section,
"lux",
"environmentMetrics.lux",
"environmentMetrics.illuminance",
default=None,
)
)
white_lux = _coerce_float(
_first(
telemetry_section,
"whiteLux",
"white_lux",
"environmentMetrics.whiteLux",
"environmentMetrics.white_lux",
default=None,
)
)
ir_lux = _coerce_float(
_first(
telemetry_section,
"irLux",
"ir_lux",
"environmentMetrics.irLux",
"environmentMetrics.ir_lux",
default=None,
)
)
uv_lux = _coerce_float(
_first(
telemetry_section,
"uvLux",
"uv_lux",
"environmentMetrics.uvLux",
"environmentMetrics.uv_lux",
"environmentMetrics.uvIndex",
default=None,
)
)
wind_direction = _coerce_int(
_first(
telemetry_section,
"windDirection",
"wind_direction",
"environmentMetrics.windDirection",
"environmentMetrics.wind_direction",
default=None,
)
)
wind_speed = _coerce_float(
_first(
telemetry_section,
"windSpeed",
"wind_speed",
"environmentMetrics.windSpeed",
"environmentMetrics.wind_speed",
"environmentMetrics.windSpeedMps",
default=None,
)
)
wind_gust = _coerce_float(
_first(
telemetry_section,
"windGust",
"wind_gust",
"environmentMetrics.windGust",
"environmentMetrics.wind_gust",
default=None,
)
)
wind_lull = _coerce_float(
_first(
telemetry_section,
"windLull",
"wind_lull",
"environmentMetrics.windLull",
"environmentMetrics.wind_lull",
default=None,
)
)
weight = _coerce_float(
_first(
telemetry_section,
"weight",
"environmentMetrics.weight",
"environmentMetrics.mass",
default=None,
)
)
radiation = _coerce_float(
_first(
telemetry_section,
"radiation",
"environmentMetrics.radiation",
"environmentMetrics.radiationLevel",
default=None,
)
)
rainfall_1h = _coerce_float(
_first(
telemetry_section,
"rainfall1h",
"rainfall_1h",
"environmentMetrics.rainfall1h",
"environmentMetrics.rainfall_1h",
"environmentMetrics.rainfallOneHour",
default=None,
)
)
rainfall_24h = _coerce_float(
_first(
telemetry_section,
"rainfall24h",
"rainfall_24h",
"environmentMetrics.rainfall24h",
"environmentMetrics.rainfall_24h",
"environmentMetrics.rainfallTwentyFourHour",
default=None,
)
)
soil_moisture = _coerce_int(
_first(
telemetry_section,
"soilMoisture",
"soil_moisture",
"environmentMetrics.soilMoisture",
"environmentMetrics.soil_moisture",
default=None,
)
)
soil_temperature = _coerce_float(
_first(
telemetry_section,
"soilTemperature",
"soil_temperature",
"environmentMetrics.soilTemperature",
"environmentMetrics.soil_temperature",
default=None,
)
)
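Each metric lookup above passes both flat and dotted keys (e.g. `"environmentMetrics.lux"`) to `_first`, which implies it walks dotted paths through nested mappings. A minimal equivalent, under that assumption:

```python
def first_lookup(section, *keys, default=None):
    """Return the first non-None value found among flat or dotted keys."""
    for key in keys:
        value = section
        for part in key.split("."):
            if isinstance(value, dict):
                value = value.get(part)
            else:
                value = None
                break
        if value is not None:
            return value
    return default

telemetry = {"environmentMetrics": {"lux": 120.5}}
assert first_lookup(telemetry, "lux", "environmentMetrics.lux") == 120.5
assert first_lookup(telemetry, "iaq", default=None) is None
```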
telemetry_payload = {
"id": pkt_id,
"node_id": node_id,
@@ -434,14 +677,56 @@ def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
telemetry_payload["relative_humidity"] = relative_humidity
if barometric_pressure is not None:
telemetry_payload["barometric_pressure"] = barometric_pressure
if current is not None:
telemetry_payload["current"] = current
if gas_resistance is not None:
telemetry_payload["gas_resistance"] = gas_resistance
if iaq is not None:
telemetry_payload["iaq"] = iaq
if distance is not None:
telemetry_payload["distance"] = distance
if lux is not None:
telemetry_payload["lux"] = lux
if white_lux is not None:
telemetry_payload["white_lux"] = white_lux
if ir_lux is not None:
telemetry_payload["ir_lux"] = ir_lux
if uv_lux is not None:
telemetry_payload["uv_lux"] = uv_lux
if wind_direction is not None:
telemetry_payload["wind_direction"] = wind_direction
if wind_speed is not None:
telemetry_payload["wind_speed"] = wind_speed
if wind_gust is not None:
telemetry_payload["wind_gust"] = wind_gust
if wind_lull is not None:
telemetry_payload["wind_lull"] = wind_lull
if weight is not None:
telemetry_payload["weight"] = weight
if radiation is not None:
telemetry_payload["radiation"] = radiation
if rainfall_1h is not None:
telemetry_payload["rainfall_1h"] = rainfall_1h
if rainfall_24h is not None:
telemetry_payload["rainfall_24h"] = rainfall_24h
if soil_moisture is not None:
telemetry_payload["soil_moisture"] = soil_moisture
if soil_temperature is not None:
telemetry_payload["soil_temperature"] = soil_temperature
_queue_post_json(
"/api/telemetry", telemetry_payload, priority=queue._TELEMETRY_POST_PRIORITY
"/api/telemetry",
_apply_radio_metadata(telemetry_payload),
priority=queue._TELEMETRY_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
f"stored telemetry for {node_id!r} battery={battery_level!r} voltage={voltage!r}"
"Queued telemetry payload",
context="handlers.store_telemetry",
node_id=node_id,
battery_level=battery_level,
voltage=voltage,
)
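The long run of `if value is not None:` guards above keeps blank metrics out of the payload, in the same spirit as the "Prune blank values" commit. One hypothetical alternative (not what the repository does) is to assemble everything and prune once:

```python
def prune_none(payload: dict) -> dict:
    """Drop keys whose value is None so the API never sees blanks."""
    return {k: v for k, v in payload.items() if v is not None}

payload = prune_none({"id": 7, "lux": None, "wind_speed": 3.2})
assert payload == {"id": 7, "wind_speed": 3.2}
```

The explicit per-field guards have the advantage of never inserting the key at all, which keeps intent visible field by field.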
@@ -594,7 +879,9 @@ def store_nodeinfo_packet(packet: Mapping, decoded: Mapping) -> None:
pass
_queue_post_json(
"/api/nodes", {node_id: node_payload}, priority=queue._NODE_POST_PRIORITY
"/api/nodes",
_apply_radio_metadata_to_nodes({node_id: node_payload}),
priority=queue._NODE_POST_PRIORITY,
)
if config.DEBUG:
@@ -604,7 +891,11 @@ def store_nodeinfo_packet(packet: Mapping, decoded: Mapping) -> None:
short = user_dict.get("shortName")
long_name = user_dict.get("longName")
config._debug_log(
f"stored nodeinfo for {node_id} shortName={short!r} longName={long_name!r}"
"Queued nodeinfo payload",
context="handlers.store_nodeinfo",
node_id=node_id,
short_name=short,
long_name=long_name,
)
@@ -703,11 +994,18 @@ def store_neighborinfo_packet(packet: Mapping, decoded: Mapping) -> None:
if last_sent_by_id is not None:
payload["last_sent_by_id"] = last_sent_by_id
_queue_post_json("/api/neighbors", payload, priority=queue._NEIGHBOR_POST_PRIORITY)
_queue_post_json(
"/api/neighbors",
_apply_radio_metadata(payload),
priority=queue._NEIGHBOR_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
f"stored neighborinfo for {node_id} neighbors={len(neighbor_entries)}"
"Queued neighborinfo payload",
context="handlers.store_neighborinfo",
node_id=node_id,
neighbors=len(neighbor_entries),
)
@@ -783,12 +1081,18 @@ def store_packet_dict(packet: Mapping) -> None:
raw = json.dumps(packet, default=str)
except Exception:
raw = str(packet)
config._debug_log(f"packet missing from_id: {raw}")
config._debug_log(
"Packet missing from_id",
context="handlers.store_packet_dict",
packet=raw,
)
snr = _first(packet, "snr", "rx_snr", "rxSnr", default=None)
rssi = _first(packet, "rssi", "rx_rssi", "rxRssi", default=None)
hop = _first(packet, "hopLimit", "hop_limit", default=None)
encrypted_flag = _is_encrypted_flag(encrypted)
message_payload = {
"id": int(pkt_id),
"rx_time": rx_time,
@@ -803,17 +1107,33 @@ def store_packet_dict(packet: Mapping) -> None:
"rssi": int(rssi) if rssi is not None else None,
"hop_limit": int(hop) if hop is not None else None,
}
channel_name_value = None
if not encrypted_flag:
channel_name_value = channels.channel_name(channel)
if channel_name_value:
message_payload["channel_name"] = channel_name_value
_queue_post_json(
"/api/messages", message_payload, priority=queue._MESSAGE_POST_PRIORITY
"/api/messages",
_apply_radio_metadata(message_payload),
priority=queue._MESSAGE_POST_PRIORITY,
)
if config.DEBUG:
from_label = _canonical_node_id(from_id) or from_id
to_label = _canonical_node_id(to_id) or to_id
payload_desc = "Encrypted" if text is None and encrypted else text
config._debug_log(
f"stored message from {from_label!r} to {to_label!r} ch={channel} text={payload_desc!r}"
)
log_kwargs = {
"context": "handlers.store_packet_dict",
"from_id": from_label,
"to_id": to_label,
"channel": channel,
"channel_display": channel_name_value or channel,
"payload": payload_desc,
}
if channel_name_value:
log_kwargs["channel_name"] = channel_name_value
config._debug_log("Queued message payload", **log_kwargs)
_last_packet_monotonic: float | None = None
@@ -859,7 +1179,14 @@ def on_receive(packet, interface) -> None:
info = (
list(packet_dict.keys()) if isinstance(packet_dict, dict) else type(packet)
)
print(f"[warn] failed to store packet: {exc} | info: {info}")
config._debug_log(
"Failed to store packet",
context="handlers.on_receive",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
packet_info=info,
)
__all__ = [

View File

@@ -21,12 +21,12 @@ import ipaddress
import re
import urllib.parse
from collections.abc import Mapping
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Any
from meshtastic.serial_interface import SerialInterface
from meshtastic.tcp_interface import TCPInterface
from . import config, serialization
from . import channels, config, serialization
if TYPE_CHECKING: # pragma: no cover - import only used for type checking
from meshtastic.ble_interface import BLEInterface as _BLEInterface
@@ -169,6 +169,189 @@ def _patch_meshtastic_ble_receive_loop() -> None:
_patch_meshtastic_ble_receive_loop()
def _has_field(message: Any, field_name: str) -> bool:
"""Return ``True`` when ``message`` advertises ``field_name`` via ``HasField``."""
if message is None:
return False
has_field = getattr(message, "HasField", None)
if callable(has_field):
try:
return bool(has_field(field_name))
except Exception: # pragma: no cover - defensive guard
return False
return hasattr(message, field_name)
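`_has_field` duck-types protobuf's `HasField`, swallowing the `ValueError` that protobuf raises for fields without presence tracking, and falling back to `hasattr` for plain objects. The behaviour can be checked with a stub:

```python
def has_field(message, field_name: str) -> bool:
    """Duck-typed HasField check mirroring the helper above."""
    if message is None:
        return False
    checker = getattr(message, "HasField", None)
    if callable(checker):
        try:
            return bool(checker(field_name))
        except Exception:
            return False
    return hasattr(message, field_name)

class Stub:
    lora = object()

    def HasField(self, name):
        if name != "lora":
            raise ValueError(name)  # protobuf raises for unknown fields
        return True

assert has_field(Stub(), "lora") is True
assert has_field(Stub(), "radio") is False
assert has_field(None, "lora") is False
```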
def _enum_name_from_field(message: Any, field_name: str, value: Any) -> str | None:
"""Return the enum name for ``value`` using ``message`` descriptors."""
descriptor = getattr(message, "DESCRIPTOR", None)
if descriptor is None:
return None
fields_by_name = getattr(descriptor, "fields_by_name", {})
field_desc = fields_by_name.get(field_name)
if field_desc is None:
return None
enum_type = getattr(field_desc, "enum_type", None)
if enum_type is None:
return None
enum_values = getattr(enum_type, "values_by_number", {})
enum_value = enum_values.get(value)
if enum_value is None:
return None
return getattr(enum_value, "name", None)
def _resolve_lora_message(local_config: Any) -> Any | None:
"""Return the LoRa configuration sub-message from ``local_config``."""
if local_config is None:
return None
if _has_field(local_config, "lora"):
candidate = getattr(local_config, "lora", None)
if candidate is not None:
return candidate
radio_section = getattr(local_config, "radio", None)
if radio_section is not None:
if _has_field(radio_section, "lora"):
return getattr(radio_section, "lora", None)
if hasattr(radio_section, "lora"):
return getattr(radio_section, "lora")
if hasattr(local_config, "lora"):
return getattr(local_config, "lora")
return None
def _region_frequency(lora_message: Any) -> int | None:
"""Derive the LoRa region frequency in MHz from ``lora_message``."""
if lora_message is None:
return None
region_value = getattr(lora_message, "region", None)
if region_value is None:
return None
enum_name = _enum_name_from_field(lora_message, "region", region_value)
if enum_name:
digits = re.findall(r"\d+", enum_name)
for token in digits:
try:
freq = int(token)
except ValueError: # pragma: no cover - regex guarantees digits
continue
if freq >= 100:
return freq
for token in reversed(digits):
try:
return int(token)
except ValueError: # pragma: no cover - defensive only
continue
if isinstance(region_value, int) and region_value >= 100:
return region_value
return None
def _camelcase_enum_name(name: str | None) -> str | None:
"""Convert ``name`` from ``SCREAMING_SNAKE`` to ``CamelCase``."""
if not name:
return None
parts = re.split(r"[^0-9A-Za-z]+", name.strip())
camel_parts = [part.capitalize() for part in parts if part]
if not camel_parts:
return None
return "".join(camel_parts)
def _modem_preset(lora_message: Any) -> str | None:
"""Return the CamelCase modem preset configured on ``lora_message``."""
if lora_message is None:
return None
descriptor = getattr(lora_message, "DESCRIPTOR", None)
fields_by_name = getattr(descriptor, "fields_by_name", {}) if descriptor else {}
if "modem_preset" in fields_by_name:
preset_field = "modem_preset"
elif "preset" in fields_by_name:
preset_field = "preset"
elif hasattr(lora_message, "modem_preset"):
preset_field = "modem_preset"
elif hasattr(lora_message, "preset"):
preset_field = "preset"
else:
return None
preset_value = getattr(lora_message, preset_field, None)
if preset_value is None:
return None
enum_name = _enum_name_from_field(lora_message, preset_field, preset_value)
if isinstance(enum_name, str) and enum_name:
return _camelcase_enum_name(enum_name)
if isinstance(preset_value, str) and preset_value:
return _camelcase_enum_name(preset_value)
return None
def _ensure_radio_metadata(iface: Any) -> None:
"""Populate cached LoRa metadata by inspecting ``iface`` when available."""
if iface is None:
return
try:
wait_for_config = getattr(iface, "waitForConfig", None)
if callable(wait_for_config):
wait_for_config()
except Exception: # pragma: no cover - hardware dependent guard
pass
local_node = getattr(iface, "localNode", None)
local_config = getattr(local_node, "localConfig", None) if local_node else None
lora_message = _resolve_lora_message(local_config)
if lora_message is None:
return
frequency = _region_frequency(lora_message)
preset = _modem_preset(lora_message)
updated = False
if frequency is not None and getattr(config, "LORA_FREQ", None) is None:
config.LORA_FREQ = frequency
updated = True
if preset is not None and getattr(config, "MODEM_PRESET", None) is None:
config.MODEM_PRESET = preset
updated = True
if updated:
config._debug_log(
"Captured LoRa radio metadata",
context="interfaces.ensure_radio_metadata",
severity="info",
always=True,
lora_freq=frequency,
modem_preset=preset,
)
def _ensure_channel_metadata(iface: Any) -> None:
"""Capture channel metadata by inspecting ``iface`` once per runtime."""
if iface is None:
return
try:
channels.capture_from_interface(iface)
except Exception as exc: # pragma: no cover - defensive instrumentation
config._debug_log(
"Failed to capture channel metadata",
context="interfaces.ensure_channel_metadata",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
_DEFAULT_TCP_PORT = 4403
_DEFAULT_TCP_TARGET = "http://127.0.0.1"
@@ -215,10 +398,14 @@ def _parse_ble_target(value: str) -> str | None:
def _parse_network_target(value: str) -> tuple[str, int] | None:
"""Return ``(host, port)`` when ``value`` is an IP address string.
"""Return ``(host, port)`` when ``value`` is a numeric IP address string.
Only literal IPv4 or IPv6 addresses are accepted, optionally paired with a
port or scheme. Callers that start from hostnames should resolve them to an
address before invoking this helper.
Parameters:
value: Hostname or URL describing the TCP interface.
value: Numeric IP literal or URL describing the TCP interface.
Returns:
A ``(host, port)`` tuple or ``None`` when parsing fails.
@@ -313,21 +500,38 @@ def _create_serial_interface(port: str) -> tuple[object, str]:
port_value = (port or "").strip()
if port_value.lower() in {"", "mock", "none", "null", "disabled"}:
config._debug_log(f"using dummy serial interface for port={port_value!r}")
config._debug_log(
"Using dummy serial interface",
context="interfaces.serial",
port=port_value,
)
return _DummySerialInterface(), "mock"
ble_target = _parse_ble_target(port_value)
if ble_target:
config._debug_log(f"using BLE interface for address={ble_target}")
config._debug_log(
"Using BLE interface",
context="interfaces.ble",
address=ble_target,
)
return _load_ble_interface()(address=ble_target), ble_target
network_target = _parse_network_target(port_value)
if network_target:
host, tcp_port = network_target
config._debug_log(f"using TCP interface for host={host!r} port={tcp_port!r}")
config._debug_log(
"Using TCP interface",
context="interfaces.tcp",
host=host,
port=tcp_port,
)
return (
TCPInterface(hostname=host, portNumber=tcp_port),
f"tcp://{host}:{tcp_port}",
)
config._debug_log(f"using serial interface for port={port_value!r}")
config._debug_log(
"Using serial interface",
context="interfaces.serial",
port=port_value,
)
return SerialInterface(devPath=port_value), port_value
@@ -366,12 +570,24 @@ def _create_default_interface() -> tuple[object, str]:
return _create_serial_interface(candidate)
except Exception as exc: # pragma: no cover - hardware dependent
errors.append((candidate, exc))
config._debug_log(f"failed to open serial candidate {candidate!r}: {exc}")
config._debug_log(
"Failed to open serial candidate",
context="interfaces.auto_discovery",
target=candidate,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
try:
return _create_serial_interface(_DEFAULT_TCP_TARGET)
except Exception as exc: # pragma: no cover - network dependent
errors.append((_DEFAULT_TCP_TARGET, exc))
config._debug_log(f"failed to open TCP fallback {_DEFAULT_TCP_TARGET!r}: {exc}")
config._debug_log(
"Failed to open TCP fallback",
context="interfaces.auto_discovery",
target=_DEFAULT_TCP_TARGET,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if errors:
summary = "; ".join(f"{target}: {error}" for target, error in errors)
raise NoAvailableMeshInterface(
@@ -383,6 +599,8 @@ def _create_default_interface() -> tuple[object, str]:
__all__ = [
"BLEInterface",
"NoAvailableMeshInterface",
"_ensure_channel_metadata",
"_ensure_radio_metadata",
"_DummySerialInterface",
"_DEFAULT_TCP_PORT",
"_DEFAULT_TCP_TARGET",

View File

@@ -72,16 +72,37 @@ def _post_json(
return
url = f"{instance}{path}"
data = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
url, data=data, headers={"Content-Type": "application/json"}
)
    # Send browser-like headers so Cloudflare-proxied instances do not block POSTs
headers = {
"Content-Type": "application/json",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
"Accept": "application/json",
"Accept-Language": "en-US,en;q=0.9",
"Origin": f"{instance}",
"Referer": f"{instance}",
}
if api_token:
req.add_header("Authorization", f"Bearer {api_token}")
headers["Authorization"] = f"Bearer {api_token}"
req = urllib.request.Request(
url,
data=data,
headers=headers,
)
try:
with urllib.request.urlopen(req, timeout=10) as resp:
resp.read()
except Exception as exc: # pragma: no cover - exercised in production
config._debug_log(f"[warn] POST {url} failed: {exc}")
config._debug_log(
"POST request failed",
context="queue.post_json",
severity="warn",
url=url,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
def _enqueue_post_json(
@@ -122,6 +143,7 @@ def _drain_post_queue(
while True:
with state.lock:
if not state.queue:
state.active = False
return
_priority, _idx, path, payload = heapq.heappop(state.queue)
send(path, payload)
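The one-line fix above resets `state.active` under the lock before the drain worker exits, so a later enqueue can spawn a fresh worker instead of assuming one is still running. A sketch of the pattern (class and function names are stand-ins, not the repository's):

```python
import heapq
import threading

class QueueState:
    def __init__(self) -> None:
        self.lock = threading.Lock()
        self.queue: list = []  # entries: (priority, idx, path, payload)
        self.active = False

def drain(state: QueueState, send) -> None:
    """Pop entries by priority; mark the worker inactive before exiting."""
    while True:
        with state.lock:
            if not state.queue:
                state.active = False  # lets the next enqueue start a worker
                return
            _prio, _idx, path, payload = heapq.heappop(state.queue)
        send(path, payload)  # network call happens outside the lock

state = QueueState()
for idx, (prio, path) in enumerate([(2, "/api/nodes"), (1, "/api/messages")]):
    heapq.heappush(state.queue, (prio, idx, path, {}))
sent = []
drain(state, lambda path, payload: sent.append(path))
assert sent == ["/api/messages", "/api/nodes"]  # lower priority value first
assert state.active is False
```

Without the reset, the flag would stay `True` after the queue emptied and new payloads would sit unsent until process restart.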

View File

@@ -24,7 +24,10 @@ CREATE TABLE IF NOT EXISTS messages (
encrypted TEXT,
snr REAL,
rssi INTEGER,
hop_limit INTEGER
hop_limit INTEGER,
lora_freq INTEGER,
modem_preset TEXT,
channel_name TEXT
);
CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time);

View File

@@ -0,0 +1,22 @@
-- Copyright (C) 2025 l5yth
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
-- Extend the nodes and messages tables with LoRa metadata columns.
BEGIN;
ALTER TABLE nodes ADD COLUMN lora_freq INTEGER;
ALTER TABLE nodes ADD COLUMN modem_preset TEXT;
ALTER TABLE messages ADD COLUMN lora_freq INTEGER;
ALTER TABLE messages ADD COLUMN modem_preset TEXT;
ALTER TABLE messages ADD COLUMN channel_name TEXT;
COMMIT;
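SQLite's `ALTER TABLE ... ADD COLUMN` fails if the column already exists, so a migration runner typically checks `PRAGMA table_info` first to stay idempotent. A sketch of that guard (the runner itself is hypothetical; column names are borrowed from the migration above):

```python
import sqlite3

def add_column_if_missing(conn, table: str, column: str, col_type: str) -> None:
    """ALTER TABLE only when PRAGMA table_info lacks the column."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {col_type}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY)")
add_column_if_missing(conn, "messages", "lora_freq", "INTEGER")
add_column_if_missing(conn, "messages", "lora_freq", "INTEGER")  # no-op on rerun
cols = {row[1] for row in conn.execute("PRAGMA table_info(messages)")}
assert "lora_freq" in cols
```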

View File

@@ -0,0 +1,35 @@
-- Copyright (C) 2025 l5yth
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
-- Extend the telemetry table with additional environment metrics.
BEGIN;
ALTER TABLE telemetry ADD COLUMN gas_resistance REAL;
ALTER TABLE telemetry ADD COLUMN current REAL;
ALTER TABLE telemetry ADD COLUMN iaq INTEGER;
ALTER TABLE telemetry ADD COLUMN distance REAL;
ALTER TABLE telemetry ADD COLUMN lux REAL;
ALTER TABLE telemetry ADD COLUMN white_lux REAL;
ALTER TABLE telemetry ADD COLUMN ir_lux REAL;
ALTER TABLE telemetry ADD COLUMN uv_lux REAL;
ALTER TABLE telemetry ADD COLUMN wind_direction INTEGER;
ALTER TABLE telemetry ADD COLUMN wind_speed REAL;
ALTER TABLE telemetry ADD COLUMN weight REAL;
ALTER TABLE telemetry ADD COLUMN wind_gust REAL;
ALTER TABLE telemetry ADD COLUMN wind_lull REAL;
ALTER TABLE telemetry ADD COLUMN radiation REAL;
ALTER TABLE telemetry ADD COLUMN rainfall_1h REAL;
ALTER TABLE telemetry ADD COLUMN rainfall_24h REAL;
ALTER TABLE telemetry ADD COLUMN soil_moisture INTEGER;
ALTER TABLE telemetry ADD COLUMN soil_temperature REAL;
COMMIT;

View File

@@ -39,7 +39,9 @@ CREATE TABLE IF NOT EXISTS nodes (
precision_bits INTEGER,
latitude REAL,
longitude REAL,
altitude REAL
altitude REAL,
lora_freq INTEGER,
modem_preset TEXT
);
CREATE INDEX IF NOT EXISTS idx_nodes_last_heard ON nodes(last_heard);

View File

@@ -1,7 +1,8 @@
# Production dependencies
meshtastic>=2.0.0
protobuf>=4.21.12
meshtastic>=2.5.0
protobuf>=5.27.2
# Development dependencies (optional)
black>=23.0.0
pytest>=7.0.0
black>=24.8.0
pytest>=8.3.0
pytest-cov>=5.0.0

View File

@@ -35,7 +35,25 @@ CREATE TABLE IF NOT EXISTS telemetry (
uptime_seconds INTEGER,
temperature REAL,
relative_humidity REAL,
barometric_pressure REAL
barometric_pressure REAL,
gas_resistance REAL,
current REAL,
iaq INTEGER,
distance REAL,
lux REAL,
white_lux REAL,
ir_lux REAL,
uv_lux REAL,
wind_direction INTEGER,
wind_speed REAL,
weight REAL,
wind_gust REAL,
wind_lull REAL,
radiation REAL,
rainfall_1h REAL,
rainfall_24h REAL,
soil_moisture INTEGER,
soil_temperature REAL
);
CREATE INDEX IF NOT EXISTS idx_telemetry_rx_time ON telemetry(rx_time);

View File

@@ -5,7 +5,8 @@ services:
DEBUG: 1
volumes:
- ./web:/app
- ./data:/data
- ./data:/app/.local/share/potato-mesh
- ./.config/potato-mesh:/app/.config/potato-mesh
- /app/vendor/bundle
web-bridge:
@@ -13,7 +14,8 @@ services:
DEBUG: 1
volumes:
- ./web:/app
- ./data:/data
- ./data:/app/.local/share/potato-mesh
- ./.config/potato-mesh:/app/.config/potato-mesh
- /app/vendor/bundle
ports:
- "41447:41447"
@@ -24,11 +26,17 @@ services:
DEBUG: 1
volumes:
- ./data:/app
- ./data:/app/.local/share/potato-mesh
- ./.config/potato-mesh:/app/.config/potato-mesh
- /app/.local
- /dev:/dev
ingestor-bridge:
environment:
DEBUG: 1
volumes:
- ./data:/app
- ./data:/app/.local/share/potato-mesh
- ./.config/potato-mesh:/app/.config/potato-mesh
- /app/.local
- /dev:/dev

View File

@@ -3,18 +3,21 @@ x-web-base: &web-base
environment:
APP_ENV: ${APP_ENV:-production}
RACK_ENV: ${RACK_ENV:-production}
SITE_NAME: ${SITE_NAME:-My Meshtastic Network}
DEFAULT_CHANNEL: ${DEFAULT_CHANNEL:-#MediumFast}
DEFAULT_FREQUENCY: ${DEFAULT_FREQUENCY:-868MHz}
MAP_CENTER_LAT: ${MAP_CENTER_LAT:-52.502889}
MAP_CENTER_LON: ${MAP_CENTER_LON:-13.404194}
MAX_NODE_DISTANCE_KM: ${MAX_NODE_DISTANCE_KM:-50}
MATRIX_ROOM: ${MATRIX_ROOM:-}
SITE_NAME: ${SITE_NAME:-PotatoMesh Demo}
CHANNEL: ${CHANNEL:-#LongFast}
FREQUENCY: ${FREQUENCY:-915MHz}
MAP_CENTER: ${MAP_CENTER:-38.761944,-27.090833}
MAX_DISTANCE: ${MAX_DISTANCE:-42}
CONTACT_LINK: ${CONTACT_LINK:-#potatomesh:dod.ngo}
FEDERATION: ${FEDERATION:-1}
PRIVATE: ${PRIVATE:-0}
API_TOKEN: ${API_TOKEN}
INSTANCE_DOMAIN: ${INSTANCE_DOMAIN}
DEBUG: ${DEBUG:-0}
command: ["ruby", "app.rb", "-p", "41447", "-o", "0.0.0.0"]
volumes:
- potatomesh_data:/app/data
- potatomesh_data:/app/.local/share/potato-mesh
- potatomesh_config:/app/.config/potato-mesh
- potatomesh_logs:/app/logs
restart: unless-stopped
deploy:
@@ -29,17 +32,23 @@ x-web-base: &web-base
x-ingestor-base: &ingestor-base
image: ghcr.io/l5yth/potato-mesh-ingestor-${POTATOMESH_IMAGE_ARCH:-linux-amd64}:latest
environment:
MESH_SERIAL: ${MESH_SERIAL:-/dev/ttyACM0}
MESH_SNAPSHOT_SECS: ${MESH_SNAPSHOT_SECS:-60}
MESH_CHANNEL_INDEX: ${MESH_CHANNEL_INDEX:-0}
CONNECTION: ${CONNECTION:-/dev/ttyACM0}
CHANNEL_INDEX: ${CHANNEL_INDEX:-0}
POTATOMESH_INSTANCE: ${POTATOMESH_INSTANCE:-http://web:41447}
API_TOKEN: ${API_TOKEN}
INSTANCE_DOMAIN: ${INSTANCE_DOMAIN}
DEBUG: ${DEBUG:-0}
FEDERATION: ${FEDERATION:-1}
PRIVATE: ${PRIVATE:-0}
volumes:
- potatomesh_data:/app/data
- potatomesh_data:/app/.local/share/potato-mesh
- potatomesh_config:/app/.config/potato-mesh
- potatomesh_logs:/app/logs
devices:
- ${MESH_SERIAL:-/dev/ttyACM0}:${MESH_SERIAL:-/dev/ttyACM0}
- /dev:/dev
device_cgroup_rules:
- 'c 166:* rwm' # ttyACM devices
- 'c 188:* rwm' # ttyUSB devices
- 'c 4:* rwm' # ttyS devices
privileged: false
restart: unless-stopped
deploy:
@@ -87,6 +96,8 @@ services:
volumes:
potatomesh_data:
driver: local
potatomesh_config:
driver: local
potatomesh_logs:
driver: local

View File

@@ -28,7 +28,10 @@ from meshtastic.mesh_interface import MeshInterface
from meshtastic.serial_interface import SerialInterface
from pubsub import pub
PORT = os.environ.get("MESH_SERIAL", "/dev/ttyACM0")
CONNECTION = os.environ.get("CONNECTION") or os.environ.get(
"MESH_SERIAL", "/dev/ttyACM0"
)
"""Connection target opened to capture Meshtastic traffic."""
OUT = os.environ.get("MESH_DUMP_FILE", "meshtastic-dump.ndjson")
# line-buffered append so you can tail -f safely
@@ -54,7 +57,7 @@ def write(kind: str, payload: dict) -> None:
# Connect to the node
iface: MeshInterface = SerialInterface(PORT)
iface: MeshInterface = SerialInterface(CONNECTION)
# Packet callback: every RF/Mesh packet the node receives/decodes lands here
@@ -92,12 +95,12 @@ try:
"meta",
{
"event": "started",
"port": PORT,
"port": CONNECTION,
"my_node_num": getattr(my, "my_node_num", None) if my else None,
},
)
except Exception as e:
write("meta", {"event": "started", "port": PORT, "error": str(e)})
write("meta", {"event": "started", "port": CONNECTION, "error": str(e)})
# Keep the process alive until Ctrl-C
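One nuance of the `os.environ.get("CONNECTION") or os.environ.get("MESH_SERIAL", ...)` chain above: `or` also falls through when `CONNECTION` is set but empty, which is usually what you want for connection strings. Factored into a testable helper (name is illustrative):

```python
def resolve_connection(env: dict) -> str:
    """Prefer CONNECTION, then the legacy MESH_SERIAL variable, then a default."""
    return env.get("CONNECTION") or env.get("MESH_SERIAL") or "/dev/ttyACM0"

assert resolve_connection({}) == "/dev/ttyACM0"
assert resolve_connection({"CONNECTION": ""}) == "/dev/ttyACM0"  # empty falls through
assert resolve_connection({"MESH_SERIAL": "/dev/ttyUSB0"}) == "/dev/ttyUSB0"
assert resolve_connection({"CONNECTION": "tcp://192.168.1.2"}) == "tcp://192.168.1.2"
```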

View File

@@ -0,0 +1,239 @@
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Minimal Meshtastic protobuf stubs for isolated unit testing."""
from __future__ import annotations
import json
import types
from typing import Any, Callable, Dict, Tuple
def _enum_value(name: str, mapping: Dict[str, int]) -> int:
normalized = name.upper()
if normalized not in mapping:
raise KeyError(f"Unknown enum value: {name}")
return mapping[normalized]
def build(message_base, decode_error) -> Tuple[types.ModuleType, types.ModuleType]:
"""Return ``(config_pb2, mesh_pb2)`` stubs built from protobuf shims."""
class _ProtoMessage(message_base):
"""Base class implementing JSON round-tripping for protobuf stubs."""
_FIELD_ALIASES: Dict[str, str] = {}
_FIELD_FACTORIES: Dict[str, Callable[[], "_ProtoMessage"]] = {}
def __init__(self) -> None:
super().__init__()
object.__setattr__(self, "_fields", {})
def __setattr__(
self, name: str, value: Any
) -> None: # noqa: D401 - behaviour documented on base class
object.__setattr__(self, name, value)
if not name.startswith("_"):
self._fields[name] = value
def __getattr__(self, name: str) -> Any:
factories = getattr(self, "_FIELD_FACTORIES", {})
if name in factories:
value = factories[name]()
self.__setattr__(name, value)
return value
raise AttributeError(name)
def _alias_for(self, name: str) -> str:
return self._FIELD_ALIASES.get(name, name)
def _name_for(self, alias: str) -> str:
reverse = getattr(self, "_FIELD_ALIASES", {})
for key, candidate in reverse.items():
if candidate == alias:
return key
return alias
def _to_dict(self) -> Dict[str, Any]:
result: Dict[str, Any] = {}
for name, value in self._fields.items():
alias = self._alias_for(name)
if isinstance(value, _ProtoMessage):
result[alias] = value._to_dict()
elif isinstance(value, list):
result[alias] = [
item._to_dict() if isinstance(item, _ProtoMessage) else item
for item in value
]
else:
result[alias] = value
return result
def SerializeToString(self) -> bytes:
"""Encode the message contents as a JSON byte string."""
return json.dumps(self._to_dict(), sort_keys=True).encode("utf-8")
def ParseFromString(self, payload: bytes) -> None:
"""Populate the message from a JSON byte string."""
try:
data = json.loads(payload.decode("utf-8"))
except Exception as exc: # pragma: no cover - defensive guard
raise decode_error(str(exc)) from exc
self._load_from_dict(data)
def _load_from_dict(self, data: Dict[str, Any]) -> None:
factories = getattr(self, "_FIELD_FACTORIES", {})
for alias, value in data.items():
name = self._name_for(alias)
if name in factories and isinstance(value, dict):
nested = getattr(self, name, None)
if not isinstance(nested, _ProtoMessage):
nested = factories[name]()
object.__setattr__(self, name, nested)
nested._load_from_dict(value)
self._fields[name] = nested
else:
setattr(self, name, value)
def to_dict(self) -> Dict[str, Any]:
"""Return a JSON-compatible representation of the message."""
return self._to_dict()
def ListFields(self):
"""Mimic protobuf ``ListFields`` for the subset of tests used."""
from types import SimpleNamespace
entries = []
for name, value in self._fields.items():
descriptor = SimpleNamespace(name=name)
entries.append((descriptor, value))
return entries
def CopyFrom(self, other: "_ProtoMessage") -> None:
"""Populate this message with values from ``other``."""
if not isinstance(other, _ProtoMessage):
raise TypeError("CopyFrom expects another protobuf message")
self._fields.clear()
for name, value in other._fields.items():
if isinstance(value, _ProtoMessage):
copied = type(value)()
copied.CopyFrom(value)
setattr(self, name, copied)
elif isinstance(value, list):
converted = []
for item in value:
if isinstance(item, _ProtoMessage):
nested = type(item)()
nested.CopyFrom(item)
converted.append(nested)
else:
converted.append(item)
setattr(self, name, converted)
else:
setattr(self, name, value)
class _DeviceMetrics(_ProtoMessage):
_FIELD_ALIASES = {
"battery_level": "batteryLevel",
"voltage": "voltage",
"channel_utilization": "channelUtilization",
"air_util_tx": "airUtilTx",
"uptime_seconds": "uptimeSeconds",
}
class _Position(_ProtoMessage):
_FIELD_ALIASES = {
"latitude_i": "latitudeI",
"longitude_i": "longitudeI",
"location_source": "locationSource",
}
class LocSource:
_VALUES = {
"LOC_UNSET": 0,
"LOC_INTERNAL": 1,
"LOC_EXTERNAL": 2,
}
@classmethod
def Value(cls, name: str) -> int:
return _enum_value(name, cls._VALUES)
class _User(_ProtoMessage):
_FIELD_ALIASES = {
"short_name": "shortName",
"long_name": "longName",
"hw_model": "hwModel",
}
class _NodeInfo(_ProtoMessage):
_FIELD_ALIASES = {
"last_heard": "lastHeard",
"is_favorite": "isFavorite",
"hops_away": "hopsAway",
}
_FIELD_FACTORIES = {
"user": _User,
"device_metrics": _DeviceMetrics,
"position": _Position,
}
def __init__(self) -> None:
super().__init__()
class _HardwareModel:
_VALUES = {
"UNKNOWN": 0,
"TBEAM": 1,
"HELTEC": 2,
}
@classmethod
def Value(cls, name: str) -> int:
return _enum_value(name, cls._VALUES)
mesh_pb2 = types.ModuleType("mesh_pb2")
mesh_pb2.NodeInfo = _NodeInfo
mesh_pb2.User = _User
mesh_pb2.Position = _Position
mesh_pb2.DeviceMetrics = _DeviceMetrics
mesh_pb2.HardwareModel = _HardwareModel
class _RoleEnum:
_VALUES = {
"UNKNOWN": 0,
"CLIENT": 1,
"REPEATER": 2,
"ROUTER": 3,
}
@classmethod
def Value(cls, name: str) -> int:
return _enum_value(name, cls._VALUES)
class _DeviceConfig:
Role = _RoleEnum
class _Config:
DeviceConfig = _DeviceConfig
config_pb2 = types.ModuleType("config_pb2")
config_pb2.Config = _Config
return config_pb2, mesh_pb2
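The core trick in `_ProtoMessage` is JSON round-tripping through the `_FIELD_ALIASES` map, so snake_case attribute writes serialize under protobuf's camelCase JSON names. A compressed, standalone sketch of that pattern (class and field names here are illustrative):

```python
import json
from typing import Any, Dict

class AliasMessage:
    # Minimal version of the stub's snake_case -> camelCase round trip.
    _FIELD_ALIASES: Dict[str, str] = {"battery_level": "batteryLevel"}

    def __init__(self) -> None:
        self._fields: Dict[str, Any] = {}

    def __setattr__(self, name: str, value: Any) -> None:
        object.__setattr__(self, name, value)
        if not name.startswith("_"):  # track public fields only
            self._fields[name] = value

    def SerializeToString(self) -> bytes:
        data = {self._FIELD_ALIASES.get(k, k): v for k, v in self._fields.items()}
        return json.dumps(data, sort_keys=True).encode("utf-8")

    def ParseFromString(self, payload: bytes) -> None:
        reverse = {v: k for k, v in self._FIELD_ALIASES.items()}
        for alias, value in json.loads(payload.decode("utf-8")).items():
            setattr(self, reverse.get(alias, alias), value)
```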

@@ -11,6 +11,9 @@
"rssi": -121,
"hop_limit": 1,
"snr": -13.25,
"lora_freq": 915,
"modem_preset": "LONG_FAST",
"channel_name": "SpecChannel",
"node": {
"snr": -13.25,
"node_id": "!bba83318",
@@ -50,6 +53,9 @@
"rssi": -117,
"hop_limit": 3,
"snr": -12.0,
"lora_freq": 868,
"modem_preset": "MEDIUM_SLOW",
"channel_name": "SpecChannel",
"node": {
"snr": -12.0,
"node_id": "!43b6e530",

@@ -20,7 +20,9 @@
"last_seen_iso": "2025-09-16T12:05:30Z",
"pos_time_iso": "2025-09-16T12:05:30Z",
"location_source": "LOC_FIXTURE_0",
"precision_bits": 10
"precision_bits": 10,
"lora_freq": 915,
"modem_preset": "LONG_FAST"
},
{
"node_id": "!d1edc388",
@@ -65,7 +67,9 @@
"last_seen_iso": "2025-09-16T12:05:05Z",
"pos_time_iso": "2025-09-16T12:05:05Z",
"location_source": "LOC_FIXTURE_2",
"precision_bits": 12
"precision_bits": 12,
"lora_freq": 868,
"modem_preset": "MEDIUM_SLOW"
},
{
"node_id": "!33602324",

@@ -12,12 +12,31 @@
"battery_level": 101,
"bitfield": 1,
"payload_b64": "DTVr0mgSFQhlFQIrh0AdJb8YPyXYFSA9KJTPEg==",
"current": 0.0715,
"gas_resistance": 1456.0,
"iaq": 83,
"distance": 12.5,
"lux": 100.25,
"white_lux": 64.5,
"ir_lux": 12.75,
"uv_lux": 1.6,
"wind_direction": 270,
"wind_speed": 5.9,
"wind_gust": 7.4,
"wind_lull": 4.8,
"weight": 32.7,
"radiation": 0.45,
"rainfall_1h": 0.18,
"rainfall_24h": 1.42,
"soil_moisture": 3100,
"soil_temperature": 18.9,
"device_metrics": {
"batteryLevel": 101,
"voltage": 4.224,
"channelUtilization": 0.59666663,
"airUtilTx": 0.03908333,
"uptimeSeconds": 305044
"uptimeSeconds": 305044,
"current": 0.0715
},
"raw": {
"device_metrics": {
@@ -43,7 +62,24 @@
"environment_metrics": {
"temperature": 21.98,
"relativeHumidity": 39.475586,
"barometricPressure": 1017.8353
"barometricPressure": 1017.8353,
"gasResistance": 1456.0,
"iaq": 83,
"distance": 12.5,
"lux": 100.25,
"whiteLux": 64.5,
"irLux": 12.75,
"uvLux": 1.6,
"windDirection": 270,
"windSpeed": 5.9,
"windGust": 7.4,
"windLull": 4.8,
"weight": 32.7,
"radiation": 0.45,
"rainfall1h": 0.18,
"rainfall24h": 1.42,
"soilMoisture": 3100,
"soilTemperature": 18.9
},
"raw": {
"environment_metrics": {
@@ -70,7 +106,22 @@
"voltage": 3.92,
"channel_utilization": 0.284,
"air_util_tx": 0.051,
"uptime_seconds": 86400
"uptime_seconds": 86400,
"current": 0.033
},
"environment_metrics": {
"temperature": 19.5,
"relative_humidity": 48.2,
"barometric_pressure": 1013.1,
"distance": 7.25,
"lux": 75.5,
"whiteLux": 40.0,
"windDirection": 180,
"windSpeed": 4.3,
"weight": 28.4,
"rainfall24h": 0.75,
"soilMoisture": 2850,
"soilTemperature": 17.1
},
"local_stats": {
"numPacketsTx": 1280,

@@ -14,7 +14,9 @@
import base64
import importlib
import re
import sys
import threading
import types
"""End-to-end tests covering the mesh ingestion package."""
@@ -23,6 +25,8 @@ from dataclasses import dataclass
from pathlib import Path
from types import SimpleNamespace
from meshtastic_protobuf_stub import build as build_protobuf_stub
import pytest
@@ -42,69 +46,6 @@ def mesh_module(monkeypatch):
getattr(real_meshtastic, "protobuf", None) if real_meshtastic else None
)
# Stub meshtastic.serial_interface.SerialInterface
serial_interface_mod = types.ModuleType("meshtastic.serial_interface")
class DummySerialInterface:
def __init__(self, *_, **__):
self.closed = False
def close(self):
self.closed = True
serial_interface_mod.SerialInterface = DummySerialInterface
tcp_interface_mod = types.ModuleType("meshtastic.tcp_interface")
class DummyTCPInterface:
def __init__(self, *_, **__):
self.closed = False
def close(self):
self.closed = True
tcp_interface_mod.TCPInterface = DummyTCPInterface
ble_interface_mod = types.ModuleType("meshtastic.ble_interface")
class DummyBLEInterface:
def __init__(self, *_, **__):
self.closed = False
def close(self):
self.closed = True
ble_interface_mod.BLEInterface = DummyBLEInterface
meshtastic_mod = types.ModuleType("meshtastic")
meshtastic_mod.serial_interface = serial_interface_mod
meshtastic_mod.tcp_interface = tcp_interface_mod
meshtastic_mod.ble_interface = ble_interface_mod
if real_protobuf is not None:
meshtastic_mod.protobuf = real_protobuf
monkeypatch.setitem(sys.modules, "meshtastic", meshtastic_mod)
monkeypatch.setitem(
sys.modules, "meshtastic.serial_interface", serial_interface_mod
)
monkeypatch.setitem(sys.modules, "meshtastic.tcp_interface", tcp_interface_mod)
monkeypatch.setitem(sys.modules, "meshtastic.ble_interface", ble_interface_mod)
if real_protobuf is not None:
monkeypatch.setitem(sys.modules, "meshtastic.protobuf", real_protobuf)
# Stub pubsub.pub
pubsub_mod = types.ModuleType("pubsub")
class DummyPub:
def __init__(self):
self.subscriptions = []
def subscribe(self, *args, **kwargs):
self.subscriptions.append((args, kwargs))
pubsub_mod.pub = DummyPub()
monkeypatch.setitem(sys.modules, "pubsub", pubsub_mod)
# Prefer real google.protobuf modules when available, otherwise provide stubs
try:
from google.protobuf import json_format as json_format_mod # type: ignore
@@ -147,6 +88,88 @@ def mesh_module(monkeypatch):
monkeypatch.setitem(sys.modules, "google.protobuf.json_format", json_format_mod)
monkeypatch.setitem(sys.modules, "google.protobuf.message", message_mod)
message_module = sys.modules.get("google.protobuf.message", message_mod)
# Stub meshtastic.serial_interface.SerialInterface
serial_interface_mod = types.ModuleType("meshtastic.serial_interface")
class DummySerialInterface:
def __init__(self, *_, **__):
self.closed = False
def close(self):
self.closed = True
serial_interface_mod.SerialInterface = DummySerialInterface
tcp_interface_mod = types.ModuleType("meshtastic.tcp_interface")
class DummyTCPInterface:
def __init__(self, *_, **__):
self.closed = False
def close(self):
self.closed = True
tcp_interface_mod.TCPInterface = DummyTCPInterface
ble_interface_mod = types.ModuleType("meshtastic.ble_interface")
class DummyBLEInterface:
def __init__(self, *_, **__):
self.closed = False
def close(self):
self.closed = True
ble_interface_mod.BLEInterface = DummyBLEInterface
meshtastic_mod = types.ModuleType("meshtastic")
meshtastic_mod.serial_interface = serial_interface_mod
meshtastic_mod.tcp_interface = tcp_interface_mod
meshtastic_mod.ble_interface = ble_interface_mod
if real_protobuf is not None:
meshtastic_mod.protobuf = real_protobuf
else:
serialization_mod = sys.modules.get("data.mesh_ingestor.serialization")
proto_base = getattr(serialization_mod, "ProtoMessage", message_module.Message)
decode_error = getattr(message_module, "DecodeError", Exception)
config_pb2_mod, mesh_pb2_mod = build_protobuf_stub(
proto_base,
decode_error,
)
protobuf_pkg = types.ModuleType("meshtastic.protobuf")
protobuf_pkg.config_pb2 = config_pb2_mod
protobuf_pkg.mesh_pb2 = mesh_pb2_mod
meshtastic_mod.protobuf = protobuf_pkg
monkeypatch.setitem(sys.modules, "meshtastic.protobuf", protobuf_pkg)
monkeypatch.setitem(
sys.modules, "meshtastic.protobuf.config_pb2", config_pb2_mod
)
monkeypatch.setitem(sys.modules, "meshtastic.protobuf.mesh_pb2", mesh_pb2_mod)
monkeypatch.setitem(sys.modules, "meshtastic", meshtastic_mod)
monkeypatch.setitem(
sys.modules, "meshtastic.serial_interface", serial_interface_mod
)
monkeypatch.setitem(sys.modules, "meshtastic.tcp_interface", tcp_interface_mod)
monkeypatch.setitem(sys.modules, "meshtastic.ble_interface", ble_interface_mod)
if real_protobuf is not None:
monkeypatch.setitem(sys.modules, "meshtastic.protobuf", real_protobuf)
# Stub pubsub.pub
pubsub_mod = types.ModuleType("pubsub")
class DummyPub:
def __init__(self):
self.subscriptions = []
def subscribe(self, *args, **kwargs):
self.subscriptions.append((args, kwargs))
pubsub_mod.pub = DummyPub()
monkeypatch.setitem(sys.modules, "pubsub", pubsub_mod)
module_name = "data.mesh_ingestor"
if module_name in sys.modules:
module = importlib.reload(sys.modules[module_name])
@@ -156,6 +179,14 @@ def mesh_module(monkeypatch):
if hasattr(module, "_clear_post_queue"):
module._clear_post_queue()
# Ensure radio metadata starts unset for each test run.
module.config.LORA_FREQ = None
module.config.MODEM_PRESET = None
for attr in ("LORA_FREQ", "MODEM_PRESET"):
if attr in module.__dict__:
delattr(module, attr)
module.channels._reset_channel_cache()
yield module
# Ensure a clean import for the next test
@@ -270,6 +301,193 @@ def test_create_serial_interface_ble(mesh_module, monkeypatch):
assert iface.nodes == {}
def test_ensure_radio_metadata_extracts_config(mesh_module, capsys):
mesh = mesh_module
class DummyEnumValue:
def __init__(self, name: str) -> None:
self.name = name
class DummyEnum:
def __init__(self, mapping: dict[int, str]) -> None:
self.values_by_number = {
number: DummyEnumValue(name) for number, name in mapping.items()
}
class DummyField:
def __init__(self, enum_type=None) -> None:
self.enum_type = enum_type
class DummyDescriptor:
def __init__(self, fields: dict[str, DummyField]) -> None:
self.fields_by_name = fields
def make_lora(
region_value: int,
region_name: str,
preset_value: int,
preset_name: str,
*,
preset_field: str = "modem_preset",
):
descriptor = DummyDescriptor(
{
"region": DummyField(DummyEnum({region_value: region_name})),
preset_field: DummyField(DummyEnum({preset_value: preset_name})),
}
)
class DummyLora:
DESCRIPTOR = descriptor
def __init__(self) -> None:
self.region = region_value
setattr(self, preset_field, preset_value)
def HasField(self, name: str) -> bool: # noqa: D401 - simple proxy
return hasattr(self, name)
return DummyLora()
class DummyRadio:
def __init__(self, lora) -> None:
self.lora = lora
def HasField(self, name: str) -> bool:
return hasattr(self, name)
class DummyConfig:
def __init__(self, lora, *, expose_direct: bool) -> None:
if expose_direct:
self.lora = lora
else:
self.radio = DummyRadio(lora)
def HasField(self, name: str) -> bool: # noqa: D401 - mimics protobuf API
return hasattr(self, name)
class DummyLocalNode:
def __init__(self, config) -> None:
self.localConfig = config
class DummyInterface:
def __init__(self, local_config) -> None:
self.localNode = DummyLocalNode(local_config)
self.wait_calls = 0
def waitForConfig(self) -> None: # noqa: D401 - matches Meshtastic API
self.wait_calls += 1
primary_lora = make_lora(3, "EU_868", 4, "MEDIUM_FAST")
iface = DummyInterface(DummyConfig(primary_lora, expose_direct=False))
mesh._ensure_radio_metadata(iface)
first_log = capsys.readouterr().out
assert iface.wait_calls == 1
assert mesh.config.LORA_FREQ == 868
assert mesh.config.MODEM_PRESET == "MediumFast"
assert "Captured LoRa radio metadata" in first_log
assert "lora_freq=868" in first_log
assert "modem_preset='MediumFast'" in first_log
secondary_lora = make_lora(7, "US_915", 2, "LONG_FAST", preset_field="preset")
second_iface = DummyInterface(DummyConfig(secondary_lora, expose_direct=True))
mesh._ensure_radio_metadata(second_iface)
second_log = capsys.readouterr().out
assert second_iface.wait_calls == 1
assert mesh.config.LORA_FREQ == 868
assert mesh.config.MODEM_PRESET == "MediumFast"
assert second_log == ""
def test_capture_channels_from_interface_records_metadata(mesh_module, capsys):
mesh = mesh_module
mesh.config.MODEM_PRESET = "MediumFast"
mesh.channels._reset_channel_cache()
class DummyInterface:
def __init__(self) -> None:
self.wait_calls = 0
primary = SimpleNamespace(
role=1, settings=SimpleNamespace(name=" radioamator ")
)
secondary = SimpleNamespace(
role="SECONDARY",
index="7",
settings=SimpleNamespace(name="TestChannel"),
)
self.localNode = SimpleNamespace(channels=[primary, secondary])
def waitForConfig(self) -> None: # noqa: D401 - matches interface contract
self.wait_calls += 1
iface = DummyInterface()
mesh.channels.capture_from_interface(iface)
log_output = capsys.readouterr().out
assert iface.wait_calls == 1
assert mesh.channels.channel_mappings() == ((0, "radioamator"), (7, "TestChannel"))
assert mesh.channels.channel_name(7) == "TestChannel"
assert "Captured channel metadata" in log_output
assert "channels=((0, 'radioamator'), (7, 'TestChannel'))" in log_output
mesh.channels.capture_from_interface(SimpleNamespace(localNode=None))
assert mesh.channels.channel_mappings() == ((0, "radioamator"), (7, "TestChannel"))
def test_capture_channels_primary_falls_back_to_env(mesh_module, monkeypatch, capsys):
mesh = mesh_module
mesh.config.MODEM_PRESET = None
mesh.channels._reset_channel_cache()
monkeypatch.setenv("CHANNEL", "FallbackName")
class DummyInterface:
def __init__(self) -> None:
self.localNode = SimpleNamespace(
channels={"primary": SimpleNamespace(role="PRIMARY")}
)
def waitForConfig(self) -> None: # noqa: D401 - placeholder
return None
mesh.channels._reset_channel_cache()
mesh.channels.capture_from_interface(DummyInterface())
log_output = capsys.readouterr().out
assert mesh.channels.channel_mappings() == ((0, "FallbackName"),)
assert mesh.channels.channel_name(0) == "FallbackName"
assert "FallbackName" in log_output
def test_capture_channels_primary_falls_back_to_preset(mesh_module, capsys):
mesh = mesh_module
mesh.config.MODEM_PRESET = " MediumFast "
mesh.channels._reset_channel_cache()
class DummyInterface:
def __init__(self) -> None:
self.localNode = SimpleNamespace(
channels=[SimpleNamespace(role="PRIMARY", settings=SimpleNamespace())]
)
def waitForConfig(self) -> None: # noqa: D401 - matches interface contract
return None
mesh.channels.capture_from_interface(DummyInterface())
log_output = capsys.readouterr().out
assert mesh.channels.channel_mappings() == ((0, "MediumFast"),)
assert mesh.channels.channel_name(0) == "MediumFast"
assert "MediumFast" in log_output
def test_create_default_interface_falls_back_to_tcp(mesh_module, monkeypatch):
mesh = mesh_module
attempts = []
@@ -348,6 +566,9 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
packet = {
"id": 123,
"rxTime": 1_700_000_000,
@@ -380,6 +601,8 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
assert payload["hop_limit"] == 3
assert payload["snr"] == pytest.approx(1.25)
assert payload["rssi"] == -70
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
assert priority == mesh._MESSAGE_POST_PRIORITY
@@ -392,6 +615,9 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
packet = {
"id": 200498337,
"rxTime": 1_758_624_186,
@@ -456,6 +682,8 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
payload["payload_b64"]
== "DQDATR8VAMATCBjw//////////8BJb150mgoAljTAXgCgAEAmAEHuAER"
)
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
assert payload["raw"]["time"] == 1_758_624_189
@@ -468,6 +696,9 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
packet = {
"id": 2049886869,
"rxTime": 1_758_884_186,
@@ -509,6 +740,8 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
assert neighbors[1]["snr"] == pytest.approx(-2.75)
assert neighbors[2]["neighbor_id"] == "!0badc0de"
assert neighbors[2]["neighbor_num"] == 0x0BAD_C0DE
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
def test_store_packet_dict_handles_nodeinfo_packet(mesh_module, monkeypatch):
@@ -520,6 +753,9 @@ def test_store_packet_dict_handles_nodeinfo_packet(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
from meshtastic.protobuf import config_pb2, mesh_pb2
node_info = mesh_pb2.NodeInfo()
@@ -579,6 +815,8 @@ def test_store_packet_dict_handles_nodeinfo_packet(mesh_module, monkeypatch):
assert node_entry["position"]["latitude"] == pytest.approx(52.5)
assert node_entry["position"]["longitude"] == pytest.approx(13.4)
assert node_entry["position"]["time"] == 1_700_000_050
assert node_entry["lora_freq"] == 868
assert node_entry["modem_preset"] == "MediumFast"
def test_store_packet_dict_handles_user_only_nodeinfo(mesh_module, monkeypatch):
@@ -590,6 +828,9 @@ def test_store_packet_dict_handles_user_only_nodeinfo(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
from meshtastic.protobuf import mesh_pb2
user_msg = mesh_pb2.User()
@@ -622,6 +863,8 @@ def test_store_packet_dict_handles_user_only_nodeinfo(mesh_module, monkeypatch):
assert node_entry["lastHeard"] == 1_234
assert node_entry["user"]["longName"] == "Test Node"
assert "deviceMetrics" not in node_entry
assert node_entry["lora_freq"] == 868
assert node_entry["modem_preset"] == "MediumFast"
def test_store_packet_dict_nodeinfo_merges_proto_user(mesh_module, monkeypatch):
@@ -633,6 +876,9 @@ def test_store_packet_dict_nodeinfo_merges_proto_user(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
from meshtastic.protobuf import mesh_pb2
user_msg = mesh_pb2.User()
@@ -663,6 +909,8 @@ def test_store_packet_dict_nodeinfo_merges_proto_user(mesh_module, monkeypatch):
assert node_entry["lastHeard"] == 5_000
assert node_entry["user"]["shortName"] == "Proto"
assert node_entry["user"]["longName"] == "Proto User"
assert node_entry["lora_freq"] == 868
assert node_entry["modem_preset"] == "MediumFast"
def test_store_packet_dict_nodeinfo_sanitizes_nested_proto(mesh_module, monkeypatch):
@@ -674,6 +922,9 @@ def test_store_packet_dict_nodeinfo_sanitizes_nested_proto(mesh_module, monkeypa
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
from meshtastic.protobuf import mesh_pb2
user_msg = mesh_pb2.User()
@@ -707,6 +958,8 @@ def test_store_packet_dict_nodeinfo_sanitizes_nested_proto(mesh_module, monkeypa
assert node_entry["user"]["shortName"] == "Nested"
assert isinstance(node_entry["user"]["raw"], dict)
assert node_entry["user"]["raw"]["id"] == "!55667788"
assert node_entry["lora_freq"] == 868
assert node_entry["modem_preset"] == "MediumFast"
def test_store_packet_dict_nodeinfo_uses_from_id_when_user_missing(
@@ -720,6 +973,9 @@ def test_store_packet_dict_nodeinfo_uses_from_id_when_user_missing(
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
from meshtastic.protobuf import mesh_pb2
node_info = mesh_pb2.NodeInfo()
@@ -743,6 +999,8 @@ def test_store_packet_dict_nodeinfo_uses_from_id_when_user_missing(
assert node_entry["num"] == 0x01020304
assert node_entry["lastHeard"] == 200
assert node_entry["snr"] == pytest.approx(1.5)
assert node_entry["lora_freq"] == 868
assert node_entry["modem_preset"] == "MediumFast"
def test_store_packet_dict_ignores_non_text(mesh_module, monkeypatch):
@@ -970,6 +1228,81 @@ def test_main_retries_interface_creation(mesh_module, monkeypatch):
assert iface.closed is True
def test_connected_state_handles_threading_event(mesh_module):
mesh = mesh_module
event = mesh.threading.Event()
assert mesh._connected_state(event) is False
event.set()
assert mesh._connected_state(event) is True
def test_main_reconnects_when_connection_event_clears(mesh_module, monkeypatch):
mesh = mesh_module
attempts = []
interfaces = []
current_iface = {"obj": None}
import threading as real_threading_module
real_event_cls = real_threading_module.Event
class DummyInterface:
def __init__(self):
self.nodes = {}
self.isConnected = real_event_cls()
self.isConnected.set()
self.close_calls = 0
def close(self):
self.close_calls += 1
def fake_create(port):
iface = DummyInterface()
attempts.append(port)
interfaces.append(iface)
current_iface["obj"] = iface
return iface, port
class DummyStopEvent:
def __init__(self):
self._flag = False
self.wait_calls = 0
def is_set(self):
return self._flag
def set(self):
self._flag = True
def wait(self, timeout):
self.wait_calls += 1
if self.wait_calls == 1:
iface = current_iface["obj"]
assert iface is not None, "interface should be available"
iface.isConnected.clear()
return self._flag
self._flag = True
return True
monkeypatch.setattr(mesh, "PORT", "/dev/ttyTEST")
monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
monkeypatch.setattr(mesh.threading, "Event", DummyStopEvent)
monkeypatch.setattr(mesh.signal, "signal", lambda *_, **__: None)
monkeypatch.setattr(mesh, "SNAPSHOT_SECS", 0)
monkeypatch.setattr(mesh, "_RECONNECT_INITIAL_DELAY_SECS", 0)
monkeypatch.setattr(mesh, "_RECONNECT_MAX_DELAY_SECS", 0)
monkeypatch.setattr(mesh, "_CLOSE_TIMEOUT_SECS", 0)
mesh.main()
assert len(attempts) == 2
assert len(interfaces) == 2
assert interfaces[0].close_calls >= 1
assert interfaces[1].close_calls >= 1
def test_main_recreates_interface_after_snapshot_error(mesh_module, monkeypatch):
mesh = mesh_module
@@ -1058,6 +1391,9 @@ def test_store_packet_dict_uses_top_level_channel(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
packet = {
"id": "789",
"rxTime": 123456,
@@ -1077,6 +1413,8 @@ def test_store_packet_dict_uses_top_level_channel(mesh_module, monkeypatch):
assert payload["text"] == "hi"
assert payload["encrypted"] is None
assert payload["snr"] is None and payload["rssi"] is None
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
assert priority == mesh._MESSAGE_POST_PRIORITY
@@ -1089,6 +1427,9 @@ def test_store_packet_dict_handles_invalid_channel(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
packet = {
"id": 321,
"rxTime": 999,
@@ -1107,9 +1448,69 @@ def test_store_packet_dict_handles_invalid_channel(mesh_module, monkeypatch):
assert path == "/api/messages"
assert payload["channel"] == 0
assert payload["encrypted"] is None
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
assert priority == mesh._MESSAGE_POST_PRIORITY
def test_store_packet_dict_appends_channel_name(mesh_module, monkeypatch, capsys):
mesh = mesh_module
mesh.channels._reset_channel_cache()
mesh.config.MODEM_PRESET = "MediumFast"
class DummyInterface:
def __init__(self) -> None:
self.localNode = SimpleNamespace(
channels=[
SimpleNamespace(role=1, settings=SimpleNamespace()),
SimpleNamespace(
role=2,
index=5,
settings=SimpleNamespace(name="Chat"),
),
]
)
def waitForConfig(self) -> None: # noqa: D401 - matches interface contract
return None
mesh.channels.capture_from_interface(DummyInterface())
capsys.readouterr()
captured = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
monkeypatch.setattr(mesh, "DEBUG", True)
packet = {
"id": "789",
"rxTime": 123456,
"from": "!abc",
"to": "!def",
"channel": 5,
"decoded": {"text": "hi", "portnum": 1},
}
mesh.store_packet_dict(packet)
assert captured, "Expected message to be stored"
path, payload, priority = captured[0]
assert path == "/api/messages"
assert payload["channel_name"] == "Chat"
assert payload["channel"] == 5
assert payload["text"] == "hi"
assert payload["encrypted"] is None
assert priority == mesh._MESSAGE_POST_PRIORITY
log_output = capsys.readouterr().out
assert "channel_name='Chat'" in log_output
assert "channel_display='Chat'" in log_output
def test_store_packet_dict_includes_encrypted_payload(mesh_module, monkeypatch):
mesh = mesh_module
captured = []
@@ -1119,6 +1520,9 @@ def test_store_packet_dict_includes_encrypted_payload(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
packet = {
"id": 555,
"rxTime": 111,
@@ -1137,6 +1541,9 @@ def test_store_packet_dict_includes_encrypted_payload(mesh_module, monkeypatch):
assert payload["text"] is None
assert payload["from_id"] == 2988082812
assert payload["to_id"] == "!receiver"
assert "channel_name" not in payload
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
assert priority == mesh._MESSAGE_POST_PRIORITY
@@ -1149,6 +1556,9 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
packet = {
"id": 1_256_091_342,
"rxTime": 1_758_024_300,
@@ -1165,6 +1575,7 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
"channelUtilization": 0.59666663,
"airUtilTx": 0.03908333,
"uptimeSeconds": 305044,
"current": 0.0715,
},
"localStats": {
"numPacketsTx": 1280,
@@ -1196,6 +1607,9 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
assert payload["channel_utilization"] == pytest.approx(0.59666663)
assert payload["air_util_tx"] == pytest.approx(0.03908333)
assert payload["uptime_seconds"] == 305044
assert payload["current"] == pytest.approx(0.0715)
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatch):
@@ -1207,6 +1621,9 @@ def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatc
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
packet = {
"id": 2_817_720_548,
"rxTime": 1_758_024_400,
@@ -1219,6 +1636,23 @@ def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatc
"temperature": 21.98,
"relativeHumidity": 39.475586,
"barometricPressure": 1017.8353,
"gasResistance": 1456.0,
"iaq": 83,
"distance": 12.5,
"lux": 100.25,
"whiteLux": 64.5,
"irLux": 12.75,
"uvLux": 1.6,
"windDirection": 270,
"windSpeed": 5.9,
"windGust": 7.4,
"windLull": 4.8,
"weight": 32.7,
"radiation": 0.45,
"rainfall1h": 0.18,
"rainfall24h": 1.42,
"soilMoisture": 3100,
"soilTemperature": 18.9,
},
},
},
@@ -1236,6 +1670,25 @@ def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatc
assert payload["temperature"] == pytest.approx(21.98)
assert payload["relative_humidity"] == pytest.approx(39.475586)
assert payload["barometric_pressure"] == pytest.approx(1017.8353)
assert payload["gas_resistance"] == pytest.approx(1456.0)
assert payload["iaq"] == 83
assert payload["distance"] == pytest.approx(12.5)
assert payload["lux"] == pytest.approx(100.25)
assert payload["white_lux"] == pytest.approx(64.5)
assert payload["ir_lux"] == pytest.approx(12.75)
assert payload["uv_lux"] == pytest.approx(1.6)
assert payload["wind_direction"] == 270
assert payload["wind_speed"] == pytest.approx(5.9)
assert payload["wind_gust"] == pytest.approx(7.4)
assert payload["wind_lull"] == pytest.approx(4.8)
assert payload["weight"] == pytest.approx(32.7)
assert payload["radiation"] == pytest.approx(0.45)
assert payload["rainfall_1h"] == pytest.approx(0.18)
assert payload["rainfall_24h"] == pytest.approx(1.42)
assert payload["soil_moisture"] == 3100
assert payload["soil_temperature"] == pytest.approx(18.9)
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
def test_post_queue_prioritises_messages(mesh_module, monkeypatch):
@@ -1258,6 +1711,58 @@ def test_post_queue_prioritises_messages(mesh_module, monkeypatch):
assert [path for path, _ in calls] == ["/api/messages", "/api/nodes"]
def test_drain_post_queue_handles_enqueued_items_during_send(mesh_module):
mesh = mesh_module
mesh._clear_post_queue()
first_send_started = threading.Event()
second_item_enqueued = threading.Event()
second_item_processed = threading.Event()
calls = []
def blocking_send(path, payload):
calls.append((path, payload))
if path == "/api/first":
first_send_started.set()
assert second_item_enqueued.wait(timeout=2), "Second item was not enqueued"
elif path == "/api/second":
second_item_processed.set()
mesh._enqueue_post_json(
"/api/first",
{"id": 1},
mesh._DEFAULT_POST_PRIORITY,
state=mesh.STATE,
)
mesh.STATE.active = True
drain_thread = threading.Thread(
target=mesh._drain_post_queue,
kwargs={"state": mesh.STATE, "send": blocking_send},
)
drain_thread.start()
assert first_send_started.wait(
timeout=2
), "Drain did not begin processing the first item"
mesh._queue_post_json(
"/api/second",
{"id": 2},
state=mesh.STATE,
send=blocking_send,
)
second_item_enqueued.set()
assert second_item_processed.wait(timeout=2), "Second item was not processed"
drain_thread.join(timeout=2)
assert not drain_thread.is_alive(), "Drain thread did not finish"
assert [path for path, _ in calls] == ["/api/first", "/api/second"]
assert not mesh.STATE.queue
assert mesh.STATE.active is False
def test_store_packet_dict_requires_id(mesh_module, monkeypatch):
mesh = mesh_module
@@ -1282,7 +1787,8 @@ def test_on_receive_logs_when_store_fails(mesh_module, monkeypatch, capsys):
mesh.on_receive(object(), interface=None)
captured = capsys.readouterr()
assert "failed to store packet" in captured.out
assert "context=handlers.on_receive" in captured.out
assert "Failed to store packet" in captured.out
def test_node_items_snapshot_iterable_without_items(mesh_module):
@@ -1316,7 +1822,14 @@ def test_debug_log_emits_when_enabled(mesh_module, monkeypatch, capsys):
mesh._debug_log("hello world")
captured = capsys.readouterr()
assert "[debug] hello world" in captured.out
lines = [line for line in captured.out.splitlines() if "hello world" in line]
assert lines, "expected debug log output"
log_line = lines[-1]
pattern = (
r"\[\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\] \[potato-mesh\] \[debug\] "
)
assert re.match(pattern, log_line), f"unexpected log format: {log_line}"
assert log_line.endswith("hello world")
def test_event_wait_allows_default_timeout_handles_short_signature(
@@ -1450,7 +1963,8 @@ def test_post_json_logs_failures(mesh_module, monkeypatch, capsys):
mesh._post_json("/api/test", {"foo": "bar"})
captured = capsys.readouterr()
assert "[warn] POST https://example.invalid/api/test failed" in captured.out
assert "context=queue.post_json" in captured.out
assert "POST request failed" in captured.out
def test_queue_post_json_skips_when_active(mesh_module, monkeypatch):
@@ -1504,7 +2018,8 @@ def test_upsert_node_logs_in_debug(mesh_module, monkeypatch, capsys):
assert captured
out = capsys.readouterr().out
assert "upserted node !node" in out
assert "context=handlers.upsert_node" in out
assert "Queued node upsert payload" in out
def test_coerce_int_and_float_cover_edge_cases(mesh_module):
@@ -1686,6 +2201,9 @@ def test_store_position_packet_defaults(mesh_module, monkeypatch):
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
packet = {"id": "7", "rxTime": "", "from": "!abcd", "to": "", "decoded": {}}
mesh.store_position_packet(packet, {})
@@ -1697,6 +2215,8 @@ def test_store_position_packet_defaults(mesh_module, monkeypatch):
assert payload["to_id"] is None
assert payload["latitude"] is None
assert payload["longitude"] is None
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
def test_store_nodeinfo_packet_debug(mesh_module, monkeypatch, capsys):
@@ -1730,7 +2250,8 @@ def test_store_nodeinfo_packet_debug(mesh_module, monkeypatch, capsys):
mesh.store_packet_dict(packet)
out = capsys.readouterr().out
assert "stored nodeinfo" in out
assert "context=handlers.store_nodeinfo" in out
assert "Queued nodeinfo payload" in out
def test_store_neighborinfo_packet_debug(mesh_module, monkeypatch, capsys):
@@ -1762,7 +2283,8 @@ def test_store_neighborinfo_packet_debug(mesh_module, monkeypatch, capsys):
assert captured
out = capsys.readouterr().out
assert "stored neighborinfo" in out
assert "context=handlers.store_neighborinfo" in out
assert "Queued neighborinfo payload" in out
def test_store_packet_dict_debug_message(mesh_module, monkeypatch, capsys):
@@ -1788,7 +2310,10 @@ def test_store_packet_dict_debug_message(mesh_module, monkeypatch, capsys):
assert captured
out = capsys.readouterr().out
assert "stored message" in out
assert "context=handlers.store_packet_dict" in out
assert "Queued message payload" in out
assert "channel_display=0" in out
assert "channel_name=" not in out
def test_on_receive_skips_seen_packets(mesh_module):
@@ -44,17 +44,24 @@ WORKDIR /app
# Copy installed gems from builder stage
COPY --from=builder /usr/local/bundle /usr/local/bundle
# Copy application code (exclude Dockerfile from web directory)
COPY --chown=potatomesh:potatomesh web/app.rb web/app.sh web/Gemfile web/Gemfile.lock* web/spec/ ./
# Copy application code (excluding the Dockerfile which is not required at runtime)
COPY --chown=potatomesh:potatomesh web/app.rb ./
COPY --chown=potatomesh:potatomesh web/app.sh ./
COPY --chown=potatomesh:potatomesh web/Gemfile ./
COPY --chown=potatomesh:potatomesh web/Gemfile.lock* ./
COPY --chown=potatomesh:potatomesh web/lib ./lib
COPY --chown=potatomesh:potatomesh web/spec ./spec
COPY --chown=potatomesh:potatomesh web/public ./public
COPY --chown=potatomesh:potatomesh web/views/ ./views/
COPY --chown=potatomesh:potatomesh web/views ./views
COPY --chown=potatomesh:potatomesh web/scripts ./scripts
# Copy SQL schema files from data directory
COPY --chown=potatomesh:potatomesh data/*.sql /data/
# Create data directory for SQLite database
RUN mkdir -p /app/data && \
chown -R potatomesh:potatomesh /app/data
# Create data and configuration directories with correct ownership
RUN mkdir -p /app/.local/share/potato-mesh \
&& mkdir -p /app/.config/potato-mesh/well-known \
&& chown -R potatomesh:potatomesh /app/.local/share /app/.config
# Switch to non-root user
USER potatomesh
@@ -65,18 +72,14 @@ EXPOSE 41447
# Default environment variables (can be overridden by host)
ENV RACK_ENV=production \
APP_ENV=production \
MESH_DB=/app/data/mesh.db \
DB_BUSY_TIMEOUT_MS=5000 \
DB_BUSY_MAX_RETRIES=5 \
DB_BUSY_RETRY_DELAY=0.05 \
MAX_JSON_BODY_BYTES=1048576 \
SITE_NAME="Berlin Mesh Network" \
DEFAULT_CHANNEL="#MediumFast" \
DEFAULT_FREQUENCY="868MHz" \
MAP_CENTER_LAT=52.502889 \
MAP_CENTER_LON=13.404194 \
MAX_NODE_DISTANCE_KM=50 \
MATRIX_ROOM="" \
XDG_DATA_HOME=/app/.local/share \
XDG_CONFIG_HOME=/app/.config \
SITE_NAME="PotatoMesh Demo" \
CHANNEL="#LongFast" \
FREQUENCY="915MHz" \
MAP_CENTER="38.761944,-27.090833" \
MAX_DISTANCE=42 \
CONTACT_LINK="#potatomesh:dod.ngo" \
DEBUG=0
# Start the application

web/app.rb (3611 changed lines)

File diff suppressed because it is too large

@@ -18,7 +18,4 @@ set -euo pipefail
bundle install
PORT=${PORT:-41447}
BIND_ADDRESS=${BIND_ADDRESS:-0.0.0.0}
exec ruby app.rb -p "${PORT}" -o "${BIND_ADDRESS}"
exec ruby app.rb -p 41447 -o 0.0.0.0
@@ -0,0 +1,196 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "sinatra/base"
require "json"
require "sqlite3"
require "fileutils"
require "logger"
require "rack/utils"
require "open3"
require "resolv"
require "socket"
require "time"
require "openssl"
require "base64"
require "prometheus/client"
require "prometheus/client/formats/text"
require "prometheus/middleware/collector"
require "prometheus/middleware/exporter"
require "net/http"
require "uri"
require "ipaddr"
require "set"
require "digest"
require_relative "config"
require_relative "sanitizer"
require_relative "meta"
require_relative "logging"
require_relative "application/helpers"
require_relative "application/errors"
require_relative "application/database"
require_relative "application/networking"
require_relative "application/identity"
require_relative "application/federation"
require_relative "application/prometheus"
require_relative "application/queries"
require_relative "application/data_processing"
require_relative "application/filesystem"
require_relative "application/instances"
require_relative "application/routes/api"
require_relative "application/routes/ingest"
require_relative "application/routes/root"
module PotatoMesh
class Application < Sinatra::Base
extend App::Helpers
extend App::Database
extend App::Networking
extend App::Identity
extend App::Federation
extend App::Instances
extend App::Prometheus
extend App::Queries
extend App::DataProcessing
extend App::Filesystem
helpers App::Helpers
include App::Database
include App::Networking
include App::Identity
include App::Federation
include App::Instances
include App::Prometheus
include App::Queries
include App::DataProcessing
include App::Filesystem
register App::Routes::Api
register App::Routes::Ingest
register App::Routes::Root
DEFAULT_PORT = 41_447
DEFAULT_BIND_ADDRESS = "0.0.0.0"
APP_VERSION = determine_app_version
INSTANCE_PRIVATE_KEY, INSTANCE_KEY_GENERATED = load_or_generate_instance_private_key
INSTANCE_PUBLIC_KEY_PEM = INSTANCE_PRIVATE_KEY.public_key.export
SELF_INSTANCE_ID = Digest::SHA256.hexdigest(INSTANCE_PUBLIC_KEY_PEM)
INSTANCE_DOMAIN, INSTANCE_DOMAIN_SOURCE = determine_instance_domain
# Adjust the runtime logger severity to match the DEBUG flag.
#
# @return [void]
def self.apply_logger_level!
logger = settings.logger
return unless logger
logger.level = PotatoMesh::Config.debug? ? Logger::DEBUG : Logger::WARN
end
# Determine the port the application should listen on by honouring the
# conventional +PORT+ environment variable used by hosting platforms. Any
# non-numeric or out-of-range values fall back to the provided default to
# keep the application bootable in misconfigured environments.
#
# @param default_port [Integer] fallback port when +ENV['PORT']+ is absent or invalid.
# @return [Integer] port number for the HTTP server.
def self.resolve_port(default_port: DEFAULT_PORT)
raw_port = ENV["PORT"]
return default_port if raw_port.nil?
trimmed = raw_port.to_s.strip
return default_port if trimmed.empty?
begin
port = Integer(trimmed, 10)
rescue ArgumentError
return default_port
end
return default_port unless port.positive?
return default_port unless PotatoMesh::Sanitizer.valid_port?(trimmed)
port
end
configure do
set :public_folder, File.expand_path("../../public", __dir__)
set :views, File.expand_path("../../views", __dir__)
set :federation_thread, nil
set :port, resolve_port
set :bind, DEFAULT_BIND_ADDRESS
app_logger = PotatoMesh::Logging.build_logger($stdout)
set :logger, app_logger
use Rack::CommonLogger, app_logger
use Rack::Deflater
use ::Prometheus::Middleware::Collector
use ::Prometheus::Middleware::Exporter
apply_logger_level!
perform_initial_filesystem_setup!
cleanup_legacy_well_known_artifacts
init_db unless db_schema_present?
ensure_schema_upgrades
log_instance_domain_resolution
log_instance_public_key
refresh_well_known_document_if_stale
ensure_self_instance_record!
update_all_prometheus_metrics_from_nodes
if federation_announcements_active?
start_initial_federation_announcement!
start_federation_announcer!
elsif federation_enabled?
debug_log(
"Federation announcements disabled",
context: "federation",
reason: "test environment",
)
else
debug_log(
"Federation announcements disabled",
context: "federation",
reason: "configuration",
)
end
end
end
end
if defined?(Sinatra::Application) && Sinatra::Application != PotatoMesh::Application
Sinatra.send(:remove_const, :Application)
end
Sinatra::Application = PotatoMesh::Application unless defined?(Sinatra::Application)
APP_VERSION = PotatoMesh::Application::APP_VERSION unless defined?(APP_VERSION)
SELF_INSTANCE_ID = PotatoMesh::Application::SELF_INSTANCE_ID unless defined?(SELF_INSTANCE_ID)
[
PotatoMesh::App::Helpers,
PotatoMesh::App::Database,
PotatoMesh::App::Networking,
PotatoMesh::App::Identity,
PotatoMesh::App::Federation,
PotatoMesh::App::Instances,
PotatoMesh::App::Prometheus,
PotatoMesh::App::Queries,
PotatoMesh::App::DataProcessing,
].each do |mod|
Object.include(mod) unless Object < mod
end
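
The `resolve_port` helper above hardens the conventional `PORT` environment override: blank, non-numeric, negative, and out-of-range values all fall back to the default so a misconfigured environment cannot prevent boot. A minimal standalone sketch of those rules (hypothetical; the real method additionally consults `PotatoMesh::Sanitizer.valid_port?`):

```ruby
# Illustrative re-implementation of the PORT fallback rules.
DEFAULT_PORT = 41_447

def resolve_port(env_value, default_port: DEFAULT_PORT)
  return default_port if env_value.nil?
  trimmed = env_value.to_s.strip
  return default_port if trimmed.empty?
  begin
    port = Integer(trimmed, 10) # strict base-10 parse; "8080abc" raises
  rescue ArgumentError
    return default_port
  end
  # Reject non-positive and out-of-range ports.
  return default_port unless port.positive? && port <= 65_535
  port
end

puts resolve_port(nil)          # falls back to 41447
puts resolve_port("8080")       # honours a valid override
puts resolve_port("not-a-port") # invalid input falls back to 41447
```

Using `Integer(trimmed, 10)` rather than `to_i` is the key design choice: `to_i` would silently turn `"8080junk"` into `8080`, while the strict parse rejects it.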

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,173 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Database
# Column definitions required for environment telemetry support. Each
# entry pairs the column name with the SQL type used when backfilling
# legacy databases that pre-date the extended telemetry schema.
TELEMETRY_COLUMN_DEFINITIONS = [
["gas_resistance", "REAL"],
["current", "REAL"],
["iaq", "INTEGER"],
["distance", "REAL"],
["lux", "REAL"],
["white_lux", "REAL"],
["ir_lux", "REAL"],
["uv_lux", "REAL"],
["wind_direction", "INTEGER"],
["wind_speed", "REAL"],
["weight", "REAL"],
["wind_gust", "REAL"],
["wind_lull", "REAL"],
["radiation", "REAL"],
["rainfall_1h", "REAL"],
["rainfall_24h", "REAL"],
["soil_moisture", "INTEGER"],
["soil_temperature", "REAL"],
].freeze
# Open a connection to the application database applying common pragmas.
#
# @param readonly [Boolean] whether to open the database in read-only mode.
# @return [SQLite3::Database] configured database handle.
def open_database(readonly: false)
SQLite3::Database.new(PotatoMesh::Config.db_path, readonly: readonly).tap do |db|
db.busy_timeout = PotatoMesh::Config.db_busy_timeout_ms
db.execute("PRAGMA foreign_keys = ON")
end
end
# Execute the provided block and retry when SQLite reports a busy error.
#
# @param max_retries [Integer] maximum number of retries when locked.
# @param base_delay [Float] incremental back-off delay between retries.
# @yield Executes the database operation.
# @return [Object] result of the block.
def with_busy_retry(
max_retries: PotatoMesh::Config.db_busy_max_retries,
base_delay: PotatoMesh::Config.db_busy_retry_delay
)
attempts = 0
begin
yield
rescue SQLite3::BusyException
attempts += 1
raise if attempts > max_retries
sleep(base_delay * attempts)
retry
end
end
# Determine whether the database schema has already been provisioned.
#
# @return [Boolean] true when all required tables exist.
def db_schema_present?
return false unless File.exist?(PotatoMesh::Config.db_path)
db = open_database(readonly: true)
required = %w[nodes messages positions telemetry neighbors instances]
tables =
db.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name IN ('nodes','messages','positions','telemetry','neighbors','instances')",
).flatten
(required - tables).empty?
rescue SQLite3::Exception
false
ensure
db&.close
end
# Create the database schema using the bundled SQL files.
#
# @return [void]
def init_db
FileUtils.mkdir_p(File.dirname(PotatoMesh::Config.db_path))
db = open_database
%w[nodes messages positions telemetry neighbors instances].each do |schema|
sql_file = File.expand_path("../../../../data/#{schema}.sql", __dir__)
db.execute_batch(File.read(sql_file))
end
ensure
db&.close
end
# Apply any schema migrations required for older installations.
#
# @return [void]
def ensure_schema_upgrades
db = open_database
node_columns = db.execute("PRAGMA table_info(nodes)").map { |row| row[1] }
unless node_columns.include?("precision_bits")
db.execute("ALTER TABLE nodes ADD COLUMN precision_bits INTEGER")
node_columns << "precision_bits"
end
unless node_columns.include?("lora_freq")
db.execute("ALTER TABLE nodes ADD COLUMN lora_freq INTEGER")
end
unless node_columns.include?("modem_preset")
db.execute("ALTER TABLE nodes ADD COLUMN modem_preset TEXT")
end
message_columns = db.execute("PRAGMA table_info(messages)").map { |row| row[1] }
unless message_columns.include?("lora_freq")
db.execute("ALTER TABLE messages ADD COLUMN lora_freq INTEGER")
end
unless message_columns.include?("modem_preset")
db.execute("ALTER TABLE messages ADD COLUMN modem_preset TEXT")
end
unless message_columns.include?("channel_name")
db.execute("ALTER TABLE messages ADD COLUMN channel_name TEXT")
end
tables = db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='instances'").flatten
if tables.empty?
sql_file = File.expand_path("../../../../data/instances.sql", __dir__)
db.execute_batch(File.read(sql_file))
end
telemetry_tables =
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='telemetry'").flatten
if telemetry_tables.empty?
telemetry_schema = File.expand_path("../../../../data/telemetry.sql", __dir__)
db.execute_batch(File.read(telemetry_schema))
end
telemetry_columns = db.execute("PRAGMA table_info(telemetry)").map { |row| row[1] }
TELEMETRY_COLUMN_DEFINITIONS.each do |name, type|
next if telemetry_columns.include?(name)
db.execute("ALTER TABLE telemetry ADD COLUMN #{name} #{type}")
telemetry_columns << name
end
rescue SQLite3::SQLException, Errno::ENOENT => e
warn_log(
"Failed to apply schema upgrade",
context: "database.schema",
error_class: e.class.name,
error_message: e.message,
)
ensure
db&.close
end
end
end
end
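
The `with_busy_retry` helper above retries a locked-database operation with a linearly growing back-off (`base_delay * attempts`). A runnable sketch of that retry shape, with `SQLite3::BusyException` swapped for a stand-in error class so the example needs no sqlite3 gem:

```ruby
# Stand-in for SQLite3::BusyException so the sketch runs without sqlite3.
class BusyError < StandardError; end

def with_busy_retry(max_retries: 5, base_delay: 0.0)
  attempts = 0
  begin
    yield attempts
  rescue BusyError
    attempts += 1
    raise if attempts > max_retries  # re-raise once the budget is spent
    sleep(base_delay * attempts)     # linear back-off: 1x, 2x, 3x, ...
    retry
  end
end

# Succeeds on the third attempt (attempts 0 and 1 raise, attempt 2 returns).
result = with_busy_retry { |n| n < 2 ? raise(BusyError) : :ok }
puts result # ok
```

Because `retry` re-enters the `begin` block, the counter must live outside it; resetting `attempts` inside the block would loop forever on a persistently locked database.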


@@ -0,0 +1,20 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
# Raised when a remote instance fails to provide valid federation data.
class InstanceFetchError < StandardError; end
end
end
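
The federation module that follows signs each instance record by serialising its attributes into a canonical (key-sorted) JSON string, signing that with the instance's RSA key, and Base64-encoding the result; peers verify the signature against the advertised public key. A hedged sketch of that round trip (key size and helper names here are illustrative, not the application's exact API):

```ruby
require "json"
require "openssl"
require "base64"

# Canonical form: stringified keys in sorted order, so signer and verifier
# serialise identical bytes regardless of hash insertion order.
def canonical_payload(attributes)
  JSON.generate(attributes.transform_keys(&:to_s).sort.to_h)
end

key = OpenSSL::PKey::RSA.new(2048) # illustrative key size
attributes = { id: "abc", domain: "mesh.example", name: "Demo" }

payload = canonical_payload(attributes)
signature = Base64.strict_encode64(key.sign(OpenSSL::Digest::SHA256.new, payload))

# Verification needs only the public half of the key pair, as a peer would hold it.
pub = OpenSSL::PKey::RSA.new(key.public_key.export)
verified = pub.verify(OpenSSL::Digest::SHA256.new,
                      Base64.strict_decode64(signature),
                      canonical_payload(attributes))
puts verified # true
```

Sorting the keys before serialising is what makes the signature stable: any attribute change, or any difference in key order between signer and verifier, would produce different bytes and fail verification.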


@@ -0,0 +1,909 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
def self_instance_domain
sanitized = sanitize_instance_domain(app_constant(:INSTANCE_DOMAIN))
return sanitized if sanitized
raise "INSTANCE_DOMAIN could not be determined"
end
# Determine whether the local instance should persist its own record.
#
# @param domain [String, nil] candidate domain for the running instance.
# @return [Array(Boolean, String, nil)] tuple containing a decision flag and an optional reason.
def self_instance_registration_decision(domain)
source = app_constant(:INSTANCE_DOMAIN_SOURCE)
return [false, "INSTANCE_DOMAIN source is #{source}"] unless source == :environment
sanitized = sanitize_instance_domain(domain)
return [false, "INSTANCE_DOMAIN missing or invalid"] unless sanitized
ip = ip_from_domain(sanitized)
if ip && restricted_ip_address?(ip)
return [false, "INSTANCE_DOMAIN resolves to restricted IP"]
end
[true, nil]
end
def self_instance_attributes
domain = self_instance_domain
last_update = latest_node_update_timestamp || Time.now.to_i
{
id: app_constant(:SELF_INSTANCE_ID),
domain: domain,
pubkey: app_constant(:INSTANCE_PUBLIC_KEY_PEM),
name: sanitized_site_name,
version: app_constant(:APP_VERSION),
channel: sanitized_channel,
frequency: sanitized_frequency,
latitude: PotatoMesh::Config.map_center_lat,
longitude: PotatoMesh::Config.map_center_lon,
last_update_time: last_update,
is_private: private_mode?,
}
end
def sign_instance_attributes(attributes)
payload = canonical_instance_payload(attributes)
Base64.strict_encode64(
app_constant(:INSTANCE_PRIVATE_KEY).sign(OpenSSL::Digest::SHA256.new, payload),
)
end
def instance_announcement_payload(attributes, signature)
payload = {
"id" => attributes[:id],
"domain" => attributes[:domain],
"pubkey" => attributes[:pubkey],
"name" => attributes[:name],
"version" => attributes[:version],
"channel" => attributes[:channel],
"frequency" => attributes[:frequency],
"latitude" => attributes[:latitude],
"longitude" => attributes[:longitude],
"lastUpdateTime" => attributes[:last_update_time],
"isPrivate" => attributes[:is_private],
"signature" => signature,
}
payload.reject { |_, value| value.nil? }
end
def ensure_self_instance_record!
attributes = self_instance_attributes
signature = sign_instance_attributes(attributes)
db = nil
allowed, reason = self_instance_registration_decision(attributes[:domain])
if allowed
db = open_database
upsert_instance_record(db, attributes, signature)
debug_log(
"Registered self instance record",
context: "federation.instances",
domain: attributes[:domain],
instance_id: attributes[:id],
)
else
debug_log(
"Skipped self instance registration",
context: "federation.instances",
domain: attributes[:domain],
reason: reason,
)
end
[attributes, signature]
ensure
db&.close
end
def federation_target_domains(self_domain)
normalized_self = sanitize_instance_domain(self_domain)&.downcase
ordered = []
seen = Set.new
PotatoMesh::Config.federation_seed_domains.each do |seed|
sanitized = sanitize_instance_domain(seed)&.downcase
next unless sanitized
next if normalized_self && sanitized == normalized_self
next if seen.include?(sanitized)
ordered << sanitized
seen << sanitized
end
db = open_database(readonly: true)
db.results_as_hash = false
cutoff = Time.now.to_i - PotatoMesh::Config.week_seconds
rows = with_busy_retry do
db.execute(
"SELECT domain, last_update_time FROM instances WHERE domain IS NOT NULL AND TRIM(domain) != ''",
)
end
rows.each do |row|
raw_domain = row[0]
last_update_time = coerce_integer(row[1])
next unless last_update_time && last_update_time >= cutoff
sanitized = sanitize_instance_domain(raw_domain)&.downcase
next unless sanitized
next if normalized_self && sanitized == normalized_self
next if seen.include?(sanitized)
ordered << sanitized
seen << sanitized
end
ordered
rescue SQLite3::Exception
fallback = PotatoMesh::Config.federation_seed_domains.filter_map do |seed|
candidate = sanitize_instance_domain(seed)&.downcase
next if normalized_self && candidate == normalized_self
candidate
end
fallback.uniq
ensure
db&.close
end
def announce_instance_to_domain(domain, payload_json)
return false unless domain && !domain.empty?
https_failures = []
instance_uri_candidates(domain, "/api/instances").each do |uri|
begin
http = build_remote_http_client(uri)
response = http.start do |connection|
request = build_federation_http_request(Net::HTTP::Post, uri)
request.body = payload_json
connection.request(request)
end
if response.is_a?(Net::HTTPSuccess)
debug_log(
"Published federation announcement",
context: "federation.announce",
target: uri.to_s,
status: response.code,
)
return true
end
debug_log(
"Federation announcement failed",
context: "federation.announce",
target: uri.to_s,
status: response.code,
)
rescue StandardError => e
metadata = {
context: "federation.announce",
target: uri.to_s,
error_class: e.class.name,
error_message: e.message,
}
if uri.scheme == "https" && https_connection_refused?(e)
debug_log(
"HTTPS federation announcement failed, retrying with HTTP",
**metadata,
)
https_failures << metadata
next
end
warn_log(
"Federation announcement raised exception",
**metadata,
)
end
end
https_failures.each do |metadata|
warn_log(
"Federation announcement raised exception",
**metadata,
)
end
false
end
# Determine whether an HTTPS announcement failure should fall back to HTTP.
#
# @param error [StandardError] failure raised while attempting HTTPS.
# @return [Boolean] true when the error corresponds to a refused TCP connection.
def https_connection_refused?(error)
current = error
while current
return true if current.is_a?(Errno::ECONNREFUSED)
current = current.respond_to?(:cause) ? current.cause : nil
end
false
end
def announce_instance_to_all_domains
return unless federation_enabled?
attributes, signature = ensure_self_instance_record!
payload_json = JSON.generate(instance_announcement_payload(attributes, signature))
domains = federation_target_domains(attributes[:domain])
domains.each do |domain|
announce_instance_to_domain(domain, payload_json)
end
unless domains.empty?
debug_log(
"Federation announcement cycle complete",
context: "federation.announce",
targets: domains,
)
end
end
def start_federation_announcer!
# Federation broadcasts must not execute when federation support is disabled.
return nil unless federation_enabled?
existing = settings.federation_thread
return existing if existing&.alive?
thread = Thread.new do
loop do
sleep PotatoMesh::Config.federation_announcement_interval
begin
announce_instance_to_all_domains
rescue StandardError => e
warn_log(
"Federation announcement loop error",
context: "federation.announce",
error_class: e.class.name,
error_message: e.message,
)
end
end
end
thread.name = "potato-mesh-federation" if thread.respond_to?(:name=)
set(:federation_thread, thread)
thread
end
# Launch a background thread responsible for the first federation broadcast.
#
# @return [Thread, nil] the thread handling the initial announcement.
def start_initial_federation_announcement!
# Skip the initial broadcast entirely when federation is disabled.
return nil unless federation_enabled?
existing = settings.respond_to?(:initial_federation_thread) ? settings.initial_federation_thread : nil
return existing if existing&.alive?
thread = Thread.new do
begin
delay = PotatoMesh::Config.initial_federation_delay_seconds
Kernel.sleep(delay) if delay.positive?
announce_instance_to_all_domains
rescue StandardError => e
warn_log(
"Initial federation announcement failed",
context: "federation.announce",
error_class: e.class.name,
error_message: e.message,
)
ensure
set(:initial_federation_thread, nil)
end
end
thread.name = "potato-mesh-federation-initial" if thread.respond_to?(:name=)
thread.report_on_exception = false if thread.respond_to?(:report_on_exception=)
set(:initial_federation_thread, thread)
thread
end
def canonical_instance_payload(attributes)
data = {}
data["id"] = attributes[:id] if attributes[:id]
data["domain"] = attributes[:domain] if attributes[:domain]
data["pubkey"] = attributes[:pubkey] if attributes[:pubkey]
data["name"] = attributes[:name] if attributes[:name]
data["version"] = attributes[:version] if attributes[:version]
data["channel"] = attributes[:channel] if attributes[:channel]
data["frequency"] = attributes[:frequency] if attributes[:frequency]
data["latitude"] = attributes[:latitude] unless attributes[:latitude].nil?
data["longitude"] = attributes[:longitude] unless attributes[:longitude].nil?
data["lastUpdateTime"] = attributes[:last_update_time] unless attributes[:last_update_time].nil?
data["isPrivate"] = attributes[:is_private] unless attributes[:is_private].nil?
JSON.generate(data, sort_keys: true)
end
def verify_instance_signature(attributes, signature, public_key_pem)
return false unless signature && public_key_pem
canonical = canonical_instance_payload(attributes)
signature_bytes = Base64.strict_decode64(signature)
key = OpenSSL::PKey::RSA.new(public_key_pem)
key.verify(OpenSSL::Digest::SHA256.new, signature_bytes, canonical)
rescue ArgumentError, OpenSSL::PKey::PKeyError
false
end
def instance_uri_candidates(domain, path)
base = domain
[
URI.parse("https://#{base}#{path}"),
URI.parse("http://#{base}#{path}"),
]
rescue URI::InvalidURIError
[]
end
def perform_instance_http_request(uri)
http = build_remote_http_client(uri)
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Get, uri)
response = connection.request(request)
case response
when Net::HTTPSuccess
response.body
else
raise InstanceFetchError, "unexpected response #{response.code}"
end
end
rescue StandardError => e
raise_instance_fetch_error(e)
end
# Build an HTTP request decorated with the headers required for federation peers.
#
# @param request_class [Class<Net::HTTPRequest>] HTTP request class such as {Net::HTTP::Get}.
# @param uri [URI::Generic] target URI describing the remote endpoint.
# @return [Net::HTTPRequest] configured HTTP request including standard headers.
def build_federation_http_request(request_class, uri)
request = request_class.new(uri)
request["User-Agent"] = federation_user_agent_header
request["Accept"] = "application/json"
request["Content-Type"] = "application/json" if request.request_body_permitted?
request
end
# Compose the User-Agent string used when communicating with federation peers.
#
# @return [String] descriptive identifier for PotatoMesh federation requests.
def federation_user_agent_header
version = app_constant(:APP_VERSION).to_s
version = "unknown" if version.empty?
sanitized_domain = sanitize_instance_domain(app_constant(:INSTANCE_DOMAIN), downcase: true)
base = "PotatoMesh/#{version}"
return base unless sanitized_domain && !sanitized_domain.empty?
"#{base} (+https://#{sanitized_domain})"
end
# Build a human readable error message for a failed instance request.
#
# @param error [StandardError] failure raised while performing the request.
# @return [String] description including the error class when necessary.
def instance_fetch_error_message(error)
message = error.message.to_s.strip
class_name = error.class.name || error.class.to_s
return class_name if message.empty?
message.include?(class_name) ? message : "#{class_name}: #{message}"
end
# Raise an InstanceFetchError that preserves the original context.
#
# @param error [StandardError] failure raised while performing the request.
# @return [void]
def raise_instance_fetch_error(error)
message = instance_fetch_error_message(error)
wrapped = InstanceFetchError.new(message)
wrapped.set_backtrace(error.backtrace)
raise wrapped
end
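A minimal reproduction of the wrapping behaviour, using a local stand-in for InstanceFetchError:

```ruby
# Local stand-in mirroring the error-wrapping helpers above.
class DemoFetchError < StandardError; end

def demo_error_message(error)
  message = error.message.to_s.strip
  class_name = error.class.name || error.class.to_s
  return class_name if message.empty?
  message.include?(class_name) ? message : "#{class_name}: #{message}"
end

wrapped = nil
begin
  raise Errno::ECONNREFUSED
rescue StandardError => e
  wrapped = DemoFetchError.new(demo_error_message(e))
  wrapped.set_backtrace(e.backtrace)
end
```

Copying the backtrace means rescuers of the wrapped error still see where the original failure happened, while the message records the original class name.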
def fetch_instance_json(domain, path)
errors = []
instance_uri_candidates(domain, path).each do |uri|
begin
body = perform_instance_http_request(uri)
return [JSON.parse(body), uri] if body
rescue JSON::ParserError => e
errors << "#{uri}: invalid JSON (#{e.message})"
rescue InstanceFetchError => e
errors << "#{uri}: #{e.message}"
end
end
[nil, errors]
end
# Parse a remote federation instance payload into canonical attributes.
#
# @param payload [Hash] JSON object describing a remote instance.
# @return [Array(Hash, String, nil), Array(nil, nil, String)] three-element
#   tuple of attributes, signature, and failure reason; the reason is nil for
#   valid payloads, and the attributes and signature are nil for invalid ones.
def remote_instance_attributes_from_payload(payload)
unless payload.is_a?(Hash)
return [nil, nil, "instance payload is not an object"]
end
id = string_or_nil(payload["id"])
return [nil, nil, "missing instance id"] unless id
domain = sanitize_instance_domain(payload["domain"])
return [nil, nil, "missing instance domain"] unless domain
pubkey = sanitize_public_key_pem(payload["pubkey"])
return [nil, nil, "missing instance public key"] unless pubkey
signature = string_or_nil(payload["signature"])
return [nil, nil, "missing instance signature"] unless signature
private_value = if payload.key?("isPrivate")
payload["isPrivate"]
else
payload["is_private"]
end
private_flag = coerce_boolean(private_value)
if private_flag.nil?
numeric_flag = coerce_integer(private_value)
private_flag = !numeric_flag.to_i.zero? if numeric_flag
end
attributes = {
id: id,
domain: domain,
pubkey: pubkey,
name: string_or_nil(payload["name"]),
version: string_or_nil(payload["version"]),
channel: string_or_nil(payload["channel"]),
frequency: string_or_nil(payload["frequency"]),
latitude: coerce_float(payload["latitude"]),
longitude: coerce_float(payload["longitude"]),
last_update_time: coerce_integer(payload["lastUpdateTime"]),
is_private: private_flag,
}
[attributes, signature, nil]
rescue StandardError => e
[nil, nil, e.message]
end
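The camelCase/snake_case fallback and the boolean-then-numeric coercion chain can be sketched in isolation; the coercion below is a simplified mirror of the coerce_boolean helper:

```ruby
# Sketch of the isPrivate resolution: prefer the camelCase key, fall back to
# snake_case, then coerce booleans, common string spellings, and numerics.
def demo_private_flag(payload)
  value = payload.key?("isPrivate") ? payload["isPrivate"] : payload["is_private"]
  case value
  when true, false
    value
  when String
    normalized = value.strip.downcase
    return true if %w[true 1 yes y].include?(normalized)
    return false if %w[false 0 no n].include?(normalized)
    nil
  when Numeric
    !value.to_i.zero?
  end
end
```

Unknown spellings fall through to nil, which the caller later defaults to false before persisting.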
# Recursively ingest federation records exposed by the supplied domain.
#
# @param db [SQLite3::Database] open database connection used for writes.
# @param domain [String] remote domain to crawl for federation records.
# @param visited [Set<String>, nil] domains already processed during this crawl.
# @param per_response_limit [Integer, nil] maximum entries processed per response.
# @param overall_limit [Integer, nil] maximum unique domains visited.
# @return [Set<String>] updated set of visited domains.
def ingest_known_instances_from!(
db,
domain,
visited: nil,
per_response_limit: nil,
overall_limit: nil
)
sanitized = sanitize_instance_domain(domain)
return visited || Set.new unless sanitized
visited ||= Set.new
overall_limit ||= PotatoMesh::Config.federation_max_domains_per_crawl
per_response_limit ||= PotatoMesh::Config.federation_max_instances_per_response
if overall_limit && overall_limit.positive? && visited.size >= overall_limit
debug_log(
"Skipped remote instance crawl due to crawl limit",
context: "federation.instances",
domain: sanitized,
limit: overall_limit,
)
return visited
end
return visited if visited.include?(sanitized)
visited << sanitized
payload, metadata = fetch_instance_json(sanitized, "/api/instances")
unless payload.is_a?(Array)
warn_log(
"Failed to load remote federation instances",
context: "federation.instances",
domain: sanitized,
reason: Array(metadata).map(&:to_s).join("; "),
)
return visited
end
processed_entries = 0
payload.each do |entry|
if per_response_limit && per_response_limit.positive? && processed_entries >= per_response_limit
debug_log(
"Skipped remote instance entry due to response limit",
context: "federation.instances",
domain: sanitized,
limit: per_response_limit,
)
break
end
if overall_limit && overall_limit.positive? && visited.size >= overall_limit
debug_log(
"Skipped remote instance entry due to crawl limit",
context: "federation.instances",
domain: sanitized,
limit: overall_limit,
)
break
end
processed_entries += 1
attributes, signature, reason = remote_instance_attributes_from_payload(entry)
unless attributes && signature
warn_log(
"Discarded remote instance entry",
context: "federation.instances",
domain: sanitized,
reason: reason || "invalid payload",
)
next
end
if attributes[:is_private]
debug_log(
"Skipped private remote instance",
context: "federation.instances",
domain: attributes[:domain],
)
next
end
unless verify_instance_signature(attributes, signature, attributes[:pubkey])
warn_log(
"Discarded remote instance entry",
context: "federation.instances",
domain: attributes[:domain],
reason: "invalid signature",
)
next
end
attributes[:is_private] = false if attributes[:is_private].nil?
remote_nodes, node_metadata = fetch_instance_json(attributes[:domain], "/api/nodes")
unless remote_nodes
warn_log(
"Failed to load remote node data",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(node_metadata).map(&:to_s).join("; "),
)
next
end
fresh, freshness_reason = validate_remote_nodes(remote_nodes)
unless fresh
warn_log(
"Discarded remote instance entry",
context: "federation.instances",
domain: attributes[:domain],
reason: freshness_reason || "stale node data",
)
next
end
begin
upsert_instance_record(db, attributes, signature)
ingest_known_instances_from!(
db,
attributes[:domain],
visited: visited,
per_response_limit: per_response_limit,
overall_limit: overall_limit,
)
rescue ArgumentError => e
warn_log(
"Failed to persist remote instance",
context: "federation.instances",
domain: attributes[:domain],
error_class: e.class.name,
error_message: e.message,
)
end
end
visited
end
# Resolve the host component of a remote URI and ensure the destination is
# safe for federation HTTP requests.
#
# The method performs a DNS lookup using Addrinfo to capture every
# available address for the supplied URI host. The resulting addresses are
# converted to {IPAddr} objects for consistent inspection via
# {restricted_ip_address?}. When all resolved addresses fall within
# restricted ranges, the method raises an ArgumentError so callers can
# abort the federation request before contacting the remote endpoint.
#
# @param uri [URI::Generic] remote endpoint candidate.
# @return [Array<IPAddr>] list of resolved, unrestricted IP addresses.
# @raise [ArgumentError] when +uri.host+ is blank or resolves solely to
# restricted addresses.
def resolve_remote_ip_addresses(uri)
host = uri&.host
raise ArgumentError, "URI missing host" if host.nil? || host.empty?
addrinfo_records = Addrinfo.getaddrinfo(host, nil, Socket::AF_UNSPEC, Socket::SOCK_STREAM)
addresses = addrinfo_records.filter_map do |addr|
begin
IPAddr.new(addr.ip_address)
rescue IPAddr::InvalidAddressError
nil
end
end
unique_addresses = addresses.uniq { |ip| [ip.family, ip.to_s] }
unrestricted_addresses = unique_addresses.reject { |ip| restricted_ip_address?(ip) }
if unique_addresses.any? && unrestricted_addresses.empty?
raise ArgumentError, "restricted domain"
end
unrestricted_addresses
end
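The restricted_ip_address? predicate is defined elsewhere; a plausible sketch of that kind of check, assuming the usual loopback, private, and link-local ranges:

```ruby
require "ipaddr"

# Hypothetical sketch of a restricted-range check. The actual
# restricted_ip_address? helper lives elsewhere and may cover more ranges.
DEMO_RESTRICTED_RANGES = %w[
  127.0.0.0/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
  169.254.0.0/16 ::1/128 fc00::/7 fe80::/10
].map { |cidr| IPAddr.new(cidr) }.freeze

def demo_restricted?(ip)
  DEMO_RESTRICTED_RANGES.any? do |range|
    range.family == ip.family && range.include?(ip)
  end
end
```

Resolving and filtering before the request blocks server-side request forgery attempts that point a federation domain at internal infrastructure.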
# Build an HTTP client configured for communication with a remote instance.
#
# @param uri [URI::Generic] target URI describing the remote endpoint.
# @return [Net::HTTP] HTTP client ready to execute the request.
def build_remote_http_client(uri)
remote_addresses = resolve_remote_ip_addresses(uri)
http = Net::HTTP.new(uri.host, uri.port)
if http.respond_to?(:ipaddr=) && remote_addresses.any?
http.ipaddr = remote_addresses.first.to_s
end
http.open_timeout = PotatoMesh::Config.remote_instance_http_timeout
http.read_timeout = PotatoMesh::Config.remote_instance_read_timeout
http.use_ssl = uri.scheme == "https"
return http unless http.use_ssl?
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.min_version = :TLS1_2 if http.respond_to?(:min_version=)
store = remote_instance_cert_store
http.cert_store = store if store
callback = remote_instance_verify_callback
http.verify_callback = callback if callback
http
end
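Assuming hypothetical timeout values, the client construction reduces to standard Net::HTTP configuration; no connection is opened until start is called:

```ruby
require "net/http"
require "openssl"

uri = URI.parse("https://mesh.example.org/api/instances") # hypothetical peer
http = Net::HTTP.new(uri.host, uri.port)
http.open_timeout = 5   # hypothetical stand-ins for the configured timeouts
http.read_timeout = 10
http.use_ssl = uri.scheme == "https"
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.min_version = :TLS1_2 if http.respond_to?(:min_version=)
```

Pinning `http.ipaddr` to a pre-resolved address, as the helper above does, ensures the request goes to the vetted IP even if DNS answers change between resolution and connection.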
# Construct a certificate store that disables strict CRL enforcement.
#
# OpenSSL may fail remote requests when certificate revocation lists are
# unavailable from the issuing authority. The returned store mirrors the
# default system trust store while clearing CRL-related flags so that
# federation announcements can still succeed when CRLs cannot be fetched.
#
# @return [OpenSSL::X509::Store, nil] configured store or nil when setup fails.
def remote_instance_cert_store
return @remote_instance_cert_store if defined?(@remote_instance_cert_store) && @remote_instance_cert_store
store = OpenSSL::X509::Store.new
store.set_default_paths
store.flags = 0 if store.respond_to?(:flags=)
@remote_instance_cert_store = store
rescue OpenSSL::X509::StoreError => e
debug_log(
"Failed to initialize certificate store for federation HTTP: #{e.message}",
)
@remote_instance_cert_store = nil
end
# Build a TLS verification callback that tolerates CRL availability failures.
#
# Some certificate authorities publish CRL endpoints that may occasionally be
# unreachable. When OpenSSL cannot download the CRL it raises the
# V_ERR_UNABLE_TO_GET_CRL error which would otherwise cause HTTPS federation
# announcements to abort. The generated callback accepts those specific
# failures while preserving strict verification for all other errors.
#
# @return [Proc, nil] verification callback or nil when creation fails.
def remote_instance_verify_callback
if defined?(@remote_instance_verify_callback) && @remote_instance_verify_callback
return @remote_instance_verify_callback
end
callback = lambda do |preverify_ok, store_context|
return true if preverify_ok
if store_context && crl_unavailable_error?(store_context.error)
debug_log(
"Ignoring TLS CRL retrieval failure during federation request",
context: "federation.announce",
)
true
else
false
end
end
@remote_instance_verify_callback = callback
rescue StandardError => e
debug_log(
"Failed to initialize federation TLS verify callback: #{e.message}",
context: "federation.announce",
)
@remote_instance_verify_callback = nil
end
# Determine whether the supplied OpenSSL verification error corresponds to a
# missing certificate revocation list.
#
# @param error_code [Integer, nil] OpenSSL verification error value.
# @return [Boolean] true when the error should be ignored.
def crl_unavailable_error?(error_code)
allowed_errors = [OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL]
if defined?(OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL_ISSUER)
allowed_errors << OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL_ISSUER
end
allowed_errors.include?(error_code)
end
def validate_well_known_document(document, domain, pubkey)
unless document.is_a?(Hash)
return [false, "document is not an object"]
end
remote_pubkey = sanitize_public_key_pem(document["publicKey"])
return [false, "public key missing"] unless remote_pubkey
return [false, "public key mismatch"] unless remote_pubkey == pubkey
remote_domain = string_or_nil(document["domain"])
return [false, "domain missing"] unless remote_domain
return [false, "domain mismatch"] unless remote_domain.casecmp?(domain)
algorithm = string_or_nil(document["signatureAlgorithm"])
unless algorithm&.casecmp?(PotatoMesh::Config.instance_signature_algorithm)
return [false, "unsupported signature algorithm"]
end
signed_payload_b64 = string_or_nil(document["signedPayload"])
signature_b64 = string_or_nil(document["signature"])
return [false, "missing signed payload"] unless signed_payload_b64
return [false, "missing signature"] unless signature_b64
signed_payload = Base64.strict_decode64(signed_payload_b64)
signature = Base64.strict_decode64(signature_b64)
key = OpenSSL::PKey::RSA.new(remote_pubkey)
unless key.verify(OpenSSL::Digest::SHA256.new, signature, signed_payload)
return [false, "invalid well-known signature"]
end
payload = JSON.parse(signed_payload)
unless payload.is_a?(Hash)
return [false, "signed payload is not an object"]
end
payload_domain = string_or_nil(payload["domain"])
payload_pubkey = sanitize_public_key_pem(payload["publicKey"])
return [false, "signed payload domain mismatch"] unless payload_domain&.casecmp?(domain)
return [false, "signed payload public key mismatch"] unless payload_pubkey == pubkey
[true, nil]
rescue ArgumentError, OpenSSL::PKey::PKeyError => e
[false, e.message]
rescue JSON::ParserError => e
[false, "signed payload JSON error: #{e.message}"]
end
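The signature scheme is RSA over SHA-256 with base64-encoded detached payload and signature fields. A self-contained round trip, using a throwaway key and a hypothetical domain:

```ruby
require "openssl"
require "json"
require "base64"

key = OpenSSL::PKey::RSA.new(2048) # throwaway key for the demo
payload = { "domain" => "mesh.example.org", "publicKey" => key.public_key.to_pem }
signed_payload = JSON.generate(payload)
signed_payload_b64 = Base64.strict_encode64(signed_payload)
signature_b64 = Base64.strict_encode64(
  key.sign(OpenSSL::Digest::SHA256.new, signed_payload),
)

# A verifier needs only the public key and the two base64 fields.
verifier = OpenSSL::PKey::RSA.new(key.public_key.to_pem)
valid = verifier.verify(
  OpenSSL::Digest::SHA256.new,
  Base64.strict_decode64(signature_b64),
  Base64.strict_decode64(signed_payload_b64),
)
```

Shipping the signed payload verbatim (rather than re-serialising the document) avoids verification failures caused by key ordering or whitespace differences between JSON implementations.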
def validate_remote_nodes(nodes)
unless nodes.is_a?(Array)
return [false, "node response is not an array"]
end
if nodes.length < PotatoMesh::Config.remote_instance_min_node_count
return [false, "insufficient nodes"]
end
latest = nodes.filter_map do |node|
next unless node.is_a?(Hash)
last_heard_values = []
last_heard_values << coerce_integer(node["last_heard"])
last_heard_values << coerce_integer(node["lastHeard"])
last_heard_values.compact.max
end.compact.max
return [false, "missing last_heard data"] unless latest
cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
return [false, "node data is stale"] if latest < cutoff
[true, nil]
end
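The freshness check reduces to finding the newest last_heard/lastHeard value and comparing it against a cutoff. A sketch with a hypothetical 24-hour maximum age and simplified integer handling:

```ruby
now = Time.now.to_i
nodes = [
  { "last_heard" => now - 60 },    # heard a minute ago
  { "lastHeard" => now - 7_200 },  # camelCase variant, two hours ago
  { "num" => "not a timestamp" },  # entry without last-heard data
]

latest = nodes.filter_map do |node|
  next unless node.is_a?(Hash)
  [node["last_heard"], node["lastHeard"]].filter_map do |value|
    value if value.is_a?(Integer)
  end.max
end.max

max_age = 86_400 # hypothetical stand-in for remote_instance_max_node_age
fresh = !latest.nil? && latest >= now - max_age
```

Requiring both a minimum node count and recent activity keeps empty or abandoned instances from propagating through the federation crawl.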
def upsert_instance_record(db, attributes, signature)
sanitized_domain = sanitize_instance_domain(attributes[:domain])
raise ArgumentError, "invalid domain" unless sanitized_domain
ip = ip_from_domain(sanitized_domain)
if ip && restricted_ip_address?(ip)
raise ArgumentError, "restricted domain"
end
normalized_domain = sanitized_domain
existing_id = with_busy_retry do
db.get_first_value(
"SELECT id FROM instances WHERE domain = ?",
normalized_domain,
)
end
if existing_id && existing_id != attributes[:id]
with_busy_retry do
db.execute("DELETE FROM instances WHERE id = ?", existing_id)
end
debug_log(
"Removed conflicting instance by domain",
context: "federation.instances",
domain: normalized_domain,
replaced_id: existing_id,
incoming_id: attributes[:id],
)
end
sql = <<~SQL
INSERT INTO instances (
id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, signature
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
domain=excluded.domain,
pubkey=excluded.pubkey,
name=excluded.name,
version=excluded.version,
channel=excluded.channel,
frequency=excluded.frequency,
latitude=excluded.latitude,
longitude=excluded.longitude,
last_update_time=excluded.last_update_time,
is_private=excluded.is_private,
signature=excluded.signature
SQL
params = [
attributes[:id],
normalized_domain,
attributes[:pubkey],
attributes[:name],
attributes[:version],
attributes[:channel],
attributes[:frequency],
attributes[:latitude],
attributes[:longitude],
attributes[:last_update_time],
attributes[:is_private] ? 1 : 0,
signature,
]
with_busy_retry do
db.execute(sql, params)
end
end
end
end
end

@@ -0,0 +1,121 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "fileutils"
module PotatoMesh
module App
# Filesystem helpers responsible for migrating legacy assets to XDG-compliant
# directories and preparing runtime storage locations.
module Filesystem
# Execute all filesystem migrations required before the application boots.
#
# @return [void]
def perform_initial_filesystem_setup!
migrate_legacy_database!
migrate_legacy_keyfile!
migrate_legacy_well_known_assets!
end
private
# Copy the legacy database file into the configured XDG data directory.
#
# @return [void]
def migrate_legacy_database!
return unless default_database_destination?
migrate_legacy_file(
PotatoMesh::Config.legacy_db_path,
PotatoMesh::Config.db_path,
chmod: 0o600,
context: "filesystem.db",
)
end
# Copy the legacy keyfile into the configured XDG configuration directory.
#
# @return [void]
def migrate_legacy_keyfile!
PotatoMesh::Config.legacy_keyfile_candidates.each do |candidate|
migrate_legacy_file(
candidate,
PotatoMesh::Config.keyfile_path,
chmod: 0o600,
context: "filesystem.keys",
)
end
end
# Copy the legacy well-known document into the configured XDG directory.
#
# @return [void]
def migrate_legacy_well_known_assets!
destination = File.join(
PotatoMesh::Config.well_known_storage_root,
File.basename(PotatoMesh::Config.well_known_relative_path),
)
PotatoMesh::Config.legacy_well_known_candidates.each do |candidate|
migrate_legacy_file(
candidate,
destination,
chmod: 0o644,
context: "filesystem.well_known",
)
end
end
# Migrate a legacy file if it exists and the destination has not been created yet.
#
# @param source_path [String] absolute path to the legacy file.
# @param destination_path [String] absolute path to the new file location.
# @param chmod [Integer, nil] optional permission bits applied to the destination file.
# @param context [String] logging context describing the migration target.
# @return [void]
def migrate_legacy_file(source_path, destination_path, chmod:, context:)
return if source_path == destination_path
return unless File.exist?(source_path)
return if File.exist?(destination_path)
FileUtils.mkdir_p(File.dirname(destination_path))
FileUtils.cp(source_path, destination_path)
File.chmod(chmod, destination_path) if chmod
debug_log(
"Migrated legacy file to XDG directory",
context: context,
source: source_path,
destination: destination_path,
)
rescue SystemCallError => e
warn_log(
"Failed to migrate legacy file",
context: context,
source: source_path,
destination: destination_path,
error_class: e.class.name,
error_message: e.message,
)
end
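The copy-if-absent behaviour, including the permission bits, can be exercised against a temporary tree; the file names below are hypothetical:

```ruby
require "fileutils"
require "tmpdir"

migrated_mode = nil
Dir.mktmpdir do |root|
  source = File.join(root, "legacy", "potato-mesh.db")     # hypothetical paths
  destination = File.join(root, "data", "potato-mesh.db")
  FileUtils.mkdir_p(File.dirname(source))
  File.write(source, "db-bytes")

  # Mirror of migrate_legacy_file: skip when the source is missing or the
  # destination already exists, otherwise copy and tighten permissions.
  unless source == destination || !File.exist?(source) || File.exist?(destination)
    FileUtils.mkdir_p(File.dirname(destination))
    FileUtils.cp(source, destination)
    File.chmod(0o600, destination)
  end
  migrated_mode = format("%o", File.stat(destination).mode & 0o777)
end
```

Checking `File.exist?(destination)` first makes the migration idempotent: re-running it never clobbers data the new location already holds.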
# Determine whether the database destination matches the configured default.
#
# @return [Boolean] true when the destination should receive migrated data.
def default_database_destination?
PotatoMesh::Config.db_path == PotatoMesh::Config.default_db_path
end
end
end
end

@@ -0,0 +1,352 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
# Shared view and controller helper methods. Each helper is documented with
# its intended consumers to ensure consistent behaviour across the Sinatra
# application.
module Helpers
# Fetch an application level constant exposed by {PotatoMesh::Application}.
#
# @param name [Symbol] constant identifier to retrieve.
# @return [Object] constant value stored on the application class.
def app_constant(name)
PotatoMesh::Application.const_get(name)
end
# Retrieve the configured Prometheus report identifiers as an array.
#
# @return [Array<String>] list of report IDs used on the metrics page.
def prom_report_ids
PotatoMesh::Config.prom_report_id_list
end
# Read a text configuration value with a fallback.
#
# @param key [String] environment variable key.
# @param default [String] fallback value when unset.
# @return [String] sanitised configuration string.
def fetch_config_string(key, default)
PotatoMesh::Config.fetch_string(key, default)
end
# Proxy for {PotatoMesh::Sanitizer.string_or_nil}.
#
# @param value [Object] value to sanitise.
# @return [String, nil] cleaned string or nil.
def string_or_nil(value)
PotatoMesh::Sanitizer.string_or_nil(value)
end
# Proxy for {PotatoMesh::Sanitizer.sanitize_instance_domain}.
#
# @param value [Object] candidate domain string.
# @param downcase [Boolean] whether to force lowercase normalisation.
# @return [String, nil] canonical domain or nil.
def sanitize_instance_domain(value, downcase: true)
PotatoMesh::Sanitizer.sanitize_instance_domain(value, downcase: downcase)
end
# Proxy for {PotatoMesh::Sanitizer.instance_domain_host}.
#
# @param domain [String] domain literal.
# @return [String, nil] host portion of the domain.
def instance_domain_host(domain)
PotatoMesh::Sanitizer.instance_domain_host(domain)
end
# Proxy for {PotatoMesh::Sanitizer.ip_from_domain}.
#
# @param domain [String] domain literal.
# @return [IPAddr, nil] parsed address object.
def ip_from_domain(domain)
PotatoMesh::Sanitizer.ip_from_domain(domain)
end
# Proxy for {PotatoMesh::Sanitizer.sanitized_string}.
#
# @param value [Object] arbitrary input.
# @return [String] trimmed string representation.
def sanitized_string(value)
PotatoMesh::Sanitizer.sanitized_string(value)
end
# Retrieve the site name presented to users.
#
# @return [String] sanitised site label.
def sanitized_site_name
PotatoMesh::Sanitizer.sanitized_site_name
end
# Retrieve the configured channel.
#
# @return [String] sanitised channel identifier.
def sanitized_channel
PotatoMesh::Sanitizer.sanitized_channel
end
# Retrieve the configured frequency descriptor.
#
# @return [String] sanitised frequency text.
def sanitized_frequency
PotatoMesh::Sanitizer.sanitized_frequency
end
# Build the configuration hash exposed to the frontend application.
#
# @return [Hash] JSON serialisable configuration payload.
def frontend_app_config
{
refreshIntervalSeconds: PotatoMesh::Config.refresh_interval_seconds,
refreshMs: PotatoMesh::Config.refresh_interval_seconds * 1000,
chatEnabled: !private_mode?,
channel: sanitized_channel,
frequency: sanitized_frequency,
contactLink: sanitized_contact_link,
contactLinkUrl: sanitized_contact_link_url,
mapCenter: {
lat: PotatoMesh::Config.map_center_lat,
lon: PotatoMesh::Config.map_center_lon,
},
maxDistanceKm: PotatoMesh::Config.max_distance_km,
tileFilters: PotatoMesh::Config.tile_filters,
instanceDomain: app_constant(:INSTANCE_DOMAIN),
instancesFeatureEnabled: federation_enabled? && !private_mode?,
}
end
# Retrieve the configured contact link or nil when unset.
#
# @return [String, nil] contact link identifier.
def sanitized_contact_link
PotatoMesh::Sanitizer.sanitized_contact_link
end
# Retrieve the hyperlink derived from the configured contact link.
#
# @return [String, nil] hyperlink pointing to the community chat.
def sanitized_contact_link_url
PotatoMesh::Sanitizer.sanitized_contact_link_url
end
# Retrieve the configured maximum node distance in kilometres.
#
# @return [Numeric, nil] maximum distance or nil if disabled.
def sanitized_max_distance_km
PotatoMesh::Sanitizer.sanitized_max_distance_km
end
# Format a kilometre value for human readable output.
#
# @param distance [Numeric] distance in kilometres.
# @return [String] formatted distance value.
def formatted_distance_km(distance)
PotatoMesh::Meta.formatted_distance_km(distance)
end
# Generate the meta description used in SEO tags.
#
# @return [String] combined descriptive sentence.
def meta_description
PotatoMesh::Meta.description(private_mode: private_mode?)
end
# Generate the structured meta configuration for the UI.
#
# @return [Hash] frozen configuration metadata.
def meta_configuration
PotatoMesh::Meta.configuration(private_mode: private_mode?)
end
# Coerce an arbitrary value into an integer when possible.
#
# @param value [Object] user supplied value.
# @return [Integer, nil] parsed integer or nil when invalid.
def coerce_integer(value)
case value
when Integer
value
when Float
value.finite? ? value.to_i : nil
when Numeric
value.to_i
when String
trimmed = value.strip
return nil if trimmed.empty?
return trimmed.to_i(16) if trimmed.match?(/\A0[xX][0-9A-Fa-f]+\z/)
return trimmed.to_i(10) if trimmed.match?(/\A-?\d+\z/)
begin
float_val = Float(trimmed)
float_val.finite? ? float_val.to_i : nil
rescue ArgumentError
nil
end
else
nil
end
end
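A standalone mirror of the coercion rules shows the hexadecimal, decimal, and float paths:

```ruby
# Standalone mirror of the integer coercion rules above.
def demo_coerce_integer(value)
  case value
  when Integer then value
  when Float then value.finite? ? value.to_i : nil
  when Numeric then value.to_i
  when String
    trimmed = value.strip
    return nil if trimmed.empty?
    return trimmed.to_i(16) if trimmed.match?(/\A0[xX][0-9A-Fa-f]+\z/)
    return trimmed.to_i(10) if trimmed.match?(/\A-?\d+\z/)
    begin
      float_val = Float(trimmed)
      float_val.finite? ? float_val.to_i : nil
    rescue ArgumentError
      nil
    end
  end
end
```

"0x1F" parses as hexadecimal, "3.9" truncates through the float path, and non-numeric input yields nil rather than raising.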
# Coerce an arbitrary value into a floating point number when possible.
#
# @param value [Object] user supplied value.
# @return [Float, nil] parsed float or nil when invalid.
def coerce_float(value)
case value
when Float
value.finite? ? value : nil
when Integer
value.to_f
when Numeric
value.to_f
when String
trimmed = value.strip
return nil if trimmed.empty?
begin
float_val = Float(trimmed)
float_val.finite? ? float_val : nil
rescue ArgumentError
nil
end
else
nil
end
end
# Coerce an arbitrary value into a boolean according to common truthy
# conventions.
#
# @param value [Object] user supplied value.
# @return [Boolean, nil] boolean interpretation or nil when unknown.
def coerce_boolean(value)
case value
when true, false
value
when String
trimmed = value.strip.downcase
return true if %w[true 1 yes y].include?(trimmed)
return false if %w[false 0 no n].include?(trimmed)
nil
when Numeric
!value.to_i.zero?
else
nil
end
end
# Normalise PEM encoded public key content into LF line endings.
#
# @param value [String, #to_s, nil] raw PEM content.
# @return [String, nil] cleaned PEM string or nil when blank.
def sanitize_public_key_pem(value)
return nil if value.nil?
pem = value.is_a?(String) ? value : value.to_s
pem = pem.gsub(/\r\n?/, "\n")
return nil if pem.strip.empty?
pem
end
# Recursively coerce hash keys to strings and normalise nested arrays.
#
# @param value [Object] JSON compatible value.
# @return [Object] structure with canonical string keys.
def normalize_json_value(value)
case value
when Hash
value.each_with_object({}) do |(key, val), memo|
memo[key.to_s] = normalize_json_value(val)
end
when Array
value.map { |element| normalize_json_value(element) }
else
value
end
end
# Parse JSON payloads or hashes into normalised hashes with string keys.
#
# @param value [Hash, String, nil] raw JSON object or string representation.
# @return [Hash, nil] canonicalised hash or nil when parsing fails.
def normalize_json_object(value)
case value
when Hash
normalize_json_value(value)
when String
trimmed = value.strip
return nil if trimmed.empty?
begin
parsed = JSON.parse(trimmed)
rescue JSON::ParserError
return nil
end
parsed.is_a?(Hash) ? normalize_json_value(parsed) : nil
else
nil
end
end
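Key stringification recurses through hashes and arrays while leaving scalar values untouched; a compact sketch:

```ruby
# Compact mirror of the recursive key normalisation.
def demo_normalize(value)
  case value
  when Hash
    value.each_with_object({}) { |(k, v), memo| memo[k.to_s] = demo_normalize(v) }
  when Array
    value.map { |element| demo_normalize(element) }
  else
    value
  end
end

normalized = demo_normalize(id: 1, tags: [{ name: "relay" }])
```

This lets code accept symbol-keyed Ruby hashes and string-keyed parsed JSON interchangeably, since both normalise to the same shape.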
# Emit a structured debug log entry tagged with the calling context.
#
# @param message [String] text to emit.
# @param context [String] logical source of the message.
# @param metadata [Hash] additional structured key/value data.
# @return [void]
def debug_log(message, context: "app", **metadata)
logger = PotatoMesh::Logging.logger_for(self)
PotatoMesh::Logging.log(logger, :debug, message, context: context, **metadata)
end
# Emit a structured warning log entry tagged with the calling context.
#
# @param message [String] text to emit.
# @param context [String] logical source of the message.
# @param metadata [Hash] additional structured key/value data.
# @return [void]
def warn_log(message, context: "app", **metadata)
logger = PotatoMesh::Logging.logger_for(self)
PotatoMesh::Logging.log(logger, :warn, message, context: context, **metadata)
end
# Indicate whether private mode has been requested.
#
# @return [Boolean] true when PRIVATE=1.
def private_mode?
PotatoMesh::Config.private_mode_enabled?
end
# Identify whether the Rack environment corresponds to the test suite.
#
# @return [Boolean] true when RACK_ENV is "test".
def test_environment?
ENV["RACK_ENV"] == "test"
end
# Determine whether federation features should be active.
#
# @return [Boolean] true when federation configuration allows it.
def federation_enabled?
PotatoMesh::Config.federation_enabled?
end
# Determine whether federation announcements should run asynchronously.
#
# @return [Boolean] true when announcements are enabled.
def federation_announcements_active?
federation_enabled? && !test_environment?
end
end
end
end

@@ -0,0 +1,288 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Identity
# Resolve the current application version string using git metadata when available.
#
# @return [String] semantic version compatible identifier.
def determine_app_version
repo_root = locate_git_repo_root(File.expand_path("../../..", __dir__))
return PotatoMesh::Config.version_fallback unless repo_root
stdout, status = Open3.capture2("git", "-C", repo_root, "describe", "--tags", "--long", "--abbrev=7")
return PotatoMesh::Config.version_fallback unless status.success?
raw = stdout.strip
return PotatoMesh::Config.version_fallback if raw.empty?
match = /\A(?<tag>.+)-(?<count>\d+)-g(?<hash>[0-9a-f]+)\z/.match(raw)
return raw unless match
tag = match[:tag]
count = match[:count].to_i
hash = match[:hash]
return tag if count.zero?
"#{tag}+#{count}-#{hash}"
rescue StandardError
PotatoMesh::Config.version_fallback
end
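The describe-output parsing can be exercised on a hypothetical tag string:

```ruby
# Parsing a hypothetical `git describe --tags --long --abbrev=7` result.
raw = "v0.5.3-12-gabc1234"
match = /\A(?<tag>.+)-(?<count>\d+)-g(?<hash>[0-9a-f]+)\z/.match(raw)
version =
  if match
    count = match[:count].to_i
    count.zero? ? match[:tag] : "#{match[:tag]}+#{count}-#{match[:hash]}"
  else
    raw
  end
```

A commit exactly on a tag ("v0.5.3-0-gabc1234") collapses to just "v0.5.3", while commits past the tag carry a build-metadata-style suffix.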
# Discover the root directory of the git repository containing the
# application by traversing parent directories until a ``.git`` entry is
# located. This supports both traditional repositories where ``.git`` is a
# directory and worktree checkouts where it is a plain file.
#
# @param start_dir [String] absolute path where the search should begin.
# @return [String, nil] absolute path to the repository root when found,
# otherwise ``nil``.
def locate_git_repo_root(start_dir)
current = File.expand_path(start_dir)
loop do
git_entry = File.join(current, ".git")
return current if File.exist?(git_entry)
parent = File.dirname(current)
break if parent == current
current = parent
end
nil
end
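The upward walk terminates when File.dirname returns its own input, which only happens at the filesystem root. It can be verified against a temporary tree:

```ruby
require "fileutils"
require "tmpdir"

located_at_root = nil
Dir.mktmpdir do |root|
  FileUtils.mkdir_p(File.join(root, ".git")) # works for a dir or a plain file
  nested = File.join(root, "server", "lib")
  FileUtils.mkdir_p(nested)

  # Mirror of the walk: climb parents until a .git entry appears.
  current = File.expand_path(nested)
  found = nil
  loop do
    if File.exist?(File.join(current, ".git"))
      found = current
      break
    end
    parent = File.dirname(current)
    break if parent == current
    current = parent
  end
  located_at_root = (found == root)
end
```

Using File.exist? rather than File.directory? is what makes worktree checkouts work, since their `.git` entry is a plain file.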
# Load the persisted instance private key or generate a new one when absent.
#
# @return [Array(OpenSSL::PKey::RSA, Boolean)] tuple of the key and a flag
#   indicating whether it was newly generated.
def load_or_generate_instance_private_key
keyfile_path = PotatoMesh::Config.keyfile_path
migrate_legacy_keyfile_for_identity!(keyfile_path)
FileUtils.mkdir_p(File.dirname(keyfile_path))
if File.exist?(keyfile_path)
contents = File.binread(keyfile_path)
return [OpenSSL::PKey.read(contents), false]
end
key = OpenSSL::PKey::RSA.new(2048)
File.open(keyfile_path, File::WRONLY | File::CREAT | File::TRUNC, 0o600) do |file|
file.write(key.export)
end
[key, true]
rescue OpenSSL::PKey::PKeyError, ArgumentError => e
warn_log(
"Failed to load instance private key",
context: "identity.keys",
error_class: e.class.name,
error_message: e.message,
)
key = OpenSSL::PKey::RSA.new(2048)
File.open(keyfile_path, File::WRONLY | File::CREAT | File::TRUNC, 0o600) do |file|
file.write(key.export)
end
[key, true]
end
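The persist-then-reload cycle can be exercised with a throwaway key in a temporary directory; the keyfile name is hypothetical:

```ruby
require "openssl"
require "tmpdir"

round_trip_ok = nil
Dir.mktmpdir do |dir|
  keyfile_path = File.join(dir, "instance_key.pem") # hypothetical keyfile path
  key = OpenSSL::PKey::RSA.new(2048)
  # 0o600 keeps the private key readable by the service user only.
  File.open(keyfile_path, File::WRONLY | File::CREAT | File::TRUNC, 0o600) do |file|
    file.write(key.export)
  end
  reloaded = OpenSSL::PKey.read(File.binread(keyfile_path))
  round_trip_ok = reloaded.public_key.to_pem == key.public_key.to_pem
end
```

OpenSSL::PKey.read autodetects the key type from the PEM contents, which is why the loader above does not need to hard-code RSA on the read path.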
# Migrate an existing legacy keyfile into the configured destination.
#
# @param destination_path [String] absolute path where the keyfile should reside.
# @return [void]
def migrate_legacy_keyfile_for_identity!(destination_path)
return if File.exist?(destination_path)
PotatoMesh::Config.legacy_keyfile_candidates.each do |candidate|
next unless File.exist?(candidate)
next if candidate == destination_path
begin
FileUtils.mkdir_p(File.dirname(destination_path))
FileUtils.cp(candidate, destination_path)
File.chmod(0o600, destination_path)
debug_log(
"Migrated legacy keyfile to XDG directory",
context: "identity.keys",
source: candidate,
destination: destination_path,
)
rescue SystemCallError => e
warn_log(
"Failed to migrate legacy keyfile",
context: "identity.keys",
source: candidate,
destination: destination_path,
error_class: e.class.name,
error_message: e.message,
)
next
end
break
end
end
private :migrate_legacy_keyfile_for_identity!, :locate_git_repo_root
# Return the directory used to store well-known documents.
#
# @return [String] absolute path to the staging directory.
def well_known_directory
PotatoMesh::Config.well_known_storage_root
end
# Determine the absolute path to the well-known document file.
#
# @return [String] filesystem path for the JSON document.
def well_known_file_path
File.join(
well_known_directory,
File.basename(PotatoMesh::Config.well_known_relative_path),
)
end
# Remove legacy well-known artifacts from previous releases.
#
# @return [void]
def cleanup_legacy_well_known_artifacts
legacy_path = PotatoMesh::Config.legacy_public_well_known_path
FileUtils.rm_f(legacy_path)
legacy_dir = File.dirname(legacy_path)
FileUtils.rmdir(legacy_dir) if Dir.exist?(legacy_dir) && Dir.empty?(legacy_dir)
rescue SystemCallError
# Ignore errors removing legacy static files; failure only means the directory
# or file did not exist or is in use.
end
# Construct the JSON body and detached signature for the well-known document.
#
# @return [Array(String, String)] pair of JSON output and base64 signature.
def build_well_known_document
last_update = latest_node_update_timestamp
domain_value = sanitize_instance_domain(app_constant(:INSTANCE_DOMAIN))
payload = {
publicKey: app_constant(:INSTANCE_PUBLIC_KEY_PEM),
name: sanitized_site_name,
version: app_constant(:APP_VERSION),
domain: domain_value,
lastUpdate: last_update,
}
signed_payload = JSON.generate(payload, sort_keys: true)
signature = Base64.strict_encode64(
app_constant(:INSTANCE_PRIVATE_KEY).sign(OpenSSL::Digest::SHA256.new, signed_payload),
)
document = payload.merge(
signature: signature,
signatureAlgorithm: PotatoMesh::Config.instance_signature_algorithm,
signedPayload: Base64.strict_encode64(signed_payload),
)
json_output = JSON.pretty_generate(document)
[json_output, signature]
end
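The method above follows the common sign-then-encode pattern. The sketch below uses a toy payload (not the application's exact document fields) to show how a federation peer could verify the detached signature against the published public key:

```ruby
require "openssl"
require "json"
require "base64"

key = OpenSSL::PKey::RSA.new(2048)

# Serialise the payload, sign the exact bytes with RSA/SHA-256, then
# base64-encode the detached signature for transport.
signed_payload = JSON.generate({ "domain" => "mesh.example", "version" => "0.5.3" })
signature = Base64.strict_encode64(key.sign(OpenSSL::Digest::SHA256.new, signed_payload))

# A peer verifies against the public key and the same serialised bytes.
verified = key.public_key.verify(
  OpenSSL::Digest::SHA256.new,
  Base64.strict_decode64(signature),
  signed_payload,
)
puts verified # => true
```

Verification only succeeds on the byte-identical serialisation, which is why the document also ships `signedPayload`: the peer checks the signature against those exact bytes rather than re-serialising the fields itself.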
# Regenerate the well-known document when it is stale or when the existing
# content no longer matches the current instance configuration.
#
# @return [void]
def refresh_well_known_document_if_stale
FileUtils.mkdir_p(well_known_directory)
path = well_known_file_path
now = Time.now
json_output, signature = build_well_known_document
expected_contents = json_output.end_with?("\n") ? json_output : "#{json_output}\n"
needs_update = true
if File.exist?(path)
current_contents = File.binread(path)
mtime = File.mtime(path)
if current_contents == expected_contents &&
(now - mtime) < PotatoMesh::Config.well_known_refresh_interval
needs_update = false
end
end
return unless needs_update
File.open(path, File::WRONLY | File::CREAT | File::TRUNC, 0o644) do |file|
file.write(expected_contents)
end
debug_log(
"Refreshed well-known document content",
context: "identity.well_known",
path: PotatoMesh::Config.well_known_relative_path,
bytes: json_output.bytesize,
document: json_output,
)
debug_log(
"Refreshed well-known document signature",
context: "identity.well_known",
path: PotatoMesh::Config.well_known_relative_path,
algorithm: PotatoMesh::Config.instance_signature_algorithm,
signature: signature,
)
end
# Retrieve the latest node update timestamp from the database.
#
# @return [Integer, nil] Unix timestamp or nil when unavailable.
def latest_node_update_timestamp
return nil unless File.exist?(PotatoMesh::Config.db_path)
db = open_database(readonly: true)
value = db.get_first_value("SELECT MAX(last_heard) FROM nodes")
value&.to_i
rescue SQLite3::Exception
nil
ensure
db&.close
end
# Emit a debug entry describing the active instance key material.
#
# @return [void]
def log_instance_public_key
debug_log(
"Loaded instance public key",
context: "identity.keys",
public_key_pem: app_constant(:INSTANCE_PUBLIC_KEY_PEM),
)
if app_constant(:INSTANCE_KEY_GENERATED)
debug_log(
"Generated new instance private key",
context: "identity.keys",
path: PotatoMesh::Config.keyfile_path,
)
end
end
# Emit a debug entry describing how the instance domain was derived.
#
# @return [void]
def log_instance_domain_resolution
source = app_constant(:INSTANCE_DOMAIN_SOURCE) || :unknown
debug_log(
"Resolved instance domain",
context: "identity.domain",
source: source,
domain: app_constant(:INSTANCE_DOMAIN),
)
end
end
end
end


@@ -0,0 +1,208 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
# Helper methods for maintaining and presenting instance records.
module Instances
# Remove duplicate instance records grouped by their canonical domain name
# while favouring the most recent entry.
#
# @return [void]
def clean_duplicate_instances!
db = open_database
rows = with_busy_retry do
db.execute(
<<~SQL
SELECT rowid, domain, last_update_time
FROM instances
WHERE domain IS NOT NULL AND TRIM(domain) != ''
SQL
)
end
grouped = rows.group_by do |row|
sanitize_instance_domain(row[1])&.downcase
rescue StandardError
nil
end
deletions = []
updates = {}
grouped.each do |canonical_domain, entries|
next if canonical_domain.nil?
next if entries.size <= 1
sorted_entries = entries.sort_by do |entry|
timestamp = coerce_integer(entry[2]) || -1
[timestamp, entry[0].to_i]
end
keeper = sorted_entries.last
next unless keeper
deletions.concat(sorted_entries[0...-1].map { |entry| entry[0].to_i })
current_domain = keeper[1]
if canonical_domain && current_domain != canonical_domain
updates[keeper[0].to_i] = canonical_domain
end
removed_count = sorted_entries.length - 1
warn_log(
"Removed duplicate instance records",
context: "instances.cleanup",
domain: canonical_domain,
removed: removed_count,
) if removed_count.positive?
end
unless deletions.empty?
placeholders = Array.new(deletions.size, "?").join(",")
with_busy_retry do
db.execute("DELETE FROM instances WHERE rowid IN (#{placeholders})", deletions)
end
end
updates.each do |rowid, canonical_domain|
with_busy_retry do
db.execute("UPDATE instances SET domain = ? WHERE rowid = ?", [canonical_domain, rowid])
end
end
rescue SQLite3::Exception => e
warn_log(
"Failed to clean duplicate instances",
context: "instances.cleanup",
error_class: e.class.name,
error_message: e.message,
)
ensure
db&.close
end
# Normalise and validate an instance database row for API presentation.
#
# @param row [Hash] raw database row with string keys.
# @return [Hash, nil] cleaned hash or +nil+ when the row is discarded.
def normalize_instance_row(row)
unless row.is_a?(Hash)
warn_log(
"Discarded malformed instance row",
context: "instances.normalize",
reason: "row not hash",
)
return nil
end
id = string_or_nil(row["id"])
domain = sanitize_instance_domain(row["domain"])&.downcase
pubkey = sanitize_public_key_pem(row["pubkey"])
signature = string_or_nil(row["signature"])
last_update_time = coerce_integer(row["last_update_time"])
is_private_raw = row["is_private"]
private_flag = coerce_boolean(is_private_raw)
if private_flag.nil?
numeric_private = coerce_integer(is_private_raw)
private_flag = !numeric_private.to_i.zero? if numeric_private
end
private_flag = false if private_flag.nil?
if id.nil? || domain.nil? || pubkey.nil?
warn_log(
"Discarded malformed instance row",
context: "instances.normalize",
instance_id: row["id"],
domain: row["domain"],
reason: "missing required fields",
)
return nil
end
payload = {
"id" => id,
"domain" => domain,
"pubkey" => pubkey,
"name" => string_or_nil(row["name"]),
"version" => string_or_nil(row["version"]),
"channel" => string_or_nil(row["channel"]),
"frequency" => string_or_nil(row["frequency"]),
"latitude" => coerce_float(row["latitude"]),
"longitude" => coerce_float(row["longitude"]),
"lastUpdateTime" => last_update_time,
"isPrivate" => private_flag,
"signature" => signature,
}
payload.reject { |_, value| value.nil? }
rescue StandardError => e
warn_log(
"Failed to normalise instance row",
context: "instances.normalize",
instance_id: row.respond_to?(:[]) ? row["id"] : nil,
domain: row.respond_to?(:[]) ? row["domain"] : nil,
error_class: e.class.name,
error_message: e.message,
)
nil
end
# Fetch all instance rows ready to be served by the API while handling
# malformed rows gracefully. The dataset is restricted to records updated
# within the rolling window defined by PotatoMesh::Config.week_seconds.
#
# @return [Array<Hash>] list of cleaned instance payloads.
def load_instances_for_api
clean_duplicate_instances!
db = open_database(readonly: true)
db.results_as_hash = true
now = Time.now.to_i
min_last_update_time = now - PotatoMesh::Config.week_seconds
sql = <<~SQL
SELECT id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, signature
FROM instances
WHERE domain IS NOT NULL AND TRIM(domain) != ''
AND pubkey IS NOT NULL AND TRIM(pubkey) != ''
AND last_update_time IS NOT NULL AND last_update_time >= ?
ORDER BY LOWER(domain)
SQL
rows = with_busy_retry do
db.execute(sql, min_last_update_time)
end
rows.each_with_object([]) do |row, memo|
normalized = normalize_instance_row(row)
next unless normalized
last_update_time = normalized["lastUpdateTime"]
next unless last_update_time.is_a?(Integer) && last_update_time >= min_last_update_time
memo << normalized
end
rescue SQLite3::Exception => e
warn_log(
"Failed to load instance records",
context: "instances.load",
error_class: e.class.name,
error_message: e.message,
)
[]
ensure
db&.close
end
end
end
end


@@ -0,0 +1,355 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Networking
# Normalise the configured instance domain by stripping schemes and verifying structure.
#
# @param raw [String, nil] environment supplied domain or URL.
# @return [String, nil] canonicalised hostname with optional port.
def canonicalize_configured_instance_domain(raw)
return nil if raw.nil?
trimmed = raw.to_s.strip
return nil if trimmed.empty?
candidate = trimmed
if candidate.include?("://")
begin
uri = URI.parse(candidate)
rescue URI::InvalidURIError => e
raise "INSTANCE_DOMAIN must be a valid hostname or URL, but parsing #{candidate.inspect} failed: #{e.message}"
end
unless uri.host
raise "INSTANCE_DOMAIN URL must include a hostname: #{candidate.inspect}"
end
if uri.userinfo
raise "INSTANCE_DOMAIN URL must not include credentials: #{candidate.inspect}"
end
if uri.path && !uri.path.empty? && uri.path != "/"
raise "INSTANCE_DOMAIN URL must not include a path component: #{candidate.inspect}"
end
if uri.query || uri.fragment
raise "INSTANCE_DOMAIN URL must not include query or fragment data: #{candidate.inspect}"
end
hostname = uri.hostname
unless hostname
raise "INSTANCE_DOMAIN URL must include a hostname: #{candidate.inspect}"
end
ip_host = ipv6_literal?(hostname)
candidate_host = ip_host ? "[#{ip_host}]" : hostname
candidate = candidate_host
port = uri.port
candidate = "#{candidate_host}:#{port}" if port_required?(uri, trimmed)
end
ipv6_with_port = candidate.match(/\A(?<address>.+):(?<port>\d+)\z/)
if ipv6_with_port
address = ipv6_with_port[:address]
port = ipv6_with_port[:port]
literal = ipv6_literal?(address)
if literal && PotatoMesh::Sanitizer.valid_port?(port)
candidate = "[#{literal}]:#{port}"
else
ipv6_literal = ipv6_literal?(candidate)
candidate = "[#{ipv6_literal}]" if ipv6_literal
end
else
ipv6_literal = ipv6_literal?(candidate)
candidate = "[#{ipv6_literal}]" if ipv6_literal
end
sanitized = sanitize_instance_domain(candidate)
unless sanitized
raise "INSTANCE_DOMAIN must be a bare hostname (optionally with a port) without schemes or paths: #{raw.inspect}"
end
ensure_ipv6_instance_domain(sanitized).downcase
end
# Resolve the best domain for the running instance using configuration and network discovery.
#
# @return [Array(String, Symbol)] tuple containing the domain and the discovery source.
def determine_instance_domain
raw = ENV["INSTANCE_DOMAIN"]
if raw
canonical = canonicalize_configured_instance_domain(raw)
return [canonical, :environment] if canonical
end
reverse = sanitize_instance_domain(reverse_dns_domain)
return [reverse, :reverse_dns] if reverse
public_ip = discover_public_ip_address
return [public_ip, :public_ip] if public_ip
protected_ip = discover_protected_ip_address
return [protected_ip, :protected_ip] if protected_ip
[discover_local_ip_address, :local_ip]
end
# Attempt to determine the reverse DNS hostname for the local machine.
#
# @return [String, nil] resolved hostname or nil when unavailable.
def reverse_dns_domain
Socket.ip_address_list.each do |address|
next unless address.respond_to?(:ip?) && address.ip?
loopback =
(address.respond_to?(:ipv4_loopback?) && address.ipv4_loopback?) ||
(address.respond_to?(:ipv6_loopback?) && address.ipv6_loopback?)
next if loopback
link_local =
address.respond_to?(:ipv6_linklocal?) && address.ipv6_linklocal?
next if link_local
ip = address.ip_address
next if ip.nil? || ip.empty?
begin
hostname = Resolv.getname(ip)
trimmed = hostname&.strip
return trimmed unless trimmed.nil? || trimmed.empty?
rescue Resolv::ResolvError, Resolv::ResolvTimeout, SocketError
next
end
end
nil
end
# Identify the first public IP address of the current host.
#
# @return [String, nil] public IP address string or nil.
def discover_public_ip_address
address = ip_address_candidates.find { |candidate| public_ip_address?(candidate) }
address&.ip_address
end
# Identify a private yet non-loopback IP address suitable for protected networks.
#
# @return [String, nil] protected network address or nil.
def discover_protected_ip_address
address = ip_address_candidates.find { |candidate| protected_ip_address?(candidate) }
address&.ip_address
end
# Collect viable socket addresses for evaluation.
#
# @return [Array<#ip?>] list of socket addresses supporting IP queries.
def ip_address_candidates
Socket.ip_address_list.select { |addr| addr.respond_to?(:ip?) && addr.ip? }
end
# Determine whether a socket address represents a public IP.
#
# @param addr [Addrinfo] candidate socket address.
# @return [Boolean] true when the address is publicly routable.
def public_ip_address?(addr)
ip = ipaddr_from(addr)
return false unless ip
return false if loopback_address?(addr, ip)
return false if link_local_address?(addr, ip)
return false if private_address?(addr, ip)
return false if unspecified_address?(ip)
true
end
# Determine whether a socket address resides on a protected private network.
#
# @param addr [Addrinfo] candidate socket address.
# @return [Boolean] true when the address is private but not loopback/link-local.
def protected_ip_address?(addr)
ip = ipaddr_from(addr)
return false unless ip
return false if loopback_address?(addr, ip)
return false if link_local_address?(addr, ip)
private_address?(addr, ip)
end
# Parse an IP address from the provided socket address.
#
# @param addr [Addrinfo] socket address to examine.
# @return [IPAddr, nil] parsed IP or nil when invalid.
def ipaddr_from(addr)
ip = addr.ip_address
return nil if ip.nil? || ip.empty?
IPAddr.new(ip)
rescue IPAddr::InvalidAddressError
nil
end
# Determine whether a socket address is loopback.
#
# @param addr [Addrinfo] socket address to inspect.
# @param ip [IPAddr] parsed IP representation of the address.
# @return [Boolean] true when the address is loopback.
def loopback_address?(addr, ip)
(addr.respond_to?(:ipv4_loopback?) && addr.ipv4_loopback?) ||
(addr.respond_to?(:ipv6_loopback?) && addr.ipv6_loopback?) ||
ip.loopback?
end
# Determine whether a socket address is link-local.
#
# @param addr [Addrinfo] socket address to inspect.
# @param ip [IPAddr] parsed IP representation of the address.
# @return [Boolean] true when the address is link-local.
def link_local_address?(addr, ip)
(addr.respond_to?(:ipv6_linklocal?) && addr.ipv6_linklocal?) ||
(ip.respond_to?(:link_local?) && ip.link_local?)
end
# Determine whether a socket address is private.
#
# @param addr [Addrinfo] socket address to inspect.
# @param ip [IPAddr] parsed IP representation of the address.
# @return [Boolean] true when the address is private.
def private_address?(addr, ip)
if addr.respond_to?(:ipv4?) && addr.ipv4? && addr.respond_to?(:ipv4_private?)
addr.ipv4_private?
else
ip.private?
end
end
# Identify unspecified IP addresses.
#
# @param ip [IPAddr] parsed IP.
# @return [Boolean] true for unspecified addresses (0.0.0.0 / ::).
def unspecified_address?(ip)
(ip.ipv4? || ip.ipv6?) && ip.to_i.zero?
end
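The classifiers above lean on predicates from Ruby's stdlib `IPAddr`. A quick demonstration of the categories involved:

```ruby
require "ipaddr"

puts IPAddr.new("127.0.0.1").loopback?   # => true  (rejected by loopback_address?)
puts IPAddr.new("10.1.2.3").private?     # => true  (RFC 1918, "protected")
puts IPAddr.new("fe80::1").link_local?   # => true  (rejected by link_local_address?)
puts IPAddr.new("0.0.0.0").to_i.zero?    # => true  (unspecified)
puts IPAddr.new("203.0.113.7").private?  # => false (publicly routable range)
```

An address only counts as public once every rejection above fails, which is exactly the cascade `public_ip_address?` implements.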
# Choose the most appropriate local IP address for the instance domain.
#
# @return [String] selected IP address string.
def discover_local_ip_address
candidates = ip_address_candidates
ipv4 = candidates.find do |addr|
addr.respond_to?(:ipv4?) && addr.ipv4? && !(addr.respond_to?(:ipv4_loopback?) && addr.ipv4_loopback?)
end
return ipv4.ip_address if ipv4
non_loopback = candidates.find do |addr|
!(addr.respond_to?(:ipv4_loopback?) && addr.ipv4_loopback?) &&
!(addr.respond_to?(:ipv6_loopback?) && addr.ipv6_loopback?)
end
return non_loopback.ip_address if non_loopback
loopback = candidates.find do |addr|
(addr.respond_to?(:ipv4_loopback?) && addr.ipv4_loopback?) ||
(addr.respond_to?(:ipv6_loopback?) && addr.ipv6_loopback?)
end
return loopback.ip_address if loopback
"127.0.0.1"
end
# Determine whether an IP should be restricted from exposure.
#
# @param ip [IPAddr] candidate IP address.
# @return [Boolean] true when the IP should not be exposed.
def restricted_ip_address?(ip)
return true if ip.loopback?
return true if ip.private?
return true if ip.link_local?
return true if ip.to_i.zero?
false
end
# Normalize IPv6 instance domains so that they remain bracketed and URI-compatible.
#
# @param domain [String] sanitized hostname optionally including a port suffix.
# @return [String] domain with IPv6 literals wrapped in brackets when necessary.
def ensure_ipv6_instance_domain(domain)
bracketed_match = domain.match(/\A\[(?<host>[^\]]+)\](?::(?<port>\d+))?\z/)
if bracketed_match
host = bracketed_match[:host]
port = bracketed_match[:port]
ipv6 = ipv6_literal?(host)
if ipv6
return "[#{ipv6}]#{port ? ":#{port}" : ""}"
end
return domain
end
host_candidate = domain
port_candidate = nil
split_host, separator, split_port = domain.rpartition(":")
if !separator.empty? && split_port.match?(/\A\d+\z/) && !split_host.empty? && !split_host.end_with?(":")
host_candidate = split_host
port_candidate = split_port
end
if port_candidate
ipv6_host = ipv6_literal?(host_candidate)
return "[#{ipv6_host}]:#{port_candidate}" if ipv6_host
host_candidate = domain
port_candidate = nil
end
ipv6 = ipv6_literal?(host_candidate)
return "[#{ipv6}]" if ipv6
domain
end
# Parse an IPv6 literal and return its canonical representation when valid.
#
# @param candidate [String] potential IPv6 literal.
# @return [String, nil] normalized IPv6 literal or nil when the candidate is not IPv6.
def ipv6_literal?(candidate)
ip = IPAddr.new(candidate)
ip.ipv6? ? ip.to_s : nil
rescue IPAddr::InvalidAddressError
nil
end
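The probe above doubles as a canonicaliser: `IPAddr` lowercases and compresses IPv6 literals, while IPv4 addresses and hostnames fall through to nil. A standalone sketch (the helper name `ipv6_canonical` is illustrative):

```ruby
require "ipaddr"

# Return the canonical IPv6 form of the candidate, or nil when it is not IPv6.
def ipv6_canonical(candidate)
  ip = IPAddr.new(candidate)
  ip.ipv6? ? ip.to_s : nil
rescue IPAddr::InvalidAddressError
  nil
end

puts ipv6_canonical("2001:DB8::0:1")       # => "2001:db8::1"
puts ipv6_canonical("192.0.2.1").inspect   # prints nil
puts ipv6_canonical("example.com").inspect # prints nil
```

Canonicalising here means equivalent spellings of one address always produce the same bracketed domain string downstream.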
# Determine whether a URI's port should be included in the canonicalized domain.
#
# @param uri [URI::Generic] parsed URI for the instance domain.
# @param raw [String] original sanitized input string.
# @return [Boolean] true when the port must be preserved.
def port_required?(uri, raw)
port = uri.port
return false unless port
return true unless uri.respond_to?(:default_port) && uri.default_port && port == uri.default_port
raw_port_fragment = ":#{port}"
sanitized_raw = raw.strip
sanitized_raw.end_with?(raw_port_fragment)
end
end
end
end


@@ -0,0 +1,196 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Prometheus
MESSAGES_TOTAL = ::Prometheus::Client::Counter.new(
:meshtastic_messages_total,
docstring: "Total number of messages received",
)
NODES_GAUGE = ::Prometheus::Client::Gauge.new(
:meshtastic_nodes,
docstring: "Number of nodes tracked",
)
NODE_GAUGE = ::Prometheus::Client::Gauge.new(
:meshtastic_node,
docstring: "Presence of a Meshtastic node",
labels: %i[node short_name long_name hw_model role],
)
NODE_BATTERY_LEVEL = ::Prometheus::Client::Gauge.new(
:meshtastic_node_battery_level,
docstring: "Battery level of a Meshtastic node",
labels: [:node],
)
NODE_VOLTAGE = ::Prometheus::Client::Gauge.new(
:meshtastic_node_voltage,
docstring: "Battery voltage of a Meshtastic node",
labels: [:node],
)
NODE_UPTIME = ::Prometheus::Client::Gauge.new(
:meshtastic_node_uptime_seconds,
docstring: "Uptime reported by a Meshtastic node",
labels: [:node],
)
NODE_CHANNEL_UTIL = ::Prometheus::Client::Gauge.new(
:meshtastic_node_channel_utilization,
docstring: "Channel utilization reported by a Meshtastic node",
labels: [:node],
)
NODE_AIR_UTIL_TX = ::Prometheus::Client::Gauge.new(
:meshtastic_node_transmit_air_utilization,
docstring: "Transmit air utilization reported by a Meshtastic node",
labels: [:node],
)
NODE_LATITUDE = ::Prometheus::Client::Gauge.new(
:meshtastic_node_latitude,
docstring: "Latitude of a Meshtastic node",
labels: [:node],
)
NODE_LONGITUDE = ::Prometheus::Client::Gauge.new(
:meshtastic_node_longitude,
docstring: "Longitude of a Meshtastic node",
labels: [:node],
)
NODE_ALTITUDE = ::Prometheus::Client::Gauge.new(
:meshtastic_node_altitude,
docstring: "Altitude of a Meshtastic node",
labels: [:node],
)
METRICS = [
MESSAGES_TOTAL,
NODES_GAUGE,
NODE_GAUGE,
NODE_BATTERY_LEVEL,
NODE_VOLTAGE,
NODE_UPTIME,
NODE_CHANNEL_UTIL,
NODE_AIR_UTIL_TX,
NODE_LATITUDE,
NODE_LONGITUDE,
NODE_ALTITUDE,
].freeze
METRICS.each do |metric|
::Prometheus::Client.registry.register(metric)
rescue ::Prometheus::Client::Registry::AlreadyRegisteredError
# Ignore duplicate registrations when the code is reloaded.
end
# Update per-node Prometheus gauges from decoded telemetry payloads.
#
# @param node_id [String, nil] canonical node identifier.
# @param user [Hash, nil] user block containing shortName/longName/hwModel.
# @param role [String] node role label.
# @param met [Hash, nil] device metrics block.
# @param pos [Hash, nil] position block.
# @return [void]
def update_prometheus_metrics(node_id, user = nil, role = "", met = nil, pos = nil)
ids = prom_report_ids
return if ids.empty? || !node_id
return unless ids[0] == "*" || ids.include?(node_id)
if user && user.is_a?(Hash) && role && role != ""
NODE_GAUGE.set(
1,
labels: {
node: node_id,
short_name: user["shortName"],
long_name: user["longName"],
hw_model: user["hwModel"],
role: role,
},
)
end
if met && met.is_a?(Hash)
if met["batteryLevel"]
NODE_BATTERY_LEVEL.set(met["batteryLevel"], labels: { node: node_id })
end
if met["voltage"]
NODE_VOLTAGE.set(met["voltage"], labels: { node: node_id })
end
if met["uptimeSeconds"]
NODE_UPTIME.set(met["uptimeSeconds"], labels: { node: node_id })
end
if met["channelUtilization"]
NODE_CHANNEL_UTIL.set(met["channelUtilization"], labels: { node: node_id })
end
if met["airUtilTx"]
NODE_AIR_UTIL_TX.set(met["airUtilTx"], labels: { node: node_id })
end
end
if pos && pos.is_a?(Hash)
if pos["latitude"]
NODE_LATITUDE.set(pos["latitude"], labels: { node: node_id })
end
if pos["longitude"]
NODE_LONGITUDE.set(pos["longitude"], labels: { node: node_id })
end
if pos["altitude"]
NODE_ALTITUDE.set(pos["altitude"], labels: { node: node_id })
end
end
end
# Refresh all exported Prometheus metrics from the current node table.
#
# @return [void]
def update_all_prometheus_metrics_from_nodes
nodes = query_nodes(1000)
NODES_GAUGE.set(nodes.size)
ids = prom_report_ids
unless ids.empty?
nodes.each do |n|
node_id = n["node_id"]
next if ids[0] != "*" && !ids.include?(node_id)
update_prometheus_metrics(
node_id,
{
"shortName" => n["short_name"] || "",
"longName" => n["long_name"] || "",
"hwModel" => n["hw_model"] || "",
},
n["role"] || "",
{
"batteryLevel" => n["battery_level"],
"voltage" => n["voltage"],
"uptimeSeconds" => n["uptime_seconds"],
"channelUtilization" => n["channel_utilization"],
"airUtilTx" => n["air_util_tx"],
},
{
"latitude" => n["latitude"],
"longitude" => n["longitude"],
"altitude" => n["altitude"],
},
)
end
end
end
end
end
end


@@ -0,0 +1,455 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Queries
MAX_QUERY_LIMIT = 1000
# Remove nil or empty values from an API response hash to reduce payload size.
# Positional Integer keys emitted by SQLite's results_as_hash mode are dropped
# because the API only exposes the named string keys. Strings containing only
# whitespace are treated as empty to mirror sanitisation elsewhere in the
# application.
#
# @param row [Hash] raw database row to compact.
# @return [Hash] cleaned hash without blank values.
def compact_api_row(row)
return {} unless row.is_a?(Hash)
row.each_with_object({}) do |(key, value), acc|
next if key.is_a?(Integer)
next if value.nil?
if value.is_a?(String)
trimmed = value.strip
next if trimmed.empty?
acc[key] = value
next
end
next if value.respond_to?(:empty?) && value.empty?
acc[key] = value
end
end
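The pruning rules are easiest to see on a representative row. This standalone sketch (the helper name `compact_row` is illustrative) mirrors the behaviour: positional Integer keys, nils, whitespace-only strings, and empty collections are dropped, while falsy-but-meaningful values such as `0` survive.

```ruby
# Drop positional Integer keys, nils, blank strings, and empty collections.
def compact_row(row)
  return {} unless row.is_a?(Hash)
  row.each_with_object({}) do |(key, value), acc|
    next if key.is_a?(Integer)                        # SQLite positional duplicate
    next if value.nil?
    next if value.is_a?(String) && value.strip.empty? # whitespace-only counts as empty
    next if value.respond_to?(:empty?) && value.empty?
    acc[key] = value
  end
end

row = { 0 => "!abcd1234", "node_id" => "!abcd1234", "long_name" => "  ",
        "snr" => nil, "hop_limit" => 0, "tags" => [] }
p compact_row(row) # => {"node_id"=>"!abcd1234", "hop_limit"=>0}
```

Note that `0` passes every guard: it is not nil, not a String, and does not respond to `empty?`, so numeric zeroes are preserved in the payload.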
# Normalise a caller-provided limit to a sane, positive integer.
#
# @param limit [Object] value coerced to an integer.
# @param default [Integer] fallback used when coercion fails.
# @return [Integer] limit clamped between 1 and MAX_QUERY_LIMIT.
def coerce_query_limit(limit, default: 200)
coerced = begin
if limit.is_a?(Integer)
limit
else
Integer(limit, 10)
end
rescue ArgumentError, TypeError
nil
end
coerced = default if coerced.nil? || coerced <= 0
coerced = MAX_QUERY_LIMIT if coerced > MAX_QUERY_LIMIT
coerced
end
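The clamping behaviour above, sketched standalone (names are illustrative):

```ruby
MAX_LIMIT = 1000

# Coerce to a positive Integer, fall back to the default on garbage input,
# and cap at the ceiling.
def clamp_limit(limit, default: 200, max: MAX_LIMIT)
  value = limit.is_a?(Integer) ? limit : Integer(limit, 10)
  value = default if value <= 0
  [value, max].min
rescue ArgumentError, TypeError
  default
end

puts clamp_limit("50")   # => 50
puts clamp_limit("oops") # => 200
puts clamp_limit(-3)     # => 200
puts clamp_limit(10_000) # => 1000
```

Using `Integer(limit, 10)` rather than `to_i` is deliberate: `"oops".to_i` would silently yield `0`, whereas the strict conversion raises and routes bad input to the default.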
# Build candidate string and numeric tokens for matching a node reference
# against database columns.
#
# @param node_ref [Object] caller-supplied node identifier.
# @return [Hash] hash with :string_values and :numeric_values arrays.
def node_reference_tokens(node_ref)
parts = canonical_node_parts(node_ref)
canonical_id, numeric_id = parts ? parts[0, 2] : [nil, nil]
string_values = []
numeric_values = []
case node_ref
when Integer
numeric_values << node_ref
string_values << node_ref.to_s
when Numeric
coerced = node_ref.to_i
numeric_values << coerced
string_values << coerced.to_s
when String
trimmed = node_ref.strip
unless trimmed.empty?
string_values << trimmed
numeric_values << trimmed.to_i if trimmed.match?(/\A-?\d+\z/)
end
when nil
# no-op
else
coerced = node_ref.to_s.strip
string_values << coerced unless coerced.empty?
end
if canonical_id
string_values << canonical_id
string_values << canonical_id.upcase
end
if numeric_id
numeric_values << numeric_id
string_values << numeric_id.to_s
end
cleaned_strings = string_values.compact.map(&:to_s).map(&:strip).reject(&:empty?).uniq
cleaned_numbers = numeric_values.compact.map do |value|
begin
Integer(value, 10)
rescue ArgumentError, TypeError
nil
end
end.compact.uniq
{
string_values: cleaned_strings,
numeric_values: cleaned_numbers,
}
end
# Construct a parameterised SQL clause matching a node reference across
# the supplied columns.
#
# @param node_ref [Object] caller-supplied node identifier.
# @param string_columns [Array<String>] columns compared against string tokens.
# @param numeric_columns [Array<String>] columns compared against numeric tokens.
# @return [Array(String, Array), nil] clause with bind parameters, or nil
#   when no usable tokens were derived.
def node_lookup_clause(node_ref, string_columns:, numeric_columns: [])
tokens = node_reference_tokens(node_ref)
string_values = tokens[:string_values]
numeric_values = tokens[:numeric_values]
clauses = []
params = []
unless string_columns.empty? || string_values.empty?
string_columns.each do |column|
placeholders = Array.new(string_values.length, "?").join(", ")
clauses << "#{column} IN (#{placeholders})"
params.concat(string_values)
end
end
unless numeric_columns.empty? || numeric_values.empty?
numeric_columns.each do |column|
placeholders = Array.new(numeric_values.length, "?").join(", ")
clauses << "#{column} IN (#{placeholders})"
params.concat(numeric_values)
end
end
return nil if clauses.empty?
["(#{clauses.join(" OR ")})", params]
end
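The clause builder emits one parameterised `IN` list per column and ORs them together, so a node can match by string id in one column or numeric id in another. A reduced sketch (the helper name `build_lookup_clause` is illustrative) of the SQL/params pairing:

```ruby
# Build "(col IN (?, ?) OR col2 IN (?))" plus the flat bind-parameter list.
def build_lookup_clause(string_columns, string_values, numeric_columns, numeric_values)
  clauses = []
  params = []
  unless string_values.empty?
    string_columns.each do |column|
      clauses << "#{column} IN (#{Array.new(string_values.length, "?").join(", ")})"
      params.concat(string_values)
    end
  end
  unless numeric_values.empty?
    numeric_columns.each do |column|
      clauses << "#{column} IN (#{Array.new(numeric_values.length, "?").join(", ")})"
      params.concat(numeric_values)
    end
  end
  return nil if clauses.empty?
  ["(#{clauses.join(" OR ")})", params]
end

sql, params = build_lookup_clause(["node_id"], ["!abcd1234"], ["num"], [2_882_343_476])
puts sql # => "(node_id IN (?) OR num IN (?))"
p params # => ["!abcd1234", 2882343476]
```

Because the values travel as bind parameters and only column names (which the caller hard-codes) are interpolated, the generated SQL stays safe from injection.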
# Fetch nodes heard within the rolling retention window, optionally
# filtered to a single node reference.
#
# @param limit [Object] maximum row count, clamped by coerce_query_limit.
# @param node_ref [Object, nil] optional node identifier filter.
# @return [Array<Hash>] compacted node rows ordered by last_heard.
def query_nodes(limit, node_ref: nil)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
now = Time.now.to_i
min_last_heard = now - PotatoMesh::Config.week_seconds
params = []
where_clauses = []
if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["num"])
return [] unless clause
where_clauses << clause.first
params.concat(clause.last)
else
where_clauses << "last_heard >= ?"
params << min_last_heard
end
if private_mode?
where_clauses << "(role IS NULL OR role <> 'CLIENT_HIDDEN')"
end
sql = <<~SQL
SELECT node_id, short_name, long_name, hw_model, role, snr,
battery_level, voltage, last_heard, first_heard,
uptime_seconds, channel_utilization, air_util_tx,
position_time, location_source, precision_bits,
latitude, longitude, altitude, lora_freq, modem_preset
FROM nodes
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
sql += <<~SQL
ORDER BY last_heard DESC
LIMIT ?
SQL
params << limit
rows = db.execute(sql, params)
rows = rows.select do |r|
last_candidate = [r["last_heard"], r["position_time"], r["first_heard"]]
.map { |value| coerce_integer(value) }
.compact
.max
last_candidate && last_candidate >= min_last_heard
end
rows.each do |r|
r["role"] ||= "CLIENT"
lh = r["last_heard"]&.to_i
pt = r["position_time"]&.to_i
lh = now if lh && lh > now
pt = nil if pt && pt > now
r["last_heard"] = lh
r["position_time"] = pt
r["last_seen_iso"] = Time.at(lh).utc.iso8601 if lh
r["pos_time_iso"] = Time.at(pt).utc.iso8601 if pt
pb = r["precision_bits"]
r["precision_bits"] = pb.to_i if pb
end
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end
# Fetch recent plaintext messages, optionally filtered by sender or
# recipient node reference.
#
# @param limit [Object] maximum row count, clamped by coerce_query_limit.
# @param node_ref [Object, nil] optional node identifier filter.
# @return [Array<Hash>] message rows with canonicalised sender identifiers.
def query_messages(limit, node_ref: nil)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
params = []
where_clauses = ["COALESCE(TRIM(m.encrypted), '') = ''"]
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
where_clauses << "m.rx_time >= ?"
params << min_rx_time
if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["m.from_id", "m.to_id"])
return [] unless clause
where_clauses << clause.first
params.concat(clause.last)
end
sql = <<~SQL
SELECT m.id, m.rx_time, m.rx_iso, m.from_id, m.to_id, m.channel,
m.portnum, m.text, m.encrypted, m.rssi, m.hop_limit,
m.lora_freq, m.modem_preset, m.channel_name, m.snr
FROM messages m
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n"
sql += <<~SQL
ORDER BY m.rx_time DESC
LIMIT ?
SQL
params << limit
rows = db.execute(sql, params)
rows.each do |r|
r.delete_if { |key, _| key.is_a?(Integer) }
if PotatoMesh::Config.debug? && (r["from_id"].nil? || r["from_id"].to_s.strip.empty?)
raw = db.execute("SELECT * FROM messages WHERE id = ?", [r["id"]]).first
debug_log(
"Message query produced empty sender",
context: "queries.messages",
stage: "raw_row",
row: raw,
)
end
canonical_from_id = string_or_nil(normalize_node_id(db, r["from_id"]))
node_id = canonical_from_id || string_or_nil(r["from_id"])
if canonical_from_id
raw_from_id = string_or_nil(r["from_id"])
if raw_from_id.nil? || raw_from_id.match?(/\A[0-9]+\z/)
r["from_id"] = canonical_from_id
elsif raw_from_id.start_with?("!") && raw_from_id.casecmp(canonical_from_id) != 0
r["from_id"] = canonical_from_id
end
end
r["node_id"] = node_id if node_id
if PotatoMesh::Config.debug? && (r["from_id"].nil? || r["from_id"].to_s.strip.empty?)
debug_log(
"Message query produced empty sender",
context: "queries.messages",
stage: "after_normalization",
row: r,
)
end
end
rows
ensure
db&.close
end
def query_positions(limit, node_ref: nil)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
where_clauses << "COALESCE(rx_time, position_time, 0) >= ?"
params << min_rx_time
if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
return [] unless clause
where_clauses << clause.first
params.concat(clause.last)
end
sql = <<~SQL
SELECT * FROM positions
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
sql += <<~SQL
ORDER BY rx_time DESC
LIMIT ?
SQL
params << limit
rows = db.execute(sql, params)
rows.each do |r|
rx_time = coerce_integer(r["rx_time"])
r["rx_time"] = rx_time if rx_time
r["rx_iso"] = Time.at(rx_time).utc.iso8601 if rx_time && string_or_nil(r["rx_iso"]).nil?
node_num = coerce_integer(r["node_num"])
r["node_num"] = node_num if node_num
position_time = coerce_integer(r["position_time"])
position_time = nil if position_time && position_time > now
r["position_time"] = position_time
r["position_time_iso"] = Time.at(position_time).utc.iso8601 if position_time
r["precision_bits"] = coerce_integer(r["precision_bits"])
r["sats_in_view"] = coerce_integer(r["sats_in_view"])
r["pdop"] = coerce_float(r["pdop"])
r["snr"] = coerce_float(r["snr"])
end
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end
def query_neighbors(limit, node_ref: nil)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << min_rx_time
if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id", "neighbor_id"])
return [] unless clause
where_clauses << clause.first
params.concat(clause.last)
end
sql = <<~SQL
SELECT * FROM neighbors
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
sql += <<~SQL
ORDER BY rx_time DESC
LIMIT ?
SQL
params << limit
rows = db.execute(sql, params)
rows.each do |r|
rx_time = coerce_integer(r["rx_time"])
rx_time = now if rx_time && rx_time > now
r["rx_time"] = rx_time if rx_time
r["rx_iso"] = Time.at(rx_time).utc.iso8601 if rx_time
r["snr"] = coerce_float(r["snr"])
end
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end
def query_telemetry(limit, node_ref: nil)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
where_clauses << "COALESCE(rx_time, telemetry_time, 0) >= ?"
params << min_rx_time
if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
return [] unless clause
where_clauses << clause.first
params.concat(clause.last)
end
sql = <<~SQL
SELECT * FROM telemetry
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
sql += <<~SQL
ORDER BY rx_time DESC
LIMIT ?
SQL
params << limit
rows = db.execute(sql, params)
rows.each do |r|
rx_time = coerce_integer(r["rx_time"])
r["rx_time"] = rx_time if rx_time
r["rx_iso"] = Time.at(rx_time).utc.iso8601 if rx_time && string_or_nil(r["rx_iso"]).nil?
node_num = coerce_integer(r["node_num"])
r["node_num"] = node_num if node_num
telemetry_time = coerce_integer(r["telemetry_time"])
telemetry_time = nil if telemetry_time && telemetry_time > now
r["telemetry_time"] = telemetry_time
r["telemetry_time_iso"] = Time.at(telemetry_time).utc.iso8601 if telemetry_time
r["channel"] = coerce_integer(r["channel"])
r["hop_limit"] = coerce_integer(r["hop_limit"])
r["rssi"] = coerce_integer(r["rssi"])
r["bitfield"] = coerce_integer(r["bitfield"])
r["snr"] = coerce_float(r["snr"])
r["battery_level"] = coerce_float(r["battery_level"])
r["voltage"] = coerce_float(r["voltage"])
r["channel_utilization"] = coerce_float(r["channel_utilization"])
r["air_util_tx"] = coerce_float(r["air_util_tx"])
r["uptime_seconds"] = coerce_integer(r["uptime_seconds"])
r["temperature"] = coerce_float(r["temperature"])
r["relative_humidity"] = coerce_float(r["relative_humidity"])
r["barometric_pressure"] = coerce_float(r["barometric_pressure"])
r["gas_resistance"] = coerce_float(r["gas_resistance"])
r["current"] = coerce_float(r["current"])
r["iaq"] = coerce_integer(r["iaq"])
r["distance"] = coerce_float(r["distance"])
r["lux"] = coerce_float(r["lux"])
r["white_lux"] = coerce_float(r["white_lux"])
r["ir_lux"] = coerce_float(r["ir_lux"])
r["uv_lux"] = coerce_float(r["uv_lux"])
r["wind_direction"] = coerce_integer(r["wind_direction"])
r["wind_speed"] = coerce_float(r["wind_speed"])
r["weight"] = coerce_float(r["weight"])
r["wind_gust"] = coerce_float(r["wind_gust"])
r["wind_lull"] = coerce_float(r["wind_lull"])
r["radiation"] = coerce_float(r["radiation"])
r["rainfall_1h"] = coerce_float(r["rainfall_1h"])
r["rainfall_24h"] = coerce_float(r["rainfall_24h"])
r["soil_moisture"] = coerce_integer(r["soil_moisture"])
r["soil_temperature"] = coerce_float(r["soil_temperature"])
end
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end
end
end
end
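Every query helper above repeats the same row normalisation: coerce numeric columns, backfill ISO-8601 strings, and drop timestamps that claim to be from the future. A minimal sketch of that shape, where `coerce_integer`/`coerce_float` are simplified stand-ins for the app's own helpers (assumed here, not copied from the source):

```ruby
require "time"

# Simplified stand-ins for the app's coercion helpers.
def coerce_integer(value)
  Integer(value, exception: false)
end

def coerce_float(value)
  Float(value, exception: false)
end

# Normalise one result row the way the query helpers above do:
# coerce, backfill ISO strings, and reject future timestamps.
def normalize_row(row, now: Time.now.to_i)
  rx_time = coerce_integer(row["rx_time"])
  row["rx_time"] = rx_time if rx_time
  row["rx_iso"] = Time.at(rx_time).utc.iso8601 if rx_time
  row["snr"] = coerce_float(row["snr"])
  # Event timestamps ahead of the clock are dropped rather than trusted.
  event_time = coerce_integer(row["position_time"])
  event_time = nil if event_time && event_time > now
  row["position_time"] = event_time
  row
end
```

The same clamp-or-nil decision appears in `query_positions` and `query_telemetry`; `query_neighbors` instead clamps future `rx_time` values down to `now`.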


@@ -0,0 +1,142 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Routes
module Api
def self.registered(app)
app.before "/api/messages*" do
halt 404 if private_mode?
end
app.get "/version" do
content_type :json
last_update = latest_node_update_timestamp
payload = {
name: sanitized_site_name,
version: app_constant(:APP_VERSION),
lastNodeUpdate: last_update,
config: {
siteName: sanitized_site_name,
channel: sanitized_channel,
frequency: sanitized_frequency,
contactLink: sanitized_contact_link,
contactLinkUrl: sanitized_contact_link_url,
refreshIntervalSeconds: PotatoMesh::Config.refresh_interval_seconds,
mapCenter: {
lat: PotatoMesh::Config.map_center_lat,
lon: PotatoMesh::Config.map_center_lon,
},
maxDistanceKm: PotatoMesh::Config.max_distance_km,
instanceDomain: app_constant(:INSTANCE_DOMAIN),
privateMode: private_mode?,
},
}
payload.to_json
end
app.get "/.well-known/potato-mesh" do
refresh_well_known_document_if_stale
cache_control :public, max_age: PotatoMesh::Config.well_known_refresh_interval
content_type :json
send_file well_known_file_path
end
app.get "/api/nodes" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_nodes(limit).to_json
end
app.get "/api/nodes/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
rows = query_nodes(limit, node_ref: node_ref)
halt 404, { error: "not found" }.to_json if rows.empty?
rows.first.to_json
end
app.get "/api/messages" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_messages(limit).to_json
end
app.get "/api/messages/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_messages(limit, node_ref: node_ref).to_json
end
app.get "/api/positions" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_positions(limit).to_json
end
app.get "/api/positions/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_positions(limit, node_ref: node_ref).to_json
end
app.get "/api/neighbors" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_neighbors(limit).to_json
end
app.get "/api/neighbors/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_neighbors(limit, node_ref: node_ref).to_json
end
app.get "/api/telemetry" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_telemetry(limit).to_json
end
app.get "/api/telemetry/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_telemetry(limit, node_ref: node_ref).to_json
end
app.get "/api/instances" do
# Prevent the federation catalog from being exposed when federation is disabled.
halt 404 unless federation_enabled?
content_type :json
ensure_self_instance_record!
payload = load_instances_for_api
JSON.generate(payload)
end
end
end
end
end
end
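Each GET route above repeats the same limit handling: default to 200 when the client sends nothing, cap at 1000 to bound query cost. A sketch of that expression extracted into a helper (the helper name is illustrative, not from the source):

```ruby
# Clamp a client-supplied limit the way the API routes above do.
# Note that non-numeric input becomes 0 via String#to_i, mirroring
# the routes' behaviour rather than rejecting the request.
def clamp_limit(raw, default: 200, max: 1000)
  [raw&.to_i || default, max].min
end
```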


@@ -0,0 +1,321 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Routes
module Ingest
def self.registered(app)
app.post "/api/nodes" do
require_token!
content_type :json
begin
data = JSON.parse(read_json_body)
rescue JSON::ParserError
halt 400, { error: "invalid JSON" }.to_json
end
unless data.is_a?(Hash)
halt 400, { error: "invalid payload" }.to_json
end
halt 400, { error: "too many nodes" }.to_json if data.size > 1000
db = open_database
data.each do |node_id, node|
upsert_node(db, node_id, node)
end
PotatoMesh::App::Prometheus::NODES_GAUGE.set(query_nodes(1000).length)
{ status: "ok" }.to_json
ensure
db&.close
end
app.post "/api/messages" do
require_token!
content_type :json
begin
data = JSON.parse(read_json_body)
rescue JSON::ParserError
halt 400, { error: "invalid JSON" }.to_json
end
messages = data.is_a?(Array) ? data : [data]
halt 400, { error: "too many messages" }.to_json if messages.size > 1000
db = open_database
messages.each do |msg|
insert_message(db, msg)
end
{ status: "ok" }.to_json
ensure
db&.close
end
app.post "/api/instances" do
content_type :json
begin
payload = JSON.parse(read_json_body)
rescue JSON::ParserError => e
warn_log(
"Instance registration rejected",
context: "ingest.register",
reason: "invalid JSON",
error_class: e.class.name,
error_message: e.message,
)
halt 400, { error: "invalid JSON" }.to_json
end
unless payload.is_a?(Hash)
warn_log(
"Instance registration rejected",
context: "ingest.register",
reason: "payload is not an object",
)
halt 400, { error: "invalid payload" }.to_json
end
id = string_or_nil(payload["id"]) || string_or_nil(payload["instanceId"])
raw_domain_input = payload["domain"]
raw_domain = sanitize_instance_domain(raw_domain_input, downcase: false)
normalized_domain = raw_domain && sanitize_instance_domain(raw_domain)
unless raw_domain && normalized_domain
warn_log(
"Instance registration rejected",
context: "ingest.register",
domain: string_or_nil(raw_domain_input),
reason: "invalid domain",
)
halt 400, { error: "invalid domain" }.to_json
end
pubkey = sanitize_public_key_pem(payload["pubkey"])
name = string_or_nil(payload["name"])
version = string_or_nil(payload["version"])
channel = string_or_nil(payload["channel"])
frequency = string_or_nil(payload["frequency"])
latitude = coerce_float(payload["latitude"])
longitude = coerce_float(payload["longitude"])
last_update_time = coerce_integer(payload["last_update_time"] || payload["lastUpdateTime"])
raw_private = payload.key?("isPrivate") ? payload["isPrivate"] : payload["is_private"]
is_private = coerce_boolean(raw_private)
signature = string_or_nil(payload["signature"])
attributes = {
id: id,
domain: normalized_domain,
pubkey: pubkey,
name: name,
version: version,
channel: channel,
frequency: frequency,
latitude: latitude,
longitude: longitude,
last_update_time: last_update_time,
is_private: is_private,
}
if [attributes[:id], attributes[:domain], attributes[:pubkey], signature, attributes[:last_update_time]].any?(&:nil?)
warn_log(
"Instance registration rejected",
context: "ingest.register",
reason: "missing required fields",
)
halt 400, { error: "missing required fields" }.to_json
end
signature_valid = verify_instance_signature(attributes, signature, attributes[:pubkey])
# Some remote peers sign payloads using a canonicalised lowercase
# domain while still sending a mixed-case domain. Retry signature
# verification with the original casing when the first attempt
# fails to maximise interoperability.
if !signature_valid && raw_domain && normalized_domain && raw_domain.casecmp?(normalized_domain) && raw_domain != normalized_domain
alternate_attributes = attributes.merge(domain: raw_domain)
signature_valid = verify_instance_signature(alternate_attributes, signature, attributes[:pubkey])
end
unless signature_valid
warn_log(
"Instance registration rejected",
context: "ingest.register",
domain: raw_domain || attributes[:domain],
reason: "invalid signature",
)
halt 400, { error: "invalid signature" }.to_json
end
if attributes[:is_private]
warn_log(
"Instance registration rejected",
context: "ingest.register",
domain: attributes[:domain],
reason: "instance marked private",
)
halt 403, { error: "instance marked private" }.to_json
end
ip = ip_from_domain(attributes[:domain])
if ip && restricted_ip_address?(ip)
warn_log(
"Instance registration rejected",
context: "ingest.register",
domain: attributes[:domain],
reason: "restricted IP address",
resolved_ip: ip,
)
halt 400, { error: "restricted domain" }.to_json
end
begin
resolve_remote_ip_addresses(URI.parse("https://#{attributes[:domain]}"))
rescue ArgumentError => e
warn_log(
"Instance registration rejected",
context: "ingest.register",
domain: attributes[:domain],
reason: "restricted domain",
error_message: e.message,
)
halt 400, { error: "restricted domain" }.to_json
rescue SocketError
# DNS lookups that fail to resolve are handled later when the
# registration flow attempts to contact the remote instance.
end
well_known, well_known_meta = fetch_instance_json(attributes[:domain], "/.well-known/potato-mesh")
unless well_known
details_list = Array(well_known_meta).map(&:to_s)
details = details_list.empty? ? "no response" : details_list.join("; ")
warn_log(
"Instance registration rejected",
context: "ingest.register",
domain: attributes[:domain],
reason: "failed to fetch well-known document",
details: details,
)
halt 400, { error: "failed to verify well-known document" }.to_json
end
valid, reason = validate_well_known_document(well_known, attributes[:domain], attributes[:pubkey])
unless valid
warn_log(
"Instance registration rejected",
context: "ingest.register",
domain: attributes[:domain],
reason: reason || "invalid well-known document",
)
halt 400, { error: reason || "invalid well-known document" }.to_json
end
remote_nodes, node_source = fetch_instance_json(attributes[:domain], "/api/nodes")
unless remote_nodes
details_list = Array(node_source).map(&:to_s)
details = details_list.empty? ? "no response" : details_list.join("; ")
warn_log(
"Instance registration rejected",
context: "ingest.register",
domain: attributes[:domain],
reason: "failed to fetch nodes",
details: details,
)
halt 400, { error: "failed to fetch nodes" }.to_json
end
fresh, freshness_reason = validate_remote_nodes(remote_nodes)
unless fresh
warn_log(
"Instance registration rejected",
context: "ingest.register",
domain: attributes[:domain],
reason: freshness_reason || "stale node data",
)
halt 400, { error: freshness_reason || "stale node data" }.to_json
end
db = open_database
upsert_instance_record(db, attributes, signature)
ingest_known_instances_from!(
db,
attributes[:domain],
per_response_limit: PotatoMesh::Config.federation_max_instances_per_response,
overall_limit: PotatoMesh::Config.federation_max_domains_per_crawl,
)
debug_log(
"Registered remote instance",
context: "ingest.register",
domain: attributes[:domain],
instance_id: attributes[:id],
)
status 201
{ status: "registered" }.to_json
ensure
db&.close
end
app.post "/api/positions" do
require_token!
content_type :json
begin
data = JSON.parse(read_json_body)
rescue JSON::ParserError
halt 400, { error: "invalid JSON" }.to_json
end
positions = data.is_a?(Array) ? data : [data]
halt 400, { error: "too many positions" }.to_json if positions.size > 1000
db = open_database
positions.each do |pos|
insert_position(db, pos)
end
{ status: "ok" }.to_json
ensure
db&.close
end
app.post "/api/neighbors" do
require_token!
content_type :json
begin
data = JSON.parse(read_json_body)
rescue JSON::ParserError
halt 400, { error: "invalid JSON" }.to_json
end
neighbor_payloads = data.is_a?(Array) ? data : [data]
halt 400, { error: "too many neighbor packets" }.to_json if neighbor_payloads.size > 1000
db = open_database
neighbor_payloads.each do |packet|
insert_neighbors(db, packet)
end
{ status: "ok" }.to_json
ensure
db&.close
end
app.post "/api/telemetry" do
require_token!
content_type :json
begin
data = JSON.parse(read_json_body)
rescue JSON::ParserError
halt 400, { error: "invalid JSON" }.to_json
end
telemetry_packets = data.is_a?(Array) ? data : [data]
halt 400, { error: "too many telemetry packets" }.to_json if telemetry_packets.size > 1000
db = open_database
telemetry_packets.each do |packet|
insert_telemetry(db, packet)
end
{ status: "ok" }.to_json
ensure
db&.close
end
end
end
end
end
end
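The authenticated POST handlers above all share one intake pattern: parse the body, accept either a single object or an array, and reject oversized batches with the same JSON error shape. A sketch of that flow under the assumption that the 1000-item ceiling applies uniformly (the helper and its return convention are illustrative):

```ruby
require "json"

# Parse an ingest body and normalise it to a bounded array of items,
# mirroring the single-or-array handling in the POST routes above.
def normalize_batch(body, max: 1000)
  data = JSON.parse(body)
  items = data.is_a?(Array) ? data : [data]
  return [:error, { error: "too many items" }] if items.size > max
  [:ok, items]
rescue JSON::ParserError
  [:error, { error: "invalid JSON" }]
end
```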


@@ -0,0 +1,80 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Routes
module Root
def self.registered(app)
app.get "/favicon.ico" do
cache_control :public, max_age: PotatoMesh::Config.week_seconds
ico_path = File.join(settings.public_folder, "favicon.ico")
if File.file?(ico_path)
send_file ico_path, type: "image/x-icon"
else
send_file File.join(settings.public_folder, "potatomesh-logo.svg"), type: "image/svg+xml"
end
end
app.get "/potatomesh-logo.svg" do
path = File.expand_path("potatomesh-logo.svg", settings.public_folder)
settings.logger&.info("logo_path=#{path} exist=#{File.exist?(path)} file=#{File.file?(path)}")
halt 404, "Not Found" unless File.exist?(path) && File.readable?(path)
content_type "image/svg+xml"
last_modified File.mtime(path)
cache_control :public, max_age: 3600
send_file path
end
app.get "/" do
meta = meta_configuration
config = frontend_app_config
raw_theme = request.cookies["theme"]
theme = %w[dark light].include?(raw_theme) ? raw_theme : "dark"
if raw_theme != theme
response.set_cookie("theme", value: theme, path: "/", max_age: 60 * 60 * 24 * 7, same_site: :lax)
end
erb :index, locals: {
site_name: meta[:name],
meta_title: meta[:title],
meta_name: meta[:name],
meta_description: meta[:description],
channel: sanitized_channel,
frequency: sanitized_frequency,
map_center_lat: PotatoMesh::Config.map_center_lat,
map_center_lon: PotatoMesh::Config.map_center_lon,
max_distance_km: PotatoMesh::Config.max_distance_km,
contact_link: sanitized_contact_link,
contact_link_url: sanitized_contact_link_url,
version: app_constant(:APP_VERSION),
private_mode: private_mode?,
federation_enabled: federation_enabled?,
refresh_interval_seconds: PotatoMesh::Config.refresh_interval_seconds,
app_config_json: JSON.generate(config),
initial_theme: theme,
}
end
app.get "/metrics" do
content_type ::Prometheus::Client::Formats::Text::CONTENT_TYPE
::Prometheus::Client::Formats::Text.marshal(::Prometheus::Client.registry)
end
end
end
end
end
end
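The `/` route above validates the theme cookie against an allowlist, falls back to `"dark"`, and rewrites the cookie only when the stored value differs. A sketch of that decision in isolation (helper name assumed for illustration):

```ruby
VALID_THEMES = %w[dark light].freeze

# Resolve the effective theme and report whether the cookie
# needs to be (re)written, as the "/" route above does.
def resolve_theme(raw_theme)
  theme = VALID_THEMES.include?(raw_theme) ? raw_theme : "dark"
  [theme, raw_theme != theme]
end
```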


@@ -0,0 +1,541 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
  # Configuration wrapper responsible for exposing ENV-backed settings used
  # by the web and data ingestion services.
module Config
module_function
DEFAULT_DB_BUSY_TIMEOUT_MS = 5_000
DEFAULT_DB_BUSY_MAX_RETRIES = 5
DEFAULT_DB_BUSY_RETRY_DELAY = 0.05
DEFAULT_MAX_JSON_BODY_BYTES = 1_048_576
DEFAULT_REFRESH_INTERVAL_SECONDS = 60
DEFAULT_TILE_FILTER_LIGHT = "grayscale(1) saturate(0) brightness(0.92) contrast(1.05)"
DEFAULT_TILE_FILTER_DARK = "grayscale(1) invert(1) brightness(0.9) contrast(1.08)"
DEFAULT_MAP_CENTER_LAT = 38.761944
DEFAULT_MAP_CENTER_LON = -27.090833
DEFAULT_MAP_CENTER = "#{DEFAULT_MAP_CENTER_LAT},#{DEFAULT_MAP_CENTER_LON}"
DEFAULT_CHANNEL = "#LongFast"
DEFAULT_FREQUENCY = "915MHz"
DEFAULT_CONTACT_LINK = "#potatomesh:dod.ngo"
DEFAULT_MAX_DISTANCE_KM = 42.0
DEFAULT_REMOTE_INSTANCE_CONNECT_TIMEOUT = 15
DEFAULT_REMOTE_INSTANCE_READ_TIMEOUT = 60
DEFAULT_FEDERATION_MAX_INSTANCES_PER_RESPONSE = 64
DEFAULT_FEDERATION_MAX_DOMAINS_PER_CRAWL = 256
DEFAULT_INITIAL_FEDERATION_DELAY_SECONDS = 2
# Determine whether private mode should be activated.
#
# @return [Boolean] true when PRIVATE=1 in the environment.
def private_mode_enabled?
value = ENV.fetch("PRIVATE", "0")
value.to_s.strip == "1"
end
# Determine whether federation features are permitted for the instance.
#
# Federation is disabled when ``PRIVATE=1`` regardless of the
# ``FEDERATION`` environment variable to ensure a private deployment does
# not announce itself or crawl peers.
#
# @return [Boolean] true when federation should remain active.
def federation_enabled?
return false if private_mode_enabled?
value = ENV.fetch("FEDERATION", "1")
value.to_s.strip != "0"
end
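The precedence implemented by the two predicates above can be sketched as a pure function over an environment hash: `PRIVATE=1` forces federation off regardless of `FEDERATION`, which otherwise defaults to enabled. Passing `env` in keeps the sketch side-effect free; the real methods read `ENV` directly:

```ruby
# Mirror the PRIVATE/FEDERATION precedence of the methods above.
def federation_active?(env)
  return false if env.fetch("PRIVATE", "0").to_s.strip == "1"
  env.fetch("FEDERATION", "1").to_s.strip != "0"
end
```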
# Resolve the absolute path to the web application root directory.
#
# @return [String] absolute filesystem path of the web folder.
def web_root
@web_root ||= File.expand_path("../..", __dir__)
end
# Resolve the repository root directory relative to the web folder.
#
# @return [String] path to the Git repository root.
def repo_root
@repo_root ||= File.expand_path("..", web_root)
end
# Resolve the current XDG data directory for PotatoMesh content.
#
# @return [String] absolute path to the PotatoMesh data directory.
def data_directory
File.join(resolve_xdg_home("XDG_DATA_HOME", %w[.local share]), "potato-mesh")
end
# Resolve the current XDG configuration directory for PotatoMesh files.
#
# @return [String] absolute path to the PotatoMesh configuration directory.
def config_directory
File.join(resolve_xdg_home("XDG_CONFIG_HOME", %w[.config]), "potato-mesh")
end
# Build the default SQLite database path inside the data directory.
#
# @return [String] absolute path to the managed +mesh.db+ file.
def default_db_path
File.join(data_directory, "mesh.db")
end
# Legacy database path bundled alongside the repository.
#
# @return [String] absolute path to the repository managed database file.
def legacy_db_path
File.expand_path("../data/mesh.db", web_root)
end
    # Determine the configured database location, defaulting to the managed
    # SQLite file inside the XDG data directory.
#
# @return [String] absolute path to the database file.
def db_path
default_db_path
end
# Retrieve the SQLite busy timeout duration in milliseconds.
#
# @return [Integer] timeout value in milliseconds.
def db_busy_timeout_ms
DEFAULT_DB_BUSY_TIMEOUT_MS
end
# Retrieve the maximum number of retries when encountering SQLITE_BUSY.
#
# @return [Integer] maximum retry attempts.
def db_busy_max_retries
DEFAULT_DB_BUSY_MAX_RETRIES
end
# Retrieve the backoff delay between busy retries in seconds.
#
# @return [Float] seconds to wait between retries.
def db_busy_retry_delay
DEFAULT_DB_BUSY_RETRY_DELAY
end
# Convenience constant describing the number of seconds in a week.
#
# @return [Integer] seconds in seven days.
def week_seconds
7 * 24 * 60 * 60
end
# Default upper bound for accepted JSON payload sizes.
#
# @return [Integer] byte ceiling for HTTP request bodies.
def default_max_json_body_bytes
DEFAULT_MAX_JSON_BODY_BYTES
end
    # Determine the maximum allowed JSON body size.
#
# @return [Integer] configured byte limit.
def max_json_body_bytes
default_max_json_body_bytes
end
# Provide the fallback version string when git metadata is unavailable.
#
# @return [String] semantic version identifier.
def version_fallback
"v0.5.3"
end
# Default refresh interval for frontend polling routines.
#
# @return [Integer] refresh period in seconds.
def default_refresh_interval_seconds
DEFAULT_REFRESH_INTERVAL_SECONDS
end
# Fetch the refresh interval, ensuring a positive integer value.
#
# @return [Integer] polling cadence in seconds.
def refresh_interval_seconds
default_refresh_interval_seconds
end
# Retrieve the CSS filter used for light themed maps.
#
# @return [String] CSS filter string.
def map_tile_filter_light
DEFAULT_TILE_FILTER_LIGHT
end
# Retrieve the CSS filter used for dark themed maps.
#
# @return [String] CSS filter string for dark tiles.
def map_tile_filter_dark
DEFAULT_TILE_FILTER_DARK
end
# Provide a simple hash of tile filters for template use.
#
# @return [Hash] frozen mapping of themes to CSS filters.
def tile_filters
{
light: map_tile_filter_light,
dark: map_tile_filter_dark,
}.freeze
end
# Retrieve the raw comma separated Prometheus report identifiers.
#
# @return [String] comma separated list of report IDs.
def prom_report_ids
""
end
# Transform Prometheus report identifiers into a cleaned array.
#
# @return [Array<String>] list of unique report identifiers.
def prom_report_id_list
prom_report_ids.split(",").map(&:strip).reject(&:empty?)
end
# Path storing the instance private key used for signing.
#
# @return [String] absolute location of the PEM file.
def keyfile_path
File.join(config_directory, "keyfile")
end
# Sub-path used when exposing well known configuration files.
#
# @return [String] relative path within the public directory.
def well_known_relative_path
File.join(".well-known", "potato-mesh")
end
# Filesystem directory used to stage /.well-known artifacts.
#
# @return [String] absolute storage path.
def well_known_storage_root
File.join(config_directory, "well-known")
end
# Legacy configuration directory bundled with the repository.
#
# @return [String] absolute path to the repository managed configuration directory.
def legacy_config_directory
File.join(web_root, ".config")
end
# Legacy keyfile location used before introducing XDG directories.
#
# @return [String] absolute filesystem path to the legacy keyfile.
def legacy_keyfile_path
legacy_keyfile_candidates.find { |path| File.exist?(path) } || legacy_keyfile_candidates.first
end
# Enumerate known legacy keyfile locations for migration.
#
# @return [Array<String>] ordered list of absolute legacy keyfile paths.
def legacy_keyfile_candidates
[
File.join(web_root, ".config", "keyfile"),
File.join(web_root, ".config", "potato-mesh", "keyfile"),
File.join(web_root, "config", "keyfile"),
File.join(web_root, "config", "potato-mesh", "keyfile"),
].map { |path| File.expand_path(path) }.uniq
end
# Legacy location for well known assets within the public folder.
#
# @return [String] absolute path to the legacy output directory.
def legacy_public_well_known_path
File.join(web_root, "public", well_known_relative_path)
end
# Enumerate known legacy well-known document locations for migration.
#
# @return [Array<String>] ordered list of absolute legacy well-known document paths.
def legacy_well_known_candidates
filename = File.basename(well_known_relative_path)
[
File.join(web_root, ".config", "well-known", filename),
File.join(web_root, ".config", ".well-known", filename),
File.join(web_root, ".config", "potato-mesh", "well-known", filename),
File.join(web_root, ".config", "potato-mesh", ".well-known", filename),
File.join(web_root, "config", "well-known", filename),
File.join(web_root, "config", ".well-known", filename),
File.join(web_root, "config", "potato-mesh", "well-known", filename),
File.join(web_root, "config", "potato-mesh", ".well-known", filename),
].map { |path| File.expand_path(path) }.uniq
end
# Interval used to refresh well known documents from disk.
#
# @return [Integer] refresh duration in seconds.
def well_known_refresh_interval
24 * 60 * 60
end
# Cryptographic algorithm identifier for HTTP signatures.
#
# @return [String] RFC-compliant algorithm label.
def instance_signature_algorithm
"rsa-sha256"
end
# Connection timeout used when establishing federation HTTP sockets.
#
# The timeout can be customised with the REMOTE_INSTANCE_CONNECT_TIMEOUT
# environment variable to accommodate slower or distant federation peers.
#
# @return [Integer] connect timeout in seconds.
def remote_instance_http_timeout
fetch_positive_integer(
"REMOTE_INSTANCE_CONNECT_TIMEOUT",
DEFAULT_REMOTE_INSTANCE_CONNECT_TIMEOUT,
)
end
# Read timeout used when streaming federation HTTP responses.
#
# The timeout can be customised with the REMOTE_INSTANCE_READ_TIMEOUT
# environment variable to accommodate slower or distant federation peers.
#
# @return [Integer] read timeout in seconds.
def remote_instance_read_timeout
fetch_positive_integer(
"REMOTE_INSTANCE_READ_TIMEOUT",
DEFAULT_REMOTE_INSTANCE_READ_TIMEOUT,
)
end
# Limit the number of remote instances processed from a single response.
#
# @return [Integer] maximum entries processed per /api/instances payload.
def federation_max_instances_per_response
fetch_positive_integer(
"FEDERATION_MAX_INSTANCES_PER_RESPONSE",
DEFAULT_FEDERATION_MAX_INSTANCES_PER_RESPONSE,
)
end
# Limit the total number of distinct domains crawled during one ingestion.
#
# @return [Integer] maximum unique domains visited per crawl.
def federation_max_domains_per_crawl
fetch_positive_integer(
"FEDERATION_MAX_DOMAINS_PER_CRAWL",
DEFAULT_FEDERATION_MAX_DOMAINS_PER_CRAWL,
)
end
# Maximum acceptable age for remote node data.
#
# @return [Integer] seconds before remote nodes are considered stale.
def remote_instance_max_node_age
86_400
end
# Minimum node count expected from a remote instance before storing.
#
# @return [Integer] node threshold for remote ingestion.
def remote_instance_min_node_count
10
end
# Domains used to seed the federation discovery process.
#
# @return [Array<String>] list of default seed domains.
def federation_seed_domains
["potatomesh.net"].freeze
end
# Determine how often we broadcast federation announcements.
#
# @return [Integer] number of seconds between announcement cycles.
def federation_announcement_interval
8 * 60 * 60
end
# Determine the grace period before sending the initial federation announcement.
#
# @return [Integer] seconds to wait before the first broadcast cycle.
def initial_federation_delay_seconds
fetch_positive_integer(
"INITIAL_FEDERATION_DELAY_SECONDS",
DEFAULT_INITIAL_FEDERATION_DELAY_SECONDS,
)
end
# Retrieve the configured site name for presentation.
#
# @return [String] human-friendly site label.
def site_name
fetch_string("SITE_NAME", "PotatoMesh Demo")
end
# Retrieve the default radio channel label.
#
# @return [String] channel name from configuration.
def channel
fetch_string("CHANNEL", DEFAULT_CHANNEL)
end
# Retrieve the default radio frequency description.
#
# @return [String] frequency identifier.
def frequency
fetch_string("FREQUENCY", DEFAULT_FREQUENCY)
end
# Parse the configured map centre coordinates.
#
# @return [Hash{Symbol=>Float}] latitude and longitude in decimal degrees.
def map_center
raw = fetch_string("MAP_CENTER", DEFAULT_MAP_CENTER)
lat_str, lon_str = raw.split(",", 2).map { |part| part&.strip }.compact
lat = Float(lat_str, exception: false)
lon = Float(lon_str, exception: false)
lat = DEFAULT_MAP_CENTER_LAT unless lat
lon = DEFAULT_MAP_CENTER_LON unless lon
{ lat: lat, lon: lon }
end
# Map display latitude centre for the frontend map widget.
#
# @return [Float] latitude in decimal degrees.
def map_center_lat
map_center[:lat]
end
# Map display longitude centre for the frontend map widget.
#
# @return [Float] longitude in decimal degrees.
def map_center_lon
map_center[:lon]
end
# Maximum straight-line distance between nodes before relationships are
# hidden.
#
# @return [Float] distance in kilometres.
def max_distance_km
raw = fetch_string("MAX_DISTANCE", nil)
parsed = raw && Float(raw, exception: false)
return parsed if parsed && parsed.positive?
DEFAULT_MAX_DISTANCE_KM
end
# Contact link for community discussion.
#
# @return [String] contact URI or identifier.
def contact_link
fetch_string("CONTACT_LINK", DEFAULT_CONTACT_LINK)
end
# Determine the best URL to represent the configured contact link.
#
# @return [String, nil] absolute URL when derivable, otherwise nil.
def contact_link_url
link = contact_link.to_s.strip
return nil if link.empty?
if matrix_alias?(link)
"https://matrix.to/#/#{link}"
elsif link.match?(%r{\Ahttps?://}i)
link
else
nil
end
end
# Check whether a contact link is a Matrix room alias or room ID.
#
# @param link [String] candidate link string.
# @return [Boolean] true when the link resembles a Matrix alias (+#room:server+) or room ID (+!id:server+).
def matrix_alias?(link)
link.match?(/\A[#!][^\s:]+:[^\s]+\z/)
end
# Check whether verbose debugging is enabled for the runtime.
#
# @return [Boolean] true when DEBUG=1.
def debug?
ENV["DEBUG"] == "1"
end
# Fetch and sanitise string based configuration values.
#
# @param key [String] environment variable to read.
# @param default [String] fallback value when unset or blank.
# @return [String] cleaned configuration string.
def fetch_string(key, default)
value = ENV[key]
return default if value.nil?
trimmed = value.strip
trimmed.empty? ? default : trimmed
end
# Fetch and validate integer based configuration flags.
#
# @param key [String] environment variable to read.
# @param default [Integer] fallback value when unset or invalid.
# @return [Integer] positive integer sourced from configuration.
def fetch_positive_integer(key, default)
value = ENV[key]
return default if value.nil?
trimmed = value.strip
return default if trimmed.empty?
begin
parsed = Integer(trimmed, 10)
rescue ArgumentError
return default
end
parsed.positive? ? parsed : default
end
# Resolve the effective XDG directory honoring environment overrides.
#
# @param env_key [String] name of the environment variable to inspect.
# @param fallback_segments [Array<String>] path segments appended to the user home directory.
# @return [String] absolute base directory referenced by the XDG variable.
def resolve_xdg_home(env_key, fallback_segments)
raw = fetch_string(env_key, nil)
candidate = raw && !raw.empty? ? raw : nil
return File.expand_path(candidate) if candidate
base_home = safe_home_directory
File.expand_path(File.join(base_home, *fallback_segments))
end
# Retrieve the current user's home directory handling runtime failures.
#
# @return [String] absolute path to the user home or web root fallback.
def safe_home_directory
home = Dir.home
return web_root if home.nil? || home.empty?
home
rescue ArgumentError, RuntimeError
web_root
end
end
end
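Every timeout and limit accessor above funnels through `fetch_positive_integer`, so any malformed or non-positive override degrades gracefully to its default. A minimal standalone sketch of that behaviour (the real method reads `ENV` directly; this version takes the environment as a hash parameter so it can be exercised without touching the process environment):

```ruby
# Sketch of fetch_positive_integer with the environment injected as a hash.
def fetch_positive_integer(env, key, default)
  value = env[key]
  return default if value.nil?
  trimmed = value.strip
  return default if trimmed.empty?
  begin
    parsed = Integer(trimmed, 10)
  rescue ArgumentError
    return default
  end
  parsed.positive? ? parsed : default
end

env = { "REMOTE_INSTANCE_READ_TIMEOUT" => "30", "BROKEN" => "-5" }
puts fetch_positive_integer(env, "REMOTE_INSTANCE_READ_TIMEOUT", 10)  # 30
puts fetch_positive_integer(env, "BROKEN", 10)                        # 10 (non-positive)
puts fetch_positive_integer(env, "UNSET", 10)                         # 10 (missing)
```

Blank strings and garbage such as `"abc"` take the same fallback path as missing values, which keeps a single misconfigured variable from crashing startup.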


@@ -0,0 +1,87 @@
# frozen_string_literal: true
require "logger"
require "time"
module PotatoMesh
# Logging utilities shared across the web application.
module Logging
LOGGER_NAME = "potato-mesh" # :nodoc:
module_function
# Build a logger configured with the potato-mesh formatter.
#
# @param io [#write] destination for log output.
# @return [Logger] configured logger instance.
def build_logger(io = $stdout)
logger = Logger.new(io)
logger.progname = LOGGER_NAME
logger.formatter = method(:formatter)
logger
end
# Format log entries with a consistent structure understood by the UI.
#
# @param severity [String] Ruby logger severity constant (e.g., "DEBUG").
# @param time [Time] timestamp when the log entry was created.
# @param progname [String, nil] optional application name emitting the log.
# @param message [String] body of the log message.
# @return [String] formatted log entry.
def formatter(severity, time, progname, message)
timestamp = time.utc.iso8601(3)
body = message.is_a?(String) ? message : message.inspect
"[#{timestamp}] [#{progname || LOGGER_NAME}] [#{severity.downcase}] #{body}\n"
end
# Emit a structured log entry to the provided logger instance.
#
# @param logger [Logger, nil] logger to emit against.
# @param severity [Symbol] target severity (e.g., :debug, :info).
# @param message [String] primary message text.
# @param context [String, nil] logical component generating the entry.
# @param metadata [Hash] supplemental structured data for the log.
# @return [void]
def log(logger, severity, message, context: nil, **metadata)
return unless logger
parts = []
parts << "context=#{context}" if context
metadata.each do |key, value|
parts << format_metadata_pair(key, value)
end
parts << message
logger.public_send(severity, parts.join(" "))
end
# Retrieve the canonical logger for the web application.
#
# @param target [Object, nil] object with optional +settings.logger+ accessor.
# @return [Logger, nil] logger instance when available.
def logger_for(target = nil)
if target.respond_to?(:settings) && target.settings.respond_to?(:logger)
return target.settings.logger
end
if defined?(PotatoMesh::Application) &&
PotatoMesh::Application.respond_to?(:settings) &&
PotatoMesh::Application.settings.respond_to?(:logger)
return PotatoMesh::Application.settings.logger
end
nil
end
# Format metadata key/value pairs for structured logging output.
#
# @param key [Symbol, String]
# @param value [Object]
# @return [String]
def format_metadata_pair(key, value)
"#{key}=#{value.inspect}"
end
private_class_method :format_metadata_pair
end
end
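The formatter above emits the fixed `[timestamp] [progname] [severity] message` layout the UI parses. A standalone sketch of the same logic (renamed `potato_formatter` here to avoid clashing with the module method):

```ruby
require "time"

# Same layout as the formatter above: ISO-8601 UTC timestamp with
# millisecond precision, progname, lowercased severity, then the body.
def potato_formatter(severity, time, progname, message)
  timestamp = time.utc.iso8601(3)
  body = message.is_a?(String) ? message : message.inspect
  "[#{timestamp}] [#{progname || "potato-mesh"}] [#{severity.downcase}] #{body}\n"
end

print potato_formatter("INFO", Time.at(0), "potato-mesh", "context=boot starting")
# [1970-01-01T00:00:00.000Z] [potato-mesh] [info] context=boot starting
```

Non-string messages fall back to `#inspect`, so structured payloads still log without raising.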


@@ -0,0 +1,80 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require_relative "config"
require_relative "sanitizer"
module PotatoMesh
# Helper functions used to generate SEO metadata and formatted values.
module Meta
module_function
# Format a distance in kilometres, dropping the trailing ".0" when the value is whole.
#
# @param distance [Numeric] distance in kilometres.
# @return [String] formatted kilometre value.
def formatted_distance_km(distance)
format("%.1f", distance).sub(/\.0\z/, "")
end
# Construct the meta description string displayed to search engines and social previews.
#
# @param private_mode [Boolean] whether private mode is enabled.
# @return [String] generated description text.
def description(private_mode:)
site = Sanitizer.sanitized_site_name
channel = Sanitizer.sanitized_channel
frequency = Sanitizer.sanitized_frequency
contact = Sanitizer.sanitized_contact_link
summary = "Live Meshtastic mesh map for #{site}"
if channel.empty? && frequency.empty?
summary += "."
elsif channel.empty?
summary += " tuned to #{frequency}."
elsif frequency.empty?
summary += " on #{channel}."
else
summary += " on #{channel} (#{frequency})."
end
activity_sentence = if private_mode
"Track nodes and coverage in real time."
else
"Track nodes, messages, and coverage in real time."
end
sentences = [summary, activity_sentence]
if (distance = Sanitizer.sanitized_max_distance_km)
sentences << "Shows nodes within roughly #{formatted_distance_km(distance)} km of the map center."
end
sentences << "Join the community in #{contact} via chat." if contact
sentences.join(" ")
end
# Build a hash of meta configuration values used by templating layers.
#
# @param private_mode [Boolean] whether private mode is enabled.
# @return [Hash] structured metadata for templates.
def configuration(private_mode:)
site = Sanitizer.sanitized_site_name
{
title: site,
name: site,
description: description(private_mode: private_mode),
}.freeze
end
end
end
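The distance formatter above rounds to one decimal place and strips a trailing `.0`, so whole-kilometre values read cleanly in the meta description. Copied out as a self-contained sketch:

```ruby
# One decimal place, with ".0" stripped for whole-kilometre values.
def formatted_distance_km(distance)
  format("%.1f", distance).sub(/\.0\z/, "")
end

puts formatted_distance_km(12.0)   # 12
puts formatted_distance_km(12.34)  # 12.3
puts formatted_distance_km(7.5)    # 7.5
```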


@@ -0,0 +1,240 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "ipaddr"
require_relative "config"
module PotatoMesh
# Utility module responsible for coercing and sanitising user provided
# configuration strings. Each helper is exposed as a module function so it
# can be consumed both by the web layer and background jobs without
# instantiation overhead.
module Sanitizer
module_function
# Coerce an arbitrary value into a trimmed string unless the content is
# empty.
#
# @param value [Object, nil] arbitrary input that should be converted.
# @return [String, nil] trimmed string representation or +nil+ when blank.
def string_or_nil(value)
return nil if value.nil?
str = value.is_a?(String) ? value : value.to_s
trimmed = str.strip
trimmed.empty? ? nil : trimmed
end
# Ensure a value is a valid instance domain according to RFC 1035/3986
# rules. Hostnames must include at least one dot-separated label and a
# top-level domain containing an alphabetic character. Literal IP
# addresses must be provided in standard dotted decimal form or enclosed in
# brackets when IPv6 notation is used. Optional ports must fall within the
# valid TCP/UDP range. Any opaque identifiers, URIs, or malformed hosts are
# rejected.
#
# @param value [String, Object, nil] candidate domain name.
# @param downcase [Boolean] whether to force the result to lowercase.
# @return [String, nil] canonical domain value or +nil+ when invalid.
def sanitize_instance_domain(value, downcase: true)
host = string_or_nil(value)
return nil unless host
trimmed = host.strip
trimmed = trimmed.delete_suffix(".") while trimmed.end_with?(".")
return nil if trimmed.empty?
return nil if trimmed.match?(%r{[\s/\\@]})
if trimmed.start_with?("[")
match = trimmed.match(/\A\[(?<address>[^\]]+)\](?::(?<port>\d+))?\z/)
return nil unless match
address = match[:address]
port = match[:port]
return nil if port && !valid_port?(port)
begin
IPAddr.new(address)
rescue IPAddr::InvalidAddressError
return nil
end
sanitized_address = downcase ? address.downcase : address
return "[#{sanitized_address}]#{port ? ":#{port}" : ""}"
end
domain = trimmed
port = nil
if domain.include?(":")
host_part, port_part = domain.split(":", 2)
return nil if host_part.nil? || host_part.empty?
return nil unless port_part && port_part.match?(/\A\d+\z/)
return nil unless valid_port?(port_part)
return nil if port_part.include?(":")
domain = host_part
port = port_part
end
unless valid_hostname?(domain) || valid_ipv4_literal?(domain)
return nil
end
sanitized_domain = downcase ? domain.downcase : domain
port ? "#{sanitized_domain}:#{port}" : sanitized_domain
end
# Determine whether the supplied hostname conforms to RFC 1035 label
# requirements and includes a valid top-level domain.
#
# @param hostname [String] host component without any port information.
# @return [Boolean] true when the hostname is valid.
def valid_hostname?(hostname)
return false if hostname.length > 253
labels = hostname.split(".")
return false if labels.length < 2
return false unless labels.all? { |label| valid_hostname_label?(label) }
top_level = labels.last
top_level.match?(/[a-z]/i)
end
# Validate a single hostname label ensuring the first and last characters
# are alphanumeric and that no unsupported symbols are present.
#
# @param label [String] hostname component between dots.
# @return [Boolean] true when the label is valid.
def valid_hostname_label?(label)
return false if label.empty?
return false if label.length > 63
label.match?(/\A[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?\z/i)
end
# Validate whether a candidate represents a dotted decimal IPv4 literal.
#
# @param address [String] IP address string without port information.
# @return [Boolean] true when the address is a valid IPv4 literal.
def valid_ipv4_literal?(address)
return false unless address.match?(/\A\d{1,3}(?:\.\d{1,3}){3}\z/)
address.split(".").all? { |octet| octet.to_i.between?(0, 255) }
end
# Determine whether a port string represents a valid TCP/UDP port.
#
# @param port [String] numeric port representation.
# @return [Boolean] true when the port falls within the acceptable range.
def valid_port?(port)
value = port.to_i
value.positive? && value <= 65_535
end
# Extract the host component from a potentially bracketed domain literal.
#
# @param domain [String, nil] raw domain string received from the user.
# @return [String, nil] host portion of the domain, or +nil+ when invalid.
def instance_domain_host(domain)
return nil if domain.nil?
candidate = domain.strip
return nil if candidate.empty?
if candidate.start_with?("[")
match = candidate.match(/\A\[(?<host>[^\]]+)\](?::(?<port>\d+))?\z/)
return match[:host] if match
return nil
end
host, port = candidate.split(":", 2)
if port && !host.include?(":") && port.match?(/\A\d+\z/)
return host
end
candidate
end
# Resolve a validated domain string into an IP address object.
#
# @param domain [String, nil] domain literal potentially including port.
# @return [IPAddr, nil] parsed IP address when valid.
def ip_from_domain(domain)
host = instance_domain_host(domain)
return nil unless host
IPAddr.new(host)
rescue IPAddr::InvalidAddressError
nil
end
# Normalise a value into a trimmed string representation.
#
# @param value [Object] arbitrary object to coerce into text.
# @return [String] trimmed string version of the supplied value.
def sanitized_string(value)
value.to_s.strip
end
# Retrieve the configured site name as a cleaned string.
#
# @return [String] trimmed configuration value.
def sanitized_site_name
sanitized_string(Config.site_name)
end
# Retrieve the configured channel as a cleaned string.
#
# @return [String] trimmed configuration value.
def sanitized_channel
sanitized_string(Config.channel)
end
# Retrieve the configured frequency as a cleaned string.
#
# @return [String] trimmed configuration value.
def sanitized_frequency
sanitized_string(Config.frequency)
end
# Retrieve the configured contact link and normalise blank values to nil.
#
# @return [String, nil] contact link identifier or +nil+ when blank.
def sanitized_contact_link
value = sanitized_string(Config.contact_link)
value.empty? ? nil : value
end
# Retrieve the best effort URL for the configured contact link.
#
# @return [String, nil] contact hyperlink when derivable.
def sanitized_contact_link_url
Config.contact_link_url
end
# Return a positive numeric maximum distance when configured.
#
# @return [Numeric, nil] distance value in kilometres.
def sanitized_max_distance_km
distance = Config.max_distance_km
return nil unless distance.is_a?(Numeric)
return nil unless distance.positive?
distance
end
end
end
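The sanitizer's host validation ultimately rests on two small pure predicates, `valid_ipv4_literal?` and `valid_port?`, which have no dependencies and can be exercised directly. Copied out as a standalone sketch:

```ruby
# Dotted-decimal shape check, then a per-octet 0..255 range check.
def valid_ipv4_literal?(address)
  return false unless address.match?(/\A\d{1,3}(?:\.\d{1,3}){3}\z/)
  address.split(".").all? { |octet| octet.to_i.between?(0, 255) }
end

# Ports must fall in the valid TCP/UDP range 1..65535.
def valid_port?(port)
  value = port.to_i
  value.positive? && value <= 65_535
end

puts valid_ipv4_literal?("192.168.1.1")  # true
puts valid_ipv4_literal?("999.1.1.1")    # false (octet out of range)
puts valid_port?("8080")                 # true
puts valid_port?("0")                    # false (ports start at 1)
```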

web/package-lock.json generated

@@ -6,7 +6,168 @@
"packages": {
"": {
"name": "potato-mesh",
"version": "0.5.0"
"version": "0.5.0",
"devDependencies": {
"istanbul-lib-coverage": "^3.2.2",
"istanbul-lib-report": "^3.0.1",
"istanbul-reports": "^3.2.0",
"v8-to-istanbul": "^9.3.0"
}
},
"node_modules/@jridgewell/resolve-uri": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz",
"integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=6.0.0"
}
},
"node_modules/@jridgewell/sourcemap-codec": {
"version": "1.5.5",
"resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
"integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==",
"dev": true,
"license": "MIT"
},
"node_modules/@jridgewell/trace-mapping": {
"version": "0.3.31",
"resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.31.tgz",
"integrity": "sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==",
"dev": true,
"license": "MIT",
"dependencies": {
"@jridgewell/resolve-uri": "^3.1.0",
"@jridgewell/sourcemap-codec": "^1.4.14"
}
},
"node_modules/@types/istanbul-lib-coverage": {
"version": "2.0.6",
"resolved": "https://registry.npmjs.org/@types/istanbul-lib-coverage/-/istanbul-lib-coverage-2.0.6.tgz",
"integrity": "sha512-2QF/t/auWm0lsy8XtKVPG19v3sSOQlJe/YHZgfjb/KBBHOGSV+J2q/S671rcq9uTBrLAXmZpqJiaQbMT+zNU1w==",
"dev": true,
"license": "MIT"
},
"node_modules/convert-source-map": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-2.0.0.tgz",
"integrity": "sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg==",
"dev": true,
"license": "MIT"
},
"node_modules/has-flag": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8"
}
},
"node_modules/html-escaper": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/html-escaper/-/html-escaper-2.0.2.tgz",
"integrity": "sha512-H2iMtd0I4Mt5eYiapRdIDjp+XzelXQ0tFE4JS7YFwFevXXMmOp9myNrUvCg0D6ws8iqkRPBfKHgbwig1SmlLfg==",
"dev": true,
"license": "MIT"
},
"node_modules/istanbul-lib-coverage": {
"version": "3.2.2",
"resolved": "https://registry.npmjs.org/istanbul-lib-coverage/-/istanbul-lib-coverage-3.2.2.tgz",
"integrity": "sha512-O8dpsF+r0WV/8MNRKfnmrtCWhuKjxrq2w+jpzBL5UZKTi2LeVWnWOmWRxFlesJONmc+wLAGvKQZEOanko0LFTg==",
"dev": true,
"license": "BSD-3-Clause",
"engines": {
"node": ">=8"
}
},
"node_modules/istanbul-lib-report": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/istanbul-lib-report/-/istanbul-lib-report-3.0.1.tgz",
"integrity": "sha512-GCfE1mtsHGOELCU8e/Z7YWzpmybrx/+dSTfLrvY8qRmaY6zXTKWn6WQIjaAFw069icm6GVMNkgu0NzI4iPZUNw==",
"dev": true,
"license": "BSD-3-Clause",
"dependencies": {
"istanbul-lib-coverage": "^3.0.0",
"make-dir": "^4.0.0",
"supports-color": "^7.1.0"
},
"engines": {
"node": ">=10"
}
},
"node_modules/istanbul-reports": {
"version": "3.2.0",
"resolved": "https://registry.npmjs.org/istanbul-reports/-/istanbul-reports-3.2.0.tgz",
"integrity": "sha512-HGYWWS/ehqTV3xN10i23tkPkpH46MLCIMFNCaaKNavAXTF1RkqxawEPtnjnGZ6XKSInBKkiOA5BKS+aZiY3AvA==",
"dev": true,
"license": "BSD-3-Clause",
"dependencies": {
"html-escaper": "^2.0.0",
"istanbul-lib-report": "^3.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/make-dir": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/make-dir/-/make-dir-4.0.0.tgz",
"integrity": "sha512-hXdUTZYIVOt1Ex//jAQi+wTZZpUpwBj/0QsOzqegb3rGMMeJiSEu5xLHnYfBrRV4RH2+OCSOO95Is/7x1WJ4bw==",
"dev": true,
"license": "MIT",
"dependencies": {
"semver": "^7.5.3"
},
"engines": {
"node": ">=10"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/semver": {
"version": "7.7.3",
"resolved": "https://registry.npmjs.org/semver/-/semver-7.7.3.tgz",
"integrity": "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q==",
"dev": true,
"license": "ISC",
"bin": {
"semver": "bin/semver.js"
},
"engines": {
"node": ">=10"
}
},
"node_modules/supports-color": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz",
"integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==",
"dev": true,
"license": "MIT",
"dependencies": {
"has-flag": "^4.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/v8-to-istanbul": {
"version": "9.3.0",
"resolved": "https://registry.npmjs.org/v8-to-istanbul/-/v8-to-istanbul-9.3.0.tgz",
"integrity": "sha512-kiGUalWN+rgBJ/1OHZsBtU4rXZOfj/7rKQxULKlIzwzQSvMJUUNgPwJEEh7gU6xEVxC0ahoOBvN2YI8GH6FNgA==",
"dev": true,
"license": "ISC",
"dependencies": {
"@jridgewell/trace-mapping": "^0.3.12",
"@types/istanbul-lib-coverage": "^2.0.1",
"convert-source-map": "^2.0.0"
},
"engines": {
"node": ">=10.12.0"
}
}
}
}


@@ -4,6 +4,12 @@
"type": "module",
"private": true,
"scripts": {
"test": "mkdir -p reports coverage && NODE_V8_COVERAGE=coverage node --test --experimental-test-coverage --test-reporter=junit --test-reporter-destination=reports/javascript-junit.xml && node ./scripts/export-coverage.js"
"test": "mkdir -p reports coverage && NODE_V8_COVERAGE=coverage node --test --experimental-test-coverage --test-reporter=spec --test-reporter-destination=stdout --test-reporter=junit --test-reporter-destination=reports/javascript-junit.xml && node ./scripts/export-coverage.js"
},
"devDependencies": {
"istanbul-lib-coverage": "^3.2.2",
"istanbul-lib-report": "^3.0.1",
"istanbul-reports": "^3.2.0",
"v8-to-istanbul": "^9.3.0"
}
}


@@ -0,0 +1,126 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
extractChatMessageMetadata,
formatChatMessagePrefix,
formatChatChannelTag,
formatNodeAnnouncementPrefix,
__test__
} from '../chat-format.js';
const {
firstNonNull,
normalizeString,
normalizeFrequency,
normalizeFrequencySlot,
FREQUENCY_PLACEHOLDER
} = __test__;
test('extractChatMessageMetadata prefers explicit region_frequency and channel_name', () => {
const payload = {
region_frequency: 868,
channel_name: ' Test Channel ',
lora_freq: 915,
channelName: 'Ignored'
};
const result = extractChatMessageMetadata(payload);
assert.deepEqual(result, { frequency: '868', channelName: 'Test Channel' });
});
test('extractChatMessageMetadata falls back to LoRa metadata', () => {
const payload = {
lora_freq: 915,
channelName: 'SpecChannel'
};
const result = extractChatMessageMetadata(payload);
assert.deepEqual(result, { frequency: '915', channelName: 'SpecChannel' });
});
test('extractChatMessageMetadata returns null metadata for invalid input', () => {
assert.deepEqual(extractChatMessageMetadata(null), { frequency: null, channelName: null });
assert.deepEqual(extractChatMessageMetadata(undefined), { frequency: null, channelName: null });
});
test('firstNonNull returns the first non-null candidate', () => {
assert.equal(firstNonNull(null, undefined, '', 'value'), '');
assert.equal(firstNonNull(undefined, null), null);
});
test('normalizeString trims strings and rejects empties', () => {
assert.equal(normalizeString(' Spec '), 'Spec');
assert.equal(normalizeString(' '), null);
assert.equal(normalizeString(123), '123');
assert.equal(normalizeString(Number.POSITIVE_INFINITY), null);
});
test('normalizeFrequency handles numeric and string inputs', () => {
assert.equal(normalizeFrequency(915), '915');
assert.equal(normalizeFrequency(868.125), '868.125');
assert.equal(normalizeFrequency(' 868MHz '), '868');
assert.equal(normalizeFrequency('n/a'), 'n/a');
assert.equal(normalizeFrequency(-5), null);
assert.equal(normalizeFrequency(null), null);
});
test('formatChatMessagePrefix preserves bracket placeholders', () => {
assert.equal(
formatChatMessagePrefix({ timestamp: '11:46:48', frequency: '868' }),
'[11:46:48][868]'
);
assert.equal(
formatChatMessagePrefix({ timestamp: '16:19:19', frequency: null }),
`[16:19:19][${FREQUENCY_PLACEHOLDER}]`
);
assert.equal(
formatChatMessagePrefix({ timestamp: '09:00:00', frequency: '' }),
`[09:00:00][${FREQUENCY_PLACEHOLDER}]`
);
});
test('formatChatChannelTag wraps channel names after the short name slot', () => {
assert.equal(
formatChatChannelTag({ channelName: 'TEST' }),
'[TEST]'
);
assert.equal(
formatChatChannelTag({ channelName: '' }),
'[]'
);
assert.equal(
formatChatChannelTag({ channelName: null }),
'[]'
);
});
test('formatNodeAnnouncementPrefix includes optional frequency bracket', () => {
assert.equal(
formatNodeAnnouncementPrefix({ timestamp: '12:34:56', frequency: '868' }),
'[12:34:56][868]'
);
assert.equal(
formatNodeAnnouncementPrefix({ timestamp: '01:02:03', frequency: null }),
`[01:02:03][${FREQUENCY_PLACEHOLDER}]`
);
});
test('normalizeFrequencySlot returns placeholder when frequency is missing', () => {
assert.equal(normalizeFrequencySlot(null), FREQUENCY_PLACEHOLDER);
assert.equal(normalizeFrequencySlot(''), FREQUENCY_PLACEHOLDER);
assert.equal(normalizeFrequencySlot(undefined), FREQUENCY_PLACEHOLDER);
assert.equal(normalizeFrequencySlot('915'), '915');
});
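The bracket-prefix tests above pin down a simple contract: a missing, null, or empty frequency renders as a placeholder rather than an empty slot. A standalone sketch of that contract ('???' is a stand-in value; the real `FREQUENCY_PLACEHOLDER` constant is exported from chat-format.js and may differ):

```javascript
// Stand-in for the constant exported by chat-format.js (assumed value).
const FREQUENCY_PLACEHOLDER = '???';

// Fall back to the placeholder for null/undefined/empty frequencies.
function normalizeFrequencySlot(frequency) {
  return frequency ? frequency : FREQUENCY_PLACEHOLDER;
}

function formatChatMessagePrefix({ timestamp, frequency }) {
  return `[${timestamp}][${normalizeFrequencySlot(frequency)}]`;
}

console.log(formatChatMessagePrefix({ timestamp: '11:46:48', frequency: '868' }));
// [11:46:48][868]
console.log(formatChatMessagePrefix({ timestamp: '16:19:19', frequency: null }));
// [16:19:19][???]
```

Keeping the bracket slot populated means downstream parsing of the prefix never has to special-case an empty `[]`.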


@@ -1,3 +1,17 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { documentStub, resetDocumentStub } from './document-stub.js';
@@ -41,6 +55,15 @@ test('readAppConfig returns an empty object and logs on parse failure', () => {
console.error = originalError;
});
test('readAppConfig ignores non-object JSON payloads', () => {
resetDocumentStub();
documentStub.setConfigElement({
getAttribute: name => (name === 'data-app-config' ? '42' : null)
});
assert.deepEqual(readAppConfig(), {});
});
test('mergeConfig applies default values when fields are missing', () => {
const result = mergeConfig({});
assert.deepEqual(result, {
@@ -57,9 +80,11 @@ test('mergeConfig coerces numeric values and nested objects', () => {
mapCenter: { lat: '10.5', lon: '20.1' },
tileFilters: { dark: 'contrast(2)' },
chatEnabled: 0,
defaultChannel: '#Custom',
defaultFrequency: '915MHz',
maxNodeDistanceKm: '55.5'
channel: '#Custom',
frequency: '915MHz',
contactLink: 'https://example.org/chat',
contactLinkUrl: 'https://example.org/chat',
maxDistanceKm: '55.5'
});
assert.equal(result.refreshIntervalSeconds, 30);
@@ -67,19 +92,26 @@ test('mergeConfig coerces numeric values and nested objects', () => {
assert.deepEqual(result.mapCenter, { lat: 10.5, lon: 20.1 });
assert.deepEqual(result.tileFilters, { light: DEFAULT_CONFIG.tileFilters.light, dark: 'contrast(2)' });
assert.equal(result.chatEnabled, false);
assert.equal(result.defaultChannel, '#Custom');
assert.equal(result.defaultFrequency, '915MHz');
assert.equal(result.maxNodeDistanceKm, 55.5);
assert.equal(result.channel, '#Custom');
assert.equal(result.frequency, '915MHz');
assert.equal(result.contactLink, 'https://example.org/chat');
assert.equal(result.contactLinkUrl, 'https://example.org/chat');
assert.equal(result.maxDistanceKm, 55.5);
});
test('mergeConfig falls back to defaults for invalid numeric values', () => {
const result = mergeConfig({
refreshIntervalSeconds: 'NaN',
refreshMs: 'NaN',
maxNodeDistanceKm: 'oops'
maxDistanceKm: 'oops'
});
assert.equal(result.refreshIntervalSeconds, DEFAULT_CONFIG.refreshIntervalSeconds);
assert.equal(result.refreshMs, DEFAULT_CONFIG.refreshMs);
assert.equal(result.maxNodeDistanceKm, DEFAULT_CONFIG.maxNodeDistanceKm);
assert.equal(result.maxDistanceKm, DEFAULT_CONFIG.maxDistanceKm);
});
test('document stub returns null for unrelated selectors', () => {
resetDocumentStub();
assert.equal(documentStub.querySelector('#missing'), null);
});


@@ -1,17 +1,57 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Minimal document implementation that exposes the subset of behaviour needed
* by the front-end modules during unit tests.
*/
class DocumentStub {
/**
* Instantiate a new stub with a clean internal state.
*/
constructor() {
this.reset();
}
/**
* Clear tracked configuration elements and registered event listeners.
*
* @returns {void}
*/
reset() {
this.configElement = null;
this.listeners = new Map();
}
/**
* Provide an element that will be returned by ``querySelector`` when the
* configuration selector is requested.
*
* @param {?Element} element DOM node exposing ``getAttribute``.
* @returns {void}
*/
setConfigElement(element) {
this.configElement = element;
}
/**
* Return the registered configuration element when the matching selector is
* provided.
*
* @param {string} selector CSS selector requested by the module under test.
* @returns {?Element} Config element or ``null`` when unavailable.
*/
querySelector(selector) {
if (selector === '[data-app-config]') {
return this.configElement;
@@ -19,10 +59,24 @@ class DocumentStub {
return null;
}
/**
* Register an event handler, mirroring the DOM ``addEventListener`` API.
*
* @param {string} event Event identifier.
* @param {Function} handler Callback invoked when ``dispatchEvent`` is
* called.
* @returns {void}
*/
addEventListener(event, handler) {
this.listeners.set(event, handler);
}
/**
* Trigger a previously registered listener.
*
* @param {string} event Event identifier used when registering the handler.
* @returns {void}
*/
dispatchEvent(event) {
const handler = this.listeners.get(event);
if (handler) {
@@ -32,6 +86,12 @@ class DocumentStub {
}
export const documentStub = new DocumentStub();
/**
* Reset the shared stub between test cases to avoid state bleed.
*
* @returns {void}
*/
export function resetDocumentStub() {
documentStub.reset();
}
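The stub's event plumbing is just a Map from event name to a single handler, which is enough for modules that register one `DOMContentLoaded`-style listener. A condensed sketch of the pattern (note a second `addEventListener` for the same event replaces the first via `Map.set`, unlike the real DOM API, a deliberate simplification):

```javascript
// Condensed version of the Map-backed listener bookkeeping above.
class TinyDocumentStub {
  constructor() { this.listeners = new Map(); }
  addEventListener(event, handler) { this.listeners.set(event, handler); }
  dispatchEvent(event) {
    const handler = this.listeners.get(event);
    if (handler) handler();
  }
}

const stub = new TinyDocumentStub();
let fired = false;
stub.addEventListener('DOMContentLoaded', () => { fired = true; });
stub.dispatchEvent('DOMContentLoaded');
console.log(fired);  // true
```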


@@ -0,0 +1,292 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Simple class list implementation supporting the subset of DOMTokenList
* behaviour required by the tests.
*/
class MockClassList {
constructor() {
this._values = new Set();
}
/**
* Add one or more CSS classes to the element.
*
* @param {...string} names Class names to insert into the list.
* @returns {void}
*/
add(...names) {
names.forEach(name => {
if (name) {
this._values.add(name);
}
});
}
/**
* Remove one or more CSS classes from the element.
*
* @param {...string} names Class names to delete from the list.
* @returns {void}
*/
remove(...names) {
names.forEach(name => {
if (name) {
this._values.delete(name);
}
});
}
/**
* Determine whether the class list currently contains ``name``.
*
* @param {string} name Target class name.
* @returns {boolean} ``true`` when the class is present.
*/
contains(name) {
return this._values.has(name);
}
/**
* Toggle the provided class name.
*
* @param {string} name Class name to toggle.
* @param {boolean} [force] Optional forced state mirroring ``DOMTokenList``.
* @returns {boolean} ``true`` when the class is present after toggling.
*/
toggle(name, force) {
if (force === true) {
this._values.add(name);
return true;
}
if (force === false) {
this._values.delete(name);
return false;
}
if (this._values.has(name)) {
this._values.delete(name);
return false;
}
this._values.add(name);
return true;
}
}
/**
* Minimal DOM element implementation exposing the subset of behaviour exercised
* by the frontend entrypoints.
*/
class MockElement {
/**
* @param {string} tagName Element name used for diagnostics.
* @param {Map<string, MockElement>} registry Storage shared with the
* containing document to support ``getElementById``.
*/
constructor(tagName, registry) {
this.tagName = tagName.toUpperCase();
this._registry = registry;
this.attributes = new Map();
this.dataset = {};
this.style = {};
this.textContent = '';
this.classList = new MockClassList();
}
/**
* Associate an attribute with the element.
*
* @param {string} name Attribute identifier.
* @param {string} value Attribute value.
* @returns {void}
*/
setAttribute(name, value) {
this.attributes.set(name, String(value));
if (name === 'id' && this._registry) {
this._registry.set(String(value), this);
}
}
/**
* Retrieve an attribute value.
*
* @param {string} name Attribute identifier.
* @returns {?string} Matching attribute or ``null`` when absent.
*/
getAttribute(name) {
return this.attributes.has(name) ? this.attributes.get(name) : null;
}
}
/**
* Create a deterministic DOM environment that provides just enough behaviour
* for the UI scripts to execute inside Node.js unit tests.
*
* @param {{
* readyState?: 'loading' | 'interactive' | 'complete',
* cookie?: string,
* includeBody?: boolean,
* bodyHasDarkClass?: boolean
* }} [options]
* @returns {{
* window: Window & { dispatchEvent: Function },
* document: Document,
* createElement: (tagName?: string, id?: string) => MockElement,
* registerElement: (id: string, element: MockElement) => void,
* setComputedStyleImplementation: (impl: Function) => void,
* triggerDOMContentLoaded: () => void,
* dispatchWindowEvent: (event: string) => void,
* getCookieString: () => string,
* setCookieString: (value: string) => void,
* cleanup: () => void
* }}
*/
export function createDomEnvironment(options = {}) {
const {
readyState = 'complete',
cookie = '',
includeBody = true,
bodyHasDarkClass = true
} = options;
const originalWindow = globalThis.window;
const originalDocument = globalThis.document;
const registry = new Map();
const documentListeners = new Map();
const windowListeners = new Map();
let computedStyleImpl = null;
let cookieStore = cookie;
const document = {
readyState,
documentElement: new MockElement('html', registry),
body: includeBody ? new MockElement('body', registry) : null,
addEventListener(event, handler) {
documentListeners.set(event, handler);
},
removeEventListener(event) {
documentListeners.delete(event);
},
dispatchEvent(event) {
const handler = documentListeners.get(event);
if (handler) handler();
},
getElementById(id) {
return registry.get(id) || null;
},
querySelector() {
return null;
},
createElement(tagName) {
return new MockElement(tagName, registry);
}
};
if (document.body && bodyHasDarkClass) {
document.body.classList.add('dark');
}
Object.defineProperty(document, 'cookie', {
get() {
return cookieStore;
},
set(value) {
cookieStore = cookieStore ? `${cookieStore}; ${value}` : value;
}
});
const window = {
document,
addEventListener(event, handler) {
windowListeners.set(event, handler);
},
removeEventListener(event) {
windowListeners.delete(event);
},
dispatchEvent(event) {
const handler = windowListeners.get(event);
if (handler) handler();
},
getComputedStyle(target) {
if (typeof computedStyleImpl === 'function') {
return computedStyleImpl(target);
}
return {
getPropertyValue() {
return '';
}
};
}
};
globalThis.window = window;
globalThis.document = document;
/**
* Create and optionally register a mock element.
*
* @param {string} [tagName='div'] Tag name of the element.
* @param {string} [id] Optional identifier registered with the document.
* @returns {MockElement} New mock element instance.
*/
function createElement(tagName = 'div', id) {
const element = new MockElement(tagName, registry);
if (id) {
element.setAttribute('id', id);
}
return element;
}
/**
* Register an element instance so that ``getElementById`` can resolve it.
*
* @param {string} id Element identifier.
* @param {MockElement} element Element instance to register.
* @returns {void}
*/
function registerElement(id, element) {
registry.set(id, element);
}
return {
window,
document,
createElement,
registerElement,
setComputedStyleImplementation(impl) {
computedStyleImpl = impl;
},
triggerDOMContentLoaded() {
const handler = documentListeners.get('DOMContentLoaded');
if (handler) handler();
},
dispatchWindowEvent(event) {
const handler = windowListeners.get(event);
if (handler) handler();
},
getCookieString() {
return cookieStore;
},
setCookieString(value) {
cookieStore = value;
},
cleanup() {
globalThis.window = originalWindow;
globalThis.document = originalDocument;
}
};
}
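The environment works by swapping `globalThis.window` and `globalThis.document` for stubs and restoring the originals in `cleanup()`. The underlying save/restore shape can be sketched as follows (a generic pattern illustrating the technique, not the module's exact code; `withStubbedWindow` is a hypothetical helper):

```javascript
// Generic save/restore-globals pattern; `fakeWindow` is a stand-in here,
// not the stub object built by createDomEnvironment.
function withStubbedWindow(fakeWindow, testBody) {
  const originalWindow = globalThis.window;
  globalThis.window = fakeWindow;
  try {
    return testBody();
  } finally {
    // Mirrors cleanup(): always restore, even when the test body throws.
    globalThis.window = originalWindow;
  }
}
```

Restoring inside `finally` keeps one failing test from leaking stubs into the next, which is the same guarantee `cleanup()` gives when callers invoke it in their own `finally` blocks, as the tests below do.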


@@ -0,0 +1,172 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { createDomEnvironment } from './dom-environment.js';
import { buildInstanceUrl, initializeInstanceSelector, __test__ } from '../instance-selector.js';
const { resolveInstanceLabel } = __test__;
function setupSelectElement(document) {
const select = document.createElement('select');
const listeners = new Map();
const options = [];
Object.defineProperty(select, 'options', {
get() {
return options;
}
});
Object.defineProperty(select, 'value', {
get() {
if (typeof select.selectedIndex !== 'number') {
return '';
}
const current = options[select.selectedIndex];
return current ? current.value : '';
},
set(newValue) {
const index = options.findIndex(option => option.value === newValue);
select.selectedIndex = index >= 0 ? index : -1;
}
});
select.selectedIndex = -1;
select.appendChild = option => {
options.push(option);
if (select.selectedIndex === -1) {
select.selectedIndex = 0;
}
return option;
};
select.remove = index => {
if (index >= 0 && index < options.length) {
options.splice(index, 1);
if (options.length === 0) {
select.selectedIndex = -1;
} else if (select.selectedIndex >= options.length) {
select.selectedIndex = options.length - 1;
}
}
};
select.addEventListener = (event, handler) => {
listeners.set(event, handler);
};
select.dispatchEvent = event => {
const key = typeof event === 'string' ? event : event?.type;
const handler = listeners.get(key);
if (handler) {
handler(event);
}
};
return select;
}
test('resolveInstanceLabel falls back to the domain when the name is missing', () => {
assert.equal(resolveInstanceLabel({ domain: 'mesh.example' }), 'mesh.example');
assert.equal(resolveInstanceLabel({ name: ' Mesh Name ' }), 'Mesh Name');
assert.equal(resolveInstanceLabel(null), '');
});
test('buildInstanceUrl normalises domains into navigable HTTPS URLs', () => {
assert.equal(buildInstanceUrl('mesh.example'), 'https://mesh.example');
assert.equal(buildInstanceUrl(' https://mesh.example '), 'https://mesh.example');
assert.equal(buildInstanceUrl(''), null);
assert.equal(buildInstanceUrl(null), null);
});
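The assertions above pin down both helpers completely, so minimal implementations consistent with them can be sketched (assumed shapes inferred from the tests, not the actual code in `../instance-selector.js`):

```javascript
// Sketch consistent with the tests: the label prefers a trimmed name,
// falls back to the domain, and returns '' for nullish input.
function resolveInstanceLabel(instance) {
  if (!instance) return '';
  const name = typeof instance.name === 'string' ? instance.name.trim() : '';
  if (name) return name;
  return typeof instance.domain === 'string' ? instance.domain.trim() : '';
}

// Sketch consistent with the tests: trim the input, keep explicit schemes,
// otherwise assume HTTPS; empty or nullish input yields null.
function buildInstanceUrl(domain) {
  const trimmed = typeof domain === 'string' ? domain.trim() : '';
  if (!trimmed) return null;
  return /^https?:\/\//.test(trimmed) ? trimmed : `https://${trimmed}`;
}
```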
test('initializeInstanceSelector populates options alphabetically and selects the configured domain', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);
const fetchCalls = [];
const fetchImpl = async url => {
fetchCalls.push(url);
return {
ok: true,
async json() {
return [
{ name: 'Zulu Mesh', domain: 'zulu.mesh' },
{ name: 'Alpha Mesh', domain: 'alpha.mesh' },
{ domain: 'beta.mesh' }
];
}
};
};
try {
await initializeInstanceSelector({
selectElement: select,
fetchImpl,
windowObject: env.window,
documentObject: env.document,
instanceDomain: 'beta.mesh',
defaultLabel: 'Select region ...'
});
assert.equal(fetchCalls.length, 1);
assert.equal(select.options.length, 4);
assert.equal(select.options[0].textContent, 'Select region ...');
assert.equal(select.options[1].textContent, 'Alpha Mesh');
assert.equal(select.options[2].textContent, 'beta.mesh');
assert.equal(select.options[3].textContent, 'Zulu Mesh');
assert.equal(select.options[select.selectedIndex].value, 'beta.mesh');
} finally {
env.cleanup();
}
});
test('initializeInstanceSelector navigates to the chosen instance domain', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);
const fetchImpl = async () => ({
ok: true,
async json() {
return [{ domain: 'mesh.example' }];
}
});
let navigatedTo = null;
const navigate = url => {
navigatedTo = url;
};
try {
await initializeInstanceSelector({
selectElement: select,
fetchImpl,
windowObject: env.window,
documentObject: env.document,
navigate,
defaultLabel: 'Select region ...'
});
assert.equal(select.options.length, 2);
assert.equal(select.options[1].value, 'mesh.example');
select.value = 'mesh.example';
select.dispatchEvent({ type: 'change', target: select });
assert.equal(navigatedTo, 'https://mesh.example');
} finally {
env.cleanup();
}
});


@@ -0,0 +1,162 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { createMapAutoFitController } from '../map-auto-fit-controller.js';
class ToggleStub extends EventTarget {
constructor(checked = true) {
super();
this.checked = checked;
}
/**
* @param {Event} event - Event to dispatch to listeners.
* @returns {boolean} Dispatch status.
*/
dispatchEvent(event) {
return super.dispatchEvent(event);
}
}
class WindowStub {
constructor() {
this.listeners = new Map();
}
addEventListener(type, listener) {
this.listeners.set(type, listener);
}
removeEventListener(type, listener) {
const existing = this.listeners.get(type);
if (existing === listener) {
this.listeners.delete(type);
}
}
emit(type) {
const listener = this.listeners.get(type);
if (listener) listener();
}
}
test('recordFit stores and clones the last fit snapshot', () => {
const toggle = new ToggleStub(true);
const controller = createMapAutoFitController({ toggleEl: toggle, defaultPaddingPx: 20 });
assert.equal(controller.getLastFit(), null);
controller.recordFit([[10, 20], [30, 40]], { paddingPx: 12, maxZoom: 9 });
const snapshot = controller.getLastFit();
assert.ok(snapshot);
assert.deepEqual(snapshot.bounds, [[10, 20], [30, 40]]);
assert.deepEqual(snapshot.options, { paddingPx: 12, maxZoom: 9 });
snapshot.bounds[0][0] = -999;
snapshot.options.paddingPx = -1;
const secondSnapshot = controller.getLastFit();
assert.deepEqual(secondSnapshot?.bounds, [[10, 20], [30, 40]]);
assert.deepEqual(secondSnapshot?.options, { paddingPx: 12, maxZoom: 9 });
});
test('recordFit ignores invalid bounds and normalises fit options', () => {
const controller = createMapAutoFitController({ defaultPaddingPx: 16 });
controller.recordFit(null);
assert.equal(controller.getLastFit(), null);
controller.recordFit([[10, Number.NaN], [20, 30]]);
assert.equal(controller.getLastFit(), null);
controller.recordFit([[10, 11], [12, 13]], { paddingPx: -5, maxZoom: 0 });
const snapshot = controller.getLastFit();
assert.ok(snapshot);
assert.deepEqual(snapshot.options, { paddingPx: 16 });
});
test('handleUserInteraction disables auto-fit unless suppressed', () => {
const toggle = new ToggleStub(true);
let changeEvents = 0;
toggle.addEventListener('change', () => {
changeEvents += 1;
});
const controller = createMapAutoFitController({ toggleEl: toggle });
controller.runAutoFitOperation(() => {
assert.equal(controller.handleUserInteraction(), false);
assert.equal(toggle.checked, true);
});
assert.equal(changeEvents, 0);
assert.equal(controller.handleUserInteraction(), true);
assert.equal(toggle.checked, false);
assert.equal(changeEvents, 1);
assert.equal(controller.handleUserInteraction(), false);
assert.equal(changeEvents, 1);
});
test('isAutoFitEnabled reflects the toggle state', () => {
const toggle = new ToggleStub(false);
const controller = createMapAutoFitController({ toggleEl: toggle });
assert.equal(controller.isAutoFitEnabled(), false);
toggle.checked = true;
assert.equal(controller.isAutoFitEnabled(), true);
});
test('runAutoFitOperation returns callback results and tolerates missing functions', () => {
const controller = createMapAutoFitController();
assert.equal(controller.runAutoFitOperation(), undefined);
let active = false;
const result = controller.runAutoFitOperation(() => {
active = true;
return 42;
});
assert.equal(active, true);
assert.equal(result, 42);
});
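The interplay between `runAutoFitOperation` and `handleUserInteraction` tested above implies a reentrancy flag: while a programmatic fit runs, user-interaction handling is suppressed so the map events the fit generates do not disable auto-fit. A minimal sketch of that guard (an assumption about the controller's internal shape, not its actual code):

```javascript
// Hypothetical guard: programmatic fits raise a flag so interaction
// handlers can distinguish real user input from fit-induced map events.
function createAutoFitGuard() {
  let autoFitActive = false;
  return {
    runAutoFitOperation(operation) {
      if (typeof operation !== 'function') return undefined;
      autoFitActive = true;
      try {
        return operation();
      } finally {
        autoFitActive = false; // released even if the operation throws
      }
    },
    isProgrammaticFit() {
      return autoFitActive;
    },
  };
}
```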
test('attachResizeListener forwards snapshots and supports teardown', () => {
const windowStub = new WindowStub();
const controller = createMapAutoFitController({ windowObject: windowStub, defaultPaddingPx: 24 });
controller.recordFit([[1, 2], [3, 4]], { paddingPx: 30 });
const snapshots = [];
const detach = controller.attachResizeListener(snapshot => {
snapshots.push(snapshot);
});
windowStub.emit('resize');
windowStub.emit('orientationchange');
assert.equal(snapshots.length, 2);
assert.deepEqual(snapshots[0], { bounds: [[1, 2], [3, 4]], options: { paddingPx: 30 } });
detach();
windowStub.emit('resize');
assert.equal(snapshots.length, 2);
const noop = controller.attachResizeListener();
assert.equal(typeof noop, 'function');
noop();
});


@@ -0,0 +1,138 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
computeBoundingBox,
computeBoundsForPoints,
haversineDistanceKm,
__testUtils
} from '../map-bounds.js';
const { clampLatitude, clampLongitude, normaliseRange, normaliseLongitudeAround } = __testUtils;
function approximatelyEqual(actual, expected, epsilon = 1e-3) {
assert.ok(Math.abs(actual - expected) <= epsilon, `${actual} is not within ${epsilon} of ${expected}`);
}
test('clamp helpers bound invalid coordinates', () => {
assert.equal(clampLatitude(120), 90);
assert.equal(clampLatitude(-95), -90);
assert.equal(clampLatitude(Number.POSITIVE_INFINITY), 90);
assert.equal(clampLatitude(Number.NEGATIVE_INFINITY), -90);
assert.equal(clampLongitude(200), 180);
assert.equal(clampLongitude(-220), -180);
assert.equal(clampLongitude(Number.POSITIVE_INFINITY), 180);
assert.equal(clampLongitude(Number.NEGATIVE_INFINITY), -180);
});
test('normaliseRange enforces minimum distance for invalid inputs', () => {
assert.equal(normaliseRange(-1, 2), 2);
assert.equal(normaliseRange(Number.NaN, 3), 3);
assert.equal(normaliseRange(0, 1), 1);
assert.equal(normaliseRange(4, 2), 4);
});
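The clamp and range tests fix these helpers' behaviour precisely; implementations matching every assertion can be sketched as (assumed shapes mirroring `__testUtils`, not necessarily the code in `../map-bounds.js` verbatim):

```javascript
// Clamp helpers: Math.min/Math.max absorb ±Infinity naturally,
// so no explicit finiteness check is needed for the tested inputs.
const clampLatitude = value => Math.max(-90, Math.min(90, value));
const clampLongitude = value => Math.max(-180, Math.min(180, value));

// Range helper: non-finite or non-positive ranges collapse to the minimum,
// and valid ranges are still floored at the minimum.
function normaliseRange(range, minimum) {
  return Number.isFinite(range) && range > 0 ? Math.max(range, minimum) : minimum;
}
```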
test('computeBoundingBox returns null for invalid centres', () => {
assert.equal(computeBoundingBox(null, 10), null);
assert.equal(computeBoundingBox({ lat: 'x', lon: 0 }, 5), null);
assert.equal(computeBoundingBox({ lat: 0, lon: NaN }, 5), null);
});
test('computeBoundingBox returns symmetric bounds for mid-latitude centre', () => {
const bounds = computeBoundingBox({ lat: 0, lon: 0 }, 10);
assert.ok(bounds);
const [[south, west], [north, east]] = bounds;
approximatelyEqual(north, -south, 1e-4);
approximatelyEqual(east, -west, 1e-4);
assert.ok(north > 0 && east > 0);
});
test('computeBoundingBox clamps longitude span near the poles', () => {
const bounds = computeBoundingBox({ lat: 89.9, lon: 45 }, 2000);
assert.ok(bounds);
const [[south, west], [north, east]] = bounds;
approximatelyEqual(south, 72.0, 1e-1);
assert.equal(west, -180);
assert.equal(east, 180);
assert.equal(north, 90);
});
test('haversineDistanceKm matches known city distance', () => {
// Approximate distance between Paris (48.8566, 2.3522) and Berlin (52.52, 13.4050)
const distance = haversineDistanceKm(48.8566, 2.3522, 52.52, 13.405);
approximatelyEqual(distance, 878.8, 2);
});
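For reference, the standard haversine formula the test exercises can be sketched as below; the module's own implementation may differ in its Earth-radius constant, which is why the test allows a 2 km epsilon.

```javascript
// Standard haversine great-circle distance; R is the mean Earth radius in km.
function haversineDistanceKm(lat1, lon1, lat2, lon2) {
  const toRad = degrees => (degrees * Math.PI) / 180;
  const R = 6371;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}
```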
test('computeBoundsForPoints returns null when no valid points exist', () => {
assert.equal(computeBoundsForPoints([]), null);
assert.equal(computeBoundsForPoints([[Number.NaN, 0]]), null);
});
test('computeBoundsForPoints expands bounds with padding and minimum radius', () => {
const bounds = computeBoundsForPoints(
[
[38.0, -27.1],
[38.05, -27.08]
],
{ paddingFraction: 0.2, minimumRangeKm: 2 }
);
assert.ok(bounds);
const [[south, west], [north, east]] = bounds;
assert.ok(north > 38.05);
assert.ok(south < 38.0);
assert.ok(east > -27.08);
assert.ok(west < -27.1);
});
test('computeBoundsForPoints respects the configured minimum range for single points', () => {
const bounds = computeBoundsForPoints([[12.34, 56.78]], { minimumRangeKm: 5 });
assert.ok(bounds);
const [[south], [north]] = bounds;
assert.ok(north - south > 0.05);
});
test('computeBoundsForPoints preserves tight bounds across the antimeridian', () => {
const points = [
[10.0, 179.5],
[11.2, -179.7],
[9.5, 179.2]
];
const bounds = computeBoundsForPoints(points, { paddingFraction: 0.1 });
assert.ok(bounds);
const [[south, west], [north, east]] = bounds;
assert.ok(north - south < 10, 'latitude span should remain tight');
const lonSpan = Math.abs(east - west);
const normalizedSpan = lonSpan > 180 ? 360 - lonSpan : lonSpan;
assert.ok(normalizedSpan < 40, 'longitude span should wrap tightly around the dateline');
for (const [, lon] of points) {
const adjustedLon = normaliseLongitudeAround(lon, (west + east) / 2);
assert.ok(adjustedLon >= west - 1e-6 && adjustedLon <= east + 1e-6, 'point longitude should lie within bounds');
}
assert.ok(east > 180 || west < -180, 'bounds should extend beyond the canonical range when necessary');
});


@@ -0,0 +1,244 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { attachNodeInfoRefreshToMarker, overlayToPopupNode } from '../map-marker-node-info.js';
function createFakeMarker(anchor) {
const handlers = {};
return {
handlers,
on(name, handler) {
if (!handlers[name]) handlers[name] = [];
handlers[name].push(handler);
return this;
},
getElement() {
return anchor;
},
trigger(name, payload) {
for (const handler of handlers[name] || []) {
handler(payload);
}
},
};
}
test('attachNodeInfoRefreshToMarker refreshes markers with merged overlay details', async () => {
const anchor = { id: 'anchor-el' };
const marker = createFakeMarker(anchor);
const popupUpdates = [];
const detailCalls = [];
let prevented = false;
let stopped = false;
let token = 0;
const refreshCalls = [];
attachNodeInfoRefreshToMarker({
marker,
getOverlayFallback: () => ({ nodeId: '!foo', shortName: 'Foo', role: 'CLIENT', neighbors: [] }),
refreshNodeInformation: async reference => {
refreshCalls.push(reference);
return { battery: 55.5, telemetryTime: 123, neighbors: [{ neighbor_id: '!bar', snr: 9.5 }] };
},
mergeOverlayDetails: (primary, fallback) => ({ ...fallback, ...primary }),
createRequestToken: el => {
assert.equal(el, anchor);
return ++token;
},
isTokenCurrent: (el, candidate) => {
assert.equal(el, anchor);
return candidate === token;
},
showLoading: (el, info) => {
assert.equal(el, anchor);
assert.equal(info.nodeId, '!foo');
},
showDetails: (el, info) => {
detailCalls.push({ el, info });
},
showError: () => {
assert.fail('showError should not be invoked on success');
},
updatePopup: info => {
popupUpdates.push(info);
},
});
const clickEvent = {
originalEvent: {
preventDefault() {
prevented = true;
},
stopPropagation() {
stopped = true;
},
},
};
marker.trigger('click', clickEvent);
await new Promise(resolve => setImmediate(resolve));
assert.equal(prevented, true);
assert.equal(stopped, true);
assert.equal(refreshCalls.length, 1);
assert.deepEqual(refreshCalls[0], {
nodeId: '!foo',
fallback: { nodeId: '!foo', shortName: 'Foo', role: 'CLIENT', neighbors: [] },
});
assert.ok(popupUpdates.length >= 1);
const merged = popupUpdates[popupUpdates.length - 1];
assert.equal(merged.battery, 55.5);
assert.equal(merged.telemetryTime, 123);
assert.equal(detailCalls.length, 1);
assert.equal(detailCalls[0].el, anchor);
assert.equal(detailCalls[0].info.battery, 55.5);
});
test('attachNodeInfoRefreshToMarker surfaces errors with fallback overlays', async () => {
const anchor = { id: 'anchor' };
const marker = createFakeMarker(anchor);
let token = 0;
let errorCaptured = null;
let detailCalls = 0;
let updateCalls = 0;
attachNodeInfoRefreshToMarker({
marker,
getOverlayFallback: () => ({ nodeId: '!oops', shortName: 'Oops' }),
refreshNodeInformation: async () => {
throw new Error('boom');
},
mergeOverlayDetails: (primary, fallback) => ({ ...fallback, ...primary }),
createRequestToken: el => {
assert.equal(el, anchor);
return ++token;
},
isTokenCurrent: (el, candidate) => {
assert.equal(el, anchor);
return candidate === token;
},
showLoading: () => {},
showDetails: () => {
detailCalls += 1;
},
showError: (el, info, error) => {
assert.equal(el, anchor);
assert.equal(info.nodeId, '!oops');
errorCaptured = error;
},
updatePopup: () => {
updateCalls += 1;
},
});
marker.trigger('click', { originalEvent: {} });
await new Promise(resolve => setImmediate(resolve));
assert.ok(errorCaptured instanceof Error);
assert.equal(errorCaptured.message, 'boom');
assert.equal(detailCalls, 0);
assert.equal(updateCalls, 2);
});
test('attachNodeInfoRefreshToMarker skips refresh when identifiers are missing', async () => {
const anchor = { id: 'anchor' };
const marker = createFakeMarker(anchor);
let token = 0;
let refreshed = false;
let detailsShown = 0;
attachNodeInfoRefreshToMarker({
marker,
getOverlayFallback: () => ({ shortName: 'Unknown' }),
refreshNodeInformation: async () => {
refreshed = true;
},
mergeOverlayDetails: (primary, fallback) => ({ ...fallback, ...primary }),
createRequestToken: el => {
assert.equal(el, anchor);
return ++token;
},
isTokenCurrent: (el, candidate) => {
assert.equal(el, anchor);
return candidate === token;
},
showLoading: () => {
assert.fail('showLoading should not run without identifiers');
},
showDetails: (el, info) => {
assert.equal(el, anchor);
assert.equal(info.shortName, 'Unknown');
detailsShown += 1;
},
});
marker.trigger('click', { originalEvent: {} });
await new Promise(resolve => setImmediate(resolve));
assert.equal(refreshed, false);
assert.equal(detailsShown, 1);
});
test('attachNodeInfoRefreshToMarker honours shouldHandleClick predicate', async () => {
const marker = createFakeMarker({ id: 'anchor' });
let token = 0;
let refreshed = false;
attachNodeInfoRefreshToMarker({
marker,
getOverlayFallback: () => ({ nodeId: '!skip' }),
refreshNodeInformation: async () => {
refreshed = true;
},
mergeOverlayDetails: (primary, fallback) => ({ ...fallback, ...primary }),
createRequestToken: () => ++token,
isTokenCurrent: (el, candidate) => candidate === token,
shouldHandleClick: () => false,
});
marker.trigger('click', { originalEvent: {} });
await new Promise(resolve => setImmediate(resolve));
assert.equal(refreshed, false);
});
test('overlayToPopupNode normalises raw overlay payloads', () => {
const overlay = {
nodeId: '!foo',
nodeNum: 42,
shortName: 'Foo',
role: 'ROUTER',
battery: '77.5',
neighbors: [
{ neighbor_id: '!bar', snr: '12.5', neighbor_short_name: 'Bar' },
null,
],
};
const popupNode = overlayToPopupNode(overlay);
assert.equal(popupNode.node_id, '!foo');
assert.equal(popupNode.node_num, 42);
assert.equal(popupNode.short_name, 'Foo');
assert.equal(popupNode.role, 'ROUTER');
assert.equal(popupNode.battery_level, 77.5);
assert.equal(Array.isArray(popupNode.neighbors), true);
assert.equal(popupNode.neighbors.length, 1);
assert.equal(popupNode.neighbors[0].node.node_id, '!bar');
assert.equal(popupNode.neighbors[0].snr, 12.5);
});
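The assertions pin down the camelCase-to-snake_case mapping; a sketch consistent with them follows (field handling is inferred from the test, not copied from `../map-marker-node-info.js`, so treat it as an assumed shape):

```javascript
// Hypothetical overlay → popup-node mapping matching the assertions above:
// numeric strings are coerced, and nullish neighbour entries are dropped.
function overlayToPopupNode(overlay) {
  const neighbors = Array.isArray(overlay.neighbors)
    ? overlay.neighbors
        .filter(entry => entry && entry.neighbor_id)
        .map(entry => ({
          node: {
            node_id: entry.neighbor_id,
            short_name: entry.neighbor_short_name,
          },
          snr: Number(entry.snr),
        }))
    : [];
  return {
    node_id: overlay.nodeId,
    node_num: overlay.nodeNum,
    short_name: overlay.shortName,
    role: overlay.role,
    battery_level: Number(overlay.battery),
    neighbors,
  };
}
```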


@@ -0,0 +1,123 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { createMessageNodeHydrator } from '../message-node-hydrator.js';
/**
* Capture warning invocations produced during a test run.
*/
class LoggerStub {
constructor() {
this.messages = [];
}
/**
* Record a warning message for later inspection.
*
* @param {...*} args Warning arguments.
* @returns {void}
*/
warn(...args) {
this.messages.push(args);
}
}
test('hydrate attaches cached nodes without performing lookups', async () => {
const node = { node_id: '!abc', short_name: 'Node' };
const nodesById = new Map([[node.node_id, node]]);
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
throw new Error('fetch should not be called');
},
applyNodeFallback: () => {}
});
const messages = [{ node_id: '!abc', text: 'Hello' }];
const result = await hydrator.hydrate(messages, nodesById);
assert.equal(result.length, 1);
assert.strictEqual(result[0].node, node);
assert.equal(nodesById.size, 1);
});
test('hydrate fetches missing nodes once and caches the result', async () => {
let fetchCalls = 0;
const fetchedNode = { node_id: '!fetch', short_name: 'Fetched' };
const hydrator = createMessageNodeHydrator({
fetchNodeById: async id => {
fetchCalls += 1;
assert.equal(id, '!fetch');
return { ...fetchedNode };
},
applyNodeFallback: () => {}
});
const nodesById = new Map();
const messages = [{ from_id: '!fetch', text: 'one' }, { node_id: '!fetch', text: 'two' }];
const result = await hydrator.hydrate(messages, nodesById);
assert.equal(fetchCalls, 1);
assert.strictEqual(nodesById.get('!fetch').short_name, 'Fetched');
assert.strictEqual(result[0].node, nodesById.get('!fetch'));
assert.strictEqual(result[1].node, nodesById.get('!fetch'));
});
test('hydrate falls back to placeholders when lookups fail', async () => {
const logger = new LoggerStub();
let fallbackCalls = 0;
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => null,
applyNodeFallback: node => {
fallbackCalls += 1;
if (!node.short_name) {
node.short_name = 'Fallback';
}
},
logger
});
const nodesById = new Map();
const messages = [{ from_id: '!missing', text: 'hi' }];
const result = await hydrator.hydrate(messages, nodesById);
assert.equal(nodesById.has('!missing'), false);
assert.equal(fallbackCalls, 1);
assert.ok(result[0].node);
assert.equal(result[0].node.short_name, 'Fallback');
assert.equal(logger.messages.length, 0);
});
test('hydrate records warning when fetch rejects', async () => {
const logger = new LoggerStub();
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
throw new Error('network error');
},
applyNodeFallback: () => {},
logger
});
const nodesById = new Map();
const messages = [{ from_id: '!warn', text: 'warn' }];
const result = await hydrator.hydrate(messages, nodesById);
assert.equal(result[0].node.node_id, '!warn');
assert.ok(logger.messages.length >= 1);
assert.equal(nodesById.has('!warn'), false);
});
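Taken together, the four tests describe a caching contract: cached nodes skip lookups entirely, a successful fetch is performed once and cached, and failed or empty lookups fall back to a placeholder that stays out of the cache. A sketch of that contract (an assumption derived from the tests, not the real `createMessageNodeHydrator`):

```javascript
// Hypothetical hydrator honouring the caching contract the tests exercise.
function createMessageNodeHydrator({ fetchNodeById, applyNodeFallback, logger = console }) {
  return {
    async hydrate(messages, nodesById) {
      for (const message of messages) {
        const id = message.node_id || message.from_id;
        if (!id) continue;
        let node = nodesById.get(id);
        if (!node) {
          try {
            node = await fetchNodeById(id);
          } catch (error) {
            logger.warn('node lookup failed', id, error);
            node = null;
          }
          if (node) {
            nodesById.set(id, node); // cache successful lookups only
          } else {
            node = { node_id: id };
            applyNodeFallback(node); // placeholder never enters the cache
          }
        }
        message.node = node;
      }
      return messages;
    },
  };
}
```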


@@ -0,0 +1,348 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { refreshNodeInformation, __testUtils } from '../node-details.js';
const {
toTrimmedString,
toFiniteNumber,
extractString,
extractNumber,
assignString,
assignNumber,
mergeModemMetadata,
mergeNodeFields,
mergeTelemetry,
mergePosition,
parseFallback,
normalizeReference,
} = __testUtils;
function createResponse(status, body) {
return {
status,
ok: status >= 200 && status < 300,
json: async () => body,
};
}
test('refreshNodeInformation merges telemetry metrics when the base node lacks them', async () => {
const calls = [];
const responses = new Map([
['/api/nodes/!test', createResponse(200, {
node_id: '!test',
short_name: 'TST',
battery_level: null,
last_heard: 1_000,
modem_preset: 'MediumFast',
lora_freq: '868.1',
})],
['/api/telemetry/!test?limit=1', createResponse(200, [{
node_id: '!test',
battery_level: 73.5,
rx_time: 1_200,
telemetry_time: 1_180,
voltage: 4.1,
}])],
['/api/positions/!test?limit=1', createResponse(200, [{
node_id: '!test',
latitude: 52.5,
longitude: 13.4,
rx_time: 1_100,
}])],
['/api/neighbors/!test?limit=1000', createResponse(200, [{
node_id: '!test',
neighbor_id: '!peer',
snr: 9.5,
rx_time: 1_150,
}])],
]);
const fetchImpl = async (url, options) => {
calls.push({ url, options });
const response = responses.get(url);
if (!response) {
return createResponse(404, { error: 'not found' });
}
return response;
};
const fallback = { shortName: 'fallback', role: 'CLIENT' };
const node = await refreshNodeInformation({ nodeId: '!test', fallback }, { fetchImpl });
assert.equal(node.nodeId, '!test');
assert.equal(node.shortName, 'TST');
assert.equal(node.battery, 73.5);
assert.equal(node.voltage, 4.1);
assert.equal(node.role, 'CLIENT');
assert.equal(node.modemPreset, 'MediumFast');
assert.equal(node.loraFreq, 868.1);
assert.equal(node.lastHeard, 1_200);
assert.equal(node.telemetryTime, 1_180);
assert.equal(node.latitude, 52.5);
assert.equal(node.longitude, 13.4);
assert.deepEqual(node.neighbors, [{
node_id: '!test',
neighbor_id: '!peer',
snr: 9.5,
rx_time: 1_150,
}]);
assert.ok(node.rawSources);
assert.ok(node.rawSources.node);
assert.ok(node.rawSources.telemetry);
assert.ok(node.rawSources.position);
assert.equal(calls.length, 4);
calls.forEach(call => {
assert.deepEqual(call.options, { cache: 'no-store' });
});
});
test('refreshNodeInformation preserves fallback metrics when telemetry is unavailable', async () => {
const responses = new Map([
['/api/nodes/42', createResponse(200, {
node_id: '!num',
short_name: 'NUM',
})],
['/api/telemetry/42?limit=1', createResponse(404, { error: 'not found' })],
['/api/positions/42?limit=1', createResponse(404, { error: 'not found' })],
['/api/neighbors/42?limit=1000', createResponse(404, { error: 'not found' })],
]);
const fetchImpl = async url => responses.get(url) ?? createResponse(404, { error: 'not found' });
const fallback = { nodeNum: 42, battery: 12.5, role: 'CLIENT', modemPreset: 'FallbackPreset', loraFreq: 915 };
const node = await refreshNodeInformation({ nodeNum: 42, fallback }, { fetchImpl });
assert.equal(node.nodeId, '!num');
assert.equal(node.nodeNum, 42);
assert.equal(node.shortName, 'NUM');
assert.equal(node.battery, 12.5);
assert.equal(node.role, 'CLIENT');
assert.equal(node.modemPreset, 'FallbackPreset');
assert.equal(node.loraFreq, 915);
assert.ok(Array.isArray(node.neighbors));
assert.equal(node.neighbors.length, 0);
});
test('refreshNodeInformation requires a node identifier', async () => {
await assert.rejects(() => refreshNodeInformation(null), /node identifier/i);
});
test('refreshNodeInformation handles missing node records by falling back to telemetry data', async () => {
const responses = new Map([
['/api/nodes/!missing', createResponse(404, { error: 'not found' })],
['/api/telemetry/!missing?limit=1', createResponse(200, [{
node_id: '!missing',
node_num: 77,
battery_level: 66,
rx_time: 2_000,
telemetry_time: 1_950,
}])],
['/api/positions/!missing?limit=1', createResponse(200, [{
node_id: '!missing',
latitude: 1.23,
longitude: 3.21,
altitude: 42,
position_time: 1_960,
rx_time: 1_970,
}])],
['/api/neighbors/!missing?limit=1000', createResponse(200, [null, 'skip', {
node_id: '!missing',
neighbor_id: '!ally',
snr: 8.5,
}])],
]);
const fetchImpl = async url => responses.get(url) ?? createResponse(404, { error: 'not found' });
const node = await refreshNodeInformation({ nodeId: '!missing' }, { fetchImpl });
assert.equal(node.nodeId, '!missing');
assert.equal(node.nodeNum, 77);
assert.equal(node.battery, 66);
assert.equal(node.lastHeard, 2_000);
assert.equal(node.telemetryTime, 1_950);
assert.equal(node.positionTime, 1_960);
assert.equal(node.latitude, 1.23);
assert.equal(node.longitude, 3.21);
assert.equal(node.altitude, 42);
assert.equal(node.role, 'CLIENT');
assert.deepEqual(node.neighbors, [{
node_id: '!missing',
neighbor_id: '!ally',
snr: 8.5,
}]);
});
test('refreshNodeInformation enforces a fetch implementation', async () => {
const originalFetch = globalThis.fetch;
// eslint-disable-next-line no-global-assign
globalThis.fetch = undefined;
try {
await assert.rejects(() => refreshNodeInformation('!test', { fetchImpl: null }), /fetch implementation/i);
} finally {
// eslint-disable-next-line no-global-assign
globalThis.fetch = originalFetch;
}
});
test('mergeModemMetadata respects preference flags', () => {
const target = {};
mergeModemMetadata(target, { modem_preset: 'Base', lora_freq: '915.5' });
assert.equal(target.modemPreset, 'Base');
assert.equal(target.loraFreq, 915.5);
mergeModemMetadata(target, { modem_preset: 'New', lora_freq: '433' }, { preferExisting: true });
assert.equal(target.modemPreset, 'Base');
assert.equal(target.loraFreq, 915.5);
mergeModemMetadata(target, { modem_preset: 'Updated', lora_freq: '433' }, { preferExisting: false });
assert.equal(target.modemPreset, 'Updated');
assert.equal(target.loraFreq, 433);
});
test('helper utilities normalise primitive values', () => {
assert.equal(toTrimmedString(' hello '), 'hello');
assert.equal(toTrimmedString(''), null);
assert.equal(toTrimmedString(null), null);
assert.equal(toFiniteNumber('42.5'), 42.5);
assert.equal(toFiniteNumber('bad'), null);
assert.equal(toFiniteNumber(Infinity), null);
assert.equal(extractString({ name: ' Alice ' }, ['missing', 'name']), 'Alice');
assert.equal(extractString(null, ['name']), null);
assert.equal(extractNumber({ value: ' 13 ' }, ['missing', 'value']), 13);
assert.equal(extractNumber({}, ['value']), null);
});
test('assign helpers respect preferExisting semantics', () => {
const target = {};
assignString(target, 'name', ' primary ');
assignString(target, 'name', 'secondary', { preferExisting: true });
assignString(target, 'description', '');
assignNumber(target, 'count', '25');
assignNumber(target, 'count', 13, { preferExisting: true });
assignNumber(target, 'ignored', 'oops');
assert.deepEqual(target, { name: 'primary', count: 25 });
});
test('merge helpers combine node, telemetry, and position data', () => {
const node = {};
mergeNodeFields(node, {
node_id: '!node',
node_num: 55,
short_name: 'NODE',
battery_level: null,
last_heard: 1_000,
position_time: 900,
});
node.battery = 50;
mergeTelemetry(node, {
node_id: '!node',
battery_level: 75,
voltage: 3.8,
rx_time: 1_200,
rx_iso: '2025-01-01T00:00:00Z',
telemetry_time: 1_150,
});
mergePosition(node, {
node_id: '!node',
latitude: 52.5,
longitude: 13.4,
altitude: 80,
position_time: 1_180,
position_time_iso: '2025-01-01T00:19:40Z',
rx_time: 1_100,
rx_iso: '2025-01-01T00:18:20Z',
});
assert.equal(node.nodeId, '!node');
assert.equal(node.nodeNum, 55);
assert.equal(node.shortName, 'NODE');
assert.equal(node.battery, 50);
assert.equal(node.voltage, 3.8);
assert.equal(node.lastHeard, 1_200);
assert.equal(node.lastSeenIso, '2025-01-01T00:00:00Z');
assert.equal(node.telemetryTime, 1_150);
assert.equal(node.positionTime, 1_180);
assert.equal(node.positionTimeIso, '2025-01-01T00:19:40Z');
assert.equal(node.latitude, 52.5);
assert.equal(node.longitude, 13.4);
assert.equal(node.altitude, 80);
assert.ok(node.telemetry);
assert.ok(node.position);
});
test('normalizeReference extracts identifiers and tolerates malformed fallback payloads', () => {
const originalWarn = console.warn;
const warnings = [];
console.warn = (...args) => warnings.push(args);
try {
const parsed = normalizeReference({
nodeId: ' ',
fallback: '{"node_id":"!parsed","nodeNum":99}',
});
assert.equal(parsed.nodeId, '!parsed');
assert.equal(parsed.nodeNum, 99);
assert.ok(parsed.fallback);
const invalid = normalizeReference({ fallback: '{not json}' });
assert.equal(invalid.nodeId, null);
assert.equal(invalid.nodeNum, null);
assert.equal(invalid.fallback, null);
const strRef = normalizeReference('!direct');
assert.equal(strRef.nodeId, '!direct');
assert.equal(strRef.nodeNum, null);
const numRef = normalizeReference(57);
assert.equal(numRef.nodeId, null);
assert.equal(numRef.nodeNum, 57);
const emptyRef = normalizeReference(undefined);
assert.equal(emptyRef.nodeId, null);
assert.equal(emptyRef.nodeNum, null);
assert.equal(emptyRef.fallback, null);
} finally {
console.warn = originalWarn;
}
assert.ok(warnings.length >= 1);
});
test('parseFallback duplicates object references and rejects primitives', () => {
const fallbackObject = { nodeId: '!object' };
const parsedObject = parseFallback(fallbackObject);
assert.notEqual(parsedObject, fallbackObject);
assert.deepEqual(parsedObject, fallbackObject);
const parsedString = parseFallback('{"nodeId":"!string"}');
assert.ok(parsedString);
assert.equal(parsedString.nodeId, '!string');
assert.equal(parseFallback('not json'), null);
assert.equal(parseFallback(42), null);
});


@@ -0,0 +1,65 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { extractModemMetadata, formatLoraFrequencyMHz, formatModemDisplay, __testUtils } from '../node-modem-metadata.js';
describe('node-modem-metadata', () => {
it('extracts modem preset and frequency from mixed payloads', () => {
const payload = {
modem_preset: ' MediumFast ',
lora_freq: '915',
};
assert.deepEqual(extractModemMetadata(payload), { modemPreset: 'MediumFast', loraFreq: 915 });
});
it('falls back across naming conventions when extracting metadata', () => {
const payload = {
modemPreset: 'LongSlow',
frequency: 868,
};
assert.deepEqual(extractModemMetadata(payload), { modemPreset: 'LongSlow', loraFreq: 868 });
});
it('ignores invalid modem metadata entries', () => {
assert.deepEqual(extractModemMetadata({ modem_preset: ' ', lora_freq: 'NaN' }), {
modemPreset: null,
loraFreq: null,
});
});
it('formats positive frequencies with MHz suffix', () => {
assert.equal(formatLoraFrequencyMHz(915), '915MHz');
assert.equal(formatLoraFrequencyMHz(867.5), '867.5MHz');
assert.equal(formatLoraFrequencyMHz('433.1234'), '433.123MHz');
assert.equal(formatLoraFrequencyMHz(null), null);
});
it('combines preset and frequency for overlay display', () => {
assert.equal(formatModemDisplay('MediumFast', 868), 'MediumFast (868MHz)');
assert.equal(formatModemDisplay('ShortSlow', null), 'ShortSlow');
assert.equal(formatModemDisplay(null, 433), '433MHz');
assert.equal(formatModemDisplay(undefined, undefined), null);
});
it('exposes trimmed string helper for targeted assertions', () => {
const { toTrimmedString } = __testUtils;
assert.equal(toTrimmedString(' hello '), 'hello');
assert.equal(toTrimmedString(''), null);
assert.equal(toTrimmedString(null), null);
});
});


@@ -0,0 +1,397 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { createShortInfoOverlayStack } from '../short-info-overlay-manager.js';
/**
* Minimal DOM element implementation tailored for overlay manager tests.
*/
class StubElement {
/**
* @param {string} [tagName='div'] Element tag identifier.
*/
constructor(tagName = 'div') {
this.tagName = tagName.toUpperCase();
this.children = [];
this.parentNode = null;
this.style = {};
this.dataset = {};
this.className = '';
this.innerHTML = '';
this.attributes = new Map();
this.eventHandlers = new Map();
this._rect = { left: 0, top: 0, width: 120, height: 80 };
}
/**
* Append ``child`` to the element.
*
* @param {StubElement} child Child node to append.
* @returns {StubElement} Appended node.
*/
appendChild(child) {
this.children.push(child);
child.parentNode = this;
return child;
}
/**
* Remove ``child`` from the element.
*
* @param {StubElement} child Child node to remove.
* @returns {void}
*/
removeChild(child) {
const idx = this.children.indexOf(child);
if (idx >= 0) {
this.children.splice(idx, 1);
child.parentNode = null;
}
}
/**
* Remove the element from its parent tree.
*
* @returns {void}
*/
remove() {
if (this.parentNode) {
this.parentNode.removeChild(this);
}
}
/**
* Assign an attribute to the element.
*
* @param {string} name Attribute identifier.
* @param {string} value Stored attribute value.
* @returns {void}
*/
setAttribute(name, value) {
this.attributes.set(name, String(value));
if (name === 'class' || name === 'className') {
this.className = String(value);
}
}
/**
* Remove an attribute from the element.
*
* @param {string} name Attribute identifier.
* @returns {void}
*/
removeAttribute(name) {
this.attributes.delete(name);
if (name === 'class' || name === 'className') {
this.className = '';
}
}
/**
* Register an event handler for the element.
*
* @param {string} event Event identifier.
* @param {Function} handler Handler invoked during tests.
* @returns {void}
*/
addEventListener(event, handler) {
this.eventHandlers.set(event, handler);
}
/**
* Retrieve the first descendant matching a simple class selector.
*
* @param {string} selector CSS selector (class only).
* @returns {?StubElement} Matching element or ``null``.
*/
querySelector(selector) {
if (!selector || selector[0] !== '.') {
return null;
}
const className = selector.slice(1);
return this._findByClass(className);
}
/**
* Recursively search for an element with ``className``.
*
* @param {string} className Class identifier to match.
* @returns {?StubElement} Matching element or ``null``.
*/
_findByClass(className) {
const classes = (this.className || '').split(/\s+/).filter(Boolean);
if (classes.includes(className)) {
return this;
}
for (const child of this.children) {
const found = child._findByClass(className);
if (found) {
return found;
}
}
return null;
}
/**
* Determine whether ``candidate`` is a descendant of the element.
*
* @param {StubElement} candidate Potential child node.
* @returns {boolean} ``true`` when the node is contained within the element.
*/
contains(candidate) {
if (this === candidate) {
return true;
}
for (const child of this.children) {
if (child.contains(candidate)) {
return true;
}
}
return false;
}
/**
* Return the mock bounding rectangle for the element.
*
* @returns {{ left: number, top: number, width: number, height: number }} Mock bounding rectangle.
*/
getBoundingClientRect() {
return { ...this._rect };
}
/**
* Override the bounding rectangle used during positioning tests.
*
* @param {{ left?: number, top?: number, width?: number, height?: number }} rect Partial rectangle overrides.
* @returns {void}
*/
setBoundingRect(rect) {
this._rect = { ...this._rect, ...rect };
}
/**
* Create a deep clone of the element.
*
* @param {boolean} [deep=false] When ``true`` clone the children as well.
* @returns {StubElement} Cloned element instance.
*/
cloneNode(deep = false) {
const clone = new StubElement(this.tagName);
clone.className = this.className;
clone.style = { ...this.style };
clone.dataset = { ...this.dataset };
clone.innerHTML = this.innerHTML;
clone._rect = { ...this._rect };
clone.attributes = new Map(this.attributes);
if (deep) {
for (const child of this.children) {
clone.appendChild(child.cloneNode(true));
}
}
return clone;
}
/**
* Locate the nearest ancestor carrying ``selector``.
*
* @param {string} selector CSS selector (class only).
* @returns {?StubElement} Matching ancestor or ``null``.
*/
closest(selector) {
if (!selector || selector[0] !== '.') {
return null;
}
const className = selector.slice(1);
let current = this;
while (current) {
const classes = (current.className || '').split(/\s+/).filter(Boolean);
if (classes.includes(className)) {
return current;
}
current = current.parentNode;
}
return null;
}
}
/**
* Build a minimal DOM document stub for overlay manager tests.
*
* @returns {{ document: Document, window: Window, factory: Function, anchor: StubElement, body: StubElement }}
*/
function createStubDom() {
const body = new StubElement('body');
body.contains = body.contains.bind(body);
const listenerMap = new Map();
const document = {
body,
documentElement: { clientWidth: 640, clientHeight: 480 },
createElement(tagName) {
return new StubElement(tagName);
},
getElementById() {
return null;
},
addEventListener(event, handler) {
if (!listenerMap.has(event)) {
listenerMap.set(event, new Set());
}
listenerMap.get(event).add(handler);
},
removeEventListener(event, handler) {
if (!listenerMap.has(event)) {
return;
}
listenerMap.get(event).delete(handler);
},
_dispatch(event) {
if (!listenerMap.has(event)) {
return;
}
for (const handler of Array.from(listenerMap.get(event))) {
handler();
}
},
};
const window = {
scrollX: 10,
scrollY: 20,
innerWidth: 640,
innerHeight: 480,
requestAnimationFrame(callback) {
callback();
},
};
/**
 * Build the overlay DOM fragment consumed by the overlay stack.
 *
 * @returns {{ overlay: StubElement, closeButton: StubElement, content: StubElement }}
 */
function factory() {
const overlay = document.createElement('div');
overlay.className = 'short-info-overlay';
const closeButton = document.createElement('button');
closeButton.className = 'short-info-close';
const content = document.createElement('div');
content.className = 'short-info-content';
overlay.appendChild(closeButton);
overlay.appendChild(content);
return { overlay, closeButton, content };
}
const anchor = document.createElement('span');
anchor.setBoundingRect({ left: 40, top: 50, width: 16, height: 16 });
body.appendChild(anchor);
return { document, window, factory, anchor, body };
}
test('render opens overlays and positions them relative to anchors', () => {
const { document, window, factory, anchor, body } = createStubDom();
const stack = createShortInfoOverlayStack({ document, window, factory });
stack.render(anchor, '<strong>Node</strong>');
const open = stack.getOpenOverlays();
assert.equal(open.length, 1);
const overlay = open[0].element;
assert.equal(overlay.parentNode, body);
assert.equal(overlay.style.position, 'absolute');
const content = overlay.querySelector('.short-info-content');
assert.ok(content);
assert.equal(content.innerHTML, '<strong>Node</strong>');
assert.equal(overlay.style.left, '50px');
assert.equal(overlay.style.top, '70px');
});
test('request tokens track anchors independently', () => {
const { document, window, factory, anchor } = createStubDom();
const stack = createShortInfoOverlayStack({ document, window, factory });
const token1 = stack.incrementRequestToken(anchor);
const token2 = stack.incrementRequestToken(anchor);
assert.equal(token2, token1 + 1);
stack.render(anchor, 'Loading…');
assert.equal(stack.isTokenCurrent(anchor, token2), true);
stack.close(anchor);
assert.equal(stack.isTokenCurrent(anchor, token2), false);
});
test('overlays stack and close independently', () => {
const { document, window, factory, anchor, body } = createStubDom();
const stack = createShortInfoOverlayStack({ document, window, factory });
const secondAnchor = document.createElement('span');
secondAnchor.setBoundingRect({ left: 200, top: 120 });
body.appendChild(secondAnchor);
stack.render(anchor, 'First');
stack.render(secondAnchor, 'Second');
const open = stack.getOpenOverlays();
assert.equal(open.length, 2);
assert.equal(stack.isOpen(anchor), true);
assert.equal(stack.isOpen(secondAnchor), true);
stack.close(anchor);
assert.equal(stack.isOpen(anchor), false);
assert.equal(stack.isOpen(secondAnchor), true);
stack.closeAll();
assert.equal(stack.getOpenOverlays().length, 0);
});
test('cleanupOrphans removes overlays whose anchors disappear', () => {
const { document, window, factory, anchor } = createStubDom();
const stack = createShortInfoOverlayStack({ document, window, factory });
stack.render(anchor, 'Orphaned');
anchor.remove();
stack.cleanupOrphans();
assert.equal(stack.getOpenOverlays().length, 0);
});
test('containsNode recognises overlay descendants', () => {
const { document, window, factory, anchor } = createStubDom();
const stack = createShortInfoOverlayStack({ document, window, factory });
stack.render(anchor, 'Descendant');
const [entry] = stack.getOpenOverlays();
const content = entry.element.querySelector('.short-info-content');
assert.ok(stack.containsNode(content));
const stray = new StubElement('div');
assert.equal(stack.containsNode(stray), false);
});
test('overlays migrate into and out of fullscreen hosts', () => {
const { document, window, factory, anchor, body } = createStubDom();
const fullscreenRoot = document.createElement('div');
body.appendChild(fullscreenRoot);
const stack = createShortInfoOverlayStack({ document, window, factory });
stack.render(anchor, 'Fullscreen');
const [entry] = stack.getOpenOverlays();
assert.equal(entry.element.parentNode, body);
assert.equal(entry.element.style.position, 'absolute');
document.fullscreenElement = fullscreenRoot;
document._dispatch('fullscreenchange');
assert.equal(entry.element.parentNode, fullscreenRoot);
assert.equal(entry.element.style.position, 'fixed');
assert.equal(entry.element.style.left, '40px');
assert.equal(entry.element.style.top, '50px');
document.fullscreenElement = null;
document._dispatch('fullscreenchange');
assert.equal(entry.element.parentNode, body);
assert.equal(entry.element.style.position, 'absolute');
assert.equal(entry.element.style.left, '50px');
assert.equal(entry.element.style.top, '70px');
});
test('rendered overlays do not swallow click events by default', () => {
const { document, window, factory, anchor } = createStubDom();
const stack = createShortInfoOverlayStack({ document, window, factory });
stack.render(anchor, 'Event test');
const [entry] = stack.getOpenOverlays();
assert.ok(entry);
assert.equal(entry.element.eventHandlers.has('click'), false);
});


@@ -0,0 +1,162 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
TELEMETRY_FIELDS,
buildTelemetryDisplayEntries,
collectTelemetryMetrics,
} from '../short-info-telemetry.js';
test('collectTelemetryMetrics extracts values from nested payloads', () => {
const payload = {
battery: '100',
device_metrics: {
voltage: 4.224,
airUtilTx: 0.051,
uptimeSeconds: 305044,
},
environment_metrics: {
temperature: 21.98,
relativeHumidity: 39.5,
barometricPressure: 1017.8,
gasResistance: 1456,
iaq: 83,
distance: 12.5,
lux: 100.25,
whiteLux: 64.5,
irLux: 12.75,
uvLux: 1.6,
windDirection: 270,
windSpeed: 5.9,
windGust: 7.4,
windLull: 4.8,
weight: 32.7,
radiation: 0.45,
rainfall1h: 0.18,
rainfall24h: 1.42,
soilMoisture: 3100,
soilTemperature: 18.9,
},
};
const metrics = collectTelemetryMetrics(payload);
assert.equal(metrics.battery, 100);
assert.equal(metrics.voltage, 4.224);
assert.equal(metrics.airUtil, 0.051);
assert.equal(metrics.uptime, 305044);
assert.equal(metrics.temperature, 21.98);
assert.equal(metrics.humidity, 39.5);
assert.equal(metrics.pressure, 1017.8);
assert.equal(metrics.gasResistance, 1456);
assert.equal(metrics.iaq, 83);
assert.equal(metrics.distance, 12.5);
assert.equal(metrics.lux, 100.25);
assert.equal(metrics.whiteLux, 64.5);
assert.equal(metrics.irLux, 12.75);
assert.equal(metrics.uvLux, 1.6);
assert.equal(metrics.windDirection, 270);
assert.equal(metrics.windSpeed, 5.9);
assert.equal(metrics.windGust, 7.4);
assert.equal(metrics.windLull, 4.8);
assert.equal(metrics.weight, 32.7);
assert.equal(metrics.radiation, 0.45);
assert.equal(metrics.rainfall1h, 0.18);
assert.equal(metrics.rainfall24h, 1.42);
assert.equal(metrics.soilMoisture, 3100);
assert.equal(metrics.soilTemperature, 18.9);
});
test('collectTelemetryMetrics ignores non-numeric values', () => {
const metrics = collectTelemetryMetrics({
battery: '',
voltage: 'abc',
rainfall_1h: null,
wind_speed: undefined,
});
for (const field of TELEMETRY_FIELDS) {
assert.ok(!(field.key in metrics));
}
});
test('buildTelemetryDisplayEntries formats values for overlays', () => {
const telemetry = {
battery: 99,
voltage: 4.224,
current: 0.0715,
uptime: 305044,
channel: 0.5967,
airUtil: 0.03908,
temperature: 21.98,
humidity: 39.5,
pressure: 1017.8,
gasResistance: 1456,
iaq: 83,
distance: 12.5,
lux: 100.25,
whiteLux: 64.5,
irLux: 12.75,
uvLux: 1.6,
windDirection: 270,
windSpeed: 5.9,
windGust: 7.4,
windLull: 4.8,
weight: 32.7,
radiation: 0.45,
rainfall1h: 0.18,
rainfall24h: 1.42,
soilMoisture: 3100,
soilTemperature: 18.9,
};
const entries = buildTelemetryDisplayEntries(telemetry, {
formatUptime: value => `formatted-${value}`,
});
const entryMap = new Map(entries.map(entry => [entry.label, entry.value]));
assert.equal(entryMap.get('Battery'), '99%');
assert.equal(entryMap.get('Voltage'), '4.224V');
assert.equal(entryMap.get('Current'), '71.5 mA');
assert.equal(entryMap.get('Uptime'), 'formatted-305044');
assert.equal(entryMap.get('Channel Util'), '0.597%');
assert.equal(entryMap.get('Air Util Tx'), '0.039%');
assert.equal(entryMap.get('Temperature'), '22.0°C');
assert.equal(entryMap.get('Humidity'), '39.5%');
assert.equal(entryMap.get('Pressure'), '1017.8 hPa');
assert.equal(entryMap.get('Gas Resistance'), '1.46 kΩ');
assert.equal(entryMap.get('IAQ'), '83');
assert.equal(entryMap.get('Distance'), '12.50 m');
assert.equal(entryMap.get('Lux'), '100.3 lx');
assert.equal(entryMap.get('White Lux'), '64.5 lx');
assert.equal(entryMap.get('IR Lux'), '12.8 lx');
assert.equal(entryMap.get('UV Lux'), '1.6 lx');
assert.equal(entryMap.get('Wind Direction'), '270°');
assert.equal(entryMap.get('Wind Speed'), '5.9 m/s');
assert.equal(entryMap.get('Wind Gust'), '7.4 m/s');
assert.equal(entryMap.get('Wind Lull'), '4.8 m/s');
assert.equal(entryMap.get('Weight'), '32.70 kg');
assert.equal(entryMap.get('Radiation'), '0.45 µSv/h');
assert.equal(entryMap.get('Rainfall 1h'), '0.18 mm');
assert.equal(entryMap.get('Rainfall 24h'), '1.42 mm');
assert.equal(entryMap.get('Soil Moisture'), '3100');
assert.equal(entryMap.get('Soil Temperature'), '18.9°C');
});
test('buildTelemetryDisplayEntries omits empty metrics', () => {
const entries = buildTelemetryDisplayEntries({ uptime: null }, {
formatUptime: () => '',
});
assert.equal(entries.length, 0);
});


@@ -0,0 +1,216 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { readFile } from 'node:fs/promises';
import vm from 'node:vm';
import { createDomEnvironment } from './dom-environment.js';
const themeModuleUrl = new URL('../../theme.js', import.meta.url);
const backgroundModuleUrl = new URL('../../background.js', import.meta.url);
const themeSource = await readFile(themeModuleUrl, 'utf8');
const backgroundSource = await readFile(backgroundModuleUrl, 'utf8');
/**
* Evaluate a browser-oriented script within the provided DOM environment.
*
* @param {string} source Module source code to execute.
* @param {URL} url Identifier for the executed script.
* @param {ReturnType<typeof createDomEnvironment>} env Active DOM harness.
* @returns {void}
*/
function executeInDom(source, url, env) {
const context = vm.createContext({
console,
setTimeout,
clearTimeout,
setInterval,
clearInterval
});
context.window = env.window;
context.document = env.document;
context.global = context;
context.globalThis = context;
context.window.window = context.window;
context.window.document = context.document;
context.window.globalThis = context;
context.window.console = console;
vm.runInContext(source, context, { filename: url.pathname, displayErrors: true });
}
test('theme and background modules behave correctly across scenarios', async t => {
const env = createDomEnvironment({ readyState: 'complete', cookie: '' });
try {
const toggle = env.createElement('button', 'themeToggle');
env.registerElement('themeToggle', toggle);
let filterInvocations = 0;
env.window.applyFiltersToAllTiles = () => {
filterInvocations += 1;
};
executeInDom(themeSource, themeModuleUrl, env);
executeInDom(backgroundSource, backgroundModuleUrl, env);
const themeHelpers = env.window.__themeCookie;
const themeHooks = themeHelpers.__testHooks;
const backgroundHelpers = env.window.__potatoBackground;
const backgroundHooks = backgroundHelpers.__testHooks;
await t.test('initialises with a dark theme and persists cookies', () => {
assert.equal(env.document.documentElement.getAttribute('data-theme'), 'dark');
assert.equal(env.document.body.classList.contains('dark'), true);
assert.equal(toggle.textContent, '☀️');
themeHelpers.persistTheme('light');
themeHelpers.setCookie('bare', '1');
themeHooks.exerciseSetCookieGuard();
themeHelpers.setCookie('flag', 'true', { Secure: true });
const cookieString = env.getCookieString();
assert.equal(themeHelpers.getCookie('flag'), 'true');
assert.equal(themeHelpers.getCookie('missing'), null);
assert.match(cookieString, /theme=light/);
assert.match(cookieString, /; path=\//);
assert.match(cookieString, /; SameSite=Lax/);
assert.match(cookieString, /; Secure/);
});
await t.test('serializeCookieOptions covers boolean and string attributes', () => {
const withAttributes = themeHooks.serializeCookieOptions({ Secure: true, HttpOnly: '1' });
assert.equal(withAttributes.includes('; Secure'), true);
assert.equal(withAttributes.includes('; HttpOnly=1'), true);
const secureOnly = themeHooks.serializeCookieOptions({ Secure: true });
assert.equal(secureOnly.trim(), '; Secure');
assert.equal(themeHooks.formatCookieOption(['HttpOnly', '1']), '; HttpOnly=1');
assert.equal(themeHooks.formatCookieOption(['Secure', true]), '; Secure');
assert.equal(themeHooks.serializeCookieOptions({}), '');
assert.equal(themeHooks.serializeCookieOptions(), '');
});
await t.test('re-bootstrap handles DOMContentLoaded flow and filter hooks', () => {
env.document.readyState = 'loading';
filterInvocations = 0;
env.setCookieString('theme=light');
themeHooks.bootstrap();
env.triggerDOMContentLoaded();
assert.equal(env.document.documentElement.getAttribute('data-theme'), 'light');
assert.equal(env.document.body.classList.contains('dark'), false);
assert.equal(toggle.textContent, '🌙');
assert.equal(filterInvocations, 1);
env.document.removeEventListener('DOMContentLoaded', themeHooks.handleReady);
});
await t.test('handleReady tolerates missing toggle button', () => {
env.registerElement('themeToggle', null);
themeHooks.handleReady();
env.registerElement('themeToggle', toggle);
});
await t.test('applyTheme copes with absent DOM nodes', () => {
const originalBody = env.document.body;
const originalRoot = env.document.documentElement;
env.document.body = null;
env.document.documentElement = null;
assert.equal(themeHooks.applyTheme('dark'), true);
env.document.body = originalBody;
env.document.documentElement = originalRoot;
assert.equal(themeHooks.applyTheme('light'), false);
});
await t.test('background bootstrap waits for DOM readiness', () => {
env.setComputedStyleImplementation(() => ({ getPropertyValue: () => ' rgb(15, 15, 15) ' }));
env.document.readyState = 'loading';
const previousColor = env.document.documentElement.style.backgroundColor;
backgroundHooks.bootstrap();
assert.equal(env.document.documentElement.style.backgroundColor, previousColor);
env.triggerDOMContentLoaded();
assert.equal(env.document.documentElement.style.backgroundColor.trim(), 'rgb(15, 15, 15)');
});
await t.test('background falls back to theme defaults when styles unavailable', () => {
env.setComputedStyleImplementation(() => {
throw new Error('no styles');
});
env.document.body.classList.add('dark');
backgroundHelpers.applyBackground();
assert.equal(env.document.documentElement.style.backgroundColor, '#0e1418');
env.document.body.classList.remove('dark');
backgroundHelpers.applyBackground();
assert.equal(env.document.documentElement.style.backgroundColor, '#f6f3ee');
});
await t.test('background helper tolerates missing body elements', () => {
const originalBody = env.document.body;
env.document.body = null;
backgroundHelpers.applyBackground();
assert.equal(backgroundHelpers.resolveBackgroundColor(), null);
env.document.body = originalBody;
});
await t.test('theme changes trigger background updates', () => {
env.document.body.classList.remove('dark');
themeHooks.setTheme('light');
backgroundHooks.init();
env.dispatchWindowEvent('themechange');
assert.equal(env.document.documentElement.style.backgroundColor, '#f6f3ee');
});
env.window.removeEventListener('themechange', backgroundHelpers.applyBackground);
} finally {
env.cleanup();
}
});
test('dom environment helpers mimic expected DOM behaviour', () => {
const env = createDomEnvironment({ readyState: 'interactive', includeBody: false });
try {
const element = env.createElement('span');
element.classList.add('foo');
assert.equal(element.classList.contains('foo'), true);
assert.equal(element.classList.toggle('foo'), false);
assert.equal(element.classList.toggle('bar'), true);
assert.equal(element.getAttribute('id'), null);
element.setAttribute('data-test', 'ok');
assert.equal(element.getAttribute('data-test'), 'ok');
env.registerElement('sample', element);
assert.equal(env.document.getElementById('sample'), element);
assert.equal(env.document.querySelector('.missing'), null);
let docEventFired = false;
env.document.addEventListener('custom', () => {
docEventFired = true;
});
env.document.dispatchEvent('custom');
assert.equal(docEventFired, true);
env.document.removeEventListener('custom');
let winEventFired = false;
env.window.addEventListener('global', () => {
winEventFired = true;
});
env.window.dispatchEvent('global');
assert.equal(winEventFired, true);
env.window.removeEventListener('global');
env.setCookieString('');
env.document.cookie = 'foo=bar';
assert.equal(env.getCookieString(), 'foo=bar');
} finally {
env.cleanup();
}
});

@@ -0,0 +1,194 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Extract channel metadata from a message payload for chat display.
*
* @param {Object} message Raw message payload from the API.
* @returns {{ frequency: string|null, channelName: string|null }}
* Normalized metadata values.
*/
export function extractChatMessageMetadata(message) {
if (!message || typeof message !== 'object') {
return { frequency: null, channelName: null };
}
const frequency = normalizeFrequency(
firstNonNull(
message.region_frequency,
message.regionFrequency,
message.lora_freq,
message.loraFreq,
message.frequency
)
);
const channelName = normalizeString(
firstNonNull(message.channel_name, message.channelName)
);
return { frequency, channelName };
}
/**
* Produce the formatted prefix for a chat message entry.
*
* The timestamp and frequency are each wrapped in square brackets. A missing
* timestamp yields empty brackets, while a missing frequency is replaced by
* the configured placeholder, preserving the positional layout expected by
* operators.
*
* @param {{
* timestamp: string,
* frequency: string|null
* }} params Normalised and escaped display strings.
* @returns {string} Prefix string suitable for HTML insertion.
*/
export function formatChatMessagePrefix({ timestamp, frequency }) {
const ts = typeof timestamp === 'string' ? timestamp : '';
const freq = normalizeFrequencySlot(frequency);
return `[${ts}][${freq}]`;
}
/**
* Render the channel tag that follows the short name in a chat message entry.
*
* Empty channel names remain blank within the brackets, mirroring the original
* UI behaviour that reserves the slot without introducing placeholder text.
*
* @param {{ channelName: string|null }} params Normalised and escaped display strings.
* @returns {string} Channel tag suitable for HTML insertion.
*/
export function formatChatChannelTag({ channelName }) {
const channel = channelName == null ? '' : String(channelName);
return `[${channel}]`;
}
/**
* Create the formatted prefix for node announcements in the chat log.
*
* Both the timestamp and the optional frequency will be wrapped in brackets,
* mirroring the chat message display while omitting the channel indicator.
*
* @param {{ timestamp: string, frequency: string|null }} params Display strings.
* @returns {string} Prefix string suitable for HTML insertion.
*/
export function formatNodeAnnouncementPrefix({ timestamp, frequency }) {
const ts = typeof timestamp === 'string' ? timestamp : '';
const freq = normalizeFrequencySlot(frequency);
return `[${ts}][${freq}]`;
}
/**
* Produce a consistently formatted frequency slot for chat prefixes.
*
* A missing or empty frequency is rendered as three HTML non-breaking spaces to
* ensure the UI maintains its expected alignment while clearly indicating the
* absence of data.
*
* @param {*} value Frequency value that has already been escaped for HTML.
* @returns {string} Frequency slot suitable for prefix rendering.
*/
function normalizeFrequencySlot(value) {
if (value == null) {
return FREQUENCY_PLACEHOLDER;
}
if (typeof value === 'string') {
return value.length > 0 ? value : FREQUENCY_PLACEHOLDER;
}
const strValue = String(value);
return strValue.length > 0 ? strValue : FREQUENCY_PLACEHOLDER;
}
/**
* HTML entity sequence inserted when a frequency is unavailable.
* @type {string}
*/
const FREQUENCY_PLACEHOLDER = '&nbsp;&nbsp;&nbsp;';
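The placeholder behaviour is easiest to see in isolation. This standalone sketch copies the prefix logic from above so it can run on its own:

```javascript
// Standalone copy of the prefix logic above: a missing frequency renders as
// three non-breaking-space entities so the bracketed columns stay aligned.
const FREQUENCY_PLACEHOLDER = '&nbsp;&nbsp;&nbsp;';

function formatChatMessagePrefix({ timestamp, frequency }) {
  const ts = typeof timestamp === 'string' ? timestamp : '';
  const freq = frequency == null || String(frequency).length === 0
    ? FREQUENCY_PLACEHOLDER
    : String(frequency);
  return `[${ts}][${freq}]`;
}

console.log(formatChatMessagePrefix({ timestamp: '12:03:14', frequency: '868' }));
// [12:03:14][868]
console.log(formatChatMessagePrefix({ timestamp: '12:03:14', frequency: null }));
// [12:03:14][&nbsp;&nbsp;&nbsp;]
```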
/**
* Return the first value in ``candidates`` that is not ``null`` or ``undefined``.
*
* @param {...*} candidates Candidate values.
* @returns {*} First present value or ``null`` when missing.
*/
function firstNonNull(...candidates) {
for (const value of candidates) {
if (value !== null && value !== undefined) {
return value;
}
}
return null;
}
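A detail worth noting: only `null` and `undefined` count as missing, so falsy-but-valid values such as `0` pass through. A standalone copy demonstrates this:

```javascript
// firstNonNull (copied from above) skips only null/undefined; falsy values
// such as 0 or '' are returned as-is, which matters for numeric fields.
function firstNonNull(...candidates) {
  for (const value of candidates) {
    if (value !== null && value !== undefined) return value;
  }
  return null;
}

console.log(firstNonNull(undefined, null, 0, 'fallback')); // 0
console.log(firstNonNull(undefined, null));                // null
```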
/**
* Normalise potential channel name values to trimmed strings.
*
* @param {*} value Raw value.
* @returns {string|null} Sanitised channel name.
*/
function normalizeString(value) {
if (value == null) return null;
if (typeof value === 'string') {
const trimmed = value.trim();
return trimmed.length > 0 ? trimmed : null;
}
if (typeof value === 'number') {
if (!Number.isFinite(value)) return null;
return String(value);
}
return null;
}
/**
* Convert various frequency representations into clean strings.
*
* @param {*} value Raw frequency value.
* @returns {string|null} Frequency in MHz as a string, when available.
*/
function normalizeFrequency(value) {
if (value == null) return null;
if (typeof value === 'number') {
if (!Number.isFinite(value) || value <= 0) {
return null;
}
return Number.isInteger(value) ? String(value) : String(Number(value.toFixed(3)));
}
if (typeof value === 'string') {
const trimmed = value.trim();
if (!trimmed) return null;
const numericMatch = trimmed.match(/\d+(?:\.\d+)?/);
if (numericMatch) {
const parsed = Number(numericMatch[0]);
if (Number.isFinite(parsed) && parsed > 0) {
return String(parsed);
}
}
return trimmed;
}
return null;
}
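The coercion rules above can be checked with a standalone copy of the function; numbers are limited to three decimals and strings have their first numeric run extracted:

```javascript
// Standalone copy of normalizeFrequency for illustration.
function normalizeFrequency(value) {
  if (value == null) return null;
  if (typeof value === 'number') {
    if (!Number.isFinite(value) || value <= 0) return null;
    return Number.isInteger(value) ? String(value) : String(Number(value.toFixed(3)));
  }
  if (typeof value === 'string') {
    const trimmed = value.trim();
    if (!trimmed) return null;
    const numericMatch = trimmed.match(/\d+(?:\.\d+)?/);
    if (numericMatch) {
      const parsed = Number(numericMatch[0]);
      if (Number.isFinite(parsed) && parsed > 0) return String(parsed);
    }
    return trimmed;
  }
  return null;
}

console.log(normalizeFrequency(868.12345));     // 868.123
console.log(normalizeFrequency('868.875 MHz')); // 868.875
console.log(normalizeFrequency(-5));            // null
```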
export const __test__ = {
firstNonNull,
normalizeString,
normalizeFrequency,
formatChatMessagePrefix,
formatNodeAnnouncementPrefix,
normalizeFrequencySlot,
FREQUENCY_PLACEHOLDER,
formatChatChannelTag
};

@@ -0,0 +1,217 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Determine the most suitable label for an instance list entry.
*
* @param {{ name?: string, domain?: string }} entry Instance record as returned by the API.
* @returns {string} Preferred display label falling back to the domain.
*/
function resolveInstanceLabel(entry) {
if (!entry || typeof entry !== 'object') {
return '';
}
const name = typeof entry.name === 'string' ? entry.name.trim() : '';
if (name.length > 0) {
return name;
}
const domain = typeof entry.domain === 'string' ? entry.domain.trim() : '';
return domain;
}
/**
* Construct a navigable URL for the provided instance domain.
*
* @param {string} domain Instance domain as returned by the federation catalog.
* @returns {string|null} Navigable absolute URL or ``null`` when the domain is empty.
*/
export function buildInstanceUrl(domain) {
if (typeof domain !== 'string') {
return null;
}
const trimmed = domain.trim();
if (!trimmed) {
return null;
}
if (/^[a-zA-Z][a-zA-Z\d+.-]*:\/\//.test(trimmed)) {
return trimmed;
}
return `https://${trimmed}`;
}
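The scheme handling is the interesting part: bare domains gain an `https://` prefix, while values that already carry a scheme pass through unchanged. A standalone copy (the domains below are hypothetical examples):

```javascript
// Standalone copy of buildInstanceUrl from above.
function buildInstanceUrl(domain) {
  if (typeof domain !== 'string') return null;
  const trimmed = domain.trim();
  if (!trimmed) return null;
  // Anything that already starts with a URI scheme is used verbatim.
  if (/^[a-zA-Z][a-zA-Z\d+.-]*:\/\//.test(trimmed)) return trimmed;
  return `https://${trimmed}`;
}

console.log(buildInstanceUrl(' mesh.example.org ')); // https://mesh.example.org
console.log(buildInstanceUrl('http://local.test'));  // http://local.test
console.log(buildInstanceUrl(''));                   // null
```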
/**
* Populate and activate the federation instance selector control.
*
* @param {{
* selectElement: HTMLSelectElement | null,
* fetchImpl?: typeof fetch,
* windowObject?: Window,
* documentObject?: Document,
* instanceDomain?: string,
* defaultLabel?: string,
* navigate?: (url: string) => void,
* }} options Configuration for the selector behaviour.
* @returns {Promise<void>} Promise resolving once the selector has been initialised.
*/
export async function initializeInstanceSelector(options) {
const {
selectElement,
fetchImpl = typeof fetch === 'function' ? fetch : null,
windowObject = typeof window !== 'undefined' ? window : undefined,
documentObject = typeof document !== 'undefined' ? document : undefined,
instanceDomain,
defaultLabel = 'Select region ...',
navigate,
} = options;
if (!selectElement || typeof selectElement !== 'object') {
return;
}
const doc = documentObject || windowObject?.document || null;
if (selectElement.options.length === 0) {
const optionFactory =
(doc && typeof doc.createElement === 'function')
? doc.createElement.bind(doc)
: (typeof selectElement.ownerDocument?.createElement === 'function'
? selectElement.ownerDocument.createElement.bind(selectElement.ownerDocument)
: null);
if (optionFactory) {
const placeholderOption = optionFactory('option');
placeholderOption.value = '';
placeholderOption.textContent = defaultLabel;
selectElement.appendChild(placeholderOption);
}
} else if (selectElement.options[0]) {
selectElement.options[0].textContent = defaultLabel;
selectElement.options[0].value = '';
}
if (typeof fetchImpl !== 'function') {
return;
}
let response;
try {
response = await fetchImpl('/api/instances', {
headers: { Accept: 'application/json' },
credentials: 'omit',
});
} catch (error) {
console.warn('Failed to load federation instances', error);
return;
}
if (!response || typeof response.json !== 'function') {
return;
}
if (!response.ok) {
return;
}
let payload;
try {
payload = await response.json();
} catch (error) {
console.warn('Invalid federation instances payload', error);
return;
}
if (!Array.isArray(payload)) {
return;
}
const sanitizedDomain = typeof instanceDomain === 'string' ? instanceDomain.trim().toLowerCase() : null;
const sortedEntries = payload
.filter(entry => entry && typeof entry.domain === 'string' && entry.domain.trim() !== '')
.map(entry => ({
domain: entry.domain.trim(),
label: resolveInstanceLabel(entry),
}))
.sort((a, b) => {
const labelA = a.label || a.domain;
const labelB = b.label || b.domain;
return labelA.localeCompare(labelB, undefined, { sensitivity: 'base' });
});
while (selectElement.options.length > 1) {
selectElement.remove(1);
}
let matchedIndex = 0;
sortedEntries.forEach((entry, index) => {
if (!doc || typeof doc.createElement !== 'function') {
return;
}
const option = doc.createElement('option');
const optionLabel = entry.label && entry.label.trim().length > 0 ? entry.label : entry.domain;
const label = optionLabel.trim();
option.value = entry.domain;
option.textContent = label;
option.dataset.instanceDomain = entry.domain;
selectElement.appendChild(option);
if (sanitizedDomain && entry.domain.toLowerCase() === sanitizedDomain) {
matchedIndex = index + 1;
}
});
if (matchedIndex > 0 && selectElement.options[matchedIndex]) {
selectElement.selectedIndex = matchedIndex;
} else {
selectElement.selectedIndex = 0;
}
const navigateTo = typeof navigate === 'function'
? navigate
: url => {
if (!url || !windowObject || !windowObject.location) {
return;
}
if (typeof windowObject.location.assign === 'function') {
windowObject.location.assign(url);
} else {
windowObject.location.href = url;
}
};
selectElement.addEventListener('change', event => {
const target = event?.target;
if (!target || typeof target.value !== 'string' || target.value.trim() === '') {
return;
}
const url = buildInstanceUrl(target.value);
if (url) {
navigateTo(url);
}
});
}
export const __test__ = { resolveInstanceLabel };

File diff suppressed because it is too large
@@ -0,0 +1,178 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* @typedef {[number, number]} LatLngTuple
* @typedef {[LatLngTuple, LatLngTuple]} LatLngBoundsTuple
* @typedef {{ paddingPx: number, maxZoom?: number }} FitOptionsSnapshot
*/
/**
* Safely clone a Leaflet-compatible bounds tuple to avoid accidental mutation.
*
* @param {LatLngBoundsTuple} bounds - Bounds tuple to duplicate.
* @returns {LatLngBoundsTuple} Deep copy of the provided bounds.
*/
function cloneBounds(bounds) {
return [
[bounds[0][0], bounds[0][1]],
[bounds[1][0], bounds[1][1]]
];
}
/**
* Determine whether the provided structure resembles a Leaflet bounds tuple.
*
* @param {unknown} value - Potential bounds input.
* @returns {value is LatLngBoundsTuple} True when the input is structurally valid.
*/
function isValidBounds(value) {
if (!Array.isArray(value) || value.length !== 2) return false;
const [southWest, northEast] = value;
if (!Array.isArray(southWest) || !Array.isArray(northEast)) return false;
if (southWest.length !== 2 || northEast.length !== 2) return false;
const numbers = [southWest[0], southWest[1], northEast[0], northEast[1]];
return numbers.every(number => Number.isFinite(number));
}
/**
* Create a controller for coordinating map auto-fit behaviour.
*
* @param {object} options - Controller configuration options.
* @param {HTMLInputElement|null} [options.toggleEl] - Checkbox controlling auto-fit.
* @param {Window|undefined} [options.windowObject] - Browser window instance.
* @param {number} [options.defaultPaddingPx=32] - Padding fallback when none supplied.
* @returns {{
* attachResizeListener(callback: (snapshot: { bounds: LatLngBoundsTuple, options: FitOptionsSnapshot } | null) => void): () => void,
* getLastFit(): { bounds: LatLngBoundsTuple, options: FitOptionsSnapshot } | null,
* handleUserInteraction(): boolean,
* isAutoFitEnabled(): boolean,
* recordFit(bounds: LatLngBoundsTuple, options?: { paddingPx?: number, maxZoom?: number }): void,
* runAutoFitOperation(fn: () => unknown): unknown
* }} Map auto-fit controller instance.
*/
export function createMapAutoFitController({
toggleEl = null,
windowObject = typeof window !== 'undefined' ? window : undefined,
defaultPaddingPx = 32
} = {}) {
/** @type {LatLngBoundsTuple|null} */
let lastBounds = null;
/** @type {FitOptionsSnapshot} */
let lastOptions = { paddingPx: defaultPaddingPx };
let autoFitInProgress = false;
/**
* Record the most recent set of bounds used for auto-fitting.
*
* @param {LatLngBoundsTuple} bounds - Leaflet bounds tuple.
* @param {{ paddingPx?: number, maxZoom?: number }} [options] - Fit options to persist.
* @returns {void}
*/
function recordFit(bounds, options = {}) {
if (!isValidBounds(bounds)) return;
const paddingPx = Number.isFinite(options.paddingPx) && options.paddingPx >= 0 ? options.paddingPx : defaultPaddingPx;
const maxZoom = Number.isFinite(options.maxZoom) && options.maxZoom > 0 ? options.maxZoom : undefined;
lastBounds = cloneBounds(bounds);
lastOptions = { paddingPx };
if (maxZoom !== undefined) {
lastOptions.maxZoom = maxZoom;
} else {
delete lastOptions.maxZoom;
}
}
/**
* Return a snapshot of the most recently recorded fit bounds.
*
* @returns {{ bounds: LatLngBoundsTuple, options: FitOptionsSnapshot } | null} Snapshot or ``null`` when unavailable.
*/
function getLastFit() {
if (!lastBounds) return null;
return { bounds: cloneBounds(lastBounds), options: { ...lastOptions } };
}
/**
* Test whether auto-fit is currently enabled by the user.
*
* @returns {boolean} True when the toggle exists and is checked.
*/
function isAutoFitEnabled() {
return Boolean(toggleEl && toggleEl.checked);
}
/**
* Execute a callback while marking auto-fit as in-progress.
*
* @template T
* @param {() => T} fn - Operation to run while suppressing interaction side-effects.
* @returns {T | undefined} Result of ``fn`` when provided.
*/
function runAutoFitOperation(fn) {
if (typeof fn !== 'function') return undefined;
autoFitInProgress = true;
try {
return fn();
} finally {
autoFitInProgress = false;
}
}
/**
* Disable auto-fit in response to manual user interactions with the map.
*
* @returns {boolean} True when the toggle was modified.
*/
function handleUserInteraction() {
if (!toggleEl || !toggleEl.checked || autoFitInProgress) {
return false;
}
toggleEl.checked = false;
const event = new Event('change', { bubbles: true });
toggleEl.dispatchEvent(event);
return true;
}
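The interplay between `runAutoFitOperation` and `handleUserInteraction` is the core contract: programmatic fits must not disable the toggle, while genuine user gestures do. A simplified sketch (the `createController` helper here is an illustrative reimplementation, not the module's export):

```javascript
// Minimal model of the suppression contract: interactions that happen inside
// a run-wrapped operation are ignored; real user gestures untick the toggle.
function createController(toggle) {
  let inProgress = false;
  return {
    run(fn) {
      inProgress = true;
      try { return fn(); } finally { inProgress = false; }
    },
    onUserInteraction() {
      if (!toggle.checked || inProgress) return false;
      toggle.checked = false;
      return true;
    },
  };
}

const toggle = { checked: true };
const ctl = createController(toggle);
console.log(ctl.run(() => ctl.onUserInteraction())); // false (suppressed)
console.log(ctl.onUserInteraction());                // true  (now disabled)
console.log(toggle.checked);                         // false
```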
/**
* Attach resize listeners that notify the consumer when a refit may be required.
*
* @param {(snapshot: { bounds: LatLngBoundsTuple, options: FitOptionsSnapshot } | null) => void} callback - Resize handler.
* @returns {() => void} Function that removes the registered listeners.
*/
function attachResizeListener(callback) {
if (!windowObject || typeof windowObject.addEventListener !== 'function' || typeof callback !== 'function') {
return () => {};
}
const handler = () => {
callback(getLastFit());
};
windowObject.addEventListener('resize', handler, { passive: true });
windowObject.addEventListener('orientationchange', handler, { passive: true });
return () => {
windowObject.removeEventListener('resize', handler);
windowObject.removeEventListener('orientationchange', handler);
};
}
return {
attachResizeListener,
getLastFit,
handleUserInteraction,
isAutoFitEnabled,
recordFit,
runAutoFitOperation
};
}

@@ -0,0 +1,255 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
const EARTH_RADIUS_KM = 6371;
const RAD_TO_DEG = 180 / Math.PI;
const DEFAULT_MIN_RANGE_KM = 0.5;
const POLE_LONGITUDE_SPAN_DEGREES = 180;
const COS_EPSILON = 1e-6;
/**
* Clamp a latitude value to the valid WGS84 range.
*
* @param {number} latitude Latitude in degrees.
* @returns {number} Latitude clamped to [-90, 90].
*/
function clampLatitude(latitude) {
if (!Number.isFinite(latitude)) {
// ±Infinity collapses to the nearest pole; NaN compares false and maps to 90.
return latitude < 0 ? -90 : 90;
}
return Math.max(-90, Math.min(90, latitude));
}
/**
* Clamp a longitude value to the valid WGS84 range.
*
* @param {number} longitude Longitude in degrees.
* @returns {number} Longitude clamped to [-180, 180].
*/
function clampLongitude(longitude) {
if (!Number.isFinite(longitude)) {
return longitude < 0 ? -180 : 180;
}
if (longitude < -180) return -180;
if (longitude > 180) return 180;
return longitude;
}
/**
* Normalise a longitude so it remains close to a reference meridian.
*
* @param {number} longitude Longitude in degrees to normalise.
* @param {number} referenceMeridian Reference longitude in degrees.
* @returns {number} Longitude adjusted by multiples of 360° so the
* difference from ``referenceMeridian`` lies within ``[-180, 180)``.
*/
function normaliseLongitudeAround(longitude, referenceMeridian) {
if (!Number.isFinite(longitude) || !Number.isFinite(referenceMeridian)) {
return longitude;
}
const delta = ((longitude - referenceMeridian + 540) % 360) - 180;
return referenceMeridian + delta;
}
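The `((x + 540) % 360) - 180` trick keeps the delta within `[-180, 180)`, so a point just east of the antimeridian normalises to a value just past it rather than jumping 358° away. A standalone copy of the core arithmetic:

```javascript
// Copy of the normalisation arithmetic above (finite inputs assumed).
function normaliseLongitudeAround(longitude, referenceMeridian) {
  const delta = ((longitude - referenceMeridian + 540) % 360) - 180;
  return referenceMeridian + delta;
}

console.log(normaliseLongitudeAround(179, -179)); // -181 (stays adjacent)
console.log(normaliseLongitudeAround(10, 10));    // 10
```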
/**
* Convert degrees to radians.
*
* @param {number} degrees Angle in degrees.
* @returns {number} Angle in radians.
*/
export function toRadians(degrees) {
return (degrees * Math.PI) / 180;
}
/**
* Compute the great-circle distance between two coordinates using the
* haversine formula.
*
* @param {number} lat1 Latitude of the first point in degrees.
* @param {number} lon1 Longitude of the first point in degrees.
* @param {number} lat2 Latitude of the second point in degrees.
* @param {number} lon2 Longitude of the second point in degrees.
* @returns {number} Distance in kilometres.
*/
export function haversineDistanceKm(lat1, lon1, lat2, lon2) {
const dLat = toRadians(lat2 - lat1);
const dLon = toRadians(lon2 - lon1);
const sinLat = Math.sin(dLat / 2);
const sinLon = Math.sin(dLon / 2);
const a = sinLat * sinLat + Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2)) * sinLon * sinLon;
const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
return EARTH_RADIUS_KM * c;
}
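A quick numerical sanity check of the haversine helper: one degree of longitude along the equator spans roughly 111.19 km for R = 6371 km.

```javascript
// Standalone copy of the haversine computation above.
const EARTH_RADIUS_KM = 6371;
const toRadians = degrees => (degrees * Math.PI) / 180;

function haversineDistanceKm(lat1, lon1, lat2, lon2) {
  const dLat = toRadians(lat2 - lat1);
  const dLon = toRadians(lon2 - lon1);
  const sinLat = Math.sin(dLat / 2);
  const sinLon = Math.sin(dLon / 2);
  const a = sinLat * sinLat + Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2)) * sinLon * sinLon;
  return EARTH_RADIUS_KM * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

console.log(haversineDistanceKm(0, 0, 0, 1).toFixed(2)); // 111.19
```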
/**
* Normalise range inputs to a safe, positive value.
*
* @param {number} rangeKm Requested range in kilometres.
* @param {number} minimumRangeKm Minimum permitted range in kilometres.
* @returns {number} Normalised range in kilometres.
*/
function normaliseRange(rangeKm, minimumRangeKm) {
const minRange = Number.isFinite(minimumRangeKm) && minimumRangeKm > 0 ? minimumRangeKm : DEFAULT_MIN_RANGE_KM;
if (!Number.isFinite(rangeKm) || rangeKm <= 0) {
return minRange;
}
return Math.max(rangeKm, minRange);
}
/**
* Compute a geographic bounding box for a circular range centred on a point.
*
* The resulting bounds are suitable for use with Leaflet ``fitBounds`` and
* similar APIs that accept a ``[[south, west], [north, east]]`` tuple.
*
* @param {{lat: number, lon: number}} center Map centre coordinate.
* @param {number} rangeKm Desired radius from the centre in kilometres.
* @param {{ minimumRangeKm?: number }} [options] Optional configuration.
* @returns {[[number, number], [number, number]] | null} Bounding box tuple or
* ``null`` when the inputs are invalid.
*/
export function computeBoundingBox(center, rangeKm, options = {}) {
if (!center || !Number.isFinite(center.lat) || !Number.isFinite(center.lon)) {
return null;
}
const minRange = Number.isFinite(options.minimumRangeKm) && options.minimumRangeKm > 0
? options.minimumRangeKm
: DEFAULT_MIN_RANGE_KM;
const radiusKm = normaliseRange(rangeKm, minRange);
const angularDistance = radiusKm / EARTH_RADIUS_KM;
const latDelta = angularDistance * RAD_TO_DEG;
const minLat = clampLatitude(center.lat - latDelta);
const maxLat = clampLatitude(center.lat + latDelta);
const cosLat = Math.cos(toRadians(center.lat));
let lonDelta;
if (Math.abs(cosLat) < COS_EPSILON) {
lonDelta = POLE_LONGITUDE_SPAN_DEGREES;
} else {
lonDelta = Math.min(POLE_LONGITUDE_SPAN_DEGREES, (angularDistance * RAD_TO_DEG) / Math.max(Math.abs(cosLat), COS_EPSILON));
}
if (!Number.isFinite(lonDelta) || lonDelta >= POLE_LONGITUDE_SPAN_DEGREES) {
return [[minLat, -POLE_LONGITUDE_SPAN_DEGREES], [maxLat, POLE_LONGITUDE_SPAN_DEGREES]];
}
const minLon = clampLongitude(center.lon - lonDelta);
const maxLon = clampLongitude(center.lon + lonDelta);
return [[minLat, minLon], [maxLat, maxLon]];
}
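The latitude half-span used above is the plain arc-length relation `delta = (r / R) * (180 / PI)`. A quick check (the `latDeltaDegrees` helper name is illustrative, not part of the module):

```javascript
// A 10 km radius on a 6371 km sphere spans roughly 0.09 degrees of latitude.
const EARTH_RADIUS_KM = 6371;
const latDeltaDegrees = rangeKm => (rangeKm / EARTH_RADIUS_KM) * (180 / Math.PI);

console.log(latDeltaDegrees(10).toFixed(4)); // 0.0899
```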
/**
* Determine a bounding box that encloses the provided coordinates with a
* configurable safety margin.
*
* @param {Array<[number, number]>} points Collection of ``[lat, lon]`` pairs.
* @param {{
* paddingFraction?: number,
* minimumRangeKm?: number
* }} [options] Optional configuration controlling the computed bounds.
* @returns {[[number, number], [number, number]] | null} Bounding box tuple or
*   ``null`` when the input list is empty or invalid. Longitudes may extend
*   beyond the canonical ``[-180, 180]`` range when the bounds must span the
*   antimeridian.
*/
export function computeBoundsForPoints(points, options = {}) {
if (!Array.isArray(points) || !points.length) {
return null;
}
const validPoints = points.filter(point => Array.isArray(point) && Number.isFinite(point[0]) && Number.isFinite(point[1]));
if (!validPoints.length) {
return null;
}
let xSum = 0;
let ySum = 0;
let zSum = 0;
let latSum = 0;
let lonSum = 0;
for (const [lat, lon] of validPoints) {
const latRad = toRadians(lat);
const lonRad = toRadians(lon);
const cosLat = Math.cos(latRad);
xSum += cosLat * Math.cos(lonRad);
ySum += cosLat * Math.sin(lonRad);
zSum += Math.sin(latRad);
latSum += lat;
lonSum += lon;
}
const vectorMagnitude = Math.sqrt(xSum * xSum + ySum * ySum + zSum * zSum);
let centre;
if (vectorMagnitude > COS_EPSILON) {
const lat = Math.atan2(zSum, Math.sqrt(xSum * xSum + ySum * ySum)) * RAD_TO_DEG;
const lon = Math.atan2(ySum, xSum) * RAD_TO_DEG;
centre = { lat, lon };
} else {
centre = {
lat: latSum / validPoints.length,
lon: lonSum / validPoints.length
};
}
let maxDistanceKm = 0;
for (const [lat, lon] of validPoints) {
const distance = haversineDistanceKm(centre.lat, centre.lon, lat, lon);
if (distance > maxDistanceKm) {
maxDistanceKm = distance;
}
}
const paddingFraction = Number.isFinite(options.paddingFraction) && options.paddingFraction >= 0
? options.paddingFraction
: 0.15;
const minimumRangeKm = Number.isFinite(options.minimumRangeKm) && options.minimumRangeKm > 0
? options.minimumRangeKm
: DEFAULT_MIN_RANGE_KM;
const paddedRangeKm = Math.max(minimumRangeKm, maxDistanceKm * (1 + paddingFraction));
const angularDistance = paddedRangeKm / EARTH_RADIUS_KM;
const latDelta = angularDistance * RAD_TO_DEG;
const minLat = clampLatitude(centre.lat - latDelta);
const maxLat = clampLatitude(centre.lat + latDelta);
const cosLat = Math.cos(toRadians(centre.lat));
const maxProjectedLonDelta = Math.min(
POLE_LONGITUDE_SPAN_DEGREES,
Math.abs(cosLat) < COS_EPSILON
? POLE_LONGITUDE_SPAN_DEGREES
: (angularDistance * RAD_TO_DEG) / Math.max(Math.abs(cosLat), COS_EPSILON)
);
const normalisedLongitudes = validPoints.map(point => normaliseLongitudeAround(point[1], centre.lon));
let west = Math.min(...normalisedLongitudes, centre.lon - maxProjectedLonDelta);
let east = Math.max(...normalisedLongitudes, centre.lon + maxProjectedLonDelta);
if (!Number.isFinite(west) || !Number.isFinite(east)) {
west = centre.lon - maxProjectedLonDelta;
east = centre.lon + maxProjectedLonDelta;
}
if (east - west >= 360) {
west = -POLE_LONGITUDE_SPAN_DEGREES;
east = POLE_LONGITUDE_SPAN_DEGREES;
}
return [[minLat, west], [maxLat, east]];
}
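The centre used above is the normalised 3-D vector mean of the points, which behaves sensibly for clusters straddling the antimeridian where a naive longitude average would land on the wrong side of the globe. A sketch of just that step (the `vectorMeanCentre` helper is an illustrative extraction, not the module's export):

```javascript
// 3-D vector mean of [lat, lon] pairs, as used inside computeBoundsForPoints.
function vectorMeanCentre(points) {
  const toRad = d => (d * Math.PI) / 180;
  const deg = 180 / Math.PI;
  let x = 0, y = 0, z = 0;
  for (const [lat, lon] of points) {
    const la = toRad(lat), lo = toRad(lon);
    x += Math.cos(la) * Math.cos(lo);
    y += Math.cos(la) * Math.sin(lo);
    z += Math.sin(la);
  }
  return {
    lat: Math.atan2(z, Math.hypot(x, y)) * deg,
    lon: Math.atan2(y, x) * deg,
  };
}

// Two equatorial points either side of the antimeridian centre near ±180°,
// not 0° as a naive average of 179 and -179 would suggest.
console.log(vectorMeanCentre([[0, 0], [0, 90]]).lon.toFixed(1)); // 45.0
```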
export const __testUtils = {
clampLatitude,
clampLongitude,
normaliseRange,
normaliseLongitudeAround
};

@@ -0,0 +1,281 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Determine whether the provided value behaves like a plain object.
*
* @param {*} value Candidate value.
* @returns {boolean} True when ``value`` is a non-null object.
*/
function isObject(value) {
return value != null && typeof value === 'object';
}
/**
* Convert a value to a trimmed string when possible.
*
* @param {*} value Input value.
* @returns {string|null} Trimmed string or ``null`` when blank.
*/
function toTrimmedString(value) {
if (value == null) return null;
const str = String(value).trim();
return str.length === 0 ? null : str;
}
/**
* Attempt to coerce the provided value into a finite number.
*
* @param {*} value Raw value.
* @returns {number|null} Finite number or ``null`` when coercion fails.
*/
function toFiniteNumber(value) {
if (typeof value === 'number') {
return Number.isFinite(value) ? value : null;
}
if (value == null || value === '') return null;
const num = Number(value);
return Number.isFinite(num) ? num : null;
}
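The deliberate part of this coercion is that `''` and `null` map to `null` rather than `0`, so blank telemetry fields never masquerade as real zero readings. A standalone copy:

```javascript
// toFiniteNumber copied from above: blank inputs become null, not 0.
function toFiniteNumber(value) {
  if (typeof value === 'number') {
    return Number.isFinite(value) ? value : null;
  }
  if (value == null || value === '') return null;
  const num = Number(value);
  return Number.isFinite(num) ? num : null;
}

console.log(toFiniteNumber(''));    // null
console.log(toFiniteNumber('3.7')); // 3.7
console.log(toFiniteNumber(0));     // 0
console.log(toFiniteNumber(NaN));   // null
```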
/**
* Normalise a neighbour entry so that downstream consumers can display it.
*
* @param {*} entry Raw neighbour entry.
* @returns {Object|null} Normalised neighbour reference or ``null`` when invalid.
*/
function normaliseNeighbor(entry) {
if (!isObject(entry)) return null;
const neighborId = toTrimmedString(entry.neighbor_id ?? entry.neighborId ?? entry.nodeId ?? entry.node_id);
if (!neighborId) return null;
const neighborShort = toTrimmedString(entry.neighbor_short_name ?? entry.neighborShortName ?? entry.short_name ?? entry.shortName);
const neighborLong = toTrimmedString(entry.neighbor_long_name ?? entry.neighborLongName ?? entry.long_name ?? entry.longName);
const neighborRole = toTrimmedString(entry.neighbor_role ?? entry.neighborRole ?? entry.role) || 'CLIENT';
const node = {
node_id: neighborId,
short_name: neighborShort ?? '',
long_name: neighborLong ?? '',
role: neighborRole,
};
const snr = toFiniteNumber(entry.snr);
const rxTime = toFiniteNumber(entry.rx_time ?? entry.rxTime);
const result = { node };
if (snr != null) {
result.snr = snr;
}
if (rxTime != null) {
result.rxTime = rxTime;
result.rx_time = rxTime;
}
return result;
}
/**
* Convert overlay node details into a map friendly payload.
*
* @param {*} source Raw overlay details.
* @returns {Object} Map node payload containing snake_case keys.
*/
export function overlayToPopupNode(source) {
if (!isObject(source)) {
return {
node_id: '',
node_num: null,
short_name: '',
long_name: '',
role: 'CLIENT',
neighbors: [],
};
}
const nodeId = toTrimmedString(source.nodeId ?? source.node_id ?? source.id) ?? '';
const nodeNum = toFiniteNumber(source.nodeNum ?? source.node_num ?? source.num);
const role = toTrimmedString(source.role) || 'CLIENT';
const neighbours = Array.isArray(source.neighbors)
? source.neighbors.map(normaliseNeighbor).filter(Boolean)
: [];
const payload = {
node_id: nodeId,
node_num: nodeNum,
short_name: toTrimmedString(source.shortName ?? source.short_name ?? source.name) ?? '',
long_name: toTrimmedString(source.longName ?? source.long_name ?? source.fullName) ?? '',
role,
hw_model: toTrimmedString(source.hwModel ?? source.hw_model ?? source.hardware) ?? '',
battery_level: toFiniteNumber(source.battery ?? source.battery_level),
voltage: toFiniteNumber(source.voltage),
uptime_seconds: toFiniteNumber(source.uptime ?? source.uptime_seconds),
channel_utilization: toFiniteNumber(source.channel ?? source.channel_utilization),
air_util_tx: toFiniteNumber(source.airUtil ?? source.air_util_tx),
temperature: toFiniteNumber(source.temperature),
relative_humidity: toFiniteNumber(source.humidity ?? source.relative_humidity),
barometric_pressure: toFiniteNumber(source.pressure ?? source.barometric_pressure),
telemetry_time: toFiniteNumber(source.telemetryTime ?? source.telemetry_time),
last_heard: toFiniteNumber(source.lastHeard ?? source.last_heard),
position_time: toFiniteNumber(source.positionTime ?? source.position_time),
latitude: toFiniteNumber(source.latitude),
longitude: toFiniteNumber(source.longitude),
altitude: toFiniteNumber(source.altitude),
neighbors: neighbours,
};
if (!payload.long_name && payload.short_name) {
payload.long_name = payload.short_name;
}
return payload;
}
/**
* Attach an asynchronous refresh handler to a Leaflet marker so that
* up-to-date node information is fetched whenever the marker is clicked.
*
* @param {Object} options Behaviour configuration.
* @param {Object} options.marker Leaflet marker instance supporting ``on``.
* @param {Function} options.getOverlayFallback Returns the fallback overlay payload.
* @param {Function} options.refreshNodeInformation Async function fetching node details.
* @param {Function} options.mergeOverlayDetails Merge function combining fetched and fallback details.
* @param {Function} options.createRequestToken Generates a token for cancellation tracking.
* Receives the marker anchor element and the fallback overlay payload.
* @param {Function} options.isTokenCurrent Tests whether a request token is still current.
* Receives the marker anchor element and the candidate token value.
* @param {Function} [options.showLoading] Callback invoked before refreshing.
* @param {Function} [options.showDetails] Callback invoked with merged overlay details.
* @param {Function} [options.showError] Callback invoked when refreshing fails.
* @param {Function} [options.updatePopup] Callback updating the marker popup contents.
* @param {Function} [options.shouldHandleClick] Predicate that decides whether the click should trigger a refresh.
* @returns {void}
*/
export function attachNodeInfoRefreshToMarker({
marker,
getOverlayFallback,
refreshNodeInformation,
mergeOverlayDetails,
createRequestToken,
isTokenCurrent,
showLoading,
showDetails,
showError,
updatePopup,
shouldHandleClick,
}) {
if (!isObject(marker) || typeof marker.on !== 'function') {
throw new TypeError('A Leaflet marker with an on() method is required');
}
if (typeof refreshNodeInformation !== 'function') {
throw new TypeError('A refreshNodeInformation function must be provided');
}
if (typeof mergeOverlayDetails !== 'function') {
throw new TypeError('A mergeOverlayDetails function must be provided');
}
if (typeof createRequestToken !== 'function' || typeof isTokenCurrent !== 'function') {
throw new TypeError('Token management callbacks must be provided');
}
marker.on('click', event => {
if (event && event.originalEvent) {
const original = event.originalEvent;
if (typeof original.preventDefault === 'function') {
original.preventDefault();
}
if (typeof original.stopPropagation === 'function') {
original.stopPropagation();
}
}
const fallbackOverlay = typeof getOverlayFallback === 'function' ? getOverlayFallback() : null;
const anchor = typeof marker.getElement === 'function' ? marker.getElement() : null;
if (!isObject(fallbackOverlay)) {
if (anchor && typeof showDetails === 'function') {
showDetails(anchor, {});
}
return;
}
if (typeof shouldHandleClick === 'function' && !shouldHandleClick(anchor, fallbackOverlay)) {
return;
}
if (typeof updatePopup === 'function') {
updatePopup(fallbackOverlay);
}
const nodeId = toTrimmedString(fallbackOverlay.nodeId ?? fallbackOverlay.node_id ?? fallbackOverlay.id);
const nodeNum = toFiniteNumber(fallbackOverlay.nodeNum ?? fallbackOverlay.node_num ?? fallbackOverlay.num);
if (!nodeId && nodeNum == null) {
if (anchor && typeof showDetails === 'function') {
showDetails(anchor, fallbackOverlay);
}
return;
}
const requestToken = createRequestToken(anchor, fallbackOverlay);
if (anchor && typeof showLoading === 'function') {
showLoading(anchor, fallbackOverlay);
}
const reference = { fallback: fallbackOverlay };
if (nodeId) reference.nodeId = nodeId;
if (nodeNum != null) reference.nodeNum = nodeNum;
let refreshPromise;
try {
refreshPromise = Promise.resolve(refreshNodeInformation(reference));
} catch (error) {
if (isTokenCurrent(anchor, requestToken)) {
if (anchor && typeof showError === 'function') {
showError(anchor, fallbackOverlay, error);
}
}
return;
}
refreshPromise
.then(details => {
if (!isTokenCurrent(anchor, requestToken)) {
return;
}
const merged = mergeOverlayDetails(details, fallbackOverlay);
if (typeof updatePopup === 'function') {
updatePopup(merged);
}
if (anchor && typeof showDetails === 'function') {
showDetails(anchor, merged);
}
})
.catch(error => {
if (!isTokenCurrent(anchor, requestToken)) {
return;
}
if (typeof updatePopup === 'function') {
updatePopup(fallbackOverlay);
}
if (anchor && typeof showError === 'function') {
showError(anchor, fallbackOverlay, error);
}
});
});
}
export const __testUtils = {
isObject,
toTrimmedString,
toFiniteNumber,
normaliseNeighbor,
};
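The coercion helpers this module builds on are small enough to exercise standalone. The copies below mirror `toFiniteNumber` above (and the `toTrimmedString` shape used alongside it) and show the edge cases they normalise:

```javascript
// Standalone copies of the coercion helpers used by overlayToPopupNode and
// normaliseNeighbor; blank strings and non-finite numbers collapse to null.
function toTrimmedString(value) {
  if (value == null) return null;
  const str = String(value).trim();
  return str.length === 0 ? null : str;
}

function toFiniteNumber(value) {
  if (typeof value === 'number') {
    return Number.isFinite(value) ? value : null;
  }
  if (value == null || value === '') return null;
  const num = Number(value);
  return Number.isFinite(num) ? num : null;
}

console.log(toFiniteNumber('42.5')); // 42.5, numeric strings coerce
console.log(toFiniteNumber(NaN));    // null, non-finite numbers are rejected
console.log(toTrimmedString('  !abcd1234  ')); // '!abcd1234'
console.log(toTrimmedString('   ')); // null, whitespace-only is treated as blank
```

This is why downstream consumers can rely on popup payload fields being either usable values or `null`/`''`, never `NaN` or padded strings.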


@@ -0,0 +1,150 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Build a hydrator capable of attaching node metadata to chat messages.
*
* @param {{
* fetchNodeById: (nodeId: string) => Promise<object|null>,
* applyNodeFallback: (node: object) => void,
* logger?: { warn?: (message?: any, ...optionalParams: any[]) => void }
* }} options Factory configuration.
* @returns {{
* hydrate: (messages: Array<object>|null|undefined, nodesById: Map<string, object>) => Promise<Array<object>>
* }} Hydrator API.
*/
export function createMessageNodeHydrator({ fetchNodeById, applyNodeFallback, logger = console }) {
if (typeof fetchNodeById !== 'function') {
throw new TypeError('fetchNodeById must be a function');
}
if (typeof applyNodeFallback !== 'function') {
throw new TypeError('applyNodeFallback must be a function');
}
/** @type {Map<string, Promise<object|null>>} */
const inflightLookups = new Map();
/**
* Normalise potential node identifiers into canonical strings.
*
* @param {*} value Raw node identifier value.
* @returns {string} Trimmed identifier or empty string when invalid.
*/
function normalizeNodeId(value) {
if (value == null) return '';
const source = typeof value === 'string' ? value : String(value);
const trimmed = source.trim();
return trimmed.length > 0 ? trimmed : '';
}
/**
* Resolve the node metadata for the provided identifier.
*
* @param {string} nodeId Canonical node identifier.
* @param {Map<string, object>} nodesById Existing node cache.
* @returns {Promise<object|null>} Resolved node or null when unavailable.
*/
async function resolveNode(nodeId, nodesById) {
const id = normalizeNodeId(nodeId);
if (!id) return null;
if (nodesById instanceof Map && nodesById.has(id)) {
return nodesById.get(id);
}
if (inflightLookups.has(id)) {
return inflightLookups.get(id);
}
const promise = Promise.resolve()
.then(() => fetchNodeById(id))
.then(node => {
if (node && typeof node === 'object') {
applyNodeFallback(node);
if (nodesById instanceof Map) {
nodesById.set(id, node);
}
return node;
}
return null;
})
.catch(error => {
if (logger && typeof logger.warn === 'function') {
logger.warn('message node lookup failed', { nodeId: id, error });
}
return null;
})
.finally(() => {
inflightLookups.delete(id);
});
inflightLookups.set(id, promise);
return promise;
}
/**
* Attach node information to the provided message collection.
*
* @param {Array<object>|null|undefined} messages Message payloads from the API.
* @param {Map<string, object>} nodesById Lookup table of known nodes.
* @returns {Promise<Array<object>>} Hydrated message entries.
*/
async function hydrate(messages, nodesById) {
if (!Array.isArray(messages) || messages.length === 0) {
return Array.isArray(messages) ? messages : [];
}
const tasks = [];
for (const message of messages) {
if (!message || typeof message !== 'object') {
continue;
}
const explicitId = normalizeNodeId(message.node_id ?? message.nodeId ?? '');
const fallbackId = normalizeNodeId(message.from_id ?? message.fromId ?? '');
const targetId = explicitId || fallbackId;
if (!targetId) {
message.node = null;
continue;
}
message.node_id = targetId;
const existing = nodesById instanceof Map ? nodesById.get(targetId) : null;
if (existing) {
message.node = existing;
continue;
}
const task = resolveNode(targetId, nodesById).then(node => {
if (node) {
message.node = node;
} else {
const placeholder = { node_id: targetId };
applyNodeFallback(placeholder);
message.node = placeholder;
}
});
tasks.push(task);
}
if (tasks.length > 0) {
await Promise.all(tasks);
}
return messages;
}
return { hydrate };
}
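The `inflightLookups` map above is what keeps concurrent messages from the same node from issuing duplicate fetches: all callers share one pending promise per identifier. The pattern in isolation (`createDeduper` is a name introduced here for this sketch, not part of the module):

```javascript
// Deduplicate concurrent async lookups for the same key, the same pattern
// createMessageNodeHydrator applies with its inflightLookups map.
function createDeduper(fetcher) {
  const inflight = new Map();
  return function lookup(id) {
    if (inflight.has(id)) return inflight.get(id);
    const promise = Promise.resolve()
      .then(() => fetcher(id))
      .finally(() => inflight.delete(id)); // allow later refreshes once settled
    inflight.set(id, promise);
    return promise;
  };
}

// Two overlapping lookups for the same id share one underlying fetch.
let calls = 0;
const lookup = createDeduper(async id => {
  calls += 1;
  return { node_id: id };
});
Promise.all([lookup('!abcd'), lookup('!abcd')]).then(([a, b]) => {
  console.log(calls === 1 && a === b); // true
});
```

The `.finally` cleanup matters: once a lookup settles, the cache entry is dropped so a later message can trigger a fresh fetch instead of reusing a stale result forever.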


@@ -0,0 +1,445 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { extractModemMetadata } from './node-modem-metadata.js';
const DEFAULT_FETCH_OPTIONS = Object.freeze({ cache: 'no-store' });
const TELEMETRY_LIMIT = 1;
const POSITION_LIMIT = 1;
const NEIGHBOR_LIMIT = 1000;
/**
* Determine whether the supplied value behaves like a plain object.
*
* @param {*} value Candidate value.
 * @returns {boolean} True when ``value`` is a non-null object.
*/
function isObject(value) {
return value != null && typeof value === 'object';
}
/**
* Convert a candidate value into a trimmed string representation.
*
* @param {*} value Raw value from an API payload.
* @returns {string|null} Trimmed string or ``null`` when blank.
*/
function toTrimmedString(value) {
if (value == null) return null;
const str = String(value).trim();
return str.length === 0 ? null : str;
}
/**
* Coerce a candidate value to a finite number when possible.
*
* @param {*} value Raw value from an API payload.
* @returns {number|null} Finite number or ``null`` when coercion fails.
*/
function toFiniteNumber(value) {
if (typeof value === 'number') {
return Number.isFinite(value) ? value : null;
}
if (value == null || value === '') return null;
const num = Number(value);
return Number.isFinite(num) ? num : null;
}
/**
* Extract the first non-empty string associated with one of the provided keys.
*
* @param {Object} record Source record inspected for values.
* @param {Array<string>} keys Candidate property names.
* @returns {string|null} First non-empty string or ``null``.
*/
function extractString(record, keys) {
if (!isObject(record)) return null;
for (const key of keys) {
if (!Object.prototype.hasOwnProperty.call(record, key)) continue;
const value = toTrimmedString(record[key]);
if (value != null) return value;
}
return null;
}
/**
* Extract the first finite number associated with the provided keys.
*
* @param {Object} record Source record inspected for values.
* @param {Array<string>} keys Candidate property names.
* @returns {number|null} First finite number or ``null``.
*/
function extractNumber(record, keys) {
if (!isObject(record)) return null;
for (const key of keys) {
if (!Object.prototype.hasOwnProperty.call(record, key)) continue;
const value = toFiniteNumber(record[key]);
if (value != null) return value;
}
return null;
}
/**
* Assign a string property when the supplied value is present.
*
* @param {Object} target Destination object mutated with the value.
* @param {string} key Property name to assign.
* @param {*} value Raw value to assign.
* @param {Object} [options] Behaviour modifiers.
* @param {boolean} [options.preferExisting=false] When true, only assign when the target lacks a value.
* @returns {void}
*/
function assignString(target, key, value, { preferExisting = false } = {}) {
const stringValue = toTrimmedString(value);
if (stringValue == null) return;
if (preferExisting) {
const existing = toTrimmedString(target[key]);
if (existing != null) return;
}
target[key] = stringValue;
}
/**
* Assign a numeric property when the supplied value parses successfully.
*
* @param {Object} target Destination object mutated with the value.
* @param {string} key Property name to assign.
* @param {*} value Raw value to assign.
* @param {Object} [options] Behaviour modifiers.
* @param {boolean} [options.preferExisting=false] When true, only assign when the target lacks a value.
* @returns {void}
*/
function assignNumber(target, key, value, { preferExisting = false } = {}) {
const numericValue = toFiniteNumber(value);
if (numericValue == null) return;
if (preferExisting) {
const existing = toFiniteNumber(target[key]);
if (existing != null) return;
}
target[key] = numericValue;
}
/**
* Merge modem preset and frequency metadata into the aggregate node object.
*
* @param {Object} target Mutable aggregate node reference.
* @param {*} source Source record inspected for modem attributes.
* @param {{ preferExisting?: boolean }} [options] Behaviour modifiers.
* @returns {void}
*/
function mergeModemMetadata(target, source, { preferExisting = false } = {}) {
if (!isObject(target)) return;
if (!source || typeof source !== 'object') return;
const metadata = extractModemMetadata(source);
if (metadata.modemPreset) {
if (!preferExisting || toTrimmedString(target.modemPreset) == null) {
target.modemPreset = metadata.modemPreset;
}
}
if (metadata.loraFreq != null) {
if (!preferExisting || toFiniteNumber(target.loraFreq) == null) {
target.loraFreq = metadata.loraFreq;
}
}
}
/**
* Merge base node fields from an arbitrary record into the aggregate node object.
*
* @param {Object} target Mutable aggregate node reference.
* @param {Object} record Source record providing base attributes.
* @returns {void}
*/
function mergeNodeFields(target, record) {
if (!isObject(record)) return;
assignString(target, 'nodeId', extractString(record, ['nodeId', 'node_id']));
assignNumber(target, 'nodeNum', extractNumber(record, ['nodeNum', 'node_num', 'num']));
assignString(target, 'shortName', extractString(record, ['shortName', 'short_name']));
assignString(target, 'longName', extractString(record, ['longName', 'long_name']));
assignString(target, 'role', extractString(record, ['role']));
assignString(target, 'hwModel', extractString(record, ['hwModel', 'hw_model']));
mergeModemMetadata(target, record);
assignNumber(target, 'snr', extractNumber(record, ['snr']));
assignNumber(target, 'battery', extractNumber(record, ['battery', 'battery_level', 'batteryLevel']));
assignNumber(target, 'voltage', extractNumber(record, ['voltage']));
assignNumber(target, 'uptime', extractNumber(record, ['uptime', 'uptime_seconds', 'uptimeSeconds']));
assignNumber(target, 'channel', extractNumber(record, ['channel', 'channel_utilization', 'channelUtilization']));
assignNumber(target, 'airUtil', extractNumber(record, ['airUtil', 'air_util_tx', 'airUtilTx']));
assignNumber(target, 'temperature', extractNumber(record, ['temperature']));
assignNumber(target, 'humidity', extractNumber(record, ['humidity', 'relative_humidity', 'relativeHumidity']));
assignNumber(target, 'pressure', extractNumber(record, ['pressure', 'barometric_pressure', 'barometricPressure']));
assignNumber(target, 'lastHeard', extractNumber(record, ['lastHeard', 'last_heard']));
assignString(target, 'lastSeenIso', extractString(record, ['lastSeenIso', 'last_seen_iso']));
assignNumber(target, 'positionTime', extractNumber(record, ['position_time', 'positionTime']));
assignString(target, 'positionTimeIso', extractString(record, ['position_time_iso', 'positionTimeIso']));
assignNumber(target, 'telemetryTime', extractNumber(record, ['telemetry_time', 'telemetryTime']));
assignNumber(target, 'latitude', extractNumber(record, ['latitude']));
assignNumber(target, 'longitude', extractNumber(record, ['longitude']));
assignNumber(target, 'altitude', extractNumber(record, ['altitude']));
}
/**
* Merge telemetry metrics into the aggregate node object when missing.
*
* @param {Object} target Mutable aggregate node reference.
* @param {Object} telemetry Telemetry record returned by the API.
* @returns {void}
*/
function mergeTelemetry(target, telemetry) {
if (!isObject(telemetry)) return;
target.telemetry = telemetry;
assignString(target, 'nodeId', extractString(telemetry, ['node_id', 'nodeId']), { preferExisting: true });
assignNumber(target, 'nodeNum', extractNumber(telemetry, ['node_num', 'nodeNum']), { preferExisting: true });
mergeModemMetadata(target, telemetry, { preferExisting: true });
assignNumber(target, 'battery', extractNumber(telemetry, ['battery_level', 'batteryLevel']), { preferExisting: true });
assignNumber(target, 'voltage', extractNumber(telemetry, ['voltage']), { preferExisting: true });
assignNumber(target, 'uptime', extractNumber(telemetry, ['uptime_seconds', 'uptimeSeconds']), { preferExisting: true });
assignNumber(target, 'channel', extractNumber(telemetry, ['channel', 'channel_utilization', 'channelUtilization']), { preferExisting: true });
assignNumber(target, 'airUtil', extractNumber(telemetry, ['air_util_tx', 'airUtilTx', 'airUtil']), { preferExisting: true });
assignNumber(target, 'temperature', extractNumber(telemetry, ['temperature']), { preferExisting: true });
assignNumber(target, 'humidity', extractNumber(telemetry, ['relative_humidity', 'relativeHumidity', 'humidity']), { preferExisting: true });
assignNumber(target, 'pressure', extractNumber(telemetry, ['barometric_pressure', 'barometricPressure', 'pressure']), { preferExisting: true });
const telemetryTime = extractNumber(telemetry, ['telemetry_time', 'telemetryTime']);
if (telemetryTime != null) {
const existingTelemetryTime = toFiniteNumber(target.telemetryTime);
if (existingTelemetryTime == null || telemetryTime > existingTelemetryTime) {
target.telemetryTime = telemetryTime;
}
}
const rxTime = extractNumber(telemetry, ['rx_time', 'rxTime']);
if (rxTime != null) {
const existingLastHeard = toFiniteNumber(target.lastHeard);
if (existingLastHeard == null || rxTime > existingLastHeard) {
target.lastHeard = rxTime;
assignString(target, 'lastSeenIso', extractString(telemetry, ['rx_iso', 'rxIso']));
} else {
assignString(target, 'lastSeenIso', extractString(telemetry, ['rx_iso', 'rxIso']), { preferExisting: true });
}
}
}
/**
* Merge position data into the aggregate node object when missing.
*
* @param {Object} target Mutable aggregate node reference.
* @param {Object} position Position record returned by the API.
* @returns {void}
*/
function mergePosition(target, position) {
if (!isObject(position)) return;
target.position = position;
assignString(target, 'nodeId', extractString(position, ['node_id', 'nodeId']), { preferExisting: true });
assignNumber(target, 'nodeNum', extractNumber(position, ['node_num', 'nodeNum']), { preferExisting: true });
assignNumber(target, 'latitude', extractNumber(position, ['latitude']), { preferExisting: true });
assignNumber(target, 'longitude', extractNumber(position, ['longitude']), { preferExisting: true });
assignNumber(target, 'altitude', extractNumber(position, ['altitude']), { preferExisting: true });
const positionTime = extractNumber(position, ['position_time', 'positionTime']);
if (positionTime != null) {
const existingPositionTime = toFiniteNumber(target.positionTime);
if (existingPositionTime == null || positionTime > existingPositionTime) {
target.positionTime = positionTime;
assignString(target, 'positionTimeIso', extractString(position, ['position_time_iso', 'positionTimeIso']));
} else {
assignString(target, 'positionTimeIso', extractString(position, ['position_time_iso', 'positionTimeIso']), { preferExisting: true });
}
}
const rxTime = extractNumber(position, ['rx_time', 'rxTime']);
if (rxTime != null) {
const existingLastHeard = toFiniteNumber(target.lastHeard);
if (existingLastHeard == null || rxTime > existingLastHeard) {
target.lastHeard = rxTime;
assignString(target, 'lastSeenIso', extractString(position, ['rx_iso', 'rxIso']));
} else {
assignString(target, 'lastSeenIso', extractString(position, ['rx_iso', 'rxIso']), { preferExisting: true });
}
}
}
/**
* Safely parse a fallback payload used as an initial node reference.
*
* @param {*} fallback User-provided fallback data.
* @returns {Object|null} Parsed fallback object or ``null``.
*/
function parseFallback(fallback) {
if (isObject(fallback)) return { ...fallback };
if (typeof fallback === 'string') {
try {
const parsed = JSON.parse(fallback);
return isObject(parsed) ? parsed : null;
} catch (error) {
console.warn('Failed to parse node fallback payload', error);
return null;
}
}
return null;
}
/**
* Normalise a node reference into a canonical structure used by the fetcher.
*
* @param {*} reference Raw reference passed to {@link refreshNodeInformation}.
* @returns {{nodeId: (string|null), nodeNum: (number|null), fallback: (Object|null)}} Normalised reference data.
*/
function normalizeReference(reference) {
if (reference == null) {
return { nodeId: null, nodeNum: null, fallback: null };
}
if (typeof reference === 'string') {
return { nodeId: toTrimmedString(reference), nodeNum: null, fallback: null };
}
if (typeof reference === 'number') {
const nodeNum = toFiniteNumber(reference);
return { nodeId: null, nodeNum, fallback: null };
}
if (!isObject(reference)) {
return { nodeId: null, nodeNum: null, fallback: null };
}
const fallback = parseFallback(reference.fallback ?? reference.nodeInfo ?? null);
let nodeId = toTrimmedString(reference.nodeId ?? reference.node_id ?? null);
if (nodeId == null) {
nodeId = toTrimmedString(fallback?.nodeId ?? fallback?.node_id ?? null);
}
let nodeNum = reference.nodeNum ?? reference.node_num ?? null;
if (nodeNum == null) {
nodeNum = fallback?.nodeNum ?? fallback?.node_num ?? null;
}
nodeNum = toFiniteNumber(nodeNum);
return { nodeId, nodeNum, fallback };
}
/**
* Retrieve and merge node, telemetry, position, and neighbor information.
*
* @param {*} reference Node identifier string/number or an object containing ``nodeId``/``nodeNum``.
* @param {{fetchImpl?: Function}} [options] Optional overrides such as a custom ``fetch`` implementation.
* @returns {Promise<Object>} Normalised node payload enriched with telemetry, position, and neighbor data.
*/
export async function refreshNodeInformation(reference, options = {}) {
const normalized = normalizeReference(reference);
const fetchImpl = typeof options.fetchImpl === 'function' ? options.fetchImpl : globalThis.fetch;
if (typeof fetchImpl !== 'function') {
throw new TypeError('A fetch implementation is required to refresh node information');
}
const identifier = normalized.nodeId ?? normalized.nodeNum;
if (identifier == null) {
throw new Error('A node identifier or numeric reference must be provided');
}
const encodedId = encodeURIComponent(String(identifier));
const [nodeRecord, telemetryRecords, positionRecords, neighborRecords] = await Promise.all([
(async () => {
const response = await fetchImpl(`/api/nodes/${encodedId}`, DEFAULT_FETCH_OPTIONS);
if (response.status === 404) return null;
if (!response.ok) {
throw new Error(`Failed to load node information (HTTP ${response.status})`);
}
return response.json();
})(),
(async () => {
const response = await fetchImpl(`/api/telemetry/${encodedId}?limit=${TELEMETRY_LIMIT}`, DEFAULT_FETCH_OPTIONS);
if (response.status === 404) return [];
if (!response.ok) {
throw new Error(`Failed to load telemetry information (HTTP ${response.status})`);
}
return response.json();
})(),
(async () => {
const response = await fetchImpl(`/api/positions/${encodedId}?limit=${POSITION_LIMIT}`, DEFAULT_FETCH_OPTIONS);
if (response.status === 404) return [];
if (!response.ok) {
throw new Error(`Failed to load position information (HTTP ${response.status})`);
}
return response.json();
})(),
(async () => {
const response = await fetchImpl(`/api/neighbors/${encodedId}?limit=${NEIGHBOR_LIMIT}`, DEFAULT_FETCH_OPTIONS);
if (response.status === 404) return [];
if (!response.ok) {
throw new Error(`Failed to load neighbor information (HTTP ${response.status})`);
}
return response.json();
})(),
]);
const telemetryEntry = Array.isArray(telemetryRecords) ? telemetryRecords[0] ?? null : telemetryRecords ?? null;
const positionEntry = Array.isArray(positionRecords) ? positionRecords[0] ?? null : positionRecords ?? null;
const neighborEntries = Array.isArray(neighborRecords) ? neighborRecords.filter(isObject) : [];
const node = { neighbors: neighborEntries };
if (normalized.fallback) {
mergeNodeFields(node, normalized.fallback);
}
if (nodeRecord) {
mergeNodeFields(node, nodeRecord);
}
if (normalized.nodeId && !node.nodeId) {
node.nodeId = normalized.nodeId;
}
if (normalized.nodeNum != null && toFiniteNumber(node.nodeNum) == null) {
node.nodeNum = normalized.nodeNum;
}
mergeTelemetry(node, telemetryEntry);
mergePosition(node, positionEntry);
const derivedLastHeardValues = [
toFiniteNumber(node.lastHeard),
toFiniteNumber(node.telemetryTime),
toFiniteNumber(node.positionTime),
].filter(value => value != null);
if (derivedLastHeardValues.length > 0) {
node.lastHeard = Math.max(...derivedLastHeardValues);
}
if (!node.role) {
node.role = 'CLIENT';
}
node.rawSources = {
node: nodeRecord,
telemetry: telemetryEntry,
position: positionEntry,
neighbors: neighborEntries,
};
return node;
}
export const __testUtils = {
toTrimmedString,
toFiniteNumber,
extractString,
extractNumber,
assignString,
assignNumber,
mergeModemMetadata,
mergeNodeFields,
mergeTelemetry,
mergePosition,
parseFallback,
normalizeReference,
};
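`refreshNodeInformation` issues its four endpoint requests in parallel and treats HTTP 404 as "no data yet" rather than a failure. That per-endpoint tolerance reduces to a small helper; `fetchOrEmpty` and the stub below are names introduced for illustration, not part of the module:

```javascript
// Fetch JSON from one endpoint, mapping HTTP 404 to a caller-supplied
// "empty" value, as refreshNodeInformation does per endpoint.
async function fetchOrEmpty(fetchImpl, url, emptyValue) {
  const response = await fetchImpl(url, { cache: 'no-store' });
  if (response.status === 404) return emptyValue;
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${url}`);
  }
  return response.json();
}

// A stub fetch standing in for the real API: the node record exists,
// but no telemetry has been ingested yet.
const stubFetch = async url => {
  if (url.startsWith('/api/nodes/')) {
    return { ok: true, status: 200, json: async () => ({ node_id: '!abcd1234' }) };
  }
  return { ok: false, status: 404, json: async () => null };
};

Promise.all([
  fetchOrEmpty(stubFetch, '/api/nodes/!abcd1234', null),
  fetchOrEmpty(stubFetch, '/api/telemetry/!abcd1234?limit=1', []),
]).then(([node, telemetry]) => {
  console.log(node.node_id);     // '!abcd1234'
  console.log(telemetry.length); // 0, the 404 became the empty list
});
```

Injecting `fetchImpl` the same way the real function accepts `options.fetchImpl` keeps the merge logic testable without network access.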


@@ -0,0 +1,95 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Convert arbitrary input into a trimmed string representation.
*
* @param {*} value Candidate value.
* @returns {string|null} Trimmed string or ``null`` when empty.
*/
function toTrimmedString(value) {
if (value == null) return null;
const stringValue = String(value).trim();
return stringValue.length > 0 ? stringValue : null;
}
/**
* Normalize modem-related metadata from a node-shaped record.
*
* @param {*} source Arbitrary payload that may contain modem attributes.
* @returns {{ modemPreset: (string|null), loraFreq: (number|null) }} Normalized modem metadata.
*/
export function extractModemMetadata(source) {
if (!source || typeof source !== 'object') {
return { modemPreset: null, loraFreq: null };
}
const presetCandidate =
source.modemPreset ?? source.modem_preset ?? source.modempreset ?? source.ModemPreset ?? null;
const modemPreset = toTrimmedString(presetCandidate);
const freqCandidate = source.loraFreq ?? source.lora_freq ?? source.frequency ?? null;
const parsedFreq = Number(freqCandidate);
const loraFreq = Number.isFinite(parsedFreq) && parsedFreq > 0 ? parsedFreq : null;
return { modemPreset, loraFreq };
}
/**
* Format a numeric LoRa frequency in MHz with up to three fractional digits.
*
* @param {*} value Numeric frequency in MHz.
* @returns {string|null} Formatted frequency with units or ``null`` when invalid.
*/
export function formatLoraFrequencyMHz(value) {
const numeric = typeof value === 'number' ? value : Number(value);
if (!Number.isFinite(numeric) || numeric <= 0) {
return null;
}
const formatter = new Intl.NumberFormat('en-US', {
minimumFractionDigits: 0,
maximumFractionDigits: 3,
});
return `${formatter.format(numeric)}MHz`;
}
/**
* Produce a combined modem preset and frequency description suitable for overlays.
*
* @param {*} preset Raw modem preset value.
* @param {*} frequency Raw frequency value expressed in MHz.
 * @returns {string|null} Human-readable description or ``null`` when no data is available.
*/
export function formatModemDisplay(preset, frequency) {
const presetText = toTrimmedString(preset);
const freqText = formatLoraFrequencyMHz(frequency);
if (!presetText && !freqText) {
return null;
}
if (presetText && freqText) {
return `${presetText} (${freqText})`;
}
return presetText ?? freqText;
}
export const __testUtils = {
toTrimmedString,
};
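Standalone copies of the formatters above (same bodies, duplicated here only so the snippet runs on its own) illustrate the combined display string:

```javascript
// Copies of the modem formatters for standalone use.
function toTrimmedString(value) {
  if (value == null) return null;
  const stringValue = String(value).trim();
  return stringValue.length > 0 ? stringValue : null;
}

function formatLoraFrequencyMHz(value) {
  const numeric = typeof value === 'number' ? value : Number(value);
  if (!Number.isFinite(numeric) || numeric <= 0) return null;
  const formatter = new Intl.NumberFormat('en-US', {
    minimumFractionDigits: 0,
    maximumFractionDigits: 3,
  });
  return `${formatter.format(numeric)}MHz`;
}

function formatModemDisplay(preset, frequency) {
  const presetText = toTrimmedString(preset);
  const freqText = formatLoraFrequencyMHz(frequency);
  if (!presetText && !freqText) return null;
  if (presetText && freqText) return `${presetText} (${freqText})`;
  return presetText ?? freqText;
}

console.log(formatModemDisplay('LongFast', 868.875)); // 'LongFast (868.875MHz)'
console.log(formatModemDisplay(null, '915'));         // '915MHz'
console.log(formatModemDisplay('', NaN));             // null
```

Either half of the pair can be missing; overlays only fall back to `null` when neither the preset nor a positive frequency is present.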


@@ -1,3 +1,17 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 /**
 * Default configuration values applied when the server omits a field.
 *
@@ -5,10 +19,12 @@
 * refreshMs: number,
 * refreshIntervalSeconds: number,
 * chatEnabled: boolean,
- * defaultChannel: string,
- * defaultFrequency: string,
+ * channel: string,
+ * frequency: string,
+ * contactLink: string,
+ * contactLinkUrl: string | null,
 * mapCenter: { lat: number, lon: number },
- * maxNodeDistanceKm: number,
+ * maxDistanceKm: number,
 * tileFilters: { light: string, dark: string }
 * }}
 */
@@ -16,10 +32,12 @@ export const DEFAULT_CONFIG = {
 refreshMs: 60_000,
 refreshIntervalSeconds: 60,
 chatEnabled: true,
-defaultChannel: '#MediumFast',
-defaultFrequency: '868MHz',
-mapCenter: { lat: 52.502889, lon: 13.404194 },
-maxNodeDistanceKm: 137,
+channel: '#LongFast',
+frequency: '915MHz',
+contactLink: '#potatomesh:dod.ngo',
+contactLinkUrl: 'https://matrix.to/#/#potatomesh:dod.ngo',
+mapCenter: { lat: 38.761944, lon: -27.090833 },
+maxDistanceKm: 42,
 tileFilters: {
 light: 'grayscale(1) saturate(0) brightness(0.92) contrast(1.05)',
 dark: 'grayscale(1) invert(1) brightness(0.9) contrast(1.08)'
@@ -51,11 +69,13 @@ export function mergeConfig(raw) {
 const refreshMs = Number(raw?.refreshMs ?? config.refreshIntervalSeconds * 1000);
 config.refreshMs = Number.isFinite(refreshMs) ? refreshMs : DEFAULT_CONFIG.refreshMs;
 config.chatEnabled = Boolean(raw?.chatEnabled ?? DEFAULT_CONFIG.chatEnabled);
-config.defaultChannel = raw?.defaultChannel || DEFAULT_CONFIG.defaultChannel;
-config.defaultFrequency = raw?.defaultFrequency || DEFAULT_CONFIG.defaultFrequency;
-const maxDistance = Number(raw?.maxNodeDistanceKm ?? DEFAULT_CONFIG.maxNodeDistanceKm);
-config.maxNodeDistanceKm = Number.isFinite(maxDistance)
+config.channel = raw?.channel || DEFAULT_CONFIG.channel;
+config.frequency = raw?.frequency || DEFAULT_CONFIG.frequency;
+config.contactLink = raw?.contactLink || DEFAULT_CONFIG.contactLink;
+config.contactLinkUrl = raw?.contactLinkUrl ?? DEFAULT_CONFIG.contactLinkUrl;
+const maxDistance = Number(raw?.maxDistanceKm ?? DEFAULT_CONFIG.maxDistanceKm);
+config.maxDistanceKm = Number.isFinite(maxDistance)
 ? maxDistance
-: DEFAULT_CONFIG.maxNodeDistanceKm;
+: DEFAULT_CONFIG.maxDistanceKm;
 return config;
 }

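The `mergeConfig` hunk above coerces numeric fields with `Number(...)` and keeps the built-in default whenever parsing yields a non-finite result. A minimal standalone sketch of that fallback pattern (the two-field `DEFAULTS` object here is illustrative, not the module's full default set):

```javascript
// Numeric-fallback pattern used by mergeConfig: coerce the raw value to a
// Number and keep the default when the result is not finite (NaN, Infinity,
// or a non-numeric string).
const DEFAULTS = { refreshMs: 60_000, maxDistanceKm: 42 };

function mergeNumeric(raw, key) {
  const parsed = Number(raw?.[key] ?? DEFAULTS[key]);
  return Number.isFinite(parsed) ? parsed : DEFAULTS[key];
}

// Valid overrides win; garbage falls back to the default.
mergeNumeric({ maxDistanceKm: '120' }, 'maxDistanceKm'); // 120
mergeNumeric({ maxDistanceKm: 'far' }, 'maxDistanceKm'); // 42
```

The same shape repeats for `refreshMs` and `maxDistanceKm` in the diff: one `Number(...)` coercion, one `Number.isFinite` guard, one default.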
@@ -0,0 +1,574 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
const DEFAULT_TEMPLATE_ID = 'shortInfoOverlayTemplate';
const FULLSCREEN_CHANGE_EVENTS = [
'fullscreenchange',
'webkitfullscreenchange',
'mozfullscreenchange',
'MSFullscreenChange',
];
/**
* Resolve the element currently presented in fullscreen mode.
*
* @param {Document} doc Host document reference.
* @returns {?Element} Fullscreen element or ``null`` when fullscreen is inactive.
*/
function getFullscreenElement(doc) {
if (!doc) return null;
return (
doc.fullscreenElement ||
doc.webkitFullscreenElement ||
doc.mozFullScreenElement ||
doc.msFullscreenElement ||
null
);
}
/**
* Determine the container that should host overlays.
*
* @param {Document} doc Host document reference.
* @returns {?Element} Preferred overlay host element.
*/
function resolveOverlayHost(doc) {
const fullscreenElement = getFullscreenElement(doc);
if (fullscreenElement && typeof fullscreenElement.appendChild === 'function') {
return fullscreenElement;
}
return doc && doc.body && typeof doc.body.appendChild === 'function' ? doc.body : null;
}
/**
* Update overlay positioning mode based on fullscreen state.
*
* @param {Element} element Overlay DOM node.
* @param {Document} doc Host document reference.
* @returns {void}
*/
function applyOverlayPositioning(element, doc) {
if (!element || !element.style) {
return;
}
const fullscreenElement = getFullscreenElement(doc);
const desired = fullscreenElement ? 'fixed' : 'absolute';
if (element.style.position !== desired) {
element.style.position = desired;
}
}
/**
* Determine whether a value behaves like a DOM element that can host overlays.
*
* @param {*} candidate Potential anchor element.
* @returns {boolean} ``true`` when the candidate exposes the required DOM API.
*/
function isValidAnchor(candidate) {
return (
candidate != null &&
typeof candidate === 'object' &&
typeof candidate.getBoundingClientRect === 'function'
);
}
/**
* Create a factory that instantiates overlay DOM nodes.
*
* @param {Document} document Host document reference.
* @param {?Element} template Template element cloned for each overlay.
* @returns {Function} Factory generating overlay nodes with close/content refs.
*/
function createDefaultOverlayFactory(document, template) {
const templateNode =
template && template.content && template.content.firstElementChild
? template.content.firstElementChild
: null;
return () => {
let overlay;
if (templateNode && typeof templateNode.cloneNode === 'function') {
overlay = templateNode.cloneNode(true);
} else {
overlay = document.createElement('div');
overlay.className = 'short-info-overlay';
overlay.setAttribute('role', 'dialog');
overlay.setAttribute('aria-modal', 'false');
const closeButton = document.createElement('button');
closeButton.type = 'button';
closeButton.className = 'short-info-close';
closeButton.setAttribute('aria-label', 'Close node details');
closeButton.textContent = '×';
const content = document.createElement('div');
content.className = 'short-info-content';
overlay.appendChild(closeButton);
overlay.appendChild(content);
}
const closeButton =
typeof overlay.querySelector === 'function'
? overlay.querySelector('.short-info-close')
: null;
const content =
typeof overlay.querySelector === 'function'
? overlay.querySelector('.short-info-content')
: null;
return { overlay, closeButton, content };
};
}
/**
* Create a no-op overlay stack used when the DOM primitives are unavailable.
*
* @returns {Object} Overlay stack interface with inert behaviour.
*/
function createNoopOverlayStack() {
return {
render() {},
close() {},
closeAll() {},
isOpen() {
return false;
},
containsNode() {
return false;
},
positionAll() {},
cleanupOrphans() {},
incrementRequestToken() {
return 0;
},
isTokenCurrent() {
return false;
},
getOpenOverlays() {
return [];
},
};
}
/**
* Create a stack manager that renders and positions short-info overlays.
*
* @param {{
* document?: Document,
* window?: Window,
* templateId?: string,
* template?: Element,
* factory?: Function
* }} [options] Overlay configuration and host references.
* @returns {{
* render: (anchor: Element, html: string) => void,
* close: (anchor: Element) => void,
* closeAll: () => void,
* isOpen: (anchor: Element) => boolean,
* containsNode: (node: Node) => boolean,
* positionAll: () => void,
* cleanupOrphans: () => void,
* incrementRequestToken: (anchor: Element) => number,
* isTokenCurrent: (anchor: Element, token: number) => boolean,
* getOpenOverlays: () => Array<{ anchor: Element, element: Element }>
* }} Overlay stack interface.
*/
export function createShortInfoOverlayStack(options = {}) {
const doc = options.document || globalThis.document || null;
const win = options.window || globalThis.window || null;
if (!doc || !doc.body) {
return createNoopOverlayStack();
}
const template =
options.template !== undefined
? options.template
: doc.getElementById(options.templateId || DEFAULT_TEMPLATE_ID);
const overlayFactory =
typeof options.factory === 'function'
? options.factory
: createDefaultOverlayFactory(doc, template);
const overlayStates = new Map();
const overlayOrder = [];
/**
* Retrieve the active overlay host element.
*
* @returns {?Element} Host element capable of containing overlays.
*/
function getOverlayHost() {
return resolveOverlayHost(doc);
}
/**
* Append ``element`` to the preferred overlay host when necessary.
*
* @param {Element} element Overlay root element.
* @returns {void}
*/
function ensureOverlayAttached(element) {
if (!element) return;
const host = getOverlayHost();
if (!host) return;
if (element.parentNode !== host) {
host.appendChild(element);
}
applyOverlayPositioning(element, doc);
}
/**
* React to fullscreen transitions by reattaching overlays to the active host.
*
* @returns {void}
*/
function handleFullscreenChange() {
for (const state of overlayStates.values()) {
ensureOverlayAttached(state.element);
}
positionAll();
}
if (doc && typeof doc.addEventListener === 'function') {
for (const eventName of FULLSCREEN_CHANGE_EVENTS) {
doc.addEventListener(eventName, handleFullscreenChange);
}
}
/**
* Remove an overlay element from the DOM tree.
*
* @param {Element} element Overlay root element.
* @returns {void}
*/
function detachOverlayElement(element) {
if (!element) return;
if (typeof element.remove === 'function') {
element.remove();
return;
}
if (element.parentNode && typeof element.parentNode.removeChild === 'function') {
element.parentNode.removeChild(element);
}
}
/**
* Create or retrieve the overlay state associated with ``anchor``.
*
* @param {Element} anchor Anchor element.
* @returns {{
* anchor: Element,
* element: Element,
* content: Element,
* closeButton: Element,
* requestToken: number
* }|null} Overlay state or ``null`` when creation fails.
*/
function ensureState(anchor) {
if (!isValidAnchor(anchor)) {
return null;
}
let state = overlayStates.get(anchor);
if (state) {
return state;
}
const created = overlayFactory();
if (!created || !created.overlay || !created.content) {
return null;
}
const overlayEl = created.overlay;
const closeButton = created.closeButton || null;
const contentEl = created.content;
if (typeof overlayEl.setAttribute === 'function') {
overlayEl.setAttribute('data-short-info-overlay', '');
}
if (closeButton && typeof closeButton.addEventListener === 'function') {
closeButton.addEventListener('click', event => {
if (event) {
if (typeof event.preventDefault === 'function') {
event.preventDefault();
}
if (typeof event.stopPropagation === 'function') {
event.stopPropagation();
}
}
close(anchor);
});
}
ensureOverlayAttached(overlayEl);
state = {
anchor,
element: overlayEl,
content: contentEl,
closeButton,
requestToken: 0,
};
overlayStates.set(anchor, state);
overlayOrder.push(state);
return state;
}
/**
* Remove the overlay state associated with ``anchor``.
*
* @param {Element} anchor Anchor element.
* @returns {void}
*/
function removeState(anchor) {
const state = overlayStates.get(anchor);
if (!state) return;
overlayStates.delete(anchor);
const index = overlayOrder.indexOf(state);
if (index >= 0) {
overlayOrder.splice(index, 1);
}
detachOverlayElement(state.element);
}
/**
* Position an overlay relative to its anchor element.
*
* @param {{ anchor: Element, element: Element }} state Overlay state entry.
* @returns {void}
*/
function positionState(state) {
if (!state || !state.anchor || !state.element) {
return;
}
if (!doc.body.contains(state.anchor)) {
close(state.anchor);
return;
}
const rect = state.anchor.getBoundingClientRect();
const overlayRect =
typeof state.element.getBoundingClientRect === 'function'
? state.element.getBoundingClientRect()
: { width: 0, height: 0 };
const viewportWidth =
(doc.documentElement && doc.documentElement.clientWidth) ||
(win && typeof win.innerWidth === 'number' ? win.innerWidth : 0);
const viewportHeight =
(doc.documentElement && doc.documentElement.clientHeight) ||
(win && typeof win.innerHeight === 'number' ? win.innerHeight : 0);
const scrollX = (win && typeof win.scrollX === 'number' ? win.scrollX : 0) || 0;
const scrollY = (win && typeof win.scrollY === 'number' ? win.scrollY : 0) || 0;
const fullscreenElement = getFullscreenElement(doc);
const offsetX = fullscreenElement ? 0 : scrollX;
const offsetY = fullscreenElement ? 0 : scrollY;
let left = rect.left + offsetX;
let top = rect.top + offsetY;
if (viewportWidth > 0) {
const maxLeft = offsetX + viewportWidth - overlayRect.width - 8;
left = Math.max(offsetX + 8, Math.min(left, maxLeft));
}
if (viewportHeight > 0) {
const maxTop = offsetY + viewportHeight - overlayRect.height - 8;
top = Math.max(offsetY + 8, Math.min(top, maxTop));
}
if (state.element.style) {
applyOverlayPositioning(state.element, doc);
state.element.style.left = `${left}px`;
state.element.style.top = `${top}px`;
state.element.style.visibility = 'visible';
}
}
/**
* Schedule positioning of an overlay for the next animation frame.
*
* @param {{ anchor: Element, element: Element }} state Overlay state entry.
* @returns {void}
*/
function schedulePosition(state) {
if (!state || !state.element) return;
if (state.element.style) {
state.element.style.visibility = 'hidden';
}
const raf = (win && win.requestAnimationFrame) || globalThis.requestAnimationFrame;
if (typeof raf === 'function') {
raf(() => positionState(state));
} else {
setTimeout(() => positionState(state), 16);
}
}
/**
* Render overlay content anchored to the provided element.
*
* @param {Element} anchor Anchor element driving overlay placement.
* @param {string} html Inner HTML displayed in the overlay body.
* @returns {void}
*/
function render(anchor, html) {
const state = ensureState(anchor);
if (!state) {
return;
}
ensureOverlayAttached(state.element);
if (state.content && typeof state.content.innerHTML === 'string') {
state.content.innerHTML = html;
}
if (state.element && typeof state.element.removeAttribute === 'function') {
state.element.removeAttribute('hidden');
}
schedulePosition(state);
}
/**
* Close the overlay associated with ``anchor``.
*
* @param {Element} anchor Anchor element whose overlay should be removed.
* @returns {void}
*/
function close(anchor) {
const state = overlayStates.get(anchor);
if (!state) return;
state.requestToken += 1;
removeState(anchor);
}
/**
* Determine whether an overlay for ``anchor`` is currently open.
*
* @param {Element} anchor Anchor element to test.
* @returns {boolean} ``true`` when an overlay exists for the anchor.
*/
function isOpen(anchor) {
return overlayStates.has(anchor);
}
/**
* Close every active overlay.
*
* @returns {void}
*/
function closeAll() {
const anchors = Array.from(overlayStates.keys());
for (const anchor of anchors) {
close(anchor);
}
}
/**
* Test whether the provided DOM node belongs to any overlay.
*
* @param {Node} node Candidate DOM node.
* @returns {boolean} ``true`` when the node is inside an overlay.
*/
function containsNode(node) {
if (!node) return false;
for (const state of overlayStates.values()) {
if (state.element && typeof state.element.contains === 'function') {
if (state.element.contains(node)) {
return true;
}
}
}
return false;
}
/**
* Reposition all overlays based on the latest viewport metrics.
*
* @returns {void}
*/
function positionAll() {
for (const state of overlayStates.values()) {
positionState(state);
}
}
/**
* Remove overlays whose anchors are no longer part of the document body.
*
* @returns {void}
*/
function cleanupOrphans() {
for (const state of Array.from(overlayStates.values())) {
if (!doc.body.contains(state.anchor)) {
close(state.anchor);
}
}
}
/**
* Increment and return the request token for the provided anchor.
*
* @param {Element} anchor Anchor whose request token should be updated.
* @returns {number} Updated token value.
*/
function incrementRequestToken(anchor) {
const state = ensureState(anchor);
if (!state) {
return 0;
}
state.requestToken += 1;
return state.requestToken;
}
/**
* Determine whether ``token`` is still current for ``anchor``.
*
* @param {Element} anchor Anchor element associated with the request.
* @param {number} token Token obtained from ``incrementRequestToken``.
* @returns {boolean} ``true`` when the token is current.
*/
function isTokenCurrent(anchor, token) {
const state = overlayStates.get(anchor);
if (!state) {
return false;
}
return state.requestToken === token;
}
/**
* Retrieve diagnostic information about open overlays.
*
* @returns {Array<{ anchor: Element, element: Element }>}
*/
function getOpenOverlays() {
return overlayOrder.map(state => ({ anchor: state.anchor, element: state.element }));
}
return {
render,
close,
closeAll,
isOpen,
containsNode,
positionAll,
cleanupOrphans,
incrementRequestToken,
isTokenCurrent,
getOpenOverlays,
};
}
export const __testUtils = {
isValidAnchor,
createDefaultOverlayFactory,
createNoopOverlayStack,
};

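The `incrementRequestToken`/`isTokenCurrent` pair above guards against stale async responses: each new request bumps a per-anchor counter, and a late response is rendered only if its token still matches. A standalone sketch of that pattern, with a plain `Map` standing in for the stack's internal `overlayStates`:

```javascript
// Per-anchor monotonically increasing tokens: a fetch that resolves after a
// newer request (or after close() bumped the counter) sees a mismatched
// token and is dropped by the caller.
const tokens = new Map();

function incrementRequestToken(anchor) {
  const next = (tokens.get(anchor) || 0) + 1;
  tokens.set(anchor, next);
  return next;
}

function isTokenCurrent(anchor, token) {
  return tokens.get(anchor) === token;
}

const anchor = { id: 'node-1' }; // any object identity works as a Map key
const first = incrementRequestToken(anchor);  // 1
const second = incrementRequestToken(anchor); // 2 — invalidates the first
isTokenCurrent(anchor, first);  // false
isTokenCurrent(anchor, second); // true
```

This is why `close()` in the diff increments `state.requestToken` before removing the state: any in-flight request for that anchor becomes stale immediately.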
@@ -0,0 +1,408 @@
/*
* Copyright (C) 2025 l5yth
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Determine whether ``value`` can be treated as a finite number.
*
* @param {*} value Candidate numeric value.
* @returns {boolean} ``true`` when the value parses to a finite number.
*/
function isFiniteNumber(value) {
if (value == null || value === '') return false;
const number = typeof value === 'number' ? value : Number(value);
return Number.isFinite(number);
}
/**
* Retrieve the first defined property from ``container`` using ``keys``.
*
* @param {Object} container Object inspected for values.
* @param {Array<string>} keys Candidate property names.
* @returns {*} First non-nullish value discovered.
*/
function pickFirstValue(container, keys) {
if (!container || typeof container !== 'object') {
return undefined;
}
for (const key of keys) {
if (Object.prototype.hasOwnProperty.call(container, key)) {
const candidate = container[key];
if (candidate != null && (candidate !== '' || candidate === 0)) {
return candidate;
}
}
}
return undefined;
}
/**
* Format arbitrary telemetry values by appending a unit suffix.
*
* @param {*} value Raw value to format.
* @param {string} suffix Unit suffix appended when formatting succeeds.
* @returns {string} Formatted value or an empty string for invalid input.
*/
export function fmtAlt(value, suffix) {
if (!isFiniteNumber(value) && !(value === 0 || value === '0')) {
return '';
}
return `${Number(value)}${suffix}`;
}
/**
* Format utilisation metrics as percentages.
*
* @param {*} value Raw utilisation value.
* @param {number} [decimals=3] Decimal precision applied to the percentage.
* @returns {string} Formatted percentage string.
*/
export function fmtTx(value, decimals = 3) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(decimals)}%`;
}
/**
* Format temperature telemetry in degrees Celsius.
*
* @param {*} value Raw temperature reading.
* @returns {string} Formatted temperature string.
*/
export function fmtTemperature(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(1)}°C`;
}
/**
* Format relative humidity telemetry as a percentage.
*
* @param {*} value Raw humidity reading.
* @returns {string} Formatted humidity string.
*/
export function fmtHumidity(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(1)}%`;
}
/**
* Format barometric pressure telemetry in hectopascals.
*
* @param {*} value Raw pressure value.
* @returns {string} Formatted pressure string.
*/
export function fmtPressure(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(1)} hPa`;
}
/**
* Format current telemetry, scaling to milliamperes when the magnitude is
* below one ampere.
*
* @param {*} value Raw current reading expressed in amperes.
* @returns {string} Formatted current string.
*/
export function fmtCurrent(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
if (Math.abs(num) < 1) {
return `${(num * 1000).toFixed(1)} mA`;
}
return `${num.toFixed(2)} A`;
}
/**
* Format gas resistance telemetry using a human readable Ohm prefix.
*
* @param {*} value Raw resistance value expressed in Ohms.
* @returns {string} Formatted resistance string.
*/
export function fmtGasResistance(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
const absVal = Math.abs(num);
if (absVal >= 1_000_000) {
return `${(num / 1_000_000).toFixed(2)} MΩ`;
}
if (absVal >= 1_000) {
return `${(num / 1_000).toFixed(2)} kΩ`;
}
return `${num.toFixed(2)} Ω`;
}
/**
* Format generic distance telemetry in metres.
*
* @param {*} value Raw distance value.
* @returns {string} Formatted distance string.
*/
export function fmtDistance(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(2)} m`;
}
/**
* Format optical telemetry in lux.
*
* @param {*} value Raw lux reading.
* @returns {string} Formatted lux string.
*/
export function fmtLux(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(1)} lx`;
}
/**
* Format wind direction telemetry in degrees.
*
* @param {*} value Raw wind direction reading.
* @returns {string} Formatted wind direction string.
*/
export function fmtWindDirection(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${Math.round(num)}°`;
}
/**
* Format wind speed telemetry in metres per second.
*
* @param {*} value Raw wind speed reading.
* @returns {string} Formatted wind speed string.
*/
export function fmtWindSpeed(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(1)} m/s`;
}
/**
* Format weight telemetry in kilograms.
*
* @param {*} value Raw weight value.
* @returns {string} Formatted weight string.
*/
export function fmtWeight(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(2)} kg`;
}
/**
* Format radiation telemetry using microsieverts per hour.
*
* @param {*} value Raw radiation value.
* @returns {string} Formatted radiation string.
*/
export function fmtRadiation(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(2)} µSv/h`;
}
/**
* Format rainfall telemetry using millimetres.
*
* @param {*} value Raw rainfall accumulation value.
* @returns {string} Formatted rainfall string.
*/
export function fmtRainfall(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${num.toFixed(2)} mm`;
}
/**
* Format soil moisture telemetry. These metrics are typically raw sensor
* readings without a defined unit, so the value is rounded and surfaced
* without a unit suffix.
*
* @param {*} value Raw soil moisture reading.
* @returns {string} Soil moisture string.
*/
export function fmtSoilMoisture(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${Math.round(num)}`;
}
/**
* Format soil temperature telemetry in degrees Celsius.
*
* @param {*} value Raw soil temperature reading.
* @returns {string} Formatted soil temperature string.
*/
export function fmtSoilTemperature(value) {
return fmtTemperature(value);
}
/**
* Format indoor air quality index values.
*
* @param {*} value Raw IAQ reading.
* @returns {string} IAQ string.
*/
export function fmtIaq(value) {
if (!isFiniteNumber(value)) return '';
const num = Number(value);
return `${Math.round(num)}`;
}
/**
* Telemetry descriptors consumed by the short-info overlay.
*
* Each descriptor includes a canonical key, display label, candidate source
* property names, and a formatter that converts numeric values into a human
* readable string.
*/
export const TELEMETRY_FIELDS = [
{ key: 'battery', label: 'Battery', sources: ['battery', 'battery_level', 'batteryLevel'], formatter: value => fmtAlt(value, '%') },
{ key: 'voltage', label: 'Voltage', sources: ['voltage'], formatter: value => fmtAlt(value, 'V') },
{ key: 'current', label: 'Current', sources: ['current'], formatter: fmtCurrent },
{ key: 'uptime', label: 'Uptime', sources: ['uptime', 'uptime_seconds', 'uptimeSeconds'], formatter: (value, utils) => (typeof utils.formatUptime === 'function' ? utils.formatUptime(value) : '') },
{
key: 'channel',
label: 'Channel Util',
sources: ['channel', 'channel_utilization', 'channelUtilization'],
formatter: value => fmtTx(value),
},
{
key: 'airUtil',
label: 'Air Util Tx',
sources: ['airUtil', 'air_util_tx', 'airUtilTx'],
formatter: value => fmtTx(value),
},
{ key: 'temperature', label: 'Temperature', sources: ['temperature', 'temp'], formatter: fmtTemperature },
{ key: 'humidity', label: 'Humidity', sources: ['humidity', 'relative_humidity', 'relativeHumidity'], formatter: fmtHumidity },
{ key: 'pressure', label: 'Pressure', sources: ['pressure', 'barometric_pressure', 'barometricPressure'], formatter: fmtPressure },
{ key: 'gasResistance', label: 'Gas Resistance', sources: ['gas_resistance', 'gasResistance'], formatter: fmtGasResistance },
{ key: 'iaq', label: 'IAQ', sources: ['iaq'], formatter: fmtIaq },
{ key: 'distance', label: 'Distance', sources: ['distance'], formatter: fmtDistance },
{ key: 'lux', label: 'Lux', sources: ['lux'], formatter: fmtLux },
{ key: 'whiteLux', label: 'White Lux', sources: ['white_lux', 'whiteLux'], formatter: fmtLux },
{ key: 'irLux', label: 'IR Lux', sources: ['ir_lux', 'irLux'], formatter: fmtLux },
{ key: 'uvLux', label: 'UV Lux', sources: ['uv_lux', 'uvLux'], formatter: fmtLux },
{ key: 'windDirection', label: 'Wind Direction', sources: ['wind_direction', 'windDirection'], formatter: fmtWindDirection },
{ key: 'windSpeed', label: 'Wind Speed', sources: ['wind_speed', 'windSpeed', 'windSpeedMps'], formatter: fmtWindSpeed },
{ key: 'windGust', label: 'Wind Gust', sources: ['wind_gust', 'windGust'], formatter: fmtWindSpeed },
{ key: 'windLull', label: 'Wind Lull', sources: ['wind_lull', 'windLull'], formatter: fmtWindSpeed },
{ key: 'weight', label: 'Weight', sources: ['weight'], formatter: fmtWeight },
{ key: 'radiation', label: 'Radiation', sources: ['radiation', 'radiationLevel'], formatter: fmtRadiation },
{ key: 'rainfall1h', label: 'Rainfall 1h', sources: ['rainfall_1h', 'rainfall1h', 'rainfall1H'], formatter: fmtRainfall },
{ key: 'rainfall24h', label: 'Rainfall 24h', sources: ['rainfall_24h', 'rainfall24h', 'rainfall24H'], formatter: fmtRainfall },
{ key: 'soilMoisture', label: 'Soil Moisture', sources: ['soil_moisture', 'soilMoisture'], formatter: fmtSoilMoisture },
{ key: 'soilTemperature', label: 'Soil Temperature', sources: ['soil_temperature', 'soilTemperature'], formatter: fmtSoilTemperature },
];
/**
* Collect telemetry metrics from arbitrary node payloads.
*
* The function inspects common top-level, device metric, and environment
* metric collections in order to surface numeric telemetry values.
*
* @param {*} source Node payload that may contain telemetry.
* @returns {Object} Object containing numeric telemetry keyed by descriptor.
*/
export function collectTelemetryMetrics(source) {
const metrics = {};
if (!source || typeof source !== 'object') {
return metrics;
}
const containers = [
source,
source.device_metrics,
source.deviceMetrics,
source.environment_metrics,
source.environmentMetrics,
source.telemetry,
].filter(container => container && typeof container === 'object');
for (const field of TELEMETRY_FIELDS) {
const keys = Array.isArray(field.sources) && field.sources.length > 0
? field.sources
: [field.key];
for (const container of containers) {
const raw = pickFirstValue(container, keys);
if (!isFiniteNumber(raw) && !(raw === 0 || raw === '0')) {
continue;
}
const num = Number(raw);
if (Number.isFinite(num)) {
metrics[field.key] = num;
break;
}
}
}
return metrics;
}
/**
* Build display entries for telemetry values suitable for short-info overlays.
*
* @param {Object} telemetry Telemetry metrics keyed by descriptor ``key``.
* @param {{formatUptime?: Function}} [utils] Optional formatter overrides.
* @returns {Array<{label: string, value: string}>} Renderable telemetry entries.
*/
export function buildTelemetryDisplayEntries(telemetry, utils = {}) {
const entries = [];
if (!telemetry || typeof telemetry !== 'object') {
return entries;
}
for (const field of TELEMETRY_FIELDS) {
if (!Object.prototype.hasOwnProperty.call(telemetry, field.key)) {
continue;
}
const value = telemetry[field.key];
if (value == null) {
continue;
}
const formatted = typeof field.formatter === 'function'
? field.formatter(value, utils)
: String(value);
if (formatted == null || formatted === '') {
continue;
}
entries.push({ label: field.label, value: formatted });
}
return entries;
}
export default {
TELEMETRY_FIELDS,
collectTelemetryMetrics,
buildTelemetryDisplayEntries,
fmtAlt,
fmtTx,
fmtTemperature,
fmtHumidity,
fmtPressure,
fmtCurrent,
fmtGasResistance,
fmtDistance,
fmtLux,
fmtWindDirection,
fmtWindSpeed,
fmtWeight,
fmtRadiation,
fmtRainfall,
fmtSoilMoisture,
fmtSoilTemperature,
fmtIaq,
};

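`collectTelemetryMetrics` above scans several containers (the node itself, device metrics, environment metrics, telemetry) and, per field, takes the first candidate key that yields a finite number — which is how both snake_case and camelCase payloads resolve to one canonical key. A simplified standalone sketch of that lookup (it drops the `hasOwnProperty` check the real `pickFirstValue` performs):

```javascript
// First container, then first candidate key, that yields a finite number
// wins; empty strings and nullish values are skipped.
function firstFinite(containers, keys) {
  for (const container of containers) {
    for (const key of keys) {
      const raw = container?.[key];
      if (raw == null || raw === '') continue;
      const num = Number(raw);
      if (Number.isFinite(num)) return num;
    }
  }
  return undefined;
}

const node = {
  deviceMetrics: { battery_level: 87 },
  environment_metrics: { temperature: 21.4 },
};
const containers = [node, node.deviceMetrics, node.environment_metrics];
firstFinite(containers, ['battery', 'battery_level', 'batteryLevel']); // 87
firstFinite(containers, ['temperature', 'temp']); // 21.4
```

`buildTelemetryDisplayEntries` then only has to map each resolved key through its descriptor's formatter.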
@@ -73,12 +73,17 @@
applyBackground();
}
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', init);
} else {
init();
function bootstrap() {
document.removeEventListener('DOMContentLoaded', init);
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', init);
} else {
init();
}
}
bootstrap();
window.addEventListener('themechange', applyBackground);
/**
@@ -86,11 +91,19 @@
*
* @type {{
* applyBackground: function(): void,
* resolveBackgroundColor: function(): (?string)
* resolveBackgroundColor: function(): (?string),
* __testHooks: {
* bootstrap: function(): void,
* init: function(): void
* }
* }}
*/
window.__potatoBackground = {
applyBackground: applyBackground,
resolveBackgroundColor: resolveBackgroundColor
resolveBackgroundColor: resolveBackgroundColor,
__testHooks: {
bootstrap: bootstrap,
init: init
}
};
})();

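The `bootstrap` helper introduced in this hunk defers `init` until `DOMContentLoaded` only while the document is still loading, and removes any earlier listener first so repeated calls stay idempotent. A DOM-free sketch of that pattern, using a fake `doc` object to exercise both branches:

```javascript
// readyState bootstrap: run init immediately when the document is already
// parsed, otherwise register for DOMContentLoaded exactly once.
function makeBootstrap(doc, init) {
  return function bootstrap() {
    doc.removeEventListener('DOMContentLoaded', init);
    if (doc.readyState === 'loading') {
      doc.addEventListener('DOMContentLoaded', init);
    } else {
      init();
    }
  };
}

// Fake document: just enough surface to observe listener registration.
let ran = 0;
const listeners = new Set();
const doc = {
  readyState: 'loading',
  addEventListener: (_, fn) => listeners.add(fn),
  removeEventListener: (_, fn) => listeners.delete(fn),
};
const bootstrap = makeBootstrap(doc, () => { ran += 1; });
bootstrap();                  // still loading: init is queued, not run
doc.readyState = 'complete';
bootstrap();                  // listener removed, init runs synchronously
```

The remove-before-branch ordering is what lets the test hooks call `bootstrap` repeatedly without stacking duplicate `DOMContentLoaded` handlers.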
@@ -36,6 +36,32 @@
return match ? decodeURIComponent(match[1]) : null;
}
/**
* Convert cookie options to a serialized string suitable for ``document.cookie``.
*
* @param {Object<string, *>} options Map of cookie attribute keys and values.
* @returns {string} Serialized cookie attribute segment prefixed with ``; `` when non-empty.
*/
function formatCookieOption(pair) {
var key = pair[0];
var optionValue = pair[1];
if (optionValue === true) {
return '; ' + key;
}
return '; ' + key + '=' + optionValue;
}
function serializeCookieOptions(options) {
var buffer = '';
var source = options == null ? {} : options;
var entries = Object.entries(source);
for (var index = 0; index < entries.length; index += 1) {
buffer += formatCookieOption(entries[index]);
}
return buffer;
}
/**
* Persist a cookie with optional attributes.
*
@@ -50,10 +76,7 @@
opts || {}
);
var updated = encodeURIComponent(name) + '=' + encodeURIComponent(value);
for (var k in options) {
if (!Object.prototype.hasOwnProperty.call(options, k)) continue;
updated += '; ' + k + (options[k] === true ? '' : '=' + options[k]);
}
updated += serializeCookieOptions(options);
document.cookie = updated;
}
@@ -84,13 +107,35 @@
return isDark;
}
var theme = getCookie('theme');
if (theme !== 'dark' && theme !== 'light') {
theme = 'dark';
function exerciseSetCookieGuard() {
var originalHasOwnProperty = Object.prototype.hasOwnProperty;
Object.prototype.hasOwnProperty = function alwaysFalse() {
return false;
};
try {
setCookie('probe', 'probe', { SameSite: 'Lax' });
} finally {
Object.prototype.hasOwnProperty = originalHasOwnProperty;
}
}
persistTheme(theme);
applyTheme(theme);
var theme = 'dark';
function bootstrap() {
document.removeEventListener('DOMContentLoaded', handleReady);
theme = getCookie('theme');
if (theme !== 'dark' && theme !== 'light') {
theme = 'dark';
}
persistTheme(theme);
applyTheme(theme);
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', handleReady);
} else {
handleReady();
}
}
function handleReady() {
var isDark = applyTheme(theme);
@@ -105,11 +150,7 @@
}
}
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', handleReady);
} else {
handleReady();
}
bootstrap();
/**
* Testing hooks exposing cookie helpers for integration tests.
@@ -118,13 +159,30 @@
* getCookie: function(string): (?string),
* setCookie: function(string, string, Object<string, *>=): void,
* persistTheme: function(string): void,
* maxAge: number
* maxAge: number,
* __testHooks: {
* applyTheme: function(string): boolean,
* handleReady: function(): void,
* bootstrap: function(): void,
* setTheme: function(string): void
* }
* }}
*/
window.__themeCookie = {
getCookie: getCookie,
setCookie: setCookie,
persistTheme: persistTheme,
maxAge: THEME_COOKIE_MAX_AGE
maxAge: THEME_COOKIE_MAX_AGE,
__testHooks: {
applyTheme: applyTheme,
handleReady: handleReady,
bootstrap: bootstrap,
setTheme: function setTheme(value) {
theme = value;
},
exerciseSetCookieGuard: exerciseSetCookieGuard,
serializeCookieOptions: serializeCookieOptions,
formatCookieOption: formatCookieOption
}
};
})();

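The refactor above splits cookie-attribute serialization out of `setCookie`: boolean `true` options become a bare flag (e.g. `; Secure`), while every other value becomes a `key=value` pair. A compact standalone equivalent:

```javascript
// Flatten an options object into document.cookie attribute syntax.
// true -> "; Key" (valueless flag); anything else -> "; Key=value".
function serializeCookieOptions(options) {
  return Object.entries(options || {})
    .map(([key, value]) => (value === true ? '; ' + key : '; ' + key + '=' + value))
    .join('');
}

serializeCookieOptions({ path: '/', SameSite: 'Lax', Secure: true });
// "; path=/; SameSite=Lax; Secure"
```

Because the segment is either empty or starts with `; `, `setCookie` can append it directly after the `name=value` prefix without extra separator logic.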
@@ -126,12 +126,20 @@ tbody tr:nth-child(even) td {
body {
font-family: system-ui, Segoe UI, Roboto, Ubuntu, Arial, sans-serif;
margin: var(--pad);
padding-bottom: 32px;
padding-bottom: 96px;
--map-tiles-filter: var(--map-tile-filter-light);
}
h1 {
margin: 0 0 8px;
margin: 0;
}
.site-header {
display: flex;
flex-wrap: wrap;
align-items: center;
gap: 12px;
margin-bottom: 8px;
}
.site-title {
@@ -152,6 +160,63 @@ h1 {
margin-bottom: 12px;
}
.instance-selector {
display: flex;
align-items: center;
}
.instance-select {
appearance: none;
-webkit-appearance: none;
-moz-appearance: none;
background-color: var(--input-bg);
color: var(--input-fg);
border: 1px solid var(--input-border);
border-radius: 8px;
padding: 6px 32px 6px 12px;
font-size: 14px;
line-height: 1.4;
min-width: 220px;
background-image: linear-gradient(45deg, transparent 50%, var(--muted) 50%),
linear-gradient(135deg, var(--muted) 50%, transparent 50%);
background-position: calc(100% - 18px) calc(50% - 4px), calc(100% - 12px) calc(50% - 4px);
background-size: 6px 6px, 6px 6px;
background-repeat: no-repeat;
}
.instance-select:focus {
outline: 2px solid var(--accent);
outline-offset: 2px;
}
.visually-hidden {
position: absolute;
width: 1px;
height: 1px;
padding: 0;
margin: -1px;
overflow: hidden;
clip: rect(0, 0, 0, 0);
white-space: nowrap;
border: 0;
}
@media (max-width: 900px) {
.site-header {
flex-direction: column;
align-items: flex-start;
}
.instance-selector,
.instance-select {
width: 100%;
}
.instance-select {
min-width: 0;
}
}
.pill {
display: inline-block;
padding: 2px 8px;
@@ -162,14 +227,12 @@ h1 {
#map {
position: relative;
flex: 1;
width: 100%;
height: 60vh;
border: 1px solid #ddd;
border-radius: 8px;
overflow: hidden;
display: flex;
align-items: center;
justify-content: center;
display: block;
}
.map-panel.is-fullscreen,
@@ -347,9 +410,9 @@ th {
.map-panel {
position: relative;
flex: 1;
display: flex;
flex: 1 1 0%;
min-width: 0;
display: block;
}
.map-toolbar {
@@ -448,7 +511,7 @@ th {
line-height: 1.4;
min-width: 200px;
max-width: 240px;
z-index: 2000;
z-index: 12000;
}
.short-info-overlay[hidden] {
@@ -576,6 +639,16 @@ button {
color: var(--fg);
}
.icon-button {
width: 36px;
height: 36px;
padding: 0;
display: inline-flex;
align-items: center;
justify-content: center;
line-height: 1;
}
button:hover {
background: #f6f6f6;
}
@@ -742,16 +815,66 @@ input[type="radio"] {
font-size: 12px;
}
footer {
.app-footer {
position: fixed;
bottom: 0;
left: var(--pad);
width: calc(100% - 2 * var(--pad));
inset-block-end: 0;
inset-inline: 0;
width: 100%;
background: #fafafa;
border-top: 1px solid #ddd;
text-align: center;
font-size: 12px;
padding: 4px 0;
z-index: 4100;
}
.app-footer .footer-content {
display: flex;
align-items: center;
justify-content: center;
flex-wrap: wrap;
gap: 6px;
margin: 0 auto;
width: 100%;
max-width: 960px;
padding: 6px var(--pad);
text-align: center;
box-sizing: border-box;
}
.app-footer .footer-separator {
margin: 0 4px;
}
.app-footer .footer-links {
display: inline-flex;
align-items: center;
gap: 6px;
flex-wrap: wrap;
justify-content: center;
}
.app-footer .footer-brand {
font-weight: 600;
}
.app-footer a {
color: inherit;
}
@media (max-width: 600px) {
.app-footer .footer-content {
padding: 10px 12px;
justify-content: center;
gap: 4px 8px;
}
.app-footer .footer-links {
flex-direction: column;
gap: 4px;
}
.app-footer .footer-separator {
display: none;
}
}
.info-overlay {
@@ -762,7 +885,7 @@ footer {
align-items: center;
justify-content: center;
padding: var(--pad);
z-index: 4000;
z-index: 13000;
}
.info-overlay[hidden] {
@@ -834,19 +957,6 @@ footer {
word-break: break-word;
}
@media (max-width: 1280px) {
#nodes th:nth-child(15),
#nodes td:nth-child(15),
#nodes th:nth-child(16),
#nodes td:nth-child(16),
#nodes th:nth-child(17),
#nodes td:nth-child(17),
#nodes th:nth-child(18),
#nodes td:nth-child(18) {
display: none;
}
}
@media (max-width: 1024px) {
.row {
flex-direction: column;
@@ -900,7 +1010,7 @@ footer {
#map {
order: 1;
flex: none;
width: 100%;
max-width: 100%;
height: 50vh;
}
@@ -912,36 +1022,57 @@ footer {
height: 30vh;
}
#nodes th:nth-child(1),
#nodes td:nth-child(1),
#nodes th:nth-child(5),
#nodes td:nth-child(5),
#nodes th:nth-child(6),
#nodes td:nth-child(6),
#nodes th:nth-child(9),
#nodes td:nth-child(9),
#nodes th:nth-child(11),
#nodes td:nth-child(11),
#nodes th:nth-child(13),
#nodes td:nth-child(13),
#nodes th:nth-child(14),
#nodes td:nth-child(14),
#nodes th:nth-child(15),
#nodes td:nth-child(15),
#nodes th:nth-child(16),
#nodes td:nth-child(16),
#nodes th:nth-child(17),
#nodes td:nth-child(17),
#nodes th:nth-child(18),
#nodes td:nth-child(18) {
display: none;
}
.legend {
max-width: min(240px, 80vw);
}
}
@media (max-width: 1679px) {
.nodes-col--node-id {
display: none;
}
}
@media (max-width: 1559px) {
.nodes-col--temperature,
.nodes-col--humidity,
.nodes-col--pressure {
display: none;
}
}
@media (max-width: 1319px) {
.nodes-col--latitude,
.nodes-col--longitude,
.nodes-col--last-position {
display: none;
}
}
@media (max-width: 1109px) {
.nodes-col--voltage,
.nodes-col--air-util-tx,
.nodes-col--altitude {
display: none;
}
}
@media (max-width: 899px) {
.nodes-col--uptime,
.nodes-col--frequency,
.nodes-col--modem-preset {
display: none;
}
}
@media (max-width: 659px) {
.nodes-col--battery,
.nodes-col--channel-util,
.nodes-col--hw-model {
display: none;
}
}
body.dark {
background: #111;
color: #eee;


@@ -1,10 +1,39 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { promises as fs } from 'node:fs';
import path from 'node:path';
const coverageDir = 'coverage';
const reportsDir = 'reports';
const outputPath = path.join(reportsDir, 'javascript-coverage.json');
import istanbulLibCoverage from 'istanbul-lib-coverage';
import istanbulLibReport from 'istanbul-lib-report';
import istanbulReports from 'istanbul-reports';
import v8toIstanbul from 'v8-to-istanbul';
const { createCoverageMap } = istanbulLibCoverage;
const { createContext } = istanbulLibReport;
const coverageDir = path.resolve('coverage');
const reportsDir = path.resolve('reports');
const jsonOutputName = 'javascript-coverage.json';
const lcovOutputName = 'javascript-coverage.lcov';
const projectRoot = process.cwd();
/**
* Ensure the reports directory exists so that coverage artefacts can be written.
*
* @returns {Promise<void>} A promise that resolves when the directory is available.
*/
async function ensureReportsDir() {
try {
await fs.mkdir(reportsDir, { recursive: true });
@@ -14,32 +43,150 @@ async function ensureReportsDir() {
}
}
async function copyLatestCoverage() {
/**
* Read the coverage directory and return a deterministically ordered list of JSON files.
*
* @returns {Promise<string[]>} The absolute paths of available coverage JSON artefacts.
*/
async function listCoverageFiles() {
let entries;
try {
entries = await fs.readdir(coverageDir);
} catch (error) {
if (error.code === 'ENOENT') {
console.warn('Coverage directory not found; skipping export.');
return;
return [];
}
throw error;
}
const coverageFiles = entries.filter(name => name.endsWith('.json'));
const coverageFiles = entries
.filter(name => name.endsWith('.json'))
.map(name => path.join(coverageDir, name))
.sort();
if (!coverageFiles.length) {
console.warn('No coverage files generated; skipping export.');
return;
return [];
}
// Sort to pick the most recent entry deterministically.
coverageFiles.sort();
const latest = coverageFiles[coverageFiles.length - 1];
const source = path.join(coverageDir, latest);
return coverageFiles;
}
await fs.copyFile(source, outputPath);
console.log(`Copied coverage report to ${outputPath}`);
/**
* Convert a V8 coverage URL to a project-local filesystem path.
*
* @param {string | undefined} url The coverage URL emitted by V8.
* @returns {string | null} A normalised absolute path, or null when the URL should be ignored.
*/
function normaliseFileUrl(url) {
if (!url || url.startsWith('node:')) {
return null;
}
if (!url.startsWith('file://')) {
return null;
}
let filePath;
try {
filePath = decodeURIComponent(new URL(url).pathname);
} catch {
return null;
}
if (!filePath.startsWith(projectRoot)) {
return null;
}
if (filePath.includes('node_modules')) {
return null;
}
return filePath;
}
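The filtering rules above (file:// URLs only, inside the project root, never under node_modules) can be sketched in isolation:

```javascript
import { pathToFileURL } from 'node:url';

// Sketch of normaliseFileUrl's filtering rules: keep only file:// URLs
// that resolve inside the project root and outside node_modules.
const root = process.cwd();

function keepUrl(url) {
  if (!url || !url.startsWith('file://')) return null;
  const filePath = decodeURIComponent(new URL(url).pathname);
  if (!filePath.startsWith(root) || filePath.includes('node_modules')) return null;
  return filePath;
}

console.log(keepUrl('node:fs')); // null (not a file:// URL)
console.log(keepUrl(pathToFileURL(`${root}/node_modules/x.js`).href)); // null
```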
/**
* Transform the raw V8 coverage reports into an Istanbul coverage map.
*
* @param {string[]} coverageFiles A list of coverage artefacts to consume.
* @returns {Promise<import('istanbul-lib-coverage').CoverageMap>} The aggregated coverage map.
*/
async function buildCoverageMap(coverageFiles) {
const coverageMap = createCoverageMap({});
for (const file of coverageFiles) {
const raw = await fs.readFile(file, 'utf8');
const parsed = JSON.parse(raw);
const entries = Array.isArray(parsed.result) ? parsed.result : [];
for (const entry of entries) {
const { url, functions } = entry;
const filePath = normaliseFileUrl(url);
if (!filePath) {
continue;
}
try {
const converter = v8toIstanbul(filePath, 0, {
source: await fs.readFile(filePath, 'utf8'),
});
await converter.load();
converter.applyCoverage(functions);
const fileCoverages = converter.toIstanbul();
for (const coverage of Object.values(fileCoverages)) {
if (coverage.path) {
const relativePath = path.relative(projectRoot, coverage.path);
coverage.path = relativePath || coverage.path;
}
try {
const existingCoverage = coverageMap.fileCoverageFor(coverage.path);
existingCoverage.merge(coverage);
} catch (error) {
if (error && typeof error.message === 'string' && error.message.includes('No file coverage')) {
coverageMap.addFileCoverage(coverage);
} else {
throw error;
}
}
}
} catch (error) {
console.warn(`Failed to translate coverage for ${filePath}:`, error);
}
}
}
return coverageMap;
}
/**
* Persist the Istanbul coverage map as JSON and LCOV artefacts for downstream tooling.
*
* @param {import('istanbul-lib-coverage').CoverageMap} coverageMap The populated coverage map.
* @returns {Promise<void>} A promise that resolves when the outputs are written.
*/
async function writeCoverageOutputs(coverageMap) {
const jsonOutputPath = path.join(reportsDir, jsonOutputName);
const lcovOutputPath = path.join(reportsDir, lcovOutputName);
await fs.writeFile(jsonOutputPath, `${JSON.stringify(coverageMap.toJSON(), null, 2)}\n`);
const context = createContext({ dir: reportsDir, coverageMap });
istanbulReports.create('lcovonly', { file: lcovOutputName }).execute(context);
console.log(`Wrote coverage reports to ${jsonOutputPath} and ${lcovOutputPath}`);
}
await ensureReportsDir();
await copyLatestCoverage();
const coverageFiles = await listCoverageFiles();
if (!coverageFiles.length) {
process.exit(0);
}
const coverageMap = await buildCoverageMap(coverageFiles);
if (!coverageMap.files().length) {
console.warn('No project coverage entries were recognised; skipping export.');
process.exit(0);
}
await writeCoverageOutputs(coverageMap);
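The merge-or-add fallback in buildCoverageMap() (try fileCoverageFor, add the file when the map reports "No file coverage") can be illustrated with a plain Map standing in for the istanbul coverage map. This is a sketch of the pattern, not the istanbul API itself:

```javascript
// Merge-or-add sketch: a Map stands in for the istanbul coverage map,
// and hit counts for the same path are summed line by line.
function mergeFileCoverage(map, filePath, lineHits) {
  const existing = map.get(filePath);
  if (!existing) {
    map.set(filePath, { ...lineHits });
    return;
  }
  for (const [line, hits] of Object.entries(lineHits)) {
    existing[line] = (existing[line] ?? 0) + hits;
  }
}

const merged = new Map();
mergeFileCoverage(merged, 'src/app.js', { 1: 1, 2: 0 });
mergeFileCoverage(merged, 'src/app.js', { 1: 2, 2: 1 });
console.log(merged.get('src/app.js')); // { '1': 3, '2': 1 }
```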

File diff suppressed because it is too large.

web/spec/config_spec.rb (new file)

@@ -0,0 +1,468 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "spec_helper"
RSpec.describe PotatoMesh::Config do
describe ".data_directory" do
it "uses the configured XDG data home when provided" do
Dir.mktmpdir do |dir|
data_home = File.join(dir, "xdg-data")
within_env("XDG_DATA_HOME" => data_home) do
expect(described_class.data_directory).to eq(File.join(data_home, "potato-mesh"))
end
end
end
it "falls back to the user home directory" do
within_env("XDG_DATA_HOME" => nil) do
allow(Dir).to receive(:home).and_return("/home/spec")
expect(described_class.data_directory).to eq("/home/spec/.local/share/potato-mesh")
end
ensure
allow(Dir).to receive(:home).and_call_original
end
it "falls back to the web root when the home directory is unavailable" do
within_env("XDG_DATA_HOME" => nil) do
allow(Dir).to receive(:home).and_raise(ArgumentError)
expected = File.join(described_class.web_root, ".local", "share", "potato-mesh")
expect(described_class.data_directory).to eq(expected)
end
ensure
allow(Dir).to receive(:home).and_call_original
end
it "falls back to the web root when the home directory is nil" do
within_env("XDG_DATA_HOME" => nil) do
allow(Dir).to receive(:home).and_return(nil)
expected = File.join(described_class.web_root, ".local", "share", "potato-mesh")
expect(described_class.data_directory).to eq(expected)
end
ensure
allow(Dir).to receive(:home).and_call_original
end
end
describe ".config_directory" do
it "uses the configured XDG config home when provided" do
Dir.mktmpdir do |dir|
config_home = File.join(dir, "xdg-config")
within_env("XDG_CONFIG_HOME" => config_home) do
expect(described_class.config_directory).to eq(File.join(config_home, "potato-mesh"))
end
end
end
it "falls back to the web root when the home directory is empty" do
within_env("XDG_CONFIG_HOME" => nil) do
allow(Dir).to receive(:home).and_return("")
expected = File.join(described_class.web_root, ".config", "potato-mesh")
expect(described_class.config_directory).to eq(expected)
end
ensure
allow(Dir).to receive(:home).and_call_original
end
end
describe ".legacy_config_directory" do
it "returns the repository managed configuration directory" do
expect(described_class.legacy_config_directory).to eq(
File.join(described_class.web_root, ".config"),
)
end
end
describe ".legacy_keyfile_path" do
it "returns the legacy keyfile location" do
expect(described_class.legacy_keyfile_path).to eq(
File.join(described_class.web_root, ".config", "keyfile"),
)
end
it "prefers repository config keyfiles when present" do
Dir.mktmpdir do |dir|
web_root = File.join(dir, "web")
legacy_key = File.join(web_root, "config", "potato-mesh", "keyfile")
FileUtils.mkdir_p(File.dirname(legacy_key))
File.write(legacy_key, "legacy")
allow(described_class).to receive(:web_root).and_return(web_root)
expect(described_class.legacy_keyfile_path).to eq(legacy_key)
end
ensure
allow(described_class).to receive(:web_root).and_call_original
end
end
describe ".legacy_db_path" do
it "returns the bundled database location" do
expect(described_class.legacy_db_path).to eq(
File.expand_path("../data/mesh.db", described_class.web_root),
)
end
end
describe ".private_mode_enabled?" do
it "returns false when PRIVATE is unset" do
within_env("PRIVATE" => nil) do
expect(described_class.private_mode_enabled?).to be(false)
end
end
it "returns false when PRIVATE=0" do
within_env("PRIVATE" => "0") do
expect(described_class.private_mode_enabled?).to be(false)
end
end
it "returns true when PRIVATE=1" do
within_env("PRIVATE" => "1") do
expect(described_class.private_mode_enabled?).to be(true)
end
end
it "ignores surrounding whitespace" do
within_env("PRIVATE" => " 1 ") do
expect(described_class.private_mode_enabled?).to be(true)
end
end
end
describe ".federation_enabled?" do
it "returns true when FEDERATION is unset" do
within_env("FEDERATION" => nil, "PRIVATE" => "0") do
expect(described_class.federation_enabled?).to be(true)
end
end
it "returns false when FEDERATION=0" do
within_env("FEDERATION" => "0", "PRIVATE" => "0") do
expect(described_class.federation_enabled?).to be(false)
end
end
it "returns false when PRIVATE=1" do
within_env("FEDERATION" => "1", "PRIVATE" => "1") do
expect(described_class.federation_enabled?).to be(false)
end
end
it "ignores surrounding whitespace" do
within_env("FEDERATION" => " 0 ", "PRIVATE" => "0") do
expect(described_class.federation_enabled?).to be(false)
end
end
end
describe ".legacy_well_known_candidates" do
it "includes repository config directories" do
Dir.mktmpdir do |dir|
web_root = File.join(dir, "web")
allow(described_class).to receive(:web_root).and_return(web_root)
candidates = described_class.legacy_well_known_candidates
expect(candidates).to include(
File.join(web_root, "config", "potato-mesh", "well-known", "potato-mesh"),
)
end
ensure
allow(described_class).to receive(:web_root).and_call_original
end
end
describe ".federation_announcement_interval" do
it "returns eight hours in seconds" do
expect(described_class.federation_announcement_interval).to eq(8 * 60 * 60)
end
end
describe ".remote_instance_http_timeout" do
it "returns the baked-in connect timeout when unset" do
within_env("REMOTE_INSTANCE_CONNECT_TIMEOUT" => nil) do
expect(described_class.remote_instance_http_timeout).to eq(
PotatoMesh::Config::DEFAULT_REMOTE_INSTANCE_CONNECT_TIMEOUT,
)
end
end
it "accepts positive environment overrides" do
within_env("REMOTE_INSTANCE_CONNECT_TIMEOUT" => "27") do
expect(described_class.remote_instance_http_timeout).to eq(27)
end
end
it "rejects non-positive overrides" do
within_env("REMOTE_INSTANCE_CONNECT_TIMEOUT" => "0") do
expect(described_class.remote_instance_http_timeout).to eq(
PotatoMesh::Config::DEFAULT_REMOTE_INSTANCE_CONNECT_TIMEOUT,
)
end
end
end
describe ".remote_instance_read_timeout" do
it "returns the baked-in read timeout when unset" do
within_env("REMOTE_INSTANCE_READ_TIMEOUT" => nil) do
expect(described_class.remote_instance_read_timeout).to eq(
PotatoMesh::Config::DEFAULT_REMOTE_INSTANCE_READ_TIMEOUT,
)
end
end
it "accepts positive overrides" do
within_env("REMOTE_INSTANCE_READ_TIMEOUT" => "20") do
expect(described_class.remote_instance_read_timeout).to eq(20)
end
end
it "rejects non-positive overrides" do
within_env("REMOTE_INSTANCE_READ_TIMEOUT" => "-5") do
expect(described_class.remote_instance_read_timeout).to eq(
PotatoMesh::Config::DEFAULT_REMOTE_INSTANCE_READ_TIMEOUT,
)
end
end
end
describe ".federation_max_instances_per_response" do
it "returns the baked-in response limit when unset" do
within_env("FEDERATION_MAX_INSTANCES_PER_RESPONSE" => nil) do
expect(described_class.federation_max_instances_per_response).to eq(
PotatoMesh::Config::DEFAULT_FEDERATION_MAX_INSTANCES_PER_RESPONSE,
)
end
end
it "accepts positive overrides" do
within_env("FEDERATION_MAX_INSTANCES_PER_RESPONSE" => "7") do
expect(described_class.federation_max_instances_per_response).to eq(7)
end
end
it "rejects non-positive overrides" do
within_env("FEDERATION_MAX_INSTANCES_PER_RESPONSE" => "0") do
expect(described_class.federation_max_instances_per_response).to eq(
PotatoMesh::Config::DEFAULT_FEDERATION_MAX_INSTANCES_PER_RESPONSE,
)
end
end
end
describe ".federation_max_domains_per_crawl" do
it "returns the baked-in crawl limit when unset" do
within_env("FEDERATION_MAX_DOMAINS_PER_CRAWL" => nil) do
expect(described_class.federation_max_domains_per_crawl).to eq(
PotatoMesh::Config::DEFAULT_FEDERATION_MAX_DOMAINS_PER_CRAWL,
)
end
end
it "accepts positive overrides" do
within_env("FEDERATION_MAX_DOMAINS_PER_CRAWL" => "11") do
expect(described_class.federation_max_domains_per_crawl).to eq(11)
end
end
it "rejects invalid overrides" do
within_env("FEDERATION_MAX_DOMAINS_PER_CRAWL" => "-5") do
expect(described_class.federation_max_domains_per_crawl).to eq(
PotatoMesh::Config::DEFAULT_FEDERATION_MAX_DOMAINS_PER_CRAWL,
)
end
end
end
describe ".db_path" do
it "returns the default path inside the data directory" do
expect(described_class.db_path).to eq(described_class.default_db_path)
expect(described_class.db_path).to eq(File.join(described_class.data_directory, "mesh.db"))
end
end
describe ".max_json_body_bytes" do
it "returns the baked-in default size" do
expect(described_class.max_json_body_bytes).to eq(described_class.default_max_json_body_bytes)
end
end
describe ".refresh_interval_seconds" do
it "returns the baked-in refresh cadence" do
expect(described_class.refresh_interval_seconds).to eq(described_class.default_refresh_interval_seconds)
end
end
describe ".prom_report_id_list" do
it "returns an empty collection when no identifiers are configured" do
expect(described_class.prom_report_id_list).to eq([])
end
end
describe ".channel" do
it "returns the default channel when unset" do
within_env("CHANNEL" => nil) do
expect(described_class.channel).to eq(PotatoMesh::Config::DEFAULT_CHANNEL)
end
end
it "trims whitespace from overrides" do
within_env("CHANNEL" => " #Spec ") do
expect(described_class.channel).to eq("#Spec")
end
end
end
describe ".frequency" do
it "returns the default frequency when unset" do
within_env("FREQUENCY" => nil) do
expect(described_class.frequency).to eq(PotatoMesh::Config::DEFAULT_FREQUENCY)
end
end
it "trims whitespace from overrides" do
within_env("FREQUENCY" => " 915MHz ") do
expect(described_class.frequency).to eq("915MHz")
end
end
end
describe ".map_center" do
it "parses latitude and longitude from the environment" do
within_env("MAP_CENTER" => "10.5, -20.25") do
expect(described_class.map_center).to eq({ lat: 10.5, lon: -20.25 })
end
end
it "falls back to defaults when parsing fails" do
within_env("MAP_CENTER" => "potato") do
expect(described_class.map_center).to eq({ lat: PotatoMesh::Config::DEFAULT_MAP_CENTER_LAT, lon: PotatoMesh::Config::DEFAULT_MAP_CENTER_LON })
end
end
end
describe ".max_distance_km" do
it "returns the default distance when unset" do
within_env("MAX_DISTANCE" => nil) do
expect(described_class.max_distance_km).to eq(PotatoMesh::Config::DEFAULT_MAX_DISTANCE_KM)
end
end
it "parses positive numeric overrides" do
within_env("MAX_DISTANCE" => "105.5") do
expect(described_class.max_distance_km).to eq(105.5)
end
end
it "rejects invalid overrides" do
within_env("MAX_DISTANCE" => "-1") do
expect(described_class.max_distance_km).to eq(PotatoMesh::Config::DEFAULT_MAX_DISTANCE_KM)
end
end
end
describe ".contact_link" do
it "returns the default contact when unset" do
within_env("CONTACT_LINK" => nil) do
expect(described_class.contact_link).to eq(PotatoMesh::Config::DEFAULT_CONTACT_LINK)
end
end
it "trims whitespace from overrides" do
within_env("CONTACT_LINK" => " https://example.org/chat ") do
expect(described_class.contact_link).to eq("https://example.org/chat")
end
end
end
describe ".contact_link_url" do
it "builds a matrix.to URL for aliases" do
within_env("CONTACT_LINK" => "#spec:example.org") do
expect(described_class.contact_link_url).to eq("https://matrix.to/#/#spec:example.org")
end
end
it "passes through existing URLs" do
within_env("CONTACT_LINK" => "https://example.org/chat") do
expect(described_class.contact_link_url).to eq("https://example.org/chat")
end
end
it "returns nil for unrecognised values" do
within_env("CONTACT_LINK" => "Community Portal") do
expect(described_class.contact_link_url).to be_nil
end
end
end
describe ".fetch_string" do
it "trims whitespace and falls back when blank" do
within_env("SITE_NAME" => " \t ") do
expect(described_class.site_name).to eq("PotatoMesh Demo")
end
within_env("SITE_NAME" => " Spec Mesh ") do
expect(described_class.site_name).to eq("Spec Mesh")
end
end
end
describe ".debug?" do
it "reflects the DEBUG environment variable" do
within_env("DEBUG" => "1") do
expect(described_class.debug?).to be(true)
end
within_env("DEBUG" => nil) do
expect(described_class.debug?).to be(false)
end
end
end
describe ".tile_filters" do
it "returns a frozen mapping" do
filters = described_class.tile_filters
expect(filters).to match(light: String, dark: String)
expect(filters).to be_frozen
end
end
# Execute the provided block with temporary environment overrides.
#
# @param values [Hash{String=>String, nil}] key/value pairs to set in ENV.
# @yield [] block executed while the overrides are active.
# @return [void]
def within_env(values)
original = {}
values.each do |key, value|
original[key] = ENV.key?(key) ? ENV[key] : :__unset__
if value.nil?
ENV.delete(key)
else
ENV[key] = value
end
end
yield
ensure
original.each do |key, value|
if value == :__unset__
ENV.delete(key)
else
ENV[key] = value
end
end
end
end
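For comparison, the within_env save-and-restore pattern used by these specs translates directly to Node-based tests. A JS analogue (not part of the suite, names hypothetical):

```javascript
// JS analogue of within_env: temporarily apply env overrides (null deletes
// a variable), then restore the original state even if the block throws.
function withinEnv(values, fn) {
  const saved = {};
  for (const [key, value] of Object.entries(values)) {
    saved[key] = Object.prototype.hasOwnProperty.call(process.env, key)
      ? process.env[key]
      : undefined;
    if (value === null) delete process.env[key];
    else process.env[key] = value;
  }
  try {
    return fn();
  } finally {
    for (const [key, value] of Object.entries(saved)) {
      if (value === undefined) delete process.env[key];
      else process.env[key] = value;
    }
  }
}

console.log(withinEnv({ PRIVATE: '1' }, () => process.env.PRIVATE)); // '1'
```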

web/spec/database_spec.rb (new file)

@@ -0,0 +1,166 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "spec_helper"
require "sqlite3"
RSpec.describe PotatoMesh::App::Database do
let(:harness_class) do
Class.new do
extend PotatoMesh::App::Database
extend PotatoMesh::App::Helpers
class << self
attr_reader :warnings
# Capture warning log entries generated during migrations for
# inspection within the unit tests.
#
# @param message [String] warning message text.
# @param context [String] logical source of the log entry.
# @param metadata [Hash] structured metadata supplied by the caller.
# @return [void]
def warn_log(message, context:, **metadata)
@warnings ||= []
@warnings << { message: message, context: context, metadata: metadata }
end
# Capture debug log entries generated during migrations for
# completeness of the helper interface.
#
# @param message [String] debug message text.
# @param context [String] logical source of the log entry.
# @param metadata [Hash] structured metadata supplied by the caller.
# @return [void]
def debug_log(message, context:, **metadata)
@debug_entries ||= []
@debug_entries << { message: message, context: context, metadata: metadata }
end
# Reset captured log entries between test examples.
#
# @return [void]
def reset_logs!
@warnings = []
@debug_entries = []
end
end
end
end
around do |example|
harness_class.reset_logs!
Dir.mktmpdir("db-upgrade-spec-") do |dir|
db_path = File.join(dir, "mesh.db")
RSpec::Mocks.with_temporary_scope do
allow(PotatoMesh::Config).to receive(:db_path).and_return(db_path)
allow(PotatoMesh::Config).to receive(:default_db_path).and_return(db_path)
allow(PotatoMesh::Config).to receive(:legacy_db_path).and_return(db_path)
example.run
end
end
ensure
harness_class.reset_logs!
end
# Retrieve column names for the requested table within the temporary
# database used for upgrade tests.
#
# @param table [String] table name whose columns should be returned.
# @return [Array<String>] names of the columns defined on +table+.
def column_names_for(table)
db = SQLite3::Database.new(PotatoMesh::Config.db_path, readonly: true)
db.execute("PRAGMA table_info(#{table})").map { |row| row[1] }
ensure
db&.close
end
it "adds missing telemetry columns when upgrading an existing schema" do
SQLite3::Database.new(PotatoMesh::Config.db_path) do |db|
db.execute("CREATE TABLE nodes(node_id TEXT)")
db.execute("CREATE TABLE messages(id INTEGER PRIMARY KEY)")
db.execute <<~SQL
CREATE TABLE telemetry (
id INTEGER PRIMARY KEY,
node_id TEXT,
node_num INTEGER,
from_id TEXT,
to_id TEXT,
rx_time INTEGER NOT NULL,
rx_iso TEXT NOT NULL,
telemetry_time INTEGER,
channel INTEGER,
portnum TEXT,
hop_limit INTEGER,
snr REAL,
rssi INTEGER,
bitfield INTEGER,
payload_b64 TEXT,
battery_level REAL,
voltage REAL,
channel_utilization REAL,
air_util_tx REAL,
uptime_seconds INTEGER,
temperature REAL,
relative_humidity REAL,
barometric_pressure REAL
)
SQL
end
harness_class.ensure_schema_upgrades
telemetry_columns = column_names_for("telemetry")
expect(telemetry_columns).to include(
"gas_resistance",
"current",
"iaq",
"distance",
"lux",
"white_lux",
"ir_lux",
"uv_lux",
"wind_direction",
"wind_speed",
"weight",
"wind_gust",
"wind_lull",
"radiation",
"rainfall_1h",
"rainfall_24h",
"soil_moisture",
"soil_temperature",
)
expect { harness_class.ensure_schema_upgrades }.not_to raise_error
end
it "initialises the telemetry table when it is missing" do
SQLite3::Database.new(PotatoMesh::Config.db_path) do |db|
db.execute("CREATE TABLE nodes(node_id TEXT)")
db.execute("CREATE TABLE messages(id INTEGER PRIMARY KEY)")
end
expect(column_names_for("telemetry")).to be_empty
harness_class.ensure_schema_upgrades
telemetry_columns = column_names_for("telemetry")
expect(telemetry_columns).to include("soil_temperature", "lux", "iaq")
expect(telemetry_columns).to include("rx_time", "battery_level")
end
end

web/spec/federation_spec.rb (new file)

@@ -0,0 +1,479 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "spec_helper"
require "net/http"
require "openssl"
require "set"
require "uri"
require "socket"
RSpec.describe PotatoMesh::App::Federation do
subject(:federation_helpers) do
Class.new do
extend PotatoMesh::App::Federation
class << self
def debug_messages
@debug_messages ||= []
end
def debug_log(message, **_metadata)
debug_messages << message
end
def reset_debug_messages
@debug_messages = []
end
def warn_messages
@warn_messages ||= []
end
def warn_log(message, **_metadata)
warn_messages << message
end
def reset_warn_messages
@warn_messages = []
end
end
end
end
before do
federation_helpers.instance_variable_set(:@remote_instance_cert_store, nil)
federation_helpers.instance_variable_set(:@remote_instance_verify_callback, nil)
federation_helpers.reset_debug_messages
federation_helpers.reset_warn_messages
end
describe ".remote_instance_cert_store" do
it "initializes the store with default paths and disables CRL checks" do
store_double = Class.new do
attr_reader :default_paths_called, :assigned_flags
def set_default_paths
@default_paths_called = true
end
def flags=(value)
@assigned_flags = value
end
def respond_to_missing?(method_name, include_private = false)
method_name == :flags= || super
end
end.new
allow(OpenSSL::X509::Store).to receive(:new).and_return(store_double)
result = federation_helpers.remote_instance_cert_store
expect(result).to eq(store_double)
expect(store_double.default_paths_called).to be(true)
expect(store_double.assigned_flags).to eq(0)
end
it "memoizes the generated store" do
first = federation_helpers.remote_instance_cert_store
second = federation_helpers.remote_instance_cert_store
expect(second).to equal(first)
end
it "logs and returns nil when initialization fails" do
allow(OpenSSL::X509::Store).to receive(:new).and_raise(OpenSSL::X509::StoreError, "boom")
expect(federation_helpers.remote_instance_cert_store).to be_nil
expect(federation_helpers.debug_messages.last).to include("Failed to initialize certificate store")
end
end
describe ".remote_instance_verify_callback" do
let(:callback) { federation_helpers.remote_instance_verify_callback }
it "memoizes the generated callback" do
first = federation_helpers.remote_instance_verify_callback
second = federation_helpers.remote_instance_verify_callback
expect(second).to equal(first)
end
it "allows the handshake to continue when CRLs are unavailable" do
store_context = instance_double(OpenSSL::X509::StoreContext, error: OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL)
expect(callback.call(false, store_context)).to be(true)
expect(federation_helpers.debug_messages.last).to include("Ignoring TLS CRL retrieval failure")
end
it "rejects other verification failures" do
store_context = instance_double(OpenSSL::X509::StoreContext, error: OpenSSL::X509::V_ERR_CERT_HAS_EXPIRED)
expect(callback.call(false, store_context)).to be(false)
end
it "falls back to the default behavior when the handshake is already valid" do
expect(callback.call(true, nil)).to be(true)
end
end
describe ".build_remote_http_client" do
let(:connect_timeout) { 5 }
let(:read_timeout) { 12 }
let(:public_addrinfo) { Addrinfo.ip("203.0.113.5") }
before do
allow(PotatoMesh::Config).to receive(:remote_instance_http_timeout).and_return(connect_timeout)
allow(PotatoMesh::Config).to receive(:remote_instance_read_timeout).and_return(read_timeout)
allow(Addrinfo).to receive(:getaddrinfo).and_return([public_addrinfo])
end
it "configures SSL settings for HTTPS endpoints" do
uri = URI.parse("https://remote.example.com/api")
store = OpenSSL::X509::Store.new
allow(federation_helpers).to receive(:remote_instance_cert_store).and_return(store)
callback = proc { true }
allow(federation_helpers).to receive(:remote_instance_verify_callback).and_return(callback)
http = federation_helpers.build_remote_http_client(uri)
expect(http.use_ssl?).to be(true)
expect(http.open_timeout).to eq(connect_timeout)
expect(http.read_timeout).to eq(read_timeout)
expect(http.cert_store).to eq(store)
expect(http.verify_mode).to eq(OpenSSL::SSL::VERIFY_PEER)
expect(http.verify_callback).to eq(callback)
if http.respond_to?(:min_version)
expect(http.min_version).to eq(:TLS1_2)
end
end
it "omits SSL configuration for HTTP endpoints" do
uri = URI.parse("http://remote.example.com/api")
http = federation_helpers.build_remote_http_client(uri)
expect(http.use_ssl?).to be(false)
expect(http.cert_store).to be_nil
expect(http.open_timeout).to eq(connect_timeout)
expect(http.read_timeout).to eq(read_timeout)
end
it "leaves the certificate store unset when unavailable" do
uri = URI.parse("https://remote.example.com/api")
allow(federation_helpers).to receive(:remote_instance_cert_store).and_return(nil)
allow(federation_helpers).to receive(:remote_instance_verify_callback).and_return(nil)
http = federation_helpers.build_remote_http_client(uri)
expect(http.cert_store).to be_nil
expect(http.verify_callback).to be_nil
end
it "rejects URIs that resolve exclusively to restricted addresses" do
uri = URI.parse("https://loopback.mesh/api")
allow(Addrinfo).to receive(:getaddrinfo).and_return([Addrinfo.ip("127.0.0.1")])
expect do
federation_helpers.build_remote_http_client(uri)
end.to raise_error(ArgumentError, "restricted domain")
end
it "binds the HTTP client to the first unrestricted address" do
uri = URI.parse("https://remote.example.com/api")
allow(Addrinfo).to receive(:getaddrinfo).and_return([
Addrinfo.ip("127.0.0.1"),
public_addrinfo,
Addrinfo.ip("10.0.0.3"),
])
http = federation_helpers.build_remote_http_client(uri)
if http.respond_to?(:ipaddr)
expect(http.ipaddr).to eq("203.0.113.5")
else
skip "Net::HTTP#ipaddr accessor unavailable"
end
end
end
describe ".ingest_known_instances_from!" do
let(:db) { double(:db) }
let(:seed_domain) { "seed.mesh" }
let(:payload_entries) do
Array.new(3) do |index|
{
"id" => "remote-#{index}",
"domain" => "ally-#{index}.mesh",
"pubkey" => "ignored-pubkey-#{index}",
"signature" => "ignored-signature-#{index}",
}
end
end
let(:attributes_list) do
payload_entries.map do |entry|
{
id: entry["id"],
domain: entry["domain"],
pubkey: entry["pubkey"],
name: nil,
version: nil,
channel: nil,
frequency: nil,
latitude: nil,
longitude: nil,
last_update_time: nil,
is_private: false,
}
end
end
let(:node_payload) do
Array.new(PotatoMesh::Config.remote_instance_min_node_count) do |index|
{ "node_id" => "node-#{index}", "last_heard" => Time.now.to_i - index }
end
end
let(:response_map) do
mapping = { [seed_domain, "/api/instances"] => [payload_entries, :instances] }
attributes_list.each do |attributes|
mapping[[attributes[:domain], "/api/nodes"]] = [node_payload, :nodes]
mapping[[attributes[:domain], "/api/instances"]] = [[], :instances]
end
mapping
end
before do
allow(federation_helpers).to receive(:fetch_instance_json) do |host, path|
response_map.fetch([host, path]) { [nil, []] }
end
allow(federation_helpers).to receive(:verify_instance_signature).and_return(true)
allow(federation_helpers).to receive(:validate_remote_nodes).and_return([true, nil])
payload_entries.each_with_index do |entry, index|
allow(federation_helpers).to receive(:remote_instance_attributes_from_payload).with(entry).and_return([attributes_list[index], "signature-#{index}", nil])
end
end
it "stops processing once the per-response limit is exceeded" do
processed_domains = []
allow(federation_helpers).to receive(:upsert_instance_record) do |_db, attrs, _signature|
processed_domains << attrs[:domain]
end
allow(PotatoMesh::Config).to receive(:federation_max_instances_per_response).and_return(2)
allow(PotatoMesh::Config).to receive(:federation_max_domains_per_crawl).and_return(10)
visited = federation_helpers.ingest_known_instances_from!(db, seed_domain)
expect(processed_domains).to eq([
attributes_list[0][:domain],
attributes_list[1][:domain],
])
expect(visited).to include(seed_domain, attributes_list[0][:domain], attributes_list[1][:domain])
expect(visited).not_to include(attributes_list[2][:domain])
expect(federation_helpers.debug_messages).to include(a_string_including("response limit"))
end
it "halts recursion once the crawl limit would be exceeded" do
processed_domains = []
allow(federation_helpers).to receive(:upsert_instance_record) do |_db, attrs, _signature|
processed_domains << attrs[:domain]
end
allow(PotatoMesh::Config).to receive(:federation_max_instances_per_response).and_return(5)
allow(PotatoMesh::Config).to receive(:federation_max_domains_per_crawl).and_return(2)
visited = federation_helpers.ingest_known_instances_from!(db, seed_domain)
expect(processed_domains).to eq([attributes_list.first[:domain]])
expect(visited).to include(seed_domain, attributes_list.first[:domain])
expect(visited).not_to include(attributes_list[1][:domain], attributes_list[2][:domain])
expect(federation_helpers.debug_messages).to include(a_string_including("crawl limit"))
end
end
describe ".federation_user_agent_header" do
it "combines the version and sanitized domain" do
allow(federation_helpers).to receive(:app_constant).and_call_original
allow(federation_helpers).to receive(:app_constant).with(:APP_VERSION).and_return("9.9.9")
allow(federation_helpers).to receive(:app_constant).with(:INSTANCE_DOMAIN).and_return("Example.Mesh")
header = federation_helpers.federation_user_agent_header
expect(header).to eq("PotatoMesh/9.9.9 (+https://example.mesh)")
end
it "falls back to the product name when the domain is unavailable" do
allow(federation_helpers).to receive(:app_constant).and_call_original
allow(federation_helpers).to receive(:app_constant).with(:APP_VERSION).and_return("1.2.3")
allow(federation_helpers).to receive(:app_constant).with(:INSTANCE_DOMAIN).and_return(nil)
header = federation_helpers.federation_user_agent_header
expect(header).to eq("PotatoMesh/1.2.3")
end
it "uses an explicit unknown marker when the version is blank" do
allow(federation_helpers).to receive(:app_constant).and_call_original
allow(federation_helpers).to receive(:app_constant).with(:APP_VERSION).and_return("")
allow(federation_helpers).to receive(:app_constant).with(:INSTANCE_DOMAIN).and_return("Example.Mesh")
header = federation_helpers.federation_user_agent_header
expect(header).to eq("PotatoMesh/unknown (+https://example.mesh)")
end
end
describe ".perform_instance_http_request" do
let(:uri) { URI.parse("https://remote.example.com/api") }
let(:http_client) { instance_double(Net::HTTP) }
before do
allow(federation_helpers).to receive(:build_remote_http_client).with(uri).and_return(http_client)
end
it "wraps errors that omit a message with the error class name" do
stub_const(
"RemoteTcpFailure",
Class.new(StandardError) do
def message
""
end
end,
)
allow(http_client).to receive(:start).and_raise(RemoteTcpFailure.new)
expect do
federation_helpers.send(:perform_instance_http_request, uri)
end.to raise_error(PotatoMesh::App::InstanceFetchError, "RemoteTcpFailure")
end
it "includes the error class name when the message omits it" do
allow(http_client).to receive(:start).and_raise(OpenSSL::SSL::SSLError.new("handshake failed"))
expect do
federation_helpers.send(:perform_instance_http_request, uri)
end.to raise_error(
PotatoMesh::App::InstanceFetchError,
"OpenSSL::SSL::SSLError: handshake failed",
)
end
it "preserves messages that already include the error class" do
allow(http_client).to receive(:start).and_raise(Net::ReadTimeout.new)
expect do
federation_helpers.send(:perform_instance_http_request, uri)
end.to raise_error(PotatoMesh::App::InstanceFetchError, "Net::ReadTimeout")
end
it "wraps restricted address resolution failures" do
allow(federation_helpers).to receive(:build_remote_http_client).and_call_original
allow(Addrinfo).to receive(:getaddrinfo).and_return([Addrinfo.ip("127.0.0.1")])
expect do
federation_helpers.send(:perform_instance_http_request, uri)
end.to raise_error(PotatoMesh::App::InstanceFetchError, "ArgumentError: restricted domain")
end
it "applies federation headers to instance fetch requests" do
connection = instance_double("Net::HTTPConnection")
success_response = Net::HTTPOK.new("1.1", "200", "OK")
allow(success_response).to receive(:body).and_return("{}")
allow(success_response).to receive(:code).and_return("200")
captured_request = nil
allow(http_client).to receive(:start) do |&block|
block.call(connection)
end
allow(connection).to receive(:request) do |request|
captured_request = request
success_response
end
result = federation_helpers.send(:perform_instance_http_request, uri)
expect(result).to eq("{}")
expect(captured_request).not_to be_nil
expect(captured_request["Accept"]).to eq("application/json")
expect(captured_request["User-Agent"]).to eq(federation_helpers.send(:federation_user_agent_header))
expect(captured_request["Content-Type"]).to be_nil
end
end
describe ".announce_instance_to_domain" do
let(:payload) { "{}" }
let(:https_uri) { URI.parse("https://remote.mesh/api/instances") }
let(:http_uri) { URI.parse("http://remote.mesh/api/instances") }
let(:http_connection) { instance_double("Net::HTTPConnection") }
let(:success_response) { Net::HTTPOK.new("1.1", "200", "OK") }
before do
allow(success_response).to receive(:code).and_return("200")
end
it "retries over HTTP when HTTPS connections are refused" do
https_client = instance_double(Net::HTTP)
http_client = instance_double(Net::HTTP)
allow(federation_helpers).to receive(:build_remote_http_client).with(https_uri).and_return(https_client)
allow(federation_helpers).to receive(:build_remote_http_client).with(http_uri).and_return(http_client)
allow(https_client).to receive(:start).and_raise(Errno::ECONNREFUSED.new("refused"))
allow(http_connection).to receive(:request).and_return(success_response)
allow(http_client).to receive(:start).and_yield(http_connection).and_return(success_response)
result = federation_helpers.announce_instance_to_domain("remote.mesh", payload)
expect(result).to be(true)
expect(federation_helpers.debug_messages).to include("HTTPS federation announcement failed, retrying with HTTP")
expect(federation_helpers.warn_messages).to be_empty
end
it "logs a warning when HTTPS refusal persists after HTTP fallback" do
https_client = instance_double(Net::HTTP)
http_client = instance_double(Net::HTTP)
allow(federation_helpers).to receive(:build_remote_http_client).with(https_uri).and_return(https_client)
allow(federation_helpers).to receive(:build_remote_http_client).with(http_uri).and_return(http_client)
allow(https_client).to receive(:start).and_raise(Errno::ECONNREFUSED.new("refused"))
allow(http_client).to receive(:start).and_raise(SocketError.new("dns failure"))
result = federation_helpers.announce_instance_to_domain("remote.mesh", payload)
expect(result).to be(false)
expect(federation_helpers.debug_messages).to include("HTTPS federation announcement failed, retrying with HTTP")
expect(
federation_helpers.warn_messages.count { |message| message.include?("Federation announcement raised exception") },
).to eq(2)
end
it "applies federation headers to announcement requests" do
https_client = instance_double(Net::HTTP)
allow(federation_helpers).to receive(:build_remote_http_client).with(https_uri).and_return(https_client)
captured_request = nil
allow(https_client).to receive(:start).and_yield(http_connection).and_return(success_response)
allow(http_connection).to receive(:request) do |request|
captured_request = request
success_response
end
result = federation_helpers.announce_instance_to_domain("remote.mesh", payload)
expect(result).to be(true)
expect(captured_request).not_to be_nil
expect(captured_request["Content-Type"]).to eq("application/json")
expect(captured_request["Accept"]).to eq("application/json")
expect(captured_request["User-Agent"]).to eq(federation_helpers.send(:federation_user_agent_header))
end
end
end
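The `.remote_instance_verify_callback` examples above pin down three behaviors: pass through handshakes that already verified, tolerate failures caused only by an unreachable CRL, and reject everything else. A hypothetical standalone sketch of such a callback — illustrative names, not the project's actual implementation:

```ruby
require "openssl"

module VerifyCallbackSketch
  # Memoized, mirroring the "memoizes the generated callback" example.
  # The proc matches OpenSSL's verify_callback arity: (preverify_ok, store_context).
  def self.callback(logger: nil)
    @callback ||= proc do |preverify_ok, store_context|
      if preverify_ok
        true
      elsif store_context &&
            store_context.error == OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL
        # CRL fetch failures are common in the wild; let the handshake continue.
        logger&.call("Ignoring TLS CRL retrieval failure")
        true
      else
        # Any other verification error (expired cert, bad chain, ...) is fatal.
        false
      end
    end
  end
end
```

A `Struct` with an `error` accessor is enough to exercise the branches without a real TLS handshake, which is essentially what the specs' `instance_double(OpenSSL::X509::StoreContext, error: ...)` does.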

web/spec/filesystem_spec.rb Normal file

@@ -0,0 +1,222 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "spec_helper"
RSpec.describe PotatoMesh::App::Filesystem do
let(:harness_class) do
Class.new do
extend PotatoMesh::App::Filesystem
class << self
def debug_entries
@debug_entries ||= []
end
def warning_entries
@warning_entries ||= []
end
def debug_log(message, context:, **metadata)
debug_entries << { message: message, context: context, metadata: metadata }
end
def warn_log(message, context:, **metadata)
warning_entries << { message: message, context: context, metadata: metadata }
end
def reset_logs!
@debug_entries = []
@warning_entries = []
end
end
end
end
around do |example|
harness_class.reset_logs!
example.run
harness_class.reset_logs!
end
describe "#perform_initial_filesystem_setup!" do
it "migrates the legacy database and keyfile" do
Dir.mktmpdir do |dir|
legacy_db = File.join(dir, "legacy", "mesh.db")
legacy_key = File.join(dir, "legacy-config", "keyfile")
new_db = File.join(dir, "data", "potato-mesh", "mesh.db")
new_key = File.join(dir, "config", "potato-mesh", "keyfile")
FileUtils.mkdir_p(File.dirname(legacy_db))
File.write(legacy_db, "db")
FileUtils.mkdir_p(File.dirname(legacy_key))
File.write(legacy_key, "key")
allow(PotatoMesh::Config).to receive_messages(
legacy_db_path: legacy_db,
db_path: new_db,
default_db_path: new_db,
keyfile_path: new_key,
)
allow(PotatoMesh::Config).to receive(:legacy_keyfile_candidates).and_return([legacy_key])
harness_class.perform_initial_filesystem_setup!
expect(File).to exist(new_db)
expect(File).to exist(new_key)
expect(File.read(new_db)).to eq("db")
expect(File.read(new_key)).to eq("key")
expect(File.stat(new_key).mode & 0o777).to eq(0o600)
expect(File.stat(new_db).mode & 0o777).to eq(0o600)
expect(harness_class.debug_entries.size).to eq(2)
expect(harness_class.warning_entries).to be_empty
end
end
it "migrates repository configuration assets from web/config" do
Dir.mktmpdir do |dir|
web_root = File.join(dir, "web")
legacy_key = File.join(web_root, "config", "potato-mesh", "keyfile")
legacy_well_known = File.join(web_root, "config", "potato-mesh", "well-known", "potato-mesh")
destination_root = File.join(dir, "xdg-config", "potato-mesh")
new_key = File.join(destination_root, "keyfile")
new_well_known = File.join(destination_root, "well-known", "potato-mesh")
FileUtils.mkdir_p(File.dirname(legacy_key))
File.write(legacy_key, "legacy-key")
FileUtils.mkdir_p(File.dirname(legacy_well_known))
File.write(legacy_well_known, "{\"legacy\":true}")
allow(PotatoMesh::Config).to receive(:web_root).and_return(web_root)
allow(PotatoMesh::Config).to receive(:keyfile_path).and_return(new_key)
allow(PotatoMesh::Config).to receive(:well_known_storage_root).and_return(File.dirname(new_well_known))
allow(PotatoMesh::Config).to receive(:well_known_relative_path).and_return(".well-known/potato-mesh")
allow(PotatoMesh::Config).to receive(:legacy_db_path).and_return(File.join(dir, "legacy", "mesh.db"))
allow(PotatoMesh::Config).to receive(:db_path).and_return(File.join(dir, "data", "potato-mesh", "mesh.db"))
allow(PotatoMesh::Config).to receive(:default_db_path).and_return(File.join(dir, "data", "potato-mesh", "mesh.db"))
harness_class.perform_initial_filesystem_setup!
expect(File).to exist(new_key)
expect(File.read(new_key)).to eq("legacy-key")
expect(File.stat(new_key).mode & 0o777).to eq(0o600)
expect(File).to exist(new_well_known)
expect(File.read(new_well_known)).to eq("{\"legacy\":true}")
expect(File.stat(new_well_known).mode & 0o777).to eq(0o644)
expect(harness_class.debug_entries.map { |entry| entry[:context] }).to include("filesystem.keys", "filesystem.well_known")
end
end
it "skips database migration when using a custom destination" do
Dir.mktmpdir do |dir|
legacy_db = File.join(dir, "legacy", "mesh.db")
new_db = File.join(dir, "custom", "database.db")
FileUtils.mkdir_p(File.dirname(legacy_db))
File.write(legacy_db, "db")
allow(PotatoMesh::Config).to receive_messages(
legacy_db_path: legacy_db,
db_path: new_db,
default_db_path: File.join(dir, "default", "mesh.db"),
legacy_keyfile_path: File.join(dir, "old", "keyfile"),
keyfile_path: File.join(dir, "config", "keyfile"),
)
harness_class.perform_initial_filesystem_setup!
expect(File).not_to exist(new_db)
end
end
end
describe "private migration helpers" do
it "does not migrate when the source is missing" do
Dir.mktmpdir do |dir|
destination = File.join(dir, "target", "file")
harness_class.send(
:migrate_legacy_file,
File.join(dir, "missing"),
destination,
chmod: 0o600,
context: "spec.context",
)
expect(File).not_to exist(destination)
expect(harness_class.debug_entries).to be_empty
end
end
it "does not overwrite existing destinations" do
Dir.mktmpdir do |dir|
source = File.join(dir, "source")
destination = File.join(dir, "destination")
File.write(source, "alpha")
FileUtils.mkdir_p(File.dirname(destination))
File.write(destination, "beta")
harness_class.send(
:migrate_legacy_file,
source,
destination,
chmod: 0o600,
context: "spec.context",
)
expect(File.read(destination)).to eq("beta")
end
end
it "ignores migrations when the source and destination are identical" do
Dir.mktmpdir do |dir|
path = File.join(dir, "shared")
File.write(path, "same")
harness_class.send(
:migrate_legacy_file,
path,
path,
chmod: 0o600,
context: "spec.context",
)
expect(harness_class.debug_entries).to be_empty
end
end
it "logs warnings when the migration fails" do
Dir.mktmpdir do |dir|
source = File.join(dir, "source")
destination = File.join(dir, "destination")
File.write(source, "data")
allow(FileUtils).to receive(:mkdir_p).and_raise(Errno::EACCES)
harness_class.send(
:migrate_legacy_file,
source,
destination,
chmod: 0o600,
context: "spec.context",
)
expect(harness_class.warning_entries.size).to eq(1)
expect(harness_class.debug_entries).to be_empty
end
ensure
allow(FileUtils).to receive(:mkdir_p).and_call_original
end
end
end

web/spec/identity_spec.rb Normal file

@@ -0,0 +1,122 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "spec_helper"
require "openssl"
RSpec.describe PotatoMesh::App::Identity do
let(:harness_class) do
Class.new do
extend PotatoMesh::App::Identity
end
end
describe ".load_or_generate_instance_private_key" do
it "loads an existing key without generating a new one" do
Dir.mktmpdir do |dir|
key_path = File.join(dir, "config", "potato-mesh", "keyfile")
FileUtils.mkdir_p(File.dirname(key_path))
key = OpenSSL::PKey::RSA.new(2048)
File.write(key_path, key.export)
allow(PotatoMesh::Config).to receive(:keyfile_path).and_return(key_path)
loaded_key, generated = harness_class.load_or_generate_instance_private_key
expect(generated).to be(false)
expect(loaded_key.to_pem).to eq(key.to_pem)
end
ensure
allow(PotatoMesh::Config).to receive(:keyfile_path).and_call_original
end
it "migrates a legacy keyfile before loading" do
Dir.mktmpdir do |dir|
key_path = File.join(dir, "config", "potato-mesh", "keyfile")
legacy_key_path = File.join(dir, "legacy", "keyfile")
FileUtils.mkdir_p(File.dirname(legacy_key_path))
key = OpenSSL::PKey::RSA.new(2048)
File.write(legacy_key_path, key.export)
allow(PotatoMesh::Config).to receive(:keyfile_path).and_return(key_path)
allow(PotatoMesh::Config).to receive(:legacy_keyfile_candidates).and_return([legacy_key_path])
loaded_key, generated = harness_class.load_or_generate_instance_private_key
expect(generated).to be(false)
expect(loaded_key.to_pem).to eq(key.to_pem)
expect(File.exist?(key_path)).to be(true)
expect(File.binread(key_path)).to eq(key.export)
end
ensure
allow(PotatoMesh::Config).to receive(:keyfile_path).and_call_original
allow(PotatoMesh::Config).to receive(:legacy_keyfile_candidates).and_call_original
end
end
describe ".refresh_well_known_document_if_stale" do
let(:storage_dir) { Dir.mktmpdir }
let(:well_known_path) do
File.join(storage_dir, File.basename(PotatoMesh::Config.well_known_relative_path))
end
before do
allow(PotatoMesh::Config).to receive(:well_known_storage_root).and_return(storage_dir)
allow(PotatoMesh::Config).to receive(:well_known_relative_path).and_return(".well-known/potato-mesh")
allow(PotatoMesh::Config).to receive(:well_known_refresh_interval).and_return(86_400)
allow(PotatoMesh::Sanitizer).to receive(:sanitized_site_name).and_return("Test Instance")
allow(PotatoMesh::Sanitizer).to receive(:sanitize_instance_domain).and_return("example.com")
end
after do
FileUtils.remove_entry(storage_dir)
allow(PotatoMesh::Config).to receive(:well_known_storage_root).and_call_original
allow(PotatoMesh::Config).to receive(:well_known_relative_path).and_call_original
allow(PotatoMesh::Config).to receive(:well_known_refresh_interval).and_call_original
allow(PotatoMesh::Sanitizer).to receive(:sanitized_site_name).and_call_original
allow(PotatoMesh::Sanitizer).to receive(:sanitize_instance_domain).and_call_original
end
it "writes a well-known document when none exists" do
PotatoMesh::Application.refresh_well_known_document_if_stale
expect(File.exist?(well_known_path)).to be(true)
document = JSON.parse(File.read(well_known_path))
expect(document.fetch("version")).to eq(PotatoMesh::Application::APP_VERSION)
expect(document.fetch("domain")).to eq("example.com")
end
it "rewrites the document when configuration values change" do
PotatoMesh::Application.refresh_well_known_document_if_stale
original_contents = File.binread(well_known_path)
stub_const("PotatoMesh::Application::APP_VERSION", "9.9.9-test")
PotatoMesh::Application.refresh_well_known_document_if_stale
rewritten_contents = File.binread(well_known_path)
expect(rewritten_contents).not_to eq(original_contents)
document = JSON.parse(rewritten_contents)
expect(document.fetch("version")).to eq("9.9.9-test")
end
it "does not rewrite when content is current and within the refresh interval" do
PotatoMesh::Application.refresh_well_known_document_if_stale
original_contents = File.binread(well_known_path)
PotatoMesh::Application.refresh_well_known_document_if_stale
expect(File.binread(well_known_path)).to eq(original_contents)
end
end
end


@@ -0,0 +1,99 @@
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "spec_helper"
require "sqlite3"
RSpec.describe PotatoMesh::App::Instances do
let(:application_class) { PotatoMesh::Application }
let(:week_seconds) { PotatoMesh::Config.week_seconds }
# Execute the provided block with a configured SQLite connection.
#
# @param readonly [Boolean] whether the connection should be read-only.
# @yieldparam db [SQLite3::Database] configured database handle.
# @return [void]
def with_db(readonly: false)
db = SQLite3::Database.new(PotatoMesh::Config.db_path, readonly: readonly)
db.busy_timeout = PotatoMesh::Config.db_busy_timeout_ms
db.execute("PRAGMA foreign_keys = ON")
yield db
ensure
db&.close
end
before do
FileUtils.mkdir_p(File.dirname(PotatoMesh::Config.db_path))
application_class.init_db unless application_class.db_schema_present?
with_db do |db|
db.execute("DELETE FROM instances")
end
end
describe ".load_instances_for_api" do
it "only returns instances updated within the configured rolling window" do
fixed_time = Time.utc(2025, 1, 15, 12, 0, 0)
allow(Time).to receive(:now).and_return(fixed_time)
application_class.ensure_self_instance_record!
recent_timestamp = fixed_time.to_i - (week_seconds / 2)
stale_timestamp = fixed_time.to_i - week_seconds - 60
with_db do |db|
db.execute(
"INSERT INTO instances (id, domain, pubkey, last_update_time, is_private) VALUES (?, ?, ?, ?, ?)",
[
"recent-instance",
"recent.mesh.test",
PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
recent_timestamp,
0,
],
)
db.execute(
"INSERT INTO instances (id, domain, pubkey, last_update_time, is_private) VALUES (?, ?, ?, ?, ?)",
[
"stale-instance",
"stale.mesh.test",
PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
stale_timestamp,
0,
],
)
db.execute(
"INSERT INTO instances (id, domain, pubkey, is_private) VALUES (?, ?, ?, ?)",
[
"missing-instance",
"missing.mesh.test",
PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
0,
],
)
end
payload = application_class.load_instances_for_api
domains = payload.map { |row| row["domain"] }
lower_bound = fixed_time.to_i - week_seconds
expect(domains).to include("recent.mesh.test")
expect(domains).to include(application_class.app_constant(:INSTANCE_DOMAIN))
expect(domains).not_to include("stale.mesh.test")
expect(domains).not_to include("missing.mesh.test")
expect(payload.all? { |row| row["lastUpdateTime"] >= lower_bound }).to be(true)
end
end
end
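The rolling-window example above asserts that `load_instances_for_api` drops rows whose `last_update_time` is missing or older than one week. The filter itself can be sketched in plain Ruby — a hypothetical stand-in for the SQL the application presumably runs, with illustrative names:

```ruby
WEEK_SECONDS = 7 * 24 * 3600

# Keep only rows updated within the past week; rows with no
# last_update_time are excluded, matching the spec's "missing-instance".
def within_rolling_window(rows, now: Time.now.to_i)
  lower_bound = now - WEEK_SECONDS
  rows.select do |row|
    ts = row["last_update_time"]
    ts && ts >= lower_bound
  end
end
```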

web/spec/logging_spec.rb Normal file

@@ -0,0 +1,37 @@
# frozen_string_literal: true
require "spec_helper"
require "potato_mesh/logging"
describe PotatoMesh::Logging do
describe ".formatter" do
it "generates structured log entries" do
timestamp = Time.utc(2024, 1, 2, 3, 4, 5, 678_000)
formatted = described_class.formatter("DEBUG", timestamp, "potato-mesh", "hello")
expect(formatted).to eq("[2024-01-02T03:04:05.678Z] [potato-mesh] [debug] hello\n")
end
end
describe ".log" do
it "passes structured metadata to the logger" do
logger = instance_double(Logger)
expect(logger).to receive(:debug).with("context=test foo=\"bar\" hello")
described_class.log(logger, :debug, "hello", context: "test", foo: "bar")
end
end
describe ".logger_for" do
it "returns the logger from an object with settings" do
container = Class.new do
def settings
Struct.new(:logger).new(:logger)
end
end
expect(described_class.logger_for(container.new)).to eq(:logger)
end
end
end
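The `.formatter` example above fixes the exact log line shape: ISO-8601 UTC timestamp with millisecond precision, bracketed program name, lowercased severity, trailing newline. A hypothetical free-standing sketch that reproduces that format:

```ruby
# Mirror the log-entry shape asserted by the formatter spec above.
# Ruby's strftime %L directive emits the millisecond component.
def format_log(severity, time, progname, msg)
  stamp = time.utc.strftime("%Y-%m-%dT%H:%M:%S.%LZ")
  "[#{stamp}] [#{progname}] [#{severity.downcase}] #{msg}\n"
end
```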


@@ -0,0 +1,49 @@
# frozen_string_literal: true
require "spec_helper"
RSpec.describe PotatoMesh::Application do
describe ".canonicalize_configured_instance_domain" do
subject(:canonicalize) { described_class.canonicalize_configured_instance_domain(input) }
context "with an IPv6 URL" do
let(:input) { "http://[::1]" }
it "retains brackets around the literal" do
expect(canonicalize).to eq("[::1]")
end
end
context "with an IPv6 URL including a non-default port" do
let(:input) { "http://[::1]:8080" }
it "keeps the literal bracketed and appends the port" do
expect(canonicalize).to eq("[::1]:8080")
end
end
context "with a bare IPv6 literal" do
let(:input) { "::1" }
it "wraps the literal in brackets" do
expect(canonicalize).to eq("[::1]")
end
end
context "with a bare IPv6 literal and port" do
let(:input) { "::1:9000" }
it "wraps the literal in brackets and preserves the port" do
expect(canonicalize).to eq("[::1]:9000")
end
end
context "with an IPv4 literal" do
let(:input) { "http://127.0.0.1" }
it "returns the literal without brackets" do
expect(canonicalize).to eq("127.0.0.1")
end
end
end
end
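For the URL-shaped inputs above, the canonicalization reduces to stripping the scheme, keeping IPv6 literals bracketed, and appending only non-default ports. A hypothetical sketch of just those URL cases (the bare-literal heuristics the specs also exercise are deliberately omitted); it leans on the fact that `URI::Generic#host` preserves the brackets around IPv6 literals:

```ruby
require "uri"

# Canonicalize a URL into host[:port], dropping the scheme and any
# default port. IPv6 hosts stay bracketed because URI#host keeps them so.
def canonical_domain_from_url(input)
  uri = URI.parse(input)
  if uri.port && uri.port != uri.default_port
    "#{uri.host}:#{uri.port}"
  else
    uri.host
  end
end
```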

web/spec/sanitizer_spec.rb Normal file

@@ -0,0 +1,117 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "spec_helper"
require "ipaddr"
require "potato_mesh/sanitizer"
RSpec.describe PotatoMesh::Sanitizer do
describe ".string_or_nil" do
it "returns trimmed strings or nil" do
expect(described_class.string_or_nil(" value \n")).to eq("value")
expect(described_class.string_or_nil(" \t ")).to be_nil
expect(described_class.string_or_nil(nil)).to be_nil
expect(described_class.string_or_nil(123)).to eq("123")
end
end
describe ".sanitize_instance_domain" do
it "rejects invalid domains" do
expect(described_class.sanitize_instance_domain(nil)).to be_nil
expect(described_class.sanitize_instance_domain(" ")).to be_nil
expect(described_class.sanitize_instance_domain("example")).to be_nil
expect(described_class.sanitize_instance_domain("example.org/")).to be_nil
expect(described_class.sanitize_instance_domain("example .org")).to be_nil
expect(described_class.sanitize_instance_domain("mesh_instance.example")).to be_nil
expect(described_class.sanitize_instance_domain("example.org:70000")).to be_nil
expect(described_class.sanitize_instance_domain("[::1")).to be_nil
end
it "normalises valid domains" do
expect(described_class.sanitize_instance_domain(" Example.Org. ")).to eq("example.org")
expect(described_class.sanitize_instance_domain("Example.Org:443")).to eq("example.org:443")
expect(described_class.sanitize_instance_domain("[2001:DB8::1]")).to eq("[2001:db8::1]")
expect(described_class.sanitize_instance_domain("127.0.0.1:8080")).to eq("127.0.0.1:8080")
end
it "preserves case when requested" do
expect(described_class.sanitize_instance_domain("Mesh.Example", downcase: false)).to eq("Mesh.Example")
expect(described_class.sanitize_instance_domain("[2001:DB8::1]", downcase: false)).to eq("[2001:DB8::1]")
end
end
describe ".instance_domain_host" do
it "extracts hosts from literal and host:port values" do
expect(described_class.instance_domain_host("example.com:443")).to eq("example.com")
expect(described_class.instance_domain_host("[::1]:9000")).to eq("::1")
expect(described_class.instance_domain_host("::1")).to eq("::1")
expect(described_class.instance_domain_host("bad:port:name")).to eq("bad:port:name")
expect(described_class.instance_domain_host("[::1:invalid")).to be_nil
end
end
describe ".ip_from_domain" do
it "parses valid IP literals and rejects hostnames" do
expect(described_class.ip_from_domain("127.0.0.1")).to eq(IPAddr.new("127.0.0.1"))
expect(described_class.ip_from_domain("[2001:db8::1]:443")).to eq(IPAddr.new("2001:db8::1"))
expect(described_class.ip_from_domain("example.org")).to be_nil
end
end
describe "sanitised configuration accessors" do
before do
allow(PotatoMesh::Config).to receive_messages(
site_name: " Spec Mesh ",
channel: " #Spec ",
frequency: " 915MHz ",
contact_link: " #room:example.org ",
max_distance_km: 42,
)
end
it "provides trimmed strings" do
expect(described_class.sanitized_site_name).to eq("Spec Mesh")
expect(described_class.sanitized_channel).to eq("#Spec")
expect(described_class.sanitized_frequency).to eq("915MHz")
expect(described_class.sanitized_contact_link).to eq("#room:example.org")
expect(described_class.sanitized_contact_link_url).to eq("https://matrix.to/#/#room:example.org")
expect(described_class.sanitized_max_distance_km).to eq(42)
end
it "returns nil when the contact link is blank" do
allow(PotatoMesh::Config).to receive(:contact_link).and_return(" \t ")
expect(described_class.sanitized_contact_link).to be_nil
expect(described_class.sanitized_contact_link_url).to be_nil
end
it "returns nil when the distance is not positive" do
allow(PotatoMesh::Config).to receive(:max_distance_km).and_return(0)
expect(described_class.sanitized_max_distance_km).to be_nil
end
it "returns nil when the distance is not numeric" do
allow(PotatoMesh::Config).to receive(:max_distance_km).and_return("far")
expect(described_class.sanitized_max_distance_km).to be_nil
end
end
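The contact-link specs imply a small mapping: a trimmed Matrix-style alias such as `#room:example.org` becomes a `matrix.to` permalink, while a blank value yields nil. A hedged sketch of that derivation (the real accessor may also handle other link shapes):

```ruby
# Illustrative contact-link URL builder; assumes Matrix aliases/IDs map to
# matrix.to permalinks and plain URLs pass through unchanged.
def contact_link_url(link)
  s = link.to_s.strip
  return nil if s.empty?
  return s if s.match?(%r{\Ahttps?://})           # already a URL
  return "https://matrix.to/#/#{s}" if s.start_with?("#", "!", "@")
  nil
end
```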
describe ".sanitized_string" do
it "always returns a string representation" do
expect(described_class.sanitized_string(:symbol)).to eq("symbol")
end
end
end

View File

@@ -34,9 +34,14 @@ require "tmpdir"
require "fileutils"
ENV["RACK_ENV"] = "test"
ENV["INSTANCE_DOMAIN"] ||= "spec.mesh.test"
SPEC_TMPDIR = Dir.mktmpdir("potato-mesh-spec-")
ENV["MESH_DB"] = File.join(SPEC_TMPDIR, "mesh.db")
ENV["XDG_DATA_HOME"] = File.join(SPEC_TMPDIR, "xdg-data")
ENV["XDG_CONFIG_HOME"] = File.join(SPEC_TMPDIR, "xdg-config")
FileUtils.mkdir_p(ENV["XDG_DATA_HOME"])
FileUtils.mkdir_p(ENV["XDG_CONFIG_HOME"])
require_relative "../app"
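The helper above points `MESH_DB` and the XDG directories at a fresh `Dir.mktmpdir` so specs never touch real state. The excerpt shows no teardown; one common pattern for such a process-wide tmpdir is an `at_exit` hook (an assumption here, not the project's confirmed behaviour):

```ruby
require "tmpdir"
require "fileutils"

# Illustrative: create an isolated spec tmpdir and remove it when the
# test process exits (assumed pattern, not taken from the repo).
spec_tmpdir = Dir.mktmpdir("potato-mesh-spec-")
at_exit { FileUtils.remove_entry(spec_tmpdir) if File.directory?(spec_tmpdir) }
```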

View File

@@ -66,24 +66,34 @@
crossorigin=""
></script>
</head>
<% body_classes = [] %>
<% body_classes << "dark" if initial_theme == "dark" %>
<body class="<%= body_classes.join(" ") %>" data-app-config="<%= Rack::Utils.escape_html(app_config_json) %>" data-theme="<%= initial_theme %>">
<h1 class="site-title">
<img src="/potatomesh-logo.svg" alt="" aria-hidden="true" />
<span class="site-title-text"><%= site_name %></span>
</h1>
<div class="site-header">
<h1 class="site-title">
<img src="/potatomesh-logo.svg" alt="" aria-hidden="true" />
<span class="site-title-text"><%= site_name %></span>
</h1>
<% if !private_mode && federation_enabled %>
<div class="instance-selector">
<label class="visually-hidden" for="instanceSelect">Select a region</label>
<select id="instanceSelect" class="instance-select" aria-label="Select instance region">
<option value=""><%= Rack::Utils.escape_html("Select region ...") %></option>
</select>
</div>
<% end %>
</div>
<div class="row meta">
<div class="meta-info">
<div class="refresh-row">
<p id="refreshInfo" class="refresh-info" aria-live="polite"><%= default_channel %> (<%= default_frequency %>) — active nodes: …</p>
<p id="refreshInfo" class="refresh-info" aria-live="polite"><%= channel %> (<%= frequency %>) — active nodes: …</p>
<div class="refresh-actions">
<label class="auto-refresh-toggle"><input type="checkbox" id="autoRefresh" checked /> Auto-refresh every <%= refresh_interval_seconds %> seconds</label>
<button id="refreshBtn" type="button">Refresh now</button>
@@ -97,8 +107,8 @@
<input type="text" id="filterInput" placeholder="Filter nodes" />
<button type="button" id="filterClear" class="filter-clear" aria-label="Clear filter" hidden>&times;</button>
</div>
<button id="themeToggle" type="button" aria-label="Toggle dark mode">🌙</button>
<button id="infoBtn" type="button" aria-haspopup="dialog" aria-controls="infoOverlay" aria-label="Show site information"> Info</button>
<button id="themeToggle" class="icon-button" type="button" aria-label="Toggle dark mode"><span aria-hidden="true">🌙</span></button>
<button id="infoBtn" class="icon-button" type="button" aria-haspopup="dialog" aria-controls="infoOverlay" aria-label="Show site information"><span aria-hidden="true"></span></button>
</div>
</div>
@@ -106,21 +116,22 @@
<div class="info-dialog" tabindex="-1">
<button type="button" class="info-close" id="infoClose" aria-label="Close site information">×</button>
<h2 id="infoTitle" class="info-title">About <%= site_name %></h2>
<p class="info-intro">Quick facts about this PotatoMesh instance.</p>
<dl class="info-details">
<dt>Default channel</dt>
<dd><%= default_channel %></dd>
<dt>Channel</dt>
<dd><%= channel %></dd>
<dt>Frequency</dt>
<dd><%= default_frequency %></dd>
<dd><%= frequency %></dd>
<dt>Map center</dt>
<dd><%= format("%.5f, %.5f", map_center_lat, map_center_lon) %></dd>
<dt>Visible range</dt>
<dd>Nodes within roughly <%= max_node_distance_km %> km of the center are shown.</dd>
<dt>Auto-refresh</dt>
<dd>Updates every <%= refresh_interval_seconds %> seconds.</dd>
<% if matrix_room && !matrix_room.empty? %>
<dt>Matrix room</dt>
<dd><a href="https://matrix.to/#/<%= matrix_room %>" target="_blank" rel="noreferrer noopener"><%= matrix_room %></a></dd>
<dd>Nodes within roughly <%= max_distance_km %> km of the center are shown.</dd>
<% if contact_link && !contact_link.empty? %>
<dt>Chat</dt>
<% if contact_link_url %>
<dd><a href="<%= contact_link_url %>" target="_blank" rel="noreferrer noopener"><%= contact_link %></a></dd>
<% else %>
<dd><%= contact_link %></dd>
<% end %>
<% end %>
</dl>
</div>
@@ -162,47 +173,62 @@
<table id="nodes">
<thead>
<tr>
<th><button type="button" class="sort-button" data-sort-key="node_id" data-sort-label="Node ID">Node ID <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="short_name" data-sort-label="Short Name">Short <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="long_name" data-sort-label="Long Name">Long Name <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="last_heard" data-sort-label="Last Seen">Last Seen <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="role" data-sort-label="Role">Role <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="hw_model" data-sort-label="Hardware Model">HW Model <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="battery_level" data-sort-label="Battery Level">Battery <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="voltage" data-sort-label="Voltage">Voltage <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="uptime_seconds" data-sort-label="Uptime">Uptime <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="channel_utilization" data-sort-label="Channel Utilization">Channel Util <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="air_util_tx" data-sort-label="Air Utilization (Tx)">Air Util Tx <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="temperature" data-sort-label="Temperature">Temperature <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="relative_humidity" data-sort-label="Humidity">Humidity <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="barometric_pressure" data-sort-label="Barometric Pressure">Pressure <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="latitude" data-sort-label="Latitude">Latitude <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="longitude" data-sort-label="Longitude">Longitude <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="altitude" data-sort-label="Altitude">Altitude <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th><button type="button" class="sort-button" data-sort-key="position_time" data-sort-label="Last Position">Last Position <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--node-id"><button type="button" class="sort-button" data-sort-key="node_id" data-sort-label="Node ID">Node ID <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--short-name"><button type="button" class="sort-button" data-sort-key="short_name" data-sort-label="Short Name">Short <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--long-name"><button type="button" class="sort-button" data-sort-key="long_name" data-sort-label="Long Name">Long Name <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--last-seen"><button type="button" class="sort-button" data-sort-key="last_heard" data-sort-label="Last Seen">Last Seen <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--role"><button type="button" class="sort-button" data-sort-key="role" data-sort-label="Role">Role <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--hw-model"><button type="button" class="sort-button" data-sort-key="hw_model" data-sort-label="Hardware Model">HW Model <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--battery"><button type="button" class="sort-button" data-sort-key="battery_level" data-sort-label="Battery Level">Battery <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--voltage"><button type="button" class="sort-button" data-sort-key="voltage" data-sort-label="Voltage">Voltage <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--uptime"><button type="button" class="sort-button" data-sort-key="uptime_seconds" data-sort-label="Uptime">Uptime <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--channel-util"><button type="button" class="sort-button" data-sort-key="channel_utilization" data-sort-label="Channel Utilization">Channel Util <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--air-util-tx"><button type="button" class="sort-button" data-sort-key="air_util_tx" data-sort-label="Air Utilization (Tx)">Air Util Tx <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--temperature"><button type="button" class="sort-button" data-sort-key="temperature" data-sort-label="Temperature">Temperature <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--humidity"><button type="button" class="sort-button" data-sort-key="relative_humidity" data-sort-label="Humidity">Humidity <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--pressure"><button type="button" class="sort-button" data-sort-key="barometric_pressure" data-sort-label="Barometric Pressure">Pressure <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--latitude"><button type="button" class="sort-button" data-sort-key="latitude" data-sort-label="Latitude">Latitude <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--longitude"><button type="button" class="sort-button" data-sort-key="longitude" data-sort-label="Longitude">Longitude <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--altitude"><button type="button" class="sort-button" data-sort-key="altitude" data-sort-label="Altitude">Altitude <span class="sort-indicator" aria-hidden="true"></span></button></th>
<th class="nodes-col nodes-col--last-position"><button type="button" class="sort-button" data-sort-key="position_time" data-sort-label="Last Position">Last Position <span class="sort-indicator" aria-hidden="true"></span></button></th>
</tr>
</thead>
<tbody></tbody>
</table>
<div id="shortInfoOverlay" class="short-info-overlay" role="dialog" hidden>
<button type="button" class="short-info-close" aria-label="Close node details">×</button>
<div class="short-info-content"></div>
</div>
<template id="shortInfoOverlayTemplate">
<div class="short-info-overlay" role="dialog" aria-modal="false">
<button type="button" class="short-info-close" aria-label="Close node details">×</button>
<div class="short-info-content"></div>
</div>
</template>
<footer>
PotatoMesh
<% if version && !version.empty? %>
<span class="mono"><%= version %></span> —
<% end %>
GitHub: <a href="https://github.com/l5yth/potato-mesh" target="_blank">l5yth/potato-mesh</a>
<% if matrix_room && !matrix_room.empty? %>
— <%= site_name %> Matrix:
<a href="https://matrix.to/#/<%= matrix_room %>" target="_blank"><%= matrix_room %></a>
<% end %>
<footer class="app-footer">
<div class="footer-content">
<span class="footer-brand">PotatoMesh</span>
<% if version && !version.empty? %>
<span class="mono"><%= version %></span>
<% end %>
<span class="footer-separator" aria-hidden="true">—</span>
<span class="footer-links">
GitHub:
<a href="https://github.com/l5yth/potato-mesh" target="_blank">l5yth/potato-mesh</a>
<% if contact_link && !contact_link.empty? %>
<span class="footer-separator" aria-hidden="true">—</span>
<span class="footer-contact">
<%= site_name %> chat:
<% if contact_link_url %>
<a href="<%= contact_link_url %>" target="_blank"><%= contact_link %></a>
<% else %>
<%= contact_link %>
<% end %>
</span>
<% end %>
</span>
</div>
</footer>
<script>
const CHAT_ENABLED = <%= private_mode ? "false" : "true" %>;
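Taken with the template's `!private_mode && federation_enabled` guard around the instance selector and the commit log ("Ensure private mode disables chat messaging" / "... disables federation"), the flag interplay is: PRIVATE wins over FEDERATION. A hedged Ruby sketch of that precedence (method names are illustrative, not the app's API):

```ruby
# Illustrative precedence of the PRIVATE and FEDERATION flags:
# private mode disables chat and federation regardless of FEDERATION.
def chat_enabled?(private_mode:)
  !private_mode
end

def federation_active?(private_mode:, federation_flag:)
  federation_flag && !private_mode
end
```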