Mirror of <https://github.com/l5yth/potato-mesh.git>, synced 2026-05-07 05:44:50 +02:00
Compare commits
30 Commits
| SHA1 |
|---|
| 13b2ce9067 |
| 5a73e212a3 |
| 07c8e85caa |
| c08b3f2c2d |
| 851b2180dd |
| c175445251 |
| b951dbffeb |
| 10e6c99196 |
| aeb97477f0 |
| 81e588e44c |
| 083de6418f |
| 5b9e6e3d48 |
| 4a6ba38e94 |
| 4d38ddd341 |
| 267d2ec9e1 |
| 526a0c7246 |
| 95aa1de8a8 |
| d8b80c2a97 |
| 406fa80dd0 |
| de1ccc5a2e |
| 0a479e4517 |
| 8c59396ec8 |
| 3647cb125b |
| adc122fce0 |
| d33ebd8f4c |
| 06530f36ff |
| 3cfa0db7e6 |
| d9420ff13b |
| 7e0ba60a22 |
| 257e26c996 |
+11
-1
```diff
@@ -1,3 +1,6 @@
+# Copyright © 2025-26 l5yth & contributors
+# Licensed under the Apache License, Version 2.0 (see LICENSE)
+#
 # PotatoMesh Environment Configuration
 # Copy this file to .env and customize for your setup

@@ -14,7 +17,7 @@ INSTANCE_DOMAIN="mesh.example.org"
 # Generate a secure token: openssl rand -hex 32
 API_TOKEN="your-secure-api-token-here"

-# Meshtastic connection target (required for ingestor)
+# Mesh radio connection target (required for ingestor)
 # Common serial paths:
 # - Linux: /dev/ttyACM0, /dev/ttyUSB0
 # - macOS: /dev/cu.usbserial-*
@@ -23,6 +26,10 @@ API_TOKEN="your-secure-api-token-here"
 # Bluetooth address (e.g. ED:4D:9E:95:CF:60).
 CONNECTION="/dev/ttyACM0"

+# Mesh protocol to use (meshtastic or meshcore)
+# Default: meshtastic
+PROTOCOL="meshtastic"
+
 # =============================================================================
 # SITE CUSTOMIZATION
 # =============================================================================
@@ -68,6 +75,9 @@ PRIVATE=0
 # Debug mode (0=off, 1=on)
 DEBUG=0

+# Energy saving mode — sleep between ingestion cycles (0=off, 1=on)
+ENERGY_SAVING=0
+
 # Default map zoom override
 # MAP_ZOOM=15
```
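The `openssl rand -hex 32` command recommended for `API_TOKEN` above has a standard-library equivalent in Python; a minimal sketch (the function name is illustrative, not project code):

```python
import secrets

def generate_api_token(n_bytes: int = 32) -> str:
    """Return a hex token equivalent to `openssl rand -hex 32`."""
    return secrets.token_hex(n_bytes)

# Print a line ready to paste into .env
print(f'API_TOKEN="{generate_api_token()}"')
```

`secrets.token_hex(32)` yields 64 hex characters from a CSPRNG, matching the entropy of the openssl command.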
```diff
@@ -1,3 +1,6 @@
+<!-- Copyright © 2025-26 l5yth & contributors -->
+<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
+
 # GitHub Actions Workflows

 ## Workflows
@@ -10,12 +13,3 @@
 - **`mobile.yml`** - Flutter mobile tests with coverage reporting
 - **`release.yml`** - Tag-triggered Flutter release builds for Android and iOS

-## Usage
-
-```bash
-# Build locally
-docker-compose build
-
-# Deploy
-docker-compose up -d
-```
```
```diff
@@ -23,7 +23,7 @@ on:
 jobs:
   analyze:
     name: Analyze (${{ matrix.language }})
-    runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
+    runs-on: ubuntu-latest
     permissions:
       security-events: write
       packages: read
```
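The removed `runs-on` line uses the GitHub Actions `(condition && value) || fallback` expression idiom to pick a runner per matrix language. Its logic, restated in Python for clarity (the function name is illustrative):

```python
def select_runner(language: str) -> str:
    # Mirrors ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
    return (language == "swift" and "macos-latest") or "ubuntu-latest"

assert select_runner("swift") == "macos-latest"
assert select_runner("python") == "ubuntu-latest"
```

Note the known pitfall of this idiom in both languages: if the first branch evaluates to an empty (falsy) value, the fallback is taken even when the condition was true.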
```diff
@@ -188,7 +188,7 @@ jobs:
 docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:${{ steps.version.outputs.version }}
 docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:${{ steps.version.outputs.version_with_v }}
 docker run --rm --name ingestor-test \
-  -e POTATOMESH_INSTANCE=http://localhost:41447 \
+  -e INSTANCE_DOMAIN=http://localhost:41447 \
   -e API_TOKEN=test-token \
   -e CONNECTION=mock \
   -e DEBUG=1 \
```
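The smoke test above passes the renamed variable alongside the other required settings. A sketch of the kind of required-variable check an ingestor start-up might perform (variable names are taken from the hunk; the helper itself is hypothetical, not project code):

```python
REQUIRED = ("INSTANCE_DOMAIN", "API_TOKEN", "CONNECTION")

def missing_vars(env: dict) -> list:
    """Return required ingestor variables that are absent or empty in `env`."""
    return [k for k in REQUIRED if not env.get(k)]

env = {"INSTANCE_DOMAIN": "http://localhost:41447",
       "API_TOKEN": "test-token", "CONNECTION": "mock", "DEBUG": "1"}
assert missing_vars(env) == []
```

Failing fast on missing configuration is what lets a CI job like this one surface a renamed variable immediately.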
```diff
@@ -39,7 +39,7 @@ jobs:
 - name: Install dependencies
   run: |
     python -m pip install --upgrade pip
-    pip install black pytest pytest-cov meshtastic
+    pip install black pytest pytest-cov meshtastic meshcore
 - name: Test with pytest and coverage
   run: |
     mkdir -p reports
```
```diff
@@ -74,6 +75,9 @@ web/.config
 node_modules/
 web/node_modules/

+# Operator-customised static pages (keep only the shipped default)
+web/pages/*.md
+
 # Debug symbols
 ignored.txt
 ignored-*.txt
```
+103
@@ -1,5 +1,108 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->

# CHANGELOG

## v0.6.0

This is a service release of the radio mesh app suite `potato-mesh` v0.6.0 which introduces new features and overhauls the user interface. The most notable change is multi-protocol support, along with a **Meshcore** implementation in the ingestor, web app, and frontend.

Demo: <https://potatomesh.net/>

### Meshcore

To start ingesting Meshcore data into an upgraded potato-mesh web app, simply set `PROTOCOL="meshcore"` for your ingestor.

### About Pages

The other notable feature is the removal of the "darkmode" and "info" buttons in favor of customizable Markdown pages that allow more flexibility for custom content (info about presets, contact information, etc.); see `/pages/*.md` in the web app ([#723](https://github.com/l5yth/potato-mesh/pull/723)).

### Breaking Variable Changes

The following deprecated environment variables have finally been removed in this release ([#704](https://github.com/l5yth/potato-mesh/pull/704)):

* ~~POTATOMESH_INSTANCE~~ - please use `INSTANCE_DOMAIN`
* ~~MESH_SERIAL~~ and ~~PORT~~ - please use `CONNECTION`

### Features

* Web: add markdown static pages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/723>
* Data: trace analysis multi-ingestor support by @l5yth in <https://github.com/l5yth/potato-mesh/pull/721>
* Web: facelift by @l5yth in <https://github.com/l5yth/potato-mesh/pull/716>
* Web: sort channels by activity, not index by @l5yth in <https://github.com/l5yth/potato-mesh/pull/711>
* Data: derive meshcore channel probe bound from device max_channels by @l5yth in <https://github.com/l5yth/potato-mesh/pull/701>
* Web: define meshcore modem presets by @l5yth in <https://github.com/l5yth/potato-mesh/pull/696>
* Data: register meshcore channel mappings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/695>
* Data: provide frequency and modem preset for meshcore by @l5yth in <https://github.com/l5yth/potato-mesh/pull/694>
* Web: distinguish meshcore from meshtastic in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/688>
* [Meshcore] fix: get meshcore protocol icon displaying correctly by @benallfree in <https://github.com/l5yth/potato-mesh/pull/681>

### Fixes

* Web: fix federation for multi protocol by @l5yth in <https://github.com/l5yth/potato-mesh/pull/722>
* Data: fix position time updates by @l5yth in <https://github.com/l5yth/potato-mesh/pull/715>
* Data: fix meshcore ingestor self reporting by @l5yth in <https://github.com/l5yth/potato-mesh/pull/713>
* Web: reference meshcore nodes in chat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/709>
* Web: fix node disappearance role reset by @l5yth in <https://github.com/l5yth/potato-mesh/pull/707>
* Web: protect real node names from fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/702>
* Web: add proper short names for meshcore companions by @l5yth in <https://github.com/l5yth/potato-mesh/pull/693>
* Fix: address review comments from PRs #676 and #681 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/689>
* [Meshcore] fix: race condition by @benallfree in <https://github.com/l5yth/potato-mesh/pull/676>

### Chores

* Release: v0.6.0 — remove deprecated env var aliases by @l5yth in <https://github.com/l5yth/potato-mesh/pull/704>
* Chore: prepare codebase for breaking release by @l5yth in <https://github.com/l5yth/potato-mesh/pull/718>

## v0.5.12

This is a service release of the app potato-mesh v0.5.12 which improves performance and stability.

Notably, the frontend went through some graphical tweaks to prepare for an upcoming multi-protocol release (meshcore, reticulum, etc.).

* Enh: surface meshcore role types (#680) by @l5yth in <https://github.com/l5yth/potato-mesh/pull/685>
* Chore: refactor codebase before meshcore release by @l5yth in <https://github.com/l5yth/potato-mesh/pull/682>
* [Meshcore] enh: short name should be 1st 4 hex digits of public key by @benallfree in <https://github.com/l5yth/potato-mesh/pull/679>
* Chore: update xcode deps by @benallfree in <https://github.com/l5yth/potato-mesh/pull/674>
* Chore: update mesh.sh to use requirements file by @benallfree in <https://github.com/l5yth/potato-mesh/pull/675>
* Data/meshcore: fix ble and enable tcp by @l5yth in <https://github.com/l5yth/potato-mesh/pull/669>
* Data: handle store_forward and router_heartbeat portnum by @l5yth in <https://github.com/l5yth/potato-mesh/pull/667>
* Feat: implement meshcore provider by @l5yth in <https://github.com/l5yth/potato-mesh/pull/663>
* Ci: update dependabot and codecov settings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/666>
* Web: prepare release by @l5yth in <https://github.com/l5yth/potato-mesh/pull/665>
* App: only query meshtastic provider by @l5yth in <https://github.com/l5yth/potato-mesh/pull/664>
* Data: prepare ingestor for meshcore by @l5yth in <https://github.com/l5yth/potato-mesh/pull/658>
* Web: fix css issues by @l5yth in <https://github.com/l5yth/potato-mesh/pull/659>
* Web: prepare frontend for multi protocol by @l5yth in <https://github.com/l5yth/potato-mesh/pull/657>
* Feat: split device and power-sensor telemetry charts (#643) by @l5yth in <https://github.com/l5yth/potato-mesh/pull/656>
* Web: implement a 'protocol' field across systems by @l5yth in <https://github.com/l5yth/potato-mesh/pull/655>
* Fix upsert clearing node coordinates bug by @l5yth in <https://github.com/l5yth/potato-mesh/pull/654>
* Data: resolve circular dependency of daemon.py by @l5yth in <https://github.com/l5yth/potato-mesh/pull/653>
* Proposal: mesh provider pattern refactor by @benallfree in <https://github.com/l5yth/potato-mesh/pull/651>
* Build(deps): bump rustls-webpki from 0.103.8 to 0.103.10 in /matrix by @dependabot[bot] in <https://github.com/l5yth/potato-mesh/pull/649>
* Build(deps): bump quinn-proto from 0.11.13 to 0.11.14 in /matrix by @dependabot[bot] in <https://github.com/l5yth/potato-mesh/pull/646>

## v0.5.11

* Chore: bump version to 0.5.11 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/645>
* Web: limit horizontal size of dropdown by @l5yth in <https://github.com/l5yth/potato-mesh/pull/644>

## v0.5.10

* Web: expose node stats in distinct api by @l5yth in <https://github.com/l5yth/potato-mesh/pull/641>
* Web: do merge channels by name by @l5yth in <https://github.com/l5yth/potato-mesh/pull/640>
* Web: do not merge channels by ID in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/637>
* Web: do not touch neighbor last seen on neighbor info by @l5yth in <https://github.com/l5yth/potato-mesh/pull/636>
* Ingestor: report self id per packet by @l5yth in <https://github.com/l5yth/potato-mesh/pull/635>
* Ci: fix docker compose and docs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/634>
* Web: suppress encrypted text messages in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/633>
* Federation: ensure requests timeout properly and can be terminated by @l5yth in <https://github.com/l5yth/potato-mesh/pull/631>
* Build(deps): bump bytes from 1.11.0 to 1.11.1 in /matrix by @dependabot[bot] in <https://github.com/l5yth/potato-mesh/pull/627>
* Matrix: config loading now merges optional TOML with CLI/env/secret inputs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/617>
* Matrix: logs only non-sensitive config fields by @l5yth in <https://github.com/l5yth/potato-mesh/pull/616>
* Web: decrypted takes precedence by @l5yth in <https://github.com/l5yth/potato-mesh/pull/614>
* Add Apache 2.0 license headers to missing sources by @l5yth in <https://github.com/l5yth/potato-mesh/pull/615>
* Web: decrypt PSK-1 unencrypted messages on arrival by @l5yth in <https://github.com/l5yth/potato-mesh/pull/611>
* Web: daemonize federation worker pool to avoid deadlocks on stuck announcements by @l5yth in <https://github.com/l5yth/potato-mesh/pull/610>
* Web: add announcement banner by @l5yth in <https://github.com/l5yth/potato-mesh/pull/609>
* L5Y chore version 0510 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/608>

## v0.5.9

* Matrix: listen for synapse on port 41448 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/607>
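The v0.5.12 entry above changed Meshcore short names to the first four hex digits of a node's public key. A minimal sketch of that rule (the function name and the upper-casing are assumptions for illustration, not taken from the codebase):

```python
def short_name_from_pubkey(pubkey_hex: str) -> str:
    """Derive a 4-character short name from a public key's hex digits."""
    digits = pubkey_hex.removeprefix("0x")
    return digits[:4].upper()

assert short_name_from_pubkey("ed4d9e95cf60") == "ED4D"
```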
```diff
@@ -1,6 +1,9 @@
+<!-- Copyright © 2025-26 l5yth & contributors -->
+<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
+
 # Repository Guidelines

-Keep code as modular as possible to reduce duplication and improve reusability and readability. If a module grows large, split it into a submodule structure. Prefer composing small, single-purpose units over monolithic files.
+Keep code as modular as possible to reduce duplication and improve reusability and readability — this applies to tests as well as production code. If a module grows large, split it into a submodule structure. Prefer composing small, single-purpose units over monolithic files.

 Make sure all tests pass for Python (`pytest`), Ruby (`rspec`), and JavaScript (`npm test`).
@@ -8,14 +11,14 @@ All code must be 100% unit tested — every line, branch, and code path must hav
 All code must be 100% documented according to the language's API-doc standard (PDoc for Python, RDoc for Ruby, JSDoc for JavaScript, rustdoc for Rust, dartdoc for Dart). Documentation must be sufficient to generate complete API docs from source. In addition to API-level docs, add inline comments wherever the logic is not immediately self-evident.

-New source files should have Apache v2 license headers using the exact string `Copyright © 2025-26 l5yth & contributors`.
+Every file in the repository must carry an Apache v2 license notice using the exact string `Copyright © 2025-26 l5yth & contributors`. **Source-code files** (`.rb`, `.py`, `.js`, `.rs`, `.dart`, etc.) must include the full Apache v2 license header block. **Non-source files** (docs, configs, YAML, TOML, Dockerfiles, etc.) must include a short 2-line Apache v2 notice (copyright line + license reference).

 Run linters for Python (`black`) and Ruby (`rufo`) to ensure consistent code formatting.

 ## Project Structure & Module Organization
 The repository splits runtime and ingestion logic. `web/` holds the Sinatra dashboard (Ruby code in `lib/potato_mesh`, views in `views/`, static bundles in `public/`).

-`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. The ingestor is structured as the `data/mesh_ingestor/` package with the following key modules: `daemon.py` (main loop), `handlers.py` (packet processing), `interfaces.py` (interface helpers), `config.py` (env-driven config), `events.py` (TypedDict event schemas), `provider.py` (Provider protocol), `node_identity.py` (canonical node ID utilities), `decode_payload.py` (CLI protobuf decoder), and the `providers/` subpackage (currently `meshtastic.py`). API contracts for all POST ingest routes are documented in `data/mesh_ingestor/CONTRACTS.md`. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.
+`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. The ingestor is structured as the `data/mesh_ingestor/` package with the following key modules: `daemon.py` (main loop), `handlers.py` (packet processing), `interfaces.py` (interface helpers), `config.py` (env-driven config), `events.py` (TypedDict event schemas), `mesh_protocol.py` (MeshProtocol base), `node_identity.py` (canonical node ID utilities), `decode_payload.py` (CLI protobuf decoder), and the `protocols/` subpackage (currently `meshtastic.py`). API contracts for all POST ingest routes are documented in `data/mesh_ingestor/CONTRACTS.md`. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.

 `matrix/` contains the Rust Matrix bridge; build with `cargo build --release` or `docker build -f matrix/Dockerfile .`, and keep bridge config under `matrix/Config.toml` when running locally.
@@ -41,15 +44,15 @@ Ruby specs run with `cd web && bundle exec rspec`, producing SimpleCov output in
 The ingestion layer is tested with `pytest -q tests/`; leave fixtures in `tests/` untouched so CI can replay them. The suite includes both integration tests (`test_mesh.py`) and focused unit tests — `test_events_unit.py` (TypedDict schemas), `test_provider_unit.py` (Provider protocol conformance and `MeshtasticProvider`), `test_node_identity_unit.py` (canonical ID helpers), `test_daemon_unit.py`, `test_serialization_unit.py`, and `test_decode_payload.py`. New features should ship with matching specs and updated integration checks.

-## Adding a New Ingestor Provider
-The `data/mesh_ingestor/provider.py` module defines a `@runtime_checkable` `Provider` Protocol with five members: `name` (str), `subscribe()`, `connect(*, active_candidate)`, `extract_host_node_id(iface)`, and `node_snapshot_items(iface)`. To add a new backend (e.g. Reticulum, MeshCore):
+## Adding a New Ingestor Protocol
+The `data/mesh_ingestor/mesh_protocol.py` module defines a `@runtime_checkable` `MeshProtocol` class with five members: `name` (str), `subscribe()`, `connect(*, active_candidate)`, `extract_host_node_id(iface)`, and `node_snapshot_items(iface)`. To add a new backend (e.g. Reticulum):

-1. Create `data/mesh_ingestor/providers/<name>.py` with a class satisfying the Protocol.
-2. Register it in `data/mesh_ingestor/providers/__init__.py`.
+1. Create `data/mesh_ingestor/protocols/<name>.py` with a class satisfying the `MeshProtocol` interface.
+2. Register it in `data/mesh_ingestor/protocols/__init__.py`.
 3. Pass an instance via `daemon.main(provider=...)` or make it the default in `main()`.
-4. Cover the provider with unit tests in `tests/test_provider_unit.py` — at minimum an `isinstance(..., Provider)` conformance check and any retry/error-handling paths.
+4. Cover the protocol with unit tests in `tests/test_provider_unit.py` — at minimum an `isinstance(..., MeshProtocol)` conformance check and any retry/error-handling paths.

-Consult `data/mesh_ingestor/CONTRACTS.md` for the canonical event shapes all providers must emit.
+Consult `data/mesh_ingestor/CONTRACTS.md` for the canonical event shapes all protocols must emit.
```
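The five-member interface described in this hunk can be sketched as a `@runtime_checkable` structural protocol. This is an illustrative reconstruction from the member list above, not the actual `mesh_protocol.py` source; the `DummyBackend` class is invented purely for the conformance check:

```python
from typing import Any, Iterable, Protocol, runtime_checkable

@runtime_checkable
class MeshProtocol(Protocol):
    """Structural interface every ingestor backend must satisfy."""
    name: str

    def subscribe(self) -> None: ...
    def connect(self, *, active_candidate: Any) -> Any: ...
    def extract_host_node_id(self, iface: Any) -> str: ...
    def node_snapshot_items(self, iface: Any) -> Iterable[tuple]: ...

class DummyBackend:
    """Minimal conforming backend, used only to demonstrate the isinstance check."""
    name = "dummy"
    def subscribe(self) -> None: pass
    def connect(self, *, active_candidate: Any = None) -> Any: return None
    def extract_host_node_id(self, iface: Any) -> str: return "!00000000"
    def node_snapshot_items(self, iface: Any) -> Iterable[tuple]: return []

assert isinstance(DummyBackend(), MeshProtocol)
```

Note that `runtime_checkable` isinstance checks only verify member presence, not signatures, which is why step 4 above still asks for explicit retry/error-path tests.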
## GitHub Configuration Standards

Every language used in the repository must have a Dependabot entry checking for dependency updates on a **weekly** schedule. Keep the Dependabot config up to date as new languages or package ecosystems are added.
```diff
@@ -1,3 +1,6 @@
+<!-- Copyright © 2025-26 l5yth & contributors -->
+<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
+
 # PotatoMesh Docker Guide

 PotatoMesh publishes ready-to-run container images to the GitHub Packages container
@@ -13,16 +16,16 @@ will pull the latest release images for you.
 ## Images on GHCR

-| Service  | Image |
-|----------|-------|
-| Web UI   | `ghcr.io/l5yth/potato-mesh-web-linux-amd64:<tag>` (e.g. `latest`, `3.0`, `v3.0`, or `3.1.0-rc1`) |
-| Ingestor | `ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:<tag>` (e.g. `latest`, `3.0`, `v3.0`, or `3.1.0-rc1`) |
+| Service  | Image |
+|----------|-------|
+| Web UI   | `ghcr.io/l5yth/potato-mesh-web-linux-amd64:<tag>` (e.g. `latest`, `0.6.0`, `v0.6.0`, or `0.7.0-rc1`) |
+| Ingestor | `ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:<tag>` (e.g. `latest`, `0.6.0`, `v0.6.0`, or `0.7.0-rc1`) |

 Images are published for every tagged release. Stable builds receive both
-semantic version tags (for example `3.0`) and a matching `v`-prefixed tag (for
-example `v3.0`), plus a `latest` tag that tracks the newest stable release.
+semantic version tags (for example `0.6.0`) and a matching `v`-prefixed tag (for
+example `v0.6.0`), plus a `latest` tag that tracks the newest stable release.
 Pre-release tags (for example `-rc`, `-beta`, `-alpha`, or `-dev` suffixes) are
-published only with their explicit version strings (`3.1.0-rc1` and `v3.1.0-rc1`
+published only with their explicit version strings (`0.7.0-rc1` and `v0.7.0-rc1`
 in this example) and do **not** advance `latest`. Pin the versioned tags when
 you need a specific build.
```
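The tagging rules in the paragraph above can be summarized in a small sketch (the function is illustrative; the real logic lives in the release workflow):

```python
def published_tags(version: str) -> list:
    """Tags pushed for a release: version, v-prefixed version, and
    `latest` only for stable (non-prerelease) builds."""
    tags = [version, f"v{version}"]
    prerelease = any(s in version for s in ("-rc", "-beta", "-alpha", "-dev"))
    if not prerelease:
        tags.append("latest")
    return tags

assert published_tags("0.6.0") == ["0.6.0", "v0.6.0", "latest"]
assert published_tags("0.7.0-rc1") == ["0.7.0-rc1", "v0.7.0-rc1"]
```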
```diff
@@ -60,9 +63,8 @@ Additional environment variables are optional:
 | `CONNECTION` | `/dev/ttyACM0` | Serial device, TCP endpoint, or Bluetooth target used by the ingestor to reach the radio. |

 The ingestor posts to the URL configured via `INSTANCE_DOMAIN` (defaulting to
-`http://web:41447` in the provided compose file) and still accepts
-`POTATOMESH_INSTANCE` as a legacy alias when the primary variable is unset. Use
-`CHANNEL_INDEX` to select a LoRa channel on serial or Bluetooth connections.
+`http://web:41447` in the provided compose file). Use `CHANNEL_INDEX` to select
+a LoRa channel on serial or Bluetooth connections.

 ## Docker Compose file
@@ -79,6 +81,18 @@ the container. This path stores the instance private key and staged
 of container lifecycle events, generated credentials are not replaced on reboot
 or re-deploy.

+The `potatomesh_pages` volume mounts to `/app/pages` and holds operator-managed
+Markdown files that are rendered as static content pages in the web UI. On first
+start the default `1-about.md` page is copied from the image into the volume.
+You can add, edit, or remove `.md` files in this volume to customise your
+instance's navigation. To use a host directory instead of a named volume, replace
+the volume entry with a bind mount:
+
+```yaml
+volumes:
+  - ./my-pages:/app/pages
+```

 ## Start the stack

 From the directory containing the Compose file:
```
+30
-9
```diff
@@ -1,3 +1,4 @@
+# syntax=docker/dockerfile:1.6
 # Copyright © 2025-26 l5yth & contributors
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -25,6 +26,9 @@ ENV BUNDLE_FORCE_RUBY_PLATFORM=true
 # Install build dependencies and SQLite3
 RUN apk add --no-cache \
     build-base \
+    python3 \
+    py3-pip \
+    py3-virtualenv \
     sqlite-dev \
     linux-headers \
     pkgconfig
@@ -40,11 +44,16 @@ RUN bundle config set --local force_ruby_platform true && \
     bundle config set --local without 'development test' && \
     bundle install --jobs=4 --retry=3

+# Install Meshtastic decoder dependencies in a dedicated venv
+RUN python3 -m venv /opt/meshtastic-venv && \
+    /opt/meshtastic-venv/bin/pip install --no-cache-dir meshtastic protobuf
+
 # Production stage
 FROM ruby:3.3-alpine AS production

 # Install runtime dependencies
 RUN apk add --no-cache \
+    python3 \
     sqlite \
     tzdata \
     curl
@@ -58,18 +67,27 @@ WORKDIR /app

 # Copy installed gems from builder stage
 COPY --from=builder /usr/local/bundle /usr/local/bundle
+COPY --from=builder /opt/meshtastic-venv /opt/meshtastic-venv

-# Copy application code (exclude Dockerfile from web directory)
-COPY --chown=potatomesh:potatomesh web/app.rb web/app.sh web/Gemfile web/Gemfile.lock* web/spec/ ./
+# Copy application code (excluding the Dockerfile which is not required at runtime)
+COPY --chown=potatomesh:potatomesh web/app.rb ./
+COPY --chown=potatomesh:potatomesh web/app.sh ./
+COPY --chown=potatomesh:potatomesh web/Gemfile ./
+COPY --chown=potatomesh:potatomesh web/Gemfile.lock* ./
+COPY --chown=potatomesh:potatomesh web/lib ./lib
+COPY --chown=potatomesh:potatomesh web/spec ./spec
 COPY --chown=potatomesh:potatomesh web/public ./public
-COPY --chown=potatomesh:potatomesh web/views/ ./views/
+COPY --chown=potatomesh:potatomesh web/views ./views
 COPY --chown=potatomesh:potatomesh web/scripts ./scripts

 # Copy SQL schema files from data directory
 COPY --chown=potatomesh:potatomesh data/*.sql /data/
+COPY --chown=potatomesh:potatomesh data/mesh_ingestor/decode_payload.py /app/data/mesh_ingestor/decode_payload.py

-# Create data directory for SQLite database
-RUN mkdir -p /app/data /app/.local/share/potato-mesh && \
-    chown -R potatomesh:potatomesh /app/data /app/.local
+# Create data and configuration directories with correct ownership
+RUN mkdir -p /app/.local/share/potato-mesh \
+    && mkdir -p /app/.config/potato-mesh/well-known \
+    && chown -R potatomesh:potatomesh /app/.local/share /app/.config

 # Switch to non-root user
 USER potatomesh
@@ -78,13 +96,16 @@ USER potatomesh
 EXPOSE 41447

 # Default environment variables (can be overridden by host)
-ENV APP_ENV=production \
-    RACK_ENV=production \
+ENV RACK_ENV=production \
+    APP_ENV=production \
+    MESHTASTIC_PYTHON=/opt/meshtastic-venv/bin/python \
+    XDG_DATA_HOME=/app/.local/share \
+    XDG_CONFIG_HOME=/app/.config \
     SITE_NAME="PotatoMesh Demo" \
     INSTANCE_DOMAIN="potato.example.com" \
     CHANNEL="#LongFast" \
     FREQUENCY="915MHz" \
     MAP_CENTER="38.761944,-27.090833" \
     MAP_ZOOM="" \
     MAX_DISTANCE=42 \
     CONTACT_LINK="#potatomesh:dod.ngo" \
     DEBUG=0
```
||||
```diff
@@ -1,3 +1,6 @@
+<!-- Copyright © 2025-26 l5yth & contributors -->
+<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
+
 # Prometheus Monitoring for PotatoMesh

 PotatoMesh exposes runtime telemetry through a dedicated Prometheus endpoint so you can
```
```diff
@@ -1,3 +1,6 @@
+<!-- Copyright © 2025-26 l5yth & contributors -->
+<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
+
 # 🥔 PotatoMesh

 [](https://github.com/l5yth/potato-mesh/actions)
@@ -7,7 +10,10 @@
 [](https://github.com/l5yth/potato-mesh/issues)
 [](https://matrix.to/#/#potatomesh:dod.ngo)

-A federated, Meshtastic-powered node dashboard for your local community.
+[](https://meshtastic.org)
+[](https://meshcore.co.uk)
+
+A federated, Meshtastic & Meshcore node dashboard for your local community.
 _No MQTT clutter, just local LoRa aether._

 * Web dashboard with chat window and map view showing nodes, positions, neighbors,
@@ -17,15 +23,17 @@ _No MQTT clutter, just local LoRa aether._
 * Allows searching and filtering for nodes in map and table view.
 * Federated: _automatically_ forms a federation with other communities running
   Potato Mesh!
+* Supports Meshtastic and Meshcore
 * Supplemental Python ingestor to feed the POST APIs of the Web app with data remotely.
 * Supports multiple ingestors per instance.
-* Supports Meshtastic and Meshcore
 * Matrix bridge that posts Meshtastic messages to a defined matrix channel (no
   radio required).
 * Mobile app to _read_ messages on your local aether (no radio required).

-Live demo for Berlin #MediumFast: [potatomesh.net](https://potatomesh.net)
+Live demo for Berlin: [potatomesh.net](https://potatomesh.net)

-
+

 ## Web App
@@ -120,6 +128,28 @@ well-known document is staged in
 The database can be found in `$XDG_DATA_HOME/potato-mesh`.

+### Custom Pages
+
+Instance operators can publish static content pages (contact details, mesh
+protocol information, legal notices, etc.) by placing Markdown files in the
+`pages/` directory inside `web/`. Each `.md` file automatically becomes a nav
+entry and a route under `/pages/<slug>`.
+
+Files are named `<sort-prefix>-<slug>.md` — the numeric prefix controls
+navigation order and the slug becomes the URL path and nav label:
+
+| Filename          | Nav Label | URL                |
+| ----------------- | --------- | ------------------ |
+| `1-about.md`      | About     | `/pages/about`     |
+| `5-rules.md`      | Rules     | `/pages/rules`     |
+| `9-contact.md`    | Contact   | `/pages/contact`   |
+| `20-impressum.md` | Impressum | `/pages/impressum` |
+
+A default `1-about.md` ships with the app. In Docker deployments the directory
+is exposed as the `potatomesh_pages` volume (mounted at `/app/pages`) so you can
+add or edit pages without rebuilding the image. The pages directory can also be
+overridden with the `PAGES_DIR` environment variable.
```
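The `<sort-prefix>-<slug>.md` convention added above maps cleanly to a small parser. A sketch, assuming the nav label is simply the capitalized slug as the table suggests (the function is illustrative, not the web app's actual routing code):

```python
import re

def parse_page_filename(filename: str):
    """Split e.g. '1-about.md' into (sort_key, nav_label, url)."""
    m = re.fullmatch(r"(\d+)-([a-z0-9-]+)\.md", filename)
    if not m:
        raise ValueError(f"not a page file: {filename}")
    order, slug = int(m.group(1)), m.group(2)
    return order, slug.capitalize(), f"/pages/{slug}"

assert parse_page_filename("1-about.md") == (1, "About", "/pages/about")
```

Sorting parsed entries by their numeric key reproduces the navigation order described above.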
````diff
 ### Federation

 PotatoMesh instances can optionally federate by publishing signed metadata and
@@ -270,9 +300,9 @@ docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-arm64:latest
 docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-armv7:latest

 # version-pinned examples
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.5.5
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.5.5
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.5.5
+docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.6.0
+docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.6.0
+docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.6.0
 ```

 Note: `latest` is only published for non-prerelease versions. Pre-release tags
````
+6
-2
```diff
@@ -1,6 +1,10 @@
-# Meshtastic Reader
+<!-- Copyright © 2025-26 l5yth & contributors -->
+<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
+
+# PotatoMesh Mobile

-Meshtastic Reader – read-only PotatoMesh chat client for Android and iOS.
+PotatoMesh Mobile — read-only mesh chat client for Android and iOS.
+Supports Meshtastic and MeshCore networks.

 ## Setup
```
```diff
@@ -15,11 +15,11 @@
 	<key>CFBundlePackageType</key>
 	<string>FMWK</string>
 	<key>CFBundleShortVersionString</key>
-	<string>0.5.12</string>
+	<string>0.6.1</string>
 	<key>CFBundleSignature</key>
 	<string>????</string>
 	<key>CFBundleVersion</key>
-	<string>0.5.12</string>
+	<string>0.6.1</string>
 	<key>MinimumOSVersion</key>
 	<string>14.0</string>
 </dict>
```
+1
-1
@@ -1,7 +1,7 @@
name: potato_mesh_reader
description: Meshtastic Reader — read-only view for PotatoMesh messages.
publish_to: "none"
version: 0.5.12
version: 0.6.1

environment:
  sdk: ">=3.4.0 <4.0.0"

@@ -219,16 +219,6 @@ else
  sed -i.bak '/^INSTANCE_DOMAIN=.*/d' .env
fi

# Migrate legacy connection settings and ensure defaults exist
if grep -q "^MESH_SERIAL=" .env; then
  legacy_connection=$(grep "^MESH_SERIAL=" .env | head -n1 | cut -d'=' -f2-)
  if [ -n "$legacy_connection" ] && ! grep -q "^CONNECTION=" .env; then
    echo "♻️ Migrating legacy MESH_SERIAL value to CONNECTION"
    update_env "CONNECTION" "$legacy_connection"
  fi
  sed -i.bak '/^MESH_SERIAL=.*/d' .env
fi

if ! grep -q "^CONNECTION=" .env; then
  echo "CONNECTION=/dev/ttyACM0" >> .env
fi

@@ -50,6 +50,7 @@ USER potatomesh
ENV CONNECTION=/dev/ttyACM0 \
    CHANNEL_INDEX=0 \
    DEBUG=0 \
    PROTOCOL=meshtastic \
    ALLOWED_CHANNELS="" \
    HIDDEN_CHANNELS="" \
    INSTANCE_DOMAIN="" \
@@ -77,6 +78,7 @@ USER ContainerUser
ENV CONNECTION=/dev/ttyACM0 \
    CHANNEL_INDEX=0 \
    DEBUG=0 \
    PROTOCOL=meshtastic \
    ALLOWED_CHANNELS="" \
    HIDDEN_CHANNELS="" \
    INSTANCE_DOMAIN="" \

+1
-1
@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
message information before forwarding it to the accompanying web application.
"""

VERSION = "0.5.12"
VERSION = "0.6.1"
"""Semantic version identifier shared with the dashboard and front-end."""

__version__ = VERSION

@@ -27,6 +27,8 @@ CREATE TABLE IF NOT EXISTS instances (
    last_update_time INTEGER,
    is_private BOOLEAN NOT NULL DEFAULT 0,
    nodes_count INTEGER,
    meshcore_nodes_count INTEGER,
    meshtastic_nodes_count INTEGER,
    contact_link TEXT,
    signature TEXT
);

+11
-5
@@ -15,8 +15,14 @@

set -euo pipefail

python -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -r "$(dirname "$0")/requirements.txt"
exec python mesh.py
# Recreate the venv only when its embedded Python is missing or points to the
# wrong prefix (e.g. a stale shebang from a sibling project's venv). Avoid
# --clear on every run: it wipes installed packages before each start, so any
# restart during a PyPI outage turns a transient network failure into hard
# ingestor downtime.
if ! .venv/bin/python -c "import sys; exit(0 if '.venv' in sys.prefix else 1)" 2>/dev/null; then
  python -m venv --clear .venv
fi
.venv/bin/pip install -U pip
.venv/bin/pip install -r "$(dirname "$0")/requirements.txt"
exec .venv/bin/python mesh.py

@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->

## Mesh ingestor contracts (stable interfaces)

This repo’s ingestion pipeline is split into:
@@ -5,7 +8,7 @@ This repo’s ingestion pipeline is split into:
- **Python collector** (`data/mesh_ingestor/*`) which normalizes packets/events and POSTs JSON to the web app.
- **Sinatra web app** (`web/`) which accepts those payloads on `POST /api/*` ingest routes and persists them into SQLite tables defined under `data/*.sql`.

This document records the **contracts that future providers must preserve**. The intent is to enable adding new providers (MeshCore, Reticulum, …) without changing the Ruby/DB/UI read-side.
This document records the **contracts that future protocols must preserve**. The intent is to enable adding new protocols (MeshCore, Reticulum, …) without changing the Ruby/DB/UI read-side.

### Canonical node identity

@@ -16,7 +19,7 @@ This document records the **contracts that future protocols must preserve**. The
- Ruby normalizes via `web/lib/potato_mesh/application/data_processing.rb:canonical_node_parts`.
- **Dual addressing**: Ruby routes and queries accept either a canonical `!xxxxxxxx` string or a numeric node id; they normalize to `node_id`.

Note: non-Meshtastic providers will need a strategy to map their native node identifiers into this `!%08x` space. That mapping is intentionally not standardized in code yet.
Note: non-Meshtastic protocols will need a strategy to map their native node identifiers into this `!%08x` space. That mapping is intentionally not standardized in code yet.

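For illustration only, one hypothetical strategy is to hash a protocol-native identifier into the 32-bit space and format it with `!%08x`. This is not the standardized mapping (none exists yet, per the note above); the helper name and hash choice are assumptions for the sketch:

```python
import hashlib


def canonical_from_native(native_id: str) -> str:
    """Hash a protocol-native identifier into a canonical `!%08x` node id.

    Illustrative only: collisions are possible in a 32-bit space and the
    project has not standardized any such mapping.
    """
    digest = hashlib.blake2s(native_id.encode("utf-8"), digest_size=4).digest()
    return "!%08x" % int.from_bytes(digest, "big")
```

The sketch is deterministic, so the same native identifier always maps to the same canonical id, but a real mapping would also need a collision story.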
### Ingest HTTP routes and payload shapes


@@ -70,6 +70,7 @@ _CONFIG_ATTRS = {
    "CHANNEL_INDEX",
    "DEBUG",
    "INSTANCE",
    "INSTANCES",
    "API_TOKEN",
    "ALLOWED_CHANNELS",
    "HIDDEN_CHANNELS",
@@ -82,9 +83,6 @@ _CONFIG_ATTRS = {
    "_debug_log",
}

# Legacy export maintained for backwards compatibility.
_CONFIG_ATTRS.add("PORT")

_INTERFACE_ATTRS = {"BLEInterface", "SerialInterface", "TCPInterface"}

_QUEUE_ATTRS = set(queue.__all__)

@@ -273,6 +273,43 @@ def is_hidden_channel(channel_name_value: str | None) -> bool:
    return False


def register_channel(channel_idx: int, channel_name_value: str) -> None:
    """Register a single channel index → name mapping.

    Unlike :func:`capture_from_interface`, which scans a complete interface
    object in one shot, this function registers entries one at a time. It is
    intended for protocols (e.g. MeshCore) that expose channel metadata via
    per-index requests rather than a bulk channel list.

    Idempotent: silently skips if *channel_idx* is already cached or
    *channel_name_value* is blank, matching the first-seen-wins semantics of
    :func:`capture_from_interface`.

    Parameters:
        channel_idx: Zero-based channel index.
        channel_name_value: Human-readable channel name reported by the device.
    """

    global _CHANNEL_MAPPINGS, _CHANNEL_LOOKUP

    if not isinstance(channel_name_value, str) or not channel_name_value.strip():
        return
    if channel_idx in _CHANNEL_LOOKUP:
        return

    name = channel_name_value.strip()
    _CHANNEL_LOOKUP[channel_idx] = name
    _CHANNEL_MAPPINGS = tuple(sorted(_CHANNEL_LOOKUP.items()))

    config._debug_log(
        "Registered channel",
        context="channels.register",
        severity="info",
        channel_idx=channel_idx,
        channel_name=name,
    )


def _reset_channel_cache() -> None:
    """Clear cached channel data. Intended for use in tests only."""

@@ -285,6 +322,7 @@ __all__ = [
    "capture_from_interface",
    "channel_mappings",
    "channel_name",
    "register_channel",
    "allowed_channel_names",
    "hidden_channel_names",
    "is_allowed_channel",

+137
-43
@@ -16,10 +16,9 @@

from __future__ import annotations

import math
import os
import sys
from datetime import datetime, timezone
from types import ModuleType
from typing import Any

DEFAULT_SNAPSHOT_SECS = 60
@@ -49,12 +48,14 @@ DEFAULT_ENERGY_SLEEP_SECS = float(6 * 60 * 60)
DEFAULT_INGESTOR_HEARTBEAT_SECS = float(60 * 60)
"""Interval between ingestor heartbeat announcements."""

CONNECTION = os.environ.get("CONNECTION") or os.environ.get("MESH_SERIAL")
DEFAULT_SELF_NODE_REPORT_INTERVAL_SECS = float(60 * 60)
"""Interval between periodic forced self-node re-reports from the daemon."""

CONNECTION = os.environ.get("CONNECTION")
"""Optional connection target for the mesh interface.

When unset, platform-specific defaults will be inferred by the interface
implementations. The legacy :envvar:`MESH_SERIAL` environment variable is still
accepted for backwards compatibility.
implementations.
"""

SNAPSHOT_SECS = DEFAULT_SNAPSHOT_SECS
@@ -65,22 +66,53 @@ CHANNEL_INDEX = int(os.environ.get("CHANNEL_INDEX", str(DEFAULT_CHANNEL_INDEX)))

DEBUG = os.environ.get("DEBUG") == "1"

_KNOWN_PROVIDERS = ("meshtastic", "meshcore")
_KNOWN_PROTOCOLS = ("meshtastic", "meshcore")

_raw_provider = os.environ.get("PROVIDER", "meshtastic").strip().lower()
if _raw_provider not in _KNOWN_PROVIDERS:
_raw_protocol = os.environ.get("PROTOCOL", "meshtastic").strip().lower()
if _raw_protocol not in _KNOWN_PROTOCOLS:
    raise ValueError(
        f"Unknown PROVIDER={_raw_provider!r}. "
        f"Valid options: {', '.join(_KNOWN_PROVIDERS)}"
        f"Unknown PROTOCOL={_raw_protocol!r}. "
        f"Valid options: {', '.join(_KNOWN_PROTOCOLS)}"
    )

PROVIDER = _raw_provider
"""Active ingestion provider, selected via the :envvar:`PROVIDER` environment variable.
PROTOCOL = _raw_protocol
"""Active ingestion protocol, selected via the :envvar:`PROTOCOL` environment variable.

Accepted values are ``meshtastic`` (default) and ``meshcore``.
"""


def _parse_lora_freq_env(raw: str | None) -> float | int | None:
    """Parse the ``FREQUENCY`` environment variable into a numeric LoRa frequency.

    Returns an :class:`int` for whole-number strings (e.g. ``"868"``), a
    :class:`float` for decimal strings (e.g. ``"869.525"``), or ``None`` when
    *raw* is empty, absent, non-numeric, or non-finite (e.g. ``"inf"``).

    Non-numeric labels such as ``"EU_868"`` intentionally return ``None`` so
    that :data:`LORA_FREQ` is left unset and :func:`~interfaces._ensure_radio_metadata`
    can still populate it from the detected radio configuration.

    Parameters:
        raw: Raw value of the ``FREQUENCY`` environment variable.

    Returns:
        Numeric frequency value, or ``None``.
    """
    if not raw:
        return None
    stripped = raw.strip()
    if not stripped:
        return None
    try:
        as_float = float(stripped)
    except ValueError:
        return None
    if not math.isfinite(as_float):
        return None
    return int(as_float) if as_float == int(as_float) else as_float


def _parse_channel_names(raw_value: str | None) -> tuple[str, ...]:
    """Normalise a comma-separated list of channel names.

@@ -127,16 +159,16 @@ ALLOWED_CHANNELS = _parse_channel_names(os.environ.get("ALLOWED_CHANNELS"))
def _resolve_instance_domain() -> str:
    """Resolve the configured instance domain from the environment.

    The ingestor prefers the :envvar:`INSTANCE_DOMAIN` variable for clarity and
    compatibility with the web application. For deployments that still
    configure the legacy :envvar:`POTATOMESH_INSTANCE` variable, the resolver
    falls back to that value when no primary domain is set.
    Reads the :envvar:`INSTANCE_DOMAIN` variable. When the value does not
    contain a scheme, ``https://`` is prepended automatically.

    .. note::

        Kept for backward compatibility with existing tests and callers.
        New code should use :func:`_resolve_instance_domains` instead.
    """

    instance_domain = os.environ.get("INSTANCE_DOMAIN", "")
    legacy_instance = os.environ.get("POTATOMESH_INSTANCE", "")

    configured_instance = (instance_domain or legacy_instance).rstrip("/")
    configured_instance = os.environ.get("INSTANCE_DOMAIN", "").rstrip("/")

    if configured_instance and "://" not in configured_instance:
        return f"https://{configured_instance}"
@@ -144,13 +176,91 @@ def _resolve_instance_domain() -> str:
    return configured_instance


INSTANCE = _resolve_instance_domain()
API_TOKEN = os.environ.get("API_TOKEN", "")
def _normalise_domain(raw: str) -> str:
    """Strip whitespace and trailing slashes, prepend ``https://`` when needed.

    Parameters:
        raw: Single domain string to normalise.

    Returns:
        A URL string with a scheme prefix.
    """

    domain = raw.strip().rstrip("/")
    if domain and "://" not in domain:
        return f"https://{domain}"
    return domain


def _resolve_instance_domains() -> tuple[tuple[str, str], ...]:
    """Parse :envvar:`INSTANCE_DOMAIN` and :envvar:`API_TOKEN` into paired tuples.

    When ``INSTANCE_DOMAIN`` contains comma-separated values, each entry is
    treated as an independent target. ``API_TOKEN`` is either broadcast to
    every target (single value) or positionally paired (comma-separated with
    a matching count).

    Returns:
        A tuple of ``(instance_url, api_token)`` pairs, deduplicated by URL.

    Raises:
        ValueError: When the number of comma-separated tokens exceeds the
            number of domains.
    """

    raw_domain = os.environ.get("INSTANCE_DOMAIN", "")
    raw_token = os.environ.get("API_TOKEN", "")

    domains: list[str] = []
    seen: set[str] = set()
    for part in raw_domain.split(","):
        normalised = _normalise_domain(part)
        if not normalised:
            continue
        key = normalised.casefold()
        if key in seen:
            continue
        seen.add(key)
        domains.append(normalised)

    if not domains:
        return ()

    tokens = [t.strip() for t in raw_token.split(",")]
    # A single token (including empty string) is broadcast to all domains.
    if len(tokens) == 1:
        token = tokens[0]
        return tuple((d, token) for d in domains)

    if len(tokens) != len(domains):
        raise ValueError(
            f"API_TOKEN has {len(tokens)} comma-separated values but "
            f"INSTANCE_DOMAIN has {len(domains)}; counts must match or "
            f"API_TOKEN must be a single value"
        )

    return tuple(zip(domains, tokens))


INSTANCES: tuple[tuple[str, str], ...] = _resolve_instance_domains()
"""Paired ``(instance_url, api_token)`` tuples derived from the environment."""

INSTANCE = INSTANCES[0][0] if INSTANCES else _resolve_instance_domain()
"""First configured instance URL, kept for backward compatibility."""

API_TOKEN = INSTANCES[0][1] if INSTANCES else os.environ.get("API_TOKEN", "")
"""API token for the first configured instance, kept for backward compatibility."""
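The broadcast-vs-positional token pairing described in the docstring above can be illustrated with a standalone sketch (an editorial illustration, not part of the diff; the function name is hypothetical and it omits the case-insensitive deduplication the real resolver performs):

```python
def pair_instances(raw_domains: str, raw_tokens: str) -> tuple[tuple[str, str], ...]:
    """Pair comma-separated domains with one broadcast token or matching tokens."""
    domains = []
    for part in raw_domains.split(","):
        d = part.strip().rstrip("/")
        if not d:
            continue
        # Prepend a scheme when the entry is a bare hostname.
        domains.append(d if "://" in d else f"https://{d}")
    if not domains:
        return ()
    tokens = [t.strip() for t in raw_tokens.split(",")]
    if len(tokens) == 1:
        return tuple((d, tokens[0]) for d in domains)  # broadcast one token
    if len(tokens) != len(domains):
        raise ValueError("token/domain counts must match")
    return tuple(zip(domains, tokens))  # positional pairing
```

For example, `INSTANCE_DOMAIN="a.example,b.example"` with a single `API_TOKEN="tok"` yields the same token for both targets, while `"t1,t2"` pairs positionally.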
ENERGY_SAVING = os.environ.get("ENERGY_SAVING") == "1"
"""When ``True``, enables the ingestor's energy saving mode."""

LORA_FREQ: float | int | str | None = None
"""Frequency of the local node's configured LoRa region in MHz or raw region label."""
LORA_FREQ: float | int | str | None = _parse_lora_freq_env(os.environ.get("FREQUENCY"))
"""Frequency of the local node's configured LoRa region in MHz or raw region label.

Pre-seeded from the ``FREQUENCY`` environment variable when set to a finite
numeric value, allowing operators to override auto-detected values.
Non-numeric or non-finite values are ignored so that auto-detection from the
radio interface can still fill this in.
"""

MODEM_PRESET: str | None = None
"""CamelCase modem preset name reported by the local node."""
@@ -162,9 +272,7 @@ _INACTIVITY_RECONNECT_SECS = DEFAULT_INACTIVITY_RECONNECT_SECS
_ENERGY_ONLINE_DURATION_SECS = DEFAULT_ENERGY_ONLINE_DURATION_SECS
_ENERGY_SLEEP_SECS = DEFAULT_ENERGY_SLEEP_SECS
_INGESTOR_HEARTBEAT_SECS = DEFAULT_INGESTOR_HEARTBEAT_SECS

# Backwards compatibility shim for legacy imports.
PORT = CONNECTION
_SELF_NODE_REPORT_INTERVAL_SECS = DEFAULT_SELF_NODE_REPORT_INTERVAL_SECS


def _debug_log(
@@ -209,6 +317,7 @@ __all__ = [
    "HIDDEN_CHANNELS",
    "ALLOWED_CHANNELS",
    "INSTANCE",
    "INSTANCES",
    "API_TOKEN",
    "ENERGY_SAVING",
    "LORA_FREQ",
@@ -220,21 +329,6 @@ __all__ = [
    "_ENERGY_ONLINE_DURATION_SECS",
    "_ENERGY_SLEEP_SECS",
    "_INGESTOR_HEARTBEAT_SECS",
    "_SELF_NODE_REPORT_INTERVAL_SECS",
    "_debug_log",
]


class _ConfigModule(ModuleType):
    """Module proxy that keeps connection aliases synchronised."""

    def __setattr__(self, name: str, value: Any) -> None:  # type: ignore[override]
        """Propagate CONNECTION/PORT assignments to both attributes."""

        if name in {"CONNECTION", "PORT"}:
            super().__setattr__("CONNECTION", value)
            super().__setattr__("PORT", value)
            return
        super().__setattr__(name, value)


sys.modules[__name__].__class__ = _ConfigModule

@@ -24,8 +24,8 @@ import time

from pubsub import pub

from . import config, handlers, ingestors, interfaces
from .provider import Provider
from . import config, handlers, ingestors, interfaces, queue
from .mesh_protocol import MeshProtocol
from .utils import _retry_dict_snapshot

_RECEIVE_TOPICS = (
@@ -245,7 +245,7 @@ def _connected_state(candidate) -> bool | None:
class _DaemonState:
    """All mutable state for the :func:`main` daemon loop."""

    provider: Provider
    provider: MeshProtocol
    stop: threading.Event
    configured_port: str | None
    inactivity_reconnect_secs: float
@@ -264,6 +264,7 @@ class _DaemonState:
    last_inactivity_reconnect: float | None = None
    ingestor_announcement_sent: bool = False
    announced_target: bool = False
    last_self_node_report: float | None = None


# ---------------------------------------------------------------------------
@@ -309,6 +310,7 @@ def _try_connect(state: _DaemonState) -> bool:
    ingestors.set_ingestor_node_id(handlers.host_node_id())
    state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
    state.initial_snapshot_sent = False
    state.last_self_node_report = None
    if not state.announced_target and state.resolved_target:
        config._debug_log(
            "Using mesh interface",
@@ -387,6 +389,7 @@ def _check_energy_saving(state: _DaemonState) -> bool:
        state.iface = None
        state.announced_target = False
        state.initial_snapshot_sent = False
        state.last_self_node_report = None
        state.energy_session_deadline = None
        _energy_sleep(state, reason)
        return True
@@ -485,33 +488,91 @@ def _check_inactivity_reconnect(state: _DaemonState) -> bool:
    ):
        return False

    if (
        state.last_inactivity_reconnect is not None
        and now - state.last_inactivity_reconnect < state.inactivity_reconnect_secs
    ):
        return False
    if state.last_inactivity_reconnect is not None:
        # For explicit disconnects use the shorter max-reconnect-delay window
        # so the daemon reconnects promptly without thrashing. For inactivity-
        # only triggers retain the full inactivity window as the throttle.
        throttle_secs = (
            config._RECONNECT_MAX_DELAY_SECS
            if believed_disconnected
            else state.inactivity_reconnect_secs
        )
        if now - state.last_inactivity_reconnect < throttle_secs:
            return False

    reason = (
        "disconnected"
        if believed_disconnected
        else f"no data for {inactivity_elapsed:.0f}s"
    )
    # Uses the module-level global STATE — acceptable because there is only
    # one queue in production, and in tests this is purely informational.
    queue_depth = len(queue.STATE.queue)
    config._debug_log(
        "Mesh interface inactivity detected",
        context="daemon.interface",
        severity="warn",
        reason=reason,
        queue_depth=queue_depth,
    )
    state.last_inactivity_reconnect = now
    _close_interface(state.iface)
    state.iface = None
    state.announced_target = False
    state.initial_snapshot_sent = False
    state.last_self_node_report = None
    state.energy_session_deadline = None
    state.iface_connected_at = None
    return True


# ---------------------------------------------------------------------------
# Periodic self-node report helper
# ---------------------------------------------------------------------------


def _try_send_self_node(state: _DaemonState) -> None:
    """Re-upsert the host self-node when the provider supports it.

    Called once immediately after the initial snapshot and then at most once
    per :data:`~data.mesh_ingestor.config._SELF_NODE_REPORT_INTERVAL_SECS`.
    This ensures the self-node's protocol and radio metadata are refreshed
    even when the ingestor heartbeat races ahead of the first SELF_INFO event
    (meshcore) or when the protocol never sends periodic NODEINFO for itself.

    Parameters:
        state: Current daemon loop state.

    Returns:
        ``None``. Errors are logged and suppressed so a single failure does
        not break the main loop.
    """
    self_node_fn = getattr(state.provider, "self_node_item", None)
    if not callable(self_node_fn):
        return
    try:
        item = self_node_fn(state.iface)
        if item is None:
            return
        node_id, node = item
        handlers.upsert_node(node_id, node)
        state.last_self_node_report = time.monotonic()
        config._debug_log(
            "Sent periodic self-node report",
            context="daemon.self_node",
            severity="info",
            node_id=node_id,
        )
    except Exception as exc:
        config._debug_log(
            "Self-node re-report failed",
            context="daemon.self_node",
            severity="warn",
            error_class=exc.__class__.__name__,
            error_message=str(exc),
        )


# ---------------------------------------------------------------------------
# Loop iteration helper
# ---------------------------------------------------------------------------
@@ -540,6 +601,15 @@ def _loop_iteration(state: _DaemonState) -> bool:
    state.ingestor_announcement_sent = _process_ingestor_heartbeat(
        state.iface, ingestor_announcement_sent=state.ingestor_announcement_sent
    )
    # Periodically re-upsert the host self-node so that its protocol and radio
    # metadata are corrected after the ingestor heartbeat is registered, and
    # kept fresh for protocols (e.g. meshcore) that only emit SELF_INFO once.
    _now = time.monotonic()
    if state.initial_snapshot_sent and (
        state.last_self_node_report is None
        or _now - state.last_self_node_report >= config._SELF_NODE_REPORT_INTERVAL_SECS
    ):
        _try_send_self_node(state)
    state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
    return False

@@ -549,16 +619,16 @@
# ---------------------------------------------------------------------------


def main(*, provider: Provider | None = None) -> None:
def main(*, provider: MeshProtocol | None = None) -> None:
    """Run the mesh ingestion daemon until interrupted."""

    if provider is None:
        if config.PROVIDER == "meshcore":
            from .providers.meshcore import MeshcoreProvider
        if config.PROTOCOL == "meshcore":
            from .protocols.meshcore import MeshcoreProvider

            provider = MeshcoreProvider()
        else:
            from .providers.meshtastic import MeshtasticProvider
            from .protocols.meshtastic import MeshtasticProvider

            provider = MeshtasticProvider()

@@ -571,6 +641,17 @@ def main(*, provider: Provider | None = None) -> None:
        topics=subscribed,
    )

    if not config.INSTANCES and not config.INSTANCE:
        config._debug_log(
            "No INSTANCE_DOMAIN configured — cannot forward data; exiting",
            context="daemon.main",
            severity="error",
            always=True,
        )
        return

    queue._start_queue_drainer(queue.STATE)

    state = _DaemonState(
        provider=provider,
        stop=threading.Event(),
@@ -606,11 +687,12 @@ USER ContainerUser
    signal.signal(signal.SIGINT, handle_sigint)
    signal.signal(signal.SIGTERM, handle_sigterm)

    instance_label = ", ".join(inst for inst, _ in config.INSTANCES)
    config._debug_log(
        "Mesh daemon starting",
        context="daemon.main",
        severity="info",
        target=config.INSTANCE or "(no INSTANCE_DOMAIN configured)",
        target=instance_label,
        port=config.CONNECTION or "auto",
        channel=config.CHANNEL_INDEX,
    )
@@ -644,6 +726,7 @@ __all__ = [
    "_process_ingestor_heartbeat",
    "_subscribe_receive_topics",
    "_try_connect",
    "_try_send_self_node",
    "_try_send_snapshot",
    "main",
]

@@ -26,7 +26,7 @@ This package is organised into focused submodules:
- :mod:`.generic` — packet dispatcher, node upsert, and the main receive callback

All public names from the original flat ``handlers`` module are re-exported
here so existing callers (e.g. ``daemon.py``, ``providers/``) require no
here so existing callers (e.g. ``daemon.py``, ``protocols/``) require no
changes.
"""

@@ -34,6 +34,7 @@ from __future__ import annotations

from .. import queue as _queue
from ._state import (
    _mark_packet_seen,
    host_node_id,
    last_packet_monotonic,
    register_host_node_id,
@@ -79,6 +80,7 @@ __all__ = [
    "_apply_radio_metadata",
    "_apply_radio_metadata_to_nodes",
    "_is_encrypted_flag",
    "_mark_packet_seen",
    "_normalize_trace_hops",
    "_portnum_candidates",
    "_queue_post_json",

@@ -45,6 +45,18 @@ every packet would overwrite the host's profile too aggressively; this window
throttles updates to at most once per hour.
"""

_host_nodeinfo_last_seen: float | None = None
"""Monotonic timestamp of the last accepted host NODEINFO upsert."""

_HOST_NODEINFO_INTERVAL_SECS: int = 60 * 60
"""Minimum interval (seconds) between accepted host NODEINFO upserts.

The meshtastic library re-broadcasts the local node's NODEINFO to the mesh
periodically. Accepting every broadcast would overwrite the host node record
too aggressively; this window throttles self-NODEINFO upserts to at most once
per hour.
"""

# ---------------------------------------------------------------------------
# Packet receipt tracking
# ---------------------------------------------------------------------------
@@ -69,10 +81,11 @@ def register_host_node_id(node_id: str | None) -> None:
    the current host assignment.
    """

    global _host_node_id, _host_telemetry_last_rx
    global _host_node_id, _host_telemetry_last_rx, _host_nodeinfo_last_seen
    canonical = _canonical_node_id(node_id)
    _host_node_id = canonical
    _host_telemetry_last_rx = None
    _host_nodeinfo_last_seen = None
    if canonical:
        config._debug_log(
            "Registered host device node id",
@@ -128,6 +141,35 @@ def _host_telemetry_suppressed(rx_time: int) -> tuple[bool, int]:
    return True, int(math.ceil(remaining_secs / 60.0))


def _host_nodeinfo_suppressed(now: float) -> bool:
    """Return ``True`` when a host NODEINFO upsert should be suppressed.

    Self-NODEINFO upserts are throttled to at most once per
    :data:`_HOST_NODEINFO_INTERVAL_SECS` to prevent the meshtastic library's
    periodic rebroadcast from overwriting the host node record too aggressively.

    Parameters:
        now: Current :func:`time.monotonic` value.

    Returns:
        ``True`` when the request should be dropped; ``False`` when it should
        proceed.
    """
    if _host_nodeinfo_last_seen is None:
        return False
    return (now - _host_nodeinfo_last_seen) < _HOST_NODEINFO_INTERVAL_SECS


def _mark_host_nodeinfo_seen(now: float) -> None:
    """Record that a host NODEINFO upsert was accepted.

    Parameters:
        now: Current :func:`time.monotonic` value from the accepted upsert.
    """
    global _host_nodeinfo_last_seen
    _host_nodeinfo_last_seen = now


def last_packet_monotonic() -> float | None:
    """Return the monotonic timestamp of the most recently processed packet.

@@ -147,8 +189,11 @@ def _mark_packet_seen() -> None:


__all__ = [
    "_HOST_NODEINFO_INTERVAL_SECS",
    "_HOST_TELEMETRY_INTERVAL_SECS",
    "_host_nodeinfo_suppressed",
    "_host_telemetry_suppressed",
    "_mark_host_nodeinfo_seen",
    "_mark_host_telemetry_seen",
    "_mark_packet_seen",
    "host_node_id",

@@ -76,6 +76,21 @@ def store_nodeinfo_packet(packet: Mapping, decoded: Mapping) -> None:
    if node_id is None:
        return

    # Throttle self-NODEINFO upserts to at most once per hour. The meshtastic
    # library rebroadcasts the local node's NODEINFO periodically; accepting
    # every broadcast would overwrite the host node record too aggressively.
    if node_id == _state.host_node_id():
        _now = time.monotonic()
        if _state._host_nodeinfo_suppressed(_now):
            if config.DEBUG:
                config._debug_log(
                    "Suppressed host self-NODEINFO update within throttle window",
                    context="handlers.store_nodeinfo",
                    node_id=node_id,
                )
            return
        _state._mark_host_nodeinfo_seen(_now)

    node_payload: dict = {}
    if user_dict:
        node_payload["user"] = user_dict

@@ -123,7 +123,7 @@ def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
    # Priority order matters: deviceMetrics is checked first because the device
    # sub-object also carries a voltage field that overlaps with powerMetrics.
    # Meshtastic uses a protobuf oneof so only one sub-object can be populated per
    # packet; the elif chain handles any hypothetical overlap from future providers.
    # packet; the elif chain handles any hypothetical overlap from future protocols.
    if isinstance(_dm, Mapping):
        telemetry_type: str | None = "device"
    elif isinstance(_em, Mapping):

@@ -113,7 +113,7 @@ def queue_ingestor_heartbeat(
        "start_time": STATE.start_time,
        "last_seen_time": now,
        "version": INGESTOR_VERSION,
        "protocol": getattr(config, "PROVIDER", "meshtastic") or "meshtastic",
        "protocol": getattr(config, "PROTOCOL", "meshtastic") or "meshtastic",
    }
    if getattr(config, "LORA_FREQ", None) is not None:
        payload["lora_freq"] = config.LORA_FREQ

@@ -511,16 +511,96 @@ def _resolve_lora_message(local_config: Any) -> Any | None:
    return None


# Maps Meshtastic region enum name to (base_freq_MHz, channel_spacing_MHz).
# Values are derived from the Meshtastic firmware RegionInfo tables.
# Used by _computed_channel_frequency to derive the actual radio frequency
# from the region and channel index.
_REGION_CHANNEL_PARAMS: dict[str, tuple[float, float]] = {
    "US": (902.0, 0.25),  # 902–928 MHz; e.g. ch 52 ≈ 915 MHz at 250 kHz spacing
    "EU_433": (433.175, 0.2),
    "EU_868": (869.525, 0.5),  # actual primary ≈ 869.525 MHz, not 868
    "CN": (470.0, 0.2),
    "JP": (920.875, 0.5),
    "ANZ": (916.0, 0.5),
    "KR": (921.9, 0.5),
    "TW": (923.0, 0.5),
    "RU": (868.9, 0.5),
    "IN": (865.0, 0.5),
    "NZ_865": (864.0, 0.5),
    "TH": (920.0, 0.5),
    "LORA_24": (2400.0, 0.5),
    "UA_433": (433.175, 0.2),
    "UA_868": (868.0, 0.5),
    "MY_433": (433.0, 0.2),
    "MY_919": (919.0, 0.5),
    "SG_923": (923.0, 0.5),
    "PH_433": (433.0, 0.2),
    "PH_868": (868.0, 0.5),
    "PH_915": (915.0, 0.5),
    "ANZ_433": (433.0, 0.2),
    "KZ_433": (433.0, 0.2),
    "KZ_863": (863.125, 0.5),
    "NP_865": (865.0, 0.5),
    "BR_902": (902.0, 0.25),
    # IL (Israel) is absent from meshtastic Python lib 2.7.8 protobufs; the
    # enum value is unresolvable at runtime. Operators on IL firmware should
    # set the FREQUENCY environment variable to override.
}


def _computed_channel_frequency(
    enum_name: str | None,
    channel_num: int | None,
) -> int | None:
    """Compute the floor MHz frequency for a known region and channel index.

    Looks up *enum_name* in :data:`_REGION_CHANNEL_PARAMS` and returns
    ``floor(base_freq + channel_num * spacing)``. Returns ``None`` when the
    region is not in the table. A missing or negative *channel_num* is
    treated as 0 so the base frequency is always usable.

    Args:
        enum_name: Region enum name as returned by
            :func:`_enum_name_from_field`, e.g. ``"EU_868"`` or ``"US"``.
|
||||
channel_num: Zero-based channel index from the device LoRa config.
|
||||
|
||||
Returns:
|
||||
Floored MHz as :class:`int`, or ``None`` if the region is unknown.
|
||||
"""
|
||||
if enum_name is None:
|
||||
return None
|
||||
params = _REGION_CHANNEL_PARAMS.get(enum_name)
|
||||
if params is None:
|
||||
return None
|
||||
base, spacing = params
|
||||
idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
|
||||
return math.floor(base + idx * spacing)
|


def _region_frequency(lora_message: Any) -> int | float | str | None:
    """Derive the LoRa region frequency in MHz or the region label from ``lora_message``.

    Numeric override values are floored to the nearest MHz to align with the
    integer frequencies expected elsewhere in the ingestion pipeline.
    Frequency sources are tried in priority order:

    1. ``override_frequency > 0`` — explicit radio override, floored to MHz.
    2. :data:`_REGION_CHANNEL_PARAMS` lookup + ``channel_num`` — actual
       band-plan frequency derived from the device's region and channel index,
       floored to MHz.
    3. Largest digit token ≥ 100 parsed from the region enum name string.
    4. Largest digit token < 100 from the enum name (reversed scan).
    5. Full enum name string, raw integer ≥ 100, or raw string as a label.

    Args:
        lora_message: A LoRa config protobuf message or compatible object.

    Returns:
        An integer MHz frequency, a fallback string label, or ``None``.
    """
    if lora_message is None:
        return None

    # Step 1 — explicit radio override
    override_frequency = getattr(lora_message, "override_frequency", None)
    if override_frequency is not None:
        if isinstance(override_frequency, (int, float)):

@@ -533,6 +613,15 @@ def _region_frequency(lora_message: Any) -> int | float | str | None:
    if region_value is None:
        return None
    enum_name = _enum_name_from_field(lora_message, "region", region_value)

    # Step 2 — lookup table + channel offset (actual band-plan frequency)
    if enum_name:
        channel_num = getattr(lora_message, "channel_num", None)
        computed = _computed_channel_frequency(enum_name, channel_num)
        if computed is not None:
            return computed

    # Steps 3–5 — parse digits from enum name (fallback for unknown regions)
    if enum_name:
        digits = re.findall(r"\d+", enum_name)
        for token in digits:

@@ -12,11 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-"""Provider interface for ingestion sources.
+"""MeshProtocol interface for ingestion sources.

-Today the repo ships a Meshtastic provider only. This module defines the seam so
-future providers (MeshCore, Reticulum, ...) can be added without changing the
-web app ingest contract.
+This module defines the seam so future protocols (MeshCore, Reticulum, ...) can
+be added without changing the web app ingest contract.
"""

from __future__ import annotations

@@ -26,8 +25,8 @@ from typing import Protocol, runtime_checkable


@runtime_checkable
-class Provider(Protocol):
-    """Abstract source of mesh observations."""
+class MeshProtocol(Protocol):
+    """Abstract mesh protocol source."""

    name: str

@@ -51,5 +50,8 @@ class Provider(Protocol):


__all__ = [
-    "Provider",
+    "MeshProtocol",
]

# Backwards-compatibility alias — import Provider from here during transition.
Provider = MeshProtocol

+11 -6
@@ -12,9 +12,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-"""Provider implementations.
+"""Protocol implementations.

-This package contains protocol-specific provider implementations (Meshtastic,
+This package contains protocol-specific implementations (Meshtastic,
MeshCore, and others in the future).
"""

@@ -24,16 +24,21 @@ from .meshtastic import MeshtasticProvider


def __getattr__(name: str) -> object:
-    """Lazy-load provider classes that carry optional heavy dependencies.
+    """Lazy-load protocol classes and exceptions that carry optional heavy dependencies.

-    ``MeshcoreProvider`` is imported on demand so that the MeshCore library
-    (once wired in) is not loaded at startup when ``PROVIDER=meshtastic``.
+    ``MeshcoreProvider`` and ``ClosedBeforeConnectedError`` are imported on
+    demand so that the MeshCore library (once wired in) is not loaded at
+    startup when ``PROTOCOL=meshtastic``.
    """
    if name == "MeshcoreProvider":
        from .meshcore import MeshcoreProvider

        return MeshcoreProvider
    if name == "ClosedBeforeConnectedError":
        from .meshcore import ClosedBeforeConnectedError

        return ClosedBeforeConnectedError
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")


-__all__ = ["MeshtasticProvider", "MeshcoreProvider"]
+__all__ = ["MeshtasticProvider", "MeshcoreProvider", "ClosedBeforeConnectedError"]

+484 -35
@@ -12,17 +12,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-"""MeshCore provider implementation.
+"""MeshCore protocol implementation.

This module defines :class:`MeshcoreProvider`, which satisfies the
-:class:`~data.mesh_ingestor.provider.Provider` protocol for MeshCore nodes
-connected via serial port, BLE, or TCP/IP.
+:class:`~data.mesh_ingestor.mesh_protocol.MeshProtocol` interface for MeshCore
+nodes connected via serial port, BLE, or TCP/IP.

-The provider runs MeshCore's ``asyncio`` event loop in a background daemon
-thread so that incoming events are dispatched without blocking the
+The protocol backend runs MeshCore's ``asyncio`` event loop in a background
+daemon thread so that incoming events are dispatched without blocking the
synchronous daemon loop. Received contacts, channel messages, and direct
messages are forwarded to the shared HTTP ingest queue via the same
-:mod:`~data.mesh_ingestor.handlers` helpers used by the Meshtastic provider.
+:mod:`~data.mesh_ingestor.handlers` helpers used by the Meshtastic protocol.

Connection type is detected automatically from the target string:

@@ -35,8 +35,8 @@ Connection type is detected automatically from the target string:
Node identities are derived from the first four bytes (eight hex characters)
of each contact's 32-byte public key, formatted as ``!xxxxxxxx`` to match
the canonical node-ID schema used across the system. Ingested
-``user.shortName`` is the first four hex digits of that key (two bytes),
-not the advertised name.
+``user.shortName`` is the first two bytes (four hex characters) of the
+node ID, not the advertised name.
"""

from __future__ import annotations

@@ -45,13 +45,50 @@ import asyncio
import base64
import hashlib
import json
import re
import threading
import time
from datetime import datetime, timezone
from pathlib import Path

-from .. import config
# Import meshcore symbols at module level rather than lazily inside functions.
# The original deferred-import pattern was introduced so that loading
# ``protocols/__init__.py`` under ``PROTOCOL=meshtastic`` would not pull in the
# meshcore library. That protection is preserved: ``protocols/__init__.py``
# only imports THIS module on demand (via its ``__getattr__`` lazy loader), so
# this top-level import still never executes for meshtastic-only deployments.
# The import was hoisted because, after the rename from ``providers/meshcore``
# to ``protocols/meshcore``, Python's absolute import resolver matched the
# module's own short name (``meshcore``) against the installed package, causing
# a ``ModuleNotFoundError`` when the deferred ``from meshcore import …`` ran
# inside a background thread at connect time.
from meshcore import (
    BLEConnection,
    EventType,
    MeshCore,
    SerialConnection,
    TCPConnection,
)

+from .. import config, ingestors as _ingestors, queue as _queue
from ..connection import default_serial_targets, parse_ble_target, parse_tcp_target
from ..serialization import _iso, _node_num_from_id

# ---------------------------------------------------------------------------
# Exceptions
# ---------------------------------------------------------------------------


class ClosedBeforeConnectedError(ConnectionError):
    """Raised when :meth:`_MeshcoreInterface.close` is called while the
    connection coroutine is still waiting for the device handshake to complete.

    This is a :exc:`ConnectionError` subclass so callers that only handle the
    base class continue to work, while callers that need to distinguish a
    user-initiated shutdown from a hardware failure can catch this type
    specifically.
    """


# ---------------------------------------------------------------------------
# Debug log file
@@ -128,23 +165,28 @@ def _meshcore_node_id(public_key_hex: str | None) -> str | None:
    return "!" + public_key_hex[:8].lower()


-def _meshcore_short_name(public_key_hex: str | None) -> str:
-    """Return the first four hex digits of a MeshCore public key as short name.
+def _meshcore_short_name(node_id: str | None) -> str:
+    """Derive a four-character short name from a canonical node ID.

-    Meshtastic-style ``shortName`` fields are four characters wide; MeshCore
-    ingest uses the leading two bytes of the 32-byte public key in lowercase
-    hex so the label is stable and unique per key prefix.
+    Uses the first two bytes (four hex characters) of the ``!xxxxxxxx`` node
+    ID. This keeps the short name consistent with the node ID itself — if the
+    node ID is later replaced when the real public key is heard, the short name
+    will update alongside it.

    Parameters:
-        public_key_hex: Full public key as a hex string from the MeshCore API.
+        node_id: Canonical ``!xxxxxxxx`` node ID string (as returned by
+            :func:`_meshcore_node_id`).

    Returns:
-        Four lowercase hex characters (e.g. ``"aabb"``), or an empty string
-        when the key is missing or shorter than four hex characters.
+        Four lowercase hex characters (e.g. ``"cafe"``), or an empty string
+        when the node ID is missing or too short.
    """
-    if not public_key_hex or len(public_key_hex) < 4:
+    if not node_id:
        return ""
-    return public_key_hex[:4].lower()
+    raw = node_id.lstrip("!")
+    if len(raw) < 4:
+        return ""
+    return raw[:4].lower()
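The renamed helper's new behaviour can be sketched in isolation. This is an illustrative standalone version of the post-change `_meshcore_short_name` logic; the function name here is invented for the sketch.

```python
def short_name_from_node_id(node_id):
    """First two bytes (four hex chars) of a canonical !xxxxxxxx node ID."""
    if not node_id:
        return ""
    # Strip the leading "!" sigil, then take the four-character hex prefix.
    raw = node_id.lstrip("!")
    if len(raw) < 4:
        return ""
    return raw[:4].lower()
```

Because the input is the node ID rather than the raw public key, a synthetic ID and a later real public-key ID each yield a short name consistent with themselves.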


def _meshcore_adv_type_to_role(adv_type: object) -> str | None:
@@ -169,6 +211,94 @@ def _meshcore_adv_type_to_role(adv_type: object) -> str | None:
    return _MESHCORE_ADV_TYPE_ROLE.get(adv_type)


def _parse_sender_name(text: str) -> str | None:
    """Extract the sender name from a MeshCore channel message text.

    MeshCore channel messages use the convention ``"SenderName: body"``.
    Only the first colon is treated as the separator; colons that appear in the
    body are preserved. The sender name is stripped of leading and trailing
    whitespace.

    Parameters:
        text: Raw message text as stored in the database.

    Returns:
        Stripped sender name string, or ``None`` when the text does not
        contain a colon or the portion before the colon is blank.
    """
    colon_idx = text.find(":")
    if colon_idx < 0:
        return None
    name = text[:colon_idx].strip()
    return name if name else None
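The first-colon-only convention is easy to exercise. A standalone copy of the parsing logic shown above:

```python
def parse_sender_name(text):
    """Sender from 'SenderName: body'; only the first colon is the separator."""
    colon_idx = text.find(":")
    if colon_idx < 0:
        return None
    name = text[:colon_idx].strip()
    # A blank prefix (e.g. " : body") is treated as no sender at all.
    return name or None
```

Note that colons inside the body survive intact: `parse_sender_name("Alice: see 10:30")` returns `"Alice"`.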


# Matches @[Name] mention patterns in MeshCore message bodies.
_MENTION_RE = re.compile(r"@\[([^\]]+)\]")


def _derive_synthetic_node_id(long_name: str) -> str:
    """Derive a deterministic synthetic ``!xxxxxxxx`` node ID from a long name.

    Uses the first four bytes of SHA-256(UTF-8 encoded name), formatted as
    ``!xxxxxxxx``. The same long name always produces the same ID across
    restarts. The probability of collision with a real public-key-derived ID
    is ~1 in 4 billion per pair, which is negligible in practice.

    Parameters:
        long_name: Node long name used as the hash input.

    Returns:
        Canonical ``!xxxxxxxx`` node ID string.
    """
    return "!" + hashlib.sha256(long_name.encode("utf-8")).hexdigest()[:8]
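The determinism claim ("the same long name always produces the same ID across restarts") follows directly from hashing only the name. A standalone copy of the derivation:

```python
import hashlib


def synthetic_node_id(long_name):
    """Deterministic !xxxxxxxx ID: first 4 bytes of SHA-256 of the UTF-8 name."""
    return "!" + hashlib.sha256(long_name.encode("utf-8")).hexdigest()[:8]
```

Two processes (or two restarts) computing the ID for the same sender name independently arrive at the same placeholder, which is what lets the web app later match and migrate it.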


def _synthetic_node_dict(long_name: str) -> dict:
    """Build a synthetic node dict for an unknown MeshCore channel sender.

    Synthetic nodes are placeholder entries created when a channel message
    arrives from a sender who is not yet in the connected device's contacts
    roster. They carry ``role=COMPANION`` (the only role capable of sending
    channel messages). The short name is intentionally omitted here — the
    Ruby web app derives it at query time via
    ``meshcore_companion_display_short_name`` for all COMPANION nodes.

    When the real contact advertisement is later received, the Ruby web app
    detects the matching long name, migrates all messages from the synthetic
    node ID to the real one, and removes the placeholder row.

    Parameters:
        long_name: Sender name parsed from the ``"SenderName: body"`` prefix.

    Returns:
        Node dict compatible with the ``POST /api/nodes`` payload format,
        with ``user.synthetic`` set to ``True``.
    """
    return {
        "lastHeard": int(time.time()),
        "protocol": "meshcore",
        "user": {
            "longName": long_name,
            "shortName": "",
            "role": "COMPANION",
            "synthetic": True,
        },
    }


def _extract_mention_names(text: str) -> list[str]:
    """Extract all ``@[Name]`` mention names from a MeshCore message body.

    Parameters:
        text: Raw message text that may contain ``@[Name]`` mention patterns.

    Returns:
        List of extracted name strings (may be empty).
    """
    return _MENTION_RE.findall(text)
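The mention pattern is small enough to verify standalone. This reproduces the regex used above; names may contain spaces because only `]` terminates a mention:

```python
import re

# Matches @[Name] mention patterns; captures everything up to the closing "]".
MENTION_RE = re.compile(r"@\[([^\]]+)\]")


def extract_mention_names(text):
    """All @[Name] names in a message body, in order of appearance."""
    return MENTION_RE.findall(text)
```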


def _pubkey_prefix_to_node_id(contacts: dict, pubkey_prefix: str) -> str | None:
    """Look up a canonical node ID by six-byte public-key prefix.

@@ -199,13 +329,15 @@ def _contact_to_node_dict(contact: dict) -> dict:
        Node dict compatible with the ``POST /api/nodes`` payload format.
    """
    pub_key = contact.get("public_key", "")
    node_id = _meshcore_node_id(pub_key)
    name = (contact.get("adv_name") or "").strip()
    role = _meshcore_adv_type_to_role(contact.get("type"))
    node: dict = {
        "lastHeard": contact.get("last_advert"),
        "protocol": "meshcore",
        "user": {
            "longName": name,
-            "shortName": _meshcore_short_name(pub_key),
+            "shortName": _meshcore_short_name(node_id),
            "publicKey": pub_key,
            **({"role": role} if role is not None else {}),
        },
@@ -213,10 +345,31 @@ def _contact_to_node_dict(contact: dict) -> dict:
    lat = contact.get("adv_lat")
    lon = contact.get("adv_lon")
    if lat is not None and lon is not None and (lat or lon):
-        node["position"] = {"latitude": lat, "longitude": lon}
+        pos: dict = {"latitude": lat, "longitude": lon}
+        last_advert = contact.get("last_advert")
+        if last_advert is not None:
+            pos["time"] = last_advert
+        node["position"] = pos
    return node


def _derive_modem_preset(sf: object, bw: object, cr: object) -> str | None:
    """Return a compact radio-parameter string from spreading factor, bandwidth, and coding rate.

    Parameters:
        sf: Spreading factor (int, e.g. ``12``).
        bw: Bandwidth in kHz (int or float, e.g. ``125.0``).
        cr: Coding rate denominator (int, e.g. ``5`` meaning 4/5).

    Returns:
        A string such as ``"SF12/BW125/CR5"``, or ``None`` when any parameter
        is absent or zero (meaning the radio config was not reported).
    """
    if not sf or not bw or not cr:
        return None
    return f"SF{int(sf)}/BW{int(bw)}/CR{int(cr)}"
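The preset label construction can be checked in isolation. A standalone copy of the logic, with the truthiness guard rejecting both missing and zero values:

```python
def derive_modem_preset(sf, bw, cr):
    """Compact SFxx/BWxxx/CRx label; None when any parameter is missing/zero."""
    # `not x` catches None, 0, and 0.0 alike — an unreported radio config.
    if not sf or not bw or not cr:
        return None
    return f"SF{int(sf)}/BW{int(bw)}/CR{int(cr)}"
```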


def _self_info_to_node_dict(self_info: dict) -> dict:
    """Convert a MeshCore ``SELF_INFO`` payload to a Meshtastic-ish node dict.

@@ -230,12 +383,14 @@ def _self_info_to_node_dict(self_info: dict) -> dict:
    """
    name = (self_info.get("name") or "").strip()
    pub_key = self_info.get("public_key", "")
    node_id = _meshcore_node_id(pub_key)
    role = _meshcore_adv_type_to_role(self_info.get("adv_type"))
    node: dict = {
        "lastHeard": int(time.time()),
        "protocol": "meshcore",
        "user": {
            "longName": name,
-            "shortName": _meshcore_short_name(pub_key),
+            "shortName": _meshcore_short_name(node_id),
            "publicKey": pub_key,
            **({"role": role} if role is not None else {}),
        },
@@ -243,10 +398,56 @@ def _self_info_to_node_dict(self_info: dict) -> dict:
    lat = self_info.get("adv_lat")
    lon = self_info.get("adv_lon")
    if lat is not None and lon is not None and (lat or lon):
-        node["position"] = {"latitude": lat, "longitude": lon}
+        node["position"] = {"latitude": lat, "longitude": lon, "time": int(time.time())}
    return node


def _store_meshcore_position(
    node_id: str,
    lat: float,
    lon: float,
    position_time: int | None,
    ingestor: str | None,
) -> None:
    """Enqueue a ``POST /api/positions`` for a MeshCore contact's advertised position.

    MeshCore does not issue dedicated position packets; position data is embedded
    in contact advertisements. A stable pseudo-ID is derived from the node
    identity and the position timestamp so repeated advertisements of the same
    position are idempotently de-duplicated by the web app's ``ON CONFLICT``
    clause.

    Parameters:
        node_id: Canonical ``!xxxxxxxx`` node identifier.
        lat: Latitude in decimal degrees.
        lon: Longitude in decimal degrees.
        position_time: Unix timestamp from the contact's ``last_advert`` field,
            or ``None`` to fall back to the current wall-clock time.
        ingestor: Canonical node ID of the host ingestor, or ``None``.
    """
    rx_time = int(time.time())
    pt = position_time or rx_time
    # Stable 63-bit pseudo-ID unique to (node, position_time) so that the web
    # app ON CONFLICT clause de-duplicates repeated advertisements of the same
    # position without collisions between different nodes.
    digest = hashlib.sha256(f"{node_id}:{pt}".encode()).digest()
    pos_id = int.from_bytes(digest[:8], "big") & 0x7FFFFFFFFFFFFFFF
    node_num = _node_num_from_id(node_id)
    payload = {
        "id": pos_id,
        "rx_time": rx_time,
        "rx_iso": _iso(rx_time),
        "node_id": node_id,
        "node_num": node_num,
        "from_id": node_id,
        "latitude": lat,
        "longitude": lon,
        "position_time": pt,
        "ingestor": ingestor,
    }
    _queue._queue_post_json("/api/positions", payload)
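The pseudo-ID derivation above is worth isolating: hashing `(node_id, position_time)` and masking to 63 bits gives a value that is stable across restarts and fits a signed 64-bit column, so the database `ON CONFLICT` clause sees repeated advertisements as the same row. A standalone copy of that derivation:

```python
import hashlib


def position_pseudo_id(node_id, position_time):
    """Stable 63-bit ID from (node, time); same inputs always collide on purpose."""
    digest = hashlib.sha256(f"{node_id}:{position_time}".encode()).digest()
    # Mask the top bit so the value fits a signed 64-bit integer column.
    return int.from_bytes(digest[:8], "big") & 0x7FFFFFFFFFFFFFFF
```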


def _to_json_safe(value: object) -> object:
    """Recursively convert *value* to a JSON-serialisable form.

@@ -317,6 +518,14 @@ class _MeshcoreInterface:
        self._contacts_lock = threading.Lock()
        self._contacts: dict = {}
        self.isConnected: bool = False
        # Tracks synthetic node IDs already upserted this session to avoid
        # repeating the HTTP POST for every message from the same unknown sender.
        # This set is reset on reconnect (because _MeshcoreInterface is recreated),
        # which may cause extra upserts after a disconnect — the ON CONFLICT guard
        # in the Ruby web app ensures those are idempotent and safe.
        self._synthetic_node_ids: set[str] = set()
        self._self_info_payload: dict | None = None
        """Most recent SELF_INFO payload received from the device, or ``None``."""

    # ------------------------------------------------------------------
    # Contact management (called from the asyncio thread)
@@ -363,6 +572,32 @@ class _MeshcoreInterface:
        with self._contacts_lock:
            return _pubkey_prefix_to_node_id(self._contacts, pubkey_prefix)

    def lookup_node_id_by_name(self, adv_name: str) -> str | None:
        """Return the canonical node ID for the contact whose ``adv_name`` matches.

        Used to resolve the sender of a MeshCore channel message from the
        ``"SenderName: body"`` text prefix when no ``pubkey_prefix`` is
        available in the event payload. The comparison is case-sensitive
        because ``adv_name`` values come verbatim from the MeshCore firmware.

        Parameters:
            adv_name: Advertised name to look up. Leading and trailing
                whitespace is stripped before comparison.

        Returns:
            Canonical ``!xxxxxxxx`` node ID, or ``None`` when no contact with
            that name is known.
        """
        name = adv_name.strip() if adv_name else ""
        if not name:
            return None
        with self._contacts_lock:
            for pub_key, contact in self._contacts.items():
                contact_name = (contact.get("adv_name") or "").strip()
                if contact_name == name:
                    return _meshcore_node_id(pub_key)
        return None

    # ------------------------------------------------------------------
    # Lifecycle
    # ------------------------------------------------------------------
@@ -388,6 +623,74 @@ class _MeshcoreInterface:
        thread.join(timeout=5.0)


# Fallback upper bound for channel index probing when the device query fails
# or returns an older firmware version that omits ``max_channels``.
_CHANNEL_PROBE_FALLBACK_MAX = 32

# ---------------------------------------------------------------------------
# Channel name resolution
# ---------------------------------------------------------------------------


async def _ensure_channel_names(mc: object) -> None:
    """Probe channel names from the device and populate the channel cache.

    Queries the device for its authoritative channel count via
    :meth:`~meshcore.MeshCore.commands.send_device_query` (``max_channels``
    field of the ``DEVICE_INFO`` response), then iterates every index from 0
    through ``max_channels - 1``, requesting each via
    :meth:`~meshcore.MeshCore.commands.get_channel`. The responses arrive as
    :attr:`~meshcore.EventType.CHANNEL_INFO` events and are registered into
    the shared channel cache via :func:`~data.mesh_ingestor.channels.register_channel`.

    Falls back to a probe bound of :data:`_CHANNEL_PROBE_FALLBACK_MAX` when the
    device query fails or returns an older firmware that omits ``max_channels``.

    Probes every index without early-stopping on ``ERROR`` responses, so sparse
    configurations (e.g. slots 0 and 5 configured, slots 1–4 empty) are handled
    correctly. Only a hard exception (connection loss, timeout) aborts the loop.

    Parameters:
        mc: Connected :class:`~meshcore.MeshCore` instance.
    """
    # Deferred — see _make_event_handlers for the circular-dependency note.
    from .. import channels as _channels

    max_idx = _CHANNEL_PROBE_FALLBACK_MAX
    try:
        dev_evt = await mc.commands.send_device_query()
        if dev_evt.type == EventType.DEVICE_INFO:
            reported = (dev_evt.payload or {}).get("max_channels")
            if isinstance(reported, int) and reported > 0:
                max_idx = reported
    except Exception as exc:
        config._debug_log(
            "Device query failed; using fallback channel probe bound",
            context="meshcore.channels",
            severity="warning",
            fallback_max=max_idx,
            error=str(exc),
        )

    for idx in range(max_idx):
        try:
            evt = await mc.commands.get_channel(idx)
            if evt.type == EventType.CHANNEL_INFO:
                name = (evt.payload or {}).get("channel_name", "")
                if name:
                    _channels.register_channel(idx, name)
            # ERROR response — unconfigured slot; continue to next index
        except Exception as exc:
            config._debug_log(
                "Channel probe failed",
                context="meshcore.channels",
                severity="warning",
                channel_idx=idx,
                error=str(exc),
            )
            break


# ---------------------------------------------------------------------------
# Handler logic helpers (module-level to keep _make_event_handlers lean)
# ---------------------------------------------------------------------------
@@ -396,21 +699,59 @@ class _MeshcoreInterface:
def _process_self_info(
    payload: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
-    """Apply a ``SELF_INFO`` payload: set host_node_id and upsert the host node.
+    """Apply a ``SELF_INFO`` payload: set host_node_id, upsert the host node,
+    and capture LoRa radio metadata into the shared config cache.

    Parameters:
        payload: Event payload dict containing at minimum ``public_key`` and
-            optionally ``name``, ``adv_lat``, ``adv_lon``.
+            optionally ``name``, ``adv_lat``, ``adv_lon``, ``radio_freq``,
+            ``radio_bw``, ``radio_sf``, ``radio_cr``.
        iface: Active interface whose :attr:`host_node_id` will be updated.
        handlers: Module reference for :func:`~data.mesh_ingestor.handlers`
            functions (passed to avoid circular-import issues).
    """
    # Cache the payload so node_snapshot_items / self_node_item can use it later.
    iface._self_info_payload = payload

    pub_key = payload.get("public_key", "")
    node_id = _meshcore_node_id(pub_key)

    # Capture radio metadata BEFORE upserting the node so that
    # _apply_radio_metadata_to_nodes finds populated values on the very first
    # SELF_INFO. Never overwrite a previously cached value.
    radio_freq = payload.get("radio_freq")
    if radio_freq is not None and getattr(config, "LORA_FREQ", None) is None:
        config.LORA_FREQ = radio_freq
    modem_preset = _derive_modem_preset(
        payload.get("radio_sf"), payload.get("radio_bw"), payload.get("radio_cr")
    )
    if modem_preset is not None and getattr(config, "MODEM_PRESET", None) is None:
        config.MODEM_PRESET = modem_preset

    if node_id:
        iface.host_node_id = node_id
        handlers.register_host_node_id(node_id)
        # Queue the ingestor registration BEFORE any node upserts so the web
        # backend assigns the correct protocol to all subsequent records.
        # Radio metadata (LORA_FREQ, MODEM_PRESET) is captured just above and
        # will be included in the heartbeat payload by queue_ingestor_heartbeat.
        _ingestors.queue_ingestor_heartbeat(force=True, node_id=node_id)
        handlers.upsert_node(node_id, _self_info_to_node_dict(payload))
        lat = payload.get("adv_lat")
        lon = payload.get("adv_lon")
        if lat is not None and lon is not None and (lat or lon):
            _store_meshcore_position(
                node_id, lat, lon, int(time.time()), handlers.host_node_id()
            )

    config._debug_log(
        "MeshCore radio metadata captured",
        context="meshcore.self_info.radio",
        severity="info",
        lora_freq=radio_freq,
        modem_preset=modem_preset,
    )

    handlers._mark_packet_seen()
    config._debug_log(
        "MeshCore self-info received",
@@ -436,6 +777,16 @@ def _process_contacts(
            continue
        iface._update_contact(contact)
        handlers.upsert_node(node_id, _contact_to_node_dict(contact))
        lat = contact.get("adv_lat")
        lon = contact.get("adv_lon")
        if lat is not None and lon is not None and (lat or lon):
            _store_meshcore_position(
                node_id,
                lat,
                lon,
                contact.get("last_advert"),
                handlers.host_node_id(),
            )
    handlers._mark_packet_seen()


@@ -455,6 +806,16 @@ def _process_contact_update(
        return
    iface._update_contact(contact)
    handlers.upsert_node(node_id, _contact_to_node_dict(contact))
    lat = contact.get("adv_lat")
    lon = contact.get("adv_lon")
    if lat is not None and lon is not None and (lat or lon):
        _store_meshcore_position(
            node_id,
            lat,
            lon,
            contact.get("last_advert"),
            handlers.host_node_id(),
        )
    handlers._mark_packet_seen()
    config._debug_log(
        "MeshCore contact updated",
@@ -482,11 +843,19 @@ def _make_event_handlers(iface: _MeshcoreInterface, target: str | None) -> dict:
    Returns:
        Mapping of ``EventType`` member name → async callback coroutine.
    """
-    # Deferred import to avoid a circular dependency: meshcore.py is imported by
-    # providers/__init__.py which is imported by the top-level mesh_ingestor
-    # package, while handlers.py imports from that same package.
+    # Deferred imports to avoid a circular dependency: meshcore.py is imported by
+    # protocols/__init__.py which is imported by the top-level mesh_ingestor
+    # package, while handlers.py and channels.py import from that same package.
+    from .. import channels as _channels
    from .. import handlers as _handlers

    async def on_channel_info(evt) -> None:
        payload = evt.payload or {}
        idx = payload.get("channel_idx")
        name = payload.get("channel_name", "")
        if idx is not None and name:
            _channels.register_channel(idx, name)

    async def on_self_info(evt) -> None:
        _process_self_info(evt.payload or {}, iface, _handlers)

@@ -506,11 +875,40 @@ def _make_event_handlers(iface: _MeshcoreInterface, target: str | None) -> dict:
        rx_time = int(time.time())
        channel_idx = payload.get("channel_idx", 0)

        # MeshCore channel messages carry no sender identifier in the event
        # payload. Try to resolve the sender from the "SenderName: body"
        # convention embedded in the message text, matched against the known
        # contacts roster. When the contacts roster does not yet contain the
        # sender, create a synthetic placeholder node so that the message
        # receives a stable from_id and the UI can render a badge immediately.
        # The web app will migrate messages to the real node ID once the sender
        # is seen via a contact advertisement.
        sender_name = _parse_sender_name(text)
        from_id = iface.lookup_node_id_by_name(sender_name) if sender_name else None
        if from_id is None and sender_name:
            synthetic_id = _derive_synthetic_node_id(sender_name)
            if synthetic_id not in iface._synthetic_node_ids:
|
||||
_handlers.upsert_node(synthetic_id, _synthetic_node_dict(sender_name))
|
||||
iface._synthetic_node_ids.add(synthetic_id)
|
||||
from_id = synthetic_id
|
||||
|
||||
# Upsert synthetic placeholder nodes for any @[Name] mentions in the
|
||||
# message body whose names are not yet in the contacts roster. This
|
||||
# ensures mention badges resolve even before the mentioned node is seen.
|
||||
for mention_name in _extract_mention_names(text):
|
||||
if not iface.lookup_node_id_by_name(mention_name):
|
||||
mention_id = _derive_synthetic_node_id(mention_name)
|
||||
if mention_id not in iface._synthetic_node_ids:
|
||||
_handlers.upsert_node(
|
||||
mention_id, _synthetic_node_dict(mention_name)
|
||||
)
|
||||
iface._synthetic_node_ids.add(mention_id)
|
||||
|
||||
packet = {
|
||||
"id": _derive_message_id(sender_ts, f"c{channel_idx}", text),
|
||||
"rxTime": rx_time,
|
||||
"rx_time": rx_time,
|
||||
"from_id": None,
|
||||
"from_id": from_id,
|
||||
"to_id": "^all",
|
||||
"channel": channel_idx,
|
||||
"snr": payload.get("SNR"),
|
||||
@@ -528,6 +926,8 @@ def _make_event_handlers(iface: _MeshcoreInterface, target: str | None) -> dict:
|
||||
"MeshCore channel message",
|
||||
context="meshcore.channel_msg",
|
||||
channel=channel_idx,
|
||||
sender=sender_name,
|
||||
from_id=from_id,
|
||||
)
|
||||
|
    async def on_contact_msg(evt) -> None:
@@ -570,6 +970,7 @@ def _make_event_handlers(iface: _MeshcoreInterface, target: str | None) -> dict:
        )

    return {
        "CHANNEL_INFO": on_channel_info,
        "SELF_INFO": on_self_info,
        "CONTACTS": on_contacts,
        "NEW_CONTACT": on_contact_update,
@@ -602,8 +1003,6 @@ def _make_connection(target: str, baudrate: int) -> object:
    Returns:
        An unconnected ``meshcore`` connection object.
    """
    from meshcore import BLEConnection, SerialConnection, TCPConnection

    ble_addr = parse_ble_target(target)
    if ble_addr:
        return BLEConnection(address=ble_addr)
@@ -637,7 +1036,12 @@ async def _run_meshcore(
        error_holder: Single-element list; set to the raised exception when
            the connection attempt fails so the caller can re-raise it.
    """
    from meshcore import EventType, MeshCore
    # Install early so :meth:`_MeshcoreInterface.close` can signal shutdown with
    # ``stop_event.set()`` instead of ``loop.stop()`` while ``connect()`` or the
    # ``finally`` disconnect is still running (avoids RuntimeError from
    # :meth:`asyncio.loop.run_until_complete`).
    stop_event = asyncio.Event()
    iface._stop_event = stop_event

    mc: MeshCore | None = None
    try:
@@ -682,6 +1086,11 @@ async def _run_meshcore(
                "firmware."
            )

        if stop_event.is_set():
            raise ClosedBeforeConnectedError(
                "Mesh interface close was requested before the connection could be completed."
            )

        iface.isConnected = True
        connected_event.set()

@@ -696,10 +1105,18 @@ async def _run_meshcore(
                error=str(exc),
            )

        try:
            await _ensure_channel_names(mc)
        except Exception as exc:
            config._debug_log(
                "Failed to fetch channel names",
                context="meshcore.channels",
                severity="warning",
                error=str(exc),
            )

        await mc.start_auto_message_fetching()

        stop_event = asyncio.Event()
        iface._stop_event = stop_event
        await stop_event.wait()

    except Exception as exc:
@@ -826,9 +1243,37 @@ class MeshcoreProvider:
        """
        return getattr(iface, "host_node_id", None)

    def self_node_item(self, iface: object) -> tuple[str, dict] | None:
        """Return the ``(node_id, node_dict)`` pair for the host self-node.

        Uses the most recently cached ``SELF_INFO`` payload stored on the
        interface. Returns ``None`` when no SELF_INFO has been received yet
        or when the public key cannot be mapped to a valid node ID.

        Parameters:
            iface: Active :class:`_MeshcoreInterface` instance.

        Returns:
            ``(canonical_node_id, node_dict)`` tuple or ``None``.
        """
        if not isinstance(iface, _MeshcoreInterface):
            return None
        payload = getattr(iface, "_self_info_payload", None)
        if not payload:
            return None
        node_id = _meshcore_node_id(payload.get("public_key", ""))
        if not node_id:
            return None
        return node_id, _self_info_to_node_dict(payload)

    def node_snapshot_items(self, iface: object) -> list[tuple[str, dict]]:
        """Return a snapshot of all known MeshCore contacts as node entries.

        Includes the host self-node when a ``SELF_INFO`` payload has already
        been received, so that the initial snapshot sent by the daemon
        covers the local device even when the background event loop delivers
        ``SELF_INFO`` before the snapshot is taken.

        Parameters:
            iface: Active :class:`_MeshcoreInterface` instance. Any other
                object type causes an empty list to be returned.
@@ -839,7 +1284,11 @@ class MeshcoreProvider:
        """
        if not isinstance(iface, _MeshcoreInterface):
            return []
        return iface.contacts_snapshot()
        items: list[tuple[str, dict]] = list(iface.contacts_snapshot())
        self_item = self.self_node_item(iface)
        if self_item is not None:
            items.append(self_item)
        return items


__all__ = ["MeshcoreProvider"]
+2 -2
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

"""Meshtastic provider implementation."""
"""Meshtastic protocol implementation."""

from __future__ import annotations

@@ -23,7 +23,7 @@ from ..utils import _retry_dict_snapshot


class MeshtasticProvider:
    """Meshtastic ingestion provider (current default)."""
    """Meshtastic ingestion protocol (current default)."""

    name = "meshtastic"
+348 -34
@@ -73,52 +73,61 @@ def _payload_key_value_pairs(payload: Mapping[str, object]) -> str:
    return " ".join(pairs)


_MESSAGE_POST_PRIORITY = 10
_INGESTOR_POST_PRIORITY = 80
_NEIGHBOR_POST_PRIORITY = 20
_TRACE_POST_PRIORITY = 25
_POSITION_POST_PRIORITY = 30
_TELEMETRY_POST_PRIORITY = 40
_NODE_POST_PRIORITY = 50
_INGESTOR_POST_PRIORITY = 0
_CHANNEL_POST_PRIORITY = 10
_NODE_POST_PRIORITY = 20
_MESSAGE_POST_PRIORITY = 30
_NEIGHBOR_POST_PRIORITY = 40
_TRACE_POST_PRIORITY = 50
_POSITION_POST_PRIORITY = 60
_TELEMETRY_POST_PRIORITY = 70
_DEFAULT_POST_PRIORITY = 90

_MAX_SEND_RETRIES = 3
"""Maximum number of times a failed POST item is re-queued before being dropped."""


@dataclass
class QueueState:
    """Mutable state for the HTTP POST priority queue."""

    lock: threading.Lock = field(default_factory=threading.Lock)
    queue: list[tuple[int, int, str, dict]] = field(default_factory=list)
    # Heap tuple: (priority, counter, path, payload, retries).
    queue: list[tuple[int, int, str, dict, int]] = field(default_factory=list)
    counter: Iterable[int] = field(default_factory=itertools.count)
    active: bool = False
    # Background drain thread. When the drainer is alive, _queue_post_json
    # signals drain_event instead of blocking the caller with HTTP calls.
    drain_event: threading.Event = field(default_factory=threading.Event)
    drainer: threading.Thread | None = None
    # Set to request the drainer thread to exit its loop cleanly.
    shutdown: threading.Event = field(default_factory=threading.Event)


STATE = QueueState()


def _post_json(
def _send_single(
    instance: str,
    api_token: str,
    path: str,
    payload: dict,
    *,
    instance: str | None = None,
    api_token: str | None = None,
) -> None:
    """Send a JSON payload to the configured web API.
) -> bool:
    """Transmit a single JSON payload to one instance.

    Parameters:
        path: API path relative to the configured instance root.
        instance: Base URL of the target instance.
        api_token: Bearer token for this instance (may be empty).
        path: API path relative to the instance root.
        payload: JSON-serialisable body to transmit.
        instance: Optional override for :data:`config.INSTANCE`.
        api_token: Optional override for :data:`config.API_TOKEN`.

    Returns:
        ``True`` when the request succeeded, ``False`` on failure.
    """

    if instance is None:
        instance = config.INSTANCE
    if api_token is None:
        api_token = config.API_TOKEN

    if not instance:
        return
        return True

    url = f"{instance}{path}"
    data = json.dumps(payload).encode("utf-8")

@@ -143,15 +152,80 @@ def _post_json(
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            resp.read()
    except Exception as exc:  # pragma: no cover - exercised in production
        return True
    except Exception as exc:
        config._debug_log(
            "POST request failed",
            context="queue.post_json",
            severity="warn",
            always=True,
            url=url,
            error_class=exc.__class__.__name__,
            error_message=str(exc),
        )
        return False


def _post_json(
    path: str,
    payload: dict,
    *,
    instance: str | None = None,
    api_token: str | None = None,
) -> bool:
    """Send a JSON payload to one or more configured web API instances.

    When ``instance`` is provided explicitly the payload is sent to that
    single target. Otherwise every ``(url, token)`` pair in
    :data:`config.INSTANCES` receives the payload independently so that
    one failure does not block delivery to the remaining targets.

    Parameters:
        path: API path relative to the instance root.
        payload: JSON-serialisable body to transmit.
        instance: Optional single-instance override.
        api_token: Optional token override (only used with ``instance``).

    Returns:
        ``True`` when at least one instance received the payload
        successfully, ``False`` when all targets failed. A missing
        configuration is not a transient failure and returns ``True``
        (retrying would not help).
    """

    if instance is not None:
        if not instance:
            return True
        return _send_single(instance, api_token or "", path, payload)

    targets: tuple[tuple[str, str], ...] = config.INSTANCES
    if not targets:
        # Backward-compatible fallback for callers that only set
        # config.INSTANCE / config.API_TOKEN directly.
        inst = config.INSTANCE
        if not inst:
            try:
                config._debug_log(
                    "No target instances configured; discarding payload",
                    context="queue.post_json",
                    severity="error",
                    always=True,
                    path=path,
                )
            except Exception:
                pass
            return False
        return _send_single(inst, api_token or config.API_TOKEN, path, payload)

    any_ok = False
    any_attempted = False
    for inst, token in targets:
        if not inst:
            continue
        any_attempted = True
        if _send_single(inst, token, path, payload):
            any_ok = True
    return any_ok or not any_attempted
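The fan-out rule in the multi-instance loop above — attempt every configured `(url, token)` pair independently, report success when at least one delivery worked, and treat "nothing usable to attempt" as success rather than a transient failure — reduces to this small sketch (function and argument names are illustrative):

```python
def post_to_all(targets, send_one):
    # Each target is tried independently so one failing instance
    # cannot block delivery to the others.
    any_ok = False
    any_attempted = False
    for inst, token in targets:
        if not inst:
            continue
        any_attempted = True
        if send_one(inst, token):
            any_ok = True
    # No usable targets is a configuration gap, not a transient
    # failure, so it reports success (retrying would not help).
    return any_ok or not any_attempted

print(post_to_all([("https://a.example", "t1"), ("https://b.example", "t2")],
                  lambda inst, tok: inst == "https://b.example"))  # True
print(post_to_all([("", "t")], lambda inst, tok: False))           # True
```

The second call succeeds vacuously: the empty URL is skipped, nothing is attempted, and `not any_attempted` keeps the item out of the retry path.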
def _enqueue_post_json(
@@ -160,6 +234,7 @@ def _enqueue_post_json(
    priority: int,
    *,
    state: QueueState = STATE,
    retries: int = 0,
) -> None:
    """Store a POST request in the priority queue.

@@ -168,15 +243,17 @@ def _enqueue_post_json(
        payload: JSON-serialisable body.
        priority: Lower values execute first.
        state: Shared queue state, injectable for testing.
        retries: Number of prior failed send attempts for this item.
    """

    with state.lock:
        counter = next(state.counter)
        # Heap tuple: (priority, counter, path, payload). Lower priority
        # values are dequeued first (min-heap semantics). The monotonically
        # increasing counter breaks ties so equal-priority items are processed
        # in FIFO order without comparing the non-orderable payload dict.
        heapq.heappush(state.queue, (priority, counter, path, payload))
        # Heap tuple: (priority, counter, path, payload, retries). Lower
        # priority values are dequeued first (min-heap semantics). The
        # monotonically increasing counter breaks ties so equal-priority
        # items are processed in FIFO order without comparing the
        # non-orderable payload dict.
        heapq.heappush(state.queue, (priority, counter, path, payload, retries))
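The heap-tuple layout described in the comments above can be demonstrated directly with the standard library: lower priority values drain first, and the monotonically increasing counter keeps equal-priority items in FIFO order without ever comparing the payload dicts.

```python
import heapq
import itertools

counter = itertools.count()
heap: list = []
for priority, path in [(30, "/api/messages/1"), (0, "/api/ingestors"),
                       (30, "/api/messages/2"), (20, "/api/nodes")]:
    # (priority, counter, path, payload, retries): the counter breaks ties
    # so the heap never has to order two dicts against each other.
    heapq.heappush(heap, (priority, next(counter), path, {}, 0))

drained = [heapq.heappop(heap)[2] for _ in range(len(heap))]
print(drained)
# ['/api/ingestors', '/api/nodes', '/api/messages/1', '/api/messages/2']
```

Without the counter, two items sharing a priority would fall through to comparing their payload dicts, which raises `TypeError` in Python 3.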
def _drain_post_queue(
@@ -184,6 +261,12 @@ def _drain_post_queue(
) -> None:
    """Process queued POST requests in priority order.

    When the *send* callable returns ``False`` (transient failure) the item
    is re-queued up to :data:`_MAX_SEND_RETRIES` times. Items exceeding
    the limit are dropped with a warning. Custom *send* callables that
    return ``None`` (the typical test/heartbeat pattern) are never retried
    — the ``result is False`` identity check ensures backward compatibility.

    Parameters:
        state: Queue container holding pending items.
        send: Optional callable used to transmit requests.

@@ -198,13 +281,184 @@ def _drain_post_queue(
            if not state.queue:
                state.active = False
                return
            _priority, _idx, path, payload = heapq.heappop(state.queue)
        send(path, payload)
            item = heapq.heappop(state.queue)

        # Support both 5-tuple (current) and 4-tuple (legacy/test) items.
        if len(item) >= 5:
            priority, _idx, path, payload, retries = item[:5]
        else:
            priority, _idx, path, payload = item[:4]
            retries = 0

        result = send(path, payload)

        # Only retry when the send callable explicitly signals failure
        # (returns False). Custom send callables (tests, heartbeat)
        # return None and must NOT be treated as failures.
        if result is False:
            if retries < _MAX_SEND_RETRIES:
                _enqueue_post_json(
                    path, payload, priority, state=state, retries=retries + 1
                )
            else:
                try:
                    config._debug_log(
                        "Dropping item after max retries",
                        context="queue.drain",
                        severity="warn",
                        always=True,
                        path=path,
                        retries=retries,
                    )
                except Exception:
                    pass
    finally:
        with state.lock:
            state.active = False
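The retry contract above — re-queue only on an explicit `False`, never on `None`, and drop after `_MAX_SEND_RETRIES` attempts — can be exercised in isolation with a simplified drain loop. This is a sketch of the semantics, not the module's actual function:

```python
import heapq

MAX_SEND_RETRIES = 3

def drain(queue, send):
    dropped = []
    while queue:
        priority, idx, path, payload, retries = heapq.heappop(queue)
        result = send(path, payload)
        # Identity check: only an explicit False is a transient failure.
        # Send callables that return None (tests, heartbeat) never retry.
        if result is False:
            if retries < MAX_SEND_RETRIES:
                heapq.heappush(queue, (priority, idx, path, payload, retries + 1))
            else:
                dropped.append(path)
    return dropped

q = [(10, 0, "/flaky", {}, 0), (10, 1, "/quiet", {}, 0)]
heapq.heapify(q)
calls = []
def send(path, payload):
    calls.append(path)
    return False if path == "/flaky" else None  # None from /quiet is not a failure

result = drain(q, send)
print(result)                  # ['/flaky']
print(calls.count("/flaky"))   # 4  (initial attempt + 3 retries)
```

The `/quiet` item is attempted exactly once: its `None` result falls through the `result is False` check untouched, preserving the pre-existing behaviour of callers that return nothing.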
_QUEUE_DEPTH_WARNING_THRESHOLD = 100
"""Log a warning when the queue grows past this many items."""


def _queue_drainer_loop(state: QueueState = STATE) -> None:
    """Body of the background queue-drain daemon thread.

    Blocks on :attr:`QueueState.drain_event`, clears it, then empties the
    queue by calling :func:`_drain_post_queue`. The thread is created as a
    daemon so it terminates automatically when the process exits.

    The loop exits cleanly when :attr:`QueueState.shutdown` is set, allowing
    tests (and graceful-shutdown paths) to join the thread instead of leaking
    daemon threads that accumulate across a test run.

    The loop is deliberately hardened so that **no** :class:`Exception` can
    kill the thread. The ``_debug_log`` calls inside the error handler are
    themselves wrapped in ``try/except`` to prevent cascading failures
    (e.g. ``BrokenPipeError`` from ``print()`` to a closed stdout).

    .. note::
        There is a benign race between ``drain_event.clear()`` and the end
        of :func:`_drain_post_queue`: a signal arriving in that window is
        consumed by ``clear()`` but the item is still drained because the
        drain loop empties the queue completely. However, an item enqueued
        *after* the drain loop finds the queue empty and *before*
        ``wait()`` re-blocks will sit until the next ``drain_event.set()``
        call (i.e. the next enqueue). This is acceptable for a best-effort
        ingestor — maximum extra latency equals the inter-packet interval.

    Parameters:
        state: Queue state instance to drain.
    """
    try:
        config._debug_log(
            "Queue drainer thread started",
            context="queue.drainer",
            severity="info",
            always=True,
        )
    except Exception:
        pass

    while not state.shutdown.is_set():
        state.drain_event.wait(timeout=1.0)
        if state.shutdown.is_set():
            break
        state.drain_event.clear()

        depth = len(state.queue)
        if depth > _QUEUE_DEPTH_WARNING_THRESHOLD:
            try:
                config._debug_log(
                    "Queue depth warning",
                    context="queue.drainer",
                    severity="warn",
                    always=True,
                    depth=depth,
                )
            except Exception:
                pass

        try:
            _drain_post_queue(state)
        except Exception as exc:
            try:
                config._debug_log(
                    "Queue drainer error",
                    context="queue.drainer",
                    severity="error",
                    always=True,
                    error_class=exc.__class__.__name__,
                    error_message=str(exc),
                )
            except Exception:
                pass

    try:
        config._debug_log(
            "Queue drainer thread exiting",
            context="queue.drainer",
            severity="info",
            always=True,
        )
    except Exception:
        pass


def _start_queue_drainer(state: QueueState = STATE) -> None:
    """Idempotently start the background queue-drain thread.

    Calling this function when a drainer thread is already alive is a
    no-op. The thread is created as a daemon so it does not prevent
    process exit. The check-and-start is performed under :attr:`state.lock`
    to avoid starting duplicate threads under concurrent callers.

    If items are already in the queue when the drainer is started,
    :attr:`QueueState.drain_event` is signalled immediately so they are not
    stranded waiting for the next packet to arrive.

    Parameters:
        state: Queue state whose :func:`_queue_drainer_loop` to start.
    """
    with state.lock:
        if state.drainer is not None and state.drainer.is_alive():
            return
        # Reset in case the prior thread was stopped or crashed while
        # shutdown was already set.
        state.shutdown.clear()
        t = threading.Thread(
            target=_queue_drainer_loop,
            args=(state,),
            name="queue-drainer",
            daemon=True,
        )
        t.start()
        state.drainer = t
        if state.queue:
            state.drain_event.set()
def _stop_queue_drainer(state: QueueState = STATE, timeout: float = 5.0) -> None:
    """Signal the drainer thread to exit and wait for it to finish.

    Sets :attr:`QueueState.shutdown` and :attr:`QueueState.drain_event` so
    the loop wakes up, observes the shutdown flag, and terminates. After
    joining (up to *timeout* seconds) the drainer reference is cleared.

    Safe to call when no drainer is running (no-op).

    Parameters:
        state: Queue state whose drainer to stop.
        timeout: Maximum seconds to wait for the thread to finish.
    """
    if state.drainer is None or not state.drainer.is_alive():
        return
    state.shutdown.set()
    state.drain_event.set()
    state.drainer.join(timeout=timeout)
    state.drainer = None


def _queue_post_json(
    path: str,
    payload: dict,
@@ -213,14 +467,32 @@ def _queue_post_json(
    state: QueueState = STATE,
    send: Callable[[str, dict], None] | None = None,
) -> None:
    """Queue a POST request and start processing if idle.
    """Queue a POST request and wake the drain thread (or drain inline).

    When a background drainer thread is running (started via
    :func:`_start_queue_drainer`), this function enqueues the item and
    signals :attr:`QueueState.drain_event` without blocking — the drain
    happens on the dedicated thread. This keeps the caller's thread (which
    may be the Meshtastic asyncio I/O thread) free to process serial events.

    When no background drainer is alive the call falls back to a
    synchronous inline drain. This path is used by tests (which pass a
    ``send`` override via :func:`_fresh_state`) and for any standalone use
    without calling :func:`_start_queue_drainer`.

    .. note::
        The background drainer is used **only** when no custom ``send``
        override is provided (i.e. the production ``_post_json`` path).
        Any caller that supplies a custom ``send`` (tests, heartbeat
        helpers) always gets the synchronous inline drain so its transport
        is honoured correctly.

    Parameters:
        path: API path for the request.
        payload: JSON payload to send.
        priority: Scheduling priority where lower values run first.
        state: Queue container used to store pending requests.
        send: Optional transport override, primarily for tests.
        send: Optional transport override (synchronous fallback only).
    """

    if send is None:
@@ -240,6 +512,42 @@ def _queue_post_json(
        )

    _enqueue_post_json(path, payload, priority, state=state)

    # Use the background drainer only when it is alive AND no custom send
    # override is in play. A custom send (used by tests and callers such as
    # ingestors.queue_ingestor_heartbeat) must be honoured synchronously
    # because the background drainer always calls _drain_post_queue without
    # a send override.
    #
    # The ``is`` check is intentional: _post_json is a module-level function
    # so identity comparison reliably detects the "no override" default that
    # was assigned at the top of this function.
    if send is _post_json:
        if state.drainer is not None and state.drainer.is_alive():
            state.drain_event.set()
            return

        # The drainer was previously started but has died (e.g. unhandled
        # exception). Restart it so the caller stays non-blocking and the
        # MeshCore asyncio event loop is not stalled by inline HTTP calls.
        if state.drainer is not None:
            try:
                config._debug_log(
                    "Restarting dead queue drainer thread",
                    context="queue.queue_post_json",
                    severity="warn",
                    always=True,
                )
            except Exception:
                pass
            _start_queue_drainer(state)
            # If the restart succeeded, delegate to the background thread.
            if state.drainer is not None and state.drainer.is_alive():
                state.drain_event.set()
                return

    # Synchronous fallback: no drainer was ever started, the restart
    # failed, or a custom send override is in play.
    with state.lock:
        if state.active:
            return
@@ -262,17 +570,23 @@ def _clear_post_queue(state: QueueState = STATE) -> None:
__all__ = [
    "STATE",
    "QueueState",
    "_CHANNEL_POST_PRIORITY",
    "_DEFAULT_POST_PRIORITY",
    "_MESSAGE_POST_PRIORITY",
    "_INGESTOR_POST_PRIORITY",
    "_MAX_SEND_RETRIES",
    "_MESSAGE_POST_PRIORITY",
    "_NEIGHBOR_POST_PRIORITY",
    "_NODE_POST_PRIORITY",
    "_POSITION_POST_PRIORITY",
    "_QUEUE_DEPTH_WARNING_THRESHOLD",
    "_TRACE_POST_PRIORITY",
    "_TELEMETRY_POST_PRIORITY",
    "_clear_post_queue",
    "_drain_post_queue",
    "_enqueue_post_json",
    "_post_json",
    "_queue_drainer_loop",
    "_queue_post_json",
    "_start_queue_drainer",
    "_stop_queue_drainer",
]
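The start/stop handshake above (wake via `drain_event`, exit via `shutdown`, then `join`) follows a standard daemon-worker pattern. A minimal self-contained version, with illustrative names and an extra `idle` event added purely so the example can observe the drain deterministically:

```python
import threading

work: list = []
drain_event = threading.Event()
shutdown = threading.Event()
idle = threading.Event()

def drainer() -> None:
    # Wake on signal, or once per second to re-check the shutdown flag.
    while not shutdown.is_set():
        drain_event.wait(timeout=1.0)
        if shutdown.is_set():
            break
        drain_event.clear()
        while work:
            work.pop(0)
        idle.set()  # lets the caller observe that a drain pass finished

t = threading.Thread(target=drainer, name="queue-drainer", daemon=True)
t.start()

work.append("item")
drain_event.set()              # non-blocking wake-up from the caller's thread
idle.wait(timeout=5.0)

shutdown.set()
drain_event.set()              # unblock wait() so the shutdown flag is observed
t.join(timeout=5.0)
print(work, t.is_alive())
```

Setting `drain_event` alongside `shutdown` is the key detail: without it, a stop request could sit for up to the one-second `wait` timeout before the thread notices the flag.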
+3 -1
@@ -42,9 +42,11 @@ CREATE TABLE IF NOT EXISTS nodes (
    altitude REAL,
    lora_freq INTEGER,
    modem_preset TEXT,
    protocol TEXT NOT NULL DEFAULT 'meshtastic'
    protocol TEXT NOT NULL DEFAULT 'meshtastic',
    synthetic BOOLEAN NOT NULL DEFAULT 0
);

CREATE INDEX IF NOT EXISTS idx_nodes_last_heard ON nodes(last_heard);
CREATE INDEX IF NOT EXISTS idx_nodes_hw_model ON nodes(hw_model);
CREATE INDEX IF NOT EXISTS idx_nodes_latlon ON nodes(latitude, longitude);
CREATE INDEX IF NOT EXISTS idx_nodes_long_name ON nodes(long_name);
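The schema change above adds a `synthetic` column to `nodes`. Since SQLite has no `ALTER TABLE … ADD COLUMN IF NOT EXISTS`, an idempotent upgrade for an existing database typically probes `PRAGMA table_info` first. A sketch, not the project's actual migration code:

```python
import sqlite3

def ensure_column(conn: sqlite3.Connection, table: str, column: str, ddl: str) -> None:
    # PRAGMA table_info returns one row per column; the name is field 1.
    cols = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {ddl}")

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE nodes (node_id TEXT PRIMARY KEY, "
    "protocol TEXT NOT NULL DEFAULT 'meshtastic')"
)
ensure_column(conn, "nodes", "synthetic", "synthetic BOOLEAN NOT NULL DEFAULT 0")
ensure_column(conn, "nodes", "synthetic", "synthetic BOOLEAN NOT NULL DEFAULT 0")  # no-op
names = [row[1] for row in conn.execute("PRAGMA table_info(nodes)")]
print(names)
# ['node_id', 'protocol', 'synthetic']
```

The `NOT NULL DEFAULT 0` clause matters for `ADD COLUMN`: existing rows are backfilled with the default, so the `CREATE TABLE IF NOT EXISTS` path and the migration path converge on the same schema.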
@@ -49,3 +49,21 @@ services:
    environment:
      DEBUG: 0
    restart: always

  matrix-bridge:
    build:
      context: .
      dockerfile: matrix/Dockerfile
      target: runtime
    environment:
      DEBUG: 0
    restart: always

  matrix-bridge-bridge:
    build:
      context: .
      dockerfile: matrix/Dockerfile
      target: runtime
    environment:
      DEBUG: 0
    restart: always
+6 -2
@@ -34,6 +34,7 @@ x-web-base: &web-base
      - potatomesh_data:/app/.local/share/potato-mesh
      - potatomesh_config:/app/.config/potato-mesh
      - potatomesh_logs:/app/logs
      - potatomesh_pages:/app/pages
    restart: unless-stopped
    deploy:
      resources:
@@ -52,9 +53,10 @@ x-ingestor-base: &ingestor-base
      ALLOWED_CHANNELS: ${ALLOWED_CHANNELS:-""}
      HIDDEN_CHANNELS: ${HIDDEN_CHANNELS:-""}
      API_TOKEN: ${API_TOKEN}
      INSTANCE_DOMAIN: ${INSTANCE_DOMAIN}
      POTATOMESH_INSTANCE: ${POTATOMESH_INSTANCE:-http://web:41447}
      INSTANCE_DOMAIN: ${INSTANCE_DOMAIN:-http://web:41447}
      DEBUG: ${DEBUG:-0}
      PROTOCOL: ${PROTOCOL:-meshtastic}
      ENERGY_SAVING: ${ENERGY_SAVING:-0}
      FEDERATION: ${FEDERATION:-1}
      PRIVATE: ${PRIVATE:-0}
    volumes:
@@ -159,6 +161,8 @@ volumes:
    driver: local
  potatomesh_logs:
    driver: local
  potatomesh_pages:
    driver: local
  potatomesh_matrix_bridge_state:
    driver: local
Generated
+3 -3
@@ -969,7 +969,7 @@ checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"

[[package]]
name = "potatomesh-matrix-bridge"
version = "0.5.12"
version = "0.6.1"
dependencies = [
    "anyhow",
    "axum",
@@ -1087,9 +1087,9 @@ checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f"

[[package]]
name = "rand"
version = "0.9.2"
version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
checksum = "44c5af06bb1b7d3216d91932aed5265164bf384dc89cd6ba05cf59a35f5f76ea"
dependencies = [
    "rand_chacha",
    "rand_core",
+1 -1
@@ -14,7 +14,7 @@

[package]
name = "potatomesh-matrix-bridge"
version = "0.5.12"
version = "0.6.1"
edition = "2021"

[dependencies]
+4 -1
@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->

# potatomesh-matrix-bridge

A small Rust daemon that bridges **PotatoMesh** LoRa messages into a **Matrix** room.
@@ -90,7 +93,7 @@ room_id = "!yourroomid:example.org"
[state]
# Where to persist last seen message id
state_file = "bridge_state.json"
````
```

The `hs_token` is used to validate inbound appservice transactions. Keep it identical in `Config.toml` and your Matrix appservice registration file.
Binary file not shown.
After Width: | Height: | Size: 1.5 MiB
+1 -3
@@ -28,9 +28,7 @@ from meshtastic.mesh_interface import MeshInterface
from meshtastic.serial_interface import SerialInterface
from pubsub import pub

CONNECTION = os.environ.get("CONNECTION") or os.environ.get(
    "MESH_SERIAL", "/dev/ttyACM0"
)
CONNECTION = os.environ.get("CONNECTION", "/dev/ttyACM0")
"""Connection target opened to capture Meshtastic traffic."""
OUT = os.environ.get("MESH_DUMP_FILE", "meshtastic-dump.ndjson")
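The dump-script change above drops the legacy `MESH_SERIAL` fallback, so only `CONNECTION` is consulted before falling back to the hard-coded serial default. Its observable effect:

```python
import os

os.environ.pop("CONNECTION", None)
os.environ["MESH_SERIAL"] = "/dev/ttyUSB0"  # legacy variable, now ignored

# New behaviour: a single lookup with a hard-coded default.
connection = os.environ.get("CONNECTION", "/dev/ttyACM0")
print(connection)  # /dev/ttyACM0
```

Deployments that still rely on `MESH_SERIAL` alone will silently get `/dev/ttyACM0` after this change, so they should export `CONNECTION` instead.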
@@ -421,3 +421,54 @@ class TestIsHiddenChannel:
        """Non-configured names are not hidden."""
        monkeypatch.setattr(config, "HIDDEN_CHANNELS", ("Chat",))
        assert channels.is_hidden_channel("LongFast") is False


# ---------------------------------------------------------------------------
# register_channel
# ---------------------------------------------------------------------------


class TestRegisterChannel:
    """Tests for :func:`channels.register_channel`."""

    def test_adds_to_lookup(self):
        """register_channel must make the name retrievable via channel_name."""
        channels.register_channel(1, "Chat")
        assert channels.channel_name(1) == "Chat"

    def test_no_overwrite(self):
        """Second call with same index must not replace the first-registered name."""
        channels.register_channel(0, "LongFast")
        channels.register_channel(0, "Other")
        assert channels.channel_name(0) == "LongFast"

    def test_strips_whitespace(self):
        """Leading and trailing whitespace is stripped from the channel name."""
        channels.register_channel(2, " Chat ")
        assert channels.channel_name(2) == "Chat"

    def test_ignores_empty_string(self):
        """Empty string is silently ignored and does not populate the cache."""
        channels.register_channel(3, "")
        assert channels.channel_name(3) is None

    def test_ignores_whitespace_only_string(self):
        """Whitespace-only name is silently ignored."""
        channels.register_channel(3, " ")
        assert channels.channel_name(3) is None

    def test_updates_mappings_tuple(self):
        """channel_mappings() reflects all registered entries, sorted by index."""
        channels.register_channel(2, "Admin")
        channels.register_channel(0, "LongFast")
        assert channels.channel_mappings() == ((0, "LongFast"), (2, "Admin"))

    def test_coexists_with_capture_from_interface(self):
        """Entries from register_channel and capture_from_interface merge correctly."""
        # Simulate capture_from_interface populating index 0.
        channels._CHANNEL_LOOKUP[0] = "LongFast"
        channels._CHANNEL_MAPPINGS = ((0, "LongFast"),)
        # register_channel should add index 1 without disturbing index 0.
        channels.register_channel(1, "Chat")
        assert channels.channel_name(0) == "LongFast"
        assert channels.channel_name(1) == "Chat"
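Read together, the tests above pin down a small registry API. A minimal sketch that satisfies them follows; the `_CHANNEL_LOOKUP`/`_CHANNEL_MAPPINGS` cache names appear in the last test, but the function bodies here are an assumption for illustration, not the project's implementation.

```python
# Sketch of the channel registry behaviour exercised by TestRegisterChannel.
# Module-level caches mirror the _CHANNEL_LOOKUP/_CHANNEL_MAPPINGS names
# seen in the diff; the bodies are assumptions, not project code.
from typing import Optional, Tuple

_CHANNEL_LOOKUP: dict = {}
_CHANNEL_MAPPINGS: Tuple[Tuple[int, str], ...] = ()


def register_channel(index: int, name: str) -> None:
    """Record a channel name, ignoring blanks and never overwriting."""
    global _CHANNEL_MAPPINGS
    cleaned = (name or "").strip()
    if not cleaned or index in _CHANNEL_LOOKUP:
        return
    _CHANNEL_LOOKUP[index] = cleaned
    # Keep the tuple view sorted by channel index.
    _CHANNEL_MAPPINGS = tuple(sorted(_CHANNEL_LOOKUP.items()))


def channel_name(index: int) -> Optional[str]:
    """Return the registered name for an index, or None."""
    return _CHANNEL_LOOKUP.get(index)


def channel_mappings() -> Tuple[Tuple[int, str], ...]:
    """Return all registered (index, name) pairs, sorted by index."""
    return _CHANNEL_MAPPINGS
```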
+166
-40
@@ -96,45 +96,111 @@ class TestParseHiddenChannels:
# ---------------------------------------------------------------------------


class TestResolveInstanceDomains:
    """Tests for :func:`config._resolve_instance_domains`."""

    def test_single_domain(self, monkeypatch):
        """Single domain produces one-element tuple."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "foo.tld")
        monkeypatch.setenv("API_TOKEN", "secret")
        result = config._resolve_instance_domains()
        assert result == (("https://foo.tld", "secret"),)

    def test_multi_domain_broadcast_token(self, monkeypatch):
        """Multiple domains with a single token broadcast the token."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "foo.tld, bar.tld")
        monkeypatch.setenv("API_TOKEN", "shared")
        result = config._resolve_instance_domains()
        assert result == (
            ("https://foo.tld", "shared"),
            ("https://bar.tld", "shared"),
        )

    def test_multi_domain_per_instance_tokens(self, monkeypatch):
        """Comma-separated tokens are positionally paired with domains."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "a.tld,b.tld")
        monkeypatch.setenv("API_TOKEN", "tok1,tok2")
        result = config._resolve_instance_domains()
        assert result == (("https://a.tld", "tok1"), ("https://b.tld", "tok2"))

    def test_token_count_mismatch_raises(self, monkeypatch):
        """Mismatched counts raise ValueError at parse time."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "a.tld,b.tld")
        monkeypatch.setenv("API_TOKEN", "t1,t2,t3")
        with pytest.raises(ValueError, match="counts must match"):
            config._resolve_instance_domains()

    def test_deduplicates_domains(self, monkeypatch):
        """Duplicate domains are collapsed to a single entry."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "foo.tld, foo.tld")
        monkeypatch.setenv("API_TOKEN", "tok")
        result = config._resolve_instance_domains()
        assert result == (("https://foo.tld", "tok"),)

    def test_preserves_explicit_scheme(self, monkeypatch):
        """Domains with explicit schemes keep them; others get https://."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "http://local:41447,bar.tld")
        monkeypatch.setenv("API_TOKEN", "tok")
        result = config._resolve_instance_domains()
        assert result == (
            ("http://local:41447", "tok"),
            ("https://bar.tld", "tok"),
        )

    def test_empty_domain(self, monkeypatch):
        """Empty INSTANCE_DOMAIN returns an empty tuple."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "")
        monkeypatch.setenv("API_TOKEN", "tok")
        result = config._resolve_instance_domains()
        assert result == ()

    def test_strips_trailing_slashes(self, monkeypatch):
        """Trailing slashes are stripped from domains."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "foo.tld/")
        monkeypatch.setenv("API_TOKEN", "tok")
        result = config._resolve_instance_domains()
        assert result == (("https://foo.tld", "tok"),)

    def test_empty_token_broadcast(self, monkeypatch):
        """Empty API_TOKEN broadcasts empty string to all instances."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "a.tld,b.tld")
        monkeypatch.setenv("API_TOKEN", "")
        result = config._resolve_instance_domains()
        assert result == (("https://a.tld", ""), ("https://b.tld", ""))


# ---------------------------------------------------------------------------
# _resolve_instance_domain (legacy, kept for backward compatibility)
# ---------------------------------------------------------------------------


class TestResolveInstanceDomain:
    """Tests for :func:`config._resolve_instance_domain`."""

    def test_returns_instance_domain_when_set(self, monkeypatch):
        """Uses INSTANCE_DOMAIN when set."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "mesh.example.com")
        monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
        result = config._resolve_instance_domain()
        assert result == "https://mesh.example.com"

    def test_adds_https_when_no_scheme(self, monkeypatch):
        """Adds https:// prefix when no scheme is present."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "example.com")
        monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
        assert config._resolve_instance_domain() == "https://example.com"

    def test_preserves_existing_scheme(self, monkeypatch):
        """Leaves existing http:// scheme intact."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "http://example.com")
        monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
        assert config._resolve_instance_domain() == "http://example.com"

    def test_strips_trailing_slash(self, monkeypatch):
        """Strips trailing slash from instance domain."""
        monkeypatch.setenv("INSTANCE_DOMAIN", "https://example.com/")
        monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
        assert config._resolve_instance_domain() == "https://example.com"

    def test_falls_back_to_legacy_env(self, monkeypatch):
        """Falls back to POTATOMESH_INSTANCE when INSTANCE_DOMAIN is absent."""
    def test_returns_empty_when_not_set(self, monkeypatch):
        """Returns empty string when INSTANCE_DOMAIN is unset."""
        monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
        monkeypatch.setenv("POTATOMESH_INSTANCE", "legacy.example.com")
        result = config._resolve_instance_domain()
        assert result == "https://legacy.example.com"

    def test_returns_empty_when_neither_set(self, monkeypatch):
        """Returns empty string when neither env var is set."""
        monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
        monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
        assert config._resolve_instance_domain() == ""
@@ -196,50 +262,110 @@ class TestDebugLog:


# ---------------------------------------------------------------------------
# PROVIDER validation
# PROTOCOL validation
# ---------------------------------------------------------------------------


class TestProviderValidation:
    """Tests for PROVIDER environment validation at import time."""
class TestProtocolValidation:
    """Tests for PROTOCOL environment validation at import time."""

    def test_valid_provider_does_not_raise(self, monkeypatch):
        """Importing config with a valid PROVIDER succeeds."""
    def test_valid_protocol_does_not_raise(self, monkeypatch):
        """Importing config with a valid PROTOCOL succeeds."""
        import importlib

        monkeypatch.setenv("PROVIDER", "meshtastic")
        monkeypatch.setenv("PROTOCOL", "meshtastic")
        # Re-importing should not raise
        importlib.reload(config)

    def test_invalid_provider_raises_value_error(self, monkeypatch):
        """An invalid PROVIDER value raises ValueError at module load."""
    def test_invalid_protocol_raises_value_error(self, monkeypatch):
        """An invalid PROTOCOL value raises ValueError at module load."""
        import importlib

        monkeypatch.setenv("PROVIDER", "bogus_provider_xyz")
        with pytest.raises(ValueError, match="Unknown PROVIDER"):
        monkeypatch.setenv("PROTOCOL", "bogus_protocol_xyz")
        with pytest.raises(ValueError, match="Unknown PROTOCOL"):
            importlib.reload(config)
        # Restore to valid value so subsequent tests work
        monkeypatch.setenv("PROVIDER", "meshtastic")
        monkeypatch.setenv("PROTOCOL", "meshtastic")
        importlib.reload(config)


# ---------------------------------------------------------------------------
# _ConfigModule proxy
# _parse_lora_freq_env
# ---------------------------------------------------------------------------


class TestConfigModuleProxy:
    """Tests for the :class:`config._ConfigModule` proxy behaviour."""
class TestParseLoraFreqEnv:
    """Tests for :func:`config._parse_lora_freq_env`."""

    def test_connection_and_port_stay_in_sync(self):
        """Setting CONNECTION also updates PORT and vice versa."""
        original_connection = config.CONNECTION
        original_port = config.PORT
        try:
            config.CONNECTION = "tcp://testhost"
            assert config.PORT == "tcp://testhost"
            config.PORT = "serial:/dev/ttyUSB0"
            assert config.CONNECTION == "serial:/dev/ttyUSB0"
        finally:
            config.CONNECTION = original_connection
            config.PORT = original_port
    def test_none_returns_none(self):
        """None input returns None."""
        assert config._parse_lora_freq_env(None) is None

    def test_empty_string_returns_none(self):
        """Empty string returns None."""
        assert config._parse_lora_freq_env("") is None

    def test_whitespace_only_returns_none(self):
        """Whitespace-only string returns None."""
        assert config._parse_lora_freq_env(" ") is None

    def test_integer_string_returns_int(self):
        """Whole-number string returns int."""
        result = config._parse_lora_freq_env("868")
        assert result == 868
        assert isinstance(result, int)

    def test_float_integer_value_returns_int(self):
        """String like '915.0' (whole float) returns int 915."""
        result = config._parse_lora_freq_env("915.0")
        assert result == 915
        assert isinstance(result, int)

    def test_decimal_string_returns_float(self):
        """Decimal string returns float."""
        result = config._parse_lora_freq_env("869.525")
        assert result == pytest.approx(869.525)
        assert isinstance(result, float)

    def test_non_numeric_label_returns_none(self):
        """Non-numeric string returns None so auto-detection is not blocked."""
        assert config._parse_lora_freq_env("EU_868") is None

    def test_unit_suffixed_string_returns_none(self):
        """String like '915MHz' returns None (not numeric)."""
        assert config._parse_lora_freq_env("915MHz") is None

    def test_inf_returns_none(self):
        """'inf' is non-finite and returns None."""
        assert config._parse_lora_freq_env("inf") is None

    def test_large_exponent_returns_none(self):
        """'1e309' overflows to inf and returns None."""
        assert config._parse_lora_freq_env("1e309") is None

    def test_nan_returns_none(self):
        """'nan' is non-finite and returns None."""
        assert config._parse_lora_freq_env("nan") is None

    def test_whitespace_stripped(self):
        """Leading/trailing whitespace is ignored."""
        assert config._parse_lora_freq_env(" 919 ") == 919

    def test_frequency_env_preseeds_lora_freq(self, monkeypatch):
        """FREQUENCY env var pre-seeds LORA_FREQ at module load."""
        import importlib

        monkeypatch.setenv("FREQUENCY", "915")
        importlib.reload(config)
        assert config.LORA_FREQ == 915
        # Restore
        monkeypatch.delenv("FREQUENCY")
        importlib.reload(config)

    def test_no_frequency_env_leaves_lora_freq_none(self, monkeypatch):
        """Absent FREQUENCY env var leaves LORA_FREQ as None."""
        import importlib

        monkeypatch.delenv("FREQUENCY", raising=False)
        importlib.reload(config)
        assert config.LORA_FREQ is None
+374
-17
@@ -261,6 +261,8 @@ def _configure_common_defaults(
):
    """Set fast configuration defaults shared by daemon integration tests."""

    monkeypatch.setattr(daemon.config, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(daemon.config, "INSTANCE", "http://test")
    monkeypatch.setattr(daemon.config, "SNAPSHOT_SECS", 0)
    monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
    monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
@@ -828,7 +830,7 @@ def test_loop_iteration_full_pass_returns_false(monkeypatch):


# ---------------------------------------------------------------------------
# PROVIDER env-var selection
# PROTOCOL env-var selection
# ---------------------------------------------------------------------------


@@ -891,12 +893,12 @@ def _reload_config() -> types.ModuleType:


@pytest.fixture()
def reset_provider_config():
    """Reload config after the test so PROVIDER changes don't leak across tests."""
def reset_protocol_config():
    """Reload config after the test so PROTOCOL changes don't leak across tests."""
    yield
    import os

    os.environ.pop("PROVIDER", None)
    os.environ.pop("PROTOCOL", None)
    _reload_config()


@@ -907,33 +909,34 @@ def reset_provider_config():
        ("meshcore", "meshcore"),
    ],
)
def test_config_provider_env(monkeypatch, reset_provider_config, env_value, expected):
    """PROVIDER env var selects the provider; absent defaults to 'meshtastic'."""
def test_config_protocol_env(monkeypatch, reset_protocol_config, env_value, expected):
    """PROTOCOL env var selects the protocol; absent defaults to 'meshtastic'."""
    if env_value is None:
        monkeypatch.delenv("PROVIDER", raising=False)
        monkeypatch.delenv("PROTOCOL", raising=False)
    else:
        monkeypatch.setenv("PROVIDER", env_value)
    assert _reload_config().PROVIDER == expected
        monkeypatch.setenv("PROTOCOL", env_value)
    cfg = _reload_config()
    assert cfg.PROTOCOL == expected


def test_config_provider_unknown_raises(monkeypatch, reset_provider_config):
    """An unrecognised PROVIDER value must raise ValueError at import time."""
    monkeypatch.setenv("PROVIDER", "reticulum")
    with pytest.raises(ValueError, match="PROVIDER"):
def test_config_protocol_unknown_raises(monkeypatch, reset_protocol_config):
    """An unrecognised PROTOCOL value must raise ValueError at import time."""
    monkeypatch.setenv("PROTOCOL", "reticulum")
    with pytest.raises(ValueError, match="PROTOCOL"):
        _reload_config()


@pytest.mark.parametrize(
    "provider_name, module_path, class_name",
    [
        ("meshtastic", "data.mesh_ingestor.providers.meshtastic", "MeshtasticProvider"),
        ("meshcore", "data.mesh_ingestor.providers.meshcore", "MeshcoreProvider"),
        ("meshtastic", "data.mesh_ingestor.protocols.meshtastic", "MeshtasticProvider"),
        ("meshcore", "data.mesh_ingestor.protocols.meshcore", "MeshcoreProvider"),
    ],
)
def test_daemon_main_selects_provider(
    monkeypatch, provider_name, module_path, class_name
):
    """main() must instantiate the correct provider class based on PROVIDER."""
    """main() must instantiate the correct protocol class based on PROTOCOL."""
    mod = importlib.import_module(module_path)
    instantiated = []

@@ -943,7 +946,7 @@ def test_daemon_main_selects_provider(
        return p

    _patch_daemon_for_fast_exit(monkeypatch)
    monkeypatch.setattr(daemon.config, "PROVIDER", provider_name)
    monkeypatch.setattr(daemon.config, "PROTOCOL", provider_name)
    monkeypatch.setattr(mod, class_name, make_provider)

    daemon.main()
@@ -1086,3 +1089,357 @@ def test_check_inactivity_reconnect_elapsed_triggers(monkeypatch):
    # latest_activity = iface_connected_at(0.0); elapsed = 100s > 30s → trigger
    result = daemon._check_inactivity_reconnect(state)
    assert result is True


def test_inactivity_reconnect_bypasses_throttle_when_explicitly_disconnected(
    monkeypatch,
):
    """Explicit disconnect reconnects even when last_inactivity_reconnect is recent.

    When isConnected reports False the daemon must not wait the full
    inactivity window before reconnecting. It uses the shorter
    _RECONNECT_MAX_DELAY_SECS window instead.
    """
    state = _make_state(inactivity_reconnect_secs=3600.0)
    state.iface = DummyInterface(is_connected=False)
    state.iface_connected_at = 0.0
    # 61 seconds since last reconnect attempt — outside the 60 s anti-thrash window.
    state.last_inactivity_reconnect = 3589.0

    monkeypatch.setattr(daemon.time, "monotonic", lambda: 3650.0)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
    monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 60.0)
    monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
    monkeypatch.setattr(daemon, "_close_interface", lambda iface: None)

    result = daemon._check_inactivity_reconnect(state)
    assert (
        result is True
    ), "Expected reconnect to fire when explicitly disconnected and 61s have elapsed"


def test_inactivity_reconnect_still_throttles_inactivity(monkeypatch):
    """The full inactivity window still throttles reconnects that are not explicit disconnects."""
    state = _make_state(inactivity_reconnect_secs=3600.0)
    # isConnected=True → inactivity-only trigger (no explicit disconnect signal)
    state.iface = DummyInterface(is_connected=True)
    state.iface_connected_at = 0.0
    # now=3700, last_inactivity_reconnect=3691 → 9 s elapsed, well within 3600 s window.
    state.last_inactivity_reconnect = 3691.0

    monkeypatch.setattr(daemon.time, "monotonic", lambda: 3700.0)
    # No recent packet → inactivity_elapsed = 3700 s > inactivity_reconnect_secs (3600 s)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
    monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 60.0)

    # Even though enough inactive time has passed, last_inactivity_reconnect is
    # only 9 s ago (< 3600 s throttle window) → reconnect is suppressed.
    result = daemon._check_inactivity_reconnect(state)
    assert (
        result is False
    ), "Expected throttle to suppress reconnect when last attempt was 9 s ago"


def test_inactivity_reconnect_logs_queue_depth(monkeypatch):
    """The inactivity reconnect debug log includes the current queue depth."""
    state = _make_state(inactivity_reconnect_secs=30.0)
    state.iface = DummyInterface(is_connected=True)
    state.iface_connected_at = 0.0
    state.last_inactivity_reconnect = None

    monkeypatch.setattr(daemon.time, "monotonic", lambda: 100.0)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
    monkeypatch.setattr(daemon, "_close_interface", lambda iface: None)

    # Seed the global queue with two dummy items so queue_depth is non-zero.
    from data.mesh_ingestor.queue import STATE, _enqueue_post_json

    _enqueue_post_json("/api/a", {}, 10, state=STATE)
    _enqueue_post_json("/api/b", {}, 20, state=STATE)

    log_kwargs: list[dict] = []
    monkeypatch.setattr(
        daemon.config,
        "_debug_log",
        lambda msg, **kw: log_kwargs.append(kw),
    )

    try:
        result = daemon._check_inactivity_reconnect(state)
        assert result is True
        assert any(
            kw.get("queue_depth") == 2 for kw in log_kwargs
        ), f"Expected queue_depth=2 in log kwargs, got {log_kwargs}"
    finally:
        # Clean up global state so other tests are not affected.
        STATE.queue.clear()


def test_main_exits_early_when_no_instances(monkeypatch):
    """main() returns immediately when no INSTANCE_DOMAIN is configured.

    The queue drainer must NOT be started on the early-exit path.
    """
    monkeypatch.setattr(daemon.config, "INSTANCES", ())
    monkeypatch.setattr(daemon.config, "INSTANCE", "")
    log_msgs: list[str] = []
    monkeypatch.setattr(
        daemon.config,
        "_debug_log",
        lambda msg, **kw: log_msgs.append(msg),
    )
    drainer_calls: list[object] = []
    monkeypatch.setattr(
        daemon.queue,
        "_start_queue_drainer",
        lambda state=None: drainer_calls.append(state),
    )

    provider = _make_minimal_fake_provider("meshtastic")
    daemon.main(provider=provider)

    assert any("no instance_domain" in m.lower() for m in log_msgs)
    assert drainer_calls == [], "Drainer must not start when no instances configured"


def test_main_starts_queue_drainer(monkeypatch):
    """main() calls queue._start_queue_drainer after subscribing."""
    drainer_calls: list[object] = []
    monkeypatch.setattr(
        daemon.queue,
        "_start_queue_drainer",
        lambda state=None: drainer_calls.append(state),
    )

    _patch_daemon_for_fast_exit(monkeypatch)
    provider = _make_minimal_fake_provider("meshtastic")
    daemon.main(provider=provider)

    assert len(drainer_calls) == 1
# ---------------------------------------------------------------------------
# _try_send_self_node
# ---------------------------------------------------------------------------


def test_try_send_self_node_skips_when_no_method():
    """_try_send_self_node does nothing when provider has no self_node_item."""

    class _NoSelfNode:
        pass

    state = _make_state()
    state.provider = _NoSelfNode()  # type: ignore[assignment]
    state.iface = DummyInterface()
    # Should not raise; last_self_node_report stays None.
    daemon._try_send_self_node(state)
    assert state.last_self_node_report is None


def test_try_send_self_node_skips_when_item_is_none(monkeypatch):
    """_try_send_self_node does nothing when self_node_item returns None."""

    class _NullSelfNode:
        def self_node_item(self, iface):
            return None

    upserted = []
    monkeypatch.setattr(
        daemon.handlers, "upsert_node", lambda nid, n: upserted.append(nid)
    )

    state = _make_state()
    state.provider = _NullSelfNode()  # type: ignore[assignment]
    state.iface = DummyInterface()
    daemon._try_send_self_node(state)

    assert upserted == []
    assert state.last_self_node_report is None


def test_try_send_self_node_calls_upsert_and_sets_timestamp(monkeypatch):
    """_try_send_self_node upserts the self-node and records the timestamp."""

    class _GoodSelfNode:
        def self_node_item(self, iface):
            return "!aabbccdd", {"user": {"longName": "Host"}}

    upserted = []
    monkeypatch.setattr(
        daemon.handlers, "upsert_node", lambda nid, n: upserted.append(nid)
    )
    monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
    fixed_time = 5000.0
    monkeypatch.setattr(daemon.time, "monotonic", lambda: fixed_time)

    state = _make_state()
    state.provider = _GoodSelfNode()  # type: ignore[assignment]
    state.iface = DummyInterface()
    daemon._try_send_self_node(state)

    assert upserted == ["!aabbccdd"]
    assert state.last_self_node_report == fixed_time


def test_try_send_self_node_upsert_error_suppressed(monkeypatch):
    """_try_send_self_node suppresses upsert errors and does not update timestamp."""

    class _GoodSelfNode:
        def self_node_item(self, iface):
            return "!aabbccdd", {}

    def _raise(*_a, **_k):
        raise RuntimeError("network error")

    monkeypatch.setattr(daemon.handlers, "upsert_node", _raise)
    logged = []
    monkeypatch.setattr(daemon.config, "_debug_log", lambda *a, **kw: logged.append(kw))

    state = _make_state()
    state.provider = _GoodSelfNode()  # type: ignore[assignment]
    state.iface = DummyInterface()
    # Must not raise.
    daemon._try_send_self_node(state)

    assert state.last_self_node_report is None
    assert any(c.get("context") == "daemon.self_node" for c in logged)


def test_try_send_self_node_self_node_item_error_suppressed(monkeypatch):
    """_try_send_self_node suppresses errors raised by self_node_item itself."""

    class _BrokenSelfNode:
        def self_node_item(self, iface):
            raise RuntimeError("provider error")

    logged = []
    monkeypatch.setattr(daemon.config, "_debug_log", lambda *a, **kw: logged.append(kw))

    state = _make_state()
    state.provider = _BrokenSelfNode()  # type: ignore[assignment]
    state.iface = DummyInterface()
    # Must not raise.
    daemon._try_send_self_node(state)

    assert state.last_self_node_report is None
    assert any(c.get("context") == "daemon.self_node" for c in logged)


# ---------------------------------------------------------------------------
# _loop_iteration — periodic self-node report
# ---------------------------------------------------------------------------


def _make_self_node_provider(node_item=("!aabbccdd", {"user": {}})):
    """Return a minimal provider stub that exposes ``self_node_item``."""

    class _SelfNodeProvider:
        name = "test"

        def subscribe(self):
            return []

        def node_snapshot_items(self, iface):
            return []

        def self_node_item(self, iface):
            return node_item

    return _SelfNodeProvider()


def _patch_loop_iteration_common(monkeypatch, *, now=100.0):
    """Apply monkeypatches shared by all _loop_iteration self-node tests."""
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
    monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
    monkeypatch.setattr(daemon.config, "_SELF_NODE_REPORT_INTERVAL_SECS", 3600.0)
    monkeypatch.setattr(daemon.time, "monotonic", lambda: now)
    monkeypatch.setattr(
        daemon,
        "_process_ingestor_heartbeat",
        lambda iface, **kw: kw.get("ingestor_announcement_sent", False),
    )


def test_loop_iteration_triggers_self_node_report_immediately_after_snapshot(
    monkeypatch,
):
    """Self-node report fires on the first iteration after the initial snapshot."""
    upserted = []
    monkeypatch.setattr(
        daemon.handlers, "upsert_node", lambda nid, n: upserted.append(nid)
    )
    _patch_loop_iteration_common(monkeypatch)

    state = _make_state()
    state.iface = DummyInterface()
    state.provider = _make_self_node_provider()  # type: ignore[assignment]
    state.initial_snapshot_sent = True
    state.last_self_node_report = None  # never reported before

    daemon._loop_iteration(state)

    assert "!aabbccdd" in upserted


def test_loop_iteration_self_node_not_triggered_before_snapshot(monkeypatch):
    """Self-node report is NOT triggered before the initial snapshot is sent."""
    upserted = []
    monkeypatch.setattr(
        daemon.handlers, "upsert_node", lambda nid, n: upserted.append(nid)
    )
    _patch_loop_iteration_common(monkeypatch)

    state = _make_state()
    state.iface = DummyInterface()
    state.provider = _make_self_node_provider()  # type: ignore[assignment]
    state.initial_snapshot_sent = False  # snapshot not yet sent

    # _loop_iteration will attempt _try_connect because iface is set but
    # initial_snapshot_sent is False — prevent real connect by patching snapshot
    monkeypatch.setattr(daemon, "_try_send_snapshot", lambda s: True)

    daemon._loop_iteration(state)

    assert "!aabbccdd" not in upserted


def test_loop_iteration_self_node_not_retried_within_interval(monkeypatch):
    """Self-node report is NOT re-fired within the throttle interval."""
    upserted = []
    monkeypatch.setattr(
        daemon.handlers, "upsert_node", lambda nid, n: upserted.append(nid)
    )
    _patch_loop_iteration_common(monkeypatch, now=100.0)

    state = _make_state()
    state.iface = DummyInterface()
    state.provider = _make_self_node_provider()  # type: ignore[assignment]
    state.initial_snapshot_sent = True
    # Simulate a recent report: 100 - 50 = 50 seconds ago < 3600 interval
    state.last_self_node_report = 50.0

    daemon._loop_iteration(state)

    assert "!aabbccdd" not in upserted


def test_loop_iteration_self_node_retried_after_interval(monkeypatch):
    """Self-node report fires again after the full interval has elapsed."""
    upserted = []
    monkeypatch.setattr(
        daemon.handlers, "upsert_node", lambda nid, n: upserted.append(nid)
    )
    # now=5000; last_report=1000; elapsed=4000 > 3600 → should fire
    _patch_loop_iteration_common(monkeypatch, now=5000.0)

    state = _make_state()
    state.iface = DummyInterface()
    state.provider = _make_self_node_provider()  # type: ignore[assignment]
    state.initial_snapshot_sent = True
    state.last_self_node_report = 1000.0  # 4000 seconds ago

    daemon._loop_iteration(state)

    assert "!aabbccdd" in upserted
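The two throttle windows exercised by the reconnect tests (the full inactivity window, and the shorter anti-thrash window after an explicit disconnect) reduce to a small decision function. This sketch assumes the daemon separates the decision from the side effects; all names here are illustrative, not the daemon's real code:

```python
# Illustrative decision function for the reconnect throttle behaviour
# tested above; an assumption about structure, not project code.
from typing import Optional


def should_reconnect(
    now: float,
    explicitly_disconnected: bool,
    inactivity_elapsed: float,
    inactivity_window: float,
    last_reconnect: Optional[float],
    short_window: float,
) -> bool:
    """Decide whether an inactivity/disconnect reconnect should fire.

    An explicit disconnect only waits out the short anti-thrash window;
    plain inactivity must clear both the inactivity window and the full
    throttle interval since the previous reconnect attempt.
    """
    since_last = None if last_reconnect is None else now - last_reconnect
    if explicitly_disconnected:
        # isConnected=False: only the short window throttles.
        return since_last is None or since_last > short_window
    if inactivity_elapsed <= inactivity_window:
        return False
    # Inactivity-only trigger: throttled by the full window.
    return since_last is None or since_last > inactivity_window


# Mirrors the tests above:
# explicit disconnect, 61 s since last attempt, 60 s short window → fires
# inactivity only, 9 s since last attempt, 3600 s window → suppressed
```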
@@ -39,10 +39,12 @@ def reset_handler_state():
    """Reset global handler state between tests."""
    _state_mod._host_node_id = None
    _state_mod._host_telemetry_last_rx = None
    _state_mod._host_nodeinfo_last_seen = None
    _state_mod._last_packet_monotonic = None
    yield
    _state_mod._host_node_id = None
    _state_mod._host_telemetry_last_rx = None
    _state_mod._host_nodeinfo_last_seen = None
    _state_mod._last_packet_monotonic = None


@@ -75,6 +77,12 @@ class TestHostNodeId:
        handlers.register_host_node_id("!aabbccdd")
        assert _state_mod._host_telemetry_last_rx is None

    def test_register_resets_nodeinfo_window(self):
        """Registering a new host ID resets the NODEINFO suppression window."""
        _state_mod._host_nodeinfo_last_seen = 12345.0
        handlers.register_host_node_id("!aabbccdd")
        assert _state_mod._host_nodeinfo_last_seen is None

    def test_register_canonicalises_numeric(self):
        """Numeric node ID is converted to !xxxxxxxx form."""
        handlers.register_host_node_id(0xAABBCCDD)
@@ -100,6 +108,13 @@ class TestLastPacketMonotonic:
        assert ts is not None
        assert isinstance(ts, float)

    def test_mark_packet_seen_exported_from_handlers(self):
        """handlers._mark_packet_seen must be accessible via the package."""
        assert callable(handlers._mark_packet_seen)
        handlers._mark_packet_seen()
        ts = handlers.last_packet_monotonic()
        assert ts is not None


# ---------------------------------------------------------------------------
# _state: _host_telemetry_suppressed
@@ -145,6 +160,51 @@ class TestHostTelemetrySuppressed:
        assert mins == 1


# ---------------------------------------------------------------------------
# _state: _host_nodeinfo_suppressed / _mark_host_nodeinfo_seen
# ---------------------------------------------------------------------------


class TestHostNodeinfoSuppressed:
    """Tests for host NODEINFO suppression logic."""

    def test_not_suppressed_when_no_previous(self):
        """Not suppressed when no previous NODEINFO timestamp is set."""
        assert _state_mod._host_nodeinfo_suppressed(time.monotonic()) is False

    def test_suppressed_within_interval(self):
        """Suppressed when within the suppression window."""
        now = time.monotonic()
        _state_mod._host_nodeinfo_last_seen = now - 10.0  # 10 seconds ago
        assert _state_mod._host_nodeinfo_suppressed(now) is True

    def test_not_suppressed_after_interval(self):
        """Not suppressed after the full interval has elapsed."""
|
||||
now = time.monotonic()
|
||||
_state_mod._host_nodeinfo_last_seen = (
|
||||
now - _state_mod._HOST_NODEINFO_INTERVAL_SECS - 1.0
|
||||
)
|
||||
assert _state_mod._host_nodeinfo_suppressed(now) is False
|
||||
|
||||
def test_mark_updates_timestamp(self):
|
||||
"""_mark_host_nodeinfo_seen stores the provided timestamp."""
|
||||
now = time.monotonic()
|
||||
_state_mod._mark_host_nodeinfo_seen(now)
|
||||
assert _state_mod._host_nodeinfo_last_seen == now
|
||||
|
||||
def test_suppressed_after_mark(self):
|
||||
"""Immediately after marking, a second call is suppressed."""
|
||||
now = time.monotonic()
|
||||
_state_mod._mark_host_nodeinfo_seen(now)
|
||||
assert _state_mod._host_nodeinfo_suppressed(now + 1.0) is True
|
||||
|
||||
def test_not_suppressed_after_mark_and_full_interval(self):
|
||||
"""After a full interval has elapsed, suppression lifts."""
|
||||
long_ago = time.monotonic() - _state_mod._HOST_NODEINFO_INTERVAL_SECS - 5.0
|
||||
_state_mod._mark_host_nodeinfo_seen(long_ago)
|
||||
assert _state_mod._host_nodeinfo_suppressed(time.monotonic()) is False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
# radio: _radio_metadata_fields / _apply_radio_metadata
# ---------------------------------------------------------------------------
@@ -657,6 +717,91 @@ class TestStoreNodeinfoPacket:
            q._queue_post_json = original
        assert sent == []

    def test_host_nodeinfo_not_suppressed_on_first_call(self):
        """First NODEINFO from the host node is always forwarded."""
        import data.mesh_ingestor.queue as q

        handlers.register_host_node_id("!aabbccdd")
        sent = []
        original = q._queue_post_json
        q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(path)
        try:
            handlers.store_nodeinfo_packet(
                {"id": 1, "rxTime": 100, "fromId": "!aabbccdd"},
                {"user": {"id": "!aabbccdd", "shortName": "AB", "longName": "Alpha"}},
            )
        finally:
            q._queue_post_json = original
        assert "/api/nodes" in sent

    def test_host_nodeinfo_suppressed_within_window(self):
        """Second NODEINFO from the host within the throttle window is dropped."""
        import data.mesh_ingestor.queue as q

        handlers.register_host_node_id("!aabbccdd")
        # Simulate a recent upsert so the window is active.
        _state_mod._mark_host_nodeinfo_seen(time.monotonic())

        sent = []
        original = q._queue_post_json
        q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(path)
        try:
            handlers.store_nodeinfo_packet(
                {"id": 2, "rxTime": 200, "fromId": "!aabbccdd"},
                {"user": {"id": "!aabbccdd", "shortName": "AB", "longName": "Alpha"}},
            )
        finally:
            q._queue_post_json = original
        assert sent == []

    def test_host_nodeinfo_allowed_after_window_expires(self):
        """NODEINFO from the host is forwarded after the throttle window expires."""
        import data.mesh_ingestor.queue as q

        handlers.register_host_node_id("!aabbccdd")
        # Place last-seen far in the past so the window has expired.
        _state_mod._host_nodeinfo_last_seen = (
            time.monotonic() - _state_mod._HOST_NODEINFO_INTERVAL_SECS - 10.0
        )

        sent = []
        original = q._queue_post_json
        q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(path)
        try:
            handlers.store_nodeinfo_packet(
                {"id": 3, "rxTime": 300, "fromId": "!aabbccdd"},
                {"user": {"id": "!aabbccdd", "shortName": "AB", "longName": "Alpha"}},
            )
        finally:
            q._queue_post_json = original
        assert "/api/nodes" in sent

    def test_non_host_nodeinfo_never_suppressed(self):
        """NODEINFO from a non-host node is never throttled."""
        import data.mesh_ingestor.queue as q

        handlers.register_host_node_id("!aabbccdd")
        # Mark the host as recently seen to activate the throttle.
        _state_mod._mark_host_nodeinfo_seen(time.monotonic())

        sent = []
        original = q._queue_post_json
        q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(path)
        try:
            handlers.store_nodeinfo_packet(
                {"id": 4, "rxTime": 400, "fromId": "!11223344"},
                {
                    "user": {
                        "id": "!11223344",
                        "shortName": "CD",
                        "longName": "Charlie Delta",
                    }
                },
            )
        finally:
            q._queue_post_json = original
        assert "/api/nodes" in sent


# ---------------------------------------------------------------------------
# store_neighborinfo_packet

+1
-11
@@ -41,16 +41,6 @@ def reset_state(monkeypatch):
    importlib.reload(config)


def test_config_module_port_aliases(monkeypatch):
    """Ensure the config module keeps CONNECTION and PORT in sync."""

    reloaded = importlib.reload(config)
    monkeypatch.setattr(reloaded, "CONNECTION", "dev-tty", raising=False)
    reloaded.PORT = "new-port"
    assert reloaded.CONNECTION == "new-port"
    assert reloaded.PORT == "new-port"


def test_queue_stringification_and_ordering():
    """Exercise queue payload formatting and priority ordering."""

@@ -237,7 +227,7 @@ def test_region_frequency_and_resolution_helpers():
    assert freq == "915MHz"

    freq = interfaces._region_frequency(LoraMessage(2))
    assert freq == "US"
    assert freq == 902  # "US" is in the region lookup table → base 902 MHz

    class StringRegionMessage:
        def __init__(self, region):

@@ -267,6 +267,72 @@ class TestEnumNameFromField:
        assert ifaces._enum_name_from_field(msg, "region", 3) == "US_915"


# ---------------------------------------------------------------------------
# _computed_channel_frequency
# ---------------------------------------------------------------------------


class TestComputedChannelFrequency:
    """Tests for :func:`interfaces._computed_channel_frequency`."""

    def test_none_enum_name_returns_none(self):
        """None enum_name returns None."""
        assert ifaces._computed_channel_frequency(None, 0) is None

    def test_unknown_region_returns_none(self):
        """Enum name not in lookup table returns None."""
        assert ifaces._computed_channel_frequency("UNKNOWN_REGION", 0) is None

    def test_us_channel_0_base_frequency(self):
        """US region, channel 0, returns floor(902.0 + 0*0.25) = 902."""
        assert ifaces._computed_channel_frequency("US", 0) == 902

    def test_us_channel_52_mid_band(self):
        """US region, channel 52, returns floor(902.0 + 52*0.25) = 915."""
        assert ifaces._computed_channel_frequency("US", 52) == 915

    def test_eu_868_channel_0_returns_869(self):
        """EU_868 region, channel 0, returns floor(869.525) = 869, not 868."""
        assert ifaces._computed_channel_frequency("EU_868", 0) == 869

    def test_eu_868_channel_1_returns_870(self):
        """EU_868 region, channel 1, returns floor(869.525 + 0.5) = 870."""
        assert ifaces._computed_channel_frequency("EU_868", 1) == 870

    def test_my_919_channel_0(self):
        """MY_919 region, channel 0, returns floor(919.0) = 919."""
        assert ifaces._computed_channel_frequency("MY_919", 0) == 919

    def test_lora_24_channel_0(self):
        """LORA_24 region, channel 0, returns floor(2400.0) = 2400."""
        assert ifaces._computed_channel_frequency("LORA_24", 0) == 2400

    def test_none_channel_num_defaults_to_zero(self):
        """None channel_num is treated as 0, returning the base frequency."""
        assert ifaces._computed_channel_frequency("ANZ", None) == 916

    def test_negative_channel_num_clamped_to_zero(self):
        """Negative channel_num is clamped to 0, returning the base frequency."""
        assert ifaces._computed_channel_frequency("ANZ", -1) == 916

    def test_result_is_int(self):
        """Return type is int (math.floor result), not float."""
        result = ifaces._computed_channel_frequency("EU_868", 0)
        assert isinstance(result, int)

    def test_nz_865_channel_0(self):
        """NZ_865 region, channel 0, returns floor(864.0) = 864."""
        assert ifaces._computed_channel_frequency("NZ_865", 0) == 864

    def test_br_902_channel_4_spacing_0_25(self):
        """BR_902 region, channel 4, returns floor(902.0 + 4*0.25) = 903."""
        assert ifaces._computed_channel_frequency("BR_902", 4) == 903

    def test_kz_863_channel_0(self):
        """KZ_863 region, channel 0, returns floor(863.125) = 863."""
        assert ifaces._computed_channel_frequency("KZ_863", 0) == 863

# ---------------------------------------------------------------------------
# _region_frequency
# ---------------------------------------------------------------------------
@@ -323,6 +389,65 @@ class TestRegionFrequency:
        msg = SimpleNamespace(DESCRIPTOR=None, override_frequency=None, region="EU433")
        assert ifaces._region_frequency(msg) == "EU433"

    def test_us_enum_lookup_table_used(self):
        """US region with channel_num=0 returns 902 from lookup table, not None."""
        enum_val = SimpleNamespace(name="US")
        enum_type = SimpleNamespace(values_by_number={1: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        msg = SimpleNamespace(
            DESCRIPTOR=desc, override_frequency=None, region=1, channel_num=0
        )
        assert ifaces._region_frequency(msg) == 902

    def test_eu_868_returns_869_not_868(self):
        """EU_868 region returns 869 from lookup table, not 868 parsed from name."""
        enum_val = SimpleNamespace(name="EU_868")
        enum_type = SimpleNamespace(values_by_number={3: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        msg = SimpleNamespace(
            DESCRIPTOR=desc, override_frequency=None, region=3, channel_num=0
        )
        assert ifaces._region_frequency(msg) == 869

    def test_unrecognised_int_falls_through(self):
        """Raw int region with no DESCRIPTOR and value < 100 returns None."""
        msg = SimpleNamespace(DESCRIPTOR=None, override_frequency=None, region=99)
        assert ifaces._region_frequency(msg) is None

    def test_missing_channel_num_attr_uses_base(self):
        """Region in lookup table with no channel_num attribute returns base freq."""
        enum_val = SimpleNamespace(name="MY_919")
        enum_type = SimpleNamespace(values_by_number={17: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        # deliberately no channel_num attribute
        msg = SimpleNamespace(DESCRIPTOR=desc, override_frequency=None, region=17)
        assert ifaces._region_frequency(msg) == 919

    def test_override_takes_priority_over_lookup_table(self):
        """override_frequency takes priority over the lookup table."""
        enum_val = SimpleNamespace(name="EU_868")
        enum_type = SimpleNamespace(values_by_number={3: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        msg = SimpleNamespace(
            DESCRIPTOR=desc, override_frequency=867.3, region=3, channel_num=0
        )
        assert ifaces._region_frequency(msg) == 867

    def test_unknown_enum_name_falls_to_digit_parse(self):
        """Enum name not in lookup table falls through to digit parsing."""
        enum_val = SimpleNamespace(name="FUTURE_999")
        enum_type = SimpleNamespace(values_by_number={99: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        msg = SimpleNamespace(
            DESCRIPTOR=desc, override_frequency=None, region=99, channel_num=0
        )
        assert ifaces._region_frequency(msg) == 999


# ---------------------------------------------------------------------------
# _camelcase_enum_name

+27
-34
@@ -228,13 +228,14 @@ def mesh_module(monkeypatch):


def test_instance_domain_prefers_primary_env(mesh_module, monkeypatch):
    """Ensure the ingestor prefers ``INSTANCE_DOMAIN`` over the legacy variable."""
    """Ensure the ingestor reads ``INSTANCE_DOMAIN``."""

    monkeypatch.setenv("INSTANCE_DOMAIN", "https://new.example")
    monkeypatch.setenv("POTATOMESH_INSTANCE", "https://legacy.example")

    try:
        refreshed_instances = mesh_module.config._resolve_instance_domains()
        refreshed_instance = mesh_module.config._resolve_instance_domain()
        mesh_module.config.INSTANCES = refreshed_instances
        mesh_module.config.INSTANCE = refreshed_instance
        mesh_module.INSTANCE = refreshed_instance

@@ -242,26 +243,7 @@ def test_instance_domain_prefers_primary_env(mesh_module, monkeypatch):
        assert mesh_module.INSTANCE == "https://new.example"
    finally:
        monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
        monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
        mesh_module.config.INSTANCE = mesh_module.config._resolve_instance_domain()
        mesh_module.INSTANCE = mesh_module.config.INSTANCE


def test_instance_domain_falls_back_to_legacy(mesh_module, monkeypatch):
    """Verify ``POTATOMESH_INSTANCE`` is used when ``INSTANCE_DOMAIN`` is unset."""

    monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
    monkeypatch.setenv("POTATOMESH_INSTANCE", "https://legacy-only.example")

    try:
        refreshed_instance = mesh_module.config._resolve_instance_domain()
        mesh_module.config.INSTANCE = refreshed_instance
        mesh_module.INSTANCE = refreshed_instance

        assert refreshed_instance == "https://legacy-only.example"
        assert mesh_module.INSTANCE == "https://legacy-only.example"
    finally:
        monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
        mesh_module.config.INSTANCES = mesh_module.config._resolve_instance_domains()
        mesh_module.config.INSTANCE = mesh_module.config._resolve_instance_domain()
        mesh_module.INSTANCE = mesh_module.config.INSTANCE

@@ -270,10 +252,11 @@ def test_instance_domain_infers_scheme_for_hostnames(mesh_module, monkeypatch):
    """Ensure bare hostnames are promoted to HTTPS URLs for ingestion."""

    monkeypatch.setenv("INSTANCE_DOMAIN", "mesh.example.org")
    monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)

    try:
        refreshed_instances = mesh_module.config._resolve_instance_domains()
        refreshed_instance = mesh_module.config._resolve_instance_domain()
        mesh_module.config.INSTANCES = refreshed_instances
        mesh_module.config.INSTANCE = refreshed_instance
        mesh_module.INSTANCE = refreshed_instance

@@ -281,6 +264,7 @@ def test_instance_domain_infers_scheme_for_hostnames(mesh_module, monkeypatch):
        assert mesh_module.INSTANCE == "https://mesh.example.org"
    finally:
        monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
        mesh_module.config.INSTANCES = mesh_module.config._resolve_instance_domains()
        mesh_module.config.INSTANCE = mesh_module.config._resolve_instance_domain()
        mesh_module.INSTANCE = mesh_module.config.INSTANCE

@@ -605,10 +589,10 @@ def test_ensure_radio_metadata_extracts_config(mesh_module, capsys):
    first_log = capsys.readouterr().out

    assert iface.wait_calls == 1
    assert mesh.config.LORA_FREQ == 868
    assert mesh.config.LORA_FREQ == 869
    assert mesh.config.MODEM_PRESET == "MediumFast"
    assert "Captured LoRa radio metadata" in first_log
    assert "lora_freq=868" in first_log
    assert "lora_freq=869" in first_log
    assert "modem_preset='MediumFast'" in first_log

    secondary_lora = make_lora(7, "US_915", 2, "LONG_FAST", preset_field="preset")
@@ -618,7 +602,7 @@ def test_ensure_radio_metadata_extracts_config(mesh_module, capsys):
    second_log = capsys.readouterr().out

    assert second_iface.wait_calls == 1
    assert mesh.config.LORA_FREQ == 868
    assert mesh.config.LORA_FREQ == 869
    assert mesh.config.MODEM_PRESET == "MediumFast"
    assert second_log == ""

@@ -1637,7 +1621,9 @@ def test_main_retries_interface_creation(mesh_module, monkeypatch):
            raise RuntimeError("boom")
        return iface, port

    monkeypatch.setattr(mesh, "PORT", "/dev/ttyTEST")
    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
    monkeypatch.setattr(mesh, "CONNECTION", "/dev/ttyTEST")
    monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
    monkeypatch.setattr(mesh.threading, "Event", DummyEvent)
    monkeypatch.setattr(mesh.signal, "signal", lambda *_, **__: None)
@@ -1709,7 +1695,9 @@ def test_main_reconnects_when_connection_event_clears(mesh_module, monkeypatch):
            self._flag = True
            return True

    monkeypatch.setattr(mesh, "PORT", "/dev/ttyTEST")
    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
    monkeypatch.setattr(mesh, "CONNECTION", "/dev/ttyTEST")
    monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
    monkeypatch.setattr(mesh.threading, "Event", DummyStopEvent)
    monkeypatch.setattr(mesh.signal, "signal", lambda *_, **__: None)
@@ -1773,7 +1761,9 @@ def test_main_recreates_interface_after_snapshot_error(mesh_module, monkeypatch)
    def record_upsert(node_id, node):
        upsert_calls.append(node_id)

    monkeypatch.setattr(mesh, "PORT", "/dev/ttyTEST")
    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
    monkeypatch.setattr(mesh, "CONNECTION", "/dev/ttyTEST")
    monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
    monkeypatch.setattr(mesh, "upsert_node", record_upsert)
    monkeypatch.setattr(mesh.threading, "Event", DummyEvent)
@@ -1795,7 +1785,9 @@ def test_main_exits_when_defaults_unavailable(mesh_module, monkeypatch):
    def fail_default():
        raise mesh.NoAvailableMeshInterface("no interface available")

    monkeypatch.setattr(mesh, "PORT", None)
    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
    monkeypatch.setattr(mesh, "CONNECTION", None)
    monkeypatch.setattr(mesh, "_create_default_interface", fail_default)
    monkeypatch.setattr(mesh.signal, "signal", lambda *_, **__: None)

@@ -2718,7 +2710,8 @@ def test_traceroute_packet_without_identifiers_is_ignored(mesh_module, monkeypat
    assert captured == []


def test_post_queue_prioritises_messages(mesh_module, monkeypatch):
def test_post_queue_prioritises_nodes_over_messages(mesh_module, monkeypatch):
    """Nodes (priority 20) must be processed before messages (priority 30)."""
    mesh = mesh_module
    mesh._clear_post_queue()
    calls = []
@@ -2735,7 +2728,7 @@ def test_post_queue_prioritises_messages(mesh_module, monkeypatch):

    mesh._drain_post_queue()

    assert [path for path, _ in calls] == ["/api/messages", "/api/nodes"]
    assert [path for path, _ in calls] == ["/api/nodes", "/api/messages"]


def test_drain_post_queue_handles_enqueued_items_during_send(mesh_module):
@@ -3203,7 +3196,7 @@ def test_queue_ingestor_heartbeat_enqueues_and_throttles(mesh_module, monkeypatc


def test_queue_ingestor_heartbeat_protocol_meshcore(mesh_module, monkeypatch):
    """Heartbeat payload must carry the configured PROVIDER as its protocol."""
    """Heartbeat payload must carry the configured PROTOCOL as its protocol."""
    mesh = mesh_module
    captured = []

@@ -3215,7 +3208,7 @@ def test_queue_ingestor_heartbeat_protocol_meshcore(mesh_module, monkeypatch):

    mesh.ingestors.STATE.last_heartbeat = None
    mesh.ingestors.STATE.node_id = None
    mesh.config.PROVIDER = "meshcore"
    mesh.config.PROTOCOL = "meshcore"

    mesh.ingestors.set_ingestor_node_id("!aabbccdd")
    mesh.ingestors.queue_ingestor_heartbeat(force=True)

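The scheme inference these tests pin down (a bare hostname in `INSTANCE_DOMAIN` is promoted to an HTTPS URL, while values that already carry a scheme pass through unchanged) can be sketched as follows; `resolve_instance_domain` here is a hypothetical stand-in for the private helper in the ingestor's config module:

```python
from urllib.parse import urlparse

def resolve_instance_domain(raw):
    """Promote a bare hostname to an https:// URL; pass full URLs through."""
    raw = (raw or "").strip()
    if not raw:
        return ""
    if urlparse(raw).scheme in ("http", "https"):
        return raw  # already a full URL, keep the caller's scheme
    return f"https://{raw}"


assert resolve_instance_domain("mesh.example.org") == "https://mesh.example.org"
assert resolve_instance_domain("https://new.example") == "https://new.example"
```

Defaulting to HTTPS for bare hostnames matches the test expectation that `INSTANCE_DOMAIN="mesh.example.org"` resolves to `https://mesh.example.org`.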
+1459
-88
File diff suppressed because it is too large
+868
-37
@@ -17,6 +17,7 @@ from __future__ import annotations

import sys
import threading
import time
import urllib.error
import urllib.request
from pathlib import Path
@@ -29,16 +30,29 @@ if str(REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(REPO_ROOT))

import data.mesh_ingestor.config as config
import data.mesh_ingestor.queue as _queue_mod
from data.mesh_ingestor.queue import (
    QueueState,
    _clear_post_queue,
    _drain_post_queue,
    _enqueue_post_json,
    _MAX_SEND_RETRIES,
    _post_json,
    _QUEUE_DEPTH_WARNING_THRESHOLD,
    _queue_drainer_loop,
    _queue_post_json,
    _send_single,
    _start_queue_drainer,
    _stop_queue_drainer,
    _CHANNEL_POST_PRIORITY,
    _DEFAULT_POST_PRIORITY,
    _INGESTOR_POST_PRIORITY,
    _MESSAGE_POST_PRIORITY,
    _NEIGHBOR_POST_PRIORITY,
    _NODE_POST_PRIORITY,
    _POSITION_POST_PRIORITY,
    _TELEMETRY_POST_PRIORITY,
    _TRACE_POST_PRIORITY,
)


@@ -47,6 +61,42 @@ def _fresh_state() -> QueueState:
    return QueueState()


class _FakeResp:
    """Minimal context-manager response stub for ``urlopen`` patches."""

    def read(self):
        return b""

    def __enter__(self):
        return self

    def __exit__(self, *a):
        pass


# ---------------------------------------------------------------------------
# Priority constant ordering
# ---------------------------------------------------------------------------


def test_priority_constants_ordering():
    """Verify the intended priority hierarchy: ingestor first, telemetry last.

    Lower numeric values are dequeued first (min-heap semantics). The ordering
    must be: ingestor < channel < node < message < neighbor < trace < position
    < telemetry < default. Any regression in this order means the web backend
    may assign the wrong protocol to nodes and messages on startup.
    """
    assert _INGESTOR_POST_PRIORITY < _CHANNEL_POST_PRIORITY
    assert _CHANNEL_POST_PRIORITY < _NODE_POST_PRIORITY
    assert _NODE_POST_PRIORITY < _MESSAGE_POST_PRIORITY
    assert _MESSAGE_POST_PRIORITY < _NEIGHBOR_POST_PRIORITY
    assert _NEIGHBOR_POST_PRIORITY < _TRACE_POST_PRIORITY
    assert _TRACE_POST_PRIORITY < _POSITION_POST_PRIORITY
    assert _POSITION_POST_PRIORITY < _TELEMETRY_POST_PRIORITY
    assert _TELEMETRY_POST_PRIORITY < _DEFAULT_POST_PRIORITY

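The heap discipline behind these priorities can be sketched with the five-field tuple shape the enqueue tests later assert, `(priority, counter, path, payload, retries)`; the values 20 and 30 below mirror the node/message priorities quoted in this diff, and everything else is a simplified stand-in for the real queue module:

```python
import heapq
import itertools

# Minimal sketch of the post queue's min-heap discipline.
_counter = itertools.count()
queue = []


def enqueue(path, payload, priority):
    # The monotonically increasing counter keeps equal-priority items FIFO
    # and stops heapq from ever comparing the (unorderable) dict payloads.
    heapq.heappush(queue, (priority, next(_counter), path, payload, 0))


enqueue("/api/messages", {"text": "hi"}, 30)
enqueue("/api/nodes", {"id": "!aabbccdd"}, 20)
enqueue("/api/telemetry", {"v": 1}, 80)

order = [heapq.heappop(queue)[2] for _ in range(3)]
assert order == ["/api/nodes", "/api/messages", "/api/telemetry"]
```

Python's `heapq` is a min-heap, so the lowest numeric priority is popped first, which is exactly the "nodes before messages" ordering the renamed test above asserts.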
# ---------------------------------------------------------------------------
|
||||
# _post_json
|
||||
# ---------------------------------------------------------------------------
|
||||
@@ -56,33 +106,24 @@ class TestPostJson:
|
||||
"""Tests for :func:`queue._post_json`."""
|
||||
|
||||
def test_skips_when_no_instance(self, monkeypatch):
|
||||
"""Does nothing when INSTANCE is empty."""
|
||||
"""Does nothing when INSTANCES is empty."""
|
||||
monkeypatch.setattr(config, "INSTANCES", ())
|
||||
monkeypatch.setattr(config, "INSTANCE", "")
|
||||
sent = []
|
||||
with patch("urllib.request.urlopen") as mock_open:
|
||||
_post_json("/api/test", {"key": "val"})
|
||||
mock_open.assert_not_called()
|
||||
|
||||
def test_sends_json_post(self, monkeypatch):
|
||||
"""Sends a POST request with JSON body and correct headers."""
|
||||
monkeypatch.setattr(config, "INSTANCES", (("http://localhost", "tok"),))
|
||||
monkeypatch.setattr(config, "INSTANCE", "http://localhost")
|
||||
monkeypatch.setattr(config, "API_TOKEN", "tok")
|
||||
|
||||
captured_req = []
|
||||
|
||||
class FakeResp:
|
||||
def read(self):
|
||||
return b""
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, *a):
|
||||
pass
|
||||
|
||||
def fake_urlopen(req, timeout=None):
|
||||
captured_req.append(req)
|
||||
return FakeResp()
|
||||
return _FakeResp()
|
||||
|
||||
with patch("urllib.request.urlopen", fake_urlopen):
|
||||
_post_json("/api/nodes", {"a": 1})
|
||||
@@ -95,6 +136,7 @@ class TestPostJson:
|
||||
|
||||
def test_handles_network_error_gracefully(self, monkeypatch, capsys):
|
||||
"""Network errors are caught and logged, not raised."""
|
||||
monkeypatch.setattr(config, "INSTANCES", (("http://localhost", ""),))
|
||||
monkeypatch.setattr(config, "INSTANCE", "http://localhost")
|
||||
monkeypatch.setattr(config, "API_TOKEN", "")
|
||||
monkeypatch.setattr(config, "DEBUG", True)
|
||||
@@ -111,19 +153,9 @@ class TestPostJson:
|
||||
|
||||
captured_req = []
|
||||
|
||||
class FakeResp:
|
||||
def read(self):
|
||||
return b""
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, *a):
|
||||
pass
|
||||
|
||||
def fake_urlopen(req, timeout=None):
|
||||
captured_req.append(req)
|
||||
return FakeResp()
|
||||
return _FakeResp()
|
||||
|
||||
with patch("urllib.request.urlopen", fake_urlopen):
|
||||
_post_json("/api/test", {}, instance="http://override")
|
||||
@@ -132,24 +164,15 @@ class TestPostJson:
|
||||
|
||||
def test_no_auth_header_when_token_empty(self, monkeypatch):
|
||||
"""No Authorization header is added when API_TOKEN is empty."""
|
||||
monkeypatch.setattr(config, "INSTANCES", (("http://localhost", ""),))
|
||||
monkeypatch.setattr(config, "INSTANCE", "http://localhost")
|
||||
monkeypatch.setattr(config, "API_TOKEN", "")
|
||||
|
||||
captured_req = []
|
||||
|
||||
class FakeResp:
|
||||
def read(self):
|
||||
return b""
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, *a):
|
||||
pass
|
||||
|
||||
def fake_urlopen(req, timeout=None):
|
||||
captured_req.append(req)
|
||||
return FakeResp()
|
||||
return _FakeResp()
|
||||
|
||||
with patch("urllib.request.urlopen", fake_urlopen):
|
||||
_post_json("/api/test", {})
|
||||
@@ -170,10 +193,11 @@ class TestEnqueuePostJson:
|
||||
state = _fresh_state()
|
||||
_enqueue_post_json("/api/test", {"k": 1}, 50, state=state)
|
||||
assert len(state.queue) == 1
|
||||
priority, _counter, path, payload = state.queue[0]
|
||||
priority, _counter, path, payload, retries = state.queue[0]
|
||||
assert priority == 50
|
||||
assert path == "/api/test"
|
||||
assert payload == {"k": 1}
|
||||
assert retries == 0
|
||||
|
||||
def test_heap_ordering(self):
|
||||
"""Lower priority values are dequeued first (min-heap)."""
|
||||
@@ -182,7 +206,7 @@ class TestEnqueuePostJson:
|
||||
state = _fresh_state()
|
||||
_enqueue_post_json("/api/low", {}, 90, state=state)
|
||||
_enqueue_post_json("/api/high", {}, 10, state=state)
|
||||
_priority, _counter, path, _payload = heapq.heappop(state.queue)
|
||||
_priority, _counter, path, _payload, _retries = heapq.heappop(state.queue)
|
||||
assert path == "/api/high"
|
||||
|
||||
def test_counter_increments(self):
|
||||
@@ -365,3 +389,810 @@ class TestClearPostQueue:
|
||||
state = _fresh_state()
|
||||
_clear_post_queue(state=state)
|
||||
assert state.queue == []
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
# Multi-instance fan-out
# ---------------------------------------------------------------------------


class TestMultiInstanceFanOut:
    """Tests for multi-instance POST fan-out in :func:`queue._post_json`."""

    def test_fans_out_to_all_instances(self, monkeypatch):
        """Each configured instance receives the payload."""
        monkeypatch.setattr(
            config,
            "INSTANCES",
            (("http://alpha", "t1"), ("http://beta", "t2")),
        )

        captured = []

        def fake_urlopen(req, timeout=None):
            captured.append(req)
            return _FakeResp()

        with patch("urllib.request.urlopen", fake_urlopen):
            _post_json("/api/nodes", {"a": 1})

        assert len(captured) == 2
        urls = {r.get_full_url() for r in captured}
        assert urls == {"http://alpha/api/nodes", "http://beta/api/nodes"}
        tokens = {r.get_header("Authorization") for r in captured}
        assert tokens == {"Bearer t1", "Bearer t2"}

    def test_failure_isolation(self, monkeypatch):
        """A failure on one instance does not prevent delivery to the next."""
        monkeypatch.setattr(
            config,
            "INSTANCES",
            (("http://broken", "t1"), ("http://ok", "t2")),
        )
        monkeypatch.setattr(config, "DEBUG", False)

        captured = []

        def fake_urlopen(req, timeout=None):
            if "broken" in req.get_full_url():
                raise OSError("connection refused")
            captured.append(req)
            return _FakeResp()

        with patch("urllib.request.urlopen", fake_urlopen):
            _post_json("/api/test", {"x": 1})

        assert len(captured) == 1
        assert "http://ok" in captured[0].get_full_url()

    def test_explicit_instance_skips_fanout(self, monkeypatch):
        """Passing instance= explicitly bypasses the INSTANCES fan-out."""
        monkeypatch.setattr(
            config,
            "INSTANCES",
            (("http://a", "t1"), ("http://b", "t2")),
        )

        captured = []

        def fake_urlopen(req, timeout=None):
            captured.append(req)
            return _FakeResp()

        with patch("urllib.request.urlopen", fake_urlopen):
            _post_json("/api/test", {}, instance="http://override")

        assert len(captured) == 1
        assert "http://override" in captured[0].get_full_url()

    def test_empty_instances_noop(self, monkeypatch):
        """No requests are made when INSTANCES is empty."""
        monkeypatch.setattr(config, "INSTANCES", ())
        monkeypatch.setattr(config, "INSTANCE", "")

        with patch("urllib.request.urlopen") as mock_open:
            _post_json("/api/test", {})
        mock_open.assert_not_called()

    def test_backward_compat_fallback(self, monkeypatch):
        """Falls back to config.INSTANCE when INSTANCES is empty."""
        monkeypatch.setattr(config, "INSTANCES", ())
        monkeypatch.setattr(config, "INSTANCE", "http://legacy")
        monkeypatch.setattr(config, "API_TOKEN", "tok")

        captured = []

        def fake_urlopen(req, timeout=None):
            captured.append(req)
            return _FakeResp()

        with patch("urllib.request.urlopen", fake_urlopen):
            _post_json("/api/test", {"v": 1})

        assert len(captured) == 1
        assert "http://legacy" in captured[0].get_full_url()
        assert captured[0].get_header("Authorization") == "Bearer tok"

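A minimal sketch of the fan-out behavior these tests exercise, assuming `(base_url, token)` pairs as in `config.INSTANCES`; `post_json_fanout` is an illustrative name, not the project's `_post_json`:

```python
import json
import urllib.request


def post_json_fanout(instances, path, payload, timeout=5):
    """POST payload to every (base_url, token) pair; one failure never blocks the rest."""
    any_ok = False
    body = json.dumps(payload).encode("utf-8")
    for base_url, token in instances:
        req = urllib.request.Request(
            base_url + path,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {token}",
            },
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout):
                any_ok = True
        except OSError:
            # Isolate the failure and keep delivering to the next instance.
            continue
    return any_ok
```

Returning whether at least one instance succeeded gives the caller the retry signal the later retry-logic tests depend on.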
# ---------------------------------------------------------------------------
# HTTP failure always-logging
# ---------------------------------------------------------------------------


def test_http_failure_always_logged(monkeypatch):
    """POST failures are logged with always=True regardless of DEBUG mode.

    Operators must be able to see HTTP errors without enabling DEBUG so they
    can tell whether the ingestor is silently dropping data.
    """
    monkeypatch.setattr(config, "INSTANCES", (("http://localhost", ""),))
    monkeypatch.setattr(config, "INSTANCE", "http://localhost")
    monkeypatch.setattr(config, "DEBUG", False)

    log_calls: list[dict] = []
    original_debug_log = config._debug_log

    def capture_debug_log(msg, **kwargs):
        log_calls.append(kwargs)
        original_debug_log(msg, **kwargs)

    monkeypatch.setattr(config, "_debug_log", capture_debug_log)

    def raise_error(req, timeout=None):
        raise OSError("connection refused")

    with patch("urllib.request.urlopen", raise_error):
        _send_single("http://localhost", "", "/api/test", {"x": 1})

    assert any(
        c.get("always") is True for c in log_calls
    ), "Expected at least one _debug_log call with always=True on HTTP failure"


# ---------------------------------------------------------------------------
# Background drain thread
# ---------------------------------------------------------------------------


class TestQueueDrainer:
    """Tests for :func:`_start_queue_drainer` and :func:`_queue_drainer_loop`."""

    def test_start_queue_drainer_starts_thread(self):
        """_start_queue_drainer creates and starts a daemon thread."""
        state = _fresh_state()
        assert state.drainer is None
        _start_queue_drainer(state)
        assert state.drainer is not None
        assert state.drainer.is_alive()
        _stop_queue_drainer(state)

    def test_start_queue_drainer_idempotent(self):
        """Calling _start_queue_drainer twice does not create a second thread."""
        state = _fresh_state()
        _start_queue_drainer(state)
        first_thread = state.drainer
        _start_queue_drainer(state)
        assert state.drainer is first_thread
        _stop_queue_drainer(state)

    def test_queue_drainer_loop_drains_items(self):
        """_queue_drainer_loop drains enqueued items when signalled."""
        state = _fresh_state()
        drained: list[str] = []

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        try:
            _start_queue_drainer(state)
            _enqueue_post_json("/api/drainer-test", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/drainer-test" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/drainer-test" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_queue_post_json_signals_drain_event_with_drainer(self):
        """When a drainer is alive, _queue_post_json signals drain_event instead of blocking."""
        state = _fresh_state()
        drained: list[str] = []

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        try:
            _start_queue_drainer(state)
            # With a live drainer, the call should return immediately
            # (signal only) and the drainer processes the item in the background.
            _queue_post_json("/api/bg-test", {"k": 1}, priority=10, state=state)
            deadline = time.monotonic() + 2.0
            while "/api/bg-test" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/bg-test" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_queue_post_json_falls_back_to_sync_drain_without_drainer(self):
        """When no drainer is running, _queue_post_json drains synchronously."""
        state = _fresh_state()
        # state.drainer is None → synchronous path
        sent: list[str] = []
        _queue_post_json(
            "/api/sync",
            {"v": 1},
            priority=10,
            state=state,
            send=lambda p, d: sent.append(p),
        )
        assert "/api/sync" in sent

    def test_enqueue_during_drain_is_processed(self):
        """Items enqueued while the drainer is mid-drain are still drained.

        Simulates the race where a new item arrives while
        ``_drain_post_queue`` is actively processing. The new item must
        be picked up within the same drain cycle or on the next signal.
        """
        state = _fresh_state()
        drained: list[str] = []
        gate = threading.Event()

        original_post_json = _queue_mod._post_json

        def slow_send(path, payload):
            """Drain the first item slowly, allowing a second enqueue."""
            drained.append(path)
            if path == "/api/first":
                gate.set()

        _queue_mod._post_json = slow_send
        try:
            _start_queue_drainer(state)
            _enqueue_post_json("/api/first", {}, 10, state=state)
            state.drain_event.set()
            # Wait until the drainer has started processing /api/first.
            gate.wait(timeout=2.0)
            # Enqueue a second item while the drainer is active.
            _enqueue_post_json("/api/second", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/second" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/second" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_stop_queue_drainer(self):
        """_stop_queue_drainer signals the thread to exit and joins it."""
        state = _fresh_state()
        _start_queue_drainer(state)
        assert state.drainer is not None
        assert state.drainer.is_alive()
        _stop_queue_drainer(state)
        assert state.drainer is None
        assert state.shutdown.is_set()

    def test_stop_queue_drainer_noop_when_not_running(self):
        """_stop_queue_drainer is safe to call with no drainer."""
        state = _fresh_state()
        _stop_queue_drainer(state)
        assert state.drainer is None

# ---------------------------------------------------------------------------
# Drainer resilience
# ---------------------------------------------------------------------------


class TestDrainerResilience:
    """Tests verifying the drainer thread cannot be killed by exceptions."""

    def test_drainer_survives_drain_exception(self, monkeypatch):
        """The drainer loop keeps running after _drain_post_queue raises."""
        state = _fresh_state()
        drained: list[str] = []
        call_count = [0]

        original_drain = _queue_mod._drain_post_queue

        def flaky_drain(s, send=None):
            call_count[0] += 1
            if call_count[0] == 1:
                raise RuntimeError("transient drain error")
            original_drain(s, send=send)

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(_queue_mod, "_drain_post_queue", flaky_drain)
        try:
            _start_queue_drainer(state)
            # First signal triggers the RuntimeError; drainer should survive.
            _enqueue_post_json("/api/first", {}, 10, state=state)
            state.drain_event.set()
            time.sleep(0.2)
            assert state.drainer.is_alive(), "Drainer died after drain exception"
            # Second signal should succeed normally.
            _enqueue_post_json("/api/second", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/second" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/second" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_drainer_survives_debug_log_exception(self, monkeypatch):
        """The drainer survives even when _debug_log raises inside the error handler."""
        state = _fresh_state()
        drained: list[str] = []
        call_count = [0]

        original_drain = _queue_mod._drain_post_queue

        def flaky_drain(s, send=None):
            call_count[0] += 1
            if call_count[0] == 1:
                raise RuntimeError("drain error")
            original_drain(s, send=send)

        def broken_log(*args, **kwargs):
            raise BrokenPipeError("stdout closed")

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(_queue_mod, "_drain_post_queue", flaky_drain)
        monkeypatch.setattr(config, "_debug_log", broken_log)
        try:
            _start_queue_drainer(state)
            _enqueue_post_json("/api/first", {}, 10, state=state)
            state.drain_event.set()
            time.sleep(0.2)
            assert state.drainer.is_alive(), "Drainer died after log exception"
            # Restore log so the second drain can proceed.
            monkeypatch.undo()
            _queue_mod._post_json = lambda path, payload: drained.append(path)
            monkeypatch.setattr(_queue_mod, "_drain_post_queue", original_drain)
            _enqueue_post_json("/api/second", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/second" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/second" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_drainer_logs_startup(self, monkeypatch):
        """The drainer logs a startup message."""
        state = _fresh_state()
        log_msgs: list[str] = []
        monkeypatch.setattr(
            config, "_debug_log", lambda msg, **kw: log_msgs.append(msg)
        )
        _start_queue_drainer(state)
        time.sleep(0.1)
        _stop_queue_drainer(state)
        assert any("started" in m.lower() for m in log_msgs)

    def test_drainer_logs_exit(self, monkeypatch):
        """The drainer logs an exit message on clean shutdown."""
        state = _fresh_state()
        log_msgs: list[str] = []
        monkeypatch.setattr(
            config, "_debug_log", lambda msg, **kw: log_msgs.append(msg)
        )
        _start_queue_drainer(state)
        time.sleep(0.1)
        _stop_queue_drainer(state)
        assert any("exiting" in m.lower() for m in log_msgs)

    def test_drainer_logs_depth_warning(self, monkeypatch):
        """A warning is emitted when queue depth exceeds the threshold."""
        state = _fresh_state()
        log_kwargs: list[dict] = []
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda msg, **kw: log_kwargs.append({"msg": msg, **kw}),
        )

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: None
        try:
            for i in range(_QUEUE_DEPTH_WARNING_THRESHOLD + 1):
                _enqueue_post_json(f"/api/{i}", {}, 10, state=state)
            _start_queue_drainer(state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while (
                not any("depth" in e.get("msg", "").lower() for e in log_kwargs)
                and time.monotonic() < deadline
            ):
                time.sleep(0.01)
            assert any("depth" in e.get("msg", "").lower() for e in log_kwargs)
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

# ---------------------------------------------------------------------------
# Retry logic
# ---------------------------------------------------------------------------


class TestRetryLogic:
    """Tests for send failure retry in :func:`_drain_post_queue`."""

    def test_send_single_returns_true_on_success(self, monkeypatch):
        """_send_single returns True when the HTTP call succeeds."""
        with patch("urllib.request.urlopen", lambda req, timeout=None: _FakeResp()):
            assert _send_single("http://localhost", "", "/api/ok", {}) is True

    def test_send_single_returns_false_on_failure(self, monkeypatch):
        """_send_single returns False when the HTTP call fails."""
        monkeypatch.setattr(config, "_debug_log", lambda *a, **kw: None)

        def raise_error(req, timeout=None):
            raise OSError("fail")

        with patch("urllib.request.urlopen", raise_error):
            assert _send_single("http://localhost", "", "/api/fail", {}) is False

    def test_post_json_returns_true_on_success(self, monkeypatch):
        """_post_json returns True when the instance succeeds."""
        monkeypatch.setattr(config, "INSTANCES", (("http://ok", ""),))
        with patch("urllib.request.urlopen", lambda req, timeout=None: _FakeResp()):
            assert _post_json("/api/ok", {}) is True

    def test_post_json_returns_false_when_all_fail(self, monkeypatch):
        """_post_json returns False when all instances fail."""
        monkeypatch.setattr(config, "INSTANCES", (("http://a", ""), ("http://b", "")))
        monkeypatch.setattr(config, "_debug_log", lambda *a, **kw: None)

        def raise_error(req, timeout=None):
            raise OSError("fail")

        with patch("urllib.request.urlopen", raise_error):
            assert _post_json("/api/fail", {}) is False

    def test_post_json_returns_true_when_at_least_one_succeeds(self, monkeypatch):
        """_post_json returns True when at least one instance succeeds."""
        monkeypatch.setattr(
            config, "INSTANCES", (("http://broken", ""), ("http://ok", ""))
        )
        monkeypatch.setattr(config, "_debug_log", lambda *a, **kw: None)

        def selective_urlopen(req, timeout=None):
            if "broken" in req.get_full_url():
                raise OSError("fail")
            return _FakeResp()

        with patch("urllib.request.urlopen", selective_urlopen):
            assert _post_json("/api/mixed", {}) is True

    def test_drain_retries_on_send_failure(self):
        """Items are re-queued and retried when send returns False."""
        state = _fresh_state()
        attempts: list[str] = []
        call_count = [0]

        def flaky_send(path, payload):
            call_count[0] += 1
            attempts.append(path)
            # Fail on first attempt, succeed on retry.
            return call_count[0] > 1

        _enqueue_post_json("/api/retry", {"v": 1}, 10, state=state)
        _drain_post_queue(state, send=flaky_send)
        assert attempts.count("/api/retry") == 2

    def test_drain_drops_after_max_retries(self, monkeypatch):
        """Items are dropped with a warning after exceeding max retries."""
        state = _fresh_state()
        attempts: list[str] = []
        log_kwargs: list[dict] = []
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda msg, **kw: log_kwargs.append({"msg": msg, **kw}),
        )

        def always_fail(path, payload):
            attempts.append(path)
            return False

        _enqueue_post_json("/api/doomed", {}, 10, state=state)
        _drain_post_queue(state, send=always_fail)
        # Initial attempt + _MAX_SEND_RETRIES retries.
        assert attempts.count("/api/doomed") == _MAX_SEND_RETRIES + 1
        assert any("dropping" in e.get("msg", "").lower() for e in log_kwargs)

    def test_drain_no_retry_for_none_return(self):
        """Custom send callables returning None are NOT retried.

        This preserves backward compatibility with test lambdas that do not
        return a boolean.
        """
        state = _fresh_state()
        attempts: list[str] = []

        def custom_send(path, payload):
            attempts.append(path)
            return None

        _enqueue_post_json("/api/once", {}, 10, state=state)
        _drain_post_queue(state, send=custom_send)
        assert attempts.count("/api/once") == 1

    def test_enqueue_with_retries_parameter(self):
        """_enqueue_post_json stores the retry count in the 5th tuple position."""
        state = _fresh_state()
        _enqueue_post_json("/api/r", {}, 10, state=state, retries=2)
        assert len(state.queue) == 1
        assert state.queue[0][4] == 2

    def test_drain_handles_legacy_4_tuple(self):
        """_drain_post_queue handles 4-tuple items without crashing."""
        import heapq

        state = _fresh_state()
        sent: list[str] = []
        # Push a legacy 4-tuple directly.
        with state.lock:
            heapq.heappush(state.queue, (10, 0, "/api/legacy", {"v": 1}))
        _drain_post_queue(state, send=lambda p, d: sent.append(p))
        assert "/api/legacy" in sent

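A minimal sketch of the retry semantics these tests pin down: re-queue on a `False` return, drop after a cap, no retry when the send callable returns `None`, and tolerance for legacy 4-tuples. `MAX_SEND_RETRIES`, `State`, and `drain_post_queue` are illustrative stand-ins for the module's private names:

```python
import heapq
import threading

MAX_SEND_RETRIES = 3  # illustrative; the real constant is _MAX_SEND_RETRIES


class State:
    """Minimal queue state holder (illustrative only)."""

    def __init__(self):
        self.queue = []
        self.lock = threading.Lock()


def drain_post_queue(state, send):
    """Pop every queued item; re-queue when send() returns False, drop after the cap."""
    while True:
        with state.lock:
            if not state.queue:
                return
            item = heapq.heappop(state.queue)
        # Legacy 4-tuples carry no retry count; treat them as retries=0.
        priority, counter, path, payload, *rest = item
        retries = rest[0] if rest else 0
        ok = send(path, payload)
        # Only an explicit False triggers a retry; None means the callable
        # gave no signal (backward compatible with old test lambdas).
        if ok is False:
            if retries < MAX_SEND_RETRIES:
                with state.lock:
                    heapq.heappush(
                        state.queue, (priority, counter, path, payload, retries + 1)
                    )
            # else: drop the item (a real implementation would log a warning)
```

Re-pushing with the incremented retry count means an always-failing item is attempted exactly `MAX_SEND_RETRIES + 1` times before being discarded.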
# ---------------------------------------------------------------------------
# Drainer auto-restart
# ---------------------------------------------------------------------------


class TestDrainerAutoRestart:
    """Tests for automatic drainer thread recovery in :func:`_queue_post_json`."""

    def test_queue_post_json_restarts_dead_drainer(self, monkeypatch):
        """A dead drainer is automatically restarted by _queue_post_json."""
        state = _fresh_state()
        drained: list[str] = []

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(config, "_debug_log", lambda *a, **kw: None)
        try:
            # Start and then kill the drainer.
            _start_queue_drainer(state)
            _stop_queue_drainer(state)
            # _stop_queue_drainer sets drainer=None, so simulate a crash
            # where the Thread object is still present but dead.
            state.drainer = threading.Thread(target=lambda: None, daemon=True)
            state.drainer.start()
            state.drainer.join()  # Dead thread, is_alive()=False

            _queue_post_json("/api/revived", {"v": 1}, priority=10, state=state)
            deadline = time.monotonic() + 2.0
            while "/api/revived" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/revived" in drained
            assert state.drainer is not None
            assert state.drainer.is_alive()
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_queue_post_json_no_restart_when_never_started(self):
        """No drainer is started when state.drainer is None (daemon's job)."""
        state = _fresh_state()
        assert state.drainer is None
        sent: list[str] = []
        _queue_post_json(
            "/api/no-restart",
            {},
            priority=10,
            state=state,
            send=lambda p, d: sent.append(p),
        )
        assert "/api/no-restart" in sent
        assert state.drainer is None

    def test_start_queue_drainer_resets_shutdown(self):
        """_start_queue_drainer clears the shutdown event before starting."""
        state = _fresh_state()
        _start_queue_drainer(state)
        _stop_queue_drainer(state)
        assert state.shutdown.is_set()
        # Re-start should clear shutdown and start a live thread.
        _start_queue_drainer(state)
        assert not state.shutdown.is_set()
        assert state.drainer is not None
        assert state.drainer.is_alive()
        _stop_queue_drainer(state)

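A minimal sketch of the restart decision these tests pin down: only a drainer that once existed and then died is revived, while a never-started drainer (`None`) is left alone (`maybe_restart_drainer` is an illustrative name, not the project's API):

```python
import threading


class State:
    """Minimal holder for the drainer slot (illustrative only)."""

    def __init__(self):
        self.drainer = None


def maybe_restart_drainer(state, start):
    # A dead Thread object means the drainer crashed → revive it.
    # None means it was never started → leave that to the daemon startup path.
    if state.drainer is not None and not state.drainer.is_alive():
        start(state)


state = State()
restarts = []
maybe_restart_drainer(state, lambda s: restarts.append("restart"))  # None → no-op

dead = threading.Thread(target=lambda: None)
dead.start()
dead.join()  # thread object present but is_alive() is False
state.drainer = dead
maybe_restart_drainer(state, lambda s: restarts.append("restart"))  # dead → restart
```

Distinguishing "never started" from "crashed" avoids spinning up a drainer in synchronous-only contexts while still self-healing after a thread death.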
# ---------------------------------------------------------------------------
# No-instances warning
# ---------------------------------------------------------------------------


class TestNoInstancesWarning:
    """Tests for the warning log when no target instances are configured."""

    def test_post_json_errors_when_no_instances(self, monkeypatch):
        """An error is logged when INSTANCES and INSTANCE are both empty."""
        monkeypatch.setattr(config, "INSTANCES", ())
        monkeypatch.setattr(config, "INSTANCE", "")
        log_kwargs: list[dict] = []
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda msg, **kw: log_kwargs.append({"msg": msg, **kw}),
        )

        result = _post_json("/api/nowhere", {"v": 1})

        assert result is False
        assert any(
            kw.get("always") is True and kw.get("severity") == "error"
            for kw in log_kwargs
        )

    def test_post_json_survives_log_exception_on_no_instances(self, monkeypatch):
        """_post_json still returns False when logging itself raises."""
        monkeypatch.setattr(config, "INSTANCES", ())
        monkeypatch.setattr(config, "INSTANCE", "")
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(OSError("log broken")),
        )
        assert _post_json("/api/nowhere", {}) is False


# ---------------------------------------------------------------------------
# Defensive exception guard coverage
# ---------------------------------------------------------------------------


class TestDefensiveExceptionGuards:
    """Cover the ``except Exception: pass`` guards wrapping ``_debug_log`` calls.

    These guards ensure that a broken logging backend (e.g. ``BrokenPipeError``
    from ``print()`` to a closed stdout) never crashes the drainer thread or
    drops data.
    """

    def test_drain_drop_log_exception(self, monkeypatch):
        """Max-retries drop path survives a broken _debug_log."""
        state = _fresh_state()
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )

        attempts: list[str] = []

        def always_fail(path, payload):
            attempts.append(path)
            return False

        _enqueue_post_json("/api/fail", {}, 10, state=state)
        # Should not raise even though _debug_log throws on the drop message.
        _drain_post_queue(state, send=always_fail)
        assert attempts.count("/api/fail") == _MAX_SEND_RETRIES + 1

    def test_drainer_startup_log_exception(self, monkeypatch):
        """Drainer thread starts even when the startup log raises."""
        state = _fresh_state()
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )
        _start_queue_drainer(state)
        time.sleep(0.15)
        assert state.drainer is not None
        assert state.drainer.is_alive()
        # Restore log so stop can log cleanly.
        monkeypatch.undo()
        _stop_queue_drainer(state)

    def test_drainer_exit_log_exception(self, monkeypatch):
        """Drainer thread exits cleanly even when the exit log raises."""
        state = _fresh_state()
        _start_queue_drainer(state)
        time.sleep(0.05)
        # Break _debug_log AFTER startup so only the exit log raises.
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )
        _stop_queue_drainer(state)
        assert state.drainer is None

    def test_drainer_depth_warning_log_exception(self, monkeypatch):
        """Drainer survives a broken _debug_log during depth warning."""
        state = _fresh_state()
        drained: list[str] = []

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        try:
            _start_queue_drainer(state)
            time.sleep(0.05)
            # Break _debug_log so the depth warning raises.
            monkeypatch.setattr(
                config,
                "_debug_log",
                lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
            )
            for i in range(_QUEUE_DEPTH_WARNING_THRESHOLD + 1):
                _enqueue_post_json(f"/api/{i}", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while (
                len(drained) < _QUEUE_DEPTH_WARNING_THRESHOLD + 1
                and time.monotonic() < deadline
            ):
                time.sleep(0.01)
            assert len(drained) == _QUEUE_DEPTH_WARNING_THRESHOLD + 1
        finally:
            _queue_mod._post_json = original_post_json
            monkeypatch.undo()
            _stop_queue_drainer(state)

    def test_drainer_error_handler_log_exception(self, monkeypatch):
        """Drainer survives when both drain and error-log raise."""
        state = _fresh_state()
        call_count = [0]
        original_drain = _queue_mod._drain_post_queue

        def flaky_drain(s, send=None):
            call_count[0] += 1
            if call_count[0] == 1:
                raise RuntimeError("drain boom")
            original_drain(s, send=send)

        drained: list[str] = []
        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(_queue_mod, "_drain_post_queue", flaky_drain)
        # _debug_log raises on the error handler's inner logging call.
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )
        try:
            _start_queue_drainer(state)
            _enqueue_post_json("/api/first", {}, 10, state=state)
            state.drain_event.set()
            time.sleep(0.3)
            assert state.drainer.is_alive()
            # Restore to process an item normally.
            monkeypatch.undo()
            _queue_mod._post_json = lambda path, payload: drained.append(path)
            monkeypatch.setattr(_queue_mod, "_drain_post_queue", original_drain)
            _enqueue_post_json("/api/second", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/second" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/second" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_restart_warning_log_exception(self, monkeypatch):
        """Drainer restart proceeds even when the restart warning log raises."""
        state = _fresh_state()
        drained: list[str] = []
        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )
        try:
            # Simulate a crashed drainer (dead Thread, not None).
            state.drainer = threading.Thread(target=lambda: None, daemon=True)
            state.drainer.start()
            state.drainer.join()
            assert not state.drainer.is_alive()

            _queue_post_json("/api/restarted", {"v": 1}, priority=10, state=state)
            deadline = time.monotonic() + 2.0
            while "/api/restarted" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/restarted" in drained
        finally:
            _queue_mod._post_json = original_post_json
            monkeypatch.undo()
            _stop_queue_drainer(state)
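The `(_ for _ in ()).throw(...)` lambdas in the guard tests rely on a small Python idiom: a lambda body cannot contain a `raise` statement, but calling `throw()` on a fresh empty generator raises the given exception immediately. A self-contained illustration:

```python
# One-line raising callable: generators expose throw(), which raises the
# exception inside the generator frame -- for a never-started generator,
# that propagates straight to the caller.
raiser = lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken"))

try:
    raiser("any message", severity="error")
except BrokenPipeError as exc:
    caught = str(exc)
```

This keeps the monkeypatched `_debug_log` replacements compact while still exercising every `except Exception` guard around logging.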
+3
-1
@@ -76,6 +76,7 @@ COPY --chown=potatomesh:potatomesh web/spec ./spec
COPY --chown=potatomesh:potatomesh web/public ./public
COPY --chown=potatomesh:potatomesh web/views ./views
COPY --chown=potatomesh:potatomesh web/scripts ./scripts
COPY --chown=potatomesh:potatomesh web/pages ./pages

# Copy SQL schema files from data directory
COPY --chown=potatomesh:potatomesh data/*.sql /data/
@@ -84,7 +85,8 @@ COPY --chown=potatomesh:potatomesh data/mesh_ingestor/decode_payload.py /app/dat
# Create data and configuration directories with correct ownership
RUN mkdir -p /app/.local/share/potato-mesh \
    && mkdir -p /app/.config/potato-mesh/well-known \
    && chown -R potatomesh:potatomesh /app/.local/share /app/.config
    && mkdir -p /app/pages \
    && chown -R potatomesh:potatomesh /app/.local/share /app/.config /app/pages

# Switch to non-root user
USER potatomesh

@@ -20,6 +20,8 @@ gem "sqlite3", "~> 1.7"
gem "rackup", "~> 2.2"
gem "puma", "~> 7.0"
gem "prometheus-client"
gem "kramdown", "~> 2.4"
gem "kramdown-parser-gfm", "~> 1.1"

group :test do
  gem "rspec", "~> 3.12"
@@ -29,3 +31,5 @@ group :test do
  gem "simplecov_json_formatter", "~> 0.1", require: false
  gem "rspec_junit_formatter", "~> 0.6", require: false
end

gem "sanitize", "7.0.0"

@@ -57,6 +57,8 @@ require_relative "application/meshtastic/cipher"
require_relative "application/meshtastic/payload_decoder"
require_relative "application/data_processing"
require_relative "application/filesystem"
require_relative "application/api_cache"
require_relative "application/pages"
require_relative "application/instances"
require_relative "application/routes/api"
require_relative "application/routes/ingest"
@@ -74,6 +76,7 @@ module PotatoMesh
    extend App::Queries
    extend App::DataProcessing
    extend App::Filesystem
    extend App::Pages

    helpers App::Helpers
    include App::Database
@@ -85,6 +88,7 @@ module PotatoMesh
    include App::Queries
    include App::DataProcessing
    include App::Filesystem
    include App::Pages

    register App::Routes::Api
    register App::Routes::Ingest
@@ -210,6 +214,7 @@ SELF_INSTANCE_ID = PotatoMesh::Application::SELF_INSTANCE_ID unless defined?(SEL
  PotatoMesh::App::Prometheus,
  PotatoMesh::App::Queries,
  PotatoMesh::App::DataProcessing,
  PotatoMesh::App::Pages,
].each do |mod|
  Object.include(mod) unless Object < mod
end

@@ -0,0 +1,163 @@
+# Copyright © 2025-26 l5yth & contributors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# frozen_string_literal: true
+
+require "digest"
+
+module PotatoMesh
+  module App
+    # Thread-safe in-memory cache for serialised API responses.
+    #
+    # Each entry is stored with a monotonic expiration time and a pre-computed
+    # ETag so the route handler can skip recomputing the digest on cache hits.
+    #
+    # The cache is bounded to {MAX_ENTRIES} to prevent unbounded memory growth
+    # from attacker-controlled query parameters. When the limit is reached the
+    # oldest entry by insertion order is evicted (LRU-ish via Ruby hash ordering).
+    #
+    # Invalidation can target a specific prefix (e.g. +"api:nodes:"+) so that an
+    # ingest POST to +/api/messages+ does not flush the neighbors cache.
+    # A single-flight guard coalesces concurrent misses for the same key so only
+    # one thread computes the value while others wait for the result.
+    module ApiCache
+      # Hard cap on the number of cached entries to prevent memory exhaustion.
+      # With the whitelisted protocol values and known limit set, the realistic
+      # key space is ~30 entries. 64 provides generous headroom.
+      MAX_ENTRIES = 64
+
+      @store = {}
+      @inflight = {}
+      @mutex = Mutex.new
+
+      class << self
+        # Retrieve a cached value or compute and store it.
+        #
+        # When multiple threads request the same cold key concurrently only one
+        # executes the block; the others wait for the result (single-flight).
+        #
+        # The returned hash contains both +:value+ (the JSON string) and +:etag+
+        # (pre-computed weak ETag) so callers can set the header without
+        # re-hashing the body.
+        #
+        # @param key [String] cache key incorporating all relevant query
+        #   parameters (limit, protocol, etc.).
+        # @param ttl_seconds [Numeric] time-to-live for the cached entry.
+        # @yield Computes the value to cache when the entry is missing or
+        #   expired. The block should return the serialised JSON string.
+        # @return [Hash{Symbol => String}] +:value+ and +:etag+ of the response.
+        def fetch(key, ttl_seconds:)
+          now = monotonic_now
+
+          @mutex.synchronize do
+            entry = @store[key]
+            if entry && now < entry[:expires_at]
+              return { value: entry[:value], etag: entry[:etag] }
+            end
+
+            # Single-flight: if another thread is already computing this key,
+            # wait for it to finish and use its result. The loop guards
+            # against spurious wakeups from ConditionVariable#wait.
+            while @inflight.key?(key)
+              cv = @inflight[key]
+              cv.wait(@mutex)
+              entry = @store[key]
+              if entry && monotonic_now < entry[:expires_at]
+                return { value: entry[:value], etag: entry[:etag] }
+              end
+            end
+
+            # Mark this key as in-flight so concurrent requests wait.
+            @inflight[key] = ConditionVariable.new
+          end
+
+          value = yield
+          etag = Digest::MD5.hexdigest(value)
+
+          @mutex.synchronize do
+            evict_oldest_if_full
+            @store[key] = { value: value, etag: etag, expires_at: monotonic_now + ttl_seconds }
+            cv = @inflight.delete(key)
+            cv&.broadcast
+          end
+
+          { value: value, etag: etag }
+        rescue => e
+          # On error, unblock any waiters and re-raise.
+          @mutex.synchronize do
+            cv = @inflight.delete(key)
+            cv&.broadcast
+          end
+          raise e
+        end
+
+        # Remove entries whose keys start with any of the given prefixes.
+        #
+        # Targeted invalidation so that e.g. a messages POST does not flush the
+        # neighbors or telemetry caches.
+        #
+        # @param prefixes [Array<String>] key prefixes to match.
+        # @return [void]
+        def invalidate_prefix(*prefixes)
+          @mutex.synchronize do
+            @store.reject! do |key, _|
+              prefixes.any? { |p| key.start_with?(p) }
+            end
+          end
+        end
+
+        # Remove all entries from the cache.
+        #
+        # @return [void]
+        def invalidate_all
+          @mutex.synchronize { @store.clear }
+        end
+
+        # Remove specific entries by exact key.
+        #
+        # @param keys [Array<String>] cache keys to evict.
+        # @return [void]
+        def invalidate(*keys)
+          @mutex.synchronize do
+            keys.each { |k| @store.delete(k) }
+          end
+        end
+
+        # Return the number of entries currently held in the cache.
+        #
+        # @return [Integer] entry count.
+        def size
+          @mutex.synchronize { @store.size }
+        end
+
+        private
+
+        # Use the monotonic clock so TTL calculations are immune to wall-clock
+        # adjustments (NTP jumps, DST transitions, etc.).
+        def monotonic_now
+          Process.clock_gettime(Process::CLOCK_MONOTONIC)
+        end
+
+        # Evict the oldest entry when the store is at capacity. Ruby hashes
+        # preserve insertion order, so +first+ is the oldest key.
+        def evict_oldest_if_full
+          while @store.size >= MAX_ENTRIES
+            oldest_key = @store.each_key.first
+            @store.delete(oldest_key)
+          end
+        end
+      end
+    end
+  end
+end
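
The fetch path above can be compressed into a standalone sketch of the single-flight TTL pattern: one thread computes a cold key while concurrent callers block on a ConditionVariable until the value and its digest are published. `MiniCache` and the key string are illustrative names only, not part of the codebase, and the error-path broadcast the real module performs is omitted here.

```ruby
require "digest"

# Minimal single-flight TTL cache: only one thread runs the block for a
# cold key; concurrent callers wait and reuse the published result.
class MiniCache
  def initialize
    @store = {}
    @inflight = {}
    @mutex = Mutex.new
  end

  def fetch(key, ttl_seconds:)
    now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    @mutex.synchronize do
      entry = @store[key]
      return { value: entry[:value], etag: entry[:etag] } if entry && now < entry[:expires_at]

      # Another thread is computing this key: wait, then reuse its result.
      while (cv = @inflight[key])
        cv.wait(@mutex)
        entry = @store[key]
        return { value: entry[:value], etag: entry[:etag] } if entry
      end
      @inflight[key] = ConditionVariable.new
    end

    value = yield # computed outside the lock; sketch omits error cleanup
    etag = Digest::MD5.hexdigest(value)
    @mutex.synchronize do
      @store[key] = { value: value, etag: etag,
                      expires_at: Process.clock_gettime(Process::CLOCK_MONOTONIC) + ttl_seconds }
      @inflight.delete(key)&.broadcast
    end
    { value: value, etag: etag }
  end
end

cache = MiniCache.new
calls = 0
threads = 8.times.map do
  Thread.new { cache.fetch("api:nodes:100", ttl_seconds: 60) { calls += 1; sleep 0.01; "[]" } }
end
results = threads.map(&:value)
```

Because the first thread registers the in-flight marker before releasing the mutex, every other thread either sees the marker (and waits) or a fresh cache entry (and hits), so the block runs exactly once for the eight concurrent requests.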

@@ -170,18 +170,21 @@ module PotatoMesh
       )
       return if existing
 
-      protocol_label = protocol.split(/[-_]/).map(&:capitalize).join
-      long_name = "#{protocol_label} #{short_id}"
+      long_name = "#{protocol_display_label(protocol)} #{short_id}"
+      default_role = case protocol
+                     when "meshcore" then "COMPANION"
+                     else "CLIENT_HIDDEN"
+                     end
       heard_time = coerce_integer(heard_time)
       inserted = false
 
       with_busy_retry do
         db.execute(
           <<~SQL,
-            INSERT OR IGNORE INTO nodes(node_id,num,short_name,long_name,role,last_heard,first_heard)
-            VALUES (?,?,?,?,?,?,?)
+            INSERT OR IGNORE INTO nodes(node_id,num,short_name,long_name,role,last_heard,first_heard,protocol)
+            VALUES (?,?,?,?,?,?,?,?)
           SQL
-          [node_id, node_num, short_id, long_name, "CLIENT_HIDDEN", heard_time, heard_time],
+          [node_id, node_num, short_id, long_name, default_role, heard_time, heard_time, protocol],
         )
         inserted = db.changes.positive?
       end
@@ -200,6 +203,27 @@ module PotatoMesh
       inserted
     end
 
+    # Converts a protocol identifier such as +meshtastic+ or +mesh-core+ into
+    # the display label used in generated node names: capitalised parts joined
+    # without a separator (e.g. +Meshtastic+, +MeshCore+).
+    def protocol_display_label(protocol)
+      protocol.split(/[-_]/).map(&:capitalize).join
+    end
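
The label transform is small enough to check in isolation. A standalone copy of the method body, with example inputs, shows how separator-free identifiers behave:

```ruby
# Standalone copy of the transform: split on "-" or "_", capitalise
# each part, join with no separator.
def protocol_display_label(protocol)
  protocol.split(/[-_]/).map(&:capitalize).join
end

protocol_display_label("meshtastic") # => "Meshtastic"
protocol_display_label("mesh-core")  # => "MeshCore"
protocol_display_label("mesh_core")  # => "MeshCore"
protocol_display_label("meshcore")   # => "Meshcore"
```

Note the last case: an identifier with no separator yields a single capitalised word, which is why the schema backfill later in this commit matches placeholders with `long_name LIKE 'Meshcore %'`.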
+
+    # Returns true if +long_name+ is the synthetic placeholder generated by
+    # +ensure_unknown_node+ for the given +node_id+ and +protocol+. Such
+    # names carry no real information and must not overwrite a known name
+    # already on record.
+    def generic_fallback_name?(long_name, node_id, protocol)
+      return false unless long_name && !long_name.empty?
+
+      parts = canonical_node_parts(node_id)
+      return false unless parts
+
+      short_id = parts[2]
+      long_name == "#{protocol_display_label(protocol)} #{short_id}"
+    end
+
     def touch_node_last_seen(
       db,
       node_ref,
@@ -332,13 +356,20 @@ module PotatoMesh
       user = n["user"] || {}
       met = n["deviceMetrics"] || {}
       pos = n["position"] || {}
-      role = user["role"] || "CLIENT"
+      # nil when user info absent; COALESCE in the conflict clause preserves
+      # the stored role rather than overwriting with a default.
+      role = user["role"]
       lh = coerce_integer(n["lastHeard"])
       pt = coerce_integer(pos["time"])
       now = Time.now.to_i
       pt = nil if pt && pt > now
       lh = now if lh && lh > now
-      lh = pt if pt && (!lh || lh < pt)
+      # 0 is truthy in Ruby — `lh ||= now` won't replace it, leaving the
+      # 7-day list filter to evaluate `0 >= now-7days` → false (node hidden).
+      lh = nil if lh && lh <= 0
+      # position.time = 0 means no GPS fix; skip it as a last_heard anchor
+      # (would re-introduce the same zero-timestamp exclusion bug for lh).
+      lh = pt if pt && pt > 0 && (!lh || lh < pt)
       lh ||= now
       node_num = resolve_node_num(node_id, n)
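
The timestamp normalisation in the hunk above can be sketched as a pure function, which makes the zero-timestamp fix easy to exercise. `normalize_last_heard` is a hypothetical helper name, not a method in the codebase:

```ruby
# Hypothetical pure version of the last_heard normalisation:
# future values are clamped to now, zero/negative values are treated
# as "never heard" rather than 1970, and a valid position fix can
# only move last_heard forward.
def normalize_last_heard(last_heard, position_time, now)
  pt = position_time
  pt = nil if pt && pt > now          # reject position fixes from the future
  lh = last_heard
  lh = now if lh && lh > now          # clamp future last_heard to now
  lh = nil if lh && lh <= 0           # 0 is truthy in Ruby; treat as absent
  lh = pt if pt && pt > 0 && (!lh || lh < pt)
  lh || now
end

now = 1_700_000_000
normalize_last_heard(0, nil, now)             # => now (zero means absent)
normalize_last_heard(now - 60, now - 30, now) # => now - 30 (fix moves it forward)
normalize_last_heard(now + 999, nil, now)     # => now (future value clamped)
```

Without the `lh <= 0` guard, a node reporting `lastHeard: 0` would keep that literal zero and fail the 7-day visibility filter, which is the bug the commit comment describes.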
 
@@ -346,12 +377,33 @@ module PotatoMesh
 
       lora_freq = coerce_integer(n["lora_freq"] || n["loraFrequency"])
       modem_preset = string_or_nil(n["modem_preset"] || n["modemPreset"])
+      # Synthetic flag: true for placeholder nodes created from channel message
+      # sender names before the real contact advertisement is received.
+      synthetic = user["synthetic"] ? 1 : 0
+      long_name = user["longName"]
 
+      # If the incoming long name is a generic placeholder, prefer any real
+      # name already on record so we never stomp known data with fallback
+      # text. For new nodes there is nothing to preserve, so the generic
+      # name is still written via the INSERT VALUES path.
+      long_name_conflict_sql = if generic_fallback_name?(long_name, node_id, protocol)
+        # Generic placeholder: keep any real name already on record.
+        # COALESCE returns nodes.long_name when non-null, otherwise falls
+        # back to the incoming generic — so brand-new nodes still get it.
+        "COALESCE(nodes.long_name, excluded.long_name)"
+      else
+        # Real name (or nil): use the incoming value, preserving the
+        # existing name only when the incoming value is nil. A nil
+        # long_name in the packet carries no information, so falling back
+        # to what we already have is better than overwriting with NULL.
+        "COALESCE(excluded.long_name, nodes.long_name)"
+      end
+
       row = [
         node_id,
         node_num,
         user["shortName"],
-        user["longName"],
+        long_name,
         user["macaddr"],
         user["hwModel"] || n["hwModel"],
         role,
@@ -380,30 +432,82 @@ module PotatoMesh
         lora_freq,
         modem_preset,
         protocol,
+        synthetic,
       ]
       with_busy_retry do
-        db.execute <<~SQL, row
-          INSERT INTO nodes(node_id,num,short_name,long_name,macaddr,hw_model,role,public_key,is_unmessagable,is_favorite,
-            hops_away,snr,last_heard,first_heard,battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,
-            position_time,location_source,precision_bits,latitude,longitude,altitude,lora_freq,modem_preset,protocol)
-          VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
-          ON CONFLICT(node_id) DO UPDATE SET
-            num=excluded.num, short_name=excluded.short_name, long_name=excluded.long_name, macaddr=excluded.macaddr,
-            hw_model=excluded.hw_model, role=excluded.role, public_key=excluded.public_key, is_unmessagable=excluded.is_unmessagable,
-            is_favorite=excluded.is_favorite, hops_away=excluded.hops_away, snr=excluded.snr, last_heard=excluded.last_heard,
-            first_heard=COALESCE(nodes.first_heard, excluded.first_heard, excluded.last_heard),
-            battery_level=excluded.battery_level, voltage=excluded.voltage, channel_utilization=excluded.channel_utilization,
-            air_util_tx=excluded.air_util_tx, uptime_seconds=excluded.uptime_seconds,
-            position_time=COALESCE(excluded.position_time, nodes.position_time),
-            location_source=COALESCE(excluded.location_source, nodes.location_source),
-            precision_bits=COALESCE(excluded.precision_bits, nodes.precision_bits),
-            latitude=COALESCE(excluded.latitude, nodes.latitude),
-            longitude=COALESCE(excluded.longitude, nodes.longitude),
-            altitude=COALESCE(excluded.altitude, nodes.altitude),
-            lora_freq=excluded.lora_freq, modem_preset=excluded.modem_preset,
-            protocol=COALESCE(NULLIF(nodes.protocol,'meshtastic'), excluded.protocol)
-          WHERE COALESCE(excluded.last_heard,0) >= COALESCE(nodes.last_heard,0)
-        SQL
+        db.transaction do
+          db.execute(<<~SQL, row)
+            INSERT INTO nodes(node_id,num,short_name,long_name,macaddr,hw_model,role,public_key,is_unmessagable,is_favorite,
+              hops_away,snr,last_heard,first_heard,battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,
+              position_time,location_source,precision_bits,latitude,longitude,altitude,lora_freq,modem_preset,protocol,synthetic)
+            VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
+            ON CONFLICT(node_id) DO UPDATE SET
+              num=COALESCE(excluded.num, nodes.num),
+              short_name=COALESCE(excluded.short_name, nodes.short_name),
+              long_name=#{long_name_conflict_sql},
+              macaddr=COALESCE(excluded.macaddr, nodes.macaddr),
+              hw_model=COALESCE(excluded.hw_model, nodes.hw_model),
+              role=COALESCE(excluded.role, nodes.role),
+              public_key=COALESCE(excluded.public_key, nodes.public_key),
+              is_unmessagable=COALESCE(excluded.is_unmessagable, nodes.is_unmessagable),
+              is_favorite=excluded.is_favorite, hops_away=excluded.hops_away, snr=excluded.snr, last_heard=excluded.last_heard,
+              first_heard=COALESCE(nodes.first_heard, excluded.first_heard, excluded.last_heard),
+              battery_level=excluded.battery_level, voltage=excluded.voltage, channel_utilization=excluded.channel_utilization,
+              air_util_tx=excluded.air_util_tx, uptime_seconds=excluded.uptime_seconds,
+              position_time=COALESCE(excluded.position_time, nodes.position_time),
+              location_source=COALESCE(excluded.location_source, nodes.location_source),
+              precision_bits=COALESCE(excluded.precision_bits, nodes.precision_bits),
+              latitude=COALESCE(excluded.latitude, nodes.latitude),
+              longitude=COALESCE(excluded.longitude, nodes.longitude),
+              altitude=COALESCE(excluded.altitude, nodes.altitude),
+              lora_freq=excluded.lora_freq, modem_preset=excluded.modem_preset,
+              protocol=COALESCE(NULLIF(nodes.protocol,'meshtastic'), excluded.protocol),
+              synthetic=MIN(COALESCE(excluded.synthetic,1), COALESCE(nodes.synthetic,1))
+            WHERE COALESCE(excluded.last_heard,0) >= COALESCE(nodes.last_heard,0)
+              AND NOT (COALESCE(nodes.synthetic,0) = 0 AND excluded.synthetic = 1)
+          SQL
+
+          # When a real (non-synthetic) node is upserted with a known long
+          # name, migrate any synthetic placeholder rows that share that name.
+          # This fires when the MeshCore device finally receives the sender's
+          # contact advertisement, resolving the placeholder to a real node ID.
+          if synthetic == 0 && long_name && !long_name.empty?
+            merge_synthetic_nodes(db, node_id, long_name)
+          end
+        end
       end
     end
 
+    # Migrate messages from synthetic placeholder nodes to a newly confirmed
+    # real node, then remove the placeholders.
+    #
+    # Called inside a transaction from +upsert_node+ when a real (non-synthetic)
+    # MeshCore node with the same +long_name+ is upserted.
+    #
+    # Only +messages.from_id+ is migrated. Synthetic nodes are placeholders
+    # created solely from parsed channel message sender names, so they cannot
+    # have associated positions, telemetry, neighbors, or traces — those tables
+    # are intentionally left untouched.
+    #
+    # @param db [SQLite3::Database] open database connection.
+    # @param real_node_id [String] canonical node ID for the real contact.
+    # @param long_name [String] long name to match against synthetic rows.
+    # @return [void]
+    def merge_synthetic_nodes(db, real_node_id, long_name)
+      synthetic_ids = db.execute(
+        "SELECT node_id FROM nodes WHERE long_name = ? AND synthetic = 1 AND protocol = 'meshcore' AND node_id != ?",
+        [long_name, real_node_id],
+      ).map { |row| row[0] }
+
+      synthetic_ids.each do |synthetic_id|
+        db.execute(
+          "UPDATE messages SET from_id = ? WHERE from_id = ?",
+          [real_node_id, synthetic_id],
+        )
+        db.execute(
+          "DELETE FROM nodes WHERE node_id = ? AND synthetic = 1",
+          [synthetic_id],
+        )
+      end
+    end

@@ -136,6 +136,25 @@ module PotatoMesh
       db.execute("ALTER TABLE nodes ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
       db.execute("UPDATE nodes SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
     end
 
+    unless node_columns.include?("synthetic")
+      db.execute("ALTER TABLE nodes ADD COLUMN synthetic BOOLEAN NOT NULL DEFAULT 0")
+    end
+
+    if node_columns.include?("long_name")
+      existing_indexes = db.execute("SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='nodes'").flatten
+      unless existing_indexes.include?("idx_nodes_long_name")
+        db.execute("CREATE INDEX IF NOT EXISTS idx_nodes_long_name ON nodes(long_name)")
+      end
+    end
+
+    # Backfill #747: ensure_unknown_node previously omitted the protocol
+    # column and hardcoded role=CLIENT_HIDDEN, causing meshcore placeholder
+    # nodes to be stored as meshtastic/CLIENT_HIDDEN. Fix both in one pass.
+    if node_columns.include?("protocol")
+      db.execute("UPDATE nodes SET protocol = 'meshcore' WHERE long_name LIKE 'Meshcore %' AND protocol = 'meshtastic'")
+      db.execute("UPDATE nodes SET role = 'COMPANION' WHERE protocol = 'meshcore' AND role = 'CLIENT_HIDDEN'")
+    end
   end
 
   message_table_exists = db.get_first_value(
@@ -198,6 +217,17 @@ module PotatoMesh
 
   unless instance_columns.include?("nodes_count")
     db.execute("ALTER TABLE instances ADD COLUMN nodes_count INTEGER")
    instance_columns << "nodes_count"
   end
 
+  unless instance_columns.include?("meshcore_nodes_count")
+    db.execute("ALTER TABLE instances ADD COLUMN meshcore_nodes_count INTEGER")
+    instance_columns << "meshcore_nodes_count"
+  end
+
+  unless instance_columns.include?("meshtastic_nodes_count")
+    db.execute("ALTER TABLE instances ADD COLUMN meshtastic_nodes_count INTEGER")
+    instance_columns << "meshtastic_nodes_count"
+  end
+
   telemetry_tables =

@@ -63,7 +63,11 @@ module PotatoMesh
     def self_instance_attributes
       domain = self_instance_domain
       last_update = latest_node_update_timestamp || Time.now.to_i
-      nodes_count = active_node_count_since(Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age)
+      cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
+      db = open_database(readonly: true)
+      nodes_count = active_node_count_since(cutoff, db: db)
+      mc_count = active_node_count_since_for_protocol(cutoff, "meshcore", db: db)
+      mt_count = active_node_count_since_for_protocol(cutoff, "meshtastic", db: db)
       {
         id: app_constant(:SELF_INSTANCE_ID),
         domain: domain,
@@ -78,7 +82,11 @@ module PotatoMesh
         is_private: private_mode?,
         contact_link: sanitized_contact_link,
         nodes_count: nodes_count,
+        meshcore_nodes_count: mc_count,
+        meshtastic_nodes_count: mt_count,
       }
+    ensure
+      db&.close
     end
 
     # Count the number of nodes active since the supplied timestamp.
@@ -107,6 +115,39 @@ module PotatoMesh
       handle&.close unless db
     end
 
+    # Count the number of nodes for a specific protocol active since the
+    # supplied timestamp.
+    #
+    # @param cutoff [Integer] unix timestamp in seconds.
+    # @param protocol [String] protocol name (e.g. "meshcore", "meshtastic").
+    # @param db [SQLite3::Database, nil] optional open handle to reuse.
+    # @return [Integer, nil] node count or nil when unavailable.
+    def active_node_count_since_for_protocol(cutoff, protocol, db: nil)
+      return nil unless cutoff && protocol
+
+      handle = db || open_database(readonly: true)
+      count =
+        with_busy_retry do
+          handle.get_first_value(
+            "SELECT COUNT(*) FROM nodes WHERE last_heard >= ? AND protocol = ?",
+            cutoff.to_i,
+            protocol,
+          )
+        end
+      Integer(count)
+    rescue SQLite3::Exception, ArgumentError => e
+      warn_log(
+        "Failed to count active nodes for protocol",
+        context: "instances.protocol_nodes_count",
+        protocol: protocol,
+        error_class: e.class.name,
+        error_message: e.message,
+      )
+      nil
+    ensure
+      handle&.close unless db
+    end
+
     def sign_instance_attributes(attributes)
       payload = canonical_instance_payload(attributes)
       Base64.strict_encode64(

@@ -297,9 +338,12 @@ module PotatoMesh
     def shutdown_federation_background_work!(timeout: nil)
       request_federation_shutdown!
       timeout_value = timeout || PotatoMesh::Config.federation_shutdown_timeout_seconds
+      # Drain the worker pool first so federation threads blocked in
+      # wait_for_federation_tasks unblock promptly instead of waiting
+      # for each task's individual timeout to expire.
+      shutdown_federation_worker_pool!
       stop_federation_thread!(:initial_federation_thread, timeout: timeout_value)
       stop_federation_thread!(:federation_thread, timeout: timeout_value)
-      shutdown_federation_worker_pool!
       clear_federation_crawl_state!
     end
 
@@ -377,6 +421,13 @@ module PotatoMesh
       db&.close
     end
 
+    # Announce the local instance record to a remote federation peer,
+    # cycling through resolved IP addresses when transport-level failures
+    # occur.
+    #
+    # @param domain [String] remote peer hostname.
+    # @param payload_json [String] JSON-encoded announcement body.
+    # @return [Boolean] true when the announcement was accepted.
     def announce_instance_to_domain(domain, payload_json)
       return false unless domain && !domain.empty?
       return false if federation_shutdown_requested?
@@ -387,14 +438,7 @@ module PotatoMesh
         break false if federation_shutdown_requested?
 
         begin
-          http = build_remote_http_client(uri)
-          response = Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
-            http.start do |connection|
-              request = build_federation_http_request(Net::HTTP::Post, uri)
-              request.body = payload_json
-              connection.request(request)
-            end
-          end
+          response = perform_announce_request(uri, payload_json)
           if response.is_a?(Net::HTTPSuccess)
             debug_log(
               "Published federation announcement",
@@ -448,6 +492,55 @@ module PotatoMesh
       published
     end
 
+    # Execute a POST announcement request against the supplied URI, cycling
+    # through resolved IP addresses on connection-level failures.
+    #
+    # @param uri [URI::Generic] target endpoint.
+    # @param payload_json [String] JSON-encoded announcement body.
+    # @return [Net::HTTPResponse] the HTTP response from the first reachable address.
+    # @raise [StandardError] when all addresses fail or a non-retryable error occurs.
+    def perform_announce_request(uri, payload_json)
+      remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
+      addresses = remote_addresses.empty? ? [nil] : remote_addresses
+
+      last_error = nil
+      addresses.each do |address|
+        break if federation_shutdown_requested?
+
+        begin
+          return perform_single_announce_request(uri, payload_json, ip_address: address&.to_s)
+        rescue StandardError => e
+          if connection_refused_or_unreachable?(e)
+            last_error = e
+          else
+            raise
+          end
+        end
+      end
+
+      raise(last_error || StandardError.new("all resolved addresses failed"))
+    end
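
The try-each-address loop above reduces to a small reusable shape: attempt each candidate in order, remember the last connection-level failure, and re-raise it only once every address is exhausted. A minimal standalone sketch (the `try_each_address` name and the TEST-NET addresses are illustrative, and only `ECONNREFUSED` is treated as retryable here):

```ruby
# Try each resolved address in order; a connection-level failure moves
# on to the next address, any other error propagates immediately.
def try_each_address(addresses)
  last_error = nil
  addresses.each do |address|
    begin
      return yield(address)
    rescue Errno::ECONNREFUSED => e
      last_error = e # connection-level: remember it and try the next address
    end
  end
  raise(last_error || StandardError.new("all resolved addresses failed"))
end

attempted = []
result = try_each_address(["192.0.2.1", "192.0.2.2"]) do |addr|
  attempted << addr
  raise Errno::ECONNREFUSED if addr == "192.0.2.1"
  "ok from #{addr}"
end
# result    => "ok from 192.0.2.2"
# attempted => ["192.0.2.1", "192.0.2.2"]
```

Keeping only the *last* error mirrors the production code: when all addresses fail, the raised exception describes the most recent attempt rather than the first.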
 
+    # Execute a single POST announcement request, optionally pinning the
+    # connection to a specific IP address.
+    #
+    # @param uri [URI::Generic] target endpoint.
+    # @param payload_json [String] JSON-encoded announcement body.
+    # @param ip_address [String, nil] resolved IP address to pin the
+    #   connection to, or +nil+ to let {build_remote_http_client} resolve.
+    # @return [Net::HTTPResponse] the HTTP response.
+    # @raise [StandardError] when the request fails.
+    def perform_single_announce_request(uri, payload_json, ip_address: nil)
+      http = build_remote_http_client(uri, ip_address: ip_address)
+      Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
+        http.start do |connection|
+          request = build_federation_http_request(Net::HTTP::Post, uri)
+          request.body = payload_json
+          connection.request(request)
+        end
+      end
+    end
+
     # Determine whether an HTTPS announcement failure should fall back to HTTP.
     #
     # @param error [StandardError] failure raised while attempting HTTPS.
@@ -463,6 +556,34 @@ module PotatoMesh
       false
     end
 
+    # Determine whether an error indicates a transport-level connection
+    # failure that may succeed on an alternative resolved address.
+    #
+    # Connection refusals, host/network unreachable errors, and TCP open
+    # timeouts signal that the selected IP address cannot be reached but
+    # do not rule out alternative addresses for the same hostname.
+    #
+    # @param error [StandardError] failure raised during the connection attempt.
+    # @return [Boolean] true when a retry with a different address is warranted.
+    def connection_refused_or_unreachable?(error)
+      retryable_classes = [
+        Errno::ECONNREFUSED,
+        Errno::EHOSTUNREACH,
+        Errno::ENETUNREACH,
+        Errno::ECONNRESET,
+        Errno::ETIMEDOUT,
+        Net::OpenTimeout,
+      ]
+      current = error
+      while current
+        return true if retryable_classes.any? { |klass| current.is_a?(klass) }
+
+        current = current.respond_to?(:cause) ? current.cause : nil
+      end
+
+      false
+    end
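
The cause-chain walk matters because Ruby automatically links `Exception#cause` when an exception is raised from inside a `rescue` block, so a retryable `Errno` failure often arrives wrapped in a higher-level error. Checking only the outermost class would miss it. A standalone sketch with an abbreviated retryable list:

```ruby
# Walk the exception's cause chain looking for a retryable low-level class.
RETRYABLE = [Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Errno::ETIMEDOUT].freeze

def retryable?(error)
  current = error
  while current
    return true if RETRYABLE.any? { |klass| current.is_a?(klass) }
    current = current.respond_to?(:cause) ? current.cause : nil
  end
  false
end

# Build a wrapped error: raising inside a rescue links `cause` automatically.
wrapped =
  begin
    begin
      raise Errno::ECONNREFUSED
    rescue
      raise RuntimeError, "request failed"
    end
  rescue => outer
    outer
  end

retryable?(wrapped)                  # => true (cause is ECONNREFUSED)
retryable?(RuntimeError.new("boom")) # => false (no retryable cause)
```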
 
     def announce_instance_to_all_domains
       return unless federation_enabled?
       return if federation_shutdown_requested?
@@ -661,10 +782,57 @@ module PotatoMesh
       []
     end
 
+    # Execute a GET request against the supplied federation URI, cycling
+    # through resolved IP addresses when a transport-level connection
+    # failure occurs.
+    #
+    # DNS resolution is performed once and the resulting addresses are
+    # sorted with IPv4 first via {sort_addresses_for_connection}. Each
+    # address is attempted sequentially; when a connection-level error
+    # (refused, unreachable, timeout) is raised the next address is tried.
+    # Non-connection errors (SSL failures, HTTP-level errors) are raised
+    # immediately without trying further addresses.
+    #
+    # @param uri [URI::Generic] target endpoint to request.
+    # @return [String] raw HTTP response body on success.
+    # @raise [InstanceFetchError] when all addresses are exhausted or a
+    #   non-retryable error occurs.
     def perform_instance_http_request(uri)
       raise InstanceFetchError, "federation shutdown requested" if federation_shutdown_requested?
 
-      http = build_remote_http_client(uri)
+      remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
+      addresses = remote_addresses.empty? ? [nil] : remote_addresses
+
+      last_error = nil
+      addresses.each do |address|
+        break if federation_shutdown_requested?
+
+        begin
+          return perform_single_http_request(uri, ip_address: address&.to_s)
+        rescue InstanceFetchError => e
+          if connection_refused_or_unreachable?(e)
+            last_error = e
+          else
+            raise
+          end
+        end
+      end
+
+      raise last_error || InstanceFetchError.new("all resolved addresses failed")
+    rescue ArgumentError => e
+      raise_instance_fetch_error(e)
+    end
+
+    # Execute a single HTTP GET request against the supplied URI, optionally
+    # pinning the connection to a specific IP address.
+    #
+    # @param uri [URI::Generic] target endpoint.
+    # @param ip_address [String, nil] resolved IP address to pin the
+    #   connection to, or +nil+ to let {build_remote_http_client} resolve.
+    # @return [String] raw HTTP response body.
+    # @raise [InstanceFetchError] when the request fails.
+    def perform_single_http_request(uri, ip_address: nil)
+      http = build_remote_http_client(uri, ip_address: ip_address)
       Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
         http.start do |connection|
           request = build_federation_http_request(Net::HTTP::Get, uri)
@@ -1094,6 +1262,14 @@ module PotatoMesh
       )
       attributes[:nodes_count] = stats_count if stats_count
 
+      # Extract per-protocol 24h counts (informational, not signed).
+      if stats_payload.is_a?(Hash)
+        mc_day = stats_payload.dig("meshcore", "day")
+        mt_day = stats_payload.dig("meshtastic", "day")
+        attributes[:meshcore_nodes_count] = coerce_integer(mc_day) if mc_day
+        attributes[:meshtastic_nodes_count] = coerce_integer(mt_day) if mt_day
+      end
+
       nodes_since_path = "/api/nodes?since=#{recent_cutoff}&limit=1000"
       nodes_since_window, nodes_since_metadata = fetch_instance_json(attributes[:domain], nodes_since_path)
       if stats_count.nil? && attributes[:nodes_count].nil? && nodes_since_window.is_a?(Array)
@@ -1194,15 +1370,41 @@ module PotatoMesh
       unrestricted_addresses
     end
 
+    # Sort resolved addresses so that IPv4 precedes IPv6.
+    #
+    # Federation peers with dual-stack DNS may publish addresses where one
+    # family is unreachable. Placing IPv4 entries first mirrors the
+    # preference used by {discover_local_ip_address} and improves the
+    # likelihood that the first connection attempt succeeds.
+    #
+    # @param addresses [Array<IPAddr>] resolved IP address list.
+    # @return [Array<IPAddr>] addresses sorted with IPv4 entries before IPv6.
+    def sort_addresses_for_connection(addresses)
+      return addresses if addresses.nil? || addresses.length <= 1
+
+      v4, v6 = addresses.partition { |ip| !ip.ipv6? }
+      v4 + v6
+    end
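
Because `Array#partition` is stable, this ordering keeps the relative order within each family while moving all IPv4 entries ahead of IPv6. A standalone check with stdlib `IPAddr` (the sample addresses are from the documentation ranges):

```ruby
require "ipaddr"

# Same body as the new method: IPv4 first, relative order preserved.
def sort_addresses_for_connection(addresses)
  return addresses if addresses.nil? || addresses.length <= 1
  v4, v6 = addresses.partition { |ip| !ip.ipv6? }
  v4 + v6
end

addrs = [
  IPAddr.new("2001:db8::1"),
  IPAddr.new("192.0.2.1"),
  IPAddr.new("2001:db8::2"),
]
sorted = sort_addresses_for_connection(addrs)
sorted.map(&:to_s) # => ["192.0.2.1", "2001:db8::1", "2001:db8::2"]
```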
|
||||

 # Build an HTTP client configured for communication with a remote instance.
 #
+# When +ip_address+ is supplied the client is pinned to that specific
+# address, bypassing DNS resolution. Callers that iterate over
+# multiple resolved addresses should pass each candidate in turn.
+#
 # @param uri [URI::Generic] target URI describing the remote endpoint.
+# @param ip_address [String, nil] explicit IP address to connect to,
+#   or +nil+ to resolve via DNS and use the first result.
 # @return [Net::HTTP] HTTP client ready to execute the request.
-def build_remote_http_client(uri)
-  remote_addresses = resolve_remote_ip_addresses(uri)
+def build_remote_http_client(uri, ip_address: nil)
   http = Net::HTTP.new(uri.host, uri.port)
-  if http.respond_to?(:ipaddr=) && remote_addresses.any?
-    http.ipaddr = remote_addresses.first.to_s
+  if ip_address
+    http.ipaddr = ip_address if http.respond_to?(:ipaddr=)
+  else
+    remote_addresses = resolve_remote_ip_addresses(uri)
+    if http.respond_to?(:ipaddr=) && remote_addresses.any?
+      http.ipaddr = remote_addresses.first.to_s
+    end
   end
   http.open_timeout = PotatoMesh::Config.remote_instance_http_timeout
   http.read_timeout = PotatoMesh::Config.remote_instance_read_timeout
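The pinning trick here is `Net::HTTP#ipaddr=` (Ruby 2.5+): the TCP connection goes to the pinned address while the `Host` header and TLS SNI continue to use the hostname. A sketch with a hypothetical host and address, constructed without actually connecting:

```ruby
require "net/http"
require "uri"

# Hypothetical endpoint and resolved address for illustration.
uri = URI("https://mesh.example.org/api/stats")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

# Pin the TCP connection to one resolved address; the hostname is
# still used for the Host header and TLS server name indication.
http.ipaddr = "192.0.2.10" if http.respond_to?(:ipaddr=)
```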
@@ -1395,8 +1597,9 @@ module PotatoMesh
   sql = <<~SQL
     INSERT INTO instances (
       id, domain, pubkey, name, version, channel, frequency,
-      latitude, longitude, last_update_time, is_private, nodes_count, contact_link, signature
-    ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+      latitude, longitude, last_update_time, is_private, nodes_count,
+      meshcore_nodes_count, meshtastic_nodes_count, contact_link, signature
+    ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
     ON CONFLICT(id) DO UPDATE SET
       domain=excluded.domain,
       pubkey=excluded.pubkey,
@@ -1409,6 +1612,8 @@ module PotatoMesh
       last_update_time=excluded.last_update_time,
       is_private=excluded.is_private,
       nodes_count=excluded.nodes_count,
+      meshcore_nodes_count=excluded.meshcore_nodes_count,
+      meshtastic_nodes_count=excluded.meshtastic_nodes_count,
       contact_link=excluded.contact_link,
       signature=excluded.signature
   SQL
@@ -1427,6 +1632,8 @@ module PotatoMesh
   attributes[:last_update_time],
   attributes[:is_private] ? 1 : 0,
   nodes_count,
+  coerce_integer(attributes[:meshcore_nodes_count]),
+  coerce_integer(attributes[:meshtastic_nodes_count]),
   attributes[:contact_link],
   signature,
 ]
@@ -66,6 +66,53 @@ module PotatoMesh
   trimmed.start_with?("!") ? trimmed : "!#{trimmed}"
 end

+# Broad emoji regex covering the most common Unicode emoji blocks:
+# Supplementary Multilingual Plane emoji (U+1F000–U+1FFFF), Miscellaneous
+# Symbols and Dingbats (U+2600–U+27BF), and Miscellaneous Symbols and
+# Arrows (U+2B00–U+2BFF).
+#
+# @type [Regexp]
+MESHCORE_COMPANION_EMOJI_PATTERN = /[\u{1F000}-\u{1FFFF}\u{2600}-\u{27BF}\u{2B00}-\u{2BFF}]/u
+
+# Derive a display short name for a MeshCore COMPANION node from its long
+# name. The ingestor stores a raw 2-byte short name; this method produces a
+# richer, human-readable variant for the API layer without touching the DB.
+#
+# Algorithm (applied in priority order):
+# 1. If the long name contains an emoji character (see
+#    +MESHCORE_COMPANION_EMOJI_PATTERN+), use the first emoji embedded in a
+#    4-column display slot: ``" E "`` (one leading space, emoji, one trailing
+#    space). Emoji are rendered double-width in monospace fonts, so one
+#    leading space keeps the badge at four visual columns.
+# 2. If the long name contains two or more whitespace-separated words, use
+#    the capitalised first letters of the first two words: ``" XY "``.
+# 3. Return +nil+ — single-word names fall back to the raw short name stored
+#    in the database (typically the first two bytes of the node ID). A single
+#    initial looked poor and carried no more information than the raw value.
+#
+# @param long_name [String, nil] long name stored on the node.
+# @return [String, nil] derived display short name or +nil+.
+def meshcore_companion_display_short_name(long_name)
+  name = string_or_nil(long_name)
+  return nil unless name
+
+  emoji = name.scan(MESHCORE_COMPANION_EMOJI_PATTERN).first
+  # Wide emoji occupies two display columns, so use one leading space and
+  # one trailing space to stay within the four-column badge width.
+  return " #{emoji} " if emoji
+
+  words = name.strip.split(/\s+/).reject(&:empty?)
+  return nil if words.empty?
+
+  if words.length >= 2
+    first = words[0][0]&.upcase
+    second = words[1][0]&.upcase
+    return " #{first}#{second} " if first && second
+  end
+
+  nil
+end
+
 # Recursively coerce hash keys to strings and normalise nested arrays.
 #
 # @param value [Object] JSON compatible value.
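The three badge rules documented above can be exercised with a self-contained reimplementation (the real method additionally routes through the app's `string_or_nil` helper):

```ruby
EMOJI = /[\u{1F000}-\u{1FFFF}\u{2600}-\u{27BF}\u{2B00}-\u{2BFF}]/

# Mirrors the documented priority order: emoji badge, then two-word
# initials, then nil so the caller falls back to the stored short name.
def display_short_name(long_name)
  name = long_name&.strip
  return nil if name.nil? || name.empty?

  emoji = name.scan(EMOJI).first
  return " #{emoji} " if emoji

  words = name.split(/\s+/)
  return nil if words.length < 2

  " #{words[0][0].upcase}#{words[1][0].upcase} "
end
```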
@@ -144,6 +144,8 @@ module PotatoMesh
   "lastUpdateTime" => last_update_time,
   "isPrivate" => private_flag,
   "nodesCount" => coerce_integer(row["nodes_count"]),
+  "meshcoreNodesCount" => coerce_integer(row["meshcore_nodes_count"]),
+  "meshtasticNodesCount" => coerce_integer(row["meshtastic_nodes_count"]),
   "contactLink" => string_or_nil(row["contact_link"]),
   "signature" => signature,
 }
@@ -175,7 +177,8 @@ module PotatoMesh
   min_last_update_time = now - PotatoMesh::Config.week_seconds
   sql = <<~SQL
     SELECT id, domain, pubkey, name, version, channel, frequency,
-      latitude, longitude, last_update_time, is_private, nodes_count, contact_link, signature
+      latitude, longitude, last_update_time, is_private, nodes_count,
+      meshcore_nodes_count, meshtastic_nodes_count, contact_link, signature
     FROM instances
     WHERE domain IS NOT NULL AND TRIM(domain) != ''
       AND pubkey IS NOT NULL AND TRIM(pubkey) != ''
@@ -0,0 +1,226 @@ (new file)
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "kramdown"
require "kramdown-parser-gfm"
require "sanitize"

module PotatoMesh
  module App
    # Discovers, parses, and renders operator-managed Markdown pages from the
    # configured pages directory. Files are named with an optional numeric
    # prefix for ordering (e.g. +1-about.md+, +9-contact.md+) and exposed as
    # navigable routes under +/pages/:slug+.
    module Pages
      module_function

      # Lightweight value object describing a single static page discovered on
      # disk. Fields are populated by {parse_page_filename} and consumed by
      # route handlers and layout templates.
      #
      # @!attribute [r] sort_key
      #   @return [String] filename stem used for alphabetical ordering.
      # @!attribute [r] slug
      #   @return [String] URL-safe identifier derived from the filename.
      # @!attribute [r] title
      #   @return [String] human-readable nav label.
      # @!attribute [r] path
      #   @return [String] absolute filesystem path to the Markdown source.
      PageEntry = Struct.new(:sort_key, :slug, :title, :path, keyword_init: true)

      # Pattern matching a safe slug segment: lowercase alphanumeric words
      # separated by single hyphens. Used to validate both parsed slugs and
      # incoming route parameters.
      SLUG_PATTERN = /\A[a-z0-9]+(-[a-z0-9]+)*\z/

      # Pattern used to split a page filename into an optional numeric sort
      # prefix and the slug portion.
      FILENAME_PATTERN = /\A(\d+)-(.+)\z/

      # Maximum number of pages loaded from disk. Prevents accidental
      # directory-bomb scenarios from consuming unbounded memory.
      MAX_PAGES = 50

      # Kramdown options shared across all page renders.
      KRAMDOWN_OPTIONS = {
        input: "GFM",
        hard_wrap: false,
      }.freeze

      # HTML tags allowed in rendered markdown output. Tags not in this list
      # are stripped after rendering to prevent XSS from operator content.
      ALLOWED_TAGS = Set.new(%w[
        h1 h2 h3 h4 h5 h6 p a em strong b i u s del code pre br hr
        ul ol li dl dt dd blockquote table thead tbody tfoot tr th td
        img span div sup sub abbr mark small details summary
      ]).freeze

      @pages_cache = nil
      @pages_cache_mutex = Mutex.new

      # Parse a Markdown filename into a {PageEntry} without the filesystem
      # path populated.
      #
      # Filenames are expected to follow the pattern +<digits>-<slug>.md+ where
      # the numeric prefix controls navigation order. Files without a prefix
      # are accepted, using the full stem as both sort key and slug.
      #
      # @param basename [String] bare filename (e.g. +"9-contact.md"+).
      # @return [PageEntry, nil] parsed entry or +nil+ when the filename is
      #   invalid or contains an unsafe slug.
      def parse_page_filename(basename)
        stem = basename.sub(/\.md\z/i, "")
        return nil if stem.empty?

        match = stem.match(FILENAME_PATTERN)
        if match
          slug = match[2].downcase
          sort_key = stem
        else
          slug = stem.downcase
          sort_key = stem
        end

        return nil unless slug.match?(SLUG_PATTERN)

        title = slug.split("-").map(&:capitalize).join(" ")
        PageEntry.new(sort_key: sort_key, slug: slug, title: title, path: nil)
      end

      # Scan the pages directory and return a sorted list of page entries.
      #
      # The directory is read once per call; results are not cached here (see
      # {static_pages} for the cached interface). Non-+.md+ files and entries
      # with invalid filenames are silently skipped.
      #
      # @param directory [String] absolute path to the pages directory.
      # @return [Array<PageEntry>] frozen, sort-key-ordered list of pages.
      def load_static_pages(directory = PotatoMesh::Config.pages_directory)
        return [].freeze unless directory && File.directory?(directory)

        entries = Dir.glob(File.join(directory, "*.md")).filter_map do |path|
          basename = File.basename(path)
          entry = parse_page_filename(basename)
          next unless entry

          PageEntry.new(
            sort_key: entry.sort_key,
            slug: entry.slug,
            title: entry.title,
            path: path,
          )
        end

        entries.sort_by!(&:sort_key)
        entries.uniq!(&:slug)
        entries.take(MAX_PAGES).freeze
      end

      # Return the current set of static pages, reloading from disk when the
      # cache has expired.
      #
      # The TTL is short in non-production environments (1 second) so that
      # newly added files appear almost immediately during development.
      #
      # @return [Array<PageEntry>] cached page entries.
      def static_pages
        @pages_cache_mutex.synchronize do
          if @pages_cache.nil? || Time.now > @pages_cache[:expires_at]
            ttl = production_environment? ? 60 : 1
            @pages_cache = {
              entries: load_static_pages,
              expires_at: Time.now + ttl,
            }
          end
          @pages_cache[:entries]
        end
      end

      # Look up a page entry by its URL slug.
      #
      # @param slug [String] URL slug to search for.
      # @return [PageEntry, nil] matching entry or +nil+.
      def find_page_by_slug(slug)
        static_pages.find { |entry| entry.slug == slug }
      end

      # Read and render a page's Markdown source to HTML.
      #
      # Files exceeding {Config.max_page_file_bytes} are rejected to guard
      # against accidental out-of-memory conditions. Raw HTML blocks are
      # disabled at the parser level to prevent XSS.
      #
      # @param page_entry [PageEntry] entry whose +path+ points to the source.
      # @return [String, nil] sanitised HTML string, or +nil+ when the file
      #   cannot be read.
      def render_page_content(page_entry)
        return nil unless page_entry&.path
        return nil unless File.file?(page_entry.path) && File.readable?(page_entry.path)

        size = File.size(page_entry.path)
        return nil if size > PotatoMesh::Config.max_page_file_bytes

        content = File.read(page_entry.path, encoding: "utf-8")
        raw_html = Kramdown::Document.new(content, **KRAMDOWN_OPTIONS).to_html
        strip_unsafe_html(raw_html)
      rescue SystemCallError
        nil
      end

      # Remove HTML tags not present in {ALLOWED_TAGS} and strip dangerous
      # attributes (event handlers, javascript: URIs) from the rendered output.
      # This provides a safety net against XSS when operators include raw HTML
      # in their Markdown source.
      #
      # @param html [String] raw HTML produced by kramdown.
      # @return [String] HTML with disallowed tags and attributes stripped.
      def strip_unsafe_html(html)
        # Delegate to the sanitize gem for robust HTML and attribute
        # sanitization instead of relying on ad-hoc regular expressions.
        Sanitize.fragment(
          html,
          elements: ALLOWED_TAGS,
          attributes: {
            :all => %w[id class title alt],
            "a" => %w[href],
            "img" => %w[src width height loading decoding],
          },
          protocols: {
            "a" => { "href" => ["http", "https", "mailto"] },
            "img" => { "src" => ["http", "https"] },
          },
        )
      end

      # Invalidate the in-memory page cache so the next call to
      # {static_pages} re-scans the directory. Intended for test teardown.
      #
      # @return [void]
      def clear_pages_cache!
        @pages_cache_mutex.synchronize { @pages_cache = nil }
      end

      # Determine whether the application is running in a production-like
      # environment.
      #
      # @return [Boolean] true when +RACK_ENV+ or +APP_ENV+ is +"production"+.
      def production_environment?
        %w[production].include?(ENV.fetch("RACK_ENV", nil)) ||
          %w[production].include?(ENV.fetch("APP_ENV", nil))
      end
    end
  end
end
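The filename convention can be exercised with just the two regexes the module defines; a standalone reimplementation of the parse rules, returning `[sort_key, slug, title]` or `nil`:

```ruby
SLUG_PATTERN = /\A[a-z0-9]+(-[a-z0-9]+)*\z/
FILENAME_PATTERN = /\A(\d+)-(.+)\z/

# Mirrors parse_page_filename: strip the .md suffix, peel off an
# optional numeric prefix, validate the slug, derive a nav title.
def parse_page(basename)
  stem = basename.sub(/\.md\z/i, "")
  return nil if stem.empty?

  match = stem.match(FILENAME_PATTERN)
  slug = (match ? match[2] : stem).downcase
  return nil unless slug.match?(SLUG_PATTERN)

  title = slug.split("-").map(&:capitalize).join(" ")
  [stem, slug, title]
end
```

Note how the full stem (prefix included) stays the sort key, so `1-about.md` orders before `9-contact.md` while both expose clean slugs.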
@@ -64,6 +64,12 @@ module PotatoMesh
   SQL
   params << limit
   rows = db.execute(sql, params)

+  # Batch-resolve all unique from_id values to canonical node_ids in a
+  # single query instead of issuing 1-2 SELECTs per message row.
+  raw_from_ids = rows.filter_map { |r| string_or_nil(r["from_id"]&.to_s&.strip) }.uniq
+  canonical_map = batch_resolve_node_ids(db, raw_from_ids)
+
   rows.each do |r|
     r.delete_if { |key, _| key.is_a?(Integer) }
     r["reply_id"] = coerce_integer(r["reply_id"]) if r.key?("reply_id")
@@ -81,7 +87,7 @@ module PotatoMesh
     )
   end

-  canonical_from_id = string_or_nil(normalize_node_id(db, r["from_id"]))
+  canonical_from_id = canonical_map[r["from_id"]&.to_s&.strip]
   node_id = canonical_from_id || string_or_nil(r["from_id"])

   if canonical_from_id
@@ -133,6 +133,57 @@ module PotatoMesh
   coerced
 end

+# Resolve a collection of raw node reference strings to their canonical
+# +node_id+ values in a single batch query. This avoids the N+1 pattern
+# of calling +normalize_node_id+ once per row.
+#
+# @param db [SQLite3::Database] open database handle.
+# @param refs [Array<String>] raw node identifiers (hex strings or numeric
+#   strings) to resolve.
+# @return [Hash{String => String}] mapping from each input reference to its
+#   canonical +node_id+, omitting entries that could not be resolved.
+def batch_resolve_node_ids(db, refs)
+  return {} if refs.nil? || refs.empty?
+
+  result = {}
+  string_refs = []
+  numeric_refs = []
+
+  refs.each do |ref|
+    next if ref.nil? || ref.strip.empty?
+    string_refs << ref.strip
+    begin
+      numeric_refs << Integer(ref.strip, 10)
+    rescue ArgumentError
+      # not a numeric reference — skip the numeric branch
+    end
+  end
+
+  # Batch lookup by node_id (string match)
+  unless string_refs.empty?
+    placeholders = Array.new(string_refs.length, "?").join(", ")
+    rows = db.execute("SELECT node_id FROM nodes WHERE node_id IN (#{placeholders})", string_refs)
+    rows.each do |row|
+      nid = row.is_a?(Hash) ? row["node_id"] : row[0]
+      result[nid] = nid if nid
+    end
+  end
+
+  # Batch lookup by num (numeric match) for refs not yet resolved
+  unresolved_numeric = numeric_refs.select { |n| !result.key?(n.to_s) }
+  unless unresolved_numeric.empty?
+    placeholders = Array.new(unresolved_numeric.length, "?").join(", ")
+    rows = db.execute("SELECT node_id, num FROM nodes WHERE num IN (#{placeholders})", unresolved_numeric)
+    rows.each do |row|
+      nid = row.is_a?(Hash) ? row["node_id"] : row[0]
+      num = row.is_a?(Hash) ? row["num"] : row[1]
+      result[num.to_s] = nid if nid && num
+    end
+  end
+
+  result
+end
+
 # Normalise a caller-supplied timestamp for API pagination windows.
 #
 # @param since [Object] requested lower bound expressed as seconds since the epoch.
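The core of the batch lookup is splitting references into the two families and building one parameterised `IN` clause per family; a minimal sketch of that construction with hypothetical references:

```ruby
# Hypothetical raw references: two hex node IDs and one decimal num.
refs = ["!a1b2c3d4", "305419896", "!deadbeef"]

# Every ref is a string candidate; decimal-looking refs also join
# the numeric branch (Integer() raises on non-decimal input).
string_refs  = refs.map(&:strip)
numeric_refs = refs.filter_map { |r| Integer(r, 10) rescue nil }

# One "?" placeholder per bound value keeps the query parameterised.
placeholders = Array.new(string_refs.length, "?").join(", ")
sql = "SELECT node_id FROM nodes WHERE node_id IN (#{placeholders})"
```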
@@ -37,7 +37,7 @@ module PotatoMesh
   params << since_threshold

   if node_ref
-    clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
+    clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"], db: db)
     return [] unless clause
     where_clauses << clause.first
     params.concat(clause.last)
@@ -70,11 +70,42 @@ module PotatoMesh
   }
 end

-def node_lookup_clause(node_ref, string_columns:, numeric_columns: [])
+# Build a WHERE clause fragment for looking up a node across one or more
+# columns. When +numeric_columns+ are provided together with an open +db+
+# handle, the numeric identifiers are resolved to canonical +node_id+
+# strings up-front so the resulting SQL uses only string-column +IN+
+# predicates. This avoids an +OR+ across heterogeneous columns, which
+# would prevent SQLite from choosing the optimal index.
+#
+# @param node_ref [String, Integer, nil] raw node reference from the request.
+# @param string_columns [Array<String>] SQL column names holding string identifiers.
+# @param numeric_columns [Array<String>] SQL column names holding numeric identifiers.
+# @param db [SQLite3::Database, nil] open database handle used to resolve
+#   numeric IDs to canonical strings. When provided and +numeric_columns+
+#   is non-empty the numeric branch is folded into the string branch.
+# @return [Array(String, Array), nil] SQL fragment and bind parameters, or
+#   +nil+ when no lookup can be constructed.
+def node_lookup_clause(node_ref, string_columns:, numeric_columns: [], db: nil)
   tokens = node_reference_tokens(node_ref)
   string_values = tokens[:string_values]
   numeric_values = tokens[:numeric_values]

+  # When a database handle is available, resolve numeric identifiers to
+  # canonical node_id strings so the query can use a single indexed column
+  # instead of an OR across string and numeric columns.
+  if db && !numeric_columns.empty? && !numeric_values.empty?
+    numeric_values.each do |num|
+      resolved = db.get_first_value("SELECT node_id FROM nodes WHERE num = ? LIMIT 1", [num])
+      if resolved
+        string_values << resolved unless string_values.include?(resolved)
+      end
+    end
+    # All numeric values have been folded into string_values; drop the
+    # numeric branch so the generated SQL avoids an OR.
+    numeric_columns = []
+    numeric_values = []
+  end
+
   clauses = []
   params = []
@@ -117,7 +148,7 @@ module PotatoMesh
   where_clauses = []

   if node_ref
-    clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["num"])
+    clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["num"], db: db)
     return [] unless clause
     where_clauses << clause.first
     params.concat(clause.last)
@@ -157,6 +188,19 @@ module PotatoMesh
   end
   rows.each do |r|
     r["role"] ||= "CLIENT"
+    if r["role"] == "COMPANION"
+      derived = meshcore_companion_display_short_name(r["long_name"])
+      if derived
+        r["short_name"] = derived
+      elsif r["short_name"].nil? || r["short_name"].strip.empty?
+        # No derived name and no stored public-key hex — synthesise from
+        # the node ID (first four hex chars after the leading "!") so the
+        # badge is stable, unique, and consistent with how the ingestor
+        # builds short names from public keys.
+        node_id = r["node_id"].to_s.delete_prefix("!")
+        r["short_name"] = node_id[0, 4] unless node_id.empty?
+      end
+    end
     lh = r["last_heard"]&.to_i
     pt = r["position_time"]&.to_i
     lh = now if lh && lh > now
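The fallback badge is simply the first four hex characters of the bang-prefixed node ID; a minimal sketch with hypothetical IDs:

```ruby
# Hypothetical node IDs as stored by the ingestor (bang-prefixed hex).
node_ids = ["!a1b2c3d4", "deadbeef", "!"]

badges = node_ids.map do |raw|
  # delete_prefix is a no-op when the "!" is absent.
  hex = raw.delete_prefix("!")
  hex.empty? ? nil : hex[0, 4]
end
```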
@@ -225,7 +269,8 @@ module PotatoMesh
 #
 # @param now [Integer] reference unix timestamp in seconds.
 # @param db [SQLite3::Database, nil] optional open database handle to reuse.
-# @return [Hash{String => Integer}] counts keyed by hour/day/week/month.
+# @return [Hash{String => Object}] counts keyed by hour/day/week/month plus
+#   per-protocol breakdowns under "meshcore" and "meshtastic" sub-hashes.
 def query_active_node_stats(now: Time.now.to_i, db: nil)
   handle = db || open_database(readonly: true)
   handle.results_as_hash = true
@@ -234,22 +279,48 @@ module PotatoMesh
   day_cutoff = reference_now - 86_400
   week_cutoff = reference_now - PotatoMesh::Config.week_seconds
   month_cutoff = reference_now - (30 * 24 * 60 * 60)
-  private_filter = private_mode? ? " AND (role IS NULL OR role <> 'CLIENT_HIDDEN')" : ""
+  pf = private_mode? ? " AND (role IS NULL OR role <> 'CLIENT_HIDDEN')" : ""
+  proto = " AND protocol = ?"
   sql = <<~SQL
     SELECT
-      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS hour_count,
-      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS day_count,
-      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS week_count,
-      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS month_count
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}) AS hour_count,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}) AS day_count,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}) AS week_count,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}) AS month_count,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mc_hour,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mc_day,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mc_week,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mc_month,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mt_hour,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mt_day,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mt_week,
+      (SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mt_month
   SQL
+  cutoffs = [hour_cutoff, day_cutoff, week_cutoff, month_cutoff]
+  # Total counts bind only cutoffs; per-protocol counts bind cutoff + protocol string.
+  params = cutoffs +
+    cutoffs.flat_map { |c| [c, "meshcore"] } +
+    cutoffs.flat_map { |c| [c, "meshtastic"] }
   row = with_busy_retry do
-    handle.get_first_row(sql, [hour_cutoff, day_cutoff, week_cutoff, month_cutoff])
+    handle.get_first_row(sql, params)
   end || {}
   {
     "hour" => row["hour_count"].to_i,
     "day" => row["day_count"].to_i,
     "week" => row["week_count"].to_i,
     "month" => row["month_count"].to_i,
+    "meshcore" => {
+      "hour" => row["mc_hour"].to_i,
+      "day" => row["mc_day"].to_i,
+      "week" => row["mc_week"].to_i,
+      "month" => row["mc_month"].to_i,
+    },
+    "meshtastic" => {
+      "hour" => row["mt_hour"].to_i,
+      "day" => row["mt_day"].to_i,
+      "week" => row["mt_week"].to_i,
+      "month" => row["mt_month"].to_i,
+    },
   }
 ensure
   handle&.close unless db
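Bind-parameter ordering matters here: the twelve subqueries consume placeholders left to right, so the four totals' cutoffs come first, followed by cutoff/protocol pairs for each protocol. A standalone sketch of that construction with hypothetical cutoff values:

```ruby
# Hypothetical cutoff timestamps, newest window first.
hour_cutoff, day_cutoff, week_cutoff, month_cutoff = 1000, 2000, 3000, 4000
cutoffs = [hour_cutoff, day_cutoff, week_cutoff, month_cutoff]

# Totals bind one cutoff each; per-protocol counts bind cutoff + protocol,
# matching the left-to-right placeholder order in the SELECT above.
params = cutoffs +
  cutoffs.flat_map { |c| [c, "meshcore"] } +
  cutoffs.flat_map { |c| [c, "meshtastic"] }
```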
@@ -37,7 +37,7 @@ module PotatoMesh
   params << since_threshold

   if node_ref
-    clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
+    clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"], db: db)
     return [] unless clause
     where_clauses << clause.first
     params.concat(clause.last)
@@ -18,12 +18,34 @@ module PotatoMesh
 module App
   module Routes
     module Api
+      # Accepted protocol filter values. Unknown values are discarded to
+      # prevent attacker-controlled strings from polluting the cache keyspace.
+      KNOWN_PROTOCOLS = Set.new(%w[meshcore meshtastic]).freeze
+
       # Register read-only API endpoints that expose cached mesh data and
       # instance metadata. Invoked by Sinatra during extension registration.
       #
       # @param app [Sinatra::Base] application instance receiving the routes.
       # @return [void]
       def self.registered(app)
+        known_protocols = KNOWN_PROTOCOLS
+
+        app.helpers do
+          # Sanitise the protocol query parameter to a known value.
+          define_method(:sanitize_protocol) do |raw|
+            val = raw&.to_s&.strip&.downcase
+            known_protocols.include?(val) ? val : nil
+          end
+
+          # Set Cache-Control headers appropriate for the current mode.
+          # Private-mode instances must not allow intermediary caches to
+          # store responses that may contain filtered data.
+          define_method(:api_cache_control) do |max_age: 10|
+            visibility = private_mode? ? :private : :public
+            cache_control visibility, :must_revalidate, max_age: max_age
+          end
+        end
+
         app.before "/api/messages*" do
           halt 404 if private_mode?
         end
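The allowlist check is what keeps unknown strings out of cache keys; a standalone sketch of the helper's logic:

```ruby
require "set"

KNOWN_PROTOCOLS = Set.new(%w[meshcore meshtastic]).freeze

# Normalise then allowlist: anything unknown collapses to nil, so
# the cache keyspace stays bounded regardless of attacker input.
def sanitize_protocol(raw)
  val = raw&.to_s&.strip&.downcase
  KNOWN_PROTOCOLS.include?(val) ? val : nil
end
```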
@@ -63,92 +85,213 @@ module PotatoMesh
|
||||
|
||||
app.get "/api/nodes" do
|
||||
content_type :json
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_nodes(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
limit = coerce_query_limit(params["limit"])
|
||||
since = params["since"]
|
||||
protocol = sanitize_protocol(params["protocol"])
|
||||
since_val = coerce_integer(since) || 0
|
||||
priv = private_mode? ? 1 : 0
|
||||
|
||||
if since_val > 0
|
||||
json_body = query_nodes(limit, since: since, protocol: protocol).to_json
|
||||
etag Digest::MD5.hexdigest(json_body), kind: :weak
|
||||
api_cache_control
|
||||
json_body
|
||||
else
|
||||
cached = PotatoMesh::App::ApiCache.fetch("api:nodes:#{limit}:#{protocol}:#{priv}", ttl_seconds: 15) do
|
||||
query_nodes(limit, since: since, protocol: protocol).to_json
|
||||
end
|
||||
etag cached[:etag], kind: :weak
|
||||
api_cache_control
|
||||
cached[:value]
|
||||
end
|
||||
end
|
||||
|
||||
app.get "/api/stats" do
|
||||
content_type :json
|
||||
{
|
||||
active_nodes: query_active_node_stats,
|
||||
sampled: false,
|
||||
}.to_json
|
||||
priv = private_mode? ? 1 : 0
|
||||
cached = PotatoMesh::App::ApiCache.fetch("api:stats:#{priv}", ttl_seconds: 15) do
|
||||
stats = query_active_node_stats
|
||||
{
|
||||
active_nodes: {
|
||||
"hour" => stats["hour"], "day" => stats["day"],
|
||||
"week" => stats["week"], "month" => stats["month"],
|
||||
},
|
||||
meshcore: stats["meshcore"],
|
||||
meshtastic: stats["meshtastic"],
|
||||
sampled: false,
|
||||
}.to_json
|
||||
end
|
||||
|
||||
etag cached[:etag], kind: :weak
|
||||
api_cache_control
|
||||
cached[:value]
|
||||
end
|
||||
|
||||
app.get "/api/nodes/:id" do
|
||||
content_type :json
|
||||
node_ref = string_or_nil(params["id"])
|
||||
halt 400, { error: "missing node id" }.to_json unless node_ref
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
limit = coerce_query_limit(params["limit"])
|
||||
rows = query_nodes(limit, node_ref: node_ref, since: params["since"])
|
||||
halt 404, { error: "not found" }.to_json if rows.empty?
|
||||
rows.first.to_json
|
||||
json_body = rows.first.to_json
|
||||
etag Digest::MD5.hexdigest(json_body), kind: :weak
|
||||
api_cache_control
|
||||
json_body
|
||||
end
|
||||
|
||||
app.get "/api/ingestors" do
|
||||
content_type :json
|
||||
limit = coerce_query_limit(params["limit"])
|
||||
query_ingestors(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
protocol = sanitize_protocol(params["protocol"])
|
||||
since = params["since"]
|
||||
since_val = coerce_integer(since) || 0
|
||||
|
||||
if since_val > 0
|
||||
json_body = query_ingestors(limit, since: since, protocol: protocol).to_json
|
||||
etag Digest::MD5.hexdigest(json_body), kind: :weak
|
||||
api_cache_control
|
||||
json_body
|
||||
else
|
||||
cached = PotatoMesh::App::ApiCache.fetch("api:ingestors:#{limit}:#{protocol}", ttl_seconds: 30) do
query_ingestors(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/messages" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
limit = coerce_query_limit(params["limit"])
include_encrypted = coerce_boolean(params["encrypted"]) || false
since = coerce_integer(params["since"])
since = 0 if since.nil? || since.negative?
query_messages(limit, include_encrypted: include_encrypted, since: since, protocol: string_or_nil(params["protocol"])).to_json
protocol = sanitize_protocol(params["protocol"])
enc_key = include_encrypted ? "1" : "0"

if since > 0
json_body = query_messages(limit, include_encrypted: include_encrypted, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:messages:#{limit}:#{enc_key}:#{protocol}", ttl_seconds: 10) do
query_messages(limit, include_encrypted: include_encrypted, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/messages/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
limit = coerce_query_limit(params["limit"])
include_encrypted = coerce_boolean(params["encrypted"]) || false
since = coerce_integer(params["since"])
since = 0 if since.nil? || since.negative?
query_messages(
json_body = query_messages(
limit,
node_ref: node_ref,
include_encrypted: include_encrypted,
since: since,
protocol: string_or_nil(params["protocol"]),
protocol: sanitize_protocol(params["protocol"]),
).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/positions" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_positions(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
since = params["since"]
protocol = sanitize_protocol(params["protocol"])
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_positions(limit, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:positions:#{limit}:#{protocol}", ttl_seconds: 15) do
query_positions(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/positions/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_positions(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
json_body = query_positions(limit, node_ref: node_ref, since: params["since"], protocol: sanitize_protocol(params["protocol"])).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/neighbors" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_neighbors(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
since = params["since"]
protocol = sanitize_protocol(params["protocol"])
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_neighbors(limit, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:neighbors:#{limit}:#{protocol}", ttl_seconds: 30) do
query_neighbors(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/neighbors/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_neighbors(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
json_body = query_neighbors(limit, node_ref: node_ref, since: params["since"], protocol: sanitize_protocol(params["protocol"])).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/telemetry" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_telemetry(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
since = params["since"]
protocol = sanitize_protocol(params["protocol"])
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_telemetry(limit, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:telemetry:#{limit}:#{protocol}", ttl_seconds: 15) do
query_telemetry(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/telemetry/aggregated" do
@@ -179,33 +322,67 @@ module PotatoMesh
halt 400, { error: "bucketSeconds too small for requested window" }.to_json
end

query_telemetry_buckets(
window_seconds: window_seconds,
bucket_seconds: bucket_seconds,
since: params["since"],
).to_json
since = params["since"]
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_telemetry_buckets(window_seconds: window_seconds, bucket_seconds: bucket_seconds, since: since).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control(max_age: 30)
json_body
else
cache_key = "api:telemetry_agg:#{window_seconds}:#{bucket_seconds}"
cached = PotatoMesh::App::ApiCache.fetch(cache_key, ttl_seconds: 60) do
query_telemetry_buckets(window_seconds: window_seconds, bucket_seconds: bucket_seconds, since: since).to_json
end
etag cached[:etag], kind: :weak
api_cache_control(max_age: 30)
cached[:value]
end
end

app.get "/api/telemetry/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_telemetry(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
json_body = query_telemetry(limit, node_ref: node_ref, since: params["since"], protocol: sanitize_protocol(params["protocol"])).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/traces" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_traces(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
since = params["since"]
protocol = sanitize_protocol(params["protocol"])
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_traces(limit, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:traces:#{limit}:#{protocol}", ttl_seconds: 30) do
query_traces(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/traces/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_traces(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
json_body = query_traces(limit, node_ref: node_ref, since: params["since"], protocol: sanitize_protocol(params["protocol"])).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/instances" do

@@ -45,6 +45,7 @@ module PotatoMesh
upsert_node(db, node_id, node, protocol: protocol)
end
PotatoMesh::App::Prometheus::NODES_GAUGE.set(query_nodes(1000).length)
PotatoMesh::App::ApiCache.invalidate_prefix("api:nodes:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -65,6 +66,7 @@ module PotatoMesh
messages.each do |msg|
insert_message(db, msg, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:messages:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -84,6 +86,7 @@ module PotatoMesh
db = open_database
stored = upsert_ingestor(db, payload)
halt 400, { error: "invalid payload" }.to_json unless stored
PotatoMesh::App::ApiCache.invalidate_prefix("api:ingestors:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -314,6 +317,7 @@ module PotatoMesh
positions.each do |pos|
insert_position(db, pos, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:positions:", "api:nodes:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -334,6 +338,7 @@ module PotatoMesh
neighbor_payloads.each do |packet|
insert_neighbors(db, packet, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:neighbors:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -354,6 +359,7 @@ module PotatoMesh
telemetry_packets.each do |packet|
insert_telemetry(db, packet, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:telemetry:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -374,6 +380,7 @@ module PotatoMesh
trace_packets.each do |packet|
insert_trace(db, packet, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:traces:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close

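The ingestion handlers above all follow the same read-through cache pattern: each write invalidates matching `api:` key prefixes so the next GET repopulates a TTL cache. A minimal sketch of such a cache follows; it is an illustration only, and the real `PotatoMesh::App::ApiCache` in the repository may be implemented differently.

```ruby
require "digest"

# Minimal TTL cache with prefix invalidation, mirroring the fetch /
# invalidate_prefix calls in the routes above (illustrative only).
class TinyApiCache
  Entry = Struct.new(:value, :etag, :expires_at)

  def initialize
    @store = {}
    @mutex = Mutex.new
  end

  # Return {value:, etag:} for +key+, computing it at most once per TTL.
  def fetch(key, ttl_seconds: 30)
    @mutex.synchronize do
      entry = @store[key]
      if entry.nil? || Time.now > entry.expires_at
        value = yield
        entry = Entry.new(value, Digest::MD5.hexdigest(value), Time.now + ttl_seconds)
        @store[key] = entry
      end
      { value: entry.value, etag: entry.etag }
    end
  end

  # Drop every cached entry whose key starts with any of +prefixes+,
  # e.g. invalidate_prefix("api:messages:", "api:stats:") after an insert.
  def invalidate_prefix(*prefixes)
    @mutex.synchronize do
      @store.delete_if { |key, _| prefixes.any? { |p| key.start_with?(p) } }
    end
  end
end
```

With this shape, a write handler can call `invalidate_prefix("api:messages:")` after inserting rows, and the next `fetch("api:messages:...")` recomputes the JSON body and a fresh weak ETag.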
@@ -19,23 +19,12 @@ module PotatoMesh
module Routes
module Root
module Helpers
# Determine the initial theme from the request cookie and persist
# sanitised values back to the client to avoid invalid states.
# Return the fixed dark theme identifier. Light mode is no longer
# supported; theme selection and cookie persistence have been removed.
#
# @return [String] normalised theme value ('dark' or 'light').
# @return [String] always 'dark'.
def resolve_initial_theme
raw_theme = request.cookies["theme"]
theme = %w[dark light].include?(raw_theme) ? raw_theme : "dark"
if raw_theme != theme
response.set_cookie(
"theme",
value: theme,
path: "/",
max_age: 60 * 60 * 24 * 7,
same_site: :lax,
)
end
theme
"dark"
end

# Render a dashboard-oriented ERB template within the shared layout.
@@ -70,6 +59,7 @@ module PotatoMesh
initial_theme: theme,
current_view_mode: view_mode_sym,
map_zoom: PotatoMesh::Config.map_zoom,
static_pages: PotatoMesh::App::Pages.static_pages,
}
sanitized_locals = extra_locals.is_a?(Hash) ? extra_locals : {}
merged_locals = base_locals.merge(sanitized_locals)
@@ -191,6 +181,26 @@ module PotatoMesh
render_root_view(:federation, view_mode: :federation)
end

app.get "/pages/:slug" do
slug = params.fetch("slug", "")
halt 400, "Bad Request" unless slug.match?(PotatoMesh::App::Pages::SLUG_PATTERN)

page = PotatoMesh::App::Pages.find_page_by_slug(slug)
halt 404, "Not Found" unless page

page_html = PotatoMesh::App::Pages.render_page_content(page)
halt 500, "Internal Server Error" unless page_html

render_root_view(
:page,
view_mode: :"page_#{slug}",
extra_locals: {
page_title: page.title,
page_content_html: page_html,
},
)
end

app.get "/nodes/:id" do
node_ref = params.fetch("id", nil)
reference_payload = build_node_detail_reference(node_ref)

@@ -84,6 +84,26 @@ module PotatoMesh
value.to_s.strip != "0"
end

# Resolve the absolute path to the operator-managed static pages directory.
#
# The directory defaults to +pages/+ at the application root and can be
# overridden with the +PAGES_DIR+ environment variable.
#
# @return [String] absolute filesystem path to the pages directory.
def pages_directory
custom = fetch_string("PAGES_DIR", nil)
return File.expand_path(custom) if custom

File.join(web_root, "pages")
end

# Maximum file size in bytes accepted when reading a static page.
#
# @return [Integer] byte ceiling for markdown files.
def max_page_file_bytes
512 * 1024
end

# Resolve the absolute path to the web application root directory.
#
# @return [String] absolute filesystem path of the web folder.
@@ -187,7 +207,7 @@ module PotatoMesh
#
# @return [String] semantic version identifier.
def version_fallback
"0.5.12"
"0.6.1"
end

# Default refresh interval for frontend polling routines.

Generated
+2
-2
@@ -1,12 +1,12 @@
{
"name": "potato-mesh",
"version": "0.5.12",
"version": "0.6.1",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "potato-mesh",
"version": "0.5.12",
"version": "0.6.1",
"devDependencies": {
"istanbul-lib-coverage": "^3.2.2",
"istanbul-lib-report": "^3.0.1",

+1
-1
@@ -1,6 +1,6 @@
{
"name": "potato-mesh",
"version": "0.5.12",
"version": "0.6.1",
"type": "module",
"private": true,
"scripts": {

@@ -0,0 +1,73 @@
# About This Mesh

Welcome to this [PotatoMesh](https://github.com/l5yth/potato-mesh) instance - a community dashboard for off-grid mesh networks. This is an example page, please modify it before deploying.

## What Is Meshtastic?

[Meshtastic](https://meshtastic.org) is an open-source project that turns
affordable LoRa radios into a decentralised, long-range communication network.
No cellular service or internet connection is required - nodes relay messages
across the mesh automatically.

## What Is Meshcore?

[Meshcore](https://meshcore.co.uk) is a firmware for LoRa radios focused on
reliable, low-power mesh networking. It provides a public channel system and
supports narrow-band presets optimised for long range in dense environments.

## Network Details

| Setting   | Meshtastic    | Meshcore     |
| --------- | ------------- | ------------ |
| Channel   | #MediumFast   | Public       |
| Frequency | 869.525 MHz   | 869.618 MHz  |
| Bandwidth | 250 kHz       | 62.5 kHz     |
| SF        | 8             | 8            |
| CR        | 4/5           | 4/8          |
| Preset    | Medium / Fast | EU/UK Narrow |

> Adjust this table to match the configuration of your local mesh.

## Contact

- **Public chat:** [#potatomesh:dod.ngo](https://matrix.to/#/#potatomesh:dod.ngo)
- **Source code:** [github.com/l5yth/potato-mesh](https://github.com/l5yth/potato-mesh)

## Custom Pages

Instance operators can add, edit, or remove pages by placing Markdown files in
the `pages/` directory (mounted as a Docker volume at `/app/pages`). Each file
becomes a new entry in the navigation bar.

### Filename Convention

```
<sort-prefix>-<slug>.md
```

- **Sort prefix** - a number that controls the order in the nav bar (e.g. `1`,
  `5`, `10`). Files are sorted alphabetically by their full filename.
- **Slug** - lowercase, hyphen-separated words that become the URL path and nav
  label. `contact` becomes `/pages/contact` with the label "Contact";
  `privacy-policy` becomes `/pages/privacy-policy` labelled "Privacy Policy".

### Examples

| Filename               | Nav Label      | URL                     |
| ---------------------- | -------------- | ----------------------- |
| `1-about.md`           | About          | `/pages/about`          |
| `5-rules.md`           | Rules          | `/pages/rules`          |
| `9-contact.md`         | Contact        | `/pages/contact`        |
| `10-privacy-policy.md` | Privacy Policy | `/pages/privacy-policy` |

### Impressum / Legal Notice

Operators subject to legal disclosure requirements (e.g. the German
Telemediengesetz) can create an `impressum.md` page:

```
20-impressum.md
```

Fill it with your legally required contact details (name, address, email, phone)
and it will appear in the navigation as "Impressum".
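The `<sort-prefix>-<slug>.md` convention described in the page above can be parsed in a few lines of Ruby. This is an illustrative sketch, not the app's actual `Pages` implementation; the helper name `nav_entry_for` is hypothetical.

```ruby
# Derive the URL path and nav label from a pages/ filename such as
# "10-privacy-policy.md" (hypothetical helper; illustration only).
def nav_entry_for(filename)
  match = filename.match(/\A(\d+)-([a-z0-9-]+)\.md\z/)
  return nil unless match

  slug = match[2]
  {
    prefix: match[1],
    url: "/pages/#{slug}",
    # "privacy-policy" -> "Privacy Policy"
    label: slug.split("-").map(&:capitalize).join(" "),
  }
end
```

For example, `nav_entry_for("10-privacy-policy.md")` yields `/pages/privacy-policy` with the label "Privacy Policy", matching the examples table above.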
@@ -178,3 +178,57 @@ test('normalizePresetSlot enforces placeholders and uppercase output', () => {
assert.equal(normalizePresetSlot(''), PRESET_PLACEHOLDER);
assert.equal(normalizePresetSlot(null), PRESET_PLACEHOLDER);
});

// ---------------------------------------------------------------------------
// abbreviatePreset — MeshCore SF/BW/CR presets
// ---------------------------------------------------------------------------

// [description, preset, freqMHz, expectedCode]
const ABBREVIATE_MESHCORE_CASES = [
['AU/NZ Wide → Wi', 'SF10/BW250/CR5', null, 'Wi'],
['EU/UK Narrow → Na', 'SF8/BW62/CR8', null, 'Na'],
['CZ/SK Narrow at 868 MHz → Na', 'SF7/BW62/CR5', 868, 'Na'],
['US/CA Narrow at 915 MHz → Na', 'SF7/BW62/CR5', 915, 'Na'],
['US/CA Narrow at exact 900 MHz boundary', 'SF7/BW62/CR5', 900, 'Na'],
['BW fallback Na when freq unknown', 'SF7/BW62/CR5', null, 'Na'],
['125 kHz BW fallback → St', 'SF9/BW125/CR6', null, 'St'],
['unknown BW → null', 'SF12/BW500/CR7', null, null],
];
for (const [desc, preset, freq, expected] of ABBREVIATE_MESHCORE_CASES) {
test(`abbreviatePreset MeshCore: ${desc}`, () => {
assert.equal(abbreviatePreset(preset, freq), expected);
});
}

test('abbreviatePreset leaves Meshtastic named presets unaffected', () => {
assert.equal(abbreviatePreset('MediumFast', null), 'MF');
assert.equal(abbreviatePreset('LongSlow', null), 'LS');
});

// ---------------------------------------------------------------------------
// extractChatMessageMetadata — SF/BW/CR preset + frequency
// ---------------------------------------------------------------------------

test('extractChatMessageMetadata produces Wi code for AU/NZ Wide with freq', () => {
const result = extractChatMessageMetadata({
region_frequency: 915,
modem_preset: 'SF10/BW250/CR5',
});
assert.equal(result.presetCode, 'Wi');
assert.equal(result.frequency, '915');
});

test('extractChatMessageMetadata produces Na code for EU/UK Narrow with freq', () => {
const result = extractChatMessageMetadata({
lora_freq: 868,
modem_preset: 'SF8/BW62/CR8',
});
assert.equal(result.presetCode, 'Na');
});

test('extractChatMessageMetadata uses BW fallback Na when freq is absent', () => {
const result = extractChatMessageMetadata({
modem_preset: 'SF7/BW62/CR5',
});
assert.equal(result.presetCode, 'Na');
});

@@ -92,13 +92,15 @@ test('buildChatTabModel returns sorted nodes and channel buckets', () => {
);

assert.equal(model.channels.length, 6);
// Primary channels (index 0) come first, secondary channels (index > 0) come last.
// Within each tier, ties on messageCount are broken alphabetically by label.
assert.deepEqual(model.channels.map(channel => channel.label), [
'EnvDefault',
'Fallback',
'MediumFast',
'ShortFast',
'1',
'BerlinMesh'
'BerlinMesh',
]);

const channelByLabel = Object.fromEntries(model.channels.map(channel => [channel.label, channel]));
@@ -454,3 +456,127 @@ test('buildChatTabModel falls back to hashed id for unsluggable secondary labels
assert.ok(channel.id.startsWith('channel-secondary-name-'));
assert.ok(channel.id.length > 'channel-secondary-name-'.length);
});

test('buildChatTabModel sets messageCount equal to entries.length on each channel', () => {
const model = buildChatTabModel({
nodes: [],
messages: [
{ id: 'a', rx_time: NOW - 10, channel: 0, channel_name: 'Primary' },
{ id: 'b', rx_time: NOW - 8, channel: 0, channel_name: 'Primary' },
{ id: 'c', rx_time: NOW - 6, channel: 1, channel_name: 'Secondary' }
],
nowSeconds: NOW,
windowSeconds: WINDOW
});
for (const channel of model.channels) {
assert.equal(channel.messageCount, channel.entries.length);
}
const primary = model.channels.find(channel => channel.label === 'Primary');
assert.ok(primary);
assert.equal(primary.messageCount, 2);
});

test('buildChatTabModel sorts channels by messageCount descending', () => {
// Channel A has 3 messages, Channel B has 1. A must come first.
const model = buildChatTabModel({
nodes: [],
messages: [
{ id: 'b1', rx_time: NOW - 15, channel: 1, channel_name: 'Beta' },
{ id: 'a1', rx_time: NOW - 12, channel: 2, channel_name: 'Alpha' },
{ id: 'a2', rx_time: NOW - 10, channel: 2, channel_name: 'Alpha' },
{ id: 'a3', rx_time: NOW - 8, channel: 2, channel_name: 'Alpha' }
],
nowSeconds: NOW,
windowSeconds: WINDOW
});
assert.equal(model.channels.length, 2);
assert.equal(model.channels[0].label, 'Alpha');
assert.equal(model.channels[0].messageCount, 3);
assert.equal(model.channels[1].label, 'Beta');
assert.equal(model.channels[1].messageCount, 1);
});

test('buildChatTabModel breaks messageCount ties alphabetically', () => {
// Zebra and Apple each have 2 messages; Apple should sort first.
const model = buildChatTabModel({
nodes: [],
messages: [
{ id: 'z1', rx_time: NOW - 20, channel: 1, channel_name: 'Zebra' },
{ id: 'z2', rx_time: NOW - 18, channel: 1, channel_name: 'Zebra' },
{ id: 'ap1', rx_time: NOW - 16, channel: 2, channel_name: 'Apple' },
{ id: 'ap2', rx_time: NOW - 14, channel: 2, channel_name: 'Apple' }
],
nowSeconds: NOW,
windowSeconds: WINDOW
});
assert.equal(model.channels.length, 2);
assert.equal(model.channels[0].label, 'Apple');
assert.equal(model.channels[1].label, 'Zebra');
});

test('buildChatTabModel puts primary channels (index 0) before secondary channels', () => {
const model = buildChatTabModel({
nodes: [],
messages: [
// Secondary channels with many messages
{ id: 's1', rx_time: NOW - 30, channel: 2, channel_name: 'SecondaryA' },
{ id: 's2', rx_time: NOW - 28, channel: 2, channel_name: 'SecondaryA' },
{ id: 's3', rx_time: NOW - 26, channel: 2, channel_name: 'SecondaryA' },
// Primary channel (index 0) with fewer messages
{ id: 'p1', rx_time: NOW - 20, channel: 0, channel_name: 'LongFast' },
],
nowSeconds: NOW,
windowSeconds: WINDOW
});
assert.equal(model.channels.length, 2);
assert.equal(model.channels[0].label, 'LongFast', 'primary channel must come first regardless of activity');
assert.equal(model.channels[0].index, 0);
assert.equal(model.channels[1].label, 'SecondaryA', 'secondary channel must come second');
});

test('buildChatTabModel sorts primary channels by activity then alpha within the primary tier', () => {
const model = buildChatTabModel({
nodes: [],
messages: [
// LongFast: 1 message
{ id: 'lf1', rx_time: NOW - 30, channel: 0, channel_name: 'LongFast' },
// MediumFast: 3 messages (most active primary)
{ id: 'mf1', rx_time: NOW - 28, channel: 0, channel_name: 'MediumFast' },
{ id: 'mf2', rx_time: NOW - 26, channel: 0, channel_name: 'MediumFast' },
{ id: 'mf3', rx_time: NOW - 24, channel: 0, channel_name: 'MediumFast' },
// Public: 2 messages
{ id: 'pb1', rx_time: NOW - 22, channel: 0, channel_name: 'Public' },
{ id: 'pb2', rx_time: NOW - 20, channel: 0, channel_name: 'Public' },
],
nowSeconds: NOW,
windowSeconds: WINDOW
});
assert.equal(model.channels.length, 3);
assert.equal(model.channels[0].label, 'MediumFast', 'most active primary first');
assert.equal(model.channels[1].label, 'Public', 'second most active primary second');
assert.equal(model.channels[2].label, 'LongFast', 'least active primary last');
});

test('buildChatTabModel sorts secondary channels by activity then alpha after all primaries', () => {
const model = buildChatTabModel({
nodes: [],
messages: [
// Primary with 1 message
{ id: 'p1', rx_time: NOW - 50, channel: 0, channel_name: 'LongFast' },
// Secondary channels
{ id: 'b1', rx_time: NOW - 40, channel: 3, channel_name: 'Beta' },
{ id: 'a1', rx_time: NOW - 38, channel: 1, channel_name: 'Alpha' },
{ id: 'a2', rx_time: NOW - 36, channel: 1, channel_name: 'Alpha' },
{ id: 'a3', rx_time: NOW - 34, channel: 1, channel_name: 'Alpha' },
{ id: 'g1', rx_time: NOW - 32, channel: 2, channel_name: 'Gamma' },
{ id: 'g2', rx_time: NOW - 30, channel: 2, channel_name: 'Gamma' },
],
nowSeconds: NOW,
windowSeconds: WINDOW
});
assert.equal(model.channels.length, 4);
assert.equal(model.channels[0].label, 'LongFast', 'primary always first');
assert.equal(model.channels[1].label, 'Alpha', 'most active secondary first');
assert.equal(model.channels[2].label, 'Gamma', 'second most active secondary second');
assert.equal(model.channels[3].label, 'Beta', 'least active secondary last');
});

@@ -67,6 +67,10 @@ class MockElement {
|
||||
this.hidden = false;
|
||||
this.scrollTop = 0;
|
||||
this.scrollHeight = 200;
|
||||
this.scrollLeft = 0;
|
||||
this.clientWidth = 0;
|
||||
this.scrollWidth = 0;
|
||||
this.scrollIntoViewCalls = [];
|
||||
}
|
||||
|
||||
appendChild(node) {
|
||||
@@ -122,6 +126,14 @@ class MockElement {
|
||||
handler({});
|
||||
}
|
||||
}
|
||||
|
||||
scrollIntoView(opts) {
|
||||
this.scrollIntoViewCalls.push(opts);
|
||||
}
|
||||
|
||||
scrollBy() {
|
||||
// no-op in tests; presence is enough to avoid guards
|
||||
}
|
||||
}
|
||||
|
||||
class MockTextNode {
|
||||
@@ -164,9 +176,13 @@ test('renderChatTabs creates tab markup and selects default active tab', () => {
|
||||
|
||||
assert.equal(active, 'channel-0');
|
||||
assert.equal(container.dataset.activeTab, 'channel-0');
|
||||
// container now holds [tabListWrapper, panelWrapper]
|
||||
assert.equal(container.children.length, 2);
|
||||
|
||||
const [tabList, panelWrapper] = container.children;
|
||||
const [tabListWrapper, panelWrapper] = container.children;
|
||||
// tabListWrapper holds [prevBtn, tabList, nextBtn]
|
||||
assert.equal(tabListWrapper.children.length, 3);
|
||||
const [, tabList] = tabListWrapper.children;
|
||||
assert.equal(tabList.children.length, 3);
|
||||
assert.equal(panelWrapper.children.length, 3);
|
||||
assert.equal(panelWrapper.children[1].hidden, false);
|
||||
@@ -198,7 +214,8 @@ test('renderChatTabs reuses previous active tab when still available', () => {
|
||||
});
|
||||
|
||||
assert.equal(active, 'log');
|
||||
const [tabList, panels] = container.children;
|
||||
const [tabListWrapper, panels] = container.children;
|
||||
const [, tabList] = tabListWrapper.children;
|
||||
assert.equal(tabList.children[0].getAttribute('aria-selected'), 'true');
|
||||
assert.equal(panels.children[0].hidden, false);
|
||||
});
|
||||
@@ -224,7 +241,8 @@ test('renderChatTabs renders icon img child when tab.iconSrc is provided', () =>
|
||||
|
||||
renderChatTabs({ document, container, tabs });
|
||||
|
||||
const [tabList] = container.children;
|
||||
const [tabListWrapper] = container.children;
|
||||
const [, tabList] = tabListWrapper.children;
|
||||
const button = tabList.children[0];
|
||||
// Button has one element child (the icon <img>) and one text node — two childNodes total.
|
||||
assert.equal(button.children.length, 1, 'should have exactly one element child (icon img)');
|
||||
@@ -246,9 +264,91 @@ test('renderChatTabs uses textContent when no iconSrc is provided', () => {
|
||||
|
||||
renderChatTabs({ document, container, tabs });
|
||||
|
||||
const [tabList] = container.children;
|
||||
const [tabListWrapper] = container.children;
|
||||
const [, tabList] = tabListWrapper.children;
|
||||
const button = tabList.children[0];
|
||||
assert.equal(button.textContent, 'Log');
|
||||
// No icon child elements
|
||||
assert.equal(button.children.length, 0);
|
||||
});
|
||||
|
||||
test('renderChatTabs includes prev and next scroll buttons inside the wrapper', () => {
|
||||
const document = createMockDocument();
|
||||
const container = new MockElement('div');
|
||||
|
||||
renderChatTabs({
|
||||
document,
|
||||
container,
|
||||
tabs: [{ id: 'log', label: 'Log', content: new MockElement('div') }]
|
||||
});
|
||||
|
  const [tabListWrapper] = container.children;
  const [prevBtn, , nextBtn] = tabListWrapper.children;
  assert.equal(prevBtn.getAttribute('aria-hidden'), 'true');
  assert.equal(nextBtn.getAttribute('aria-hidden'), 'true');
  assert.ok(prevBtn.className.includes('chat-tab-scroll-btn--prev'));
  assert.ok(nextBtn.className.includes('chat-tab-scroll-btn--next'));
  // Both start hidden (no overflow in test environment)
  assert.equal(prevBtn.hidden, true);
  assert.equal(nextBtn.hidden, true);
});

test('renderChatTabs scrolls active button into view on tab switch', () => {
  const document = createMockDocument();
  const container = new MockElement('div');

  const tabs = [
    { id: 'log', label: 'Log', content: new MockElement('div') },
    { id: 'ch1', label: 'Channel (5)', content: new MockElement('div') }
  ];

  renderChatTabs({ document, container, tabs, defaultActiveTabId: 'log' });

  const [tabListWrapper] = container.children;
  const [, tabList] = tabListWrapper.children;
  const ch1Button = tabList.children[1];

  ch1Button.dispatch('click');
  assert.equal(container.dataset.activeTab, 'ch1');
  assert.equal(ch1Button.scrollIntoViewCalls.length, 1);
  assert.deepEqual(ch1Button.scrollIntoViewCalls[0], { block: 'nearest', inline: 'nearest' });
});

test('renderChatTabs arrow buttons reflect scroll position via scroll event', () => {
  const document = createMockDocument();
  const container = new MockElement('div');

  renderChatTabs({
    document,
    container,
    tabs: [{ id: 'log', label: 'Log', content: new MockElement('div') }]
  });

  const [tabListWrapper] = container.children;
  const [prevBtn, tabList, nextBtn] = tabListWrapper.children;

  // Simulate a scrollable list: total width 400, viewport 100, scrolled 50.
  tabList.scrollLeft = 50;
  tabList.clientWidth = 100;
  tabList.scrollWidth = 400;

  // Fire the scroll event so updateArrows recalculates.
  tabList.dispatch('scroll');

  // scrolled past start → prev should be visible
  assert.equal(prevBtn.hidden, false);
  // not yet at end (50 + 100 = 150 < 400 - 1) → next should be visible
  assert.equal(nextBtn.hidden, false);

  // Scroll to the very end.
  tabList.scrollLeft = 300; // 300 + 100 = 400 >= 400 - 1
  tabList.dispatch('scroll');
  assert.equal(prevBtn.hidden, false);
  assert.equal(nextBtn.hidden, true);

  // Scroll back to start.
  tabList.scrollLeft = 0;
  tabList.dispatch('scroll');
  assert.equal(prevBtn.hidden, true);
  assert.equal(nextBtn.hidden, false);
});
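The three scroll scenarios above pin down the arrow-visibility rule, including the one-pixel tolerance at the end of the scroll range. A minimal sketch of that rule, extracted from the test comments (the function name is hypothetical; the real logic lives inside the chat-tab module's scroll handler):

```javascript
// Sketch of the arrow-visibility computation exercised above. The 1px
// end-of-scroll tolerance is taken from the inline test comments.
function computeArrowVisibility(list) {
  return {
    prevHidden: list.scrollLeft <= 0,
    nextHidden: list.scrollLeft + list.clientWidth >= list.scrollWidth - 1,
  };
}
```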
@@ -326,6 +326,14 @@ export function createDomEnvironment(options = {}) {
    querySelector() {
      return null;
    },
    querySelectorAll(selector) {
      // Delegate to body when available — MockElement.querySelectorAll supports
      // class selectors which covers the majority of test-time lookups.
      if (document.body && typeof document.body.querySelectorAll === 'function') {
        return document.body.querySelectorAll(selector);
      }
      return [];
    },
    createElement(tagName) {
      return new MockElement(tagName, registry);
    },

@@ -809,3 +809,136 @@ test('federation page sorts by full site names before truncating visible labels'
    cleanup();
  }
});

test('federation table linkifies Matrix room aliases, user IDs, and bare domain paths', async () => {
  const { tbodyEl, cleanup } = (() => {
    const e = createBasicFederationPageHarness();
    return { tbodyEl: e.tbodyEl, cleanup: e.cleanup.bind(e) };
  })();

  const fetchImpl = () => Promise.resolve({
    ok: true,
    json: async () => [
      {
        domain: 'mesh.example',
        name: 'Room Test',
        contactLink: '@jmrplens:matrix.jmrp.io',
        channel: '#mesh:server.tld',
        version: '1.0.0',
        latitude: 0,
        longitude: 0,
        lastUpdateTime: Math.floor(Date.now() / 1000) - 60,
        nodesCount: 3
      }
    ]
  });

  try {
    await initializeFederationPage({ config: {}, fetchImpl, leaflet: createBasicLeafletStub() });

    const rowHtml = tbodyEl.childNodes[0].innerHTML;
    // Matrix user ID: @jmrplens:matrix.jmrp.io → https://matrix.to/#/@jmrplens:matrix.jmrp.io
    assert.match(rowHtml, /href="https:\/\/matrix\.to\/#\/@jmrplens:matrix\.jmrp\.io"/);
    assert.match(rowHtml, /@jmrplens:matrix\.jmrp\.io/);
    // Matrix room alias in channel cell: #mesh:server.tld → https://matrix.to/#/#mesh:server.tld
    assert.match(rowHtml, /href="https:\/\/matrix\.to\/#\/#mesh:server\.tld"/);
    assert.match(rowHtml, /#mesh:server\.tld/);
  } finally {
    cleanup();
  }
});

test('federation table linkifies bare domain-with-path as https', async () => {
  const { tbodyEl, cleanup } = (() => {
    const e = createBasicFederationPageHarness();
    return { tbodyEl: e.tbodyEl, cleanup: e.cleanup.bind(e) };
  })();

  const fetchImpl = () => Promise.resolve({
    ok: true,
    json: async () => [
      {
        domain: 'mesh.example',
        contactLink: 'discord.gg/EGdbRKQnFk',
        version: '1.0.0',
        latitude: 0,
        longitude: 0,
        lastUpdateTime: Math.floor(Date.now() / 1000) - 60,
        nodesCount: 1
      }
    ]
  });

  try {
    await initializeFederationPage({ config: {}, fetchImpl, leaflet: createBasicLeafletStub() });

    const rowHtml = tbodyEl.childNodes[0].innerHTML;
    assert.match(rowHtml, /href="https:\/\/discord\.gg\/EGdbRKQnFk"/);
    assert.match(rowHtml, /discord\.gg\/EGdbRKQnFk/);
  } finally {
    cleanup();
  }
});

test('federation table sanitises <a> tags and strips other HTML in contact field', async () => {
  const { tbodyEl, cleanup } = (() => {
    const e = createBasicFederationPageHarness();
    return { tbodyEl: e.tbodyEl, cleanup: e.cleanup.bind(e) };
  })();

  const contactWithHtml =
    '<a href=https://t.me/+BpSW3no2mJgzM2I8 target=_blank>YO Telegram group</a><b> Contact:</b> YO3IBZ';
  const contactViber =
    '<a href="https://invite.viber.com/?g=64h1QIFIC1Unai6DS6SE2Ot8ks9xoTm6">Viber Group</a>';

  const fetchImpl = () => Promise.resolve({
    ok: true,
    json: async () => [
      {
        domain: 'a.mesh',
        contactLink: contactWithHtml,
        version: '1.0.0',
        latitude: 0,
        longitude: 0,
        lastUpdateTime: Math.floor(Date.now() / 1000) - 60,
        nodesCount: 1
      },
      {
        domain: 'b.mesh',
        contactLink: contactViber,
        version: '1.0.0',
        latitude: 0,
        longitude: 0,
        lastUpdateTime: Math.floor(Date.now() / 1000) - 60,
        nodesCount: 1
      }
    ]
  });

  try {
    await initializeFederationPage({ config: {}, fetchImpl, leaflet: createBasicLeafletStub() });

    const rows = tbodyEl.childNodes;
    const aHtml = rows[0].innerHTML;
    const bHtml = rows[1].innerHTML;

    // Unquoted href extracted and normalised
    assert.match(aHtml, /href="https:\/\/t\.me\/\+BpSW3no2mJgzM2I8"/);
    assert.match(aHtml, /YO Telegram group/);
    // <b> tag stripped, text content preserved
    assert.match(aHtml, /Contact:/);
    assert.doesNotMatch(aHtml, /<b>/);
    // Remaining plain text present
    assert.match(aHtml, /YO3IBZ/);

    // Quoted href passes through correctly
    assert.match(bHtml, /href="https:\/\/invite\.viber\.com\/\?g=64h1QIFIC1Unai6DS6SE2Ot8ks9xoTm6"/);
    assert.match(bHtml, /Viber Group/);

    // No raw HTML from input leaks into output
    assert.doesNotMatch(aHtml, /target=_blank/);
    assert.doesNotMatch(aHtml, /<\/b>/);
  } finally {
    cleanup();
  }
});
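The linkify assertions above encode three rewrite rules: Matrix user IDs and room aliases become matrix.to permalinks, and a bare domain-with-path is assumed to be https. A rough sketch of those rules under stated assumptions (the function name and regexes are illustrative; the real implementation in the federation page module is certainly more thorough):

```javascript
// Illustrative sketch of the three linkify rules asserted above; the
// function name and regex shapes are assumptions, not the real code.
function linkifyContact(text) {
  // Matrix user IDs (@user:server) and room aliases (#room:server) → matrix.to
  if (/^[@#][^\s:]+:[^\s:]+$/.test(text)) {
    return `https://matrix.to/#/${text}`;
  }
  // Bare domain-with-path (e.g. discord.gg/abc) → default to https
  if (/^[\w.-]+\.[a-z]{2,}\/\S+$/i.test(text)) {
    return `https://${text}`;
  }
  return text;
}
```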
@@ -0,0 +1,212 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';
import { maxRecordTimestamp, mergeById, mergeByCompositeKey, trimToLimit } from '../incremental-helpers.js';

// ---------------------------------------------------------------------------
// maxRecordTimestamp
// ---------------------------------------------------------------------------

test('maxRecordTimestamp returns 0 for an empty array', () => {
  assert.equal(maxRecordTimestamp([]), 0);
});

test('maxRecordTimestamp returns 0 for non-array input', () => {
  assert.equal(maxRecordTimestamp(null), 0);
  assert.equal(maxRecordTimestamp(undefined), 0);
  assert.equal(maxRecordTimestamp('string'), 0);
});

test('maxRecordTimestamp extracts the highest rx_time by default', () => {
  const records = [
    { rx_time: 100 },
    { rx_time: 300 },
    { rx_time: 200 },
  ];
  assert.equal(maxRecordTimestamp(records), 300);
});

test('maxRecordTimestamp inspects last_heard by default', () => {
  const records = [
    { last_heard: 500 },
    { last_heard: 250 },
  ];
  assert.equal(maxRecordTimestamp(records), 500);
});

test('maxRecordTimestamp returns 0 when records lack timestamp fields', () => {
  const records = [{ node_id: '!abc' }, { node_id: '!def' }];
  assert.equal(maxRecordTimestamp(records), 0);
});

test('maxRecordTimestamp accepts custom field names', () => {
  const records = [
    { telemetry_time: 700, rx_time: 600 },
    { telemetry_time: 800 },
  ];
  assert.equal(maxRecordTimestamp(records, ['telemetry_time']), 800);
});

test('maxRecordTimestamp picks the max across multiple fields', () => {
  const records = [
    { rx_time: 100, position_time: 400 },
    { rx_time: 300, position_time: 200 },
  ];
  assert.equal(maxRecordTimestamp(records, ['rx_time', 'position_time']), 400);
});

test('maxRecordTimestamp skips null and non-object entries', () => {
  const records = [null, undefined, 42, { rx_time: 10 }];
  assert.equal(maxRecordTimestamp(records), 10);
});

test('maxRecordTimestamp ignores non-number timestamp values', () => {
  const records = [{ rx_time: 'abc' }, { rx_time: 50 }];
  assert.equal(maxRecordTimestamp(records), 50);
});
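Taken together, these cases fully pin down the helper's contract. A minimal sketch that satisfies them, assuming the default field list is `rx_time`/`last_heard` (the real helper lives in `incremental-helpers.js`):

```javascript
// Minimal sketch consistent with the test cases above; the default field
// list is an assumption inferred from the tests, not the real signature.
function maxRecordTimestamp(records, fields = ['rx_time', 'last_heard']) {
  if (!Array.isArray(records)) return 0;
  let max = 0;
  for (const record of records) {
    // Skip null, undefined, and non-object entries outright.
    if (record === null || typeof record !== 'object') continue;
    for (const field of fields) {
      const value = record[field];
      // Only finite numeric timestamps count toward the maximum.
      if (typeof value === 'number' && value > max) max = value;
    }
  }
  return max;
}
```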
// ---------------------------------------------------------------------------
// mergeById
// ---------------------------------------------------------------------------

test('mergeById returns existing when incoming is empty', () => {
  const existing = [{ id: 1, v: 'a' }];
  assert.strictEqual(mergeById(existing, [], 'id'), existing);
  assert.strictEqual(mergeById(existing, null, 'id'), existing);
  assert.strictEqual(mergeById(existing, undefined, 'id'), existing);
});

test('mergeById deduplicates by keyField keeping the incoming value', () => {
  const existing = [
    { id: 1, v: 'old' },
    { id: 2, v: 'keep' },
  ];
  const incoming = [
    { id: 1, v: 'new' },
    { id: 3, v: 'added' },
  ];
  const result = mergeById(existing, incoming, 'id');
  assert.equal(result.length, 3);
  const byId = Object.fromEntries(result.map(r => [r.id, r.v]));
  assert.equal(byId[1], 'new');
  assert.equal(byId[2], 'keep');
  assert.equal(byId[3], 'added');
});

test('mergeById works with string keys', () => {
  const existing = [{ node_id: '!abc', name: 'A' }];
  const incoming = [{ node_id: '!abc', name: 'B' }];
  const result = mergeById(existing, incoming, 'node_id');
  assert.equal(result.length, 1);
  assert.equal(result[0].name, 'B');
});

test('mergeById skips items with null or undefined key', () => {
  const existing = [{ id: 1, v: 'a' }];
  const incoming = [{ v: 'no-id' }, { id: 2, v: 'b' }];
  const result = mergeById(existing, incoming, 'id');
  assert.equal(result.length, 2);
});

test('mergeById returns all incoming when existing is empty', () => {
  const result = mergeById([], [{ id: 1 }, { id: 2 }], 'id');
  assert.equal(result.length, 2);
});
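The contract these tests describe (incoming wins on key collisions, keyless items are skipped, the existing array is returned untouched when there is nothing to merge) can be sketched in a few lines. Result ordering is an assumption; the tests above only check membership:

```javascript
// Sketch of the mergeById contract pinned down by the tests above.
function mergeById(existing, incoming, keyField) {
  // Same-reference early return: satisfies the strictEqual assertions.
  if (!Array.isArray(incoming) || incoming.length === 0) return existing;
  const merged = new Map();
  for (const item of [...existing, ...incoming]) {
    const key = item?.[keyField];
    if (key === null || key === undefined) continue; // keyless items dropped
    merged.set(key, item); // later (incoming) entries overwrite earlier ones
  }
  return [...merged.values()];
}
```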
// ---------------------------------------------------------------------------
// mergeByCompositeKey
// ---------------------------------------------------------------------------

test('mergeByCompositeKey deduplicates by composite key', () => {
  const existing = [
    { node_id: '!a', neighbor_id: '!b', snr: 5 },
    { node_id: '!a', neighbor_id: '!c', snr: 3 },
  ];
  const incoming = [
    { node_id: '!a', neighbor_id: '!b', snr: 8 },
    { node_id: '!a', neighbor_id: '!d', snr: 1 },
  ];
  const result = mergeByCompositeKey(existing, incoming, ['node_id', 'neighbor_id']);
  assert.equal(result.length, 3);
  const ab = result.find(r => r.neighbor_id === '!b');
  assert.equal(ab.snr, 8, 'incoming should overwrite existing for same composite key');
});

test('mergeByCompositeKey returns existing when incoming is empty', () => {
  const existing = [{ a: 1, b: 2 }];
  assert.strictEqual(mergeByCompositeKey(existing, [], ['a', 'b']), existing);
  assert.strictEqual(mergeByCompositeKey(existing, null, ['a', 'b']), existing);
});

test('mergeByCompositeKey handles missing key fields gracefully', () => {
  const existing = [{ node_id: '!a' }];
  const incoming = [{ node_id: '!a', neighbor_id: '!b' }];
  const result = mergeByCompositeKey(existing, incoming, ['node_id', 'neighbor_id']);
  assert.equal(result.length, 2, 'different composite keys due to missing field');
});
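The composite variant joins several field values into one dedupe key, which is why a record with a missing field lands in a different bucket (the last test). A sketch under that reading; the join separator is an assumption:

```javascript
// Composite-key sketch: the dedupe key is the joined field values, so a
// missing field yields a distinct key. The '|' separator is an assumption.
function mergeByCompositeKey(existing, incoming, keyFields) {
  if (!Array.isArray(incoming) || incoming.length === 0) return existing;
  const keyOf = (item) => keyFields.map((f) => item?.[f]).join('|');
  const merged = new Map();
  for (const item of [...existing, ...incoming]) {
    merged.set(keyOf(item), item); // incoming overwrites same composite key
  }
  return [...merged.values()];
}
```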
// ---------------------------------------------------------------------------
// trimToLimit
// ---------------------------------------------------------------------------

test('trimToLimit returns the same array when within limit', () => {
  const records = [{ id: 1, rx_time: 100 }, { id: 2, rx_time: 200 }];
  const result = trimToLimit(records, 5);
  assert.strictEqual(result, records);
});

test('trimToLimit trims to limit keeping newest entries', () => {
  const records = [
    { id: 1, rx_time: 100 },
    { id: 2, rx_time: 300 },
    { id: 3, rx_time: 200 },
    { id: 4, rx_time: 400 },
  ];
  const result = trimToLimit(records, 2);
  assert.equal(result.length, 2);
  const ids = result.map(r => r.id);
  assert.ok(ids.includes(4), 'should keep newest (id=4)');
  assert.ok(ids.includes(2), 'should keep second newest (id=2)');
});

test('trimToLimit uses custom timestamp field', () => {
  const records = [
    { id: 1, last_heard: 100 },
    { id: 2, last_heard: 300 },
    { id: 3, last_heard: 200 },
  ];
  const result = trimToLimit(records, 1, 'last_heard');
  assert.equal(result.length, 1);
  assert.equal(result[0].id, 2);
});

test('trimToLimit returns input for non-array values', () => {
  assert.equal(trimToLimit(null, 10), null);
  assert.equal(trimToLimit(undefined, 10), undefined);
});

test('trimToLimit handles records with missing timestamp fields', () => {
  const records = [
    { id: 1, rx_time: 100 },
    { id: 2 },
    { id: 3, rx_time: 300 },
  ];
  const result = trimToLimit(records, 2);
  assert.equal(result.length, 2);
  assert.equal(result[0].id, 3);
});
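These cases imply a sort-by-timestamp-descending-then-slice shape, with missing timestamps treated as 0 and the original array returned by reference when no trim is needed. A sketch under those assumptions (the default field name `rx_time` is inferred from the tests):

```javascript
// Sketch of trimToLimit consistent with the tests above: keep the newest
// `limit` records; missing timestamps sort as 0. Tie-breaking order is an
// assumption the tests do not constrain.
function trimToLimit(records, limit, timestampField = 'rx_time') {
  // Non-arrays and already-small arrays pass through by reference.
  if (!Array.isArray(records) || records.length <= limit) return records;
  return [...records]
    .sort((a, b) => (b?.[timestampField] ?? 0) - (a?.[timestampField] ?? 0))
    .slice(0, limit);
}
```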
@@ -230,7 +230,7 @@ test('initializeInstanceSelector navigates to the chosen instance domain', async
  const fetchImpl = async () => ({
    ok: true,
    async json() {
-      return [{ domain: 'mesh.example' }];
+      return [{ domain: 'mesh.example' }, { domain: 'other.mesh' }];
    }
  });

@@ -249,7 +249,7 @@ test('initializeInstanceSelector navigates to the chosen instance domain', async
    defaultLabel: 'Select region ...'
  });

-  assert.equal(select.options.length, 2);
+  assert.equal(select.options.length, 3);
  assert.equal(select.options[1].value, 'mesh.example');

  select.value = 'mesh.example';
@@ -261,6 +261,68 @@ test('initializeInstanceSelector navigates to the chosen instance domain', async
  }
});

test('initializeInstanceSelector hides the selector container when fewer than 2 instances are available', async () => {
  const env = createDomEnvironment();
  const select = setupSelectElement(env.document);

  // Simulate a parent container; mock elements lack closest() so we set
  // parentElement directly so the hide logic falls back to it.
  const container = env.document.createElement('div');
  container.classList.add('header-federation');
  select.parentElement = container;
  env.document.body.appendChild(container);

  const fetchImpl = async () => ({
    ok: true,
    async json() {
      return [{ domain: 'only.mesh' }];
    }
  });

  try {
    await initializeInstanceSelector({
      selectElement: select,
      fetchImpl,
      windowObject: env.window,
      documentObject: env.document
    });

    assert.equal(container.hidden, true, 'container should be hidden with fewer than 2 instances');
  } finally {
    env.cleanup();
  }
});

test('initializeInstanceSelector keeps the selector visible when 2 or more instances are available', async () => {
  const env = createDomEnvironment();
  const select = setupSelectElement(env.document);

  const container = env.document.createElement('div');
  container.classList.add('header-federation');
  select.parentElement = container;
  env.document.body.appendChild(container);

  const fetchImpl = async () => ({
    ok: true,
    async json() {
      return [{ domain: 'alpha.mesh' }, { domain: 'beta.mesh' }];
    }
  });

  try {
    await initializeInstanceSelector({
      selectElement: select,
      fetchImpl,
      windowObject: env.window,
      documentObject: env.document
    });

    assert.ok(!container.hidden, 'container should remain visible with 2 or more instances');
  } finally {
    env.cleanup();
  }
});
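The hide/show pair above reduces to one rule: the selector's container is hidden when fewer than two instances are available. A sketch of that rule, assuming the container lookup tries `closest('.header-federation')` and falls back to `parentElement` as the first test's comment describes (the function name is hypothetical):

```javascript
// Sketch of the visibility rule the two tests pin down. The closest()
// call with a parentElement fallback follows the test's own comment;
// the helper name is an assumption.
function updateSelectorVisibility(select, instances) {
  const container =
    (typeof select.closest === 'function' && select.closest('.header-federation')) ||
    select.parentElement;
  if (container) container.hidden = instances.length < 2;
}
```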
test('initializeInstanceSelector updates federation navigation labels with instance count', async () => {
  const env = createDomEnvironment();
  const select = setupSelectElement(env.document);

@@ -0,0 +1,94 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import { createDomEnvironment } from './dom-environment.js';
import { initializeApp } from '../main.js';

/**
 * Minimal {@link initializeApp} configuration shared across main.js test suites.
 * Frozen to prevent accidental mutation between tests.
 */
export const MINIMAL_CONFIG = Object.freeze({
  channel: 'Primary',
  frequency: '915MHz',
  refreshMs: 0,
  refreshIntervalSeconds: 30,
  chatEnabled: true,
  mapCenter: { lat: 0, lon: 0 },
  mapZoom: null,
  maxDistanceKm: 0,
  tileFilters: { light: '', dark: '' },
  instancesFeatureEnabled: false,
  instanceDomain: null,
  snapshotWindowSeconds: 3600,
});

/**
 * Spin up a minimal DOM environment, call {@link initializeApp} with a stub
 * config, and return the inner test utilities alongside a cleanup handle.
 *
 * @returns {{ testUtils: Object, cleanup: Function }}
 */
export function setupApp() {
  const env = createDomEnvironment({ includeBody: true });
  const { _testUtils } = initializeApp(MINIMAL_CONFIG);
  return { testUtils: _testUtils, cleanup: env.cleanup.bind(env) };
}

/**
 * Run a test body with a fresh app instance, ensuring cleanup regardless of
 * outcome. Eliminates the repetitive try/finally boilerplate across tests.
 *
 * @param {function(Object): void} fn Receives the _testUtils object.
 */
export function withApp(fn) {
  const { testUtils, cleanup } = setupApp();
  try {
    fn(testUtils);
  } finally {
    cleanup();
  }
}

/**
 * Spin up a DOM environment, optionally pre-register elements by id, then
 * initialise the app with a custom config override. Returns the test utils,
 * the environment (for DOM inspection), and a cleanup handle.
 *
 * @param {{ extraElements?: string[], configOverrides?: Object }} [opts]
 * @returns {{ testUtils: Object, env: Object, cleanup: Function }}
 */
export function setupAppWithOptions({ extraElements = [], configOverrides = {} } = {}) {
  const env = createDomEnvironment({ includeBody: true });
  for (const id of extraElements) {
    env.registerElement(id, env.createElement('span', id));
  }
  const config = { ...MINIMAL_CONFIG, ...configOverrides };
  const { _testUtils } = initializeApp(config);
  return { testUtils: _testUtils, env, cleanup: env.cleanup.bind(env) };
}

/**
 * Extract the serialised HTML string from a DOM element returned by the test
 * utils. The stub environment exposes innerHTML as a plain string; this
 * normalises the fallback path for environments where it may not be.
 *
 * @param {HTMLElement} el
 * @returns {string}
 */
export function innerHtml(el) {
  return String(typeof el.innerHTML === 'string' ? el.innerHTML : el.childNodes?.[0] ?? '');
}
@@ -0,0 +1,40 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';

import { withApp } from './main-app-test-helpers.js';

// ---------------------------------------------------------------------------
// isAutorefreshPaused
// ---------------------------------------------------------------------------

test('isAutorefreshPaused returns false by default', () => {
  withApp((t) => {
    assert.equal(t.isAutorefreshPaused(), false);
  });
});

// ---------------------------------------------------------------------------
// restartAutoRefresh is safe when called without a timer
// ---------------------------------------------------------------------------

test('restartAutoRefresh does not throw when invoked with refreshMs 0', () => {
  withApp((t) => {
    assert.doesNotThrow(() => t.restartAutoRefresh());
  });
});
@@ -0,0 +1,543 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';

import { withApp } from './main-app-test-helpers.js';

// ---------------------------------------------------------------------------
// makeRoleFilterKey
// ---------------------------------------------------------------------------

test('makeRoleFilterKey produces compound key for meshtastic protocol', () => {
  withApp((t) => {
    assert.equal(t.makeRoleFilterKey('SENSOR', 'meshtastic'), 'meshtastic:SENSOR');
    assert.equal(t.makeRoleFilterKey('ROUTER', 'meshtastic'), 'meshtastic:ROUTER');
  });
});

test('makeRoleFilterKey produces compound key for meshcore protocol', () => {
  withApp((t) => {
    assert.equal(t.makeRoleFilterKey('SENSOR', 'meshcore'), 'meshcore:SENSOR');
    assert.equal(t.makeRoleFilterKey('REPEATER', 'meshcore'), 'meshcore:REPEATER');
  });
});

test('makeRoleFilterKey defaults null protocol to meshtastic bucket', () => {
  withApp((t) => {
    assert.equal(t.makeRoleFilterKey('SENSOR', null), 'meshtastic:SENSOR');
    assert.equal(t.makeRoleFilterKey('ROUTER', null), 'meshtastic:ROUTER');
  });
});

test('makeRoleFilterKey defaults absent protocol to meshtastic bucket', () => {
  withApp((t) => {
    assert.equal(t.makeRoleFilterKey('CLIENT', undefined), 'meshtastic:CLIENT');
  });
});

test('makeRoleFilterKey SENSOR and REPEATER produce distinct keys across protocols', () => {
  withApp((t) => {
    const meshtasticSensor = t.makeRoleFilterKey('SENSOR', 'meshtastic');
    const meshcoreSensor = t.makeRoleFilterKey('SENSOR', 'meshcore');
    assert.notEqual(meshtasticSensor, meshcoreSensor);

    const meshtasticRepeater = t.makeRoleFilterKey('REPEATER', 'meshtastic');
    const meshcoreRepeater = t.makeRoleFilterKey('REPEATER', 'meshcore');
    assert.notEqual(meshtasticRepeater, meshcoreRepeater);
  });
});
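These assertions determine the key format completely: `<protocol>:<ROLE>`, with a null or absent protocol bucketed under `meshtastic`. A one-line sketch (the real helper is the one exposed through `_testUtils`):

```javascript
// One-line sketch of the key format the tests above pin down.
const makeRoleFilterKey = (role, protocol) => `${protocol ?? 'meshtastic'}:${role}`;
```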
// ---------------------------------------------------------------------------
// matchesRoleFilter — no active filters
// ---------------------------------------------------------------------------

test('matchesRoleFilter returns true when no roles are hidden', () => {
  withApp((t) => {
    t.activeRoleFilters.clear();
    assert.equal(t.matchesRoleFilter({ role: 'ROUTER', protocol: 'meshtastic' }), true);
    assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshcore' }), true);
  });
});

// ---------------------------------------------------------------------------
// matchesRoleFilter — exclusion-set semantics (roles in set are hidden)
// ---------------------------------------------------------------------------

test('matchesRoleFilter hides meshtastic SENSOR when in exclusion set', () => {
  withApp((t) => {
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshtastic:SENSOR');
    assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshtastic' }), false);
  });
});

test('matchesRoleFilter does not hide meshcore SENSOR when meshtastic SENSOR is hidden', () => {
  withApp((t) => {
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshtastic:SENSOR');
    assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshcore' }), true);
  });
});

test('matchesRoleFilter hides meshcore SENSOR when in exclusion set', () => {
  withApp((t) => {
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshcore:SENSOR');
    assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshcore' }), false);
  });
});

test('matchesRoleFilter does not hide meshtastic SENSOR when meshcore SENSOR is hidden', () => {
  withApp((t) => {
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshcore:SENSOR');
    assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshtastic' }), true);
  });
});

test('matchesRoleFilter hides meshtastic REPEATER but not meshcore REPEATER', () => {
  withApp((t) => {
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshtastic:REPEATER');
    assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshtastic' }), false);
    assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshcore' }), true);
  });
});

test('matchesRoleFilter hides meshcore REPEATER but not meshtastic REPEATER', () => {
  withApp((t) => {
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshcore:REPEATER');
    assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshcore' }), false);
    assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshtastic' }), true);
  });
});

// ---------------------------------------------------------------------------
// matchesRoleFilter — null/absent protocol treated as meshtastic
// ---------------------------------------------------------------------------

test('matchesRoleFilter treats null protocol as meshtastic for exclusion', () => {
  withApp((t) => {
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshtastic:SENSOR');
    // null-protocol node should be hidden by the meshtastic SENSOR exclusion
    assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: null }), false);
    // but meshcore SENSOR exclusion should not affect null-protocol nodes
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshcore:SENSOR');
    assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: null }), true);
  });
});

test('matchesRoleFilter with multiple hidden roles hides only those roles', () => {
  withApp((t) => {
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshtastic:SENSOR');
    t.activeRoleFilters.add('meshcore:REPEATER');
    assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshtastic' }), false);
    assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshcore' }), false);
    assert.equal(t.matchesRoleFilter({ role: 'ROUTER', protocol: 'meshtastic' }), true);
  });
});
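All of the above reduces to exclusion-set semantics: a node stays visible unless its protocol-qualified role key is in the active filter set, with a null or absent protocol bucketed under `meshtastic`. A sketch of that check, assuming module-level filter state as exposed via the test utils:

```javascript
// Exclusion-set sketch of matchesRoleFilter; the wiring to app state is an
// assumption, the key format and null-protocol bucketing follow the tests.
const activeRoleFilters = new Set();
function matchesRoleFilter(node) {
  const key = `${node.protocol ?? 'meshtastic'}:${node.role}`;
  return !activeRoleFilters.has(key);
}
```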
// ---------------------------------------------------------------------------
// matchesProtocolFilter
// ---------------------------------------------------------------------------

test('matchesProtocolFilter returns true when no protocols are hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    assert.equal(t.matchesProtocolFilter({ protocol: 'meshtastic' }), true);
    assert.equal(t.matchesProtocolFilter({ protocol: 'meshcore' }), true);
    assert.equal(t.matchesProtocolFilter({ protocol: null }), true);
  });
});

test('matchesProtocolFilter hides meshtastic nodes when meshtastic is hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshtastic');
    assert.equal(t.matchesProtocolFilter({ protocol: 'meshtastic' }), false);
    assert.equal(t.matchesProtocolFilter({ protocol: 'meshcore' }), true);
  });
});

test('matchesProtocolFilter hides meshcore nodes when meshcore is hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshcore');
    assert.equal(t.matchesProtocolFilter({ protocol: 'meshcore' }), false);
    assert.equal(t.matchesProtocolFilter({ protocol: 'meshtastic' }), true);
  });
});

test('matchesProtocolFilter always shows null-protocol nodes even when meshtastic is hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshtastic');
    // null/absent protocol nodes are NOT hidden — they predate the protocol field
    assert.equal(t.matchesProtocolFilter({ protocol: null }), true);
    assert.equal(t.matchesProtocolFilter({}), true);
  });
});
|
||||
|
||||
// ---------------------------------------------------------------------------
// Role filter key independence across protocols
// ---------------------------------------------------------------------------

test('SENSOR filter keys for meshtastic and meshcore are distinct strings', () => {
  withApp((t) => {
    const m = t.makeRoleFilterKey('SENSOR', 'meshtastic');
    const mc = t.makeRoleFilterKey('SENSOR', 'meshcore');
    // They must be different keys so they can live independently in the Set
    assert.notEqual(m, mc);
    // Adding one to the filter set must not affect the other
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add(m);
    assert.equal(t.activeRoleFilters.has(m), true);
    assert.equal(t.activeRoleFilters.has(mc), false);
  });
});

test('REPEATER filter keys for meshtastic and meshcore are distinct strings', () => {
  withApp((t) => {
    const m = t.makeRoleFilterKey('REPEATER', 'meshtastic');
    const mc = t.makeRoleFilterKey('REPEATER', 'meshcore');
    assert.notEqual(m, mc);
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add(mc);
    assert.equal(t.activeRoleFilters.has(mc), true);
    assert.equal(t.activeRoleFilters.has(m), false);
  });
});

// ---------------------------------------------------------------------------
// normalizeFilterProtocol
// ---------------------------------------------------------------------------

test('normalizeFilterProtocol returns meshcore for explicit meshcore', () => {
  withApp((t) => {
    assert.equal(t.normalizeFilterProtocol('meshcore'), 'meshcore');
  });
});

test('normalizeFilterProtocol returns meshtastic for explicit meshtastic', () => {
  withApp((t) => {
    assert.equal(t.normalizeFilterProtocol('meshtastic'), 'meshtastic');
  });
});

test('normalizeFilterProtocol returns meshtastic for null', () => {
  withApp((t) => {
    assert.equal(t.normalizeFilterProtocol(null), 'meshtastic');
  });
});

test('normalizeFilterProtocol returns meshtastic for undefined', () => {
  withApp((t) => {
    assert.equal(t.normalizeFilterProtocol(undefined), 'meshtastic');
  });
});

test('normalizeFilterProtocol returns meshtastic for unknown protocol', () => {
  withApp((t) => {
    assert.equal(t.normalizeFilterProtocol('reticulum'), 'meshtastic');
  });
});

// ---------------------------------------------------------------------------
// buildProtocolIconImg / buildMeshtasticIconImg / buildMeshcoreIconImg
// ---------------------------------------------------------------------------

test('buildProtocolIconImg returns an img element with the correct src and class', () => {
  withApp((t) => {
    const img = t.buildProtocolIconImg('/assets/img/test.svg', 'protocol-icon--test');
    assert.equal(img.tagName.toLowerCase(), 'img');
    assert.equal(img.getAttribute('src'), '/assets/img/test.svg');
    assert.ok(img.className.includes('protocol-icon'));
    assert.ok(img.className.includes('protocol-icon--test'));
    assert.equal(img.getAttribute('aria-hidden'), 'true');
    assert.equal(img.getAttribute('alt'), '');
    assert.equal(img.getAttribute('width'), '12');
    assert.equal(img.getAttribute('height'), '12');
  });
});

test('buildMeshtasticIconImg references meshtastic.svg and carries the meshtastic class', () => {
  withApp((t) => {
    const img = t.buildMeshtasticIconImg();
    assert.ok(img.getAttribute('src').includes('meshtastic.svg'));
    assert.ok(img.className.includes('protocol-icon--meshtastic'));
    assert.equal(img.getAttribute('aria-hidden'), 'true');
  });
});

test('buildMeshcoreIconImg references meshcore.svg and carries the meshcore class', () => {
  withApp((t) => {
    const img = t.buildMeshcoreIconImg();
    assert.ok(img.getAttribute('src').includes('meshcore.svg'));
    assert.ok(img.className.includes('protocol-icon--meshcore'));
    assert.equal(img.getAttribute('aria-hidden'), 'true');
  });
});

test('buildMeshtasticIconImg and buildMeshcoreIconImg return different src values', () => {
  withApp((t) => {
    const mt = t.buildMeshtasticIconImg();
    const mc = t.buildMeshcoreIconImg();
    assert.notEqual(mt.getAttribute('src'), mc.getAttribute('src'));
  });
});

// ---------------------------------------------------------------------------
// legendClickHandler
// ---------------------------------------------------------------------------

test('legendClickHandler calls preventDefault and stopPropagation before fn', () => {
  withApp((t) => {
    let fnCalled = false;
    let preventDefaultCalled = false;
    let stopPropagationCalled = false;
    const handler = t.legendClickHandler(() => { fnCalled = true; });
    const fakeEvent = {
      preventDefault: () => { preventDefaultCalled = true; },
      stopPropagation: () => { stopPropagationCalled = true; },
    };
    handler(fakeEvent);
    assert.equal(preventDefaultCalled, true);
    assert.equal(stopPropagationCalled, true);
    assert.equal(fnCalled, true);
  });
});

test('legendClickHandler passes the event to fn', () => {
  withApp((t) => {
    let received = null;
    const handler = t.legendClickHandler(ev => { received = ev; });
    const fakeEvent = {
      preventDefault: () => {},
      stopPropagation: () => {},
      detail: 'test',
    };
    handler(fakeEvent);
    assert.equal(received, fakeEvent);
  });
});

// ---------------------------------------------------------------------------
// buildRoleButtons
// ---------------------------------------------------------------------------

test('buildRoleButtons appends one child per palette entry', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#40749E', REPEATER: '#B8C4D4' }, 'meshcore');
    assert.equal(col.childNodes.length, 2);
  });
});

test('buildRoleButtons sets dataset.role and dataset.protocol on each button', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#40749E' }, 'meshcore');
    const btn = t.legendRoleButtons.get('meshcore:SENSOR');
    assert.ok(btn, 'button should be in legendRoleButtons');
    assert.equal(btn.dataset.role, 'SENSOR');
    assert.equal(btn.dataset.protocol, 'meshcore');
  });
});

test('buildRoleButtons registers compound keys in legendRoleButtons', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#40749E', REPEATER: '#B8C4D4' }, 'meshcore');
    assert.ok(t.legendRoleButtons.has('meshcore:SENSOR'));
    assert.ok(t.legendRoleButtons.has('meshcore:REPEATER'));
  });
});

test('buildRoleButtons keeps meshtastic and meshcore SENSOR keys distinct', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const colMc = document.createElement('div');
    const colMt = document.createElement('div');
    t.buildRoleButtons(colMc, { SENSOR: '#40749E' }, 'meshcore');
    t.buildRoleButtons(colMt, { SENSOR: '#A8D5BA' }, 'meshtastic');
    assert.ok(t.legendRoleButtons.has('meshcore:SENSOR'));
    assert.ok(t.legendRoleButtons.has('meshtastic:SENSOR'));
    assert.notEqual(
      t.legendRoleButtons.get('meshcore:SENSOR'),
      t.legendRoleButtons.get('meshtastic:SENSOR'),
    );
  });
});

test('buildRoleButtons sets aria-pressed to true initially (all visible)', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { ROUTER: '#ff0019' }, 'meshtastic');
    const btn = t.legendRoleButtons.get('meshtastic:ROUTER');
    assert.ok(btn, 'button should be in legendRoleButtons');
    assert.equal(btn.getAttribute('aria-pressed'), 'true');
  });
});

test('buildRoleButtons creates swatch child with background color', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { ROUTER: '#ff0019' }, 'meshtastic');
    const btn = t.legendRoleButtons.get('meshtastic:ROUTER');
    // swatch is the first child of the button
    const swatch = btn.childNodes[0];
    assert.ok(swatch, 'swatch element should exist');
    assert.ok(swatch.style.background, 'swatch should have background color');
  });
});

test('buildRoleButtons creates label child with role text', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { ROUTER: '#ff0019' }, 'meshtastic');
    const btn = t.legendRoleButtons.get('meshtastic:ROUTER');
    // label is the second child of the button
    const label = btn.childNodes[1];
    assert.ok(label, 'label element should exist');
    assert.equal(label.textContent, 'ROUTER');
  });
});

// ---------------------------------------------------------------------------
// updateLegendRoleFiltersUI
// ---------------------------------------------------------------------------

test('updateLegendRoleFiltersUI sets aria-pressed false on hidden role buttons', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#40749E' }, 'meshcore');
    const btn = t.legendRoleButtons.get('meshcore:SENSOR');
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshcore:SENSOR');
    t.updateLegendRoleFiltersUI();
    assert.equal(btn.getAttribute('aria-pressed'), 'false');
  });
});

test('updateLegendRoleFiltersUI sets aria-pressed true on visible role buttons', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#40749E' }, 'meshcore');
    const btn = t.legendRoleButtons.get('meshcore:SENSOR');
    t.activeRoleFilters.clear();
    t.updateLegendRoleFiltersUI();
    assert.equal(btn.getAttribute('aria-pressed'), 'true');
  });
});

test('updateLegendRoleFiltersUI is safe when legendContainer is null', () => {
  withApp((t) => {
    // legendContainer starts null in tests (no map); should not throw
    assert.doesNotThrow(() => t.updateLegendRoleFiltersUI());
  });
});

// ---------------------------------------------------------------------------
// adjustStatsForHiddenProtocols
// ---------------------------------------------------------------------------

test('adjustStatsForHiddenProtocols returns original stats when nothing is hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    const stats = { hour: 10, day: 50, week: 100, month: 200, meshcore: { hour: 2, day: 10, week: 20, month: 40 }, meshtastic: { hour: 8, day: 40, week: 80, month: 160 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result, stats);
  });
});

test('adjustStatsForHiddenProtocols subtracts meshcore counts when meshcore hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshcore');
    const stats = { hour: 10, day: 50, week: 100, month: 200, meshcore: { hour: 2, day: 10, week: 20, month: 40 }, meshtastic: { hour: 8, day: 40, week: 80, month: 160 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 80);
    assert.equal(result.day, 40);
    assert.equal(result.month, 160);
    assert.equal(result.hour, 8);
  });
});

test('adjustStatsForHiddenProtocols subtracts meshtastic counts when meshtastic hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshtastic');
    const stats = { hour: 10, day: 50, week: 100, month: 200, meshcore: { hour: 2, day: 10, week: 20, month: 40 }, meshtastic: { hour: 8, day: 40, week: 80, month: 160 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 20);
    assert.equal(result.day, 10);
  });
});

test('adjustStatsForHiddenProtocols subtracts both when both hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshcore');
    t.hiddenProtocols.add('meshtastic');
    const stats = { hour: 10, day: 50, week: 100, month: 200, meshcore: { hour: 2, day: 10, week: 20, month: 40 }, meshtastic: { hour: 8, day: 40, week: 80, month: 160 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 0);
    assert.equal(result.day, 0);
  });
});

test('adjustStatsForHiddenProtocols floors at zero', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshcore');
    const stats = { hour: 1, day: 5, week: 10, month: 20, meshcore: { hour: 50, day: 50, week: 50, month: 50 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 0);
    assert.equal(result.day, 0);
  });
});

test('adjustStatsForHiddenProtocols handles null stats gracefully', () => {
  withApp((t) => {
    t.hiddenProtocols.add('meshcore');
    assert.equal(t.adjustStatsForHiddenProtocols(null), null);
    assert.equal(t.adjustStatsForHiddenProtocols(undefined), undefined);
  });
});

test('adjustStatsForHiddenProtocols handles missing protocol bucket', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshcore');
    const stats = { hour: 10, day: 50, week: 100, month: 200 };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 100);
  });
});

@@ -0,0 +1,240 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';
import { createDomEnvironment } from './dom-environment.js';
import { initializeApp } from '../main.js';

/** Minimal config that disables auto-refresh so we control timing. */
const BASE_CONFIG = Object.freeze({
  channel: 'Primary',
  frequency: '915MHz',
  refreshMs: 0,
  refreshIntervalSeconds: 0,
  chatEnabled: true,
  mapCenter: { lat: 0, lon: 0 },
  mapZoom: null,
  maxDistanceKm: 0,
  tileFilters: { light: '', dark: '' },
  instancesFeatureEnabled: false,
  instanceDomain: null,
  snapshotWindowSeconds: 3600,
});

/**
 * Build a stubbed fetch that records every call and responds with canned data.
 *
 * @param {Object} responsesByEndpoint Map of URL prefix to JSON response body.
 * @returns {{ fetch: Function, calls: Array<{ url: string, options: Object }> }}
 */
function buildStubFetch(responsesByEndpoint = {}) {
  const calls = [];

  function stubFetch(url, options = {}) {
    calls.push({ url, options });
    for (const [prefix, body] of Object.entries(responsesByEndpoint)) {
      if (url.includes(prefix)) {
        return Promise.resolve({
          ok: true,
          status: 200,
          json: () => Promise.resolve(
            typeof body === 'function' ? body() : body,
          ),
        });
      }
    }
    return Promise.resolve({
      ok: true,
      status: 200,
      json: () => Promise.resolve([]),
    });
  }

  return { fetch: stubFetch, calls };
}

/**
 * Run the test body with a fetch-stubbed app instance.
 *
 * @param {Object} stubResponses Response map for the stub fetch.
 * @param {function(Object): Promise<void>} fn Receives { testUtils, calls }.
 */
async function withStubFetchApp(stubResponses, fn) {
  const env = createDomEnvironment({ includeBody: true });
  const originalFetch = globalThis.fetch;
  const { fetch: stubFetch, calls } = buildStubFetch(stubResponses);
  globalThis.fetch = stubFetch;
  try {
    const { _testUtils } = initializeApp(BASE_CONFIG);
    // Allow the initial refresh() to settle (it is async).
    await new Promise(r => setTimeout(r, 50));
    await fn({ testUtils: _testUtils, calls });
  } finally {
    globalThis.fetch = originalFetch;
    env.cleanup();
  }
}

// ---------------------------------------------------------------------------
// Verify that fetch functions append the since parameter
// ---------------------------------------------------------------------------

test('first refresh does not include since parameter in fetch URLs', async () => {
  await withStubFetchApp({}, ({ calls }) => {
    const apiCalls = calls.filter(c => c.url.startsWith('/api/'));
    assert.ok(apiCalls.length > 0, 'should have made API calls');
    for (const call of apiCalls) {
      assert.ok(
        !call.url.includes('since='),
        `first refresh should not pass since: ${call.url}`,
      );
    }
  });
});

test('second refresh includes since parameter for endpoints with timestamp data', async () => {
  const now = Math.floor(Date.now() / 1000);
  const stubResponses = {
    '/api/nodes': [{ node_id: '!aabb', last_heard: now, short_name: 'AB', role: 'CLIENT' }],
    '/api/messages': [{ id: 1, rx_time: now, from_id: '!aabb', text: 'hello' }],
    '/api/positions': [{ id: 1, node_id: '!aabb', rx_time: now, latitude: 52.5, longitude: 13.4 }],
    '/api/telemetry': [{ id: 1, node_id: '!aabb', rx_time: now, battery_level: 90 }],
    '/api/neighbors': [{ node_id: '!aabb', neighbor_id: '!ccdd', rx_time: now, snr: 10 }],
    '/api/traces': [{ id: 1, rx_time: now, src: 1, dest: 2 }],
  };

  await withStubFetchApp(stubResponses, async ({ testUtils, calls }) => {
    // Verify the first refresh completed without since params
    const firstRoundCalls = [...calls];
    const firstApiCalls = firstRoundCalls.filter(c => c.url.startsWith('/api/'));
    assert.ok(firstApiCalls.length > 0, 'initial refresh should have fired');
    for (const call of firstApiCalls) {
      assert.ok(
        !call.url.includes('since='),
        `first refresh should not pass since: ${call.url}`,
      );
    }

    // Clear the call log and trigger a second refresh
    calls.length = 0;
    await testUtils.refresh();
    await new Promise(r => setTimeout(r, 50));

    // The second refresh should include since= on all data endpoints
    const secondApiCalls = calls.filter(c => c.url.startsWith('/api/'));
    assert.ok(secondApiCalls.length > 0, 'second refresh should have fired');

    const nodeCall = secondApiCalls.find(c => c.url.includes('/api/nodes?'));
    assert.ok(nodeCall, 'should have made a nodes call');
    assert.ok(nodeCall.url.includes('since='), `nodes should include since: ${nodeCall.url}`);

    const posCall = secondApiCalls.find(c => c.url.includes('/api/positions?'));
    assert.ok(posCall, 'should have made a positions call');
    assert.ok(posCall.url.includes('since='), `positions should include since: ${posCall.url}`);

    const telCall = secondApiCalls.find(c => c.url.includes('/api/telemetry?'));
    assert.ok(telCall, 'should have made a telemetry call');
    assert.ok(telCall.url.includes('since='), `telemetry should include since: ${telCall.url}`);

    const nbCall = secondApiCalls.find(c => c.url.includes('/api/neighbors?'));
    assert.ok(nbCall, 'should have made a neighbors call');
    assert.ok(nbCall.url.includes('since='), `neighbors should include since: ${nbCall.url}`);

    const trCall = secondApiCalls.find(c => c.url.includes('/api/traces?'));
    assert.ok(trCall, 'should have made a traces call');
    assert.ok(trCall.url.includes('since='), `traces should include since: ${trCall.url}`);

    const msgCalls = secondApiCalls.filter(c => c.url.includes('/api/messages?'));
    assert.ok(msgCalls.length > 0, 'should have made message calls');
    for (const mc of msgCalls) {
      assert.ok(mc.url.includes('since='), `messages should include since: ${mc.url}`);
    }
  });
});

test('second refresh merges incremental data into existing state', async () => {
  const now = Math.floor(Date.now() / 1000);
  let callCount = 0;

  // First call returns node A, second call returns node B
  const stubResponses = {
    '/api/nodes': () => {
      callCount++;
      if (callCount <= 1) {
        return [{ node_id: '!aaaa', last_heard: now, short_name: 'AA', role: 'CLIENT' }];
      }
      return [{ node_id: '!bbbb', last_heard: now + 60, short_name: 'BB', role: 'CLIENT' }];
    },
  };

  await withStubFetchApp(stubResponses, async ({ testUtils, calls }) => {
    // After the first refresh, the call count should be at least 1
    assert.ok(callCount >= 1, 'first refresh should have fetched nodes');

    // Trigger a second refresh
    calls.length = 0;
    await testUtils.refresh();
    await new Promise(r => setTimeout(r, 50));

    // The second refresh should have merged data
    assert.ok(callCount >= 2, 'second refresh should have fetched nodes again');
  });
});

test('fetch functions use cache: default option', async () => {
  await withStubFetchApp({}, ({ calls }) => {
    const apiCalls = calls.filter(c => c.url.startsWith('/api/'));
    for (const call of apiCalls) {
      assert.equal(
        call.options.cache,
        'default',
        `${call.url} should use cache:default`,
      );
    }
  });
});

test('messages fetch sends encrypted parameter when requested', async () => {
  await withStubFetchApp({}, ({ calls }) => {
    const encryptedCalls = calls.filter(
      c => c.url.includes('/api/messages') && c.url.includes('encrypted=true'),
    );
    assert.ok(encryptedCalls.length > 0, 'should have made encrypted message call');
  });
});

test('since parameter uses a 1-second overlap to avoid missing rows', async () => {
  const now = Math.floor(Date.now() / 1000);
  const stubResponses = {
    '/api/nodes': [{ node_id: '!test', last_heard: now, short_name: 'T', role: 'CLIENT' }],
  };

  await withStubFetchApp(stubResponses, async ({ testUtils, calls }) => {
    calls.length = 0;
    await testUtils.refresh();
    await new Promise(r => setTimeout(r, 50));

    const nodeCall = calls.find(c => c.url.includes('/api/nodes?'));
    assert.ok(nodeCall, 'should have nodes call on second refresh');
    // The since value should be (now - 1) to create the overlap
    const expectedSince = now - 1;
    assert.ok(
      nodeCall.url.includes(`since=${expectedSince}`),
      `expected since=${expectedSince} in URL: ${nodeCall.url}`,
    );
  });
});

@@ -17,118 +17,107 @@
|
||||
import test from 'node:test';
|
||||
import assert from 'node:assert/strict';
|
||||
|
||||
import { createDomEnvironment } from './dom-environment.js';
|
||||
import { initializeApp } from '../main.js';
|
||||
import { withApp, innerHtml } from './main-app-test-helpers.js';
|
||||
|
||||
const MINIMAL_CONFIG = Object.freeze({
|
||||
channel: 'Primary',
|
||||
frequency: '915MHz',
|
||||
refreshMs: 0,
|
||||
refreshIntervalSeconds: 30,
|
||||
chatEnabled: true,
|
||||
mapCenter: { lat: 0, lon: 0 },
|
||||
mapZoom: null,
|
||||
maxDistanceKm: 0,
|
||||
tileFilters: { light: '', dark: '' },
|
||||
instancesFeatureEnabled: false,
|
||||
instanceDomain: null,
|
||||
snapshotWindowSeconds: 3600,
|
||||
// --- buildDisplayContext ---
|
||||
|
||||
test('buildDisplayContext extracts protocol from trace candidate source', () => {
|
||||
withApp((t) => {
|
||||
const entry = {
|
||||
nodeId: '!aabbccdd',
|
||||
trace: { protocol: 'meshcore', node_id: '!aabbccdd' },
|
||||
};
|
||||
const ctx = t.buildDisplayContext(entry);
|
||||
assert.equal(ctx.protocol, 'meshcore', 'protocol must be picked from entry.trace');
|
||||
});
|
||||
});
|
||||
|
||||
/**
|
||||
* Spin up a minimal DOM environment, call initializeApp with a stub config,
|
||||
* and return the inner test utilities alongside an env.cleanup() handle.
|
||||
*
|
||||
* @returns {{ testUtils: Object, cleanup: Function }}
|
||||
*/
|
||||
function setupApp() {
|
||||
const env = createDomEnvironment({ includeBody: true });
|
||||
// themeToggle is accessed without a null guard in initializeApp.
|
||||
env.createElement('button', 'themeToggle');
|
||||
const { _testUtils } = initializeApp(MINIMAL_CONFIG);
|
||||
return { testUtils: _testUtils, cleanup: env.cleanup.bind(env) };
|
||||
}
|
||||
test('buildDisplayContext extracts protocol from node candidate source', () => {
|
||||
withApp((t) => {
|
||||
const entry = {
|
||||
nodeId: '!aabbccdd',
|
||||
node: { protocol: 'meshcore' },
|
||||
};
|
||||
const ctx = t.buildDisplayContext(entry);
|
||||
assert.equal(ctx.protocol, 'meshcore', 'protocol must be picked from entry.node');
|
||||
});
|
||||
});
|
||||
|
||||
test('buildDisplayContext protocol is null when no candidate carries it', () => {
|
||||
withApp((t) => {
|
||||
const entry = { nodeId: '!aabbccdd', node: { short_name: 'X' } };
|
||||
const ctx = t.buildDisplayContext(entry);
|
||||
assert.equal(ctx.protocol, null, 'protocol should be null when absent from all sources');
|
||||
});
|
||||
});
|
||||
|
||||
// --- normalizeOverlaySource ---
|
||||
|
||||
test('normalizeOverlaySource propagates string protocol field', () => {
|
||||
const { testUtils, cleanup } = setupApp();
|
||||
try {
|
||||
const result = testUtils.normalizeOverlaySource({ protocol: 'meshcore' });
|
||||
withApp((t) => {
|
||||
const result = t.normalizeOverlaySource({ protocol: 'meshcore' });
|
||||
assert.equal(result.protocol, 'meshcore');
|
||||
} finally {
|
||||
cleanup();
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
test('normalizeOverlaySource propagates "meshtastic" protocol', () => {
|
||||
const { testUtils, cleanup } = setupApp();
|
||||
try {
|
||||
const result = testUtils.normalizeOverlaySource({ protocol: 'meshtastic' });
|
||||
withApp((t) => {
|
||||
const result = t.normalizeOverlaySource({ protocol: 'meshtastic' });
|
||||
assert.equal(result.protocol, 'meshtastic');
|
||||
} finally {
|
||||
cleanup();
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
test('normalizeOverlaySource omits protocol when absent', () => {
|
||||
const { testUtils, cleanup } = setupApp();
|
||||
try {
|
||||
const result = testUtils.normalizeOverlaySource({ longName: 'Alice' });
|
||||
withApp((t) => {
|
||||
const result = t.normalizeOverlaySource({ longName: 'Alice' });
|
||||
assert.ok(!('protocol' in result), 'protocol should not be set when source has none');
|
||||
} finally {
|
||||
cleanup();
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
test('normalizeOverlaySource omits protocol when value is not a string', () => {
|
||||
const { testUtils, cleanup } = setupApp();
|
||||
try {
|
||||
const result = testUtils.normalizeOverlaySource({ protocol: 42 });
|
||||
withApp((t) => {
|
||||
const result = t.normalizeOverlaySource({ protocol: 42 });
|
||||
assert.ok(!('protocol' in result), 'protocol should not be set for non-string values');
|
||||
} finally {
|
||||
cleanup();
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
// --- buildMapPopupHtml ---
|
||||
|
||||
test('buildMapPopupHtml includes meshtastic icon for null protocol', () => {
|
||||
const { testUtils, cleanup } = setupApp();
|
||||
try {
|
||||
const html = testUtils.buildMapPopupHtml({ long_name: 'Alice', node_id: '!abc123', protocol: null }, 0);
|
||||
assert.ok(html.includes('meshtastic.svg'), 'popup should show meshtastic icon for null protocol');
|
||||
} finally {
|
||||
cleanup();
|
||||
}
|
||||
test('buildMapPopupHtml shows no icon for null protocol', () => {
|
||||
withApp((t) => {
|
||||
const html = t.buildMapPopupHtml({ long_name: 'Alice', node_id: '!abc123', protocol: null }, 0);
|
||||
assert.ok(!html.includes('meshtastic.svg'), 'popup should not show meshtastic icon when protocol is null');
|
||||
assert.ok(!html.includes('meshcore.svg'), 'popup should not show meshcore icon when protocol is null');
|
||||
});
|
||||
});
|
||||
|
||||
test('buildMapPopupHtml includes meshtastic icon for absent protocol', () => {
|
||||
const { testUtils, cleanup } = setupApp();
|
||||
try {
|
||||
const html = testUtils.buildMapPopupHtml({ long_name: 'Bob', node_id: '!abc456' }, 0);
|
||||
assert.ok(html.includes('meshtastic.svg'), 'popup should show meshtastic icon when protocol absent');
|
||||
} finally {
|
||||
cleanup();
|
||||
}
|
||||
test('buildMapPopupHtml shows no icon when protocol is absent', () => {
|
||||
withApp((t) => {
|
||||
const html = t.buildMapPopupHtml({ long_name: 'Bob', node_id: '!abc456' }, 0);
|
||||
assert.ok(!html.includes('meshtastic.svg'), 'popup should not show any icon when protocol is absent');
|
||||
assert.ok(!html.includes('meshcore.svg'), 'popup should not show any icon when protocol is absent');
|
||||
});
|
||||
});
|
||||
|
||||
test('buildMapPopupHtml shows meshtastic icon for explicit meshtastic protocol', () => {
  withApp((t) => {
    const html = t.buildMapPopupHtml({ long_name: 'Alice', node_id: '!abc123', protocol: 'meshtastic' }, 0);
    assert.ok(html.includes('meshtastic.svg'), 'popup should show meshtastic icon for explicit meshtastic protocol');
  });
});

test('buildMapPopupHtml omits meshtastic icon for meshcore protocol', () => {
  withApp((t) => {
    const html = t.buildMapPopupHtml({ long_name: 'Eve', node_id: '!abc789', protocol: 'meshcore' }, 0);
    assert.ok(!html.includes('meshtastic.svg'), 'popup should not show meshtastic icon for meshcore nodes');
  });
});

// --- createAnnouncementEntry ---

test('createAnnouncementEntry prefixes meshtastic icon when protocol is meshtastic', () => {
  withApp((t) => {
    const div = t.createAnnouncementEntry({
      timestampSeconds: 1000,
      shortName: 'ALI',
      longName: 'Alice',
      role: 'CLIENT',
      protocol: 'meshtastic',
      metadataSource: null,
      nodeData: null,
      messageHtml: 'joined the mesh',
    });
    assert.ok(innerHtml(div).includes('meshtastic.svg'), 'announcement should include meshtastic icon');
  });
});

test('createAnnouncementEntry shows no icon when protocol is absent', () => {
  withApp((t) => {
    const div = t.createAnnouncementEntry({
      timestampSeconds: 1000,
      shortName: 'BOB',
      longName: 'Bob',
      role: 'ROUTER',
      metadataSource: null,
      nodeData: null,
      messageHtml: 'detected',
    });
    assert.ok(!innerHtml(div).includes('meshtastic.svg'), 'no meshtastic icon when protocol is absent');
    assert.ok(!innerHtml(div).includes('meshcore.svg'), 'no meshcore icon when protocol is absent');
  });
});

test('createAnnouncementEntry omits meshtastic icon for meshcore protocol', () => {
  withApp((t) => {
    const div = t.createAnnouncementEntry({
      timestampSeconds: 1000,
      shortName: 'MC1',
      longName: 'MeshCore Node',
      role: 'REPEATER',
      protocol: 'meshcore',
      metadataSource: null,
      nodeData: null,
      messageHtml: 'seen',
    });
    assert.ok(!innerHtml(div).includes('meshtastic.svg'), 'announcement for meshcore should not include meshtastic icon');
  });
});

test('createAnnouncementEntry shows meshcore icon for meshcore protocol', () => {
  withApp((t) => {
    const div = t.createAnnouncementEntry({
      timestampSeconds: 1000,
      shortName: 'MC1',
      longName: 'MeshCore Node',
      role: 'REPEATER',
      protocol: 'meshcore',
      metadataSource: null,
      nodeData: null,
      messageHtml: 'seen',
    });
    assert.ok(innerHtml(div).includes('meshcore.svg'), 'announcement for meshcore should include meshcore icon');
  });
});

// --- createMessageChatEntry ---

test('createMessageChatEntry prefixes meshtastic icon when node protocol is meshtastic', () => {
  withApp((t) => {
    const div = t.createMessageChatEntry({
      text: 'hello mesh',
      rx_time: 1000,
      node: { short_name: 'ALI', role: 'CLIENT', protocol: 'meshtastic' },
    });
    assert.ok(innerHtml(div).includes('meshtastic.svg'), 'chat entry should include meshtastic icon');
  });
});

test('createMessageChatEntry shows no icon when node protocol is absent', () => {
  withApp((t) => {
    const div = t.createMessageChatEntry({
      text: 'hi',
      rx_time: 2000,
      node: { short_name: 'BOB', role: 'ROUTER' },
    });
    assert.ok(!innerHtml(div).includes('meshtastic.svg'), 'no meshtastic icon when protocol is absent');
    assert.ok(!innerHtml(div).includes('meshcore.svg'), 'no meshcore icon when protocol is absent');
  });
});

test('createMessageChatEntry omits meshtastic icon for meshcore node', () => {
  withApp((t) => {
    const div = t.createMessageChatEntry({
      text: 'test',
      rx_time: 3000,
      node: { short_name: 'MC1', role: 'REPEATER', protocol: 'meshcore' },
    });
    assert.ok(!innerHtml(div).includes('meshtastic.svg'), 'chat entry for meshcore node should not show meshtastic icon');
  });
});

test('createMessageChatEntry shows meshcore icon for meshcore node', () => {
  withApp((t) => {
    const div = t.createMessageChatEntry({
      text: 'test',
      rx_time: 3000,
      node: { short_name: 'MC1', role: 'REPEATER', protocol: 'meshcore' },
    });
    assert.ok(innerHtml(div).includes('meshcore.svg'), 'chat entry for meshcore node should show meshcore icon');
  });
});

// --- createMessageChatEntry: MeshCore channel message sender resolution ---

/**
 * A MeshCore COMPANION node used as the canonical sender fixture in the
 * channel-message tests below.
 */
const T114_ZEH = { node_id: '!aabbccdd', long_name: 'T114-Zeh', short_name: ' T ', role: 'COMPANION', protocol: 'meshcore' };

/**
 * Build a minimal MeshCore channel message payload for createMessageChatEntry.
 * @param {string} text Message text (typically "SenderName: body" format).
 * @param {object} [overrides] Properties to merge in.
 * @returns {object} Message payload object.
 */
function makeMeshcoreChannelMsg(text, overrides = {}) {
  return { text, rx_time: 1000, protocol: 'meshcore', to_id: '^all', node: null, ...overrides };
}

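// The channel-message tests below assume the renderer splits "Sender: body"
// on the first ": " separator. A minimal sketch of that split, for
// illustration only; splitSenderPrefixSketch is a hypothetical helper and the
// real createMessageChatEntry logic may differ.
function splitSenderPrefixSketch(text) {
  // First ": " marks the boundary between sender name and message body.
  const idx = text.indexOf(': ');
  if (idx === -1) return { sender: null, body: text };
  return { sender: text.slice(0, idx), body: text.slice(idx + 2) };
}
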
test('createMessageChatEntry: meshcore channel message uses sender node short name when found', () => {
  withApp((t) => {
    // Seed a node with a known long_name so findNodeByLongName can resolve it.
    t.rebuildNodeIndex([T114_ZEH]);
    const div = t.createMessageChatEntry(makeMeshcoreChannelMsg('T114-Zeh: Hello world'));
    const html = innerHtml(div);
    // Badge should NOT be the fallback '?' — the node's short_name should be used
    assert.ok(html.includes('T'), 'badge should contain T from derived short name');
    assert.ok(!html.includes('?'), 'badge should not show placeholder question mark');
  });
});

test('createMessageChatEntry: meshcore channel message hides sender long name — only body shown', () => {
  withApp((t) => {
    t.rebuildNodeIndex([T114_ZEH]);
    const div = t.createMessageChatEntry(makeMeshcoreChannelMsg('T114-Zeh: Hello world'));
    const html = innerHtml(div);
    // The sender long name is NOT prepended as a link — only the text after the colon is shown
    assert.ok(html.includes('Hello world'), 'body text after colon should be rendered');
    // Sender name should not appear as a link (href to node page) in the body
    assert.ok(!html.includes('T114-Zeh:'), 'sender long name prefix with colon should not appear in body');
  });
});

test('createMessageChatEntry: meshcore channel message, sender node not found — shows body only', () => {
  withApp((t) => {
    t.rebuildNodeIndex([]); // empty — no nodes known
    const div = t.createMessageChatEntry(makeMeshcoreChannelMsg('UnknownSender: Hello'));
    const html = innerHtml(div);
    // Only the body text is shown; sender name is not prepended as a link
    assert.ok(html.includes('Hello'), 'body text after colon should still be rendered');
    assert.ok(!html.includes('UnknownSender:'), 'sender long name prefix with colon should not appear in body');
    assert.ok(!html.includes('/nodes/'), 'should not produce a node link when sender is not found');
  });
});

test('createMessageChatEntry: meshcore channel message, no colon in text — body unchanged', () => {
  withApp((t) => {
    t.rebuildNodeIndex([T114_ZEH]);
    const div = t.createMessageChatEntry(makeMeshcoreChannelMsg('no colon here'));
    const html = innerHtml(div);
    assert.ok(html.includes('no colon here'), 'body text should be rendered as-is when no sender prefix found');
    assert.ok(!html.includes('/nodes/'), 'should not produce a node link when no colon prefix');
  });
});

test('createMessageChatEntry: meshcore message with @[Name] mention resolved to badge', () => {
  withApp((t) => {
    t.rebuildNodeIndex([
      { ...T114_ZEH, node_id: '!11111111' },
      { node_id: '!22222222', long_name: 'BGruenauBot', short_name: ' BG ', role: 'CLIENT', protocol: 'meshcore' },
    ]);
    const div = t.createMessageChatEntry(makeMeshcoreChannelMsg('BGruenauBot: ack @[T114-Zeh]', { rx_time: 2000 }));
    const html = innerHtml(div);
    // The @[T114-Zeh] mention should render as a short-name badge span
    assert.ok(html.includes('short-name'), 'mention should produce a short-name badge');
    // The sender long name is not prepended as a link in the body
    assert.ok(!html.includes('BGruenauBot:'), 'sender long name prefix with colon should not appear in body');
  });
});

test('createMessageChatEntry: meshcore message with @[Name] mention, node not found — fallback', () => {
  withApp((t) => {
    t.rebuildNodeIndex([]);
    const div = t.createMessageChatEntry(makeMeshcoreChannelMsg('EchoBot: Pong! @[Ghost]', { rx_time: 3000 }));
    const html = innerHtml(div);
    // @[Ghost] mention with no matching node renders as escaped plain text
    assert.ok(html.includes('@[Ghost]'), 'unresolved mention should render as escaped @[Name] text');
  });
});

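// The two mention tests above assume a literal "@[Name]" token shape. A
// sketch of a matcher for that shape, for illustration only;
// findMentionsSketch is a hypothetical helper and the real mention-resolution
// code may differ.
function findMentionsSketch(text) {
  // Capture each Name inside @[...], stopping at the first closing bracket.
  return [...text.matchAll(/@\[([^\]]+)\]/g)].map(m => m[1]);
}
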
test('createMessageChatEntry: meshcore channel message with hydrated node — body only shown', () => {
  // Simulates the case where the ingestor resolved from_id successfully.
  // The node is hydrated (m.node is not null), and the body still has "SenderName: body".
  withApp((t) => {
    t.rebuildNodeIndex([T114_ZEH]);
    // node is already hydrated — ingestor resolved from_id via contacts
    const div = t.createMessageChatEntry(makeMeshcoreChannelMsg('T114-Zeh: Test message', { rx_time: 5000, node: T114_ZEH }));
    const html = innerHtml(div);
    // Only the body text after the colon is shown; sender long name is not prepended as a link
    assert.ok(html.includes('Test message'), 'body text after colon should be rendered');
    assert.ok(!html.includes('T114-Zeh:'), 'sender long name prefix with colon should not appear in body');
  });
});

test('createMessageChatEntry: meshtastic message with @[Name] is NOT resolved as mention', () => {
  withApp((t) => {
    t.rebuildNodeIndex([
      { node_id: '!11111111', long_name: 'Alice', short_name: 'ALCE', role: 'CLIENT', protocol: 'meshtastic' },
    ]);
    const div = t.createMessageChatEntry({
      text: 'hello @[Alice]',
      rx_time: 4000,
      protocol: 'meshtastic',
      node: { short_name: 'ALCE', role: 'CLIENT', protocol: 'meshtastic' },
    });
    const html = innerHtml(div);
    // Meshtastic messages do not process @[Name] — rendered as literal escaped text
    assert.ok(html.includes('@[Alice]'), 'meshtastic @[Name] should be escaped literally, not resolved');
    // Ensure no mention badge was injected (no extra short-name span beyond the sender badge)
    const shortNameCount = (html.match(/short-name/g) || []).length;
    assert.ok(shortNameCount <= 1, 'only the sender badge should be present, no mention badge');
  });
});

// --- renderShortHtml badge padding ---

test('renderShortHtml leaves 4-char ASCII names unpadded', () => {
  withApp(() => {
    const html = globalThis.PotatoMesh.renderShortHtml('0ac7', 'CLIENT');
    assert.ok(!html.includes(' 0ac7'), 'should not add leading space');
    assert.ok(!html.includes('0ac7 '), 'should not add trailing space');
  });
});

test('renderShortHtml adds single space padding for short emoji names', () => {
  withApp(() => {
    const html = globalThis.PotatoMesh.renderShortHtml('\u26A1', 'CLIENT');
    // Should produce " ⚡ " — one leading, one trailing space
    assert.ok(html.includes(' \u26A1 '), 'emoji should have one space on each side');
    // Should NOT have double leading spaces
    assert.ok(!html.includes('  \u26A1'), 'should not double-pad emoji');
  });
});

test('renderShortHtml adds single space padding for surrogate pair emoji', () => {
  withApp(() => {
    const html = globalThis.PotatoMesh.renderShortHtml('\uD83D\uDE43', 'CLIENT');
    // 🙃 is a surrogate pair (length 2 in JS) but 1 grapheme
    assert.ok(html.includes(' \uD83D\uDE43 '), 'surrogate emoji should have one space on each side');
  });
});

test('renderShortHtml adds single space padding for ZWJ emoji sequence', () => {
  withApp(() => {
    const zwj = '\u{1F3C3}\u{200D}\u{2642}\u{FE0F}'; // 🏃‍♂️ — length 5, 1 grapheme
    const html = globalThis.PotatoMesh.renderShortHtml(zwj, 'CLIENT');
    assert.ok(html.includes(` ${zwj} `), 'ZWJ emoji should have one space on each side');
  });
});

test('renderShortHtml adds single space padding for plain 2-char name', () => {
  withApp(() => {
    const html = globalThis.PotatoMesh.renderShortHtml('ab', 'CLIENT');
    assert.ok(html.includes(' ab '), '2-char name should have one space on each side');
  });
});

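// The padding assertions above hinge on grapheme counts, not UTF-16 code-unit
// length. A sketch of a grapheme counter using Intl.Segmenter, for
// illustration only; graphemeCountSketch is a hypothetical helper and
// renderShortHtml's real implementation may differ.
function graphemeCountSketch(s) {
  // Segment into user-perceived characters (extended grapheme clusters).
  const seg = new Intl.Segmenter(undefined, { granularity: 'grapheme' });
  return [...seg.segment(s)].length;
}
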
const NOW = 1_700_000_000;

test('computeLocalActiveNodeStats calculates local hour/day/week/month counts with per-protocol data', () => {
  const nodes = [
    { last_heard: NOW - 60, protocol: 'meshtastic' },
    { last_heard: NOW - 4_000, protocol: 'meshcore' },
    { last_heard: NOW - 90_000, protocol: 'meshtastic' },
    { last_heard: NOW - (8 * 86_400), protocol: 'meshcore' },
    { last_heard: NOW - (20 * 86_400), protocol: 'meshtastic' },
  ];

  const stats = computeLocalActiveNodeStats(nodes, NOW);

  assert.equal(stats.hour, 1);
  assert.equal(stats.day, 2);
  assert.equal(stats.week, 3);
  assert.equal(stats.month, 5);
  assert.equal(stats.sampled, true);
  assert.deepEqual(stats.meshcore, { hour: 0, day: 1, week: 1, month: 2 });
  assert.deepEqual(stats.meshtastic, { hour: 1, day: 1, week: 2, month: 3 });
});

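// The counts above follow from windowing on last_heard: a node is "active"
// within a window when NOW - last_heard fits inside it (hour = 3600 s,
// day = 86400 s, and so on). A sketch of that bucketing, for illustration
// only; countWithinSketch is a hypothetical helper and
// computeLocalActiveNodeStats' real implementation may differ.
function countWithinSketch(nodes, nowSeconds, windowSeconds) {
  // Count nodes whose last_heard falls inside the trailing window.
  return nodes.filter(n => (nowSeconds - n.last_heard) <= windowSeconds).length;
}
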
test('normaliseActiveNodeStatsPayload validates and normalizes API payload', () => {
  const payload = {
    active_nodes: { hour: 11, day: 22, week: 33, month: 44 },
    sampled: false,
  };

  const result = normaliseActiveNodeStatsPayload(payload);
  assert.equal(result.hour, 11);
  assert.equal(result.day, 22);
  assert.equal(result.week, 33);
  assert.equal(result.month, 44);
  assert.equal(result.sampled, false);

  assert.equal(normaliseActiveNodeStatsPayload({}), null);
});

test('normaliseActiveNodeStatsPayload includes per-protocol buckets when present', () => {
  const result = normaliseActiveNodeStatsPayload({
    active_nodes: { hour: 10, day: 20, week: 30, month: 40 },
    meshcore: { hour: 3, day: 8, week: 12, month: 15 },
    meshtastic: { hour: 7, day: 12, week: 18, month: 25 },
    sampled: false,
  });
  assert.deepEqual(result.meshcore, { hour: 3, day: 8, week: 12, month: 15 });
  assert.deepEqual(result.meshtastic, { hour: 7, day: 12, week: 18, month: 25 });
});

test('normaliseActiveNodeStatsPayload rejects malformed stat values', () => {
  assert.equal(
    normaliseActiveNodeStatsPayload({ active_nodes: { hour: 'x', day: 1, week: 1, month: 1 } }),
    null,
  );
});

test('fetchActiveNodeStats falls back to local counts when stats fetch fails', async () => {
  const nodes = [
    { last_heard: NOW - 120, protocol: 'meshtastic' },
    { last_heard: NOW - (10 * 86_400), protocol: 'meshcore' },
  ];
  const fetchImpl = async () => {
    throw new Error('network down');
  };

  const stats = await fetchActiveNodeStats({ nodes, nowSeconds: NOW, fetchImpl });

  assert.equal(stats.hour, 1);
  assert.equal(stats.day, 1);
  assert.equal(stats.week, 1);
  assert.equal(stats.month, 2);
  assert.equal(stats.sampled, true);
  assert.ok(stats.meshcore != null, 'fallback should include meshcore');
  assert.ok(stats.meshtastic != null, 'fallback should include meshtastic');
});

test('fetchActiveNodeStats falls back to local counts on non-OK HTTP responses', async () => {
  const nodes = [{ last_heard: NOW - 120, protocol: 'meshtastic' }];
  const fetchImpl = async () => ({ ok: false, status: 500 });

  const stats = await fetchActiveNodeStats({ nodes, nowSeconds: NOW, fetchImpl });

  assert.equal(stats.sampled, true, 'non-OK response should trigger local fallback');
});

test('fetchActiveNodeStats falls back to local counts on invalid payloads', async () => {
  const fetchImpl = async () => ({ ok: true, json: async () => ({}) });

  const stats = await fetchActiveNodeStats({ nodes: [], nowSeconds: NOW, fetchImpl });

  assert.equal(stats.month, 0);
});

test('formatActiveNodeStatsText emits compact day/week/month footer string', () => {
  const text = formatActiveNodeStatsText({
    channel: 'LongFast',
    frequency: '868MHz',
    stats: { day: 2, week: 3, month: 4, sampled: false },
  });

  assert.equal(text, '2/day \u00b7 3/week \u00b7 4/month');
});

/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';

import { setupApp, setupAppWithOptions } from './main-app-test-helpers.js';

const NOW = 1_700_000_000;

// ---------------------------------------------------------------------------
// updateTitleCount
// ---------------------------------------------------------------------------

test('updateTitleCount does not throw when title and header elements are absent', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    assert.doesNotThrow(() => {
      testUtils.updateTitleCount({ hour: 5, day: 20, week: 42, month: 100, sampled: false });
    });
  } finally {
    cleanup();
  }
});

test('updateTitleCount handles null and undefined stats gracefully', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    assert.doesNotThrow(() => testUtils.updateTitleCount(null));
    assert.doesNotThrow(() => testUtils.updateTitleCount(undefined));
    assert.doesNotThrow(() => testUtils.updateTitleCount({}));
  } finally {
    cleanup();
  }
});

// ---------------------------------------------------------------------------
// updateLegendProtocolCounts
// ---------------------------------------------------------------------------

test('updateLegendProtocolCounts returns early when both count elements are null', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    // Default state: meshcoreCountEl and meshtasticCountEl are null — should not throw.
    assert.doesNotThrow(() => {
      testUtils.updateLegendProtocolCounts({
        week: 10,
        meshcore: { hour: 1, day: 2, week: 3, month: 4 },
        meshtastic: { hour: 5, day: 6, week: 7, month: 8 },
      });
    });
  } finally {
    cleanup();
  }
});

test('updateLegendProtocolCounts sets per-protocol counts when elements are present', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    const mcEl = { textContent: '' };
    const mtEl = { textContent: '' };
    testUtils._setProtocolCountElements(mcEl, mtEl);

    testUtils.updateLegendProtocolCounts({
      week: 3,
      meshcore: { hour: 1, day: 1, week: 2, month: 3 },
      meshtastic: { hour: 0, day: 1, week: 1, month: 2 },
    });

    assert.equal(mcEl.textContent, ' (2)', 'meshcore count should be 2');
    assert.equal(mtEl.textContent, ' (1)', 'meshtastic count should be 1');
  } finally {
    cleanup();
  }
});

test('updateLegendProtocolCounts handles missing per-protocol data gracefully', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    const mcEl = { textContent: '' };
    const mtEl = { textContent: '' };
    testUtils._setProtocolCountElements(mcEl, mtEl);

    // Stats without per-protocol breakdowns (e.g. from an old instance).
    testUtils.updateLegendProtocolCounts({ week: 5 });

    assert.equal(mcEl.textContent, ' (0)');
    assert.equal(mtEl.textContent, ' (0)');
  } finally {
    cleanup();
  }
});

test('updateLegendProtocolCounts works when only meshcoreCountEl is present', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    const mcEl = { textContent: '' };
    testUtils._setProtocolCountElements(mcEl, null);

    testUtils.updateLegendProtocolCounts({
      week: 5,
      meshcore: { hour: 0, day: 0, week: 1, month: 2 },
    });
    assert.equal(mcEl.textContent, ' (1)');
  } finally {
    cleanup();
  }
});

test('updateLegendProtocolCounts works when only meshtasticCountEl is present', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    const mtEl = { textContent: '' };
    testUtils._setProtocolCountElements(null, mtEl);

    testUtils.updateLegendProtocolCounts({
      week: 5,
      meshtastic: { hour: 0, day: 0, week: 1, month: 2 },
    });
    assert.equal(mtEl.textContent, ' (1)');
  } finally {
    cleanup();
  }
});

// ---------------------------------------------------------------------------
// updateFooterStats
// ---------------------------------------------------------------------------

test('updateFooterStats is a no-op when footerActiveNodes element is absent', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    assert.doesNotThrow(() => {
      testUtils.updateFooterStats({ day: 1, week: 2, month: 3, sampled: false });
    });
  } finally {
    cleanup();
  }
});

test('updateFooterStats populates the active-stats element when present', () => {
  const { testUtils, env, cleanup } = setupAppWithOptions({
    extraElements: ['footerActiveNodes'],
  });
  try {
    const el = env.document.getElementById('footerActiveNodes');
    testUtils.updateFooterStats({ day: 10, week: 20, month: 30, sampled: false });

    assert.ok(
      el.textContent.includes('/day'),
      `expected footerActiveNodes to contain "/day", got: ${el.textContent}`,
    );
    assert.ok(
      el.textContent.includes('10/day'),
      `expected footerActiveNodes to contain "10/day", got: ${el.textContent}`,
    );
  } finally {
    cleanup();
  }
});

// ---------------------------------------------------------------------------
// applyProtocolVisibility
// ---------------------------------------------------------------------------

test('applyProtocolVisibility hides meshcore column when meshcore week is 0', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    const mcCol = { style: { display: '' } };
    const mtCol = { style: { display: '' } };
    testUtils._setProtocolColElements(mcCol, mtCol);

    testUtils.applyProtocolVisibility({
      meshcore: { hour: 0, day: 0, week: 0, month: 0 },
      meshtastic: { hour: 1, day: 5, week: 10, month: 20 },
    });

    assert.equal(mcCol.style.display, 'none', 'meshcore column should be hidden');
    assert.equal(mtCol.style.display, '', 'meshtastic column should remain visible');
  } finally {
    cleanup();
  }
});

test('applyProtocolVisibility hides meshtastic column when meshtastic week is 0', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    const mcCol = { style: { display: '' } };
    const mtCol = { style: { display: '' } };
    testUtils._setProtocolColElements(mcCol, mtCol);

    testUtils.applyProtocolVisibility({
      meshcore: { hour: 1, day: 5, week: 10, month: 20 },
      meshtastic: { hour: 0, day: 0, week: 0, month: 0 },
    });

    assert.equal(mcCol.style.display, '', 'meshcore column should remain visible');
    assert.equal(mtCol.style.display, 'none', 'meshtastic column should be hidden');
  } finally {
    cleanup();
  }
});

test('applyProtocolVisibility shows both columns when both protocols have active nodes', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    const mcCol = { style: { display: 'none' } };
    const mtCol = { style: { display: 'none' } };
    testUtils._setProtocolColElements(mcCol, mtCol);

    testUtils.applyProtocolVisibility({
      meshcore: { hour: 1, day: 2, week: 5, month: 10 },
      meshtastic: { hour: 2, day: 3, week: 8, month: 15 },
    });

    assert.equal(mcCol.style.display, '', 'meshcore column should be visible');
    assert.equal(mtCol.style.display, '', 'meshtastic column should be visible');
  } finally {
    cleanup();
  }
});

test('applyProtocolVisibility handles missing per-protocol data gracefully', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    const mcCol = { style: { display: '' } };
    const mtCol = { style: { display: '' } };
    testUtils._setProtocolColElements(mcCol, mtCol);

    // No per-protocol data at all — treat as 0.
    testUtils.applyProtocolVisibility({ week: 5 });

    assert.equal(mcCol.style.display, 'none');
    assert.equal(mtCol.style.display, 'none');
  } finally {
    cleanup();
  }
});

// ---------------------------------------------------------------------------
// restartAutoRefresh
// ---------------------------------------------------------------------------

test('restartAutoRefresh does not start a timer when refreshMs is 0', () => {
  // MINIMAL_CONFIG has refreshMs: 0 — timer must not be armed.
  const origSetInterval = globalThis.setInterval;
  const calls = [];
  globalThis.setInterval = (...args) => { calls.push(args); return origSetInterval(...args); };
  try {
    const { cleanup } = setupApp(); // uses refreshMs: 0
    // restartAutoRefresh is called during init; no timer should have been started.
    assert.equal(calls.length, 0, 'setInterval should not be called with refreshMs=0');
    cleanup();
  } finally {
    globalThis.setInterval = origSetInterval;
  }
});

test('restartAutoRefresh starts a timer when refreshMs > 0', () => {
  const timers = [];
  const origSetInterval = globalThis.setInterval;
  const origClearInterval = globalThis.clearInterval;
  globalThis.setInterval = (fn, ms) => {
    const id = Symbol('timer');
    timers.push({ fn, ms, id });
    return id;
  };
  globalThis.clearInterval = () => {};

  try {
    const { cleanup } = setupAppWithOptions({ configOverrides: { refreshMs: 30_000 } });
    assert.equal(timers.length, 1, 'setInterval should be called once during init');
    assert.equal(timers[0].ms, 30_000, 'interval should match configured refreshMs');
    cleanup();
  } finally {
    globalThis.setInterval = origSetInterval;
    globalThis.clearInterval = origClearInterval;
  }
});

test('restartAutoRefresh clears the existing timer before starting a new one', () => {
  const cleared = [];
  const timers = [];
  const origSetInterval = globalThis.setInterval;
  const origClearInterval = globalThis.clearInterval;
  globalThis.setInterval = (fn, ms) => {
    const id = Symbol('timer');
    timers.push(id);
    return id;
  };
  globalThis.clearInterval = id => { cleared.push(id); };

  try {
    const { testUtils, cleanup } = setupAppWithOptions({ configOverrides: { refreshMs: 30_000 } });
    // One timer started during init.
    assert.equal(timers.length, 1);

    // Calling restartAutoRefresh again must clear the first timer and start a new one.
    testUtils.restartAutoRefresh();
    assert.equal(cleared.length, 1, 'existing timer should be cleared');
    assert.equal(cleared[0], timers[0], 'the original timer id should be cleared');
    assert.equal(timers.length, 2, 'a new timer should be started');
    cleanup();
  } finally {
    globalThis.setInterval = origSetInterval;
|
||||
globalThis.clearInterval = origClearInterval;
|
||||
}
|
||||
});
|
||||
@@ -0,0 +1,381 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';

import { createMapCenterResetHandler, DEFAULT_CENTER_RESET_ZOOM } from '../map-center-reset.js';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/**
 * Build a minimal mock map object whose setView calls are recorded.
 *
 * @returns {{ setView: Function, calls: Array }} Mock map.
 */
function mockMap() {
  const obj = {
    calls: [],
    setView(target, zoom, options) {
      obj.calls.push({ type: 'setView', target, zoom, options });
    }
  };
  return obj;
}

/**
 * Build a minimal autoFitController stub.
 *
 * @param {{ bounds?: object | null }} [opts]
 * @returns {object}
 */
function mockController(opts = {}) {
  const lastFit = opts.bounds !== undefined ? opts.bounds : null;
  const runs = [];
  return {
    runCallCount: 0,
    runs,
    getLastFit() { return lastFit; },
    runAutoFitOperation(fn) {
      this.runCallCount += 1;
      runs.push(fn);
      fn();
    }
  };
}

const CENTER = { lat: 38.76, lon: -27.09 };

// ---------------------------------------------------------------------------
// Constructor validation
// ---------------------------------------------------------------------------

test('createMapCenterResetHandler throws when getMap is not a function', () => {
  assert.throws(() => {
    createMapCenterResetHandler({
      getMap: null,
      autoFitController: mockController(),
      fitMapToBounds: () => {},
      mapCenterCoords: CENTER,
    });
  }, /getMap/);
});

test('createMapCenterResetHandler throws when autoFitController is missing getLastFit', () => {
  assert.throws(() => {
    createMapCenterResetHandler({
      getMap: () => mockMap(),
      autoFitController: {},
      fitMapToBounds: () => {},
      mapCenterCoords: CENTER,
    });
  }, /autoFitController/);
});

test('createMapCenterResetHandler throws when fitMapToBounds is not a function', () => {
  assert.throws(() => {
    createMapCenterResetHandler({
      getMap: () => mockMap(),
      autoFitController: mockController(),
      fitMapToBounds: null,
      mapCenterCoords: CENTER,
    });
  }, /fitMapToBounds/);
});

test('createMapCenterResetHandler throws when mapCenterCoords is invalid', () => {
  assert.throws(() => {
    createMapCenterResetHandler({
      getMap: () => mockMap(),
      autoFitController: mockController(),
      fitMapToBounds: () => {},
      mapCenterCoords: { lat: NaN, lon: 0 },
    });
  }, /mapCenterCoords/);

  assert.throws(() => {
    createMapCenterResetHandler({
      getMap: () => mockMap(),
      autoFitController: mockController(),
      fitMapToBounds: () => {},
      mapCenterCoords: null,
    });
  }, /mapCenterCoords/);
});

// ---------------------------------------------------------------------------
// No-op when map is unavailable
// ---------------------------------------------------------------------------

test('handler returns without throwing when getMap returns null', () => {
  const fitCalls = [];
  const handler = createMapCenterResetHandler({
    getMap: () => null,
    autoFitController: mockController(),
    fitMapToBounds: (...args) => fitCalls.push(args),
    mapCenterCoords: CENTER,
  });
  assert.doesNotThrow(() => handler());
  assert.equal(fitCalls.length, 0);
});

// ---------------------------------------------------------------------------
// Auto-fit checkbox re-enabling
// ---------------------------------------------------------------------------

test('handler enables fitBoundsEl when present and not disabled', () => {
  const dispatched = [];
  const fitBoundsEl = {
    checked: false,
    disabled: false,
    dispatchEvent(e) { dispatched.push(e.type); }
  };
  // Provide a lastFit so the fallback setView path does not run — this test
  // is only asserting the checkbox re-enable behaviour.
  const lastFit = { bounds: [[0, 0], [1, 1]], options: { paddingPx: 12 } };
  const controller = mockController({ bounds: lastFit });
  const handler = createMapCenterResetHandler({
    getMap: () => mockMap(),
    autoFitController: controller,
    fitBoundsEl,
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
  });
  handler();
  assert.equal(fitBoundsEl.checked, true);
  assert.equal(controller.runCallCount, 1);
  assert.deepEqual(dispatched, ['change']);
});

test('handler dispatches a bubbling change event when re-enabling fitBoundsEl', () => {
  let capturedEvent = null;
  const fitBoundsEl = {
    checked: false,
    disabled: false,
    dispatchEvent(e) { capturedEvent = e; }
  };
  const handler = createMapCenterResetHandler({
    getMap: () => mockMap(),
    autoFitController: mockController(),
    fitBoundsEl,
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
  });
  handler();
  assert.ok(capturedEvent, 'expected a change event to be dispatched');
  assert.equal(capturedEvent.type, 'change');
  assert.equal(capturedEvent.bubbles, true);
});

test('handler does not modify fitBoundsEl.checked when element is disabled', () => {
  const fitBoundsEl = { checked: false, disabled: true };
  // Provide a lastFit so the fallback setView path does not run — this test
  // is only asserting the checkbox non-modification when disabled.
  const lastFit = { bounds: [[0, 0], [1, 1]], options: { paddingPx: 12 } };
  const controller = mockController({ bounds: lastFit });
  const handler = createMapCenterResetHandler({
    getMap: () => mockMap(),
    autoFitController: controller,
    fitBoundsEl,
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
  });
  handler();
  assert.equal(fitBoundsEl.checked, false);
  assert.equal(controller.runCallCount, 0);
});

test('handler does not throw when fitBoundsEl is null', () => {
  const handler = createMapCenterResetHandler({
    getMap: () => mockMap(),
    autoFitController: mockController(),
    fitBoundsEl: null,
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
  });
  assert.doesNotThrow(() => handler());
});

test('handler does not throw when fitBoundsEl is omitted', () => {
  const handler = createMapCenterResetHandler({
    getMap: () => mockMap(),
    autoFitController: mockController(),
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
  });
  assert.doesNotThrow(() => handler());
});

// ---------------------------------------------------------------------------
// Last-fit path
// ---------------------------------------------------------------------------

test('handler calls fitMapToBounds with last-fit bounds when available', () => {
  const fakeBounds = [[1, 2], [3, 4]];
  const lastFit = { bounds: fakeBounds, options: { paddingPx: 12, maxZoom: 13 } };
  const fitCalls = [];
  const map = mockMap();
  const handler = createMapCenterResetHandler({
    getMap: () => map,
    autoFitController: mockController({ bounds: lastFit }),
    fitMapToBounds: (bounds, opts) => fitCalls.push({ bounds, opts }),
    mapCenterCoords: CENTER,
  });
  handler();
  assert.equal(fitCalls.length, 1);
  assert.deepEqual(fitCalls[0].bounds, fakeBounds);
  assert.equal(fitCalls[0].opts.animate, true);
  assert.equal(fitCalls[0].opts.paddingPx, 12);
  assert.equal(fitCalls[0].opts.maxZoom, 13);
  // setView must NOT be called when last-fit path is taken
  assert.equal(map.calls.length, 0);
});

test('handler forwards paddingPx and maxZoom from last-fit options', () => {
  const lastFit = { bounds: [[0, 0], [1, 1]], options: { paddingPx: 8 } };
  const fitCalls = [];
  const handler = createMapCenterResetHandler({
    getMap: () => mockMap(),
    autoFitController: mockController({ bounds: lastFit }),
    fitMapToBounds: (b, o) => fitCalls.push(o),
    mapCenterCoords: CENTER,
  });
  handler();
  assert.equal(fitCalls[0].paddingPx, 8);
  assert.equal(fitCalls[0].maxZoom, undefined);
});

// ---------------------------------------------------------------------------
// Fallback path (no last fit recorded)
// ---------------------------------------------------------------------------

test('handler calls setView with configured centre when no last fit exists', () => {
  const map = mockMap();
  const controller = mockController({ bounds: null });
  const handler = createMapCenterResetHandler({
    getMap: () => map,
    autoFitController: controller,
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
    mapZoomOverride: null,
  });
  handler();
  assert.equal(map.calls.length, 1);
  assert.deepEqual(map.calls[0].target, [CENTER.lat, CENTER.lon]);
  assert.equal(map.calls[0].zoom, DEFAULT_CENTER_RESET_ZOOM);
  assert.deepEqual(map.calls[0].options, { animate: true });
});

test('fallback setView is wrapped in runAutoFitOperation to prevent movestart/zoomstart unchecking auto-fit', () => {
  // Without the wrapper, the programmatic setView triggers movestart which calls
  // handleUserInteraction, undoing the auto-fit re-enable. runAutoFitOperation
  // sets autoFitInProgress=true so handleUserInteraction returns early.
  const map = mockMap();
  const controller = mockController({ bounds: null });
  const handler = createMapCenterResetHandler({
    getMap: () => map,
    autoFitController: controller,
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
  });
  handler();
  // runAutoFitOperation must have been called at least once for the setView fallback
  assert.ok(controller.runCallCount >= 1, 'expected runAutoFitOperation to be called');
  assert.equal(map.calls.length, 1);
});

test('fallback uses mapZoomOverride when provided', () => {
  const map = mockMap();
  const handler = createMapCenterResetHandler({
    getMap: () => map,
    autoFitController: mockController({ bounds: null }),
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
    mapZoomOverride: 15,
  });
  handler();
  assert.equal(map.calls[0].zoom, 15);
});

test('fallback uses DEFAULT_CENTER_RESET_ZOOM when mapZoomOverride is null', () => {
  const map = mockMap();
  const handler = createMapCenterResetHandler({
    getMap: () => map,
    autoFitController: mockController({ bounds: null }),
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
    mapZoomOverride: null,
  });
  handler();
  assert.equal(map.calls[0].zoom, DEFAULT_CENTER_RESET_ZOOM);
});

test('fallback uses DEFAULT_CENTER_RESET_ZOOM when mapZoomOverride is zero', () => {
  const map = mockMap();
  const handler = createMapCenterResetHandler({
    getMap: () => map,
    autoFitController: mockController({ bounds: null }),
    fitMapToBounds: () => {},
    mapCenterCoords: CENTER,
    mapZoomOverride: 0,
  });
  handler();
  assert.equal(map.calls[0].zoom, DEFAULT_CENTER_RESET_ZOOM);
});

// ---------------------------------------------------------------------------
// Mutual exclusivity
// ---------------------------------------------------------------------------

test('fitMapToBounds and setView are mutually exclusive per invocation (last fit wins)', () => {
  const fitCalls = [];
  const map = mockMap();
  const lastFit = { bounds: [[0, 0], [1, 1]], options: { paddingPx: 12 } };
  const handler = createMapCenterResetHandler({
    getMap: () => map,
    autoFitController: mockController({ bounds: lastFit }),
    fitMapToBounds: (...args) => fitCalls.push(args),
    mapCenterCoords: CENTER,
  });
  handler();
  assert.equal(fitCalls.length, 1);
  assert.equal(map.calls.length, 0);
});

test('fitMapToBounds and setView are mutually exclusive per invocation (fallback wins)', () => {
  const fitCalls = [];
  const map = mockMap();
  const handler = createMapCenterResetHandler({
    getMap: () => map,
    autoFitController: mockController({ bounds: null }),
    fitMapToBounds: (...args) => fitCalls.push(args),
    mapCenterCoords: CENTER,
  });
  handler();
  assert.equal(fitCalls.length, 0);
  assert.equal(map.calls.length, 1);
});

// ---------------------------------------------------------------------------
// DEFAULT_CENTER_RESET_ZOOM export
// ---------------------------------------------------------------------------

test('DEFAULT_CENTER_RESET_ZOOM is a positive finite number', () => {
  assert.ok(Number.isFinite(DEFAULT_CENTER_RESET_ZOOM));
  assert.ok(DEFAULT_CENTER_RESET_ZOOM > 0);
});
@@ -0,0 +1,168 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';

import {
  parseMeshcoreSenderPrefix,
  findNodeByLongName,
} from '../meshcore-chat-helpers.js';

// ---------------------------------------------------------------------------
// parseMeshcoreSenderPrefix
// ---------------------------------------------------------------------------

test('parseMeshcoreSenderPrefix: typical message', () => {
  const result = parseMeshcoreSenderPrefix('T114-Zeh: Hello world');
  assert.deepEqual(result, { senderName: 'T114-Zeh', bodyText: 'Hello world' });
});

test('parseMeshcoreSenderPrefix: trims whitespace around sender and body', () => {
  const result = parseMeshcoreSenderPrefix(' Alice : body text ');
  assert.deepEqual(result, { senderName: 'Alice', bodyText: 'body text' });
});

test('parseMeshcoreSenderPrefix: empty body after colon', () => {
  const result = parseMeshcoreSenderPrefix('Sender:');
  assert.deepEqual(result, { senderName: 'Sender', bodyText: '' });
});

test('parseMeshcoreSenderPrefix: no colon returns null', () => {
  assert.equal(parseMeshcoreSenderPrefix('No colon here'), null);
});

test('parseMeshcoreSenderPrefix: empty string returns null', () => {
  assert.equal(parseMeshcoreSenderPrefix(''), null);
});

test('parseMeshcoreSenderPrefix: null input returns null', () => {
  assert.equal(parseMeshcoreSenderPrefix(null), null);
});

test('parseMeshcoreSenderPrefix: undefined input returns null', () => {
  assert.equal(parseMeshcoreSenderPrefix(undefined), null);
});

test('parseMeshcoreSenderPrefix: non-string input returns null', () => {
  assert.equal(parseMeshcoreSenderPrefix(42), null);
});

test('parseMeshcoreSenderPrefix: colon first (empty sender) returns null', () => {
  assert.equal(parseMeshcoreSenderPrefix(':body'), null);
});

test('parseMeshcoreSenderPrefix: whitespace-only sender returns null', () => {
  assert.equal(parseMeshcoreSenderPrefix(' : body'), null);
});

test('parseMeshcoreSenderPrefix: only first colon separates sender from body', () => {
  const result = parseMeshcoreSenderPrefix('A:B:C');
  assert.deepEqual(result, { senderName: 'A', bodyText: 'B:C' });
});

test('parseMeshcoreSenderPrefix: colons in body are preserved intact', () => {
  const result = parseMeshcoreSenderPrefix('BGruenauBot/OBS+: ack @[T114-Zeh] | 80,42,68 (3 hops) | 62.6km');
  assert.deepEqual(result, {
    senderName: 'BGruenauBot/OBS+',
    bodyText: 'ack @[T114-Zeh] | 80,42,68 (3 hops) | 62.6km',
  });
});

test('parseMeshcoreSenderPrefix: sender with slash and plus preserved', () => {
  const result = parseMeshcoreSenderPrefix('mEDI | Linux: Pong! T114-Zeh');
  assert.deepEqual(result, { senderName: 'mEDI | Linux', bodyText: 'Pong! T114-Zeh' });
});

// ---------------------------------------------------------------------------
// findNodeByLongName
// ---------------------------------------------------------------------------

/** Shared single-entry map used across the findNodeByLongName tests below. */
function makeAliceMap(nodeOverride = {}) {
  const node = { node_id: '!aabbccdd', long_name: 'Alice', ...nodeOverride };
  return { node, map: new Map([['!aabbccdd', node]]) };
}

test('findNodeByLongName: exact match on snake_case long_name', () => {
  const { node, map } = makeAliceMap();
  assert.equal(findNodeByLongName('Alice', map), node);
});

test('findNodeByLongName: exact match on camelCase longName', () => {
  const node = { node_id: '!aabbccdd', longName: 'Alice', role: 'CLIENT' };
  const map = new Map([['!aabbccdd', node]]);
  assert.equal(findNodeByLongName('Alice', map), node);
});

test('findNodeByLongName: no match returns null', () => {
  const { map } = makeAliceMap();
  assert.equal(findNodeByLongName('Unknown', map), null);
});

test('findNodeByLongName: null longName returns null', () => {
  const { map } = makeAliceMap();
  assert.equal(findNodeByLongName(null, map), null);
});

test('findNodeByLongName: undefined longName returns null', () => {
  const { map } = makeAliceMap();
  assert.equal(findNodeByLongName(undefined, map), null);
});

test('findNodeByLongName: empty string longName returns null', () => {
  const { map } = makeAliceMap();
  assert.equal(findNodeByLongName('', map), null);
});

test('findNodeByLongName: non-Map nodesById returns null', () => {
  assert.equal(findNodeByLongName('Alice', {}), null);
});

test('findNodeByLongName: array nodesById returns null', () => {
  assert.equal(findNodeByLongName('Alice', []), null);
});

test('findNodeByLongName: empty Map returns null', () => {
  assert.equal(findNodeByLongName('Alice', new Map()), null);
});

test('findNodeByLongName: case-sensitive — lowercase mismatch returns null', () => {
  const { map } = makeAliceMap();
  assert.equal(findNodeByLongName('alice', map), null);
});

test('findNodeByLongName: multiple nodes — returns correct one', () => {
  const nodeA = { node_id: '!11111111', long_name: 'Alpha' };
  const nodeB = { node_id: '!22222222', long_name: 'Beta' };
  const map = new Map([['!11111111', nodeA], ['!22222222', nodeB]]);
  assert.equal(findNodeByLongName('Beta', map), nodeB);
  assert.equal(findNodeByLongName('Alpha', map), nodeA);
});

test('findNodeByLongName: prefers snake_case when both properties exist', () => {
  const node = { node_id: '!aabbccdd', long_name: 'Alice', longName: 'Different' };
  const map = new Map([['!aabbccdd', node]]);
  // long_name takes precedence via the ?? chain; should match 'Alice'
  assert.equal(findNodeByLongName('Alice', map), node);
  assert.equal(findNodeByLongName('Different', map), null);
});

test('findNodeByLongName: node with null long_name is skipped', () => {
  const node = { node_id: '!aabbccdd', long_name: null };
  const map = new Map([['!aabbccdd', node]]);
  assert.equal(findNodeByLongName('Alice', map), null);
});
@@ -21,6 +21,7 @@ import {
|
||||
buildMessageBody,
|
||||
buildMessageIndex,
|
||||
normaliseMessageId,
|
||||
renderLiteralWithLinks,
|
||||
resolveReplyPrefix
|
||||
} from '../message-replies.js';
|
||||
|
||||
@@ -151,3 +152,214 @@ test('buildMessageBody appends reaction counts for REACTION_APP packets without
|
||||
|
||||
assert.equal(body, 'EMOJI(🌶) ESC(×2)');
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// buildMessageBody — renderMentionHtml callback
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
// Shared mock helpers reused across the mention-callback tests below.
|
||||
const esc = v => `ESC(${v})`;
|
||||
const emoji = v => `EMOJI(${v})`;
|
||||
const badge = name => `BADGE(${name})`;
|
||||
|
||||
test('buildMessageBody throws TypeError when renderMentionHtml is not a function', () => {
|
||||
assert.throws(
|
||||
() => buildMessageBody({
|
||||
message: { text: 'hello' },
|
||||
escapeHtml: v => v,
|
||||
renderEmojiHtml: v => v,
|
||||
renderMentionHtml: 42,
|
||||
}),
|
||||
{ name: 'TypeError', message: 'renderMentionHtml must be a function when provided' }
|
||||
);
|
||||
});
|
||||
|
||||
test('buildMessageBody without renderMentionHtml escapes @[Name] literally', () => {
|
||||
const body = buildMessageBody({
|
||||
message: { text: 'hello @[Alice]' },
|
||||
escapeHtml: esc,
|
||||
renderEmojiHtml: emoji,
|
||||
});
|
||||
assert.equal(body, 'ESC(hello @[Alice])');
|
||||
});
|
||||
|
||||
test('buildMessageBody with renderMentionHtml replaces single @[Name] mention', () => {
|
||||
const body = buildMessageBody({
|
||||
message: { text: 'hi @[Alice] there' },
|
||||
escapeHtml: esc,
|
||||
renderEmojiHtml: emoji,
|
||||
renderMentionHtml: badge,
|
||||
});
|
||||
assert.equal(body, 'ESC(hi )BADGE(Alice)ESC( there)');
|
||||
});
|
||||
|
||||
test('buildMessageBody with renderMentionHtml handles multiple mentions', () => {
|
||||
const calls = [];
|
||||
const body = buildMessageBody({
|
||||
message: { text: '@[A] and @[B]' },
|
||||
escapeHtml: esc,
|
||||
renderEmojiHtml: emoji,
|
||||
renderMentionHtml: (name) => { calls.push(name); return `BADGE(${name})`; },
|
||||
});
|
||||
assert.deepEqual(calls, ['A', 'B']);
|
||||
assert.equal(body, 'BADGE(A)ESC( and )BADGE(B)');
|
||||
});
|
||||
|
||||
test('buildMessageBody with renderMentionHtml escapes literal segments', () => {
|
||||
const body = buildMessageBody({
|
||||
message: { text: '<b> @[Alice]' },
|
||||
escapeHtml: v => v.replace(/</g, '<').replace(/>/g, '>'),
|
||||
renderEmojiHtml: emoji,
|
||||
renderMentionHtml: badge,
|
||||
});
|
||||
assert.equal(body, '<b> BADGE(Alice)');
|
||||
});
|
||||
|
||||
test('buildMessageBody with renderMentionHtml at start of text', () => {
|
||||
const body = buildMessageBody({
|
||||
message: { text: '@[Alice] hello' },
|
||||
escapeHtml: esc,
|
||||
renderEmojiHtml: emoji,
|
||||
renderMentionHtml: badge,
|
||||
});
|
||||
assert.equal(body, 'BADGE(Alice)ESC( hello)');
|
||||
});
|
||||
|
||||
test('buildMessageBody with renderMentionHtml at end of text', () => {
|
||||
const body = buildMessageBody({
|
||||
message: { text: 'hello @[Alice]' },
|
||||
escapeHtml: esc,
|
||||
renderEmojiHtml: emoji,
|
||||
renderMentionHtml: badge,
|
||||
});
|
||||
assert.equal(body, 'ESC(hello )BADGE(Alice)');
|
||||
});
|
||||
|
||||
test('buildMessageBody with renderMentionHtml: no mentions, callback not invoked', () => {
|
||||
let called = false;
|
||||
const body = buildMessageBody({
|
||||
message: { text: 'plain text' },
|
||||
escapeHtml: esc,
|
||||
renderEmojiHtml: emoji,
|
||||
renderMentionHtml: () => { called = true; return 'BADGE'; },
|
||||
});
|
||||
assert.equal(called, false);
|
||||
assert.equal(body, 'ESC(plain text)');
|
||||
});
|
||||
|
||||
test('buildMessageBody with renderMentionHtml: null renderMentionHtml behaves like no callback', () => {
|
||||
const body = buildMessageBody({
|
||||
message: { text: 'hi @[Alice]' },
|
||||
escapeHtml: esc,
|
||||
renderEmojiHtml: emoji,
|
||||
renderMentionHtml: null,
|
||||
});
|
||||
assert.equal(body, 'ESC(hi @[Alice])');
|
||||
});
|
||||
|
||||
test('buildMessageBody reaction path unaffected by renderMentionHtml', () => {
|
||||
const reaction = { text: '1', emoji: '👍', portnum: 'REACTION_APP' };
|
||||
let called = false;
|
||||
const body = buildMessageBody({
|
||||
message: reaction,
|
||||
escapeHtml: esc,
|
||||
renderEmojiHtml: emoji,
|
||||
renderMentionHtml: () => { called = true; return 'BADGE'; },
|
||||
});
|
||||
assert.equal(called, false);
|
||||
assert.equal(body, 'EMOJI(👍)');
|
||||
});
|
||||
|
||||
test('buildMessageBody with renderMentionHtml: unclosed @[ treated as literal', () => {
|
||||
const body = buildMessageBody({
|
||||
message: { text: 'hello @[unclosed' },
|
||||
escapeHtml: esc,
|
||||
renderEmojiHtml: emoji,
|
||||
renderMentionHtml: () => 'BADGE',
|
||||
});
|
||||
// @[ without closing ] does not match the pattern — treated as literal
|
||||
assert.equal(body, 'ESC(hello @[unclosed)');
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// renderLiteralWithLinks — URL detection
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const e = v => `E(${v})`;
|
||||
|
||||
test('renderLiteralWithLinks passes plain text through escapeHtml', () => {
|
||||
assert.equal(renderLiteralWithLinks('hello world', e), 'E(hello world)');
|
||||
});
|
||||
|
||||
test('renderLiteralWithLinks wraps http:// URL in an anchor element', () => {
|
||||
const result = renderLiteralWithLinks('check http://example.com out', e);
|
||||
  assert.equal(result, 'E(check )<a href="E(http://example.com)" target="_blank" rel="noopener noreferrer">E(http://example.com)</a>E( out)');
});

test('renderLiteralWithLinks wraps https:// URL in an anchor element', () => {
  const result = renderLiteralWithLinks('see https://example.com/path?q=1', e);
  assert.ok(result.includes('<a href='), 'should produce an anchor');
  assert.ok(result.includes('target="_blank"'), 'should open in new tab');
  assert.ok(result.includes('rel="noopener noreferrer"'), 'should include noopener rel');
});

test('renderLiteralWithLinks strips trailing period from URL', () => {
  const result = renderLiteralWithLinks('visit https://example.com.', e);
  assert.ok(result.includes('href="E(https://example.com)"'), 'period should not be in href');
  assert.ok(result.includes('>E(https://example.com)<'), 'period should not be in link text');
  assert.ok(result.endsWith('E(.)'), 'trailing period should appear as escaped text after the link');
});

test('renderLiteralWithLinks strips trailing comma from URL', () => {
  const result = renderLiteralWithLinks('go to https://example.com, then stop', e);
  assert.ok(result.includes('href="E(https://example.com)"'), 'comma must not be in href');
});

test('renderLiteralWithLinks handles URL at the start of text', () => {
  const result = renderLiteralWithLinks('https://example.com is great', e);
  assert.ok(result.startsWith('<a href='), 'anchor should be at start');
  assert.ok(result.endsWith('E( is great)'), 'text after URL should be escaped');
});

test('renderLiteralWithLinks handles URL at the end of text', () => {
  const result = renderLiteralWithLinks('see https://example.com', e);
  assert.ok(result.startsWith('E(see )'), 'text before URL should be escaped');
  assert.ok(result.includes('<a href='), 'URL should be linked');
});

test('renderLiteralWithLinks handles multiple URLs in text', () => {
  const result = renderLiteralWithLinks('a https://foo.com b https://bar.com c', e);
  const matches = result.match(/<a href=/g) || [];
  assert.equal(matches.length, 2, 'should produce two anchors');
});

test('renderLiteralWithLinks does not linkify non-http schemes', () => {
  const result = renderLiteralWithLinks('ftp://example.com', e);
  assert.ok(!result.includes('<a href='), 'ftp:// should not be linkified');
  assert.equal(result, 'E(ftp://example.com)');
});

test('renderLiteralWithLinks returns empty string for empty input', () => {
  assert.equal(renderLiteralWithLinks('', e), '');
});

test('buildMessageBody linkifies URLs in message text without renderMentionHtml', () => {
  const body = buildMessageBody({
    message: { text: 'visit https://example.com now' },
    escapeHtml: e,
    renderEmojiHtml: v => `EMOJI(${v})`,
  });
  assert.ok(body.includes('<a href='), 'URL should be linkified');
  assert.ok(body.includes('target="_blank"'), 'should open in new tab');
});

test('buildMessageBody linkifies URLs alongside @[Name] mentions', () => {
  const body = buildMessageBody({
    message: { text: '@[Alice] see https://example.com' },
    escapeHtml: e,
    renderEmojiHtml: v => `EMOJI(${v})`,
    renderMentionHtml: name => `BADGE(${name})`,
  });
  assert.ok(body.startsWith('BADGE(Alice)'), 'mention should be rendered as badge');
  assert.ok(body.includes('<a href='), 'URL should be linkified');
});

@@ -109,7 +109,7 @@ test('refreshNodeInformation merges telemetry metrics when the base node lacks t

  assert.equal(calls.length, 4);
  calls.forEach(call => {
    assert.deepEqual(call.options, { cache: 'no-store' });
    assert.deepEqual(call.options, { cache: 'default' });
  });
});

@@ -336,6 +336,7 @@ test('merge helpers combine node, telemetry, and position data', () => {
  assert.equal(node.nodeId, '!node');
  assert.equal(node.nodeNum, 55);
  assert.equal(node.shortName, 'NODE');
  assert.equal(node.protocol, undefined); // no protocol in this fixture
  assert.equal(node.battery, 50);
  assert.equal(node.voltage, 3.8);
  assert.equal(node.lastHeard, 1_200);
@@ -350,6 +351,31 @@ test('merge helpers combine node, telemetry, and position data', () => {
  assert.ok(node.position);
});

test('mergeNodeFields propagates the protocol field', () => {
  const node = {};
  mergeNodeFields(node, { node_id: '!abc', protocol: 'meshcore' });
  assert.equal(node.protocol, 'meshcore');
});

test('mergeNodeFields does not overwrite an existing protocol with absent value', () => {
  const node = { protocol: 'meshcore' };
  mergeNodeFields(node, { node_id: '!abc' });
  assert.equal(node.protocol, 'meshcore');
});

test('refreshNodeInformation surfaces protocol from the node API record', async () => {
  const responses = new Map([
    ['/api/nodes/!proto?limit=7', createResponse(200, [{ node_id: '!proto', short_name: 'PT', protocol: 'meshcore' }])],
    ['/api/telemetry/!proto?limit=1000', createResponse(404, {})],
    ['/api/positions/!proto?limit=7', createResponse(404, {})],
    ['/api/neighbors/!proto?limit=1000', createResponse(404, {})],
  ]);
  const fetchImpl = async url => responses.get(url) ?? createResponse(404, {});
  const node = await refreshNodeInformation('!proto', { fetchImpl });
  assert.equal(node.protocol, 'meshcore');
  assert.equal(node.rawSources.node.protocol, 'meshcore');
});

test('normalizeReference extracts identifiers and tolerates malformed fallback payloads', () => {
  const originalWarn = console.warn;
  const warnings = [];

@@ -97,9 +97,10 @@ test('renderNodeLongNameLink returns empty string for null/empty longName', () =
  assert.equal(renderNodeLongNameLink(' ', '!abc123'), '');
});

test('renderNodeLongNameLink prepends Meshtastic icon for null protocol', () => {
test('renderNodeLongNameLink shows no icon for null protocol', () => {
  const html = renderNodeLongNameLink('Alice', '!abc123', { protocol: null });
  assert.ok(html.includes('meshtastic.svg'), 'should include meshtastic icon');
  assert.ok(!html.includes('meshtastic.svg'), 'no meshtastic icon for null protocol');
  assert.ok(!html.includes('meshcore.svg'), 'no meshcore icon for null protocol');
  assert.ok(html.includes('Alice'), 'should include the name');
});

@@ -108,9 +109,10 @@ test('renderNodeLongNameLink prepends Meshtastic icon for "meshtastic" protocol'
  assert.ok(html.includes('meshtastic.svg'), 'should include meshtastic icon');
});

test('renderNodeLongNameLink prepends Meshtastic icon for undefined protocol (default)', () => {
test('renderNodeLongNameLink shows no icon for absent protocol (default)', () => {
  const html = renderNodeLongNameLink('Alice', '!abc123');
  assert.ok(html.includes('meshtastic.svg'), 'default protocol should show meshtastic icon');
  assert.ok(!html.includes('meshtastic.svg'), 'no meshtastic icon when protocol is absent');
  assert.ok(!html.includes('meshcore.svg'), 'no meshcore icon when protocol is absent');
});

test('renderNodeLongNameLink does not prepend icon for "meshcore" protocol', () => {
@@ -133,11 +135,12 @@ test('renderNodeLongNameLink renders anchor with href when identifier is present
  assert.ok(html.includes('data-node-id="!abc123"'), 'should include node id attribute');
});

test('renderNodeLongNameLink renders plain text (with icon) when no identifier', () => {
test('renderNodeLongNameLink renders plain text (no icon) when no identifier and null protocol', () => {
  const html = renderNodeLongNameLink('Alice', null, { protocol: null });
  assert.ok(!html.startsWith('<a '), 'should not be an anchor');
  assert.ok(html.includes('Alice'), 'should include the name');
  assert.ok(html.includes('meshtastic.svg'), 'should still show meshtastic icon');
  assert.ok(!html.includes('meshtastic.svg'), 'no meshtastic icon for null protocol');
  assert.ok(!html.includes('meshcore.svg'), 'no meshcore icon for null protocol');
});

test('renderNodeLongNameLink escapes HTML in long name', () => {

@@ -16,7 +16,16 @@

import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { extractModemMetadata, formatLoraFrequencyMHz, formatModemDisplay, __testUtils } from '../node-modem-metadata.js';
import {
  extractModemMetadata,
  formatLoraFrequencyMHz,
  formatModemDisplay,
  resolveMeshcorePresetDisplay,
  formatPresetDisplay,
  __testUtils,
} from '../node-modem-metadata.js';

const { toTrimmedString, parseMeshcorePresetTokens, bwToShortCode } = __testUtils;

describe('node-modem-metadata', () => {
  it('extracts modem preset and frequency from mixed payloads', () => {
@@ -42,6 +51,12 @@ describe('node-modem-metadata', () => {
    });
  });

  it('returns null metadata for null and non-object input', () => {
    assert.deepEqual(extractModemMetadata(null), { modemPreset: null, loraFreq: null });
    assert.deepEqual(extractModemMetadata('string'), { modemPreset: null, loraFreq: null });
    assert.deepEqual(extractModemMetadata(42), { modemPreset: null, loraFreq: null });
  });

  it('formats positive frequencies with MHz suffix', () => {
    assert.equal(formatLoraFrequencyMHz(915), '915MHz');
    assert.equal(formatLoraFrequencyMHz(867.5), '867.5MHz');
@@ -49,17 +64,141 @@ describe('node-modem-metadata', () => {
    assert.equal(formatLoraFrequencyMHz(null), null);
  });

  it('combines preset and frequency for overlay display', () => {
  it('combines preset and frequency for overlay display — Meshtastic named preset', () => {
    assert.equal(formatModemDisplay('MediumFast', 868), 'MediumFast (868MHz)');
    assert.equal(formatModemDisplay('ShortSlow', null), 'ShortSlow');
    assert.equal(formatModemDisplay(null, 433), '433MHz');
    assert.equal(formatModemDisplay(undefined, undefined), null);
  });

  it('combines named MeshCore preset and frequency for overlay display', () => {
    assert.equal(formatModemDisplay('SF10/BW250/CR5', 868), 'AU/NZ Wide (868MHz)');
  });

  it('handles string frequency in formatModemDisplay for MeshCore presets', () => {
    // frequency is a string here; exercises the Number(frequency) branch.
    assert.equal(formatModemDisplay('SF10/BW250/CR5', '915'), 'AU/NZ Wide (915MHz)');
  });

  it('passes null frequency to formatModemDisplay gracefully', () => {
    assert.equal(formatModemDisplay('SF10/BW62/CR5', null), 'AU/NZ Narrow');
  });

  it('exposes trimmed string helper for targeted assertions', () => {
    const { toTrimmedString } = __testUtils;
    assert.equal(toTrimmedString(' hello '), 'hello');
    assert.equal(toTrimmedString(''), null);
    assert.equal(toTrimmedString(null), null);
  });

  // ---------------------------------------------------------------------------
  // parseMeshcorePresetTokens
  // ---------------------------------------------------------------------------
  describe('parseMeshcorePresetTokens', () => {
    it('returns null for non-SF/BW/CR strings', () => {
      assert.equal(parseMeshcorePresetTokens('MediumFast'), null);
      assert.equal(parseMeshcorePresetTokens('LongSlow'), null);
      assert.equal(parseMeshcorePresetTokens(''), null);
      assert.equal(parseMeshcorePresetTokens(null), null);
    });

    it('returns null when any token is missing', () => {
      assert.equal(parseMeshcorePresetTokens('SF10/BW250'), null);
      assert.equal(parseMeshcorePresetTokens('BW250/CR5'), null);
    });

    it('returns null when a token does not match the expected format', () => {
      assert.equal(parseMeshcorePresetTokens('SF10/BW250/XX5'), null);
      assert.equal(parseMeshcorePresetTokens('SF10/BW250/CR'), null);
    });

    it('parses a valid SF/BW/CR string', () => {
      assert.deepEqual(parseMeshcorePresetTokens('SF10/BW250/CR5'), { sf: 10, bw: 250, cr: 5 });
    });

    it('is case-insensitive', () => {
      assert.deepEqual(parseMeshcorePresetTokens('sf10/bw62/cr5'), { sf: 10, bw: 62, cr: 5 });
    });

    it('accepts tokens in any order', () => {
      assert.deepEqual(parseMeshcorePresetTokens('CR5/SF7/BW62'), { sf: 7, bw: 62, cr: 5 });
    });

    it('handles decimal bandwidth values like 62.5', () => {
      assert.deepEqual(parseMeshcorePresetTokens('SF7/BW62.5/CR5'), { sf: 7, bw: 62.5, cr: 5 });
    });
  });

  // ---------------------------------------------------------------------------
  // bwToShortCode
  // ---------------------------------------------------------------------------
  describe('bwToShortCode', () => {
    it('maps 62 to Na', () => assert.equal(bwToShortCode(62), 'Na'));
    it('maps 62.5 to Na', () => assert.equal(bwToShortCode(62.5), 'Na'));
    it('maps 125 to St', () => assert.equal(bwToShortCode(125), 'St'));
    it('maps 250 to Wi', () => assert.equal(bwToShortCode(250), 'Wi'));
    it('returns null for unknown bandwidths', () => {
      assert.equal(bwToShortCode(500), null);
      assert.equal(bwToShortCode(31), null);
    });
  });

  // ---------------------------------------------------------------------------
  // resolveMeshcorePresetDisplay
  // ---------------------------------------------------------------------------
  describe('resolveMeshcorePresetDisplay', () => {
    it('returns null for non-SF/BW/CR input', () => {
      assert.equal(resolveMeshcorePresetDisplay('MediumFast', null), null);
      assert.equal(resolveMeshcorePresetDisplay(null, null), null);
    });

    // Named preset table: [description, preset, freqMHz, expected]
    const NAMED_CASES = [
      ['AU/NZ Wide', 'SF10/BW250/CR5', 915, { longName: 'AU/NZ Wide', shortCode: 'Wi', displayString: 'AU/NZ Wide' }],
      ['AU/NZ Narrow', 'SF10/BW62/CR5', 915, { longName: 'AU/NZ Narrow', shortCode: 'Na', displayString: 'AU/NZ Narrow' }],
      ['EU/UK Wide', 'SF11/BW250/CR5', 868, { longName: 'EU/UK Wide', shortCode: 'Wi', displayString: 'EU/UK Wide' }],
      ['EU/UK Narrow', 'SF8/BW62/CR8', 868, { longName: 'EU/UK Narrow', shortCode: 'Na', displayString: 'EU/UK Narrow' }],
      ['CZ/SK Narrow (freq < 900)', 'SF7/BW62/CR5', 868, { longName: 'CZ/SK Narrow', shortCode: 'Na', displayString: 'CZ/SK Narrow' }],
      ['US/CA Narrow (freq >= 900)', 'SF7/BW62/CR5', 915, { longName: 'US/CA Narrow', shortCode: 'Na', displayString: 'US/CA Narrow' }],
      ['US/CA Narrow (exact 900 boundary)', 'SF7/BW62/CR5', 900, { longName: 'US/CA Narrow', shortCode: 'Na', displayString: 'US/CA Narrow' }],
    ];
    for (const [desc, preset, freq, expected] of NAMED_CASES) {
      it(`resolves ${desc}`, () => {
        assert.deepEqual(resolveMeshcorePresetDisplay(preset, freq), expected);
      });
    }

    // Fallback cases: [description, preset, freqMHz, expected]
    const FALLBACK_CASES = [
      ['SF7/BW62/CR5 with unknown freq uses BW fallback', 'SF7/BW62/CR5', null, { longName: null, shortCode: 'Na', displayString: 'BW62/SF7/CR5' }],
      ['unknown BW has no short code', 'SF12/BW500/CR7', null, { longName: null, shortCode: null, displayString: 'BW500/SF12/CR7' }],
      ['125 kHz BW gives St short code', 'SF9/BW125/CR6', null, { longName: null, shortCode: 'St', displayString: 'BW125/SF9/CR6' }],
    ];
    for (const [desc, preset, freq, expected] of FALLBACK_CASES) {
      it(`falls back: ${desc}`, () => {
        assert.deepEqual(resolveMeshcorePresetDisplay(preset, freq), expected);
      });
    }
  });

  // ---------------------------------------------------------------------------
  // formatPresetDisplay
  // ---------------------------------------------------------------------------
  describe('formatPresetDisplay', () => {
    it('returns long name for named MeshCore presets', () => {
      assert.equal(formatPresetDisplay('SF10/BW250/CR5', 915), 'AU/NZ Wide');
    });

    it('returns re-ordered BW/SF/CR for unknown SF/BW/CR presets', () => {
      assert.equal(formatPresetDisplay('SF12/BW500/CR7', null), 'BW500/SF12/CR7');
    });

    it('returns raw string for non-SF/BW/CR presets', () => {
      assert.equal(formatPresetDisplay('MediumFast', null), 'MediumFast');
    });

    it('returns null when preset is absent', () => {
      assert.equal(formatPresetDisplay(null, null), null);
      assert.equal(formatPresetDisplay(' ', null), null);
    });
  });
});

@@ -144,8 +144,8 @@ test('padTwo handles zero', () => {
// ---------------------------------------------------------------------------

test('formatCompactDate returns two-digit day of month', () => {
  // 2025-01-05 UTC
  const ts = Date.UTC(2025, 0, 5);
  // Local calendar date (formatCompactDate uses getDate(), not UTC).
  const ts = new Date(2025, 0, 5).getTime();
  assert.equal(formatCompactDate(ts), '05');
});

@@ -360,7 +360,7 @@ test('renderSingleNodeTable renders a condensed table for the node', () => {
    10_000,
  );
  assert.equal(html.includes('<table'), true);
  assert.ok(html.includes('meshtastic.svg'), 'default protocol should show meshtastic icon in long name link');
  assert.ok(!html.includes('meshtastic.svg'), 'absent protocol should show no meshtastic icon in long name link');
  assert.match(html, /<a class="node-long-link" href="\/nodes\/!abcd" data-node-detail-link="true" data-node-id="!abcd">.*Example Node<\/a>/s);
  assert.equal(html.includes('66.0%'), true);
  assert.equal(html.includes('1.230%'), true);
@@ -604,7 +604,7 @@ test('renderNodeDetailHtml composes the table, neighbors, and messages', () => {
  assert.equal(html.includes('Heard by'), true);
  assert.equal(html.includes('We hear'), true);
  assert.equal(html.includes('Messages'), true);
  assert.ok(html.includes('meshtastic.svg'), 'default protocol should show meshtastic icon in heading and table');
  assert.ok(!html.includes('meshtastic.svg'), 'absent protocol should show no meshtastic icon in heading and table');
  assert.match(html, /<a class="node-long-link" href="\/nodes\/!abcd" data-node-detail-link="true" data-node-id="!abcd">.*Example Node<\/a>/s);
  assert.equal(html.includes('PEER'), true);
  assert.equal(html.includes('ALLY'), true);
@@ -665,7 +665,7 @@ test('renderSingleNodeTable shows meshtastic icon for meshtastic protocol in lon
  assert.ok(html.includes('meshtastic.svg'), 'meshtastic protocol should show icon in long name link');
});

test('renderSingleNodeTable shows meshtastic icon when protocol is absent in long name link', () => {
test('renderSingleNodeTable shows no protocol icon when protocol is absent in long name link', () => {
  const node = {
    shortName: 'A',
    longName: 'Alice',
@@ -674,7 +674,8 @@ test('renderSingleNodeTable shows meshtastic icon when protocol is absent in lon
    rawSources: { node: { node_id: '!aa', role: 'CLIENT' } },
  };
  const html = renderSingleNodeTable(node, (short, role) => `<span data-role="${role}">${short}</span>`, 0);
  assert.ok(html.includes('meshtastic.svg'), 'absent protocol should show meshtastic icon in long name link');
  assert.ok(!html.includes('meshtastic.svg'), 'absent protocol should show no meshtastic icon in long name link');
  assert.ok(!html.includes('meshcore.svg'), 'absent protocol should show no meshcore icon in long name link');
});

test('renderSingleNodeTable omits meshtastic icon for meshcore protocol in long name link', () => {
@@ -700,12 +701,13 @@ test('renderNodeDetailHtml shows meshtastic icon in heading for meshtastic proto
  assert.ok(html.includes('meshtastic.svg'), 'meshtastic protocol should show icon in heading');
});

test('renderNodeDetailHtml shows meshtastic icon in heading when protocol is absent', () => {
test('renderNodeDetailHtml shows no protocol icon in heading when protocol is absent', () => {
  const html = renderNodeDetailHtml(
    { shortName: 'A', longName: 'Alice', nodeId: '!aa', role: 'CLIENT' },
    { renderShortHtml: short => `<span>${short}</span>` },
  );
  assert.ok(html.includes('meshtastic.svg'), 'absent protocol should show meshtastic icon in heading');
  assert.ok(!html.includes('meshtastic.svg'), 'absent protocol should show no meshtastic icon in heading');
  assert.ok(!html.includes('meshcore.svg'), 'absent protocol should show no meshcore icon in heading');
});

test('renderNodeDetailHtml omits meshtastic icon in heading for meshcore protocol', () => {
@@ -736,7 +738,7 @@ test('renderMessages prefixes meshtastic icon for meshtastic node protocol', ()
  assert.ok(html.includes('meshtastic.svg'), 'meshtastic node chat entry should show icon');
});

test('renderMessages prefixes meshtastic icon when node protocol is absent', () => {
test('renderMessages shows no protocol icon when node protocol is absent', () => {
  const nodeContext = {
    shortName: 'SRC',
    longName: 'Source',
@@ -750,7 +752,8 @@ test('renderMessages prefixes meshtastic icon when node protocol is absent', ()
    (short, role) => `<span data-role="${role}">${short}</span>`,
    nodeContext,
  );
  assert.ok(html.includes('meshtastic.svg'), 'absent node protocol chat entry should show meshtastic icon');
  assert.ok(!html.includes('meshtastic.svg'), 'absent node protocol chat entry should show no meshtastic icon');
  assert.ok(!html.includes('meshcore.svg'), 'absent node protocol chat entry should show no meshcore icon');
});

test('renderMessages omits meshtastic icon for meshcore node protocol', () => {
@@ -916,7 +919,7 @@ test('fetchMessages handles HTTP responses and uses defaults', async () => {
  };
  const messages = await fetchMessages('!node', { fetchImpl });
  assert.equal(messages.length, 1);
  assert.equal(calls[0].options.cache, 'no-store');
  assert.equal(calls[0].options.cache, 'default');
});

test('fetchMessages returns an empty list when the endpoint is missing', async () => {
@@ -999,7 +1002,7 @@ test('fetchTracesForNode requests traceroutes for the node', async () => {
  const traces = await fetchTracesForNode('!abc', { fetchImpl });
  assert.equal(traces.length, 1);
  assert.equal(calls[0].url.includes('/api/traces/!abc'), true);
  assert.equal(calls[0].options.cache, 'no-store');
  assert.equal(calls[0].options.cache, 'default');
});

test('fetchTracesForNode returns empty when identifier is missing', async () => {

@@ -122,19 +122,42 @@ test('renderNodeLongNameLink renders anchor when identifier is present', () => {
  assert.ok(html.includes('Alice'), 'long name should appear');
});

test('renderNodeLongNameLink renders meshtastic icon for null protocol', () => {
test('renderNodeLongNameLink shows no icon for null protocol', () => {
  const html = renderNodeLongNameLink('Alice', '!aabbccdd', { protocol: null });
  assert.ok(html.includes('meshtastic.svg'), 'meshtastic icon should be shown for null protocol');
  assert.ok(!html.includes('meshtastic.svg'), 'no icon when protocol is null');
  assert.ok(!html.includes('meshcore.svg'), 'no icon when protocol is null');
});

test('renderNodeLongNameLink renders meshtastic icon when protocol is absent', () => {
test('renderNodeLongNameLink shows no icon when protocol is absent', () => {
  const html = renderNodeLongNameLink('Alice', '!aabbccdd');
  assert.ok(html.includes('meshtastic.svg'));
  assert.ok(!html.includes('meshtastic.svg'), 'no icon when protocol is absent');
  assert.ok(!html.includes('meshcore.svg'), 'no icon when protocol is absent');
});

test('renderNodeLongNameLink omits meshtastic icon for meshcore protocol', () => {
test('renderNodeLongNameLink renders meshtastic icon for explicit meshtastic protocol', () => {
  const html = renderNodeLongNameLink('Alice', '!aabbccdd', { protocol: 'meshtastic' });
  assert.ok(html.includes('meshtastic.svg'), 'meshtastic icon shown for explicit meshtastic protocol');
});

test('renderNodeLongNameLink uses meshcore icon for meshcore protocol', () => {
  const html = renderNodeLongNameLink('Eve', '!aabbccdd', { protocol: 'meshcore' });
  assert.ok(!html.includes('meshtastic.svg'), 'no meshtastic icon for meshcore protocol');
  assert.ok(html.includes('meshcore.svg'), 'meshcore icon should be shown');
});

test('renderNodeLongNameLink renders meshcore icon for meshcore protocol', () => {
  const html = renderNodeLongNameLink('Eve', '!aabbccdd', { protocol: 'meshcore' });
  assert.ok(html.includes('meshcore.svg'), 'meshcore icon should be shown for meshcore protocol');
});

test('renderNodeLongNameLink omits meshcore icon for meshtastic protocol', () => {
  const html = renderNodeLongNameLink('Alice', '!aabbccdd', { protocol: 'meshtastic' });
  assert.ok(!html.includes('meshcore.svg'), 'no meshcore icon for meshtastic protocol');
});

test('renderNodeLongNameLink omits meshcore icon for null protocol', () => {
  const html = renderNodeLongNameLink('Alice', '!aabbccdd', { protocol: null });
  assert.ok(!html.includes('meshcore.svg'), 'no meshcore icon for null protocol');
});

test('renderNodeLongNameLink renders plain text when identifier is null', () => {

@@ -24,28 +24,33 @@ import {
  meshcoreIconHtml,
  MESHTASTIC_ICON_SRC,
  MESHCORE_ICON_SRC,
  protocolIconPrefixHtml,
} from '../protocol-helpers.js';

test('isMeshtasticProtocol — null is Meshtastic (default)', () => {
  assert.equal(isMeshtasticProtocol(null), true);
});

test('isMeshtasticProtocol — undefined is Meshtastic (default)', () => {
  assert.equal(isMeshtasticProtocol(undefined), true);
});

test('isMeshtasticProtocol — empty string is Meshtastic', () => {
  assert.equal(isMeshtasticProtocol(''), true);
});

test('isMeshtasticProtocol — whitespace-only string is Meshtastic', () => {
  assert.equal(isMeshtasticProtocol(' '), true);
});
// ---------------------------------------------------------------------------
// isMeshtasticProtocol — only matches the explicit string "meshtastic"
// ---------------------------------------------------------------------------

test('isMeshtasticProtocol — "meshtastic" is Meshtastic', () => {
  assert.equal(isMeshtasticProtocol('meshtastic'), true);
});

test('isMeshtasticProtocol — null is not Meshtastic (no default)', () => {
  assert.equal(isMeshtasticProtocol(null), false);
});

test('isMeshtasticProtocol — undefined is not Meshtastic (no default)', () => {
  assert.equal(isMeshtasticProtocol(undefined), false);
});

test('isMeshtasticProtocol — empty string is not Meshtastic', () => {
  assert.equal(isMeshtasticProtocol(''), false);
});

test('isMeshtasticProtocol — whitespace-only string is not Meshtastic', () => {
  assert.equal(isMeshtasticProtocol(' '), false);
});

test('isMeshtasticProtocol — "meshcore" is not Meshtastic', () => {
  assert.equal(isMeshtasticProtocol('meshcore'), false);
});
@@ -101,3 +106,38 @@ test('MESHTASTIC_ICON_SRC is referenced by meshtasticIconHtml', () => {
test('MESHCORE_ICON_SRC is referenced by meshcoreIconHtml', () => {
  assert.ok(meshcoreIconHtml().includes(MESHCORE_ICON_SRC), 'icon HTML must embed the src constant');
});

// ---------------------------------------------------------------------------
// protocolIconPrefixHtml
// ---------------------------------------------------------------------------

test('protocolIconPrefixHtml — null yields empty string (no default)', () => {
  assert.equal(protocolIconPrefixHtml(null), '');
});

test('protocolIconPrefixHtml — undefined yields empty string (no default)', () => {
  assert.equal(protocolIconPrefixHtml(undefined), '');
});

test('protocolIconPrefixHtml — empty string yields empty string', () => {
  assert.equal(protocolIconPrefixHtml(''), '');
});

test('protocolIconPrefixHtml — "meshtastic" yields meshtastic icon prefix', () => {
  const result = protocolIconPrefixHtml('meshtastic');
  assert.ok(result.includes('meshtastic.svg'), '"meshtastic" should produce the meshtastic icon');
  assert.ok(!result.includes('meshcore.svg'), '"meshtastic" must not produce the meshcore icon');
  assert.ok(result.endsWith(' '), 'prefix must end with a trailing space');
});

test('protocolIconPrefixHtml — "meshcore" yields meshcore icon prefix', () => {
  const result = protocolIconPrefixHtml('meshcore');
  assert.ok(result.includes('meshcore.svg'), '"meshcore" should produce the meshcore icon');
  assert.ok(!result.includes('meshtastic.svg'), '"meshcore" must not produce the meshtastic icon');
  assert.ok(result.endsWith(' '), 'prefix must end with a trailing space');
});

test('protocolIconPrefixHtml — unknown protocol yields empty string', () => {
  assert.equal(protocolIconPrefixHtml('reticulum'), '', 'unknown protocol should produce no prefix');
  assert.equal(protocolIconPrefixHtml('LoRa'), '', 'unknown protocol should produce no prefix');
});

@@ -22,7 +22,11 @@ import {
  getRoleKey,
  getRoleRenderPriority,
  getRoleColors,
  getRoleTextColor,
  meshcoreRoleColors,
  meshcoreRoleTextColors,
  meshcoreRoleRenderOrder,
  meshtasticRoleRenderOrder,
  roleColors,
  normalizeRole,
  translateRoleId,
@@ -52,10 +56,53 @@ test('role key and color lookups prefer known values with uppercase fallback', (
});

test('render priority uses canonical role keys and defaults to zero for unknowns', () => {
  // translateRoleId(2) → 'ROUTER', so both should resolve to the same priority
  assert.equal(getRoleRenderPriority('ROUTER'), getRoleRenderPriority(2));
  assert.equal(getRoleRenderPriority('custom-role'), 0);
});

test('render priority is protocol-aware for shared roles', () => {
  // SENSOR: meshtastic=2, meshcore=9
  assert.equal(getRoleRenderPriority('SENSOR', 'meshtastic'), 2);
  assert.equal(getRoleRenderPriority('SENSOR', 'meshcore'), 9);
  assert.ok(getRoleRenderPriority('SENSOR', 'meshcore') > getRoleRenderPriority('SENSOR', 'meshtastic'));
  // REPEATER: meshtastic=11, meshcore=3
  assert.equal(getRoleRenderPriority('REPEATER', 'meshtastic'), 11);
  assert.equal(getRoleRenderPriority('REPEATER', 'meshcore'), 3);
  assert.ok(getRoleRenderPriority('REPEATER', 'meshtastic') > getRoleRenderPriority('REPEATER', 'meshcore'));
});

test('render priority meshcore-exclusive roles have defined priorities', () => {
  assert.equal(getRoleRenderPriority('COMPANION', 'meshcore'), 12);
  assert.equal(getRoleRenderPriority('ROOM_SERVER', 'meshcore'), 7);
});

test('render priority respects the full bottom-to-top order', () => {
  const order = [
    ['CLIENT_HIDDEN', null],
    ['SENSOR', 'meshtastic'],
    ['REPEATER', 'meshcore'],
    ['TRACKER', null],
    ['CLIENT_MUTE', null],
    ['CLIENT', null],
    ['ROOM_SERVER', 'meshcore'],
    ['CLIENT_BASE', null],
    ['SENSOR', 'meshcore'],
    ['ROUTER_LATE', null],
    ['REPEATER', 'meshtastic'],
    ['COMPANION', 'meshcore'],
    ['ROUTER', null],
    ['LOST_AND_FOUND', null],
  ];
  for (let i = 1; i < order.length; i++) {
    const [roleA, protoA] = order[i - 1];
    const [roleB, protoB] = order[i];
    const pA = getRoleRenderPriority(roleA, protoA);
    const pB = getRoleRenderPriority(roleB, protoB);
    assert.ok(pA < pB, `Expected ${roleA}/${protoA} (${pA}) < ${roleB}/${protoB} (${pB})`);
  }
});

test('getRoleColors returns Meshtastic palette for null/undefined/meshtastic', () => {
  assert.equal(getRoleColors(null), roleColors);
  assert.equal(getRoleColors(undefined), roleColors);
@@ -70,3 +117,34 @@ test('getRoleColors returns MeshCore palette for meshcore protocol', () => {
test('getRoleColors returns Meshtastic palette for unknown protocols', () => {
  assert.equal(getRoleColors('reticulum'), roleColors);
});

test('getRoleColor uses meshcore palette when protocol is meshcore', () => {
  assert.equal(getRoleColor('COMPANION', 'meshcore'), meshcoreRoleColors.COMPANION);
  assert.equal(getRoleColor('REPEATER', 'meshcore'), meshcoreRoleColors.REPEATER);
  assert.equal(getRoleColor('ROOM_SERVER', 'meshcore'), meshcoreRoleColors.ROOM_SERVER);
  assert.equal(getRoleColor('SENSOR', 'meshcore'), meshcoreRoleColors.SENSOR);
});

test('getRoleColor uses meshtastic palette when protocol is null', () => {
  assert.equal(getRoleColor('ROUTER', null), roleColors.ROUTER);
  assert.equal(getRoleColor('CLIENT', null), roleColors.CLIENT);
});

test('getRoleColor falls back to CLIENT color for unknown meshcore role', () => {
  assert.equal(getRoleColor('UNKNOWN_ROLE', 'meshcore'), roleColors.CLIENT);
});

test('getRoleTextColor returns light grey for meshcore COMPANION', () => {
  assert.equal(getRoleTextColor('COMPANION', 'meshcore'), meshcoreRoleTextColors.COMPANION);
});

test('getRoleTextColor returns null for meshcore roles without override', () => {
  assert.equal(getRoleTextColor('REPEATER', 'meshcore'), null);
  assert.equal(getRoleTextColor('ROOM_SERVER', 'meshcore'), null);
  assert.equal(getRoleTextColor('SENSOR', 'meshcore'), null);
});

test('getRoleTextColor returns null for meshtastic roles', () => {
  assert.equal(getRoleTextColor('CLIENT', 'meshtastic'), null);
  assert.equal(getRoleTextColor('ROUTER', null), null);
});

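The `protocolIconPrefixHtml` behavior pinned down by the tests above can be sketched as follows. This is a minimal reconstruction from the test expectations, not the project's actual implementation; the icon paths, `alt` text, and markup are assumptions, and only the exported names (`protocolIconPrefixHtml`, `MESHTASTIC_ICON_SRC`, `MESHCORE_ICON_SRC`) come from the diff:

```javascript
// Assumed icon paths; the real module defines its own MESHTASTIC_ICON_SRC /
// MESHCORE_ICON_SRC constants.
const MESHTASTIC_ICON_SRC = '/static/img/meshtastic.svg';
const MESHCORE_ICON_SRC = '/static/img/meshcore.svg';

function protocolIconPrefixHtml(protocol) {
  // Per the tests: only the exact strings 'meshtastic' and 'meshcore'
  // yield an icon prefix (ending in a trailing space); null, undefined,
  // empty strings, and unknown protocols yield the empty string.
  if (protocol === 'meshtastic') {
    return `<img src="${MESHTASTIC_ICON_SRC}" alt="Meshtastic"> `;
  }
  if (protocol === 'meshcore') {
    return `<img src="${MESHCORE_ICON_SRC}" alt="MeshCore"> `;
  }
  return '';
}
```

Note the deliberate absence of a default: after this change an absent or unknown protocol produces no icon at all, which is exactly what the rewritten `renderNodeLongNameLink` and `renderMessages` tests assert.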
Some files were not shown because too many files have changed in this diff.