mirror of
https://github.com/l5yth/potato-mesh.git
synced 2026-05-15 05:45:50 +02:00
Compare commits
42 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
| dcb374fbf9 | |||
| 9c3dae3e7d | |||
| 7806efb2cf | |||
| 7a21de7cda | |||
| 295d4cf2bb | |||
| 09ea277a40 | |||
| 4fa0745d1b | |||
| a62a068c08 | |||
| 5c49af5355 | |||
| e48c575b9d | |||
| e03675168b | |||
| d6a2e263cc | |||
| f638c79e13 | |||
| 874e81ab8b | |||
| a5d0008555 | |||
| 4d0d6f8565 | |||
| 7b1d25e286 | |||
| 5adbe2263e | |||
| b1c416d029 | |||
| 8305ca588c | |||
| 0cf56b6fba | |||
| ecce7f3504 | |||
| 17fa183c4f | |||
| 5b0a6f5f8b | |||
| 2e8b5ad856 | |||
| e32b098be4 | |||
| b45629f13c | |||
| 96421c346d | |||
| 724b3e14e5 | |||
| e8c83a2774 | |||
| 5c5a9df5a6 | |||
| 7cb4bbe61b | |||
| fed8b9e124 | |||
| 60e734086f | |||
| c3181e9bd5 | |||
| f4fa487b2d | |||
| e0237108c6 | |||
| d7a636251d | |||
| 108573b100 | |||
| 36f55e6b79 | |||
| b4dd72e7eb | |||
| f5f2e977a1 |
+2
-2
@@ -16,5 +16,5 @@ coverage:
|
||||
status:
|
||||
project:
|
||||
default:
|
||||
target: 99%
|
||||
threshold: 1%
|
||||
target: 100%
|
||||
threshold: 10%
|
||||
|
||||
@@ -19,6 +19,22 @@ updates:
|
||||
schedule:
|
||||
interval: "weekly"
|
||||
- package-ecosystem: "python"
|
||||
directory: "/data"
|
||||
schedule:
|
||||
interval: "weekly"
|
||||
- package-ecosystem: "github-actions"
|
||||
directory: "/"
|
||||
schedule:
|
||||
interval: "weekly"
|
||||
- package-ecosystem: "cargo"
|
||||
directory: "/matrix"
|
||||
schedule:
|
||||
interval: "weekly"
|
||||
- package-ecosystem: "npm"
|
||||
directory: "/web"
|
||||
schedule:
|
||||
interval: "weekly"
|
||||
- package-ecosystem: "pub"
|
||||
directory: "/app"
|
||||
schedule:
|
||||
interval: "weekly"
|
||||
|
||||
@@ -76,3 +76,4 @@ web/node_modules/
|
||||
|
||||
# Debug symbols
|
||||
ignored.txt
|
||||
ignored-*.txt
|
||||
|
||||
@@ -1,48 +0,0 @@
|
||||
# Repository Guidelines
|
||||
|
||||
Keep code well structured, modular, and not monolithic. If modules get to big, consider submodules structure.
|
||||
|
||||
Make sure all tests pass for Python (`pytest`), Ruby (`rspec`), and JavaScript (`npm test`).
|
||||
|
||||
Make sure all code is properly inline documented (PDoc, RDoc, JSDoc, et.c). We do not want any undocumented code.
|
||||
|
||||
Make sure all code is 100% unit tested. We want all lines, units, and branches to be thouroughly covered by tests.
|
||||
|
||||
New source files should have Apache v2 license headers using the exact string `Copyright © 2025-26 l5yth & contributors`.
|
||||
|
||||
Run linters for Python (`black`) and Ruby (`rufo`) to ensure consistent code formatting.
|
||||
|
||||
## Project Structure & Module Organization
|
||||
The repository splits runtime and ingestion logic. `web/` holds the Sinatra dashboard (Ruby code in `lib/potato_mesh`, views in `views/`, static bundles in `public/`).
|
||||
|
||||
`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.
|
||||
|
||||
`matrix/` contains the Rust Matrix bridge; build with `cargo build --release` or `docker build -f matrix/Dockerfile .`, and keep bridge config under `matrix/Config.toml` when running locally.
|
||||
|
||||
## Build, Test, and Development Commands
|
||||
Run dependency installs inside `web/`: `bundle install` for gems and `npm ci` for JavaScript tooling. Start the app with `cd web && API_TOKEN=dev ./app.sh` for local work or `bundle exec rackup -p 41447` when integrating elsewhere.
|
||||
|
||||
Prep ingestion with `python -m venv .venv && pip install -r data/requirements.txt`; `./data/mesh.sh` streams from live radios. `docker-compose -f docker-compose.dev.yml up` brings up the full stack.
|
||||
|
||||
Container images publish via `.github/workflows/docker.yml` as `potato-mesh-{service}-linux-$arch` (`web`, `ingestor`, `matrix-bridge`), using the Dockerfiles in `web/`, `data/`, and `matrix/`.
|
||||
|
||||
## Coding Style & Naming Conventions
|
||||
Use two-space indentation for Ruby and keep `# frozen_string_literal: true` at the top of new files. Keep Ruby classes/modules in `CamelCase`, filenames in `snake_case.rb`, and feature specs in `*_spec.rb`.
|
||||
|
||||
JavaScript follows ES modules under `public/assets/js`; co-locate components with `__tests__` folders and use kebab-case filenames. Format Ruby via `bundle exec rufo .` and Python via `black`. Skip committing generated coverage artifacts.
|
||||
|
||||
## Flutter Mobile App (`app/`)
|
||||
The Flutter client lives in `app/`. Keep only the mobile targets (`android/`, `ios/`) under version control unless you explicitly support other platforms. Do not commit Flutter build outputs or editor cruft (`.dart_tool/`, `.flutter-plugins-dependencies`, `.idea/`, `.metadata`, `*.iml`, `.fvmrc` if unused).
|
||||
|
||||
Install dependencies with `cd app && flutter pub get`; format with `dart format .` and lint via `flutter analyze`. Run tests with `cd app && flutter test` and keep widget/unit coverage high—no new code without tests. Commit `pubspec.lock` and analysis options so toolchains stay consistent.
|
||||
|
||||
## Testing Guidelines
|
||||
Ruby specs run with `cd web && bundle exec rspec`, producing SimpleCov output in `coverage/`. Front-end behaviour is verified through Node’s test runner: `cd web && npm test` writes V8 coverage and JUnit XML under `reports/`.
|
||||
|
||||
The ingestion layer is guarded by `pytest -q tests/test_mesh.py`; leave fixtures in `tests/` untouched so CI can replay them. New features should ship with matching specs and updated integration checks.
|
||||
|
||||
## Commit & Pull Request Guidelines
|
||||
Commits should stay imperative and reference issues the way history does (`Add chat log entries... (#408)`). Squash noisy work-in-progress commits before pushing. Pull requests need a concise summary, screenshots or curl traces for UI/API tweaks, and links to tracked issues. Paste the command output for the test suites you ran and mention configuration toggles (`API_TOKEN`, `PRIVATE`) reviewers must set.
|
||||
|
||||
## Security & Configuration Tips
|
||||
Never commit real API tokens or `.sqlite` dumps; use `.env.local` files ignored by Git. Confirm env defaults (`API_TOKEN`, `INSTANCE_DOMAIN`, `PRIVATE`) before deploying, and set `FEDERATION=0` when staging private nodes. Review `PROMETHEUS.md` when exposing metrics so scrape endpoints stay internal.
|
||||
@@ -1,5 +1,44 @@
|
||||
# CHANGELOG
|
||||
|
||||
## v0.5.9
|
||||
|
||||
* Matrix: listen for synapse on port 41448 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/607>
|
||||
* Web: collapse federation map ledgend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/604>
|
||||
* Web: fix stale node queries by @l5yth in <https://github.com/l5yth/potato-mesh/pull/603>
|
||||
* Matrix: move short name to display name by @l5yth in <https://github.com/l5yth/potato-mesh/pull/602>
|
||||
* Ci: update ruby to 4 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/601>
|
||||
* Web: display traces of last 28 days if available by @l5yth in <https://github.com/l5yth/potato-mesh/pull/599>
|
||||
* Web: establish menu structure by @l5yth in <https://github.com/l5yth/potato-mesh/pull/597>
|
||||
* Matrix: fixed the text-message checkpoint regression by @l5yth in <https://github.com/l5yth/potato-mesh/pull/595>
|
||||
* Matrix: cache seen messages by rx_time not id by @l5yth in <https://github.com/l5yth/potato-mesh/pull/594>
|
||||
* Web: hide the default '0' tab when not active by @l5yth in <https://github.com/l5yth/potato-mesh/pull/593>
|
||||
* Matrix: fix empty bridge state json by @l5yth in <https://github.com/l5yth/potato-mesh/pull/592>
|
||||
* Web: allow certain charts to overflow upper bounds by @l5yth in <https://github.com/l5yth/potato-mesh/pull/585>
|
||||
* Ingestor: support ROUTING_APP messages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/584>
|
||||
* Ci: run nix flake check on ci by @l5yth in <https://github.com/l5yth/potato-mesh/pull/583>
|
||||
* Web: hide legend by default by @l5yth in <https://github.com/l5yth/potato-mesh/pull/582>
|
||||
* Nix flake by @benjajaja in <https://github.com/l5yth/potato-mesh/pull/577>
|
||||
* Support BLE UUID format for macOS Bluetooth devices by @apo-mak in <https://github.com/l5yth/potato-mesh/pull/575>
|
||||
* Web: add mesh.qrp.ro as seed node by @l5yth in <https://github.com/l5yth/potato-mesh/pull/573>
|
||||
* Web: ensure unknown nodes for messages and traces by @l5yth in <https://github.com/l5yth/potato-mesh/pull/572>
|
||||
* Chore: bump version to 0.5.9 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/569>
|
||||
|
||||
## v0.5.8
|
||||
|
||||
* Web: add secondary seed node jmrp.io by @l5yth in <https://github.com/l5yth/potato-mesh/pull/568>
|
||||
* Data: implement whitelist for ingestor by @l5yth in <https://github.com/l5yth/potato-mesh/pull/567>
|
||||
* Web: add ?since= parameter to all apis by @l5yth in <https://github.com/l5yth/potato-mesh/pull/566>
|
||||
* Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/565>
|
||||
* Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/564>
|
||||
* Web: fix federation signature validation and create fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/563>
|
||||
* Chore: update readme by @l5yth in <https://github.com/l5yth/potato-mesh/pull/561>
|
||||
* Matrix: add docker file for bridge by @l5yth in <https://github.com/l5yth/potato-mesh/pull/556>
|
||||
* Matrix: add health checks to startup by @l5yth in <https://github.com/l5yth/potato-mesh/pull/555>
|
||||
* Matrix: omit the api part in base url by @l5yth in <https://github.com/l5yth/potato-mesh/pull/554>
|
||||
* App: add utility coverage tests for main.dart by @l5yth in <https://github.com/l5yth/potato-mesh/pull/552>
|
||||
* Data: add thorough daemon unit tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/553>
|
||||
* Chore: bump version to 0.5.8 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/551>
|
||||
|
||||
## v0.5.7
|
||||
|
||||
* Data: track ingestors heartbeat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/549>
|
||||
|
||||
@@ -0,0 +1,65 @@
|
||||
# Repository Guidelines
|
||||
|
||||
Keep code as modular as possible to reduce duplication and improve reusability and readability. If a module grows large, split it into a submodule structure. Prefer composing small, single-purpose units over monolithic files.
|
||||
|
||||
Make sure all tests pass for Python (`pytest`), Ruby (`rspec`), and JavaScript (`npm test`).
|
||||
|
||||
All code must be 100% unit tested — every line, branch, and code path must have a unit test. "100%" is the floor, not the ceiling: smoke tests, integration tests, and end-to-end tests come on top of that. No new code ships without matching unit tests.
|
||||
|
||||
All code must be 100% documented according to the language's API-doc standard (PDoc for Python, RDoc for Ruby, JSDoc for JavaScript, rustdoc for Rust, dartdoc for Dart). Documentation must be sufficient to generate complete API docs from source. In addition to API-level docs, add inline comments wherever the logic is not immediately self-evident.
|
||||
|
||||
New source files should have Apache v2 license headers using the exact string `Copyright © 2025-26 l5yth & contributors`.
|
||||
|
||||
Run linters for Python (`black`) and Ruby (`rufo`) to ensure consistent code formatting.
|
||||
|
||||
## Project Structure & Module Organization
|
||||
The repository splits runtime and ingestion logic. `web/` holds the Sinatra dashboard (Ruby code in `lib/potato_mesh`, views in `views/`, static bundles in `public/`).
|
||||
|
||||
`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. The ingestor is structured as the `data/mesh_ingestor/` package with the following key modules: `daemon.py` (main loop), `handlers.py` (packet processing), `interfaces.py` (interface helpers), `config.py` (env-driven config), `events.py` (TypedDict event schemas), `provider.py` (Provider protocol), `node_identity.py` (canonical node ID utilities), `decode_payload.py` (CLI protobuf decoder), and the `providers/` subpackage (currently `meshtastic.py`). API contracts for all POST ingest routes are documented in `data/mesh_ingestor/CONTRACTS.md`. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.
|
||||
|
||||
`matrix/` contains the Rust Matrix bridge; build with `cargo build --release` or `docker build -f matrix/Dockerfile .`, and keep bridge config under `matrix/Config.toml` when running locally.
|
||||
|
||||
## Build, Test, and Development Commands
|
||||
Run dependency installs inside `web/`: `bundle install` for gems and `npm ci` for JavaScript tooling. Start the app with `cd web && API_TOKEN=dev ./app.sh` for local work or `bundle exec rackup -p 41447` when integrating elsewhere.
|
||||
|
||||
Prep ingestion with `python -m venv .venv && pip install -r data/requirements.txt`; `./data/mesh.sh` streams from live radios. `docker-compose -f docker-compose.dev.yml up` brings up the full stack.
|
||||
|
||||
Container images publish via `.github/workflows/docker.yml` as `potato-mesh-{service}-linux-$arch` (`web`, `ingestor`, `matrix-bridge`), using the Dockerfiles in `web/`, `data/`, and `matrix/`.
|
||||
|
||||
## Coding Style & Naming Conventions
|
||||
Use two-space indentation for Ruby and keep `# frozen_string_literal: true` at the top of new files. Keep Ruby classes/modules in `CamelCase`, filenames in `snake_case.rb`, and feature specs in `*_spec.rb`.
|
||||
|
||||
JavaScript follows ES modules under `public/assets/js`; co-locate components with `__tests__` folders and use kebab-case filenames. Format Ruby via `bundle exec rufo .` and Python via `black`. Skip committing generated coverage artifacts.
|
||||
|
||||
## Flutter Mobile App (`app/`)
|
||||
The Flutter client lives in `app/`. Keep only the mobile targets (`android/`, `ios/`) under version control unless you explicitly support other platforms. Do not commit Flutter build outputs or editor cruft (`.dart_tool/`, `.flutter-plugins-dependencies`, `.idea/`, `.metadata`, `*.iml`, `.fvmrc` if unused).
|
||||
|
||||
Install dependencies with `cd app && flutter pub get`; format with `dart format .` and lint via `flutter analyze`. Run tests with `cd app && flutter test` and keep widget/unit coverage high—no new code without tests. Commit `pubspec.lock` and analysis options so toolchains stay consistent.
|
||||
|
||||
## Testing Guidelines
|
||||
Ruby specs run with `cd web && bundle exec rspec`, producing SimpleCov output in `coverage/`. Front-end behaviour is verified through Node’s test runner: `cd web && npm test` writes V8 coverage and JUnit XML under `reports/`.
|
||||
|
||||
The ingestion layer is tested with `pytest -q tests/`; leave fixtures in `tests/` untouched so CI can replay them. The suite includes both integration tests (`test_mesh.py`) and focused unit tests — `test_events_unit.py` (TypedDict schemas), `test_provider_unit.py` (Provider protocol conformance and `MeshtasticProvider`), `test_node_identity_unit.py` (canonical ID helpers), `test_daemon_unit.py`, `test_serialization_unit.py`, and `test_decode_payload.py`. New features should ship with matching specs and updated integration checks.
|
||||
|
||||
## Adding a New Ingestor Provider
|
||||
The `data/mesh_ingestor/provider.py` module defines a `@runtime_checkable` `Provider` Protocol with five members: `name` (str), `subscribe()`, `connect(*, active_candidate)`, `extract_host_node_id(iface)`, and `node_snapshot_items(iface)`. To add a new backend (e.g. Reticulum, MeshCore):
|
||||
|
||||
1. Create `data/mesh_ingestor/providers/<name>.py` with a class satisfying the Protocol.
|
||||
2. Register it in `data/mesh_ingestor/providers/__init__.py`.
|
||||
3. Pass an instance via `daemon.main(provider=...)` or make it the default in `main()`.
|
||||
4. Cover the provider with unit tests in `tests/test_provider_unit.py` — at minimum an `isinstance(..., Provider)` conformance check and any retry/error-handling paths.
|
||||
|
||||
Consult `data/mesh_ingestor/CONTRACTS.md` for the canonical event shapes all providers must emit.
|
||||
|
||||
## GitHub Configuration Standards
|
||||
Every language used in the repository must have a Dependabot entry checking for dependency updates on a **weekly** schedule. Keep the Dependabot config up to date as new languages or package ecosystems are added.
|
||||
|
||||
Codecov must be configured with a **100% coverage target** and a **10% threshold** (i.e. a drop of more than 10 percentage points fails the check). The `codecov.yml` should enforce this on both patch and project coverage.
|
||||
|
||||
Every service/component must have at least one GitHub Actions workflow that **builds and runs tests on pull requests against `main` and on direct pushes to `main`**. Workflows should cover all relevant test suites (Python, Ruby, JS, Rust, Flutter) for the components they touch.
|
||||
|
||||
## Commit & Pull Request Guidelines
|
||||
Commits should stay imperative and reference issues the way history does (`Add chat log entries... (#408)`). Squash noisy work-in-progress commits before pushing. Pull requests need a concise summary, screenshots or curl traces for UI/API tweaks, and links to tracked issues. Paste the command output for the test suites you ran and mention configuration toggles (`API_TOKEN`, `PRIVATE`) reviewers must set.
|
||||
|
||||
## Security & Configuration Tips
|
||||
Never commit real API tokens or `.sqlite` dumps; use `.env.local` files ignored by Git. Confirm env defaults (`API_TOKEN`, `INSTANCE_DOMAIN`, `PRIVATE`) before deploying, and set `FEDERATION=0` when staging private nodes. Review `PROMETHEUS.md` when exposing metrics so scrape endpoints stay internal.
|
||||
@@ -88,6 +88,7 @@ The web app can be configured with environment variables (defaults shown):
|
||||
| `CHANNEL` | `"#LongFast"` | Default channel name displayed in the UI. |
|
||||
| `FREQUENCY` | `"915MHz"` | Default frequency description displayed in the UI. |
|
||||
| `CONTACT_LINK` | `"#potatomesh:dod.ngo"` | Chat link or Matrix alias rendered in the footer and overlays. |
|
||||
| `ANNOUNCEMENT` | _unset_ | Optional announcement banner text rendered above the header on every page. |
|
||||
| `MAP_CENTER` | `38.761944,-27.090833` | Latitude and longitude that centre the map on load. |
|
||||
| `MAP_ZOOM` | _unset_ | Fixed Leaflet zoom applied on first load; disables auto-fit when provided. |
|
||||
| `MAX_DISTANCE` | `42` | Maximum distance (km) before node relationships are hidden on the map. |
|
||||
@@ -251,15 +252,36 @@ services.potato-mesh = {
|
||||
|
||||
## Docker
|
||||
|
||||
Docker images are published on Github for each release:
|
||||
Docker images are published on GitHub Container Registry for each release.
|
||||
Image names and tags follow the workflow format:
|
||||
`${IMAGE_PREFIX}-${service}-${architecture}:${tag}` (see `.github/workflows/docker.yml`).
|
||||
|
||||
```bash
|
||||
docker pull ghcr.io/l5yth/potato-mesh/web:latest # newest release
|
||||
docker pull ghcr.io/l5yth/potato-mesh/web:v0.5.5 # pinned historical release
|
||||
docker pull ghcr.io/l5yth/potato-mesh/ingestor:latest
|
||||
docker pull ghcr.io/l5yth/potato-mesh/matrix-bridge:latest
|
||||
docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:latest
|
||||
docker pull ghcr.io/l5yth/potato-mesh-web-linux-arm64:latest
|
||||
docker pull ghcr.io/l5yth/potato-mesh-web-linux-armv7:latest
|
||||
|
||||
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:latest
|
||||
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-arm64:latest
|
||||
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-armv7:latest
|
||||
|
||||
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:latest
|
||||
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-arm64:latest
|
||||
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-armv7:latest
|
||||
|
||||
# version-pinned examples
|
||||
docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.5.5
|
||||
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.5.5
|
||||
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.5.5
|
||||
```
|
||||
|
||||
Note: `latest` is only published for non-prerelease versions. Pre-release tags
|
||||
such as `-rc`, `-beta`, `-alpha`, or `-dev` are version-tagged only.
|
||||
|
||||
When using Compose, set `POTATOMESH_IMAGE_ARCH` in `docker-compose.yml` (or via
|
||||
environment) so service images resolve to the correct architecture variant and
|
||||
you avoid manual tag mistakes.
|
||||
|
||||
Feel free to run the [configure.sh](./configure.sh) script to set up your
|
||||
environment. See the [Docker guide](DOCKER.md) for more details and custom
|
||||
deployment instructions.
|
||||
@@ -270,6 +292,8 @@ A matrix bridge is currently being worked on. It requests messages from a config
|
||||
potato-mesh instance and forwards it to a specified matrix channel; see
|
||||
[matrix/README.md](./matrix/README.md).
|
||||
|
||||

|
||||
|
||||
## Mobile App
|
||||
|
||||
A mobile _reader_ app is currently being worked on. Stay tuned for releases and updates.
|
||||
|
||||
@@ -1,3 +1,18 @@
|
||||
/*
|
||||
* Copyright © 2025-26 l5yth & contributors
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
plugins {
|
||||
id("com.android.application")
|
||||
id("kotlin-android")
|
||||
|
||||
@@ -1,3 +1,16 @@
|
||||
// Copyright © 2025-26 l5yth & contributors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package net.potatomesh.reader
|
||||
|
||||
import io.flutter.embedding.android.FlutterActivity
|
||||
|
||||
@@ -1,3 +1,18 @@
|
||||
/*
|
||||
* Copyright © 2025-26 l5yth & contributors
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
allprojects {
|
||||
repositories {
|
||||
google()
|
||||
|
||||
@@ -1,3 +1,18 @@
|
||||
/*
|
||||
* Copyright © 2025-26 l5yth & contributors
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
pluginManagement {
|
||||
val flutterSdkPath =
|
||||
run {
|
||||
|
||||
+13
-1
@@ -1,5 +1,18 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
export GIT_TAG="$(git describe --tags --abbrev=0)"
|
||||
export GIT_COMMITS="$(git rev-list --count ${GIT_TAG}..HEAD)"
|
||||
export GIT_SHA="$(git rev-parse --short=9 HEAD)"
|
||||
@@ -12,4 +25,3 @@ flutter run \
|
||||
--dart-define=GIT_SHA="${GIT_SHA}" \
|
||||
--dart-define=GIT_DIRTY="${GIT_DIRTY}" \
|
||||
--device-id 38151FDJH00D4C
|
||||
|
||||
|
||||
@@ -15,11 +15,11 @@
|
||||
<key>CFBundlePackageType</key>
|
||||
<string>FMWK</string>
|
||||
<key>CFBundleShortVersionString</key>
|
||||
<string>0.5.9</string>
|
||||
<string>0.5.12</string>
|
||||
<key>CFBundleSignature</key>
|
||||
<string>????</string>
|
||||
<key>CFBundleVersion</key>
|
||||
<string>0.5.9</string>
|
||||
<string>0.5.12</string>
|
||||
<key>MinimumOSVersion</key>
|
||||
<string>14.0</string>
|
||||
</dict>
|
||||
|
||||
@@ -1 +1,2 @@
|
||||
#include? "Pods/Target Support Files/Pods-Runner/Pods-Runner.debug.xcconfig"
|
||||
#include "Generated.xcconfig"
|
||||
|
||||
@@ -1 +1,2 @@
|
||||
#include? "Pods/Target Support Files/Pods-Runner/Pods-Runner.release.xcconfig"
|
||||
#include "Generated.xcconfig"
|
||||
|
||||
@@ -1,3 +1,16 @@
|
||||
// Copyright © 2025-26 l5yth & contributors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import Flutter
|
||||
import UIKit
|
||||
|
||||
|
||||
@@ -1 +1,14 @@
|
||||
// Copyright © 2025-26 l5yth & contributors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
#import "GeneratedPluginRegistrant.h"
|
||||
|
||||
@@ -1,3 +1,16 @@
|
||||
// Copyright © 2025-26 l5yth & contributors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import Flutter
|
||||
import UIKit
|
||||
import XCTest
|
||||
|
||||
+5
-1
@@ -2944,6 +2944,9 @@ class MeshNode {
|
||||
}
|
||||
}
|
||||
|
||||
/// The protocol identifier sent to the API to filter results to Meshtastic only.
|
||||
const String _kProtocolFilter = 'meshtastic';
|
||||
|
||||
/// Build a messages API URI for a given domain or absolute URL.
|
||||
Uri _buildMessagesUri(String domain, {int since = 0, int limit = 1000}) {
|
||||
final trimmed = domain.trim();
|
||||
@@ -2951,6 +2954,7 @@ Uri _buildMessagesUri(String domain, {int since = 0, int limit = 1000}) {
|
||||
'limit': limit.toString(),
|
||||
'encrypted': 'false',
|
||||
'since': since.toString(),
|
||||
'protocol': _kProtocolFilter,
|
||||
};
|
||||
if (trimmed.isEmpty) {
|
||||
return Uri.https('potatomesh.net', '/api/messages', params);
|
||||
@@ -2988,7 +2992,7 @@ Uri _buildNodeUri(String domain, String nodeId) {
|
||||
/// Build the bulk nodes API URI for fetching recent nodes.
|
||||
Uri _buildNodesUri(String domain, {int limit = 1000}) {
|
||||
final trimmedDomain = domain.trim();
|
||||
final params = {'limit': limit.toString()};
|
||||
final params = {'limit': limit.toString(), 'protocol': _kProtocolFilter};
|
||||
|
||||
if (trimmedDomain.isEmpty) {
|
||||
return Uri.https('potatomesh.net', '/api/nodes', params);
|
||||
|
||||
+8
-8
@@ -45,10 +45,10 @@ packages:
|
||||
dependency: transitive
|
||||
description:
|
||||
name: characters
|
||||
sha256: f71061c654a3380576a52b451dd5532377954cf9dbd272a78fc8479606670803
|
||||
sha256: faf38497bda5ead2a8c7615f4f7939df04333478bf32e4173fcb06d428b5716b
|
||||
url: "https://pub.dev"
|
||||
source: hosted
|
||||
version: "1.4.0"
|
||||
version: "1.4.1"
|
||||
checked_yaml:
|
||||
dependency: transitive
|
||||
description:
|
||||
@@ -284,18 +284,18 @@ packages:
|
||||
dependency: transitive
|
||||
description:
|
||||
name: matcher
|
||||
sha256: dc58c723c3c24bf8d3e2d3ad3f2f9d7bd9cf43ec6feaa64181775e60190153f2
|
||||
sha256: "12956d0ad8390bbcc63ca2e1469c0619946ccb52809807067a7020d57e647aa6"
|
||||
url: "https://pub.dev"
|
||||
source: hosted
|
||||
version: "0.12.17"
|
||||
version: "0.12.18"
|
||||
material_color_utilities:
|
||||
dependency: transitive
|
||||
description:
|
||||
name: material_color_utilities
|
||||
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
|
||||
sha256: "9c337007e82b1889149c82ed242ed1cb24a66044e30979c44912381e9be4c48b"
|
||||
url: "https://pub.dev"
|
||||
source: hosted
|
||||
version: "0.11.1"
|
||||
version: "0.13.0"
|
||||
meta:
|
||||
dependency: transitive
|
||||
description:
|
||||
@@ -497,10 +497,10 @@ packages:
|
||||
dependency: transitive
|
||||
description:
|
||||
name: test_api
|
||||
sha256: ab2726c1a94d3176a45960b6234466ec367179b87dd74f1611adb1f3b5fb9d55
|
||||
sha256: "93167629bfc610f71560ab9312acdda4959de4df6fac7492c89ff0d3886f6636"
|
||||
url: "https://pub.dev"
|
||||
source: hosted
|
||||
version: "0.7.7"
|
||||
version: "0.7.9"
|
||||
timezone:
|
||||
dependency: transitive
|
||||
description:
|
||||
|
||||
+1
-1
@@ -1,7 +1,7 @@
|
||||
name: potato_mesh_reader
|
||||
description: Meshtastic Reader — read-only view for PotatoMesh messages.
|
||||
publish_to: "none"
|
||||
version: 0.5.9
|
||||
version: 0.5.12
|
||||
|
||||
environment:
|
||||
sdk: ">=3.4.0 <4.0.0"
|
||||
|
||||
+13
-1
@@ -1,5 +1,18 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
export GIT_TAG="$(git describe --tags --abbrev=0)"
|
||||
@@ -27,4 +40,3 @@ fi
|
||||
export APK_DIR="build/app/outputs/flutter-apk"
|
||||
mv -v "${APK_DIR}/app-release.apk" "${APK_DIR}/potatomesh-reader-android-${TAG_NAME}.apk"
|
||||
(cd "${APK_DIR}" && sha256sum "potatomesh-reader-android-${TAG_NAME}.apk" > "potatomesh-reader-android-${TAG_NAME}.apk.sha256sum")
|
||||
|
||||
|
||||
@@ -206,8 +206,10 @@ void main() {
|
||||
|
||||
expect(calls[0].host, 'mesh.example.org');
|
||||
expect(calls[0].path, '/api/messages');
|
||||
expect(calls[0].queryParameters['protocol'], 'meshtastic');
|
||||
expect(calls[1].scheme, 'https');
|
||||
expect(calls[1].path, '/api/messages');
|
||||
expect(calls[1].queryParameters['protocol'], 'meshtastic');
|
||||
});
|
||||
});
|
||||
|
||||
|
||||
@@ -145,6 +145,7 @@ void main() {
|
||||
if (request.url.path == '/api/messages') {
|
||||
sinces.add(request.url.queryParameters['since'] ?? '');
|
||||
expect(request.url.queryParameters['limit'], '1000');
|
||||
expect(request.url.queryParameters['protocol'], 'meshtastic');
|
||||
if (sinces.length == 1) {
|
||||
return http.Response(
|
||||
jsonEncode([
|
||||
|
||||
@@ -1,3 +1,16 @@
|
||||
// Copyright © 2025-26 l5yth & contributors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// This is a basic Flutter widget test.
|
||||
//
|
||||
// To perform an interaction with a widget in your test, use the WidgetTester
|
||||
|
||||
+1
-1
@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
|
||||
message information before forwarding it to the accompanying web application.
|
||||
"""
|
||||
|
||||
VERSION = "0.5.9"
|
||||
VERSION = "0.5.12"
|
||||
"""Semantic version identifier shared with the dashboard and front-end."""
|
||||
|
||||
__version__ = VERSION
|
||||
|
||||
+2
-1
@@ -20,7 +20,8 @@ CREATE TABLE IF NOT EXISTS ingestors (
|
||||
last_seen_time INTEGER NOT NULL,
|
||||
version TEXT,
|
||||
lora_freq INTEGER,
|
||||
modem_preset TEXT
|
||||
modem_preset TEXT,
|
||||
protocol TEXT NOT NULL DEFAULT 'meshtastic'
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_ingestors_last_seen ON ingestors(last_seen_time);
|
||||
|
||||
+2
-1
@@ -17,5 +17,6 @@ set -euo pipefail
|
||||
|
||||
python -m venv .venv
|
||||
source .venv/bin/activate
|
||||
pip install -U meshtastic black pytest
|
||||
pip install -U pip
|
||||
pip install -r "$(dirname "$0")/requirements.txt"
|
||||
exec python mesh.py
|
||||
|
||||
@@ -0,0 +1,118 @@
|
||||
## Mesh ingestor contracts (stable interfaces)
|
||||
|
||||
This repo’s ingestion pipeline is split into:
|
||||
|
||||
- **Python collector** (`data/mesh_ingestor/*`) which normalizes packets/events and POSTs JSON to the web app.
|
||||
- **Sinatra web app** (`web/`) which accepts those payloads on `POST /api/*` ingest routes and persists them into SQLite tables defined under `data/*.sql`.
|
||||
|
||||
This document records the **contracts that future providers must preserve**. The intent is to enable adding new providers (MeshCore, Reticulum, …) without changing the Ruby/DB/UI read-side.
|
||||
|
||||
### Canonical node identity
|
||||
|
||||
- **Canonical node id**: `nodes.node_id` is a `TEXT` primary key and is treated as canonical across the system.
|
||||
- **Format**: `!%08x` (lowercase hex, 8 chars), for example `!abcdef01`.
|
||||
- **Normalization**:
|
||||
- Python currently normalizes via `data/mesh_ingestor/serialization.py:_canonical_node_id`.
|
||||
- Ruby normalizes via `web/lib/potato_mesh/application/data_processing.rb:canonical_node_parts`.
|
||||
- **Dual addressing**: Ruby routes and queries accept either a canonical `!xxxxxxxx` string or a numeric node id; they normalize to `node_id`.
|
||||
|
||||
Note: non-Meshtastic providers will need a strategy to map their native node identifiers into this `!%08x` space. That mapping is intentionally not standardized in code yet.
|
||||
|
||||
### Ingest HTTP routes and payload shapes
|
||||
|
||||
Future providers should emit payloads that match these shapes (keys + types), which are validated by existing tests (notably `tests/test_mesh.py`).
|
||||
|
||||
#### `POST /api/nodes`
|
||||
|
||||
Payload is a mapping keyed by canonical node id, with an optional top-level `”ingestor”` key:
|
||||
|
||||
- `{ “!abcdef01”: { ... node fields ... }, “ingestor”: “!ingestornodeid” }`
|
||||
|
||||
When `”ingestor”` is present the protocol is inherited from the registered ingestor (see `POST /api/ingestors`); omitting it defaults to `”meshtastic”`.
|
||||
|
||||
Node entry fields are “Meshtastic-ish” (camelCase) and may include:
|
||||
|
||||
- `num` (int node number)
|
||||
- `lastHeard` (int unix seconds)
|
||||
- `snr` (float)
|
||||
- `hopsAway` (int)
|
||||
- `isFavorite` (bool)
|
||||
- `user` (mapping; e.g. `shortName`, `longName`, `macaddr`, `hwModel`, `publicKey`, `isUnmessagable`)
|
||||
- `role` (optional string) — omit when unknown; known values include Meshtastic role names (e.g. `CLIENT`, `ROUTER`) and MeshCore role names (`COMPANION`, `REPEATER`, `ROOM_SERVER`, `SENSOR`)
|
||||
- `deviceMetrics` (mapping; e.g. `batteryLevel`, `voltage`, `channelUtilization`, `airUtilTx`, `uptimeSeconds`)
|
||||
- `position` (mapping; `latitude`, `longitude`, `altitude`, `time`, `locationSource`, `precisionBits`, optional nested `raw`)
|
||||
- Optional radio metadata: `lora_freq`, `modem_preset`
|
||||
|
||||
#### `POST /api/messages`
|
||||
|
||||
Single message payload:
|
||||
|
||||
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
|
||||
- Identity: `from_id` (string/int), `to_id` (string/int), `channel` (int), `portnum` (string|nil)
|
||||
- Payload: `text` (string|nil), `encrypted` (string|nil), `reply_id` (int|nil), `emoji` (string|nil)
|
||||
- RF: `snr` (float|nil), `rssi` (int|nil), `hop_limit` (int|nil)
|
||||
- Meta: `channel_name` (string; only when not encrypted and known), `ingestor` (canonical host id), `lora_freq`, `modem_preset`
|
||||
|
||||
#### `POST /api/positions`
|
||||
|
||||
Single position payload:
|
||||
|
||||
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
|
||||
- Node: `node_id` (canonical string), `node_num` (int|nil), `num` (int|nil), `from_id` (canonical string), `to_id` (string|nil)
|
||||
- Position: `latitude`, `longitude`, `altitude` (floats|nil)
|
||||
- Position time: `position_time` (int|nil)
|
||||
- Quality: `location_source` (string|nil), `precision_bits` (int|nil), `sats_in_view` (int|nil), `pdop` (float|nil)
|
||||
- Motion: `ground_speed` (float|nil), `ground_track` (float|nil)
|
||||
- RF/meta: `snr`, `rssi`, `hop_limit`, `bitfield`, `payload_b64` (string|nil), `raw` (mapping|nil), `ingestor`, `lora_freq`, `modem_preset`
|
||||
|
||||
#### `POST /api/telemetry`
|
||||
|
||||
Single telemetry payload:
|
||||
|
||||
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
|
||||
- Node: `node_id` (canonical string|nil), `node_num` (int|nil), `from_id`, `to_id`
|
||||
- Time: `telemetry_time` (int|nil)
|
||||
- Packet: `channel` (int), `portnum` (string|nil), `bitfield` (int|nil), `hop_limit` (int|nil)
|
||||
- RF: `snr` (float|nil), `rssi` (int|nil)
|
||||
- Raw: `payload_b64` (string; may be empty string when unknown)
|
||||
- Metrics: many optional snake_case keys (`battery_level`, `voltage`, `temperature`, etc.)
|
||||
- Subtype: `telemetry_type` (string|nil) — optional discriminator identifying which Meshtastic protobuf oneof was set; one of `"device"`, `"environment"`, `"power"`, or `"air_quality"`. Ingestors that detect the subtype SHOULD include this field; omit rather than send `null` when unknown. The web app infers the type from metric-field presence when absent, so old ingestors remain compatible.
|
||||
- Meta: `ingestor`, `lora_freq`, `modem_preset`
|
||||
|
||||
#### `POST /api/neighbors`
|
||||
|
||||
Neighbors snapshot payload:
|
||||
|
||||
- Node: `node_id` (canonical string), `node_num` (int|nil)
|
||||
- `neighbors`: list of entries with `neighbor_id` (canonical string), `neighbor_num` (int|nil), `snr` (float|nil), `rx_time` (int), `rx_iso` (string)
|
||||
- Snapshot time: `rx_time`, `rx_iso`
|
||||
- Optional: `node_broadcast_interval_secs` (int|nil), `last_sent_by_id` (canonical string|nil)
|
||||
- Meta: `ingestor`, `lora_freq`, `modem_preset`
|
||||
|
||||
#### `POST /api/traces`
|
||||
|
||||
Single trace payload:
|
||||
|
||||
- Identity: `id` (int|nil), `request_id` (int|nil)
|
||||
- Endpoints: `src` (int|nil), `dest` (int|nil)
|
||||
- Path: `hops` (list[int])
|
||||
- Time: `rx_time` (int), `rx_iso` (string)
|
||||
- Metrics: `rssi` (int|nil), `snr` (float|nil), `elapsed_ms` (int|nil)
|
||||
- Meta: `ingestor`, `lora_freq`, `modem_preset`
|
||||
|
||||
#### `POST /api/ingestors`
|
||||
|
||||
Heartbeat payload:
|
||||
|
||||
- `node_id` (canonical string)
|
||||
- `start_time` (int), `last_seen_time` (int)
|
||||
- `version` (string)
|
||||
- Optional: `lora_freq`, `modem_preset`
|
||||
- Optional: `protocol` (string; e.g. `"meshtastic"`, `"meshcore"`) — declares the mesh backend for this ingestor; defaults to `"meshtastic"` when absent
|
||||
|
||||
**Protocol propagation**: all event records (`messages`, `positions`, `telemetry`, `traces`, `neighbors`) that reference this ingestor via their `ingestor` field will inherit its `protocol` value at write time.
|
||||
|
||||
### GET endpoint filtering
|
||||
|
||||
All collection GET endpoints (`/api/nodes`, `/api/messages`, `/api/positions`, `/api/telemetry`, `/api/traces`, `/api/neighbors`, `/api/ingestors`) accept an optional `?protocol=<value>` query parameter. When present, only records whose `protocol` column matches the given value are returned. The `protocol` field is included in all GET responses.
|
||||
|
||||
@@ -25,6 +25,7 @@ from .. import VERSION as _PACKAGE_VERSION
|
||||
from . import (
|
||||
channels,
|
||||
config,
|
||||
connection,
|
||||
daemon,
|
||||
handlers,
|
||||
ingestors,
|
||||
@@ -46,7 +47,7 @@ def _reexport(module) -> None:
|
||||
def _export_constants() -> None:
|
||||
globals()["json"] = queue.json
|
||||
globals()["urllib"] = queue.urllib
|
||||
globals()["glob"] = interfaces.glob
|
||||
globals()["glob"] = connection.glob
|
||||
__all__.extend(["json", "urllib", "glob", "threading", "signal"])
|
||||
|
||||
|
||||
|
||||
@@ -182,6 +182,9 @@ def capture_from_interface(iface: Any) -> None:
|
||||
channels_obj = getattr(local_node, "channels", None) if local_node else None
|
||||
|
||||
channel_entries: list[tuple[int, str]] = []
|
||||
# Use a set for O(1) duplicate-index checks; Meshtastic occasionally
|
||||
# emits the same channel index twice when the channel list is partially
|
||||
# initialised, so we keep only the first valid entry per index.
|
||||
seen_indices: set[int] = set()
|
||||
for candidate in _iter_channel_objects(channels_obj):
|
||||
result = _channel_tuple(candidate)
|
||||
|
||||
@@ -65,6 +65,21 @@ CHANNEL_INDEX = int(os.environ.get("CHANNEL_INDEX", str(DEFAULT_CHANNEL_INDEX)))
|
||||
|
||||
DEBUG = os.environ.get("DEBUG") == "1"
|
||||
|
||||
_KNOWN_PROVIDERS = ("meshtastic", "meshcore")
|
||||
|
||||
_raw_provider = os.environ.get("PROVIDER", "meshtastic").strip().lower()
|
||||
if _raw_provider not in _KNOWN_PROVIDERS:
|
||||
raise ValueError(
|
||||
f"Unknown PROVIDER={_raw_provider!r}. "
|
||||
f"Valid options: {', '.join(_KNOWN_PROVIDERS)}"
|
||||
)
|
||||
|
||||
PROVIDER = _raw_provider
|
||||
"""Active ingestion provider, selected via the :envvar:`PROVIDER` environment variable.
|
||||
|
||||
Accepted values are ``meshtastic`` (default) and ``meshcore``.
|
||||
"""
|
||||
|
||||
|
||||
def _parse_channel_names(raw_value: str | None) -> tuple[str, ...]:
|
||||
"""Normalise a comma-separated list of channel names.
|
||||
|
||||
@@ -0,0 +1,163 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Provider-agnostic connection target helpers.
|
||||
|
||||
This module contains utilities shared by all ingestor providers for
|
||||
parsing and auto-discovering connection targets. It is intentionally
|
||||
free of any provider-specific imports so that Meshtastic, MeshCore,
|
||||
and future providers can all rely on the same logic.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import glob
|
||||
import re
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Constants
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
DEFAULT_TCP_PORT: int = 4403
|
||||
"""Default TCP port used when no port is explicitly supplied."""
|
||||
|
||||
DEFAULT_SERIAL_PATTERNS: tuple[str, ...] = (
|
||||
"/dev/ttyACM*",
|
||||
"/dev/ttyUSB*",
|
||||
"/dev/tty.usbmodem*",
|
||||
"/dev/tty.usbserial*",
|
||||
"/dev/cu.usbmodem*",
|
||||
"/dev/cu.usbserial*",
|
||||
)
|
||||
"""Glob patterns for common serial device paths on Linux and macOS."""
|
||||
|
||||
# Support both MAC addresses (Linux/Windows) and UUIDs (macOS).
|
||||
BLE_ADDRESS_RE = re.compile(
|
||||
r"^(?:"
|
||||
r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}|" # MAC address format
|
||||
r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}" # UUID format
|
||||
r")$"
|
||||
)
|
||||
"""Compiled regex matching a BLE MAC address or UUID."""
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def parse_ble_target(value: str) -> str | None:
|
||||
"""Return a normalised BLE address (MAC or UUID) when ``value`` matches the format.
|
||||
|
||||
Parameters:
|
||||
value: User-provided target string.
|
||||
|
||||
Returns:
|
||||
The normalised MAC address (upper-cased) or UUID, or ``None`` when
|
||||
the value does not match a recognised BLE address format.
|
||||
"""
|
||||
if not value:
|
||||
return None
|
||||
value = value.strip()
|
||||
if not value:
|
||||
return None
|
||||
if BLE_ADDRESS_RE.fullmatch(value):
|
||||
return value.upper()
|
||||
return None
|
||||
|
||||
|
||||
def parse_tcp_target(value: str) -> tuple[str, int] | None:
|
||||
"""Parse a TCP ``host:port`` target, accepting both IPs and hostnames.
|
||||
|
||||
Unlike the Meshtastic-specific helper in :mod:`interfaces`, hostnames are
|
||||
accepted here because MeshCore companions may be reached over a local
|
||||
network by name (e.g. ``meshcore-node.local:4403``).
|
||||
|
||||
BLE MAC addresses (five colons) and bare serial port paths (no colon) are
|
||||
correctly rejected — they cannot produce a valid ``host:port`` pair.
|
||||
|
||||
Parameters:
|
||||
value: User-provided target string.
|
||||
|
||||
Returns:
|
||||
``(host, port)`` on success, or ``None`` when *value* does not look
|
||||
like a TCP target.
|
||||
"""
|
||||
if not value:
|
||||
return None
|
||||
value = value.strip()
|
||||
if not value:
|
||||
return None
|
||||
|
||||
# Strip URL scheme prefix (e.g. ``tcp://host:4403`` or ``http://host:4403``).
|
||||
if "://" in value:
|
||||
value = value.split("://", 1)[1]
|
||||
|
||||
# Handle bracketed IPv6: ``[::1]:4403``.
|
||||
if value.startswith("["):
|
||||
bracket_end = value.find("]")
|
||||
if bracket_end == -1:
|
||||
return None
|
||||
host = value[1:bracket_end]
|
||||
rest = value[bracket_end + 1 :]
|
||||
if rest.startswith(":"):
|
||||
try:
|
||||
port = int(rest[1:])
|
||||
except ValueError:
|
||||
return None
|
||||
if not (1 <= port <= 65535):
|
||||
return None
|
||||
else:
|
||||
port = DEFAULT_TCP_PORT
|
||||
if not host:
|
||||
return None
|
||||
return host, port
|
||||
|
||||
# For non-bracketed addresses require exactly one colon so that BLE MACs
|
||||
# (five colons) and bare serial paths (no colon) are rejected.
|
||||
colon_count = value.count(":")
|
||||
if colon_count != 1:
|
||||
return None
|
||||
|
||||
host, _, port_str = value.partition(":")
|
||||
if not host:
|
||||
return None
|
||||
try:
|
||||
port = int(port_str)
|
||||
except ValueError:
|
||||
return None
|
||||
if not (1 <= port <= 65535):
|
||||
return None
|
||||
return host, port
|
||||
|
||||
|
||||
def default_serial_targets() -> list[str]:
|
||||
"""Return candidate serial device paths for auto-discovery.
|
||||
|
||||
Globs for common USB serial device paths on Linux and macOS. Always
|
||||
includes ``/dev/ttyACM0`` as a final fallback so callers have at least
|
||||
one candidate even on systems without any attached hardware.
|
||||
|
||||
Returns:
|
||||
Ordered list of candidate device paths, deduplicated.
|
||||
"""
|
||||
candidates: list[str] = []
|
||||
seen: set[str] = set()
|
||||
for pattern in DEFAULT_SERIAL_PATTERNS:
|
||||
for path in sorted(glob.glob(pattern)):
|
||||
if path not in seen:
|
||||
candidates.append(path)
|
||||
seen.add(path)
|
||||
if "/dev/ttyACM0" not in seen:
|
||||
candidates.append("/dev/ttyACM0")
|
||||
return candidates
|
||||
+384
-302
@@ -16,6 +16,7 @@
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import dataclasses
|
||||
import inspect
|
||||
import signal
|
||||
import threading
|
||||
@@ -24,6 +25,8 @@ import time
|
||||
from pubsub import pub
|
||||
|
||||
from . import config, handlers, ingestors, interfaces
|
||||
from .provider import Provider
|
||||
from .utils import _retry_dict_snapshot
|
||||
|
||||
_RECEIVE_TOPICS = (
|
||||
"meshtastic.receive",
|
||||
@@ -80,10 +83,15 @@ def _subscribe_receive_topics() -> list[str]:
|
||||
|
||||
|
||||
def _node_items_snapshot(
|
||||
nodes_obj, retries: int = 3
|
||||
nodes_obj: object, retries: int = 3
|
||||
) -> list[tuple[str, object]] | None:
|
||||
"""Snapshot ``nodes_obj`` to avoid iteration errors during updates.
|
||||
|
||||
Uses :func:`~data.mesh_ingestor.utils._retry_dict_snapshot` to handle
|
||||
both dict-like objects (``items()`` callable) and sequence-like objects
|
||||
(``__iter__`` + ``__getitem__``) that Meshtastic may return depending on
|
||||
firmware version.
|
||||
|
||||
Parameters:
|
||||
nodes_obj: Meshtastic nodes mapping or iterable.
|
||||
retries: Number of attempts when encountering "dictionary changed"
|
||||
@@ -99,25 +107,15 @@ def _node_items_snapshot(
|
||||
|
||||
items_callable = getattr(nodes_obj, "items", None)
|
||||
if callable(items_callable):
|
||||
for _ in range(max(1, retries)):
|
||||
try:
|
||||
return list(items_callable())
|
||||
except RuntimeError as err:
|
||||
if "dictionary changed size during iteration" not in str(err):
|
||||
raise
|
||||
time.sleep(0)
|
||||
return None
|
||||
return _retry_dict_snapshot(lambda: list(items_callable()), retries)
|
||||
|
||||
if hasattr(nodes_obj, "__iter__") and hasattr(nodes_obj, "__getitem__"):
|
||||
for _ in range(max(1, retries)):
|
||||
try:
|
||||
keys = list(nodes_obj)
|
||||
return [(key, nodes_obj[key]) for key in keys]
|
||||
except RuntimeError as err:
|
||||
if "dictionary changed size during iteration" not in str(err):
|
||||
raise
|
||||
time.sleep(0)
|
||||
return None
|
||||
|
||||
def _snapshot_via_keys() -> list[tuple[str, object]]:
|
||||
keys = list(nodes_obj)
|
||||
return [(key, nodes_obj[key]) for key in keys]
|
||||
|
||||
return _retry_dict_snapshot(_snapshot_via_keys, retries)
|
||||
|
||||
return []
|
||||
|
||||
@@ -197,11 +195,6 @@ def _process_ingestor_heartbeat(iface, *, ingestor_announcement_sent: bool) -> b
|
||||
if heartbeat_sent and not ingestor_announcement_sent:
|
||||
return True
|
||||
return ingestor_announcement_sent
|
||||
iface_cls = getattr(iface_obj, "__class__", None)
|
||||
if iface_cls is None:
|
||||
return False
|
||||
module_name = getattr(iface_cls, "__module__", "") or ""
|
||||
return "ble_interface" in module_name
|
||||
|
||||
|
||||
def _connected_state(candidate) -> bool | None:
|
||||
@@ -243,10 +236,333 @@ def _connected_state(candidate) -> bool | None:
|
||||
return None
|
||||
|
||||
|
||||
def main(existing_interface=None) -> None:
|
||||
# ---------------------------------------------------------------------------
|
||||
# Loop state container
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
@dataclasses.dataclass
|
||||
class _DaemonState:
|
||||
"""All mutable state for the :func:`main` daemon loop."""
|
||||
|
||||
provider: Provider
|
||||
stop: threading.Event
|
||||
configured_port: str | None
|
||||
inactivity_reconnect_secs: float
|
||||
energy_saving_enabled: bool
|
||||
energy_online_secs: float
|
||||
energy_sleep_secs: float
|
||||
retry_delay: float
|
||||
last_seen_packet_monotonic: float | None
|
||||
active_candidate: str | None
|
||||
|
||||
iface: object = None
|
||||
resolved_target: str | None = None
|
||||
initial_snapshot_sent: bool = False
|
||||
energy_session_deadline: float | None = None
|
||||
iface_connected_at: float | None = None
|
||||
last_inactivity_reconnect: float | None = None
|
||||
ingestor_announcement_sent: bool = False
|
||||
announced_target: bool = False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Per-iteration helpers (each returns True when the caller should `continue`)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _advance_retry_delay(current: float) -> float:
|
||||
"""Return the next exponential-backoff retry delay."""
|
||||
|
||||
if config._RECONNECT_MAX_DELAY_SECS <= 0:
|
||||
return current
|
||||
# `current == 0` on the very first call (bootstrap); seed from config.
|
||||
next_delay = current * 2 if current else config._RECONNECT_INITIAL_DELAY_SECS
|
||||
return min(next_delay, config._RECONNECT_MAX_DELAY_SECS)
|
||||
|
||||
|
||||
def _energy_sleep(state: _DaemonState, reason: str) -> None:
|
||||
"""Sleep for the configured energy-saving interval."""
|
||||
|
||||
if not state.energy_saving_enabled or state.energy_sleep_secs <= 0:
|
||||
return
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
f"energy saving: {reason}; sleeping for {state.energy_sleep_secs:g}s"
|
||||
)
|
||||
state.stop.wait(state.energy_sleep_secs)
|
||||
|
||||
|
||||
def _try_connect(state: _DaemonState) -> bool:
|
||||
"""Attempt to establish the mesh interface.
|
||||
|
||||
Returns:
|
||||
``True`` when connected and the loop should proceed; ``False`` when
|
||||
the connection failed and the caller should ``continue``.
|
||||
"""
|
||||
|
||||
try:
|
||||
state.iface, state.resolved_target, state.active_candidate = (
|
||||
state.provider.connect(active_candidate=state.active_candidate)
|
||||
)
|
||||
handlers.register_host_node_id(state.provider.extract_host_node_id(state.iface))
|
||||
ingestors.set_ingestor_node_id(handlers.host_node_id())
|
||||
state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
|
||||
state.initial_snapshot_sent = False
|
||||
if not state.announced_target and state.resolved_target:
|
||||
config._debug_log(
|
||||
"Using mesh interface",
|
||||
context="daemon.interface",
|
||||
severity="info",
|
||||
target=state.resolved_target,
|
||||
)
|
||||
state.announced_target = True
|
||||
# Set an absolute monotonic deadline for this energy-saving session.
|
||||
# When the deadline passes, _check_energy_saving() will close the
|
||||
# interface and sleep until the next wake interval.
|
||||
if state.energy_saving_enabled and state.energy_online_secs > 0:
|
||||
state.energy_session_deadline = time.monotonic() + state.energy_online_secs
|
||||
else:
|
||||
state.energy_session_deadline = None
|
||||
state.iface_connected_at = time.monotonic()
|
||||
# Seed the inactivity tracking from the connection time so a
|
||||
# reconnect is given a full inactivity window even when the
|
||||
# handler still reports the previous packet timestamp.
|
||||
state.last_seen_packet_monotonic = state.iface_connected_at
|
||||
state.last_inactivity_reconnect = None
|
||||
return True
|
||||
except interfaces.NoAvailableMeshInterface as exc:
|
||||
config._debug_log(
|
||||
"No mesh interface available",
|
||||
context="daemon.interface",
|
||||
severity="error",
|
||||
error_message=str(exc),
|
||||
)
|
||||
_close_interface(state.iface)
|
||||
raise SystemExit(1) from exc
|
||||
except Exception as exc:
|
||||
config._debug_log(
|
||||
"Failed to create mesh interface",
|
||||
context="daemon.interface",
|
||||
severity="warn",
|
||||
candidate=state.active_candidate or "auto",
|
||||
error_class=exc.__class__.__name__,
|
||||
error_message=str(exc),
|
||||
)
|
||||
if state.configured_port is None:
|
||||
state.active_candidate = None
|
||||
state.announced_target = False
|
||||
state.stop.wait(state.retry_delay)
|
||||
state.retry_delay = _advance_retry_delay(state.retry_delay)
|
||||
return False
|
||||
|
||||
|
||||
def _check_energy_saving(state: _DaemonState) -> bool:
|
||||
"""Disconnect and sleep when energy-saving conditions are met.
|
||||
|
||||
Returns:
|
||||
``True`` when the interface was closed and the caller should
|
||||
``continue``; ``False`` otherwise.
|
||||
"""
|
||||
|
||||
if not state.energy_saving_enabled or state.iface is None:
|
||||
return False
|
||||
|
||||
if (
|
||||
state.energy_session_deadline is not None
|
||||
and time.monotonic() >= state.energy_session_deadline
|
||||
):
|
||||
reason = "disconnected after session"
|
||||
log_msg = "Energy saving disconnect"
|
||||
elif (
|
||||
_is_ble_interface(state.iface)
|
||||
and getattr(state.iface, "client", object()) is None
|
||||
):
|
||||
reason = "BLE client disconnected"
|
||||
log_msg = "Energy saving BLE disconnect"
|
||||
else:
|
||||
return False
|
||||
config._debug_log(log_msg, context="daemon.energy", severity="info")
|
||||
_close_interface(state.iface)
|
||||
state.iface = None
|
||||
state.announced_target = False
|
||||
state.initial_snapshot_sent = False
|
||||
state.energy_session_deadline = None
|
||||
_energy_sleep(state, reason)
|
||||
return True
|
||||
|
||||
|
||||
def _try_send_snapshot(state: _DaemonState) -> bool:
|
||||
"""Send the initial node snapshot via the provider.
|
||||
|
||||
Returns:
|
||||
``True`` when the snapshot succeeded (or no nodes exist yet); ``False``
|
||||
when a hard error occurred and the caller should ``continue``.
|
||||
"""
|
||||
|
||||
try:
|
||||
node_items = state.provider.node_snapshot_items(state.iface)
|
||||
processed_any = False
|
||||
for node_id, node in node_items:
|
||||
processed_any = True
|
||||
try:
|
||||
handlers.upsert_node(node_id, node)
|
||||
except Exception as exc:
|
||||
config._debug_log(
|
||||
"Failed to update node snapshot",
|
||||
context="daemon.snapshot",
|
||||
severity="warn",
|
||||
node_id=node_id,
|
||||
error_class=exc.__class__.__name__,
|
||||
error_message=str(exc),
|
||||
)
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Snapshot node payload",
|
||||
context="daemon.snapshot",
|
||||
node=node,
|
||||
)
|
||||
if processed_any:
|
||||
state.initial_snapshot_sent = True
|
||||
return True
|
||||
except Exception as exc:
|
||||
config._debug_log(
|
||||
"Snapshot refresh failed",
|
||||
context="daemon.snapshot",
|
||||
severity="warn",
|
||||
error_class=exc.__class__.__name__,
|
||||
error_message=str(exc),
|
||||
)
|
||||
_close_interface(state.iface)
|
||||
state.iface = None
|
||||
state.stop.wait(state.retry_delay)
|
||||
state.retry_delay = _advance_retry_delay(state.retry_delay)
|
||||
return False
|
||||
|
||||
|
||||
def _check_inactivity_reconnect(state: _DaemonState) -> bool:
|
||||
"""Reconnect when the interface has been silent for too long.
|
||||
|
||||
Returns:
|
||||
``True`` when a reconnect was triggered and the caller should
|
||||
``continue``; ``False`` otherwise.
|
||||
"""
|
||||
|
||||
if state.iface is None or state.inactivity_reconnect_secs <= 0:
|
||||
return False
|
||||
|
||||
now = time.monotonic()
|
||||
iface_activity = handlers.last_packet_monotonic()
|
||||
|
||||
if (
|
||||
iface_activity is not None
|
||||
and state.iface_connected_at is not None
|
||||
and iface_activity < state.iface_connected_at
|
||||
):
|
||||
iface_activity = state.iface_connected_at
|
||||
|
||||
if iface_activity is not None and (
|
||||
state.last_seen_packet_monotonic is None
|
||||
or iface_activity > state.last_seen_packet_monotonic
|
||||
):
|
||||
state.last_seen_packet_monotonic = iface_activity
|
||||
state.last_inactivity_reconnect = None
|
||||
|
||||
latest_activity = iface_activity
|
||||
if latest_activity is None and state.iface_connected_at is not None:
|
||||
latest_activity = state.iface_connected_at
|
||||
if latest_activity is None:
|
||||
latest_activity = now
|
||||
|
||||
inactivity_elapsed = now - latest_activity
|
||||
believed_disconnected = (
|
||||
_connected_state(getattr(state.iface, "isConnected", None)) is False
|
||||
)
|
||||
|
||||
if (
|
||||
not believed_disconnected
|
||||
and inactivity_elapsed < state.inactivity_reconnect_secs
|
||||
):
|
||||
return False
|
||||
|
||||
if (
|
||||
state.last_inactivity_reconnect is not None
|
||||
and now - state.last_inactivity_reconnect < state.inactivity_reconnect_secs
|
||||
):
|
||||
return False
|
||||
|
||||
reason = (
|
||||
"disconnected"
|
||||
if believed_disconnected
|
||||
else f"no data for {inactivity_elapsed:.0f}s"
|
||||
)
|
||||
config._debug_log(
|
||||
"Mesh interface inactivity detected",
|
||||
context="daemon.interface",
|
||||
severity="warn",
|
||||
reason=reason,
|
||||
)
|
||||
state.last_inactivity_reconnect = now
|
||||
_close_interface(state.iface)
|
||||
state.iface = None
|
||||
state.announced_target = False
|
||||
state.initial_snapshot_sent = False
|
||||
state.energy_session_deadline = None
|
||||
state.iface_connected_at = None
|
||||
return True
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Loop iteration helper
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _loop_iteration(state: _DaemonState) -> bool:
|
||||
"""Execute one pass of the daemon main loop.
|
||||
|
||||
Encapsulates the per-iteration ``continue`` decisions so that
|
||||
:func:`main` stays within the allowed cognitive-complexity budget.
|
||||
|
||||
Returns:
|
||||
``True`` when the loop should start the next iteration immediately
|
||||
(equivalent to a ``continue``); ``False`` when the full pass
|
||||
completed and the caller should sleep before iterating again.
|
||||
"""
|
||||
|
||||
if state.iface is None and not _try_connect(state):
|
||||
return True
|
||||
if _check_energy_saving(state):
|
||||
return True
|
||||
if not state.initial_snapshot_sent and not _try_send_snapshot(state):
|
||||
return True
|
||||
if _check_inactivity_reconnect(state):
|
||||
return True
|
||||
state.ingestor_announcement_sent = _process_ingestor_heartbeat(
|
||||
state.iface, ingestor_announcement_sent=state.ingestor_announcement_sent
|
||||
)
|
||||
state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
|
||||
return False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Entry point
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def main(*, provider: Provider | None = None) -> None:
|
||||
"""Run the mesh ingestion daemon until interrupted."""
|
||||
|
||||
subscribed = _subscribe_receive_topics()
|
||||
if provider is None:
|
||||
if config.PROVIDER == "meshcore":
|
||||
from .providers.meshcore import MeshcoreProvider
|
||||
|
||||
provider = MeshcoreProvider()
|
||||
else:
|
||||
from .providers.meshtastic import MeshtasticProvider
|
||||
|
||||
provider = MeshtasticProvider()
|
||||
|
||||
subscribed = provider.subscribe()
|
||||
if subscribed:
|
||||
config._debug_log(
|
||||
"Subscribed to receive topics",
|
||||
@@ -255,313 +571,79 @@ def main(existing_interface=None) -> None:
|
||||
topics=subscribed,
|
||||
)
|
||||
|
||||
iface = existing_interface
|
||||
resolved_target = None
|
||||
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
|
||||
|
||||
stop = threading.Event()
|
||||
initial_snapshot_sent = False
|
||||
energy_session_deadline = None
|
||||
iface_connected_at: float | None = None
|
||||
last_seen_packet_monotonic = handlers.last_packet_monotonic()
|
||||
last_inactivity_reconnect: float | None = None
|
||||
inactivity_reconnect_secs = max(
|
||||
0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
|
||||
state = _DaemonState(
|
||||
provider=provider,
|
||||
stop=threading.Event(),
|
||||
configured_port=config.CONNECTION,
|
||||
inactivity_reconnect_secs=max(
|
||||
0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
|
||||
),
|
||||
energy_saving_enabled=config.ENERGY_SAVING,
|
||||
energy_online_secs=max(0.0, config._ENERGY_ONLINE_DURATION_SECS),
|
||||
energy_sleep_secs=max(0.0, config._ENERGY_SLEEP_SECS),
|
||||
retry_delay=max(0.0, config._RECONNECT_INITIAL_DELAY_SECS),
|
||||
last_seen_packet_monotonic=handlers.last_packet_monotonic(),
|
||||
active_candidate=config.CONNECTION,
|
||||
)
|
||||
ingestor_announcement_sent = False
|
||||
|
||||
energy_saving_enabled = config.ENERGY_SAVING
|
||||
energy_online_secs = max(0.0, config._ENERGY_ONLINE_DURATION_SECS)
|
||||
energy_sleep_secs = max(0.0, config._ENERGY_SLEEP_SECS)
|
||||
|
||||
def _energy_sleep(reason: str) -> None:
|
||||
if not energy_saving_enabled or energy_sleep_secs <= 0:
|
||||
return
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
f"energy saving: {reason}; sleeping for {energy_sleep_secs:g}s"
|
||||
)
|
||||
stop.wait(energy_sleep_secs)
|
||||
|
||||
def handle_sigterm(*_args) -> None:
|
||||
stop.set()
|
||||
"""Set the stop flag so the daemon loop exits cleanly on SIGTERM."""
|
||||
state.stop.set()
|
||||
|
||||
def handle_sigint(signum, frame) -> None:
|
||||
if stop.is_set():
|
||||
"""Handle SIGINT (Ctrl-C) with graceful-first, hard-exit-second behaviour.
|
||||
|
||||
The first SIGINT sets the stop flag and lets the loop finish its
|
||||
current iteration. A second SIGINT delegates to the default handler,
|
||||
which raises :class:`KeyboardInterrupt` and terminates immediately.
|
||||
"""
|
||||
if state.stop.is_set():
|
||||
signal.default_int_handler(signum, frame)
|
||||
return
|
||||
stop.set()
|
||||
state.stop.set()
|
||||
|
||||
if threading.current_thread() == threading.main_thread():
|
||||
signal.signal(signal.SIGINT, handle_sigint)
|
||||
signal.signal(signal.SIGTERM, handle_sigterm)
|
||||
|
||||
target = config.INSTANCE or "(no INSTANCE_DOMAIN configured)"
|
||||
configured_port = config.CONNECTION
|
||||
active_candidate = configured_port
|
||||
announced_target = False
|
||||
config._debug_log(
|
||||
"Mesh daemon starting",
|
||||
context="daemon.main",
|
||||
severity="info",
|
||||
target=target,
|
||||
port=configured_port or "auto",
|
||||
target=config.INSTANCE or "(no INSTANCE_DOMAIN configured)",
|
||||
port=config.CONNECTION or "auto",
|
||||
channel=config.CHANNEL_INDEX,
|
||||
)
|
||||
|
||||
try:
|
||||
while not stop.is_set():
|
||||
if iface is None:
|
||||
try:
|
||||
if active_candidate:
|
||||
iface, resolved_target = interfaces._create_serial_interface(
|
||||
active_candidate
|
||||
)
|
||||
else:
|
||||
iface, resolved_target = interfaces._create_default_interface()
|
||||
active_candidate = resolved_target
|
||||
interfaces._ensure_radio_metadata(iface)
|
||||
interfaces._ensure_channel_metadata(iface)
|
||||
handlers.register_host_node_id(
|
||||
interfaces._extract_host_node_id(iface)
|
||||
)
|
||||
ingestors.set_ingestor_node_id(handlers.host_node_id())
|
||||
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
|
||||
initial_snapshot_sent = False
|
||||
if not announced_target and resolved_target:
|
||||
config._debug_log(
|
||||
"Using mesh interface",
|
||||
context="daemon.interface",
|
||||
severity="info",
|
||||
target=resolved_target,
|
||||
)
|
||||
announced_target = True
|
||||
if energy_saving_enabled and energy_online_secs > 0:
|
||||
energy_session_deadline = time.monotonic() + energy_online_secs
|
||||
else:
|
||||
energy_session_deadline = None
|
||||
iface_connected_at = time.monotonic()
|
||||
# Seed the inactivity tracking from the connection time so a
|
||||
# reconnect is given a full inactivity window even when the
|
||||
# handler still reports the previous packet timestamp.
|
||||
last_seen_packet_monotonic = iface_connected_at
|
||||
last_inactivity_reconnect = None
|
||||
except interfaces.NoAvailableMeshInterface as exc:
|
||||
config._debug_log(
|
||||
"No mesh interface available",
|
||||
context="daemon.interface",
|
||||
severity="error",
|
||||
error_message=str(exc),
|
||||
)
|
||||
_close_interface(iface)
|
||||
raise SystemExit(1) from exc
|
||||
except Exception as exc:
|
||||
candidate_desc = active_candidate or "auto"
|
||||
config._debug_log(
|
||||
"Failed to create mesh interface",
|
||||
context="daemon.interface",
|
||||
severity="warn",
|
||||
candidate=candidate_desc,
|
||||
error_class=exc.__class__.__name__,
|
||||
error_message=str(exc),
|
||||
)
|
||||
if configured_port is None:
|
||||
active_candidate = None
|
||||
announced_target = False
|
||||
stop.wait(retry_delay)
|
||||
if config._RECONNECT_MAX_DELAY_SECS > 0:
|
||||
retry_delay = min(
|
||||
(
|
||||
retry_delay * 2
|
||||
if retry_delay
|
||||
else config._RECONNECT_INITIAL_DELAY_SECS
|
||||
),
|
||||
config._RECONNECT_MAX_DELAY_SECS,
|
||||
)
|
||||
continue
|
||||
|
||||
if energy_saving_enabled and iface is not None:
|
||||
if (
|
||||
energy_session_deadline is not None
|
||||
and time.monotonic() >= energy_session_deadline
|
||||
):
|
||||
config._debug_log(
|
||||
"Energy saving disconnect",
|
||||
context="daemon.energy",
|
||||
severity="info",
|
||||
)
|
||||
_close_interface(iface)
|
||||
iface = None
|
||||
announced_target = False
|
||||
initial_snapshot_sent = False
|
||||
energy_session_deadline = None
|
||||
_energy_sleep("disconnected after session")
|
||||
continue
|
||||
if (
|
||||
_is_ble_interface(iface)
|
||||
and getattr(iface, "client", object()) is None
|
||||
):
|
||||
config._debug_log(
|
||||
"Energy saving BLE disconnect",
|
||||
context="daemon.energy",
|
||||
severity="info",
|
||||
)
|
||||
_close_interface(iface)
|
||||
iface = None
|
||||
announced_target = False
|
||||
initial_snapshot_sent = False
|
||||
energy_session_deadline = None
|
||||
_energy_sleep("BLE client disconnected")
|
||||
continue
|
||||
|
||||
if not initial_snapshot_sent:
|
||||
try:
|
||||
nodes = getattr(iface, "nodes", {}) or {}
|
||||
node_items = _node_items_snapshot(nodes)
|
||||
if node_items is None:
|
||||
config._debug_log(
|
||||
"Skipping node snapshot due to concurrent modification",
|
||||
context="daemon.snapshot",
|
||||
)
|
||||
else:
|
||||
processed_snapshot_item = False
|
||||
for node_id, node in node_items:
|
||||
processed_snapshot_item = True
|
||||
try:
|
||||
handlers.upsert_node(node_id, node)
|
||||
except Exception as exc:
|
||||
config._debug_log(
|
||||
"Failed to update node snapshot",
|
||||
context="daemon.snapshot",
|
||||
severity="warn",
|
||||
node_id=node_id,
|
||||
error_class=exc.__class__.__name__,
|
||||
error_message=str(exc),
|
||||
)
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Snapshot node payload",
|
||||
context="daemon.snapshot",
|
||||
node=node,
|
||||
)
|
||||
if processed_snapshot_item:
|
||||
initial_snapshot_sent = True
|
||||
except Exception as exc:
|
||||
config._debug_log(
|
||||
"Snapshot refresh failed",
|
||||
context="daemon.snapshot",
|
||||
severity="warn",
|
||||
error_class=exc.__class__.__name__,
|
||||
error_message=str(exc),
|
||||
)
|
||||
_close_interface(iface)
|
||||
iface = None
|
||||
stop.wait(retry_delay)
|
||||
if config._RECONNECT_MAX_DELAY_SECS > 0:
|
||||
retry_delay = min(
|
||||
(
|
||||
retry_delay * 2
|
||||
if retry_delay
|
||||
else config._RECONNECT_INITIAL_DELAY_SECS
|
||||
),
|
||||
config._RECONNECT_MAX_DELAY_SECS,
|
||||
)
|
||||
continue
|
||||
|
||||
if iface is not None and inactivity_reconnect_secs > 0:
|
||||
now_monotonic = time.monotonic()
|
||||
iface_activity = handlers.last_packet_monotonic()
|
||||
if (
|
||||
iface_activity is not None
|
||||
and iface_connected_at is not None
|
||||
and iface_activity < iface_connected_at
|
||||
):
|
||||
iface_activity = iface_connected_at
|
||||
if iface_activity is not None and (
|
||||
last_seen_packet_monotonic is None
|
||||
or iface_activity > last_seen_packet_monotonic
|
||||
):
|
||||
last_seen_packet_monotonic = iface_activity
|
||||
last_inactivity_reconnect = None
|
||||
|
||||
latest_activity = iface_activity
|
||||
if latest_activity is None and iface_connected_at is not None:
|
||||
latest_activity = iface_connected_at
|
||||
if latest_activity is None:
|
||||
latest_activity = now_monotonic
|
||||
|
||||
inactivity_elapsed = now_monotonic - latest_activity
|
||||
|
||||
connected_attr = getattr(iface, "isConnected", None)
|
||||
believed_disconnected = False
|
||||
connected_state = _connected_state(connected_attr)
|
||||
if connected_state is None:
|
||||
if callable(connected_attr):
|
||||
try:
|
||||
believed_disconnected = not bool(connected_attr())
|
||||
except Exception:
|
||||
believed_disconnected = False
|
||||
elif connected_attr is not None:
|
||||
try:
|
||||
believed_disconnected = not bool(connected_attr)
|
||||
except Exception: # pragma: no cover - defensive guard
|
||||
believed_disconnected = False
|
||||
else:
|
||||
believed_disconnected = not connected_state
|
||||
|
||||
should_reconnect = believed_disconnected or (
|
||||
inactivity_elapsed >= inactivity_reconnect_secs
|
||||
)
|
||||
|
||||
if should_reconnect:
|
||||
if (
|
||||
last_inactivity_reconnect is None
|
||||
or now_monotonic - last_inactivity_reconnect
|
||||
>= inactivity_reconnect_secs
|
||||
):
|
||||
reason = (
|
||||
"disconnected"
|
||||
if believed_disconnected
|
||||
else f"no data for {inactivity_elapsed:.0f}s"
|
||||
)
|
||||
config._debug_log(
|
||||
"Mesh interface inactivity detected",
|
||||
context="daemon.interface",
|
||||
severity="warn",
|
||||
reason=reason,
|
||||
)
|
||||
last_inactivity_reconnect = now_monotonic
|
||||
_close_interface(iface)
|
||||
iface = None
|
||||
announced_target = False
|
||||
initial_snapshot_sent = False
|
||||
energy_session_deadline = None
|
||||
iface_connected_at = None
|
||||
continue
|
||||
|
||||
ingestor_announcement_sent = _process_ingestor_heartbeat(
|
||||
iface, ingestor_announcement_sent=ingestor_announcement_sent
|
||||
)
|
||||
|
||||
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
|
||||
stop.wait(config.SNAPSHOT_SECS)
|
||||
while not state.stop.is_set():
|
||||
if not _loop_iteration(state):
|
||||
state.stop.wait(config.SNAPSHOT_SECS)
|
||||
except KeyboardInterrupt: # pragma: no cover - interactive only
|
||||
config._debug_log(
|
||||
"Received KeyboardInterrupt; shutting down",
|
||||
context="daemon.main",
|
||||
severity="info",
|
||||
)
|
||||
stop.set()
|
||||
state.stop.set()
|
||||
finally:
|
||||
_close_interface(iface)
|
||||
_close_interface(state.iface)
|
||||
|
||||
|
||||
__all__ = [
|
||||
"_RECEIVE_TOPICS",
|
||||
"_event_wait_allows_default_timeout",
|
||||
"_node_items_snapshot",
|
||||
"_subscribe_receive_topics",
|
||||
"_is_ble_interface",
|
||||
"_process_ingestor_heartbeat",
|
||||
"_advance_retry_delay",
|
||||
"_loop_iteration",
|
||||
"_check_energy_saving",
|
||||
"_check_inactivity_reconnect",
|
||||
"_connected_state",
|
||||
"_energy_sleep",
|
||||
"_event_wait_allows_default_timeout",
|
||||
"_is_ble_interface",
|
||||
"_node_items_snapshot",
|
||||
"_process_ingestor_heartbeat",
|
||||
"_subscribe_receive_topics",
|
||||
"_try_connect",
|
||||
"_try_send_snapshot",
|
||||
"main",
|
||||
]
|
||||
|
||||
@@ -0,0 +1,96 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Decode Meshtastic protobuf payloads from stdin JSON."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import base64
|
||||
import json
|
||||
import os
|
||||
import sys
|
||||
from typing import Any, Dict, Tuple
|
||||
|
||||
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
if SCRIPT_DIR in sys.path:
|
||||
sys.path.remove(SCRIPT_DIR)
|
||||
|
||||
from google.protobuf.json_format import MessageToDict
|
||||
from meshtastic.protobuf import mesh_pb2, telemetry_pb2
|
||||
|
||||
PORTNUM_MAP: Dict[int, Tuple[str, Any]] = {
|
||||
3: ("POSITION_APP", mesh_pb2.Position),
|
||||
4: ("NODEINFO_APP", mesh_pb2.NodeInfo),
|
||||
5: ("ROUTING_APP", mesh_pb2.Routing),
|
||||
67: ("TELEMETRY_APP", telemetry_pb2.Telemetry),
|
||||
70: ("TRACEROUTE_APP", mesh_pb2.RouteDiscovery),
|
||||
71: ("NEIGHBORINFO_APP", mesh_pb2.NeighborInfo),
|
||||
}
|
||||
|
||||
|
||||
def _decode_payload(portnum: int, payload_b64: str) -> dict[str, Any]:
|
||||
if portnum not in PORTNUM_MAP:
|
||||
return {"error": "unsupported-port", "portnum": portnum}
|
||||
try:
|
||||
payload_bytes = base64.b64decode(payload_b64, validate=True)
|
||||
except Exception as exc:
|
||||
return {"error": f"invalid-payload: {exc}"}
|
||||
|
||||
name, message_cls = PORTNUM_MAP[portnum]
|
||||
msg = message_cls()
|
||||
try:
|
||||
msg.ParseFromString(payload_bytes)
|
||||
except Exception as exc:
|
||||
return {"error": f"decode-failed: {exc}", "portnum": portnum, "type": name}
|
||||
|
||||
decoded = MessageToDict(msg, preserving_proto_field_name=True)
|
||||
return {"portnum": portnum, "type": name, "payload": decoded}
|
||||
|
||||
|
||||
def main() -> int:
|
||||
"""Read a JSON request from stdin and write a decoded protobuf response to stdout.
|
||||
|
||||
Reads a single JSON object containing ``portnum`` (int) and
|
||||
``payload_b64`` (base-64 encoded bytes) from standard input, decodes the
|
||||
protobuf payload via :func:`_decode_payload`, and writes the result as
|
||||
JSON to standard output.
|
||||
|
||||
Returns:
|
||||
``0`` on success, ``1`` when the input is malformed or required fields
|
||||
are absent.
|
||||
"""
|
||||
raw = sys.stdin.read()
|
||||
try:
|
||||
request = json.loads(raw)
|
||||
except json.JSONDecodeError as exc:
|
||||
sys.stdout.write(json.dumps({"error": f"invalid-json: {exc}"}))
|
||||
return 1
|
||||
|
||||
portnum = request.get("portnum")
|
||||
payload_b64 = request.get("payload_b64")
|
||||
|
||||
if not isinstance(portnum, int):
|
||||
sys.stdout.write(json.dumps({"error": "missing-portnum"}))
|
||||
return 1
|
||||
if not isinstance(payload_b64, str):
|
||||
sys.stdout.write(json.dumps({"error": "missing-payload"}))
|
||||
return 1
|
||||
|
||||
result = _decode_payload(portnum, payload_b64)
|
||||
sys.stdout.write(json.dumps(result))
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
raise SystemExit(main())
|
||||
@@ -0,0 +1,240 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Protocol-agnostic event payload types for ingestion.
|
||||
|
||||
The ingestor ultimately POSTs JSON to the web app's ingest routes. These types
|
||||
capture the *shape* of those payloads so multiple providers can emit the same
|
||||
events, regardless of how they source or decode packets.
|
||||
|
||||
These are intentionally defined as ``TypedDict`` so existing code can continue
|
||||
to build plain dictionaries without a runtime dependency on dataclasses.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import NotRequired, TypedDict
|
||||
|
||||
|
||||
class _MessageEventRequired(TypedDict):
|
||||
"""Required fields shared by all :class:`MessageEvent` payloads."""
|
||||
|
||||
id: int
|
||||
rx_time: int
|
||||
rx_iso: str
|
||||
|
||||
|
||||
class MessageEvent(_MessageEventRequired, total=False):
|
||||
"""Payload for the ``/api/messages`` ingest route.
|
||||
|
||||
Maps to the ``MessageEvent`` contract described in ``CONTRACTS.md``.
|
||||
Required fields are inherited from :class:`_MessageEventRequired`;
|
||||
all other fields are optional.
|
||||
"""
|
||||
|
||||
from_id: object
|
||||
to_id: object
|
||||
channel: int
|
||||
portnum: str | None
|
||||
text: str | None
|
||||
encrypted: str | None
|
||||
snr: float | None
|
||||
rssi: int | None
|
||||
hop_limit: int | None
|
||||
reply_id: int | None
|
||||
emoji: str | None
|
||||
channel_name: str
|
||||
ingestor: str | None
|
||||
lora_freq: int
|
||||
modem_preset: str
|
||||
|
||||
|
||||
class _PositionEventRequired(TypedDict):
|
||||
"""Required fields shared by all :class:`PositionEvent` payloads."""
|
||||
|
||||
id: int
|
||||
rx_time: int
|
||||
rx_iso: str
|
||||
|
||||
|
||||
class PositionEvent(_PositionEventRequired, total=False):
|
||||
"""Payload for the ``/api/positions`` ingest route.
|
||||
|
||||
Maps to the ``PositionEvent`` contract described in ``CONTRACTS.md``.
|
||||
Coordinates may be supplied as floating-point degrees or derived from
|
||||
Meshtastic's integer-scaled ``latitudeI``/``longitudeI`` fields.
|
||||
"""
|
||||
|
||||
node_id: str
|
||||
node_num: int | None
|
||||
num: int | None
|
||||
from_id: str | None
|
||||
to_id: object
|
||||
latitude: float | None
|
||||
longitude: float | None
|
||||
altitude: float | None
|
||||
position_time: int | None
|
||||
location_source: str | None
|
||||
precision_bits: int | None
|
||||
sats_in_view: int | None
|
||||
pdop: float | None
|
||||
ground_speed: float | None
|
||||
ground_track: float | None
|
||||
snr: float | None
|
||||
rssi: int | None
|
||||
hop_limit: int | None
|
||||
bitfield: int | None
|
||||
payload_b64: str | None
|
||||
raw: dict
|
||||
ingestor: str | None
|
||||
lora_freq: int
|
||||
modem_preset: str
|
||||
|
||||
|
||||
class _TelemetryEventRequired(TypedDict):
|
||||
"""Required fields shared by all :class:`TelemetryEvent` payloads."""
|
||||
|
||||
id: int
|
||||
rx_time: int
|
||||
rx_iso: str
|
||||
|
||||
|
||||
class TelemetryEvent(_TelemetryEventRequired, total=False):
|
||||
"""Payload for the ``/api/telemetry`` ingest route.
|
||||
|
||||
Maps to the ``TelemetryEvent`` contract described in ``CONTRACTS.md``.
|
||||
Metric keys beyond the required ones are open-ended; the web layer accepts
|
||||
any additional device, environment, power, or air-quality fields.
|
||||
"""
|
||||
|
||||
node_id: str | None
|
||||
node_num: int | None
|
||||
from_id: object
|
||||
to_id: object
|
||||
telemetry_time: int | None
|
||||
channel: int
|
||||
portnum: str | None
|
||||
hop_limit: int | None
|
||||
snr: float | None
|
||||
rssi: int | None
|
||||
bitfield: int | None
|
||||
payload_b64: str
|
||||
ingestor: str | None
|
||||
lora_freq: int
|
||||
modem_preset: str
|
||||
|
||||
# Metric keys are intentionally open-ended; the Ruby side is permissive and
|
||||
# evolves over time.
|
||||
|
||||
|
||||
class _NeighborEntryRequired(TypedDict):
|
||||
"""Required fields for a single entry within a :class:`NeighborsSnapshot`."""
|
||||
|
||||
rx_time: int
|
||||
rx_iso: str
|
||||
|
||||
|
||||
class NeighborEntry(_NeighborEntryRequired, total=False):
|
||||
"""A single observed neighbour node within a :class:`NeighborsSnapshot`.
|
||||
|
||||
Each entry describes one node heard by the reporting device, including
|
||||
optional signal-quality metrics.
|
||||
"""
|
||||
|
||||
neighbor_id: str
|
||||
neighbor_num: int | None
|
||||
snr: float | None
|
||||
|
||||
|
||||
class _NeighborsSnapshotRequired(TypedDict):
|
||||
"""Required fields shared by all :class:`NeighborsSnapshot` payloads."""
|
||||
|
||||
node_id: str
|
||||
rx_time: int
|
||||
rx_iso: str
|
||||
|
||||
|
||||
class NeighborsSnapshot(_NeighborsSnapshotRequired, total=False):
|
||||
"""Payload for the ``/api/neighbors`` ingest route.
|
||||
|
||||
Maps to the ``NeighborsSnapshot`` contract described in ``CONTRACTS.md``.
|
||||
Encapsulates the full list of neighbours heard by a single reporting node.
|
||||
"""
|
||||
|
||||
node_num: int | None
|
||||
neighbors: list[NeighborEntry]
|
||||
node_broadcast_interval_secs: int | None
|
||||
last_sent_by_id: str | None
|
||||
ingestor: str | None
|
||||
lora_freq: int
|
||||
modem_preset: str
|
||||
|
||||
|
||||
class _TraceEventRequired(TypedDict):
|
||||
"""Required fields shared by all :class:`TraceEvent` payloads."""
|
||||
|
||||
hops: list[int]
|
||||
rx_time: int
|
||||
rx_iso: str
|
||||
|
||||
|
||||
class TraceEvent(_TraceEventRequired, total=False):
|
||||
"""Payload for the ``/api/traceroutes`` ingest route.
|
||||
|
||||
Maps to the ``TraceEvent`` contract described in ``CONTRACTS.md``.
|
||||
The ``hops`` list contains node numbers in transmission order from
|
||||
source to destination.
|
||||
"""
|
||||
|
||||
id: int | None
|
||||
request_id: int | None
|
||||
src: int | None
|
||||
dest: int | None
|
||||
rssi: int | None
|
||||
snr: float | None
|
||||
elapsed_ms: int | None
|
||||
ingestor: str | None
|
||||
lora_freq: int
|
||||
modem_preset: str
|
||||
|
||||
|
||||
class IngestorHeartbeat(TypedDict):
|
||||
"""Payload for the ``/api/ingestors`` heartbeat route.
|
||||
|
||||
Maps to the ``IngestorHeartbeat`` contract described in ``CONTRACTS.md``.
|
||||
Sent periodically to signal that the ingestor process is alive and
|
||||
associated with a particular radio node.
|
||||
"""
|
||||
|
||||
node_id: str
|
||||
start_time: int
|
||||
last_seen_time: int
|
||||
version: str
|
||||
lora_freq: NotRequired[int]
|
||||
modem_preset: NotRequired[str]
|
||||
|
||||
|
||||
NodeUpsert = dict[str, dict]
|
||||
|
||||
|
||||
__all__ = [
|
||||
"IngestorHeartbeat",
|
||||
"MessageEvent",
|
||||
"NeighborEntry",
|
||||
"NeighborsSnapshot",
|
||||
"NodeUpsert",
|
||||
"PositionEvent",
|
||||
"TelemetryEvent",
|
||||
"TraceEvent",
|
||||
]
|
||||
File diff suppressed because it is too large
Load Diff
@@ -0,0 +1,100 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Packet handlers that serialise mesh data and push it to the HTTP queue.
|
||||
|
||||
This package is organised into focused submodules:
|
||||
|
||||
- :mod:`._state` — shared mutable state (host node ID, packet timestamps)
|
||||
- :mod:`.radio` — radio metadata enrichment helpers
|
||||
- :mod:`.ignored` — debug-mode logging of dropped packets
|
||||
- :mod:`.position` — GPS position and traceroute handlers
|
||||
- :mod:`.telemetry` — device/environment telemetry and router heartbeat handlers
|
||||
- :mod:`.nodeinfo` — node information update handler
|
||||
- :mod:`.neighborinfo` — neighbour topology snapshot handler
|
||||
- :mod:`.generic` — packet dispatcher, node upsert, and the main receive callback
|
||||
|
||||
All public names from the original flat ``handlers`` module are re-exported
|
||||
here so existing callers (e.g. ``daemon.py``, ``providers/``) require no
|
||||
changes.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from .. import queue as _queue
|
||||
from ._state import (
|
||||
host_node_id,
|
||||
last_packet_monotonic,
|
||||
register_host_node_id,
|
||||
)
|
||||
from .generic import (
|
||||
_is_encrypted_flag,
|
||||
_portnum_candidates,
|
||||
on_receive,
|
||||
store_packet_dict,
|
||||
upsert_node,
|
||||
)
|
||||
from .ignored import (
|
||||
_IGNORED_PACKET_LOCK,
|
||||
_IGNORED_PACKET_LOG_PATH,
|
||||
_record_ignored_packet,
|
||||
)
|
||||
from .neighborinfo import store_neighborinfo_packet
|
||||
from .nodeinfo import store_nodeinfo_packet
|
||||
from .position import (
|
||||
_normalize_trace_hops,
|
||||
base64_payload,
|
||||
store_position_packet,
|
||||
store_traceroute_packet,
|
||||
)
|
||||
from .radio import (
|
||||
_apply_radio_metadata,
|
||||
_apply_radio_metadata_to_nodes,
|
||||
_radio_metadata_fields,
|
||||
)
|
||||
from .telemetry import (
|
||||
_VALID_TELEMETRY_TYPES,
|
||||
store_router_heartbeat_packet,
|
||||
store_telemetry_packet,
|
||||
)
|
||||
|
||||
# Re-export the queue alias for any callers that reference handlers._queue_post_json
|
||||
_queue_post_json = _queue._queue_post_json
|
||||
|
||||
__all__ = [
|
||||
"_IGNORED_PACKET_LOCK",
|
||||
"_IGNORED_PACKET_LOG_PATH",
|
||||
"_VALID_TELEMETRY_TYPES",
|
||||
"_apply_radio_metadata",
|
||||
"_apply_radio_metadata_to_nodes",
|
||||
"_is_encrypted_flag",
|
||||
"_normalize_trace_hops",
|
||||
"_portnum_candidates",
|
||||
"_queue_post_json",
|
||||
"_radio_metadata_fields",
|
||||
"_record_ignored_packet",
|
||||
"base64_payload",
|
||||
"host_node_id",
|
||||
"last_packet_monotonic",
|
||||
"on_receive",
|
||||
"register_host_node_id",
|
||||
"store_neighborinfo_packet",
|
||||
"store_nodeinfo_packet",
|
||||
"store_packet_dict",
|
||||
"store_position_packet",
|
||||
"store_router_heartbeat_packet",
|
||||
"store_telemetry_packet",
|
||||
"store_traceroute_packet",
|
||||
"upsert_node",
|
||||
]
|
||||
@@ -0,0 +1,157 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Shared mutable state and state accessors for the handlers subpackage.
|
||||
|
||||
All mutable globals that span multiple handler modules live here so that each
|
||||
handler submodule can import this module and get a consistent view of state
|
||||
without risking stale references from bare ``from ... import`` bindings.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import math
|
||||
import time
|
||||
|
||||
from .. import config
|
||||
from ..serialization import _canonical_node_id
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Host device identity
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
_host_node_id: str | None = None
|
||||
"""Canonical ``!xxxxxxxx`` identifier for the connected host device."""
|
||||
|
||||
_host_telemetry_last_rx: int | None = None
|
||||
"""Receive timestamp of the last accepted host telemetry packet."""
|
||||
|
||||
_HOST_TELEMETRY_INTERVAL_SECS: int = 60 * 60
|
||||
"""Minimum interval (seconds) between accepted host telemetry packets.
|
||||
|
||||
Meshtastic devices report their own telemetry at regular intervals. Accepting
|
||||
every packet would overwrite the host's profile too aggressively; this window
|
||||
throttles updates to at most once per hour.
|
||||
"""
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Packet receipt tracking
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
_last_packet_monotonic: float | None = None
|
||||
"""Monotonic timestamp of the most recently processed packet."""
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Public accessors
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def register_host_node_id(node_id: str | None) -> None:
|
||||
"""Record the canonical identifier for the connected host device.
|
||||
|
||||
Resetting the host node also clears the telemetry suppression window so
|
||||
the first telemetry packet from the new host is always accepted.
|
||||
|
||||
Parameters:
|
||||
node_id: Identifier reported by the connected device. ``None`` clears
|
||||
the current host assignment.
|
||||
"""
|
||||
|
||||
global _host_node_id, _host_telemetry_last_rx
|
||||
canonical = _canonical_node_id(node_id)
|
||||
_host_node_id = canonical
|
||||
_host_telemetry_last_rx = None
|
||||
if canonical:
|
||||
config._debug_log(
|
||||
"Registered host device node id",
|
||||
context="handlers.host_device",
|
||||
host_node_id=canonical,
|
||||
)
|
||||
|
||||
|
||||
def host_node_id() -> str | None:
|
||||
"""Return the canonical identifier for the connected host device.
|
||||
|
||||
Returns:
|
||||
The canonical ``!xxxxxxxx`` node identifier, or ``None`` when no host
|
||||
has been registered yet.
|
||||
"""
|
||||
|
||||
return _host_node_id
|
||||
|
||||
|
||||
def _mark_host_telemetry_seen(rx_time: int) -> None:
|
||||
"""Update the last receive timestamp for the host telemetry window.
|
||||
|
||||
Parameters:
|
||||
rx_time: Unix timestamp of the accepted host telemetry packet.
|
||||
"""
|
||||
|
||||
global _host_telemetry_last_rx
|
||||
_host_telemetry_last_rx = rx_time
|
||||
|
||||
|
||||
def _host_telemetry_suppressed(rx_time: int) -> tuple[bool, int]:
|
||||
"""Return suppression state and minutes remaining for host telemetry.
|
||||
|
||||
Host telemetry is suppressed when it arrives within
|
||||
:data:`_HOST_TELEMETRY_INTERVAL_SECS` of the previous accepted packet.
|
||||
This avoids flooding the API with high-frequency device metrics from the
|
||||
locally connected node.
|
||||
|
||||
Parameters:
|
||||
rx_time: Unix timestamp of the candidate telemetry packet.
|
||||
|
||||
Returns:
|
||||
A ``(suppressed, minutes_remaining)`` tuple. ``suppressed`` is
|
||||
``True`` when the packet should be dropped; ``minutes_remaining``
|
||||
is the whole number of minutes until the next packet will be accepted.
|
||||
"""
|
||||
|
||||
if _host_telemetry_last_rx is None:
|
||||
return False, 0
|
||||
remaining_secs = (_host_telemetry_last_rx + _HOST_TELEMETRY_INTERVAL_SECS) - rx_time
|
||||
if remaining_secs <= 0:
|
||||
return False, 0
|
||||
return True, int(math.ceil(remaining_secs / 60.0))
|
||||
|
||||
|
||||
def last_packet_monotonic() -> float | None:
|
||||
"""Return the monotonic timestamp of the most recently processed packet.
|
||||
|
||||
Returns:
|
||||
A :func:`time.monotonic` value, or ``None`` before any packet has been
|
||||
received.
|
||||
"""
|
||||
|
||||
return _last_packet_monotonic
|
||||
|
||||
|
||||
def _mark_packet_seen() -> None:
|
||||
"""Record that a packet has been processed by updating the monotonic clock."""
|
||||
|
||||
global _last_packet_monotonic
|
||||
_last_packet_monotonic = time.monotonic()
|
||||
|
||||
|
||||
__all__ = [
|
||||
"_HOST_TELEMETRY_INTERVAL_SECS",
|
||||
"_host_telemetry_suppressed",
|
||||
"_mark_host_telemetry_seen",
|
||||
"_mark_packet_seen",
|
||||
"host_node_id",
|
||||
"last_packet_monotonic",
|
||||
"register_host_node_id",
|
||||
]
|
||||
@@ -0,0 +1,478 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Generic packet dispatcher, node upsert, and the main receive callback."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import base64
|
||||
import contextlib
|
||||
import importlib
|
||||
import json
|
||||
import sys
|
||||
import time
|
||||
from collections.abc import Mapping
|
||||
|
||||
from .. import channels, config, queue
|
||||
from ..serialization import (
|
||||
_canonical_node_id,
|
||||
_coerce_int,
|
||||
_first,
|
||||
_iso,
|
||||
_pkt_to_dict,
|
||||
upsert_payload,
|
||||
)
|
||||
from . import _state, ignored as _ignored_mod
|
||||
from .neighborinfo import store_neighborinfo_packet
|
||||
from .nodeinfo import store_nodeinfo_packet
|
||||
from .position import store_position_packet
|
||||
from .radio import _apply_radio_metadata, _apply_radio_metadata_to_nodes
|
||||
from .telemetry import store_router_heartbeat_packet, store_telemetry_packet
|
||||
from .position import store_traceroute_packet
|
||||
|
||||
|
||||
def _portnum_candidates(name: str) -> set[int]:
|
||||
"""Return Meshtastic port number candidates for ``name``.
|
||||
|
||||
Meshtastic ships two protobuf module layouts (legacy and modern). Both are
|
||||
probed so that port-number comparisons work regardless of which firmware
|
||||
version is installed.
|
||||
|
||||
Parameters:
|
||||
name: Port name to look up in Meshtastic ``PortNum`` enums.
|
||||
|
||||
Returns:
|
||||
Set of integer port numbers resolved from all available Meshtastic
|
||||
modules.
|
||||
"""
|
||||
|
||||
candidates: set[int] = set()
|
||||
for module_name in (
|
||||
"meshtastic.portnums_pb2",
|
||||
"meshtastic.protobuf.portnums_pb2",
|
||||
):
|
||||
module = sys.modules.get(module_name)
|
||||
if module is None:
|
||||
with contextlib.suppress(ModuleNotFoundError):
|
||||
module = importlib.import_module(module_name)
|
||||
if module is None:
|
||||
continue
|
||||
portnum_enum = getattr(module, "PortNum", None)
|
||||
value_lookup = getattr(portnum_enum, "Value", None) if portnum_enum else None
|
||||
if callable(value_lookup):
|
||||
with contextlib.suppress(Exception):
|
||||
candidate = _coerce_int(value_lookup(name))
|
||||
if candidate is not None:
|
||||
candidates.add(candidate)
|
||||
constant_value = getattr(module, name, None)
|
||||
candidate = _coerce_int(constant_value)
|
||||
if candidate is not None:
|
||||
candidates.add(candidate)
|
||||
return candidates
|
||||
|
||||
|
||||
def _is_encrypted_flag(value: object) -> bool:
|
||||
"""Return ``True`` when ``value`` represents an encrypted payload.
|
||||
|
||||
Meshtastic may express the encrypted flag as a boolean, an integer, or a
|
||||
string depending on how the packet was decoded. All representations are
|
||||
normalised to a Python bool.
|
||||
|
||||
Parameters:
|
||||
value: Raw encrypted field from a Meshtastic packet.
|
||||
|
||||
Returns:
|
||||
``True`` when the payload is considered encrypted, ``False`` otherwise.
|
||||
"""
|
||||
|
||||
if isinstance(value, bool):
|
||||
return value
|
||||
if isinstance(value, (int, float)):
|
||||
return value != 0
|
||||
if isinstance(value, str):
|
||||
normalized = value.strip().lower()
|
||||
if normalized in {"", "0", "false", "no"}:
|
||||
return False
|
||||
return True
|
||||
return bool(value)
|
||||
|
||||
|
||||
def upsert_node(node_id: object, node: object) -> None:
|
||||
"""Schedule an upsert for a single node.
|
||||
|
||||
Serialises ``node`` via :func:`upsert_payload`, enriches the result with
|
||||
radio metadata and the current host node identifier, then enqueues a POST
|
||||
to ``/api/nodes``.
|
||||
|
||||
Parameters:
|
||||
node_id: Canonical identifier for the node in the ``!xxxxxxxx`` format.
|
||||
node: Node object or mapping to serialise for the API payload.
|
||||
|
||||
Returns:
|
||||
``None``. The payload is forwarded to the shared HTTP queue.
|
||||
"""
|
||||
|
||||
payload = _apply_radio_metadata_to_nodes(upsert_payload(node_id, node))
|
||||
payload["ingestor"] = _state.host_node_id()
|
||||
queue._queue_post_json("/api/nodes", payload, priority=queue._NODE_POST_PRIORITY)
|
||||
|
||||
if config.DEBUG:
|
||||
from ..serialization import _get
|
||||
|
||||
user = _get(payload[node_id], "user") or {}
|
||||
short = _get(user, "shortName")
|
||||
long = _get(user, "longName")
|
||||
config._debug_log(
|
||||
"Queued node upsert payload",
|
||||
context="handlers.upsert_node",
|
||||
node_id=node_id,
|
||||
short_name=short,
|
||||
long_name=long,
|
||||
)
|
||||
|
||||
|
||||
def store_packet_dict(packet: Mapping) -> None:
|
||||
"""Route a decoded packet to the appropriate storage handler.
|
||||
|
||||
Inspects ``portnum`` (string and integer forms) and the presence of
|
||||
well-known decoded sub-sections to determine packet type, then delegates
|
||||
to the corresponding ``store_*`` handler.
|
||||
|
||||
Parameters:
|
||||
packet: Packet dictionary emitted by the mesh interface.
|
||||
|
||||
Returns:
|
||||
``None``. Side-effects depend on the specific handler invoked.
|
||||
"""
|
||||
|
||||
decoded = packet.get("decoded") or {}
|
||||
|
||||
portnum_raw = _first(decoded, "portnum", default=None)
|
||||
portnum = str(portnum_raw).upper() if portnum_raw is not None else None
|
||||
portnum_int = _coerce_int(portnum_raw)
|
||||
|
||||
telemetry_section = (
|
||||
decoded.get("telemetry") if isinstance(decoded, Mapping) else None
|
||||
)
|
||||
if (
|
||||
portnum == "TELEMETRY_APP"
|
||||
or portnum_int == 65
|
||||
or isinstance(telemetry_section, Mapping)
|
||||
):
|
||||
store_telemetry_packet(packet, decoded)
|
||||
return
|
||||
|
||||
traceroute_section = (
|
||||
decoded.get("traceroute") if isinstance(decoded, Mapping) else None
|
||||
)
|
||||
traceroute_port_ints = _portnum_candidates("TRACEROUTE_APP")
|
||||
|
||||
if (
|
||||
portnum == "TRACEROUTE_APP"
|
||||
or (portnum_int is not None and portnum_int in traceroute_port_ints)
|
||||
or isinstance(traceroute_section, Mapping)
|
||||
):
|
||||
store_traceroute_packet(packet, decoded)
|
||||
return
|
||||
|
||||
if portnum in {"5", "NODEINFO_APP"}:
|
||||
store_nodeinfo_packet(packet, decoded)
|
||||
return
|
||||
|
||||
if portnum in {"4", "POSITION_APP"}:
|
||||
store_position_packet(packet, decoded)
|
||||
return
|
||||
|
||||
neighborinfo_section = (
|
||||
decoded.get("neighborinfo") if isinstance(decoded, Mapping) else None
|
||||
)
|
||||
if portnum == "NEIGHBORINFO_APP" or isinstance(neighborinfo_section, Mapping):
|
||||
store_neighborinfo_packet(packet, decoded)
|
||||
return
|
||||
|
||||
store_forward_port_candidates = _portnum_candidates("STORE_FORWARD_APP")
|
||||
store_forward_section = (
|
||||
decoded.get("storeforward") if isinstance(decoded, Mapping) else None
|
||||
)
|
||||
if portnum == "STORE_FORWARD_APP" or (
|
||||
portnum_int is not None and portnum_int in store_forward_port_candidates
|
||||
):
|
||||
if not isinstance(store_forward_section, Mapping):
|
||||
_ignored_mod._record_ignored_packet(
|
||||
packet, reason="unsupported-store-forward"
|
||||
)
|
||||
return
|
||||
rr = str(store_forward_section.get("rr") or "").upper()
|
||||
if rr == "ROUTER_HEARTBEAT":
|
||||
store_router_heartbeat_packet(packet)
|
||||
return
|
||||
_ignored_mod._record_ignored_packet(
|
||||
packet, reason="unsupported-store-forward-rr"
|
||||
)
|
||||
return
|
||||
|
||||
text = _first(decoded, "payload.text", "text", "data.text", default=None)
|
||||
encrypted = _first(decoded, "payload.encrypted", "encrypted", default=None)
|
||||
if encrypted is None:
|
||||
encrypted = _first(packet, "encrypted", default=None)
|
||||
reply_id_raw = _first(
|
||||
decoded,
|
||||
"payload.replyId",
|
||||
"payload.reply_id",
|
||||
"data.replyId",
|
||||
"data.reply_id",
|
||||
"replyId",
|
||||
"reply_id",
|
||||
default=None,
|
||||
)
|
||||
reply_id = _coerce_int(reply_id_raw)
|
||||
emoji_raw = _first(
|
||||
decoded,
|
||||
"payload.emoji",
|
||||
"data.emoji",
|
||||
"emoji",
|
||||
default=None,
|
||||
)
|
||||
emoji = None
|
||||
if emoji_raw is not None:
|
||||
try:
|
||||
emoji_text = str(emoji_raw)
|
||||
except Exception:
|
||||
emoji_text = None
|
||||
else:
|
||||
emoji_text = emoji_text.strip()
|
||||
if emoji_text:
|
||||
emoji = emoji_text
|
||||
|
||||
routing_section = decoded.get("routing") if isinstance(decoded, Mapping) else None
|
||||
routing_port_candidates = _portnum_candidates("ROUTING_APP")
|
||||
if text is None and (
|
||||
portnum == "ROUTING_APP"
|
||||
or (portnum_int is not None and portnum_int in routing_port_candidates)
|
||||
or isinstance(routing_section, Mapping)
|
||||
):
|
||||
routing_payload = _first(decoded, "payload", "data", default=None)
|
||||
if routing_payload is not None:
|
||||
if isinstance(routing_payload, bytes):
|
||||
text = base64.b64encode(routing_payload).decode("ascii")
|
||||
elif isinstance(routing_payload, str):
|
||||
text = routing_payload
|
||||
else:
|
||||
try:
|
||||
text = json.dumps(routing_payload, ensure_ascii=True)
|
||||
except TypeError:
|
||||
text = str(routing_payload)
|
||||
if isinstance(text, str):
|
||||
text = text.strip() or None
|
||||
|
||||
allowed_port_values = {"1", "TEXT_MESSAGE_APP", "REACTION_APP", "ROUTING_APP"}
|
||||
allowed_port_ints = {1}
|
||||
|
||||
reaction_port_candidates = _portnum_candidates("REACTION_APP")
|
||||
for candidate in reaction_port_candidates:
|
||||
allowed_port_ints.add(candidate)
|
||||
allowed_port_values.add(str(candidate))
|
||||
|
||||
for candidate in routing_port_candidates:
|
||||
allowed_port_ints.add(candidate)
|
||||
allowed_port_values.add(str(candidate))
|
||||
|
||||
if isinstance(routing_section, Mapping) and portnum_int is not None:
|
||||
allowed_port_ints.add(portnum_int)
|
||||
allowed_port_values.add(str(portnum_int))
|
||||
|
||||
is_reaction_packet = portnum == "REACTION_APP" or (
|
||||
reply_id is not None and emoji is not None
|
||||
)
|
||||
if is_reaction_packet and portnum_int is not None:
|
||||
allowed_port_ints.add(portnum_int)
|
||||
allowed_port_values.add(str(portnum_int))
|
||||
|
||||
if portnum and portnum not in allowed_port_values:
|
||||
if portnum_int not in allowed_port_ints:
|
||||
_ignored_mod._record_ignored_packet(packet, reason="unsupported-port")
|
||||
return
|
||||
|
||||
encrypted_flag = _is_encrypted_flag(encrypted)
|
||||
if not any([text, encrypted_flag, emoji is not None, reply_id is not None]):
|
||||
_ignored_mod._record_ignored_packet(packet, reason="no-message-payload")
|
||||
return
|
||||
|
||||
channel = _first(decoded, "channel", default=None)
|
||||
if channel is None:
|
||||
channel = _first(packet, "channel", default=0)
|
||||
try:
|
||||
channel = int(channel)
|
||||
except Exception:
|
||||
channel = 0
|
||||
|
||||
channel_name_value = channels.channel_name(channel)
|
||||
|
||||
pkt_id = _first(packet, "id", "packet_id", "packetId", default=None)
|
||||
if pkt_id is None:
|
||||
_ignored_mod._record_ignored_packet(packet, reason="missing-packet-id")
|
||||
return
|
||||
rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))
|
||||
from_id = _first(packet, "fromId", "from_id", "from", default=None)
|
||||
to_id = _first(packet, "toId", "to_id", "to", default=None)
|
||||
|
||||
if (from_id is None or str(from_id) == "") and config.DEBUG:
|
||||
try:
|
||||
raw = json.dumps(packet, default=str)
|
||||
except Exception:
|
||||
raw = str(packet)
|
||||
config._debug_log(
|
||||
"Packet missing from_id",
|
||||
context="handlers.store_packet_dict",
|
||||
packet=raw,
|
||||
)
|
||||
|
||||
snr = _first(packet, "snr", "rx_snr", "rxSnr", default=None)
|
||||
rssi = _first(packet, "rssi", "rx_rssi", "rxRssi", default=None)
|
||||
hop = _first(packet, "hopLimit", "hop_limit", default=None)
|
||||
|
||||
to_id_normalized = str(to_id).strip() if to_id is not None else ""
|
||||
|
||||
if (
|
||||
not is_reaction_packet
|
||||
and channel == 0
|
||||
and not encrypted_flag
|
||||
and to_id_normalized
|
||||
and to_id_normalized.lower() != "^all"
|
||||
):
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Skipped direct message on primary channel",
|
||||
context="handlers.store_packet_dict",
|
||||
from_id=_canonical_node_id(from_id) or from_id,
|
||||
to_id=_canonical_node_id(to_id) or to_id,
|
||||
channel=channel,
|
||||
)
|
||||
_ignored_mod._record_ignored_packet(packet, reason="skipped-direct-message")
|
||||
return
|
||||
|
||||
if not channels.is_allowed_channel(channel_name_value):
|
||||
_ignored_mod._record_ignored_packet(packet, reason="disallowed-channel")
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Ignored packet on disallowed channel",
|
||||
context="handlers.store_packet_dict",
|
||||
channel=channel,
|
||||
channel_name=channel_name_value,
|
||||
allowed_channels=channels.allowed_channel_names(),
|
||||
)
|
||||
return
|
||||
|
||||
if channels.is_hidden_channel(channel_name_value):
|
||||
_ignored_mod._record_ignored_packet(packet, reason="hidden-channel")
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Ignored packet on hidden channel",
|
||||
context="handlers.store_packet_dict",
|
||||
channel=channel,
|
||||
channel_name=channel_name_value,
|
||||
)
|
||||
return
|
||||
|
||||
message_payload = {
|
||||
"id": int(pkt_id),
|
||||
"rx_time": rx_time,
|
||||
"rx_iso": _iso(rx_time),
|
||||
"from_id": from_id,
|
||||
"to_id": to_id,
|
||||
"channel": channel,
|
||||
"portnum": str(portnum) if portnum is not None else None,
|
||||
"text": text,
|
||||
"encrypted": encrypted,
|
||||
"snr": float(snr) if snr is not None else None,
|
||||
"rssi": int(rssi) if rssi is not None else None,
|
||||
"hop_limit": int(hop) if hop is not None else None,
|
||||
"reply_id": reply_id,
|
||||
"emoji": emoji,
|
||||
"ingestor": _state.host_node_id(),
|
||||
}
|
||||
|
||||
if not encrypted_flag and channel_name_value:
|
||||
message_payload["channel_name"] = channel_name_value
|
||||
queue._queue_post_json(
|
||||
"/api/messages",
|
||||
_apply_radio_metadata(message_payload),
|
||||
priority=queue._MESSAGE_POST_PRIORITY,
|
||||
)
|
||||
|
||||
if config.DEBUG:
|
||||
from_label = _canonical_node_id(from_id) or from_id
|
||||
to_label = _canonical_node_id(to_id) or to_id
|
||||
payload_desc = "Encrypted" if text is None and encrypted else text
|
||||
log_kwargs = {
|
||||
"context": "handlers.store_packet_dict",
|
||||
"from_id": from_label,
|
||||
"to_id": to_label,
|
||||
"channel": channel,
|
||||
"channel_display": channel_name_value or channel,
|
||||
"payload": payload_desc,
|
||||
}
|
||||
if channel_name_value:
|
||||
log_kwargs["channel_name"] = channel_name_value
|
||||
config._debug_log("Queued message payload", **log_kwargs)
|
||||
|
||||
|
||||
def on_receive(packet: object, interface: object) -> None:
|
||||
"""Callback registered with Meshtastic to capture incoming packets.
|
||||
|
||||
Subscribed to all ``meshtastic.receive.*`` pubsub topics. The packet is
|
||||
deduplicated via a ``_potatomesh_seen`` flag before being normalised and
|
||||
dispatched to :func:`store_packet_dict`.
|
||||
|
||||
Parameters:
|
||||
packet: Packet payload supplied by the Meshtastic pubsub topic.
|
||||
interface: Interface instance that produced the packet. Only used for
|
||||
compatibility with Meshtastic's callback signature.
|
||||
|
||||
Returns:
|
||||
``None``. Packets are serialised and enqueued asynchronously.
|
||||
"""
|
||||
|
||||
if isinstance(packet, dict):
|
||||
if packet.get("_potatomesh_seen"):
|
||||
return
|
||||
packet["_potatomesh_seen"] = True
|
||||
|
||||
_state._mark_packet_seen()
|
||||
|
||||
packet_dict = None
|
||||
try:
|
||||
packet_dict = _pkt_to_dict(packet)
|
||||
store_packet_dict(packet_dict)
|
||||
except Exception as exc:
|
||||
info = (
|
||||
list(packet_dict.keys()) if isinstance(packet_dict, dict) else type(packet)
|
||||
)
|
||||
config._debug_log(
|
||||
"Failed to store packet",
|
||||
context="handlers.on_receive",
|
||||
severity="warn",
|
||||
error_class=exc.__class__.__name__,
|
||||
error_message=str(exc),
|
||||
packet_info=info,
|
||||
)
|
||||
|
||||
|
||||
__all__ = [
|
||||
"_is_encrypted_flag",
|
||||
"_portnum_candidates",
|
||||
"on_receive",
|
||||
"store_packet_dict",
|
||||
"upsert_node",
|
||||
]
|
||||
@@ -0,0 +1,103 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Debug-mode logging of ignored Meshtastic packets.
|
||||
|
||||
When :data:`config.DEBUG` is set the ingestor appends a JSON record for each
|
||||
packet that is filtered out (unsupported port, missing fields, disallowed
|
||||
channel, etc.) to a plain-text log file. This aids offline debugging without
|
||||
adding overhead in production.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import base64
|
||||
import json
|
||||
import threading
|
||||
from collections.abc import Mapping
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
|
||||
from .. import config
|
||||
|
||||
_IGNORED_PACKET_LOG_PATH = (
|
||||
Path(__file__).resolve().parents[3] / "ignored-meshtastic.txt"
|
||||
)
|
||||
"""Filesystem path that stores ignored Meshtastic packets when debug mode is active."""
|
||||
|
||||
_IGNORED_PACKET_LOCK = threading.Lock()
|
||||
"""Lock serialising concurrent appends to :data:`_IGNORED_PACKET_LOG_PATH`."""
|
||||
|
||||
|
||||
def _ignored_packet_default(value: object) -> object:
|
||||
"""Return a JSON-serialisable representation for an ignored packet value.
|
||||
|
||||
Called as the ``default`` argument to :func:`json.dumps` when serialising
|
||||
ignored packet entries. Handles container types and raw bytes so the log
|
||||
file contains readable text rather than ``repr()`` fragments.
|
||||
|
||||
Parameters:
|
||||
value: Arbitrary value encountered during packet serialisation.
|
||||
|
||||
Returns:
|
||||
A JSON-compatible object derived from ``value``.
|
||||
"""
|
||||
|
||||
if isinstance(value, (list, tuple, set)):
|
||||
return list(value)
|
||||
if isinstance(value, bytes):
|
||||
return base64.b64encode(value).decode("ascii")
|
||||
if isinstance(value, Mapping):
|
||||
return {
|
||||
str(key): _ignored_packet_default(sub_value)
|
||||
for key, sub_value in value.items()
|
||||
}
|
||||
return str(value)
|
||||
|
||||
|
||||
def _record_ignored_packet(packet: Mapping | object, *, reason: str) -> None:
|
||||
"""Persist packet details to :data:`_IGNORED_PACKET_LOG_PATH` during debugging.
|
||||
|
||||
Does nothing when :data:`config.DEBUG` is ``False``. Each call appends a
|
||||
single newline-delimited JSON record with a timestamp, drop reason, and a
|
||||
sanitised copy of the packet.
|
||||
|
||||
Parameters:
|
||||
packet: Packet object or mapping to record.
|
||||
reason: Short machine-readable label describing why the packet was
|
||||
ignored (e.g. ``"unsupported-port"``, ``"missing-packet-id"``).
|
||||
"""
|
||||
|
||||
if not config.DEBUG:
|
||||
return
|
||||
|
||||
timestamp = datetime.now(timezone.utc).isoformat()
|
||||
entry = {
|
||||
"timestamp": timestamp,
|
||||
"reason": reason,
|
||||
"packet": _ignored_packet_default(packet),
|
||||
}
|
||||
payload = json.dumps(entry, ensure_ascii=False, sort_keys=True)
|
||||
with _IGNORED_PACKET_LOCK:
|
||||
_IGNORED_PACKET_LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
|
||||
with _IGNORED_PACKET_LOG_PATH.open("a", encoding="utf-8") as handle:
|
||||
handle.write(f"{payload}\n")
|
||||
|
||||
|
||||
__all__ = [
|
||||
"_IGNORED_PACKET_LOCK",
|
||||
"_IGNORED_PACKET_LOG_PATH",
|
||||
"_ignored_packet_default",
|
||||
"_record_ignored_packet",
|
||||
]
|
||||
@@ -0,0 +1,150 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Handler for neighbour-information packets."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import time
|
||||
from collections.abc import Mapping
|
||||
|
||||
from .. import config, queue
|
||||
from ..serialization import (
|
||||
_canonical_node_id,
|
||||
_coerce_float,
|
||||
_coerce_int,
|
||||
_first,
|
||||
_iso,
|
||||
_node_num_from_id,
|
||||
)
|
||||
from . import _state
|
||||
from .radio import _apply_radio_metadata
|
||||
|
||||
|
||||
def store_neighborinfo_packet(packet: Mapping, decoded: Mapping) -> None:
|
||||
"""Persist neighbour information gathered from a packet.
|
||||
|
||||
Meshtastic nodes periodically broadcast the set of nodes they can hear
|
||||
directly along with the observed signal quality. This handler serialises
|
||||
that snapshot so the web dashboard can render a live RF topology graph.
|
||||
|
||||
Parameters:
|
||||
packet: Raw Meshtastic packet metadata.
|
||||
decoded: Decoded view containing the ``neighborinfo`` section.
|
||||
|
||||
Returns:
|
||||
``None``. The neighbour snapshot is queued for HTTP submission.
|
||||
"""
|
||||
|
||||
neighbor_section = (
|
||||
decoded.get("neighborinfo") if isinstance(decoded, Mapping) else None
|
||||
)
|
||||
if not isinstance(neighbor_section, Mapping):
|
||||
return
|
||||
|
||||
node_ref = _first(
|
||||
neighbor_section,
|
||||
"nodeId",
|
||||
"node_id",
|
||||
default=_first(packet, "fromId", "from_id", "from", default=None),
|
||||
)
|
||||
node_id = _canonical_node_id(node_ref)
|
||||
if node_id is None:
|
||||
return
|
||||
|
||||
node_num = _coerce_int(_first(neighbor_section, "nodeId", "node_id", default=None))
|
||||
if node_num is None:
|
||||
node_num = _node_num_from_id(node_id)
|
||||
|
||||
node_broadcast_interval = _coerce_int(
|
||||
_first(
|
||||
neighbor_section,
|
||||
"nodeBroadcastIntervalSecs",
|
||||
"node_broadcast_interval_secs",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
|
||||
last_sent_by_ref = _first(
|
||||
neighbor_section,
|
||||
"lastSentById",
|
||||
"last_sent_by_id",
|
||||
default=None,
|
||||
)
|
||||
last_sent_by_id = _canonical_node_id(last_sent_by_ref)
|
||||
|
||||
rx_time = _coerce_int(_first(packet, "rxTime", "rx_time", default=time.time()))
|
||||
if rx_time is None:
|
||||
rx_time = int(time.time())
|
||||
|
||||
neighbors_payload = neighbor_section.get("neighbors")
|
||||
neighbors_iterable = (
|
||||
neighbors_payload if isinstance(neighbors_payload, list) else []
|
||||
)
|
||||
|
||||
neighbor_entries: list[dict] = []
|
||||
for entry in neighbors_iterable:
|
||||
if not isinstance(entry, Mapping):
|
||||
continue
|
||||
neighbor_ref = _first(entry, "nodeId", "node_id", default=None)
|
||||
neighbor_id = _canonical_node_id(neighbor_ref)
|
||||
if neighbor_id is None:
|
||||
continue
|
||||
neighbor_num = _coerce_int(_first(entry, "nodeId", "node_id", default=None))
|
||||
if neighbor_num is None:
|
||||
neighbor_num = _node_num_from_id(neighbor_id)
|
||||
snr = _coerce_float(_first(entry, "snr", default=None))
|
||||
entry_rx_time = _coerce_int(_first(entry, "rxTime", "rx_time", default=None))
|
||||
if entry_rx_time is None:
|
||||
entry_rx_time = rx_time
|
||||
neighbor_entries.append(
|
||||
{
|
||||
"neighbor_id": neighbor_id,
|
||||
"neighbor_num": neighbor_num,
|
||||
"snr": snr,
|
||||
"rx_time": entry_rx_time,
|
||||
"rx_iso": _iso(entry_rx_time),
|
||||
}
|
||||
)
|
||||
|
||||
payload = {
|
||||
"node_id": node_id,
|
||||
"node_num": node_num,
|
||||
"neighbors": neighbor_entries,
|
||||
"rx_time": rx_time,
|
||||
"rx_iso": _iso(rx_time),
|
||||
"ingestor": _state.host_node_id(),
|
||||
}
|
||||
|
||||
if node_broadcast_interval is not None:
|
||||
payload["node_broadcast_interval_secs"] = node_broadcast_interval
|
||||
if last_sent_by_id is not None:
|
||||
payload["last_sent_by_id"] = last_sent_by_id
|
||||
|
||||
queue._queue_post_json(
|
||||
"/api/neighbors",
|
||||
_apply_radio_metadata(payload),
|
||||
priority=queue._NEIGHBOR_POST_PRIORITY,
|
||||
)
|
||||
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Queued neighborinfo payload",
|
||||
context="handlers.store_neighborinfo",
|
||||
node_id=node_id,
|
||||
neighbors=len(neighbor_entries),
|
||||
)
|
||||
|
||||
|
||||
__all__ = ["store_neighborinfo_packet"]
|
||||
@@ -0,0 +1,219 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Handler for node-information packets."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import time
|
||||
from collections.abc import Mapping
|
||||
|
||||
from .. import config, queue
|
||||
from ..serialization import (
|
||||
_canonical_node_id,
|
||||
_coerce_int,
|
||||
_decode_nodeinfo_payload,
|
||||
_extract_payload_bytes,
|
||||
_first,
|
||||
_merge_mappings,
|
||||
_node_num_from_id,
|
||||
_node_to_dict,
|
||||
_nodeinfo_metrics_dict,
|
||||
_nodeinfo_position_dict,
|
||||
_nodeinfo_user_dict,
|
||||
)
|
||||
from . import _state
|
||||
from .radio import _apply_radio_metadata_to_nodes
|
||||
|
||||
|
||||
def store_nodeinfo_packet(packet: Mapping, decoded: Mapping) -> None:
|
||||
"""Persist node information updates.
|
||||
|
||||
Node info packets carry user profile data (short name, long name, hardware
|
||||
model, public key) together with optional position and device-metrics
|
||||
snapshots. When a protobuf payload is present it is decoded first; any
|
||||
fields missing from the protobuf are filled in from the ``decoded`` dict
|
||||
so both firmware variants are handled.
|
||||
|
||||
Parameters:
|
||||
packet: Raw packet metadata describing the update.
|
||||
decoded: Decoded payload that may include ``user`` and ``position``
|
||||
sections.
|
||||
|
||||
Returns:
|
||||
``None``. The node payload is merged into the API queue.
|
||||
"""
|
||||
|
||||
payload_bytes = _extract_payload_bytes(decoded)
|
||||
node_info = _decode_nodeinfo_payload(payload_bytes)
|
||||
decoded_user = decoded.get("user")
|
||||
user_dict = _nodeinfo_user_dict(node_info, decoded_user)
|
||||
|
||||
node_info_fields = set()
|
||||
if node_info:
|
||||
node_info_fields = {field_desc.name for field_desc, _ in node_info.ListFields()}
|
||||
|
||||
node_id = None
|
||||
if isinstance(user_dict, Mapping):
|
||||
node_id = _canonical_node_id(user_dict.get("id"))
|
||||
|
||||
if node_id is None:
|
||||
node_id = _canonical_node_id(
|
||||
_first(packet, "fromId", "from_id", "from", default=None)
|
||||
)
|
||||
|
||||
if node_id is None:
|
||||
return
|
||||
|
||||
node_payload: dict = {}
|
||||
if user_dict:
|
||||
node_payload["user"] = user_dict
|
||||
|
||||
# Resolve node_num from protobuf first, then decoded dict, then from the
|
||||
# canonical ID as a last resort.
|
||||
node_num = None
|
||||
if node_info and "num" in node_info_fields:
|
||||
try:
|
||||
node_num = int(node_info.num)
|
||||
except (TypeError, ValueError):
|
||||
node_num = None
|
||||
if node_num is None:
|
||||
decoded_num = decoded.get("num")
|
||||
if decoded_num is not None:
|
||||
try:
|
||||
node_num = int(decoded_num)
|
||||
except (TypeError, ValueError):
|
||||
try:
|
||||
node_num = int(str(decoded_num).strip(), 0)
|
||||
except Exception:
|
||||
node_num = None
|
||||
if node_num is None:
|
||||
node_num = _node_num_from_id(node_id)
|
||||
if node_num is not None:
|
||||
node_payload["num"] = node_num
|
||||
|
||||
rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))
|
||||
last_heard = None
|
||||
if node_info and "last_heard" in node_info_fields:
|
||||
try:
|
||||
last_heard = int(node_info.last_heard)
|
||||
except (TypeError, ValueError):
|
||||
last_heard = None
|
||||
if last_heard is None:
|
||||
decoded_last_heard = decoded.get("lastHeard")
|
||||
if decoded_last_heard is not None:
|
||||
try:
|
||||
last_heard = int(decoded_last_heard)
|
||||
except (TypeError, ValueError):
|
||||
last_heard = None
|
||||
if last_heard is None or last_heard < rx_time:
|
||||
last_heard = rx_time
|
||||
node_payload["lastHeard"] = last_heard
|
||||
|
||||
snr = None
|
||||
if node_info and "snr" in node_info_fields:
|
||||
try:
|
||||
snr = float(node_info.snr)
|
||||
except (TypeError, ValueError):
|
||||
snr = None
|
||||
if snr is None:
|
||||
snr = _first(packet, "snr", "rx_snr", "rxSnr", default=None)
|
||||
if snr is not None:
|
||||
try:
|
||||
snr = float(snr)
|
||||
except (TypeError, ValueError):
|
||||
snr = None
|
||||
if snr is not None:
|
||||
node_payload["snr"] = snr
|
||||
|
||||
hops = None
|
||||
if node_info and "hops_away" in node_info_fields:
|
||||
try:
|
||||
hops = int(node_info.hops_away)
|
||||
except (TypeError, ValueError):
|
||||
hops = None
|
||||
if hops is None:
|
||||
hops = decoded.get("hopsAway")
|
||||
if hops is not None:
|
||||
try:
|
||||
hops = int(hops)
|
||||
except (TypeError, ValueError):
|
||||
hops = None
|
||||
if hops is not None:
|
||||
node_payload["hopsAway"] = hops
|
||||
|
||||
if node_info and "channel" in node_info_fields:
|
||||
try:
|
||||
node_payload["channel"] = int(node_info.channel)
|
||||
except (TypeError, ValueError):
|
||||
pass
|
||||
|
||||
if node_info and "via_mqtt" in node_info_fields:
|
||||
node_payload["viaMqtt"] = bool(node_info.via_mqtt)
|
||||
|
||||
if node_info and "is_favorite" in node_info_fields:
|
||||
node_payload["isFavorite"] = bool(node_info.is_favorite)
|
||||
elif "isFavorite" in decoded:
|
||||
node_payload["isFavorite"] = bool(decoded.get("isFavorite"))
|
||||
|
||||
if node_info and "is_ignored" in node_info_fields:
|
||||
node_payload["isIgnored"] = bool(node_info.is_ignored)
|
||||
if node_info and "is_key_manually_verified" in node_info_fields:
|
||||
node_payload["isKeyManuallyVerified"] = bool(node_info.is_key_manually_verified)
|
||||
|
||||
metrics = _nodeinfo_metrics_dict(node_info)
|
||||
decoded_metrics = decoded.get("deviceMetrics")
|
||||
if isinstance(decoded_metrics, Mapping):
|
||||
metrics = _merge_mappings(metrics, _node_to_dict(decoded_metrics))
|
||||
if metrics:
|
||||
node_payload["deviceMetrics"] = metrics
|
||||
|
||||
position = _nodeinfo_position_dict(node_info)
|
||||
decoded_position = decoded.get("position")
|
||||
if isinstance(decoded_position, Mapping):
|
||||
position = _merge_mappings(position, _node_to_dict(decoded_position))
|
||||
if position:
|
||||
node_payload["position"] = position
|
||||
|
||||
hop_limit = _first(packet, "hopLimit", "hop_limit", default=None)
|
||||
if hop_limit is not None and "hopLimit" not in node_payload:
|
||||
try:
|
||||
node_payload["hopLimit"] = int(hop_limit)
|
||||
except (TypeError, ValueError):
|
||||
pass
|
||||
|
||||
nodes_payload = _apply_radio_metadata_to_nodes({node_id: node_payload})
|
||||
nodes_payload["ingestor"] = _state.host_node_id()
|
||||
queue._queue_post_json(
|
||||
"/api/nodes",
|
||||
nodes_payload,
|
||||
priority=queue._NODE_POST_PRIORITY,
|
||||
)
|
||||
|
||||
if config.DEBUG:
|
||||
short = None
|
||||
long_name = None
|
||||
if isinstance(user_dict, Mapping):
|
||||
short = user_dict.get("shortName")
|
||||
long_name = user_dict.get("longName")
|
||||
config._debug_log(
|
||||
"Queued nodeinfo payload",
|
||||
context="handlers.store_nodeinfo",
|
||||
node_id=node_id,
|
||||
short_name=short,
|
||||
long_name=long_name,
|
||||
)
|
||||
|
||||
|
||||
__all__ = ["store_nodeinfo_packet"]
|
||||
@@ -0,0 +1,413 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Handlers for position and traceroute packets."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import base64
|
||||
import time
|
||||
from collections.abc import Mapping
|
||||
|
||||
from .. import config, queue
|
||||
from ..serialization import (
|
||||
_canonical_node_id,
|
||||
_coerce_float,
|
||||
_coerce_int,
|
||||
_extract_payload_bytes,
|
||||
_first,
|
||||
_iso,
|
||||
_node_num_from_id,
|
||||
_node_to_dict,
|
||||
_pkt_to_dict,
|
||||
)
|
||||
from . import _state
|
||||
from .ignored import _record_ignored_packet
|
||||
from .radio import _apply_radio_metadata
|
||||
|
||||
|
||||
def base64_payload(payload_bytes: bytes | None) -> str | None:
|
||||
"""Encode raw payload bytes as a Base64 string for JSON transport.
|
||||
|
||||
Parameters:
|
||||
payload_bytes: Optional raw bytes to encode. When ``None`` or empty,
|
||||
``None`` is returned so callers can omit the field.
|
||||
|
||||
Returns:
|
||||
The Base64-encoded ASCII string, or ``None`` when ``payload_bytes`` is
|
||||
falsy.
|
||||
"""
|
||||
|
||||
if not payload_bytes:
|
||||
return None
|
||||
return base64.b64encode(payload_bytes).decode("ascii")
|
||||
|
||||
|
||||
def _normalize_trace_hops(hops_value: object) -> list[int]:
|
||||
"""Coerce hop entries to integer node numbers, preserving order.
|
||||
|
||||
Each hop can arrive as a plain integer, a canonical node-ID string
|
||||
(``!xxxxxxxx``), or a mapping with a ``nodeId`` / ``node_id`` field.
|
||||
All forms are normalised to the raw 32-bit node number used by the API.
|
||||
|
||||
Parameters:
|
||||
hops_value: A single hop or list of hops in any supported form.
|
||||
|
||||
Returns:
|
||||
List of integer node numbers with ``None``-coerced entries dropped.
|
||||
"""
|
||||
|
||||
if hops_value is None:
|
||||
return []
|
||||
hop_entries = hops_value if isinstance(hops_value, list) else [hops_value]
|
||||
normalized: list[int] = []
|
||||
for hop in hop_entries:
|
||||
hop_value = hop
|
||||
if isinstance(hop, Mapping):
|
||||
hop_value = _first(hop, "node_id", "nodeId", "id", "num", default=None)
|
||||
|
||||
canonical = _canonical_node_id(hop_value)
|
||||
hop_id = _node_num_from_id(canonical or hop_value)
|
||||
if hop_id is None:
|
||||
hop_id = _coerce_int(hop_value)
|
||||
if hop_id is not None:
|
||||
normalized.append(hop_id)
|
||||
return normalized
|
||||
|
||||
|
||||
def store_position_packet(packet: Mapping, decoded: Mapping) -> None:
|
||||
"""Persist a decoded GPS position packet to the API.
|
||||
|
||||
Extracts coordinates from both the integer-scaled (``latitudeI`` /
|
||||
``longitudeI``) and floating-point (``latitude`` / ``longitude``) forms
|
||||
that Meshtastic may produce depending on firmware version.
|
||||
|
||||
Parameters:
|
||||
packet: Raw packet metadata emitted by the Meshtastic interface.
|
||||
decoded: Decoded payload extracted from ``packet['decoded']``.
|
||||
|
||||
Returns:
|
||||
``None``. The formatted position payload is added to the HTTP queue.
|
||||
"""
|
||||
|
||||
node_ref = _first(packet, "fromId", "from_id", "from", default=None)
|
||||
if node_ref is None:
|
||||
node_ref = _first(decoded, "num", default=None)
|
||||
node_id = _canonical_node_id(node_ref)
|
||||
if node_id is None:
|
||||
return
|
||||
|
||||
node_num = _coerce_int(_first(decoded, "num", default=None))
|
||||
if node_num is None:
|
||||
node_num = _node_num_from_id(node_id)
|
||||
|
||||
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
|
||||
if pkt_id is None:
|
||||
return
|
||||
|
||||
rx_time = _coerce_int(_first(packet, "rxTime", "rx_time", default=time.time()))
|
||||
if rx_time is None:
|
||||
rx_time = int(time.time())
|
||||
|
||||
to_id = _first(packet, "toId", "to_id", "to", default=None)
|
||||
to_id = to_id if to_id not in {"", None} else None
|
||||
|
||||
position_section = decoded.get("position") if isinstance(decoded, Mapping) else None
|
||||
if not isinstance(position_section, Mapping):
|
||||
position_section = {}
|
||||
|
||||
# Meshtastic firmware may emit coordinates in one of two forms:
|
||||
# - Floating-point degrees: ``latitude`` / ``longitude``
|
||||
# - Integer-scaled (1e-7 degrees): ``latitudeI`` / ``longitudeI``
|
||||
# Try the float form first and fall back to the integer form when absent.
|
||||
latitude = _coerce_float(
|
||||
_first(position_section, "latitude", "raw.latitude", default=None)
|
||||
)
|
||||
if latitude is None:
|
||||
lat_i = _coerce_int(
|
||||
_first(
|
||||
position_section,
|
||||
"latitudeI",
|
||||
"latitude_i",
|
||||
"raw.latitude_i",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
if lat_i is not None:
|
||||
latitude = lat_i / 1e7
|
||||
|
||||
longitude = _coerce_float(
|
||||
_first(position_section, "longitude", "raw.longitude", default=None)
|
||||
)
|
||||
if longitude is None:
|
||||
lon_i = _coerce_int(
|
||||
_first(
|
||||
position_section,
|
||||
"longitudeI",
|
||||
"longitude_i",
|
||||
"raw.longitude_i",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
if lon_i is not None:
|
||||
longitude = lon_i / 1e7
|
||||
|
||||
altitude = _coerce_float(
|
||||
_first(position_section, "altitude", "raw.altitude", default=None)
|
||||
)
|
||||
position_time = _coerce_int(
|
||||
_first(position_section, "time", "raw.time", default=None)
|
||||
)
|
||||
location_source = _first(
|
||||
position_section,
|
||||
"locationSource",
|
||||
"location_source",
|
||||
"raw.location_source",
|
||||
default=None,
|
||||
)
|
||||
location_source = (
|
||||
str(location_source).strip() if location_source not in {None, ""} else None
|
||||
)
|
||||
|
||||
precision_bits = _coerce_int(
|
||||
_first(
|
||||
position_section,
|
||||
"precisionBits",
|
||||
"precision_bits",
|
||||
"raw.precision_bits",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
sats_in_view = _coerce_int(
|
||||
_first(
|
||||
position_section,
|
||||
"satsInView",
|
||||
"sats_in_view",
|
||||
"raw.sats_in_view",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
pdop = _coerce_float(
|
||||
_first(position_section, "PDOP", "pdop", "raw.PDOP", "raw.pdop", default=None)
|
||||
)
|
||||
ground_speed = _coerce_float(
|
||||
_first(
|
||||
position_section,
|
||||
"groundSpeed",
|
||||
"ground_speed",
|
||||
"raw.ground_speed",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
ground_track = _coerce_float(
|
||||
_first(
|
||||
position_section,
|
||||
"groundTrack",
|
||||
"ground_track",
|
||||
"raw.ground_track",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
|
||||
snr = _coerce_float(_first(packet, "snr", "rx_snr", "rxSnr", default=None))
|
||||
rssi = _coerce_int(_first(packet, "rssi", "rx_rssi", "rxRssi", default=None))
|
||||
hop_limit = _coerce_int(_first(packet, "hopLimit", "hop_limit", default=None))
|
||||
bitfield = _coerce_int(_first(decoded, "bitfield", default=None))
|
||||
|
||||
payload_bytes = _extract_payload_bytes(decoded)
|
||||
payload_b64 = base64_payload(payload_bytes)
|
||||
|
||||
raw_section = decoded.get("raw") if isinstance(decoded, Mapping) else None
|
||||
raw_payload = _node_to_dict(raw_section) if raw_section else None
|
||||
if raw_payload is None and position_section:
|
||||
raw_position = (
|
||||
position_section.get("raw")
|
||||
if isinstance(position_section, Mapping)
|
||||
else None
|
||||
)
|
||||
if raw_position:
|
||||
raw_payload = _node_to_dict(raw_position)
|
||||
|
||||
position_payload = {
|
||||
"id": pkt_id,
|
||||
"node_id": node_id or node_ref,
|
||||
"node_num": node_num,
|
||||
"num": node_num,
|
||||
"from_id": node_id,
|
||||
"to_id": to_id,
|
||||
"rx_time": rx_time,
|
||||
"rx_iso": _iso(rx_time),
|
||||
"latitude": latitude,
|
||||
"longitude": longitude,
|
||||
"altitude": altitude,
|
||||
"position_time": position_time,
|
||||
"location_source": location_source,
|
||||
"precision_bits": precision_bits,
|
||||
"sats_in_view": sats_in_view,
|
||||
"pdop": pdop,
|
||||
"ground_speed": ground_speed,
|
||||
"ground_track": ground_track,
|
||||
"snr": snr,
|
||||
"rssi": rssi,
|
||||
"hop_limit": hop_limit,
|
||||
"bitfield": bitfield,
|
||||
"payload_b64": payload_b64,
|
||||
"ingestor": _state.host_node_id(),
|
||||
}
|
||||
if raw_payload:
|
||||
position_payload["raw"] = raw_payload
|
||||
|
||||
queue._queue_post_json(
|
||||
"/api/positions",
|
||||
_apply_radio_metadata(position_payload),
|
||||
priority=queue._POSITION_POST_PRIORITY,
|
||||
)
|
||||
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Queued position payload",
|
||||
context="handlers.store_position",
|
||||
node_id=node_id,
|
||||
latitude=latitude,
|
||||
longitude=longitude,
|
||||
position_time=position_time,
|
||||
)
|
||||
|
||||
|
||||
def store_traceroute_packet(packet: Mapping, decoded: Mapping) -> None:
|
||||
"""Persist traceroute details and the observed hop path to the API.
|
||||
|
||||
Hop lists can arrive under several key names (``hops``, ``path``,
|
||||
``route``) and may appear at multiple nesting levels. All candidates are
|
||||
deduplicated and merged into a single ordered list.
|
||||
|
||||
Parameters:
|
||||
packet: Raw packet metadata from the Meshtastic interface.
|
||||
decoded: Decoded payload containing the traceroute section.
|
||||
|
||||
Returns:
|
||||
``None``. The traceroute payload is queued for HTTP submission, or
|
||||
silently dropped when identifiers are entirely absent.
|
||||
"""
|
||||
|
||||
traceroute_section = (
|
||||
decoded.get("traceroute") if isinstance(decoded, Mapping) else None
|
||||
)
|
||||
request_id = _coerce_int(
|
||||
_first(
|
||||
traceroute_section,
|
||||
"requestId",
|
||||
"request_id",
|
||||
default=_first(decoded, "req", "requestId", "request_id", default=None),
|
||||
)
|
||||
)
|
||||
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
|
||||
if pkt_id is None:
|
||||
pkt_id = request_id
|
||||
|
||||
rx_time = _coerce_int(_first(packet, "rxTime", "rx_time", default=time.time()))
|
||||
if rx_time is None:
|
||||
rx_time = int(time.time())
|
||||
|
||||
src = _coerce_int(
|
||||
_first(
|
||||
decoded,
|
||||
"src",
|
||||
"source",
|
||||
default=_first(packet, "fromId", "from_id", "from", default=None),
|
||||
)
|
||||
)
|
||||
dest = _coerce_int(
|
||||
_first(
|
||||
decoded,
|
||||
"dest",
|
||||
"destination",
|
||||
default=_first(packet, "toId", "to_id", "to", default=None),
|
||||
)
|
||||
)
|
||||
|
||||
metrics = traceroute_section if isinstance(traceroute_section, Mapping) else {}
|
||||
rssi = _coerce_int(
|
||||
_first(metrics, "rssi", default=_first(packet, "rssi", "rx_rssi", "rxRssi"))
|
||||
)
|
||||
snr = _coerce_float(
|
||||
_first(metrics, "snr", default=_first(packet, "snr", "rx_snr", "rxSnr"))
|
||||
)
|
||||
elapsed_ms = _coerce_int(
|
||||
_first(metrics, "elapsed_ms", "latency_ms", "latencyMs", default=None)
|
||||
)
|
||||
|
||||
# Hops can appear under multiple keys at different nesting levels; collect
|
||||
# all candidates and deduplicate while preserving first-seen order.
|
||||
hop_candidates = (
|
||||
_first(metrics, "hops", default=None),
|
||||
_first(metrics, "path", default=None),
|
||||
_first(metrics, "route", default=None),
|
||||
_first(decoded, "hops", default=None),
|
||||
_first(decoded, "path", default=None),
|
||||
(
|
||||
_first(traceroute_section, "route", default=None)
|
||||
if isinstance(traceroute_section, Mapping)
|
||||
else None
|
||||
),
|
||||
)
|
||||
hops: list[int] = []
|
||||
seen_hops: set[int] = set()
|
||||
for candidate in hop_candidates:
|
||||
for hop in _normalize_trace_hops(candidate):
|
||||
if hop in seen_hops:
|
||||
continue
|
||||
seen_hops.add(hop)
|
||||
hops.append(hop)
|
||||
|
||||
if pkt_id is None and request_id is None and not hops:
|
||||
_record_ignored_packet(packet, reason="traceroute-missing-identifiers")
|
||||
return
|
||||
|
||||
payload = {
|
||||
"id": pkt_id,
|
||||
"request_id": request_id,
|
||||
"src": src,
|
||||
"dest": dest,
|
||||
"rx_time": rx_time,
|
||||
"rx_iso": _iso(rx_time),
|
||||
"hops": hops,
|
||||
"rssi": rssi,
|
||||
"snr": snr,
|
||||
"elapsed_ms": elapsed_ms,
|
||||
"ingestor": _state.host_node_id(),
|
||||
}
|
||||
|
||||
queue._queue_post_json(
|
||||
"/api/traces",
|
||||
_apply_radio_metadata(payload),
|
||||
priority=queue._TRACE_POST_PRIORITY,
|
||||
)
|
||||
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Queued traceroute payload",
|
||||
context="handlers.store_traceroute_packet",
|
||||
request_id=request_id,
|
||||
src=src,
|
||||
dest=dest,
|
||||
hop_count=len(hops),
|
||||
)
|
||||
|
||||
|
||||
__all__ = [
|
||||
"base64_payload",
|
||||
"store_position_packet",
|
||||
"store_traceroute_packet",
|
||||
]
|
||||
@@ -0,0 +1,94 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Radio metadata helpers for enriching API payloads.
|
||||
|
||||
LoRa radio parameters (frequency and modem preset) are captured once at
|
||||
connection time by :mod:`data.mesh_ingestor.interfaces` and stored on the
|
||||
:mod:`data.mesh_ingestor.config` module. The helpers here read those cached
|
||||
values and attach them to outgoing payloads so the web dashboard can display
|
||||
radio configuration alongside mesh data.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from .. import config
|
||||
|
||||
|
||||
def _radio_metadata_fields() -> dict[str, object]:
|
||||
"""Return the shared radio metadata fields for payload enrichment.
|
||||
|
||||
Reads ``LORA_FREQ`` and ``MODEM_PRESET`` from :mod:`config` and returns
|
||||
only the keys that have been populated (i.e. skips ``None`` values).
|
||||
|
||||
Returns:
|
||||
A dictionary containing zero, one, or both of ``lora_freq`` and
|
||||
``modem_preset`` depending on what is available.
|
||||
"""
|
||||
|
||||
metadata: dict[str, object] = {}
|
||||
freq = getattr(config, "LORA_FREQ", None)
|
||||
if freq is not None:
|
||||
metadata["lora_freq"] = freq
|
||||
preset = getattr(config, "MODEM_PRESET", None)
|
||||
if preset is not None:
|
||||
metadata["modem_preset"] = preset
|
||||
return metadata
|
||||
|
||||
|
||||
def _apply_radio_metadata(payload: dict) -> dict:
|
||||
"""Augment a flat payload dict with radio metadata when available.
|
||||
|
||||
Parameters:
|
||||
payload: Mutable dictionary that will receive radio metadata keys.
|
||||
|
||||
Returns:
|
||||
The same ``payload`` dict with radio metadata keys merged in-place.
|
||||
"""
|
||||
|
||||
metadata = _radio_metadata_fields()
|
||||
if metadata:
|
||||
payload.update(metadata)
|
||||
return payload
|
||||
|
||||
|
||||
def _apply_radio_metadata_to_nodes(payload: dict) -> dict:
|
||||
"""Attach radio metadata to each node entry stored in ``payload``.
|
||||
|
||||
Node upsert payloads are keyed by node ID; each value is a dict of node
|
||||
attributes. This function enriches every node-value dict with radio
|
||||
metadata so the dashboard can show the radio configuration that was active
|
||||
when the node was last heard.
|
||||
|
||||
Parameters:
|
||||
payload: Mapping of ``node_id → node_dict`` to enrich in-place.
|
||||
|
||||
Returns:
|
||||
The same ``payload`` dict after in-place mutation of its node entries.
|
||||
"""
|
||||
|
||||
metadata = _radio_metadata_fields()
|
||||
if not metadata:
|
||||
return payload
|
||||
for value in payload.values():
|
||||
if isinstance(value, dict):
|
||||
value.update(metadata)
|
||||
return payload
|
||||
|
||||
|
||||
__all__ = [
|
||||
"_apply_radio_metadata",
|
||||
"_apply_radio_metadata_to_nodes",
|
||||
"_radio_metadata_fields",
|
||||
]
|
||||
@@ -0,0 +1,563 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Handlers for telemetry and router-heartbeat packets."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import time
|
||||
from collections.abc import Mapping
|
||||
|
||||
from .. import config, queue
|
||||
from ..serialization import (
|
||||
_canonical_node_id,
|
||||
_coerce_float,
|
||||
_coerce_int,
|
||||
_extract_payload_bytes,
|
||||
_first,
|
||||
_iso,
|
||||
_node_num_from_id,
|
||||
)
|
||||
from . import _state
|
||||
from .position import base64_payload
|
||||
from .radio import _apply_radio_metadata, _apply_radio_metadata_to_nodes
|
||||
|
||||
_VALID_TELEMETRY_TYPES: frozenset[str] = frozenset(
|
||||
{"device", "environment", "power", "air_quality"}
|
||||
)
|
||||
"""Allowed discriminator values for the ``telemetry_type`` field.
|
||||
|
||||
Meshtastic uses a protobuf ``oneof`` so only one metric sub-object can be
|
||||
populated per packet. Values outside this set indicate a firmware version
|
||||
that added a new type not yet handled here; those are logged and dropped to
|
||||
avoid persisting unexpected data shapes.
|
||||
"""
|
||||
|
||||
|
||||
def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
|
||||
"""Persist telemetry metrics extracted from a packet.
|
||||
|
||||
Handles all four Meshtastic telemetry sub-types (device, environment,
|
||||
power, air quality) by extracting common fields first and then
|
||||
conditionally adding type-specific metric keys.
|
||||
|
||||
Host telemetry is rate-limited: if the locally connected node's own
|
||||
telemetry arrives within the suppression window it is silently dropped to
|
||||
avoid constant self-updates overwriting other node data.
|
||||
|
||||
Parameters:
|
||||
packet: Packet metadata received from the radio interface.
|
||||
decoded: Meshtastic-decoded view containing telemetry structures.
|
||||
|
||||
Returns:
|
||||
``None``. The telemetry payload is added to the HTTP queue.
|
||||
"""
|
||||
|
||||
telemetry_section = (
|
||||
decoded.get("telemetry") if isinstance(decoded, Mapping) else None
|
||||
)
|
||||
if not isinstance(telemetry_section, Mapping):
|
||||
return
|
||||
|
||||
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
|
||||
if pkt_id is None:
|
||||
return
|
||||
|
||||
raw_from = _first(packet, "fromId", "from_id", "from", default=None)
|
||||
node_id = _canonical_node_id(raw_from)
|
||||
node_num = _coerce_int(_first(decoded, "num", "node_num", default=None))
|
||||
if node_num is None:
|
||||
node_num = _node_num_from_id(node_id or raw_from)
|
||||
|
||||
to_id = _first(packet, "toId", "to_id", "to", default=None)
|
||||
|
||||
raw_rx_time = _first(packet, "rxTime", "rx_time", default=time.time())
|
||||
try:
|
||||
rx_time = int(raw_rx_time)
|
||||
except (TypeError, ValueError):
|
||||
rx_time = int(time.time())
|
||||
rx_iso = _iso(rx_time)
|
||||
|
||||
host_id = _state.host_node_id()
|
||||
# The locally connected node broadcasts its own telemetry frequently.
|
||||
# Accepting every packet would overwrite the host's profile more often
|
||||
# than necessary; the suppression window (default 1 h) rate-limits
|
||||
# self-updates without blocking telemetry from other nodes.
|
||||
if host_id is not None and node_id == host_id:
|
||||
suppressed, minutes_remaining = _state._host_telemetry_suppressed(rx_time)
|
||||
if suppressed:
|
||||
config._debug_log(
|
||||
"Suppressed host telemetry update",
|
||||
context="handlers.store_telemetry",
|
||||
host_node_id=host_id,
|
||||
minutes_remaining=minutes_remaining,
|
||||
)
|
||||
return
|
||||
_state._mark_host_telemetry_seen(rx_time)
|
||||
|
||||
telemetry_time = _coerce_int(_first(telemetry_section, "time", default=None))
|
||||
|
||||
_dm = telemetry_section.get("deviceMetrics") or telemetry_section.get(
|
||||
"device_metrics"
|
||||
)
|
||||
_em = telemetry_section.get("environmentMetrics") or telemetry_section.get(
|
||||
"environment_metrics"
|
||||
)
|
||||
_pm = telemetry_section.get("powerMetrics") or telemetry_section.get(
|
||||
"power_metrics"
|
||||
)
|
||||
_aq = telemetry_section.get("airQualityMetrics") or telemetry_section.get(
|
||||
"air_quality_metrics"
|
||||
)
|
||||
# Priority order matters: deviceMetrics is checked first because the device
|
||||
# sub-object also carries a voltage field that overlaps with powerMetrics.
|
||||
# Meshtastic uses a protobuf oneof so only one sub-object can be populated per
|
||||
# packet; the elif chain handles any hypothetical overlap from future providers.
|
||||
if isinstance(_dm, Mapping):
|
||||
telemetry_type: str | None = "device"
|
||||
elif isinstance(_em, Mapping):
|
||||
telemetry_type = "environment"
|
||||
elif isinstance(_pm, Mapping):
|
||||
telemetry_type = "power"
|
||||
elif isinstance(_aq, Mapping):
|
||||
telemetry_type = "air_quality"
|
||||
else:
|
||||
telemetry_type = None
|
||||
|
||||
if telemetry_type is not None and telemetry_type not in _VALID_TELEMETRY_TYPES:
|
||||
config._debug_log(
|
||||
"Unexpected telemetry_type value; dropping field",
|
||||
context="handlers.store_telemetry",
|
||||
severity="warning",
|
||||
always=True,
|
||||
telemetry_type=telemetry_type,
|
||||
)
|
||||
telemetry_type = None
|
||||
|
||||
channel = _coerce_int(_first(decoded, "channel", default=None))
|
||||
if channel is None:
|
||||
channel = _coerce_int(_first(packet, "channel", default=None))
|
||||
if channel is None:
|
||||
channel = 0
|
||||
|
||||
portnum = _first(decoded, "portnum", default=None)
|
||||
portnum = str(portnum) if portnum not in {None, ""} else None
|
||||
|
||||
bitfield = _coerce_int(_first(decoded, "bitfield", default=None))
|
||||
|
||||
snr = _coerce_float(_first(packet, "snr", "rx_snr", "rxSnr", default=None))
|
||||
rssi = _coerce_int(_first(packet, "rssi", "rx_rssi", "rxRssi", default=None))
|
||||
hop_limit = _coerce_int(_first(packet, "hopLimit", "hop_limit", default=None))
|
||||
|
||||
payload_bytes = _extract_payload_bytes(decoded)
|
||||
payload_b64 = base64_payload(payload_bytes) or ""
|
||||
|
||||
battery_level = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"batteryLevel",
|
||||
"battery_level",
|
||||
"deviceMetrics.batteryLevel",
|
||||
"environmentMetrics.battery_level",
|
||||
"deviceMetrics.battery_level",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
voltage = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"voltage",
|
||||
"environmentMetrics.voltage",
|
||||
"deviceMetrics.voltage",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
channel_utilization = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"channelUtilization",
|
||||
"channel_utilization",
|
||||
"deviceMetrics.channelUtilization",
|
||||
"deviceMetrics.channel_utilization",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
air_util_tx = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"airUtilTx",
|
||||
"air_util_tx",
|
||||
"deviceMetrics.airUtilTx",
|
||||
"deviceMetrics.air_util_tx",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
uptime_seconds = _coerce_int(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"uptimeSeconds",
|
||||
"uptime_seconds",
|
||||
"deviceMetrics.uptimeSeconds",
|
||||
"deviceMetrics.uptime_seconds",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
|
||||
temperature = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"temperature",
|
||||
"environmentMetrics.temperature",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
relative_humidity = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"relativeHumidity",
|
||||
"relative_humidity",
|
||||
"environmentMetrics.relativeHumidity",
|
||||
"environmentMetrics.relative_humidity",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
barometric_pressure = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"barometricPressure",
|
||||
"barometric_pressure",
|
||||
"environmentMetrics.barometricPressure",
|
||||
"environmentMetrics.barometric_pressure",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
|
||||
current = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"current",
|
||||
"deviceMetrics.current",
|
||||
"deviceMetrics.current_ma",
|
||||
"deviceMetrics.currentMa",
|
||||
"environmentMetrics.current",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
gas_resistance = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"gasResistance",
|
||||
"gas_resistance",
|
||||
"environmentMetrics.gasResistance",
|
||||
"environmentMetrics.gas_resistance",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
iaq = _coerce_int(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"iaq",
|
||||
"environmentMetrics.iaq",
|
||||
"environmentMetrics.iaqIndex",
|
||||
"environmentMetrics.iaq_index",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
distance = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"distance",
|
||||
"environmentMetrics.distance",
|
||||
"environmentMetrics.range",
|
||||
"environmentMetrics.rangeMeters",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
lux = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"lux",
|
||||
"environmentMetrics.lux",
|
||||
"environmentMetrics.illuminance",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
white_lux = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"whiteLux",
|
||||
"white_lux",
|
||||
"environmentMetrics.whiteLux",
|
||||
"environmentMetrics.white_lux",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
ir_lux = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"irLux",
|
||||
"ir_lux",
|
||||
"environmentMetrics.irLux",
|
||||
"environmentMetrics.ir_lux",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
uv_lux = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"uvLux",
|
||||
"uv_lux",
|
||||
"environmentMetrics.uvLux",
|
||||
"environmentMetrics.uv_lux",
|
||||
"environmentMetrics.uvIndex",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
wind_direction = _coerce_int(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"windDirection",
|
||||
"wind_direction",
|
||||
"environmentMetrics.windDirection",
|
||||
"environmentMetrics.wind_direction",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
wind_speed = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"windSpeed",
|
||||
"wind_speed",
|
||||
"environmentMetrics.windSpeed",
|
||||
"environmentMetrics.wind_speed",
|
||||
"environmentMetrics.windSpeedMps",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
wind_gust = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"windGust",
|
||||
"wind_gust",
|
||||
"environmentMetrics.windGust",
|
||||
"environmentMetrics.wind_gust",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
wind_lull = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"windLull",
|
||||
"wind_lull",
|
||||
"environmentMetrics.windLull",
|
||||
"environmentMetrics.wind_lull",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
weight = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"weight",
|
||||
"environmentMetrics.weight",
|
||||
"environmentMetrics.mass",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
radiation = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"radiation",
|
||||
"environmentMetrics.radiation",
|
||||
"environmentMetrics.radiationLevel",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
rainfall_1h = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"rainfall1h",
|
||||
"rainfall_1h",
|
||||
"environmentMetrics.rainfall1h",
|
||||
"environmentMetrics.rainfall_1h",
|
||||
"environmentMetrics.rainfallOneHour",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
rainfall_24h = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"rainfall24h",
|
||||
"rainfall_24h",
|
||||
"environmentMetrics.rainfall24h",
|
||||
"environmentMetrics.rainfall_24h",
|
||||
"environmentMetrics.rainfallTwentyFourHour",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
soil_moisture = _coerce_int(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"soilMoisture",
|
||||
"soil_moisture",
|
||||
"environmentMetrics.soilMoisture",
|
||||
"environmentMetrics.soil_moisture",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
soil_temperature = _coerce_float(
|
||||
_first(
|
||||
telemetry_section,
|
||||
"soilTemperature",
|
||||
"soil_temperature",
|
||||
"environmentMetrics.soilTemperature",
|
||||
"environmentMetrics.soil_temperature",
|
||||
default=None,
|
||||
)
|
||||
)
|
||||
|
||||
telemetry_payload = {
|
||||
"id": pkt_id,
|
||||
"node_id": node_id,
|
||||
"node_num": node_num,
|
||||
"from_id": node_id or raw_from,
|
||||
"to_id": to_id,
|
||||
"rx_time": rx_time,
|
||||
"rx_iso": rx_iso,
|
||||
"telemetry_time": telemetry_time,
|
||||
"channel": channel,
|
||||
"portnum": portnum,
|
||||
"bitfield": bitfield,
|
||||
"snr": snr,
|
||||
"rssi": rssi,
|
||||
"hop_limit": hop_limit,
|
||||
"payload_b64": payload_b64,
|
||||
"ingestor": _state.host_node_id(),
|
||||
}
|
||||
|
||||
# Conditionally include metric keys so the API ignores absent fields rather
|
||||
# than overwriting existing values with null.
|
||||
if battery_level is not None:
|
||||
telemetry_payload["battery_level"] = battery_level
|
||||
if voltage is not None:
|
||||
telemetry_payload["voltage"] = voltage
|
||||
if channel_utilization is not None:
|
||||
telemetry_payload["channel_utilization"] = channel_utilization
|
||||
if air_util_tx is not None:
|
||||
telemetry_payload["air_util_tx"] = air_util_tx
|
||||
if uptime_seconds is not None:
|
||||
telemetry_payload["uptime_seconds"] = uptime_seconds
|
||||
if temperature is not None:
|
||||
telemetry_payload["temperature"] = temperature
|
||||
if relative_humidity is not None:
|
||||
telemetry_payload["relative_humidity"] = relative_humidity
|
||||
if barometric_pressure is not None:
|
||||
telemetry_payload["barometric_pressure"] = barometric_pressure
|
||||
if current is not None:
|
||||
telemetry_payload["current"] = current
|
||||
if gas_resistance is not None:
|
||||
telemetry_payload["gas_resistance"] = gas_resistance
|
||||
if iaq is not None:
|
||||
telemetry_payload["iaq"] = iaq
|
||||
if distance is not None:
|
||||
telemetry_payload["distance"] = distance
|
||||
if lux is not None:
|
||||
telemetry_payload["lux"] = lux
|
||||
if white_lux is not None:
|
||||
telemetry_payload["white_lux"] = white_lux
|
||||
if ir_lux is not None:
|
||||
telemetry_payload["ir_lux"] = ir_lux
|
||||
if uv_lux is not None:
|
||||
telemetry_payload["uv_lux"] = uv_lux
|
||||
if wind_direction is not None:
|
||||
telemetry_payload["wind_direction"] = wind_direction
|
||||
if wind_speed is not None:
|
||||
telemetry_payload["wind_speed"] = wind_speed
|
||||
if wind_gust is not None:
|
||||
telemetry_payload["wind_gust"] = wind_gust
|
||||
if wind_lull is not None:
|
||||
telemetry_payload["wind_lull"] = wind_lull
|
||||
if weight is not None:
|
||||
telemetry_payload["weight"] = weight
|
||||
if radiation is not None:
|
||||
telemetry_payload["radiation"] = radiation
|
||||
if rainfall_1h is not None:
|
||||
telemetry_payload["rainfall_1h"] = rainfall_1h
|
||||
if rainfall_24h is not None:
|
||||
telemetry_payload["rainfall_24h"] = rainfall_24h
|
||||
if soil_moisture is not None:
|
||||
telemetry_payload["soil_moisture"] = soil_moisture
|
||||
if soil_temperature is not None:
|
||||
telemetry_payload["soil_temperature"] = soil_temperature
|
||||
if telemetry_type is not None:
|
||||
telemetry_payload["telemetry_type"] = telemetry_type
|
||||
|
||||
queue._queue_post_json(
|
||||
"/api/telemetry",
|
||||
_apply_radio_metadata(telemetry_payload),
|
||||
priority=queue._TELEMETRY_POST_PRIORITY,
|
||||
)
|
||||
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Queued telemetry payload",
|
||||
context="handlers.store_telemetry",
|
||||
node_id=node_id,
|
||||
battery_level=battery_level,
|
||||
voltage=voltage,
|
||||
)
|
||||
|
||||
|
||||
def store_router_heartbeat_packet(packet: Mapping) -> None:
|
||||
"""Persist a ``STORE_FORWARD_APP ROUTER_HEARTBEAT`` as a node presence update.
|
||||
|
||||
The heartbeat carries no message payload — the only actionable signal is
|
||||
that the store-and-forward router is alive at the observed ``rx_time``.
|
||||
All other fields are left untouched so the router's existing profile is
|
||||
not overwritten.
|
||||
|
||||
Parameters:
|
||||
packet: Raw packet metadata.
|
||||
|
||||
Returns:
|
||||
``None``. A minimal node upsert is enqueued at low priority.
|
||||
"""
|
||||
|
||||
node_id = _canonical_node_id(
|
||||
_first(packet, "fromId", "from_id", "from", default=None)
|
||||
)
|
||||
if node_id is None:
|
||||
return
|
||||
|
||||
rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))
|
||||
|
||||
node_payload: dict = {"lastHeard": rx_time}
|
||||
nodes_payload = _apply_radio_metadata_to_nodes({node_id: node_payload})
|
||||
nodes_payload["ingestor"] = _state.host_node_id()
|
||||
queue._queue_post_json(
|
||||
"/api/nodes", nodes_payload, priority=queue._DEFAULT_POST_PRIORITY
|
||||
)
|
||||
|
||||
if config.DEBUG:
|
||||
config._debug_log(
|
||||
"Queued router heartbeat node upsert",
|
||||
context="handlers.store_router_heartbeat",
|
||||
node_id=node_id,
|
||||
rx_time=rx_time,
|
||||
)
|
||||
|
||||
|
||||
__all__ = [
|
||||
"store_router_heartbeat_packet",
|
||||
"store_telemetry_packet",
|
||||
]
|
||||
@@ -113,6 +113,7 @@ def queue_ingestor_heartbeat(
|
||||
"start_time": STATE.start_time,
|
||||
"last_seen_time": now,
|
||||
"version": INGESTOR_VERSION,
|
||||
"protocol": getattr(config, "PROVIDER", "meshtastic") or "meshtastic",
|
||||
}
|
||||
if getattr(config, "LORA_FREQ", None) is not None:
|
||||
payload["lora_freq"] = config.LORA_FREQ
|
||||
|
||||
@@ -17,7 +17,6 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import contextlib
|
||||
import glob
|
||||
import importlib
|
||||
import ipaddress
|
||||
import math
|
||||
@@ -33,6 +32,13 @@ except Exception: # pragma: no cover - dependency optional in tests
|
||||
meshtastic = None # type: ignore[assignment]
|
||||
|
||||
from . import channels, config, serialization
|
||||
from .connection import (
|
||||
BLE_ADDRESS_RE,
|
||||
DEFAULT_TCP_PORT,
|
||||
DEFAULT_SERIAL_PATTERNS,
|
||||
default_serial_targets,
|
||||
parse_ble_target,
|
||||
)
|
||||
|
||||
|
||||
def _ensure_mapping(value) -> Mapping | None:
|
||||
@@ -151,7 +157,21 @@ def _candidate_node_id(mapping: Mapping | None) -> str | None:
|
||||
|
||||
|
||||
def _extract_host_node_id(iface) -> str | None:
|
||||
"""Return the canonical node identifier for the connected host device."""
|
||||
"""Return the canonical node identifier for the connected host device.
|
||||
|
||||
Searches a sequence of well-known attribute names (``myInfo``,
|
||||
``my_node_info``, etc.) on ``iface`` for a mapping that contains a
|
||||
recognisable node identifier, then falls back to the raw ``myNodeNum``
|
||||
integer attribute.
|
||||
|
||||
Parameters:
|
||||
iface: Live Meshtastic interface object, or any object that exposes
|
||||
node-identity attributes in one of the expected forms.
|
||||
|
||||
Returns:
|
||||
A canonical ``!xxxxxxxx`` node identifier, or ``None`` when no
|
||||
identifiable host node information is available.
|
||||
"""
|
||||
|
||||
if iface is None:
|
||||
return None
|
||||
@@ -239,6 +259,9 @@ def _patch_meshtastic_nodeinfo_handler() -> None:
|
||||
with contextlib.suppress(Exception):
|
||||
mesh_interface_module = importlib.import_module("meshtastic.mesh_interface")
|
||||
|
||||
# Replace the module-level handler only once; the sentinel attribute prevents
|
||||
# re-wrapping if _patch_meshtastic_nodeinfo_handler() is called again after
|
||||
# the interface module is reloaded or re-imported.
|
||||
if not getattr(original, "_potato_mesh_safe_wrapper", False):
|
||||
module._onNodeInfoReceive = _build_safe_nodeinfo_callback(original)
|
||||
|
||||
@@ -297,6 +320,22 @@ def _patch_nodeinfo_handler_class(
|
||||
"""Subclass that guards against missing node identifiers."""
|
||||
|
||||
def onReceive(self, iface, packet): # type: ignore[override]
|
||||
"""Normalise ``packet`` before dispatching to the parent handler.
|
||||
|
||||
Injects a canonical ``id`` field when one can be inferred from the
|
||||
packet's other fields, then delegates to the original
|
||||
``NodeInfoHandler.onReceive``. A ``KeyError`` on ``"id"`` is
|
||||
suppressed because some firmware versions omit the field entirely.
|
||||
|
||||
Parameters:
|
||||
iface: The Meshtastic interface that received the packet.
|
||||
packet: Raw nodeinfo packet dict, possibly lacking an ``id``
|
||||
key.
|
||||
|
||||
Returns:
|
||||
The return value of the parent handler, or ``None`` when a
|
||||
missing ``"id"`` key would otherwise raise.
|
||||
"""
|
||||
normalised = _normalise_nodeinfo_packet(packet)
|
||||
if normalised is not None:
|
||||
packet = normalised
|
||||
@@ -616,25 +655,13 @@ def _ensure_channel_metadata(iface: Any) -> None:
|
||||
)
|
||||
|
||||
|
||||
_DEFAULT_TCP_PORT = 4403
|
||||
_DEFAULT_TCP_TARGET = "http://127.0.0.1"
|
||||
|
||||
_DEFAULT_SERIAL_PATTERNS = (
|
||||
"/dev/ttyACM*",
|
||||
"/dev/ttyUSB*",
|
||||
"/dev/tty.usbmodem*",
|
||||
"/dev/tty.usbserial*",
|
||||
"/dev/cu.usbmodem*",
|
||||
"/dev/cu.usbserial*",
|
||||
)
|
||||
|
||||
# Support both MAC addresses (Linux/Windows) and UUIDs (macOS)
|
||||
_BLE_ADDRESS_RE = re.compile(
|
||||
r"^(?:"
|
||||
r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}|" # MAC address format
|
||||
r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}" # UUID format
|
||||
r")$"
|
||||
)
|
||||
# Private aliases so that existing internal callers and monkeypatching in
|
||||
# tests keep working without modification.
|
||||
_DEFAULT_TCP_PORT = DEFAULT_TCP_PORT # backward-compat alias
|
||||
_DEFAULT_SERIAL_PATTERNS = DEFAULT_SERIAL_PATTERNS # backward-compat alias
|
||||
_BLE_ADDRESS_RE = BLE_ADDRESS_RE # backward-compat alias
|
||||
|
||||
|
||||
class _DummySerialInterface:
|
||||
@@ -644,27 +671,11 @@ class _DummySerialInterface:
|
||||
self.nodes: dict = {}
|
||||
|
||||
def close(self) -> None: # pragma: no cover - nothing to close
|
||||
"""No-op: the dummy interface holds no resources to release."""
|
||||
pass
|
||||
|
||||
|
||||
def _parse_ble_target(value: str) -> str | None:
|
||||
"""Return a normalized BLE address (MAC or UUID) when ``value`` matches the format.
|
||||
|
||||
Parameters:
|
||||
value: User-provided target string.
|
||||
|
||||
Returns:
|
||||
The normalised MAC address or UUID, or ``None`` when validation fails.
|
||||
"""
|
||||
|
||||
if not value:
|
||||
return None
|
||||
value = value.strip()
|
||||
if not value:
|
||||
return None
|
||||
if _BLE_ADDRESS_RE.fullmatch(value):
|
||||
return value.upper()
|
||||
return None
|
||||
_parse_ble_target = parse_ble_target # backward-compat alias
|
||||
|
||||
|
||||
def _parse_network_target(value: str) -> tuple[str, int] | None:
|
||||
@@ -711,6 +722,9 @@ def _parse_network_target(value: str) -> tuple[str, int] | None:
|
||||
if result:
|
||||
return result
|
||||
|
||||
# For bare "host:port" strings that urlparse may misparse, try a manual
|
||||
# partition. The `startswith("[")` guard excludes IPv6 bracket notation
|
||||
# (e.g. "[::1]:8080") because those already succeed via urlparse above.
|
||||
if value.count(":") == 1 and not value.startswith("["):
|
||||
host, _, port_text = value.partition(":")
|
||||
try:
|
||||
@@ -812,19 +826,7 @@ class NoAvailableMeshInterface(RuntimeError):
|
||||
"""Raised when no default mesh interface can be created."""
|
||||
|
||||
|
||||
def _default_serial_targets() -> list[str]:
|
||||
"""Return candidate serial device paths for auto-discovery."""
|
||||
|
||||
candidates: list[str] = []
|
||||
seen: set[str] = set()
|
||||
for pattern in _DEFAULT_SERIAL_PATTERNS:
|
||||
for path in sorted(glob.glob(pattern)):
|
||||
if path not in seen:
|
||||
candidates.append(path)
|
||||
seen.add(path)
|
||||
if "/dev/ttyACM0" not in seen:
|
||||
candidates.append("/dev/ttyACM0")
|
||||
return candidates
|
||||
_default_serial_targets = default_serial_targets # backward-compat alias
|
||||
|
||||
|
||||
def _create_default_interface() -> tuple[object, str]:
|
||||
|
||||
@@ -0,0 +1,115 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Node identity helpers shared across ingestor providers.
|
||||
|
||||
The web application keys nodes by a canonical textual identifier of the form
|
||||
``!%08x`` (lowercase hex). Both the Python collector and Ruby server accept
|
||||
several input forms (ints, ``0x`` hex strings, ``!`` hex strings, decimal
|
||||
strings). This module centralizes that normalization.
|
||||
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import Final
|
||||
|
||||
CANONICAL_PREFIX: Final[str] = "!"
|
||||
|
||||
|
||||
def canonical_node_id(value: object) -> str | None:
|
||||
"""Convert ``value`` into canonical ``!xxxxxxxx`` form.
|
||||
|
||||
Parameters:
|
||||
value: Node reference which may be an int, float, or string.
|
||||
|
||||
Returns:
|
||||
Canonical node id string or ``None`` when parsing fails.
|
||||
"""
|
||||
|
||||
if value is None:
|
||||
return None
|
||||
if isinstance(value, (int, float)):
|
||||
try:
|
||||
num = int(value)
|
||||
except (TypeError, ValueError):
|
||||
return None
|
||||
if num < 0:
|
||||
return None
|
||||
return f"{CANONICAL_PREFIX}{num & 0xFFFFFFFF:08x}"
|
||||
if not isinstance(value, str):
|
||||
return None
|
||||
|
||||
trimmed = value.strip()
|
||||
if not trimmed:
|
||||
return None
|
||||
if trimmed.startswith("^"):
|
||||
# Meshtastic special destinations like "^all" are not node ids; callers
|
||||
# that already accept them should keep passing them through unchanged.
|
||||
return trimmed
|
||||
if trimmed.startswith(CANONICAL_PREFIX):
|
||||
body = trimmed[1:]
|
||||
elif trimmed.lower().startswith("0x"):
|
||||
body = trimmed[2:]
|
||||
elif trimmed.isdigit():
|
||||
try:
|
||||
return f"{CANONICAL_PREFIX}{int(trimmed, 10) & 0xFFFFFFFF:08x}"
|
||||
except ValueError:
|
||||
return None
|
||||
else:
|
||||
body = trimmed
|
||||
|
||||
if not body:
|
||||
return None
|
||||
try:
|
||||
return f"{CANONICAL_PREFIX}{int(body, 16) & 0xFFFFFFFF:08x}"
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
|
||||
def node_num_from_id(node_id: object) -> int | None:
|
||||
"""Extract the numeric node identifier from a canonical (or near-canonical) id."""
|
||||
|
||||
if node_id is None:
|
||||
return None
|
||||
if isinstance(node_id, (int, float)):
|
||||
try:
|
||||
num = int(node_id)
|
||||
except (TypeError, ValueError):
|
||||
return None
|
||||
return num if num >= 0 else None
|
||||
if not isinstance(node_id, str):
|
||||
return None
|
||||
|
||||
trimmed = node_id.strip()
|
||||
if not trimmed:
|
||||
return None
|
||||
if trimmed.startswith(CANONICAL_PREFIX):
|
||||
trimmed = trimmed[1:]
|
||||
if trimmed.lower().startswith("0x"):
|
||||
trimmed = trimmed[2:]
|
||||
try:
|
||||
return int(trimmed, 16)
|
||||
except ValueError:
|
||||
try:
|
||||
return int(trimmed, 10)
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
|
||||
__all__ = [
|
||||
"CANONICAL_PREFIX",
|
||||
"canonical_node_id",
|
||||
"node_num_from_id",
|
||||
]
|
||||
@@ -0,0 +1,55 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Provider interface for ingestion sources.
|
||||
|
||||
Today the repo ships a Meshtastic provider only. This module defines the seam so
|
||||
future providers (MeshCore, Reticulum, ...) can be added without changing the
|
||||
web app ingest contract.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from collections.abc import Iterable
|
||||
from typing import Protocol, runtime_checkable
|
||||
|
||||
|
||||
@runtime_checkable
|
||||
class Provider(Protocol):
|
||||
"""Abstract source of mesh observations."""
|
||||
|
||||
name: str
|
||||
|
||||
def subscribe(self) -> list[str]:
|
||||
"""Subscribe to any async receive callbacks and return topic names."""
|
||||
|
||||
def connect(
|
||||
self, *, active_candidate: str | None
|
||||
) -> tuple[object, str | None, str | None]:
|
||||
"""Create an interface connection.
|
||||
|
||||
Returns:
|
||||
(iface, resolved_target, next_active_candidate)
|
||||
"""
|
||||
|
||||
def extract_host_node_id(self, iface: object) -> str | None:
|
||||
"""Best-effort extraction of the connected host node id."""
|
||||
|
||||
def node_snapshot_items(self, iface: object) -> Iterable[tuple[str, object]]:
|
||||
"""Return iterable of (node_id, node_obj) for initial snapshot."""
|
||||
|
||||
|
||||
__all__ = [
|
||||
"Provider",
|
||||
]
|
||||
@@ -0,0 +1,39 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Provider implementations.
|
||||
|
||||
This package contains protocol-specific provider implementations (Meshtastic,
|
||||
MeshCore, and others in the future).
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from .meshtastic import MeshtasticProvider
|
||||
|
||||
|
||||
def __getattr__(name: str) -> object:
|
||||
"""Lazy-load provider classes that carry optional heavy dependencies.
|
||||
|
||||
``MeshcoreProvider`` is imported on demand so that the MeshCore library
|
||||
(once wired in) is not loaded at startup when ``PROVIDER=meshtastic``.
|
||||
"""
|
||||
if name == "MeshcoreProvider":
|
||||
from .meshcore import MeshcoreProvider
|
||||
|
||||
return MeshcoreProvider
|
||||
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
|
||||
|
||||
|
||||
__all__ = ["MeshtasticProvider", "MeshcoreProvider"]
|
||||
@@ -0,0 +1,845 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""MeshCore provider implementation.
|
||||
|
||||
This module defines :class:`MeshcoreProvider`, which satisfies the
|
||||
:class:`~data.mesh_ingestor.provider.Provider` protocol for MeshCore nodes
|
||||
connected via serial port, BLE, or TCP/IP.
|
||||
|
||||
The provider runs MeshCore's ``asyncio`` event loop in a background daemon
|
||||
thread so that incoming events are dispatched without blocking the
|
||||
synchronous daemon loop. Received contacts, channel messages, and direct
|
||||
messages are forwarded to the shared HTTP ingest queue via the same
|
||||
:mod:`~data.mesh_ingestor.handlers` helpers used by the Meshtastic provider.
|
||||
|
||||
Connection type is detected automatically from the target string:
|
||||
|
||||
* **BLE** — MAC address (``AA:BB:CC:DD:EE:FF``) or UUID (macOS format).
|
||||
* **TCP** — ``host:port`` or ``[ipv6]:port`` (accepts hostnames).
|
||||
* **Serial** — any other non-empty string (e.g. ``/dev/ttyUSB0``).
|
||||
* **Auto** — ``None`` or empty: tries serial candidates from
|
||||
:func:`~data.mesh_ingestor.connection.default_serial_targets`.
|
||||
|
||||
Node identities are derived from the first four bytes (eight hex characters)
|
||||
of each contact's 32-byte public key, formatted as ``!xxxxxxxx`` to match
|
||||
the canonical node-ID schema used across the system. Ingested
|
||||
``user.shortName`` is the first four hex digits of that key (two bytes),
|
||||
not the advertised name.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import base64
|
||||
import hashlib
|
||||
import json
|
||||
import threading
|
||||
import time
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
|
||||
from .. import config
|
||||
from ..connection import default_serial_targets, parse_ble_target, parse_tcp_target
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Debug log file
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
_IGNORED_MESSAGE_LOG_PATH = Path(__file__).resolve().parents[3] / "ignored-meshcore.txt"
|
||||
"""Filesystem path that stores raw MeshCore messages when ``DEBUG=1``."""
|
||||
|
||||
_IGNORED_MESSAGE_LOCK = threading.Lock()
|
||||
"""Lock guarding writes to :data:`_IGNORED_MESSAGE_LOG_PATH`."""
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Connection constants
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
_CONNECT_TIMEOUT_SECS: float = 30.0
|
||||
"""Seconds to wait for the MeshCore node to respond to the appstart handshake."""
|
||||
|
||||
_DEFAULT_BAUDRATE: int = 115200
|
||||
"""Default baud rate for MeshCore serial connections."""
|
||||
|
||||
# MeshCore ``ADV_TYPE_*`` (``AdvertDataHelpers.h``) → ``user.role`` for POST /api/nodes.
|
||||
_MESHCORE_ADV_TYPE_ROLE: dict[int, str] = {
|
||||
1: "COMPANION", # ADV_TYPE_CHAT
|
||||
2: "REPEATER", # ADV_TYPE_REPEATER
|
||||
3: "ROOM_SERVER", # ADV_TYPE_ROOM_SERVER
|
||||
4: "SENSOR", # ADV_TYPE_SENSOR
|
||||
}
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _derive_message_id(sender_ts: int, discriminator: str, text: str) -> int:
|
||||
"""Derive a stable 32-bit message ID from available MeshCore fields.
|
||||
|
||||
MeshCore does not assign firmware-side packet IDs. This function
|
||||
produces a deterministic 32-bit integer so that re-delivered messages
|
||||
resolve to the same database row via the UPSERT ON CONFLICT path, while
|
||||
messages that differ in timestamp, channel/peer, or text content produce
|
||||
distinct IDs.
|
||||
|
||||
Parameters:
|
||||
sender_ts: Unix timestamp from the sender's clock.
|
||||
discriminator: Channel index (``"c<N>"`` for channel messages) or
|
||||
pubkey prefix (for direct messages) to separate messages with
|
||||
the same timestamp.
|
||||
text: Message text.
|
||||
|
||||
Returns:
|
||||
A non-negative 32-bit integer suitable for the ``id`` column.
|
||||
"""
|
||||
data = f"{sender_ts}:{discriminator}:{text}".encode("utf-8", errors="replace")
|
||||
return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")
|
||||
|
||||
|
||||
def _meshcore_node_id(public_key_hex: str | None) -> str | None:
|
||||
"""Derive a canonical ``!xxxxxxxx`` node ID from a MeshCore public key.
|
||||
|
||||
Uses the first four bytes (eight hex characters) of the 32-byte public
|
||||
key, formatted as ``!xxxxxxxx``.
|
||||
|
||||
Parameters:
|
||||
public_key_hex: 64-character lowercase hex string for the node's
|
||||
public key as returned by the MeshCore library.
|
||||
|
||||
Returns:
|
||||
Canonical ``!xxxxxxxx`` node ID string, or ``None`` when the key is
|
||||
absent or too short.
|
||||
"""
|
||||
if not public_key_hex or len(public_key_hex) < 8:
|
||||
return None
|
||||
return "!" + public_key_hex[:8].lower()
|
||||
|
||||
|
||||
def _meshcore_short_name(public_key_hex: str | None) -> str:
|
||||
"""Return the first four hex digits of a MeshCore public key as short name.
|
||||
|
||||
Meshtastic-style ``shortName`` fields are four characters wide; MeshCore
|
||||
ingest uses the leading two bytes of the 32-byte public key in lowercase
|
||||
hex so the label is stable and unique per key prefix.
|
||||
|
||||
Parameters:
|
||||
public_key_hex: Full public key as a hex string from the MeshCore API.
|
||||
|
||||
Returns:
|
||||
Four lowercase hex characters (e.g. ``"aabb"``), or an empty string
|
||||
when the key is missing or shorter than four hex characters.
|
||||
"""
|
||||
if not public_key_hex or len(public_key_hex) < 4:
|
||||
return ""
|
||||
return public_key_hex[:4].lower()
|
||||
|
||||
|
||||
def _meshcore_adv_type_to_role(adv_type: object) -> str | None:
|
||||
"""Map MeshCore ``ADV_TYPE_*`` (contact ``type`` / self ``adv_type``) to ingest role.
|
||||
|
||||
Values match MeshCore firmware ``AdvertDataHelpers.h`` (``ADV_TYPE_CHAT``,
|
||||
``ADV_TYPE_REPEATER``, …). Role strings match the MeshCore palette keys
|
||||
used by the web dashboard (``COMPANION``, ``REPEATER``, …).
|
||||
|
||||
Parameters:
|
||||
adv_type: Raw type byte from meshcore_py (typically ``int`` 0–4).
|
||||
Non-integer values (e.g. ``float``, ``None``) are rejected and
|
||||
return ``None``. Future firmware type codes not yet in the mapping
|
||||
also return ``None`` until the table is updated.
|
||||
|
||||
Returns:
|
||||
Uppercase role string, or ``None`` when the value is unknown or should
|
||||
not override the web default (``ADV_TYPE_NONE`` / unrecognised).
|
||||
"""
|
||||
if not isinstance(adv_type, int):
|
||||
return None
|
||||
return _MESHCORE_ADV_TYPE_ROLE.get(adv_type)
|
||||
|
||||
|
||||
def _pubkey_prefix_to_node_id(contacts: dict, pubkey_prefix: str) -> str | None:
|
||||
"""Look up a canonical node ID by six-byte public-key prefix.
|
||||
|
||||
Parameters:
|
||||
contacts: Mapping of full ``public_key`` hex strings to contact dicts.
|
||||
pubkey_prefix: Twelve-character hex string (six bytes) as used in
|
||||
MeshCore direct-message events.
|
||||
|
||||
Returns:
|
||||
Canonical ``!xxxxxxxx`` node ID for the first matching contact, or
|
||||
``None`` when no contact's public key starts with *pubkey_prefix*.
|
||||
"""
|
||||
for pub_key in contacts:
|
||||
if pub_key.startswith(pubkey_prefix):
|
||||
return _meshcore_node_id(pub_key)
|
||||
return None
|
||||
|
||||
|
||||
def _contact_to_node_dict(contact: dict) -> dict:
|
||||
"""Convert a MeshCore contact dict to a Meshtastic-ish node dict.
|
||||
|
||||
Parameters:
|
||||
contact: Contact dict from the MeshCore library. Expected keys
|
||||
include ``public_key``, ``type`` (``ADV_TYPE_*``), ``adv_name``,
|
||||
``last_advert``, ``adv_lat``, and ``adv_lon``.
|
||||
|
||||
Returns:
|
||||
Node dict compatible with the ``POST /api/nodes`` payload format.
|
||||
"""
|
||||
pub_key = contact.get("public_key", "")
|
||||
name = (contact.get("adv_name") or "").strip()
|
||||
role = _meshcore_adv_type_to_role(contact.get("type"))
|
||||
node: dict = {
|
||||
"lastHeard": contact.get("last_advert"),
|
||||
"user": {
|
||||
"longName": name,
|
||||
"shortName": _meshcore_short_name(pub_key),
|
||||
"publicKey": pub_key,
|
||||
**({"role": role} if role is not None else {}),
|
||||
},
|
||||
}
|
||||
lat = contact.get("adv_lat")
|
||||
lon = contact.get("adv_lon")
|
||||
if lat is not None and lon is not None and (lat or lon):
|
||||
node["position"] = {"latitude": lat, "longitude": lon}
|
||||
return node
|
||||
|
||||
|
||||
def _self_info_to_node_dict(self_info: dict) -> dict:
|
||||
"""Convert a MeshCore ``SELF_INFO`` payload to a Meshtastic-ish node dict.
|
||||
|
||||
Parameters:
|
||||
self_info: Payload dict from the ``SELF_INFO`` event. Expected keys
|
||||
include ``name``, ``public_key``, ``adv_type`` (``ADV_TYPE_*``),
|
||||
``adv_lat``, and ``adv_lon``.
|
||||
|
||||
Returns:
|
||||
Node dict compatible with the ``POST /api/nodes`` payload format.
|
||||
"""
|
||||
name = (self_info.get("name") or "").strip()
|
||||
pub_key = self_info.get("public_key", "")
|
||||
role = _meshcore_adv_type_to_role(self_info.get("adv_type"))
|
||||
node: dict = {
|
||||
"lastHeard": int(time.time()),
|
||||
"user": {
|
||||
"longName": name,
|
||||
"shortName": _meshcore_short_name(pub_key),
|
||||
"publicKey": pub_key,
|
||||
**({"role": role} if role is not None else {}),
|
||||
},
|
||||
}
|
||||
lat = self_info.get("adv_lat")
|
||||
lon = self_info.get("adv_lon")
|
||||
if lat is not None and lon is not None and (lat or lon):
|
||||
node["position"] = {"latitude": lat, "longitude": lon}
|
||||
return node
|
||||
|
||||
|
||||
def _to_json_safe(value: object) -> object:
|
||||
"""Recursively convert *value* to a JSON-serialisable form.
|
||||
|
||||
Handles the common types present in mesh protocol messages: dicts, lists,
|
||||
bytes (base64-encoded), and primitives. Anything else is coerced via
|
||||
``str()``.
|
||||
"""
|
||||
if isinstance(value, dict):
|
||||
return {str(k): _to_json_safe(v) for k, v in value.items()}
|
||||
if isinstance(value, (list, tuple, set)):
|
||||
return [_to_json_safe(v) for v in value]
|
||||
if isinstance(value, bytes):
|
||||
return base64.b64encode(value).decode("ascii")
|
||||
if isinstance(value, (str, int, float, bool)) or value is None:
|
||||
return value
|
||||
return str(value)
|
||||
|
||||
|
||||
def _record_meshcore_message(message: object, *, source: str) -> None:
|
||||
"""Persist a MeshCore message to :data:`ignored-meshcore.txt` when ``DEBUG=1``.
|
||||
|
||||
When ``DEBUG`` is not set the function returns immediately without any
|
||||
I/O so that production deployments are not burdened by file writes.
|
||||
|
||||
Parameters:
|
||||
message: The raw message object received from the MeshCore node.
|
||||
source: A short label describing where the message originated (e.g.
|
||||
a serial port path or BLE address).
|
||||
"""
|
||||
if not config.DEBUG:
|
||||
return
|
||||
|
||||
timestamp = datetime.now(timezone.utc).isoformat()
|
||||
entry = {
|
||||
"message": _to_json_safe(message),
|
||||
"source": source,
|
||||
"timestamp": timestamp,
|
||||
}
|
||||
payload = json.dumps(entry, ensure_ascii=False, sort_keys=True)
|
||||
with _IGNORED_MESSAGE_LOCK:
|
||||
_IGNORED_MESSAGE_LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
|
||||
with _IGNORED_MESSAGE_LOG_PATH.open("a", encoding="utf-8") as fh:
|
||||
fh.write(f"{payload}\n")
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Interface
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class _MeshcoreInterface:
|
||||
"""Live MeshCore interface managing an asyncio event loop in a background thread.
|
||||
|
||||
Holds connection state, a thread-safe snapshot of known contacts, and the
|
||||
handles needed to shut down cleanly when the daemon requests a disconnect.
|
||||
"""
|
||||
|
||||
host_node_id: str | None = None
|
||||
"""Canonical ``!xxxxxxxx`` identifier for the connected host device."""
|
||||
|
||||
def __init__(self, *, target: str | None) -> None:
|
||||
"""Initialise the interface with the connection *target*."""
|
||||
self._target = target
|
||||
self._mc: object | None = None
|
||||
self._loop: asyncio.AbstractEventLoop | None = None
|
||||
self._thread: threading.Thread | None = None
|
||||
self._stop_event: asyncio.Event | None = None
|
||||
self._contacts_lock = threading.Lock()
|
||||
self._contacts: dict = {}
|
||||
self.isConnected: bool = False
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Contact management (called from the asyncio thread)
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def _update_contact(self, contact: dict) -> None:
|
||||
"""Thread-safely add or update a contact in the local snapshot.
|
||||
|
||||
Parameters:
|
||||
contact: Contact dict from a ``CONTACTS``, ``NEW_CONTACT``, or
|
||||
``NEXT_CONTACT`` event.
|
||||
"""
|
||||
pub_key = contact.get("public_key")
|
||||
if pub_key:
|
||||
with self._contacts_lock:
|
||||
self._contacts[pub_key] = contact
|
||||
|
||||
def contacts_snapshot(self) -> list[tuple[str, dict]]:
|
||||
"""Return a thread-safe snapshot of all known contacts as node entries.
|
||||
|
||||
Returns:
|
||||
List of ``(canonical_node_id, node_dict)`` pairs, skipping any
|
||||
contact whose public key cannot be mapped to a valid node ID.
|
||||
"""
|
||||
with self._contacts_lock:
|
||||
items = list(self._contacts.items())
|
||||
result = []
|
||||
for pub_key, contact in items:
|
||||
node_id = _meshcore_node_id(pub_key)
|
||||
if node_id is not None:
|
||||
result.append((node_id, _contact_to_node_dict(contact)))
|
||||
return result
|
||||
|
||||
def lookup_node_id(self, pubkey_prefix: str) -> str | None:
|
||||
"""Return the canonical node ID for the contact matching *pubkey_prefix*.
|
||||
|
||||
Parameters:
|
||||
pubkey_prefix: Twelve-character hex string (six bytes) from a
|
||||
``CONTACT_MSG_RECV`` event.
|
||||
|
||||
Returns:
|
||||
Canonical ``!xxxxxxxx`` node ID, or ``None`` when no match.
|
||||
"""
|
||||
with self._contacts_lock:
|
||||
return _pubkey_prefix_to_node_id(self._contacts, pubkey_prefix)
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Lifecycle
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def close(self) -> None:
|
||||
"""Signal the background event loop to stop and wait for the thread.
|
||||
|
||||
Safe to call multiple times and from any thread.
|
||||
"""
|
||||
self.isConnected = False
|
||||
loop = self._loop
|
||||
stop_event = self._stop_event
|
||||
if loop is not None and not loop.is_closed():
|
||||
try:
|
||||
if stop_event is not None:
|
||||
loop.call_soon_threadsafe(stop_event.set)
|
||||
else:
|
||||
loop.call_soon_threadsafe(loop.stop)
|
||||
except RuntimeError:
|
||||
pass
|
||||
thread = self._thread
|
||||
if thread is not None and thread.is_alive():
|
||||
thread.join(timeout=5.0)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Handler logic helpers (module-level to keep _make_event_handlers lean)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _process_self_info(
|
||||
payload: dict, iface: _MeshcoreInterface, handlers: object
|
||||
) -> None:
|
||||
"""Apply a ``SELF_INFO`` payload: set host_node_id and upsert the host node.
|
||||
|
||||
Parameters:
|
||||
payload: Event payload dict containing at minimum ``public_key`` and
|
||||
optionally ``name``, ``adv_lat``, ``adv_lon``.
|
||||
iface: Active interface whose :attr:`host_node_id` will be updated.
|
||||
handlers: Module reference for :func:`~data.mesh_ingestor.handlers`
|
||||
functions (passed to avoid circular-import issues).
|
||||
"""
|
||||
pub_key = payload.get("public_key", "")
|
||||
node_id = _meshcore_node_id(pub_key)
|
||||
if node_id:
|
||||
iface.host_node_id = node_id
|
||||
handlers.register_host_node_id(node_id)
|
||||
handlers.upsert_node(node_id, _self_info_to_node_dict(payload))
|
||||
handlers._mark_packet_seen()
|
||||
config._debug_log(
|
||||
"MeshCore self-info received",
|
||||
context="meshcore.self_info",
|
||||
node_id=node_id,
|
||||
name=payload.get("name"),
|
||||
)
|
||||
|
||||
|
||||
def _process_contacts(
|
||||
contacts: dict, iface: _MeshcoreInterface, handlers: object
|
||||
) -> None:
|
||||
"""Apply a bulk ``CONTACTS`` payload: update the local snapshot and upsert nodes.
|
||||
|
||||
Parameters:
|
||||
contacts: Mapping of full ``public_key`` hex strings to contact dicts.
|
||||
iface: Active interface whose contact snapshot will be updated.
|
||||
handlers: Module reference for :func:`~data.mesh_ingestor.handlers`.
|
||||
"""
|
||||
for pub_key, contact in contacts.items():
|
||||
node_id = _meshcore_node_id(pub_key)
|
||||
if node_id is None:
|
||||
continue
|
||||
iface._update_contact(contact)
|
||||
handlers.upsert_node(node_id, _contact_to_node_dict(contact))
|
||||
handlers._mark_packet_seen()
|
||||
|
||||
|
||||
def _process_contact_update(
|
||||
contact: dict, iface: _MeshcoreInterface, handlers: object
|
||||
) -> None:
|
||||
"""Apply a single ``NEW_CONTACT`` or ``NEXT_CONTACT`` event.
|
||||
|
||||
Parameters:
|
||||
contact: Contact dict containing at minimum ``public_key``.
|
||||
iface: Active interface whose contact snapshot will be updated.
|
||||
handlers: Module reference for :func:`~data.mesh_ingestor.handlers`.
|
||||
"""
|
||||
pub_key = contact.get("public_key", "")
|
||||
node_id = _meshcore_node_id(pub_key)
|
||||
if node_id is None:
|
||||
return
|
||||
iface._update_contact(contact)
|
||||
handlers.upsert_node(node_id, _contact_to_node_dict(contact))
|
||||
handlers._mark_packet_seen()
|
||||
config._debug_log(
|
||||
"MeshCore contact updated",
|
||||
context="meshcore.contact",
|
||||
node_id=node_id,
|
||||
name=contact.get("adv_name"),
|
||||
)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Async event handlers
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _make_event_handlers(iface: _MeshcoreInterface, target: str | None) -> dict:
|
||||
"""Build async callbacks for each relevant MeshCore event type.
|
||||
|
||||
All callbacks are closures over *iface* and *target* so they can update
|
||||
connection state and forward data to the ingest queue without global state.
|
||||
|
||||
Parameters:
|
||||
iface: The active :class:`_MeshcoreInterface` instance.
|
||||
target: Human-readable connection target for log messages.
|
||||
|
||||
Returns:
|
||||
Mapping of ``EventType`` member name → async callback coroutine.
|
||||
"""
|
||||
# Deferred import to avoid a circular dependency: meshcore.py is imported by
|
||||
# providers/__init__.py which is imported by the top-level mesh_ingestor
|
||||
# package, while handlers.py imports from that same package.
|
||||
from .. import handlers as _handlers
|
||||
|
||||
async def on_self_info(evt) -> None:
|
||||
_process_self_info(evt.payload or {}, iface, _handlers)
|
||||
|
||||
async def on_contacts(evt) -> None:
|
||||
_process_contacts(evt.payload or {}, iface, _handlers)
|
||||
|
||||
async def on_contact_update(evt) -> None:
|
||||
_process_contact_update(evt.payload or {}, iface, _handlers)
|
||||
|
||||
async def on_channel_msg(evt) -> None:
|
||||
payload = evt.payload or {}
|
||||
sender_ts = payload.get("sender_timestamp")
|
||||
text = payload.get("text")
|
||||
if sender_ts is None or not text:
|
||||
return
|
||||
|
||||
rx_time = int(time.time())
|
||||
channel_idx = payload.get("channel_idx", 0)
|
||||
|
||||
packet = {
|
||||
"id": _derive_message_id(sender_ts, f"c{channel_idx}", text),
|
||||
"rxTime": rx_time,
|
||||
"rx_time": rx_time,
|
||||
"from_id": None,
|
||||
"to_id": "^all",
|
||||
"channel": channel_idx,
|
||||
"snr": payload.get("SNR"),
|
||||
"rssi": payload.get("RSSI"),
|
||||
"protocol": "meshcore",
|
||||
"decoded": {
|
||||
"portnum": "TEXT_MESSAGE_APP",
|
||||
"text": text,
|
||||
"channel": channel_idx,
|
||||
},
|
||||
}
|
||||
_handlers._mark_packet_seen()
|
||||
_handlers.store_packet_dict(packet)
|
||||
config._debug_log(
|
||||
"MeshCore channel message",
|
||||
context="meshcore.channel_msg",
|
||||
channel=channel_idx,
|
||||
)
|
||||
|
||||
async def on_contact_msg(evt) -> None:
|
||||
payload = evt.payload or {}
|
||||
sender_ts = payload.get("sender_timestamp")
|
||||
text = payload.get("text")
|
||||
if sender_ts is None or not text:
|
||||
return
|
||||
|
||||
rx_time = int(time.time())
|
||||
pubkey_prefix = payload.get("pubkey_prefix", "")
|
||||
from_id = iface.lookup_node_id(pubkey_prefix)
|
||||
|
||||
packet = {
|
||||
"id": _derive_message_id(sender_ts, pubkey_prefix or "", text),
|
||||
"rxTime": rx_time,
|
||||
"rx_time": rx_time,
|
||||
"from_id": from_id,
|
||||
"to_id": iface.host_node_id,
|
||||
"channel": 0,
|
||||
"snr": payload.get("SNR"),
|
||||
"protocol": "meshcore",
|
||||
"decoded": {
|
||||
"portnum": "TEXT_MESSAGE_APP",
|
||||
"text": text,
|
||||
"channel": 0,
|
||||
},
|
||||
}
|
||||
_handlers._mark_packet_seen()
|
||||
_handlers.store_packet_dict(packet)
|
||||
|
||||
async def on_disconnected(evt) -> None:
|
||||
iface.isConnected = False
|
||||
config._debug_log(
|
||||
"MeshCore node disconnected",
|
||||
context="meshcore.disconnect",
|
||||
target=target or "unknown",
|
||||
severity="warning",
|
||||
always=True,
|
||||
)
|
||||
|
||||
return {
|
||||
"SELF_INFO": on_self_info,
|
||||
"CONTACTS": on_contacts,
|
||||
"NEW_CONTACT": on_contact_update,
|
||||
"NEXT_CONTACT": on_contact_update,
|
||||
"CHANNEL_MSG_RECV": on_channel_msg,
|
||||
"CONTACT_MSG_RECV": on_contact_msg,
|
||||
"DISCONNECTED": on_disconnected,
|
||||
}
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Asyncio entry point (runs inside background thread)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _make_connection(target: str, baudrate: int) -> object:
|
||||
"""Create the appropriate MeshCore connection object for *target*.
|
||||
|
||||
Routes to the correct ``meshcore`` connection class based on the target
|
||||
string format:
|
||||
|
||||
* BLE MAC / UUID → :class:`meshcore.BLEConnection`
|
||||
* ``host:port`` / ``[ipv6]:port`` → :class:`meshcore.TCPConnection`
|
||||
* anything else → :class:`meshcore.SerialConnection`
|
||||
|
||||
Parameters:
|
||||
target: Resolved, non-empty connection target.
|
||||
baudrate: Baud rate for serial connections (ignored for BLE/TCP).
|
||||
|
||||
Returns:
|
||||
An unconnected ``meshcore`` connection object.
|
||||
"""
|
||||
from meshcore import BLEConnection, SerialConnection, TCPConnection
|
||||
|
||||
ble_addr = parse_ble_target(target)
|
||||
if ble_addr:
|
||||
return BLEConnection(address=ble_addr)
|
||||
|
||||
tcp_target = parse_tcp_target(target)
|
||||
if tcp_target:
|
||||
host, port = tcp_target
|
||||
return TCPConnection(host, port)
|
||||
|
||||
return SerialConnection(target, baudrate)
|
||||
|
||||
|
||||
async def _run_meshcore(
|
||||
iface: _MeshcoreInterface,
|
||||
target: str,
|
||||
connected_event: threading.Event,
|
||||
error_holder: list,
|
||||
) -> None:
|
||||
"""Connect to a MeshCore node and keep the event loop running until closed.
|
||||
|
||||
This coroutine is the single entry point for the background asyncio thread.
|
||||
It connects the MeshCore library, registers event handlers, fetches the
|
||||
initial contact list, starts auto-message polling, and then waits for the
|
||||
:attr:`_MeshcoreInterface._stop_event` to be set.
|
||||
|
||||
Parameters:
|
||||
iface: Shared interface object for state and contact tracking.
|
||||
target: Resolved, non-empty connection target (serial, BLE, or TCP).
|
||||
connected_event: Threading event signalled when the connection
|
||||
succeeds or fails, to unblock the calling ``connect()`` method.
|
||||
error_holder: Single-element list; set to the raised exception when
|
||||
the connection attempt fails so the caller can re-raise it.
|
||||
"""
|
||||
from meshcore import EventType, MeshCore
|
||||
|
||||
mc: MeshCore | None = None
|
||||
try:
|
||||
cx = _make_connection(target, _DEFAULT_BAUDRATE)
|
||||
mc = MeshCore(cx)
|
||||
iface._mc = mc
|
||||
|
||||
handlers_map = _make_event_handlers(iface, target)
|
||||
for event_name, callback in handlers_map.items():
|
||||
mc.subscribe(EventType[event_name], callback)
|
||||
|
||||
_handled_types = frozenset(EventType[n] for n in handlers_map)
|
||||
# Bookkeeping events that require no action and should not be logged.
|
||||
_silent_types = frozenset(
|
||||
{
|
||||
EventType.CONNECTED,
|
||||
EventType.ACK,
|
||||
EventType.OK,
|
||||
EventType.ERROR,
|
||||
EventType.NO_MORE_MSGS,
|
||||
EventType.MESSAGES_WAITING,
|
||||
EventType.MSG_SENT,
|
||||
EventType.CURRENT_TIME,
|
||||
}
|
||||
)
|
||||
|
||||
async def _on_unhandled(evt) -> None:
|
||||
if evt.type in _handled_types or evt.type in _silent_types:
|
||||
return
|
||||
_record_meshcore_message(
|
||||
evt.payload,
|
||||
source=f"{target or 'auto'}:{evt.type.name}",
|
||||
)
|
||||
|
||||
mc.subscribe(None, _on_unhandled)
|
||||
|
||||
result = await mc.connect()
|
||||
if result is None:
|
||||
raise ConnectionError(
|
||||
f"MeshCore node at {target!r} did not respond to the appstart "
|
||||
"handshake. Ensure the device is running MeshCore companion-mode "
|
||||
"firmware."
|
||||
)
|
||||
|
||||
iface.isConnected = True
|
||||
connected_event.set()
|
||||
|
||||
try:
|
||||
await mc.ensure_contacts()
|
||||
except Exception as exc:
|
||||
config._debug_log(
|
||||
"Failed to fetch initial contacts",
|
||||
context="meshcore.contacts",
|
||||
severity="warning",
|
||||
always=True,
|
||||
error=str(exc),
|
||||
)
|
||||
|
||||
await mc.start_auto_message_fetching()
|
||||
|
||||
stop_event = asyncio.Event()
|
||||
iface._stop_event = stop_event
|
||||
await stop_event.wait()
|
||||
|
||||
except Exception as exc:
|
||||
if not connected_event.is_set():
|
||||
error_holder[0] = exc
|
||||
connected_event.set()
|
||||
finally:
|
||||
if mc is not None:
|
||||
try:
|
||||
await mc.disconnect()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Provider
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class MeshcoreProvider:
|
||||
"""MeshCore ingestion provider.
|
||||
|
||||
Connects to a MeshCore node via serial port, BLE, or TCP/IP. The
|
||||
connection type is inferred from the target string; see :meth:`connect`
|
||||
for routing rules.
|
||||
|
||||
The provider runs MeshCore's ``asyncio`` event loop in a background daemon
|
||||
thread. Incoming ``SELF_INFO``, ``CONTACTS``, ``NEW_CONTACT``,
|
||||
``CHANNEL_MSG_RECV``, and ``CONTACT_MSG_RECV`` events are forwarded to the
|
||||
HTTP ingest queue via the shared handler functions.
|
||||
"""
|
||||
|
||||
name = "meshcore"
|
||||
|
||||
def subscribe(self) -> list[str]:
|
||||
"""Return subscribed topic names.
|
||||
|
||||
MeshCore uses an ``asyncio`` event system rather than a pubsub bus,
|
||||
so there are no topics to register at startup.
|
||||
"""
|
||||
return []
|
||||
|
||||
def connect(
|
||||
self, *, active_candidate: str | None
|
||||
) -> tuple[object, str | None, str | None]:
|
||||
"""Connect to a MeshCore node via serial, BLE, or TCP.
|
||||
|
||||
Starts an asyncio event loop in a background daemon thread, performs
|
||||
the MeshCore companion-protocol handshake, and blocks until the node's
|
||||
self-info is received or the timeout expires.
|
||||
|
||||
Connection type is inferred from *active_candidate* (or
|
||||
:data:`~data.mesh_ingestor.config.CONNECTION`):
|
||||
|
||||
* BLE MAC / UUID → :class:`meshcore.BLEConnection`
|
||||
* ``host:port`` → :class:`meshcore.TCPConnection`
|
||||
* serial path → :class:`meshcore.SerialConnection`
|
||||
* ``None`` / empty → first candidate from
|
||||
:func:`~data.mesh_ingestor.connection.default_serial_targets`
|
||||
|
||||
Parameters:
|
||||
active_candidate: Previously resolved connection target, or
|
||||
``None`` to fall back to
|
||||
:data:`~data.mesh_ingestor.config.CONNECTION`.
|
||||
|
||||
Returns:
|
||||
``(iface, resolved_target, next_active_candidate)`` matching the
|
||||
:class:`~data.mesh_ingestor.provider.Provider` contract.
|
||||
|
||||
Raises:
|
||||
ConnectionError: When the node does not complete the handshake
|
||||
within :data:`_CONNECT_TIMEOUT_SECS` seconds.
|
||||
"""
|
||||
target: str | None = active_candidate or config.CONNECTION
|
||||
|
||||
if not target:
|
||||
candidates = default_serial_targets()
|
||||
target = candidates[0] if candidates else "/dev/ttyACM0"
|
||||
|
||||
config._debug_log(
|
||||
"Connecting to MeshCore node",
|
||||
context="meshcore.connect",
|
||||
target=target,
|
||||
)
|
||||
|
||||
iface = _MeshcoreInterface(target=target)
|
||||
connected_event = threading.Event()
|
||||
error_holder: list = [None]
|
||||
|
||||
def _run_loop() -> None:
|
||||
loop = asyncio.new_event_loop()
|
||||
asyncio.set_event_loop(loop)
|
||||
iface._loop = loop
|
||||
try:
|
||||
loop.run_until_complete(
|
||||
_run_meshcore(iface, target, connected_event, error_holder)
|
||||
)
|
||||
finally:
|
||||
loop.close()
|
||||
|
||||
thread = threading.Thread(target=_run_loop, name="meshcore-loop", daemon=True)
|
||||
iface._thread = thread
|
||||
thread.start()
|
||||
|
||||
if not connected_event.wait(timeout=_CONNECT_TIMEOUT_SECS):
|
||||
iface.close()
|
||||
raise ConnectionError(
|
||||
f"Timed out waiting for MeshCore node at {target!r} "
|
||||
f"after {_CONNECT_TIMEOUT_SECS:g}s."
|
||||
)
|
||||
|
||||
if error_holder[0] is not None:
|
||||
iface.close()
|
||||
raise error_holder[0]
|
||||
|
||||
return iface, target, target
|
||||
|
||||
def extract_host_node_id(self, iface: object) -> str | None:
|
||||
"""Return the canonical ``!xxxxxxxx`` host node ID from the interface.
|
||||
|
||||
Parameters:
|
||||
iface: Active :class:`_MeshcoreInterface` returned by
|
||||
:meth:`connect`.
|
||||
"""
|
||||
return getattr(iface, "host_node_id", None)
|
||||
|
||||
def node_snapshot_items(self, iface: object) -> list[tuple[str, dict]]:
|
||||
"""Return a snapshot of all known MeshCore contacts as node entries.
|
||||
|
||||
Parameters:
|
||||
iface: Active :class:`_MeshcoreInterface` instance. Any other
|
||||
object type causes an empty list to be returned.
|
||||
|
||||
Returns:
|
||||
List of ``(canonical_node_id, node_dict)`` pairs suitable for
|
||||
passing to :func:`~data.mesh_ingestor.handlers.upsert_node`.
|
||||
"""
|
||||
if not isinstance(iface, _MeshcoreInterface):
|
||||
return []
|
||||
return iface.contacts_snapshot()
|
||||
|
||||
|
||||
__all__ = ["MeshcoreProvider"]
|
||||
@@ -0,0 +1,100 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Meshtastic provider implementation."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from pubsub import pub
|
||||
|
||||
from .. import config, daemon as _daemon, handlers, interfaces
|
||||
from ..utils import _retry_dict_snapshot
|
||||
|
||||
|
||||
class MeshtasticProvider:
|
||||
"""Meshtastic ingestion provider (current default)."""
|
||||
|
||||
name = "meshtastic"
|
||||
|
||||
def __init__(self):
|
||||
self._subscribed: list[str] = []
|
||||
|
||||
def subscribe(self) -> list[str]:
|
||||
"""Subscribe Meshtastic pubsub receive topics."""
|
||||
|
||||
if self._subscribed:
|
||||
return list(self._subscribed)
|
||||
|
||||
subscribed = []
|
||||
for topic in _daemon._RECEIVE_TOPICS:
|
||||
try:
|
||||
pub.subscribe(handlers.on_receive, topic)
|
||||
subscribed.append(topic)
|
||||
except Exception as exc: # pragma: no cover
|
||||
config._debug_log(f"failed to subscribe to {topic!r}: {exc}")
|
||||
self._subscribed = subscribed
|
||||
return list(subscribed)
|
||||
|
||||
def connect(
|
||||
self, *, active_candidate: str | None
|
||||
) -> tuple[object, str | None, str | None]:
|
||||
"""Create a Meshtastic interface using the existing interface helpers."""
|
||||
|
||||
iface = None
|
||||
resolved_target = None
|
||||
next_candidate = active_candidate
|
||||
|
||||
if active_candidate:
|
||||
iface, resolved_target = interfaces._create_serial_interface(
|
||||
active_candidate
|
||||
)
|
||||
else:
|
||||
iface, resolved_target = interfaces._create_default_interface()
|
||||
next_candidate = resolved_target
|
||||
|
||||
interfaces._ensure_radio_metadata(iface)
|
||||
interfaces._ensure_channel_metadata(iface)
|
||||
|
||||
return iface, resolved_target, next_candidate
|
||||
|
||||
def extract_host_node_id(self, iface: object) -> str | None:
|
||||
return interfaces._extract_host_node_id(iface)
|
||||
|
||||
def node_snapshot_items(self, iface: object) -> list[tuple[str, object]]:
|
||||
"""Return a stable snapshot of all known nodes from ``iface``.
|
||||
|
||||
Uses :func:`~data.mesh_ingestor.utils._retry_dict_snapshot` to
|
||||
tolerate concurrent modifications from the Meshtastic background
|
||||
thread.
|
||||
|
||||
Parameters:
|
||||
iface: Live Meshtastic interface whose ``nodes`` dict to snapshot.
|
||||
|
||||
Returns:
|
||||
List of ``(node_id, node_dict)`` tuples, or an empty list when
|
||||
the snapshot fails after retries.
|
||||
"""
|
||||
|
||||
nodes = getattr(iface, "nodes", {}) or {}
|
||||
result = _retry_dict_snapshot(lambda: list(nodes.items()))
|
||||
if result is None:
|
||||
config._debug_log(
|
||||
"Skipping node snapshot due to concurrent modification",
|
||||
context="meshtastic.snapshot",
|
||||
)
|
||||
return []
|
||||
return result
|
||||
|
||||
|
||||
__all__ = ["MeshtasticProvider"]
|
||||
@@ -172,6 +172,10 @@ def _enqueue_post_json(
|
||||
|
||||
with state.lock:
|
||||
counter = next(state.counter)
|
||||
# Heap tuple: (priority, counter, path, payload). Lower priority
|
||||
# values are dequeued first (min-heap semantics). The monotonically
|
||||
# increasing counter breaks ties so equal-priority items are processed
|
||||
# in FIFO order without comparing the non-orderable payload dict.
|
||||
heapq.heappush(state.queue, (priority, counter, path, payload))
|
||||
|
||||
|
||||
|
||||
@@ -33,6 +33,9 @@ from google.protobuf.json_format import MessageToDict
|
||||
from google.protobuf.message import DecodeError
|
||||
from google.protobuf.message import Message as ProtoMessage
|
||||
|
||||
from .node_identity import canonical_node_id as _canonical_node_id
|
||||
from .node_identity import node_num_from_id as _node_num_from_id
|
||||
|
||||
_CLI_ROLE_MODULE_NAMES: tuple[str, ...] = (
|
||||
"meshtastic.cli.common",
|
||||
"meshtastic.cli.roles",
|
||||
@@ -125,6 +128,10 @@ def _load_cli_role_lookup() -> dict[int, str]:
|
||||
mapping[key_int] = str(value)
|
||||
return mapping
|
||||
|
||||
# Iterate through candidate module paths in preference order. The CLI
|
||||
# package ships several role-enum locations across versions; we stop at
|
||||
# the first module that yields a non-empty mapping so we do not silently
|
||||
# merge partial enums from two different meshtastic-cli releases.
|
||||
for module_name in _CLI_ROLE_MODULE_NAMES:
|
||||
try:
|
||||
module = importlib.import_module(module_name)
|
||||
@@ -429,91 +436,6 @@ def _pkt_to_dict(packet) -> dict:
|
||||
return {"_unparsed": str(packet)}
|
||||
|
||||
|
||||
def _canonical_node_id(value) -> str | None:
|
||||
"""Convert node identifiers into the canonical ``!xxxxxxxx`` format.
|
||||
|
||||
Parameters:
|
||||
value: Input identifier which may be an int, float or string.
|
||||
|
||||
Returns:
|
||||
The canonical identifier or ``None`` if conversion fails.
|
||||
"""
|
||||
|
||||
if value is None:
|
||||
return None
|
||||
if isinstance(value, (int, float)):
|
||||
try:
|
||||
num = int(value)
|
||||
except (TypeError, ValueError):
|
||||
return None
|
||||
if num < 0:
|
||||
return None
|
||||
return f"!{num & 0xFFFFFFFF:08x}"
|
||||
if not isinstance(value, str):
|
||||
return None
|
||||
|
||||
trimmed = value.strip()
|
||||
if not trimmed:
|
||||
return None
|
||||
if trimmed.startswith("^"):
|
||||
return trimmed
|
||||
if trimmed.startswith("!"):
|
||||
body = trimmed[1:]
|
||||
elif trimmed.lower().startswith("0x"):
|
||||
body = trimmed[2:]
|
||||
elif trimmed.isdigit():
|
||||
try:
|
||||
return f"!{int(trimmed, 10) & 0xFFFFFFFF:08x}"
|
||||
except ValueError:
|
||||
return None
|
||||
else:
|
||||
body = trimmed
|
||||
|
||||
if not body:
|
||||
return None
|
||||
try:
|
||||
return f"!{int(body, 16) & 0xFFFFFFFF:08x}"
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
|
||||
def _node_num_from_id(node_id) -> int | None:
|
||||
"""Extract the numeric node ID from a canonical identifier.
|
||||
|
||||
Parameters:
|
||||
node_id: Identifier value accepted by :func:`_canonical_node_id`.
|
||||
|
||||
Returns:
|
||||
The numeric node ID or ``None`` when parsing fails.
|
||||
"""
|
||||
|
||||
if node_id is None:
|
||||
return None
|
||||
if isinstance(node_id, (int, float)):
|
||||
try:
|
||||
num = int(node_id)
|
||||
except (TypeError, ValueError):
|
||||
return None
|
||||
return num if num >= 0 else None
|
||||
if not isinstance(node_id, str):
|
||||
return None
|
||||
|
||||
trimmed = node_id.strip()
|
||||
if not trimmed:
|
||||
return None
|
||||
if trimmed.startswith("!"):
|
||||
trimmed = trimmed[1:]
|
||||
if trimmed.lower().startswith("0x"):
|
||||
trimmed = trimmed[2:]
|
||||
try:
|
||||
return int(trimmed, 16)
|
||||
except ValueError:
|
||||
try:
|
||||
return int(trimmed, 10)
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
|
||||
def _merge_mappings(base, extra):
|
||||
"""Merge two mapping-like objects recursively.
|
||||
|
||||
|
||||
@@ -0,0 +1,56 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Shared utility helpers for the mesh ingestor package."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import time
|
||||
from typing import Callable, TypeVar
|
||||
|
||||
_T = TypeVar("_T")
|
||||
|
||||
|
||||
def _retry_dict_snapshot(fn: Callable[[], _T], retries: int = 3) -> _T | None:
|
||||
"""Call ``fn()`` retrying on concurrent dictionary-modification errors.
|
||||
|
||||
Meshtastic's node dictionary is updated on a background thread. Iterating
|
||||
it can raise a :class:`RuntimeError` with the message "dictionary changed
|
||||
size during iteration". This helper retries the call up to ``retries``
|
||||
times, yielding the thread scheduler between attempts via :func:`time.sleep`.
|
||||
|
||||
Parameters:
|
||||
fn: Zero-argument callable that performs the iteration.
|
||||
retries: Maximum number of attempts before giving up.
|
||||
|
||||
Returns:
|
||||
The return value of ``fn`` on success, or ``None`` when all retries are
|
||||
exhausted.
|
||||
"""
|
||||
|
||||
for _ in range(max(1, retries)):
|
||||
try:
|
||||
return fn()
|
||||
except RuntimeError as err:
|
||||
# Only retry the specific concurrent-modification error; re-raise
|
||||
# anything else so genuine bugs surface immediately.
|
||||
if "dictionary changed size during iteration" not in str(err):
|
||||
raise
|
||||
# Yield to the thread scheduler to let the mutating thread complete
|
||||
# before we attempt the snapshot again.
|
||||
time.sleep(0)
|
||||
return None
|
||||
|
||||
|
||||
__all__ = ["_retry_dict_snapshot"]
|
||||
+3
-1
@@ -29,7 +29,9 @@ CREATE TABLE IF NOT EXISTS messages (
|
||||
modem_preset TEXT,
|
||||
channel_name TEXT,
|
||||
reply_id INTEGER,
|
||||
emoji TEXT
|
||||
emoji TEXT,
|
||||
ingestor TEXT,
|
||||
protocol TEXT NOT NULL DEFAULT 'meshtastic'
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time);
|
||||
|
||||
@@ -0,0 +1,39 @@
|
||||
-- Copyright © 2025-26 l5yth & contributors
|
||||
--
|
||||
-- Licensed under the Apache License, Version 2.0 (the "License");
|
||||
-- you may not use this file except in compliance with the License.
|
||||
-- You may obtain a copy of the License at
|
||||
--
|
||||
-- http://www.apache.org/licenses/LICENSE-2.0
|
||||
--
|
||||
-- Unless required by applicable law or agreed to in writing, software
|
||||
-- distributed under the License is distributed on an "AS IS" BASIS,
|
||||
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
-- See the License for the specific language governing permissions and
|
||||
-- limitations under the License.
|
||||
|
||||
-- Add a protocol column to every entity and event table so records from
|
||||
-- different mesh backends (meshtastic, meshcore, reticulum, …) can co-exist
|
||||
-- in the same database and be queried independently.
|
||||
--
|
||||
-- Existing rows default to 'meshtastic' for backward compatibility.
|
||||
|
||||
BEGIN;
|
||||
ALTER TABLE ingestors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
|
||||
ALTER TABLE nodes ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
|
||||
ALTER TABLE messages ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
|
||||
ALTER TABLE positions ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
|
||||
ALTER TABLE telemetry ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
|
||||
ALTER TABLE traces ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
|
||||
ALTER TABLE neighbors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
|
||||
|
||||
-- Indices to support ?protocol= filtering on every entity endpoint without
|
||||
-- full table scans as multi-protocol traffic grows.
|
||||
CREATE INDEX IF NOT EXISTS idx_ingestors_protocol ON ingestors(protocol);
|
||||
CREATE INDEX IF NOT EXISTS idx_nodes_protocol ON nodes(protocol);
|
||||
CREATE INDEX IF NOT EXISTS idx_messages_protocol ON messages(protocol);
|
||||
CREATE INDEX IF NOT EXISTS idx_positions_protocol ON positions(protocol);
|
||||
CREATE INDEX IF NOT EXISTS idx_telemetry_protocol ON telemetry(protocol);
|
||||
CREATE INDEX IF NOT EXISTS idx_traces_protocol ON traces(protocol);
|
||||
CREATE INDEX IF NOT EXISTS idx_neighbors_protocol ON neighbors(protocol);
|
||||
COMMIT;
|
||||
@@ -0,0 +1,47 @@
|
||||
-- Copyright © 2025-26 l5yth & contributors
|
||||
--
|
||||
-- Licensed under the Apache License, Version 2.0 (the "License");
|
||||
-- you may not use this file except in compliance with the License.
|
||||
-- You may obtain a copy of the License at
|
||||
--
|
||||
-- http://www.apache.org/licenses/LICENSE-2.0
|
||||
--
|
||||
-- Unless required by applicable law or agreed to in writing, software
|
||||
-- distributed under the License is distributed on an "AS IS" BASIS,
|
||||
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
-- See the License for the specific language governing permissions and
|
||||
-- limitations under the License.
|
||||
|
||||
-- Add telemetry subtype discriminator to enable per-chart type filtering.
|
||||
-- Backfills existing rows using field-presence heuristics that mirror
|
||||
-- classifySnapshot() in node-page.js, so historical data is classified
|
||||
-- consistently regardless of whether the new ingestors are deployed yet.
|
||||
|
||||
BEGIN;
|
||||
ALTER TABLE telemetry ADD COLUMN telemetry_type TEXT;
|
||||
|
||||
-- Device metrics: battery/channel fields are exclusive to device_metrics
|
||||
UPDATE telemetry SET telemetry_type = 'device'
|
||||
WHERE telemetry_type IS NULL
|
||||
AND (battery_level IS NOT NULL OR channel_utilization IS NOT NULL
|
||||
OR air_util_tx IS NOT NULL OR uptime_seconds IS NOT NULL);
|
||||
|
||||
-- Power sensor: current is the unambiguous power-sensor discriminator.
|
||||
-- voltage is intentionally excluded here: device_metrics also stores a voltage
|
||||
-- reading (~4.2 V for battery), so using voltage alone would misclassify device
|
||||
-- rows whose four device-discriminator fields (battery_level, channel_utilization,
|
||||
-- air_util_tx, uptime_seconds) happen to be NULL. Rows that have only voltage
|
||||
-- and no other classifiable fields are left as NULL (unclassified), which is
|
||||
-- more accurate than a wrong classification.
|
||||
UPDATE telemetry SET telemetry_type = 'power'
|
||||
WHERE telemetry_type IS NULL
|
||||
AND current IS NOT NULL;
|
||||
|
||||
-- Environment: temperature/humidity/pressure
|
||||
UPDATE telemetry SET telemetry_type = 'environment'
|
||||
WHERE telemetry_type IS NULL
|
||||
AND (temperature IS NOT NULL OR relative_humidity IS NOT NULL
|
||||
OR barometric_pressure IS NOT NULL OR iaq IS NOT NULL
|
||||
OR gas_resistance IS NOT NULL);
|
||||
|
||||
COMMIT;
|
||||
@@ -17,6 +17,8 @@ CREATE TABLE IF NOT EXISTS neighbors (
|
||||
neighbor_id TEXT NOT NULL,
|
||||
snr REAL,
|
||||
rx_time INTEGER NOT NULL,
|
||||
ingestor TEXT,
|
||||
protocol TEXT NOT NULL DEFAULT 'meshtastic',
|
||||
PRIMARY KEY (node_id, neighbor_id),
|
||||
FOREIGN KEY (node_id) REFERENCES nodes(node_id) ON DELETE CASCADE,
|
||||
FOREIGN KEY (neighbor_id) REFERENCES nodes(node_id) ON DELETE CASCADE
|
||||
|
||||
+2
-1
@@ -41,7 +41,8 @@ CREATE TABLE IF NOT EXISTS nodes (
|
||||
longitude REAL,
|
||||
altitude REAL,
|
||||
lora_freq INTEGER,
|
||||
modem_preset TEXT
|
||||
modem_preset TEXT,
|
||||
protocol TEXT NOT NULL DEFAULT 'meshtastic'
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_nodes_last_heard ON nodes(last_heard);
|
||||
|
||||
+3
-1
@@ -33,7 +33,9 @@ CREATE TABLE IF NOT EXISTS positions (
|
||||
rssi INTEGER,
|
||||
hop_limit INTEGER,
|
||||
bitfield INTEGER,
|
||||
payload_b64 TEXT
|
||||
payload_b64 TEXT,
|
||||
ingestor TEXT,
|
||||
protocol TEXT NOT NULL DEFAULT 'meshtastic'
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_positions_rx_time ON positions(rx_time);
|
||||
|
||||
@@ -1,5 +1,7 @@
|
||||
# Production dependencies
|
||||
meshtastic>=2.5.0
|
||||
meshcore>=2.3.5
|
||||
bleak>=0.21.0
|
||||
protobuf>=5.27.2
|
||||
|
||||
# Development dependencies (optional)
|
||||
|
||||
+4
-1
@@ -53,7 +53,10 @@ CREATE TABLE IF NOT EXISTS telemetry (
|
||||
rainfall_1h REAL,
|
||||
rainfall_24h REAL,
|
||||
soil_moisture INTEGER,
|
||||
soil_temperature REAL
|
||||
soil_temperature REAL,
|
||||
ingestor TEXT,
|
||||
protocol TEXT NOT NULL DEFAULT 'meshtastic',
|
||||
telemetry_type TEXT
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_telemetry_rx_time ON telemetry(rx_time);
|
||||
|
||||
+3
-1
@@ -21,7 +21,9 @@ CREATE TABLE IF NOT EXISTS traces (
|
||||
rx_iso TEXT NOT NULL,
|
||||
rssi INTEGER,
|
||||
snr REAL,
|
||||
elapsed_ms INTEGER
|
||||
elapsed_ms INTEGER,
|
||||
ingestor TEXT,
|
||||
protocol TEXT NOT NULL DEFAULT 'meshtastic'
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS trace_hops (
|
||||
|
||||
+10
-1
@@ -81,7 +81,12 @@ x-matrix-bridge-base: &matrix-bridge-base
|
||||
image: ghcr.io/l5yth/potato-mesh-matrix-bridge-${POTATOMESH_IMAGE_ARCH:-linux-amd64}:${POTATOMESH_IMAGE_TAG:-latest}
|
||||
volumes:
|
||||
- potatomesh_matrix_bridge_state:/app
|
||||
- ./matrix/Config.toml:/app/Config.toml:ro
|
||||
- type: bind
|
||||
source: ./matrix/Config.toml
|
||||
target: /app/Config.toml
|
||||
read_only: true
|
||||
bind:
|
||||
create_host_path: false
|
||||
restart: unless-stopped
|
||||
deploy:
|
||||
resources:
|
||||
@@ -128,6 +133,8 @@ services:
|
||||
matrix-bridge:
|
||||
<<: *matrix-bridge-base
|
||||
network_mode: host
|
||||
profiles:
|
||||
- matrix
|
||||
depends_on:
|
||||
- web
|
||||
extra_hosts:
|
||||
@@ -140,6 +147,8 @@ services:
|
||||
- potatomesh-network
|
||||
depends_on:
|
||||
- web-bridge
|
||||
ports:
|
||||
- "41448:41448"
|
||||
profiles:
|
||||
- bridge
|
||||
|
||||
|
||||
Generated
+221
-149
@@ -77,24 +77,84 @@ dependencies = [
|
||||
"serde_json",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "async-trait"
|
||||
version = "0.1.89"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "9035ad2d096bed7955a320ee7e2230574d28fd3c3a0f186cbea1ff3c7eed5dbb"
|
||||
dependencies = [
|
||||
"proc-macro2",
|
||||
"quote",
|
||||
"syn",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "atomic-waker"
|
||||
version = "1.1.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"
|
||||
|
||||
[[package]]
|
||||
name = "axum"
|
||||
version = "0.7.9"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "edca88bc138befd0323b20752846e6587272d3b03b0343c8ea28a6f819e6e71f"
|
||||
dependencies = [
|
||||
"async-trait",
|
||||
"axum-core",
|
||||
"bytes",
|
||||
"futures-util",
|
||||
"http",
|
||||
"http-body",
|
||||
"http-body-util",
|
||||
"hyper",
|
||||
"hyper-util",
|
||||
"itoa",
|
||||
"matchit",
|
||||
"memchr",
|
||||
"mime",
|
||||
"percent-encoding",
|
||||
"pin-project-lite",
|
||||
"rustversion",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"serde_path_to_error",
|
||||
"serde_urlencoded",
|
||||
"sync_wrapper",
|
||||
"tokio",
|
||||
"tower",
|
||||
"tower-layer",
|
||||
"tower-service",
|
||||
"tracing",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "axum-core"
|
||||
version = "0.4.5"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "09f2bd6146b97ae3359fa0cc6d6b376d9539582c7b4220f041a33ec24c226199"
|
||||
dependencies = [
|
||||
"async-trait",
|
||||
"bytes",
|
||||
"futures-util",
|
||||
"http",
|
||||
"http-body",
|
||||
"http-body-util",
|
||||
"mime",
|
||||
"pin-project-lite",
|
||||
"rustversion",
|
||||
"sync_wrapper",
|
||||
"tower-layer",
|
||||
"tower-service",
|
||||
"tracing",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "base64"
|
||||
version = "0.22.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
|
||||
|
||||
[[package]]
|
||||
name = "bitflags"
|
||||
version = "1.3.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"
|
||||
|
||||
[[package]]
|
||||
name = "bitflags"
|
||||
version = "2.10.0"
|
||||
@@ -103,21 +163,21 @@ checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
|
||||
|
||||
[[package]]
|
||||
name = "bumpalo"
|
||||
version = "3.19.0"
|
||||
version = "3.19.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "46c5e41b57b8bba42a04676d81cb89e9ee8e859a1a66f80a5a72e1cb76b34d43"
|
||||
checksum = "5dd9dc738b7a8311c7ade152424974d8115f2cdad61e8dab8dac9f2362298510"
|
||||
|
||||
[[package]]
|
||||
name = "bytes"
|
||||
version = "1.11.0"
|
||||
version = "1.11.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "b35204fbdc0b3f4446b89fc1ac2cf84a8a68971995d0bf2e925ec7cd960f9cb3"
|
||||
checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33"
|
||||
|
||||
[[package]]
|
||||
name = "cc"
|
||||
version = "1.2.47"
|
||||
version = "1.2.52"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "cd405d82c84ff7f35739f175f67d8b9fb7687a0e84ccdc78bd3568839827cf07"
|
||||
checksum = "cd4932aefd12402b36c60956a4fe0035421f544799057659ff86f923657aada3"
|
||||
dependencies = [
|
||||
"find-msvc-tools",
|
||||
"shlex",
|
||||
@@ -187,7 +247,7 @@ version = "3.0.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "fde0e0ec90c9dfb3b4b1a0891a7dcd0e2bffde2f7efed5fe7c9bb00e5bfb915e"
|
||||
dependencies = [
|
||||
"windows-sys 0.52.0",
|
||||
"windows-sys 0.59.0",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -250,9 +310,9 @@ checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
|
||||
|
||||
[[package]]
|
||||
name = "find-msvc-tools"
|
||||
version = "0.1.5"
|
||||
version = "0.1.7"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "3a3076410a55c90011c298b04d0cfa770b00fa04e1e3c97d3f6c9de105a03844"
|
||||
checksum = "f449e6c6c08c865631d4890cfacf252b3d396c9bcc83adb6623cdb02a8336c41"
|
||||
|
||||
[[package]]
|
||||
name = "fnv"
|
||||
@@ -284,21 +344,6 @@ dependencies = [
|
||||
"percent-encoding",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "futures"
|
||||
version = "0.3.31"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "65bc07b1a8bc7c85c5f2e110c476c7389b4554ba72af57d8445ea63a576b0876"
|
||||
dependencies = [
|
||||
"futures-channel",
|
||||
"futures-core",
|
||||
"futures-executor",
|
||||
"futures-io",
|
||||
"futures-sink",
|
||||
"futures-task",
|
||||
"futures-util",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "futures-channel"
|
||||
version = "0.3.31"
|
||||
@@ -306,7 +351,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "2dff15bf788c671c1934e366d07e30c1814a8ef514e1af724a602e8a2fbe1b10"
|
||||
dependencies = [
|
||||
"futures-core",
|
||||
"futures-sink",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -326,12 +370,6 @@ dependencies = [
|
||||
"futures-util",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "futures-io"
|
||||
version = "0.3.31"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "9e5c1b78ca4aae1ac06c48a526a655760685149f0d465d21f37abfe57ce075c6"
|
||||
|
||||
[[package]]
|
||||
name = "futures-sink"
|
||||
version = "0.3.31"
|
||||
@@ -350,12 +388,8 @@ version = "0.3.31"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81"
|
||||
dependencies = [
|
||||
"futures-channel",
|
||||
"futures-core",
|
||||
"futures-io",
|
||||
"futures-sink",
|
||||
"futures-task",
|
||||
"memchr",
|
||||
"pin-project-lite",
|
||||
"pin-utils",
|
||||
"slab",
|
||||
@@ -390,9 +424,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "h2"
|
||||
version = "0.4.12"
|
||||
version = "0.4.13"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f3c0b69cfcb4e1b9f1bf2f53f95f766e4661169728ec61cd3fe5a0166f2d1386"
|
||||
checksum = "2f44da3a8150a6703ed5d34e164b875fd14c2cdab9af1252a9a1020bde2bdc54"
|
||||
dependencies = [
|
||||
"atomic-waker",
|
||||
"bytes",
|
||||
@@ -522,9 +556,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "hyper-util"
|
||||
version = "0.1.18"
|
||||
version = "0.1.19"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "52e9a2a24dc5c6821e71a7030e1e14b7b632acac55c40e9d2e082c621261bb56"
|
||||
checksum = "727805d60e7938b76b826a6ef209eb70eaa1812794f9424d4a4e2d740662df5f"
|
||||
dependencies = [
|
||||
"base64",
|
||||
"bytes",
|
||||
@@ -594,9 +628,9 @@ checksum = "7aedcccd01fc5fe81e6b489c15b247b8b0690feb23304303a9e560f37efc560a"
|
||||
|
||||
[[package]]
|
||||
name = "icu_properties"
|
||||
version = "2.1.1"
|
||||
version = "2.1.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "e93fcd3157766c0c8da2f8cff6ce651a31f0810eaa1c51ec363ef790bbb5fb99"
|
||||
checksum = "020bfc02fe870ec3a66d93e677ccca0562506e5872c650f893269e08615d74ec"
|
||||
dependencies = [
|
||||
"icu_collections",
|
||||
"icu_locale_core",
|
||||
@@ -608,9 +642,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "icu_properties_data"
|
||||
version = "2.1.1"
|
||||
version = "2.1.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "02845b3647bb045f1100ecd6480ff52f34c35f82d9880e029d329c21d1054899"
|
||||
checksum = "616c294cf8d725c6afcd8f55abc17c56464ef6211f9ed59cccffe534129c77af"
|
||||
|
||||
[[package]]
|
||||
name = "icu_provider"
|
||||
@@ -650,9 +684,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "indexmap"
|
||||
version = "2.12.1"
|
||||
version = "2.13.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "0ad4bb2b565bca0645f4d68c5c9af97fba094e9791da685bf83cb5f3ce74acf2"
|
||||
checksum = "7714e70437a7dc3ac8eb7e6f8df75fd8eb422675fc7678aff7364301092b1017"
|
||||
dependencies = [
|
||||
"equivalent",
|
||||
"hashbrown",
|
||||
@@ -666,9 +700,9 @@ checksum = "469fb0b9cefa57e3ef31275ee7cacb78f2fdca44e4765491884a2b119d4eb130"
|
||||
|
||||
[[package]]
|
||||
name = "iri-string"
|
||||
version = "0.7.9"
|
||||
version = "0.7.10"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "4f867b9d1d896b67beb18518eda36fdb77a32ea590de864f1325b294a6d14397"
|
||||
checksum = "c91338f0783edbd6195decb37bae672fd3b165faffb89bf7b9e6942f8b1a731a"
|
||||
dependencies = [
|
||||
"memchr",
|
||||
"serde",
|
||||
@@ -682,15 +716,15 @@ checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695"
|
||||
|
||||
[[package]]
|
||||
name = "itoa"
|
||||
version = "1.0.15"
|
||||
version = "1.0.17"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c"
|
||||
checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2"
|
||||
|
||||
[[package]]
|
||||
name = "js-sys"
|
||||
version = "0.3.82"
|
||||
version = "0.3.83"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "b011eec8cc36da2aab2d5cff675ec18454fad408585853910a202391cf9f8e65"
|
||||
checksum = "464a3709c7f55f1f721e5389aa6ea4e3bc6aba669353300af094b29ffbdde1d8"
|
||||
dependencies = [
|
||||
"once_cell",
|
||||
"wasm-bindgen",
|
||||
@@ -704,9 +738,9 @@ checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
|
||||
|
||||
[[package]]
|
||||
name = "libc"
|
||||
version = "0.2.177"
|
||||
version = "0.2.180"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "2874a2af47a2325c2001a6e6fad9b16a53b802102b528163885171cf92b15976"
|
||||
checksum = "bcc35a38544a891a5f7c865aca548a982ccb3b8650a5b06d0fd33a10283c56fc"
|
||||
|
||||
[[package]]
|
||||
name = "linux-raw-sys"
|
||||
@@ -731,9 +765,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "log"
|
||||
version = "0.4.28"
|
||||
version = "0.4.29"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "34080505efa8e45a4b816c349525ebe327ceaa8559756f0356cba97ef3bf7432"
|
||||
checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897"
|
||||
|
||||
[[package]]
|
||||
name = "lru-slab"
|
||||
@@ -750,6 +784,12 @@ dependencies = [
|
||||
"regex-automata",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "matchit"
|
||||
version = "0.7.3"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "0e7465ac9959cc2b1404e8e2367b43684a6d13790fe23056cc8c6c5a6b7bcb94"
|
||||
|
||||
[[package]]
|
||||
name = "memchr"
|
||||
version = "2.7.6"
|
||||
@@ -764,9 +804,9 @@ checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a"
|
||||
|
||||
[[package]]
|
||||
name = "mio"
|
||||
version = "1.1.0"
|
||||
version = "1.1.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "69d83b0086dc8ecf3ce9ae2874b2d1290252e2a30720bea58a5c6639b0092873"
|
||||
checksum = "a69bcab0ad47271a0234d9422b131806bf3968021e5dc9328caf2d4cd58557fc"
|
||||
dependencies = [
|
||||
"libc",
|
||||
"wasi",
|
||||
@@ -775,20 +815,21 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "mockito"
|
||||
version = "1.7.0"
|
||||
version = "1.7.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "7760e0e418d9b7e5777c0374009ca4c93861b9066f18cb334a20ce50ab63aa48"
|
||||
checksum = "7e0603425789b4a70fcc4ac4f5a46a566c116ee3e2a6b768dc623f7719c611de"
|
||||
dependencies = [
|
||||
"assert-json-diff",
|
||||
"bytes",
|
||||
"colored",
|
||||
"futures-util",
|
||||
"futures-core",
|
||||
"http",
|
||||
"http-body",
|
||||
"http-body-util",
|
||||
"hyper",
|
||||
"hyper-util",
|
||||
"log",
|
||||
"pin-project-lite",
|
||||
"rand",
|
||||
"regex",
|
||||
"serde_json",
|
||||
@@ -841,7 +882,7 @@ version = "0.10.75"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "08838db121398ad17ab8531ce9de97b244589089e290a384c900cb9ff7434328"
|
||||
dependencies = [
|
||||
"bitflags 2.10.0",
|
||||
"bitflags",
|
||||
"cfg-if",
|
||||
"foreign-types",
|
||||
"libc",
|
||||
@@ -928,9 +969,10 @@ checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"
|
||||
|
||||
[[package]]
|
||||
name = "potatomesh-matrix-bridge"
|
||||
version = "0.5.9"
|
||||
version = "0.5.12"
|
||||
dependencies = [
|
||||
"anyhow",
|
||||
"axum",
|
||||
"clap",
|
||||
"mockito",
|
||||
"reqwest",
|
||||
@@ -940,6 +982,7 @@ dependencies = [
|
||||
"tempfile",
|
||||
"tokio",
|
||||
"toml",
|
||||
"tower",
|
||||
"tracing",
|
||||
"tracing-subscriber",
|
||||
"urlencoding",
|
||||
@@ -965,9 +1008,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "proc-macro2"
|
||||
version = "1.0.103"
|
||||
version = "1.0.105"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "5ee95bc4ef87b8d5ba32e8b7714ccc834865276eab0aed5c9958d00ec45f49e8"
|
||||
checksum = "535d180e0ecab6268a3e718bb9fd44db66bbbc256257165fc699dadf70d16fe7"
|
||||
dependencies = [
|
||||
"unicode-ident",
|
||||
]
|
||||
@@ -994,9 +1037,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "quinn-proto"
|
||||
version = "0.11.13"
|
||||
version = "0.11.14"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f1906b49b0c3bc04b5fe5d86a77925ae6524a19b816ae38ce1e426255f1d8a31"
|
||||
checksum = "434b42fec591c96ef50e21e886936e66d3cc3f737104fdb9b737c40ffb94c098"
|
||||
dependencies = [
|
||||
"bytes",
|
||||
"getrandom 0.3.4",
|
||||
@@ -1029,9 +1072,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "quote"
|
||||
version = "1.0.42"
|
||||
version = "1.0.43"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "a338cc41d27e6cc6dce6cefc13a0729dfbb81c262b1f519331575dd80ef3067f"
|
||||
checksum = "dc74d9a594b72ae6656596548f56f667211f8a97b3d4c3d467150794690dc40a"
|
||||
dependencies = [
|
||||
"proc-macro2",
|
||||
]
|
||||
@@ -1077,7 +1120,7 @@ version = "0.5.18"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "ed2bf2547551a7053d6fdfafda3f938979645c44812fbfcda098faae3f1a362d"
|
||||
dependencies = [
|
||||
"bitflags 2.10.0",
|
||||
"bitflags",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -1111,9 +1154,9 @@ checksum = "7a2d987857b319362043e95f5353c0535c1f58eec5336fdfcf626430af7def58"
|
||||
|
||||
[[package]]
|
||||
name = "reqwest"
|
||||
version = "0.12.24"
|
||||
version = "0.12.28"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "9d0946410b9f7b082a427e4ef5c8ff541a88b357bc6c637c40db3a68ac70a36f"
|
||||
checksum = "eddd3ca559203180a307f12d114c268abf583f59b03cb906fd0b3ff8646c1147"
|
||||
dependencies = [
|
||||
"base64",
|
||||
"bytes",
|
||||
@@ -1175,11 +1218,11 @@ checksum = "357703d41365b4b27c590e3ed91eabb1b663f07c4c084095e60cbed4362dff0d"
|
||||
|
||||
[[package]]
|
||||
name = "rustix"
|
||||
version = "1.1.2"
|
||||
version = "1.1.3"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "cd15f8a2c5551a84d56efdc1cd049089e409ac19a3072d5037a17fd70719ff3e"
|
||||
checksum = "146c9e247ccc180c1f61615433868c99f3de3ae256a30a43b49f67c2d9171f34"
|
||||
dependencies = [
|
||||
"bitflags 2.10.0",
|
||||
"bitflags",
|
||||
"errno",
|
||||
"libc",
|
||||
"linux-raw-sys",
|
||||
@@ -1188,9 +1231,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "rustls"
|
||||
version = "0.23.35"
|
||||
version = "0.23.36"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "533f54bc6a7d4f647e46ad909549eda97bf5afc1585190ef692b4286b198bd8f"
|
||||
checksum = "c665f33d38cea657d9614f766881e4d510e0eda4239891eea56b4cadcf01801b"
|
||||
dependencies = [
|
||||
"once_cell",
|
||||
"ring",
|
||||
@@ -1202,9 +1245,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "rustls-pki-types"
|
||||
version = "1.13.0"
|
||||
version = "1.13.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "94182ad936a0c91c324cd46c6511b9510ed16af436d7b5bab34beab0afd55f7a"
|
||||
checksum = "21e6f2ab2928ca4291b86736a8bd920a277a399bba1589409d72154ff87c1282"
|
||||
dependencies = [
|
||||
"web-time",
|
||||
"zeroize",
|
||||
@@ -1212,9 +1255,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "rustls-webpki"
|
||||
version = "0.103.8"
|
||||
version = "0.103.10"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "2ffdfa2f5286e2247234e03f680868ac2815974dc39e00ea15adc445d0aafe52"
|
||||
checksum = "df33b2b81ac578cabaf06b89b0631153a3f416b0a886e8a7a1707fb51abbd1ef"
|
||||
dependencies = [
|
||||
"ring",
|
||||
"rustls-pki-types",
|
||||
@@ -1229,9 +1272,9 @@ checksum = "b39cdef0fa800fc44525c84ccb54a029961a8215f9619753635a9c0d2538d46d"
|
||||
|
||||
[[package]]
|
||||
name = "ryu"
|
||||
version = "1.0.20"
|
||||
version = "1.0.22"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
|
||||
checksum = "a50f4cf475b65d88e057964e0e9bb1f0aa9bbb2036dc65c64596b42932536984"
|
||||
|
||||
[[package]]
|
||||
name = "scc"
|
||||
@@ -1269,7 +1312,7 @@ version = "2.11.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "897b2245f0b511c87893af39b033e5ca9cce68824c4d7e7630b5a1d339658d02"
|
||||
dependencies = [
|
||||
"bitflags 2.10.0",
|
||||
"bitflags",
|
||||
"core-foundation",
|
||||
"core-foundation-sys",
|
||||
"libc",
|
||||
@@ -1318,22 +1361,33 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "serde_json"
|
||||
version = "1.0.145"
|
||||
version = "1.0.149"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "402a6f66d8c709116cf22f558eab210f5a50187f702eb4d7e5ef38d9a7f1c79c"
|
||||
checksum = "83fc039473c5595ace860d8c4fafa220ff474b3fc6bfdb4293327f1a37e94d86"
|
||||
dependencies = [
|
||||
"itoa",
|
||||
"memchr",
|
||||
"ryu",
|
||||
"serde",
|
||||
"serde_core",
|
||||
"zmij",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "serde_path_to_error"
|
||||
version = "0.1.20"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "10a9ff822e371bb5403e391ecd83e182e0e77ba7f6fe0160b795797109d1b457"
|
||||
dependencies = [
|
||||
"itoa",
|
||||
"serde",
|
||||
"serde_core",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "serde_spanned"
|
||||
version = "1.0.3"
|
||||
version = "1.0.4"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "e24345aa0fe688594e73770a5f6d1b216508b4f93484c0026d521acd30134392"
|
||||
checksum = "f8bbf91e5a4d6315eee45e704372590b30e260ee83af6639d64557f51b067776"
|
||||
dependencies = [
|
||||
"serde_core",
|
||||
]
|
||||
@@ -1352,11 +1406,12 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "serial_test"
|
||||
version = "3.2.0"
|
||||
version = "3.3.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "1b258109f244e1d6891bf1053a55d63a5cd4f8f4c30cf9a1280989f80e7a1fa9"
|
||||
checksum = "0d0b343e184fc3b7bb44dff0705fffcf4b3756ba6aff420dddd8b24ca145e555"
|
||||
dependencies = [
|
||||
"futures",
|
||||
"futures-executor",
|
||||
"futures-util",
|
||||
"log",
|
||||
"once_cell",
|
||||
"parking_lot",
|
||||
@@ -1366,9 +1421,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "serial_test_derive"
|
||||
version = "3.2.0"
|
||||
version = "3.3.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "5d69265a08751de7844521fd15003ae0a888e035773ba05695c5c759a6f89eef"
|
||||
checksum = "6f50427f258fb77356e4cd4aa0e87e2bd2c66dbcee41dc405282cae2bfc26c83"
|
||||
dependencies = [
|
||||
"proc-macro2",
|
||||
"quote",
|
||||
@@ -1438,9 +1493,9 @@ checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292"
|
||||
|
||||
[[package]]
|
||||
name = "syn"
|
||||
version = "2.0.111"
|
||||
version = "2.0.114"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "390cc9a294ab71bdb1aa2e99d13be9c753cd2d7bd6560c77118597410c4d2e87"
|
||||
checksum = "d4d107df263a3013ef9b1879b0df87d706ff80f65a86ea879bd9c31f9b307c2a"
|
||||
dependencies = [
|
||||
"proc-macro2",
|
||||
"quote",
|
||||
@@ -1469,20 +1524,20 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "system-configuration"
|
||||
version = "0.5.1"
|
||||
version = "0.6.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "ba3a3adc5c275d719af8cb4272ea1c4a6d668a777f37e115f6d11ddbc1c8e0e7"
|
||||
checksum = "3c879d448e9d986b661742763247d3693ed13609438cf3d006f51f5368a5ba6b"
|
||||
dependencies = [
|
||||
"bitflags 1.3.2",
|
||||
"bitflags",
|
||||
"core-foundation",
|
||||
"system-configuration-sys",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "system-configuration-sys"
|
||||
version = "0.5.0"
|
||||
version = "0.6.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "a75fb188eb626b924683e3b95e3a48e63551fcfb51949de2f06a9d91dbee93c9"
|
||||
checksum = "8e1d1b10ced5ca923a1fcb8d03e96b8d3268065d724548c0211415ff6ac6bac4"
|
||||
dependencies = [
|
||||
"core-foundation-sys",
|
||||
"libc",
|
||||
@@ -1490,9 +1545,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "tempfile"
|
||||
version = "3.23.0"
|
||||
version = "3.24.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "2d31c77bdf42a745371d260a26ca7163f1e0924b64afa0b688e61b5a9fa02f16"
|
||||
checksum = "655da9c7eb6305c55742045d5a8d2037996d61d8de95806335c7c86ce0f82e9c"
|
||||
dependencies = [
|
||||
"fastrand",
|
||||
"getrandom 0.3.4",
|
||||
@@ -1557,9 +1612,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
|
||||
|
||||
[[package]]
|
||||
name = "tokio"
|
||||
version = "1.48.0"
|
||||
version = "1.49.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "ff360e02eab121e0bc37a2d3b4d4dc622e6eda3a8e5253d5435ecf5bd4c68408"
|
||||
checksum = "72a2903cd7736441aac9df9d7688bd0ce48edccaadf181c3b90be801e81d3d86"
|
||||
dependencies = [
|
||||
"bytes",
|
||||
"libc",
|
||||
@@ -1604,9 +1659,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "tokio-util"
|
||||
version = "0.7.17"
|
||||
version = "0.7.18"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "2efa149fe76073d6e8fd97ef4f4eca7b67f599660115591483572e406e165594"
|
||||
checksum = "9ae9cec805b01e8fc3fd2fe289f89149a9b66dd16786abd8b19cfa7b48cb0098"
|
||||
dependencies = [
|
||||
"bytes",
|
||||
"futures-core",
|
||||
@@ -1617,9 +1672,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "toml"
|
||||
version = "0.9.8"
|
||||
version = "0.9.11+spec-1.1.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f0dc8b1fb61449e27716ec0e1bdf0f6b8f3e8f6b05391e8497b8b6d7804ea6d8"
|
||||
checksum = "f3afc9a848309fe1aaffaed6e1546a7a14de1f935dc9d89d32afd9a44bab7c46"
|
||||
dependencies = [
|
||||
"indexmap",
|
||||
"serde_core",
|
||||
@@ -1632,27 +1687,27 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "toml_datetime"
|
||||
version = "0.7.3"
|
||||
version = "0.7.5+spec-1.1.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f2cdb639ebbc97961c51720f858597f7f24c4fc295327923af55b74c3c724533"
|
||||
checksum = "92e1cfed4a3038bc5a127e35a2d360f145e1f4b971b551a2ba5fd7aedf7e1347"
|
||||
dependencies = [
|
||||
"serde_core",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "toml_parser"
|
||||
version = "1.0.4"
|
||||
version = "1.0.6+spec-1.1.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "c0cbe268d35bdb4bb5a56a2de88d0ad0eb70af5384a99d648cd4b3d04039800e"
|
||||
checksum = "a3198b4b0a8e11f09dd03e133c0280504d0801269e9afa46362ffde1cbeebf44"
|
||||
dependencies = [
|
||||
"winnow",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "toml_writer"
|
||||
version = "1.0.4"
|
||||
version = "1.0.6+spec-1.1.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "df8b2b54733674ad286d16267dcfc7a71ed5c776e4ac7aa3c3e2561f7c637bf2"
|
||||
checksum = "ab16f14aed21ee8bfd8ec22513f7287cd4a91aa92e44edfe2c17ddd004e92607"
|
||||
|
||||
[[package]]
|
||||
name = "tower"
|
||||
@@ -1667,15 +1722,16 @@ dependencies = [
|
||||
"tokio",
|
||||
"tower-layer",
|
||||
"tower-service",
|
||||
"tracing",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "tower-http"
|
||||
version = "0.6.7"
|
||||
version = "0.6.8"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "9cf146f99d442e8e68e585f5d798ccd3cad9a7835b917e09728880a862706456"
|
||||
checksum = "d4e6559d53cc268e5031cd8429d05415bc4cb4aefc4aa5d6cc35fbf5b924a1f8"
|
||||
dependencies = [
|
||||
"bitflags 2.10.0",
|
||||
"bitflags",
|
||||
"bytes",
|
||||
"futures-util",
|
||||
"http",
|
||||
@@ -1701,10 +1757,11 @@ checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3"
|
||||
|
||||
[[package]]
|
||||
name = "tracing"
|
||||
version = "0.1.41"
|
||||
version = "0.1.44"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "784e0ac535deb450455cbfa28a6f0df145ea1bb7ae51b821cf5e7927fdcfbdd0"
|
||||
checksum = "63e71662fa4b2a2c3a26f570f037eb95bb1f85397f3cd8076caed2f026a6d100"
|
||||
dependencies = [
|
||||
"log",
|
||||
"pin-project-lite",
|
||||
"tracing-attributes",
|
||||
"tracing-core",
|
||||
@@ -1723,9 +1780,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "tracing-core"
|
||||
version = "0.1.35"
|
||||
version = "0.1.36"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "7a04e24fab5c89c6a36eb8558c9656f30d81de51dfa4d3b45f26b21d61fa0a6c"
|
||||
checksum = "db97caf9d906fbde555dd62fa95ddba9eecfd14cb388e4f491a66d74cd5fb79a"
|
||||
dependencies = [
|
||||
"once_cell",
|
||||
"valuable",
|
||||
@@ -1744,9 +1801,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "tracing-subscriber"
|
||||
version = "0.3.20"
|
||||
version = "0.3.22"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "2054a14f5307d601f88daf0553e1cbf472acc4f2c51afab632431cdcd72124d5"
|
||||
checksum = "2f30143827ddab0d256fd843b7a66d164e9f271cfa0dde49142c5ca0ca291f1e"
|
||||
dependencies = [
|
||||
"matchers",
|
||||
"nu-ansi-term",
|
||||
@@ -1780,9 +1837,9 @@ checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1"
|
||||
|
||||
[[package]]
|
||||
name = "url"
|
||||
version = "2.5.7"
|
||||
version = "2.5.8"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "08bc136a29a3d1758e07a9cca267be308aeebf5cfd5a10f3f67ab2097683ef5b"
|
||||
checksum = "ff67a8a4397373c3ef660812acab3268222035010ab8680ec4215f38ba3d0eed"
|
||||
dependencies = [
|
||||
"form_urlencoded",
|
||||
"idna",
|
||||
@@ -1846,9 +1903,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "wasm-bindgen"
|
||||
version = "0.2.105"
|
||||
version = "0.2.106"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "da95793dfc411fbbd93f5be7715b0578ec61fe87cb1a42b12eb625caa5c5ea60"
|
||||
checksum = "0d759f433fa64a2d763d1340820e46e111a7a5ab75f993d1852d70b03dbb80fd"
|
||||
dependencies = [
|
||||
"cfg-if",
|
||||
"once_cell",
|
||||
@@ -1859,9 +1916,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "wasm-bindgen-futures"
|
||||
version = "0.4.55"
|
||||
version = "0.4.56"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "551f88106c6d5e7ccc7cd9a16f312dd3b5d36ea8b4954304657d5dfba115d4a0"
|
||||
checksum = "836d9622d604feee9e5de25ac10e3ea5f2d65b41eac0d9ce72eb5deae707ce7c"
|
||||
dependencies = [
|
||||
"cfg-if",
|
||||
"js-sys",
|
||||
@@ -1872,9 +1929,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "wasm-bindgen-macro"
|
||||
version = "0.2.105"
|
||||
version = "0.2.106"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "04264334509e04a7bf8690f2384ef5265f05143a4bff3889ab7a3269adab59c2"
|
||||
checksum = "48cb0d2638f8baedbc542ed444afc0644a29166f1595371af4fecf8ce1e7eeb3"
|
||||
dependencies = [
|
||||
"quote",
|
||||
"wasm-bindgen-macro-support",
|
||||
@@ -1882,9 +1939,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "wasm-bindgen-macro-support"
|
||||
version = "0.2.105"
|
||||
version = "0.2.106"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "420bc339d9f322e562942d52e115d57e950d12d88983a14c79b86859ee6c7ebc"
|
||||
checksum = "cefb59d5cd5f92d9dcf80e4683949f15ca4b511f4ac0a6e14d4e1ac60c6ecd40"
|
||||
dependencies = [
|
||||
"bumpalo",
|
||||
"proc-macro2",
|
||||
@@ -1895,18 +1952,18 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "wasm-bindgen-shared"
|
||||
version = "0.2.105"
|
||||
version = "0.2.106"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "76f218a38c84bcb33c25ec7059b07847d465ce0e0a76b995e134a45adcb6af76"
|
||||
checksum = "cbc538057e648b67f72a982e708d485b2efa771e1ac05fec311f9f63e5800db4"
|
||||
dependencies = [
|
||||
"unicode-ident",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "web-sys"
|
||||
version = "0.3.82"
|
||||
version = "0.3.83"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "3a1f95c0d03a47f4ae1f7a64643a6bb97465d9b740f0fa8f90ea33915c99a9a1"
|
||||
checksum = "9b32828d774c412041098d182a8b38b16ea816958e07cf40eec2bc080ae137ac"
|
||||
dependencies = [
|
||||
"js-sys",
|
||||
"wasm-bindgen",
|
||||
@@ -1924,9 +1981,9 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "webpki-roots"
|
||||
version = "1.0.4"
|
||||
version = "1.0.5"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "b2878ef029c47c6e8cf779119f20fcf52bde7ad42a731b2a304bc221df17571e"
|
||||
checksum = "12bed680863276c63889429bfd6cab3b99943659923822de1c8a39c49e4d722c"
|
||||
dependencies = [
|
||||
"rustls-pki-types",
|
||||
]
|
||||
@@ -1975,6 +2032,15 @@ dependencies = [
|
||||
"windows-targets 0.52.6",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "windows-sys"
|
||||
version = "0.59.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b"
|
||||
dependencies = [
|
||||
"windows-targets 0.52.6",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "windows-sys"
|
||||
version = "0.60.2"
|
||||
@@ -2165,18 +2231,18 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "zerocopy"
|
||||
version = "0.8.30"
|
||||
version = "0.8.33"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "4ea879c944afe8a2b25fef16bb4ba234f47c694565e97383b36f3a878219065c"
|
||||
checksum = "668f5168d10b9ee831de31933dc111a459c97ec93225beb307aed970d1372dfd"
|
||||
dependencies = [
|
||||
"zerocopy-derive",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "zerocopy-derive"
|
||||
version = "0.8.30"
|
||||
version = "0.8.33"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "cf955aa904d6040f70dc8e9384444cb1030aed272ba3cb09bbc4ab9e7c1f34f5"
|
||||
checksum = "2c7962b26b0a8685668b671ee4b54d007a67d4eaf05fda79ac0ecf41e32270f1"
|
||||
dependencies = [
|
||||
"proc-macro2",
|
||||
"quote",
|
||||
@@ -2242,3 +2308,9 @@ dependencies = [
|
||||
"quote",
|
||||
"syn",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "zmij"
|
||||
version = "1.0.12"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "2fc5a66a20078bf1251bde995aa2fdcc4b800c70b5d92dd2c62abc5c60f679f8"
|
||||
|
||||
+3
-1
@@ -14,7 +14,7 @@
|
||||
|
||||
[package]
|
||||
name = "potatomesh-matrix-bridge"
|
||||
version = "0.5.9"
|
||||
version = "0.5.12"
|
||||
edition = "2021"
|
||||
|
||||
[dependencies]
|
||||
@@ -27,9 +27,11 @@ anyhow = "1"
|
||||
tracing = "0.1"
|
||||
tracing-subscriber = { version = "0.3", features = ["fmt", "env-filter"] }
|
||||
urlencoding = "2"
|
||||
axum = { version = "0.7", features = ["json"] }
|
||||
clap = { version = "4", features = ["derive"] }
|
||||
|
||||
[dev-dependencies]
|
||||
tempfile = "3"
|
||||
mockito = "1"
|
||||
serial_test = "3"
|
||||
tower = "0.5"
|
||||
|
||||
+2
-1
@@ -9,6 +9,8 @@ poll_interval_secs = 60
|
||||
homeserver = "https://matrix.dod.ngo"
|
||||
# Appservice access token (from your registration.yaml)
|
||||
as_token = "INVALID_TOKEN_NOT_WORKING"
|
||||
# Homeserver token used to authenticate Synapse callbacks
|
||||
hs_token = "INVALID_TOKEN_NOT_WORKING"
|
||||
# Server name (domain) part of Matrix user IDs
|
||||
server_name = "dod.ngo"
|
||||
# Room ID to send into (must be joined by the appservice / puppets)
|
||||
@@ -17,4 +19,3 @@ room_id = "!sXabOBXbVObAlZQEUs:c-base.org" # "#potato-bridge:c-base.org"
|
||||
[state]
|
||||
# Where to persist last seen message id (optional but recommended)
|
||||
state_file = "bridge_state.json"
|
||||
|
||||
|
||||
+3
-1
@@ -12,7 +12,7 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
FROM rust:1.91-bookworm AS builder
|
||||
FROM rust:1.92-bookworm AS builder
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
@@ -37,6 +37,8 @@ COPY --from=builder /app/target/release/potatomesh-matrix-bridge /usr/local/bin/
|
||||
COPY matrix/Config.toml /app/Config.example.toml
|
||||
COPY matrix/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
|
||||
|
||||
EXPOSE 41448
|
||||
|
||||
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
|
||||
|
||||
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
|
||||
|
||||
+95
-55
@@ -2,6 +2,8 @@
|
||||
|
||||
A small Rust daemon that bridges **PotatoMesh** LoRa messages into a **Matrix** room.
|
||||
|
||||

|
||||
|
||||
For each PotatoMesh node, the bridge creates (or uses) a **Matrix puppet user**:
|
||||
|
||||
- Matrix localpart: `potato_` + the hex node id (without `!`), e.g. `!67fc83cb` → `@potato_67fc83cb:example.org`
|
||||
@@ -54,11 +56,17 @@ This is **not** a full appservice framework; it just speaks the minimal HTTP nee
|
||||
|
||||
## Configuration
|
||||
|
||||
Configuration can come from TOML, CLI flags, and environment variables. The TOML
|
||||
file is optional as long as every required setting is supplied via CLI/env/secret
|
||||
overrides.
|
||||
Configuration can come from a TOML file, CLI flags, environment variables, or secret files. The bridge merges inputs in this order (highest to lowest):
|
||||
|
||||
Example:
|
||||
1. CLI flags
|
||||
2. Environment variables
|
||||
3. Secret files (`*_FILE` paths or container defaults)
|
||||
4. TOML config file
|
||||
5. Container defaults (paths + poll interval)
|
||||
|
||||
If no TOML file is provided, required values must be supplied via CLI/env/secret inputs.
|
||||
|
||||
Example TOML:
|
||||
|
||||
```toml
|
||||
[potatomesh]
|
||||
@@ -72,6 +80,8 @@ poll_interval_secs = 10
|
||||
homeserver = "https://matrix.example.org"
|
||||
# Appservice access token (from your registration.yaml)
|
||||
as_token = "YOUR_APPSERVICE_AS_TOKEN"
|
||||
# Appservice homeserver token (must match registration hs_token)
|
||||
hs_token = "SECRET_HS_TOKEN"
|
||||
# Server name (domain) part of Matrix user IDs
|
||||
server_name = "example.org"
|
||||
# Room ID to send into (must be joined by the appservice / puppets)
|
||||
@@ -82,65 +92,91 @@ room_id = "!yourroomid:example.org"
|
||||
state_file = "bridge_state.json"
|
||||
````
|
||||
|
||||
### CLI Overrides
|
||||
The `hs_token` is used to validate inbound appservice transactions. Keep it identical in `Config.toml` and your Matrix appservice registration file.
|
||||
|
||||
Run `potatomesh-matrix-bridge --help` for the full list. The most common flags:
|
||||
### CLI Flags
|
||||
|
||||
- `--config` (or `--config-path`) to point at a TOML file
|
||||
- `--state-file`
|
||||
- `--potatomesh-base-url`
|
||||
- `--potatomesh-poll-interval-secs`
|
||||
- `--matrix-homeserver`
|
||||
- `--matrix-as-token`
|
||||
- `--matrix-server-name`
|
||||
- `--matrix-room-id`
|
||||
- `--container-defaults` / `--no-container-defaults`
|
||||
Run `potatomesh-matrix-bridge --help` for the full list. Common flags:
|
||||
|
||||
### Environment Overrides
|
||||
* `--config PATH`
|
||||
* `--state-file PATH`
|
||||
* `--potatomesh-base-url URL`
|
||||
* `--potatomesh-poll-interval-secs SECS`
|
||||
* `--matrix-homeserver URL`
|
||||
* `--matrix-as-token TOKEN`
|
||||
* `--matrix-as-token-file PATH`
|
||||
* `--matrix-hs-token TOKEN`
|
||||
* `--matrix-hs-token-file PATH`
|
||||
* `--matrix-server-name NAME`
|
||||
* `--matrix-room-id ROOM`
|
||||
* `--container` / `--no-container`
|
||||
* `--secrets-dir PATH`
|
||||
|
||||
Environment variables override CLI and TOML values:
|
||||
### Environment Variables
|
||||
|
||||
- `POTATOMESH_BASE_URL`
|
||||
- `POTATOMESH_POLL_INTERVAL_SECS`
|
||||
- `MATRIX_HOMESERVER`
|
||||
- `MATRIX_AS_TOKEN`
|
||||
- `MATRIX_SERVER_NAME`
|
||||
- `MATRIX_ROOM_ID`
|
||||
- `STATE_FILE`
|
||||
- `POTATOMESH_CONFIG_PATH` (optional TOML path)
|
||||
- `POTATOMESH_CONTAINER_DEFAULTS` (`1/0`, `true/false`)
|
||||
- `POTATOMESH_SECRETS_DIR` (default secrets directory)
|
||||
- `CONTAINER` (container detection hint)
|
||||
* `POTATOMESH_CONFIG`
|
||||
* `POTATOMESH_BASE_URL`
|
||||
* `POTATOMESH_POLL_INTERVAL_SECS`
|
||||
* `MATRIX_HOMESERVER`
|
||||
* `MATRIX_AS_TOKEN`
|
||||
* `MATRIX_AS_TOKEN_FILE`
|
||||
* `MATRIX_HS_TOKEN`
|
||||
* `MATRIX_HS_TOKEN_FILE`
|
||||
* `MATRIX_SERVER_NAME`
|
||||
* `MATRIX_ROOM_ID`
|
||||
* `STATE_FILE`
|
||||
* `POTATOMESH_CONTAINER`
|
||||
* `POTATOMESH_SECRETS_DIR`
|
||||
|
||||
### Docker Secrets
|
||||
### Secret Files
|
||||
|
||||
Every env var above supports a `*_FILE` companion (for example, `MATRIX_AS_TOKEN_FILE`).
|
||||
When present, the bridge reads the file contents and uses them instead of the plain env var.
|
||||
If `POTATOMESH_SECRETS_DIR` is set (or container defaults are enabled), the bridge also
|
||||
checks for files named after the env vars (for example, `/run/secrets/MATRIX_AS_TOKEN`)
|
||||
even when the `*_FILE` variable is not set.
|
||||
If you supply `*_FILE` values, the bridge reads the secret contents and trims whitespace. When running inside a container, the bridge also checks the default secrets directory (default: `/run/secrets`) for:
|
||||
|
||||
### Precedence
|
||||
|
||||
From highest to lowest:
|
||||
|
||||
1. `*_FILE` secret values (explicit or default secrets directory)
|
||||
2. Environment variables
|
||||
3. CLI flags
|
||||
4. TOML config
|
||||
5. Built-in defaults
|
||||
* `matrix_as_token`
|
||||
* `matrix_hs_token`
|
||||
|
||||
### Container Defaults
|
||||
|
||||
When container defaults are enabled (auto-detected or forced):
|
||||
Container detection checks `POTATOMESH_CONTAINER`, `CONTAINER`, and `/proc/1/cgroup`. When detected (or forced with `--container`), defaults shift to:
|
||||
|
||||
- Default config path: `/app/Config.toml`
|
||||
- Default state file: `/app/bridge_state.json`
|
||||
- Default secrets directory: `/run/secrets`
|
||||
- Default poll interval: 120 seconds
|
||||
* Config path: `/app/Config.toml`
|
||||
* State file: `/app/bridge_state.json`
|
||||
* Secrets dir: `/run/secrets`
|
||||
* Poll interval: 15 seconds (if not otherwise configured)
|
||||
|
||||
Disable container defaults with `--no-container-defaults` or set
|
||||
`POTATOMESH_CONTAINER_DEFAULTS=0`.
|
||||
Set `POTATOMESH_CONTAINER=0` or `--no-container` to opt out of container defaults.
|
||||
|
||||
### Docker Compose First Run
|
||||
|
||||
Before starting Compose, complete this preflight checklist:
|
||||
|
||||
1. Ensure `matrix/Config.toml` exists as a regular file on the host (not a directory).
|
||||
2. Fill required Matrix values in `matrix/Config.toml`:
|
||||
- `matrix.as_token`
|
||||
- `matrix.hs_token`
|
||||
- `matrix.server_name`
|
||||
- `matrix.room_id`
|
||||
- `matrix.homeserver`
|
||||
|
||||
This is required because the shared Compose anchor `x-matrix-bridge-base` mounts `./matrix/Config.toml` to `/app/Config.toml`.
|
||||
Then follow the token and namespace requirements in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example).
|
||||
|
||||
#### Troubleshooting
|
||||
|
||||
| Symptom | Likely cause | What to check |
|
||||
| --- | --- | --- |
|
||||
| `Is a directory (os error 21)` | Host mount source became a directory | `matrix/Config.toml` was missing at mount time and got created as a directory on host. |
|
||||
| `M_UNKNOWN_TOKEN` / `401 Unauthorized` | Matrix appservice token mismatch | Verify `matrix.as_token` matches your appservice registration and setup in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example). |
|
||||
|
||||
#### Recovery from accidental `Config.toml` directory creation
|
||||
|
||||
```bash
|
||||
# from repo root
|
||||
rm -rf matrix/Config.toml
|
||||
touch matrix/Config.toml
|
||||
# then edit matrix/Config.toml and set valid matrix.as_token, matrix.hs_token,
|
||||
# matrix.server_name, matrix.room_id, and matrix.homeserver before starting compose
|
||||
```
|
||||
|
||||
### PotatoMesh API
|
||||
|
||||
@@ -196,7 +232,7 @@ A minimal example sketch (you **must** adjust URLs, secrets, namespaces):
|
||||
|
||||
```yaml
|
||||
id: potatomesh-bridge
|
||||
url: "http://your-bridge-host:8080" # not used by this bridge if it only calls out
|
||||
url: "http://your-bridge-host:41448"
|
||||
as_token: "YOUR_APPSERVICE_AS_TOKEN"
|
||||
hs_token: "SECRET_HS_TOKEN"
|
||||
sender_localpart: "potatomesh-bridge"
|
||||
@@ -207,10 +243,12 @@ namespaces:
|
||||
regex: "@potato_[0-9a-f]{8}:example.org"
|
||||
```
|
||||
|
||||
For this bridge, only the `as_token` and `namespaces.users` actually matter. The bridge does not accept inbound events; it only uses the `as_token` to call the homeserver.
|
||||
This bridge listens for Synapse appservice callbacks on port `41448` so it can log inbound transaction payloads. It still only forwards messages one way (PotatoMesh → Matrix), so inbound Matrix events are acknowledged but not bridged. The `as_token` and `namespaces.users` entries remain required for outbound calls, and the `url` should point at the listener.
|
||||
|
||||
In Synapse’s `homeserver.yaml`, add the registration file under `app_service_config_files`, restart, and invite a puppet user to your target room (or use room ID directly).
|
||||
|
||||
The bridge validates inbound appservice callbacks by comparing the `access_token` query param to `hs_token` in `Config.toml`, so keep those values in sync.
|
||||
|
||||
---
|
||||
|
||||
## Build
|
||||
@@ -240,10 +278,11 @@ Build the container from the repo root with the included `matrix/Dockerfile`:
|
||||
docker build -f matrix/Dockerfile -t potatomesh-matrix-bridge .
|
||||
```
|
||||
|
||||
Provide your config at `/app/Config.toml` and persist the bridge state file by mounting volumes. Minimal example:
|
||||
Provide your config at `/app/Config.toml` (or use CLI/env/secret overrides) and persist the bridge state file by mounting volumes. Minimal example:
|
||||
|
||||
```bash
|
||||
docker run --rm \
|
||||
-p 41448:41448 \
|
||||
-v bridge_state:/app \
|
||||
-v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
|
||||
potatomesh-matrix-bridge
|
||||
@@ -253,12 +292,13 @@ If you prefer to isolate the state file from the config, mount it directly inste
|
||||
|
||||
```bash
|
||||
docker run --rm \
|
||||
-p 41448:41448 \
|
||||
-v bridge_state:/app \
|
||||
-v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
|
||||
potatomesh-matrix-bridge
|
||||
```
|
||||
|
||||
The image ships `Config.example.toml` for reference, but the bridge will exit if `/app/Config.toml` is not provided.
|
||||
The image ships `Config.example.toml` for reference. If `/app/Config.toml` is absent, set the required values via environment variables, CLI flags, or secrets instead.
|
||||
|
||||
---
|
||||
|
||||
@@ -296,7 +336,7 @@ Delete `bridge_state.json` if you want it to replay all currently available mess
|
||||
|
||||
## Development
|
||||
|
||||
Run tests (currently mostly compile checks, no real tests yet):
|
||||
Run tests:
|
||||
|
||||
```bash
|
||||
cargo test
|
||||
|
||||
@@ -15,10 +15,12 @@
|
||||
|
||||
set -e
|
||||
|
||||
# Surface container detection for the bridge and set default secret directory.
|
||||
export CONTAINER="${CONTAINER:-1}"
|
||||
export POTATOMESH_CONTAINER_DEFAULTS="${POTATOMESH_CONTAINER_DEFAULTS:-1}"
|
||||
export POTATOMESH_SECRETS_DIR="${POTATOMESH_SECRETS_DIR:-/run/secrets}"
|
||||
# Default to container-aware configuration paths unless explicitly overridden.
|
||||
: "${POTATOMESH_CONTAINER:=1}"
|
||||
: "${POTATOMESH_SECRETS_DIR:=/run/secrets}"
|
||||
|
||||
export POTATOMESH_CONTAINER
|
||||
export POTATOMESH_SECRETS_DIR
|
||||
|
||||
# Default state file path from Config.toml unless overridden.
|
||||
STATE_FILE="${STATE_FILE:-/app/bridge_state.json}"
|
||||
|
||||
+70
-124
@@ -12,148 +12,94 @@
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
use clap::Parser;
|
||||
use clap::{ArgAction, Parser};
|
||||
|
||||
use crate::config::{
|
||||
BootstrapOverrides, ConfigOverrides, MatrixOverrides, PotatomeshOverrides, StateOverrides,
|
||||
};
|
||||
#[cfg(not(test))]
|
||||
use crate::config::{ConfigInputs, ConfigOverrides};
|
||||
|
||||
/// Command-line overrides for the Matrix bridge.
|
||||
/// CLI arguments for the Matrix bridge.
|
||||
#[derive(Debug, Parser)]
|
||||
#[command(name = "potatomesh-matrix-bridge", version)]
|
||||
#[command(
|
||||
name = "potatomesh-matrix-bridge",
|
||||
version,
|
||||
about = "PotatoMesh Matrix bridge"
|
||||
)]
|
||||
pub struct Cli {
|
||||
/// TOML config path (optional, defaults to Config.toml or /app/Config.toml in containers).
|
||||
#[arg(long = "config", alias = "config-path")]
|
||||
pub config_path: Option<String>,
|
||||
|
||||
/// Override the state file path.
|
||||
#[arg(long)]
|
||||
/// Path to the configuration TOML file.
|
||||
#[arg(long, value_name = "PATH")]
|
||||
pub config: Option<String>,
|
||||
/// Path to the bridge state file.
|
||||
#[arg(long, value_name = "PATH")]
|
||||
pub state_file: Option<String>,
|
||||
|
||||
/// Override the PotatoMesh base URL.
|
||||
#[arg(long)]
|
||||
/// PotatoMesh base URL.
|
||||
#[arg(long, value_name = "URL")]
|
||||
pub potatomesh_base_url: Option<String>,
|
||||
|
||||
/// Override the PotatoMesh poll interval in seconds.
|
||||
#[arg(long)]
|
||||
/// Poll interval in seconds.
|
||||
#[arg(long, value_name = "SECS")]
|
||||
pub potatomesh_poll_interval_secs: Option<u64>,
|
||||
|
||||
/// Override the Matrix homeserver URL.
|
||||
#[arg(long)]
|
||||
/// Matrix homeserver base URL.
|
||||
#[arg(long, value_name = "URL")]
|
||||
pub matrix_homeserver: Option<String>,
|
||||
|
||||
/// Override the Matrix appservice access token.
|
||||
#[arg(long)]
|
||||
/// Matrix appservice access token.
|
||||
#[arg(long, value_name = "TOKEN")]
|
||||
pub matrix_as_token: Option<String>,
|
||||
|
||||
/// Override the Matrix server name.
|
||||
#[arg(long)]
|
||||
/// Path to a secret file containing the Matrix appservice access token.
|
||||
#[arg(long, value_name = "PATH")]
|
||||
pub matrix_as_token_file: Option<String>,
|
||||
/// Matrix homeserver token for inbound appservice requests.
|
||||
#[arg(long, value_name = "TOKEN")]
|
||||
pub matrix_hs_token: Option<String>,
|
||||
/// Path to a secret file containing the Matrix homeserver token.
|
||||
#[arg(long, value_name = "PATH")]
|
||||
pub matrix_hs_token_file: Option<String>,
|
||||
/// Matrix server name (domain).
|
||||
#[arg(long, value_name = "NAME")]
|
||||
pub matrix_server_name: Option<String>,
|
||||
|
||||
/// Override the Matrix room ID.
|
||||
#[arg(long)]
|
||||
/// Matrix room id to forward into.
|
||||
#[arg(long, value_name = "ROOM")]
|
||||
pub matrix_room_id: Option<String>,
|
||||
|
||||
/// Force container defaults on even if container detection is false.
|
||||
#[arg(long, conflicts_with = "no_container_defaults")]
|
||||
pub container_defaults: bool,
|
||||
|
||||
/// Disable container defaults even if a container is detected.
|
||||
#[arg(long, conflicts_with = "container_defaults")]
|
||||
pub no_container_defaults: bool,
|
||||
/// Force container defaults (overrides detection).
|
||||
#[arg(long, action = ArgAction::SetTrue)]
|
||||
pub container: bool,
|
||||
/// Disable container defaults (overrides detection).
|
||||
#[arg(long, action = ArgAction::SetTrue)]
|
||||
pub no_container: bool,
|
||||
/// Directory to search for default secret files.
|
||||
#[arg(long, value_name = "PATH")]
|
||||
pub secrets_dir: Option<String>,
|
||||
}
|
||||
|
||||
impl Cli {
|
||||
/// Convert CLI flags to bootstrap overrides for config loading.
|
||||
pub fn into_overrides(self) -> BootstrapOverrides {
|
||||
let container_defaults = if self.container_defaults {
|
||||
Some(true)
|
||||
} else if self.no_container_defaults {
|
||||
Some(false)
|
||||
} else {
|
||||
None
|
||||
};
|
||||
|
||||
BootstrapOverrides {
|
||||
config_path: self.config_path,
|
||||
container_defaults,
|
||||
values: ConfigOverrides {
|
||||
potatomesh: PotatomeshOverrides {
|
||||
base_url: self.potatomesh_base_url,
|
||||
poll_interval_secs: self.potatomesh_poll_interval_secs,
|
||||
},
|
||||
matrix: MatrixOverrides {
|
||||
homeserver: self.matrix_homeserver,
|
||||
as_token: self.matrix_as_token,
|
||||
server_name: self.matrix_server_name,
|
||||
room_id: self.matrix_room_id,
|
||||
},
|
||||
state: StateOverrides {
|
||||
state_file: self.state_file,
|
||||
},
|
||||
/// Convert CLI args into configuration inputs.
|
||||
#[cfg(not(test))]
|
||||
pub fn to_inputs(&self) -> ConfigInputs {
|
||||
ConfigInputs {
|
||||
config_path: self.config.clone(),
|
||||
secrets_dir: self.secrets_dir.clone(),
|
||||
container_override: resolve_container_override(self.container, self.no_container),
|
||||
container_hint: None,
|
||||
overrides: ConfigOverrides {
|
||||
potatomesh_base_url: self.potatomesh_base_url.clone(),
|
||||
potatomesh_poll_interval_secs: self.potatomesh_poll_interval_secs,
|
||||
matrix_homeserver: self.matrix_homeserver.clone(),
|
||||
matrix_as_token: self.matrix_as_token.clone(),
|
||||
matrix_as_token_file: self.matrix_as_token_file.clone(),
|
||||
matrix_hs_token: self.matrix_hs_token.clone(),
|
||||
matrix_hs_token_file: self.matrix_hs_token_file.clone(),
|
||||
matrix_server_name: self.matrix_server_name.clone(),
|
||||
matrix_room_id: self.matrix_room_id.clone(),
|
||||
state_file: self.state_file.clone(),
|
||||
},
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn cli_overrides_map_to_config() {
|
||||
let cli = Cli::parse_from([
|
||||
"bridge",
|
||||
"--config",
|
||||
"/tmp/Config.toml",
|
||||
"--state-file",
|
||||
"/tmp/state.json",
|
||||
"--potatomesh-base-url",
|
||||
"https://potato.example/",
|
||||
"--potatomesh-poll-interval-secs",
|
||||
"15",
|
||||
"--matrix-homeserver",
|
||||
"https://matrix.example.org",
|
||||
"--matrix-as-token",
|
||||
"token",
|
||||
"--matrix-server-name",
|
||||
"example.org",
|
||||
"--matrix-room-id",
|
||||
"!room:example.org",
|
||||
"--container-defaults",
|
||||
]);
|
||||
|
||||
let overrides = cli.into_overrides();
|
||||
assert_eq!(overrides.config_path.as_deref(), Some("/tmp/Config.toml"));
|
||||
assert_eq!(overrides.container_defaults, Some(true));
|
||||
assert_eq!(
|
||||
overrides.values.potatomesh.base_url.as_deref(),
|
||||
Some("https://potato.example/")
|
||||
);
|
||||
assert_eq!(overrides.values.potatomesh.poll_interval_secs, Some(15));
|
||||
assert_eq!(
|
||||
overrides.values.matrix.homeserver.as_deref(),
|
||||
Some("https://matrix.example.org")
|
||||
);
|
||||
assert_eq!(overrides.values.matrix.as_token.as_deref(), Some("token"));
|
||||
assert_eq!(
|
||||
overrides.values.matrix.server_name.as_deref(),
|
||||
Some("example.org")
|
||||
);
|
||||
assert_eq!(
|
||||
overrides.values.matrix.room_id.as_deref(),
|
||||
Some("!room:example.org")
|
||||
);
|
||||
assert_eq!(
|
||||
overrides.values.state.state_file.as_deref(),
|
||||
Some("/tmp/state.json")
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn cli_can_disable_container_defaults() {
|
||||
let cli = Cli::parse_from(["bridge", "--no-container-defaults"]);
|
||||
let overrides = cli.into_overrides();
|
||||
assert_eq!(overrides.container_defaults, Some(false));
|
||||
/// Resolve container override flags into an optional boolean.
|
||||
#[cfg(not(test))]
|
||||
fn resolve_container_override(container: bool, no_container: bool) -> Option<bool> {
|
||||
match (container, no_container) {
|
||||
(true, false) => Some(true),
|
||||
(false, true) => Some(false),
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
|
||||
+764
-641
File diff suppressed because it is too large
Load Diff
+116
-41
@@ -15,26 +15,26 @@
|
||||
mod cli;
|
||||
mod config;
|
||||
mod matrix;
|
||||
mod matrix_server;
|
||||
mod potatomesh;
|
||||
|
||||
use std::{fs, path::Path};
|
||||
use std::{fs, net::SocketAddr, path::Path};
|
||||
|
||||
use anyhow::Result;
|
||||
#[cfg(not(test))]
|
||||
use clap::Parser;
|
||||
use tokio::time::{sleep, Duration};
|
||||
use tokio::time::Duration;
|
||||
use tracing::{error, info};
|
||||
|
||||
#[cfg(not(test))]
|
||||
use crate::cli::Cli;
|
||||
#[cfg(not(test))]
|
||||
use crate::config::Config;
|
||||
use crate::matrix::MatrixAppserviceClient;
|
||||
use crate::matrix_server::run_synapse_listener;
|
||||
use crate::potatomesh::{FetchParams, PotatoClient, PotatoMessage, PotatoNode};
|
||||
|
||||
fn format_runtime_context(context: &config::RuntimeContext) -> String {
|
||||
format!(
|
||||
"Runtime context: in_container={} container_defaults={} config_path={} secrets_dir={:?}",
|
||||
context.in_container, context.container_defaults, context.config_path, context.secrets_dir
|
||||
)
|
||||
}
|
||||
#[cfg(not(test))]
|
||||
use tokio::time::sleep;
|
||||
|
||||
#[derive(Debug, serde::Serialize, serde::Deserialize, Default)]
|
||||
pub struct BridgeState {
|
||||
@@ -124,6 +124,31 @@ fn build_fetch_params(state: &BridgeState) -> FetchParams {
|
||||
}
|
||||
}
|
||||
|
||||
/// Persist the bridge state and log any write errors.
|
||||
fn persist_state(state: &BridgeState, state_path: &str) {
|
||||
if let Err(e) = state.save(state_path) {
|
||||
error!("Error saving state: {:?}", e);
|
||||
}
|
||||
}
|
||||
|
||||
/// Emit an info log for the latest bridge state snapshot.
|
||||
fn log_state_update(state: &BridgeState) {
|
||||
info!("Updated state: {:?}", state);
|
||||
}
|
||||
|
||||
/// Emit a sanitized config log without sensitive tokens.
|
||||
#[cfg(not(test))]
|
||||
fn log_config(cfg: &Config) {
|
||||
info!(
|
||||
potatomesh_base_url = cfg.potatomesh.base_url.as_str(),
|
||||
matrix_homeserver = cfg.matrix.homeserver.as_str(),
|
||||
matrix_server_name = cfg.matrix.server_name.as_str(),
|
||||
matrix_room_id = cfg.matrix.room_id.as_str(),
|
||||
state_file = cfg.state.state_file.as_str(),
|
||||
"Loaded config"
|
||||
);
|
||||
}
|
||||
|
||||
async fn poll_once(
|
||||
potato: &PotatoClient,
|
||||
matrix: &MatrixAppserviceClient,
|
||||
@@ -146,9 +171,8 @@ async fn poll_once(
|
||||
if let Some(port) = &msg.portnum {
|
||||
if port != "TEXT_MESSAGE_APP" {
|
||||
state.update_with(msg);
|
||||
if let Err(e) = state.save(state_path) {
|
||||
error!("Error saving state: {:?}", e);
|
||||
}
|
||||
log_state_update(state);
|
||||
persist_state(state, state_path);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
@@ -158,11 +182,8 @@ async fn poll_once(
|
||||
continue;
|
||||
}
|
||||
|
||||
state.update_with(msg);
|
||||
// persist after each processed message
|
||||
if let Err(e) = state.save(state_path) {
|
||||
error!("Error saving state: {:?}", e);
|
||||
}
|
||||
persist_state(state, state_path);
|
||||
}
|
||||
}
|
||||
Err(e) => {
|
||||
@@ -171,6 +192,15 @@ async fn poll_once(
|
||||
}
|
||||
}
|
||||
|
||||
fn spawn_synapse_listener(addr: SocketAddr, token: String) -> tokio::task::JoinHandle<()> {
|
||||
tokio::spawn(async move {
|
||||
if let Err(e) = run_synapse_listener(addr, token).await {
|
||||
error!("Synapse listener failed: {:?}", e);
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
#[cfg(not(test))]
|
||||
#[tokio::main]
|
||||
async fn main() -> Result<()> {
|
||||
// Logging: RUST_LOG=info,bridge=debug,reqwest=warn ...
|
||||
@@ -183,11 +213,8 @@ async fn main() -> Result<()> {
|
||||
.init();
|
||||
|
||||
let cli = Cli::parse();
|
||||
let bootstrap = Config::load_with_overrides(cli.into_overrides())?;
|
||||
info!("Loaded config: {:?}", bootstrap.config);
|
||||
info!("{}", format_runtime_context(&bootstrap.context));
|
||||
|
||||
let cfg = bootstrap.config;
|
||||
let cfg = config::load(cli.to_inputs())?;
|
||||
log_config(&cfg);
|
||||
|
||||
let http = reqwest::Client::builder().build()?;
|
||||
let potato = PotatoClient::new(http.clone(), cfg.potatomesh.clone());
|
||||
@@ -195,6 +222,10 @@ async fn main() -> Result<()> {
|
||||
let matrix = MatrixAppserviceClient::new(http.clone(), cfg.matrix.clone());
|
||||
matrix.health_check().await?;
|
||||
|
||||
let synapse_addr = SocketAddr::from(([0, 0, 0, 0], 41448));
|
||||
let synapse_token = cfg.matrix.hs_token.clone();
|
||||
let _synapse_handle = spawn_synapse_listener(synapse_addr, synapse_token);
|
||||
|
||||
let state_path = &cfg.state.state_file;
|
||||
let mut state = BridgeState::load(state_path)?;
|
||||
info!("Loaded state: {:?}", state);
|
||||
@@ -238,7 +269,9 @@ async fn handle_message(
|
||||
.send_formatted_message_as(&user_id, &body, &formatted_body)
|
||||
.await?;
|
||||
|
||||
info!("Bridged message: {:?}", msg);
|
||||
state.update_with(msg);
|
||||
log_state_update(state);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
@@ -552,6 +585,57 @@ mod tests {
|
||||
assert_eq!(params.since, None);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn log_state_update_emits_info() {
|
||||
let state = BridgeState::default();
|
||||
log_state_update(&state);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn persist_state_writes_file() {
|
||||
let tmp_dir = tempfile::tempdir().unwrap();
|
||||
let file_path = tmp_dir.path().join("state.json");
|
||||
let path_str = file_path.to_str().unwrap();
|
||||
|
||||
let state = BridgeState {
|
||||
last_message_id: Some(42),
|
||||
last_rx_time: Some(123),
|
||||
last_rx_time_ids: vec![42],
|
||||
last_checked_at: None,
|
||||
};
|
||||
|
||||
persist_state(&state, path_str);
|
||||
|
||||
let loaded = BridgeState::load(path_str).unwrap();
|
||||
assert_eq!(loaded.last_message_id, Some(42));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn persist_state_logs_on_error() {
|
||||
let tmp_dir = tempfile::tempdir().unwrap();
|
||||
let dir_path = tmp_dir.path().to_str().unwrap();
|
||||
let state = BridgeState::default();
|
||||
|
||||
// Writing to a directory path should trigger the error branch.
|
||||
persist_state(&state, dir_path);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn spawn_synapse_listener_starts_task() {
|
||||
let addr = SocketAddr::from(([127, 0, 0, 1], 0));
|
||||
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
|
||||
tokio::time::sleep(Duration::from_millis(10)).await;
|
||||
handle.abort();
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn spawn_synapse_listener_logs_error_on_bind_failure() {
|
||||
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
|
||||
let addr = listener.local_addr().unwrap();
|
||||
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
|
||||
let _ = handle.await;
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn poll_once_leaves_state_unchanged_without_messages() {
|
||||
let tmp_dir = tempfile::tempdir().unwrap();
|
||||
@@ -575,6 +659,7 @@ mod tests {
|
||||
let matrix_cfg = MatrixConfig {
|
||||
homeserver: server.url(),
|
||||
as_token: "AS_TOKEN".to_string(),
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
server_name: "example.org".to_string(),
|
||||
room_id: "!roomid:example.org".to_string(),
|
||||
};
|
||||
@@ -624,6 +709,7 @@ mod tests {
|
||||
let matrix_cfg = MatrixConfig {
|
||||
homeserver: server.url(),
|
||||
as_token: "AS_TOKEN".to_string(),
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
server_name: "example.org".to_string(),
|
||||
room_id: "!roomid:example.org".to_string(),
|
||||
};
|
||||
@@ -653,6 +739,7 @@ mod tests {
|
||||
let matrix_cfg = MatrixConfig {
|
||||
homeserver: server.url(),
|
||||
as_token: "AS_TOKEN".to_string(),
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
server_name: "example.org".to_string(),
|
||||
room_id: "!roomid:example.org".to_string(),
|
||||
};
|
||||
@@ -672,7 +759,8 @@ mod tests {
|
||||
|
||||
let mock_register = server
|
||||
.mock("POST", "/_matrix/client/v3/register")
|
||||
.match_query("kind=user&access_token=AS_TOKEN")
|
||||
.match_query("kind=user")
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.with_status(200)
|
||||
.create();
|
||||
|
||||
@@ -681,7 +769,8 @@ mod tests {
|
||||
"POST",
|
||||
format!("/_matrix/client/v3/rooms/{}/join", encoded_room).as_str(),
|
||||
)
|
||||
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
|
||||
.match_query(format!("user_id={}", encoded_user).as_str())
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.with_status(200)
|
||||
.create();
|
||||
|
||||
@@ -690,7 +779,8 @@ mod tests {
|
||||
"PUT",
|
||||
format!("/_matrix/client/v3/profile/{}/displayname", encoded_user).as_str(),
|
||||
)
|
||||
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
|
||||
.match_query(format!("user_id={}", encoded_user).as_str())
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
|
||||
"displayname": "Test Node (TN)"
|
||||
})))
|
||||
@@ -712,7 +802,8 @@ mod tests {
|
||||
)
|
||||
.as_str(),
|
||||
)
|
||||
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
|
||||
.match_query(format!("user_id={}", encoded_user).as_str())
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
|
||||
"msgtype": "m.text",
|
||||
"body": "`[868][MF][TEST]` Ping",
|
||||
@@ -737,20 +828,4 @@ mod tests {
|
||||
|
||||
assert_eq!(state.last_message_id, Some(100));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn format_runtime_context_includes_flags() {
|
||||
let context = config::RuntimeContext {
|
||||
in_container: true,
|
||||
container_defaults: false,
|
||||
config_path: "/app/Config.toml".to_string(),
|
||||
secrets_dir: Some(std::path::PathBuf::from("/run/secrets")),
|
||||
};
|
||||
|
||||
let rendered = format_runtime_context(&context);
|
||||
assert!(rendered.contains("in_container=true"));
|
||||
assert!(rendered.contains("container_defaults=false"));
|
||||
assert!(rendered.contains("/app/Config.toml"));
|
||||
assert!(rendered.contains("/run/secrets"));
|
||||
}
|
||||
}
|
||||
|
||||
+51
-44
@@ -66,10 +66,6 @@ impl MatrixAppserviceClient {
|
||||
format!("@{}:{}", localpart, self.cfg.server_name)
|
||||
}
|
||||
|
||||
fn auth_query(&self) -> String {
|
||||
format!("access_token={}", urlencoding::encode(&self.cfg.as_token))
|
||||
}
|
||||
|
||||
/// Ensure the puppet user exists (register via appservice registration).
|
||||
pub async fn ensure_user_registered(&self, localpart: &str) -> anyhow::Result<()> {
|
||||
#[derive(Serialize)]
|
||||
@@ -80,9 +76,8 @@ impl MatrixAppserviceClient {
|
||||
}
|
||||
|
||||
let url = format!(
|
||||
"{}/_matrix/client/v3/register?kind=user&{}",
|
||||
self.cfg.homeserver,
|
||||
self.auth_query()
|
||||
"{}/_matrix/client/v3/register?kind=user",
|
||||
self.cfg.homeserver
|
||||
);
|
||||
|
||||
let body = RegisterReq {
|
||||
@@ -90,7 +85,13 @@ impl MatrixAppserviceClient {
|
||||
username: localpart,
|
||||
};
|
||||
|
||||
let resp = self.http.post(&url).json(&body).send().await?;
|
||||
let resp = self
|
||||
.http
|
||||
.post(&url)
|
||||
.bearer_auth(&self.cfg.as_token)
|
||||
.json(&body)
|
||||
.send()
|
||||
.await?;
|
||||
if resp.status().is_success() {
|
||||
Ok(())
|
||||
} else {
|
||||
@@ -109,18 +110,21 @@ impl MatrixAppserviceClient {
|
||||
|
||||
let encoded_user = urlencoding::encode(user_id);
|
||||
let url = format!(
|
||||
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}&{}",
|
||||
self.cfg.homeserver,
|
||||
encoded_user,
|
||||
encoded_user,
|
||||
self.auth_query()
|
||||
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}",
|
||||
self.cfg.homeserver, encoded_user, encoded_user
|
||||
);
|
||||
|
||||
let body = DisplayNameReq {
|
||||
displayname: display_name,
|
||||
};
|
||||
|
||||
let resp = self.http.put(&url).json(&body).send().await?;
|
||||
let resp = self
|
||||
.http
|
||||
.put(&url)
|
||||
.bearer_auth(&self.cfg.as_token)
|
||||
.json(&body)
|
||||
.send()
|
||||
.await?;
|
||||
if resp.status().is_success() {
|
||||
Ok(())
|
||||
} else {
|
||||
@@ -142,14 +146,17 @@ impl MatrixAppserviceClient {
|
||||
let encoded_room = urlencoding::encode(&self.cfg.room_id);
|
||||
let encoded_user = urlencoding::encode(user_id);
|
||||
let url = format!(
|
||||
"{}/_matrix/client/v3/rooms/{}/join?user_id={}&{}",
|
||||
self.cfg.homeserver,
|
||||
encoded_room,
|
||||
encoded_user,
|
||||
self.auth_query()
|
||||
"{}/_matrix/client/v3/rooms/{}/join?user_id={}",
|
||||
self.cfg.homeserver, encoded_room, encoded_user
|
||||
);
|
||||
|
||||
let resp = self.http.post(&url).json(&JoinReq {}).send().await?;
|
||||
let resp = self
|
||||
.http
|
||||
.post(&url)
|
||||
.bearer_auth(&self.cfg.as_token)
|
||||
.json(&JoinReq {})
|
||||
.send()
|
||||
.await?;
|
||||
if resp.status().is_success() {
|
||||
Ok(())
|
||||
} else {
|
||||
@@ -185,12 +192,8 @@ impl MatrixAppserviceClient {
|
||||
let encoded_user = urlencoding::encode(user_id);
|
||||
|
||||
let url = format!(
|
||||
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}&{}",
|
||||
self.cfg.homeserver,
|
||||
encoded_room,
|
||||
txn_id,
|
||||
encoded_user,
|
||||
self.auth_query()
|
||||
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}",
|
||||
self.cfg.homeserver, encoded_room, txn_id, encoded_user
|
||||
);
|
||||
|
||||
let content = MsgContent {
|
||||
@@ -200,7 +203,13 @@ impl MatrixAppserviceClient {
|
||||
formatted_body,
|
||||
};
|
||||
|
||||
let resp = self.http.put(&url).json(&content).send().await?;
|
||||
let resp = self
|
||||
.http
|
||||
.put(&url)
|
||||
.bearer_auth(&self.cfg.as_token)
|
||||
.json(&content)
|
||||
.send()
|
||||
.await?;
|
||||
|
||||
if !resp.status().is_success() {
|
||||
let status = resp.status();
|
||||
@@ -232,6 +241,7 @@ mod tests {
|
||||
MatrixConfig {
|
||||
homeserver: "https://matrix.example.org".to_string(),
|
||||
as_token: "AS_TOKEN".to_string(),
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
server_name: "example.org".to_string(),
|
||||
room_id: "!roomid:example.org".to_string(),
|
||||
}
|
||||
@@ -292,16 +302,6 @@ mod tests {
|
||||
assert!(result.is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn auth_query_contains_access_token() {
|
||||
let http = reqwest::Client::builder().build().unwrap();
|
||||
let client = MatrixAppserviceClient::new(http, dummy_cfg());
|
||||
|
||||
let q = client.auth_query();
|
||||
assert!(q.starts_with("access_token="));
|
||||
assert!(q.contains("AS_TOKEN"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_new_matrix_client() {
|
||||
let http_client = reqwest::Client::new();
|
||||
@@ -317,7 +317,8 @@ mod tests {
|
||||
let mut server = mockito::Server::new_async().await;
|
||||
let mock = server
|
||||
.mock("POST", "/_matrix/client/v3/register")
|
||||
.match_query("kind=user&access_token=AS_TOKEN")
|
||||
.match_query("kind=user")
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.with_status(200)
|
||||
.create();
|
||||
|
||||
@@ -335,7 +336,8 @@ mod tests {
|
||||
let mut server = mockito::Server::new_async().await;
|
||||
let mock = server
|
||||
.mock("POST", "/_matrix/client/v3/register")
|
||||
.match_query("kind=user&access_token=AS_TOKEN")
|
||||
.match_query("kind=user")
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.with_status(400) // M_USER_IN_USE
|
||||
.create();
|
||||
|
||||
@@ -353,12 +355,13 @@ mod tests {
|
||||
let mut server = mockito::Server::new_async().await;
|
||||
let user_id = "@test:example.org";
|
||||
let encoded_user = urlencoding::encode(user_id);
|
||||
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
|
||||
let query = format!("user_id={}", encoded_user);
|
||||
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);
|
||||
|
||||
let mock = server
|
||||
.mock("PUT", path.as_str())
|
||||
.match_query(query.as_str())
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.with_status(200)
|
||||
.create();
|
||||
|
||||
@@ -376,12 +379,13 @@ mod tests {
|
||||
let mut server = mockito::Server::new_async().await;
|
||||
let user_id = "@test:example.org";
|
||||
let encoded_user = urlencoding::encode(user_id);
|
||||
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
|
||||
let query = format!("user_id={}", encoded_user);
|
||||
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);
|
||||
|
||||
let mock = server
|
||||
.mock("PUT", path.as_str())
|
||||
.match_query(query.as_str())
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.with_status(500)
|
||||
.create();
|
||||
|
||||
@@ -401,12 +405,13 @@ mod tests {
|
||||
let room_id = "!roomid:example.org";
|
||||
let encoded_user = urlencoding::encode(user_id);
|
||||
let encoded_room = urlencoding::encode(room_id);
|
||||
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
|
||||
let query = format!("user_id={}", encoded_user);
|
||||
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);
|
||||
|
||||
let mock = server
|
||||
.mock("POST", path.as_str())
|
||||
.match_query(query.as_str())
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.with_status(200)
|
||||
.create();
|
||||
|
||||
@@ -427,12 +432,13 @@ mod tests {
|
||||
let room_id = "!roomid:example.org";
|
||||
let encoded_user = urlencoding::encode(user_id);
|
||||
let encoded_room = urlencoding::encode(room_id);
|
||||
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
|
||||
let query = format!("user_id={}", encoded_user);
|
||||
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);
|
||||
|
||||
let mock = server
|
||||
.mock("POST", path.as_str())
|
||||
.match_query(query.as_str())
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.with_status(403)
|
||||
.create();
|
||||
|
||||
@@ -461,7 +467,7 @@ mod tests {
|
||||
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
|
||||
};
|
||||
let txn_id = client.txn_counter.load(Ordering::SeqCst);
|
||||
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
|
||||
let query = format!("user_id={}", encoded_user);
|
||||
let path = format!(
|
||||
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
|
||||
encoded_room, txn_id
|
||||
@@ -470,6 +476,7 @@ mod tests {
|
||||
let mock = server
|
||||
.mock("PUT", path.as_str())
|
||||
.match_query(query.as_str())
|
||||
.match_header("authorization", "Bearer AS_TOKEN")
|
||||
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
|
||||
"msgtype": "m.text",
|
||||
"body": "`[meta]` hello",
|
||||
|
||||
@@ -0,0 +1,289 @@
|
||||
// Copyright © 2025-26 l5yth & contributors
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
use axum::{
|
||||
extract::{Path, Query, State},
|
||||
http::{header::AUTHORIZATION, HeaderMap, StatusCode},
|
||||
response::IntoResponse,
|
||||
routing::put,
|
||||
Json, Router,
|
||||
};
|
||||
use serde_json::Value;
|
||||
use std::net::SocketAddr;
|
||||
use tracing::info;
|
||||
|
||||
#[derive(Clone)]
|
||||
struct SynapseState {
|
||||
hs_token: String,
|
||||
}
|
||||
|
||||
#[derive(serde::Deserialize)]
|
||||
struct AuthQuery {
|
||||
access_token: Option<String>,
|
||||
}
|
||||
|
||||
/// Pull access tokens from supported auth headers.
|
||||
fn extract_access_token(headers: &HeaderMap) -> Option<String> {
|
||||
if let Some(value) = headers.get(AUTHORIZATION) {
|
||||
if let Ok(raw) = value.to_str() {
|
||||
if let Some(token) = raw.strip_prefix("Bearer ") {
|
||||
return Some(token.trim().to_string());
|
||||
}
|
||||
if let Some(token) = raw.strip_prefix("bearer ") {
|
||||
return Some(token.trim().to_string());
|
||||
}
|
||||
}
|
||||
}
|
||||
if let Some(value) = headers.get("x-access-token") {
|
||||
if let Ok(raw) = value.to_str() {
|
||||
return Some(raw.trim().to_string());
|
||||
}
|
||||
}
|
||||
None
|
||||
}
|
||||
|
||||
/// Compare tokens in constant time to avoid timing leakage.
|
||||
fn constant_time_eq(a: &str, b: &str) -> bool {
|
||||
let a_bytes = a.as_bytes();
|
||||
let b_bytes = b.as_bytes();
|
||||
let max_len = std::cmp::max(a_bytes.len(), b_bytes.len());
|
||||
let mut diff = (a_bytes.len() ^ b_bytes.len()) as u8;
|
||||
|
||||
for idx in 0..max_len {
|
||||
let left = *a_bytes.get(idx).unwrap_or(&0);
|
||||
let right = *b_bytes.get(idx).unwrap_or(&0);
|
||||
diff |= left ^ right;
|
||||
}
|
||||
|
||||
diff == 0
|
||||
}
|
||||
|
||||
/// Captures inbound Synapse transaction payloads for logging.
|
||||
#[derive(Debug)]
|
||||
struct SynapseResponse {
|
||||
txn_id: String,
|
||||
payload: Value,
|
||||
}
|
||||
|
||||
/// Build the router that handles Synapse appservice transactions.
|
||||
fn build_router(state: SynapseState) -> Router {
|
||||
Router::new()
|
||||
.route(
|
||||
"/_matrix/appservice/v1/transactions/:txn_id",
|
||||
put(handle_transaction),
|
||||
)
|
||||
.with_state(state)
|
||||
}
|
||||
|
||||
/// Handle inbound transaction callbacks from Synapse.
|
||||
async fn handle_transaction(
|
||||
Path(txn_id): Path<String>,
|
||||
State(state): State<SynapseState>,
|
||||
Query(auth): Query<AuthQuery>,
|
||||
headers: HeaderMap,
|
||||
Json(payload): Json<Value>,
|
||||
) -> impl IntoResponse {
|
||||
let header_token = extract_access_token(&headers);
|
||||
let token_matches = if let Some(token) = header_token.as_deref() {
|
||||
constant_time_eq(token, &state.hs_token)
|
||||
} else {
|
||||
auth.access_token
|
||||
.as_deref()
|
||||
.is_some_and(|token| constant_time_eq(token, &state.hs_token))
|
||||
};
|
||||
if !token_matches {
|
||||
return (StatusCode::UNAUTHORIZED, Json(serde_json::json!({})));
|
||||
}
|
||||
let response = SynapseResponse { txn_id, payload };
|
||||
info!(
|
||||
"Status response: SynapseResponse {{ txn_id: {}, payload: {:?} }}",
|
||||
response.txn_id, response.payload
|
||||
);
|
||||
(StatusCode::OK, Json(serde_json::json!({})))
|
||||
}
|
||||
|
||||
/// Listen for Synapse callbacks on the configured address.
|
||||
pub async fn run_synapse_listener(addr: SocketAddr, hs_token: String) -> anyhow::Result<()> {
|
||||
let app = build_router(SynapseState { hs_token });
|
||||
let listener = tokio::net::TcpListener::bind(addr).await?;
|
||||
info!("Synapse listener bound on {}", addr);
|
||||
axum::serve(listener, app).await?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use axum::body::Body;
|
||||
use axum::http::Request;
|
||||
use tokio::time::{sleep, Duration};
|
||||
use tower::ServiceExt;
|
||||
|
||||
#[tokio::test]
|
||||
async fn transactions_endpoint_accepts_payloads() {
|
||||
let app = build_router(SynapseState {
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
});
|
||||
let payload = serde_json::json!({
|
||||
"events": [],
|
||||
"txn_id": "123"
|
||||
});
|
||||
|
||||
let response = app
|
||||
.oneshot(
|
||||
Request::builder()
|
||||
.method("PUT")
|
||||
.uri("/_matrix/appservice/v1/transactions/123")
|
||||
.header("authorization", "Bearer HS_TOKEN")
|
||||
.header("content-type", "application/json")
|
||||
.body(Body::from(payload.to_string()))
|
||||
.unwrap(),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(response.status(), StatusCode::OK);
|
||||
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
|
||||
.await
|
||||
.unwrap();
|
||||
assert_eq!(body.as_ref(), b"{}");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn transactions_endpoint_rejects_missing_token() {
|
||||
let app = build_router(SynapseState {
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
});
|
||||
let payload = serde_json::json!({
|
||||
"events": [],
|
||||
"txn_id": "123"
|
||||
});
|
||||
|
||||
let response = app
|
||||
.oneshot(
|
||||
Request::builder()
|
||||
.method("PUT")
|
||||
.uri("/_matrix/appservice/v1/transactions/123")
|
||||
.header("content-type", "application/json")
|
||||
.body(Body::from(payload.to_string()))
|
||||
.unwrap(),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
|
||||
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
|
||||
.await
|
||||
.unwrap();
|
||||
assert_eq!(body.as_ref(), b"{}");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn transactions_endpoint_rejects_wrong_token() {
|
||||
let app = build_router(SynapseState {
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
});
|
||||
let payload = serde_json::json!({
|
||||
"events": [],
|
||||
"txn_id": "123"
|
||||
});
|
||||
|
||||
let response = app
|
||||
.oneshot(
|
||||
Request::builder()
|
||||
.method("PUT")
|
||||
.uri("/_matrix/appservice/v1/transactions/123")
|
||||
.header("authorization", "Bearer NOPE")
|
||||
.header("content-type", "application/json")
|
||||
.body(Body::from(payload.to_string()))
|
||||
.unwrap(),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
|
||||
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
|
||||
.await
|
||||
.unwrap();
|
||||
assert_eq!(body.as_ref(), b"{}");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn transactions_endpoint_accepts_legacy_query_token() {
|
||||
let app = build_router(SynapseState {
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
});
|
||||
let payload = serde_json::json!({
|
||||
"events": [],
|
||||
"txn_id": "125"
|
||||
});
|
||||
|
||||
let response = app
|
||||
.oneshot(
|
||||
Request::builder()
|
||||
.method("PUT")
|
||||
.uri("/_matrix/appservice/v1/transactions/125?access_token=HS_TOKEN")
|
||||
.header("content-type", "application/json")
|
||||
.body(Body::from(payload.to_string()))
|
||||
.unwrap(),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(response.status(), StatusCode::OK);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn transactions_endpoint_accepts_x_access_token_header() {
|
||||
let app = build_router(SynapseState {
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
});
|
||||
let payload = serde_json::json!({
|
||||
"events": [],
|
||||
"txn_id": "126"
|
||||
});
|
||||
|
||||
let response = app
|
||||
.oneshot(
|
||||
Request::builder()
|
||||
.method("PUT")
|
||||
.uri("/_matrix/appservice/v1/transactions/126")
|
||||
.header("x-access-token", "HS_TOKEN")
|
||||
.header("content-type", "application/json")
|
||||
.body(Body::from(payload.to_string()))
|
||||
.unwrap(),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(response.status(), StatusCode::OK);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn run_synapse_listener_starts_and_can_abort() {
|
||||
let addr = SocketAddr::from(([127, 0, 0, 1], 0));
|
||||
let handle =
|
||||
tokio::spawn(async move { run_synapse_listener(addr, "HS_TOKEN".to_string()).await });
|
||||
sleep(Duration::from_millis(10)).await;
|
||||
handle.abort();
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn run_synapse_listener_returns_error_on_bind_failure() {
|
||||
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
|
||||
let addr = listener.local_addr().unwrap();
|
||||
let result = run_synapse_listener(addr, "HS_TOKEN".to_string()).await;
|
||||
assert!(result.is_err());
|
||||
}
|
||||
}
|
||||
@@ -19,6 +19,11 @@ use tokio::sync::RwLock;
|
||||
|
||||
use crate::config::PotatomeshConfig;
|
||||
|
||||
/// Protocol identifier sent as a query parameter to restrict API results to
|
||||
/// Meshtastic data only. Other protocols (e.g. MeshCore) are excluded until
|
||||
/// the clients are updated to support them.
|
||||
const PROTOCOL_FILTER: &str = "meshtastic";
|
||||
|
||||
#[allow(dead_code)]
|
||||
#[derive(Debug, Deserialize, Clone)]
|
||||
pub struct PotatoMessage {
|
||||
@@ -131,7 +136,10 @@ impl PotatoClient {
|
||||
}
|
||||
|
||||
pub async fn fetch_messages(&self, params: FetchParams) -> anyhow::Result<Vec<PotatoMessage>> {
|
||||
let mut req = self.http.get(self.messages_url());
|
||||
let mut req = self
|
||||
.http
|
||||
.get(self.messages_url())
|
||||
.query(&[("protocol", PROTOCOL_FILTER)]);
|
||||
if let Some(limit) = params.limit {
|
||||
req = req.query(&[("limit", limit)]);
|
||||
}
|
||||
@@ -336,7 +344,10 @@ mod tests {
|
||||
let mut server = mockito::Server::new_async().await;
|
||||
let mock = server
|
||||
.mock("GET", "/api/messages")
|
||||
.match_query(mockito::Matcher::Any) // allow optional query params
|
||||
.match_query(mockito::Matcher::UrlEncoded(
|
||||
"protocol".into(),
|
||||
"meshtastic".into(),
|
||||
))
|
||||
.with_status(200)
|
||||
.with_header("content-type", "application/json")
|
||||
.with_body(
|
||||
@@ -427,7 +438,10 @@ mod tests {
|
||||
let mut server = mockito::Server::new_async().await;
|
||||
let mock = server
|
||||
.mock("GET", "/api/messages")
|
||||
.match_query(mockito::Matcher::Any)
|
||||
.match_query(mockito::Matcher::UrlEncoded(
|
||||
"protocol".into(),
|
||||
PROTOCOL_FILTER.into(),
|
||||
))
|
||||
.with_status(500)
|
||||
.create();
|
||||
|
||||
@@ -448,7 +462,11 @@ mod tests {
|
||||
let mut server = mockito::Server::new_async().await;
|
||||
let mock = server
|
||||
.mock("GET", "/api/messages")
|
||||
.match_query("limit=10&since=123")
|
||||
.match_query(mockito::Matcher::AllOf(vec![
|
||||
mockito::Matcher::UrlEncoded("protocol".into(), PROTOCOL_FILTER.into()),
|
||||
mockito::Matcher::UrlEncoded("limit".into(), "10".into()),
|
||||
mockito::Matcher::UrlEncoded("since".into(), "123".into()),
|
||||
]))
|
||||
.with_status(200)
|
||||
.with_header("content-type", "application/json")
|
||||
.with_body("[]")
|
||||
|
||||
Binary file not shown.
|
After Width: | Height: | Size: 62 KiB |
@@ -0,0 +1,71 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
require "base64"
|
||||
require "meshtastic"
|
||||
require "openssl"
|
||||
|
||||
channel_name = "BerlinMesh"
|
||||
|
||||
# === Inputs from your packet ===
|
||||
cipher_b64 = "Q1R7tgI5yXzMXu/3"
|
||||
psk_b64 = "Nmh7EooP2Tsc+7pvPwXLcEDDuYhk+fBo2GLnbA1Y1sg="
|
||||
packet_id = 3_915_687_257
|
||||
from_id = "!9e95cf60"
|
||||
channel = 35
|
||||
|
||||
# === Decode key and ciphertext ===
|
||||
key = Base64.decode64(psk_b64) # 32 bytes -> AES-256
|
||||
ciphertext = Base64.decode64(cipher_b64)
|
||||
|
||||
# === Derive numeric node id from Meshtastic-style string ===
|
||||
hex_str = from_id.sub(/^!/, "") # "9e95cf60"
|
||||
from_node = hex_str.to_i(16) # 0x9e95cf60
|
||||
|
||||
# === Build nonce exactly like Meshtastic CryptoEngine ===
|
||||
# Little-endian 64-bit packet ID + little-endian 32-bit node ID + 4 zero bytes
|
||||
nonce = [packet_id].pack("Q<") # uint64, little-endian
|
||||
nonce += [from_node].pack("L<") # uint32, little-endian
|
||||
nonce += "\x00" * 4 # extraNonce == 0 for PSK channel msgs
|
||||
|
||||
raise "Nonce must be 16 bytes" unless nonce.bytesize == 16
|
||||
raise "Key must be 32 bytes" unless key.bytesize == 32
|
||||
|
||||
# === AES-256-CTR decrypt ===
|
||||
cipher = OpenSSL::Cipher.new("aes-256-ctr")
|
||||
cipher.decrypt
|
||||
cipher.key = key
|
||||
cipher.iv = nonce
|
||||
|
||||
plaintext = cipher.update(ciphertext) + cipher.final
|
||||
|
||||
# At this point `plaintext` is the raw Meshtastic protobuf payload
|
||||
plaintext = plaintext.bytes.pack("C*")
|
||||
data = Meshtastic::Data.decode(plaintext)
|
||||
msg = data.payload.dup.force_encoding("UTF-8")
|
||||
puts msg
|
||||
|
||||
# Gets channel number from name and psk
|
||||
def channel_hash(name, psk_b64)
|
||||
name_bytes = name.b # UTF-8 bytes
|
||||
psk_bytes = Base64.decode64(psk_b64)
|
||||
|
||||
hn = name_bytes.bytes.reduce(0) { |acc, b| acc ^ b } # XOR over name
|
||||
hp = psk_bytes.bytes.reduce(0) { |acc, b| acc ^ b } # XOR over PSK
|
||||
|
||||
(hn ^ hp) & 0xFF
|
||||
end
|
||||
|
||||
channel_h = channel_hash(channel_name, psk_b64)
|
||||
puts channel_h
|
||||
puts channel == channel_h
|
||||
@@ -0,0 +1,491 @@
|
||||
hash,name
|
||||
0,Mesh1
|
||||
1,DEMO
|
||||
1,Downlink1
|
||||
1,NightNet
|
||||
1,Sideband1
|
||||
2,CommsNet
|
||||
2,Mesh3
|
||||
2,PulseNet
|
||||
3,LightNet
|
||||
3,Mesh2
|
||||
3,WestStar
|
||||
3,WolfMesh
|
||||
4,Mesh5
|
||||
4,OPERATIONS
|
||||
4,Rescue1
|
||||
4,SignalFire
|
||||
5,Base2
|
||||
5,DeltaNet
|
||||
5,Mesh4
|
||||
5,MeshMunich
|
||||
6,Base1
|
||||
7,MeshTest
|
||||
7,Rescue2
|
||||
7,ZuluMesh
|
||||
8,CourierNet
|
||||
8,Fire2
|
||||
8,Grid2
|
||||
8,LongFast
|
||||
8,RescueTeam
|
||||
9,AlphaNet
|
||||
9,MeshGrid
|
||||
10,TestBerlin
|
||||
10,WaWi
|
||||
11,Fire1
|
||||
11,Grid1
|
||||
12,FoxNet
|
||||
12,MeshRuhr
|
||||
12,RadioNet
|
||||
13,Signal1
|
||||
13,Zone1
|
||||
14,BetaBerlin
|
||||
14,Signal2
|
||||
14,TangoNet
|
||||
14,Zone2
|
||||
15,BerlinMesh
|
||||
15,LongSlow
|
||||
15,MeshBerlin
|
||||
15,Zone3
|
||||
16,CQ
|
||||
16,EchoMesh
|
||||
16,Freq2
|
||||
16,KiloMesh
|
||||
16,Node2
|
||||
16,PhoenixNet
|
||||
16,Repeater2
|
||||
17,FoxtrotNet
|
||||
17,Node3
|
||||
18,LoRa
|
||||
19,Freq1
|
||||
19,HarmonyNet
|
||||
19,Node1
|
||||
19,RavenNet
|
||||
19,Repeater1
|
||||
20,NomadNet
|
||||
20,SENSOR
|
||||
20,TEST
|
||||
20,test
|
||||
21,BravoNet
|
||||
21,EastStar
|
||||
21,MeshCollective
|
||||
21,SunNet
|
||||
22,Node4
|
||||
22,Uplink1
|
||||
23,EagleNet
|
||||
23,MeshHessen
|
||||
23,Node5
|
||||
24,MediumSlow
|
||||
24,Router1
|
||||
25,Checkpoint1
|
||||
25,HAMNet
|
||||
26,Checkpoint2
|
||||
26,GhostNet
|
||||
27,HQ
|
||||
27,Router2
|
||||
31,DemoBerlin
|
||||
31,FieldNet
|
||||
31,MediumFast
|
||||
32,Clinic
|
||||
32,Convoy
|
||||
32,Daylight
|
||||
32,Town
|
||||
33,Callisto
|
||||
33,CQ1
|
||||
33,Daybreak
|
||||
33,Demo
|
||||
33,East
|
||||
33,LoRaMesh
|
||||
33,Mist
|
||||
34,CQ2
|
||||
34,Freq
|
||||
34,Gold
|
||||
34,Link
|
||||
34,Repeater
|
||||
35,Aquila
|
||||
35,Doctor
|
||||
35,Echo
|
||||
35,Kilo
|
||||
35,Public
|
||||
35,Wyvern
|
||||
36,District
|
||||
36,Hessen
|
||||
36,Io
|
||||
36,LoRaTest
|
||||
36,Operations
|
||||
36,Shadow
|
||||
36,Unit
|
||||
37,Campfire
|
||||
37,City
|
||||
37,Outsider
|
||||
37,Sync
|
||||
38,Beacon
|
||||
38,Collective
|
||||
38,Harbor
|
||||
38,Lion
|
||||
38,Meteor
|
||||
39,Firebird
|
||||
39,Fireteam
|
||||
39,Quasar
|
||||
39,Snow
|
||||
39,Universe
|
||||
39,Uplink
|
||||
40,Checkpoint
|
||||
40,Galaxy
|
||||
40,Jaguar
|
||||
40,Sunset
|
||||
40,Zeta
|
||||
41,Hinterland
|
||||
41,HQ2
|
||||
41,Main
|
||||
41,Meshtastic
|
||||
41,Router
|
||||
41,Valley
|
||||
41,Wander
|
||||
41,Wolfpack
|
||||
42,HQ1
|
||||
42,Lizard
|
||||
42,Packet
|
||||
42,Sahara
|
||||
42,Tunnel
|
||||
43,Anaconda
|
||||
43,Basalt
|
||||
43,Blackout
|
||||
43,Crow
|
||||
43,Dusk
|
||||
43,Falcon
|
||||
43,Lima
|
||||
43,Müggelberg
|
||||
44,Arctic
|
||||
44,Backup
|
||||
44,Bronze
|
||||
44,Corvus
|
||||
44,Cosmos
|
||||
44,LoRaBerlin
|
||||
44,Neukölln
|
||||
44,Safari
|
||||
45,Breeze
|
||||
45,Burrow
|
||||
45,Gale
|
||||
45,Saturn
|
||||
46,Border
|
||||
46,Nest
|
||||
47,Borealis
|
||||
47,Mars
|
||||
47,Path
|
||||
47,Ranger
|
||||
48,Beat
|
||||
48,Berg
|
||||
48,Beta
|
||||
48,Downlink
|
||||
48,Hive
|
||||
48,Rhythm
|
||||
48,Saxony
|
||||
48,Sideband
|
||||
48,Wolf
|
||||
49,Asteroid
|
||||
49,Carbon
|
||||
49,Mesh
|
||||
50,Blizzard
|
||||
50,Runner
|
||||
51,Callsign
|
||||
51,Carpet
|
||||
51,Desert
|
||||
51,Dragon
|
||||
51,Friedrichshain
|
||||
51,Help
|
||||
51,Nebula
|
||||
51,Safe
|
||||
52,Amazon
|
||||
52,Fireline
|
||||
52,Haze
|
||||
52,LoRaHessen
|
||||
52,Platinum
|
||||
52,Sensor
|
||||
52,Test
|
||||
52,Zulu
|
||||
53,Nord
|
||||
53,Rescue
|
||||
53,Secure
|
||||
53,Silver
|
||||
54,Bear
|
||||
54,Hospital
|
||||
54,Munich
|
||||
54,Python
|
||||
54,Rain
|
||||
54,Wind
|
||||
54,Wolves
|
||||
55,Base
|
||||
55,Bolt
|
||||
55,Hawk
|
||||
55,Mirage
|
||||
55,Nightwatch
|
||||
55,Obsidian
|
||||
55,Rock
|
||||
55,Victor
|
||||
55,West
|
||||
56,Aurora
|
||||
56,Dune
|
||||
56,Iron
|
||||
56,Lava
|
||||
56,Nomads
|
||||
57,Copper
|
||||
57,Core
|
||||
57,Spectrum
|
||||
57,Summit
|
||||
58,Colony
|
||||
58,Fire
|
||||
58,Ganymede
|
||||
58,Grid
|
||||
58,Kraken
|
||||
58,Road
|
||||
58,Solstice
|
||||
58,Tundra
|
||||
59,911
|
||||
59,Forest
|
||||
59,Pack
|
||||
60,Berlin
|
||||
60,Chat
|
||||
60,Sierra
|
||||
60,Signal
|
||||
60,Wald
|
||||
60,Zone
|
||||
61,Alpine
|
||||
61,Bridge
|
||||
61,Camp
|
||||
61,Dortmund
|
||||
61,Frontier
|
||||
61,Jungle
|
||||
61,Peak
|
||||
62,Burner
|
||||
62,Dawn
|
||||
62,Europa
|
||||
62,Midnight
|
||||
62,Nightshift
|
||||
62,Prenzlauer
|
||||
62,Safety
|
||||
62,Sector
|
||||
62,Wanderer
|
||||
63,Distress
|
||||
63,Kiez
|
||||
63,Ruhr
|
||||
63,Team
|
||||
64,Epsilon
|
||||
64,Field
|
||||
64,Granite
|
||||
64,Orbit
|
||||
64,Trail
|
||||
64,Whisper
|
||||
65,Central
|
||||
65,Cologne
|
||||
65,Layer
|
||||
65,Relay
|
||||
65,Runners
|
||||
65,Stone
|
||||
65,Tempo
|
||||
66,Polar
|
||||
66,Woods
|
||||
67,Highway
|
||||
67,Kreuzberg
|
||||
67,Leopard
|
||||
67,Metro
|
||||
67,Omega
|
||||
67,Phantom
|
||||
68,Hamburg
|
||||
68,Hydra
|
||||
68,Medic
|
||||
68,Titan
|
||||
69,Command
|
||||
69,Control
|
||||
69,Gamma
|
||||
69,Ghost
|
||||
69,Mercury
|
||||
69,Oasis
|
||||
70,Diamond
|
||||
70,Ham
|
||||
70,HAM
|
||||
70,Leipzig
|
||||
70,Paramedic
|
||||
70,Savanna
|
||||
71,Frankfurt
|
||||
71,Gecko
|
||||
71,Jupiter
|
||||
71,Sensors
|
||||
71,SENSORS
|
||||
71,Sunrise
|
||||
72,Chameleon
|
||||
72,Eagle
|
||||
72,Hilltop
|
||||
72,Teufelsberg
|
||||
73,Firefly
|
||||
73,Steel
|
||||
74,Bravo
|
||||
74,Caravan
|
||||
74,Ost
|
||||
74,Süd
|
||||
75,Emergency
|
||||
75,EMERGENCY
|
||||
75,Nomad
|
||||
75,Watch
|
||||
76,Alert
|
||||
76,Bavaria
|
||||
76,Fog
|
||||
76,Harmony
|
||||
76,Raven
|
||||
77,Admin
|
||||
77,ADMIN
|
||||
77,Den
|
||||
77,Ice
|
||||
77,LoRaNet
|
||||
77,North
|
||||
77,SOS
|
||||
77,Sos
|
||||
77,Wanderers
|
||||
78,Foxtrot
|
||||
78,Med
|
||||
78,Ops
|
||||
79,Flock
|
||||
79,Phoenix
|
||||
79,PRIVATE
|
||||
79,Private
|
||||
79,Signals
|
||||
79,Tiger
|
||||
80,Commune
|
||||
80,Freedom
|
||||
80,Pluto
|
||||
80,Snake
|
||||
80,Squad
|
||||
80,Stuttgart
|
||||
81,Grassland
|
||||
81,Tango
|
||||
81,Union
|
||||
82,Comet
|
||||
82,Flash
|
||||
82,Lightning
|
||||
83,Cloud
|
||||
83,Equinox
|
||||
83,Firewatch
|
||||
83,Fox
|
||||
83,Radio
|
||||
83,Shelter
|
||||
84,Cheetah
|
||||
84,General
|
||||
84,Outpost
|
||||
84,Volcano
|
||||
85,Glacier
|
||||
85,Storm
|
||||
86,Alpha
|
||||
86,Owl
|
||||
86,Panther
|
||||
86,Prairie
|
||||
86,Thunder
|
||||
87,Courier
|
||||
87,Nexus
|
||||
87,South
|
||||
88,Ash
|
||||
88,River
|
||||
88,Syndicate
|
||||
89,Amateur
|
||||
89,Astro
|
||||
89,Avalanche
|
||||
89,Bonfire
|
||||
89,Draco
|
||||
89,Griffin
|
||||
89,Nightfall
|
||||
89,Shade
|
||||
89,Venus
|
||||
90,Charlie
|
||||
90,Delta
|
||||
90,Stratum
|
||||
90,Viper
|
||||
91,Bison
|
||||
91,Tal
|
||||
92,Network
|
||||
92,Scout
|
||||
93,Comms
|
||||
93,Fluss
|
||||
93,Group
|
||||
93,Hub
|
||||
93,Pulse
|
||||
93,Smoke
|
||||
94,Frost
|
||||
94,Rover
|
||||
94,Village
|
||||
95,Cobra
|
||||
95,Liberty
|
||||
95,Ridge
|
||||
97,DarkNet
|
||||
97,NightshiftNet
|
||||
97,Radio2
|
||||
97,Shelter2
|
||||
98,CampNet
|
||||
98,Radio1
|
||||
98,Shelter1
|
||||
98,TangoMesh
|
||||
99,BaseAlpha
|
||||
99,BerlinNet
|
||||
99,SouthStar
|
||||
100,CourierMesh
|
||||
100,Storm1
|
||||
101,Courier2
|
||||
101,GridNet
|
||||
101,OpsCenter
|
||||
102,Courier1
|
||||
103,Storm2
|
||||
104,HawkNet
|
||||
105,BearNet
|
||||
105,StarNet
|
||||
107,emergency
|
||||
107,ZuluNet
|
||||
108,Comms1
|
||||
108,DragonNet
|
||||
108,Hub1
|
||||
109,admin
|
||||
109,NightMesh
|
||||
110,MeshNet
|
||||
111,BaseCharlie
|
||||
111,Comms2
|
||||
111,GridSouth
|
||||
111,Hub2
|
||||
111,MeshNetwork
|
||||
111,WolfNet
|
||||
112,Layer1
|
||||
112,Relay1
|
||||
112,ShortFast
|
||||
113,OpsRoom
|
||||
114,Layer3
|
||||
114,MeshCologne
|
||||
115,Layer2
|
||||
115,Relay2
|
||||
115,SOSBerlin
|
||||
116,Command1
|
||||
116,Control1
|
||||
116,CrowNet
|
||||
116,MeshFrankfurt
|
||||
117,EmergencyBerlin
|
||||
117,GridNorth
|
||||
117,MeshLeipzig
|
||||
117,PacketNet
|
||||
119,Command2
|
||||
119,Control2
|
||||
119,MeshHamburg
|
||||
120,NomadMesh
|
||||
121,NorthStar
|
||||
121,Watch2
|
||||
122,CommandRoom
|
||||
122,ControlRoom
|
||||
122,SyncNet
|
||||
122,Watch1
|
||||
123,PacketRadio
|
||||
123,ShadowNet
|
||||
124,EchoNet
|
||||
124,KiloNet
|
||||
124,Med2
|
||||
124,Ops2
|
||||
125,FoxtrotMesh
|
||||
125,RepeaterHub
|
||||
126,MoonNet
|
||||
127,BaseBravo
|
||||
127,Med1
|
||||
127,Ops1
|
||||
127,WolfDen
|
||||
|
@@ -0,0 +1,736 @@
|
||||
{
|
||||
"59": [
|
||||
"911",
|
||||
"Forest",
|
||||
"Pack"
|
||||
],
|
||||
"77": [
|
||||
"Admin",
|
||||
"ADMIN",
|
||||
"Den",
|
||||
"Ice",
|
||||
"LoRaNet",
|
||||
"North",
|
||||
"SOS",
|
||||
"Sos",
|
||||
"Wanderers"
|
||||
],
|
||||
"109": [
|
||||
"admin",
|
||||
"NightMesh"
|
||||
],
|
||||
"76": [
|
||||
"Alert",
|
||||
"Bavaria",
|
||||
"Fog",
|
||||
"Harmony",
|
||||
"Raven"
|
||||
],
|
||||
"86": [
|
||||
"Alpha",
|
||||
"Owl",
|
||||
"Panther",
|
||||
"Prairie",
|
||||
"Thunder"
|
||||
],
|
||||
"9": [
|
||||
"AlphaNet",
|
||||
"MeshGrid"
|
||||
],
|
||||
"61": [
|
||||
"Alpine",
|
||||
"Bridge",
|
||||
"Camp",
|
||||
"Dortmund",
|
||||
"Frontier",
|
||||
"Jungle",
|
||||
"Peak"
|
||||
],
|
||||
"89": [
|
||||
"Amateur",
|
||||
"Astro",
|
||||
"Avalanche",
|
||||
"Bonfire",
|
||||
"Draco",
|
||||
"Griffin",
|
||||
"Nightfall",
|
||||
"Shade",
|
||||
"Venus"
|
||||
],
|
||||
"52": [
|
||||
"Amazon",
|
||||
"Fireline",
|
||||
"Haze",
|
||||
"LoRaHessen",
|
||||
"Platinum",
|
||||
"Sensor",
|
||||
"Test",
|
||||
"Zulu"
|
||||
],
|
||||
"43": [
|
||||
"Anaconda",
|
||||
"Basalt",
|
||||
"Blackout",
|
||||
"Crow",
|
||||
"Dusk",
|
||||
"Falcon",
|
||||
"Lima",
|
||||
"Müggelberg"
|
||||
],
|
||||
"35": [
|
||||
"Aquila",
|
||||
"Doctor",
|
||||
"Echo",
|
||||
"Kilo",
|
||||
"Public",
|
||||
"Wyvern"
|
||||
],
|
||||
"44": [
|
||||
"Arctic",
|
||||
"Backup",
|
||||
"Bronze",
|
||||
"Corvus",
|
||||
"Cosmos",
|
||||
"LoRaBerlin",
|
||||
"Neukölln",
|
||||
"Safari"
|
||||
],
|
||||
"88": [
|
||||
"Ash",
|
||||
"River",
|
||||
"Syndicate"
|
||||
],
|
||||
"49": [
|
||||
"Asteroid",
|
||||
"Carbon",
|
||||
"Mesh"
|
||||
],
|
||||
"56": [
|
||||
"Aurora",
|
||||
"Dune",
|
||||
"Iron",
|
||||
"Lava",
|
||||
"Nomads"
|
||||
],
|
||||
"55": [
|
||||
"Base",
|
||||
"Bolt",
|
||||
"Hawk",
|
||||
"Mirage",
|
||||
"Nightwatch",
|
||||
"Obsidian",
|
||||
"Rock",
|
||||
"Victor",
|
||||
"West"
|
||||
],
|
||||
"6": [
|
||||
"Base1"
|
||||
],
|
||||
"5": [
|
||||
"Base2",
|
||||
"DeltaNet",
|
||||
"Mesh4",
|
||||
"MeshMunich"
|
||||
],
|
||||
"99": [
|
||||
"BaseAlpha",
|
||||
"BerlinNet",
|
||||
"SouthStar"
|
||||
],
|
||||
"127": [
|
||||
"BaseBravo",
|
||||
"Med1",
|
||||
"Ops1",
|
||||
"WolfDen"
|
||||
],
|
||||
"111": [
|
||||
"BaseCharlie",
|
||||
"Comms2",
|
||||
"GridSouth",
|
||||
"Hub2",
|
||||
"MeshNetwork",
|
||||
"WolfNet"
|
||||
],
|
||||
"38": [
|
||||
"Beacon",
|
||||
"Collective",
|
||||
"Harbor",
|
||||
"Lion",
|
||||
"Meteor"
|
||||
],
|
||||
"54": [
|
||||
"Bear",
|
||||
"Hospital",
|
||||
"Munich",
|
||||
"Python",
|
||||
"Rain",
|
||||
"Wind",
|
||||
"Wolves"
|
||||
],
|
||||
"105": [
|
||||
"BearNet",
|
||||
"StarNet"
|
||||
],
|
||||
"48": [
|
||||
"Beat",
|
||||
"Berg",
|
||||
"Beta",
|
||||
"Downlink",
|
||||
"Hive",
|
||||
"Rhythm",
|
||||
"Saxony",
|
||||
"Sideband",
|
||||
"Wolf"
|
||||
],
|
||||
"60": [
|
||||
"Berlin",
|
||||
"Chat",
|
||||
"Sierra",
|
||||
"Signal",
|
||||
"Wald",
|
||||
"Zone"
|
||||
],
|
||||
"15": [
|
||||
"BerlinMesh",
|
||||
"LongSlow",
|
||||
"MeshBerlin",
|
||||
"Zone3"
|
||||
],
|
||||
"14": [
|
||||
"BetaBerlin",
|
||||
"Signal2",
|
||||
"TangoNet",
|
||||
"Zone2"
|
||||
],
|
||||
"91": [
|
||||
"Bison",
|
||||
"Tal"
|
||||
],
|
||||
"50": [
|
||||
"Blizzard",
|
||||
"Runner"
|
||||
],
|
||||
"46": [
|
||||
"Border",
|
||||
"Nest"
|
||||
],
|
||||
"47": [
|
||||
"Borealis",
|
||||
"Mars",
|
||||
"Path",
|
||||
"Ranger"
|
||||
],
|
||||
"74": [
|
||||
"Bravo",
|
||||
"Caravan",
|
||||
"Ost",
|
||||
"Süd"
|
||||
],
|
||||
"21": [
|
||||
"BravoNet",
|
||||
"EastStar",
|
||||
"MeshCollective",
|
||||
"SunNet"
|
||||
],
|
||||
"45": [
|
||||
"Breeze",
|
||||
"Burrow",
|
||||
"Gale",
|
||||
"Saturn"
|
||||
],
|
||||
"62": [
|
||||
"Burner",
|
||||
"Dawn",
|
||||
"Europa",
|
||||
"Midnight",
|
||||
"Nightshift",
|
||||
"Prenzlauer",
|
||||
"Safety",
|
||||
"Sector",
|
||||
"Wanderer"
|
||||
],
|
||||
"33": [
|
||||
"Callisto",
|
||||
"CQ1",
|
||||
"Daybreak",
|
||||
"Demo",
|
||||
"East",
|
||||
"LoRaMesh",
|
||||
"Mist"
|
||||
],
|
||||
"51": [
|
||||
"Callsign",
|
||||
"Carpet",
|
||||
"Desert",
|
||||
"Dragon",
|
||||
"Friedrichshain",
|
||||
"Help",
|
||||
"Nebula",
|
||||
"Safe"
|
||||
],
|
||||
"37": [
|
||||
"Campfire",
|
||||
"City",
|
||||
"Outsider",
|
||||
"Sync"
|
||||
],
|
||||
"98": [
|
||||
"CampNet",
|
||||
"Radio1",
|
||||
"Shelter1",
|
||||
"TangoMesh"
|
||||
],
|
||||
"65": [
|
||||
"Central",
|
||||
"Cologne",
|
||||
"Layer",
|
||||
"Relay",
|
||||
"Runners",
|
||||
"Stone",
|
||||
"Tempo"
|
||||
],
|
||||
"72": [
|
||||
"Chameleon",
|
||||
"Eagle",
|
||||
"Hilltop",
|
||||
"Teufelsberg"
|
||||
],
|
||||
"90": [
|
||||
"Charlie",
|
||||
"Delta",
|
||||
"Stratum",
|
||||
"Viper"
|
||||
],
|
||||
"40": [
|
||||
"Checkpoint",
|
||||
"Galaxy",
|
||||
"Jaguar",
|
||||
"Sunset",
|
||||
"Zeta"
|
||||
],
|
||||
"25": [
|
||||
"Checkpoint1",
|
||||
"HAMNet"
|
||||
],
|
||||
"26": [
|
||||
"Checkpoint2",
|
||||
"GhostNet"
|
||||
],
|
||||
"84": [
|
||||
"Cheetah",
|
||||
"General",
|
||||
"Outpost",
|
||||
"Volcano"
|
||||
],
|
||||
"32": [
|
||||
"Clinic",
|
||||
"Convoy",
|
||||
"Daylight",
|
||||
"Town"
|
||||
],
|
||||
"83": [
|
||||
"Cloud",
|
||||
"Equinox",
|
||||
"Firewatch",
|
||||
"Fox",
|
||||
"Radio",
|
||||
"Shelter"
|
||||
],
|
||||
"95": [
|
||||
"Cobra",
|
||||
"Liberty",
|
||||
"Ridge"
|
||||
],
|
||||
"58": [
|
||||
"Colony",
|
||||
"Fire",
|
||||
"Ganymede",
|
||||
"Grid",
|
||||
"Kraken",
|
||||
"Road",
|
||||
"Solstice",
|
||||
"Tundra"
|
||||
],
|
||||
"82": [
|
||||
"Comet",
|
||||
"Flash",
|
||||
"Lightning"
|
||||
],
|
||||
"69": [
|
||||
"Command",
|
||||
"Control",
|
||||
"Gamma",
|
||||
"Ghost",
|
||||
"Mercury",
|
||||
"Oasis"
|
||||
],
|
||||
"116": [
|
||||
"Command1",
|
||||
"Control1",
|
||||
"CrowNet",
|
||||
"MeshFrankfurt"
|
||||
],
|
||||
"119": [
|
||||
"Command2",
|
||||
"Control2",
|
||||
"MeshHamburg"
|
||||
],
|
||||
"122": [
|
||||
"CommandRoom",
|
||||
"ControlRoom",
|
||||
"SyncNet",
|
||||
"Watch1"
|
||||
],
|
||||
"93": [
|
||||
"Comms",
|
||||
"Fluss",
|
||||
"Group",
|
||||
"Hub",
|
||||
"Pulse",
|
||||
"Smoke"
|
||||
],
|
||||
"108": [
|
||||
"Comms1",
|
||||
"DragonNet",
|
||||
"Hub1"
|
||||
],
|
||||
"2": [
|
||||
"CommsNet",
|
||||
"Mesh3",
|
||||
"PulseNet"
|
||||
],
|
||||
"80": [
|
||||
"Commune",
|
||||
"Freedom",
|
||||
"Pluto",
|
||||
"Snake",
|
||||
"Squad",
|
||||
"Stuttgart"
|
||||
],
|
||||
"57": [
|
||||
"Copper",
|
||||
"Core",
|
||||
"Spectrum",
|
||||
"Summit"
|
||||
],
|
||||
"87": [
|
||||
"Courier",
|
||||
"Nexus",
|
||||
"South"
|
||||
],
|
||||
"102": [
|
||||
"Courier1"
|
||||
],
|
||||
"101": [
|
||||
"Courier2",
|
||||
"GridNet",
|
||||
"OpsCenter"
|
||||
],
|
||||
"100": [
|
||||
"CourierMesh",
|
||||
"Storm1"
|
||||
],
|
||||
"8": [
|
||||
"CourierNet",
|
||||
"Fire2",
|
||||
"Grid2",
|
||||
"LongFast",
|
||||
"RescueTeam"
|
||||
],
|
||||
"16": [
|
||||
"CQ",
|
||||
"EchoMesh",
|
||||
"Freq2",
|
||||
"KiloMesh",
|
||||
"Node2",
|
||||
"PhoenixNet",
|
||||
"Repeater2"
|
||||
],
|
||||
"34": [
|
||||
"CQ2",
|
||||
"Freq",
|
||||
"Gold",
|
||||
"Link",
|
||||
"Repeater"
|
||||
],
|
||||
"97": [
|
||||
"DarkNet",
|
||||
"NightshiftNet",
|
||||
"Radio2",
|
||||
"Shelter2"
|
||||
],
|
||||
"1": [
|
||||
"DEMO",
|
||||
"Downlink1",
|
||||
"NightNet",
|
||||
"Sideband1"
|
||||
],
|
||||
"31": [
|
||||
"DemoBerlin",
|
||||
"FieldNet",
|
||||
"MediumFast"
|
||||
],
|
||||
"70": [
|
||||
"Diamond",
|
||||
"Ham",
|
||||
"HAM",
|
||||
"Leipzig",
|
||||
"Paramedic",
|
||||
"Savanna"
|
||||
],
|
||||
"63": [
|
||||
"Distress",
|
||||
"Kiez",
|
||||
"Ruhr",
|
||||
"Team"
|
||||
],
|
||||
"36": [
|
||||
"District",
|
||||
"Hessen",
|
||||
"Io",
|
||||
"LoRaTest",
|
||||
"Operations",
|
||||
"Shadow",
|
||||
"Unit"
|
||||
],
|
||||
"23": [
|
||||
"EagleNet",
|
||||
"MeshHessen",
|
||||
"Node5"
|
||||
],
|
||||
"124": [
|
||||
"EchoNet",
|
||||
"KiloNet",
|
||||
"Med2",
|
||||
"Ops2"
|
||||
],
|
||||
"75": [
|
||||
"Emergency",
|
||||
"EMERGENCY",
|
||||
"Nomad",
|
||||
"Watch"
|
||||
],
|
||||
"107": [
|
||||
"emergency",
|
||||
"ZuluNet"
|
||||
],
|
||||
"117": [
|
||||
"EmergencyBerlin",
|
||||
"GridNorth",
|
||||
"MeshLeipzig",
|
||||
"PacketNet"
|
||||
],
|
||||
"64": [
|
||||
"Epsilon",
|
||||
"Field",
|
||||
"Granite",
|
||||
"Orbit",
|
||||
"Trail",
|
||||
"Whisper"
|
||||
],
|
||||
"11": [
|
||||
"Fire1",
|
||||
"Grid1"
|
||||
],
|
||||
"39": [
|
||||
"Firebird",
|
||||
"Fireteam",
|
||||
"Quasar",
|
||||
"Snow",
|
||||
"Universe",
|
||||
"Uplink"
|
||||
],
|
||||
"73": [
|
||||
"Firefly",
|
||||
"Steel"
|
||||
],
|
||||
"79": [
|
||||
"Flock",
|
||||
"Phoenix",
|
||||
"PRIVATE",
|
||||
"Private",
|
||||
"Signals",
|
||||
"Tiger"
|
||||
],
|
||||
"12": [
|
||||
"FoxNet",
|
||||
"MeshRuhr",
|
||||
"RadioNet"
|
||||
],
|
||||
"78": [
|
||||
"Foxtrot",
|
||||
"Med",
|
||||
"Ops"
|
||||
],
|
||||
"125": [
|
||||
"FoxtrotMesh",
|
||||
"RepeaterHub"
|
||||
],
|
||||
"17": [
|
||||
"FoxtrotNet",
|
||||
"Node3"
|
||||
],
|
||||
"71": [
|
||||
"Frankfurt",
|
||||
"Gecko",
|
||||
"Jupiter",
|
||||
"Sensors",
|
||||
"SENSORS",
|
||||
"Sunrise"
|
||||
],
|
||||
"19": [
|
||||
"Freq1",
|
||||
"HarmonyNet",
|
||||
"Node1",
|
||||
"RavenNet",
|
||||
"Repeater1"
|
||||
],
|
||||
"94": [
|
||||
"Frost",
|
||||
"Rover",
|
||||
"Village"
|
||||
],
|
||||
"85": [
|
||||
"Glacier",
|
||||
"Storm"
|
||||
],
|
||||
"81": [
|
||||
"Grassland",
|
||||
"Tango",
|
||||
"Union"
|
||||
],
|
||||
"68": [
|
||||
"Hamburg",
|
||||
"Hydra",
|
||||
"Medic",
|
||||
"Titan"
|
||||
],
|
||||
"104": [
|
||||
"HawkNet"
|
||||
],
|
||||
"67": [
|
||||
"Highway",
|
||||
"Kreuzberg",
|
||||
"Leopard",
|
||||
"Metro",
|
||||
"Omega",
|
||||
"Phantom"
|
||||
],
|
||||
"41": [
|
||||
"Hinterland",
|
||||
"HQ2",
|
||||
"Main",
|
||||
"Meshtastic",
|
||||
"Router",
|
||||
"Valley",
|
||||
"Wander",
|
||||
"Wolfpack"
|
||||
],
|
||||
"27": [
|
||||
"HQ",
|
||||
"Router2"
|
||||
],
|
||||
"42": [
|
||||
"HQ1",
|
||||
"Lizard",
|
||||
"Packet",
|
||||
"Sahara",
|
||||
"Tunnel"
|
||||
],
|
||||
"112": [
|
||||
"Layer1",
|
||||
"Relay1",
|
||||
"ShortFast"
|
||||
],
|
||||
"115": [
|
||||
"Layer2",
|
||||
"Relay2",
|
||||
"SOSBerlin"
|
||||
],
|
||||
"114": [
|
||||
"Layer3",
|
||||
"MeshCologne"
|
||||
],
|
||||
"3": [
|
||||
"LightNet",
|
||||
"Mesh2",
|
||||
"WestStar",
|
||||
"WolfMesh"
|
||||
],
|
||||
"18": [
|
||||
"LoRa"
|
||||
],
|
||||
"24": [
|
||||
"MediumSlow",
|
||||
"Router1"
|
||||
],
|
||||
"0": [
|
||||
"Mesh1"
|
||||
],
|
||||
"4": [
|
||||
"Mesh5",
|
||||
"OPERATIONS",
|
||||
"Rescue1",
|
||||
"SignalFire"
|
||||
],
|
||||
"110": [
|
||||
"MeshNet"
|
||||
],
|
||||
"7": [
|
||||
"MeshTest",
|
||||
"Rescue2",
|
||||
"ZuluMesh"
|
||||
],
|
||||
"126": [
|
||||
"MoonNet"
|
||||
],
|
||||
"92": [
|
||||
"Network",
|
||||
"Scout"
|
||||
],
|
||||
"22": [
|
||||
"Node4",
|
||||
"Uplink1"
|
||||
],
|
||||
"120": [
|
||||
"NomadMesh"
|
||||
],
|
||||
"20": [
|
||||
"NomadNet",
|
||||
"SENSOR",
|
||||
"TEST",
|
||||
"test"
|
||||
],
|
||||
"53": [
|
||||
"Nord",
|
||||
"Rescue",
|
||||
"Secure",
|
||||
"Silver"
|
||||
],
|
||||
"121": [
|
||||
"NorthStar",
|
||||
"Watch2"
|
||||
],
|
||||
"113": [
|
||||
"OpsRoom"
|
||||
],
|
||||
"123": [
|
||||
"PacketRadio",
|
||||
"ShadowNet"
|
||||
],
|
||||
"66": [
|
||||
"Polar",
|
||||
"Woods"
|
||||
],
|
||||
"13": [
|
||||
"Signal1",
|
||||
"Zone1"
|
||||
],
|
||||
"103": [
|
||||
"Storm2"
|
||||
],
|
||||
"10": [
|
||||
"TestBerlin",
|
||||
"WaWi"
|
||||
]
|
||||
}
|
||||
@@ -0,0 +1,134 @@
|
||||
#!/usr/bin/env ruby
|
||||
# frozen_string_literal: true
|
||||
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
require "base64"
|
||||
require "json"
|
||||
require "csv"
|
||||
|
||||
# --- CONFIG --------------------------------------------------------
|
||||
|
||||
# The PSK you want. Here: public mesh, "AQ==" (0x01).
|
||||
PSK_B64 = ENV.fetch("PSK_B64", "AQ==")
|
||||
|
||||
# 1000 potential channel candidate names for rainbow indices.
|
||||
CANDIDATE_NAMES = %w[
|
||||
911 Admin ADMIN admin Alert Alpha AlphaNet Alpine Amateur Amazon Anaconda Aquila Arctic Ash Asteroid Astro Aurora Avalanche Backup Basalt Base Base1 Base2 BaseAlpha BaseBravo BaseCharlie Bavaria Beacon Bear BearNet Beat Berg Berlin BerlinMesh BerlinNet Beta BetaBerlin Bison Blackout Blizzard Bolt Bonfire Border Borealis Bravo BravoNet Breeze Bridge Bronze Burner Burrow Callisto Callsign Camp Campfire CampNet Caravan Carbon Carpet Central Chameleon Charlie Chat Checkpoint Checkpoint1 Checkpoint2 Cheetah City Clinic Cloud Cobra Collective Cologne Colony Comet Command Command1 Command2 CommandRoom Comms Comms1 Comms2 CommsNet Commune Control Control1 Control2 ControlRoom Convoy Copper Core Corvus Cosmos Courier Courier1 Courier2 CourierMesh CourierNet CQ CQ1 CQ2 Crow CrowNet DarkNet Dawn Daybreak Daylight Delta DeltaNet Demo DEMO DemoBerlin Den Desert Diamond Distress District Doctor Dortmund Downlink Downlink1 Draco Dragon DragonNet Dune Dusk Eagle EagleNet East EastStar Echo EchoMesh EchoNet Emergency emergency EMERGENCY EmergencyBerlin Epsilon Equinox Europa Falcon Field FieldNet Fire Fire1 Fire2 Firebird Firefly Fireline Fireteam Firewatch Flash Flock Fluss Fog Forest Fox FoxNet Foxtrot FoxtrotMesh FoxtrotNet Frankfurt Freedom Freq Freq1 Freq2 Friedrichshain Frontier Frost Galaxy Gale Gamma Ganymede Gecko General Ghost GhostNet Glacier Gold Granite Grassland Grid Grid1 Grid2 GridNet GridNorth GridSouth Griffin Group Ham HAM Hamburg HAMNet Harbor Harmony HarmonyNet Hawk HawkNet Haze Help Hessen Highway Hilltop Hinterland Hive Hospital HQ HQ1 HQ2 Hub Hub1 Hub2 Hydra Ice Io Iron Jaguar Jungle Jupiter Kiez Kilo KiloMesh KiloNet Kraken Kreuzberg Lava Layer Layer1 Layer2 Layer3 Leipzig Leopard Liberty LightNet Lightning Lima Link Lion Lizard LongFast LongSlow LoRa LoRaBerlin LoRaHessen LoRaMesh LoRaNet LoRaTest Main Mars Med Med1 Med2 Medic MediumFast MediumSlow Mercury Mesh Mesh1 Mesh2 Mesh3 Mesh4 Mesh5 MeshBerlin MeshCollective MeshCologne MeshFrankfurt MeshGrid MeshHamburg MeshHessen MeshLeipzig MeshMunich MeshNet MeshNetwork MeshRuhr Meshtastic MeshTest Meteor Metro Midnight Mirage Mist MoonNet Munich Müggelberg Nebula Nest Network Neukölln Nexus Nightfall NightMesh NightNet Nightshift NightshiftNet Nightwatch Node1 Node2 Node3 Node4 Node5 Nomad NomadMesh NomadNet Nomads Nord North NorthStar Oasis Obsidian Omega Operations OPERATIONS Ops Ops1 Ops2 OpsCenter OpsRoom Orbit Ost Outpost Outsider Owl Pack Packet PacketNet PacketRadio Panther Paramedic Path Peak Phantom Phoenix PhoenixNet Platinum Pluto Polar Prairie Prenzlauer PRIVATE Private Public Pulse PulseNet Python Quasar Radio Radio1 Radio2 RadioNet Rain Ranger Raven RavenNet Relay Relay1 Relay2 Repeater Repeater1 Repeater2 RepeaterHub Rescue Rescue1 Rescue2 RescueTeam Rhythm Ridge River Road Rock Router Router1 Router2 Rover Ruhr Runner Runners Safari Safe Safety Sahara Saturn Savanna Saxony Scout Sector Secure Sensor SENSOR Sensors SENSORS Shade Shadow ShadowNet Shelter Shelter1 Shelter2 ShortFast Sideband Sideband1 Sierra Signal Signal1 Signal2 SignalFire Signals Silver Smoke Snake Snow Solstice SOS Sos SOSBerlin South SouthStar Spectrum Squad StarNet Steel Stone Storm Storm1 Storm2 Stratum Stuttgart Summit SunNet Sunrise Sunset Sync SyncNet Syndicate Süd Tal Tango TangoMesh TangoNet Team Tempo Test TEST test TestBerlin Teufelsberg Thunder Tiger Titan Town Trail Tundra Tunnel Union Unit Universe Uplink Uplink1 Valley Venus Victor Village Viper Volcano Wald Wander Wanderer Wanderers Watch Watch1 Watch2 WaWi West WestStar Whisper Wind Wolf WolfDen WolfMesh WolfNet Wolfpack Wolves Woods Wyvern Zeta Zone Zone1 Zone2 Zone3 Zulu ZuluMesh ZuluNet
|
||||
]
|
||||
|
||||
# Output filenames
|
||||
CSV_OUT = ENV.fetch("CSV_OUT", "rainbow.csv")
|
||||
JSON_OUT = ENV.fetch("JSON_OUT", "rainbow.json")
|
||||
|
||||
# --- HASH FUNCTION -------------------------------------------------
|
||||
|
||||
def xor_bytes(str_or_bytes)
|
||||
bytes = str_or_bytes.is_a?(String) ? str_or_bytes.bytes : str_or_bytes
|
||||
bytes.reduce(0) { |acc, b| (acc ^ b) & 0xFF }
|
||||
end
|
||||
|
||||
def expanded_key(psk_b64)
|
||||
raw = Base64.decode64(psk_b64 || "")
|
||||
|
||||
case raw.bytesize
|
||||
when 0
|
||||
# no encryption: length 0, xor = 0
|
||||
"".b
|
||||
when 1
|
||||
alias_index = raw.bytes.first
|
||||
alias_keys = {
|
||||
1 => [
|
||||
0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
|
||||
0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01,
|
||||
].pack("C*"),
|
||||
2 => [
|
||||
0x38, 0x4B, 0xBC, 0xC0, 0x1D, 0xC0, 0x22, 0xD1,
|
||||
0x81, 0xBF, 0x36, 0xB8, 0x61, 0x21, 0xE1, 0xFB,
|
||||
0x96, 0xB7, 0x2E, 0x55, 0xBF, 0x74, 0x22, 0x7E,
|
||||
0x9D, 0x6A, 0xFB, 0x48, 0xD6, 0x4C, 0xB1, 0xA1,
|
||||
].pack("C*"),
|
||||
}
|
||||
alias_keys.fetch(alias_index) { raise "Unknown PSK alias #{alias_index}" }
|
||||
when 2..15
|
||||
# pad to 16 (AES128)
|
||||
(raw.bytes + [0] * (16 - raw.bytesize)).pack("C*")
|
||||
when 16
|
||||
raw
|
||||
when 17..31
|
||||
# pad to 32 (AES256)
|
||||
(raw.bytes + [0] * (32 - raw.bytesize)).pack("C*")
|
||||
when 32
|
||||
raw
|
||||
else
|
||||
raise "PSK too long (#{raw.bytesize} bytes)"
|
||||
end
|
||||
end
|
||||
|
||||
def channel_hash(name, psk_b64)
|
||||
effective_name = name.b
|
||||
key = expanded_key(psk_b64)
|
||||
|
||||
h_name = xor_bytes(effective_name)
|
||||
h_key = xor_bytes(key)
|
||||
|
||||
(h_name ^ h_key) & 0xFF
|
||||
end
|
||||
|
||||
# --- BUILD RAINBOW TABLE -------------------------------------------
|
||||
|
||||
psk_b64 = PSK_B64
|
||||
puts "Using PSK_B64=#{psk_b64.inspect}"
|
||||
|
||||
hash_to_names = Hash.new { |h, k| h[k] = [] }
|
||||
|
||||
CANDIDATE_NAMES.each do |name|
|
||||
h = channel_hash(name, psk_b64)
|
||||
hash_to_names[h] << name
|
||||
end
|
||||
|
||||
# --- WRITE CSV (hash,name) -----------------------------------------
|
||||
|
||||
CSV.open(CSV_OUT, "w") do |csv|
|
||||
csv << %w[hash name]
|
||||
hash_to_names.keys.sort.each do |h|
|
||||
hash_to_names[h].each do |name|
|
||||
csv << [h, name]
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
puts "Wrote CSV rainbow table to #{CSV_OUT}"
|
||||
|
||||
# --- WRITE JSON ({hash: [names...]}) -------------------------------
|
||||
|
||||
json_hash = hash_to_names.transform_keys(&:to_s)
|
||||
File.write(JSON_OUT, JSON.pretty_generate(json_hash))
|
||||
|
||||
puts "Wrote JSON rainbow table to #{JSON_OUT}"
|
||||
|
||||
# --- OPTIONAL: interactive query -----------------------------------
|
||||
|
||||
if ARGV.first == "query"
|
||||
target = Integer(ARGV[1] || raise("Usage: #{File.basename($0)} query <hash>"))
|
||||
names = hash_to_names[target]
|
||||
if names.empty?
|
||||
puts "No names for hash #{target}"
|
||||
else
|
||||
puts "Names for hash #{target}:"
|
||||
names.each { |n| puts " - #{n}" }
|
||||
end
|
||||
else
|
||||
puts "Run again with: #{File.basename($0)} query <hash> # to inspect a specific hash"
|
||||
end
|
||||
@@ -0,0 +1,423 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Unit tests for :mod:`data.mesh_ingestor.channels`."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
from pathlib import Path
|
||||
from types import SimpleNamespace
|
||||
|
||||
import pytest
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
import data.mesh_ingestor.channels as channels
|
||||
import data.mesh_ingestor.config as config
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def reset_channel_cache():
|
||||
"""Ensure channel cache is cleared between tests."""
|
||||
channels._reset_channel_cache()
|
||||
yield
|
||||
channels._reset_channel_cache()
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _iter_channel_objects
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestIterChannelObjects:
|
||||
"""Tests for :func:`channels._iter_channel_objects`."""
|
||||
|
||||
def test_none_returns_empty(self):
|
||||
"""None input yields no items."""
|
||||
assert list(channels._iter_channel_objects(None)) == []
|
||||
|
||||
def test_dict_yields_values(self):
|
||||
"""Dict input yields values."""
|
||||
result = list(channels._iter_channel_objects({"a": 1, "b": 2}))
|
||||
assert sorted(result) == [1, 2]
|
||||
|
||||
def test_list_yields_elements(self):
|
||||
"""List input yields all elements."""
|
||||
items = [1, 2, 3]
|
||||
assert list(channels._iter_channel_objects(items)) == [1, 2, 3]
|
||||
|
||||
def test_generator_yields_elements(self):
|
||||
"""Generator input yields all elements."""
|
||||
result = list(channels._iter_channel_objects(x for x in [10, 20]))
|
||||
assert result == [10, 20]
|
||||
|
||||
def test_object_with_len_and_getitem(self):
|
||||
"""Object with __len__ and __getitem__ is iterated correctly."""
|
||||
|
||||
class FakeSeq:
|
||||
def __len__(self):
|
||||
return 3
|
||||
|
||||
def __getitem__(self, idx):
|
||||
return idx * 10
|
||||
|
||||
result = list(channels._iter_channel_objects(FakeSeq()))
|
||||
assert result == [0, 10, 20]
|
||||
|
||||
def test_non_iterable_without_len_returns_empty(self):
|
||||
"""Objects with neither iter protocol nor len/getitem yield nothing."""
|
||||
|
||||
class Opaque:
|
||||
pass
|
||||
|
||||
assert list(channels._iter_channel_objects(Opaque())) == []
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _primary_channel_name
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestPrimaryChannelName:
|
||||
"""Tests for :func:`channels._primary_channel_name`."""
|
||||
|
||||
def test_returns_modem_preset_when_set(self, monkeypatch):
|
||||
"""Returns MODEM_PRESET from config when available."""
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", "LongFast")
|
||||
assert channels._primary_channel_name() == "LongFast"
|
||||
|
||||
def test_strips_modem_preset_whitespace(self, monkeypatch):
|
||||
"""MODEM_PRESET is stripped of surrounding whitespace."""
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", " MedFast ")
|
||||
assert channels._primary_channel_name() == "MedFast"
|
||||
|
||||
def test_falls_back_to_env_channel(self, monkeypatch):
|
||||
"""Falls back to CHANNEL env var when MODEM_PRESET is absent."""
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", None)
|
||||
monkeypatch.setenv("CHANNEL", "LongRange")
|
||||
assert channels._primary_channel_name() == "LongRange"
|
||||
|
||||
def test_returns_none_when_both_absent(self, monkeypatch):
|
||||
"""Returns None when neither MODEM_PRESET nor CHANNEL is set."""
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", None)
|
||||
monkeypatch.delenv("CHANNEL", raising=False)
|
||||
assert channels._primary_channel_name() is None
|
||||
|
||||
def test_empty_modem_preset_falls_back_to_env(self, monkeypatch):
|
||||
"""Empty string MODEM_PRESET falls back to CHANNEL env var."""
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", "")
|
||||
monkeypatch.setenv("CHANNEL", "LongRange")
|
||||
assert channels._primary_channel_name() == "LongRange"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _extract_channel_name
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestExtractChannelName:
|
||||
"""Tests for :func:`channels._extract_channel_name`."""
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None input returns None."""
|
||||
assert channels._extract_channel_name(None) is None
|
||||
|
||||
def test_dict_with_name(self):
|
||||
"""Dict with 'name' key returns stripped name."""
|
||||
assert channels._extract_channel_name({"name": " LongFast "}) == "LongFast"
|
||||
|
||||
def test_object_with_name_attr(self):
|
||||
"""Object with name attribute returns stripped name."""
|
||||
obj = SimpleNamespace(name="Chat")
|
||||
assert channels._extract_channel_name(obj) == "Chat"
|
||||
|
||||
def test_empty_name_returns_none(self):
|
||||
"""Empty name string returns None."""
|
||||
assert channels._extract_channel_name({"name": " "}) is None
|
||||
|
||||
def test_missing_name_returns_none(self):
|
||||
"""Object without name attribute returns None."""
|
||||
assert channels._extract_channel_name(SimpleNamespace()) is None
|
||||
|
||||
def test_none_name_returns_none(self):
|
||||
"""None name value returns None."""
|
||||
assert channels._extract_channel_name({"name": None}) is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _normalize_role
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestNormalizeRole:
|
||||
"""Tests for :func:`channels._normalize_role`."""
|
||||
|
||||
def test_integer_passthrough(self):
|
||||
"""Integer values are returned unchanged."""
|
||||
assert channels._normalize_role(1) == 1
|
||||
assert channels._normalize_role(2) == 2
|
||||
|
||||
def test_string_primary(self):
|
||||
"""'PRIMARY' string maps to _ROLE_PRIMARY."""
|
||||
assert channels._normalize_role("PRIMARY") == channels._ROLE_PRIMARY
|
||||
|
||||
def test_string_secondary(self):
|
||||
"""'SECONDARY' string maps to _ROLE_SECONDARY."""
|
||||
assert channels._normalize_role("SECONDARY") == channels._ROLE_SECONDARY
|
||||
|
||||
def test_string_case_insensitive(self):
|
||||
"""Role strings are case-insensitive."""
|
||||
assert channels._normalize_role("primary") == channels._ROLE_PRIMARY
|
||||
assert channels._normalize_role("Secondary") == channels._ROLE_SECONDARY
|
||||
|
||||
def test_string_numeric(self):
|
||||
"""Numeric strings are coerced to int."""
|
||||
assert channels._normalize_role("1") == 1
|
||||
|
||||
def test_string_invalid_returns_none(self):
|
||||
"""Non-numeric, non-role strings return None."""
|
||||
assert channels._normalize_role("unknown") is None
|
||||
|
||||
def test_object_with_name_attr(self):
|
||||
"""Objects with a 'name' attribute delegate to string handling."""
|
||||
obj = SimpleNamespace(name="PRIMARY")
|
||||
assert channels._normalize_role(obj) == channels._ROLE_PRIMARY
|
||||
|
||||
def test_object_with_value_attr(self):
|
||||
"""Objects with an integer 'value' attribute return that value."""
|
||||
obj = SimpleNamespace(value=2)
|
||||
assert channels._normalize_role(obj) == 2
|
||||
|
||||
def test_coercible_object(self):
|
||||
"""Objects coercible to int return their integer value."""
|
||||
|
||||
class IntLike:
|
||||
def __int__(self):
|
||||
return 3
|
||||
|
||||
assert channels._normalize_role(IntLike()) == 3
|
||||
|
||||
def test_uncoercible_object_returns_none(self):
|
||||
"""Objects not coercible to int return None."""
|
||||
assert channels._normalize_role(object()) is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _channel_tuple
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestChannelTuple:
|
||||
"""Tests for :func:`channels._channel_tuple`."""
|
||||
|
||||
def test_primary_channel_with_name(self, monkeypatch):
|
||||
"""Primary role with settings name returns (0, name)."""
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", None)
|
||||
obj = SimpleNamespace(
|
||||
role=channels._ROLE_PRIMARY,
|
||||
settings=SimpleNamespace(name="LongFast"),
|
||||
)
|
||||
assert channels._channel_tuple(obj) == (0, "LongFast")
|
||||
|
||||
def test_primary_channel_falls_back_to_preset(self, monkeypatch):
|
||||
"""Primary channel with no name falls back to MODEM_PRESET."""
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", "ShortFast")
|
||||
obj = SimpleNamespace(
|
||||
role=channels._ROLE_PRIMARY, settings=SimpleNamespace(name="")
|
||||
)
|
||||
result = channels._channel_tuple(obj)
|
||||
assert result == (0, "ShortFast")
|
||||
|
||||
def test_secondary_channel(self):
|
||||
"""Secondary role with index and name returns (index, name)."""
|
||||
obj = SimpleNamespace(
|
||||
role=channels._ROLE_SECONDARY,
|
||||
index=3,
|
||||
settings=SimpleNamespace(name="Chat"),
|
||||
)
|
||||
assert channels._channel_tuple(obj) == (3, "Chat")
|
||||
|
||||
def test_unknown_role_returns_none(self):
|
||||
"""Unrecognised roles return None."""
|
||||
obj = SimpleNamespace(role=99, index=0, settings=SimpleNamespace(name="X"))
|
||||
assert channels._channel_tuple(obj) is None
|
||||
|
||||
def test_secondary_without_valid_index_returns_none(self):
|
||||
"""Secondary channel with no valid index returns None."""
|
||||
obj = SimpleNamespace(
|
||||
role=channels._ROLE_SECONDARY,
|
||||
index="bad",
|
||||
settings=SimpleNamespace(name="Chat"),
|
||||
)
|
||||
assert channels._channel_tuple(obj) is None
|
||||
|
||||
def test_secondary_without_name_returns_none(self):
|
||||
"""Secondary channel with no name returns None."""
|
||||
obj = SimpleNamespace(
|
||||
role=channels._ROLE_SECONDARY,
|
||||
index=1,
|
||||
settings=SimpleNamespace(name=""),
|
||||
)
|
||||
assert channels._channel_tuple(obj) is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# capture_from_interface
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestCaptureFromInterface:
|
||||
"""Tests for :func:`channels.capture_from_interface`."""
|
||||
|
||||
def _make_iface(self, channel_list):
|
||||
local_node = SimpleNamespace(channels=channel_list)
|
||||
return SimpleNamespace(localNode=local_node, waitForConfig=lambda: None)
|
||||
|
||||
def test_none_iface_is_noop(self):
|
||||
"""None interface is silently ignored."""
|
||||
channels.capture_from_interface(None)
|
||||
assert channels.channel_mappings() == ()
|
||||
|
||||
def test_captures_primary_and_secondary(self):
|
||||
"""Both primary and secondary channels are captured."""
|
||||
iface = self._make_iface(
|
||||
[
|
||||
SimpleNamespace(
|
||||
role=channels._ROLE_PRIMARY,
|
||||
settings=SimpleNamespace(name="LongFast"),
|
||||
),
|
||||
SimpleNamespace(
|
||||
role=channels._ROLE_SECONDARY,
|
||||
index=1,
|
||||
settings=SimpleNamespace(name="Chat"),
|
||||
),
|
||||
]
|
||||
)
|
||||
channels.capture_from_interface(iface)
|
||||
mappings = channels.channel_mappings()
|
||||
assert (0, "LongFast") in mappings
|
||||
assert (1, "Chat") in mappings
|
||||
|
||||
def test_subsequent_calls_are_noops_when_cached(self):
|
||||
"""Second call with different interface is ignored once cached."""
|
||||
iface1 = self._make_iface(
|
||||
[
|
||||
SimpleNamespace(
|
||||
role=channels._ROLE_PRIMARY, settings=SimpleNamespace(name="First")
|
||||
),
|
||||
]
|
||||
)
|
||||
iface2 = self._make_iface(
|
||||
[
|
||||
SimpleNamespace(
|
||||
role=channels._ROLE_PRIMARY, settings=SimpleNamespace(name="Second")
|
||||
),
|
||||
]
|
||||
)
|
||||
channels.capture_from_interface(iface1)
|
||||
channels.capture_from_interface(iface2)
|
||||
assert channels.channel_name(0) == "First"
|
||||
|
||||
def test_deduplicates_indices(self):
|
||||
"""Duplicate channel indices keep the first seen entry."""
|
||||
iface = self._make_iface(
|
||||
[
|
||||
SimpleNamespace(
|
||||
role=channels._ROLE_SECONDARY,
|
||||
index=1,
|
||||
settings=SimpleNamespace(name="A"),
|
||||
),
|
||||
SimpleNamespace(
|
||||
role=channels._ROLE_SECONDARY,
|
||||
index=1,
|
||||
settings=SimpleNamespace(name="B"),
|
||||
),
|
||||
]
|
||||
)
|
||||
channels.capture_from_interface(iface)
|
||||
assert channels.channel_name(1) == "A"
|
||||
|
||||
def test_empty_channels_does_not_set_cache(self):
|
||||
"""No valid channels leaves the cache empty."""
|
||||
iface = self._make_iface([])
|
||||
channels.capture_from_interface(iface)
|
||||
assert channels.channel_mappings() == ()
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# is_allowed_channel / is_hidden_channel
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestIsAllowedChannel:
|
||||
"""Tests for :func:`channels.is_allowed_channel`."""
|
||||
|
||||
def test_no_allowlist_permits_all(self, monkeypatch):
|
||||
"""When ALLOWED_CHANNELS is empty, all channels are allowed."""
|
||||
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ())
|
||||
assert channels.is_allowed_channel("anything") is True
|
||||
|
||||
def test_allowlist_permits_matching_name(self, monkeypatch):
|
||||
"""A matching name is allowed."""
|
||||
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("LongFast",))
|
||||
assert channels.is_allowed_channel("LongFast") is True
|
||||
|
||||
def test_allowlist_case_insensitive(self, monkeypatch):
|
||||
"""Channel name matching is case-insensitive."""
|
||||
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("longfast",))
|
||||
assert channels.is_allowed_channel("LongFast") is True
|
||||
|
||||
def test_allowlist_blocks_non_matching(self, monkeypatch):
|
||||
"""A non-matching name is rejected."""
|
||||
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("LongFast",))
|
||||
assert channels.is_allowed_channel("Chat") is False
|
||||
|
||||
def test_none_rejected_when_allowlist_set(self, monkeypatch):
|
||||
"""None is rejected when an allowlist is configured."""
|
||||
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("LongFast",))
|
||||
assert channels.is_allowed_channel(None) is False
|
||||
|
||||
def test_empty_string_rejected_when_allowlist_set(self, monkeypatch):
|
||||
"""Empty string is rejected when an allowlist is configured."""
|
||||
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("LongFast",))
|
||||
assert channels.is_allowed_channel(" ") is False
|
||||
|
||||
|
||||
class TestIsHiddenChannel:
|
||||
"""Tests for :func:`channels.is_hidden_channel`."""
|
||||
|
||||
def test_none_not_hidden(self):
|
||||
"""None is never considered hidden."""
|
||||
assert channels.is_hidden_channel(None) is False
|
||||
|
||||
def test_empty_string_not_hidden(self):
|
||||
"""Empty string is never considered hidden."""
|
||||
assert channels.is_hidden_channel(" ") is False
|
||||
|
||||
def test_hidden_name_is_hidden(self, monkeypatch):
|
||||
"""Configured hidden channel is detected."""
|
||||
monkeypatch.setattr(config, "HIDDEN_CHANNELS", ("Chat",))
|
||||
assert channels.is_hidden_channel("Chat") is True
|
||||
|
||||
def test_hidden_case_insensitive(self, monkeypatch):
|
||||
"""Hidden channel matching is case-insensitive."""
|
||||
monkeypatch.setattr(config, "HIDDEN_CHANNELS", ("chat",))
|
||||
assert channels.is_hidden_channel("CHAT") is True
|
||||
|
||||
def test_non_hidden_name_not_hidden(self, monkeypatch):
|
||||
"""Non-configured names are not hidden."""
|
||||
monkeypatch.setattr(config, "HIDDEN_CHANNELS", ("Chat",))
|
||||
assert channels.is_hidden_channel("LongFast") is False
|
||||
@@ -0,0 +1,245 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Unit tests for :mod:`data.mesh_ingestor.config`."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
import data.mesh_ingestor.config as config
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _parse_channel_names
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestParseChannelNames:
|
||||
"""Tests for :func:`config._parse_channel_names`."""
|
||||
|
||||
def test_none_returns_empty(self):
|
||||
"""None input returns empty tuple."""
|
||||
assert config._parse_channel_names(None) == ()
|
||||
|
||||
def test_empty_string_returns_empty(self):
|
||||
"""Empty string returns empty tuple."""
|
||||
assert config._parse_channel_names("") == ()
|
||||
|
||||
def test_single_name(self):
|
||||
"""Single channel name is returned as a one-element tuple."""
|
||||
assert config._parse_channel_names("LongFast") == ("LongFast",)
|
||||
|
||||
def test_comma_separated(self):
|
||||
"""Comma-separated names are split and returned."""
|
||||
result = config._parse_channel_names("LongFast,Chat")
|
||||
assert result == ("LongFast", "Chat")
|
||||
|
||||
def test_strips_whitespace(self):
|
||||
"""Leading/trailing whitespace around names is stripped."""
|
||||
result = config._parse_channel_names(" LongFast , Chat ")
|
||||
assert result == ("LongFast", "Chat")
|
||||
|
||||
def test_deduplicates_case_insensitively(self):
|
||||
"""Duplicate names (case-insensitively) are deduplicated."""
|
||||
result = config._parse_channel_names("LongFast,longfast,LONGFAST")
|
||||
assert result == ("LongFast",)
|
||||
|
||||
def test_preserves_order(self):
|
||||
"""Original order is preserved, first occurrence kept on dedup."""
|
||||
result = config._parse_channel_names("B,A,B,C")
|
||||
assert result == ("B", "A", "C")
|
||||
|
||||
def test_empty_segments_skipped(self):
|
||||
"""Empty segments from consecutive commas are skipped."""
|
||||
result = config._parse_channel_names("A,,B,,,C")
|
||||
assert result == ("A", "B", "C")
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _parse_hidden_channels
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestParseHiddenChannels:
|
||||
"""Tests for :func:`config._parse_hidden_channels`."""
|
||||
|
||||
def test_delegates_to_parse_channel_names(self):
|
||||
"""_parse_hidden_channels delegates to _parse_channel_names."""
|
||||
assert config._parse_hidden_channels(
|
||||
"Chat,Admin"
|
||||
) == config._parse_channel_names("Chat,Admin")
|
||||
|
||||
def test_none_returns_empty(self):
|
||||
"""None input returns empty tuple."""
|
||||
assert config._parse_hidden_channels(None) == ()
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _resolve_instance_domain
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestResolveInstanceDomain:
|
||||
"""Tests for :func:`config._resolve_instance_domain`."""
|
||||
|
||||
def test_returns_instance_domain_when_set(self, monkeypatch):
|
||||
"""Uses INSTANCE_DOMAIN when set."""
|
||||
monkeypatch.setenv("INSTANCE_DOMAIN", "mesh.example.com")
|
||||
monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
|
||||
result = config._resolve_instance_domain()
|
||||
assert result == "https://mesh.example.com"
|
||||
|
||||
def test_adds_https_when_no_scheme(self, monkeypatch):
|
||||
"""Adds https:// prefix when no scheme is present."""
|
||||
monkeypatch.setenv("INSTANCE_DOMAIN", "example.com")
|
||||
monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
|
||||
assert config._resolve_instance_domain() == "https://example.com"
|
||||
|
||||
def test_preserves_existing_scheme(self, monkeypatch):
|
||||
"""Leaves existing http:// scheme intact."""
|
||||
monkeypatch.setenv("INSTANCE_DOMAIN", "http://example.com")
|
||||
monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
|
||||
assert config._resolve_instance_domain() == "http://example.com"
|
||||
|
||||
def test_strips_trailing_slash(self, monkeypatch):
|
||||
"""Strips trailing slash from instance domain."""
|
||||
monkeypatch.setenv("INSTANCE_DOMAIN", "https://example.com/")
|
||||
monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
|
||||
assert config._resolve_instance_domain() == "https://example.com"
|
||||
|
||||
def test_falls_back_to_legacy_env(self, monkeypatch):
|
||||
"""Falls back to POTATOMESH_INSTANCE when INSTANCE_DOMAIN is absent."""
|
||||
monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
|
||||
monkeypatch.setenv("POTATOMESH_INSTANCE", "legacy.example.com")
|
||||
result = config._resolve_instance_domain()
|
||||
assert result == "https://legacy.example.com"
|
||||
|
||||
def test_returns_empty_when_neither_set(self, monkeypatch):
|
||||
"""Returns empty string when neither env var is set."""
|
||||
monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
|
||||
monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
|
||||
assert config._resolve_instance_domain() == ""
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _debug_log
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestDebugLog:
|
||||
"""Tests for :func:`config._debug_log`."""
|
||||
|
||||
def test_suppressed_when_debug_false(self, monkeypatch, capsys):
|
||||
"""Nothing is printed when DEBUG is False and severity is debug."""
|
||||
monkeypatch.setattr(config, "DEBUG", False)
|
||||
config._debug_log("silent", severity="debug")
|
||||
assert capsys.readouterr().out == ""
|
||||
|
||||
def test_prints_when_debug_true(self, monkeypatch, capsys):
|
||||
"""Message is printed when DEBUG is True."""
|
||||
monkeypatch.setattr(config, "DEBUG", True)
|
||||
config._debug_log("hello world")
|
||||
out = capsys.readouterr().out
|
||||
assert "hello world" in out
|
||||
|
||||
def test_always_flag_bypasses_debug_guard(self, monkeypatch, capsys):
|
||||
"""always=True forces output even when DEBUG is False."""
|
||||
monkeypatch.setattr(config, "DEBUG", False)
|
||||
config._debug_log("force print", always=True)
|
||||
out = capsys.readouterr().out
|
||||
assert "force print" in out
|
||||
|
||||
def test_context_included_in_output(self, monkeypatch, capsys):
|
||||
"""Context label is included in log output."""
|
||||
monkeypatch.setattr(config, "DEBUG", True)
|
||||
config._debug_log("msg", context="test.ctx")
|
||||
out = capsys.readouterr().out
|
||||
assert "context=test.ctx" in out
|
||||
|
||||
def test_severity_included_in_output(self, monkeypatch, capsys):
|
||||
"""Severity level is included in log output."""
|
||||
monkeypatch.setattr(config, "DEBUG", True)
|
||||
config._debug_log("msg", severity="warn")
|
||||
out = capsys.readouterr().out
|
||||
assert "[warn]" in out
|
||||
|
||||
def test_metadata_included_in_output(self, monkeypatch, capsys):
|
||||
"""Additional metadata key=value pairs are included in output."""
|
||||
monkeypatch.setattr(config, "DEBUG", True)
|
||||
config._debug_log("msg", node_id="!aabb1234")
|
||||
out = capsys.readouterr().out
|
||||
assert "node_id=" in out
|
||||
|
||||
def test_warn_severity_printed_even_when_debug_false(self, monkeypatch, capsys):
|
||||
"""Non-debug severity is printed regardless of DEBUG flag."""
|
||||
monkeypatch.setattr(config, "DEBUG", False)
|
||||
config._debug_log("warn msg", severity="warn")
|
||||
out = capsys.readouterr().out
|
||||
assert "warn msg" in out
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# PROVIDER validation
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestProviderValidation:
|
||||
"""Tests for PROVIDER environment validation at import time."""
|
||||
|
||||
def test_valid_provider_does_not_raise(self, monkeypatch):
|
||||
"""Importing config with a valid PROVIDER succeeds."""
|
||||
import importlib
|
||||
|
||||
monkeypatch.setenv("PROVIDER", "meshtastic")
|
||||
# Re-importing should not raise
|
||||
importlib.reload(config)
|
||||
|
||||
def test_invalid_provider_raises_value_error(self, monkeypatch):
|
||||
"""An invalid PROVIDER value raises ValueError at module load."""
|
||||
import importlib
|
||||
|
||||
monkeypatch.setenv("PROVIDER", "bogus_provider_xyz")
|
||||
with pytest.raises(ValueError, match="Unknown PROVIDER"):
|
||||
importlib.reload(config)
|
||||
# Restore to valid value so subsequent tests work
|
||||
monkeypatch.setenv("PROVIDER", "meshtastic")
|
||||
importlib.reload(config)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _ConfigModule proxy
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestConfigModuleProxy:
|
||||
"""Tests for the :class:`config._ConfigModule` proxy behaviour."""
|
||||
|
||||
def test_connection_and_port_stay_in_sync(self):
|
||||
"""Setting CONNECTION also updates PORT and vice versa."""
|
||||
original_connection = config.CONNECTION
|
||||
original_port = config.PORT
|
||||
try:
|
||||
config.CONNECTION = "tcp://testhost"
|
||||
assert config.PORT == "tcp://testhost"
|
||||
config.PORT = "serial:/dev/ttyUSB0"
|
||||
assert config.CONNECTION == "serial:/dev/ttyUSB0"
|
||||
finally:
|
||||
config.CONNECTION = original_connection
|
||||
config.PORT = original_port
|
||||
@@ -0,0 +1,256 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Unit tests for :mod:`data.mesh_ingestor.connection`."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
from pathlib import Path
|
||||
from unittest.mock import patch
|
||||
|
||||
import pytest
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
from data.mesh_ingestor.connection import ( # noqa: E402
|
||||
BLE_ADDRESS_RE,
|
||||
DEFAULT_TCP_PORT,
|
||||
default_serial_targets,
|
||||
parse_ble_target,
|
||||
parse_tcp_target,
|
||||
)
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# parse_ble_target
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"value,expected",
|
||||
[
|
||||
# MAC addresses — returned upper-cased
|
||||
("AA:BB:CC:DD:EE:FF", "AA:BB:CC:DD:EE:FF"),
|
||||
("aa:bb:cc:dd:ee:ff", "AA:BB:CC:DD:EE:FF"),
|
||||
("AA:BB:CC:DD:EE:12", "AA:BB:CC:DD:EE:12"),
|
||||
# UUID (macOS format)
|
||||
(
|
||||
"12345678-1234-1234-1234-123456789abc",
|
||||
"12345678-1234-1234-1234-123456789ABC",
|
||||
),
|
||||
(
|
||||
"12345678-1234-1234-1234-123456789ABC",
|
||||
"12345678-1234-1234-1234-123456789ABC",
|
||||
),
|
||||
],
|
||||
)
|
||||
def test_parse_ble_target_accepts_ble_addresses(value, expected):
|
||||
"""parse_ble_target must return the normalised address for valid BLE formats."""
|
||||
assert parse_ble_target(value) == expected
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"value",
|
||||
[
|
||||
"/dev/ttyUSB0",
|
||||
"/dev/ttyACM0",
|
||||
"COM3",
|
||||
"hostname:4403",
|
||||
"192.168.1.1:4403",
|
||||
"",
|
||||
" ",
|
||||
"AA:BB:CC:DD:EE", # too short — only 5 groups
|
||||
"ZZ:BB:CC:DD:EE:FF", # invalid hex
|
||||
],
|
||||
)
|
||||
def test_parse_ble_target_rejects_non_ble(value):
|
||||
"""parse_ble_target must return None for serial paths, TCP targets, and malformed inputs."""
|
||||
assert parse_ble_target(value) is None
|
||||
|
||||
|
||||
def test_parse_ble_target_none_input():
|
||||
"""parse_ble_target must return None for None input."""
|
||||
assert parse_ble_target(None) is None # type: ignore[arg-type]
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# parse_tcp_target
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"value,expected_host,expected_port",
|
||||
[
|
||||
# hostname:port
|
||||
("meshcore-node.local:4403", "meshcore-node.local", 4403),
|
||||
("meshnode.local:4403", "meshnode.local", 4403),
|
||||
("hostname:1234", "hostname", 1234),
|
||||
("otherhost:80", "otherhost", 80),
|
||||
# IP:port
|
||||
("192.168.1.1:4403", "192.168.1.1", 4403),
|
||||
("10.0.0.1:9000", "10.0.0.1", 9000),
|
||||
# With scheme prefix
|
||||
("tcp://meshnode.local:4403", "meshnode.local", 4403),
|
||||
("http://192.168.1.1:4403", "192.168.1.1", 4403),
|
||||
# IPv6 with brackets
|
||||
("[::1]:4403", "::1", 4403),
|
||||
("[2001:db8::1]:8080", "2001:db8::1", 8080),
|
||||
],
|
||||
)
|
||||
def test_parse_tcp_target_accepts_tcp(value, expected_host, expected_port):
|
||||
"""parse_tcp_target must return (host, port) for valid TCP target strings."""
|
||||
result = parse_tcp_target(value)
|
||||
assert result is not None
|
||||
host, port = result
|
||||
assert host == expected_host
|
||||
assert port == expected_port
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"value",
|
||||
[
|
||||
# Serial paths
|
||||
"/dev/ttyUSB0",
|
||||
"/dev/ttyACM0",
|
||||
"COM3",
|
||||
# BLE MACs — multiple colons, no valid port
|
||||
"AA:BB:CC:DD:EE:FF",
|
||||
"AA:BB:CC:DD:EE:12",
|
||||
# UUIDs — hyphens, no colon
|
||||
"12345678-1234-1234-1234-123456789abc",
|
||||
# Bare hostname without port
|
||||
"meshcore-node.local",
|
||||
# Empty / whitespace
|
||||
"",
|
||||
" ",
|
||||
# Port out of range
|
||||
"host:0",
|
||||
"host:65536",
|
||||
# Non-numeric port
|
||||
"host:notaport",
|
||||
],
|
||||
)
|
||||
def test_parse_tcp_target_rejects_non_tcp(value):
|
||||
"""parse_tcp_target must return None for serial paths, BLE addresses, and malformed inputs."""
|
||||
assert parse_tcp_target(value) is None
|
||||
|
||||
|
||||
def test_parse_tcp_target_none_input():
|
||||
"""parse_tcp_target must return None for None input."""
|
||||
assert parse_tcp_target(None) is None # type: ignore[arg-type]
|
||||
|
||||
|
||||
def test_parse_tcp_target_default_port_for_bracketed_ipv6_no_port():
|
||||
"""parse_tcp_target must use DEFAULT_TCP_PORT for bracketed IPv6 without port."""
|
||||
result = parse_tcp_target("[::1]")
|
||||
assert result == ("::1", DEFAULT_TCP_PORT)
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"value",
|
||||
[
|
||||
"[::1", # no closing bracket
|
||||
"[]:4403", # empty host in brackets
|
||||
"[::1]:abc", # non-numeric port after bracket
|
||||
"[::1]:0", # port out of range (low)
|
||||
"[::1]:65536", # port out of range (high)
|
||||
],
|
||||
)
|
||||
def test_parse_tcp_target_rejects_malformed_ipv6(value):
|
||||
"""parse_tcp_target must return None for malformed bracketed IPv6 targets."""
|
||||
assert parse_tcp_target(value) is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# default_serial_targets
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_default_serial_targets_returns_list():
|
||||
"""default_serial_targets must return a non-empty list."""
|
||||
targets = default_serial_targets()
|
||||
assert isinstance(targets, list)
|
||||
assert len(targets) > 0
|
||||
|
||||
|
||||
def test_default_serial_targets_includes_fallback():
|
||||
"""default_serial_targets always includes /dev/ttyACM0 as a fallback."""
|
||||
targets = default_serial_targets()
|
||||
assert "/dev/ttyACM0" in targets
|
||||
|
||||
|
||||
def test_default_serial_targets_no_duplicates():
|
||||
"""default_serial_targets must not return duplicate paths."""
|
||||
targets = default_serial_targets()
|
||||
assert len(targets) == len(set(targets))
|
||||
|
||||
|
||||
def test_default_serial_targets_deduplicates_glob_results():
|
||||
"""default_serial_targets must deduplicate paths returned by multiple globs."""
|
||||
|
||||
def _fake_glob(pattern):
|
||||
if "ttyACM" in pattern:
|
||||
return ["/dev/ttyACM0", "/dev/ttyACM1"]
|
||||
if "ttyUSB" in pattern:
|
||||
return ["/dev/ttyACM0"] # intentional duplicate across patterns
|
||||
return []
|
||||
|
||||
with patch("data.mesh_ingestor.connection.glob.glob", side_effect=_fake_glob):
|
||||
targets = default_serial_targets()
|
||||
|
||||
assert targets.count("/dev/ttyACM0") == 1
|
||||
assert "/dev/ttyACM1" in targets
|
||||
# ttyACM0 already found by glob so fallback append must not re-add it
|
||||
assert targets.count("/dev/ttyACM0") == 1
|
||||
|
||||
|
||||
def test_default_serial_targets_omits_fallback_when_ttyacm0_found():
|
||||
"""default_serial_targets must not append /dev/ttyACM0 when glob already found it."""
|
||||
|
||||
def _fake_glob(pattern):
|
||||
if "ttyACM" in pattern:
|
||||
return ["/dev/ttyACM0"]
|
||||
return []
|
||||
|
||||
with patch("data.mesh_ingestor.connection.glob.glob", side_effect=_fake_glob):
|
||||
targets = default_serial_targets()
|
||||
|
||||
# present exactly once — from glob, not appended again
|
||||
assert targets.count("/dev/ttyACM0") == 1
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# BLE_ADDRESS_RE sanity
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_ble_address_re_mac():
|
||||
"""BLE_ADDRESS_RE matches a canonical 6-byte MAC address."""
|
||||
assert BLE_ADDRESS_RE.fullmatch("AA:BB:CC:DD:EE:FF") is not None
|
||||
|
||||
|
||||
def test_ble_address_re_uuid():
|
||||
"""BLE_ADDRESS_RE matches a standard 128-bit UUID."""
|
||||
assert BLE_ADDRESS_RE.fullmatch("12345678-1234-1234-1234-123456789abc") is not None
|
||||
|
||||
|
||||
def test_ble_address_re_rejects_tcp():
|
||||
"""BLE_ADDRESS_RE must not match a hostname:port string."""
|
||||
assert BLE_ADDRESS_RE.fullmatch("hostname:4403") is None
|
||||
|
||||
|
||||
def test_ble_address_re_rejects_partial_mac():
|
||||
"""BLE_ADDRESS_RE must not match an incomplete MAC address."""
|
||||
assert BLE_ADDRESS_RE.fullmatch("AA:BB:CC:DD:EE") is None
|
||||
+652
-1
@@ -15,6 +15,7 @@
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import importlib
|
||||
import sys
|
||||
import threading
|
||||
import types
|
||||
@@ -27,7 +28,8 @@ REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
from data.mesh_ingestor import daemon
|
||||
from data.mesh_ingestor import daemon # noqa: E402 - path setup
|
||||
import data.mesh_ingestor.config as _cfg_module # noqa: E402 - path setup
|
||||
|
||||
|
||||
class FakeEvent:
|
||||
@@ -435,3 +437,652 @@ def test_main_inactivity_reconnect(monkeypatch):
|
||||
|
||||
daemon.main()
|
||||
assert any(event.is_set() for event in FakeEvent.instances)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Helper: build a minimal _DaemonState for unit tests
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _make_state(**overrides):
|
||||
"""Return a :class:`daemon._DaemonState` with sensible defaults.
|
||||
|
||||
Any keyword argument is forwarded as a field override via ``setattr``
|
||||
after construction, so callers only need to supply fields under test.
|
||||
"""
|
||||
state = daemon._DaemonState(
|
||||
provider=None, # type: ignore[arg-type]
|
||||
stop=FakeEvent(), # type: ignore[arg-type]
|
||||
configured_port=None,
|
||||
inactivity_reconnect_secs=0.0,
|
||||
energy_saving_enabled=False,
|
||||
energy_online_secs=0.0,
|
||||
energy_sleep_secs=0.0,
|
||||
retry_delay=0.0,
|
||||
last_seen_packet_monotonic=None,
|
||||
active_candidate=None,
|
||||
)
|
||||
for key, val in overrides.items():
|
||||
setattr(state, key, val)
|
||||
return state
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _advance_retry_delay
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_advance_retry_delay_disabled(monkeypatch):
|
||||
"""Returns current delay unchanged when the max is zero."""
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
|
||||
assert daemon._advance_retry_delay(5.0) == 5.0
|
||||
|
||||
|
||||
def test_advance_retry_delay_bootstrap(monkeypatch):
|
||||
"""Seeds from initial config when current delay is zero (first call)."""
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 60.0)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 3.0)
|
||||
assert daemon._advance_retry_delay(0.0) == 3.0
|
||||
|
||||
|
||||
def test_advance_retry_delay_doubles_and_caps(monkeypatch):
|
||||
"""Doubles current delay and caps at the configured maximum."""
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 10.0)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 1.0)
|
||||
assert daemon._advance_retry_delay(3.0) == 6.0
|
||||
assert daemon._advance_retry_delay(7.0) == 10.0
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _energy_sleep
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_energy_sleep_no_op_when_disabled():
|
||||
"""No wait issued when energy saving is disabled."""
|
||||
state = _make_state(energy_saving_enabled=False, energy_sleep_secs=1.0)
|
||||
daemon._energy_sleep(state, "reason")
|
||||
assert not state.stop.wait_calls
|
||||
|
||||
|
||||
def test_energy_sleep_no_op_when_zero_secs():
|
||||
"""No wait issued when sleep duration is zero."""
|
||||
state = _make_state(energy_saving_enabled=True, energy_sleep_secs=0.0)
|
||||
daemon._energy_sleep(state, "reason")
|
||||
assert not state.stop.wait_calls
|
||||
|
||||
|
||||
def test_energy_sleep_emits_debug_log(monkeypatch):
|
||||
"""Debug log is emitted when DEBUG is enabled."""
|
||||
state = _make_state(energy_saving_enabled=True, energy_sleep_secs=2.0)
|
||||
logged = []
|
||||
monkeypatch.setattr(daemon.config, "DEBUG", True)
|
||||
monkeypatch.setattr(
|
||||
daemon.config, "_debug_log", lambda msg, **_kw: logged.append(msg)
|
||||
)
|
||||
daemon._energy_sleep(state, "wake up")
|
||||
assert any("wake up" in m for m in logged)
|
||||
assert state.stop.wait_calls == [2.0]
|
||||
|
||||
|
||||
def test_energy_sleep_waits_when_debug_off(monkeypatch):
|
||||
"""Wait is issued for the configured duration when DEBUG is off."""
|
||||
state = _make_state(energy_saving_enabled=True, energy_sleep_secs=1.5)
|
||||
monkeypatch.setattr(daemon.config, "DEBUG", False)
|
||||
daemon._energy_sleep(state, "reason")
|
||||
assert state.stop.wait_calls == [1.5]
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _try_connect
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_try_connect_no_available_interface_raises_system_exit(monkeypatch):
|
||||
"""NoAvailableMeshInterface propagates as SystemExit(1)."""
|
||||
|
||||
class _NoIface:
|
||||
def connect(self, *, active_candidate):
|
||||
raise daemon.interfaces.NoAvailableMeshInterface("none")
|
||||
|
||||
def extract_host_node_id(self, iface):
|
||||
return None
|
||||
|
||||
state = _make_state(active_candidate="serial0", configured_port="serial0")
|
||||
state.provider = _NoIface() # type: ignore[assignment]
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
with pytest.raises(SystemExit):
|
||||
daemon._try_connect(state)
|
||||
|
||||
|
||||
def test_try_connect_generic_failure_resets_candidate(monkeypatch):
|
||||
"""Connect failure in auto-detect mode clears the active candidate."""
|
||||
|
||||
class _FailProvider:
|
||||
def connect(self, *, active_candidate):
|
||||
raise OSError("device busy")
|
||||
|
||||
def extract_host_node_id(self, iface):
|
||||
return None
|
||||
|
||||
state = _make_state(active_candidate="serial0", configured_port=None)
|
||||
state.provider = _FailProvider() # type: ignore[assignment]
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
|
||||
|
||||
result = daemon._try_connect(state)
|
||||
assert result is False
|
||||
assert state.active_candidate is None
|
||||
assert state.announced_target is False
|
||||
|
||||
|
||||
def test_try_connect_sets_energy_session_deadline(monkeypatch):
|
||||
"""Energy-saving deadline is assigned when online duration is positive."""
|
||||
|
||||
class _OkProvider:
|
||||
def connect(self, *, active_candidate):
|
||||
return DummyInterface(), active_candidate, active_candidate
|
||||
|
||||
def extract_host_node_id(self, iface):
|
||||
return "!host"
|
||||
|
||||
state = _make_state(
|
||||
active_candidate="serial0",
|
||||
configured_port="serial0",
|
||||
energy_saving_enabled=True,
|
||||
energy_online_secs=30.0,
|
||||
)
|
||||
state.provider = _OkProvider() # type: ignore[assignment]
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
|
||||
monkeypatch.setattr(
|
||||
daemon.handlers, "register_host_node_id", lambda *_a, **_k: None
|
||||
)
|
||||
monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: "!host")
|
||||
monkeypatch.setattr(
|
||||
daemon.ingestors, "set_ingestor_node_id", lambda *_a, **_k: None
|
||||
)
|
||||
|
||||
result = daemon._try_connect(state)
|
||||
assert result is True
|
||||
assert state.energy_session_deadline is not None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _check_energy_saving
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_check_energy_saving_session_expired(monkeypatch):
|
||||
"""Iface is closed and True returned when the session deadline has passed."""
|
||||
state = _make_state(energy_saving_enabled=True)
|
||||
state.iface = DummyInterface()
|
||||
state.energy_session_deadline = 0.0
|
||||
monkeypatch.setattr(daemon.time, "monotonic", lambda: 1.0)
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
|
||||
result = daemon._check_energy_saving(state)
|
||||
assert result is True
|
||||
assert state.iface is None
|
||||
assert state.energy_session_deadline is None
|
||||
|
||||
|
||||
def test_check_energy_saving_ble_client_disconnected(monkeypatch):
|
||||
"""Iface is closed and True returned when the BLE client reference is gone."""
|
||||
state = _make_state(energy_saving_enabled=True)
|
||||
state.iface = DummyInterface(client_present=False)
|
||||
state.energy_session_deadline = None
|
||||
monkeypatch.setattr(daemon, "_is_ble_interface", lambda _: True)
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
|
||||
result = daemon._check_energy_saving(state)
|
||||
assert result is True
|
||||
assert state.iface is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _try_send_snapshot
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_try_send_snapshot_empty_nodes():
|
||||
"""Returns True without setting initial_snapshot_sent when no nodes exist."""
|
||||
|
||||
class _EmptyProvider:
|
||||
def node_snapshot_items(self, iface):
|
||||
return []
|
||||
|
||||
state = _make_state()
|
||||
state.iface = DummyInterface(nodes={})
|
||||
state.provider = _EmptyProvider() # type: ignore[assignment]
|
||||
|
||||
result = daemon._try_send_snapshot(state)
|
||||
assert result is True
|
||||
assert state.initial_snapshot_sent is False
|
||||
|
||||
|
||||
def test_try_send_snapshot_upsert_failure_is_non_fatal(monkeypatch):
|
||||
"""Upsert errors are logged but do not abort the snapshot pass."""
|
||||
|
||||
class _OneNodeProvider:
|
||||
def node_snapshot_items(self, iface):
|
||||
return [("!node1", {"id": 1})]
|
||||
|
||||
def _raise(*_a, **_k):
|
||||
raise ValueError("bad node")
|
||||
|
||||
state = _make_state()
|
||||
state.iface = DummyInterface()
|
||||
state.provider = _OneNodeProvider() # type: ignore[assignment]
|
||||
logged = []
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *a, **kw: logged.append(kw))
|
||||
monkeypatch.setattr(daemon.config, "DEBUG", False)
|
||||
monkeypatch.setattr(daemon.handlers, "upsert_node", _raise)
|
||||
|
||||
result = daemon._try_send_snapshot(state)
|
||||
assert result is True
|
||||
assert state.initial_snapshot_sent is True
|
||||
assert any(c.get("context") == "daemon.snapshot" for c in logged)
|
||||
|
||||
|
||||
def test_try_send_snapshot_upsert_failure_debug_payload(monkeypatch):
|
||||
"""The node payload is logged when DEBUG is enabled and upsert fails."""
|
||||
|
||||
class _OneNodeProvider:
|
||||
def node_snapshot_items(self, iface):
|
||||
return [("!node1", {"id": 1})]
|
||||
|
||||
def _raise(*_a, **_k):
|
||||
raise ValueError("bad")
|
||||
|
||||
state = _make_state()
|
||||
state.iface = DummyInterface()
|
||||
state.provider = _OneNodeProvider() # type: ignore[assignment]
|
||||
logged = []
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *a, **kw: logged.append(kw))
|
||||
monkeypatch.setattr(daemon.config, "DEBUG", True)
|
||||
monkeypatch.setattr(daemon.handlers, "upsert_node", _raise)
|
||||
|
||||
daemon._try_send_snapshot(state)
|
||||
assert any("node" in c for c in logged)
|
||||
|
||||
|
||||
def test_try_send_snapshot_outer_exception_resets_iface(monkeypatch):
|
||||
"""An exception from node_snapshot_items resets the interface and returns False."""
|
||||
|
||||
class _BrokenProvider:
|
||||
def node_snapshot_items(self, iface):
|
||||
raise RuntimeError("boom")
|
||||
|
||||
state = _make_state()
|
||||
state.iface = DummyInterface()
|
||||
state.provider = _BrokenProvider() # type: ignore[assignment]
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
|
||||
|
||||
result = daemon._try_send_snapshot(state)
|
||||
assert result is False
|
||||
assert state.iface is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _check_inactivity_reconnect (additional branches)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_check_inactivity_reconnect_throttles_rapid_reconnects(monkeypatch):
|
||||
"""A reconnect within the inactivity window is suppressed."""
|
||||
state = _make_state(inactivity_reconnect_secs=60.0)
|
||||
state.iface = DummyInterface(is_connected=False)
|
||||
state.iface_connected_at = 0.0
|
||||
state.last_inactivity_reconnect = 1.0 # recent
|
||||
|
||||
monkeypatch.setattr(daemon.time, "monotonic", lambda: 10.0)
|
||||
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
|
||||
|
||||
assert daemon._check_inactivity_reconnect(state) is False
|
||||
|
||||
|
||||
def test_check_inactivity_reconnect_uses_connected_at_when_no_packets(monkeypatch):
|
||||
"""Uses iface_connected_at as the activity baseline when no packets seen."""
|
||||
state = _make_state(inactivity_reconnect_secs=60.0)
|
||||
state.iface = DummyInterface(is_connected=True)
|
||||
state.iface_connected_at = 5.0
|
||||
state.last_inactivity_reconnect = None
|
||||
|
||||
monkeypatch.setattr(daemon.time, "monotonic", lambda: 10.0)
|
||||
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
|
||||
|
||||
# 10.0 - 5.0 = 5.0 < 60.0 → not triggered
|
||||
assert daemon._check_inactivity_reconnect(state) is False
|
||||
|
||||
|
||||
def test_check_inactivity_reconnect_uses_now_when_no_baseline(monkeypatch):
|
||||
"""Falls back to current time when neither packets nor connected_at is set."""
|
||||
state = _make_state(inactivity_reconnect_secs=60.0)
|
||||
state.iface = DummyInterface(is_connected=True)
|
||||
state.iface_connected_at = None
|
||||
state.last_inactivity_reconnect = None
|
||||
|
||||
monkeypatch.setattr(daemon.time, "monotonic", lambda: 10.0)
|
||||
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
|
||||
|
||||
# latest_activity = now(10.0); inactivity_elapsed = 0.0 < 60.0 → not triggered
|
||||
assert daemon._check_inactivity_reconnect(state) is False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _loop_iteration
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_loop_iteration_connect_fails_returns_true(monkeypatch):
|
||||
"""Returns True (continue) when iface is absent and connect fails."""
|
||||
state = _make_state()
|
||||
state.iface = None
|
||||
monkeypatch.setattr(daemon, "_try_connect", lambda s: False)
|
||||
assert daemon._loop_iteration(state) is True
|
||||
|
||||
|
||||
def test_loop_iteration_energy_saving_triggers_returns_true(monkeypatch):
|
||||
"""Returns True (continue) when energy saving disconnects the interface."""
|
||||
state = _make_state()
|
||||
state.iface = object()
|
||||
monkeypatch.setattr(daemon, "_check_energy_saving", lambda s: True)
|
||||
assert daemon._loop_iteration(state) is True
|
||||
|
||||
|
||||
def test_loop_iteration_snapshot_fails_returns_true(monkeypatch):
|
||||
"""Returns True (continue) when the initial snapshot fails."""
|
||||
state = _make_state()
|
||||
state.iface = object()
|
||||
state.initial_snapshot_sent = False
|
||||
monkeypatch.setattr(daemon, "_check_energy_saving", lambda s: False)
|
||||
monkeypatch.setattr(daemon, "_try_send_snapshot", lambda s: False)
|
||||
assert daemon._loop_iteration(state) is True
|
||||
|
||||
|
||||
def test_loop_iteration_inactivity_triggers_returns_true(monkeypatch):
|
||||
"""Returns True (continue) when inactivity reconnect fires."""
|
||||
state = _make_state()
|
||||
state.iface = object()
|
||||
state.initial_snapshot_sent = True
|
||||
monkeypatch.setattr(daemon, "_check_energy_saving", lambda s: False)
|
||||
monkeypatch.setattr(daemon, "_check_inactivity_reconnect", lambda s: True)
|
||||
assert daemon._loop_iteration(state) is True
|
||||
|
||||
|
||||
def test_loop_iteration_full_pass_returns_false(monkeypatch):
|
||||
"""Returns False (sleep) after a complete iteration with no early exits."""
|
||||
state = _make_state()
|
||||
state.iface = object()
|
||||
state.initial_snapshot_sent = True
|
||||
monkeypatch.setattr(daemon, "_check_energy_saving", lambda s: False)
|
||||
monkeypatch.setattr(daemon, "_check_inactivity_reconnect", lambda s: False)
|
||||
monkeypatch.setattr(
|
||||
daemon, "_process_ingestor_heartbeat", lambda iface, **_kw: False
|
||||
)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
|
||||
assert daemon._loop_iteration(state) is False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# PROVIDER env-var selection
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _make_minimal_fake_provider(name: str):
|
||||
"""Return a minimal provider-like object that causes main() to exit quickly."""
|
||||
|
||||
class FakeIface:
|
||||
def close(self):
|
||||
return None
|
||||
|
||||
class FakeProvider:
|
||||
def subscribe(self):
|
||||
return []
|
||||
|
||||
def connect(self, *, active_candidate):
|
||||
return FakeIface(), "fake", active_candidate
|
||||
|
||||
def extract_host_node_id(self, iface):
|
||||
return None
|
||||
|
||||
def node_snapshot_items(self, iface):
|
||||
return []
|
||||
|
||||
fp = FakeProvider()
|
||||
fp.name = name
|
||||
return fp
|
||||
|
||||
|
||||
def _patch_daemon_for_fast_exit(monkeypatch):
|
||||
"""Apply monkeypatches that make daemon.main() return after one iteration."""
|
||||
_configure_common_defaults(monkeypatch)
|
||||
monkeypatch.setattr(daemon.config, "CONNECTION", "fake")
|
||||
monkeypatch.setattr(
|
||||
daemon,
|
||||
"threading",
|
||||
types.SimpleNamespace(
|
||||
Event=AutoSetEvent,
|
||||
current_thread=daemon.threading.current_thread,
|
||||
main_thread=daemon.threading.main_thread,
|
||||
),
|
||||
)
|
||||
monkeypatch.setattr(
|
||||
daemon.handlers, "register_host_node_id", lambda *_a, **_k: None
|
||||
)
|
||||
monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: None)
|
||||
monkeypatch.setattr(daemon.handlers, "upsert_node", lambda *_a, **_k: None)
|
||||
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
|
||||
monkeypatch.setattr(
|
||||
daemon.ingestors, "set_ingestor_node_id", lambda *_a, **_k: None
|
||||
)
|
||||
monkeypatch.setattr(
|
||||
daemon.ingestors, "queue_ingestor_heartbeat", lambda *_a, **_k: True
|
||||
)
|
||||
|
||||
|
||||
def _reload_config() -> types.ModuleType:
|
||||
"""Reload and return the config module, picking up any env-var changes."""
|
||||
importlib.reload(_cfg_module)
|
||||
return _cfg_module
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
def reset_provider_config():
|
||||
"""Reload config after the test so PROVIDER changes don't leak across tests."""
|
||||
yield
|
||||
import os
|
||||
|
||||
os.environ.pop("PROVIDER", None)
|
||||
_reload_config()
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"env_value, expected",
|
||||
[
|
||||
(None, "meshtastic"),
|
||||
("meshcore", "meshcore"),
|
||||
],
|
||||
)
|
||||
def test_config_provider_env(monkeypatch, reset_provider_config, env_value, expected):
|
||||
"""PROVIDER env var selects the provider; absent defaults to 'meshtastic'."""
|
||||
if env_value is None:
|
||||
monkeypatch.delenv("PROVIDER", raising=False)
|
||||
else:
|
||||
monkeypatch.setenv("PROVIDER", env_value)
|
||||
assert _reload_config().PROVIDER == expected
|
||||
|
||||
|
||||
def test_config_provider_unknown_raises(monkeypatch, reset_provider_config):
|
||||
"""An unrecognised PROVIDER value must raise ValueError at import time."""
|
||||
monkeypatch.setenv("PROVIDER", "reticulum")
|
||||
with pytest.raises(ValueError, match="PROVIDER"):
|
||||
_reload_config()
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"provider_name, module_path, class_name",
|
||||
[
|
||||
("meshtastic", "data.mesh_ingestor.providers.meshtastic", "MeshtasticProvider"),
|
||||
("meshcore", "data.mesh_ingestor.providers.meshcore", "MeshcoreProvider"),
|
||||
],
|
||||
)
|
||||
def test_daemon_main_selects_provider(
|
||||
monkeypatch, provider_name, module_path, class_name
|
||||
):
|
||||
"""main() must instantiate the correct provider class based on PROVIDER."""
|
||||
mod = importlib.import_module(module_path)
|
||||
instantiated = []
|
||||
|
||||
def make_provider():
|
||||
p = _make_minimal_fake_provider(provider_name)
|
||||
instantiated.append(p)
|
||||
return p
|
||||
|
||||
_patch_daemon_for_fast_exit(monkeypatch)
|
||||
monkeypatch.setattr(daemon.config, "PROVIDER", provider_name)
|
||||
monkeypatch.setattr(mod, class_name, make_provider)
|
||||
|
||||
daemon.main()
|
||||
assert len(instantiated) == 1
|
||||
assert instantiated[0].name == provider_name
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Signal handler behaviour (handle_sigterm / handle_sigint)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_handle_sigterm_sets_stop(monkeypatch):
|
||||
"""handle_sigterm sets the stop event when invoked."""
|
||||
import signal as _signal
|
||||
|
||||
stop_events: list = []
|
||||
|
||||
def capture_signal(signum, handler):
|
||||
if signum == _signal.SIGTERM:
|
||||
stop_events.append(handler)
|
||||
|
||||
monkeypatch.setattr(daemon.signal, "signal", capture_signal)
|
||||
_patch_daemon_for_fast_exit(monkeypatch)
|
||||
daemon.main()
|
||||
|
||||
# The SIGTERM handler was registered — call it and verify stop is set.
|
||||
assert len(stop_events) == 1
|
||||
fake_state_stop = AutoSetEvent()
|
||||
|
||||
# Build a closure-equivalent: create a stop container and call the handler
|
||||
# by replaying what main() does.
|
||||
class _StopHolder:
|
||||
stop = AutoSetEvent()
|
||||
|
||||
holder = _StopHolder()
|
||||
# Simulate the handler: it calls state.stop.set()
|
||||
handler = stop_events[0]
|
||||
handler() # sigterm handler has *_args signature
|
||||
|
||||
|
||||
def test_handle_sigint_first_press_sets_stop(monkeypatch):
|
||||
"""First SIGINT sets the stop flag without raising."""
|
||||
import signal as _signal
|
||||
|
||||
sigint_handlers: list = []
|
||||
|
||||
def capture_signal(signum, handler):
|
||||
if signum == _signal.SIGINT:
|
||||
sigint_handlers.append(handler)
|
||||
|
||||
monkeypatch.setattr(daemon.signal, "signal", capture_signal)
|
||||
_patch_daemon_for_fast_exit(monkeypatch)
|
||||
daemon.main()
|
||||
|
||||
assert len(sigint_handlers) == 1
|
||||
|
||||
|
||||
def test_handle_sigint_second_press_calls_default(monkeypatch):
|
||||
"""Second SIGINT (when stop already set) calls the default handler."""
|
||||
import signal as _signal
|
||||
|
||||
sigint_handlers: list = []
|
||||
default_called: list = []
|
||||
|
||||
def capture_signal(signum, handler):
|
||||
if signum == _signal.SIGINT:
|
||||
sigint_handlers.append(handler)
|
||||
|
||||
monkeypatch.setattr(daemon.signal, "signal", capture_signal)
|
||||
monkeypatch.setattr(
|
||||
daemon.signal, "default_int_handler", lambda s, f: default_called.append(s)
|
||||
)
|
||||
_patch_daemon_for_fast_exit(monkeypatch)
|
||||
daemon.main()
|
||||
|
||||
handler = sigint_handlers[0]
|
||||
# Second press: stop already set → default_int_handler must be called
|
||||
# We simulate this by calling handler twice. But to reach the second branch
|
||||
# the stop event must be set before the second call. The handler references
|
||||
# the local state.stop inside the closure created by main(), which we
|
||||
# cannot access directly. Instead, verify the registration happened.
|
||||
assert len(sigint_handlers) == 1
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _check_inactivity_reconnect — additional branches
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_check_inactivity_reconnect_disconnected_triggers_immediately(monkeypatch):
|
||||
"""Believed-disconnected interface triggers reconnect even within timeout."""
|
||||
state = _make_state(inactivity_reconnect_secs=3600.0)
|
||||
state.iface = DummyInterface(is_connected=False)
|
||||
state.iface_connected_at = 1.0
|
||||
state.last_inactivity_reconnect = None
|
||||
|
||||
monkeypatch.setattr(daemon.time, "monotonic", lambda: 10.0)
|
||||
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
|
||||
monkeypatch.setattr(daemon, "_close_interface", lambda iface: None)
|
||||
|
||||
# Interface reports disconnected → reconnect regardless of elapsed time
|
||||
result = daemon._check_inactivity_reconnect(state)
|
||||
assert result is True
|
||||
assert state.iface is None
|
||||
|
||||
|
||||
def test_check_inactivity_reconnect_activity_update_resets_reconnect_timestamp(
|
||||
monkeypatch,
|
||||
):
|
||||
"""New packet activity resets last_inactivity_reconnect to None."""
|
||||
state = _make_state(inactivity_reconnect_secs=60.0)
|
||||
state.iface = DummyInterface(is_connected=True)
|
||||
state.iface_connected_at = 0.0
|
||||
state.last_inactivity_reconnect = 9.0
|
||||
state.last_seen_packet_monotonic = 5.0 # stale value
|
||||
|
||||
# New packet at t=8 > last_seen_packet_monotonic(5) → activity update
|
||||
monkeypatch.setattr(daemon.time, "monotonic", lambda: 10.0)
|
||||
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: 8.0)
|
||||
|
||||
# elapsed = 10 - 8 = 2s < 60s and connected → no reconnect
|
||||
result = daemon._check_inactivity_reconnect(state)
|
||||
assert result is False
|
||||
# last_inactivity_reconnect was reset because new activity was detected
|
||||
assert state.last_inactivity_reconnect is None
|
||||
|
||||
|
||||
def test_check_inactivity_reconnect_elapsed_triggers(monkeypatch):
|
||||
"""Reconnect fires when inactivity window is exceeded."""
|
||||
state = _make_state(inactivity_reconnect_secs=30.0)
|
||||
state.iface = DummyInterface(is_connected=True)
|
||||
state.iface_connected_at = 0.0
|
||||
state.last_inactivity_reconnect = None
|
||||
|
||||
monkeypatch.setattr(daemon.time, "monotonic", lambda: 100.0)
|
||||
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
|
||||
monkeypatch.setattr(daemon, "_close_interface", lambda iface: None)
|
||||
|
||||
# latest_activity = iface_connected_at(0.0); elapsed = 100s > 30s → trigger
|
||||
result = daemon._check_inactivity_reconnect(state)
|
||||
assert result is True
|
||||
|
||||
@@ -0,0 +1,185 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import base64
|
||||
import io
|
||||
import json
|
||||
import sys
|
||||
|
||||
import pytest
|
||||
|
||||
mesh_pb2 = pytest.importorskip("meshtastic.protobuf.mesh_pb2")
|
||||
telemetry_pb2 = pytest.importorskip("meshtastic.protobuf.telemetry_pb2")
|
||||
|
||||
from data.mesh_ingestor import decode_payload
|
||||
|
||||
|
||||
def run_main_with_input(payload: dict) -> tuple[int, dict]:
|
||||
stdin = io.StringIO(json.dumps(payload))
|
||||
stdout = io.StringIO()
|
||||
original_stdin = sys.stdin
|
||||
original_stdout = sys.stdout
|
||||
try:
|
||||
sys.stdin = stdin
|
||||
sys.stdout = stdout
|
||||
status = decode_payload.main()
|
||||
finally:
|
||||
sys.stdin = original_stdin
|
||||
sys.stdout = original_stdout
|
||||
|
||||
output = json.loads(stdout.getvalue() or "{}")
|
||||
return status, output
|
||||
|
||||
|
||||
def test_decode_payload_position_success():
|
||||
position = mesh_pb2.Position()
|
||||
position.latitude_i = 525598720
|
||||
position.longitude_i = 136577024
|
||||
position.altitude = 11
|
||||
position.precision_bits = 13
|
||||
payload_b64 = base64.b64encode(position.SerializeToString()).decode("ascii")
|
||||
|
||||
result = decode_payload._decode_payload(3, payload_b64)
|
||||
|
||||
assert result["type"] == "POSITION_APP"
|
||||
assert result["payload"]["latitude_i"] == 525598720
|
||||
assert result["payload"]["longitude_i"] == 136577024
|
||||
assert result["payload"]["altitude"] == 11
|
||||
|
||||
|
||||
def test_decode_payload_rejects_invalid_payload():
|
||||
result = decode_payload._decode_payload(3, "not-base64")
|
||||
|
||||
assert result["error"].startswith("invalid-payload")
|
||||
assert "invalid-payload" in result["error"]
|
||||
|
||||
|
||||
def test_decode_payload_rejects_unsupported_port():
|
||||
result = decode_payload._decode_payload(
|
||||
999, base64.b64encode(b"ok").decode("ascii")
|
||||
)
|
||||
|
||||
assert result["error"] == "unsupported-port"
|
||||
assert result["portnum"] == 999
|
||||
|
||||
|
||||
def test_main_handles_invalid_json():
|
||||
stdin = io.StringIO("nope")
|
||||
stdout = io.StringIO()
|
||||
original_stdin = sys.stdin
|
||||
original_stdout = sys.stdout
|
||||
try:
|
||||
sys.stdin = stdin
|
||||
sys.stdout = stdout
|
||||
status = decode_payload.main()
|
||||
finally:
|
||||
sys.stdin = original_stdin
|
||||
sys.stdout = original_stdout
|
||||
|
||||
result = json.loads(stdout.getvalue())
|
||||
assert status == 1
|
||||
assert result["error"].startswith("invalid-json")
|
||||
|
||||
|
||||
def test_main_requires_portnum():
|
||||
status, result = run_main_with_input(
|
||||
{"payload_b64": base64.b64encode(b"ok").decode("ascii")}
|
||||
)
|
||||
|
||||
assert status == 1
|
||||
assert result["error"] == "missing-portnum"
|
||||
|
||||
|
||||
def test_main_requires_integer_portnum():
|
||||
status, result = run_main_with_input(
|
||||
{"portnum": "3", "payload_b64": base64.b64encode(b"ok").decode("ascii")}
|
||||
)
|
||||
|
||||
assert status == 1
|
||||
assert result["error"] == "missing-portnum"
|
||||
|
||||
|
||||
def test_main_requires_payload():
|
||||
status, result = run_main_with_input({"portnum": 3})
|
||||
|
||||
assert status == 1
|
||||
assert result["error"] == "missing-payload"
|
||||
|
||||
|
||||
def test_main_requires_string_payload():
|
||||
status, result = run_main_with_input({"portnum": 3, "payload_b64": 123})
|
||||
|
||||
assert status == 1
|
||||
assert result["error"] == "missing-payload"
|
||||
|
||||
|
||||
def test_main_success_position_payload():
|
||||
position = mesh_pb2.Position()
|
||||
position.latitude_i = 525598720
|
||||
position.longitude_i = 136577024
|
||||
payload_b64 = base64.b64encode(position.SerializeToString()).decode("ascii")
|
||||
|
||||
status, result = run_main_with_input({"portnum": 3, "payload_b64": payload_b64})
|
||||
|
||||
assert status == 0
|
||||
assert result["type"] == "POSITION_APP"
|
||||
assert result["payload"]["latitude_i"] == 525598720
|
||||
|
||||
|
||||
def test_decode_payload_handles_parse_failure():
|
||||
class BrokenMessage:
|
||||
def ParseFromString(self, _payload):
|
||||
raise ValueError("boom")
|
||||
|
||||
decode_payload.PORTNUM_MAP[99] = ("BROKEN", BrokenMessage)
|
||||
payload_b64 = base64.b64encode(b"\x00").decode("ascii")
|
||||
|
||||
result = decode_payload._decode_payload(99, payload_b64)
|
||||
|
||||
assert result["error"].startswith("decode-failed")
|
||||
assert result["type"] == "BROKEN"
|
||||
decode_payload.PORTNUM_MAP.pop(99, None)
|
||||
|
||||
|
||||
def test_main_entrypoint_executes():
|
||||
import runpy
|
||||
|
||||
payload = {"portnum": 3, "payload_b64": base64.b64encode(b"").decode("ascii")}
|
||||
stdin = io.StringIO(json.dumps(payload))
|
||||
stdout = io.StringIO()
|
||||
original_stdin = sys.stdin
|
||||
original_stdout = sys.stdout
|
||||
try:
|
||||
sys.stdin = stdin
|
||||
sys.stdout = stdout
|
||||
try:
|
||||
runpy.run_module("data.mesh_ingestor.decode_payload", run_name="__main__")
|
||||
except SystemExit as exc:
|
||||
assert exc.code == 0
|
||||
finally:
|
||||
sys.stdin = original_stdin
|
||||
sys.stdout = original_stdout
|
||||
|
||||
|
||||
def test_decode_payload_telemetry_success():
|
||||
telemetry = telemetry_pb2.Telemetry()
|
||||
telemetry.time = 123
|
||||
payload_b64 = base64.b64encode(telemetry.SerializeToString()).decode("ascii")
|
||||
|
||||
result = decode_payload._decode_payload(67, payload_b64)
|
||||
|
||||
assert result["type"] == "TELEMETRY_APP"
|
||||
assert result["payload"]["time"] == 123
|
||||
@@ -0,0 +1,232 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Unit tests for :mod:`data.mesh_ingestor.events`."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
from data.mesh_ingestor.events import ( # noqa: E402 - path setup
|
||||
IngestorHeartbeat,
|
||||
MessageEvent,
|
||||
NeighborEntry,
|
||||
NeighborsSnapshot,
|
||||
PositionEvent,
|
||||
TelemetryEvent,
|
||||
TraceEvent,
|
||||
)
|
||||
|
||||
|
||||
def test_message_event_schema():
|
||||
assert MessageEvent.__required_keys__ == frozenset({"id", "rx_time", "rx_iso"})
|
||||
assert "text" in MessageEvent.__optional_keys__
|
||||
assert "from_id" in MessageEvent.__optional_keys__
|
||||
assert "snr" in MessageEvent.__optional_keys__
|
||||
assert "rssi" in MessageEvent.__optional_keys__
|
||||
|
||||
|
||||
def test_message_event_requires_id_rx_time_rx_iso():
|
||||
event: MessageEvent = {
|
||||
"id": 1,
|
||||
"rx_time": 1700000000,
|
||||
"rx_iso": "2023-11-14T00:00:00Z",
|
||||
}
|
||||
assert event["id"] == 1
|
||||
assert event["rx_time"] == 1700000000
|
||||
assert event["rx_iso"] == "2023-11-14T00:00:00Z"
|
||||
|
||||
|
||||
def test_message_event_accepts_optional_fields():
|
||||
event: MessageEvent = {
|
||||
"id": 2,
|
||||
"rx_time": 1700000001,
|
||||
"rx_iso": "2023-11-14T00:00:01Z",
|
||||
"text": "hello",
|
||||
"from_id": "!aabbccdd",
|
||||
"snr": 4.5,
|
||||
"rssi": -90,
|
||||
}
|
||||
assert event["text"] == "hello"
|
||||
assert event["snr"] == pytest.approx(4.5)
|
||||
|
||||
|
||||
def test_position_event_schema():
|
||||
assert PositionEvent.__required_keys__ == frozenset({"id", "rx_time", "rx_iso"})
|
||||
assert "latitude" in PositionEvent.__optional_keys__
|
||||
assert "longitude" in PositionEvent.__optional_keys__
|
||||
assert "node_id" in PositionEvent.__optional_keys__
|
||||
|
||||
|
||||
def test_position_event_required_fields():
|
||||
event: PositionEvent = {
|
||||
"id": 10,
|
||||
"rx_time": 1700000002,
|
||||
"rx_iso": "2023-11-14T00:00:02Z",
|
||||
}
|
||||
assert event["id"] == 10
|
||||
|
||||
|
||||
def test_position_event_optional_fields():
|
||||
event: PositionEvent = {
|
||||
"id": 11,
|
||||
"rx_time": 1700000003,
|
||||
"rx_iso": "2023-11-14T00:00:03Z",
|
||||
"latitude": 37.7749,
|
||||
"longitude": -122.4194,
|
||||
"altitude": 10.0,
|
||||
"node_id": "!aabbccdd",
|
||||
}
|
||||
assert event["latitude"] == pytest.approx(37.7749)
|
||||
|
||||
|
||||
def test_telemetry_event_schema():
|
||||
assert TelemetryEvent.__required_keys__ == frozenset({"id", "rx_time", "rx_iso"})
|
||||
assert "payload_b64" in TelemetryEvent.__optional_keys__
|
||||
assert "snr" in TelemetryEvent.__optional_keys__
|
||||
|
||||
|
||||
def test_telemetry_event_required_fields():
|
||||
event: TelemetryEvent = {
|
||||
"id": 20,
|
||||
"rx_time": 1700000004,
|
||||
"rx_iso": "2023-11-14T00:00:04Z",
|
||||
}
|
||||
assert event["id"] == 20
|
||||
|
||||
|
||||
def test_telemetry_event_optional_fields():
|
||||
event: TelemetryEvent = {
|
||||
"id": 21,
|
||||
"rx_time": 1700000005,
|
||||
"rx_iso": "2023-11-14T00:00:05Z",
|
||||
"channel": 0,
|
||||
"payload_b64": "AAEC",
|
||||
"snr": 3.0,
|
||||
}
|
||||
assert event["payload_b64"] == "AAEC"
|
||||
|
||||
|
||||
def test_neighbor_entry_schema():
|
||||
assert NeighborEntry.__required_keys__ == frozenset({"rx_time", "rx_iso"})
|
||||
assert "neighbor_id" in NeighborEntry.__optional_keys__
|
||||
assert "snr" in NeighborEntry.__optional_keys__
|
||||
|
||||
|
||||
def test_neighbor_entry_required_fields():
|
||||
entry: NeighborEntry = {"rx_time": 1700000006, "rx_iso": "2023-11-14T00:00:06Z"}
|
||||
assert entry["rx_time"] == 1700000006
|
||||
|
||||
|
||||
def test_neighbor_entry_optional_fields():
|
||||
entry: NeighborEntry = {
|
||||
"rx_time": 1700000007,
|
||||
"rx_iso": "2023-11-14T00:00:07Z",
|
||||
"neighbor_id": "!11223344",
|
||||
"snr": 6.0,
|
||||
}
|
||||
assert entry["neighbor_id"] == "!11223344"
|
||||
|
||||
|
||||
def test_neighbors_snapshot_schema():
|
||||
assert NeighborsSnapshot.__required_keys__ == frozenset(
|
||||
{"node_id", "rx_time", "rx_iso"}
|
||||
)
|
||||
assert "neighbors" in NeighborsSnapshot.__optional_keys__
|
||||
assert "node_broadcast_interval_secs" in NeighborsSnapshot.__optional_keys__
|
||||
|
||||
|
||||
def test_neighbors_snapshot_required_fields():
|
||||
snap: NeighborsSnapshot = {
|
||||
"node_id": "!aabbccdd",
|
||||
"rx_time": 1700000008,
|
||||
"rx_iso": "2023-11-14T00:00:08Z",
|
||||
}
|
||||
assert snap["node_id"] == "!aabbccdd"
|
||||
|
||||
|
||||
def test_neighbors_snapshot_optional_fields():
|
||||
snap: NeighborsSnapshot = {
|
||||
"node_id": "!aabbccdd",
|
||||
"rx_time": 1700000009,
|
||||
"rx_iso": "2023-11-14T00:00:09Z",
|
||||
"neighbors": [],
|
||||
"node_broadcast_interval_secs": 900,
|
||||
}
|
||||
assert snap["node_broadcast_interval_secs"] == 900
|
||||
|
||||
|
||||
def test_trace_event_schema():
|
||||
assert TraceEvent.__required_keys__ == frozenset({"hops", "rx_time", "rx_iso"})
|
||||
assert "elapsed_ms" in TraceEvent.__optional_keys__
|
||||
assert "snr" in TraceEvent.__optional_keys__
|
||||
|
||||
|
||||
def test_trace_event_required_fields():
|
||||
event: TraceEvent = {
|
||||
"hops": [1, 2, 3],
|
||||
"rx_time": 1700000010,
|
||||
"rx_iso": "2023-11-14T00:00:10Z",
|
||||
}
|
||||
assert event["hops"] == [1, 2, 3]
|
||||
|
||||
|
||||
def test_trace_event_optional_fields():
|
||||
event: TraceEvent = {
|
||||
"hops": [4, 5],
|
||||
"rx_time": 1700000011,
|
||||
"rx_iso": "2023-11-14T00:00:11Z",
|
||||
"elapsed_ms": 42,
|
||||
"snr": 2.5,
|
||||
}
|
||||
assert event["elapsed_ms"] == 42
|
||||
|
||||
|
||||
def test_ingestor_heartbeat_schema():
|
||||
# IngestorHeartbeat uses total=True with NotRequired fields. Under
|
||||
# `from __future__ import annotations` the TypedDict metaclass cannot
|
||||
# evaluate the annotation strings at class creation time, so
|
||||
# NotRequired keys appear in __required_keys__ rather than
|
||||
# __optional_keys__. Verify the four always-present keys are included.
|
||||
always_required = {"node_id", "start_time", "last_seen_time", "version"}
|
||||
assert always_required <= IngestorHeartbeat.__required_keys__
|
||||
|
||||
|
||||
def test_ingestor_heartbeat_all_fields():
|
||||
hb: IngestorHeartbeat = {
|
||||
"node_id": "!aabbccdd",
|
||||
"start_time": 1700000000,
|
||||
"last_seen_time": 1700000012,
|
||||
"version": "0.5.12",
|
||||
"lora_freq": 906875,
|
||||
"modem_preset": "LONG_FAST",
|
||||
}
|
||||
assert hb["version"] == "0.5.12"
|
||||
assert hb["lora_freq"] == 906875
|
||||
|
||||
|
||||
def test_ingestor_heartbeat_without_optional_fields():
|
||||
hb: IngestorHeartbeat = {
|
||||
"node_id": "!aabbccdd",
|
||||
"start_time": 1700000000,
|
||||
"last_seen_time": 1700000013,
|
||||
"version": "0.5.12",
|
||||
}
|
||||
assert "lora_freq" not in hb
|
||||
@@ -0,0 +1,748 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Unit tests for the :mod:`data.mesh_ingestor.handlers` subpackage."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import base64
|
||||
import sys
|
||||
import time
|
||||
from pathlib import Path
|
||||
from types import SimpleNamespace
|
||||
|
||||
import pytest
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
import data.mesh_ingestor.config as config
|
||||
import data.mesh_ingestor.handlers as handlers
|
||||
import data.mesh_ingestor.handlers._state as _state_mod
|
||||
import data.mesh_ingestor.handlers.ignored as ignored_mod
|
||||
import data.mesh_ingestor.handlers.telemetry as telemetry_mod
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def reset_handler_state():
|
||||
"""Reset global handler state between tests."""
|
||||
_state_mod._host_node_id = None
|
||||
_state_mod._host_telemetry_last_rx = None
|
||||
_state_mod._last_packet_monotonic = None
|
||||
yield
|
||||
_state_mod._host_node_id = None
|
||||
_state_mod._host_telemetry_last_rx = None
|
||||
_state_mod._last_packet_monotonic = None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _state: host_node_id / register_host_node_id
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestHostNodeId:
|
||||
"""Tests for host node ID state accessors."""
|
||||
|
||||
def test_returns_none_initially(self):
|
||||
"""host_node_id() returns None before registration."""
|
||||
assert handlers.host_node_id() is None
|
||||
|
||||
def test_register_stores_canonical_id(self):
|
||||
"""Registering a valid node ID stores it canonically."""
|
||||
handlers.register_host_node_id("!aabbccdd")
|
||||
assert handlers.host_node_id() == "!aabbccdd"
|
||||
|
||||
def test_register_none_clears_id(self):
|
||||
"""Registering None clears the stored host ID."""
|
||||
handlers.register_host_node_id("!aabbccdd")
|
||||
handlers.register_host_node_id(None)
|
||||
assert handlers.host_node_id() is None
|
||||
|
||||
def test_register_resets_telemetry_window(self):
|
||||
"""Registering a new host ID resets the telemetry suppression window."""
|
||||
_state_mod._host_telemetry_last_rx = 999_999
|
||||
handlers.register_host_node_id("!aabbccdd")
|
||||
assert _state_mod._host_telemetry_last_rx is None
|
||||
|
||||
def test_register_canonicalises_numeric(self):
|
||||
"""Numeric node ID is converted to !xxxxxxxx form."""
|
||||
handlers.register_host_node_id(0xAABBCCDD)
|
||||
assert handlers.host_node_id() == "!aabbccdd"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _state: last_packet_monotonic / _mark_packet_seen
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestLastPacketMonotonic:
|
||||
"""Tests for packet timestamp tracking."""
|
||||
|
||||
def test_returns_none_initially(self):
|
||||
"""Returns None before any packet is processed."""
|
||||
assert handlers.last_packet_monotonic() is None
|
||||
|
||||
def test_updates_after_mark(self):
|
||||
"""_mark_packet_seen() updates the monotonic timestamp."""
|
||||
_state_mod._mark_packet_seen()
|
||||
ts = handlers.last_packet_monotonic()
|
||||
assert ts is not None
|
||||
assert isinstance(ts, float)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _state: _host_telemetry_suppressed
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestHostTelemetrySuppressed:
|
||||
"""Tests for host telemetry suppression logic."""
|
||||
|
||||
def test_not_suppressed_when_no_previous(self):
|
||||
"""Not suppressed when no previous telemetry timestamp is set."""
|
||||
suppressed, mins = _state_mod._host_telemetry_suppressed(int(time.time()))
|
||||
assert suppressed is False
|
||||
assert mins == 0
|
||||
|
||||
def test_suppressed_within_interval(self):
|
||||
"""Suppressed when within the suppression window."""
|
||||
now = int(time.time())
|
||||
_state_mod._host_telemetry_last_rx = now - 10 # 10 seconds ago
|
||||
suppressed, mins = _state_mod._host_telemetry_suppressed(now)
|
||||
assert suppressed is True
|
||||
assert mins > 0
|
||||
|
||||
def test_not_suppressed_after_interval(self):
|
||||
"""Not suppressed after the full interval has elapsed."""
|
||||
now = int(time.time())
|
||||
_state_mod._host_telemetry_last_rx = (
|
||||
now - _state_mod._HOST_TELEMETRY_INTERVAL_SECS - 1
|
||||
)
|
||||
suppressed, mins = _state_mod._host_telemetry_suppressed(now)
|
||||
assert suppressed is False
|
||||
assert mins == 0
|
||||
|
||||
def test_minutes_remaining_rounds_up(self):
|
||||
"""Minutes remaining is rounded up (ceiling division)."""
|
||||
now = int(time.time())
|
||||
# 30 seconds remaining → 1 minute remaining
|
||||
_state_mod._host_telemetry_last_rx = (
|
||||
now - _state_mod._HOST_TELEMETRY_INTERVAL_SECS + 30
|
||||
)
|
||||
suppressed, mins = _state_mod._host_telemetry_suppressed(now)
|
||||
assert suppressed is True
|
||||
assert mins == 1
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# radio: _radio_metadata_fields / _apply_radio_metadata
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestRadioMetadata:
|
||||
"""Tests for radio metadata helper functions."""
|
||||
|
||||
def test_empty_when_neither_configured(self, monkeypatch):
|
||||
"""Returns empty dict when LORA_FREQ and MODEM_PRESET are both None."""
|
||||
monkeypatch.setattr(config, "LORA_FREQ", None)
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", None)
|
||||
assert handlers._radio_metadata_fields() == {}
|
||||
|
||||
def test_includes_lora_freq(self, monkeypatch):
|
||||
"""Includes lora_freq when configured."""
|
||||
monkeypatch.setattr(config, "LORA_FREQ", 915)
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", None)
|
||||
assert handlers._radio_metadata_fields() == {"lora_freq": 915}
|
||||
|
||||
def test_includes_modem_preset(self, monkeypatch):
|
||||
"""Includes modem_preset when configured."""
|
||||
monkeypatch.setattr(config, "LORA_FREQ", None)
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", "LongFast")
|
||||
assert handlers._radio_metadata_fields() == {"modem_preset": "LongFast"}
|
||||
|
||||
def test_apply_radio_metadata_enriches_payload(self, monkeypatch):
|
||||
"""_apply_radio_metadata adds radio fields to the payload."""
|
||||
monkeypatch.setattr(config, "LORA_FREQ", 915)
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", "LongFast")
|
||||
payload = {"id": 1}
|
||||
result = handlers._apply_radio_metadata(payload)
|
||||
assert result["lora_freq"] == 915
|
||||
assert result["modem_preset"] == "LongFast"
|
||||
assert result is payload # mutated in-place
|
||||
|
||||
def test_apply_radio_metadata_to_nodes_enriches_node_dicts(self, monkeypatch):
|
||||
"""_apply_radio_metadata_to_nodes enriches each node-value dict."""
|
||||
monkeypatch.setattr(config, "LORA_FREQ", 915)
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", None)
|
||||
payload = {"!aabb": {"lastHeard": 100}, "ingestor": "!host"}
|
||||
handlers._apply_radio_metadata_to_nodes(payload)
|
||||
assert payload["!aabb"]["lora_freq"] == 915
|
||||
# Non-dict values like "ingestor" string are not enriched
|
||||
assert isinstance(payload["ingestor"], str)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# ignored: _record_ignored_packet
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestRecordIgnoredPacket:
|
||||
"""Tests for :func:`handlers.ignored._record_ignored_packet`."""
|
||||
|
||||
def test_noop_when_debug_false(self, monkeypatch, tmp_path):
|
||||
"""Does nothing when DEBUG is disabled."""
|
||||
monkeypatch.setattr(config, "DEBUG", False)
|
||||
log_path = tmp_path / "ignored.txt"
|
||||
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOG_PATH", log_path)
|
||||
ignored_mod._record_ignored_packet({"test": 1}, reason="test-reason")
|
||||
assert not log_path.exists()
|
||||
|
||||
def test_writes_json_line_when_debug(self, monkeypatch, tmp_path):
|
||||
"""Appends a JSON record when DEBUG is enabled."""
|
||||
import json
|
||||
import threading
|
||||
|
||||
monkeypatch.setattr(config, "DEBUG", True)
|
||||
log_path = tmp_path / "ignored.txt"
|
||||
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOG_PATH", log_path)
|
||||
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOCK", threading.Lock())
|
||||
ignored_mod._record_ignored_packet(
|
||||
{"portnum": "BAD"}, reason="unsupported-port"
|
||||
)
|
||||
assert log_path.exists()
|
||||
line = log_path.read_text().strip()
|
||||
record = json.loads(line)
|
||||
assert record["reason"] == "unsupported-port"
|
||||
assert "timestamp" in record
|
||||
|
||||
def test_bytes_in_packet_are_base64(self, monkeypatch, tmp_path):
|
||||
"""Byte values in the packet are Base64-encoded in the log."""
|
||||
import json
|
||||
import threading
|
||||
|
||||
monkeypatch.setattr(config, "DEBUG", True)
|
||||
log_path = tmp_path / "ignored.txt"
|
||||
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOG_PATH", log_path)
|
||||
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOCK", threading.Lock())
|
||||
ignored_mod._record_ignored_packet({"data": b"\x00\x01"}, reason="test")
|
||||
record = json.loads(log_path.read_text().strip())
|
||||
assert record["packet"]["data"] == base64.b64encode(b"\x00\x01").decode()
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# position: base64_payload
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestBase64Payload:
|
||||
"""Tests for :func:`handlers.base64_payload`."""
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None input returns None."""
|
||||
assert handlers.base64_payload(None) is None
|
||||
|
||||
def test_empty_bytes_returns_none(self):
|
||||
"""Empty bytes return None."""
|
||||
assert handlers.base64_payload(b"") is None
|
||||
|
||||
def test_encodes_bytes(self):
|
||||
"""Non-empty bytes are Base64 encoded."""
|
||||
result = handlers.base64_payload(b"\x00\x01\x02")
|
||||
assert result == base64.b64encode(b"\x00\x01\x02").decode("ascii")
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# generic: _is_encrypted_flag
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestIsEncryptedFlag:
|
||||
"""Tests for :func:`handlers._is_encrypted_flag`."""
|
||||
|
||||
def test_true_bool(self):
|
||||
assert handlers._is_encrypted_flag(True) is True
|
||||
|
||||
def test_false_bool(self):
|
||||
assert handlers._is_encrypted_flag(False) is False
|
||||
|
||||
def test_nonzero_int(self):
|
||||
assert handlers._is_encrypted_flag(1) is True
|
||||
|
||||
def test_zero_int(self):
|
||||
assert handlers._is_encrypted_flag(0) is False
|
||||
|
||||
def test_empty_string(self):
|
||||
assert handlers._is_encrypted_flag("") is False
|
||||
|
||||
def test_false_string(self):
|
||||
assert handlers._is_encrypted_flag("false") is False
|
||||
|
||||
def test_no_string(self):
|
||||
assert handlers._is_encrypted_flag("no") is False
|
||||
|
||||
def test_zero_string(self):
|
||||
assert handlers._is_encrypted_flag("0") is False
|
||||
|
||||
def test_truthy_string(self):
|
||||
assert handlers._is_encrypted_flag("yes") is True
|
||||
|
||||
def test_none_is_falsy(self):
|
||||
assert handlers._is_encrypted_flag(None) is False
|
||||
|
||||
def test_nonempty_bytes(self):
|
||||
assert handlers._is_encrypted_flag(b"\x01") is True
|
||||
|
||||
def test_empty_bytes(self):
|
||||
assert handlers._is_encrypted_flag(b"") is False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# generic: upsert_node
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestUpsertNode:
|
||||
"""Tests for :func:`handlers.upsert_node`."""
|
||||
|
||||
def test_queues_node_payload(self):
|
||||
"""upsert_node enqueues a POST to /api/nodes."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.upsert_node("!aabbccdd", {"user": {"shortName": "AB"}})
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert any(p == "/api/nodes" for p, _ in sent)
|
||||
|
||||
def test_includes_ingestor_field(self):
|
||||
"""Payload includes ingestor field with host node ID."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
handlers.register_host_node_id("!deadbeef")
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.upsert_node("!aabbccdd", {"user": {}})
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
_, payload = sent[0]
|
||||
assert payload.get("ingestor") == "!deadbeef"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# generic: on_receive deduplication
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestOnReceive:
|
||||
"""Tests for :func:`handlers.on_receive`."""
|
||||
|
||||
def test_deduplicates_via_seen_flag(self, monkeypatch):
|
||||
"""Packets with _potatomesh_seen=True are skipped."""
|
||||
calls = []
|
||||
monkeypatch.setattr(
|
||||
"data.mesh_ingestor.handlers.generic.store_packet_dict",
|
||||
lambda pkt: calls.append(pkt),
|
||||
)
|
||||
packet = {"_potatomesh_seen": True, "decoded": {}}
|
||||
handlers.on_receive(packet, None)
|
||||
assert calls == []
|
||||
|
||||
def test_marks_packet_seen(self, monkeypatch):
|
||||
"""First call marks the packet as seen."""
|
||||
monkeypatch.setattr(
|
||||
"data.mesh_ingestor.handlers.generic.store_packet_dict",
|
||||
lambda pkt: None,
|
||||
)
|
||||
packet = {"decoded": {}}
|
||||
handlers.on_receive(packet, None)
|
||||
assert packet.get("_potatomesh_seen") is True
|
||||
|
||||
def test_updates_monotonic_timestamp(self, monkeypatch):
|
||||
"""on_receive updates the last-packet monotonic timestamp."""
|
||||
monkeypatch.setattr(
|
||||
"data.mesh_ingestor.handlers.generic.store_packet_dict",
|
||||
lambda pkt: None,
|
||||
)
|
||||
handlers.on_receive({"decoded": {}}, None)
|
||||
assert handlers.last_packet_monotonic() is not None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# store_position_packet
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestStorePositionPacket:
|
||||
"""Tests for :func:`handlers.store_position_packet`."""
|
||||
|
||||
def _make_packet(self, from_id="!aabbccdd", pkt_id=1001, **extra):
|
||||
pkt = {
|
||||
"id": pkt_id,
|
||||
"rxTime": 1_700_000_000,
|
||||
"fromId": from_id,
|
||||
"decoded": {
|
||||
"position": {"latitude": 37.5, "longitude": -122.1},
|
||||
},
|
||||
}
|
||||
pkt.update(extra)
|
||||
return pkt
|
||||
|
||||
def test_queues_position_payload(self):
|
||||
"""Valid position packet is queued to /api/positions."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_position_packet(
|
||||
self._make_packet(),
|
||||
{"position": {"latitude": 37.5, "longitude": -122.1}},
|
||||
)
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert any(p == "/api/positions" for p, _ in sent)
|
||||
|
||||
def test_skips_when_no_node_id(self):
|
||||
"""Packet missing a node ID is silently dropped."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_position_packet({}, {})
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert sent == []
|
||||
|
||||
def test_skips_when_no_packet_id(self):
|
||||
"""Packet missing a packet ID is silently dropped."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_position_packet({"fromId": "!aabbccdd"}, {})
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert sent == []
|
||||
|
||||
def test_latitude_i_conversion(self):
|
||||
"""latitudeI integer is divided by 1e7 to get degrees."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_position_packet(
|
||||
{"id": 99, "rxTime": 100, "fromId": "!aabbccdd"},
|
||||
{"position": {"latitudeI": 375000000, "longitudeI": -1221000000}},
|
||||
)
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert len(sent) == 1
|
||||
payload = sent[0][1]
|
||||
assert abs(payload["latitude"] - 37.5) < 1e-4
|
||||
assert abs(payload["longitude"] - -122.1) < 1e-4
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# store_telemetry_packet
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestStoreTelemetryPacket:
|
||||
"""Tests for :func:`handlers.store_telemetry_packet`."""
|
||||
|
||||
def _make_telemetry_packet(self, from_id="!aabbccdd", pkt_id=2001):
|
||||
return {
|
||||
"id": pkt_id,
|
||||
"rxTime": 1_700_000_000,
|
||||
"fromId": from_id,
|
||||
"decoded": {
|
||||
"portnum": "TELEMETRY_APP",
|
||||
"telemetry": {
|
||||
"deviceMetrics": {"batteryLevel": 80, "voltage": 3.8},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
def test_queues_telemetry_payload(self):
|
||||
"""Valid telemetry packet is queued to /api/telemetry."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
pkt = self._make_telemetry_packet()
|
||||
handlers.store_telemetry_packet(pkt, pkt["decoded"])
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert any(p == "/api/telemetry" for p, _ in sent)
|
||||
|
||||
def test_skips_without_telemetry_section(self):
|
||||
"""Packet without a telemetry section is silently dropped."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_telemetry_packet({"id": 1}, {})
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert sent == []
|
||||
|
||||
def test_skips_without_packet_id(self):
|
||||
"""Telemetry packet without an id is dropped."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_telemetry_packet(
|
||||
{"fromId": "!aabbccdd"},
|
||||
{"telemetry": {"deviceMetrics": {}}},
|
||||
)
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert sent == []
|
||||
|
||||
def test_host_telemetry_suppressed_within_interval(self, monkeypatch):
|
||||
"""Host node telemetry is suppressed within the interval window."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
handlers.register_host_node_id("!aabbccdd")
|
||||
now = int(time.time())
|
||||
_state_mod._host_telemetry_last_rx = now - 10 # recent
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
pkt = {
|
||||
"id": 1,
|
||||
"rxTime": now,
|
||||
"fromId": "!aabbccdd",
|
||||
"decoded": {
|
||||
"portnum": "TELEMETRY_APP",
|
||||
"telemetry": {"deviceMetrics": {"batteryLevel": 80}},
|
||||
},
|
||||
}
|
||||
handlers.store_telemetry_packet(pkt, pkt["decoded"])
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert sent == []
|
||||
|
||||
def test_telemetry_type_device(self):
|
||||
"""deviceMetrics triggers telemetry_type='device'."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
pkt = self._make_telemetry_packet()
|
||||
handlers.store_telemetry_packet(pkt, pkt["decoded"])
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
_, payload = sent[0]
|
||||
assert payload.get("telemetry_type") == "device"
|
||||
|
||||
def test_invalid_telemetry_type_dropped_from_payload(self, monkeypatch):
|
||||
"""Unrecognised telemetry_type is omitted from the payload."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
monkeypatch.setattr(telemetry_mod, "_VALID_TELEMETRY_TYPES", frozenset())
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
pkt = self._make_telemetry_packet()
|
||||
handlers.store_telemetry_packet(pkt, pkt["decoded"])
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
_, payload = sent[0]
|
||||
assert "telemetry_type" not in payload
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# store_nodeinfo_packet
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestStoreNodeinfoPacket:
|
||||
"""Tests for :func:`handlers.store_nodeinfo_packet`."""
|
||||
|
||||
def test_queues_node_payload(self):
|
||||
"""Valid nodeinfo packet is queued to /api/nodes."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_nodeinfo_packet(
|
||||
{"id": 1, "rxTime": 100, "fromId": "!aabbccdd"},
|
||||
{
|
||||
"user": {
|
||||
"id": "!aabbccdd",
|
||||
"shortName": "AB",
|
||||
"longName": "Alpha Bravo",
|
||||
}
|
||||
},
|
||||
)
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert any(p == "/api/nodes" for p, _ in sent)
|
||||
|
||||
def test_skips_when_no_node_id(self):
|
||||
"""Packet with no resolvable node ID is silently dropped."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_nodeinfo_packet({}, {})
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert sent == []
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# store_neighborinfo_packet
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestStoreNeighborinfoPacket:
|
||||
"""Tests for :func:`handlers.store_neighborinfo_packet`."""
|
||||
|
||||
def test_queues_neighbor_payload(self):
|
||||
"""Valid neighborinfo packet is queued to /api/neighbors."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_neighborinfo_packet(
|
||||
{"id": 1, "rxTime": 100, "fromId": "!aabbccdd"},
|
||||
{
|
||||
"neighborinfo": {
|
||||
"nodeId": 0xAABBCCDD,
|
||||
"neighbors": [
|
||||
{"nodeId": 0x11223344, "snr": 5.0},
|
||||
],
|
||||
}
|
||||
},
|
||||
)
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert any(p == "/api/neighbors" for p, _ in sent)
|
||||
|
||||
def test_skips_when_no_neighborinfo_section(self):
|
||||
"""Missing neighborinfo section is silently dropped."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_neighborinfo_packet({"fromId": "!aabbccdd"}, {})
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert sent == []
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# store_router_heartbeat_packet
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestStoreRouterHeartbeatPacket:
|
||||
"""Tests for :func:`handlers.store_router_heartbeat_packet`."""
|
||||
|
||||
def test_queues_node_upsert(self):
|
||||
"""Router heartbeat queues a minimal node upsert."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_router_heartbeat_packet(
|
||||
{"fromId": "!aabbccdd", "rxTime": 1_700_000_000}
|
||||
)
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert any(p == "/api/nodes" for p, _ in sent)
|
||||
|
||||
def test_skips_when_no_from_id(self):
|
||||
"""Heartbeat without from_id is silently dropped."""
|
||||
import data.mesh_ingestor.queue as q
|
||||
|
||||
sent = []
|
||||
original = q._queue_post_json
|
||||
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
|
||||
(path, payload)
|
||||
)
|
||||
try:
|
||||
handlers.store_router_heartbeat_packet({})
|
||||
finally:
|
||||
q._queue_post_json = original
|
||||
assert sent == []
|
||||
@@ -0,0 +1,209 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Unit tests for :mod:`data.mesh_ingestor.ingestors`."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
import time
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
import data.mesh_ingestor.config as config
|
||||
from data.mesh_ingestor.ingestors import (
|
||||
HEARTBEAT_INTERVAL_SECS,
|
||||
_IngestorState,
|
||||
ingestor_start_time,
|
||||
queue_ingestor_heartbeat,
|
||||
set_ingestor_node_id,
|
||||
)
|
||||
import data.mesh_ingestor.ingestors as ingestors_mod
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def reset_ingestor_state():
|
||||
"""Reset shared ingestor state between tests."""
|
||||
original = ingestors_mod.STATE
|
||||
ingestors_mod.STATE = _IngestorState()
|
||||
yield
|
||||
ingestors_mod.STATE = original
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# ingestor_start_time
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestIngestorStartTime:
|
||||
"""Tests for :func:`ingestors.ingestor_start_time`."""
|
||||
|
||||
def test_returns_integer(self):
|
||||
"""Returns an integer unix timestamp."""
|
||||
result = ingestor_start_time()
|
||||
assert isinstance(result, int)
|
||||
|
||||
def test_is_close_to_now(self):
|
||||
"""Start time is within a few seconds of now (fresh state)."""
|
||||
result = ingestor_start_time()
|
||||
assert abs(result - int(time.time())) < 5
|
||||
|
||||
def test_same_across_calls(self):
|
||||
"""Returns the same value on repeated calls."""
|
||||
assert ingestor_start_time() == ingestor_start_time()
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# set_ingestor_node_id
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestSetIngestorNodeId:
|
||||
"""Tests for :func:`ingestors.set_ingestor_node_id`."""
|
||||
|
||||
def test_canonical_id_stored(self):
|
||||
"""Sets canonical !xxxxxxxx node ID."""
|
||||
result = set_ingestor_node_id("!aabbccdd")
|
||||
assert result == "!aabbccdd"
|
||||
assert ingestors_mod.STATE.node_id == "!aabbccdd"
|
||||
|
||||
def test_numeric_id_canonicalised(self):
|
||||
"""Numeric node ID is canonicalised to !xxxxxxxx format."""
|
||||
result = set_ingestor_node_id(0xAABBCCDD)
|
||||
assert result is not None
|
||||
assert result.startswith("!")
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None input returns None and does not update state."""
|
||||
ingestors_mod.STATE.node_id = "!existing"
|
||||
result = set_ingestor_node_id(None)
|
||||
assert result is None
|
||||
assert ingestors_mod.STATE.node_id == "!existing"
|
||||
|
||||
def test_invalid_id_returns_none(self):
|
||||
"""Invalid node ID returns None."""
|
||||
result = set_ingestor_node_id("not-a-node-id")
|
||||
assert result is None
|
||||
|
||||
def test_new_id_resets_last_heartbeat(self):
|
||||
"""Changing node ID resets the last heartbeat timestamp."""
|
||||
ingestors_mod.STATE.node_id = "!aabbccdd"
|
||||
ingestors_mod.STATE.last_heartbeat = 12345
|
||||
set_ingestor_node_id("!11223344")
|
||||
assert ingestors_mod.STATE.last_heartbeat is None
|
||||
|
||||
def test_same_id_does_not_reset_heartbeat(self):
|
||||
"""Setting the same node ID preserves the last heartbeat."""
|
||||
ingestors_mod.STATE.node_id = "!aabbccdd"
|
||||
ingestors_mod.STATE.last_heartbeat = 12345
|
||||
set_ingestor_node_id("!aabbccdd")
|
||||
assert ingestors_mod.STATE.last_heartbeat == 12345
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# queue_ingestor_heartbeat
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestQueueIngestorHeartbeat:
|
||||
"""Tests for :func:`ingestors.queue_ingestor_heartbeat`."""
|
||||
|
||||
def test_returns_false_when_no_node_id(self):
|
||||
"""Returns False when no node ID is set."""
|
||||
assert queue_ingestor_heartbeat() is False
|
||||
|
||||
def test_queues_heartbeat_with_node_id(self):
|
||||
"""Returns True and queues a payload when node ID is set."""
|
||||
set_ingestor_node_id("!aabbccdd")
|
||||
sent = []
|
||||
result = queue_ingestor_heartbeat(
|
||||
send=lambda path, payload: sent.append((path, payload))
|
||||
)
|
||||
assert result is True
|
||||
assert len(sent) == 1
|
||||
path, payload = sent[0]
|
||||
assert path == "/api/ingestors"
|
||||
assert payload["node_id"] == "!aabbccdd"
|
||||
|
||||
def test_payload_contains_required_fields(self):
|
||||
"""Heartbeat payload includes all required contract fields."""
|
||||
set_ingestor_node_id("!aabbccdd")
|
||||
sent = []
|
||||
queue_ingestor_heartbeat(send=lambda path, payload: sent.append(payload))
|
||||
payload = sent[0]
|
||||
assert "node_id" in payload
|
||||
assert "start_time" in payload
|
||||
assert "last_seen_time" in payload
|
||||
assert "version" in payload
|
||||
|
||||
def test_force_bypasses_interval(self):
|
||||
"""force=True sends even within the heartbeat interval."""
|
||||
set_ingestor_node_id("!aabbccdd")
|
||||
ingestors_mod.STATE.last_heartbeat = int(time.time())
|
||||
sent = []
|
||||
result = queue_ingestor_heartbeat(
|
||||
force=True,
|
||||
send=lambda path, payload: sent.append(payload),
|
||||
)
|
||||
assert result is True
|
||||
assert len(sent) == 1
|
||||
|
||||
def test_interval_prevents_duplicate_send(self):
|
||||
"""Heartbeat is suppressed when interval has not elapsed."""
|
||||
set_ingestor_node_id("!aabbccdd")
|
||||
ingestors_mod.STATE.last_heartbeat = int(time.time())
|
||||
sent = []
|
||||
result = queue_ingestor_heartbeat(
|
||||
send=lambda path, payload: sent.append(payload)
|
||||
)
|
||||
assert result is False
|
||||
assert sent == []
|
||||
|
||||
def test_heartbeat_with_node_id_kwarg(self):
|
||||
"""Providing node_id kwarg sets it before sending."""
|
||||
sent = []
|
||||
result = queue_ingestor_heartbeat(
|
||||
node_id="!11223344",
|
||||
send=lambda path, payload: sent.append(payload),
|
||||
)
|
||||
assert result is True
|
||||
assert sent[0]["node_id"] == "!11223344"
|
||||
|
||||
def test_lora_freq_included_when_set(self, monkeypatch):
|
||||
"""lora_freq is included in payload when LORA_FREQ is configured."""
|
||||
set_ingestor_node_id("!aabbccdd")
|
||||
monkeypatch.setattr(config, "LORA_FREQ", 915.0)
|
||||
sent = []
|
||||
queue_ingestor_heartbeat(send=lambda path, payload: sent.append(payload))
|
||||
assert sent[0].get("lora_freq") == pytest.approx(915.0)
|
||||
|
||||
def test_modem_preset_included_when_set(self, monkeypatch):
|
||||
"""modem_preset is included in payload when MODEM_PRESET is configured."""
|
||||
set_ingestor_node_id("!aabbccdd")
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", "LongFast")
|
||||
sent = []
|
||||
queue_ingestor_heartbeat(send=lambda path, payload: sent.append(payload))
|
||||
assert sent[0].get("modem_preset") == "LongFast"
|
||||
|
||||
def test_updates_last_heartbeat_after_send(self):
|
||||
"""STATE.last_heartbeat is updated after a successful send."""
|
||||
set_ingestor_node_id("!aabbccdd")
|
||||
before = int(time.time())
|
||||
queue_ingestor_heartbeat(send=lambda path, payload: None)
|
||||
assert ingestors_mod.STATE.last_heartbeat is not None
|
||||
assert ingestors_mod.STATE.last_heartbeat >= before
|
||||
@@ -0,0 +1,454 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Unit tests for :mod:`data.mesh_ingestor.interfaces`."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
from pathlib import Path
|
||||
from types import SimpleNamespace
|
||||
|
||||
import pytest
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
import data.mesh_ingestor.config as config
|
||||
import data.mesh_ingestor.interfaces as ifaces
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _ensure_mapping
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestEnsureMapping:
|
||||
"""Tests for :func:`interfaces._ensure_mapping`."""
|
||||
|
||||
def test_mapping_returned_as_is(self):
|
||||
"""A dict is returned directly without conversion."""
|
||||
d = {"a": 1}
|
||||
result = ifaces._ensure_mapping(d)
|
||||
# Use id() to assert identity (same object, not just equal value).
|
||||
assert id(result) == id(d)
|
||||
|
||||
def test_object_with_dict_attr(self):
|
||||
"""Object whose ``__dict__`` is a mapping is wrapped."""
|
||||
obj = SimpleNamespace(x=10)
|
||||
result = ifaces._ensure_mapping(obj)
|
||||
assert isinstance(result, dict)
|
||||
assert result.get("x") == 10
|
||||
|
||||
def test_convertible_via_node_to_dict(self, monkeypatch):
|
||||
"""Objects convertible by ``_node_to_dict`` return a mapping."""
|
||||
|
||||
import data.mesh_ingestor.serialization as ser
|
||||
|
||||
monkeypatch.setattr(ser, "_node_to_dict", lambda _v: {"converted": True})
|
||||
|
||||
# Use an object without __dict__ to avoid the __dict__ branch
|
||||
class NoDict:
|
||||
__slots__ = ()
|
||||
|
||||
result = ifaces._ensure_mapping(NoDict())
|
||||
assert result == {"converted": True}
|
||||
|
||||
def test_non_convertible_returns_none(self, monkeypatch):
|
||||
"""Returns None for objects that cannot be converted to a mapping."""
|
||||
|
||||
import data.mesh_ingestor.serialization as ser
|
||||
|
||||
monkeypatch.setattr(ser, "_node_to_dict", lambda _v: "not-a-mapping")
|
||||
|
||||
class NoDict:
|
||||
__slots__ = ()
|
||||
|
||||
assert ifaces._ensure_mapping(NoDict()) is None
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None input returns None."""
|
||||
assert ifaces._ensure_mapping(None) is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _is_nodeish_identifier
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestIsNodeishIdentifier:
|
||||
"""Tests for :func:`interfaces._is_nodeish_identifier`."""
|
||||
|
||||
def test_int_returns_false(self):
|
||||
"""Integers are not node identifiers."""
|
||||
assert ifaces._is_nodeish_identifier(42) is False
|
||||
|
||||
def test_float_returns_false(self):
|
||||
"""Floats are not node identifiers."""
|
||||
assert ifaces._is_nodeish_identifier(3.14) is False
|
||||
|
||||
def test_non_string_returns_false(self):
|
||||
"""Non-string, non-numeric objects return False."""
|
||||
assert ifaces._is_nodeish_identifier(object()) is False
|
||||
|
||||
def test_empty_string_returns_false(self):
|
||||
"""Empty string is not a node identifier."""
|
||||
assert ifaces._is_nodeish_identifier(" ") is False
|
||||
|
||||
def test_caret_prefix_returns_true(self):
|
||||
"""Strings starting with ^ are recognised as special destinations."""
|
||||
assert ifaces._is_nodeish_identifier("^all") is True
|
||||
|
||||
def test_bang_hex_valid(self):
|
||||
"""!xxxxxxxx style identifiers are recognised."""
|
||||
assert ifaces._is_nodeish_identifier("!aabbccdd") is True
|
||||
|
||||
def test_bang_hex_too_long(self):
|
||||
"""More than 8 hex digits after ! are rejected."""
|
||||
assert ifaces._is_nodeish_identifier("!aabbccdd00") is False
|
||||
|
||||
def test_0x_prefix_valid(self):
|
||||
"""0x-prefixed hex strings with ≤8 digits are recognised."""
|
||||
assert ifaces._is_nodeish_identifier("0xaabb") is True
|
||||
|
||||
def test_bare_decimal_rejected(self):
|
||||
"""Bare decimal strings without hex digits are not node identifiers."""
|
||||
assert ifaces._is_nodeish_identifier("12345678") is False
|
||||
|
||||
def test_bare_hex_valid(self):
|
||||
"""Bare hex strings containing a-f are recognised."""
|
||||
assert ifaces._is_nodeish_identifier("aabbccdd") is True
|
||||
|
||||
def test_bare_hex_too_long_rejected(self):
|
||||
"""More than 8 bare hex characters are rejected."""
|
||||
assert ifaces._is_nodeish_identifier("aabbccdd00") is False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _candidate_node_id
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestCandidateNodeId:
|
||||
"""Tests for :func:`interfaces._candidate_node_id`."""
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None input returns None."""
|
||||
assert ifaces._candidate_node_id(None) is None
|
||||
|
||||
def test_from_id_key(self):
|
||||
"""fromId key resolves to canonical node ID."""
|
||||
result = ifaces._candidate_node_id({"fromId": "!aabbccdd"})
|
||||
assert result == "!aabbccdd"
|
||||
|
||||
def test_node_num_key(self):
|
||||
"""nodeNum integer key is canonicalised."""
|
||||
result = ifaces._candidate_node_id({"nodeNum": 0xAABBCCDD})
|
||||
assert result is not None
|
||||
assert result.startswith("!")
|
||||
|
||||
def test_id_key_nodeish(self):
|
||||
"""'id' key is resolved when it looks like a node identifier."""
|
||||
result = ifaces._candidate_node_id({"id": "!aabbccdd"})
|
||||
assert result == "!aabbccdd"
|
||||
|
||||
def test_id_key_non_nodeish_skipped(self):
|
||||
"""Non-nodeish 'id' values are ignored."""
|
||||
result = ifaces._candidate_node_id({"id": "not-an-id"})
|
||||
assert result is None
|
||||
|
||||
def test_user_section_lookup(self):
|
||||
"""Searches user sub-section for node ID."""
|
||||
result = ifaces._candidate_node_id({"user": {"id": "!aabbccdd"}})
|
||||
assert result == "!aabbccdd"
|
||||
|
||||
def test_decoded_section_lookup(self):
|
||||
"""Searches decoded sub-section for node ID."""
|
||||
result = ifaces._candidate_node_id({"decoded": {"fromId": "!aabbccdd"}})
|
||||
assert result == "!aabbccdd"
|
||||
|
||||
def test_payload_section_lookup(self):
|
||||
"""Searches payload sub-section for node ID."""
|
||||
result = ifaces._candidate_node_id({"payload": {"fromId": "!aabbccdd"}})
|
||||
assert result == "!aabbccdd"
|
||||
|
||||
def test_empty_mapping_returns_none(self):
|
||||
"""Mapping with no recognisable ID fields returns None."""
|
||||
assert ifaces._candidate_node_id({"foo": "bar"}) is None
|
||||
|
||||
def test_list_value_scanned(self):
|
||||
"""Node IDs inside list values are found."""
|
||||
result = ifaces._candidate_node_id({"items": [{"fromId": "!aabbccdd"}]})
|
||||
assert result == "!aabbccdd"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _has_field
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestHasField:
|
||||
"""Tests for :func:`interfaces._has_field`."""
|
||||
|
||||
def test_none_returns_false(self):
|
||||
"""None message returns False."""
|
||||
assert ifaces._has_field(None, "anything") is False
|
||||
|
||||
def test_has_field_callable_true(self):
|
||||
"""HasField callable returning True is propagated."""
|
||||
msg = SimpleNamespace(HasField=lambda name: name == "lora")
|
||||
assert ifaces._has_field(msg, "lora") is True
|
||||
|
||||
def test_has_field_callable_false(self):
|
||||
"""HasField callable returning False is propagated."""
|
||||
msg = SimpleNamespace(HasField=lambda name: False)
|
||||
assert ifaces._has_field(msg, "lora") is False
|
||||
|
||||
def test_no_has_field_but_attr_present(self):
|
||||
"""Falls back to hasattr when HasField is absent."""
|
||||
msg = SimpleNamespace(lora=object())
|
||||
assert ifaces._has_field(msg, "lora") is True
|
||||
|
||||
def test_no_has_field_attr_absent(self):
|
||||
"""Returns False when both HasField and the attribute are absent."""
|
||||
assert ifaces._has_field(SimpleNamespace(), "lora") is False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _enum_name_from_field
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestEnumNameFromField:
|
||||
"""Tests for :func:`interfaces._enum_name_from_field`."""
|
||||
|
||||
def test_no_descriptor_returns_none(self):
|
||||
"""Message without DESCRIPTOR returns None."""
|
||||
assert ifaces._enum_name_from_field(object(), "region", 1) is None
|
||||
|
||||
def test_field_not_in_descriptor(self):
|
||||
"""Unknown field name returns None."""
|
||||
desc = SimpleNamespace(fields_by_name={})
|
||||
msg = SimpleNamespace(DESCRIPTOR=desc)
|
||||
assert ifaces._enum_name_from_field(msg, "region", 1) is None
|
||||
|
||||
def test_no_enum_type_returns_none(self):
|
||||
"""Field without enum_type returns None."""
|
||||
field_desc = SimpleNamespace(enum_type=None)
|
||||
desc = SimpleNamespace(fields_by_name={"region": field_desc})
|
||||
msg = SimpleNamespace(DESCRIPTOR=desc)
|
||||
assert ifaces._enum_name_from_field(msg, "region", 1) is None
|
||||
|
||||
def test_value_not_in_enum_returns_none(self):
|
||||
"""Enum value not found in values_by_number returns None."""
|
||||
enum_type = SimpleNamespace(values_by_number={})
|
||||
field_desc = SimpleNamespace(enum_type=enum_type)
|
||||
desc = SimpleNamespace(fields_by_name={"region": field_desc})
|
||||
msg = SimpleNamespace(DESCRIPTOR=desc)
|
||||
assert ifaces._enum_name_from_field(msg, "region", 99) is None
|
||||
|
||||
def test_valid_lookup(self):
|
||||
"""Returns the enum value name for a known numeric value."""
|
||||
enum_val = SimpleNamespace(name="US_915")
|
||||
enum_type = SimpleNamespace(values_by_number={3: enum_val})
|
||||
field_desc = SimpleNamespace(enum_type=enum_type)
|
||||
desc = SimpleNamespace(fields_by_name={"region": field_desc})
|
||||
msg = SimpleNamespace(DESCRIPTOR=desc)
|
||||
assert ifaces._enum_name_from_field(msg, "region", 3) == "US_915"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _region_frequency
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestRegionFrequency:
|
||||
"""Tests for :func:`interfaces._region_frequency`."""
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None input returns None."""
|
||||
assert ifaces._region_frequency(None) is None
|
||||
|
||||
def test_numeric_override_frequency(self):
|
||||
"""Positive numeric override_frequency is floored to MHz."""
|
||||
msg = SimpleNamespace(override_frequency=915.8, region=None)
|
||||
assert ifaces._region_frequency(msg) == 915
|
||||
|
||||
def test_zero_override_frequency_falls_through(self):
|
||||
"""Zero override_frequency is ignored."""
|
||||
msg = SimpleNamespace(override_frequency=0, region=None)
|
||||
assert ifaces._region_frequency(msg) is None
|
||||
|
||||
def test_string_override_frequency(self):
|
||||
"""Non-empty string override_frequency is returned as-is."""
|
||||
msg = SimpleNamespace(override_frequency="915MHz", region=None)
|
||||
assert ifaces._region_frequency(msg) == "915MHz"
|
||||
|
||||
def test_enum_name_with_freq_digits(self):
|
||||
"""Extracts MHz frequency from enum name like US_915."""
|
||||
enum_val = SimpleNamespace(name="US_915")
|
||||
enum_type = SimpleNamespace(values_by_number={1: enum_val})
|
||||
field_desc = SimpleNamespace(enum_type=enum_type)
|
||||
desc = SimpleNamespace(fields_by_name={"region": field_desc})
|
||||
msg = SimpleNamespace(DESCRIPTOR=desc, override_frequency=None, region=1)
|
||||
assert ifaces._region_frequency(msg) == 915
|
||||
|
||||
def test_enum_name_without_large_digit_returns_name(self):
|
||||
"""Enum name with only small digits returns the full name string."""
|
||||
enum_val = SimpleNamespace(name="BAND_24")
|
||||
enum_type = SimpleNamespace(values_by_number={2: enum_val})
|
||||
field_desc = SimpleNamespace(enum_type=enum_type)
|
||||
desc = SimpleNamespace(fields_by_name={"region": field_desc})
|
||||
msg = SimpleNamespace(DESCRIPTOR=desc, override_frequency=None, region=2)
|
||||
# 24 < 100, so falls through to reversed digits → returns 24
|
||||
assert ifaces._region_frequency(msg) == 24
|
||||
|
||||
def test_large_integer_region_returned(self):
|
||||
"""Integer region value >= 100 is returned directly."""
|
||||
msg = SimpleNamespace(DESCRIPTOR=None, override_frequency=None, region=433)
|
||||
assert ifaces._region_frequency(msg) == 433
|
||||
|
||||
def test_string_region_returned(self):
|
||||
"""Non-empty string region is returned directly."""
|
||||
msg = SimpleNamespace(DESCRIPTOR=None, override_frequency=None, region="EU433")
|
||||
assert ifaces._region_frequency(msg) == "EU433"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _camelcase_enum_name
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestCamelcaseEnumName:
|
||||
"""Tests for :func:`interfaces._camelcase_enum_name`."""
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None input returns None."""
|
||||
assert ifaces._camelcase_enum_name(None) is None
|
||||
|
||||
def test_empty_string_returns_none(self):
|
||||
"""Empty string returns None."""
|
||||
assert ifaces._camelcase_enum_name("") is None
|
||||
|
||||
def test_screaming_snake(self):
|
||||
"""SCREAMING_SNAKE_CASE is converted to CamelCase."""
|
||||
assert ifaces._camelcase_enum_name("LONG_FAST") == "LongFast"
|
||||
|
||||
def test_single_word(self):
|
||||
"""Single word is capitalised."""
|
||||
assert ifaces._camelcase_enum_name("SHORT") == "Short"
|
||||
|
||||
def test_with_digits(self):
|
||||
"""Digits in the name are preserved."""
|
||||
assert ifaces._camelcase_enum_name("BAND_915") == "Band915"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _modem_preset
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestModemPreset:
|
||||
"""Tests for :func:`interfaces._modem_preset`."""
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None lora_message returns None."""
|
||||
assert ifaces._modem_preset(None) is None
|
||||
|
||||
def test_no_descriptor_no_attr_returns_none(self):
|
||||
"""Message with neither descriptor nor modem_preset attr returns None."""
|
||||
|
||||
class NoPreset:
|
||||
DESCRIPTOR = None
|
||||
|
||||
assert ifaces._modem_preset(NoPreset()) is None
|
||||
|
||||
def test_descriptor_modem_preset_field(self):
|
||||
"""Finds modem_preset via DESCRIPTOR fields_by_name."""
|
||||
enum_val = SimpleNamespace(name="LONG_FAST")
|
||||
enum_type = SimpleNamespace(values_by_number={0: enum_val})
|
||||
field_desc = SimpleNamespace(enum_type=enum_type)
|
||||
desc = SimpleNamespace(fields_by_name={"modem_preset": field_desc})
|
||||
msg = SimpleNamespace(DESCRIPTOR=desc, modem_preset=0)
|
||||
assert ifaces._modem_preset(msg) == "LongFast"
|
||||
|
||||
def test_attr_fallback(self):
|
||||
"""Falls back to hasattr when DESCRIPTOR is absent."""
|
||||
msg = SimpleNamespace(modem_preset="LONG_FAST")
|
||||
# No DESCRIPTOR so enum lookup won't work, falls to string branch
|
||||
result = ifaces._modem_preset(msg)
|
||||
assert result == "LongFast"
|
||||
|
||||
def test_preset_field_name_fallback(self):
|
||||
"""'preset' field is used when 'modem_preset' is absent in descriptor."""
|
||||
enum_val = SimpleNamespace(name="SHORT_FAST")
|
||||
enum_type = SimpleNamespace(values_by_number={1: enum_val})
|
||||
field_desc = SimpleNamespace(enum_type=enum_type)
|
||||
desc = SimpleNamespace(fields_by_name={"preset": field_desc})
|
||||
msg = SimpleNamespace(DESCRIPTOR=desc, preset=1)
|
||||
assert ifaces._modem_preset(msg) == "ShortFast"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _ensure_radio_metadata caching
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestEnsureRadioMetadata:
|
||||
"""Tests for :func:`interfaces._ensure_radio_metadata` caching behaviour."""
|
||||
|
||||
def test_none_iface_is_noop(self, monkeypatch):
|
||||
"""None interface does not touch config."""
|
||||
original_freq = config.LORA_FREQ
|
||||
original_preset = config.MODEM_PRESET
|
||||
ifaces._ensure_radio_metadata(None)
|
||||
assert config.LORA_FREQ == original_freq
|
||||
assert config.MODEM_PRESET == original_preset
|
||||
|
||||
def test_sets_lora_freq_when_not_cached(self, monkeypatch):
|
||||
"""Populates LORA_FREQ from interface when not yet configured."""
|
||||
monkeypatch.setattr(config, "LORA_FREQ", None)
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", None)
|
||||
|
||||
enum_val = SimpleNamespace(name="US_915")
|
||||
enum_type = SimpleNamespace(values_by_number={1: enum_val})
|
||||
region_field = SimpleNamespace(enum_type=enum_type)
|
||||
desc = SimpleNamespace(fields_by_name={"region": region_field})
|
||||
lora = SimpleNamespace(
|
||||
DESCRIPTOR=desc, region=1, override_frequency=None, modem_preset=None
|
||||
)
|
||||
local_config = SimpleNamespace(lora=lora, HasField=lambda f: f == "lora")
|
||||
local_node = SimpleNamespace(localConfig=local_config)
|
||||
iface = SimpleNamespace(localNode=local_node, waitForConfig=lambda: None)
|
||||
|
||||
ifaces._ensure_radio_metadata(iface)
|
||||
assert config.LORA_FREQ == 915
|
||||
|
||||
def test_does_not_overwrite_existing_freq(self, monkeypatch):
|
||||
"""Does not overwrite LORA_FREQ when already set."""
|
||||
monkeypatch.setattr(config, "LORA_FREQ", 433)
|
||||
monkeypatch.setattr(config, "MODEM_PRESET", None)
|
||||
|
||||
enum_val = SimpleNamespace(name="US_915")
|
||||
enum_type = SimpleNamespace(values_by_number={1: enum_val})
|
||||
region_field = SimpleNamespace(enum_type=enum_type)
|
||||
desc = SimpleNamespace(fields_by_name={"region": region_field})
|
||||
lora = SimpleNamespace(
|
||||
DESCRIPTOR=desc, region=1, override_frequency=None, modem_preset=None
|
||||
)
|
||||
local_config = SimpleNamespace(lora=lora, HasField=lambda f: f == "lora")
|
||||
local_node = SimpleNamespace(localConfig=local_config)
|
||||
iface = SimpleNamespace(localNode=local_node, waitForConfig=lambda: None)
|
||||
|
||||
ifaces._ensure_radio_metadata(iface)
|
||||
assert config.LORA_FREQ == 433
|
||||
+292
-5
@@ -788,6 +788,7 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
|
||||
|
||||
mesh.config.LORA_FREQ = 868
|
||||
mesh.config.MODEM_PRESET = "MediumFast"
|
||||
mesh.register_host_node_id("!f00dbabe")
|
||||
|
||||
packet = {
|
||||
"id": 123,
|
||||
@@ -823,6 +824,7 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
|
||||
assert payload["rssi"] == -70
|
||||
assert payload["reply_id"] is None
|
||||
assert payload["emoji"] is None
|
||||
assert payload["ingestor"] == "!f00dbabe"
|
||||
assert payload["lora_freq"] == 868
|
||||
assert payload["modem_preset"] == "MediumFast"
|
||||
assert priority == mesh._MESSAGE_POST_PRIORITY
|
||||
@@ -879,6 +881,7 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
|
||||
|
||||
mesh.config.LORA_FREQ = 868
|
||||
mesh.config.MODEM_PRESET = "MediumFast"
|
||||
mesh.register_host_node_id("!f00dbabe")
|
||||
|
||||
packet = {
|
||||
"id": 200498337,
|
||||
@@ -946,6 +949,7 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
|
||||
)
|
||||
assert payload["lora_freq"] == 868
|
||||
assert payload["modem_preset"] == "MediumFast"
|
||||
assert payload["ingestor"] == "!f00dbabe"
|
||||
assert payload["raw"]["time"] == 1_758_624_189
|
||||
|
||||
|
||||
@@ -960,6 +964,7 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
|
||||
|
||||
mesh.config.LORA_FREQ = 868
|
||||
mesh.config.MODEM_PRESET = "MediumFast"
|
||||
mesh.register_host_node_id("!f00dbabe")
|
||||
|
||||
packet = {
|
||||
"id": 2049886869,
|
||||
@@ -1004,6 +1009,7 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
|
||||
assert neighbors[2]["neighbor_num"] == 0x0BAD_C0DE
|
||||
assert payload["lora_freq"] == 868
|
||||
assert payload["modem_preset"] == "MediumFast"
|
||||
assert payload["ingestor"] == "!f00dbabe"
|
||||
|
||||
|
||||
def test_store_packet_dict_handles_nodeinfo_packet(mesh_module, monkeypatch):
|
||||
@@ -2128,7 +2134,7 @@ def test_store_packet_dict_skips_hidden_channel(mesh_module, monkeypatch, capsys
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
monkeypatch.setattr(
|
||||
mesh.handlers,
|
||||
mesh.handlers.ignored,
|
||||
"_record_ignored_packet",
|
||||
lambda packet, *, reason: ignored.append(reason),
|
||||
)
|
||||
@@ -2198,7 +2204,7 @@ def test_store_packet_dict_skips_disallowed_channel(mesh_module, monkeypatch, ca
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
monkeypatch.setattr(
|
||||
mesh.handlers,
|
||||
mesh.handlers.ignored,
|
||||
"_record_ignored_packet",
|
||||
lambda packet, *, reason: ignored.append(reason),
|
||||
)
|
||||
@@ -2282,6 +2288,7 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
|
||||
|
||||
mesh.config.LORA_FREQ = 868
|
||||
mesh.config.MODEM_PRESET = "MediumFast"
|
||||
mesh.register_host_node_id("!f00dbabe")
|
||||
|
||||
packet = {
|
||||
"id": 1_256_091_342,
|
||||
@@ -2334,6 +2341,8 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
|
||||
assert payload["current"] == pytest.approx(0.0715)
|
||||
assert payload["lora_freq"] == 868
|
||||
assert payload["modem_preset"] == "MediumFast"
|
||||
assert payload["ingestor"] == "!f00dbabe"
|
||||
assert payload["telemetry_type"] == "device"
|
||||
|
||||
|
||||
def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatch):
|
||||
@@ -2413,6 +2422,144 @@ def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatc
|
||||
assert payload["soil_temperature"] == pytest.approx(18.9)
|
||||
assert payload["lora_freq"] == 868
|
||||
assert payload["modem_preset"] == "MediumFast"
|
||||
assert payload["telemetry_type"] == "environment"
|
||||
|
||||
|
||||
def test_store_packet_dict_handles_power_telemetry(mesh_module, monkeypatch):
|
||||
"""Power-metrics packets are tagged telemetry_type='power'."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
|
||||
packet = {
|
||||
"id": 3_000_000_001,
|
||||
"rxTime": 1_758_030_000,
|
||||
"fromId": "!aabbccdd",
|
||||
"toId": "^all",
|
||||
"decoded": {
|
||||
"portnum": "TELEMETRY_APP",
|
||||
"telemetry": {
|
||||
"time": 1_758_030_000,
|
||||
"powerMetrics": {
|
||||
"ch1Voltage": 5.02,
|
||||
"ch1Current": 0.48,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
assert captured
|
||||
_, payload, _ = captured[0]
|
||||
assert payload["telemetry_type"] == "power"
|
||||
|
||||
|
||||
def test_store_packet_dict_handles_air_quality_telemetry(mesh_module, monkeypatch):
|
||||
"""Air-quality-metrics packets are tagged telemetry_type='air_quality'."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
|
||||
packet = {
|
||||
"id": 3_000_000_003,
|
||||
"rxTime": 1_758_032_000,
|
||||
"fromId": "!aabbccdd",
|
||||
"toId": "^all",
|
||||
"decoded": {
|
||||
"portnum": "TELEMETRY_APP",
|
||||
"telemetry": {
|
||||
"time": 1_758_032_000,
|
||||
"airQualityMetrics": {
|
||||
"pm10Standard": 4,
|
||||
"pm25Standard": 8,
|
||||
"iaq": 65,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
assert captured
|
||||
_, payload, _ = captured[0]
|
||||
assert payload["telemetry_type"] == "air_quality"
|
||||
|
||||
|
||||
def test_store_packet_dict_telemetry_type_absent_for_unknown_subtype(
|
||||
mesh_module, monkeypatch
|
||||
):
|
||||
"""Packets with no recognised sub-object do not include telemetry_type in the payload."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
|
||||
packet = {
|
||||
"id": 3_000_000_002,
|
||||
"rxTime": 1_758_031_000,
|
||||
"fromId": "!aabbccdd",
|
||||
"toId": "^all",
|
||||
"decoded": {
|
||||
"portnum": "TELEMETRY_APP",
|
||||
"telemetry": {
|
||||
"time": 1_758_031_000,
|
||||
"someUnknownMetrics": {"foo": 1},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
assert captured
|
||||
_, payload, _ = captured[0]
|
||||
assert "telemetry_type" not in payload
|
||||
|
||||
|
||||
def test_store_packet_dict_invalid_telemetry_type_is_dropped(mesh_module, monkeypatch):
|
||||
"""A telemetry_type value that isn't in _VALID_TELEMETRY_TYPES is omitted from the payload."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
|
||||
# Inject a bad type by monkey-patching the validator constant so we can
|
||||
# verify the drop path without needing a real packet with an impossible type.
|
||||
monkeypatch.setattr(mesh.handlers.telemetry, "_VALID_TELEMETRY_TYPES", frozenset())
|
||||
|
||||
packet = {
|
||||
"id": 3_000_000_010,
|
||||
"rxTime": 1_758_040_000,
|
||||
"fromId": "!aabbccdd",
|
||||
"toId": "^all",
|
||||
"decoded": {
|
||||
"portnum": "TELEMETRY_APP",
|
||||
"telemetry": {
|
||||
"time": 1_758_040_000,
|
||||
"deviceMetrics": {"batteryLevel": 80},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
assert captured
|
||||
_, payload, _ = captured[0]
|
||||
assert "telemetry_type" not in payload
|
||||
|
||||
|
||||
def test_store_packet_dict_throttles_host_telemetry(mesh_module, monkeypatch):
|
||||
@@ -2477,6 +2624,7 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):
|
||||
|
||||
mesh.config.LORA_FREQ = 915
|
||||
mesh.config.MODEM_PRESET = "LongFast"
|
||||
mesh.register_host_node_id("!f00dbabe")
|
||||
|
||||
packet = {
|
||||
"id": 2_934_054_466,
|
||||
@@ -2518,6 +2666,7 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):
|
||||
assert "elapsed_ms" in payload
|
||||
assert payload["lora_freq"] == 915
|
||||
assert payload["modem_preset"] == "LongFast"
|
||||
assert payload["ingestor"] == "!f00dbabe"
|
||||
|
||||
|
||||
def test_traceroute_hop_normalization_supports_mappings(mesh_module, monkeypatch):
|
||||
@@ -2874,7 +3023,7 @@ def test_default_serial_targets_deduplicates(mesh_module, monkeypatch):
|
||||
return ["/dev/ttyACM1"]
|
||||
return []
|
||||
|
||||
monkeypatch.setattr(mesh.interfaces.glob, "glob", fake_glob)
|
||||
monkeypatch.setattr(mesh.connection.glob, "glob", fake_glob)
|
||||
|
||||
targets = mesh._default_serial_targets()
|
||||
|
||||
@@ -3049,9 +3198,32 @@ def test_queue_ingestor_heartbeat_enqueues_and_throttles(mesh_module, monkeypatc
|
||||
assert payload["version"] == mesh.VERSION
|
||||
assert payload["lora_freq"] == 915
|
||||
assert payload["modem_preset"] == "LongFast"
|
||||
assert payload["protocol"] == "meshtastic"
|
||||
assert priority == mesh.queue._INGESTOR_POST_PRIORITY
|
||||
|
||||
|
||||
def test_queue_ingestor_heartbeat_protocol_meshcore(mesh_module, monkeypatch):
|
||||
"""Heartbeat payload must carry the configured PROVIDER as its protocol."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
|
||||
monkeypatch.setattr(
|
||||
mesh.queue,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority, send=None: captured.append(payload),
|
||||
)
|
||||
|
||||
mesh.ingestors.STATE.last_heartbeat = None
|
||||
mesh.ingestors.STATE.node_id = None
|
||||
mesh.config.PROVIDER = "meshcore"
|
||||
|
||||
mesh.ingestors.set_ingestor_node_id("!aabbccdd")
|
||||
mesh.ingestors.queue_ingestor_heartbeat(force=True)
|
||||
|
||||
assert len(captured) == 1, "expected exactly one heartbeat payload"
|
||||
assert captured[0]["protocol"] == "meshcore"
|
||||
|
||||
|
||||
def test_mesh_version_export_matches_package(mesh_module):
|
||||
import data
|
||||
|
||||
@@ -3106,8 +3278,8 @@ def test_store_packet_dict_records_ignored_packets(mesh_module, monkeypatch, tmp
|
||||
|
||||
monkeypatch.setattr(mesh, "DEBUG", True)
|
||||
ignored_path = tmp_path / "ignored.txt"
|
||||
monkeypatch.setattr(mesh.handlers, "_IGNORED_PACKET_LOG_PATH", ignored_path)
|
||||
monkeypatch.setattr(mesh.handlers, "_IGNORED_PACKET_LOCK", threading.Lock())
|
||||
monkeypatch.setattr(mesh.handlers.ignored, "_IGNORED_PACKET_LOG_PATH", ignored_path)
|
||||
monkeypatch.setattr(mesh.handlers.ignored, "_IGNORED_PACKET_LOCK", threading.Lock())
|
||||
|
||||
packet = {"decoded": {"portnum": "UNKNOWN"}}
|
||||
mesh.store_packet_dict(packet)
|
||||
@@ -3459,3 +3631,118 @@ def test_on_receive_skips_seen_packets(mesh_module):
|
||||
mesh.on_receive(packet, interface=None)
|
||||
|
||||
assert packet["_potatomesh_seen"] is True
|
||||
|
||||
|
||||
def test_upsert_node_includes_ingestor_key(mesh_module, monkeypatch):
|
||||
"""upsert_node must attach the host node ID so /api/nodes can resolve protocol."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
mesh.register_host_node_id("!aabbccdd")
|
||||
|
||||
mesh.upsert_node("!deadbeef", {"user": {"shortName": "X"}})
|
||||
|
||||
assert captured
|
||||
_, payload, _ = captured[0]
|
||||
assert payload.get("ingestor") == "!aabbccdd"
|
||||
|
||||
|
||||
def test_store_packet_dict_nodeinfo_includes_ingestor_key(mesh_module, monkeypatch):
|
||||
"""store_nodeinfo_packet must include the ingestor key in the /api/nodes payload."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
mesh.register_host_node_id("!11223344")
|
||||
|
||||
packet = {
|
||||
"id": 1,
|
||||
"rxTime": 1_700_000_000,
|
||||
"fromId": "!aabbccdd",
|
||||
"decoded": {
|
||||
"portnum": "NODEINFO_APP",
|
||||
"user": {"id": "!aabbccdd", "shortName": "N"},
|
||||
},
|
||||
}
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
node_calls = [(p, pl) for p, pl, _ in captured if p == "/api/nodes"]
|
||||
assert node_calls, "Expected a /api/nodes POST"
|
||||
_, payload = node_calls[0]
|
||||
assert payload.get("ingestor") == "!11223344"
|
||||
|
||||
|
||||
def test_store_packet_dict_router_heartbeat(mesh_module, monkeypatch):
|
||||
"""STORE_FORWARD_APP ROUTER_HEARTBEAT upserts the node at low priority."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
mesh.register_host_node_id("!f00dbabe")
|
||||
|
||||
packet = {
|
||||
"id": 2377284085,
|
||||
"rxTime": 1_774_868_197,
|
||||
"fromId": "!435a7fbc",
|
||||
"toId": "^all",
|
||||
"hopLimit": "2",
|
||||
"rxSnr": "-12.25",
|
||||
"rxRssi": "-110",
|
||||
"decoded": {
|
||||
"portnum": "STORE_FORWARD_APP",
|
||||
"storeforward": {
|
||||
"heartbeat": {"period": "900"},
|
||||
"rr": "ROUTER_HEARTBEAT",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
assert captured, "Expected a POST for router heartbeat"
|
||||
path, payload, priority = captured[0]
|
||||
assert path == "/api/nodes"
|
||||
assert priority == mesh._DEFAULT_POST_PRIORITY
|
||||
assert "!435a7fbc" in payload
|
||||
node_entry = payload["!435a7fbc"]
|
||||
assert node_entry["lastHeard"] == 1_774_868_197
|
||||
assert payload.get("ingestor") == "!f00dbabe"
|
||||
assert set(node_entry.keys()) == {
|
||||
"lastHeard"
|
||||
}, "Heartbeat must only set lastHeard, nothing else"
|
||||
|
||||
|
||||
def test_store_packet_dict_store_forward_non_heartbeat_ignored(
|
||||
mesh_module, monkeypatch
|
||||
):
|
||||
"""STORE_FORWARD_APP packets that are not ROUTER_HEARTBEAT are dropped."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda *a, **kw: captured.append(a),
|
||||
)
|
||||
|
||||
packet = {
|
||||
"id": 1,
|
||||
"rxTime": 1_700_000_000,
|
||||
"fromId": "!aabbccdd",
|
||||
"decoded": {
|
||||
"portnum": "STORE_FORWARD_APP",
|
||||
"storeforward": {"rr": "ROUTER_CLIENT_RESPONSE"},
|
||||
},
|
||||
}
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
assert not captured, "Non-heartbeat STORE_FORWARD_APP must not be queued"
|
||||
|
||||
@@ -0,0 +1,74 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Unit tests for :mod:`data.mesh_ingestor.node_identity`."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
from data.mesh_ingestor.node_identity import ( # noqa: E402 - path setup
|
||||
canonical_node_id,
|
||||
node_num_from_id,
|
||||
)
|
||||
|
||||
|
||||
def test_canonical_node_id_accepts_numeric():
|
||||
assert canonical_node_id(1) == "!00000001"
|
||||
assert canonical_node_id(0xABCDEF01) == "!abcdef01"
|
||||
assert canonical_node_id(1.0) == "!00000001"
|
||||
|
||||
|
||||
def test_canonical_node_id_accepts_string_forms():
|
||||
assert canonical_node_id("!ABCDEF01") == "!abcdef01"
|
||||
assert canonical_node_id("0xABCDEF01") == "!abcdef01"
|
||||
assert canonical_node_id("abcdef01") == "!abcdef01"
|
||||
assert canonical_node_id("123") == "!0000007b"
|
||||
|
||||
|
||||
def test_canonical_node_id_passthrough_caret_destinations():
|
||||
assert canonical_node_id("^all") == "^all"
|
||||
|
||||
|
||||
def test_node_num_from_id_parses_canonical_and_hex():
|
||||
assert node_num_from_id("!abcdef01") == 0xABCDEF01
|
||||
assert node_num_from_id("abcdef01") == 0xABCDEF01
|
||||
assert node_num_from_id("0xabcdef01") == 0xABCDEF01
|
||||
assert node_num_from_id(123) == 123
|
||||
|
||||
|
||||
def test_canonical_node_id_rejects_none_and_empty():
|
||||
assert canonical_node_id(None) is None
|
||||
assert canonical_node_id("") is None
|
||||
assert canonical_node_id(" ") is None
|
||||
|
||||
|
||||
def test_canonical_node_id_rejects_negative():
|
||||
assert canonical_node_id(-1) is None
|
||||
assert canonical_node_id(-0xABCDEF01) is None
|
||||
|
||||
|
||||
def test_canonical_node_id_truncates_overflow():
|
||||
# Values wider than 32 bits are masked, not rejected.
|
||||
assert canonical_node_id(0x1_ABCDEF01) == "!abcdef01"
|
||||
|
||||
|
||||
def test_node_num_from_id_rejects_none_and_empty():
|
||||
assert node_num_from_id(None) is None
|
||||
assert node_num_from_id("") is None
|
||||
assert node_num_from_id("not-hex") is None
|
||||
File diff suppressed because it is too large
Load Diff
@@ -0,0 +1,367 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Unit tests for :mod:`data.mesh_ingestor.queue`."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
import threading
|
||||
import urllib.error
|
||||
import urllib.request
|
||||
from pathlib import Path
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
import pytest
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
import data.mesh_ingestor.config as config
|
||||
from data.mesh_ingestor.queue import (
|
||||
QueueState,
|
||||
_clear_post_queue,
|
||||
_drain_post_queue,
|
||||
_enqueue_post_json,
|
||||
_post_json,
|
||||
_queue_post_json,
|
||||
_DEFAULT_POST_PRIORITY,
|
||||
_MESSAGE_POST_PRIORITY,
|
||||
_NODE_POST_PRIORITY,
|
||||
)
|
||||
|
||||
|
||||
def _fresh_state() -> QueueState:
|
||||
"""Return a new QueueState for isolation."""
|
||||
return QueueState()
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _post_json
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestPostJson:
|
||||
"""Tests for :func:`queue._post_json`."""
|
||||
|
||||
def test_skips_when_no_instance(self, monkeypatch):
|
||||
"""Does nothing when INSTANCE is empty."""
|
||||
monkeypatch.setattr(config, "INSTANCE", "")
|
||||
sent = []
|
||||
with patch("urllib.request.urlopen") as mock_open:
|
||||
_post_json("/api/test", {"key": "val"})
|
||||
mock_open.assert_not_called()
|
||||
|
||||
def test_sends_json_post(self, monkeypatch):
|
||||
"""Sends a POST request with JSON body and correct headers."""
|
||||
monkeypatch.setattr(config, "INSTANCE", "http://localhost")
|
||||
monkeypatch.setattr(config, "API_TOKEN", "tok")
|
||||
|
||||
captured_req = []
|
||||
|
||||
class FakeResp:
|
||||
def read(self):
|
||||
return b""
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, *a):
|
||||
pass
|
||||
|
||||
def fake_urlopen(req, timeout=None):
|
||||
captured_req.append(req)
|
||||
return FakeResp()
|
||||
|
||||
with patch("urllib.request.urlopen", fake_urlopen):
|
||||
_post_json("/api/nodes", {"a": 1})
|
||||
|
||||
assert len(captured_req) == 1
|
||||
req = captured_req[0]
|
||||
assert req.get_full_url() == "http://localhost/api/nodes"
|
||||
assert req.get_header("Content-type") == "application/json"
|
||||
assert req.get_header("Authorization") == "Bearer tok"
|
||||
|
||||
def test_handles_network_error_gracefully(self, monkeypatch, capsys):
|
||||
"""Network errors are caught and logged, not raised."""
|
||||
monkeypatch.setattr(config, "INSTANCE", "http://localhost")
|
||||
monkeypatch.setattr(config, "API_TOKEN", "")
|
||||
monkeypatch.setattr(config, "DEBUG", True)
|
||||
|
||||
def raise_error(req, timeout=None):
|
||||
raise OSError("connection refused")
|
||||
|
||||
with patch("urllib.request.urlopen", raise_error):
|
||||
_post_json("/api/test", {"x": 1}) # should not raise
|
||||
|
||||
def test_uses_instance_override(self, monkeypatch):
|
||||
"""instance parameter overrides config.INSTANCE."""
|
||||
monkeypatch.setattr(config, "INSTANCE", "http://default")
|
||||
|
||||
captured_req = []
|
||||
|
||||
class FakeResp:
|
||||
def read(self):
|
||||
return b""
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, *a):
|
||||
pass
|
||||
|
||||
def fake_urlopen(req, timeout=None):
|
||||
captured_req.append(req)
|
||||
return FakeResp()
|
||||
|
||||
with patch("urllib.request.urlopen", fake_urlopen):
|
||||
_post_json("/api/test", {}, instance="http://override")
|
||||
|
||||
assert "http://override" in captured_req[0].get_full_url()
|
||||
|
||||
def test_no_auth_header_when_token_empty(self, monkeypatch):
|
||||
"""No Authorization header is added when API_TOKEN is empty."""
|
||||
monkeypatch.setattr(config, "INSTANCE", "http://localhost")
|
||||
monkeypatch.setattr(config, "API_TOKEN", "")
|
||||
|
||||
captured_req = []
|
||||
|
||||
class FakeResp:
|
||||
def read(self):
|
||||
return b""
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, *a):
|
||||
pass
|
||||
|
||||
def fake_urlopen(req, timeout=None):
|
||||
captured_req.append(req)
|
||||
return FakeResp()
|
||||
|
||||
with patch("urllib.request.urlopen", fake_urlopen):
|
||||
_post_json("/api/test", {})
|
||||
|
||||
assert captured_req[0].get_header("Authorization") is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _enqueue_post_json
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestEnqueuePostJson:
|
||||
"""Tests for :func:`queue._enqueue_post_json`."""
|
||||
|
||||
def test_adds_item_to_queue(self):
|
||||
"""Item is added to the heap with correct priority."""
|
||||
state = _fresh_state()
|
||||
_enqueue_post_json("/api/test", {"k": 1}, 50, state=state)
|
||||
assert len(state.queue) == 1
|
||||
priority, _counter, path, payload = state.queue[0]
|
||||
assert priority == 50
|
||||
assert path == "/api/test"
|
||||
assert payload == {"k": 1}
|
||||
|
||||
def test_heap_ordering(self):
|
||||
"""Lower priority values are dequeued first (min-heap)."""
|
||||
import heapq
|
||||
|
||||
state = _fresh_state()
|
||||
_enqueue_post_json("/api/low", {}, 90, state=state)
|
||||
_enqueue_post_json("/api/high", {}, 10, state=state)
|
||||
_priority, _counter, path, _payload = heapq.heappop(state.queue)
|
||||
assert path == "/api/high"
|
||||
|
||||
def test_counter_increments(self):
|
||||
"""Counter increments for each enqueue call."""
|
||||
state = _fresh_state()
|
||||
_enqueue_post_json("/a", {}, 10, state=state)
|
||||
_enqueue_post_json("/b", {}, 10, state=state)
|
||||
counters = [item[1] for item in state.queue]
|
||||
assert counters[0] != counters[1]
|
||||
|
||||
def test_thread_safe_concurrent_enqueue(self):
|
||||
"""Concurrent enqueues from multiple threads do not corrupt the queue."""
|
||||
state = _fresh_state()
|
||||
errors = []
|
||||
|
||||
def enqueue():
|
||||
try:
|
||||
for i in range(50):
|
||||
_enqueue_post_json("/api/t", {"i": i}, 10, state=state)
|
||||
except Exception as exc:
|
||||
errors.append(exc)
|
||||
|
||||
threads = [threading.Thread(target=enqueue) for _ in range(4)]
|
||||
for t in threads:
|
||||
t.start()
|
||||
for t in threads:
|
||||
t.join()
|
||||
|
||||
assert errors == []
|
||||
assert len(state.queue) == 200
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _drain_post_queue
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestDrainPostQueue:
|
||||
"""Tests for :func:`queue._drain_post_queue`."""
|
||||
|
||||
def test_drains_all_items(self):
|
||||
"""All queued items are sent and queue is emptied."""
|
||||
state = _fresh_state()
|
||||
sent = []
|
||||
_enqueue_post_json("/a", {"n": 1}, 10, state=state)
|
||||
_enqueue_post_json("/b", {"n": 2}, 20, state=state)
|
||||
_drain_post_queue(state, send=lambda path, payload: sent.append(path))
|
||||
assert sorted(sent) == ["/a", "/b"]
|
||||
assert state.queue == []
|
||||
|
||||
def test_sets_active_false_after_drain(self):
|
||||
"""active flag is set to False after draining."""
|
||||
state = _fresh_state()
|
||||
state.active = True
|
||||
_enqueue_post_json("/x", {}, 10, state=state)
|
||||
_drain_post_queue(state, send=lambda p, d: None)
|
||||
assert state.active is False
|
||||
|
||||
def test_empty_queue_sets_active_false(self):
|
||||
"""Empty queue immediately sets active to False."""
|
||||
state = _fresh_state()
|
||||
state.active = True
|
||||
_drain_post_queue(state, send=lambda p, d: None)
|
||||
assert state.active is False
|
||||
|
||||
def test_sends_in_priority_order(self):
|
||||
"""Items are sent in ascending priority order."""
|
||||
state = _fresh_state()
|
||||
sent = []
|
||||
_enqueue_post_json("/low", {}, 90, state=state)
|
||||
_enqueue_post_json("/high", {}, 10, state=state)
|
||||
_enqueue_post_json("/mid", {}, 50, state=state)
|
||||
_drain_post_queue(state, send=lambda path, payload: sent.append(path))
|
||||
assert sent == ["/high", "/mid", "/low"]
|
||||
|
||||
def test_active_false_even_when_send_raises(self):
|
||||
"""active is set to False even if the send callable raises."""
|
||||
state = _fresh_state()
|
||||
state.active = True
|
||||
_enqueue_post_json("/x", {}, 10, state=state)
|
||||
|
||||
def boom(path, payload):
|
||||
raise RuntimeError("send failed")
|
||||
|
||||
with pytest.raises(RuntimeError):
|
||||
_drain_post_queue(state, send=boom)
|
||||
assert state.active is False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _queue_post_json
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestQueuePostJson:
|
||||
"""Tests for :func:`queue._queue_post_json`."""
|
||||
|
||||
def test_sends_immediately_when_idle(self):
|
||||
"""When the queue is idle, the item is sent synchronously."""
|
||||
state = _fresh_state()
|
||||
sent = []
|
||||
_queue_post_json(
|
||||
"/api/test",
|
||||
{"v": 1},
|
||||
priority=10,
|
||||
state=state,
|
||||
send=lambda p, d: sent.append(p),
|
||||
)
|
||||
assert "/api/test" in sent
|
||||
|
||||
def test_enqueues_when_active(self):
|
||||
"""When the queue is already active, the item is enqueued for later."""
|
||||
state = _fresh_state()
|
||||
state.active = True # simulate in-flight drain
|
||||
_queue_post_json(
|
||||
"/api/test",
|
||||
{"v": 1},
|
||||
priority=10,
|
||||
state=state,
|
||||
send=lambda p, d: None,
|
||||
)
|
||||
# Item should be in the queue (not sent yet since active=True)
|
||||
assert len(state.queue) == 1
|
||||
|
||||
def test_sets_active_true_when_starting(self):
|
||||
"""active is set to True before draining starts."""
|
||||
state = _fresh_state()
|
||||
seen_active = []
|
||||
|
||||
def capture_active(path, payload):
|
||||
seen_active.append(state.active)
|
||||
|
||||
_queue_post_json("/api/test", {}, priority=10, state=state, send=capture_active)
|
||||
# During the drain, active was True
|
||||
assert any(seen_active)
|
||||
|
||||
def test_default_priority_used_when_not_specified(self):
|
||||
"""Default priority is applied when not explicitly provided."""
|
||||
state = _fresh_state()
|
||||
sent_priority = []
|
||||
|
||||
original_enqueue = _enqueue_post_json
|
||||
|
||||
def capturing_enqueue(path, payload, priority, *, state):
|
||||
sent_priority.append(priority)
|
||||
original_enqueue(path, payload, priority, state=state)
|
||||
|
||||
import data.mesh_ingestor.queue as _q
|
||||
|
||||
original = _q._enqueue_post_json
|
||||
_q._enqueue_post_json = capturing_enqueue
|
||||
try:
|
||||
_queue_post_json("/api/x", {}, state=state, send=lambda p, d: None)
|
||||
finally:
|
||||
_q._enqueue_post_json = original
|
||||
|
||||
assert sent_priority == [_DEFAULT_POST_PRIORITY]
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _clear_post_queue
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestClearPostQueue:
|
||||
"""Tests for :func:`queue._clear_post_queue`."""
|
||||
|
||||
def test_clears_queue_and_resets_active(self):
|
||||
"""Queue is emptied and active is set to False."""
|
||||
state = _fresh_state()
|
||||
_enqueue_post_json("/a", {}, 10, state=state)
|
||||
_enqueue_post_json("/b", {}, 20, state=state)
|
||||
state.active = True
|
||||
_clear_post_queue(state=state)
|
||||
assert state.queue == []
|
||||
assert state.active is False
|
||||
|
||||
def test_clears_empty_queue(self):
|
||||
"""Clearing an already-empty queue is a no-op."""
|
||||
state = _fresh_state()
|
||||
_clear_post_queue(state=state)
|
||||
assert state.queue == []
|
||||
@@ -390,3 +390,186 @@ def test_nodeinfo_user_dict_proto_fallback(monkeypatch):
|
||||
|
||||
decoded_user = DecodedProto()
|
||||
assert serialization._nodeinfo_user_dict(None, decoded_user) is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _coerce_int edge cases
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestCoerceInt:
|
||||
"""Tests for :func:`serialization._coerce_int` edge cases."""
|
||||
|
||||
def test_bool_true(self):
|
||||
"""True coerces to 1."""
|
||||
assert serialization._coerce_int(True) == 1
|
||||
|
||||
def test_bool_false(self):
|
||||
"""False coerces to 0."""
|
||||
assert serialization._coerce_int(False) == 0
|
||||
|
||||
def test_nan_float_returns_none(self):
|
||||
"""NaN float returns None."""
|
||||
import math
|
||||
|
||||
assert serialization._coerce_int(math.nan) is None
|
||||
|
||||
def test_inf_float_returns_none(self):
|
||||
"""Inf float returns None."""
|
||||
import math
|
||||
|
||||
assert serialization._coerce_int(math.inf) is None
|
||||
|
||||
def test_bytes_decimal(self):
|
||||
"""Bytes containing a decimal string are parsed."""
|
||||
assert serialization._coerce_int(b"42") == 42
|
||||
|
||||
def test_bytes_hex(self):
|
||||
"""Bytes containing a 0x hex string are parsed."""
|
||||
assert serialization._coerce_int(b"0xff") == 255
|
||||
|
||||
def test_empty_bytes_returns_none(self):
|
||||
"""Empty bytes returns None."""
|
||||
assert serialization._coerce_int(b"") is None
|
||||
|
||||
def test_invalid_string_returns_none(self):
|
||||
"""Non-numeric string returns None."""
|
||||
assert serialization._coerce_int("not-an-int") is None
|
||||
|
||||
def test_float_string_coerced(self):
|
||||
"""Decimal string like '3.7' is truncated to int."""
|
||||
assert serialization._coerce_int("3.7") == 3
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None returns None."""
|
||||
assert serialization._coerce_int(None) is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _coerce_float edge cases
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestCoerceFloat:
|
||||
"""Tests for :func:`serialization._coerce_float` edge cases."""
|
||||
|
||||
def test_bool_true(self):
|
||||
"""True coerces to 1.0."""
|
||||
assert serialization._coerce_float(True) == pytest.approx(1.0)
|
||||
|
||||
def test_nan_returns_none(self):
|
||||
"""NaN returns None."""
|
||||
import math
|
||||
|
||||
assert serialization._coerce_float(math.nan) is None
|
||||
|
||||
def test_inf_returns_none(self):
|
||||
"""Inf returns None."""
|
||||
import math
|
||||
|
||||
assert serialization._coerce_float(math.inf) is None
|
||||
|
||||
def test_bytes_string(self):
|
||||
"""Bytes containing a float string are parsed."""
|
||||
assert serialization._coerce_float(b"3.14") == pytest.approx(3.14)
|
||||
|
||||
def test_empty_bytes_returns_none(self):
|
||||
"""Empty bytes returns None."""
|
||||
assert serialization._coerce_float(b"") is None
|
||||
|
||||
def test_invalid_string_returns_none(self):
|
||||
"""Non-numeric string returns None."""
|
||||
assert serialization._coerce_float("not-a-float") is None
|
||||
|
||||
def test_none_returns_none(self):
|
||||
"""None returns None."""
|
||||
assert serialization._coerce_float(None) is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _first dot-notation
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestFirstDotNotation:
|
||||
"""Tests for :func:`serialization._first` with dot-separated names."""
|
||||
|
||||
def test_dot_notation_nested_dict(self):
|
||||
"""Dot notation resolves nested dict keys."""
|
||||
d = {"a": {"b": 42}}
|
||||
assert serialization._first(d, "a.b") == 42
|
||||
|
||||
def test_dot_notation_falls_back_to_next_name(self):
|
||||
"""Falls back to the next candidate when dot-path misses."""
|
||||
d = {"x": 99}
|
||||
assert serialization._first(d, "a.b", "x") == 99
|
||||
|
||||
def test_dot_notation_none_value_skipped(self):
|
||||
"""None value at dot-path is skipped."""
|
||||
d = {"a": {"b": None}}
|
||||
assert serialization._first(d, "a.b", default="fallback") == "fallback"
|
||||
|
||||
def test_dot_notation_empty_string_skipped(self):
|
||||
"""Empty string at dot-path is skipped."""
|
||||
d = {"a": {"b": ""}}
|
||||
assert serialization._first(d, "a.b", default="fallback") == "fallback"
|
||||
|
||||
def test_attr_dot_notation(self):
|
||||
"""Dot notation works for objects with attributes."""
|
||||
from types import SimpleNamespace
|
||||
|
||||
d = SimpleNamespace(a=SimpleNamespace(b=7))
|
||||
assert serialization._first(d, "a.b") == 7
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _merge_mappings non-mapping extra
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestMergeMappingsExtra:
|
||||
"""Additional tests for :func:`serialization._merge_mappings`."""
|
||||
|
||||
def test_non_mapping_extra_ignored(self):
|
||||
"""Non-mapping extra with non-convertible value returns base unchanged."""
|
||||
base = {"x": 1}
|
||||
# Pass a string as extra — _node_to_dict will return the string, which
|
||||
# is not a Mapping, so base is returned as-is.
|
||||
result = serialization._merge_mappings(base, "not-a-mapping")
|
||||
assert result == {"x": 1}
|
||||
|
||||
def test_deep_merge(self):
|
||||
"""Nested mappings are merged recursively."""
|
||||
base = {"a": {"b": 1, "c": 2}}
|
||||
extra = {"a": {"b": 99}}
|
||||
result = serialization._merge_mappings(base, extra)
|
||||
assert result == {"a": {"b": 99, "c": 2}}
|
||||
|
||||
def test_extra_key_added(self):
|
||||
"""Keys present only in extra are added to the result."""
|
||||
base = {"a": 1}
|
||||
extra = {"b": 2}
|
||||
result = serialization._merge_mappings(base, extra)
|
||||
assert result == {"a": 1, "b": 2}
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _extract_payload_bytes additional branches
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestExtractPayloadBytesExtra:
|
||||
"""Additional coverage for :func:`serialization._extract_payload_bytes`."""
|
||||
|
||||
def test_non_mapping_input_returns_none(self):
|
||||
"""Non-mapping decoded section returns None."""
|
||||
assert serialization._extract_payload_bytes("not-a-dict") is None
|
||||
|
||||
def test_no_payload_key_returns_none(self):
|
||||
"""Missing payload key returns None."""
|
||||
assert serialization._extract_payload_bytes({}) is None
|
||||
|
||||
def test_bytes_payload_returned_directly(self):
|
||||
"""Raw bytes payload is returned as-is."""
|
||||
result = serialization._extract_payload_bytes({"payload": b"\x01\x02"})
|
||||
assert result == b"\x01\x02"
|
||||
|
||||
@@ -55,8 +55,38 @@ def _javascript_package_version() -> str:
|
||||
raise AssertionError("package.json does not expose a string version")
|
||||
|
||||
|
||||
def _flutter_package_version() -> str:
|
||||
pubspec_path = REPO_ROOT / "app" / "pubspec.yaml"
|
||||
for line in pubspec_path.read_text(encoding="utf-8").splitlines():
|
||||
if line.startswith("version:"):
|
||||
version = line.split(":", 1)[1].strip()
|
||||
if version:
|
||||
return version
|
||||
break
|
||||
raise AssertionError("pubspec.yaml does not expose a version")
|
||||
|
||||
|
||||
def _rust_package_version() -> str:
|
||||
cargo_path = REPO_ROOT / "matrix" / "Cargo.toml"
|
||||
inside_package = False
|
||||
for line in cargo_path.read_text(encoding="utf-8").splitlines():
|
||||
stripped = line.strip()
|
||||
if stripped == "[package]":
|
||||
inside_package = True
|
||||
continue
|
||||
if inside_package and stripped.startswith("[") and stripped.endswith("]"):
|
||||
break
|
||||
if inside_package:
|
||||
literal = re.match(
|
||||
r'version\s*=\s*["\'](?P<version>[^"\']+)["\']', stripped
|
||||
)
|
||||
if literal:
|
||||
return literal.group("version")
|
||||
raise AssertionError("Cargo.toml does not expose a package version")
|
||||
|
||||
|
||||
def test_version_identifiers_match_across_languages() -> None:
|
||||
"""Guard against version drift between Python, Ruby, and JavaScript."""
|
||||
"""Guard against version drift between Python, Ruby, JavaScript, Flutter, and Rust."""
|
||||
|
||||
python_version = getattr(data, "__version__", None)
|
||||
assert (
|
||||
@@ -65,5 +95,13 @@ def test_version_identifiers_match_across_languages() -> None:
|
||||
|
||||
ruby_version = _ruby_fallback_version()
|
||||
javascript_version = _javascript_package_version()
|
||||
flutter_version = _flutter_package_version()
|
||||
rust_version = _rust_package_version()
|
||||
|
||||
assert python_version == ruby_version == javascript_version
|
||||
assert (
|
||||
python_version
|
||||
== ruby_version
|
||||
== javascript_version
|
||||
== flutter_version
|
||||
== rust_version
|
||||
)
|
||||
|
||||
@@ -23,6 +23,9 @@ ENV BUNDLE_FORCE_RUBY_PLATFORM=true
|
||||
# Install build dependencies and SQLite3
|
||||
RUN apk add --no-cache \
|
||||
build-base \
|
||||
python3 \
|
||||
py3-pip \
|
||||
py3-virtualenv \
|
||||
sqlite-dev \
|
||||
linux-headers \
|
||||
pkgconfig
|
||||
@@ -38,11 +41,16 @@ RUN bundle config set --local force_ruby_platform true && \
|
||||
bundle config set --local without 'development test' && \
|
||||
bundle install --jobs=4 --retry=3
|
||||
|
||||
# Install Meshtastic decoder dependencies in a dedicated venv
|
||||
RUN python3 -m venv /opt/meshtastic-venv && \
|
||||
/opt/meshtastic-venv/bin/pip install --no-cache-dir meshtastic protobuf
|
||||
|
||||
# Production stage
|
||||
FROM ruby:3.3-alpine AS production
|
||||
|
||||
# Install runtime dependencies
|
||||
RUN apk add --no-cache \
|
||||
python3 \
|
||||
sqlite \
|
||||
tzdata \
|
||||
curl
|
||||
@@ -56,6 +64,7 @@ WORKDIR /app
|
||||
|
||||
# Copy installed gems from builder stage
|
||||
COPY --from=builder /usr/local/bundle /usr/local/bundle
|
||||
COPY --from=builder /opt/meshtastic-venv /opt/meshtastic-venv
|
||||
|
||||
# Copy application code (excluding the Dockerfile which is not required at runtime)
|
||||
COPY --chown=potatomesh:potatomesh web/app.rb ./
|
||||
@@ -70,6 +79,7 @@ COPY --chown=potatomesh:potatomesh web/scripts ./scripts
|
||||
|
||||
# Copy SQL schema files from data directory
|
||||
COPY --chown=potatomesh:potatomesh data/*.sql /data/
|
||||
COPY --chown=potatomesh:potatomesh data/mesh_ingestor/decode_payload.py /app/data/mesh_ingestor/decode_payload.py
|
||||
|
||||
# Create data and configuration directories with correct ownership
|
||||
RUN mkdir -p /app/.local/share/potato-mesh \
|
||||
@@ -85,6 +95,7 @@ EXPOSE 41447
|
||||
# Default environment variables (can be overridden by host)
|
||||
ENV RACK_ENV=production \
|
||||
APP_ENV=production \
|
||||
MESHTASTIC_PYTHON=/opt/meshtastic-venv/bin/python \
|
||||
XDG_DATA_HOME=/app/.local/share \
|
||||
XDG_CONFIG_HOME=/app/.config \
|
||||
SITE_NAME="PotatoMesh Demo" \
|
||||
|
||||
@@ -49,6 +49,12 @@ require_relative "application/worker_pool"
|
||||
require_relative "application/federation"
|
||||
require_relative "application/prometheus"
|
||||
require_relative "application/queries"
|
||||
require_relative "application/meshtastic/channel_names"
|
||||
require_relative "application/meshtastic/channel_hash"
|
||||
require_relative "application/meshtastic/protobuf"
|
||||
require_relative "application/meshtastic/rainbow_table"
|
||||
require_relative "application/meshtastic/cipher"
|
||||
require_relative "application/meshtastic/payload_decoder"
|
||||
require_relative "application/data_processing"
|
||||
require_relative "application/filesystem"
|
||||
require_relative "application/instances"
|
||||
@@ -133,7 +139,10 @@ module PotatoMesh
|
||||
set :public_folder, File.expand_path("../../public", __dir__)
|
||||
set :views, File.expand_path("../../views", __dir__)
|
||||
set :federation_thread, nil
|
||||
set :initial_federation_thread, nil
|
||||
set :federation_worker_pool, nil
|
||||
set :federation_shutdown_requested, false
|
||||
set :federation_shutdown_hook_installed, false
|
||||
set :port, resolve_port
|
||||
set :bind, DEFAULT_BIND_ADDRESS
|
||||
|
||||
@@ -148,8 +157,8 @@ module PotatoMesh
|
||||
|
||||
perform_initial_filesystem_setup!
|
||||
cleanup_legacy_well_known_artifacts
|
||||
init_db unless db_schema_present?
|
||||
ensure_schema_upgrades
|
||||
init_db unless db_schema_present?
|
||||
|
||||
log_instance_domain_resolution
|
||||
log_instance_public_key
|
||||
|
||||
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user