Compare commits


62 Commits

Author SHA1 Message Date
l5y 13b2ce9067 web: fix meshcore node misclassification (#748)
* web: fix meshcore node misclassification

* web: address review comments

* web: address review comments
2026-04-15 12:38:50 +02:00
l5y 5a73e212a3 web: optimize caching (#744)
* web: optimize caching

* web: address review comments

* web: address review comments

* web: run rufo
2026-04-14 23:29:54 +02:00
l5y 07c8e85caa web: fix federation resolver issue with multi addresses (#743)
* web: fix federation resolver issue with multi addresses

* web: add tests

* web: address review comments
2026-04-14 18:55:40 +02:00
l5y c08b3f2c2d web: restore refresh and protocol buttons (#742)
* web: restore refresh and protocol buttons

* web: restore refresh and protocol buttons

* web: restore refresh and protocol buttons

* web: address review comments
2026-04-14 16:54:57 +02:00
dependabot[bot] 851b2180dd build(deps): bump rand from 0.9.2 to 0.9.4 in /matrix (#741)
Bumps [rand](https://github.com/rust-random/rand) from 0.9.2 to 0.9.4.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/0.9.4/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/rand_core-0.9.2...0.9.4)

---
updated-dependencies:
- dependency-name: rand
  dependency-version: 0.9.4
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-14 08:22:07 +02:00
l5y c175445251 ingestor: fix serial connection failures (#736)
* ingestor: fix serial connection failures

* ingestor: address review comments

* ingestor: address review comments

* ingestor: further hardening

* ingestor: add tests

* ingestor: address review comments

* ingestor: address review comments
2026-04-13 23:42:07 +02:00
l5y b951dbffeb web: per protocol active node counts (#735)
* web: per protocol active node counts

* web: address review comments
2026-04-13 18:26:16 +02:00
l5y 10e6c99196 data: better lora frequency handling for meshtastic (#733)
* data: better lora frequency handling for meshtastic

* ingestor: address review comments
2026-04-12 16:02:15 +02:00
l5y aeb97477f0 chore: bump version to 0.6.1 (#726) 2026-04-09 13:14:20 +02:00
l5y 81e588e44c web: add markdown static pages (#723)
* web: add markdown static pages

* web: add tests and docker

* web: improve wording and configs

* web: add tests

* web: address review comments

* web: address review comments

* Potential fix for pull request finding 'CodeQL / Incomplete multi-character sanitization'

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* web: address review comments

* web: address review comments

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-04-08 16:42:13 +02:00
l5y 083de6418f web: fix federation for multi protocol (#722)
* web: fix federation for multi protocol

* web: fix short name emojis

* web: address review comments

* ci: fix the codeql gap

* ci: fix the codeql gap

* ci: fix the codeql gap

* ci: remove swift
2026-04-08 14:36:43 +02:00
l5y 5b9e6e3d48 data: trace analysis multi ingestor support (#721)
* data: trace analysis multi ingestor support

* address review comments
2026-04-08 11:58:32 +02:00
l5y 4a6ba38e94 chore: prepare codebase for breaking release (#718)
* chore: prepare codebase for breaking release

* docker: fix debug flag in prod matrix bridge
2026-04-08 10:51:38 +02:00
l5y 4d38ddd341 web: facelift (#716)
* web: facelift

* web: facelift

* web: facelift

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments

* web: more css magic

* web: link parsing for chat contact

* web: remove one-letter fallback for shortnames

* Potential fix for pull request finding 'CodeQL / Incomplete multi-character sanitization'

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* web: fix fallback for shortnames

* web: address review comments

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-04-07 21:38:43 +02:00
l5y 267d2ec9e1 data: fix position time updates (#715)
* data: fix position time updates

* data: fix position time updates
2026-04-06 19:29:38 +02:00
l5y 526a0c7246 data: fix meshcore ingestor self reporting (#713)
* data: fix meshcore ingestor self reporting

* data: fix meshcore ingestor self reporting

* address review comments
2026-04-06 15:19:01 +02:00
l5y 95aa1de8a8 web: sort channels by activity not index (#711)
* web: sort channels by activity not index

* web: address review comments
2026-04-06 14:04:47 +02:00
l5y d8b80c2a97 web: reference meshcore nodes in chat (#709)
* web: reference meshcore nodes in chat

* data: add adv_name to messages

* web: address review comments

* derive actual companion from name string

* derive actual companion from name string

* derive actual companion from name string

* web: address review comments

* web: address review comments
2026-04-06 13:39:00 +02:00
l5y 406fa80dd0 web: fix node disappearance role reset (#707)
* web: fix node disappearance role reset

* web: address review comments

* web: address review comments

* web: address review comments
2026-04-05 23:43:36 +02:00
l5y de1ccc5a2e release: v0.6.0 — remove deprecated env var aliases (#704)
* chore: bump version to 0.6.0 and remove deprecated env var aliases

BREAKING CHANGES:
- POTATOMESH_INSTANCE removed — use INSTANCE_DOMAIN
- PROVIDER removed — use PROTOCOL
- MESH_SERIAL removed — use CONNECTION
- PORT config alias removed — use CONNECTION

The _ConfigModule proxy class (which kept PROTOCOL/PROVIDER and
CONNECTION/PORT in sync) is deleted. docker-compose.yml now defaults
INSTANCE_DOMAIN to http://web:41447 so deployments without an explicit
value continue to work.

* tests: run black

* address review comments
2026-04-05 16:49:10 +02:00
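A minimal sketch of what this breaking change means for configuration code, assuming a plain env lookup (`load_settings` is hypothetical; the real config module differs): only the primary variable names remain, with no legacy fallbacks.

```python
import os


def load_settings() -> dict:
    """Resolve ingestor settings post-v0.6.0; deprecated aliases are gone."""
    return {
        # POTATOMESH_INSTANCE is no longer consulted.
        "instance_domain": os.environ.get("INSTANCE_DOMAIN", "http://web:41447"),
        # PROVIDER is no longer consulted.
        "protocol": os.environ.get("PROTOCOL", "meshtastic"),
        # MESH_SERIAL and PORT are no longer consulted.
        "connection": os.environ.get("CONNECTION", "/dev/ttyACM0"),
    }
```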
l5y 0a479e4517 web: protect real node names from fallback (#702)
* web: protect real node names from fallback

* web: address review comments

* web: address review comments
2026-04-05 13:57:18 +02:00
l5y 8c59396ec8 fix: derive channel probe bound from device max_channels (#701)
Replace the hardcoded max_idx=8 parameter on _ensure_channel_names with
a DEVICE_INFO query (send_device_query → max_channels) so the full range
of configured channels is always probed regardless of firmware variant.
Falls back to _CHANNEL_PROBE_FALLBACK_MAX (32) when the query fails or
the device returns an older firmware that omits max_channels.

Also removes always=True from the warning-severity channel failure log
(redundant — only debug-severity is gated behind the DEBUG flag) and adds
a deferred-import comment in _ensure_channel_names.
2026-04-05 13:46:04 +02:00
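A sketch of the probe-bound derivation described in #701. `send_device_query`, `max_channels`, and `_CHANNEL_PROBE_FALLBACK_MAX` are named in the commit; the surrounding signatures are illustrative.

```python
_CHANNEL_PROBE_FALLBACK_MAX = 32


def _channel_probe_bound(iface) -> int:
    """Return the highest channel index worth probing for this device."""
    try:
        info = iface.send_device_query()  # DEVICE_INFO round-trip (shape assumed)
    except Exception:
        return _CHANNEL_PROBE_FALLBACK_MAX  # query failed entirely
    # Older firmware may omit max_channels from the DEVICE_INFO response.
    max_channels = getattr(info, "max_channels", None)
    return max_channels if max_channels else _CHANNEL_PROBE_FALLBACK_MAX
```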
l5y 3647cb125b web: define meshcore modem presets (#696)
* web: define meshcore modem presets

* web: address review comments
2026-04-05 13:37:58 +02:00
l5y adc122fce0 data: register meshcore channel mappings (#695)
* data: register meshcore channel mappings

* fix: use mc.commands.get_channel for MeshCore channel name probing

MeshCore exposes device commands via the commands sub-object
(CommandHandler), not directly on MeshCore instances.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: probe all channel indices regardless of ERROR responses

Removed the consecutive-error early-stop heuristic from
_ensure_channel_names so sparse channel configurations (e.g. slots 0
and 5 configured with slots 1–4 empty) are fully probed. Only a hard
exception aborts the loop early.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-05 13:36:03 +02:00
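A sketch of the error-tolerant probing from #695, assuming `commands.get_channel` is awaitable and that an unconfigured slot yields an empty result rather than raising. An ERROR response for one slot no longer stops the scan, so sparse configurations are fully enumerated; only a hard exception aborts early.

```python
async def _probe_channel_names(commands, bound: int) -> dict[int, str]:
    """Collect configured channel names across every slot up to bound."""
    names: dict[int, str] = {}
    for idx in range(bound):
        try:
            result = await commands.get_channel(idx)
        except Exception:
            break  # only a hard exception aborts the loop early
        # An ERROR/empty response just means this slot is unconfigured.
        if result and getattr(result, "channel_name", None):
            names[idx] = result.channel_name
    return names
```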
l5y d33ebd8f4c data: provide frequency and modem preset for meshcore (#694)
* data: provide frequency and modem preset for meshcore

* data: provide frequency and modem preset for meshcore

* ingestor: address review comments

* fix: remove duplicate _mark_packet_seen entry from handlers __all__

* ci: install meshcore in Python workflow

protocols/meshcore.py now imports meshcore at module level (required to
fix a self-referential import failure after the providers/ → protocols/
rename).  test_provider_unit.py imports that module unconditionally, so
meshcore must be present in the test environment.

* data: run black
2026-04-05 09:13:48 +02:00
l5y 06530f36ff web: add proper short names for meshcore companions (#693)
* web: add proper short names for meshcore companions

* web: address review comments
2026-04-05 09:01:43 +02:00
l5y 3cfa0db7e6 web: distinguish meshcore from meshtastic in frontend (#688)
* web: distinguish meshcore from meshtastic in frontend

* fix mark_packet_seen bug

* web: distinguish meshcore from meshtastic in frontend

* address review comments

* address review comments

* address review comments
2026-04-04 17:14:16 +02:00
l5y d9420ff13b fix: address review comments from PRs #676 and #681 (#689)
* fix: address review comments from PRs #676 and #681

- Introduce ClosedBeforeConnectedError(ConnectionError) subclass so
  callers can distinguish a user-initiated shutdown from a hardware
  failure without string-matching the exception message (#676)
- Add test covering the close-before-connected path: asserts
  isConnected stays False and error_holder contains the typed error
- Add protocolIconPrefixHtml unit tests covering null, meshtastic,
  meshcore, and unknown protocol strings (#681)
- Add buildDisplayContext tests for protocol extraction from trace,
  node, and absent candidate sources (#681)
- Expose buildDisplayContext via _testUtils to make it directly testable
- Add meshcore icon presence assertions to createAnnouncementEntry and
  createMessageChatEntry tests (previously only checked absence of
  meshtastic icon)

* fix: address #689 review comments

- Move createMessageChatEntry meshcore icon test into its own section,
  after the createMessageChatEntry divider where it belongs
- Export ClosedBeforeConnectedError from providers/__init__.py via the
  existing lazy-load __getattr__ so callers outside the providers/
  subpackage can catch it without importing the full meshcore module

* refactor: eliminate test boilerplate to fix SonarCloud duplication gate

Introduce withApp() and innerHtml() helpers in main-protocol.test.js to
replace the 18-repeated setupApp/try/finally/cleanup pattern and the
inconsistent innerHTML extraction expression. No test logic changed.

* refactor: extract stalled-run helpers to fix SonarCloud duplication gate

The two stall-based _run_meshcore tests shared ~20 lines of identical
setup and spin-loop boilerplate. Extract _setup_stalled_run() and
_start_stalled_run() so each test contains only its distinct assertions.
2026-04-04 13:28:26 +02:00
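A compact sketch of the typed-error pattern from #676/#689 (the class name comes from the commits; the call sites are hypothetical): shutdown is signalled with a dedicated `ConnectionError` subclass so callers branch on the exception type, not on message text.

```python
class ClosedBeforeConnectedError(ConnectionError):
    """Connection was deliberately closed before it ever completed."""


def record_close(stop_requested: bool, error_holder: list) -> None:
    # Hypothetical call site: store a typed error instead of a bare
    # ConnectionError carrying a magic message string.
    if stop_requested:
        error_holder.append(ClosedBeforeConnectedError("closed before connected"))


def is_user_shutdown(exc: Exception) -> bool:
    # Distinguish shutdown from hardware failure without string-matching.
    return isinstance(exc, ClosedBeforeConnectedError)
```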
Ben Allfree 7e0ba60a22 fix: get meshcore protocol icon displaying correctly (#681) 2026-04-04 13:00:25 +02:00
Ben Allfree 257e26c996 [meshcore] fix: race condition (#676)
* fix: ensure stop_event is set before connection completion in _run_meshcore

* Fix CancelledError lint in meshcore cancel test
2026-04-04 12:41:56 +02:00
l5y dcb374fbf9 enh: surface meshcore role types (#680) (#685)
* enh: surface meshcore role types (#680)

Map MeshCore ADV_TYPE_* integers to user.role strings so COMPANION,
REPEATER, ROOM_SERVER, and SENSOR roles are surfaced to the dashboard.
Role is omitted when ADV_TYPE_NONE (0) or unknown.

Co-authored-by: Ben Allfree <ben@benallfree.com>

* data: run black

---------

Co-authored-by: Ben Allfree <ben@benallfree.com>
2026-04-04 10:41:06 +02:00
l5y 9c3dae3e7d chore: refactor codebase before meshcore release (#682)
* chore: refactor codebase before meshcore release

* data: run black

* fix: resolve SonarCloud S1244/S5796 reliability issues in test files

Replace floating-point equality comparisons with pytest.approx() to
satisfy S1244, and replace the `is` identity operator with id()-based
comparison to satisfy S5796.

* fix: remove duplicate encrypted_flag assignment in store_packet_dict

The encrypted_flag was computed identically on lines 307 and 345 with no
mutation of `encrypted` between them. Remove the dead second assignment.
2026-04-04 10:22:31 +02:00
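Two tiny examples of the SonarCloud fixes described above (test names hypothetical): `pytest.approx` for float comparisons (S1244), and an explicit `id()` comparison instead of the `is` operator (S5796).

```python
import pytest


def test_voltage_roundtrip():
    stored = 0.1 + 0.2
    # S1244: never compare floats with ==; use pytest.approx instead.
    assert stored == pytest.approx(0.3)


def test_snapshots_are_distinct_objects():
    a, b = object(), object()
    # S5796: replace the `is` identity operator with an id()-based comparison.
    assert id(a) != id(b)
```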
Ben Allfree 7806efb2cf meshcore/fix: short name should be 1st 4 hex digits of public key (#679) 2026-04-04 09:40:49 +02:00
Ben Allfree 7a21de7cda chore: update dependencies and configuration files (#674)
* Updated versions and SHA256 checksums for several packages in pubspec.lock.
* Added include statements for Pods configuration in Debug.xcconfig and Release.xcconfig.
2026-04-03 23:21:49 +02:00
Ben Allfree 295d4cf2bb chore: update mesh.sh to use requirements file (#675) 2026-04-03 23:20:48 +02:00
l5y 09ea277a40 data/meshcore: fix ble and enable tcp (#669)
* data/meshcore: fix ble and enable tcp

* ingestor: address review comments

* ingestor: address review comments
2026-04-02 22:31:33 +02:00
l5y 4fa0745d1b data: handle store_forward and router_heartbeat portnum (#667)
* data: handle store_forward and router_heartbeat portnum

* ingestor: address review comments
2026-03-31 23:42:26 +02:00
l5y a62a068c08 feat: implement meshcore provider (#663)
* feat: add meshcore support

* fix: address PR #663 review comments

* fix: address PR #663 review comments

* address review comments
2026-03-31 13:44:05 +02:00
l5y 5c49af5355 ci: update dependabot and codecov settings (#666) 2026-03-31 12:45:07 +02:00
l5y e48c575b9d web: prepare release (#665)
* web: prepare release

* fix: address pre-release review concerns

- Emit invalid telemetry_type warning at severity=warning/always=True so
  it surfaces in production logs, not just under DEBUG=1
- Hoist VALID_TELEMETRY_TYPES to a module-level constant in DataProcessing
  to avoid per-call allocation inside insert_telemetry
- Add Python test covering the invalid-type drop path in store_telemetry_packet
- Add Ruby spec asserting that an invalid telemetry_type in a POST payload
  is discarded and metric-based inference takes over
2026-03-30 23:15:55 +02:00
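A sketch of the #665 hardening using the names from the commit (`VALID_TELEMETRY_TYPES`, `store_telemetry_packet`); the membership list is partly assumed, `environment_metrics` in particular.

```python
import logging

log = logging.getLogger(__name__)

# Hoisted to module level so it is not re-allocated per call.
VALID_TELEMETRY_TYPES = frozenset(
    {"device_metrics", "power_metrics", "environment_metrics", "air_quality_metrics"}
)


def store_telemetry_packet(packet: dict) -> bool:
    """Drop packets with an invalid telemetry_type, logging at warning severity."""
    ttype = packet.get("telemetry_type")
    if ttype is not None and ttype not in VALID_TELEMETRY_TYPES:
        # Surfaces in production logs, not just under DEBUG=1.
        log.warning("dropping invalid telemetry_type %r", ttype)
        return False
    ...  # insert the telemetry row
    return True
```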
l5y e03675168b app: only query meshtastic provider (#664)
* app: only query meshtastic provider

* app: address review comments
2026-03-30 19:04:34 +02:00
l5y d6a2e263cc data: prepare ingestor for meshcore (#658)
* data: prepare ingestor for meshcore

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments
2026-03-30 09:17:10 +02:00
l5y f638c79e13 web: fix css issues (#659)
* web: fix css issues

* chore: bump version to 0.5.12
2026-03-30 08:55:35 +02:00
l5y 874e81ab8b web: prepare frontend for multi protocol (#657)
* web: prepare frontend for multi protocol

* web: address review comments

* fix: address review feedback on multi-protocol frontend prep

- Replace iconHtml/innerHTML in renderChatTabs with iconSrc + DOM APIs;
  the img element is now built attribute-by-attribute so no innerHTML trust
  boundary exists even if iconSrc were to receive external input
- Add MESHTASTIC_ICON_SRC / MESHCORE_ICON_SRC constants to protocol-helpers;
  meshtasticIconHtml() and meshcoreIconHtml() reference these so the asset
  path has a single source of truth
- Use meshtasticIconHtml() in the map legend via a temp span to eliminate
  the 7-setAttribute duplication
- Add getRoleColors(protocol) to role-helpers, making meshcoreRoleColors
  reachable through a tested code path rather than a dead export
- Rename __test__ export in main.js to __testUtils for consistency
- Add JSDoc cross-reference on normalizeNodeNameValue vs stringOrNull


* web: address review comments

* web: address review comments

* web: address review comments
2026-03-30 08:21:39 +02:00
l5y a5d0008555 feat: split device and power-sensor telemetry charts (#643) (#656)
* feat: split device and power-sensor telemetry charts (#643)

Add telemetry_type TEXT discriminator column across the full stack so
device_metrics rows no longer mix with power_metrics in the same chart.
Python and Ruby ingestors detect the protobuf subtype at write time;
classifySnapshot() provides field-presence fallback for legacy rows.
'Power metrics' chart split into 'Device health' and 'Power sensor'.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: skip typeFilter for aggregated telemetry; add air_quality coverage

- renderTelemetryChart now skips spec.typeFilter when chartOptions.isAggregated
  is true, preventing mixed-bucket aggregated snapshots from losing series data
- renderTelemetryCharts detects the aggregated vs per-packet path and sets
  isAggregated accordingly; typeFilter still applies for per-packet history
- JS tests: extract makeAggregatedNode/makeHistoryNode helpers to eliminate
  fixture duplication; add aggregated-mixed-bucket regression test; move
  type-separation tests onto the history path where filtering actually applies
- Ruby + Python: add air_quality_metrics telemetry_type tests for coverage

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor: reduce test duplication flagged by Sonar

Hoist CHART_NOW_MS/CHART_NOW_SECONDS constants to eliminate 14 repeated
setup lines across renderTelemetryCharts tests.  Extract
expect_stored_telemetry_type helper in app_spec to replace the four
identical with_db/SELECT/expect blocks in telemetry_type inference tests.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* web: address review comments

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 00:07:24 +02:00
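The field-presence fallback from #656 lives in the frontend's `classifySnapshot()`; here is the same idea sketched in Python for consistency with the other examples. The discriminating field names are assumptions, not the shipped list.

```python
def classify_snapshot(row: dict) -> str | None:
    """Infer the telemetry subtype for legacy rows lacking the discriminator."""
    if row.get("telemetry_type"):  # new rows carry the column directly
        return row["telemetry_type"]
    # Legacy rows: infer the subtype from which metric fields are present.
    if any(k in row for k in ("ch1_voltage", "ch2_voltage", "ch3_voltage")):
        return "power_metrics"
    if any(k in row for k in ("battery_level", "channel_utilization")):
        return "device_metrics"
    return None
```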
l5y 4d0d6f8565 web: implement a 'protocol' field across systems (#655)
* web: implement a 'protocol' field across systems

* web: address review feedback on multi-protocol support

- Rebase on main (pick up coordinate-clearing bugfix from #654)
- P1: prevent cross-protocol message merges on shared packet IDs
- P2: exclude "ingestor" key when enforcing /api/nodes batch limit
- Extract append_protocol_filter helper + PROTOCOL_CLAUSE constant to
  reduce cognitive complexity and deduplicate SQL fragment in queries.rb
- Extract coerce_bool helper to reduce upsert_node cognitive complexity
- Merge nested if in insert_message protocol update path (Sonar)
- Add explicit UPDATE backfill in ensure_schema_upgrades so any pre-existing
  NULL/empty protocol rows are set to meshtastic on upgrade
- Rename migration file to 20260328_ (correct year)
- Expand protocol_spec.rb: filter tests for all 7 endpoints,
  cross-protocol non-merge test, batch limit test, Sonar constant fixes,
  ENV.fetch, P1 regression test


* web: address review comments
2026-03-29 11:48:32 +02:00
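The #655 helper `append_protocol_filter` and the `PROTOCOL_CLAUSE` constant live in Ruby's queries.rb; a Python rendering of the same idea, with the SQL fragment assumed:

```python
PROTOCOL_CLAUSE = " AND protocol = ?"


def append_protocol_filter(sql: str, params: list, protocol: str | None):
    """Append the shared protocol clause with a bound parameter when requested."""
    if protocol:
        sql += PROTOCOL_CLAUSE
        params.append(protocol)
    return sql, params


# append_protocol_filter("SELECT * FROM nodes WHERE last_seen > ?", [0], "meshcore")
# -> ("SELECT * FROM nodes WHERE last_seen > ? AND protocol = ?", [0, "meshcore"])
```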
l5y 7b1d25e286 fix upsert clearing node coordinates bug (#654) 2026-03-28 21:21:13 +01:00
l5y 5adbe2263e data: resolve circular dependency of daemon.py (#653)
* data: resolve circular dependency of daemon.py

* address review comments

* address review comments

* address review comments
2026-03-28 18:46:21 +01:00
Ben Allfree b1c416d029 first cut (#651) 2026-03-28 17:09:12 +01:00
dependabot[bot] 8305ca588c build(deps): bump rustls-webpki from 0.103.8 to 0.103.10 in /matrix (#649)
Bumps [rustls-webpki](https://github.com/rustls/webpki) from 0.103.8 to 0.103.10.
- [Release notes](https://github.com/rustls/webpki/releases)
- [Commits](https://github.com/rustls/webpki/compare/v/0.103.8...v/0.103.10)

---
updated-dependencies:
- dependency-name: rustls-webpki
  dependency-version: 0.103.10
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-21 12:55:17 +01:00
dependabot[bot] 0cf56b6fba build(deps): bump quinn-proto from 0.11.13 to 0.11.14 in /matrix (#646)
Bumps [quinn-proto](https://github.com/quinn-rs/quinn) from 0.11.13 to 0.11.14.
- [Release notes](https://github.com/quinn-rs/quinn/releases)
- [Commits](https://github.com/quinn-rs/quinn/compare/quinn-proto-0.11.13...quinn-proto-0.11.14)

---
updated-dependencies:
- dependency-name: quinn-proto
  dependency-version: 0.11.14
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-11 14:56:43 +01:00
l5y ecce7f3504 chore: bump version to 0.5.11 (#645)
* chore: bump version to 0.5.11

* data: run black
2026-03-01 21:59:04 +01:00
l5y 17fa183c4f web: limit horizontal size of dropdown (#644)
* web: limit horizontal size of dropdown

* address review comments
2026-03-01 21:49:06 +01:00
l5y 5b0a6f5f8b web: expose node stats in distinct api (#641)
* web: expose node stats in distinct api

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments
2026-02-14 21:14:10 +01:00
l5y 2e8b5ad856 web: do not merge channels by name (#640) 2026-02-14 15:42:14 +01:00
l5y e32b098be4 web: do not merge channels by ID in frontend (#637)
* web: do not merge channels by ID in frontend

* web: address review comments

* web: address review comments
2026-02-14 14:56:25 +01:00
l5y b45629f13c web: do not touch neighbor last seen on neighbor info (#636)
* web: do not touch neighbor last seen on neighbor info

* web: address review comments
2026-02-14 14:43:46 +01:00
l5y 96421c346d ingestor: report self id per packet (#635)
* ingestor: report self id per packet

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments
2026-02-14 14:29:05 +01:00
l5y 724b3e14e5 ci: fix docker compose and docs (#634)
* ci: fix docker compose and docs

* docker: address review comments
2026-02-14 13:25:43 +01:00
l5y e8c83a2774 web: suppress encrypted text messages in frontend (#633)
* web: suppress encrypted text messages in frontend

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments
2026-02-14 13:11:02 +01:00
l5y 5c5a9df5a6 federation: ensure requests time out properly and can be terminated (#631)
* federation: ensure requests time out properly and can be terminated

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments
2026-02-14 12:29:01 +01:00
dependabot[bot] 7cb4bbe61b build(deps): bump bytes from 1.11.0 to 1.11.1 in /matrix (#627)
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.11.0 to 1.11.1.
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.11.0...v1.11.1)

---
updated-dependencies:
- dependency-name: bytes
  dependency-version: 1.11.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-06 21:40:49 +01:00
205 changed files with 35837 additions and 6473 deletions
+2 -2
@@ -16,5 +16,5 @@ coverage:
status:
project:
default:
target: 99%
threshold: 1%
target: 100%
threshold: 10%
+11 -1
@@ -1,3 +1,6 @@
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (see LICENSE)
#
# PotatoMesh Environment Configuration
# Copy this file to .env and customize for your setup
@@ -14,7 +17,7 @@ INSTANCE_DOMAIN="mesh.example.org"
# Generate a secure token: openssl rand -hex 32
API_TOKEN="your-secure-api-token-here"
# Meshtastic connection target (required for ingestor)
# Mesh radio connection target (required for ingestor)
# Common serial paths:
# - Linux: /dev/ttyACM0, /dev/ttyUSB0
# - macOS: /dev/cu.usbserial-*
@@ -23,6 +26,10 @@ API_TOKEN="your-secure-api-token-here"
# Bluetooth address (e.g. ED:4D:9E:95:CF:60).
CONNECTION="/dev/ttyACM0"
# Mesh protocol to use (meshtastic or meshcore)
# Default: meshtastic
PROTOCOL="meshtastic"
# =============================================================================
# SITE CUSTOMIZATION
# =============================================================================
@@ -68,6 +75,9 @@ PRIVATE=0
# Debug mode (0=off, 1=on)
DEBUG=0
# Energy saving mode — sleep between ingestion cycles (0=off, 1=on)
ENERGY_SAVING=0
# Default map zoom override
# MAP_ZOOM=15
+16
@@ -19,6 +19,22 @@ updates:
schedule:
interval: "weekly"
- package-ecosystem: "python"
directory: "/data"
schedule:
interval: "weekly"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
- package-ecosystem: "cargo"
directory: "/matrix"
schedule:
interval: "weekly"
- package-ecosystem: "npm"
directory: "/web"
schedule:
interval: "weekly"
- package-ecosystem: "pub"
directory: "/app"
schedule:
interval: "weekly"
+3 -9
@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# GitHub Actions Workflows
## Workflows
@@ -10,12 +13,3 @@
- **`mobile.yml`** - Flutter mobile tests with coverage reporting
- **`release.yml`** - Tag-triggered Flutter release builds for Android and iOS
## Usage
```bash
# Build locally
docker-compose build
# Deploy
docker-compose up -d
```
+1 -1
@@ -23,7 +23,7 @@ on:
jobs:
analyze:
name: Analyze (${{ matrix.language }})
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
runs-on: ubuntu-latest
permissions:
security-events: write
packages: read
+1 -1
@@ -188,7 +188,7 @@ jobs:
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:${{ steps.version.outputs.version }}
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:${{ steps.version.outputs.version_with_v }}
docker run --rm --name ingestor-test \
-e POTATOMESH_INSTANCE=http://localhost:41447 \
-e INSTANCE_DOMAIN=http://localhost:41447 \
-e API_TOKEN=test-token \
-e CONNECTION=mock \
-e DEBUG=1 \
+1 -1
@@ -39,7 +39,7 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install black pytest pytest-cov meshtastic
pip install black pytest pytest-cov meshtastic meshcore
- name: Test with pytest and coverage
run: |
mkdir -p reports
+4
@@ -74,5 +74,9 @@ web/.config
node_modules/
web/node_modules/
# Operator-customised static pages (keep only the shipped default)
web/pages/*.md
# Debug symbols
ignored.txt
ignored-*.txt
-48
@@ -1,48 +0,0 @@
# Repository Guidelines
Keep code well structured, modular, and not monolithic. If modules get too big, consider a submodule structure.
Make sure all tests pass for Python (`pytest`), Ruby (`rspec`), and JavaScript (`npm test`).
Make sure all code is properly inline documented (PDoc, RDoc, JSDoc, etc.). We do not want any undocumented code.
Make sure all code is 100% unit tested. We want all lines, units, and branches to be thoroughly covered by tests.
New source files should have Apache v2 license headers using the exact string `Copyright © 2025-26 l5yth & contributors`.
Run linters for Python (`black`) and Ruby (`rufo`) to ensure consistent code formatting.
## Project Structure & Module Organization
The repository splits runtime and ingestion logic. `web/` holds the Sinatra dashboard (Ruby code in `lib/potato_mesh`, views in `views/`, static bundles in `public/`).
`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.
`matrix/` contains the Rust Matrix bridge; build with `cargo build --release` or `docker build -f matrix/Dockerfile .`, and keep bridge config under `matrix/Config.toml` when running locally.
## Build, Test, and Development Commands
Run dependency installs inside `web/`: `bundle install` for gems and `npm ci` for JavaScript tooling. Start the app with `cd web && API_TOKEN=dev ./app.sh` for local work or `bundle exec rackup -p 41447` when integrating elsewhere.
Prep ingestion with `python -m venv .venv && pip install -r data/requirements.txt`; `./data/mesh.sh` streams from live radios. `docker-compose -f docker-compose.dev.yml up` brings up the full stack.
Container images publish via `.github/workflows/docker.yml` as `potato-mesh-{service}-linux-$arch` (`web`, `ingestor`, `matrix-bridge`), using the Dockerfiles in `web/`, `data/`, and `matrix/`.
## Coding Style & Naming Conventions
Use two-space indentation for Ruby and keep `# frozen_string_literal: true` at the top of new files. Keep Ruby classes/modules in `CamelCase`, filenames in `snake_case.rb`, and feature specs in `*_spec.rb`.
JavaScript follows ES modules under `public/assets/js`; co-locate components with `__tests__` folders and use kebab-case filenames. Format Ruby via `bundle exec rufo .` and Python via `black`. Skip committing generated coverage artifacts.
## Flutter Mobile App (`app/`)
The Flutter client lives in `app/`. Keep only the mobile targets (`android/`, `ios/`) under version control unless you explicitly support other platforms. Do not commit Flutter build outputs or editor cruft (`.dart_tool/`, `.flutter-plugins-dependencies`, `.idea/`, `.metadata`, `*.iml`, `.fvmrc` if unused).
Install dependencies with `cd app && flutter pub get`; format with `dart format .` and lint via `flutter analyze`. Run tests with `cd app && flutter test` and keep widget/unit coverage high—no new code without tests. Commit `pubspec.lock` and analysis options so toolchains stay consistent.
## Testing Guidelines
Ruby specs run with `cd web && bundle exec rspec`, producing SimpleCov output in `coverage/`. Front-end behaviour is verified through Node's test runner: `cd web && npm test` writes V8 coverage and JUnit XML under `reports/`.
The ingestion layer is guarded by `pytest -q tests/test_mesh.py`; leave fixtures in `tests/` untouched so CI can replay them. New features should ship with matching specs and updated integration checks.
## Commit & Pull Request Guidelines
Commits should stay imperative and reference issues the way history does (`Add chat log entries... (#408)`). Squash noisy work-in-progress commits before pushing. Pull requests need a concise summary, screenshots or curl traces for UI/API tweaks, and links to tracked issues. Paste the command output for the test suites you ran and mention configuration toggles (`API_TOKEN`, `PRIVATE`) reviewers must set.
## Security & Configuration Tips
Never commit real API tokens or `.sqlite` dumps; use `.env.local` files ignored by Git. Confirm env defaults (`API_TOKEN`, `INSTANCE_DOMAIN`, `PRIVATE`) before deploying, and set `FEDERATION=0` when staging private nodes. Review `PROMETHEUS.md` when exposing metrics so scrape endpoints stay internal.
+103
@@ -1,5 +1,108 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# CHANGELOG
## v0.6.0
This is a service release of the radio mesh app-suite `potato-mesh`, v0.6.0, which introduces new features and overhauls the user interface. The most notable change is multi-protocol support, together with a **Meshcore** implementation across the ingestor, web app, and frontend.
Demo: <https://potatomesh.net/>
### Meshcore
To start ingesting Meshcore data into an upgraded potato-mesh web app, simply set `PROTOCOL="meshcore"` for your ingestor.
### About Pages
The other notable feature is the removal of the "darkmode" and "info" buttons in favor of customizable markdown pages, which allow more flexibility for custom content (info about presets, contact information, etc.); see `/pages/*.md` in the web app ([#723](https://github.com/l5yth/potato-mesh/pull/723)).
### Breaking Variable Changes
The following deprecated environment variables have finally been removed in this release ([#704](https://github.com/l5yth/potato-mesh/pull/704)):
* ~~POTATOMESH_INSTANCE~~ - please use `INSTANCE_DOMAIN`
* ~~MESH_SERIAL~~ and ~~PORT~~ - please use `CONNECTION`
### Features
* Web: add markdown static pages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/723>
* Data: trace analysis multi ingestor support by @l5yth in <https://github.com/l5yth/potato-mesh/pull/721>
* Web: facelift by @l5yth in <https://github.com/l5yth/potato-mesh/pull/716>
* Web: sort channels by activity not index by @l5yth in <https://github.com/l5yth/potato-mesh/pull/711>
* Data: derive meshcore channel probe bound from device max_channels by @l5yth in <https://github.com/l5yth/potato-mesh/pull/701>
* Web: define meshcore modem presets by @l5yth in <https://github.com/l5yth/potato-mesh/pull/696>
* Data: register meshcore channel mappings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/695>
* Data: provide frequency and modem preset for meshcore by @l5yth in <https://github.com/l5yth/potato-mesh/pull/694>
* Web: distinguish meshcore from meshtastic in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/688>
* [Meshcore] fix: get meshcore protocol icon displaying correctly by @benallfree in <https://github.com/l5yth/potato-mesh/pull/681>
### Fixes
* Web: fix federation for multi protocol by @l5yth in <https://github.com/l5yth/potato-mesh/pull/722>
* Data: fix position time updates by @l5yth in <https://github.com/l5yth/potato-mesh/pull/715>
* Data: fix meshcore ingestor self reporting by @l5yth in <https://github.com/l5yth/potato-mesh/pull/713>
* Web: reference meshcore nodes in chat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/709>
* Web: fix node disappearance role reset by @l5yth in <https://github.com/l5yth/potato-mesh/pull/707>
* Web: protect real node names from fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/702>
* Web: add proper short names for meshcore companions by @l5yth in <https://github.com/l5yth/potato-mesh/pull/693>
* Fix: address review comments from PRs #676 and #681 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/689>
* [Meshcore] fix: race condition by @benallfree in <https://github.com/l5yth/potato-mesh/pull/676>
### Chores
* Release: v0.6.0 — remove deprecated env var aliases by @l5yth in <https://github.com/l5yth/potato-mesh/pull/704>
* Chore: prepare codebase for breaking release by @l5yth in <https://github.com/l5yth/potato-mesh/pull/718>
## v0.5.12
This is a service release of the app potato-mesh v0.5.12 which improves performance and stability.
Notably, the frontend went through some graphical tweaks to prepare for an upcoming multi-protocol release (meshcore, reticulum, etc.).
* Enh: surface meshcore role types (#680) by @l5yth in https://github.com/l5yth/potato-mesh/pull/685
* Chore: refactor codebase before meshcore release by @l5yth in https://github.com/l5yth/potato-mesh/pull/682
* [Meshcore] enh: short name should be 1st 4 hex digits of public key by @benallfree in https://github.com/l5yth/potato-mesh/pull/679
* Chore: update xcode deps by @benallfree in https://github.com/l5yth/potato-mesh/pull/674
* Chore: update mesh.sh to use requirements file by @benallfree in https://github.com/l5yth/potato-mesh/pull/675
* Data/meshcore: fix ble and enable tcp by @l5yth in https://github.com/l5yth/potato-mesh/pull/669
* Data: handle store_forward and router_heartbeat portnum by @l5yth in https://github.com/l5yth/potato-mesh/pull/667
* Feat: implement meshcore provider by @l5yth in https://github.com/l5yth/potato-mesh/pull/663
* Ci: update dependabot and codecov settings by @l5yth in https://github.com/l5yth/potato-mesh/pull/666
* Web: prepare release by @l5yth in https://github.com/l5yth/potato-mesh/pull/665
* App: only query meshtastic provider by @l5yth in https://github.com/l5yth/potato-mesh/pull/664
* Data: prepare ingestor for meshcore by @l5yth in https://github.com/l5yth/potato-mesh/pull/658
* Web: fix css issues by @l5yth in https://github.com/l5yth/potato-mesh/pull/659
* Web: prepare frontend for multi protocol by @l5yth in https://github.com/l5yth/potato-mesh/pull/657
* Feat: split device and power-sensor telemetry charts (#643) by @l5yth in https://github.com/l5yth/potato-mesh/pull/656
* Web: implement a 'protocol' field across systems by @l5yth in https://github.com/l5yth/potato-mesh/pull/655
* Fix upsert clearing node coordinates bug by @l5yth in https://github.com/l5yth/potato-mesh/pull/654
* Data: resolve circular dependency of daemon.py by @l5yth in https://github.com/l5yth/potato-mesh/pull/653
* Proposal: mesh provider pattern refactor by @benallfree in https://github.com/l5yth/potato-mesh/pull/651
* Build(deps): bump rustls-webpki from 0.103.8 to 0.103.10 in /matrix by @dependabot[bot] in https://github.com/l5yth/potato-mesh/pull/649
* Build(deps): bump quinn-proto from 0.11.13 to 0.11.14 in /matrix by @dependabot[bot] in https://github.com/l5yth/potato-mesh/pull/646
## v0.5.11
* Chore: bump version to 0.5.11 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/645>
* Web: limit horizontal size of dropdown by @l5yth in <https://github.com/l5yth/potato-mesh/pull/644>
## v0.5.10
* Web: expose node stats in distinct api by @l5yth in <https://github.com/l5yth/potato-mesh/pull/641>
* Web: do not merge channels by name by @l5yth in <https://github.com/l5yth/potato-mesh/pull/640>
* Web: do not merge channels by ID in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/637>
* Web: do not touch neighbor last seen on neighbor info by @l5yth in <https://github.com/l5yth/potato-mesh/pull/636>
* Ingestor: report self id per packet by @l5yth in <https://github.com/l5yth/potato-mesh/pull/635>
* Ci: fix docker compose and docs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/634>
* Web: suppress encrypted text messages in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/633>
* Federation: ensure requests time out properly and can be terminated by @l5yth in <https://github.com/l5yth/potato-mesh/pull/631>
* Build(deps): bump bytes from 1.11.0 to 1.11.1 in /matrix by @dependabot[bot] in <https://github.com/l5yth/potato-mesh/pull/627>
* Matrix: config loading now merges optional TOML with CLI/env/secret inputs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/617>
* Matrix: logs only non-sensitive config fields by @l5yth in <https://github.com/l5yth/potato-mesh/pull/616>
* Web: decrypted takes precedence by @l5yth in <https://github.com/l5yth/potato-mesh/pull/614>
* Add Apache 2.0 license headers to missing sources by @l5yth in <https://github.com/l5yth/potato-mesh/pull/615>
* Web: decrypt PSK-1 unencrypted messages on arrival by @l5yth in <https://github.com/l5yth/potato-mesh/pull/611>
* Web: daemonize federation worker pool to avoid deadlocks on stuck announcements by @l5yth in <https://github.com/l5yth/potato-mesh/pull/610>
* Web: add announcement banner by @l5yth in <https://github.com/l5yth/potato-mesh/pull/609>
* L5Y chore version 0510 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/608>
## v0.5.9
* Matrix: listen for synapse on port 41448 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/607>
+68
@@ -0,0 +1,68 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# Repository Guidelines
Keep code as modular as possible to reduce duplication and improve reusability and readability — this applies to tests as well as production code. If a module grows large, split it into a submodule structure. Prefer composing small, single-purpose units over monolithic files.
Make sure all tests pass for Python (`pytest`), Ruby (`rspec`), and JavaScript (`npm test`).
All code must be 100% unit tested — every line, branch, and code path must have a unit test. "100%" is the floor, not the ceiling: smoke tests, integration tests, and end-to-end tests come on top of that. No new code ships without matching unit tests.
All code must be 100% documented according to the language's API-doc standard (PDoc for Python, RDoc for Ruby, JSDoc for JavaScript, rustdoc for Rust, dartdoc for Dart). Documentation must be sufficient to generate complete API docs from source. In addition to API-level docs, add inline comments wherever the logic is not immediately self-evident.
Every file in the repository must carry an Apache v2 license notice using the exact string `Copyright © 2025-26 l5yth & contributors`. **Source-code files** (`.rb`, `.py`, `.js`, `.rs`, `.dart`, etc.) must include the full Apache v2 license header block. **Non-source files** (docs, configs, YAML, TOML, Dockerfiles, etc.) must include a short 2-line Apache v2 notice (copyright line + license reference).
Run linters for Python (`black`) and Ruby (`rufo`) to ensure consistent code formatting.
## Project Structure & Module Organization
The repository splits runtime and ingestion logic. `web/` holds the Sinatra dashboard (Ruby code in `lib/potato_mesh`, views in `views/`, static bundles in `public/`).
`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. The ingestor is structured as the `data/mesh_ingestor/` package with the following key modules: `daemon.py` (main loop), `handlers.py` (packet processing), `interfaces.py` (interface helpers), `config.py` (env-driven config), `events.py` (TypedDict event schemas), `mesh_protocol.py` (MeshProtocol base), `node_identity.py` (canonical node ID utilities), `decode_payload.py` (CLI protobuf decoder), and the `protocols/` subpackage (currently `meshtastic.py`). API contracts for all POST ingest routes are documented in `data/mesh_ingestor/CONTRACTS.md`. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.
`matrix/` contains the Rust Matrix bridge; build with `cargo build --release` or `docker build -f matrix/Dockerfile .`, and keep bridge config under `matrix/Config.toml` when running locally.
## Build, Test, and Development Commands
Run dependency installs inside `web/`: `bundle install` for gems and `npm ci` for JavaScript tooling. Start the app with `cd web && API_TOKEN=dev ./app.sh` for local work or `bundle exec rackup -p 41447` when integrating elsewhere.
Prep ingestion with `python -m venv .venv && pip install -r data/requirements.txt`; `./data/mesh.sh` streams from live radios. `docker-compose -f docker-compose.dev.yml up` brings up the full stack.
Container images publish via `.github/workflows/docker.yml` as `potato-mesh-{service}-linux-$arch` (`web`, `ingestor`, `matrix-bridge`), using the Dockerfiles in `web/`, `data/`, and `matrix/`.
## Coding Style & Naming Conventions
Use two-space indentation for Ruby and keep `# frozen_string_literal: true` at the top of new files. Keep Ruby classes/modules in `CamelCase`, filenames in `snake_case.rb`, and feature specs in `*_spec.rb`.
JavaScript follows ES modules under `public/assets/js`; co-locate components with `__tests__` folders and use kebab-case filenames. Format Ruby via `bundle exec rufo .` and Python via `black`. Skip committing generated coverage artifacts.
## Flutter Mobile App (`app/`)
The Flutter client lives in `app/`. Keep only the mobile targets (`android/`, `ios/`) under version control unless you explicitly support other platforms. Do not commit Flutter build outputs or editor cruft (`.dart_tool/`, `.flutter-plugins-dependencies`, `.idea/`, `.metadata`, `*.iml`, `.fvmrc` if unused).
Install dependencies with `cd app && flutter pub get`; format with `dart format .` and lint via `flutter analyze`. Run tests with `cd app && flutter test` and keep widget/unit coverage high—no new code without tests. Commit `pubspec.lock` and analysis options so toolchains stay consistent.
## Testing Guidelines
Ruby specs run with `cd web && bundle exec rspec`, producing SimpleCov output in `coverage/`. Front-end behaviour is verified through Node's test runner: `cd web && npm test` writes V8 coverage and JUnit XML under `reports/`.
The ingestion layer is tested with `pytest -q tests/`; leave fixtures in `tests/` untouched so CI can replay them. The suite includes both integration tests (`test_mesh.py`) and focused unit tests — `test_events_unit.py` (TypedDict schemas), `test_provider_unit.py` (Provider protocol conformance and `MeshtasticProvider`), `test_node_identity_unit.py` (canonical ID helpers), `test_daemon_unit.py`, `test_serialization_unit.py`, and `test_decode_payload.py`. New features should ship with matching specs and updated integration checks.
## Adding a New Ingestor Protocol
The `data/mesh_ingestor/mesh_protocol.py` module defines a `@runtime_checkable` `MeshProtocol` class with five members: `name` (str), `subscribe()`, `connect(*, active_candidate)`, `extract_host_node_id(iface)`, and `node_snapshot_items(iface)`. To add a new backend (e.g. Reticulum):
1. Create `data/mesh_ingestor/protocols/<name>.py` with a class satisfying the `MeshProtocol` interface.
2. Register it in `data/mesh_ingestor/protocols/__init__.py`.
3. Pass an instance via `daemon.main(provider=...)` or make it the default in `main()`.
4. Cover the protocol with unit tests in `tests/test_provider_unit.py` — at minimum an `isinstance(..., MeshProtocol)` conformance check and any retry/error-handling paths.
Consult `data/mesh_ingestor/CONTRACTS.md` for the canonical event shapes all protocols must emit.
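A minimal sketch of step 1 against the interface quoted above; the import path and all Reticulum specifics are hypothetical.

```python
from mesh_ingestor.mesh_protocol import MeshProtocol  # import path assumed


class ReticulumProtocol:
    """Hypothetical backend satisfying the five-member MeshProtocol interface."""

    name = "reticulum"

    def subscribe(self):
        ...  # register packet callbacks with the radio library

    def connect(self, *, active_candidate):
        ...  # open the serial/TCP/BLE interface and return it

    def extract_host_node_id(self, iface):
        ...  # canonical ID of the radio the ingestor is attached to

    def node_snapshot_items(self, iface):
        return []  # (node_id, node_dict) pairs known to the interface


# Because MeshProtocol is @runtime_checkable, conformance is testable directly:
assert isinstance(ReticulumProtocol(), MeshProtocol)
```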
## GitHub Configuration Standards
Every language used in the repository must have a Dependabot entry checking for dependency updates on a **weekly** schedule. Keep the Dependabot config up to date as new languages or package ecosystems are added.
Codecov must be configured with a **100% coverage target** and a **10% threshold** (i.e. a drop of more than 10 percentage points fails the check). The `codecov.yml` should enforce this on both patch and project coverage.
Every service/component must have at least one GitHub Actions workflow that **builds and runs tests on pull requests against `main` and on direct pushes to `main`**. Workflows should cover all relevant test suites (Python, Ruby, JS, Rust, Flutter) for the components they touch.
## Commit & Pull Request Guidelines
Commits should stay imperative and reference issues the way history does (`Add chat log entries... (#408)`). Squash noisy work-in-progress commits before pushing. Pull requests need a concise summary, screenshots or curl traces for UI/API tweaks, and links to tracked issues. Paste the command output for the test suites you ran and mention configuration toggles (`API_TOKEN`, `PRIVATE`) reviewers must set.
## Security & Configuration Tips
Never commit real API tokens or `.sqlite` dumps; use `.env.local` files ignored by Git. Confirm env defaults (`API_TOKEN`, `INSTANCE_DOMAIN`, `PRIVATE`) before deploying, and set `FEDERATION=0` when staging private nodes. Review `PROMETHEUS.md` when exposing metrics so scrape endpoints stay internal.
+24 -10
@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# PotatoMesh Docker Guide
PotatoMesh publishes ready-to-run container images to the GitHub Packages container
@@ -13,16 +16,16 @@ will pull the latest release images for you.
## Images on GHCR
| Service | Image |
|----------|---------------------------------------------------------------------------------------------------------------|
| Web UI | `ghcr.io/l5yth/potato-mesh-web-linux-amd64:<tag>` (e.g. `latest`, `3.0`, `v3.0`, or `3.1.0-rc1`) |
| Ingestor | `ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:<tag>` (e.g. `latest`, `3.0`, `v3.0`, or `3.1.0-rc1`) |
| Service | Image |
|----------|----------------------------------------------------------------------------------------------------------------|
| Web UI | `ghcr.io/l5yth/potato-mesh-web-linux-amd64:<tag>` (e.g. `latest`, `0.6.0`, `v0.6.0`, or `0.7.0-rc1`) |
| Ingestor | `ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:<tag>` (e.g. `latest`, `0.6.0`, `v0.6.0`, or `0.7.0-rc1`) |
Images are published for every tagged release. Stable builds receive both
semantic version tags (for example `3.0`) and a matching `v`-prefixed tag (for
example `v3.0`), plus a `latest` tag that tracks the newest stable release.
semantic version tags (for example `0.6.0`) and a matching `v`-prefixed tag (for
example `v0.6.0`), plus a `latest` tag that tracks the newest stable release.
Pre-release tags (for example `-rc`, `-beta`, `-alpha`, or `-dev` suffixes) are
published only with their explicit version strings (`3.1.0-rc1` and `v3.1.0-rc1`
published only with their explicit version strings (`0.7.0-rc1` and `v0.7.0-rc1`
in this example) and do **not** advance `latest`. Pin the versioned tags when
you need a specific build.
@@ -60,9 +63,8 @@ Additional environment variables are optional:
| `CONNECTION` | `/dev/ttyACM0` | Serial device, TCP endpoint, or Bluetooth target used by the ingestor to reach the radio. |
The ingestor posts to the URL configured via `INSTANCE_DOMAIN` (defaulting to
`http://web:41447` in the provided compose file) and still accepts
`POTATOMESH_INSTANCE` as a legacy alias when the primary variable is unset. Use
`CHANNEL_INDEX` to select a LoRa channel on serial or Bluetooth connections.
`http://web:41447` in the provided compose file). Use `CHANNEL_INDEX` to select
a LoRa channel on serial or Bluetooth connections.
## Docker Compose file
@@ -79,6 +81,18 @@ the container. This path stores the instance private key and staged
of container lifecycle events, generated credentials are not replaced on reboot
or re-deploy.
The `potatomesh_pages` volume mounts to `/app/pages` and holds operator-managed
Markdown files that are rendered as static content pages in the web UI. On first
start the default `1-about.md` page is copied from the image into the volume.
You can add, edit, or remove `.md` files in this volume to customise your
instance's navigation. To use a host directory instead of a named volume, replace
the volume entry with a bind mount:
```yaml
volumes:
- ./my-pages:/app/pages
```
## Start the stack
From the directory containing the Compose file:
+30 -9
@@ -1,3 +1,4 @@
# syntax=docker/dockerfile:1.6
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -25,6 +26,9 @@ ENV BUNDLE_FORCE_RUBY_PLATFORM=true
# Install build dependencies and SQLite3
RUN apk add --no-cache \
build-base \
python3 \
py3-pip \
py3-virtualenv \
sqlite-dev \
linux-headers \
pkgconfig
@@ -40,11 +44,16 @@ RUN bundle config set --local force_ruby_platform true && \
bundle config set --local without 'development test' && \
bundle install --jobs=4 --retry=3
# Install Meshtastic decoder dependencies in a dedicated venv
RUN python3 -m venv /opt/meshtastic-venv && \
/opt/meshtastic-venv/bin/pip install --no-cache-dir meshtastic protobuf
# Production stage
FROM ruby:3.3-alpine AS production
# Install runtime dependencies
RUN apk add --no-cache \
python3 \
sqlite \
tzdata \
curl
@@ -58,18 +67,27 @@ WORKDIR /app
# Copy installed gems from builder stage
COPY --from=builder /usr/local/bundle /usr/local/bundle
COPY --from=builder /opt/meshtastic-venv /opt/meshtastic-venv
# Copy application code (exclude Dockerfile from web directory)
COPY --chown=potatomesh:potatomesh web/app.rb web/app.sh web/Gemfile web/Gemfile.lock* web/spec/ ./
# Copy application code (excluding the Dockerfile which is not required at runtime)
COPY --chown=potatomesh:potatomesh web/app.rb ./
COPY --chown=potatomesh:potatomesh web/app.sh ./
COPY --chown=potatomesh:potatomesh web/Gemfile ./
COPY --chown=potatomesh:potatomesh web/Gemfile.lock* ./
COPY --chown=potatomesh:potatomesh web/lib ./lib
COPY --chown=potatomesh:potatomesh web/spec ./spec
COPY --chown=potatomesh:potatomesh web/public ./public
COPY --chown=potatomesh:potatomesh web/views/ ./views/
COPY --chown=potatomesh:potatomesh web/views ./views
COPY --chown=potatomesh:potatomesh web/scripts ./scripts
# Copy SQL schema files from data directory
COPY --chown=potatomesh:potatomesh data/*.sql /data/
COPY --chown=potatomesh:potatomesh data/mesh_ingestor/decode_payload.py /app/data/mesh_ingestor/decode_payload.py
# Create data directory for SQLite database
RUN mkdir -p /app/data /app/.local/share/potato-mesh && \
chown -R potatomesh:potatomesh /app/data /app/.local
# Create data and configuration directories with correct ownership
RUN mkdir -p /app/.local/share/potato-mesh \
&& mkdir -p /app/.config/potato-mesh/well-known \
&& chown -R potatomesh:potatomesh /app/.local/share /app/.config
# Switch to non-root user
USER potatomesh
@@ -78,13 +96,16 @@ USER potatomesh
EXPOSE 41447
# Default environment variables (can be overridden by host)
ENV APP_ENV=production \
RACK_ENV=production \
ENV RACK_ENV=production \
APP_ENV=production \
MESHTASTIC_PYTHON=/opt/meshtastic-venv/bin/python \
XDG_DATA_HOME=/app/.local/share \
XDG_CONFIG_HOME=/app/.config \
SITE_NAME="PotatoMesh Demo" \
INSTANCE_DOMAIN="potato.example.com" \
CHANNEL="#LongFast" \
FREQUENCY="915MHz" \
MAP_CENTER="38.761944,-27.090833" \
MAP_ZOOM="" \
MAX_DISTANCE=42 \
CONTACT_LINK="#potatomesh:dod.ngo" \
DEBUG=0
+3
@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# Prometheus Monitoring for PotatoMesh
PotatoMesh exposes runtime telemetry through a dedicated Prometheus endpoint so you can
+59 -8
@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# 🥔 PotatoMesh
[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/l5yth/potato-mesh/ruby.yml?branch=main)](https://github.com/l5yth/potato-mesh/actions)
@@ -7,7 +10,10 @@
[![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/l5yth/potato-mesh/issues)
[![Matrix Chat](https://img.shields.io/badge/matrix-%23potatomesh:dod.ngo-blue)](https://matrix.to/#/#potatomesh:dod.ngo)
A federated, Meshtastic-powered node dashboard for your local community.
[![Meshtastic](https://img.shields.io/badge/Meshtastic-supported-67ea94)](https://meshtastic.org)
[![MeshCore](https://img.shields.io/badge/MeshCore-supported-000000)](https://meshcore.co.uk)
A federated, Meshtastic & Meshcore node dashboard for your local community.
_No MQTT clutter, just local LoRa aether._
* Web dashboard with chat window and map view showing nodes, positions, neighbors,
@@ -17,15 +23,17 @@ _No MQTT clutter, just local LoRa aether._
* Allows searching and filtering for nodes in map and table view.
* Federated: _automatically_ forms a federation with other communities running
Potato Mesh!
* Supports Meshtastic and Meshcore
* Supplemental Python ingestor to feed the POST APIs of the Web app with data remotely.
* Supports multiple ingestors per instance.
* Supports Meshtastic and Meshcore
* Matrix bridge that posts Meshtastic messages to a defined matrix channel (no
radio required).
* Mobile app to _read_ messages on your local aether (no radio required).
Live demo for Berlin #MediumFast: [potatomesh.net](https://potatomesh.net)
Live demo for Berlin: [potatomesh.net](https://potatomesh.net)
![screenshot of the fourth version](./scrot-0.4.png)
![screenshot of the sixth version](./scrot-0.7.png)
## Web App
@@ -120,6 +128,28 @@ well-known document is staged in
The database can be found in `$XDG_DATA_HOME/potato-mesh`.
### Custom Pages
Instance operators can publish static content pages (contact details, mesh
protocol information, legal notices, etc.) by placing Markdown files in the
`pages/` directory inside `web/`. Each `.md` file automatically becomes a nav
entry and a route under `/pages/<slug>`.
Files are named `<sort-prefix>-<slug>.md` — the numeric prefix controls
navigation order and the slug becomes the URL path and nav label:
| Filename | Nav Label | URL |
| ---------------------- | -------------- | ----------------------- |
| `1-about.md` | About | `/pages/about` |
| `5-rules.md` | Rules | `/pages/rules` |
| `9-contact.md` | Contact | `/pages/contact` |
| `20-impressum.md` | Impressum | `/pages/impressum` |
A default `1-about.md` ships with the app. In Docker deployments the directory
is exposed as the `potatomesh_pages` volume (mounted at `/app/pages`) so you can
add or edit pages without rebuilding the image. The pages directory can also be
overridden with the `PAGES_DIR` environment variable.
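A sketch of how the naming convention could map to nav entries (the web app implements this in Ruby; the parsing below is illustrative, not the shipped code):

```python
import re


def parse_page_filename(filename: str):
    """Split '<sort-prefix>-<slug>.md' into nav order, label, and URL."""
    m = re.fullmatch(r"(\d+)-([a-z0-9-]+)\.md", filename)
    if not m:
        return None  # not a page file
    order, slug = int(m.group(1)), m.group(2)
    return {"order": order, "label": slug.capitalize(), "url": f"/pages/{slug}"}


# parse_page_filename("5-rules.md")
# -> {"order": 5, "label": "Rules", "url": "/pages/rules"}
```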
### Federation
PotatoMesh instances can optionally federate by publishing signed metadata and
@@ -252,15 +282,36 @@ services.potato-mesh = {
## Docker
Docker images are published on Github for each release:
Docker images are published on GitHub Container Registry for each release.
Image names and tags follow the workflow format:
`${IMAGE_PREFIX}-${service}-${architecture}:${tag}` (see `.github/workflows/docker.yml`).
```bash
docker pull ghcr.io/l5yth/potato-mesh/web:latest # newest release
docker pull ghcr.io/l5yth/potato-mesh/web:v0.5.5 # pinned historical release
docker pull ghcr.io/l5yth/potato-mesh/ingestor:latest
docker pull ghcr.io/l5yth/potato-mesh/matrix-bridge:latest
docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:latest
docker pull ghcr.io/l5yth/potato-mesh-web-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-web-linux-armv7:latest
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:latest
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-armv7:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-armv7:latest
# version-pinned examples
docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.6.0
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.6.0
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.6.0
```
Note: `latest` is only published for non-prerelease versions. Pre-release tags
such as `-rc`, `-beta`, `-alpha`, or `-dev` are version-tagged only.
When using Compose, set `POTATOMESH_IMAGE_ARCH` in `docker-compose.yml` (or via
environment) so service images resolve to the correct architecture variant and
you avoid manual tag mistakes.
Feel free to run the [configure.sh](./configure.sh) script to set up your
environment. See the [Docker guide](DOCKER.md) for more details and custom
deployment instructions.
+6 -2
View File
@@ -1,6 +1,10 @@
# Meshtastic Reader
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
Meshtastic Reader read-only PotatoMesh chat client for Android and iOS.
# PotatoMesh Mobile
PotatoMesh Mobile — read-only mesh chat client for Android and iOS.
Supports Meshtastic and MeshCore networks.
## Setup
+2 -2
View File
@@ -15,11 +15,11 @@
<key>CFBundlePackageType</key>
<string>FMWK</string>
<key>CFBundleShortVersionString</key>
<string>0.5.10</string>
<string>0.6.1</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
<string>0.5.10</string>
<string>0.6.1</string>
<key>MinimumOSVersion</key>
<string>14.0</string>
</dict>
+1
View File
@@ -1 +1,2 @@
#include? "Pods/Target Support Files/Pods-Runner/Pods-Runner.debug.xcconfig"
#include "Generated.xcconfig"
+1
View File
@@ -1 +1,2 @@
#include? "Pods/Target Support Files/Pods-Runner/Pods-Runner.release.xcconfig"
#include "Generated.xcconfig"
+5 -1
View File
@@ -2944,6 +2944,9 @@ class MeshNode {
}
}
/// The protocol identifier sent to the API to filter results to Meshtastic only.
const String _kProtocolFilter = 'meshtastic';
/// Build a messages API URI for a given domain or absolute URL.
Uri _buildMessagesUri(String domain, {int since = 0, int limit = 1000}) {
final trimmed = domain.trim();
@@ -2951,6 +2954,7 @@ Uri _buildMessagesUri(String domain, {int since = 0, int limit = 1000}) {
'limit': limit.toString(),
'encrypted': 'false',
'since': since.toString(),
'protocol': _kProtocolFilter,
};
if (trimmed.isEmpty) {
return Uri.https('potatomesh.net', '/api/messages', params);
@@ -2988,7 +2992,7 @@ Uri _buildNodeUri(String domain, String nodeId) {
/// Build the bulk nodes API URI for fetching recent nodes.
Uri _buildNodesUri(String domain, {int limit = 1000}) {
final trimmedDomain = domain.trim();
final params = {'limit': limit.toString()};
final params = {'limit': limit.toString(), 'protocol': _kProtocolFilter};
if (trimmedDomain.isEmpty) {
return Uri.https('potatomesh.net', '/api/nodes', params);
+8 -8
View File
@@ -45,10 +45,10 @@ packages:
dependency: transitive
description:
name: characters
sha256: f71061c654a3380576a52b451dd5532377954cf9dbd272a78fc8479606670803
sha256: faf38497bda5ead2a8c7615f4f7939df04333478bf32e4173fcb06d428b5716b
url: "https://pub.dev"
source: hosted
version: "1.4.0"
version: "1.4.1"
checked_yaml:
dependency: transitive
description:
@@ -284,18 +284,18 @@ packages:
dependency: transitive
description:
name: matcher
sha256: dc58c723c3c24bf8d3e2d3ad3f2f9d7bd9cf43ec6feaa64181775e60190153f2
sha256: "12956d0ad8390bbcc63ca2e1469c0619946ccb52809807067a7020d57e647aa6"
url: "https://pub.dev"
source: hosted
version: "0.12.17"
version: "0.12.18"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
sha256: "9c337007e82b1889149c82ed242ed1cb24a66044e30979c44912381e9be4c48b"
url: "https://pub.dev"
source: hosted
version: "0.11.1"
version: "0.13.0"
meta:
dependency: transitive
description:
@@ -497,10 +497,10 @@ packages:
dependency: transitive
description:
name: test_api
sha256: ab2726c1a94d3176a45960b6234466ec367179b87dd74f1611adb1f3b5fb9d55
sha256: "93167629bfc610f71560ab9312acdda4959de4df6fac7492c89ff0d3886f6636"
url: "https://pub.dev"
source: hosted
version: "0.7.7"
version: "0.7.9"
timezone:
dependency: transitive
description:
+1 -1
View File
@@ -1,7 +1,7 @@
name: potato_mesh_reader
description: Meshtastic Reader — read-only view for PotatoMesh messages.
publish_to: "none"
version: 0.5.10
version: 0.6.1
environment:
sdk: ">=3.4.0 <4.0.0"
+2
View File
@@ -206,8 +206,10 @@ void main() {
expect(calls[0].host, 'mesh.example.org');
expect(calls[0].path, '/api/messages');
expect(calls[0].queryParameters['protocol'], 'meshtastic');
expect(calls[1].scheme, 'https');
expect(calls[1].path, '/api/messages');
expect(calls[1].queryParameters['protocol'], 'meshtastic');
});
});
+1
View File
@@ -145,6 +145,7 @@ void main() {
if (request.url.path == '/api/messages') {
sinces.add(request.url.queryParameters['since'] ?? '');
expect(request.url.queryParameters['limit'], '1000');
expect(request.url.queryParameters['protocol'], 'meshtastic');
if (sinces.length == 1) {
return http.Response(
jsonEncode([
-10
View File
@@ -219,16 +219,6 @@ else
sed -i.bak '/^INSTANCE_DOMAIN=.*/d' .env
fi
# Migrate legacy connection settings and ensure defaults exist
if grep -q "^MESH_SERIAL=" .env; then
legacy_connection=$(grep "^MESH_SERIAL=" .env | head -n1 | cut -d'=' -f2-)
if [ -n "$legacy_connection" ] && ! grep -q "^CONNECTION=" .env; then
echo "♻️ Migrating legacy MESH_SERIAL value to CONNECTION"
update_env "CONNECTION" "$legacy_connection"
fi
sed -i.bak '/^MESH_SERIAL=.*/d' .env
fi
if ! grep -q "^CONNECTION=" .env; then
echo "CONNECTION=/dev/ttyACM0" >> .env
fi
+2
View File
@@ -50,6 +50,7 @@ USER potatomesh
ENV CONNECTION=/dev/ttyACM0 \
CHANNEL_INDEX=0 \
DEBUG=0 \
PROTOCOL=meshtastic \
ALLOWED_CHANNELS="" \
HIDDEN_CHANNELS="" \
INSTANCE_DOMAIN="" \
@@ -77,6 +78,7 @@ USER ContainerUser
ENV CONNECTION=/dev/ttyACM0 \
CHANNEL_INDEX=0 \
DEBUG=0 \
PROTOCOL=meshtastic \
ALLOWED_CHANNELS="" \
HIDDEN_CHANNELS="" \
INSTANCE_DOMAIN="" \
+1 -1
View File
@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
message information before forwarding it to the accompanying web application.
"""
VERSION = "0.5.10"
VERSION = "0.6.1"
"""Semantic version identifier shared with the dashboard and front-end."""
__version__ = VERSION
+2 -1
View File
@@ -20,7 +20,8 @@ CREATE TABLE IF NOT EXISTS ingestors (
last_seen_time INTEGER NOT NULL,
version TEXT,
lora_freq INTEGER,
modem_preset TEXT
modem_preset TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic'
);
CREATE INDEX IF NOT EXISTS idx_ingestors_last_seen ON ingestors(last_seen_time);
+2
View File
@@ -27,6 +27,8 @@ CREATE TABLE IF NOT EXISTS instances (
last_update_time INTEGER,
is_private BOOLEAN NOT NULL DEFAULT 0,
nodes_count INTEGER,
meshcore_nodes_count INTEGER,
meshtastic_nodes_count INTEGER,
contact_link TEXT,
signature TEXT
);
+11 -4
View File
@@ -15,7 +15,14 @@
set -euo pipefail
python -m venv .venv
source .venv/bin/activate
pip install -U meshtastic black pytest
exec python mesh.py
# Recreate the venv only when its embedded Python is missing or points to the
# wrong prefix (e.g. a stale shebang from a sibling project's venv). Avoid
# --clear on every run: it wipes installed packages before each start, so any
# restart during a PyPI outage turns a transient network failure into hard
# ingestor downtime.
if ! .venv/bin/python -c "import sys; exit(0 if '.venv' in sys.prefix else 1)" 2>/dev/null; then
python -m venv --clear .venv
fi
.venv/bin/pip install -U pip
.venv/bin/pip install -r "$(dirname "$0")/requirements.txt"
exec .venv/bin/python mesh.py
+121
View File
@@ -0,0 +1,121 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
## Mesh ingestor contracts (stable interfaces)
This repo's ingestion pipeline is split into:
- **Python collector** (`data/mesh_ingestor/*`) which normalizes packets/events and POSTs JSON to the web app.
- **Sinatra web app** (`web/`) which accepts those payloads on `POST /api/*` ingest routes and persists them into SQLite tables defined under `data/*.sql`.
This document records the **contracts that future protocols must preserve**. The intent is to enable adding new protocols (MeshCore, Reticulum, …) without changing the Ruby/DB/UI read-side.
### Canonical node identity
- **Canonical node id**: `nodes.node_id` is a `TEXT` primary key and is treated as canonical across the system.
- **Format**: `!%08x` (lowercase hex, 8 chars), for example `!abcdef01`.
- **Normalization**:
- Python currently normalizes via `data/mesh_ingestor/serialization.py:_canonical_node_id`.
- Ruby normalizes via `web/lib/potato_mesh/application/data_processing.rb:canonical_node_parts`.
- **Dual addressing**: Ruby routes and queries accept either a canonical `!xxxxxxxx` string or a numeric node id; they normalize to `node_id`.
Note: non-Meshtastic protocols will need a strategy to map their native node identifiers into this `!%08x` space. That mapping is intentionally not standardized in code yet.
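As a sketch of the canonical form (assuming the source is a 32-bit node number; the actual helpers are the Python and Ruby normalizers named above):
```python
def canonical_node_id(num: int) -> str:
    # Canonical !%08x form: lowercase hex, zero-padded to 8 chars.
    # Masking to 32 bits is an assumption for illustration only.
    return f"!{num & 0xFFFFFFFF:08x}"

assert canonical_node_id(0xABCDEF01) == "!abcdef01"
```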
### Ingest HTTP routes and payload shapes
Future providers should emit payloads that match these shapes (keys + types), which are validated by existing tests (notably `tests/test_mesh.py`).
#### `POST /api/nodes`
Payload is a mapping keyed by canonical node id, with an optional top-level `"ingestor"` key:
- `{ "!abcdef01": { ... node fields ... }, "ingestor": "!ingestornodeid" }`
When `"ingestor"` is present the protocol is inherited from the registered ingestor (see `POST /api/ingestors`); omitting it defaults to `"meshtastic"`.
Node entry fields are "Meshtastic-ish" (camelCase) and may include:
- `num` (int node number)
- `lastHeard` (int unix seconds)
- `snr` (float)
- `hopsAway` (int)
- `isFavorite` (bool)
- `user` (mapping; e.g. `shortName`, `longName`, `macaddr`, `hwModel`, `publicKey`, `isUnmessagable`)
- `role` (optional string) — omit when unknown; known values include Meshtastic role names (e.g. `CLIENT`, `ROUTER`) and MeshCore role names (`COMPANION`, `REPEATER`, `ROOM_SERVER`, `SENSOR`)
- `deviceMetrics` (mapping; e.g. `batteryLevel`, `voltage`, `channelUtilization`, `airUtilTx`, `uptimeSeconds`)
- `position` (mapping; `latitude`, `longitude`, `altitude`, `time`, `locationSource`, `precisionBits`, optional nested `raw`)
- Optional radio metadata: `lora_freq`, `modem_preset`
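A minimal payload sketch using the fields above (all values invented for illustration):
```python
nodes_payload = {
    "!abcdef01": {
        "num": 0xABCDEF01,
        "lastHeard": 1700000000,
        "snr": 7.5,
        "user": {"shortName": "AB01", "longName": "Example Node"},
        "position": {"latitude": 52.52, "longitude": 13.405},
    },
    # Optional: protocol is inherited from this registered ingestor.
    "ingestor": "!ingestornodeid",
}
```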
#### `POST /api/messages`
Single message payload:
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Identity: `from_id` (string/int), `to_id` (string/int), `channel` (int), `portnum` (string|nil)
- Payload: `text` (string|nil), `encrypted` (string|nil), `reply_id` (int|nil), `emoji` (string|nil)
- RF: `snr` (float|nil), `rssi` (int|nil), `hop_limit` (int|nil)
- Meta: `channel_name` (string; only when not encrypted and known), `ingestor` (canonical host id), `lora_freq`, `modem_preset`
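Sketch of a plaintext message payload (values invented):
```python
message_payload = {
    "id": 123456789,  # required
    "rx_time": 1700000000,  # required, unix seconds
    "rx_iso": "2023-11-14T22:13:20+00:00",  # required
    "from_id": "!abcdef01",
    "to_id": "!ffffffff",
    "channel": 0,
    "portnum": "TEXT_MESSAGE_APP",
    "text": "hello mesh",
    "snr": 6.25,
    "rssi": -80,
    "hop_limit": 3,
    "channel_name": "MediumFast",
}
```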
#### `POST /api/positions`
Single position payload:
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Node: `node_id` (canonical string), `node_num` (int|nil), `num` (int|nil), `from_id` (canonical string), `to_id` (string|nil)
- Position: `latitude`, `longitude`, `altitude` (floats|nil)
- Position time: `position_time` (int|nil)
- Quality: `location_source` (string|nil), `precision_bits` (int|nil), `sats_in_view` (int|nil), `pdop` (float|nil)
- Motion: `ground_speed` (float|nil), `ground_track` (float|nil)
- RF/meta: `snr`, `rssi`, `hop_limit`, `bitfield`, `payload_b64` (string|nil), `raw` (mapping|nil), `ingestor`, `lora_freq`, `modem_preset`
#### `POST /api/telemetry`
Single telemetry payload:
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Node: `node_id` (canonical string|nil), `node_num` (int|nil), `from_id`, `to_id`
- Time: `telemetry_time` (int|nil)
- Packet: `channel` (int), `portnum` (string|nil), `bitfield` (int|nil), `hop_limit` (int|nil)
- RF: `snr` (float|nil), `rssi` (int|nil)
- Raw: `payload_b64` (string; may be empty string when unknown)
- Metrics: many optional snake_case keys (`battery_level`, `voltage`, `temperature`, etc.)
- Subtype: `telemetry_type` (string|nil) — optional discriminator identifying which Meshtastic protobuf oneof was set; one of `"device"`, `"environment"`, `"power"`, or `"air_quality"`. Ingestors that detect the subtype SHOULD include this field; omit rather than send `null` when unknown. The web app infers the type from metric-field presence when absent, so old ingestors remain compatible.
- Meta: `ingestor`, `lora_freq`, `modem_preset`
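Sketch of an environment telemetry payload carrying the optional discriminator (values invented):
```python
telemetry_payload = {
    "id": 987654321,  # required
    "rx_time": 1700000000,  # required
    "rx_iso": "2023-11-14T22:13:20+00:00",  # required
    "node_id": "!abcdef01",
    "channel": 0,
    "payload_b64": "",  # may be an empty string when unknown
    "telemetry_type": "environment",  # omit (do not send null) when unknown
    "temperature": 21.5,
}
```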
#### `POST /api/neighbors`
Neighbors snapshot payload:
- Node: `node_id` (canonical string), `node_num` (int|nil)
- `neighbors`: list of entries with `neighbor_id` (canonical string), `neighbor_num` (int|nil), `snr` (float|nil), `rx_time` (int), `rx_iso` (string)
- Snapshot time: `rx_time`, `rx_iso`
- Optional: `node_broadcast_interval_secs` (int|nil), `last_sent_by_id` (canonical string|nil)
- Meta: `ingestor`, `lora_freq`, `modem_preset`
#### `POST /api/traces`
Single trace payload:
- Identity: `id` (int|nil), `request_id` (int|nil)
- Endpoints: `src` (int|nil), `dest` (int|nil)
- Path: `hops` (list[int])
- Time: `rx_time` (int), `rx_iso` (string)
- Metrics: `rssi` (int|nil), `snr` (float|nil), `elapsed_ms` (int|nil)
- Meta: `ingestor`, `lora_freq`, `modem_preset`
#### `POST /api/ingestors`
Heartbeat payload:
- `node_id` (canonical string)
- `start_time` (int), `last_seen_time` (int)
- `version` (string)
- Optional: `lora_freq`, `modem_preset`
- Optional: `protocol` (string; e.g. `"meshtastic"`, `"meshcore"`) — declares the mesh backend for this ingestor; defaults to `"meshtastic"` when absent
**Protocol propagation**: all event records (`messages`, `positions`, `telemetry`, `traces`, `neighbors`) that reference this ingestor via their `ingestor` field will inherit its `protocol` value at write time.
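For example, a MeshCore ingestor heartbeat might look like this (values invented):
```python
heartbeat_payload = {
    "node_id": "!ingestornodeid",
    "start_time": 1700000000,
    "last_seen_time": 1700003600,
    "version": "0.6.1",
    "protocol": "meshcore",  # events that name this ingestor inherit it
}
```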
### GET endpoint filtering
All collection GET endpoints (`/api/nodes`, `/api/messages`, `/api/positions`, `/api/telemetry`, `/api/traces`, `/api/neighbors`, `/api/ingestors`) accept an optional `?protocol=<value>` query parameter. When present, only records whose `protocol` column matches the given value are returned. The `protocol` field is included in all GET responses.
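A read-side sketch (endpoints and the `protocol` parameter as documented above; `potatomesh.net` used as an example host):
```python
import json
import urllib.parse
import urllib.request

def fetch(endpoint: str, protocol: str = "meshtastic") -> object:
    # Only records whose protocol column matches the filter are returned.
    query = urllib.parse.urlencode({"protocol": protocol})
    url = f"https://potatomesh.net/api/{endpoint}?{query}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

meshcore_nodes = fetch("nodes", protocol="meshcore")
```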
+3 -4
View File
@@ -25,6 +25,7 @@ from .. import VERSION as _PACKAGE_VERSION
from . import (
channels,
config,
connection,
daemon,
handlers,
ingestors,
@@ -46,7 +47,7 @@ def _reexport(module) -> None:
def _export_constants() -> None:
globals()["json"] = queue.json
globals()["urllib"] = queue.urllib
globals()["glob"] = interfaces.glob
globals()["glob"] = connection.glob
__all__.extend(["json", "urllib", "glob", "threading", "signal"])
@@ -69,6 +70,7 @@ _CONFIG_ATTRS = {
"CHANNEL_INDEX",
"DEBUG",
"INSTANCE",
"INSTANCES",
"API_TOKEN",
"ALLOWED_CHANNELS",
"HIDDEN_CHANNELS",
@@ -81,9 +83,6 @@ _CONFIG_ATTRS = {
"_debug_log",
}
# Legacy export maintained for backwards compatibility.
_CONFIG_ATTRS.add("PORT")
_INTERFACE_ATTRS = {"BLEInterface", "SerialInterface", "TCPInterface"}
_QUEUE_ATTRS = set(queue.__all__)
+41
View File
@@ -182,6 +182,9 @@ def capture_from_interface(iface: Any) -> None:
channels_obj = getattr(local_node, "channels", None) if local_node else None
channel_entries: list[tuple[int, str]] = []
# Use a set for O(1) duplicate-index checks; Meshtastic occasionally
# emits the same channel index twice when the channel list is partially
# initialised, so we keep only the first valid entry per index.
seen_indices: set[int] = set()
for candidate in _iter_channel_objects(channels_obj):
result = _channel_tuple(candidate)
@@ -270,6 +273,43 @@ def is_hidden_channel(channel_name_value: str | None) -> bool:
return False
def register_channel(channel_idx: int, channel_name_value: str) -> None:
"""Register a single channel index → name mapping.
Unlike :func:`capture_from_interface`, which scans a complete interface
object in one shot, this function registers entries one at a time. It is
intended for protocols (e.g. MeshCore) that expose channel metadata via
per-index requests rather than a bulk channel list.
Idempotent: silently skips if *channel_idx* is already cached or
*channel_name_value* is blank, matching the first-seen-wins semantics of
:func:`capture_from_interface`.
Parameters:
channel_idx: Zero-based channel index.
channel_name_value: Human-readable channel name reported by the device.
"""
global _CHANNEL_MAPPINGS, _CHANNEL_LOOKUP
if not isinstance(channel_name_value, str) or not channel_name_value.strip():
return
if channel_idx in _CHANNEL_LOOKUP:
return
name = channel_name_value.strip()
_CHANNEL_LOOKUP[channel_idx] = name
_CHANNEL_MAPPINGS = tuple(sorted(_CHANNEL_LOOKUP.items()))
config._debug_log(
"Registered channel",
context="channels.register",
severity="info",
channel_idx=channel_idx,
channel_name=name,
)
def _reset_channel_cache() -> None:
"""Clear cached channel data. Intended for use in tests only."""
@@ -282,6 +322,7 @@ __all__ = [
"capture_from_interface",
"channel_mappings",
"channel_name",
"register_channel",
"allowed_channel_names",
"hidden_channel_names",
"is_allowed_channel",
+145 -36
View File
@@ -16,10 +16,9 @@
from __future__ import annotations
import math
import os
import sys
from datetime import datetime, timezone
from types import ModuleType
from typing import Any
DEFAULT_SNAPSHOT_SECS = 60
@@ -49,12 +48,14 @@ DEFAULT_ENERGY_SLEEP_SECS = float(6 * 60 * 60)
DEFAULT_INGESTOR_HEARTBEAT_SECS = float(60 * 60)
"""Interval between ingestor heartbeat announcements."""
CONNECTION = os.environ.get("CONNECTION") or os.environ.get("MESH_SERIAL")
DEFAULT_SELF_NODE_REPORT_INTERVAL_SECS = float(60 * 60)
"""Interval between periodic forced self-node re-reports from the daemon."""
CONNECTION = os.environ.get("CONNECTION")
"""Optional connection target for the mesh interface.
When unset, platform-specific defaults will be inferred by the interface
implementations. The legacy :envvar:`MESH_SERIAL` environment variable is still
accepted for backwards compatibility.
implementations.
"""
SNAPSHOT_SECS = DEFAULT_SNAPSHOT_SECS
@@ -65,6 +66,52 @@ CHANNEL_INDEX = int(os.environ.get("CHANNEL_INDEX", str(DEFAULT_CHANNEL_INDEX)))
DEBUG = os.environ.get("DEBUG") == "1"
_KNOWN_PROTOCOLS = ("meshtastic", "meshcore")
_raw_protocol = os.environ.get("PROTOCOL", "meshtastic").strip().lower()
if _raw_protocol not in _KNOWN_PROTOCOLS:
raise ValueError(
f"Unknown PROTOCOL={_raw_protocol!r}. "
f"Valid options: {', '.join(_KNOWN_PROTOCOLS)}"
)
PROTOCOL = _raw_protocol
"""Active ingestion protocol, selected via the :envvar:`PROTOCOL` environment variable.
Accepted values are ``meshtastic`` (default) and ``meshcore``.
"""
def _parse_lora_freq_env(raw: str | None) -> float | int | None:
"""Parse the ``FREQUENCY`` environment variable into a numeric LoRa frequency.
Returns an :class:`int` for whole-number strings (e.g. ``"868"``), a
:class:`float` for decimal strings (e.g. ``"869.525"``), or ``None`` when
*raw* is empty, absent, non-numeric, or non-finite (e.g. ``"inf"``).
Non-numeric labels such as ``"EU_868"`` intentionally return ``None`` so
that :data:`LORA_FREQ` is left unset and :func:`~interfaces._ensure_radio_metadata`
can still populate it from the detected radio configuration.
Parameters:
raw: Raw value of the ``FREQUENCY`` environment variable.
Returns:
Numeric frequency value, or ``None``.
"""
if not raw:
return None
stripped = raw.strip()
if not stripped:
return None
try:
as_float = float(stripped)
except ValueError:
return None
if not math.isfinite(as_float):
return None
return int(as_float) if as_float == int(as_float) else as_float
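Behaviour sketch matching the docstring above:
```python
assert _parse_lora_freq_env("868") == 868          # whole number -> int
assert _parse_lora_freq_env("869.525") == 869.525  # decimal -> float
assert _parse_lora_freq_env("EU_868") is None      # non-numeric region label
assert _parse_lora_freq_env("inf") is None         # non-finite
assert _parse_lora_freq_env(None) is None          # unset
```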
def _parse_channel_names(raw_value: str | None) -> tuple[str, ...]:
"""Normalise a comma-separated list of channel names.
@@ -112,16 +159,16 @@ ALLOWED_CHANNELS = _parse_channel_names(os.environ.get("ALLOWED_CHANNELS"))
def _resolve_instance_domain() -> str:
"""Resolve the configured instance domain from the environment.
The ingestor prefers the :envvar:`INSTANCE_DOMAIN` variable for clarity and
compatibility with the web application. For deployments that still
configure the legacy :envvar:`POTATOMESH_INSTANCE` variable, the resolver
falls back to that value when no primary domain is set.
Reads the :envvar:`INSTANCE_DOMAIN` variable. When the value does not
contain a scheme, ``https://`` is prepended automatically.
.. note::
Kept for backward compatibility with existing tests and callers.
New code should use :func:`_resolve_instance_domains` instead.
"""
instance_domain = os.environ.get("INSTANCE_DOMAIN", "")
legacy_instance = os.environ.get("POTATOMESH_INSTANCE", "")
configured_instance = (instance_domain or legacy_instance).rstrip("/")
configured_instance = os.environ.get("INSTANCE_DOMAIN", "").rstrip("/")
if configured_instance and "://" not in configured_instance:
return f"https://{configured_instance}"
@@ -129,13 +176,91 @@ def _resolve_instance_domain() -> str:
return configured_instance
INSTANCE = _resolve_instance_domain()
API_TOKEN = os.environ.get("API_TOKEN", "")
def _normalise_domain(raw: str) -> str:
"""Strip whitespace and trailing slashes, prepend ``https://`` when needed.
Parameters:
raw: Single domain string to normalise.
Returns:
A URL string with a scheme prefix.
"""
domain = raw.strip().rstrip("/")
if domain and "://" not in domain:
return f"https://{domain}"
return domain
def _resolve_instance_domains() -> tuple[tuple[str, str], ...]:
"""Parse :envvar:`INSTANCE_DOMAIN` and :envvar:`API_TOKEN` into paired tuples.
When ``INSTANCE_DOMAIN`` contains comma-separated values, each entry is
treated as an independent target. ``API_TOKEN`` is either broadcast to
every target (single value) or positionally paired (comma-separated with
a matching count).
Returns:
A tuple of ``(instance_url, api_token)`` pairs, deduplicated by URL.
Raises:
ValueError: When multiple comma-separated tokens are supplied and
their count does not match the number of domains.
"""
raw_domain = os.environ.get("INSTANCE_DOMAIN", "")
raw_token = os.environ.get("API_TOKEN", "")
domains: list[str] = []
seen: set[str] = set()
for part in raw_domain.split(","):
normalised = _normalise_domain(part)
if not normalised:
continue
key = normalised.casefold()
if key in seen:
continue
seen.add(key)
domains.append(normalised)
if not domains:
return ()
tokens = [t.strip() for t in raw_token.split(",")]
# A single token (including empty string) is broadcast to all domains.
if len(tokens) == 1:
token = tokens[0]
return tuple((d, token) for d in domains)
if len(tokens) != len(domains):
raise ValueError(
f"API_TOKEN has {len(tokens)} comma-separated values but "
f"INSTANCE_DOMAIN has {len(domains)}; counts must match or "
f"API_TOKEN must be a single value"
)
return tuple(zip(domains, tokens))
INSTANCES: tuple[tuple[str, str], ...] = _resolve_instance_domains()
"""Paired ``(instance_url, api_token)`` tuples derived from the environment."""
INSTANCE = INSTANCES[0][0] if INSTANCES else _resolve_instance_domain()
"""First configured instance URL, kept for backward compatibility."""
API_TOKEN = INSTANCES[0][1] if INSTANCES else os.environ.get("API_TOKEN", "")
"""API token for the first configured instance, kept for backward compatibility."""
ENERGY_SAVING = os.environ.get("ENERGY_SAVING") == "1"
"""When ``True``, enables the ingestor's energy saving mode."""
LORA_FREQ: float | int | str | None = None
"""Frequency of the local node's configured LoRa region in MHz or raw region label."""
LORA_FREQ: float | int | str | None = _parse_lora_freq_env(os.environ.get("FREQUENCY"))
"""Frequency of the local node's configured LoRa region in MHz or raw region label.
Pre-seeded from the ``FREQUENCY`` environment variable when set to a finite
numeric value, allowing operators to override auto-detected values.
Non-numeric or non-finite values are ignored so that auto-detection from the
radio interface can still fill this in.
"""
MODEM_PRESET: str | None = None
"""CamelCase modem preset name reported by the local node."""
@@ -147,9 +272,7 @@ _INACTIVITY_RECONNECT_SECS = DEFAULT_INACTIVITY_RECONNECT_SECS
_ENERGY_ONLINE_DURATION_SECS = DEFAULT_ENERGY_ONLINE_DURATION_SECS
_ENERGY_SLEEP_SECS = DEFAULT_ENERGY_SLEEP_SECS
_INGESTOR_HEARTBEAT_SECS = DEFAULT_INGESTOR_HEARTBEAT_SECS
# Backwards compatibility shim for legacy imports.
PORT = CONNECTION
_SELF_NODE_REPORT_INTERVAL_SECS = DEFAULT_SELF_NODE_REPORT_INTERVAL_SECS
def _debug_log(
@@ -194,6 +317,7 @@ __all__ = [
"HIDDEN_CHANNELS",
"ALLOWED_CHANNELS",
"INSTANCE",
"INSTANCES",
"API_TOKEN",
"ENERGY_SAVING",
"LORA_FREQ",
@@ -205,21 +329,6 @@ __all__ = [
"_ENERGY_ONLINE_DURATION_SECS",
"_ENERGY_SLEEP_SECS",
"_INGESTOR_HEARTBEAT_SECS",
"_SELF_NODE_REPORT_INTERVAL_SECS",
"_debug_log",
]
class _ConfigModule(ModuleType):
"""Module proxy that keeps connection aliases synchronised."""
def __setattr__(self, name: str, value: Any) -> None: # type: ignore[override]
"""Propagate CONNECTION/PORT assignments to both attributes."""
if name in {"CONNECTION", "PORT"}:
super().__setattr__("CONNECTION", value)
super().__setattr__("PORT", value)
return
super().__setattr__(name, value)
sys.modules[__name__].__class__ = _ConfigModule
+163
View File
@@ -0,0 +1,163 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Provider-agnostic connection target helpers.
This module contains utilities shared by all ingestor providers for
parsing and auto-discovering connection targets. It is intentionally
free of any provider-specific imports so that Meshtastic, MeshCore,
and future providers can all rely on the same logic.
"""
from __future__ import annotations
import glob
import re
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
DEFAULT_TCP_PORT: int = 4403
"""Default TCP port used when no port is explicitly supplied."""
DEFAULT_SERIAL_PATTERNS: tuple[str, ...] = (
"/dev/ttyACM*",
"/dev/ttyUSB*",
"/dev/tty.usbmodem*",
"/dev/tty.usbserial*",
"/dev/cu.usbmodem*",
"/dev/cu.usbserial*",
)
"""Glob patterns for common serial device paths on Linux and macOS."""
# Support both MAC addresses (Linux/Windows) and UUIDs (macOS).
BLE_ADDRESS_RE = re.compile(
r"^(?:"
r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}|" # MAC address format
r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}" # UUID format
r")$"
)
"""Compiled regex matching a BLE MAC address or UUID."""
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def parse_ble_target(value: str) -> str | None:
"""Return a normalised BLE address (MAC or UUID) when ``value`` matches the format.
Parameters:
value: User-provided target string.
Returns:
The normalised MAC address (upper-cased) or UUID, or ``None`` when
the value does not match a recognised BLE address format.
"""
if not value:
return None
value = value.strip()
if not value:
return None
if BLE_ADDRESS_RE.fullmatch(value):
return value.upper()
return None
def parse_tcp_target(value: str) -> tuple[str, int] | None:
"""Parse a TCP ``host:port`` target, accepting both IPs and hostnames.
Unlike the Meshtastic-specific helper in :mod:`interfaces`, hostnames are
accepted here because MeshCore companions may be reached over a local
network by name (e.g. ``meshcore-node.local:4403``).
BLE MAC addresses (five colons) and bare serial port paths (no colon) are
correctly rejected; they cannot produce a valid ``host:port`` pair.
Parameters:
value: User-provided target string.
Returns:
``(host, port)`` on success, or ``None`` when *value* does not look
like a TCP target.
"""
if not value:
return None
value = value.strip()
if not value:
return None
# Strip URL scheme prefix (e.g. ``tcp://host:4403`` or ``http://host:4403``).
if "://" in value:
value = value.split("://", 1)[1]
# Handle bracketed IPv6: ``[::1]:4403``.
if value.startswith("["):
bracket_end = value.find("]")
if bracket_end == -1:
return None
host = value[1:bracket_end]
rest = value[bracket_end + 1 :]
if rest.startswith(":"):
try:
port = int(rest[1:])
except ValueError:
return None
if not (1 <= port <= 65535):
return None
else:
port = DEFAULT_TCP_PORT
if not host:
return None
return host, port
# For non-bracketed addresses require exactly one colon so that BLE MACs
# (five colons) and bare serial paths (no colon) are rejected.
colon_count = value.count(":")
if colon_count != 1:
return None
host, _, port_str = value.partition(":")
if not host:
return None
try:
port = int(port_str)
except ValueError:
return None
if not (1 <= port <= 65535):
return None
return host, port
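Behaviour sketch for the two parsers above:
```python
assert parse_ble_target("aa:bb:cc:dd:ee:ff") == "AA:BB:CC:DD:EE:FF"
assert parse_tcp_target("meshcore-node.local:4403") == ("meshcore-node.local", 4403)
assert parse_tcp_target("tcp://10.0.0.5:4403") == ("10.0.0.5", 4403)
assert parse_tcp_target("[fe80::1]") == ("fe80::1", 4403)  # default port
assert parse_tcp_target("aa:bb:cc:dd:ee:ff") is None       # BLE MAC: five colons
assert parse_tcp_target("/dev/ttyACM0") is None            # serial path: no colon
```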
def default_serial_targets() -> list[str]:
"""Return candidate serial device paths for auto-discovery.
Globs for common USB serial device paths on Linux and macOS. Always
includes ``/dev/ttyACM0`` as a final fallback so callers have at least
one candidate even on systems without any attached hardware.
Returns:
Ordered list of candidate device paths, deduplicated.
"""
candidates: list[str] = []
seen: set[str] = set()
for pattern in DEFAULT_SERIAL_PATTERNS:
for path in sorted(glob.glob(pattern)):
if path not in seen:
candidates.append(path)
seen.add(path)
if "/dev/ttyACM0" not in seen:
candidates.append("/dev/ttyACM0")
return candidates
+467 -302
View File
@@ -16,6 +16,7 @@
from __future__ import annotations
import dataclasses
import inspect
import signal
import threading
@@ -23,7 +24,9 @@ import time
from pubsub import pub
from . import config, handlers, ingestors, interfaces
from . import config, handlers, ingestors, interfaces, queue
from .mesh_protocol import MeshProtocol
from .utils import _retry_dict_snapshot
_RECEIVE_TOPICS = (
"meshtastic.receive",
@@ -80,10 +83,15 @@ def _subscribe_receive_topics() -> list[str]:
def _node_items_snapshot(
nodes_obj, retries: int = 3
nodes_obj: object, retries: int = 3
) -> list[tuple[str, object]] | None:
"""Snapshot ``nodes_obj`` to avoid iteration errors during updates.
Uses :func:`~data.mesh_ingestor.utils._retry_dict_snapshot` to handle
both dict-like objects (``items()`` callable) and sequence-like objects
(``__iter__`` + ``__getitem__``) that Meshtastic may return depending on
firmware version.
Parameters:
nodes_obj: Meshtastic nodes mapping or iterable.
retries: Number of attempts when encountering "dictionary changed"
@@ -99,25 +107,15 @@ def _node_items_snapshot(
items_callable = getattr(nodes_obj, "items", None)
if callable(items_callable):
for _ in range(max(1, retries)):
try:
return list(items_callable())
except RuntimeError as err:
if "dictionary changed size during iteration" not in str(err):
raise
time.sleep(0)
return None
return _retry_dict_snapshot(lambda: list(items_callable()), retries)
if hasattr(nodes_obj, "__iter__") and hasattr(nodes_obj, "__getitem__"):
for _ in range(max(1, retries)):
try:
keys = list(nodes_obj)
return [(key, nodes_obj[key]) for key in keys]
except RuntimeError as err:
if "dictionary changed size during iteration" not in str(err):
raise
time.sleep(0)
return None
def _snapshot_via_keys() -> list[tuple[str, object]]:
keys = list(nodes_obj)
return [(key, nodes_obj[key]) for key in keys]
return _retry_dict_snapshot(_snapshot_via_keys, retries)
return []
@@ -197,11 +195,6 @@ def _process_ingestor_heartbeat(iface, *, ingestor_announcement_sent: bool) -> b
if heartbeat_sent and not ingestor_announcement_sent:
return True
return ingestor_announcement_sent
iface_cls = getattr(iface_obj, "__class__", None)
if iface_cls is None:
return False
module_name = getattr(iface_cls, "__module__", "") or ""
return "ble_interface" in module_name
def _connected_state(candidate) -> bool | None:
@@ -243,10 +236,403 @@ def _connected_state(candidate) -> bool | None:
return None
def main(existing_interface=None) -> None:
# ---------------------------------------------------------------------------
# Loop state container
# ---------------------------------------------------------------------------
@dataclasses.dataclass
class _DaemonState:
"""All mutable state for the :func:`main` daemon loop."""
provider: MeshProtocol
stop: threading.Event
configured_port: str | None
inactivity_reconnect_secs: float
energy_saving_enabled: bool
energy_online_secs: float
energy_sleep_secs: float
retry_delay: float
last_seen_packet_monotonic: float | None
active_candidate: str | None
iface: object = None
resolved_target: str | None = None
initial_snapshot_sent: bool = False
energy_session_deadline: float | None = None
iface_connected_at: float | None = None
last_inactivity_reconnect: float | None = None
ingestor_announcement_sent: bool = False
announced_target: bool = False
last_self_node_report: float | None = None
# ---------------------------------------------------------------------------
# Per-iteration helpers (each returns True when the caller should `continue`)
# ---------------------------------------------------------------------------
def _advance_retry_delay(current: float) -> float:
"""Return the next exponential-backoff retry delay."""
if config._RECONNECT_MAX_DELAY_SECS <= 0:
return current
# `current == 0` on the very first call (bootstrap); seed from config.
next_delay = current * 2 if current else config._RECONNECT_INITIAL_DELAY_SECS
return min(next_delay, config._RECONNECT_MAX_DELAY_SECS)
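Backoff sketch (assuming, purely for illustration, an initial delay of 2s and a max of 60s; the real values come from `config`):
```python
delay = 0.0  # bootstrap: seeded from config on the first call
for _ in range(7):
    delay = _advance_retry_delay(delay)
    # with initial=2 and max=60 the sequence is 2, 4, 8, 16, 32, 60, 60
```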
def _energy_sleep(state: _DaemonState, reason: str) -> None:
"""Sleep for the configured energy-saving interval."""
if not state.energy_saving_enabled or state.energy_sleep_secs <= 0:
return
if config.DEBUG:
config._debug_log(
f"energy saving: {reason}; sleeping for {state.energy_sleep_secs:g}s"
)
state.stop.wait(state.energy_sleep_secs)
def _try_connect(state: _DaemonState) -> bool:
"""Attempt to establish the mesh interface.
Returns:
``True`` when connected and the loop should proceed; ``False`` when
the connection failed and the caller should ``continue``.
"""
try:
state.iface, state.resolved_target, state.active_candidate = (
state.provider.connect(active_candidate=state.active_candidate)
)
handlers.register_host_node_id(state.provider.extract_host_node_id(state.iface))
ingestors.set_ingestor_node_id(handlers.host_node_id())
state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
state.initial_snapshot_sent = False
state.last_self_node_report = None
if not state.announced_target and state.resolved_target:
config._debug_log(
"Using mesh interface",
context="daemon.interface",
severity="info",
target=state.resolved_target,
)
state.announced_target = True
# Set an absolute monotonic deadline for this energy-saving session.
# When the deadline passes, _check_energy_saving() will close the
# interface and sleep until the next wake interval.
if state.energy_saving_enabled and state.energy_online_secs > 0:
state.energy_session_deadline = time.monotonic() + state.energy_online_secs
else:
state.energy_session_deadline = None
state.iface_connected_at = time.monotonic()
# Seed the inactivity tracking from the connection time so a
# reconnect is given a full inactivity window even when the
# handler still reports the previous packet timestamp.
state.last_seen_packet_monotonic = state.iface_connected_at
state.last_inactivity_reconnect = None
return True
except interfaces.NoAvailableMeshInterface as exc:
config._debug_log(
"No mesh interface available",
context="daemon.interface",
severity="error",
error_message=str(exc),
)
_close_interface(state.iface)
raise SystemExit(1) from exc
except Exception as exc:
config._debug_log(
"Failed to create mesh interface",
context="daemon.interface",
severity="warn",
candidate=state.active_candidate or "auto",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if state.configured_port is None:
state.active_candidate = None
state.announced_target = False
state.stop.wait(state.retry_delay)
state.retry_delay = _advance_retry_delay(state.retry_delay)
return False
def _check_energy_saving(state: _DaemonState) -> bool:
"""Disconnect and sleep when energy-saving conditions are met.
Returns:
``True`` when the interface was closed and the caller should
``continue``; ``False`` otherwise.
"""
if not state.energy_saving_enabled or state.iface is None:
return False
if (
state.energy_session_deadline is not None
and time.monotonic() >= state.energy_session_deadline
):
reason = "disconnected after session"
log_msg = "Energy saving disconnect"
elif (
_is_ble_interface(state.iface)
and getattr(state.iface, "client", object()) is None
):
reason = "BLE client disconnected"
log_msg = "Energy saving BLE disconnect"
else:
return False
config._debug_log(log_msg, context="daemon.energy", severity="info")
_close_interface(state.iface)
state.iface = None
state.announced_target = False
state.initial_snapshot_sent = False
state.last_self_node_report = None
state.energy_session_deadline = None
_energy_sleep(state, reason)
return True
def _try_send_snapshot(state: _DaemonState) -> bool:
"""Send the initial node snapshot via the provider.
Returns:
``True`` when the snapshot succeeded (or no nodes exist yet); ``False``
when a hard error occurred and the caller should ``continue``.
"""
try:
node_items = state.provider.node_snapshot_items(state.iface)
processed_any = False
for node_id, node in node_items:
processed_any = True
try:
handlers.upsert_node(node_id, node)
except Exception as exc:
config._debug_log(
"Failed to update node snapshot",
context="daemon.snapshot",
severity="warn",
node_id=node_id,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if config.DEBUG:
config._debug_log(
"Snapshot node payload",
context="daemon.snapshot",
node=node,
)
if processed_any:
state.initial_snapshot_sent = True
return True
except Exception as exc:
config._debug_log(
"Snapshot refresh failed",
context="daemon.snapshot",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
_close_interface(state.iface)
state.iface = None
state.stop.wait(state.retry_delay)
state.retry_delay = _advance_retry_delay(state.retry_delay)
return False
def _check_inactivity_reconnect(state: _DaemonState) -> bool:
"""Reconnect when the interface has been silent for too long.
Returns:
``True`` when a reconnect was triggered and the caller should
``continue``; ``False`` otherwise.
"""
if state.iface is None or state.inactivity_reconnect_secs <= 0:
return False
now = time.monotonic()
iface_activity = handlers.last_packet_monotonic()
if (
iface_activity is not None
and state.iface_connected_at is not None
and iface_activity < state.iface_connected_at
):
iface_activity = state.iface_connected_at
if iface_activity is not None and (
state.last_seen_packet_monotonic is None
or iface_activity > state.last_seen_packet_monotonic
):
state.last_seen_packet_monotonic = iface_activity
state.last_inactivity_reconnect = None
latest_activity = iface_activity
if latest_activity is None and state.iface_connected_at is not None:
latest_activity = state.iface_connected_at
if latest_activity is None:
latest_activity = now
inactivity_elapsed = now - latest_activity
believed_disconnected = (
_connected_state(getattr(state.iface, "isConnected", None)) is False
)
if (
not believed_disconnected
and inactivity_elapsed < state.inactivity_reconnect_secs
):
return False
if state.last_inactivity_reconnect is not None:
# For explicit disconnects use the shorter max-reconnect-delay window
# so the daemon reconnects promptly without thrashing. For inactivity-
# only triggers retain the full inactivity window as the throttle.
throttle_secs = (
config._RECONNECT_MAX_DELAY_SECS
if believed_disconnected
else state.inactivity_reconnect_secs
)
if now - state.last_inactivity_reconnect < throttle_secs:
return False
reason = (
"disconnected"
if believed_disconnected
else f"no data for {inactivity_elapsed:.0f}s"
)
# Uses the module-level global STATE — acceptable because there is only
# one queue in production, and in tests this is purely informational.
queue_depth = len(queue.STATE.queue)
config._debug_log(
"Mesh interface inactivity detected",
context="daemon.interface",
severity="warn",
reason=reason,
queue_depth=queue_depth,
)
state.last_inactivity_reconnect = now
_close_interface(state.iface)
state.iface = None
state.announced_target = False
state.initial_snapshot_sent = False
state.last_self_node_report = None
state.energy_session_deadline = None
state.iface_connected_at = None
return True
# ---------------------------------------------------------------------------
# Periodic self-node report helper
# ---------------------------------------------------------------------------
def _try_send_self_node(state: _DaemonState) -> None:
"""Re-upsert the host self-node when the provider supports it.
Called once immediately after the initial snapshot and then at most once
per :data:`~data.mesh_ingestor.config._SELF_NODE_REPORT_INTERVAL_SECS`.
This ensures the self-node's protocol and radio metadata are refreshed
even when the ingestor heartbeat races ahead of the first SELF_INFO event
(meshcore) or when the protocol never sends periodic NODEINFO for itself.
Parameters:
state: Current daemon loop state.
Returns:
``None``. Errors are logged and suppressed so a single failure does
not break the main loop.
"""
self_node_fn = getattr(state.provider, "self_node_item", None)
if not callable(self_node_fn):
return
try:
item = self_node_fn(state.iface)
if item is None:
return
node_id, node = item
handlers.upsert_node(node_id, node)
state.last_self_node_report = time.monotonic()
config._debug_log(
"Sent periodic self-node report",
context="daemon.self_node",
severity="info",
node_id=node_id,
)
except Exception as exc:
config._debug_log(
"Self-node re-report failed",
context="daemon.self_node",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
# ---------------------------------------------------------------------------
# Loop iteration helper
# ---------------------------------------------------------------------------
def _loop_iteration(state: _DaemonState) -> bool:
"""Execute one pass of the daemon main loop.
Encapsulates the per-iteration ``continue`` decisions so that
:func:`main` stays within the allowed cognitive-complexity budget.
Returns:
``True`` when the loop should start the next iteration immediately
(equivalent to a ``continue``); ``False`` when the full pass
completed and the caller should sleep before iterating again.
"""
if state.iface is None and not _try_connect(state):
return True
if _check_energy_saving(state):
return True
if not state.initial_snapshot_sent and not _try_send_snapshot(state):
return True
if _check_inactivity_reconnect(state):
return True
state.ingestor_announcement_sent = _process_ingestor_heartbeat(
state.iface, ingestor_announcement_sent=state.ingestor_announcement_sent
)
# Periodically re-upsert the host self-node so that its protocol and radio
# metadata are corrected after the ingestor heartbeat is registered, and
# kept fresh for protocols (e.g. meshcore) that only emit SELF_INFO once.
_now = time.monotonic()
if state.initial_snapshot_sent and (
state.last_self_node_report is None
or _now - state.last_self_node_report >= config._SELF_NODE_REPORT_INTERVAL_SECS
):
_try_send_self_node(state)
state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
return False
# ---------------------------------------------------------------------------
# Entry point
# ---------------------------------------------------------------------------
def main(*, provider: MeshProtocol | None = None) -> None:
"""Run the mesh ingestion daemon until interrupted."""
subscribed = _subscribe_receive_topics()
if provider is None:
if config.PROTOCOL == "meshcore":
from .protocols.meshcore import MeshcoreProvider
provider = MeshcoreProvider()
else:
from .protocols.meshtastic import MeshtasticProvider
provider = MeshtasticProvider()
subscribed = provider.subscribe()
if subscribed:
config._debug_log(
"Subscribed to receive topics",
@@ -255,313 +641,92 @@ def main(existing_interface=None) -> None:
topics=subscribed,
)
iface = existing_interface
resolved_target = None
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
if not config.INSTANCES and not config.INSTANCE:
config._debug_log(
"No INSTANCE_DOMAIN configured — cannot forward data; exiting",
context="daemon.main",
severity="error",
always=True,
)
return
stop = threading.Event()
initial_snapshot_sent = False
energy_session_deadline = None
iface_connected_at: float | None = None
last_seen_packet_monotonic = handlers.last_packet_monotonic()
last_inactivity_reconnect: float | None = None
inactivity_reconnect_secs = max(
0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
queue._start_queue_drainer(queue.STATE)
state = _DaemonState(
provider=provider,
stop=threading.Event(),
configured_port=config.CONNECTION,
inactivity_reconnect_secs=max(
0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
),
energy_saving_enabled=config.ENERGY_SAVING,
energy_online_secs=max(0.0, config._ENERGY_ONLINE_DURATION_SECS),
energy_sleep_secs=max(0.0, config._ENERGY_SLEEP_SECS),
retry_delay=max(0.0, config._RECONNECT_INITIAL_DELAY_SECS),
last_seen_packet_monotonic=handlers.last_packet_monotonic(),
active_candidate=config.CONNECTION,
)
ingestor_announcement_sent = False
energy_saving_enabled = config.ENERGY_SAVING
energy_online_secs = max(0.0, config._ENERGY_ONLINE_DURATION_SECS)
energy_sleep_secs = max(0.0, config._ENERGY_SLEEP_SECS)
def _energy_sleep(reason: str) -> None:
if not energy_saving_enabled or energy_sleep_secs <= 0:
return
if config.DEBUG:
config._debug_log(
f"energy saving: {reason}; sleeping for {energy_sleep_secs:g}s"
)
stop.wait(energy_sleep_secs)
def handle_sigterm(*_args) -> None:
stop.set()
"""Set the stop flag so the daemon loop exits cleanly on SIGTERM."""
state.stop.set()
def handle_sigint(signum, frame) -> None:
if stop.is_set():
"""Handle SIGINT (Ctrl-C) with graceful-first, hard-exit-second behaviour.
The first SIGINT sets the stop flag and lets the loop finish its
current iteration. A second SIGINT delegates to the default handler,
which raises :class:`KeyboardInterrupt` and terminates immediately.
"""
if state.stop.is_set():
signal.default_int_handler(signum, frame)
return
stop.set()
state.stop.set()
if threading.current_thread() == threading.main_thread():
signal.signal(signal.SIGINT, handle_sigint)
signal.signal(signal.SIGTERM, handle_sigterm)
target = config.INSTANCE or "(no INSTANCE_DOMAIN configured)"
configured_port = config.CONNECTION
active_candidate = configured_port
announced_target = False
instance_label = ", ".join(inst for inst, _ in config.INSTANCES)
config._debug_log(
"Mesh daemon starting",
context="daemon.main",
severity="info",
target=target,
port=configured_port or "auto",
target=instance_label,
port=config.CONNECTION or "auto",
channel=config.CHANNEL_INDEX,
)
try:
while not stop.is_set():
if iface is None:
try:
if active_candidate:
iface, resolved_target = interfaces._create_serial_interface(
active_candidate
)
else:
iface, resolved_target = interfaces._create_default_interface()
active_candidate = resolved_target
interfaces._ensure_radio_metadata(iface)
interfaces._ensure_channel_metadata(iface)
handlers.register_host_node_id(
interfaces._extract_host_node_id(iface)
)
ingestors.set_ingestor_node_id(handlers.host_node_id())
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
initial_snapshot_sent = False
if not announced_target and resolved_target:
config._debug_log(
"Using mesh interface",
context="daemon.interface",
severity="info",
target=resolved_target,
)
announced_target = True
if energy_saving_enabled and energy_online_secs > 0:
energy_session_deadline = time.monotonic() + energy_online_secs
else:
energy_session_deadline = None
iface_connected_at = time.monotonic()
# Seed the inactivity tracking from the connection time so a
# reconnect is given a full inactivity window even when the
# handler still reports the previous packet timestamp.
last_seen_packet_monotonic = iface_connected_at
last_inactivity_reconnect = None
except interfaces.NoAvailableMeshInterface as exc:
config._debug_log(
"No mesh interface available",
context="daemon.interface",
severity="error",
error_message=str(exc),
)
_close_interface(iface)
raise SystemExit(1) from exc
except Exception as exc:
candidate_desc = active_candidate or "auto"
config._debug_log(
"Failed to create mesh interface",
context="daemon.interface",
severity="warn",
candidate=candidate_desc,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if configured_port is None:
active_candidate = None
announced_target = False
stop.wait(retry_delay)
if config._RECONNECT_MAX_DELAY_SECS > 0:
retry_delay = min(
(
retry_delay * 2
if retry_delay
else config._RECONNECT_INITIAL_DELAY_SECS
),
config._RECONNECT_MAX_DELAY_SECS,
)
continue
if energy_saving_enabled and iface is not None:
if (
energy_session_deadline is not None
and time.monotonic() >= energy_session_deadline
):
config._debug_log(
"Energy saving disconnect",
context="daemon.energy",
severity="info",
)
_close_interface(iface)
iface = None
announced_target = False
initial_snapshot_sent = False
energy_session_deadline = None
_energy_sleep("disconnected after session")
continue
if (
_is_ble_interface(iface)
and getattr(iface, "client", object()) is None
):
config._debug_log(
"Energy saving BLE disconnect",
context="daemon.energy",
severity="info",
)
_close_interface(iface)
iface = None
announced_target = False
initial_snapshot_sent = False
energy_session_deadline = None
_energy_sleep("BLE client disconnected")
continue
if not initial_snapshot_sent:
try:
nodes = getattr(iface, "nodes", {}) or {}
node_items = _node_items_snapshot(nodes)
if node_items is None:
config._debug_log(
"Skipping node snapshot due to concurrent modification",
context="daemon.snapshot",
)
else:
processed_snapshot_item = False
for node_id, node in node_items:
processed_snapshot_item = True
try:
handlers.upsert_node(node_id, node)
except Exception as exc:
config._debug_log(
"Failed to update node snapshot",
context="daemon.snapshot",
severity="warn",
node_id=node_id,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if config.DEBUG:
config._debug_log(
"Snapshot node payload",
context="daemon.snapshot",
node=node,
)
if processed_snapshot_item:
initial_snapshot_sent = True
except Exception as exc:
config._debug_log(
"Snapshot refresh failed",
context="daemon.snapshot",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
_close_interface(iface)
iface = None
stop.wait(retry_delay)
if config._RECONNECT_MAX_DELAY_SECS > 0:
retry_delay = min(
(
retry_delay * 2
if retry_delay
else config._RECONNECT_INITIAL_DELAY_SECS
),
config._RECONNECT_MAX_DELAY_SECS,
)
continue
if iface is not None and inactivity_reconnect_secs > 0:
now_monotonic = time.monotonic()
iface_activity = handlers.last_packet_monotonic()
if (
iface_activity is not None
and iface_connected_at is not None
and iface_activity < iface_connected_at
):
iface_activity = iface_connected_at
if iface_activity is not None and (
last_seen_packet_monotonic is None
or iface_activity > last_seen_packet_monotonic
):
last_seen_packet_monotonic = iface_activity
last_inactivity_reconnect = None
latest_activity = iface_activity
if latest_activity is None and iface_connected_at is not None:
latest_activity = iface_connected_at
if latest_activity is None:
latest_activity = now_monotonic
inactivity_elapsed = now_monotonic - latest_activity
connected_attr = getattr(iface, "isConnected", None)
believed_disconnected = False
connected_state = _connected_state(connected_attr)
if connected_state is None:
if callable(connected_attr):
try:
believed_disconnected = not bool(connected_attr())
except Exception:
believed_disconnected = False
elif connected_attr is not None:
try:
believed_disconnected = not bool(connected_attr)
except Exception: # pragma: no cover - defensive guard
believed_disconnected = False
else:
believed_disconnected = not connected_state
should_reconnect = believed_disconnected or (
inactivity_elapsed >= inactivity_reconnect_secs
)
if should_reconnect:
if (
last_inactivity_reconnect is None
or now_monotonic - last_inactivity_reconnect
>= inactivity_reconnect_secs
):
reason = (
"disconnected"
if believed_disconnected
else f"no data for {inactivity_elapsed:.0f}s"
)
config._debug_log(
"Mesh interface inactivity detected",
context="daemon.interface",
severity="warn",
reason=reason,
)
last_inactivity_reconnect = now_monotonic
_close_interface(iface)
iface = None
announced_target = False
initial_snapshot_sent = False
energy_session_deadline = None
iface_connected_at = None
continue
ingestor_announcement_sent = _process_ingestor_heartbeat(
iface, ingestor_announcement_sent=ingestor_announcement_sent
)
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
stop.wait(config.SNAPSHOT_SECS)
while not state.stop.is_set():
if not _loop_iteration(state):
state.stop.wait(config.SNAPSHOT_SECS)
except KeyboardInterrupt: # pragma: no cover - interactive only
config._debug_log(
"Received KeyboardInterrupt; shutting down",
context="daemon.main",
severity="info",
)
stop.set()
state.stop.set()
finally:
_close_interface(iface)
_close_interface(state.iface)
__all__ = [
"_RECEIVE_TOPICS",
"_event_wait_allows_default_timeout",
"_node_items_snapshot",
"_subscribe_receive_topics",
"_is_ble_interface",
"_process_ingestor_heartbeat",
"_advance_retry_delay",
"_loop_iteration",
"_check_energy_saving",
"_check_inactivity_reconnect",
"_connected_state",
"_energy_sleep",
"_event_wait_allows_default_timeout",
"_is_ble_interface",
"_node_items_snapshot",
"_process_ingestor_heartbeat",
"_subscribe_receive_topics",
"_try_connect",
"_try_send_self_node",
"_try_send_snapshot",
"main",
]
+11 -1
View File
@@ -29,7 +29,6 @@ if SCRIPT_DIR in sys.path:
from google.protobuf.json_format import MessageToDict
from meshtastic.protobuf import mesh_pb2, telemetry_pb2
PORTNUM_MAP: Dict[int, Tuple[str, Any]] = {
3: ("POSITION_APP", mesh_pb2.Position),
4: ("NODEINFO_APP", mesh_pb2.NodeInfo),
@@ -60,6 +59,17 @@ def _decode_payload(portnum: int, payload_b64: str) -> dict[str, Any]:
def main() -> int:
"""Read a JSON request from stdin and write a decoded protobuf response to stdout.
Reads a single JSON object containing ``portnum`` (int) and
``payload_b64`` (base-64 encoded bytes) from standard input, decodes the
protobuf payload via :func:`_decode_payload`, and writes the result as
JSON to standard output.
Returns:
``0`` on success, ``1`` when the input is malformed or required fields
are absent.
"""
raw = sys.stdin.read()
try:
request = json.loads(raw)
+240
View File
@@ -0,0 +1,240 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Protocol-agnostic event payload types for ingestion.
The ingestor ultimately POSTs JSON to the web app's ingest routes. These types
capture the *shape* of those payloads so multiple providers can emit the same
events, regardless of how they source or decode packets.
These are intentionally defined as ``TypedDict`` so existing code can continue
to build plain dictionaries without a runtime dependency on dataclasses.
"""
from __future__ import annotations
from typing import NotRequired, TypedDict
class _MessageEventRequired(TypedDict):
"""Required fields shared by all :class:`MessageEvent` payloads."""
id: int
rx_time: int
rx_iso: str
class MessageEvent(_MessageEventRequired, total=False):
"""Payload for the ``/api/messages`` ingest route.
Maps to the ``MessageEvent`` contract described in ``CONTRACTS.md``.
Required fields are inherited from :class:`_MessageEventRequired`;
all other fields are optional.
"""
from_id: object
to_id: object
channel: int
portnum: str | None
text: str | None
encrypted: str | None
snr: float | None
rssi: int | None
hop_limit: int | None
reply_id: int | None
emoji: str | None
channel_name: str
ingestor: str | None
lora_freq: int
modem_preset: str
class _PositionEventRequired(TypedDict):
"""Required fields shared by all :class:`PositionEvent` payloads."""
id: int
rx_time: int
rx_iso: str
class PositionEvent(_PositionEventRequired, total=False):
"""Payload for the ``/api/positions`` ingest route.
Maps to the ``PositionEvent`` contract described in ``CONTRACTS.md``.
Coordinates may be supplied as floating-point degrees or derived from
Meshtastic's integer-scaled ``latitudeI``/``longitudeI`` fields.
"""
node_id: str
node_num: int | None
num: int | None
from_id: str | None
to_id: object
latitude: float | None
longitude: float | None
altitude: float | None
position_time: int | None
location_source: str | None
precision_bits: int | None
sats_in_view: int | None
pdop: float | None
ground_speed: float | None
ground_track: float | None
snr: float | None
rssi: int | None
hop_limit: int | None
bitfield: int | None
payload_b64: str | None
raw: dict
ingestor: str | None
lora_freq: int
modem_preset: str
class _TelemetryEventRequired(TypedDict):
"""Required fields shared by all :class:`TelemetryEvent` payloads."""
id: int
rx_time: int
rx_iso: str
class TelemetryEvent(_TelemetryEventRequired, total=False):
"""Payload for the ``/api/telemetry`` ingest route.
Maps to the ``TelemetryEvent`` contract described in ``CONTRACTS.md``.
Metric keys beyond the required ones are open-ended; the web layer accepts
any additional device, environment, power, or air-quality fields.
"""
node_id: str | None
node_num: int | None
from_id: object
to_id: object
telemetry_time: int | None
channel: int
portnum: str | None
hop_limit: int | None
snr: float | None
rssi: int | None
bitfield: int | None
payload_b64: str
ingestor: str | None
lora_freq: int
modem_preset: str
# Metric keys are intentionally open-ended; the Ruby side is permissive and
# evolves over time.
class _NeighborEntryRequired(TypedDict):
"""Required fields for a single entry within a :class:`NeighborsSnapshot`."""
rx_time: int
rx_iso: str
class NeighborEntry(_NeighborEntryRequired, total=False):
"""A single observed neighbour node within a :class:`NeighborsSnapshot`.
Each entry describes one node heard by the reporting device, including
optional signal-quality metrics.
"""
neighbor_id: str
neighbor_num: int | None
snr: float | None
class _NeighborsSnapshotRequired(TypedDict):
"""Required fields shared by all :class:`NeighborsSnapshot` payloads."""
node_id: str
rx_time: int
rx_iso: str
class NeighborsSnapshot(_NeighborsSnapshotRequired, total=False):
"""Payload for the ``/api/neighbors`` ingest route.
Maps to the ``NeighborsSnapshot`` contract described in ``CONTRACTS.md``.
Encapsulates the full list of neighbours heard by a single reporting node.
"""
node_num: int | None
neighbors: list[NeighborEntry]
node_broadcast_interval_secs: int | None
last_sent_by_id: str | None
ingestor: str | None
lora_freq: int
modem_preset: str
class _TraceEventRequired(TypedDict):
"""Required fields shared by all :class:`TraceEvent` payloads."""
hops: list[int]
rx_time: int
rx_iso: str
class TraceEvent(_TraceEventRequired, total=False):
"""Payload for the ``/api/traceroutes`` ingest route.
Maps to the ``TraceEvent`` contract described in ``CONTRACTS.md``.
The ``hops`` list contains node numbers in transmission order from
source to destination.
"""
id: int | None
request_id: int | None
src: int | None
dest: int | None
rssi: int | None
snr: float | None
elapsed_ms: int | None
ingestor: str | None
lora_freq: int
modem_preset: str
class IngestorHeartbeat(TypedDict):
"""Payload for the ``/api/ingestors`` heartbeat route.
Maps to the ``IngestorHeartbeat`` contract described in ``CONTRACTS.md``.
Sent periodically to signal that the ingestor process is alive and
associated with a particular radio node.
"""
node_id: str
start_time: int
last_seen_time: int
version: str
lora_freq: NotRequired[int]
modem_preset: NotRequired[str]
NodeUpsert = dict[str, dict]
__all__ = [
"IngestorHeartbeat",
"MessageEvent",
"NeighborEntry",
"NeighborsSnapshot",
"NodeUpsert",
"PositionEvent",
"TelemetryEvent",
"TraceEvent",
]
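Because these are ``TypedDict`` definitions, producers keep emitting plain dictionaries; the types only constrain static checking. A minimal sketch of a conforming ``MessageEvent``, assuming the module above is importable (the exact import path is not shown here) and using made-up field values:

import time
from datetime import datetime, timezone

now = int(time.time())
event = {  # satisfies MessageEvent: the three required keys plus optional extras
    "id": 123456789,  # hypothetical packet id
    "rx_time": now,
    "rx_iso": datetime.fromtimestamp(now, timezone.utc).isoformat(),
    "from_id": "!a1b2c3d4",  # hypothetical sender
    "channel": 0,
    "text": "hello mesh",
}

A type checker would flag a missing ``rx_iso`` or a misspelled key, while runtime code continues to see an ordinary dict.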
File diff suppressed because it is too large
+102
@@ -0,0 +1,102 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Packet handlers that serialise mesh data and push it to the HTTP queue.
This package is organised into focused submodules:
- :mod:`._state` shared mutable state (host node ID, packet timestamps)
- :mod:`.radio` radio metadata enrichment helpers
- :mod:`.ignored` debug-mode logging of dropped packets
- :mod:`.position` GPS position and traceroute handlers
- :mod:`.telemetry` device/environment telemetry and router heartbeat handlers
- :mod:`.nodeinfo` node information update handler
- :mod:`.neighborinfo` neighbour topology snapshot handler
- :mod:`.generic` packet dispatcher, node upsert, and the main receive callback
All public names from the original flat ``handlers`` module are re-exported
here so existing callers (e.g. ``daemon.py``, ``protocols/``) require no
changes.
"""
from __future__ import annotations
from .. import queue as _queue
from ._state import (
_mark_packet_seen,
host_node_id,
last_packet_monotonic,
register_host_node_id,
)
from .generic import (
_is_encrypted_flag,
_portnum_candidates,
on_receive,
store_packet_dict,
upsert_node,
)
from .ignored import (
_IGNORED_PACKET_LOCK,
_IGNORED_PACKET_LOG_PATH,
_record_ignored_packet,
)
from .neighborinfo import store_neighborinfo_packet
from .nodeinfo import store_nodeinfo_packet
from .position import (
_normalize_trace_hops,
base64_payload,
store_position_packet,
store_traceroute_packet,
)
from .radio import (
_apply_radio_metadata,
_apply_radio_metadata_to_nodes,
_radio_metadata_fields,
)
from .telemetry import (
_VALID_TELEMETRY_TYPES,
store_router_heartbeat_packet,
store_telemetry_packet,
)
# Re-export the queue alias for any callers that reference handlers._queue_post_json
_queue_post_json = _queue._queue_post_json
__all__ = [
"_IGNORED_PACKET_LOCK",
"_IGNORED_PACKET_LOG_PATH",
"_VALID_TELEMETRY_TYPES",
"_apply_radio_metadata",
"_apply_radio_metadata_to_nodes",
"_is_encrypted_flag",
"_mark_packet_seen",
"_normalize_trace_hops",
"_portnum_candidates",
"_queue_post_json",
"_radio_metadata_fields",
"_record_ignored_packet",
"base64_payload",
"host_node_id",
"last_packet_monotonic",
"on_receive",
"register_host_node_id",
"store_neighborinfo_packet",
"store_nodeinfo_packet",
"store_packet_dict",
"store_position_packet",
"store_router_heartbeat_packet",
"store_telemetry_packet",
"store_traceroute_packet",
"upsert_node",
]
+202
@@ -0,0 +1,202 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Shared mutable state and state accessors for the handlers subpackage.
All mutable globals that span multiple handler modules live here so that each
handler submodule can import this module and get a consistent view of state
without risking stale references from bare ``from ... import`` bindings.
"""
from __future__ import annotations
import math
import time
from .. import config
from ..serialization import _canonical_node_id
# ---------------------------------------------------------------------------
# Host device identity
# ---------------------------------------------------------------------------
_host_node_id: str | None = None
"""Canonical ``!xxxxxxxx`` identifier for the connected host device."""
_host_telemetry_last_rx: int | None = None
"""Receive timestamp of the last accepted host telemetry packet."""
_HOST_TELEMETRY_INTERVAL_SECS: int = 60 * 60
"""Minimum interval (seconds) between accepted host telemetry packets.
Meshtastic devices report their own telemetry at regular intervals. Accepting
every packet would overwrite the host's profile too aggressively; this window
throttles updates to at most once per hour.
"""
_host_nodeinfo_last_seen: float | None = None
"""Monotonic timestamp of the last accepted host NODEINFO upsert."""
_HOST_NODEINFO_INTERVAL_SECS: int = 60 * 60
"""Minimum interval (seconds) between accepted host NODEINFO upserts.
The meshtastic library re-broadcasts the local node's NODEINFO to the mesh
periodically. Accepting every broadcast would overwrite the host node record
too aggressively; this window throttles self-NODEINFO upserts to at most once
per hour.
"""
# ---------------------------------------------------------------------------
# Packet receipt tracking
# ---------------------------------------------------------------------------
_last_packet_monotonic: float | None = None
"""Monotonic timestamp of the most recently processed packet."""
# ---------------------------------------------------------------------------
# Public accessors
# ---------------------------------------------------------------------------
def register_host_node_id(node_id: str | None) -> None:
"""Record the canonical identifier for the connected host device.
Resetting the host node also clears the telemetry suppression window so
the first telemetry packet from the new host is always accepted.
Parameters:
node_id: Identifier reported by the connected device. ``None`` clears
the current host assignment.
"""
global _host_node_id, _host_telemetry_last_rx, _host_nodeinfo_last_seen
canonical = _canonical_node_id(node_id)
_host_node_id = canonical
_host_telemetry_last_rx = None
_host_nodeinfo_last_seen = None
if canonical:
config._debug_log(
"Registered host device node id",
context="handlers.host_device",
host_node_id=canonical,
)
def host_node_id() -> str | None:
"""Return the canonical identifier for the connected host device.
Returns:
The canonical ``!xxxxxxxx`` node identifier, or ``None`` when no host
has been registered yet.
"""
return _host_node_id
def _mark_host_telemetry_seen(rx_time: int) -> None:
"""Update the last receive timestamp for the host telemetry window.
Parameters:
rx_time: Unix timestamp of the accepted host telemetry packet.
"""
global _host_telemetry_last_rx
_host_telemetry_last_rx = rx_time
def _host_telemetry_suppressed(rx_time: int) -> tuple[bool, int]:
"""Return suppression state and minutes remaining for host telemetry.
Host telemetry is suppressed when it arrives within
:data:`_HOST_TELEMETRY_INTERVAL_SECS` of the previous accepted packet.
This avoids flooding the API with high-frequency device metrics from the
locally connected node.
Parameters:
rx_time: Unix timestamp of the candidate telemetry packet.
Returns:
A ``(suppressed, minutes_remaining)`` tuple. ``suppressed`` is
``True`` when the packet should be dropped; ``minutes_remaining``
is the whole number of minutes until the next packet will be accepted.
"""
if _host_telemetry_last_rx is None:
return False, 0
remaining_secs = (_host_telemetry_last_rx + _HOST_TELEMETRY_INTERVAL_SECS) - rx_time
if remaining_secs <= 0:
return False, 0
return True, int(math.ceil(remaining_secs / 60.0))
def _host_nodeinfo_suppressed(now: float) -> bool:
"""Return ``True`` when a host NODEINFO upsert should be suppressed.
Self-NODEINFO upserts are throttled to at most once per
:data:`_HOST_NODEINFO_INTERVAL_SECS` to prevent the meshtastic library's
periodic rebroadcast from overwriting the host node record too aggressively.
Parameters:
now: Current :func:`time.monotonic` value.
Returns:
``True`` when the request should be dropped; ``False`` when it should
proceed.
"""
if _host_nodeinfo_last_seen is None:
return False
return (now - _host_nodeinfo_last_seen) < _HOST_NODEINFO_INTERVAL_SECS
def _mark_host_nodeinfo_seen(now: float) -> None:
"""Record that a host NODEINFO upsert was accepted.
Parameters:
now: Current :func:`time.monotonic` value from the accepted upsert.
"""
global _host_nodeinfo_last_seen
_host_nodeinfo_last_seen = now
def last_packet_monotonic() -> float | None:
"""Return the monotonic timestamp of the most recently processed packet.
Returns:
A :func:`time.monotonic` value, or ``None`` before any packet has been
received.
"""
return _last_packet_monotonic
def _mark_packet_seen() -> None:
"""Record that a packet has been processed by updating the monotonic clock."""
global _last_packet_monotonic
_last_packet_monotonic = time.monotonic()
__all__ = [
"_HOST_NODEINFO_INTERVAL_SECS",
"_HOST_TELEMETRY_INTERVAL_SECS",
"_host_nodeinfo_suppressed",
"_host_telemetry_suppressed",
"_mark_host_nodeinfo_seen",
"_mark_host_telemetry_seen",
"_mark_packet_seen",
"host_node_id",
"last_packet_monotonic",
"register_host_node_id",
]
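Both suppression windows reduce to the same interval arithmetic. A standalone sketch of the telemetry check, mirroring _host_telemetry_suppressed above:

import math

INTERVAL_SECS = 60 * 60  # mirrors _HOST_TELEMETRY_INTERVAL_SECS

def suppressed(last_rx: int | None, rx_time: int) -> tuple[bool, int]:
    if last_rx is None:  # first packet from a freshly registered host: accept
        return False, 0
    remaining = (last_rx + INTERVAL_SECS) - rx_time
    if remaining <= 0:
        return False, 0
    return True, math.ceil(remaining / 60.0)

# A packet arriving 30 minutes after the last accepted one is dropped,
# with 30 whole minutes left in the window:
assert suppressed(1_000_000, 1_000_000 + 1800) == (True, 30)
# Once the full hour has elapsed it is accepted again:
assert suppressed(1_000_000, 1_000_000 + 3600) == (False, 0)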
+478
@@ -0,0 +1,478 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generic packet dispatcher, node upsert, and the main receive callback."""
from __future__ import annotations
import base64
import contextlib
import importlib
import json
import sys
import time
from collections.abc import Mapping
from .. import channels, config, queue
from ..serialization import (
_canonical_node_id,
_coerce_int,
_first,
_iso,
_pkt_to_dict,
upsert_payload,
)
from . import _state, ignored as _ignored_mod
from .neighborinfo import store_neighborinfo_packet
from .nodeinfo import store_nodeinfo_packet
from .position import store_position_packet
from .radio import _apply_radio_metadata, _apply_radio_metadata_to_nodes
from .telemetry import store_router_heartbeat_packet, store_telemetry_packet
from .position import store_traceroute_packet
def _portnum_candidates(name: str) -> set[int]:
"""Return Meshtastic port number candidates for ``name``.
Meshtastic ships two protobuf module layouts (legacy and modern). Both are
probed so that port-number comparisons work regardless of which firmware
version is installed.
Parameters:
name: Port name to look up in Meshtastic ``PortNum`` enums.
Returns:
Set of integer port numbers resolved from all available Meshtastic
modules.
"""
candidates: set[int] = set()
for module_name in (
"meshtastic.portnums_pb2",
"meshtastic.protobuf.portnums_pb2",
):
module = sys.modules.get(module_name)
if module is None:
with contextlib.suppress(ModuleNotFoundError):
module = importlib.import_module(module_name)
if module is None:
continue
portnum_enum = getattr(module, "PortNum", None)
value_lookup = getattr(portnum_enum, "Value", None) if portnum_enum else None
if callable(value_lookup):
with contextlib.suppress(Exception):
candidate = _coerce_int(value_lookup(name))
if candidate is not None:
candidates.add(candidate)
constant_value = getattr(module, name, None)
candidate = _coerce_int(constant_value)
if candidate is not None:
candidates.add(candidate)
return candidates
def _is_encrypted_flag(value: object) -> bool:
"""Return ``True`` when ``value`` represents an encrypted payload.
Meshtastic may express the encrypted flag as a boolean, an integer, or a
string depending on how the packet was decoded. All representations are
normalised to a Python bool.
Parameters:
value: Raw encrypted field from a Meshtastic packet.
Returns:
``True`` when the payload is considered encrypted, ``False`` otherwise.
"""
if isinstance(value, bool):
return value
if isinstance(value, (int, float)):
return value != 0
if isinstance(value, str):
normalized = value.strip().lower()
if normalized in {"", "0", "false", "no"}:
return False
return True
return bool(value)
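A quick sanity check of the normalisation above; these expectations follow directly from the branches shown:

assert _is_encrypted_flag(True) is True
assert _is_encrypted_flag(0) is False
assert _is_encrypted_flag("false") is False  # recognised false-y string
assert _is_encrypted_flag("") is False
assert _is_encrypted_flag("AES") is True  # any other non-empty string counts as encrypted
assert _is_encrypted_flag(None) is False  # falls through to bool(value)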
def upsert_node(node_id: object, node: object) -> None:
"""Schedule an upsert for a single node.
Serialises ``node`` via :func:`upsert_payload`, enriches the result with
radio metadata and the current host node identifier, then enqueues a POST
to ``/api/nodes``.
Parameters:
node_id: Canonical identifier for the node in the ``!xxxxxxxx`` format.
node: Node object or mapping to serialise for the API payload.
Returns:
``None``. The payload is forwarded to the shared HTTP queue.
"""
payload = _apply_radio_metadata_to_nodes(upsert_payload(node_id, node))
payload["ingestor"] = _state.host_node_id()
queue._queue_post_json("/api/nodes", payload, priority=queue._NODE_POST_PRIORITY)
if config.DEBUG:
from ..serialization import _get
user = _get(payload[node_id], "user") or {}
short = _get(user, "shortName")
long_name = _get(user, "longName")
config._debug_log(
"Queued node upsert payload",
context="handlers.upsert_node",
node_id=node_id,
short_name=short,
long_name=long_name,
)
def store_packet_dict(packet: Mapping) -> None:
"""Route a decoded packet to the appropriate storage handler.
Inspects ``portnum`` (string and integer forms) and the presence of
well-known decoded sub-sections to determine packet type, then delegates
to the corresponding ``store_*`` handler.
Parameters:
packet: Packet dictionary emitted by the mesh interface.
Returns:
``None``. Side-effects depend on the specific handler invoked.
"""
decoded = packet.get("decoded") or {}
portnum_raw = _first(decoded, "portnum", default=None)
portnum = str(portnum_raw).upper() if portnum_raw is not None else None
portnum_int = _coerce_int(portnum_raw)
telemetry_section = (
decoded.get("telemetry") if isinstance(decoded, Mapping) else None
)
if (
portnum == "TELEMETRY_APP"
or portnum_int == 67
or isinstance(telemetry_section, Mapping)
):
store_telemetry_packet(packet, decoded)
return
traceroute_section = (
decoded.get("traceroute") if isinstance(decoded, Mapping) else None
)
traceroute_port_ints = _portnum_candidates("TRACEROUTE_APP")
if (
portnum == "TRACEROUTE_APP"
or (portnum_int is not None and portnum_int in traceroute_port_ints)
or isinstance(traceroute_section, Mapping)
):
store_traceroute_packet(packet, decoded)
return
if portnum in {"5", "NODEINFO_APP"}:
store_nodeinfo_packet(packet, decoded)
return
if portnum in {"4", "POSITION_APP"}:
store_position_packet(packet, decoded)
return
neighborinfo_section = (
decoded.get("neighborinfo") if isinstance(decoded, Mapping) else None
)
if portnum == "NEIGHBORINFO_APP" or isinstance(neighborinfo_section, Mapping):
store_neighborinfo_packet(packet, decoded)
return
store_forward_port_candidates = _portnum_candidates("STORE_FORWARD_APP")
store_forward_section = (
decoded.get("storeforward") if isinstance(decoded, Mapping) else None
)
if portnum == "STORE_FORWARD_APP" or (
portnum_int is not None and portnum_int in store_forward_port_candidates
):
if not isinstance(store_forward_section, Mapping):
_ignored_mod._record_ignored_packet(
packet, reason="unsupported-store-forward"
)
return
rr = str(store_forward_section.get("rr") or "").upper()
if rr == "ROUTER_HEARTBEAT":
store_router_heartbeat_packet(packet)
return
_ignored_mod._record_ignored_packet(
packet, reason="unsupported-store-forward-rr"
)
return
text = _first(decoded, "payload.text", "text", "data.text", default=None)
encrypted = _first(decoded, "payload.encrypted", "encrypted", default=None)
if encrypted is None:
encrypted = _first(packet, "encrypted", default=None)
reply_id_raw = _first(
decoded,
"payload.replyId",
"payload.reply_id",
"data.replyId",
"data.reply_id",
"replyId",
"reply_id",
default=None,
)
reply_id = _coerce_int(reply_id_raw)
emoji_raw = _first(
decoded,
"payload.emoji",
"data.emoji",
"emoji",
default=None,
)
emoji = None
if emoji_raw is not None:
try:
emoji_text = str(emoji_raw)
except Exception:
emoji_text = None
else:
emoji_text = emoji_text.strip()
if emoji_text:
emoji = emoji_text
routing_section = decoded.get("routing") if isinstance(decoded, Mapping) else None
routing_port_candidates = _portnum_candidates("ROUTING_APP")
if text is None and (
portnum == "ROUTING_APP"
or (portnum_int is not None and portnum_int in routing_port_candidates)
or isinstance(routing_section, Mapping)
):
routing_payload = _first(decoded, "payload", "data", default=None)
if routing_payload is not None:
if isinstance(routing_payload, bytes):
text = base64.b64encode(routing_payload).decode("ascii")
elif isinstance(routing_payload, str):
text = routing_payload
else:
try:
text = json.dumps(routing_payload, ensure_ascii=True)
except TypeError:
text = str(routing_payload)
if isinstance(text, str):
text = text.strip() or None
allowed_port_values = {"1", "TEXT_MESSAGE_APP", "REACTION_APP", "ROUTING_APP"}
allowed_port_ints = {1}
reaction_port_candidates = _portnum_candidates("REACTION_APP")
for candidate in reaction_port_candidates:
allowed_port_ints.add(candidate)
allowed_port_values.add(str(candidate))
for candidate in routing_port_candidates:
allowed_port_ints.add(candidate)
allowed_port_values.add(str(candidate))
if isinstance(routing_section, Mapping) and portnum_int is not None:
allowed_port_ints.add(portnum_int)
allowed_port_values.add(str(portnum_int))
is_reaction_packet = portnum == "REACTION_APP" or (
reply_id is not None and emoji is not None
)
if is_reaction_packet and portnum_int is not None:
allowed_port_ints.add(portnum_int)
allowed_port_values.add(str(portnum_int))
if portnum and portnum not in allowed_port_values:
if portnum_int not in allowed_port_ints:
_ignored_mod._record_ignored_packet(packet, reason="unsupported-port")
return
encrypted_flag = _is_encrypted_flag(encrypted)
if not any([text, encrypted_flag, emoji is not None, reply_id is not None]):
_ignored_mod._record_ignored_packet(packet, reason="no-message-payload")
return
channel = _first(decoded, "channel", default=None)
if channel is None:
channel = _first(packet, "channel", default=0)
try:
channel = int(channel)
except Exception:
channel = 0
channel_name_value = channels.channel_name(channel)
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
if pkt_id is None:
_ignored_mod._record_ignored_packet(packet, reason="missing-packet-id")
return
rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))
from_id = _first(packet, "fromId", "from_id", "from", default=None)
to_id = _first(packet, "toId", "to_id", "to", default=None)
if (from_id is None or str(from_id) == "") and config.DEBUG:
try:
raw = json.dumps(packet, default=str)
except Exception:
raw = str(packet)
config._debug_log(
"Packet missing from_id",
context="handlers.store_packet_dict",
packet=raw,
)
snr = _first(packet, "snr", "rx_snr", "rxSnr", default=None)
rssi = _first(packet, "rssi", "rx_rssi", "rxRssi", default=None)
hop = _first(packet, "hopLimit", "hop_limit", default=None)
to_id_normalized = str(to_id).strip() if to_id is not None else ""
if (
not is_reaction_packet
and channel == 0
and not encrypted_flag
and to_id_normalized
and to_id_normalized.lower() != "^all"
):
if config.DEBUG:
config._debug_log(
"Skipped direct message on primary channel",
context="handlers.store_packet_dict",
from_id=_canonical_node_id(from_id) or from_id,
to_id=_canonical_node_id(to_id) or to_id,
channel=channel,
)
_ignored_mod._record_ignored_packet(packet, reason="skipped-direct-message")
return
if not channels.is_allowed_channel(channel_name_value):
_ignored_mod._record_ignored_packet(packet, reason="disallowed-channel")
if config.DEBUG:
config._debug_log(
"Ignored packet on disallowed channel",
context="handlers.store_packet_dict",
channel=channel,
channel_name=channel_name_value,
allowed_channels=channels.allowed_channel_names(),
)
return
if channels.is_hidden_channel(channel_name_value):
_ignored_mod._record_ignored_packet(packet, reason="hidden-channel")
if config.DEBUG:
config._debug_log(
"Ignored packet on hidden channel",
context="handlers.store_packet_dict",
channel=channel,
channel_name=channel_name_value,
)
return
message_payload = {
"id": int(pkt_id),
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"from_id": from_id,
"to_id": to_id,
"channel": channel,
"portnum": str(portnum) if portnum is not None else None,
"text": text,
"encrypted": encrypted,
"snr": float(snr) if snr is not None else None,
"rssi": int(rssi) if rssi is not None else None,
"hop_limit": int(hop) if hop is not None else None,
"reply_id": reply_id,
"emoji": emoji,
"ingestor": _state.host_node_id(),
}
if not encrypted_flag and channel_name_value:
message_payload["channel_name"] = channel_name_value
queue._queue_post_json(
"/api/messages",
_apply_radio_metadata(message_payload),
priority=queue._MESSAGE_POST_PRIORITY,
)
if config.DEBUG:
from_label = _canonical_node_id(from_id) or from_id
to_label = _canonical_node_id(to_id) or to_id
payload_desc = "Encrypted" if text is None and encrypted else text
log_kwargs = {
"context": "handlers.store_packet_dict",
"from_id": from_label,
"to_id": to_label,
"channel": channel,
"channel_display": channel_name_value or channel,
"payload": payload_desc,
}
if channel_name_value:
log_kwargs["channel_name"] = channel_name_value
config._debug_log("Queued message payload", **log_kwargs)
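To make the dispatch concrete, here is a minimal sketch of a packet dict that the function above would queue as a plain-text message, assuming channel 0 resolves to an allowed, non-hidden channel name; all values are hypothetical:

sample_packet = {
    "id": 987654321,
    "rxTime": 1767225600,
    "fromId": "!a1b2c3d4",
    "toId": "^all",  # broadcast, so the direct-message filter does not trigger
    "channel": 0,
    "snr": 7.25,
    "rssi": -92,
    "hopLimit": 3,
    "decoded": {
        "portnum": "TEXT_MESSAGE_APP",
        "payload": {"text": "hello mesh"},
    },
}
# store_packet_dict(sample_packet)  # would enqueue a POST to /api/messages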
def on_receive(packet: object, interface: object) -> None:
"""Callback registered with Meshtastic to capture incoming packets.
Subscribed to all ``meshtastic.receive.*`` pubsub topics. The packet is
deduplicated via a ``_potatomesh_seen`` flag before being normalised and
dispatched to :func:`store_packet_dict`.
Parameters:
packet: Packet payload supplied by the Meshtastic pubsub topic.
interface: Interface instance that produced the packet. Only used for
compatibility with Meshtastic's callback signature.
Returns:
``None``. Packets are serialised and enqueued asynchronously.
"""
if isinstance(packet, dict):
if packet.get("_potatomesh_seen"):
return
packet["_potatomesh_seen"] = True
_state._mark_packet_seen()
packet_dict = None
try:
packet_dict = _pkt_to_dict(packet)
store_packet_dict(packet_dict)
except Exception as exc:
info = (
list(packet_dict.keys()) if isinstance(packet_dict, dict) else type(packet)
)
config._debug_log(
"Failed to store packet",
context="handlers.on_receive",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
packet_info=info,
)
__all__ = [
"_is_encrypted_flag",
"_portnum_candidates",
"on_receive",
"store_packet_dict",
"upsert_node",
]
+103
@@ -0,0 +1,103 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Debug-mode logging of ignored Meshtastic packets.
When :data:`config.DEBUG` is set the ingestor appends a JSON record for each
packet that is filtered out (unsupported port, missing fields, disallowed
channel, etc.) to a plain-text log file. This aids offline debugging without
adding overhead in production.
"""
from __future__ import annotations
import base64
import json
import threading
from collections.abc import Mapping
from datetime import datetime, timezone
from pathlib import Path
from .. import config
_IGNORED_PACKET_LOG_PATH = (
Path(__file__).resolve().parents[3] / "ignored-meshtastic.txt"
)
"""Filesystem path that stores ignored Meshtastic packets when debug mode is active."""
_IGNORED_PACKET_LOCK = threading.Lock()
"""Lock serialising concurrent appends to :data:`_IGNORED_PACKET_LOG_PATH`."""
def _ignored_packet_default(value: object) -> object:
"""Return a JSON-serialisable representation for an ignored packet value.
Called as the ``default`` argument to :func:`json.dumps` when serialising
ignored packet entries. Handles container types and raw bytes so the log
file contains readable text rather than ``repr()`` fragments.
Parameters:
value: Arbitrary value encountered during packet serialisation.
Returns:
A JSON-compatible object derived from ``value``.
"""
if isinstance(value, (list, tuple, set)):
return list(value)
if isinstance(value, bytes):
return base64.b64encode(value).decode("ascii")
if isinstance(value, Mapping):
return {
str(key): _ignored_packet_default(sub_value)
for key, sub_value in value.items()
}
return str(value)
def _record_ignored_packet(packet: Mapping | object, *, reason: str) -> None:
"""Persist packet details to :data:`_IGNORED_PACKET_LOG_PATH` during debugging.
Does nothing when :data:`config.DEBUG` is ``False``. Each call appends a
single newline-delimited JSON record with a timestamp, drop reason, and a
sanitised copy of the packet.
Parameters:
packet: Packet object or mapping to record.
reason: Short machine-readable label describing why the packet was
ignored (e.g. ``"unsupported-port"``, ``"missing-packet-id"``).
"""
if not config.DEBUG:
return
timestamp = datetime.now(timezone.utc).isoformat()
entry = {
"timestamp": timestamp,
"reason": reason,
"packet": _ignored_packet_default(packet),
}
payload = json.dumps(entry, ensure_ascii=False, sort_keys=True)
with _IGNORED_PACKET_LOCK:
_IGNORED_PACKET_LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
with _IGNORED_PACKET_LOG_PATH.open("a", encoding="utf-8") as handle:
handle.write(f"{payload}\n")
__all__ = [
"_IGNORED_PACKET_LOCK",
"_IGNORED_PACKET_LOG_PATH",
"_ignored_packet_default",
"_record_ignored_packet",
]
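Since each record is one JSON object per line, the log can be summarised offline with a few lines of Python; a minimal sketch, assuming the file sits in the current directory:

import json
from pathlib import Path

reasons: dict[str, int] = {}
with Path("ignored-meshtastic.txt").open(encoding="utf-8") as handle:
    for line in handle:
        entry = json.loads(line)  # {"timestamp": ..., "reason": ..., "packet": ...}
        reasons[entry["reason"]] = reasons.get(entry["reason"], 0) + 1

for reason, count in sorted(reasons.items(), key=lambda kv: -kv[1]):
    print(f"{count:6d}  {reason}")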
+150
@@ -0,0 +1,150 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Handler for neighbour-information packets."""
from __future__ import annotations
import time
from collections.abc import Mapping
from .. import config, queue
from ..serialization import (
_canonical_node_id,
_coerce_float,
_coerce_int,
_first,
_iso,
_node_num_from_id,
)
from . import _state
from .radio import _apply_radio_metadata
def store_neighborinfo_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist neighbour information gathered from a packet.
Meshtastic nodes periodically broadcast the set of nodes they can hear
directly along with the observed signal quality. This handler serialises
that snapshot so the web dashboard can render a live RF topology graph.
Parameters:
packet: Raw Meshtastic packet metadata.
decoded: Decoded view containing the ``neighborinfo`` section.
Returns:
``None``. The neighbour snapshot is queued for HTTP submission.
"""
neighbor_section = (
decoded.get("neighborinfo") if isinstance(decoded, Mapping) else None
)
if not isinstance(neighbor_section, Mapping):
return
node_ref = _first(
neighbor_section,
"nodeId",
"node_id",
default=_first(packet, "fromId", "from_id", "from", default=None),
)
node_id = _canonical_node_id(node_ref)
if node_id is None:
return
node_num = _coerce_int(_first(neighbor_section, "nodeId", "node_id", default=None))
if node_num is None:
node_num = _node_num_from_id(node_id)
node_broadcast_interval = _coerce_int(
_first(
neighbor_section,
"nodeBroadcastIntervalSecs",
"node_broadcast_interval_secs",
default=None,
)
)
last_sent_by_ref = _first(
neighbor_section,
"lastSentById",
"last_sent_by_id",
default=None,
)
last_sent_by_id = _canonical_node_id(last_sent_by_ref)
rx_time = _coerce_int(_first(packet, "rxTime", "rx_time", default=time.time()))
if rx_time is None:
rx_time = int(time.time())
neighbors_payload = neighbor_section.get("neighbors")
neighbors_iterable = (
neighbors_payload if isinstance(neighbors_payload, list) else []
)
neighbor_entries: list[dict] = []
for entry in neighbors_iterable:
if not isinstance(entry, Mapping):
continue
neighbor_ref = _first(entry, "nodeId", "node_id", default=None)
neighbor_id = _canonical_node_id(neighbor_ref)
if neighbor_id is None:
continue
neighbor_num = _coerce_int(_first(entry, "nodeId", "node_id", default=None))
if neighbor_num is None:
neighbor_num = _node_num_from_id(neighbor_id)
snr = _coerce_float(_first(entry, "snr", default=None))
entry_rx_time = _coerce_int(_first(entry, "rxTime", "rx_time", default=None))
if entry_rx_time is None:
entry_rx_time = rx_time
neighbor_entries.append(
{
"neighbor_id": neighbor_id,
"neighbor_num": neighbor_num,
"snr": snr,
"rx_time": entry_rx_time,
"rx_iso": _iso(entry_rx_time),
}
)
payload = {
"node_id": node_id,
"node_num": node_num,
"neighbors": neighbor_entries,
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"ingestor": _state.host_node_id(),
}
if node_broadcast_interval is not None:
payload["node_broadcast_interval_secs"] = node_broadcast_interval
if last_sent_by_id is not None:
payload["last_sent_by_id"] = last_sent_by_id
queue._queue_post_json(
"/api/neighbors",
_apply_radio_metadata(payload),
priority=queue._NEIGHBOR_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
"Queued neighborinfo payload",
context="handlers.store_neighborinfo",
node_id=node_id,
neighbors=len(neighbor_entries),
)
__all__ = ["store_neighborinfo_packet"]
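For reference, the JSON this handler enqueues for POST /api/neighbors has roughly the following shape (all values hypothetical; the node numbers are the 32-bit forms of the canonical IDs):

example_payload = {
    "node_id": "!a1b2c3d4",
    "node_num": 2712847316,  # == 0xA1B2C3D4
    "neighbors": [
        {
            "neighbor_id": "!deadbeef",
            "neighbor_num": 3735928559,  # == 0xDEADBEEF
            "snr": 6.5,
            "rx_time": 1767225600,
            "rx_iso": "2026-01-01T00:00:00+00:00",
        },
    ],
    "rx_time": 1767225600,
    "rx_iso": "2026-01-01T00:00:00+00:00",
    "ingestor": "!c0ffee00",
}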
+234
@@ -0,0 +1,234 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Handler for node-information packets."""
from __future__ import annotations
import time
from collections.abc import Mapping
from .. import config, queue
from ..serialization import (
_canonical_node_id,
_coerce_int,
_decode_nodeinfo_payload,
_extract_payload_bytes,
_first,
_merge_mappings,
_node_num_from_id,
_node_to_dict,
_nodeinfo_metrics_dict,
_nodeinfo_position_dict,
_nodeinfo_user_dict,
)
from . import _state
from .radio import _apply_radio_metadata_to_nodes
def store_nodeinfo_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist node information updates.
Node info packets carry user profile data (short name, long name, hardware
model, public key) together with optional position and device-metrics
snapshots. When a protobuf payload is present it is decoded first; any
fields missing from the protobuf are filled in from the ``decoded`` dict
so both firmware variants are handled.
Parameters:
packet: Raw packet metadata describing the update.
decoded: Decoded payload that may include ``user`` and ``position``
sections.
Returns:
``None``. The node payload is merged into the API queue.
"""
payload_bytes = _extract_payload_bytes(decoded)
node_info = _decode_nodeinfo_payload(payload_bytes)
decoded_user = decoded.get("user")
user_dict = _nodeinfo_user_dict(node_info, decoded_user)
node_info_fields = set()
if node_info:
node_info_fields = {field_desc.name for field_desc, _ in node_info.ListFields()}
node_id = None
if isinstance(user_dict, Mapping):
node_id = _canonical_node_id(user_dict.get("id"))
if node_id is None:
node_id = _canonical_node_id(
_first(packet, "fromId", "from_id", "from", default=None)
)
if node_id is None:
return
# Throttle self-NODEINFO upserts to at most once per hour. The meshtastic
# library rebroadcasts the local node's NODEINFO periodically; accepting
# every broadcast would overwrite the host node record too aggressively.
if node_id == _state.host_node_id():
_now = time.monotonic()
if _state._host_nodeinfo_suppressed(_now):
if config.DEBUG:
config._debug_log(
"Suppressed host self-NODEINFO update within throttle window",
context="handlers.store_nodeinfo",
node_id=node_id,
)
return
_state._mark_host_nodeinfo_seen(_now)
node_payload: dict = {}
if user_dict:
node_payload["user"] = user_dict
# Resolve node_num from protobuf first, then decoded dict, then from the
# canonical ID as a last resort.
node_num = None
if node_info and "num" in node_info_fields:
try:
node_num = int(node_info.num)
except (TypeError, ValueError):
node_num = None
if node_num is None:
decoded_num = decoded.get("num")
if decoded_num is not None:
try:
node_num = int(decoded_num)
except (TypeError, ValueError):
try:
node_num = int(str(decoded_num).strip(), 0)
except Exception:
node_num = None
if node_num is None:
node_num = _node_num_from_id(node_id)
if node_num is not None:
node_payload["num"] = node_num
rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))
last_heard = None
if node_info and "last_heard" in node_info_fields:
try:
last_heard = int(node_info.last_heard)
except (TypeError, ValueError):
last_heard = None
if last_heard is None:
decoded_last_heard = decoded.get("lastHeard")
if decoded_last_heard is not None:
try:
last_heard = int(decoded_last_heard)
except (TypeError, ValueError):
last_heard = None
if last_heard is None or last_heard < rx_time:
last_heard = rx_time
node_payload["lastHeard"] = last_heard
snr = None
if node_info and "snr" in node_info_fields:
try:
snr = float(node_info.snr)
except (TypeError, ValueError):
snr = None
if snr is None:
snr = _first(packet, "snr", "rx_snr", "rxSnr", default=None)
if snr is not None:
try:
snr = float(snr)
except (TypeError, ValueError):
snr = None
if snr is not None:
node_payload["snr"] = snr
hops = None
if node_info and "hops_away" in node_info_fields:
try:
hops = int(node_info.hops_away)
except (TypeError, ValueError):
hops = None
if hops is None:
hops = decoded.get("hopsAway")
if hops is not None:
try:
hops = int(hops)
except (TypeError, ValueError):
hops = None
if hops is not None:
node_payload["hopsAway"] = hops
if node_info and "channel" in node_info_fields:
try:
node_payload["channel"] = int(node_info.channel)
except (TypeError, ValueError):
pass
if node_info and "via_mqtt" in node_info_fields:
node_payload["viaMqtt"] = bool(node_info.via_mqtt)
if node_info and "is_favorite" in node_info_fields:
node_payload["isFavorite"] = bool(node_info.is_favorite)
elif "isFavorite" in decoded:
node_payload["isFavorite"] = bool(decoded.get("isFavorite"))
if node_info and "is_ignored" in node_info_fields:
node_payload["isIgnored"] = bool(node_info.is_ignored)
if node_info and "is_key_manually_verified" in node_info_fields:
node_payload["isKeyManuallyVerified"] = bool(node_info.is_key_manually_verified)
metrics = _nodeinfo_metrics_dict(node_info)
decoded_metrics = decoded.get("deviceMetrics")
if isinstance(decoded_metrics, Mapping):
metrics = _merge_mappings(metrics, _node_to_dict(decoded_metrics))
if metrics:
node_payload["deviceMetrics"] = metrics
position = _nodeinfo_position_dict(node_info)
decoded_position = decoded.get("position")
if isinstance(decoded_position, Mapping):
position = _merge_mappings(position, _node_to_dict(decoded_position))
if position:
node_payload["position"] = position
hop_limit = _first(packet, "hopLimit", "hop_limit", default=None)
if hop_limit is not None and "hopLimit" not in node_payload:
try:
node_payload["hopLimit"] = int(hop_limit)
except (TypeError, ValueError):
pass
nodes_payload = _apply_radio_metadata_to_nodes({node_id: node_payload})
nodes_payload["ingestor"] = _state.host_node_id()
queue._queue_post_json(
"/api/nodes",
nodes_payload,
priority=queue._NODE_POST_PRIORITY,
)
if config.DEBUG:
short = None
long_name = None
if isinstance(user_dict, Mapping):
short = user_dict.get("shortName")
long_name = user_dict.get("longName")
config._debug_log(
"Queued nodeinfo payload",
context="handlers.store_nodeinfo",
node_id=node_id,
short_name=short,
long_name=long_name,
)
__all__ = ["store_nodeinfo_packet"]
+413
@@ -0,0 +1,413 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Handlers for position and traceroute packets."""
from __future__ import annotations
import base64
import time
from collections.abc import Mapping
from .. import config, queue
from ..serialization import (
_canonical_node_id,
_coerce_float,
_coerce_int,
_extract_payload_bytes,
_first,
_iso,
_node_num_from_id,
_node_to_dict,
_pkt_to_dict,
)
from . import _state
from .ignored import _record_ignored_packet
from .radio import _apply_radio_metadata
def base64_payload(payload_bytes: bytes | None) -> str | None:
"""Encode raw payload bytes as a Base64 string for JSON transport.
Parameters:
payload_bytes: Optional raw bytes to encode. When ``None`` or empty,
``None`` is returned so callers can omit the field.
Returns:
The Base64-encoded ASCII string, or ``None`` when ``payload_bytes`` is
falsy.
"""
if not payload_bytes:
return None
return base64.b64encode(payload_bytes).decode("ascii")
def _normalize_trace_hops(hops_value: object) -> list[int]:
"""Coerce hop entries to integer node numbers, preserving order.
Each hop can arrive as a plain integer, a canonical node-ID string
(``!xxxxxxxx``), or a mapping with a ``nodeId`` / ``node_id`` field.
All forms are normalised to the raw 32-bit node number used by the API.
Parameters:
hops_value: A single hop or list of hops in any supported form.
Returns:
List of integer node numbers; entries that fail integer coercion are dropped.
"""
if hops_value is None:
return []
hop_entries = hops_value if isinstance(hops_value, list) else [hops_value]
normalized: list[int] = []
for hop in hop_entries:
hop_value = hop
if isinstance(hop, Mapping):
hop_value = _first(hop, "node_id", "nodeId", "id", "num", default=None)
canonical = _canonical_node_id(hop_value)
hop_id = _node_num_from_id(canonical or hop_value)
if hop_id is None:
hop_id = _coerce_int(hop_value)
if hop_id is not None:
normalized.append(hop_id)
return normalized
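Assuming _canonical_node_id and _node_num_from_id convert between the ``!xxxxxxxx`` form and the raw 32-bit number as elsewhere in this diff, every supported hop spelling collapses to the same integer:

assert _normalize_trace_hops(2712847316) == [2712847316]  # already an int
assert _normalize_trace_hops("!a1b2c3d4") == [2712847316]  # canonical string
assert _normalize_trace_hops([{"nodeId": "!a1b2c3d4"}]) == [2712847316]  # mapping form
assert _normalize_trace_hops(None) == []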
def store_position_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist a decoded GPS position packet to the API.
Extracts coordinates from both the integer-scaled (``latitudeI`` /
``longitudeI``) and floating-point (``latitude`` / ``longitude``) forms
that Meshtastic may produce depending on firmware version.
Parameters:
packet: Raw packet metadata emitted by the Meshtastic interface.
decoded: Decoded payload extracted from ``packet['decoded']``.
Returns:
``None``. The formatted position payload is added to the HTTP queue.
"""
node_ref = _first(packet, "fromId", "from_id", "from", default=None)
if node_ref is None:
node_ref = _first(decoded, "num", default=None)
node_id = _canonical_node_id(node_ref)
if node_id is None:
return
node_num = _coerce_int(_first(decoded, "num", default=None))
if node_num is None:
node_num = _node_num_from_id(node_id)
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
if pkt_id is None:
return
rx_time = _coerce_int(_first(packet, "rxTime", "rx_time", default=time.time()))
if rx_time is None:
rx_time = int(time.time())
to_id = _first(packet, "toId", "to_id", "to", default=None)
to_id = to_id if to_id not in {"", None} else None
position_section = decoded.get("position") if isinstance(decoded, Mapping) else None
if not isinstance(position_section, Mapping):
position_section = {}
# Meshtastic firmware may emit coordinates in one of two forms:
# - Floating-point degrees: ``latitude`` / ``longitude``
# - Integer-scaled (1e-7 degrees): ``latitudeI`` / ``longitudeI``
# Try the float form first and fall back to the integer form when absent.
latitude = _coerce_float(
_first(position_section, "latitude", "raw.latitude", default=None)
)
if latitude is None:
lat_i = _coerce_int(
_first(
position_section,
"latitudeI",
"latitude_i",
"raw.latitude_i",
default=None,
)
)
if lat_i is not None:
latitude = lat_i / 1e7
longitude = _coerce_float(
_first(position_section, "longitude", "raw.longitude", default=None)
)
if longitude is None:
lon_i = _coerce_int(
_first(
position_section,
"longitudeI",
"longitude_i",
"raw.longitude_i",
default=None,
)
)
if lon_i is not None:
longitude = lon_i / 1e7
altitude = _coerce_float(
_first(position_section, "altitude", "raw.altitude", default=None)
)
position_time = _coerce_int(
_first(position_section, "time", "raw.time", default=None)
)
location_source = _first(
position_section,
"locationSource",
"location_source",
"raw.location_source",
default=None,
)
location_source = (
str(location_source).strip() if location_source not in {None, ""} else None
)
precision_bits = _coerce_int(
_first(
position_section,
"precisionBits",
"precision_bits",
"raw.precision_bits",
default=None,
)
)
sats_in_view = _coerce_int(
_first(
position_section,
"satsInView",
"sats_in_view",
"raw.sats_in_view",
default=None,
)
)
pdop = _coerce_float(
_first(position_section, "PDOP", "pdop", "raw.PDOP", "raw.pdop", default=None)
)
ground_speed = _coerce_float(
_first(
position_section,
"groundSpeed",
"ground_speed",
"raw.ground_speed",
default=None,
)
)
ground_track = _coerce_float(
_first(
position_section,
"groundTrack",
"ground_track",
"raw.ground_track",
default=None,
)
)
snr = _coerce_float(_first(packet, "snr", "rx_snr", "rxSnr", default=None))
rssi = _coerce_int(_first(packet, "rssi", "rx_rssi", "rxRssi", default=None))
hop_limit = _coerce_int(_first(packet, "hopLimit", "hop_limit", default=None))
bitfield = _coerce_int(_first(decoded, "bitfield", default=None))
payload_bytes = _extract_payload_bytes(decoded)
payload_b64 = base64_payload(payload_bytes)
raw_section = decoded.get("raw") if isinstance(decoded, Mapping) else None
raw_payload = _node_to_dict(raw_section) if raw_section else None
if raw_payload is None and position_section:
raw_position = (
position_section.get("raw")
if isinstance(position_section, Mapping)
else None
)
if raw_position:
raw_payload = _node_to_dict(raw_position)
position_payload = {
"id": pkt_id,
"node_id": node_id or node_ref,
"node_num": node_num,
"num": node_num,
"from_id": node_id,
"to_id": to_id,
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"latitude": latitude,
"longitude": longitude,
"altitude": altitude,
"position_time": position_time,
"location_source": location_source,
"precision_bits": precision_bits,
"sats_in_view": sats_in_view,
"pdop": pdop,
"ground_speed": ground_speed,
"ground_track": ground_track,
"snr": snr,
"rssi": rssi,
"hop_limit": hop_limit,
"bitfield": bitfield,
"payload_b64": payload_b64,
"ingestor": _state.host_node_id(),
}
if raw_payload:
position_payload["raw"] = raw_payload
queue._queue_post_json(
"/api/positions",
_apply_radio_metadata(position_payload),
priority=queue._POSITION_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
"Queued position payload",
context="handlers.store_position",
node_id=node_id,
latitude=latitude,
longitude=longitude,
position_time=position_time,
)
def store_traceroute_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist traceroute details and the observed hop path to the API.
Hop lists can arrive under several key names (``hops``, ``path``,
``route``) and may appear at multiple nesting levels. All candidates are
deduplicated and merged into a single ordered list.
Parameters:
packet: Raw packet metadata from the Meshtastic interface.
decoded: Decoded payload containing the traceroute section.
Returns:
``None``. The traceroute payload is queued for HTTP submission, or
silently dropped when identifiers are entirely absent.
"""
traceroute_section = (
decoded.get("traceroute") if isinstance(decoded, Mapping) else None
)
request_id = _coerce_int(
_first(
traceroute_section,
"requestId",
"request_id",
default=_first(decoded, "req", "requestId", "request_id", default=None),
)
)
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
if pkt_id is None:
pkt_id = request_id
rx_time = _coerce_int(_first(packet, "rxTime", "rx_time", default=time.time()))
if rx_time is None:
rx_time = int(time.time())
src = _coerce_int(
_first(
decoded,
"src",
"source",
default=_first(packet, "fromId", "from_id", "from", default=None),
)
)
dest = _coerce_int(
_first(
decoded,
"dest",
"destination",
default=_first(packet, "toId", "to_id", "to", default=None),
)
)
metrics = traceroute_section if isinstance(traceroute_section, Mapping) else {}
rssi = _coerce_int(
_first(metrics, "rssi", default=_first(packet, "rssi", "rx_rssi", "rxRssi"))
)
snr = _coerce_float(
_first(metrics, "snr", default=_first(packet, "snr", "rx_snr", "rxSnr"))
)
elapsed_ms = _coerce_int(
_first(metrics, "elapsed_ms", "latency_ms", "latencyMs", default=None)
)
# Hops can appear under multiple keys at different nesting levels; collect
# all candidates and deduplicate while preserving first-seen order.
hop_candidates = (
_first(metrics, "hops", default=None),
_first(metrics, "path", default=None),
_first(metrics, "route", default=None),
_first(decoded, "hops", default=None),
_first(decoded, "path", default=None),
(
_first(traceroute_section, "route", default=None)
if isinstance(traceroute_section, Mapping)
else None
),
)
hops: list[int] = []
seen_hops: set[int] = set()
for candidate in hop_candidates:
for hop in _normalize_trace_hops(candidate):
if hop in seen_hops:
continue
seen_hops.add(hop)
hops.append(hop)
if pkt_id is None and request_id is None and not hops:
_record_ignored_packet(packet, reason="traceroute-missing-identifiers")
return
payload = {
"id": pkt_id,
"request_id": request_id,
"src": src,
"dest": dest,
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"hops": hops,
"rssi": rssi,
"snr": snr,
"elapsed_ms": elapsed_ms,
"ingestor": _state.host_node_id(),
}
queue._queue_post_json(
"/api/traces",
_apply_radio_metadata(payload),
priority=queue._TRACE_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
"Queued traceroute payload",
context="handlers.store_traceroute_packet",
request_id=request_id,
src=src,
dest=dest,
hop_count=len(hops),
)
__all__ = [
"base64_payload",
"store_position_packet",
"store_traceroute_packet",
]
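The integer-scaled coordinate fallback in store_position_packet is a straight division by 1e7; a worked example with hypothetical values near Berlin:

lat_i, lon_i = 525200000, 134050000  # latitudeI / longitudeI as sent by firmware
latitude, longitude = lat_i / 1e7, lon_i / 1e7
assert (latitude, longitude) == (52.52, 13.405)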
+94
@@ -0,0 +1,94 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Radio metadata helpers for enriching API payloads.
LoRa radio parameters (frequency and modem preset) are captured once at
connection time by :mod:`data.mesh_ingestor.interfaces` and stored on the
:mod:`data.mesh_ingestor.config` module. The helpers here read those cached
values and attach them to outgoing payloads so the web dashboard can display
radio configuration alongside mesh data.
"""
from __future__ import annotations
from .. import config
def _radio_metadata_fields() -> dict[str, object]:
"""Return the shared radio metadata fields for payload enrichment.
Reads ``LORA_FREQ`` and ``MODEM_PRESET`` from :mod:`config` and returns
only the keys that have been populated (i.e. skips ``None`` values).
Returns:
A dictionary containing zero, one, or both of ``lora_freq`` and
``modem_preset`` depending on what is available.
"""
metadata: dict[str, object] = {}
freq = getattr(config, "LORA_FREQ", None)
if freq is not None:
metadata["lora_freq"] = freq
preset = getattr(config, "MODEM_PRESET", None)
if preset is not None:
metadata["modem_preset"] = preset
return metadata
def _apply_radio_metadata(payload: dict) -> dict:
"""Augment a flat payload dict with radio metadata when available.
Parameters:
payload: Mutable dictionary that will receive radio metadata keys.
Returns:
The same ``payload`` dict with radio metadata keys merged in-place.
"""
metadata = _radio_metadata_fields()
if metadata:
payload.update(metadata)
return payload
def _apply_radio_metadata_to_nodes(payload: dict) -> dict:
"""Attach radio metadata to each node entry stored in ``payload``.
Node upsert payloads are keyed by node ID; each value is a dict of node
attributes. This function enriches every node-value dict with radio
metadata so the dashboard can show the radio configuration that was active
when the node was last heard.
Parameters:
payload: Mapping of ``node_id -> node_dict`` to enrich in-place.
Returns:
The same ``payload`` dict after in-place mutation of its node entries.
"""
metadata = _radio_metadata_fields()
if not metadata:
return payload
for value in payload.values():
if isinstance(value, dict):
value.update(metadata)
return payload
__all__ = [
"_apply_radio_metadata",
"_apply_radio_metadata_to_nodes",
"_radio_metadata_fields",
]
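A minimal sketch of the node-payload enrichment, assuming the interface layer has already populated config.LORA_FREQ = 868 and config.MODEM_PRESET = "LONG_FAST" at connect time:

nodes = {"!a1b2c3d4": {"lastHeard": 1767225600}}
nodes = _apply_radio_metadata_to_nodes(nodes)
# nodes is now:
# {"!a1b2c3d4": {"lastHeard": 1767225600, "lora_freq": 868, "modem_preset": "LONG_FAST"}}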
+563
@@ -0,0 +1,563 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Handlers for telemetry and router-heartbeat packets."""
from __future__ import annotations
import time
from collections.abc import Mapping
from .. import config, queue
from ..serialization import (
_canonical_node_id,
_coerce_float,
_coerce_int,
_extract_payload_bytes,
_first,
_iso,
_node_num_from_id,
)
from . import _state
from .position import base64_payload
from .radio import _apply_radio_metadata, _apply_radio_metadata_to_nodes
_VALID_TELEMETRY_TYPES: frozenset[str] = frozenset(
{"device", "environment", "power", "air_quality"}
)
"""Allowed discriminator values for the ``telemetry_type`` field.
Meshtastic uses a protobuf ``oneof`` so only one metric sub-object can be
populated per packet. Values outside this set indicate a firmware version
that added a new type not yet handled here; those are logged and dropped to
avoid persisting unexpected data shapes.
"""
def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist telemetry metrics extracted from a packet.
Handles all four Meshtastic telemetry sub-types (device, environment,
power, air quality) by extracting common fields first and then
conditionally adding type-specific metric keys.
Host telemetry is rate-limited: if the locally connected node's own
telemetry arrives within the suppression window it is silently dropped to
avoid constant self-updates overwriting other node data.
Parameters:
packet: Packet metadata received from the radio interface.
decoded: Meshtastic-decoded view containing telemetry structures.
Returns:
``None``. The telemetry payload is added to the HTTP queue.
"""
telemetry_section = (
decoded.get("telemetry") if isinstance(decoded, Mapping) else None
)
if not isinstance(telemetry_section, Mapping):
return
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
if pkt_id is None:
return
raw_from = _first(packet, "fromId", "from_id", "from", default=None)
node_id = _canonical_node_id(raw_from)
node_num = _coerce_int(_first(decoded, "num", "node_num", default=None))
if node_num is None:
node_num = _node_num_from_id(node_id or raw_from)
to_id = _first(packet, "toId", "to_id", "to", default=None)
raw_rx_time = _first(packet, "rxTime", "rx_time", default=time.time())
try:
rx_time = int(raw_rx_time)
except (TypeError, ValueError):
rx_time = int(time.time())
rx_iso = _iso(rx_time)
host_id = _state.host_node_id()
# The locally connected node broadcasts its own telemetry frequently.
# Accepting every packet would overwrite the host's profile more often
# than necessary; the suppression window (default 1 h) rate-limits
# self-updates without blocking telemetry from other nodes.
if host_id is not None and node_id == host_id:
suppressed, minutes_remaining = _state._host_telemetry_suppressed(rx_time)
if suppressed:
config._debug_log(
"Suppressed host telemetry update",
context="handlers.store_telemetry",
host_node_id=host_id,
minutes_remaining=minutes_remaining,
)
return
_state._mark_host_telemetry_seen(rx_time)
telemetry_time = _coerce_int(_first(telemetry_section, "time", default=None))
_dm = telemetry_section.get("deviceMetrics") or telemetry_section.get(
"device_metrics"
)
_em = telemetry_section.get("environmentMetrics") or telemetry_section.get(
"environment_metrics"
)
_pm = telemetry_section.get("powerMetrics") or telemetry_section.get(
"power_metrics"
)
_aq = telemetry_section.get("airQualityMetrics") or telemetry_section.get(
"air_quality_metrics"
)
# Priority order matters: deviceMetrics is checked first because the device
# sub-object also carries a voltage field that overlaps with powerMetrics.
# Meshtastic uses a protobuf oneof so only one sub-object can be populated per
# packet; the elif chain handles any hypothetical overlap from future protocols.
if isinstance(_dm, Mapping):
telemetry_type: str | None = "device"
elif isinstance(_em, Mapping):
telemetry_type = "environment"
elif isinstance(_pm, Mapping):
telemetry_type = "power"
elif isinstance(_aq, Mapping):
telemetry_type = "air_quality"
else:
telemetry_type = None
if telemetry_type is not None and telemetry_type not in _VALID_TELEMETRY_TYPES:
config._debug_log(
"Unexpected telemetry_type value; dropping field",
context="handlers.store_telemetry",
severity="warning",
always=True,
telemetry_type=telemetry_type,
)
telemetry_type = None
channel = _coerce_int(_first(decoded, "channel", default=None))
if channel is None:
channel = _coerce_int(_first(packet, "channel", default=None))
if channel is None:
channel = 0
portnum = _first(decoded, "portnum", default=None)
portnum = str(portnum) if portnum not in {None, ""} else None
bitfield = _coerce_int(_first(decoded, "bitfield", default=None))
snr = _coerce_float(_first(packet, "snr", "rx_snr", "rxSnr", default=None))
rssi = _coerce_int(_first(packet, "rssi", "rx_rssi", "rxRssi", default=None))
hop_limit = _coerce_int(_first(packet, "hopLimit", "hop_limit", default=None))
payload_bytes = _extract_payload_bytes(decoded)
payload_b64 = base64_payload(payload_bytes) or ""
battery_level = _coerce_float(
_first(
telemetry_section,
"batteryLevel",
"battery_level",
"deviceMetrics.batteryLevel",
"environmentMetrics.battery_level",
"deviceMetrics.battery_level",
default=None,
)
)
voltage = _coerce_float(
_first(
telemetry_section,
"voltage",
"environmentMetrics.voltage",
"deviceMetrics.voltage",
default=None,
)
)
channel_utilization = _coerce_float(
_first(
telemetry_section,
"channelUtilization",
"channel_utilization",
"deviceMetrics.channelUtilization",
"deviceMetrics.channel_utilization",
default=None,
)
)
air_util_tx = _coerce_float(
_first(
telemetry_section,
"airUtilTx",
"air_util_tx",
"deviceMetrics.airUtilTx",
"deviceMetrics.air_util_tx",
default=None,
)
)
uptime_seconds = _coerce_int(
_first(
telemetry_section,
"uptimeSeconds",
"uptime_seconds",
"deviceMetrics.uptimeSeconds",
"deviceMetrics.uptime_seconds",
default=None,
)
)
temperature = _coerce_float(
_first(
telemetry_section,
"temperature",
"environmentMetrics.temperature",
default=None,
)
)
relative_humidity = _coerce_float(
_first(
telemetry_section,
"relativeHumidity",
"relative_humidity",
"environmentMetrics.relativeHumidity",
"environmentMetrics.relative_humidity",
default=None,
)
)
barometric_pressure = _coerce_float(
_first(
telemetry_section,
"barometricPressure",
"barometric_pressure",
"environmentMetrics.barometricPressure",
"environmentMetrics.barometric_pressure",
default=None,
)
)
current = _coerce_float(
_first(
telemetry_section,
"current",
"deviceMetrics.current",
"deviceMetrics.current_ma",
"deviceMetrics.currentMa",
"environmentMetrics.current",
default=None,
)
)
gas_resistance = _coerce_float(
_first(
telemetry_section,
"gasResistance",
"gas_resistance",
"environmentMetrics.gasResistance",
"environmentMetrics.gas_resistance",
default=None,
)
)
iaq = _coerce_int(
_first(
telemetry_section,
"iaq",
"environmentMetrics.iaq",
"environmentMetrics.iaqIndex",
"environmentMetrics.iaq_index",
default=None,
)
)
distance = _coerce_float(
_first(
telemetry_section,
"distance",
"environmentMetrics.distance",
"environmentMetrics.range",
"environmentMetrics.rangeMeters",
default=None,
)
)
lux = _coerce_float(
_first(
telemetry_section,
"lux",
"environmentMetrics.lux",
"environmentMetrics.illuminance",
default=None,
)
)
white_lux = _coerce_float(
_first(
telemetry_section,
"whiteLux",
"white_lux",
"environmentMetrics.whiteLux",
"environmentMetrics.white_lux",
default=None,
)
)
ir_lux = _coerce_float(
_first(
telemetry_section,
"irLux",
"ir_lux",
"environmentMetrics.irLux",
"environmentMetrics.ir_lux",
default=None,
)
)
uv_lux = _coerce_float(
_first(
telemetry_section,
"uvLux",
"uv_lux",
"environmentMetrics.uvLux",
"environmentMetrics.uv_lux",
"environmentMetrics.uvIndex",
default=None,
)
)
wind_direction = _coerce_int(
_first(
telemetry_section,
"windDirection",
"wind_direction",
"environmentMetrics.windDirection",
"environmentMetrics.wind_direction",
default=None,
)
)
wind_speed = _coerce_float(
_first(
telemetry_section,
"windSpeed",
"wind_speed",
"environmentMetrics.windSpeed",
"environmentMetrics.wind_speed",
"environmentMetrics.windSpeedMps",
default=None,
)
)
wind_gust = _coerce_float(
_first(
telemetry_section,
"windGust",
"wind_gust",
"environmentMetrics.windGust",
"environmentMetrics.wind_gust",
default=None,
)
)
wind_lull = _coerce_float(
_first(
telemetry_section,
"windLull",
"wind_lull",
"environmentMetrics.windLull",
"environmentMetrics.wind_lull",
default=None,
)
)
weight = _coerce_float(
_first(
telemetry_section,
"weight",
"environmentMetrics.weight",
"environmentMetrics.mass",
default=None,
)
)
radiation = _coerce_float(
_first(
telemetry_section,
"radiation",
"environmentMetrics.radiation",
"environmentMetrics.radiationLevel",
default=None,
)
)
rainfall_1h = _coerce_float(
_first(
telemetry_section,
"rainfall1h",
"rainfall_1h",
"environmentMetrics.rainfall1h",
"environmentMetrics.rainfall_1h",
"environmentMetrics.rainfallOneHour",
default=None,
)
)
rainfall_24h = _coerce_float(
_first(
telemetry_section,
"rainfall24h",
"rainfall_24h",
"environmentMetrics.rainfall24h",
"environmentMetrics.rainfall_24h",
"environmentMetrics.rainfallTwentyFourHour",
default=None,
)
)
soil_moisture = _coerce_int(
_first(
telemetry_section,
"soilMoisture",
"soil_moisture",
"environmentMetrics.soilMoisture",
"environmentMetrics.soil_moisture",
default=None,
)
)
soil_temperature = _coerce_float(
_first(
telemetry_section,
"soilTemperature",
"soil_temperature",
"environmentMetrics.soilTemperature",
"environmentMetrics.soil_temperature",
default=None,
)
)
telemetry_payload = {
"id": pkt_id,
"node_id": node_id,
"node_num": node_num,
"from_id": node_id or raw_from,
"to_id": to_id,
"rx_time": rx_time,
"rx_iso": rx_iso,
"telemetry_time": telemetry_time,
"channel": channel,
"portnum": portnum,
"bitfield": bitfield,
"snr": snr,
"rssi": rssi,
"hop_limit": hop_limit,
"payload_b64": payload_b64,
"ingestor": _state.host_node_id(),
}
# Conditionally include metric keys so the API ignores absent fields rather
# than overwriting existing values with null.
if battery_level is not None:
telemetry_payload["battery_level"] = battery_level
if voltage is not None:
telemetry_payload["voltage"] = voltage
if channel_utilization is not None:
telemetry_payload["channel_utilization"] = channel_utilization
if air_util_tx is not None:
telemetry_payload["air_util_tx"] = air_util_tx
if uptime_seconds is not None:
telemetry_payload["uptime_seconds"] = uptime_seconds
if temperature is not None:
telemetry_payload["temperature"] = temperature
if relative_humidity is not None:
telemetry_payload["relative_humidity"] = relative_humidity
if barometric_pressure is not None:
telemetry_payload["barometric_pressure"] = barometric_pressure
if current is not None:
telemetry_payload["current"] = current
if gas_resistance is not None:
telemetry_payload["gas_resistance"] = gas_resistance
if iaq is not None:
telemetry_payload["iaq"] = iaq
if distance is not None:
telemetry_payload["distance"] = distance
if lux is not None:
telemetry_payload["lux"] = lux
if white_lux is not None:
telemetry_payload["white_lux"] = white_lux
if ir_lux is not None:
telemetry_payload["ir_lux"] = ir_lux
if uv_lux is not None:
telemetry_payload["uv_lux"] = uv_lux
if wind_direction is not None:
telemetry_payload["wind_direction"] = wind_direction
if wind_speed is not None:
telemetry_payload["wind_speed"] = wind_speed
if wind_gust is not None:
telemetry_payload["wind_gust"] = wind_gust
if wind_lull is not None:
telemetry_payload["wind_lull"] = wind_lull
if weight is not None:
telemetry_payload["weight"] = weight
if radiation is not None:
telemetry_payload["radiation"] = radiation
if rainfall_1h is not None:
telemetry_payload["rainfall_1h"] = rainfall_1h
if rainfall_24h is not None:
telemetry_payload["rainfall_24h"] = rainfall_24h
if soil_moisture is not None:
telemetry_payload["soil_moisture"] = soil_moisture
if soil_temperature is not None:
telemetry_payload["soil_temperature"] = soil_temperature
if telemetry_type is not None:
telemetry_payload["telemetry_type"] = telemetry_type
queue._queue_post_json(
"/api/telemetry",
_apply_radio_metadata(telemetry_payload),
priority=queue._TELEMETRY_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
"Queued telemetry payload",
context="handlers.store_telemetry",
node_id=node_id,
battery_level=battery_level,
voltage=voltage,
)
def store_router_heartbeat_packet(packet: Mapping) -> None:
"""Persist a ``STORE_FORWARD_APP ROUTER_HEARTBEAT`` as a node presence update.
The heartbeat carries no message payload; the only actionable signal is
that the store-and-forward router is alive at the observed ``rx_time``.
All other fields are left untouched so the router's existing profile is
not overwritten.
Parameters:
packet: Raw packet metadata.
Returns:
``None``. A minimal node upsert is enqueued at low priority.
"""
node_id = _canonical_node_id(
_first(packet, "fromId", "from_id", "from", default=None)
)
if node_id is None:
return
rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))
node_payload: dict = {"lastHeard": rx_time}
nodes_payload = _apply_radio_metadata_to_nodes({node_id: node_payload})
nodes_payload["ingestor"] = _state.host_node_id()
queue._queue_post_json(
"/api/nodes", nodes_payload, priority=queue._DEFAULT_POST_PRIORITY
)
if config.DEBUG:
config._debug_log(
"Queued router heartbeat node upsert",
context="handlers.store_router_heartbeat",
node_id=node_id,
rx_time=rx_time,
)
__all__ = [
"store_router_heartbeat_packet",
"store_telemetry_packet",
]
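The oneof discrimination above can be shown as a standalone sketch; the camelCase keys mirror the decoded telemetry section (the real handler also accepts snake_case variants), and this helper is illustrative rather than the module's actual code path:

```python
from collections.abc import Mapping


def classify(telemetry_section: Mapping) -> str | None:
    """Mirror of the elif chain above; device is checked first because the
    device sub-object also carries a voltage field."""
    for key, label in (
        ("deviceMetrics", "device"),
        ("environmentMetrics", "environment"),
        ("powerMetrics", "power"),
        ("airQualityMetrics", "air_quality"),
    ):
        if isinstance(telemetry_section.get(key), Mapping):
            return label
    return None


assert classify({"environmentMetrics": {"temperature": 21.5}}) == "environment"
assert classify({}) is None
```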
+1
@@ -113,6 +113,7 @@ def queue_ingestor_heartbeat(
"start_time": STATE.start_time,
"last_seen_time": now,
"version": INGESTOR_VERSION,
"protocol": getattr(config, "PROTOCOL", "meshtastic") or "meshtastic",
}
if getattr(config, "LORA_FREQ", None) is not None:
payload["lora_freq"] = config.LORA_FREQ
+143 -52
@@ -17,7 +17,6 @@
from __future__ import annotations
import contextlib
import glob
import importlib
import ipaddress
import math
@@ -33,6 +32,13 @@ except Exception: # pragma: no cover - dependency optional in tests
meshtastic = None # type: ignore[assignment]
from . import channels, config, serialization
from .connection import (
BLE_ADDRESS_RE,
DEFAULT_TCP_PORT,
DEFAULT_SERIAL_PATTERNS,
default_serial_targets,
parse_ble_target,
)
def _ensure_mapping(value) -> Mapping | None:
@@ -151,7 +157,21 @@ def _candidate_node_id(mapping: Mapping | None) -> str | None:
def _extract_host_node_id(iface) -> str | None:
"""Return the canonical node identifier for the connected host device."""
"""Return the canonical node identifier for the connected host device.
Searches a sequence of well-known attribute names (``myInfo``,
``my_node_info``, etc.) on ``iface`` for a mapping that contains a
recognisable node identifier, then falls back to the raw ``myNodeNum``
integer attribute.
Parameters:
iface: Live Meshtastic interface object, or any object that exposes
node-identity attributes in one of the expected forms.
Returns:
A canonical ``!xxxxxxxx`` node identifier, or ``None`` when no
identifiable host node information is available.
"""
if iface is None:
return None
@@ -239,6 +259,9 @@ def _patch_meshtastic_nodeinfo_handler() -> None:
with contextlib.suppress(Exception):
mesh_interface_module = importlib.import_module("meshtastic.mesh_interface")
# Replace the module-level handler only once; the sentinel attribute prevents
# re-wrapping if _patch_meshtastic_nodeinfo_handler() is called again after
# the interface module is reloaded or re-imported.
if not getattr(original, "_potato_mesh_safe_wrapper", False):
module._onNodeInfoReceive = _build_safe_nodeinfo_callback(original)
@@ -297,6 +320,22 @@ def _patch_nodeinfo_handler_class(
"""Subclass that guards against missing node identifiers."""
def onReceive(self, iface, packet): # type: ignore[override]
"""Normalise ``packet`` before dispatching to the parent handler.
Injects a canonical ``id`` field when one can be inferred from the
packet's other fields, then delegates to the original
``NodeInfoHandler.onReceive``. A ``KeyError`` on ``"id"`` is
suppressed because some firmware versions omit the field entirely.
Parameters:
iface: The Meshtastic interface that received the packet.
packet: Raw nodeinfo packet dict, possibly lacking an ``id``
key.
Returns:
The return value of the parent handler, or ``None`` when a
missing ``"id"`` key would otherwise raise.
"""
normalised = _normalise_nodeinfo_packet(packet)
if normalised is not None:
packet = normalised
@@ -472,16 +511,96 @@ def _resolve_lora_message(local_config: Any) -> Any | None:
return None
# Maps Meshtastic region enum name to (base_freq_MHz, channel_spacing_MHz).
# Values are derived from the Meshtastic firmware RegionInfo tables.
# Used by _computed_channel_frequency to derive the actual radio frequency
# from the region and channel index.
_REGION_CHANNEL_PARAMS: dict[str, tuple[float, float]] = {
"US": (902.0, 0.25), # 902928 MHz; e.g. ch 52 ≈ 915 MHz at 250 kHz spacing
"EU_433": (433.175, 0.2),
"EU_868": (869.525, 0.5), # actual primary ≈ 869.525 MHz, not 868
"CN": (470.0, 0.2),
"JP": (920.875, 0.5),
"ANZ": (916.0, 0.5),
"KR": (921.9, 0.5),
"TW": (923.0, 0.5),
"RU": (868.9, 0.5),
"IN": (865.0, 0.5),
"NZ_865": (864.0, 0.5),
"TH": (920.0, 0.5),
"LORA_24": (2400.0, 0.5),
"UA_433": (433.175, 0.2),
"UA_868": (868.0, 0.5),
"MY_433": (433.0, 0.2),
"MY_919": (919.0, 0.5),
"SG_923": (923.0, 0.5),
"PH_433": (433.0, 0.2),
"PH_868": (868.0, 0.5),
"PH_915": (915.0, 0.5),
"ANZ_433": (433.0, 0.2),
"KZ_433": (433.0, 0.2),
"KZ_863": (863.125, 0.5),
"NP_865": (865.0, 0.5),
"BR_902": (902.0, 0.25),
# IL (Israel) is absent from meshtastic Python lib 2.7.8 protobufs; the
# enum value is unresolvable at runtime. Operators on IL firmware should
# set the FREQUENCY environment variable to override.
}
def _computed_channel_frequency(
enum_name: str | None,
channel_num: int | None,
) -> int | None:
"""Compute the floor MHz frequency for a known region and channel index.
Looks up *enum_name* in :data:`_REGION_CHANNEL_PARAMS` and returns
``floor(base_freq + channel_num * spacing)``. Returns ``None`` when the
region is not in the table. A missing or negative *channel_num* is
treated as 0 so the base frequency is always usable.
Args:
enum_name: Region enum name as returned by
:func:`_enum_name_from_field`, e.g. ``"EU_868"`` or ``"US"``.
channel_num: Zero-based channel index from the device LoRa config.
Returns:
Floored MHz as :class:`int`, or ``None`` if the region is unknown.
"""
if enum_name is None:
return None
params = _REGION_CHANNEL_PARAMS.get(enum_name)
if params is None:
return None
base, spacing = params
idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
return math.floor(base + idx * spacing)
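# Worked example (illustrative, not part of the module): the "US" entry above
# gives (base=902.0, spacing=0.25), so channel 52 yields
# floor(902.0 + 52 * 0.25) = 915 MHz, matching the "ch 52 ≈ 915 MHz" note.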
def _region_frequency(lora_message: Any) -> int | float | str | None:
"""Derive the LoRa region frequency in MHz or the region label from ``lora_message``.
Numeric override values are floored to the nearest MHz to align with the
integer frequencies expected elsewhere in the ingestion pipeline.
Frequency sources are tried in priority order:
1. ``override_frequency > 0``: explicit radio override, floored to MHz.
2. :data:`_REGION_CHANNEL_PARAMS` lookup + ``channel_num``: the actual
band-plan frequency derived from the device's region and channel index,
floored to MHz.
3. Largest digit token ≥ 100 parsed from the region enum name string.
4. Largest digit token < 100 from the enum name (reversed scan).
5. Full enum name string, raw integer ≥ 100, or raw string as a label.
Args:
lora_message: A LoRa config protobuf message or compatible object.
Returns:
An integer MHz frequency, a fallback string label, or ``None``.
"""
if lora_message is None:
return None
# Step 1 — explicit radio override
override_frequency = getattr(lora_message, "override_frequency", None)
if override_frequency is not None:
if isinstance(override_frequency, (int, float)):
@@ -494,6 +613,15 @@ def _region_frequency(lora_message: Any) -> int | float | str | None:
if region_value is None:
return None
enum_name = _enum_name_from_field(lora_message, "region", region_value)
# Step 2 — lookup table + channel offset (actual band-plan frequency)
if enum_name:
channel_num = getattr(lora_message, "channel_num", None)
computed = _computed_channel_frequency(enum_name, channel_num)
if computed is not None:
return computed
# Steps 3-5 — parse digits from enum name (fallback for unknown regions)
if enum_name:
digits = re.findall(r"\d+", enum_name)
for token in digits:
@@ -616,25 +744,13 @@ def _ensure_channel_metadata(iface: Any) -> None:
)
_DEFAULT_TCP_PORT = 4403
_DEFAULT_TCP_TARGET = "http://127.0.0.1"
_DEFAULT_SERIAL_PATTERNS = (
"/dev/ttyACM*",
"/dev/ttyUSB*",
"/dev/tty.usbmodem*",
"/dev/tty.usbserial*",
"/dev/cu.usbmodem*",
"/dev/cu.usbserial*",
)
# Support both MAC addresses (Linux/Windows) and UUIDs (macOS)
_BLE_ADDRESS_RE = re.compile(
r"^(?:"
r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}|" # MAC address format
r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}" # UUID format
r")$"
)
# Private aliases so that existing internal callers and monkeypatching in
# tests keep working without modification.
_DEFAULT_TCP_PORT = DEFAULT_TCP_PORT # backward-compat alias
_DEFAULT_SERIAL_PATTERNS = DEFAULT_SERIAL_PATTERNS # backward-compat alias
_BLE_ADDRESS_RE = BLE_ADDRESS_RE # backward-compat alias
class _DummySerialInterface:
@@ -644,27 +760,11 @@ class _DummySerialInterface:
self.nodes: dict = {}
def close(self) -> None: # pragma: no cover - nothing to close
"""No-op: the dummy interface holds no resources to release."""
pass
def _parse_ble_target(value: str) -> str | None:
"""Return a normalized BLE address (MAC or UUID) when ``value`` matches the format.
Parameters:
value: User-provided target string.
Returns:
The normalised MAC address or UUID, or ``None`` when validation fails.
"""
if not value:
return None
value = value.strip()
if not value:
return None
if _BLE_ADDRESS_RE.fullmatch(value):
return value.upper()
return None
_parse_ble_target = parse_ble_target # backward-compat alias
def _parse_network_target(value: str) -> tuple[str, int] | None:
@@ -711,6 +811,9 @@ def _parse_network_target(value: str) -> tuple[str, int] | None:
if result:
return result
# For bare "host:port" strings that urlparse may misparse, try a manual
# partition. The `startswith("[")` guard excludes IPv6 bracket notation
# (e.g. "[::1]:8080") because those already succeed via urlparse above.
if value.count(":") == 1 and not value.startswith("["):
host, _, port_text = value.partition(":")
try:
@@ -812,19 +915,7 @@ class NoAvailableMeshInterface(RuntimeError):
"""Raised when no default mesh interface can be created."""
def _default_serial_targets() -> list[str]:
"""Return candidate serial device paths for auto-discovery."""
candidates: list[str] = []
seen: set[str] = set()
for pattern in _DEFAULT_SERIAL_PATTERNS:
for path in sorted(glob.glob(pattern)):
if path not in seen:
candidates.append(path)
seen.add(path)
if "/dev/ttyACM0" not in seen:
candidates.append("/dev/ttyACM0")
return candidates
_default_serial_targets = default_serial_targets # backward-compat alias
def _create_default_interface() -> tuple[object, str]:
+57
@@ -0,0 +1,57 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MeshProtocol interface for ingestion sources.
This module defines the seam so future protocols (MeshCore, Reticulum, ...) can
be added without changing the web app ingest contract.
"""
from __future__ import annotations
from collections.abc import Iterable
from typing import Protocol, runtime_checkable
@runtime_checkable
class MeshProtocol(Protocol):
"""Abstract mesh protocol source."""
name: str
def subscribe(self) -> list[str]:
"""Subscribe to any async receive callbacks and return topic names."""
def connect(
self, *, active_candidate: str | None
) -> tuple[object, str | None, str | None]:
"""Create an interface connection.
Returns:
(iface, resolved_target, next_active_candidate)
"""
def extract_host_node_id(self, iface: object) -> str | None:
"""Best-effort extraction of the connected host node id."""
def node_snapshot_items(self, iface: object) -> Iterable[tuple[str, object]]:
"""Return iterable of (node_id, node_obj) for initial snapshot."""
__all__ = [
"MeshProtocol",
]
# Backwards-compatibility alias — import Provider from here during transition.
Provider = MeshProtocol
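A minimal conforming provider illustrates the structural contract; the import path and the ``NullProvider`` name are assumptions for this sketch:

```python
from collections.abc import Iterable

from data.mesh_ingestor.protocol import MeshProtocol  # import path assumed


class NullProvider:
    """Hypothetical do-nothing provider that satisfies the MeshProtocol seam."""

    name = "null"

    def subscribe(self) -> list[str]:
        return []  # no receive callbacks to wire up

    def connect(self, *, active_candidate: str | None):
        # (iface, resolved_target, next_active_candidate)
        return object(), None, active_candidate

    def extract_host_node_id(self, iface: object) -> str | None:
        return None

    def node_snapshot_items(self, iface: object) -> Iterable[tuple[str, object]]:
        return []


# runtime_checkable permits a structural isinstance check against the seam.
assert isinstance(NullProvider(), MeshProtocol)
```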
+115
@@ -0,0 +1,115 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Node identity helpers shared across ingestor providers.
The web application keys nodes by a canonical textual identifier of the form
``!%08x`` (lowercase hex). Both the Python collector and Ruby server accept
several input forms (ints, ``0x`` hex strings, ``!`` hex strings, decimal
strings). This module centralizes that normalization.
"""
from __future__ import annotations
from typing import Final
CANONICAL_PREFIX: Final[str] = "!"
def canonical_node_id(value: object) -> str | None:
"""Convert ``value`` into canonical ``!xxxxxxxx`` form.
Parameters:
value: Node reference which may be an int, float, or string.
Returns:
Canonical node id string or ``None`` when parsing fails.
"""
if value is None:
return None
if isinstance(value, (int, float)):
try:
num = int(value)
except (TypeError, ValueError):
return None
if num < 0:
return None
return f"{CANONICAL_PREFIX}{num & 0xFFFFFFFF:08x}"
if not isinstance(value, str):
return None
trimmed = value.strip()
if not trimmed:
return None
if trimmed.startswith("^"):
# Meshtastic special destinations like "^all" are not node ids; callers
# that already accept them should keep passing them through unchanged.
return trimmed
if trimmed.startswith(CANONICAL_PREFIX):
body = trimmed[1:]
elif trimmed.lower().startswith("0x"):
body = trimmed[2:]
elif trimmed.isdigit():
try:
return f"{CANONICAL_PREFIX}{int(trimmed, 10) & 0xFFFFFFFF:08x}"
except ValueError:
return None
else:
body = trimmed
if not body:
return None
try:
return f"{CANONICAL_PREFIX}{int(body, 16) & 0xFFFFFFFF:08x}"
except ValueError:
return None
def node_num_from_id(node_id: object) -> int | None:
"""Extract the numeric node identifier from a canonical (or near-canonical) id."""
if node_id is None:
return None
if isinstance(node_id, (int, float)):
try:
num = int(node_id)
except (TypeError, ValueError):
return None
return num if num >= 0 else None
if not isinstance(node_id, str):
return None
trimmed = node_id.strip()
if not trimmed:
return None
if trimmed.startswith(CANONICAL_PREFIX):
trimmed = trimmed[1:]
if trimmed.lower().startswith("0x"):
trimmed = trimmed[2:]
try:
return int(trimmed, 16)
except ValueError:
try:
return int(trimmed, 10)
except ValueError:
return None
__all__ = [
"CANONICAL_PREFIX",
"canonical_node_id",
"node_num_from_id",
]
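Round-trip examples of the normalisation contract (the import path follows the ``from .node_identity import …`` usage later in this changeset):

```python
from data.mesh_ingestor.node_identity import canonical_node_id, node_num_from_id

assert canonical_node_id(0xDEADBEEF) == "!deadbeef"
assert canonical_node_id("0xDEADBEEF") == "!deadbeef"
assert canonical_node_id("3735928559") == "!deadbeef"  # decimal string form
assert canonical_node_id("^all") == "^all"  # special destination passes through
assert canonical_node_id("not hex") is None
assert node_num_from_id("!deadbeef") == 0xDEADBEEF
```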
+44
@@ -0,0 +1,44 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Protocol implementations.
This package contains protocol-specific implementations (Meshtastic,
MeshCore, and others in the future).
"""
from __future__ import annotations
from .meshtastic import MeshtasticProvider
def __getattr__(name: str) -> object:
"""Lazy-load protocol classes and exceptions that carry optional heavy dependencies.
``MeshcoreProvider`` and ``ClosedBeforeConnectedError`` are imported on
demand so that the MeshCore library (once wired in) is not loaded at
startup when ``PROTOCOL=meshtastic``.
"""
if name == "MeshcoreProvider":
from .meshcore import MeshcoreProvider
return MeshcoreProvider
if name == "ClosedBeforeConnectedError":
from .meshcore import ClosedBeforeConnectedError
return ClosedBeforeConnectedError
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
__all__ = ["MeshtasticProvider", "MeshcoreProvider", "ClosedBeforeConnectedError"]
File diff suppressed because it is too large
+100
@@ -0,0 +1,100 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Meshtastic protocol implementation."""
from __future__ import annotations
from pubsub import pub
from .. import config, daemon as _daemon, handlers, interfaces
from ..utils import _retry_dict_snapshot
class MeshtasticProvider:
"""Meshtastic ingestion protocol (current default)."""
name = "meshtastic"
def __init__(self):
self._subscribed: list[str] = []
def subscribe(self) -> list[str]:
"""Subscribe Meshtastic pubsub receive topics."""
if self._subscribed:
return list(self._subscribed)
subscribed = []
for topic in _daemon._RECEIVE_TOPICS:
try:
pub.subscribe(handlers.on_receive, topic)
subscribed.append(topic)
except Exception as exc: # pragma: no cover
config._debug_log(f"failed to subscribe to {topic!r}: {exc}")
self._subscribed = subscribed
return list(subscribed)
def connect(
self, *, active_candidate: str | None
) -> tuple[object, str | None, str | None]:
"""Create a Meshtastic interface using the existing interface helpers."""
iface = None
resolved_target = None
next_candidate = active_candidate
if active_candidate:
iface, resolved_target = interfaces._create_serial_interface(
active_candidate
)
else:
iface, resolved_target = interfaces._create_default_interface()
next_candidate = resolved_target
interfaces._ensure_radio_metadata(iface)
interfaces._ensure_channel_metadata(iface)
return iface, resolved_target, next_candidate
def extract_host_node_id(self, iface: object) -> str | None:
return interfaces._extract_host_node_id(iface)
def node_snapshot_items(self, iface: object) -> list[tuple[str, object]]:
"""Return a stable snapshot of all known nodes from ``iface``.
Uses :func:`~data.mesh_ingestor.utils._retry_dict_snapshot` to
tolerate concurrent modifications from the Meshtastic background
thread.
Parameters:
iface: Live Meshtastic interface whose ``nodes`` dict to snapshot.
Returns:
List of ``(node_id, node_dict)`` tuples, or an empty list when
the snapshot fails after retries.
"""
nodes = getattr(iface, "nodes", {}) or {}
result = _retry_dict_snapshot(lambda: list(nodes.items()))
if result is None:
config._debug_log(
"Skipping node snapshot due to concurrent modification",
context="meshtastic.snapshot",
)
return []
return result
__all__ = ["MeshtasticProvider"]
+348 -30
@@ -73,52 +73,61 @@ def _payload_key_value_pairs(payload: Mapping[str, object]) -> str:
return " ".join(pairs)
_MESSAGE_POST_PRIORITY = 10
_INGESTOR_POST_PRIORITY = 80
_NEIGHBOR_POST_PRIORITY = 20
_TRACE_POST_PRIORITY = 25
_POSITION_POST_PRIORITY = 30
_TELEMETRY_POST_PRIORITY = 40
_NODE_POST_PRIORITY = 50
_INGESTOR_POST_PRIORITY = 0
_CHANNEL_POST_PRIORITY = 10
_NODE_POST_PRIORITY = 20
_MESSAGE_POST_PRIORITY = 30
_NEIGHBOR_POST_PRIORITY = 40
_TRACE_POST_PRIORITY = 50
_POSITION_POST_PRIORITY = 60
_TELEMETRY_POST_PRIORITY = 70
_DEFAULT_POST_PRIORITY = 90
_MAX_SEND_RETRIES = 3
"""Maximum number of times a failed POST item is re-queued before being dropped."""
@dataclass
class QueueState:
"""Mutable state for the HTTP POST priority queue."""
lock: threading.Lock = field(default_factory=threading.Lock)
queue: list[tuple[int, int, str, dict]] = field(default_factory=list)
# Heap tuple: (priority, counter, path, payload, retries).
queue: list[tuple[int, int, str, dict, int]] = field(default_factory=list)
counter: Iterable[int] = field(default_factory=itertools.count)
active: bool = False
# Background drain thread. When the drainer is alive, _queue_post_json
# signals drain_event instead of blocking the caller with HTTP calls.
drain_event: threading.Event = field(default_factory=threading.Event)
drainer: threading.Thread | None = None
# Set to request the drainer thread to exit its loop cleanly.
shutdown: threading.Event = field(default_factory=threading.Event)
STATE = QueueState()
def _post_json(
def _send_single(
instance: str,
api_token: str,
path: str,
payload: dict,
*,
instance: str | None = None,
api_token: str | None = None,
) -> None:
"""Send a JSON payload to the configured web API.
) -> bool:
"""Transmit a single JSON payload to one instance.
Parameters:
path: API path relative to the configured instance root.
instance: Base URL of the target instance.
api_token: Bearer token for this instance (may be empty).
path: API path relative to the instance root.
payload: JSON-serialisable body to transmit.
instance: Optional override for :data:`config.INSTANCE`.
api_token: Optional override for :data:`config.API_TOKEN`.
Returns:
``True`` when the request succeeded, ``False`` on failure.
"""
if instance is None:
instance = config.INSTANCE
if api_token is None:
api_token = config.API_TOKEN
if not instance:
return
return True
url = f"{instance}{path}"
data = json.dumps(payload).encode("utf-8")
@@ -143,15 +152,80 @@ def _post_json(
try:
with urllib.request.urlopen(req, timeout=10) as resp:
resp.read()
except Exception as exc: # pragma: no cover - exercised in production
return True
except Exception as exc:
config._debug_log(
"POST request failed",
context="queue.post_json",
severity="warn",
always=True,
url=url,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
return False
def _post_json(
path: str,
payload: dict,
*,
instance: str | None = None,
api_token: str | None = None,
) -> bool:
"""Send a JSON payload to one or more configured web API instances.
When ``instance`` is provided explicitly the payload is sent to that
single target. Otherwise every ``(url, token)`` pair in
:data:`config.INSTANCES` receives the payload independently so that
one failure does not block delivery to the remaining targets.
Parameters:
path: API path relative to the instance root.
payload: JSON-serialisable body to transmit.
instance: Optional single-instance override.
api_token: Optional token override (only used with ``instance``).
Returns:
``True`` when at least one instance received the payload
successfully, ``False`` when all targets failed. A missing
configuration is not a transient failure and returns ``True``
(retrying would not help).
"""
if instance is not None:
if not instance:
return True
return _send_single(instance, api_token or "", path, payload)
targets: tuple[tuple[str, str], ...] = config.INSTANCES
if not targets:
# Backward-compatible fallback for callers that only set
# config.INSTANCE / config.API_TOKEN directly.
inst = config.INSTANCE
if not inst:
try:
config._debug_log(
"No target instances configured; discarding payload",
context="queue.post_json",
severity="error",
always=True,
path=path,
)
except Exception:
pass
# Not a transient failure: nothing is configured, so retrying cannot
# help; report success so the item is discarded rather than re-queued.
return True
return _send_single(inst, api_token or config.API_TOKEN, path, payload)
any_ok = False
any_attempted = False
for inst, token in targets:
if not inst:
continue
any_attempted = True
if _send_single(inst, token, path, payload):
any_ok = True
return any_ok or not any_attempted
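# Illustrative summary of the return contract above: one success among the
# configured targets -> True; every attempted target fails -> False; no
# non-empty targets configured -> True (nothing was attempted, so a retry
# would not help).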
def _enqueue_post_json(
@@ -160,6 +234,7 @@ def _enqueue_post_json(
priority: int,
*,
state: QueueState = STATE,
retries: int = 0,
) -> None:
"""Store a POST request in the priority queue.
@@ -168,11 +243,17 @@ def _enqueue_post_json(
payload: JSON-serialisable body.
priority: Lower values execute first.
state: Shared queue state, injectable for testing.
retries: Number of prior failed send attempts for this item.
"""
with state.lock:
counter = next(state.counter)
heapq.heappush(state.queue, (priority, counter, path, payload))
# Heap tuple: (priority, counter, path, payload, retries). Lower
# priority values are dequeued first (min-heap semantics). The
# monotonically increasing counter breaks ties so equal-priority
# items are processed in FIFO order without comparing the
# non-orderable payload dict.
heapq.heappush(state.queue, (priority, counter, path, payload, retries))
def _drain_post_queue(
@@ -180,6 +261,12 @@ def _drain_post_queue(
) -> None:
"""Process queued POST requests in priority order.
When the *send* callable returns ``False`` (transient failure) the item
is re-queued up to :data:`_MAX_SEND_RETRIES` times. Items exceeding
the limit are dropped with a warning. Custom *send* callables that
return ``None`` (the typical test/heartbeat pattern) are never retried;
the ``result is False`` identity check ensures backward compatibility.
Parameters:
state: Queue container holding pending items.
send: Optional callable used to transmit requests.
@@ -194,13 +281,184 @@ def _drain_post_queue(
if not state.queue:
state.active = False
return
_priority, _idx, path, payload = heapq.heappop(state.queue)
send(path, payload)
item = heapq.heappop(state.queue)
# Support both 5-tuple (current) and 4-tuple (legacy/test) items.
if len(item) >= 5:
priority, _idx, path, payload, retries = item[:5]
else:
priority, _idx, path, payload = item[:4]
retries = 0
result = send(path, payload)
# Only retry when the send callable explicitly signals failure
# (returns False). Custom send callables (tests, heartbeat)
# return None and must NOT be treated as failures.
if result is False:
if retries < _MAX_SEND_RETRIES:
_enqueue_post_json(
path, payload, priority, state=state, retries=retries + 1
)
else:
try:
config._debug_log(
"Dropping item after max retries",
context="queue.drain",
severity="warn",
always=True,
path=path,
retries=retries,
)
except Exception:
pass
finally:
with state.lock:
state.active = False
_QUEUE_DEPTH_WARNING_THRESHOLD = 100
"""Log a warning when the queue grows past this many items."""
def _queue_drainer_loop(state: QueueState = STATE) -> None:
"""Body of the background queue-drain daemon thread.
Blocks on :attr:`QueueState.drain_event`, clears it, then empties the
queue by calling :func:`_drain_post_queue`. The thread is created as a
daemon so it terminates automatically when the process exits.
The loop exits cleanly when :attr:`QueueState.shutdown` is set, allowing
tests (and graceful-shutdown paths) to join the thread instead of leaking
daemon threads that accumulate across a test run.
The loop is deliberately hardened so that **no** :class:`Exception` can
kill the thread. The ``_debug_log`` calls inside the error handler are
themselves wrapped in ``try/except`` to prevent cascading failures
(e.g. ``BrokenPipeError`` from ``print()`` to a closed stdout).
.. note::
There is a benign race between ``drain_event.clear()`` and the end
of :func:`_drain_post_queue`: a signal arriving in that window is
consumed by ``clear()`` but the item is still drained because the
drain loop empties the queue completely. However, an item enqueued
*after* the drain loop finds the queue empty and *before*
``wait()`` re-blocks will sit until the next ``drain_event.set()``
call (i.e. the next enqueue). This is acceptable for a best-effort
ingestor; the maximum extra latency equals the inter-packet interval.
Parameters:
state: Queue state instance to drain.
"""
try:
config._debug_log(
"Queue drainer thread started",
context="queue.drainer",
severity="info",
always=True,
)
except Exception:
pass
while not state.shutdown.is_set():
state.drain_event.wait(timeout=1.0)
if state.shutdown.is_set():
break
state.drain_event.clear()
depth = len(state.queue)
if depth > _QUEUE_DEPTH_WARNING_THRESHOLD:
try:
config._debug_log(
"Queue depth warning",
context="queue.drainer",
severity="warn",
always=True,
depth=depth,
)
except Exception:
pass
try:
_drain_post_queue(state)
except Exception as exc:
try:
config._debug_log(
"Queue drainer error",
context="queue.drainer",
severity="error",
always=True,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
except Exception:
pass
try:
config._debug_log(
"Queue drainer thread exiting",
context="queue.drainer",
severity="info",
always=True,
)
except Exception:
pass
def _start_queue_drainer(state: QueueState = STATE) -> None:
"""Idempotently start the background queue-drain thread.
Calling this function when a drainer thread is already alive is a
no-op. The thread is created as a daemon so it does not prevent
process exit. The check-and-start is performed under :attr:`state.lock`
to avoid starting duplicate threads under concurrent callers.
If items are already in the queue when the drainer is started,
:attr:`QueueState.drain_event` is signalled immediately so they are not
stranded waiting for the next packet to arrive.
Parameters:
state: Queue state whose :func:`_queue_drainer_loop` to start.
"""
with state.lock:
if state.drainer is not None and state.drainer.is_alive():
return
# Reset in case the prior thread was stopped or crashed while
# shutdown was already set.
state.shutdown.clear()
t = threading.Thread(
target=_queue_drainer_loop,
args=(state,),
name="queue-drainer",
daemon=True,
)
t.start()
state.drainer = t
if state.queue:
state.drain_event.set()
def _stop_queue_drainer(state: QueueState = STATE, timeout: float = 5.0) -> None:
"""Signal the drainer thread to exit and wait for it to finish.
Sets :attr:`QueueState.shutdown` and :attr:`QueueState.drain_event` so
the loop wakes up, observes the shutdown flag, and terminates. After
joining (up to *timeout* seconds) the drainer reference is cleared.
Safe to call when no drainer is running (no-op).
Parameters:
state: Queue state whose drainer to stop.
timeout: Maximum seconds to wait for the thread to finish.
"""
if state.drainer is None or not state.drainer.is_alive():
return
state.shutdown.set()
state.drain_event.set()
state.drainer.join(timeout=timeout)
state.drainer = None
def _queue_post_json(
path: str,
payload: dict,
@@ -209,14 +467,32 @@ def _queue_post_json(
state: QueueState = STATE,
send: Callable[[str, dict], None] | None = None,
) -> None:
"""Queue a POST request and start processing if idle.
"""Queue a POST request and wake the drain thread (or drain inline).
When a background drainer thread is running (started via
:func:`_start_queue_drainer`), this function enqueues the item and
signals :attr:`QueueState.drain_event` without blocking; the drain
happens on the dedicated thread. This keeps the caller's thread (which
may be the Meshtastic asyncio I/O thread) free to process serial events.
When no background drainer is alive the call falls back to a
synchronous inline drain. This path is used by tests (which pass a
``send`` override via :func:`_fresh_state`) and for any standalone use
without calling :func:`_start_queue_drainer`.
.. note::
The background drainer is used **only** when no custom ``send``
override is provided (i.e. the production ``_post_json`` path).
Any caller that supplies a custom ``send`` (tests, heartbeat
helpers) always gets the synchronous inline drain so its transport
is honoured correctly.
Parameters:
path: API path for the request.
payload: JSON payload to send.
priority: Scheduling priority where lower values run first.
state: Queue container used to store pending requests.
send: Optional transport override, primarily for tests.
send: Optional transport override (synchronous fallback only).
"""
if send is None:
@@ -236,6 +512,42 @@ def _queue_post_json(
)
_enqueue_post_json(path, payload, priority, state=state)
# Use the background drainer only when it is alive AND no custom send
# override is in play. A custom send (used by tests and callers such as
# ingestors.queue_ingestor_heartbeat) must be honoured synchronously
# because the background drainer always calls _drain_post_queue without
# a send override.
#
# The ``is`` check is intentional: _post_json is a module-level function
# so identity comparison reliably detects the "no override" default that
# was assigned at the top of this function.
if send is _post_json:
if state.drainer is not None and state.drainer.is_alive():
state.drain_event.set()
return
# The drainer was previously started but has died (e.g. unhandled
# exception). Restart it so the caller stays non-blocking and the
# MeshCore asyncio event loop is not stalled by inline HTTP calls.
if state.drainer is not None:
try:
config._debug_log(
"Restarting dead queue drainer thread",
context="queue.queue_post_json",
severity="warn",
always=True,
)
except Exception:
pass
_start_queue_drainer(state)
# If the restart succeeded, delegate to the background thread.
if state.drainer is not None and state.drainer.is_alive():
state.drain_event.set()
return
# Synchronous fallback: no drainer was ever started, the restart
# failed, or a custom send override is in play.
with state.lock:
if state.active:
return
@@ -258,17 +570,23 @@ def _clear_post_queue(state: QueueState = STATE) -> None:
__all__ = [
"STATE",
"QueueState",
"_CHANNEL_POST_PRIORITY",
"_DEFAULT_POST_PRIORITY",
"_MESSAGE_POST_PRIORITY",
"_INGESTOR_POST_PRIORITY",
"_MAX_SEND_RETRIES",
"_MESSAGE_POST_PRIORITY",
"_NEIGHBOR_POST_PRIORITY",
"_NODE_POST_PRIORITY",
"_POSITION_POST_PRIORITY",
"_QUEUE_DEPTH_WARNING_THRESHOLD",
"_TRACE_POST_PRIORITY",
"_TELEMETRY_POST_PRIORITY",
"_clear_post_queue",
"_drain_post_queue",
"_enqueue_post_json",
"_post_json",
"_queue_drainer_loop",
"_queue_post_json",
"_start_queue_drainer",
"_stop_queue_drainer",
]
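A hedged sketch of the retry contract from ``_drain_post_queue``'s docstring, assuming the package path ``data.mesh_ingestor.queue`` and that the inline drain loop runs until the heap is empty:

```python
from data.mesh_ingestor import queue as q

state = q.QueueState()
attempts: list[str] = []


def flaky_send(path: str, payload: dict) -> bool:
    attempts.append(path)
    return False  # explicit transient failure -> the item is re-queued


q._enqueue_post_json("/api/nodes", {"lastHeard": 0}, q._NODE_POST_PRIORITY, state=state)
q._drain_post_queue(state, send=flaky_send)

# One initial attempt plus _MAX_SEND_RETRIES re-queues, then the item drops.
assert len(attempts) == 1 + q._MAX_SEND_RETRIES  # 4 with the default of 3
```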
+7 -85
@@ -33,6 +33,9 @@ from google.protobuf.json_format import MessageToDict
from google.protobuf.message import DecodeError
from google.protobuf.message import Message as ProtoMessage
from .node_identity import canonical_node_id as _canonical_node_id
from .node_identity import node_num_from_id as _node_num_from_id
_CLI_ROLE_MODULE_NAMES: tuple[str, ...] = (
"meshtastic.cli.common",
"meshtastic.cli.roles",
@@ -125,6 +128,10 @@ def _load_cli_role_lookup() -> dict[int, str]:
mapping[key_int] = str(value)
return mapping
# Iterate through candidate module paths in preference order. The CLI
# package ships several role-enum locations across versions; we stop at
# the first module that yields a non-empty mapping so we do not silently
# merge partial enums from two different meshtastic-cli releases.
for module_name in _CLI_ROLE_MODULE_NAMES:
try:
module = importlib.import_module(module_name)
@@ -429,91 +436,6 @@ def _pkt_to_dict(packet) -> dict:
return {"_unparsed": str(packet)}
def _canonical_node_id(value) -> str | None:
"""Convert node identifiers into the canonical ``!xxxxxxxx`` format.
Parameters:
value: Input identifier which may be an int, float or string.
Returns:
The canonical identifier or ``None`` if conversion fails.
"""
if value is None:
return None
if isinstance(value, (int, float)):
try:
num = int(value)
except (TypeError, ValueError):
return None
if num < 0:
return None
return f"!{num & 0xFFFFFFFF:08x}"
if not isinstance(value, str):
return None
trimmed = value.strip()
if not trimmed:
return None
if trimmed.startswith("^"):
return trimmed
if trimmed.startswith("!"):
body = trimmed[1:]
elif trimmed.lower().startswith("0x"):
body = trimmed[2:]
elif trimmed.isdigit():
try:
return f"!{int(trimmed, 10) & 0xFFFFFFFF:08x}"
except ValueError:
return None
else:
body = trimmed
if not body:
return None
try:
return f"!{int(body, 16) & 0xFFFFFFFF:08x}"
except ValueError:
return None
def _node_num_from_id(node_id) -> int | None:
"""Extract the numeric node ID from a canonical identifier.
Parameters:
node_id: Identifier value accepted by :func:`_canonical_node_id`.
Returns:
The numeric node ID or ``None`` when parsing fails.
"""
if node_id is None:
return None
if isinstance(node_id, (int, float)):
try:
num = int(node_id)
except (TypeError, ValueError):
return None
return num if num >= 0 else None
if not isinstance(node_id, str):
return None
trimmed = node_id.strip()
if not trimmed:
return None
if trimmed.startswith("!"):
trimmed = trimmed[1:]
if trimmed.lower().startswith("0x"):
trimmed = trimmed[2:]
try:
return int(trimmed, 16)
except ValueError:
try:
return int(trimmed, 10)
except ValueError:
return None
def _merge_mappings(base, extra):
"""Merge two mapping-like objects recursively.
+56
@@ -0,0 +1,56 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Shared utility helpers for the mesh ingestor package."""
from __future__ import annotations
import time
from typing import Callable, TypeVar
_T = TypeVar("_T")
def _retry_dict_snapshot(fn: Callable[[], _T], retries: int = 3) -> _T | None:
"""Call ``fn()`` retrying on concurrent dictionary-modification errors.
Meshtastic's node dictionary is updated on a background thread. Iterating
it can raise a :class:`RuntimeError` with the message "dictionary changed
size during iteration". This helper retries the call up to ``retries``
times, yielding to the thread scheduler between attempts via :func:`time.sleep`.
Parameters:
fn: Zero-argument callable that performs the iteration.
retries: Maximum number of attempts before giving up.
Returns:
The return value of ``fn`` on success, or ``None`` when all retries are
exhausted.
"""
for _ in range(max(1, retries)):
try:
return fn()
except RuntimeError as err:
# Only retry the specific concurrent-modification error; re-raise
# anything else so genuine bugs surface immediately.
if "dictionary changed size during iteration" not in str(err):
raise
# Yield to the thread scheduler to let the mutating thread complete
# before we attempt the snapshot again.
time.sleep(0)
return None
__all__ = ["_retry_dict_snapshot"]
+3 -1
@@ -29,7 +29,9 @@ CREATE TABLE IF NOT EXISTS messages (
modem_preset TEXT,
channel_name TEXT,
reply_id INTEGER,
emoji TEXT
emoji TEXT,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic'
);
CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time);
@@ -0,0 +1,39 @@
-- Copyright © 2025-26 l5yth & contributors
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
-- Add a protocol column to every entity and event table so records from
-- different mesh backends (meshtastic, meshcore, reticulum, …) can co-exist
-- in the same database and be queried independently.
--
-- Existing rows default to 'meshtastic' for backward compatibility.
BEGIN;
ALTER TABLE ingestors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE nodes ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE messages ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE positions ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE telemetry ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE traces ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE neighbors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
-- Indices to support ?protocol= filtering on every entity endpoint without
-- full table scans as multi-protocol traffic grows.
CREATE INDEX IF NOT EXISTS idx_ingestors_protocol ON ingestors(protocol);
CREATE INDEX IF NOT EXISTS idx_nodes_protocol ON nodes(protocol);
CREATE INDEX IF NOT EXISTS idx_messages_protocol ON messages(protocol);
CREATE INDEX IF NOT EXISTS idx_positions_protocol ON positions(protocol);
CREATE INDEX IF NOT EXISTS idx_telemetry_protocol ON telemetry(protocol);
CREATE INDEX IF NOT EXISTS idx_traces_protocol ON traces(protocol);
CREATE INDEX IF NOT EXISTS idx_neighbors_protocol ON neighbors(protocol);
COMMIT;
@@ -0,0 +1,47 @@
-- Copyright © 2025-26 l5yth & contributors
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
-- Add telemetry subtype discriminator to enable per-chart type filtering.
-- Backfills existing rows using field-presence heuristics that mirror
-- classifySnapshot() in node-page.js, so historical data is classified
-- consistently regardless of whether the new ingestors are deployed yet.
BEGIN;
ALTER TABLE telemetry ADD COLUMN telemetry_type TEXT;
-- Device metrics: battery/channel fields are exclusive to device_metrics
UPDATE telemetry SET telemetry_type = 'device'
WHERE telemetry_type IS NULL
AND (battery_level IS NOT NULL OR channel_utilization IS NOT NULL
OR air_util_tx IS NOT NULL OR uptime_seconds IS NOT NULL);
-- Power sensor: current is the unambiguous power-sensor discriminator.
-- voltage is intentionally excluded here: device_metrics also stores a voltage
-- reading (~4.2 V for battery), so using voltage alone would misclassify device
-- rows whose four device-discriminator fields (battery_level, channel_utilization,
-- air_util_tx, uptime_seconds) happen to be NULL. Rows that have only voltage
-- and no other classifiable fields are left as NULL (unclassified), which is
-- more accurate than a wrong classification.
UPDATE telemetry SET telemetry_type = 'power'
WHERE telemetry_type IS NULL
AND current IS NOT NULL;
-- Environment: temperature/humidity/pressure
UPDATE telemetry SET telemetry_type = 'environment'
WHERE telemetry_type IS NULL
AND (temperature IS NOT NULL OR relative_humidity IS NOT NULL
OR barometric_pressure IS NOT NULL OR iaq IS NOT NULL
OR gas_resistance IS NOT NULL);
COMMIT;
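The same precedence can be written out in Python; this mirror of the backfill heuristic is illustrative only (the real client-side classifier is ``classifySnapshot()`` in node-page.js):

```python
def classify_row(row: dict) -> str | None:
    """Mirror of the SQL backfill precedence above, for illustration."""
    device_keys = ("battery_level", "channel_utilization", "air_util_tx", "uptime_seconds")
    if any(row.get(k) is not None for k in device_keys):
        return "device"
    if row.get("current") is not None:  # unambiguous power-sensor discriminator
        return "power"
    env_keys = ("temperature", "relative_humidity", "barometric_pressure",
                "iaq", "gas_resistance")
    if any(row.get(k) is not None for k in env_keys):
        return "environment"
    return None  # voltage-only rows stay unclassified, as argued above


assert classify_row({"voltage": 4.2}) is None
assert classify_row({"current": 120.0}) == "power"
```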
+2
@@ -17,6 +17,8 @@ CREATE TABLE IF NOT EXISTS neighbors (
neighbor_id TEXT NOT NULL,
snr REAL,
rx_time INTEGER NOT NULL,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic',
PRIMARY KEY (node_id, neighbor_id),
FOREIGN KEY (node_id) REFERENCES nodes(node_id) ON DELETE CASCADE,
FOREIGN KEY (neighbor_id) REFERENCES nodes(node_id) ON DELETE CASCADE
+4 -1
@@ -41,9 +41,12 @@ CREATE TABLE IF NOT EXISTS nodes (
longitude REAL,
altitude REAL,
lora_freq INTEGER,
modem_preset TEXT
modem_preset TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic',
synthetic BOOLEAN NOT NULL DEFAULT 0
);
CREATE INDEX IF NOT EXISTS idx_nodes_last_heard ON nodes(last_heard);
CREATE INDEX IF NOT EXISTS idx_nodes_hw_model ON nodes(hw_model);
CREATE INDEX IF NOT EXISTS idx_nodes_latlon ON nodes(latitude, longitude);
CREATE INDEX IF NOT EXISTS idx_nodes_long_name ON nodes(long_name);
+3 -1
@@ -33,7 +33,9 @@ CREATE TABLE IF NOT EXISTS positions (
rssi INTEGER,
hop_limit INTEGER,
bitfield INTEGER,
payload_b64 TEXT
payload_b64 TEXT,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic'
);
CREATE INDEX IF NOT EXISTS idx_positions_rx_time ON positions(rx_time);
+2
@@ -1,5 +1,7 @@
# Production dependencies
meshtastic>=2.5.0
meshcore>=2.3.5
bleak>=0.21.0
protobuf>=5.27.2
# Development dependencies (optional)
+4 -1
@@ -53,7 +53,10 @@ CREATE TABLE IF NOT EXISTS telemetry (
rainfall_1h REAL,
rainfall_24h REAL,
soil_moisture INTEGER,
soil_temperature REAL
soil_temperature REAL,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic',
telemetry_type TEXT
);
CREATE INDEX IF NOT EXISTS idx_telemetry_rx_time ON telemetry(rx_time);
+3 -1
@@ -21,7 +21,9 @@ CREATE TABLE IF NOT EXISTS traces (
rx_iso TEXT NOT NULL,
rssi INTEGER,
snr REAL,
elapsed_ms INTEGER
elapsed_ms INTEGER,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic'
);
CREATE TABLE IF NOT EXISTS trace_hops (
+18
@@ -49,3 +49,21 @@ services:
environment:
DEBUG: 0
restart: always
matrix-bridge:
build:
context: .
dockerfile: matrix/Dockerfile
target: runtime
environment:
DEBUG: 0
restart: always
matrix-bridge-bridge:
build:
context: .
dockerfile: matrix/Dockerfile
target: runtime
environment:
DEBUG: 0
restart: always
+14 -3
@@ -34,6 +34,7 @@ x-web-base: &web-base
- potatomesh_data:/app/.local/share/potato-mesh
- potatomesh_config:/app/.config/potato-mesh
- potatomesh_logs:/app/logs
- potatomesh_pages:/app/pages
restart: unless-stopped
deploy:
resources:
@@ -52,9 +53,10 @@ x-ingestor-base: &ingestor-base
ALLOWED_CHANNELS: ${ALLOWED_CHANNELS:-""}
HIDDEN_CHANNELS: ${HIDDEN_CHANNELS:-""}
API_TOKEN: ${API_TOKEN}
INSTANCE_DOMAIN: ${INSTANCE_DOMAIN}
POTATOMESH_INSTANCE: ${POTATOMESH_INSTANCE:-http://web:41447}
INSTANCE_DOMAIN: ${INSTANCE_DOMAIN:-http://web:41447}
DEBUG: ${DEBUG:-0}
PROTOCOL: ${PROTOCOL:-meshtastic}
ENERGY_SAVING: ${ENERGY_SAVING:-0}
FEDERATION: ${FEDERATION:-1}
PRIVATE: ${PRIVATE:-0}
volumes:
@@ -81,7 +83,12 @@ x-matrix-bridge-base: &matrix-bridge-base
image: ghcr.io/l5yth/potato-mesh-matrix-bridge-${POTATOMESH_IMAGE_ARCH:-linux-amd64}:${POTATOMESH_IMAGE_TAG:-latest}
volumes:
- potatomesh_matrix_bridge_state:/app
- ./matrix/Config.toml:/app/Config.toml:ro
- type: bind
source: ./matrix/Config.toml
target: /app/Config.toml
read_only: true
bind:
create_host_path: false
restart: unless-stopped
deploy:
resources:
@@ -128,6 +135,8 @@ services:
matrix-bridge:
<<: *matrix-bridge-base
network_mode: host
profiles:
- matrix
depends_on:
- web
extra_hosts:
@@ -152,6 +161,8 @@ volumes:
driver: local
potatomesh_logs:
driver: local
potatomesh_pages:
driver: local
potatomesh_matrix_bridge_state:
driver: local
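For reference, a minimal `.env` exercising the new ingestor knobs might look like this (every value is a placeholder; the variable names come from `x-ingestor-base` above):

```bash
# .env — placeholders only; see x-ingestor-base for defaults
API_TOKEN=change-me
INSTANCE_DOMAIN=http://web:41447
PROTOCOL=meshtastic
ENERGY_SAVING=0
FEDERATION=1
PRIVATE=0
DEBUG=0
```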
+9 -9
@@ -169,9 +169,9 @@ checksum = "5dd9dc738b7a8311c7ade152424974d8115f2cdad61e8dab8dac9f2362298510"
[[package]]
name = "bytes"
version = "1.11.0"
version = "1.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b35204fbdc0b3f4446b89fc1ac2cf84a8a68971995d0bf2e925ec7cd960f9cb3"
checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33"
[[package]]
name = "cc"
@@ -969,7 +969,7 @@ checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"
[[package]]
name = "potatomesh-matrix-bridge"
version = "0.5.10"
version = "0.6.1"
dependencies = [
"anyhow",
"axum",
@@ -1037,9 +1037,9 @@ dependencies = [
[[package]]
name = "quinn-proto"
version = "0.11.13"
version = "0.11.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1906b49b0c3bc04b5fe5d86a77925ae6524a19b816ae38ce1e426255f1d8a31"
checksum = "434b42fec591c96ef50e21e886936e66d3cc3f737104fdb9b737c40ffb94c098"
dependencies = [
"bytes",
"getrandom 0.3.4",
@@ -1087,9 +1087,9 @@ checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f"
[[package]]
name = "rand"
version = "0.9.2"
version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
checksum = "44c5af06bb1b7d3216d91932aed5265164bf384dc89cd6ba05cf59a35f5f76ea"
dependencies = [
"rand_chacha",
"rand_core",
@@ -1255,9 +1255,9 @@ dependencies = [
[[package]]
name = "rustls-webpki"
version = "0.103.8"
version = "0.103.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2ffdfa2f5286e2247234e03f680868ac2815974dc39e00ea15adc445d0aafe52"
checksum = "df33b2b81ac578cabaf06b89b0631153a3f416b0a886e8a7a1707fb51abbd1ef"
dependencies = [
"ring",
"rustls-pki-types",
+1 -1
@@ -14,7 +14,7 @@
[package]
name = "potatomesh-matrix-bridge"
version = "0.5.10"
version = "0.6.1"
edition = "2021"
[dependencies]
+36 -1
@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# potatomesh-matrix-bridge
A small Rust daemon that bridges **PotatoMesh** LoRa messages into a **Matrix** room.
@@ -90,7 +93,7 @@ room_id = "!yourroomid:example.org"
[state]
# Where to persist last seen message id
state_file = "bridge_state.json"
````
```
The `hs_token` is used to validate inbound appservice transactions. Keep it identical in `Config.toml` and your Matrix appservice registration file.
@@ -146,6 +149,38 @@ Container detection checks `POTATOMESH_CONTAINER`, `CONTAINER`, and `/proc/1/cgr
Set `POTATOMESH_CONTAINER=0` or `--no-container` to opt out of container defaults.
### Docker Compose First Run
Before starting Compose, complete this preflight checklist:
1. Ensure `matrix/Config.toml` exists as a regular file on the host (not a directory).
2. Fill required Matrix values in `matrix/Config.toml` (see the sketch below):
- `matrix.as_token`
- `matrix.hs_token`
- `matrix.server_name`
- `matrix.room_id`
- `matrix.homeserver`
This is required because the shared Compose anchor `x-matrix-bridge-base` mounts `./matrix/Config.toml` to `/app/Config.toml`.
Then follow the token and namespace requirements in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example).
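Concretely, the checklist boils down to a `matrix/Config.toml` of roughly this shape (a minimal sketch — every value below is a placeholder):

```toml
[matrix]
homeserver  = "https://matrix.example.org"
server_name = "example.org"
as_token    = "CHANGE_ME_AS_TOKEN"
hs_token    = "CHANGE_ME_HS_TOKEN"
room_id     = "!yourroomid:example.org"

[state]
# Where to persist last seen message id
state_file = "bridge_state.json"
```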
#### Troubleshooting
| Symptom | Likely cause | What to check |
| --- | --- | --- |
| `Is a directory (os error 21)` | Host mount source became a directory | `matrix/Config.toml` was missing at mount time and was created as a directory on the host. |
| `M_UNKNOWN_TOKEN` / `401 Unauthorized` | Matrix appservice token mismatch | Verify `matrix.as_token` matches your appservice registration and setup in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example). |
#### Recovery from accidental `Config.toml` directory creation
```bash
# from repo root
rm -rf matrix/Config.toml
touch matrix/Config.toml
# then edit matrix/Config.toml and set valid matrix.as_token, matrix.hs_token,
# matrix.server_name, matrix.room_id, and matrix.homeserver before starting compose
```
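Before restarting Compose, it is worth confirming the mount source is now a regular file (a sketch; both commands are standard):

```bash
# from repo root — the first character must be '-', not 'd'
ls -ld matrix/Config.toml
# show how Compose resolves the bind mount
docker compose config | grep -n 'Config.toml'
```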
### PotatoMesh API
The bridge assumes:
+22 -4
@@ -19,6 +19,11 @@ use tokio::sync::RwLock;
use crate::config::PotatomeshConfig;
/// Protocol identifier sent as a query parameter to restrict API results to
/// Meshtastic data only. Other protocols (e.g. MeshCore) are excluded until
/// the clients are updated to support them.
const PROTOCOL_FILTER: &str = "meshtastic";
#[allow(dead_code)]
#[derive(Debug, Deserialize, Clone)]
pub struct PotatoMessage {
@@ -131,7 +136,10 @@ impl PotatoClient {
}
pub async fn fetch_messages(&self, params: FetchParams) -> anyhow::Result<Vec<PotatoMessage>> {
let mut req = self.http.get(self.messages_url());
let mut req = self
.http
.get(self.messages_url())
.query(&[("protocol", PROTOCOL_FILTER)]);
if let Some(limit) = params.limit {
req = req.query(&[("limit", limit)]);
}
@@ -336,7 +344,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any) // allow optional query params
.match_query(mockito::Matcher::UrlEncoded(
"protocol".into(),
"meshtastic".into(),
))
.with_status(200)
.with_header("content-type", "application/json")
.with_body(
@@ -427,7 +438,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any)
.match_query(mockito::Matcher::UrlEncoded(
"protocol".into(),
PROTOCOL_FILTER.into(),
))
.with_status(500)
.create();
@@ -448,7 +462,11 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/messages")
.match_query("limit=10&since=123")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("protocol".into(), PROTOCOL_FILTER.into()),
mockito::Matcher::UrlEncoded("limit".into(), "10".into()),
mockito::Matcher::UrlEncoded("since".into(), "123".into()),
]))
.with_status(200)
.with_header("content-type", "application/json")
.with_body("[]")
BIN (binary file added, 1.5 MiB; contents not shown)
+1 -3
@@ -28,9 +28,7 @@ from meshtastic.mesh_interface import MeshInterface
from meshtastic.serial_interface import SerialInterface
from pubsub import pub
CONNECTION = os.environ.get("CONNECTION") or os.environ.get(
"MESH_SERIAL", "/dev/ttyACM0"
)
CONNECTION = os.environ.get("CONNECTION", "/dev/ttyACM0")
"""Connection target opened to capture Meshtastic traffic."""
OUT = os.environ.get("MESH_DUMP_FILE", "meshtastic-dump.ndjson")
+474
@@ -0,0 +1,474 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.channels`."""
from __future__ import annotations
import sys
from pathlib import Path
from types import SimpleNamespace
import pytest
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
import data.mesh_ingestor.channels as channels
import data.mesh_ingestor.config as config
@pytest.fixture(autouse=True)
def reset_channel_cache():
"""Ensure channel cache is cleared between tests."""
channels._reset_channel_cache()
yield
channels._reset_channel_cache()
# ---------------------------------------------------------------------------
# _iter_channel_objects
# ---------------------------------------------------------------------------
class TestIterChannelObjects:
"""Tests for :func:`channels._iter_channel_objects`."""
def test_none_returns_empty(self):
"""None input yields no items."""
assert list(channels._iter_channel_objects(None)) == []
def test_dict_yields_values(self):
"""Dict input yields values."""
result = list(channels._iter_channel_objects({"a": 1, "b": 2}))
assert sorted(result) == [1, 2]
def test_list_yields_elements(self):
"""List input yields all elements."""
items = [1, 2, 3]
assert list(channels._iter_channel_objects(items)) == [1, 2, 3]
def test_generator_yields_elements(self):
"""Generator input yields all elements."""
result = list(channels._iter_channel_objects(x for x in [10, 20]))
assert result == [10, 20]
def test_object_with_len_and_getitem(self):
"""Object with __len__ and __getitem__ is iterated correctly."""
class FakeSeq:
def __len__(self):
return 3
def __getitem__(self, idx):
return idx * 10
result = list(channels._iter_channel_objects(FakeSeq()))
assert result == [0, 10, 20]
def test_non_iterable_without_len_returns_empty(self):
"""Objects with neither iter protocol nor len/getitem yield nothing."""
class Opaque:
pass
assert list(channels._iter_channel_objects(Opaque())) == []
# ---------------------------------------------------------------------------
# _primary_channel_name
# ---------------------------------------------------------------------------
class TestPrimaryChannelName:
"""Tests for :func:`channels._primary_channel_name`."""
def test_returns_modem_preset_when_set(self, monkeypatch):
"""Returns MODEM_PRESET from config when available."""
monkeypatch.setattr(config, "MODEM_PRESET", "LongFast")
assert channels._primary_channel_name() == "LongFast"
def test_strips_modem_preset_whitespace(self, monkeypatch):
"""MODEM_PRESET is stripped of surrounding whitespace."""
monkeypatch.setattr(config, "MODEM_PRESET", " MedFast ")
assert channels._primary_channel_name() == "MedFast"
def test_falls_back_to_env_channel(self, monkeypatch):
"""Falls back to CHANNEL env var when MODEM_PRESET is absent."""
monkeypatch.setattr(config, "MODEM_PRESET", None)
monkeypatch.setenv("CHANNEL", "LongRange")
assert channels._primary_channel_name() == "LongRange"
def test_returns_none_when_both_absent(self, monkeypatch):
"""Returns None when neither MODEM_PRESET nor CHANNEL is set."""
monkeypatch.setattr(config, "MODEM_PRESET", None)
monkeypatch.delenv("CHANNEL", raising=False)
assert channels._primary_channel_name() is None
def test_empty_modem_preset_falls_back_to_env(self, monkeypatch):
"""Empty string MODEM_PRESET falls back to CHANNEL env var."""
monkeypatch.setattr(config, "MODEM_PRESET", "")
monkeypatch.setenv("CHANNEL", "LongRange")
assert channels._primary_channel_name() == "LongRange"
# ---------------------------------------------------------------------------
# _extract_channel_name
# ---------------------------------------------------------------------------
class TestExtractChannelName:
"""Tests for :func:`channels._extract_channel_name`."""
def test_none_returns_none(self):
"""None input returns None."""
assert channels._extract_channel_name(None) is None
def test_dict_with_name(self):
"""Dict with 'name' key returns stripped name."""
assert channels._extract_channel_name({"name": " LongFast "}) == "LongFast"
def test_object_with_name_attr(self):
"""Object with name attribute returns stripped name."""
obj = SimpleNamespace(name="Chat")
assert channels._extract_channel_name(obj) == "Chat"
def test_empty_name_returns_none(self):
"""Empty name string returns None."""
assert channels._extract_channel_name({"name": " "}) is None
def test_missing_name_returns_none(self):
"""Object without name attribute returns None."""
assert channels._extract_channel_name(SimpleNamespace()) is None
def test_none_name_returns_none(self):
"""None name value returns None."""
assert channels._extract_channel_name({"name": None}) is None
# ---------------------------------------------------------------------------
# _normalize_role
# ---------------------------------------------------------------------------
class TestNormalizeRole:
"""Tests for :func:`channels._normalize_role`."""
def test_integer_passthrough(self):
"""Integer values are returned unchanged."""
assert channels._normalize_role(1) == 1
assert channels._normalize_role(2) == 2
def test_string_primary(self):
"""'PRIMARY' string maps to _ROLE_PRIMARY."""
assert channels._normalize_role("PRIMARY") == channels._ROLE_PRIMARY
def test_string_secondary(self):
"""'SECONDARY' string maps to _ROLE_SECONDARY."""
assert channels._normalize_role("SECONDARY") == channels._ROLE_SECONDARY
def test_string_case_insensitive(self):
"""Role strings are case-insensitive."""
assert channels._normalize_role("primary") == channels._ROLE_PRIMARY
assert channels._normalize_role("Secondary") == channels._ROLE_SECONDARY
def test_string_numeric(self):
"""Numeric strings are coerced to int."""
assert channels._normalize_role("1") == 1
def test_string_invalid_returns_none(self):
"""Non-numeric, non-role strings return None."""
assert channels._normalize_role("unknown") is None
def test_object_with_name_attr(self):
"""Objects with a 'name' attribute delegate to string handling."""
obj = SimpleNamespace(name="PRIMARY")
assert channels._normalize_role(obj) == channels._ROLE_PRIMARY
def test_object_with_value_attr(self):
"""Objects with an integer 'value' attribute return that value."""
obj = SimpleNamespace(value=2)
assert channels._normalize_role(obj) == 2
def test_coercible_object(self):
"""Objects coercible to int return their integer value."""
class IntLike:
def __int__(self):
return 3
assert channels._normalize_role(IntLike()) == 3
def test_uncoercible_object_returns_none(self):
"""Objects not coercible to int return None."""
assert channels._normalize_role(object()) is None
# ---------------------------------------------------------------------------
# _channel_tuple
# ---------------------------------------------------------------------------
class TestChannelTuple:
"""Tests for :func:`channels._channel_tuple`."""
def test_primary_channel_with_name(self, monkeypatch):
"""Primary role with settings name returns (0, name)."""
monkeypatch.setattr(config, "MODEM_PRESET", None)
obj = SimpleNamespace(
role=channels._ROLE_PRIMARY,
settings=SimpleNamespace(name="LongFast"),
)
assert channels._channel_tuple(obj) == (0, "LongFast")
def test_primary_channel_falls_back_to_preset(self, monkeypatch):
"""Primary channel with no name falls back to MODEM_PRESET."""
monkeypatch.setattr(config, "MODEM_PRESET", "ShortFast")
obj = SimpleNamespace(
role=channels._ROLE_PRIMARY, settings=SimpleNamespace(name="")
)
result = channels._channel_tuple(obj)
assert result == (0, "ShortFast")
def test_secondary_channel(self):
"""Secondary role with index and name returns (index, name)."""
obj = SimpleNamespace(
role=channels._ROLE_SECONDARY,
index=3,
settings=SimpleNamespace(name="Chat"),
)
assert channels._channel_tuple(obj) == (3, "Chat")
def test_unknown_role_returns_none(self):
"""Unrecognised roles return None."""
obj = SimpleNamespace(role=99, index=0, settings=SimpleNamespace(name="X"))
assert channels._channel_tuple(obj) is None
def test_secondary_without_valid_index_returns_none(self):
"""Secondary channel with no valid index returns None."""
obj = SimpleNamespace(
role=channels._ROLE_SECONDARY,
index="bad",
settings=SimpleNamespace(name="Chat"),
)
assert channels._channel_tuple(obj) is None
def test_secondary_without_name_returns_none(self):
"""Secondary channel with no name returns None."""
obj = SimpleNamespace(
role=channels._ROLE_SECONDARY,
index=1,
settings=SimpleNamespace(name=""),
)
assert channels._channel_tuple(obj) is None
# ---------------------------------------------------------------------------
# capture_from_interface
# ---------------------------------------------------------------------------
class TestCaptureFromInterface:
"""Tests for :func:`channels.capture_from_interface`."""
def _make_iface(self, channel_list):
local_node = SimpleNamespace(channels=channel_list)
return SimpleNamespace(localNode=local_node, waitForConfig=lambda: None)
def test_none_iface_is_noop(self):
"""None interface is silently ignored."""
channels.capture_from_interface(None)
assert channels.channel_mappings() == ()
def test_captures_primary_and_secondary(self):
"""Both primary and secondary channels are captured."""
iface = self._make_iface(
[
SimpleNamespace(
role=channels._ROLE_PRIMARY,
settings=SimpleNamespace(name="LongFast"),
),
SimpleNamespace(
role=channels._ROLE_SECONDARY,
index=1,
settings=SimpleNamespace(name="Chat"),
),
]
)
channels.capture_from_interface(iface)
mappings = channels.channel_mappings()
assert (0, "LongFast") in mappings
assert (1, "Chat") in mappings
def test_subsequent_calls_are_noops_when_cached(self):
"""Second call with different interface is ignored once cached."""
iface1 = self._make_iface(
[
SimpleNamespace(
role=channels._ROLE_PRIMARY, settings=SimpleNamespace(name="First")
),
]
)
iface2 = self._make_iface(
[
SimpleNamespace(
role=channels._ROLE_PRIMARY, settings=SimpleNamespace(name="Second")
),
]
)
channels.capture_from_interface(iface1)
channels.capture_from_interface(iface2)
assert channels.channel_name(0) == "First"
def test_deduplicates_indices(self):
"""Duplicate channel indices keep the first seen entry."""
iface = self._make_iface(
[
SimpleNamespace(
role=channels._ROLE_SECONDARY,
index=1,
settings=SimpleNamespace(name="A"),
),
SimpleNamespace(
role=channels._ROLE_SECONDARY,
index=1,
settings=SimpleNamespace(name="B"),
),
]
)
channels.capture_from_interface(iface)
assert channels.channel_name(1) == "A"
def test_empty_channels_does_not_set_cache(self):
"""No valid channels leaves the cache empty."""
iface = self._make_iface([])
channels.capture_from_interface(iface)
assert channels.channel_mappings() == ()
# ---------------------------------------------------------------------------
# is_allowed_channel / is_hidden_channel
# ---------------------------------------------------------------------------
class TestIsAllowedChannel:
"""Tests for :func:`channels.is_allowed_channel`."""
def test_no_allowlist_permits_all(self, monkeypatch):
"""When ALLOWED_CHANNELS is empty, all channels are allowed."""
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ())
assert channels.is_allowed_channel("anything") is True
def test_allowlist_permits_matching_name(self, monkeypatch):
"""A matching name is allowed."""
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("LongFast",))
assert channels.is_allowed_channel("LongFast") is True
def test_allowlist_case_insensitive(self, monkeypatch):
"""Channel name matching is case-insensitive."""
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("longfast",))
assert channels.is_allowed_channel("LongFast") is True
def test_allowlist_blocks_non_matching(self, monkeypatch):
"""A non-matching name is rejected."""
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("LongFast",))
assert channels.is_allowed_channel("Chat") is False
def test_none_rejected_when_allowlist_set(self, monkeypatch):
"""None is rejected when an allowlist is configured."""
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("LongFast",))
assert channels.is_allowed_channel(None) is False
def test_empty_string_rejected_when_allowlist_set(self, monkeypatch):
"""Empty string is rejected when an allowlist is configured."""
monkeypatch.setattr(config, "ALLOWED_CHANNELS", ("LongFast",))
assert channels.is_allowed_channel(" ") is False
class TestIsHiddenChannel:
"""Tests for :func:`channels.is_hidden_channel`."""
def test_none_not_hidden(self):
"""None is never considered hidden."""
assert channels.is_hidden_channel(None) is False
def test_empty_string_not_hidden(self):
"""Empty string is never considered hidden."""
assert channels.is_hidden_channel(" ") is False
def test_hidden_name_is_hidden(self, monkeypatch):
"""Configured hidden channel is detected."""
monkeypatch.setattr(config, "HIDDEN_CHANNELS", ("Chat",))
assert channels.is_hidden_channel("Chat") is True
def test_hidden_case_insensitive(self, monkeypatch):
"""Hidden channel matching is case-insensitive."""
monkeypatch.setattr(config, "HIDDEN_CHANNELS", ("chat",))
assert channels.is_hidden_channel("CHAT") is True
def test_non_hidden_name_not_hidden(self, monkeypatch):
"""Non-configured names are not hidden."""
monkeypatch.setattr(config, "HIDDEN_CHANNELS", ("Chat",))
assert channels.is_hidden_channel("LongFast") is False
# ---------------------------------------------------------------------------
# register_channel
# ---------------------------------------------------------------------------
class TestRegisterChannel:
"""Tests for :func:`channels.register_channel`."""
def test_adds_to_lookup(self):
"""register_channel must make the name retrievable via channel_name."""
channels.register_channel(1, "Chat")
assert channels.channel_name(1) == "Chat"
def test_no_overwrite(self):
"""Second call with same index must not replace the first-registered name."""
channels.register_channel(0, "LongFast")
channels.register_channel(0, "Other")
assert channels.channel_name(0) == "LongFast"
def test_strips_whitespace(self):
"""Leading and trailing whitespace is stripped from the channel name."""
channels.register_channel(2, " Chat ")
assert channels.channel_name(2) == "Chat"
def test_ignores_empty_string(self):
"""Empty string is silently ignored and does not populate the cache."""
channels.register_channel(3, "")
assert channels.channel_name(3) is None
def test_ignores_whitespace_only_string(self):
"""Whitespace-only name is silently ignored."""
channels.register_channel(3, " ")
assert channels.channel_name(3) is None
def test_updates_mappings_tuple(self):
"""channel_mappings() reflects all registered entries, sorted by index."""
channels.register_channel(2, "Admin")
channels.register_channel(0, "LongFast")
assert channels.channel_mappings() == ((0, "LongFast"), (2, "Admin"))
def test_coexists_with_capture_from_interface(self):
"""Entries from register_channel and capture_from_interface merge correctly."""
# Simulate capture_from_interface populating index 0.
channels._CHANNEL_LOOKUP[0] = "LongFast"
channels._CHANNEL_MAPPINGS = ((0, "LongFast"),)
# register_channel should add index 1 without disturbing index 0.
channels.register_channel(1, "Chat")
assert channels.channel_name(0) == "LongFast"
assert channels.channel_name(1) == "Chat"
+371
@@ -0,0 +1,371 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.config`."""
from __future__ import annotations
import sys
from pathlib import Path
import pytest
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
import data.mesh_ingestor.config as config
# ---------------------------------------------------------------------------
# _parse_channel_names
# ---------------------------------------------------------------------------
class TestParseChannelNames:
"""Tests for :func:`config._parse_channel_names`."""
def test_none_returns_empty(self):
"""None input returns empty tuple."""
assert config._parse_channel_names(None) == ()
def test_empty_string_returns_empty(self):
"""Empty string returns empty tuple."""
assert config._parse_channel_names("") == ()
def test_single_name(self):
"""Single channel name is returned as a one-element tuple."""
assert config._parse_channel_names("LongFast") == ("LongFast",)
def test_comma_separated(self):
"""Comma-separated names are split and returned."""
result = config._parse_channel_names("LongFast,Chat")
assert result == ("LongFast", "Chat")
def test_strips_whitespace(self):
"""Leading/trailing whitespace around names is stripped."""
result = config._parse_channel_names(" LongFast , Chat ")
assert result == ("LongFast", "Chat")
def test_deduplicates_case_insensitively(self):
"""Duplicate names (case-insensitively) are deduplicated."""
result = config._parse_channel_names("LongFast,longfast,LONGFAST")
assert result == ("LongFast",)
def test_preserves_order(self):
"""Original order is preserved, first occurrence kept on dedup."""
result = config._parse_channel_names("B,A,B,C")
assert result == ("B", "A", "C")
def test_empty_segments_skipped(self):
"""Empty segments from consecutive commas are skipped."""
result = config._parse_channel_names("A,,B,,,C")
assert result == ("A", "B", "C")
# ---------------------------------------------------------------------------
# _parse_hidden_channels
# ---------------------------------------------------------------------------
class TestParseHiddenChannels:
"""Tests for :func:`config._parse_hidden_channels`."""
def test_delegates_to_parse_channel_names(self):
"""_parse_hidden_channels delegates to _parse_channel_names."""
assert config._parse_hidden_channels(
"Chat,Admin"
) == config._parse_channel_names("Chat,Admin")
def test_none_returns_empty(self):
"""None input returns empty tuple."""
assert config._parse_hidden_channels(None) == ()
# ---------------------------------------------------------------------------
# _resolve_instance_domains
# ---------------------------------------------------------------------------
class TestResolveInstanceDomains:
"""Tests for :func:`config._resolve_instance_domains`."""
def test_single_domain(self, monkeypatch):
"""Single domain produces one-element tuple."""
monkeypatch.setenv("INSTANCE_DOMAIN", "foo.tld")
monkeypatch.setenv("API_TOKEN", "secret")
result = config._resolve_instance_domains()
assert result == (("https://foo.tld", "secret"),)
def test_multi_domain_broadcast_token(self, monkeypatch):
"""Multiple domains with a single token broadcast the token."""
monkeypatch.setenv("INSTANCE_DOMAIN", "foo.tld, bar.tld")
monkeypatch.setenv("API_TOKEN", "shared")
result = config._resolve_instance_domains()
assert result == (
("https://foo.tld", "shared"),
("https://bar.tld", "shared"),
)
def test_multi_domain_per_instance_tokens(self, monkeypatch):
"""Comma-separated tokens are positionally paired with domains."""
monkeypatch.setenv("INSTANCE_DOMAIN", "a.tld,b.tld")
monkeypatch.setenv("API_TOKEN", "tok1,tok2")
result = config._resolve_instance_domains()
assert result == (("https://a.tld", "tok1"), ("https://b.tld", "tok2"))
def test_token_count_mismatch_raises(self, monkeypatch):
"""Mismatched counts raise ValueError at parse time."""
monkeypatch.setenv("INSTANCE_DOMAIN", "a.tld,b.tld")
monkeypatch.setenv("API_TOKEN", "t1,t2,t3")
with pytest.raises(ValueError, match="counts must match"):
config._resolve_instance_domains()
def test_deduplicates_domains(self, monkeypatch):
"""Duplicate domains are collapsed to a single entry."""
monkeypatch.setenv("INSTANCE_DOMAIN", "foo.tld, foo.tld")
monkeypatch.setenv("API_TOKEN", "tok")
result = config._resolve_instance_domains()
assert result == (("https://foo.tld", "tok"),)
def test_preserves_explicit_scheme(self, monkeypatch):
"""Domains with explicit schemes keep them; others get https://."""
monkeypatch.setenv("INSTANCE_DOMAIN", "http://local:41447,bar.tld")
monkeypatch.setenv("API_TOKEN", "tok")
result = config._resolve_instance_domains()
assert result == (
("http://local:41447", "tok"),
("https://bar.tld", "tok"),
)
def test_empty_domain(self, monkeypatch):
"""Empty INSTANCE_DOMAIN returns an empty tuple."""
monkeypatch.setenv("INSTANCE_DOMAIN", "")
monkeypatch.setenv("API_TOKEN", "tok")
result = config._resolve_instance_domains()
assert result == ()
def test_strips_trailing_slashes(self, monkeypatch):
"""Trailing slashes are stripped from domains."""
monkeypatch.setenv("INSTANCE_DOMAIN", "foo.tld/")
monkeypatch.setenv("API_TOKEN", "tok")
result = config._resolve_instance_domains()
assert result == (("https://foo.tld", "tok"),)
def test_empty_token_broadcast(self, monkeypatch):
"""Empty API_TOKEN broadcasts empty string to all instances."""
monkeypatch.setenv("INSTANCE_DOMAIN", "a.tld,b.tld")
monkeypatch.setenv("API_TOKEN", "")
result = config._resolve_instance_domains()
assert result == (("https://a.tld", ""), ("https://b.tld", ""))
# ---------------------------------------------------------------------------
# _resolve_instance_domain (legacy, kept for backward compatibility)
# ---------------------------------------------------------------------------
class TestResolveInstanceDomain:
"""Tests for :func:`config._resolve_instance_domain`."""
def test_returns_instance_domain_when_set(self, monkeypatch):
"""Uses INSTANCE_DOMAIN when set."""
monkeypatch.setenv("INSTANCE_DOMAIN", "mesh.example.com")
result = config._resolve_instance_domain()
assert result == "https://mesh.example.com"
def test_adds_https_when_no_scheme(self, monkeypatch):
"""Adds https:// prefix when no scheme is present."""
monkeypatch.setenv("INSTANCE_DOMAIN", "example.com")
assert config._resolve_instance_domain() == "https://example.com"
def test_preserves_existing_scheme(self, monkeypatch):
"""Leaves existing http:// scheme intact."""
monkeypatch.setenv("INSTANCE_DOMAIN", "http://example.com")
assert config._resolve_instance_domain() == "http://example.com"
def test_strips_trailing_slash(self, monkeypatch):
"""Strips trailing slash from instance domain."""
monkeypatch.setenv("INSTANCE_DOMAIN", "https://example.com/")
assert config._resolve_instance_domain() == "https://example.com"
def test_returns_empty_when_not_set(self, monkeypatch):
"""Returns empty string when INSTANCE_DOMAIN is unset."""
monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
assert config._resolve_instance_domain() == ""
# ---------------------------------------------------------------------------
# _debug_log
# ---------------------------------------------------------------------------
class TestDebugLog:
"""Tests for :func:`config._debug_log`."""
def test_suppressed_when_debug_false(self, monkeypatch, capsys):
"""Nothing is printed when DEBUG is False and severity is debug."""
monkeypatch.setattr(config, "DEBUG", False)
config._debug_log("silent", severity="debug")
assert capsys.readouterr().out == ""
def test_prints_when_debug_true(self, monkeypatch, capsys):
"""Message is printed when DEBUG is True."""
monkeypatch.setattr(config, "DEBUG", True)
config._debug_log("hello world")
out = capsys.readouterr().out
assert "hello world" in out
def test_always_flag_bypasses_debug_guard(self, monkeypatch, capsys):
"""always=True forces output even when DEBUG is False."""
monkeypatch.setattr(config, "DEBUG", False)
config._debug_log("force print", always=True)
out = capsys.readouterr().out
assert "force print" in out
def test_context_included_in_output(self, monkeypatch, capsys):
"""Context label is included in log output."""
monkeypatch.setattr(config, "DEBUG", True)
config._debug_log("msg", context="test.ctx")
out = capsys.readouterr().out
assert "context=test.ctx" in out
def test_severity_included_in_output(self, monkeypatch, capsys):
"""Severity level is included in log output."""
monkeypatch.setattr(config, "DEBUG", True)
config._debug_log("msg", severity="warn")
out = capsys.readouterr().out
assert "[warn]" in out
def test_metadata_included_in_output(self, monkeypatch, capsys):
"""Additional metadata key=value pairs are included in output."""
monkeypatch.setattr(config, "DEBUG", True)
config._debug_log("msg", node_id="!aabb1234")
out = capsys.readouterr().out
assert "node_id=" in out
def test_warn_severity_printed_even_when_debug_false(self, monkeypatch, capsys):
"""Non-debug severity is printed regardless of DEBUG flag."""
monkeypatch.setattr(config, "DEBUG", False)
config._debug_log("warn msg", severity="warn")
out = capsys.readouterr().out
assert "warn msg" in out
# ---------------------------------------------------------------------------
# PROTOCOL validation
# ---------------------------------------------------------------------------
class TestProtocolValidation:
"""Tests for PROTOCOL environment validation at import time."""
def test_valid_protocol_does_not_raise(self, monkeypatch):
"""Importing config with a valid PROTOCOL succeeds."""
import importlib
monkeypatch.setenv("PROTOCOL", "meshtastic")
# Re-importing should not raise
importlib.reload(config)
def test_invalid_protocol_raises_value_error(self, monkeypatch):
"""An invalid PROTOCOL value raises ValueError at module load."""
import importlib
monkeypatch.setenv("PROTOCOL", "bogus_protocol_xyz")
with pytest.raises(ValueError, match="Unknown PROTOCOL"):
importlib.reload(config)
# Restore to valid value so subsequent tests work
monkeypatch.setenv("PROTOCOL", "meshtastic")
importlib.reload(config)
# ---------------------------------------------------------------------------
# _parse_lora_freq_env
# ---------------------------------------------------------------------------
class TestParseLoraFreqEnv:
"""Tests for :func:`config._parse_lora_freq_env`."""
def test_none_returns_none(self):
"""None input returns None."""
assert config._parse_lora_freq_env(None) is None
def test_empty_string_returns_none(self):
"""Empty string returns None."""
assert config._parse_lora_freq_env("") is None
def test_whitespace_only_returns_none(self):
"""Whitespace-only string returns None."""
assert config._parse_lora_freq_env(" ") is None
def test_integer_string_returns_int(self):
"""Whole-number string returns int."""
result = config._parse_lora_freq_env("868")
assert result == 868
assert isinstance(result, int)
def test_float_integer_value_returns_int(self):
"""String like '915.0' (whole float) returns int 915."""
result = config._parse_lora_freq_env("915.0")
assert result == 915
assert isinstance(result, int)
def test_decimal_string_returns_float(self):
"""Decimal string returns float."""
result = config._parse_lora_freq_env("869.525")
assert result == pytest.approx(869.525)
assert isinstance(result, float)
def test_non_numeric_label_returns_none(self):
"""Non-numeric string returns None so auto-detection is not blocked."""
assert config._parse_lora_freq_env("EU_868") is None
def test_unit_suffixed_string_returns_none(self):
"""String like '915MHz' returns None (not numeric)."""
assert config._parse_lora_freq_env("915MHz") is None
def test_inf_returns_none(self):
"""'inf' is non-finite and returns None."""
assert config._parse_lora_freq_env("inf") is None
def test_large_exponent_returns_none(self):
"""'1e309' overflows to inf and returns None."""
assert config._parse_lora_freq_env("1e309") is None
def test_nan_returns_none(self):
"""'nan' is non-finite and returns None."""
assert config._parse_lora_freq_env("nan") is None
def test_whitespace_stripped(self):
"""Leading/trailing whitespace is ignored."""
assert config._parse_lora_freq_env(" 919 ") == 919
def test_frequency_env_preseeds_lora_freq(self, monkeypatch):
"""FREQUENCY env var pre-seeds LORA_FREQ at module load."""
import importlib
monkeypatch.setenv("FREQUENCY", "915")
importlib.reload(config)
assert config.LORA_FREQ == 915
# Restore
monkeypatch.delenv("FREQUENCY")
importlib.reload(config)
def test_no_frequency_env_leaves_lora_freq_none(self, monkeypatch):
"""Absent FREQUENCY env var leaves LORA_FREQ as None."""
import importlib
monkeypatch.delenv("FREQUENCY", raising=False)
importlib.reload(config)
assert config.LORA_FREQ is None
+256
@@ -0,0 +1,256 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.connection`."""
from __future__ import annotations
import sys
from pathlib import Path
from unittest.mock import patch
import pytest
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
from data.mesh_ingestor.connection import ( # noqa: E402
BLE_ADDRESS_RE,
DEFAULT_TCP_PORT,
default_serial_targets,
parse_ble_target,
parse_tcp_target,
)
# ---------------------------------------------------------------------------
# parse_ble_target
# ---------------------------------------------------------------------------
@pytest.mark.parametrize(
"value,expected",
[
# MAC addresses — returned upper-cased
("AA:BB:CC:DD:EE:FF", "AA:BB:CC:DD:EE:FF"),
("aa:bb:cc:dd:ee:ff", "AA:BB:CC:DD:EE:FF"),
("AA:BB:CC:DD:EE:12", "AA:BB:CC:DD:EE:12"),
# UUID (macOS format)
(
"12345678-1234-1234-1234-123456789abc",
"12345678-1234-1234-1234-123456789ABC",
),
(
"12345678-1234-1234-1234-123456789ABC",
"12345678-1234-1234-1234-123456789ABC",
),
],
)
def test_parse_ble_target_accepts_ble_addresses(value, expected):
"""parse_ble_target must return the normalised address for valid BLE formats."""
assert parse_ble_target(value) == expected
@pytest.mark.parametrize(
"value",
[
"/dev/ttyUSB0",
"/dev/ttyACM0",
"COM3",
"hostname:4403",
"192.168.1.1:4403",
"",
" ",
"AA:BB:CC:DD:EE", # too short — only 5 groups
"ZZ:BB:CC:DD:EE:FF", # invalid hex
],
)
def test_parse_ble_target_rejects_non_ble(value):
"""parse_ble_target must return None for serial paths, TCP targets, and malformed inputs."""
assert parse_ble_target(value) is None
def test_parse_ble_target_none_input():
"""parse_ble_target must return None for None input."""
assert parse_ble_target(None) is None # type: ignore[arg-type]
# ---------------------------------------------------------------------------
# parse_tcp_target
# ---------------------------------------------------------------------------
@pytest.mark.parametrize(
"value,expected_host,expected_port",
[
# hostname:port
("meshcore-node.local:4403", "meshcore-node.local", 4403),
("meshnode.local:4403", "meshnode.local", 4403),
("hostname:1234", "hostname", 1234),
("otherhost:80", "otherhost", 80),
# IP:port
("192.168.1.1:4403", "192.168.1.1", 4403),
("10.0.0.1:9000", "10.0.0.1", 9000),
# With scheme prefix
("tcp://meshnode.local:4403", "meshnode.local", 4403),
("http://192.168.1.1:4403", "192.168.1.1", 4403),
# IPv6 with brackets
("[::1]:4403", "::1", 4403),
("[2001:db8::1]:8080", "2001:db8::1", 8080),
],
)
def test_parse_tcp_target_accepts_tcp(value, expected_host, expected_port):
"""parse_tcp_target must return (host, port) for valid TCP target strings."""
result = parse_tcp_target(value)
assert result is not None
host, port = result
assert host == expected_host
assert port == expected_port
@pytest.mark.parametrize(
"value",
[
# Serial paths
"/dev/ttyUSB0",
"/dev/ttyACM0",
"COM3",
# BLE MACs — multiple colons, no valid port
"AA:BB:CC:DD:EE:FF",
"AA:BB:CC:DD:EE:12",
# UUIDs — hyphens, no colon
"12345678-1234-1234-1234-123456789abc",
# Bare hostname without port
"meshcore-node.local",
# Empty / whitespace
"",
" ",
# Port out of range
"host:0",
"host:65536",
# Non-numeric port
"host:notaport",
],
)
def test_parse_tcp_target_rejects_non_tcp(value):
"""parse_tcp_target must return None for serial paths, BLE addresses, and malformed inputs."""
assert parse_tcp_target(value) is None
def test_parse_tcp_target_none_input():
"""parse_tcp_target must return None for None input."""
assert parse_tcp_target(None) is None # type: ignore[arg-type]
def test_parse_tcp_target_default_port_for_bracketed_ipv6_no_port():
"""parse_tcp_target must use DEFAULT_TCP_PORT for bracketed IPv6 without port."""
result = parse_tcp_target("[::1]")
assert result == ("::1", DEFAULT_TCP_PORT)
@pytest.mark.parametrize(
"value",
[
"[::1", # no closing bracket
"[]:4403", # empty host in brackets
"[::1]:abc", # non-numeric port after bracket
"[::1]:0", # port out of range (low)
"[::1]:65536", # port out of range (high)
],
)
def test_parse_tcp_target_rejects_malformed_ipv6(value):
"""parse_tcp_target must return None for malformed bracketed IPv6 targets."""
assert parse_tcp_target(value) is None
# ---------------------------------------------------------------------------
# default_serial_targets
# ---------------------------------------------------------------------------
def test_default_serial_targets_returns_list():
"""default_serial_targets must return a non-empty list."""
targets = default_serial_targets()
assert isinstance(targets, list)
assert len(targets) > 0
def test_default_serial_targets_includes_fallback():
"""default_serial_targets always includes /dev/ttyACM0 as a fallback."""
targets = default_serial_targets()
assert "/dev/ttyACM0" in targets
def test_default_serial_targets_no_duplicates():
"""default_serial_targets must not return duplicate paths."""
targets = default_serial_targets()
assert len(targets) == len(set(targets))
def test_default_serial_targets_deduplicates_glob_results():
"""default_serial_targets must deduplicate paths returned by multiple globs."""
def _fake_glob(pattern):
if "ttyACM" in pattern:
return ["/dev/ttyACM0", "/dev/ttyACM1"]
if "ttyUSB" in pattern:
return ["/dev/ttyACM0"] # intentional duplicate across patterns
return []
with patch("data.mesh_ingestor.connection.glob.glob", side_effect=_fake_glob):
targets = default_serial_targets()
assert targets.count("/dev/ttyACM0") == 1
assert "/dev/ttyACM1" in targets
# ttyACM0 already found by glob so fallback append must not re-add it
assert targets.count("/dev/ttyACM0") == 1
def test_default_serial_targets_omits_fallback_when_ttyacm0_found():
"""default_serial_targets must not append /dev/ttyACM0 when glob already found it."""
def _fake_glob(pattern):
if "ttyACM" in pattern:
return ["/dev/ttyACM0"]
return []
with patch("data.mesh_ingestor.connection.glob.glob", side_effect=_fake_glob):
targets = default_serial_targets()
# present exactly once — from glob, not appended again
assert targets.count("/dev/ttyACM0") == 1
# ---------------------------------------------------------------------------
# BLE_ADDRESS_RE sanity
# ---------------------------------------------------------------------------
def test_ble_address_re_mac():
"""BLE_ADDRESS_RE matches a canonical 6-byte MAC address."""
assert BLE_ADDRESS_RE.fullmatch("AA:BB:CC:DD:EE:FF") is not None
def test_ble_address_re_uuid():
"""BLE_ADDRESS_RE matches a standard 128-bit UUID."""
assert BLE_ADDRESS_RE.fullmatch("12345678-1234-1234-1234-123456789abc") is not None
def test_ble_address_re_rejects_tcp():
"""BLE_ADDRESS_RE must not match a hostname:port string."""
assert BLE_ADDRESS_RE.fullmatch("hostname:4403") is None
def test_ble_address_re_rejects_partial_mac():
"""BLE_ADDRESS_RE must not match an incomplete MAC address."""
assert BLE_ADDRESS_RE.fullmatch("AA:BB:CC:DD:EE") is None
+1009 -1
File diff suppressed because it is too large.
+4 -2
@@ -19,8 +19,10 @@ import io
import json
import sys
from meshtastic.protobuf import mesh_pb2
from meshtastic.protobuf import telemetry_pb2
import pytest
mesh_pb2 = pytest.importorskip("meshtastic.protobuf.mesh_pb2")
telemetry_pb2 = pytest.importorskip("meshtastic.protobuf.telemetry_pb2")
from data.mesh_ingestor import decode_payload
+232
@@ -0,0 +1,232 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.events`."""
from __future__ import annotations
import sys
from pathlib import Path
import pytest
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
from data.mesh_ingestor.events import ( # noqa: E402 - path setup
IngestorHeartbeat,
MessageEvent,
NeighborEntry,
NeighborsSnapshot,
PositionEvent,
TelemetryEvent,
TraceEvent,
)
def test_message_event_schema():
assert MessageEvent.__required_keys__ == frozenset({"id", "rx_time", "rx_iso"})
assert "text" in MessageEvent.__optional_keys__
assert "from_id" in MessageEvent.__optional_keys__
assert "snr" in MessageEvent.__optional_keys__
assert "rssi" in MessageEvent.__optional_keys__
def test_message_event_requires_id_rx_time_rx_iso():
event: MessageEvent = {
"id": 1,
"rx_time": 1700000000,
"rx_iso": "2023-11-14T00:00:00Z",
}
assert event["id"] == 1
assert event["rx_time"] == 1700000000
assert event["rx_iso"] == "2023-11-14T00:00:00Z"
def test_message_event_accepts_optional_fields():
event: MessageEvent = {
"id": 2,
"rx_time": 1700000001,
"rx_iso": "2023-11-14T00:00:01Z",
"text": "hello",
"from_id": "!aabbccdd",
"snr": 4.5,
"rssi": -90,
}
assert event["text"] == "hello"
assert event["snr"] == pytest.approx(4.5)
def test_position_event_schema():
assert PositionEvent.__required_keys__ == frozenset({"id", "rx_time", "rx_iso"})
assert "latitude" in PositionEvent.__optional_keys__
assert "longitude" in PositionEvent.__optional_keys__
assert "node_id" in PositionEvent.__optional_keys__
def test_position_event_required_fields():
event: PositionEvent = {
"id": 10,
"rx_time": 1700000002,
"rx_iso": "2023-11-14T00:00:02Z",
}
assert event["id"] == 10
def test_position_event_optional_fields():
event: PositionEvent = {
"id": 11,
"rx_time": 1700000003,
"rx_iso": "2023-11-14T00:00:03Z",
"latitude": 37.7749,
"longitude": -122.4194,
"altitude": 10.0,
"node_id": "!aabbccdd",
}
assert event["latitude"] == pytest.approx(37.7749)
def test_telemetry_event_schema():
assert TelemetryEvent.__required_keys__ == frozenset({"id", "rx_time", "rx_iso"})
assert "payload_b64" in TelemetryEvent.__optional_keys__
assert "snr" in TelemetryEvent.__optional_keys__
def test_telemetry_event_required_fields():
event: TelemetryEvent = {
"id": 20,
"rx_time": 1700000004,
"rx_iso": "2023-11-14T00:00:04Z",
}
assert event["id"] == 20
def test_telemetry_event_optional_fields():
event: TelemetryEvent = {
"id": 21,
"rx_time": 1700000005,
"rx_iso": "2023-11-14T00:00:05Z",
"channel": 0,
"payload_b64": "AAEC",
"snr": 3.0,
}
assert event["payload_b64"] == "AAEC"
def test_neighbor_entry_schema():
assert NeighborEntry.__required_keys__ == frozenset({"rx_time", "rx_iso"})
assert "neighbor_id" in NeighborEntry.__optional_keys__
assert "snr" in NeighborEntry.__optional_keys__
def test_neighbor_entry_required_fields():
entry: NeighborEntry = {"rx_time": 1700000006, "rx_iso": "2023-11-14T00:00:06Z"}
assert entry["rx_time"] == 1700000006
def test_neighbor_entry_optional_fields():
entry: NeighborEntry = {
"rx_time": 1700000007,
"rx_iso": "2023-11-14T00:00:07Z",
"neighbor_id": "!11223344",
"snr": 6.0,
}
assert entry["neighbor_id"] == "!11223344"
def test_neighbors_snapshot_schema():
assert NeighborsSnapshot.__required_keys__ == frozenset(
{"node_id", "rx_time", "rx_iso"}
)
assert "neighbors" in NeighborsSnapshot.__optional_keys__
assert "node_broadcast_interval_secs" in NeighborsSnapshot.__optional_keys__
def test_neighbors_snapshot_required_fields():
snap: NeighborsSnapshot = {
"node_id": "!aabbccdd",
"rx_time": 1700000008,
"rx_iso": "2023-11-14T00:00:08Z",
}
assert snap["node_id"] == "!aabbccdd"
def test_neighbors_snapshot_optional_fields():
snap: NeighborsSnapshot = {
"node_id": "!aabbccdd",
"rx_time": 1700000009,
"rx_iso": "2023-11-14T00:00:09Z",
"neighbors": [],
"node_broadcast_interval_secs": 900,
}
assert snap["node_broadcast_interval_secs"] == 900
def test_trace_event_schema():
assert TraceEvent.__required_keys__ == frozenset({"hops", "rx_time", "rx_iso"})
assert "elapsed_ms" in TraceEvent.__optional_keys__
assert "snr" in TraceEvent.__optional_keys__
def test_trace_event_required_fields():
event: TraceEvent = {
"hops": [1, 2, 3],
"rx_time": 1700000010,
"rx_iso": "2023-11-14T00:00:10Z",
}
assert event["hops"] == [1, 2, 3]
def test_trace_event_optional_fields():
event: TraceEvent = {
"hops": [4, 5],
"rx_time": 1700000011,
"rx_iso": "2023-11-14T00:00:11Z",
"elapsed_ms": 42,
"snr": 2.5,
}
assert event["elapsed_ms"] == 42
def test_ingestor_heartbeat_schema():
# IngestorHeartbeat uses total=True with NotRequired fields. Under
# `from __future__ import annotations` the TypedDict metaclass cannot
# evaluate the annotation strings at class creation time, so
# NotRequired keys appear in __required_keys__ rather than
# __optional_keys__. Verify the four always-present keys are included.
always_required = {"node_id", "start_time", "last_seen_time", "version"}
assert always_required <= IngestorHeartbeat.__required_keys__
def test_ingestor_heartbeat_all_fields():
hb: IngestorHeartbeat = {
"node_id": "!aabbccdd",
"start_time": 1700000000,
"last_seen_time": 1700000012,
"version": "0.5.12",
"lora_freq": 906875,
"modem_preset": "LONG_FAST",
}
assert hb["version"] == "0.5.12"
assert hb["lora_freq"] == 906875
def test_ingestor_heartbeat_without_optional_fields():
hb: IngestorHeartbeat = {
"node_id": "!aabbccdd",
"start_time": 1700000000,
"last_seen_time": 1700000013,
"version": "0.5.12",
}
assert "lora_freq" not in hb
+893
@@ -0,0 +1,893 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for the :mod:`data.mesh_ingestor.handlers` subpackage."""
from __future__ import annotations
import base64
import sys
import time
from pathlib import Path
from types import SimpleNamespace
import pytest
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
import data.mesh_ingestor.config as config
import data.mesh_ingestor.handlers as handlers
import data.mesh_ingestor.handlers._state as _state_mod
import data.mesh_ingestor.handlers.ignored as ignored_mod
import data.mesh_ingestor.handlers.telemetry as telemetry_mod
@pytest.fixture(autouse=True)
def reset_handler_state():
"""Reset global handler state between tests."""
_state_mod._host_node_id = None
_state_mod._host_telemetry_last_rx = None
_state_mod._host_nodeinfo_last_seen = None
_state_mod._last_packet_monotonic = None
yield
_state_mod._host_node_id = None
_state_mod._host_telemetry_last_rx = None
_state_mod._host_nodeinfo_last_seen = None
_state_mod._last_packet_monotonic = None
# ---------------------------------------------------------------------------
# _state: host_node_id / register_host_node_id
# ---------------------------------------------------------------------------
class TestHostNodeId:
"""Tests for host node ID state accessors."""
def test_returns_none_initially(self):
"""host_node_id() returns None before registration."""
assert handlers.host_node_id() is None
def test_register_stores_canonical_id(self):
"""Registering a valid node ID stores it canonically."""
handlers.register_host_node_id("!aabbccdd")
assert handlers.host_node_id() == "!aabbccdd"
def test_register_none_clears_id(self):
"""Registering None clears the stored host ID."""
handlers.register_host_node_id("!aabbccdd")
handlers.register_host_node_id(None)
assert handlers.host_node_id() is None
def test_register_resets_telemetry_window(self):
"""Registering a new host ID resets the telemetry suppression window."""
_state_mod._host_telemetry_last_rx = 999_999
handlers.register_host_node_id("!aabbccdd")
assert _state_mod._host_telemetry_last_rx is None
def test_register_resets_nodeinfo_window(self):
"""Registering a new host ID resets the NODEINFO suppression window."""
_state_mod._host_nodeinfo_last_seen = 12345.0
handlers.register_host_node_id("!aabbccdd")
assert _state_mod._host_nodeinfo_last_seen is None
def test_register_canonicalises_numeric(self):
"""Numeric node ID is converted to !xxxxxxxx form."""
handlers.register_host_node_id(0xAABBCCDD)
assert handlers.host_node_id() == "!aabbccdd"
# ---------------------------------------------------------------------------
# _state: last_packet_monotonic / _mark_packet_seen
# ---------------------------------------------------------------------------
class TestLastPacketMonotonic:
"""Tests for packet timestamp tracking."""
def test_returns_none_initially(self):
"""Returns None before any packet is processed."""
assert handlers.last_packet_monotonic() is None
def test_updates_after_mark(self):
"""_mark_packet_seen() updates the monotonic timestamp."""
_state_mod._mark_packet_seen()
ts = handlers.last_packet_monotonic()
assert ts is not None
assert isinstance(ts, float)
def test_mark_packet_seen_exported_from_handlers(self):
"""handlers._mark_packet_seen must be accessible via the package."""
assert callable(handlers._mark_packet_seen)
handlers._mark_packet_seen()
ts = handlers.last_packet_monotonic()
assert ts is not None
# ---------------------------------------------------------------------------
# _state: _host_telemetry_suppressed
# ---------------------------------------------------------------------------
class TestHostTelemetrySuppressed:
"""Tests for host telemetry suppression logic."""
def test_not_suppressed_when_no_previous(self):
"""Not suppressed when no previous telemetry timestamp is set."""
suppressed, mins = _state_mod._host_telemetry_suppressed(int(time.time()))
assert suppressed is False
assert mins == 0
def test_suppressed_within_interval(self):
"""Suppressed when within the suppression window."""
now = int(time.time())
_state_mod._host_telemetry_last_rx = now - 10 # 10 seconds ago
suppressed, mins = _state_mod._host_telemetry_suppressed(now)
assert suppressed is True
assert mins > 0
def test_not_suppressed_after_interval(self):
"""Not suppressed after the full interval has elapsed."""
now = int(time.time())
_state_mod._host_telemetry_last_rx = (
now - _state_mod._HOST_TELEMETRY_INTERVAL_SECS - 1
)
suppressed, mins = _state_mod._host_telemetry_suppressed(now)
assert suppressed is False
assert mins == 0
def test_minutes_remaining_rounds_up(self):
"""Minutes remaining is rounded up (ceiling division)."""
now = int(time.time())
# 30 seconds remaining → 1 minute remaining
_state_mod._host_telemetry_last_rx = (
now - _state_mod._HOST_TELEMETRY_INTERVAL_SECS + 30
)
suppressed, mins = _state_mod._host_telemetry_suppressed(now)
assert suppressed is True
assert mins == 1
# ---------------------------------------------------------------------------
# _state: _host_nodeinfo_suppressed / _mark_host_nodeinfo_seen
# ---------------------------------------------------------------------------
class TestHostNodeinfoSuppressed:
"""Tests for host NODEINFO suppression logic."""
def test_not_suppressed_when_no_previous(self):
"""Not suppressed when no previous NODEINFO timestamp is set."""
assert _state_mod._host_nodeinfo_suppressed(time.monotonic()) is False
def test_suppressed_within_interval(self):
"""Suppressed when within the suppression window."""
now = time.monotonic()
_state_mod._host_nodeinfo_last_seen = now - 10.0 # 10 seconds ago
assert _state_mod._host_nodeinfo_suppressed(now) is True
def test_not_suppressed_after_interval(self):
"""Not suppressed after the full interval has elapsed."""
now = time.monotonic()
_state_mod._host_nodeinfo_last_seen = (
now - _state_mod._HOST_NODEINFO_INTERVAL_SECS - 1.0
)
assert _state_mod._host_nodeinfo_suppressed(now) is False
def test_mark_updates_timestamp(self):
"""_mark_host_nodeinfo_seen stores the provided timestamp."""
now = time.monotonic()
_state_mod._mark_host_nodeinfo_seen(now)
assert _state_mod._host_nodeinfo_last_seen == now
def test_suppressed_after_mark(self):
"""Immediately after marking, a second call is suppressed."""
now = time.monotonic()
_state_mod._mark_host_nodeinfo_seen(now)
assert _state_mod._host_nodeinfo_suppressed(now + 1.0) is True
def test_not_suppressed_after_mark_and_full_interval(self):
"""After a full interval has elapsed, suppression lifts."""
long_ago = time.monotonic() - _state_mod._HOST_NODEINFO_INTERVAL_SECS - 5.0
_state_mod._mark_host_nodeinfo_seen(long_ago)
assert _state_mod._host_nodeinfo_suppressed(time.monotonic()) is False
# ---------------------------------------------------------------------------
# radio: _radio_metadata_fields / _apply_radio_metadata
# ---------------------------------------------------------------------------
class TestRadioMetadata:
"""Tests for radio metadata helper functions."""
def test_empty_when_neither_configured(self, monkeypatch):
"""Returns empty dict when LORA_FREQ and MODEM_PRESET are both None."""
monkeypatch.setattr(config, "LORA_FREQ", None)
monkeypatch.setattr(config, "MODEM_PRESET", None)
assert handlers._radio_metadata_fields() == {}
def test_includes_lora_freq(self, monkeypatch):
"""Includes lora_freq when configured."""
monkeypatch.setattr(config, "LORA_FREQ", 915)
monkeypatch.setattr(config, "MODEM_PRESET", None)
assert handlers._radio_metadata_fields() == {"lora_freq": 915}
def test_includes_modem_preset(self, monkeypatch):
"""Includes modem_preset when configured."""
monkeypatch.setattr(config, "LORA_FREQ", None)
monkeypatch.setattr(config, "MODEM_PRESET", "LongFast")
assert handlers._radio_metadata_fields() == {"modem_preset": "LongFast"}
def test_apply_radio_metadata_enriches_payload(self, monkeypatch):
"""_apply_radio_metadata adds radio fields to the payload."""
monkeypatch.setattr(config, "LORA_FREQ", 915)
monkeypatch.setattr(config, "MODEM_PRESET", "LongFast")
payload = {"id": 1}
result = handlers._apply_radio_metadata(payload)
assert result["lora_freq"] == 915
assert result["modem_preset"] == "LongFast"
assert result is payload # mutated in-place
def test_apply_radio_metadata_to_nodes_enriches_node_dicts(self, monkeypatch):
"""_apply_radio_metadata_to_nodes enriches each node-value dict."""
monkeypatch.setattr(config, "LORA_FREQ", 915)
monkeypatch.setattr(config, "MODEM_PRESET", None)
payload = {"!aabb": {"lastHeard": 100}, "ingestor": "!host"}
handlers._apply_radio_metadata_to_nodes(payload)
assert payload["!aabb"]["lora_freq"] == 915
# Non-dict values like "ingestor" string are not enriched
assert isinstance(payload["ingestor"], str)
# ---------------------------------------------------------------------------
# ignored: _record_ignored_packet
# ---------------------------------------------------------------------------
class TestRecordIgnoredPacket:
"""Tests for :func:`handlers.ignored._record_ignored_packet`."""
def test_noop_when_debug_false(self, monkeypatch, tmp_path):
"""Does nothing when DEBUG is disabled."""
monkeypatch.setattr(config, "DEBUG", False)
log_path = tmp_path / "ignored.txt"
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOG_PATH", log_path)
ignored_mod._record_ignored_packet({"test": 1}, reason="test-reason")
assert not log_path.exists()
def test_writes_json_line_when_debug(self, monkeypatch, tmp_path):
"""Appends a JSON record when DEBUG is enabled."""
import json
import threading
monkeypatch.setattr(config, "DEBUG", True)
log_path = tmp_path / "ignored.txt"
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOG_PATH", log_path)
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOCK", threading.Lock())
ignored_mod._record_ignored_packet(
{"portnum": "BAD"}, reason="unsupported-port"
)
assert log_path.exists()
line = log_path.read_text().strip()
record = json.loads(line)
assert record["reason"] == "unsupported-port"
assert "timestamp" in record
def test_bytes_in_packet_are_base64(self, monkeypatch, tmp_path):
"""Byte values in the packet are Base64-encoded in the log."""
import json
import threading
monkeypatch.setattr(config, "DEBUG", True)
log_path = tmp_path / "ignored.txt"
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOG_PATH", log_path)
monkeypatch.setattr(ignored_mod, "_IGNORED_PACKET_LOCK", threading.Lock())
ignored_mod._record_ignored_packet({"data": b"\x00\x01"}, reason="test")
record = json.loads(log_path.read_text().strip())
assert record["packet"]["data"] == base64.b64encode(b"\x00\x01").decode()
# ---------------------------------------------------------------------------
# position: base64_payload
# ---------------------------------------------------------------------------
class TestBase64Payload:
"""Tests for :func:`handlers.base64_payload`."""
def test_none_returns_none(self):
"""None input returns None."""
assert handlers.base64_payload(None) is None
def test_empty_bytes_returns_none(self):
"""Empty bytes return None."""
assert handlers.base64_payload(b"") is None
def test_encodes_bytes(self):
"""Non-empty bytes are Base64 encoded."""
result = handlers.base64_payload(b"\x00\x01\x02")
assert result == base64.b64encode(b"\x00\x01\x02").decode("ascii")
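# ---------------------------------------------------------------------------
# Illustrative sketch (assumed shape, consistent with the three cases above):
# ---------------------------------------------------------------------------
import base64


def _base64_payload_sketch(data):
    """None or empty bytes -> None; otherwise ASCII Base64 text."""
    if not data:
        return None
    return base64.b64encode(data).decode("ascii")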
# ---------------------------------------------------------------------------
# generic: _is_encrypted_flag
# ---------------------------------------------------------------------------
class TestIsEncryptedFlag:
"""Tests for :func:`handlers._is_encrypted_flag`."""
def test_true_bool(self):
assert handlers._is_encrypted_flag(True) is True
def test_false_bool(self):
assert handlers._is_encrypted_flag(False) is False
def test_nonzero_int(self):
assert handlers._is_encrypted_flag(1) is True
def test_zero_int(self):
assert handlers._is_encrypted_flag(0) is False
def test_empty_string(self):
assert handlers._is_encrypted_flag("") is False
def test_false_string(self):
assert handlers._is_encrypted_flag("false") is False
def test_no_string(self):
assert handlers._is_encrypted_flag("no") is False
def test_zero_string(self):
assert handlers._is_encrypted_flag("0") is False
def test_truthy_string(self):
assert handlers._is_encrypted_flag("yes") is True
def test_none_is_falsy(self):
assert handlers._is_encrypted_flag(None) is False
def test_nonempty_bytes(self):
assert handlers._is_encrypted_flag(b"\x01") is True
def test_empty_bytes(self):
assert handlers._is_encrypted_flag(b"") is False
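# ---------------------------------------------------------------------------
# Illustrative sketch (an assumption): the truth table above is satisfied by
# treating a handful of string spellings as false and using plain Python
# truthiness for everything else.
# ---------------------------------------------------------------------------
def _is_encrypted_flag_sketch(value):
    if isinstance(value, str):
        return value.strip().lower() not in ("", "false", "no", "0")
    return bool(value)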
# ---------------------------------------------------------------------------
# generic: upsert_node
# ---------------------------------------------------------------------------
class TestUpsertNode:
"""Tests for :func:`handlers.upsert_node`."""
def test_queues_node_payload(self):
"""upsert_node enqueues a POST to /api/nodes."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.upsert_node("!aabbccdd", {"user": {"shortName": "AB"}})
finally:
q._queue_post_json = original
assert any(p == "/api/nodes" for p, _ in sent)
def test_includes_ingestor_field(self):
"""Payload includes ingestor field with host node ID."""
import data.mesh_ingestor.queue as q
handlers.register_host_node_id("!deadbeef")
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.upsert_node("!aabbccdd", {"user": {}})
finally:
q._queue_post_json = original
_, payload = sent[0]
assert payload.get("ingestor") == "!deadbeef"
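# ---------------------------------------------------------------------------
# The swap-and-restore of q._queue_post_json above recurs throughout this
# module. A hypothetical fixture (not part of this suite) could centralise it:
# ---------------------------------------------------------------------------
import pytest


@pytest.fixture
def captured_posts(monkeypatch):
    """Collect (path, payload) tuples instead of hitting the real queue."""
    import data.mesh_ingestor.queue as q

    sent = []
    monkeypatch.setattr(
        q,
        "_queue_post_json",
        lambda path, payload, *, priority, **kw: sent.append((path, payload)),
    )
    return sent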
# ---------------------------------------------------------------------------
# generic: on_receive deduplication
# ---------------------------------------------------------------------------
class TestOnReceive:
"""Tests for :func:`handlers.on_receive`."""
def test_deduplicates_via_seen_flag(self, monkeypatch):
"""Packets with _potatomesh_seen=True are skipped."""
calls = []
monkeypatch.setattr(
"data.mesh_ingestor.handlers.generic.store_packet_dict",
lambda pkt: calls.append(pkt),
)
packet = {"_potatomesh_seen": True, "decoded": {}}
handlers.on_receive(packet, None)
assert calls == []
def test_marks_packet_seen(self, monkeypatch):
"""First call marks the packet as seen."""
monkeypatch.setattr(
"data.mesh_ingestor.handlers.generic.store_packet_dict",
lambda pkt: None,
)
packet = {"decoded": {}}
handlers.on_receive(packet, None)
assert packet.get("_potatomesh_seen") is True
def test_updates_monotonic_timestamp(self, monkeypatch):
"""on_receive updates the last-packet monotonic timestamp."""
monkeypatch.setattr(
"data.mesh_ingestor.handlers.generic.store_packet_dict",
lambda pkt: None,
)
handlers.on_receive({"decoded": {}}, None)
assert handlers.last_packet_monotonic() is not None
# ---------------------------------------------------------------------------
# store_position_packet
# ---------------------------------------------------------------------------
class TestStorePositionPacket:
"""Tests for :func:`handlers.store_position_packet`."""
def _make_packet(self, from_id="!aabbccdd", pkt_id=1001, **extra):
pkt = {
"id": pkt_id,
"rxTime": 1_700_000_000,
"fromId": from_id,
"decoded": {
"position": {"latitude": 37.5, "longitude": -122.1},
},
}
pkt.update(extra)
return pkt
def test_queues_position_payload(self):
"""Valid position packet is queued to /api/positions."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_position_packet(
self._make_packet(),
{"position": {"latitude": 37.5, "longitude": -122.1}},
)
finally:
q._queue_post_json = original
assert any(p == "/api/positions" for p, _ in sent)
def test_skips_when_no_node_id(self):
"""Packet missing a node ID is silently dropped."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_position_packet({}, {})
finally:
q._queue_post_json = original
assert sent == []
def test_skips_when_no_packet_id(self):
"""Packet missing a packet ID is silently dropped."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_position_packet({"fromId": "!aabbccdd"}, {})
finally:
q._queue_post_json = original
assert sent == []
def test_latitude_i_conversion(self):
"""latitudeI integer is divided by 1e7 to get degrees."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_position_packet(
{"id": 99, "rxTime": 100, "fromId": "!aabbccdd"},
{"position": {"latitudeI": 375000000, "longitudeI": -1221000000}},
)
finally:
q._queue_post_json = original
assert len(sent) == 1
payload = sent[0][1]
assert abs(payload["latitude"] - 37.5) < 1e-4
assert abs(payload["longitude"] - -122.1) < 1e-4
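# ---------------------------------------------------------------------------
# Illustrative note: Meshtastic "*I" position fields are fixed-point degrees
# scaled by 1e7, which is the conversion asserted above. The helper name below
# is invented for illustration.
# ---------------------------------------------------------------------------
def _degrees_from_fixed_point_sketch(value_i):
    """375000000 -> 37.5 degrees."""
    return value_i / 1e7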
# ---------------------------------------------------------------------------
# store_telemetry_packet
# ---------------------------------------------------------------------------
class TestStoreTelemetryPacket:
"""Tests for :func:`handlers.store_telemetry_packet`."""
def _make_telemetry_packet(self, from_id="!aabbccdd", pkt_id=2001):
return {
"id": pkt_id,
"rxTime": 1_700_000_000,
"fromId": from_id,
"decoded": {
"portnum": "TELEMETRY_APP",
"telemetry": {
"deviceMetrics": {"batteryLevel": 80, "voltage": 3.8},
},
},
}
def test_queues_telemetry_payload(self):
"""Valid telemetry packet is queued to /api/telemetry."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
pkt = self._make_telemetry_packet()
handlers.store_telemetry_packet(pkt, pkt["decoded"])
finally:
q._queue_post_json = original
assert any(p == "/api/telemetry" for p, _ in sent)
def test_skips_without_telemetry_section(self):
"""Packet without a telemetry section is silently dropped."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_telemetry_packet({"id": 1}, {})
finally:
q._queue_post_json = original
assert sent == []
def test_skips_without_packet_id(self):
"""Telemetry packet without an id is dropped."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_telemetry_packet(
{"fromId": "!aabbccdd"},
{"telemetry": {"deviceMetrics": {}}},
)
finally:
q._queue_post_json = original
assert sent == []
def test_host_telemetry_suppressed_within_interval(self, monkeypatch):
"""Host node telemetry is suppressed within the interval window."""
import data.mesh_ingestor.queue as q
handlers.register_host_node_id("!aabbccdd")
now = int(time.time())
_state_mod._host_telemetry_last_rx = now - 10 # recent
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
pkt = {
"id": 1,
"rxTime": now,
"fromId": "!aabbccdd",
"decoded": {
"portnum": "TELEMETRY_APP",
"telemetry": {"deviceMetrics": {"batteryLevel": 80}},
},
}
handlers.store_telemetry_packet(pkt, pkt["decoded"])
finally:
q._queue_post_json = original
assert sent == []
def test_telemetry_type_device(self):
"""deviceMetrics triggers telemetry_type='device'."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
pkt = self._make_telemetry_packet()
handlers.store_telemetry_packet(pkt, pkt["decoded"])
finally:
q._queue_post_json = original
_, payload = sent[0]
assert payload.get("telemetry_type") == "device"
def test_invalid_telemetry_type_dropped_from_payload(self, monkeypatch):
"""Unrecognised telemetry_type is omitted from the payload."""
import data.mesh_ingestor.queue as q
monkeypatch.setattr(telemetry_mod, "_VALID_TELEMETRY_TYPES", frozenset())
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
pkt = self._make_telemetry_packet()
handlers.store_telemetry_packet(pkt, pkt["decoded"])
finally:
q._queue_post_json = original
_, payload = sent[0]
assert "telemetry_type" not in payload
# ---------------------------------------------------------------------------
# store_nodeinfo_packet
# ---------------------------------------------------------------------------
class TestStoreNodeinfoPacket:
"""Tests for :func:`handlers.store_nodeinfo_packet`."""
def test_queues_node_payload(self):
"""Valid nodeinfo packet is queued to /api/nodes."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_nodeinfo_packet(
{"id": 1, "rxTime": 100, "fromId": "!aabbccdd"},
{
"user": {
"id": "!aabbccdd",
"shortName": "AB",
"longName": "Alpha Bravo",
}
},
)
finally:
q._queue_post_json = original
assert any(p == "/api/nodes" for p, _ in sent)
def test_skips_when_no_node_id(self):
"""Packet with no resolvable node ID is silently dropped."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_nodeinfo_packet({}, {})
finally:
q._queue_post_json = original
assert sent == []
def test_host_nodeinfo_not_suppressed_on_first_call(self):
"""First NODEINFO from the host node is always forwarded."""
import data.mesh_ingestor.queue as q
handlers.register_host_node_id("!aabbccdd")
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(path)
try:
handlers.store_nodeinfo_packet(
{"id": 1, "rxTime": 100, "fromId": "!aabbccdd"},
{"user": {"id": "!aabbccdd", "shortName": "AB", "longName": "Alpha"}},
)
finally:
q._queue_post_json = original
assert "/api/nodes" in sent
def test_host_nodeinfo_suppressed_within_window(self):
"""Second NODEINFO from the host within the throttle window is dropped."""
import data.mesh_ingestor.queue as q
handlers.register_host_node_id("!aabbccdd")
# Simulate a recent upsert so the window is active.
_state_mod._mark_host_nodeinfo_seen(time.monotonic())
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(path)
try:
handlers.store_nodeinfo_packet(
{"id": 2, "rxTime": 200, "fromId": "!aabbccdd"},
{"user": {"id": "!aabbccdd", "shortName": "AB", "longName": "Alpha"}},
)
finally:
q._queue_post_json = original
assert sent == []
def test_host_nodeinfo_allowed_after_window_expires(self):
"""NODEINFO from the host is forwarded after the throttle window expires."""
import data.mesh_ingestor.queue as q
handlers.register_host_node_id("!aabbccdd")
# Place last-seen far in the past so the window has expired.
_state_mod._host_nodeinfo_last_seen = (
time.monotonic() - _state_mod._HOST_NODEINFO_INTERVAL_SECS - 10.0
)
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(path)
try:
handlers.store_nodeinfo_packet(
{"id": 3, "rxTime": 300, "fromId": "!aabbccdd"},
{"user": {"id": "!aabbccdd", "shortName": "AB", "longName": "Alpha"}},
)
finally:
q._queue_post_json = original
assert "/api/nodes" in sent
def test_non_host_nodeinfo_never_suppressed(self):
"""NODEINFO from a non-host node is never throttled."""
import data.mesh_ingestor.queue as q
handlers.register_host_node_id("!aabbccdd")
# Mark the host as recently seen to activate the throttle.
_state_mod._mark_host_nodeinfo_seen(time.monotonic())
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(path)
try:
handlers.store_nodeinfo_packet(
{"id": 4, "rxTime": 400, "fromId": "!11223344"},
{
"user": {
"id": "!11223344",
"shortName": "CD",
"longName": "Charlie Delta",
}
},
)
finally:
q._queue_post_json = original
assert "/api/nodes" in sent
# ---------------------------------------------------------------------------
# store_neighborinfo_packet
# ---------------------------------------------------------------------------
class TestStoreNeighborinfoPacket:
"""Tests for :func:`handlers.store_neighborinfo_packet`."""
def test_queues_neighbor_payload(self):
"""Valid neighborinfo packet is queued to /api/neighbors."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_neighborinfo_packet(
{"id": 1, "rxTime": 100, "fromId": "!aabbccdd"},
{
"neighborinfo": {
"nodeId": 0xAABBCCDD,
"neighbors": [
{"nodeId": 0x11223344, "snr": 5.0},
],
}
},
)
finally:
q._queue_post_json = original
assert any(p == "/api/neighbors" for p, _ in sent)
def test_skips_when_no_neighborinfo_section(self):
"""Missing neighborinfo section is silently dropped."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_neighborinfo_packet({"fromId": "!aabbccdd"}, {})
finally:
q._queue_post_json = original
assert sent == []
# ---------------------------------------------------------------------------
# store_router_heartbeat_packet
# ---------------------------------------------------------------------------
class TestStoreRouterHeartbeatPacket:
"""Tests for :func:`handlers.store_router_heartbeat_packet`."""
def test_queues_node_upsert(self):
"""Router heartbeat queues a minimal node upsert."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_router_heartbeat_packet(
{"fromId": "!aabbccdd", "rxTime": 1_700_000_000}
)
finally:
q._queue_post_json = original
assert any(p == "/api/nodes" for p, _ in sent)
def test_skips_when_no_from_id(self):
"""Heartbeat without from_id is silently dropped."""
import data.mesh_ingestor.queue as q
sent = []
original = q._queue_post_json
q._queue_post_json = lambda path, payload, *, priority, **kw: sent.append(
(path, payload)
)
try:
handlers.store_router_heartbeat_packet({})
finally:
q._queue_post_json = original
assert sent == []
+1 -11
@@ -41,16 +41,6 @@ def reset_state(monkeypatch):
     importlib.reload(config)
-def test_config_module_port_aliases(monkeypatch):
-    """Ensure the config module keeps CONNECTION and PORT in sync."""
-    reloaded = importlib.reload(config)
-    monkeypatch.setattr(reloaded, "CONNECTION", "dev-tty", raising=False)
-    reloaded.PORT = "new-port"
-    assert reloaded.CONNECTION == "new-port"
-    assert reloaded.PORT == "new-port"
 def test_queue_stringification_and_ordering():
     """Exercise queue payload formatting and priority ordering."""
@@ -237,7 +227,7 @@ def test_region_frequency_and_resolution_helpers():
     assert freq == "915MHz"
     freq = interfaces._region_frequency(LoraMessage(2))
-    assert freq == "US"
+    assert freq == 902  # "US" is in the region lookup table → base 902 MHz
     class StringRegionMessage:
         def __init__(self, region):
+209
@@ -0,0 +1,209 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.ingestors`."""
from __future__ import annotations
import sys
import time
from pathlib import Path
import pytest
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
import data.mesh_ingestor.config as config
from data.mesh_ingestor.ingestors import (
HEARTBEAT_INTERVAL_SECS,
_IngestorState,
ingestor_start_time,
queue_ingestor_heartbeat,
set_ingestor_node_id,
)
import data.mesh_ingestor.ingestors as ingestors_mod
@pytest.fixture(autouse=True)
def reset_ingestor_state():
"""Reset shared ingestor state between tests."""
original = ingestors_mod.STATE
ingestors_mod.STATE = _IngestorState()
yield
ingestors_mod.STATE = original
# ---------------------------------------------------------------------------
# ingestor_start_time
# ---------------------------------------------------------------------------
class TestIngestorStartTime:
"""Tests for :func:`ingestors.ingestor_start_time`."""
def test_returns_integer(self):
"""Returns an integer unix timestamp."""
result = ingestor_start_time()
assert isinstance(result, int)
def test_is_close_to_now(self):
"""Start time is within a few seconds of now (fresh state)."""
result = ingestor_start_time()
assert abs(result - int(time.time())) < 5
def test_same_across_calls(self):
"""Returns the same value on repeated calls."""
assert ingestor_start_time() == ingestor_start_time()
# ---------------------------------------------------------------------------
# set_ingestor_node_id
# ---------------------------------------------------------------------------
class TestSetIngestorNodeId:
"""Tests for :func:`ingestors.set_ingestor_node_id`."""
def test_canonical_id_stored(self):
"""Sets canonical !xxxxxxxx node ID."""
result = set_ingestor_node_id("!aabbccdd")
assert result == "!aabbccdd"
assert ingestors_mod.STATE.node_id == "!aabbccdd"
def test_numeric_id_canonicalised(self):
"""Numeric node ID is canonicalised to !xxxxxxxx format."""
result = set_ingestor_node_id(0xAABBCCDD)
assert result is not None
assert result.startswith("!")
def test_none_returns_none(self):
"""None input returns None and does not update state."""
ingestors_mod.STATE.node_id = "!existing"
result = set_ingestor_node_id(None)
assert result is None
assert ingestors_mod.STATE.node_id == "!existing"
def test_invalid_id_returns_none(self):
"""Invalid node ID returns None."""
result = set_ingestor_node_id("not-a-node-id")
assert result is None
def test_new_id_resets_last_heartbeat(self):
"""Changing node ID resets the last heartbeat timestamp."""
ingestors_mod.STATE.node_id = "!aabbccdd"
ingestors_mod.STATE.last_heartbeat = 12345
set_ingestor_node_id("!11223344")
assert ingestors_mod.STATE.last_heartbeat is None
def test_same_id_does_not_reset_heartbeat(self):
"""Setting the same node ID preserves the last heartbeat."""
ingestors_mod.STATE.node_id = "!aabbccdd"
ingestors_mod.STATE.last_heartbeat = 12345
set_ingestor_node_id("!aabbccdd")
assert ingestors_mod.STATE.last_heartbeat == 12345
# ---------------------------------------------------------------------------
# queue_ingestor_heartbeat
# ---------------------------------------------------------------------------
class TestQueueIngestorHeartbeat:
"""Tests for :func:`ingestors.queue_ingestor_heartbeat`."""
def test_returns_false_when_no_node_id(self):
"""Returns False when no node ID is set."""
assert queue_ingestor_heartbeat() is False
def test_queues_heartbeat_with_node_id(self):
"""Returns True and queues a payload when node ID is set."""
set_ingestor_node_id("!aabbccdd")
sent = []
result = queue_ingestor_heartbeat(
send=lambda path, payload: sent.append((path, payload))
)
assert result is True
assert len(sent) == 1
path, payload = sent[0]
assert path == "/api/ingestors"
assert payload["node_id"] == "!aabbccdd"
def test_payload_contains_required_fields(self):
"""Heartbeat payload includes all required contract fields."""
set_ingestor_node_id("!aabbccdd")
sent = []
queue_ingestor_heartbeat(send=lambda path, payload: sent.append(payload))
payload = sent[0]
assert "node_id" in payload
assert "start_time" in payload
assert "last_seen_time" in payload
assert "version" in payload
def test_force_bypasses_interval(self):
"""force=True sends even within the heartbeat interval."""
set_ingestor_node_id("!aabbccdd")
ingestors_mod.STATE.last_heartbeat = int(time.time())
sent = []
result = queue_ingestor_heartbeat(
force=True,
send=lambda path, payload: sent.append(payload),
)
assert result is True
assert len(sent) == 1
def test_interval_prevents_duplicate_send(self):
"""Heartbeat is suppressed when interval has not elapsed."""
set_ingestor_node_id("!aabbccdd")
ingestors_mod.STATE.last_heartbeat = int(time.time())
sent = []
result = queue_ingestor_heartbeat(
send=lambda path, payload: sent.append(payload)
)
assert result is False
assert sent == []
def test_heartbeat_with_node_id_kwarg(self):
"""Providing node_id kwarg sets it before sending."""
sent = []
result = queue_ingestor_heartbeat(
node_id="!11223344",
send=lambda path, payload: sent.append(payload),
)
assert result is True
assert sent[0]["node_id"] == "!11223344"
def test_lora_freq_included_when_set(self, monkeypatch):
"""lora_freq is included in payload when LORA_FREQ is configured."""
set_ingestor_node_id("!aabbccdd")
monkeypatch.setattr(config, "LORA_FREQ", 915.0)
sent = []
queue_ingestor_heartbeat(send=lambda path, payload: sent.append(payload))
assert sent[0].get("lora_freq") == pytest.approx(915.0)
def test_modem_preset_included_when_set(self, monkeypatch):
"""modem_preset is included in payload when MODEM_PRESET is configured."""
set_ingestor_node_id("!aabbccdd")
monkeypatch.setattr(config, "MODEM_PRESET", "LongFast")
sent = []
queue_ingestor_heartbeat(send=lambda path, payload: sent.append(payload))
assert sent[0].get("modem_preset") == "LongFast"
def test_updates_last_heartbeat_after_send(self):
"""STATE.last_heartbeat is updated after a successful send."""
set_ingestor_node_id("!aabbccdd")
before = int(time.time())
queue_ingestor_heartbeat(send=lambda path, payload: None)
assert ingestors_mod.STATE.last_heartbeat is not None
assert ingestors_mod.STATE.last_heartbeat >= before
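# ---------------------------------------------------------------------------
# Illustrative sketch (an assumption): the gating behaviour the heartbeat
# tests pin down. Only the send/skip decision is shown; payload assembly and
# queueing are omitted.
# ---------------------------------------------------------------------------
def _should_send_heartbeat_sketch(node_id, last_heartbeat, interval_secs, force=False):
    if node_id is None:
        return False  # test_returns_false_when_no_node_id
    if force or last_heartbeat is None:
        return True  # test_force_bypasses_interval / first heartbeat
    return time.time() - last_heartbeat >= interval_secs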
+579
@@ -0,0 +1,579 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.interfaces`."""
from __future__ import annotations
import sys
from pathlib import Path
from types import SimpleNamespace
import pytest
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
import data.mesh_ingestor.config as config
import data.mesh_ingestor.interfaces as ifaces
# ---------------------------------------------------------------------------
# _ensure_mapping
# ---------------------------------------------------------------------------
class TestEnsureMapping:
"""Tests for :func:`interfaces._ensure_mapping`."""
def test_mapping_returned_as_is(self):
"""A dict is returned directly without conversion."""
d = {"a": 1}
result = ifaces._ensure_mapping(d)
# Use id() to assert identity (same object, not just equal value).
assert id(result) == id(d)
def test_object_with_dict_attr(self):
"""Object whose ``__dict__`` is a mapping is wrapped."""
obj = SimpleNamespace(x=10)
result = ifaces._ensure_mapping(obj)
assert isinstance(result, dict)
assert result.get("x") == 10
def test_convertible_via_node_to_dict(self, monkeypatch):
"""Objects convertible by ``_node_to_dict`` return a mapping."""
import data.mesh_ingestor.serialization as ser
monkeypatch.setattr(ser, "_node_to_dict", lambda _v: {"converted": True})
# Use an object without __dict__ to avoid the __dict__ branch
class NoDict:
__slots__ = ()
result = ifaces._ensure_mapping(NoDict())
assert result == {"converted": True}
def test_non_convertible_returns_none(self, monkeypatch):
"""Returns None for objects that cannot be converted to a mapping."""
import data.mesh_ingestor.serialization as ser
monkeypatch.setattr(ser, "_node_to_dict", lambda _v: "not-a-mapping")
class NoDict:
__slots__ = ()
assert ifaces._ensure_mapping(NoDict()) is None
def test_none_returns_none(self):
"""None input returns None."""
assert ifaces._ensure_mapping(None) is None
# ---------------------------------------------------------------------------
# _is_nodeish_identifier
# ---------------------------------------------------------------------------
class TestIsNodeishIdentifier:
"""Tests for :func:`interfaces._is_nodeish_identifier`."""
def test_int_returns_false(self):
"""Integers are not node identifiers."""
assert ifaces._is_nodeish_identifier(42) is False
def test_float_returns_false(self):
"""Floats are not node identifiers."""
assert ifaces._is_nodeish_identifier(3.14) is False
def test_non_string_returns_false(self):
"""Non-string, non-numeric objects return False."""
assert ifaces._is_nodeish_identifier(object()) is False
def test_empty_string_returns_false(self):
"""Empty string is not a node identifier."""
assert ifaces._is_nodeish_identifier(" ") is False
def test_caret_prefix_returns_true(self):
"""Strings starting with ^ are recognised as special destinations."""
assert ifaces._is_nodeish_identifier("^all") is True
def test_bang_hex_valid(self):
"""!xxxxxxxx style identifiers are recognised."""
assert ifaces._is_nodeish_identifier("!aabbccdd") is True
def test_bang_hex_too_long(self):
"""More than 8 hex digits after ! are rejected."""
assert ifaces._is_nodeish_identifier("!aabbccdd00") is False
def test_0x_prefix_valid(self):
"""0x-prefixed hex strings with ≤8 digits are recognised."""
assert ifaces._is_nodeish_identifier("0xaabb") is True
def test_bare_decimal_rejected(self):
"""Bare decimal strings without hex digits are not node identifiers."""
assert ifaces._is_nodeish_identifier("12345678") is False
def test_bare_hex_valid(self):
"""Bare hex strings containing a-f are recognised."""
assert ifaces._is_nodeish_identifier("aabbccdd") is True
def test_bare_hex_too_long_rejected(self):
"""More than 8 bare hex characters are rejected."""
assert ifaces._is_nodeish_identifier("aabbccdd00") is False
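# ---------------------------------------------------------------------------
# Illustrative sketch (an assumption): rules consistent with the cases above.
# "^" destinations always pass, "!"/"0x" prefixes take up to eight hex digits,
# and bare strings must be hex *and* contain a letter so that plain decimals
# such as "12345678" are rejected.
# ---------------------------------------------------------------------------
import string


def _is_nodeish_identifier_sketch(value):
    if not isinstance(value, str):
        return False
    text = value.strip()
    if not text:
        return False
    if text.startswith("^"):
        return True
    if text.startswith("!"):
        body = text[1:]
    elif text.lower().startswith("0x"):
        body = text[2:]
    else:
        body = text
        if not any(c in "abcdefABCDEF" for c in body):
            return False
    return 0 < len(body) <= 8 and all(c in string.hexdigits for c in body)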
# ---------------------------------------------------------------------------
# _candidate_node_id
# ---------------------------------------------------------------------------
class TestCandidateNodeId:
"""Tests for :func:`interfaces._candidate_node_id`."""
def test_none_returns_none(self):
"""None input returns None."""
assert ifaces._candidate_node_id(None) is None
def test_from_id_key(self):
"""fromId key resolves to canonical node ID."""
result = ifaces._candidate_node_id({"fromId": "!aabbccdd"})
assert result == "!aabbccdd"
def test_node_num_key(self):
"""nodeNum integer key is canonicalised."""
result = ifaces._candidate_node_id({"nodeNum": 0xAABBCCDD})
assert result is not None
assert result.startswith("!")
def test_id_key_nodeish(self):
"""'id' key is resolved when it looks like a node identifier."""
result = ifaces._candidate_node_id({"id": "!aabbccdd"})
assert result == "!aabbccdd"
def test_id_key_non_nodeish_skipped(self):
"""Non-nodeish 'id' values are ignored."""
result = ifaces._candidate_node_id({"id": "not-an-id"})
assert result is None
def test_user_section_lookup(self):
"""Searches user sub-section for node ID."""
result = ifaces._candidate_node_id({"user": {"id": "!aabbccdd"}})
assert result == "!aabbccdd"
def test_decoded_section_lookup(self):
"""Searches decoded sub-section for node ID."""
result = ifaces._candidate_node_id({"decoded": {"fromId": "!aabbccdd"}})
assert result == "!aabbccdd"
def test_payload_section_lookup(self):
"""Searches payload sub-section for node ID."""
result = ifaces._candidate_node_id({"payload": {"fromId": "!aabbccdd"}})
assert result == "!aabbccdd"
def test_empty_mapping_returns_none(self):
"""Mapping with no recognisable ID fields returns None."""
assert ifaces._candidate_node_id({"foo": "bar"}) is None
def test_list_value_scanned(self):
"""Node IDs inside list values are found."""
result = ifaces._candidate_node_id({"items": [{"fromId": "!aabbccdd"}]})
assert result == "!aabbccdd"
# ---------------------------------------------------------------------------
# _has_field
# ---------------------------------------------------------------------------
class TestHasField:
"""Tests for :func:`interfaces._has_field`."""
def test_none_returns_false(self):
"""None message returns False."""
assert ifaces._has_field(None, "anything") is False
def test_has_field_callable_true(self):
"""HasField callable returning True is propagated."""
msg = SimpleNamespace(HasField=lambda name: name == "lora")
assert ifaces._has_field(msg, "lora") is True
def test_has_field_callable_false(self):
"""HasField callable returning False is propagated."""
msg = SimpleNamespace(HasField=lambda name: False)
assert ifaces._has_field(msg, "lora") is False
def test_no_has_field_but_attr_present(self):
"""Falls back to hasattr when HasField is absent."""
msg = SimpleNamespace(lora=object())
assert ifaces._has_field(msg, "lora") is True
def test_no_has_field_attr_absent(self):
"""Returns False when both HasField and the attribute are absent."""
assert ifaces._has_field(SimpleNamespace(), "lora") is False
# ---------------------------------------------------------------------------
# _enum_name_from_field
# ---------------------------------------------------------------------------
class TestEnumNameFromField:
"""Tests for :func:`interfaces._enum_name_from_field`."""
def test_no_descriptor_returns_none(self):
"""Message without DESCRIPTOR returns None."""
assert ifaces._enum_name_from_field(object(), "region", 1) is None
def test_field_not_in_descriptor(self):
"""Unknown field name returns None."""
desc = SimpleNamespace(fields_by_name={})
msg = SimpleNamespace(DESCRIPTOR=desc)
assert ifaces._enum_name_from_field(msg, "region", 1) is None
def test_no_enum_type_returns_none(self):
"""Field without enum_type returns None."""
field_desc = SimpleNamespace(enum_type=None)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc)
assert ifaces._enum_name_from_field(msg, "region", 1) is None
def test_value_not_in_enum_returns_none(self):
"""Enum value not found in values_by_number returns None."""
enum_type = SimpleNamespace(values_by_number={})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc)
assert ifaces._enum_name_from_field(msg, "region", 99) is None
def test_valid_lookup(self):
"""Returns the enum value name for a known numeric value."""
enum_val = SimpleNamespace(name="US_915")
enum_type = SimpleNamespace(values_by_number={3: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc)
assert ifaces._enum_name_from_field(msg, "region", 3) == "US_915"
# ---------------------------------------------------------------------------
# _computed_channel_frequency
# ---------------------------------------------------------------------------
class TestComputedChannelFrequency:
"""Tests for :func:`interfaces._computed_channel_frequency`."""
def test_none_enum_name_returns_none(self):
"""None enum_name returns None."""
assert ifaces._computed_channel_frequency(None, 0) is None
def test_unknown_region_returns_none(self):
"""Enum name not in lookup table returns None."""
assert ifaces._computed_channel_frequency("UNKNOWN_REGION", 0) is None
def test_us_channel_0_base_frequency(self):
"""US region, channel 0, returns floor(902.0 + 0*0.25) = 902."""
assert ifaces._computed_channel_frequency("US", 0) == 902
def test_us_channel_52_mid_band(self):
"""US region, channel 52, returns floor(902.0 + 52*0.25) = 915."""
assert ifaces._computed_channel_frequency("US", 52) == 915
def test_eu_868_channel_0_returns_869(self):
"""EU_868 region, channel 0, returns floor(869.525) = 869, not 868."""
assert ifaces._computed_channel_frequency("EU_868", 0) == 869
def test_eu_868_channel_1_returns_870(self):
"""EU_868 region, channel 1, returns floor(869.525 + 0.5) = 870."""
assert ifaces._computed_channel_frequency("EU_868", 1) == 870
def test_my_919_channel_0(self):
"""MY_919 region, channel 0, returns floor(919.0) = 919."""
assert ifaces._computed_channel_frequency("MY_919", 0) == 919
def test_lora_24_channel_0(self):
"""LORA_24 region, channel 0, returns floor(2400.0) = 2400."""
assert ifaces._computed_channel_frequency("LORA_24", 0) == 2400
def test_none_channel_num_defaults_to_zero(self):
"""None channel_num is treated as 0, returning the base frequency."""
assert ifaces._computed_channel_frequency("ANZ", None) == 916
def test_negative_channel_num_clamped_to_zero(self):
"""Negative channel_num is clamped to 0, returning the base frequency."""
assert ifaces._computed_channel_frequency("ANZ", -1) == 916
def test_result_is_int(self):
"""Return type is int (math.floor result), not float."""
result = ifaces._computed_channel_frequency("EU_868", 0)
assert isinstance(result, int)
def test_nz_865_channel_0(self):
"""NZ_865 region, channel 0, returns floor(864.0) = 864."""
assert ifaces._computed_channel_frequency("NZ_865", 0) == 864
def test_br_902_channel_4_spacing_0_25(self):
"""BR_902 region, channel 4, returns floor(902.0 + 4*0.25) = 903."""
assert ifaces._computed_channel_frequency("BR_902", 4) == 903
def test_kz_863_channel_0(self):
"""KZ_863 region, channel 0, returns floor(863.125) = 863."""
assert ifaces._computed_channel_frequency("KZ_863", 0) == 863
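# ---------------------------------------------------------------------------
# Illustrative sketch: the assertions above are all consistent with
# floor(base + channel * spacing) over a per-region channel plan. Only the
# entries exercised here are listed, and the spacing of regions tested solely
# at channel 0 is an assumption.
# ---------------------------------------------------------------------------
import math

_REGION_PLAN_SKETCH = {
    "US": (902.0, 0.25),
    "EU_868": (869.525, 0.5),
    "MY_919": (919.0, 0.25),
    "LORA_24": (2400.0, 0.25),
    "ANZ": (916.0, 0.25),
    "NZ_865": (864.0, 0.25),
    "BR_902": (902.0, 0.25),
    "KZ_863": (863.125, 0.25),
}


def _computed_channel_frequency_sketch(enum_name, channel_num):
    plan = _REGION_PLAN_SKETCH.get(enum_name)
    if plan is None:
        return None
    base, spacing = plan
    channel = max(int(channel_num or 0), 0)  # None and negatives clamp to 0
    return math.floor(base + channel * spacing)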
# ---------------------------------------------------------------------------
# _region_frequency
# ---------------------------------------------------------------------------
class TestRegionFrequency:
"""Tests for :func:`interfaces._region_frequency`."""
def test_none_returns_none(self):
"""None input returns None."""
assert ifaces._region_frequency(None) is None
def test_numeric_override_frequency(self):
"""Positive numeric override_frequency is floored to MHz."""
msg = SimpleNamespace(override_frequency=915.8, region=None)
assert ifaces._region_frequency(msg) == 915
def test_zero_override_frequency_falls_through(self):
"""Zero override_frequency is ignored."""
msg = SimpleNamespace(override_frequency=0, region=None)
assert ifaces._region_frequency(msg) is None
def test_string_override_frequency(self):
"""Non-empty string override_frequency is returned as-is."""
msg = SimpleNamespace(override_frequency="915MHz", region=None)
assert ifaces._region_frequency(msg) == "915MHz"
def test_enum_name_with_freq_digits(self):
"""Extracts MHz frequency from enum name like US_915."""
enum_val = SimpleNamespace(name="US_915")
enum_type = SimpleNamespace(values_by_number={1: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc, override_frequency=None, region=1)
assert ifaces._region_frequency(msg) == 915
def test_enum_name_without_large_digit_returns_name(self):
"""Enum name with only small digits returns the full name string."""
enum_val = SimpleNamespace(name="BAND_24")
enum_type = SimpleNamespace(values_by_number={2: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc, override_frequency=None, region=2)
# 24 < 100, so falls through to reversed digits → returns 24
assert ifaces._region_frequency(msg) == 24
def test_large_integer_region_returned(self):
"""Integer region value >= 100 is returned directly."""
msg = SimpleNamespace(DESCRIPTOR=None, override_frequency=None, region=433)
assert ifaces._region_frequency(msg) == 433
def test_string_region_returned(self):
"""Non-empty string region is returned directly."""
msg = SimpleNamespace(DESCRIPTOR=None, override_frequency=None, region="EU433")
assert ifaces._region_frequency(msg) == "EU433"
def test_us_enum_lookup_table_used(self):
"""US region with channel_num=0 returns 902 from lookup table, not None."""
enum_val = SimpleNamespace(name="US")
enum_type = SimpleNamespace(values_by_number={1: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(
DESCRIPTOR=desc, override_frequency=None, region=1, channel_num=0
)
assert ifaces._region_frequency(msg) == 902
def test_eu_868_returns_869_not_868(self):
"""EU_868 region returns 869 from lookup table, not 868 parsed from name."""
enum_val = SimpleNamespace(name="EU_868")
enum_type = SimpleNamespace(values_by_number={3: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(
DESCRIPTOR=desc, override_frequency=None, region=3, channel_num=0
)
assert ifaces._region_frequency(msg) == 869
def test_unrecognised_int_falls_through(self):
"""Raw int region with no DESCRIPTOR and value < 100 returns None."""
msg = SimpleNamespace(DESCRIPTOR=None, override_frequency=None, region=99)
assert ifaces._region_frequency(msg) is None
def test_missing_channel_num_attr_uses_base(self):
"""Region in lookup table with no channel_num attribute returns base freq."""
enum_val = SimpleNamespace(name="MY_919")
enum_type = SimpleNamespace(values_by_number={17: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
# deliberately no channel_num attribute
msg = SimpleNamespace(DESCRIPTOR=desc, override_frequency=None, region=17)
assert ifaces._region_frequency(msg) == 919
def test_override_takes_priority_over_lookup_table(self):
"""override_frequency takes priority over the lookup table."""
enum_val = SimpleNamespace(name="EU_868")
enum_type = SimpleNamespace(values_by_number={3: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(
DESCRIPTOR=desc, override_frequency=867.3, region=3, channel_num=0
)
assert ifaces._region_frequency(msg) == 867
def test_unknown_enum_name_falls_to_digit_parse(self):
"""Enum name not in lookup table falls through to digit parsing."""
enum_val = SimpleNamespace(name="FUTURE_999")
enum_type = SimpleNamespace(values_by_number={99: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(
DESCRIPTOR=desc, override_frequency=None, region=99, channel_num=0
)
assert ifaces._region_frequency(msg) == 999
# ---------------------------------------------------------------------------
# _camelcase_enum_name
# ---------------------------------------------------------------------------
class TestCamelcaseEnumName:
"""Tests for :func:`interfaces._camelcase_enum_name`."""
def test_none_returns_none(self):
"""None input returns None."""
assert ifaces._camelcase_enum_name(None) is None
def test_empty_string_returns_none(self):
"""Empty string returns None."""
assert ifaces._camelcase_enum_name("") is None
def test_screaming_snake(self):
"""SCREAMING_SNAKE_CASE is converted to CamelCase."""
assert ifaces._camelcase_enum_name("LONG_FAST") == "LongFast"
def test_single_word(self):
"""Single word is capitalised."""
assert ifaces._camelcase_enum_name("SHORT") == "Short"
def test_with_digits(self):
"""Digits in the name are preserved."""
assert ifaces._camelcase_enum_name("BAND_915") == "Band915"
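# ---------------------------------------------------------------------------
# Illustrative sketch: splitting on underscores and capitalising each part
# reproduces every case above ("BAND_915" -> "Band" + "915" == "Band915").
# ---------------------------------------------------------------------------
def _camelcase_enum_name_sketch(name):
    if not name:
        return None
    return "".join(part.capitalize() for part in name.split("_"))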
# ---------------------------------------------------------------------------
# _modem_preset
# ---------------------------------------------------------------------------
class TestModemPreset:
"""Tests for :func:`interfaces._modem_preset`."""
def test_none_returns_none(self):
"""None lora_message returns None."""
assert ifaces._modem_preset(None) is None
def test_no_descriptor_no_attr_returns_none(self):
"""Message with neither descriptor nor modem_preset attr returns None."""
class NoPreset:
DESCRIPTOR = None
assert ifaces._modem_preset(NoPreset()) is None
def test_descriptor_modem_preset_field(self):
"""Finds modem_preset via DESCRIPTOR fields_by_name."""
enum_val = SimpleNamespace(name="LONG_FAST")
enum_type = SimpleNamespace(values_by_number={0: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"modem_preset": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc, modem_preset=0)
assert ifaces._modem_preset(msg) == "LongFast"
def test_attr_fallback(self):
"""Falls back to hasattr when DESCRIPTOR is absent."""
msg = SimpleNamespace(modem_preset="LONG_FAST")
# No DESCRIPTOR so enum lookup won't work, falls to string branch
result = ifaces._modem_preset(msg)
assert result == "LongFast"
def test_preset_field_name_fallback(self):
"""'preset' field is used when 'modem_preset' is absent in descriptor."""
enum_val = SimpleNamespace(name="SHORT_FAST")
enum_type = SimpleNamespace(values_by_number={1: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"preset": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc, preset=1)
assert ifaces._modem_preset(msg) == "ShortFast"
# ---------------------------------------------------------------------------
# _ensure_radio_metadata caching
# ---------------------------------------------------------------------------
class TestEnsureRadioMetadata:
"""Tests for :func:`interfaces._ensure_radio_metadata` caching behaviour."""
def test_none_iface_is_noop(self, monkeypatch):
"""None interface does not touch config."""
original_freq = config.LORA_FREQ
original_preset = config.MODEM_PRESET
ifaces._ensure_radio_metadata(None)
assert config.LORA_FREQ == original_freq
assert config.MODEM_PRESET == original_preset
def test_sets_lora_freq_when_not_cached(self, monkeypatch):
"""Populates LORA_FREQ from interface when not yet configured."""
monkeypatch.setattr(config, "LORA_FREQ", None)
monkeypatch.setattr(config, "MODEM_PRESET", None)
enum_val = SimpleNamespace(name="US_915")
enum_type = SimpleNamespace(values_by_number={1: enum_val})
region_field = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": region_field})
lora = SimpleNamespace(
DESCRIPTOR=desc, region=1, override_frequency=None, modem_preset=None
)
local_config = SimpleNamespace(lora=lora, HasField=lambda f: f == "lora")
local_node = SimpleNamespace(localConfig=local_config)
iface = SimpleNamespace(localNode=local_node, waitForConfig=lambda: None)
ifaces._ensure_radio_metadata(iface)
assert config.LORA_FREQ == 915
def test_does_not_overwrite_existing_freq(self, monkeypatch):
"""Does not overwrite LORA_FREQ when already set."""
monkeypatch.setattr(config, "LORA_FREQ", 433)
monkeypatch.setattr(config, "MODEM_PRESET", None)
enum_val = SimpleNamespace(name="US_915")
enum_type = SimpleNamespace(values_by_number={1: enum_val})
region_field = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": region_field})
lora = SimpleNamespace(
DESCRIPTOR=desc, region=1, override_frequency=None, modem_preset=None
)
local_config = SimpleNamespace(lora=lora, HasField=lambda f: f == "lora")
local_node = SimpleNamespace(localConfig=local_config)
iface = SimpleNamespace(localNode=local_node, waitForConfig=lambda: None)
ifaces._ensure_radio_metadata(iface)
assert config.LORA_FREQ == 433
+317 -37
@@ -228,13 +228,14 @@ def mesh_module(monkeypatch):
 def test_instance_domain_prefers_primary_env(mesh_module, monkeypatch):
-    """Ensure the ingestor prefers ``INSTANCE_DOMAIN`` over the legacy variable."""
+    """Ensure the ingestor reads ``INSTANCE_DOMAIN``."""
     monkeypatch.setenv("INSTANCE_DOMAIN", "https://new.example")
-    monkeypatch.setenv("POTATOMESH_INSTANCE", "https://legacy.example")
     try:
+        refreshed_instances = mesh_module.config._resolve_instance_domains()
         refreshed_instance = mesh_module.config._resolve_instance_domain()
+        mesh_module.config.INSTANCES = refreshed_instances
         mesh_module.config.INSTANCE = refreshed_instance
         mesh_module.INSTANCE = refreshed_instance
@@ -242,26 +243,7 @@ def test_instance_domain_prefers_primary_env(mesh_module, monkeypatch):
         assert mesh_module.INSTANCE == "https://new.example"
     finally:
         monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
-        monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
-        mesh_module.config.INSTANCE = mesh_module.config._resolve_instance_domain()
-        mesh_module.INSTANCE = mesh_module.config.INSTANCE
-def test_instance_domain_falls_back_to_legacy(mesh_module, monkeypatch):
-    """Verify ``POTATOMESH_INSTANCE`` is used when ``INSTANCE_DOMAIN`` is unset."""
-    monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
-    monkeypatch.setenv("POTATOMESH_INSTANCE", "https://legacy-only.example")
-    try:
-        refreshed_instance = mesh_module.config._resolve_instance_domain()
-        mesh_module.config.INSTANCE = refreshed_instance
-        mesh_module.INSTANCE = refreshed_instance
-        assert refreshed_instance == "https://legacy-only.example"
-        assert mesh_module.INSTANCE == "https://legacy-only.example"
-    finally:
-        monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
+        mesh_module.config.INSTANCES = mesh_module.config._resolve_instance_domains()
         mesh_module.config.INSTANCE = mesh_module.config._resolve_instance_domain()
         mesh_module.INSTANCE = mesh_module.config.INSTANCE
@@ -270,10 +252,11 @@ def test_instance_domain_infers_scheme_for_hostnames(mesh_module, monkeypatch):
     """Ensure bare hostnames are promoted to HTTPS URLs for ingestion."""
     monkeypatch.setenv("INSTANCE_DOMAIN", "mesh.example.org")
-    monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
     try:
+        refreshed_instances = mesh_module.config._resolve_instance_domains()
         refreshed_instance = mesh_module.config._resolve_instance_domain()
+        mesh_module.config.INSTANCES = refreshed_instances
         mesh_module.config.INSTANCE = refreshed_instance
         mesh_module.INSTANCE = refreshed_instance
@@ -281,6 +264,7 @@ def test_instance_domain_infers_scheme_for_hostnames(mesh_module, monkeypatch):
         assert mesh_module.INSTANCE == "https://mesh.example.org"
     finally:
         monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
+        mesh_module.config.INSTANCES = mesh_module.config._resolve_instance_domains()
         mesh_module.config.INSTANCE = mesh_module.config._resolve_instance_domain()
         mesh_module.INSTANCE = mesh_module.config.INSTANCE
@@ -605,10 +589,10 @@ def test_ensure_radio_metadata_extracts_config(mesh_module, capsys):
     first_log = capsys.readouterr().out
     assert iface.wait_calls == 1
-    assert mesh.config.LORA_FREQ == 868
+    assert mesh.config.LORA_FREQ == 869
     assert mesh.config.MODEM_PRESET == "MediumFast"
     assert "Captured LoRa radio metadata" in first_log
-    assert "lora_freq=868" in first_log
+    assert "lora_freq=869" in first_log
     assert "modem_preset='MediumFast'" in first_log
     secondary_lora = make_lora(7, "US_915", 2, "LONG_FAST", preset_field="preset")
@@ -618,7 +602,7 @@ def test_ensure_radio_metadata_extracts_config(mesh_module, capsys):
     second_log = capsys.readouterr().out
     assert second_iface.wait_calls == 1
-    assert mesh.config.LORA_FREQ == 868
+    assert mesh.config.LORA_FREQ == 869
     assert mesh.config.MODEM_PRESET == "MediumFast"
     assert second_log == ""
@@ -788,6 +772,7 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
     mesh.config.LORA_FREQ = 868
     mesh.config.MODEM_PRESET = "MediumFast"
+    mesh.register_host_node_id("!f00dbabe")
     packet = {
         "id": 123,
@@ -823,6 +808,7 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
     assert payload["rssi"] == -70
     assert payload["reply_id"] is None
     assert payload["emoji"] is None
+    assert payload["ingestor"] == "!f00dbabe"
     assert payload["lora_freq"] == 868
     assert payload["modem_preset"] == "MediumFast"
     assert priority == mesh._MESSAGE_POST_PRIORITY
@@ -879,6 +865,7 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
     mesh.config.LORA_FREQ = 868
     mesh.config.MODEM_PRESET = "MediumFast"
+    mesh.register_host_node_id("!f00dbabe")
     packet = {
         "id": 200498337,
@@ -946,6 +933,7 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
     )
     assert payload["lora_freq"] == 868
     assert payload["modem_preset"] == "MediumFast"
+    assert payload["ingestor"] == "!f00dbabe"
     assert payload["raw"]["time"] == 1_758_624_189
@@ -960,6 +948,7 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
     mesh.config.LORA_FREQ = 868
     mesh.config.MODEM_PRESET = "MediumFast"
+    mesh.register_host_node_id("!f00dbabe")
     packet = {
         "id": 2049886869,
@@ -1004,6 +993,7 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
     assert neighbors[2]["neighbor_num"] == 0x0BAD_C0DE
     assert payload["lora_freq"] == 868
     assert payload["modem_preset"] == "MediumFast"
+    assert payload["ingestor"] == "!f00dbabe"
 def test_store_packet_dict_handles_nodeinfo_packet(mesh_module, monkeypatch):
@@ -1631,7 +1621,9 @@ def test_main_retries_interface_creation(mesh_module, monkeypatch):
             raise RuntimeError("boom")
         return iface, port
-    monkeypatch.setattr(mesh, "PORT", "/dev/ttyTEST")
+    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
+    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
+    monkeypatch.setattr(mesh, "CONNECTION", "/dev/ttyTEST")
     monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
     monkeypatch.setattr(mesh.threading, "Event", DummyEvent)
     monkeypatch.setattr(mesh.signal, "signal", lambda *_, **__: None)
@@ -1703,7 +1695,9 @@ def test_main_reconnects_when_connection_event_clears(mesh_module, monkeypatch):
             self._flag = True
             return True
-    monkeypatch.setattr(mesh, "PORT", "/dev/ttyTEST")
+    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
+    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
+    monkeypatch.setattr(mesh, "CONNECTION", "/dev/ttyTEST")
     monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
     monkeypatch.setattr(mesh.threading, "Event", DummyStopEvent)
     monkeypatch.setattr(mesh.signal, "signal", lambda *_, **__: None)
@@ -1767,7 +1761,9 @@ def test_main_recreates_interface_after_snapshot_error(mesh_module, monkeypatch)
     def record_upsert(node_id, node):
         upsert_calls.append(node_id)
-    monkeypatch.setattr(mesh, "PORT", "/dev/ttyTEST")
+    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
+    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
+    monkeypatch.setattr(mesh, "CONNECTION", "/dev/ttyTEST")
     monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
     monkeypatch.setattr(mesh, "upsert_node", record_upsert)
     monkeypatch.setattr(mesh.threading, "Event", DummyEvent)
@@ -1789,7 +1785,9 @@ def test_main_exits_when_defaults_unavailable(mesh_module, monkeypatch):
     def fail_default():
         raise mesh.NoAvailableMeshInterface("no interface available")
-    monkeypatch.setattr(mesh, "PORT", None)
+    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
+    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
+    monkeypatch.setattr(mesh, "CONNECTION", None)
     monkeypatch.setattr(mesh, "_create_default_interface", fail_default)
     monkeypatch.setattr(mesh.signal, "signal", lambda *_, **__: None)
@@ -2128,7 +2126,7 @@ def test_store_packet_dict_skips_hidden_channel(mesh_module, monkeypatch, capsys
         lambda path, payload, *, priority: captured.append((path, payload, priority)),
     )
     monkeypatch.setattr(
-        mesh.handlers,
+        mesh.handlers.ignored,
         "_record_ignored_packet",
         lambda packet, *, reason: ignored.append(reason),
    )
@@ -2198,7 +2196,7 @@ def test_store_packet_dict_skips_disallowed_channel(mesh_module, monkeypatch, ca
         lambda path, payload, *, priority: captured.append((path, payload, priority)),
     )
     monkeypatch.setattr(
-        mesh.handlers,
+        mesh.handlers.ignored,
         "_record_ignored_packet",
         lambda packet, *, reason: ignored.append(reason),
    )
@@ -2282,6 +2280,7 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
     mesh.config.LORA_FREQ = 868
     mesh.config.MODEM_PRESET = "MediumFast"
+    mesh.register_host_node_id("!f00dbabe")
     packet = {
         "id": 1_256_091_342,
@@ -2334,6 +2333,8 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
     assert payload["current"] == pytest.approx(0.0715)
     assert payload["lora_freq"] == 868
     assert payload["modem_preset"] == "MediumFast"
+    assert payload["ingestor"] == "!f00dbabe"
+    assert payload["telemetry_type"] == "device"
 def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatch):
@@ -2413,6 +2414,144 @@ def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatc
     assert payload["soil_temperature"] == pytest.approx(18.9)
     assert payload["lora_freq"] == 868
     assert payload["modem_preset"] == "MediumFast"
+    assert payload["telemetry_type"] == "environment"
+
+
+def test_store_packet_dict_handles_power_telemetry(mesh_module, monkeypatch):
+    """Power-metrics packets are tagged telemetry_type='power'."""
+    mesh = mesh_module
+    captured = []
+    monkeypatch.setattr(
+        mesh,
+        "_queue_post_json",
+        lambda path, payload, *, priority: captured.append((path, payload, priority)),
+    )
+    packet = {
+        "id": 3_000_000_001,
+        "rxTime": 1_758_030_000,
+        "fromId": "!aabbccdd",
+        "toId": "^all",
+        "decoded": {
+            "portnum": "TELEMETRY_APP",
+            "telemetry": {
+                "time": 1_758_030_000,
+                "powerMetrics": {
+                    "ch1Voltage": 5.02,
+                    "ch1Current": 0.48,
+                },
+            },
+        },
+    }
+    mesh.store_packet_dict(packet)
+    assert captured
+    _, payload, _ = captured[0]
+    assert payload["telemetry_type"] == "power"
+
+
+def test_store_packet_dict_handles_air_quality_telemetry(mesh_module, monkeypatch):
+    """Air-quality-metrics packets are tagged telemetry_type='air_quality'."""
+    mesh = mesh_module
+    captured = []
+    monkeypatch.setattr(
+        mesh,
+        "_queue_post_json",
+        lambda path, payload, *, priority: captured.append((path, payload, priority)),
+    )
+    packet = {
+        "id": 3_000_000_003,
+        "rxTime": 1_758_032_000,
+        "fromId": "!aabbccdd",
+        "toId": "^all",
+        "decoded": {
+            "portnum": "TELEMETRY_APP",
+            "telemetry": {
+                "time": 1_758_032_000,
+                "airQualityMetrics": {
+                    "pm10Standard": 4,
+                    "pm25Standard": 8,
+                    "iaq": 65,
+                },
+            },
+        },
+    }
+    mesh.store_packet_dict(packet)
+    assert captured
+    _, payload, _ = captured[0]
+    assert payload["telemetry_type"] == "air_quality"
def test_store_packet_dict_telemetry_type_absent_for_unknown_subtype(
mesh_module, monkeypatch
):
"""Packets with no recognised sub-object do not include telemetry_type in the payload."""
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
packet = {
"id": 3_000_000_002,
"rxTime": 1_758_031_000,
"fromId": "!aabbccdd",
"toId": "^all",
"decoded": {
"portnum": "TELEMETRY_APP",
"telemetry": {
"time": 1_758_031_000,
"someUnknownMetrics": {"foo": 1},
},
},
}
mesh.store_packet_dict(packet)
assert captured
_, payload, _ = captured[0]
assert "telemetry_type" not in payload
def test_store_packet_dict_invalid_telemetry_type_is_dropped(mesh_module, monkeypatch):
"""A telemetry_type value that isn't in _VALID_TELEMETRY_TYPES is omitted from the payload."""
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
# Inject a bad type by monkey-patching the validator constant so we can
# verify the drop path without needing a real packet with an impossible type.
monkeypatch.setattr(mesh.handlers.telemetry, "_VALID_TELEMETRY_TYPES", frozenset())
packet = {
"id": 3_000_000_010,
"rxTime": 1_758_040_000,
"fromId": "!aabbccdd",
"toId": "^all",
"decoded": {
"portnum": "TELEMETRY_APP",
"telemetry": {
"time": 1_758_040_000,
"deviceMetrics": {"batteryLevel": 80},
},
},
}
mesh.store_packet_dict(packet)
assert captured
_, payload, _ = captured[0]
assert "telemetry_type" not in payload
def test_store_packet_dict_throttles_host_telemetry(mesh_module, monkeypatch):
@@ -2477,6 +2616,7 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):
mesh.config.LORA_FREQ = 915
mesh.config.MODEM_PRESET = "LongFast"
mesh.register_host_node_id("!f00dbabe")
packet = {
"id": 2_934_054_466,
@@ -2518,6 +2658,7 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):
assert "elapsed_ms" in payload
assert payload["lora_freq"] == 915
assert payload["modem_preset"] == "LongFast"
assert payload["ingestor"] == "!f00dbabe"
def test_traceroute_hop_normalization_supports_mappings(mesh_module, monkeypatch):
@@ -2569,7 +2710,8 @@ def test_traceroute_packet_without_identifiers_is_ignored(mesh_module, monkeypat
assert captured == []
def test_post_queue_prioritises_messages(mesh_module, monkeypatch):
def test_post_queue_prioritises_nodes_over_messages(mesh_module, monkeypatch):
"""Nodes (priority 20) must be processed before messages (priority 30)."""
mesh = mesh_module
mesh._clear_post_queue()
calls = []
@@ -2586,7 +2728,7 @@ def test_post_queue_prioritises_messages(mesh_module, monkeypatch):
mesh._drain_post_queue()
assert [path for path, _ in calls] == ["/api/messages", "/api/nodes"]
assert [path for path, _ in calls] == ["/api/nodes", "/api/messages"]
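The renamed test encodes the intended ordering. A minimal sketch of that ordering, assuming the post queue is an ordinary min-heap keyed on (priority, insertion order) and that the priority constants carry the values named in the docstring:
# Assumed priority values, mirroring the test docstring above.
import heapq
import itertools
_NODE_POST_PRIORITY = 20
_MESSAGE_POST_PRIORITY = 30
counter = itertools.count()
queue: list = []
heapq.heappush(queue, (_MESSAGE_POST_PRIORITY, next(counter), "/api/messages"))
heapq.heappush(queue, (_NODE_POST_PRIORITY, next(counter), "/api/nodes"))
# Lower priority numbers drain first, regardless of enqueue order.
drained = [heapq.heappop(queue)[2] for _ in range(len(queue))]
assert drained == ["/api/nodes", "/api/messages"]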
def test_drain_post_queue_handles_enqueued_items_during_send(mesh_module):
@@ -2874,7 +3016,7 @@ def test_default_serial_targets_deduplicates(mesh_module, monkeypatch):
return ["/dev/ttyACM1"]
return []
monkeypatch.setattr(mesh.interfaces.glob, "glob", fake_glob)
monkeypatch.setattr(mesh.connection.glob, "glob", fake_glob)
targets = mesh._default_serial_targets()
@@ -3049,9 +3191,32 @@ def test_queue_ingestor_heartbeat_enqueues_and_throttles(mesh_module, monkeypatc
assert payload["version"] == mesh.VERSION
assert payload["lora_freq"] == 915
assert payload["modem_preset"] == "LongFast"
assert payload["protocol"] == "meshtastic"
assert priority == mesh.queue._INGESTOR_POST_PRIORITY
def test_queue_ingestor_heartbeat_protocol_meshcore(mesh_module, monkeypatch):
"""Heartbeat payload must carry the configured PROTOCOL as its protocol."""
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh.queue,
"_queue_post_json",
lambda path, payload, *, priority, send=None: captured.append(payload),
)
mesh.ingestors.STATE.last_heartbeat = None
mesh.ingestors.STATE.node_id = None
mesh.config.PROTOCOL = "meshcore"
mesh.ingestors.set_ingestor_node_id("!aabbccdd")
mesh.ingestors.queue_ingestor_heartbeat(force=True)
assert len(captured) == 1, "expected exactly one heartbeat payload"
assert captured[0]["protocol"] == "meshcore"
def test_mesh_version_export_matches_package(mesh_module):
import data
@@ -3106,8 +3271,8 @@ def test_store_packet_dict_records_ignored_packets(mesh_module, monkeypatch, tmp
monkeypatch.setattr(mesh, "DEBUG", True)
ignored_path = tmp_path / "ignored.txt"
monkeypatch.setattr(mesh.handlers, "_IGNORED_PACKET_LOG_PATH", ignored_path)
monkeypatch.setattr(mesh.handlers, "_IGNORED_PACKET_LOCK", threading.Lock())
monkeypatch.setattr(mesh.handlers.ignored, "_IGNORED_PACKET_LOG_PATH", ignored_path)
monkeypatch.setattr(mesh.handlers.ignored, "_IGNORED_PACKET_LOCK", threading.Lock())
packet = {"decoded": {"portnum": "UNKNOWN"}}
mesh.store_packet_dict(packet)
@@ -3459,3 +3624,118 @@ def test_on_receive_skips_seen_packets(mesh_module):
mesh.on_receive(packet, interface=None)
assert packet["_potatomesh_seen"] is True
def test_upsert_node_includes_ingestor_key(mesh_module, monkeypatch):
"""upsert_node must attach the host node ID so /api/nodes can resolve protocol."""
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.register_host_node_id("!aabbccdd")
mesh.upsert_node("!deadbeef", {"user": {"shortName": "X"}})
assert captured
_, payload, _ = captured[0]
assert payload.get("ingestor") == "!aabbccdd"
def test_store_packet_dict_nodeinfo_includes_ingestor_key(mesh_module, monkeypatch):
"""store_nodeinfo_packet must include the ingestor key in the /api/nodes payload."""
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.register_host_node_id("!11223344")
packet = {
"id": 1,
"rxTime": 1_700_000_000,
"fromId": "!aabbccdd",
"decoded": {
"portnum": "NODEINFO_APP",
"user": {"id": "!aabbccdd", "shortName": "N"},
},
}
mesh.store_packet_dict(packet)
node_calls = [(p, pl) for p, pl, _ in captured if p == "/api/nodes"]
assert node_calls, "Expected a /api/nodes POST"
_, payload = node_calls[0]
assert payload.get("ingestor") == "!11223344"
def test_store_packet_dict_router_heartbeat(mesh_module, monkeypatch):
"""STORE_FORWARD_APP ROUTER_HEARTBEAT upserts the node at low priority."""
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
mesh.register_host_node_id("!f00dbabe")
packet = {
"id": 2377284085,
"rxTime": 1_774_868_197,
"fromId": "!435a7fbc",
"toId": "^all",
"hopLimit": "2",
"rxSnr": "-12.25",
"rxRssi": "-110",
"decoded": {
"portnum": "STORE_FORWARD_APP",
"storeforward": {
"heartbeat": {"period": "900"},
"rr": "ROUTER_HEARTBEAT",
},
},
}
mesh.store_packet_dict(packet)
assert captured, "Expected a POST for router heartbeat"
path, payload, priority = captured[0]
assert path == "/api/nodes"
assert priority == mesh._DEFAULT_POST_PRIORITY
assert "!435a7fbc" in payload
node_entry = payload["!435a7fbc"]
assert node_entry["lastHeard"] == 1_774_868_197
assert payload.get("ingestor") == "!f00dbabe"
assert set(node_entry.keys()) == {
"lastHeard"
}, "Heartbeat must only set lastHeard, nothing else"
def test_store_packet_dict_store_forward_non_heartbeat_ignored(
mesh_module, monkeypatch
):
"""STORE_FORWARD_APP packets that are not ROUTER_HEARTBEAT are dropped."""
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda *a, **kw: captured.append(a),
)
packet = {
"id": 1,
"rxTime": 1_700_000_000,
"fromId": "!aabbccdd",
"decoded": {
"portnum": "STORE_FORWARD_APP",
"storeforward": {"rr": "ROUTER_CLIENT_RESPONSE"},
},
}
mesh.store_packet_dict(packet)
assert not captured, "Non-heartbeat STORE_FORWARD_APP must not be queued"
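Read together, these two tests imply a STORE_FORWARD_APP handler shaped roughly like the following (a hypothetical sketch, not the ingestor's actual code):
def handle_store_forward(packet, queue_post_json, ingestor_id, priority):
    """Upsert only lastHeard on ROUTER_HEARTBEAT; drop everything else."""
    decoded = packet.get("decoded") or {}
    storeforward = decoded.get("storeforward") or {}
    if storeforward.get("rr") != "ROUTER_HEARTBEAT":
        return  # non-heartbeat store-and-forward traffic is ignored
    node_id = packet.get("fromId")
    if not node_id:
        return
    payload = {node_id: {"lastHeard": packet.get("rxTime")}, "ingestor": ingestor_id}
    queue_post_json("/api/nodes", payload, priority=priority)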
@@ -0,0 +1,74 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.node_identity`."""
from __future__ import annotations
import sys
from pathlib import Path
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
from data.mesh_ingestor.node_identity import ( # noqa: E402 - path setup
canonical_node_id,
node_num_from_id,
)
def test_canonical_node_id_accepts_numeric():
assert canonical_node_id(1) == "!00000001"
assert canonical_node_id(0xABCDEF01) == "!abcdef01"
assert canonical_node_id(1.0) == "!00000001"
def test_canonical_node_id_accepts_string_forms():
assert canonical_node_id("!ABCDEF01") == "!abcdef01"
assert canonical_node_id("0xABCDEF01") == "!abcdef01"
assert canonical_node_id("abcdef01") == "!abcdef01"
assert canonical_node_id("123") == "!0000007b"
def test_canonical_node_id_passthrough_caret_destinations():
assert canonical_node_id("^all") == "^all"
def test_node_num_from_id_parses_canonical_and_hex():
assert node_num_from_id("!abcdef01") == 0xABCDEF01
assert node_num_from_id("abcdef01") == 0xABCDEF01
assert node_num_from_id("0xabcdef01") == 0xABCDEF01
assert node_num_from_id(123) == 123
def test_canonical_node_id_rejects_none_and_empty():
assert canonical_node_id(None) is None
assert canonical_node_id("") is None
assert canonical_node_id(" ") is None
def test_canonical_node_id_rejects_negative():
assert canonical_node_id(-1) is None
assert canonical_node_id(-0xABCDEF01) is None
def test_canonical_node_id_truncates_overflow():
# Values wider than 32 bits are masked, not rejected.
assert canonical_node_id(0x1_ABCDEF01) == "!abcdef01"
def test_node_num_from_id_rejects_none_and_empty():
assert node_num_from_id(None) is None
assert node_num_from_id("") is None
assert node_num_from_id("not-hex") is None
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -390,3 +390,186 @@ def test_nodeinfo_user_dict_proto_fallback(monkeypatch):
decoded_user = DecodedProto()
assert serialization._nodeinfo_user_dict(None, decoded_user) is None
# ---------------------------------------------------------------------------
# _coerce_int edge cases
# ---------------------------------------------------------------------------
class TestCoerceInt:
"""Tests for :func:`serialization._coerce_int` edge cases."""
def test_bool_true(self):
"""True coerces to 1."""
assert serialization._coerce_int(True) == 1
def test_bool_false(self):
"""False coerces to 0."""
assert serialization._coerce_int(False) == 0
def test_nan_float_returns_none(self):
"""NaN float returns None."""
import math
assert serialization._coerce_int(math.nan) is None
def test_inf_float_returns_none(self):
"""Inf float returns None."""
import math
assert serialization._coerce_int(math.inf) is None
def test_bytes_decimal(self):
"""Bytes containing a decimal string are parsed."""
assert serialization._coerce_int(b"42") == 42
def test_bytes_hex(self):
"""Bytes containing a 0x hex string are parsed."""
assert serialization._coerce_int(b"0xff") == 255
def test_empty_bytes_returns_none(self):
"""Empty bytes returns None."""
assert serialization._coerce_int(b"") is None
def test_invalid_string_returns_none(self):
"""Non-numeric string returns None."""
assert serialization._coerce_int("not-an-int") is None
def test_float_string_coerced(self):
"""Decimal string like '3.7' is truncated to int."""
assert serialization._coerce_int("3.7") == 3
def test_none_returns_none(self):
"""None returns None."""
assert serialization._coerce_int(None) is None
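One way to satisfy every _coerce_int case above (a hypothetical reimplementation; the helper in serialization.py may differ):
import math
def coerce_int(value):
    if isinstance(value, bool):
        return int(value)  # True -> 1, False -> 0
    if isinstance(value, float):
        return int(value) if math.isfinite(value) else None
    if isinstance(value, bytes):
        value = value.decode("ascii", "ignore")
    if isinstance(value, str):
        text = value.strip()
        try:
            return int(text, 0)  # handles "42" and "0xff"
        except ValueError:
            try:
                return int(float(text))  # "3.7" truncates to 3
            except ValueError:
                return None
    return value if isinstance(value, int) else None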
# ---------------------------------------------------------------------------
# _coerce_float edge cases
# ---------------------------------------------------------------------------
class TestCoerceFloat:
"""Tests for :func:`serialization._coerce_float` edge cases."""
def test_bool_true(self):
"""True coerces to 1.0."""
assert serialization._coerce_float(True) == pytest.approx(1.0)
def test_nan_returns_none(self):
"""NaN returns None."""
import math
assert serialization._coerce_float(math.nan) is None
def test_inf_returns_none(self):
"""Inf returns None."""
import math
assert serialization._coerce_float(math.inf) is None
def test_bytes_string(self):
"""Bytes containing a float string are parsed."""
assert serialization._coerce_float(b"3.14") == pytest.approx(3.14)
def test_empty_bytes_returns_none(self):
"""Empty bytes returns None."""
assert serialization._coerce_float(b"") is None
def test_invalid_string_returns_none(self):
"""Non-numeric string returns None."""
assert serialization._coerce_float("not-a-float") is None
def test_none_returns_none(self):
"""None returns None."""
assert serialization._coerce_float(None) is None
# ---------------------------------------------------------------------------
# _first dot-notation
# ---------------------------------------------------------------------------
class TestFirstDotNotation:
"""Tests for :func:`serialization._first` with dot-separated names."""
def test_dot_notation_nested_dict(self):
"""Dot notation resolves nested dict keys."""
d = {"a": {"b": 42}}
assert serialization._first(d, "a.b") == 42
def test_dot_notation_falls_back_to_next_name(self):
"""Falls back to the next candidate when dot-path misses."""
d = {"x": 99}
assert serialization._first(d, "a.b", "x") == 99
def test_dot_notation_none_value_skipped(self):
"""None value at dot-path is skipped."""
d = {"a": {"b": None}}
assert serialization._first(d, "a.b", default="fallback") == "fallback"
def test_dot_notation_empty_string_skipped(self):
"""Empty string at dot-path is skipped."""
d = {"a": {"b": ""}}
assert serialization._first(d, "a.b", default="fallback") == "fallback"
def test_attr_dot_notation(self):
"""Dot notation works for objects with attributes."""
from types import SimpleNamespace
d = SimpleNamespace(a=SimpleNamespace(b=7))
assert serialization._first(d, "a.b") == 7
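A hypothetical resolver matching the dot-notation behaviour these tests exercise:
def first(obj, *names, default=None):
    """Return the first non-None, non-empty value found at any named path."""
    for name in names:
        current = obj
        for part in name.split("."):
            if isinstance(current, dict):
                current = current.get(part)
            else:
                current = getattr(current, part, None)
            if current is None:
                break
        if current is not None and current != "":
            return current
    return default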
# ---------------------------------------------------------------------------
# _merge_mappings non-mapping extra
# ---------------------------------------------------------------------------
class TestMergeMappingsExtra:
"""Additional tests for :func:`serialization._merge_mappings`."""
def test_non_mapping_extra_ignored(self):
"""Non-mapping extra with non-convertible value returns base unchanged."""
base = {"x": 1}
# Pass a string as extra — _node_to_dict will return the string, which
# is not a Mapping, so base is returned as-is.
result = serialization._merge_mappings(base, "not-a-mapping")
assert result == {"x": 1}
def test_deep_merge(self):
"""Nested mappings are merged recursively."""
base = {"a": {"b": 1, "c": 2}}
extra = {"a": {"b": 99}}
result = serialization._merge_mappings(base, extra)
assert result == {"a": {"b": 99, "c": 2}}
def test_extra_key_added(self):
"""Keys present only in extra are added to the result."""
base = {"a": 1}
extra = {"b": 2}
result = serialization._merge_mappings(base, extra)
assert result == {"a": 1, "b": 2}
# ---------------------------------------------------------------------------
# _extract_payload_bytes additional branches
# ---------------------------------------------------------------------------
class TestExtractPayloadBytesExtra:
"""Additional coverage for :func:`serialization._extract_payload_bytes`."""
def test_non_mapping_input_returns_none(self):
"""Non-mapping decoded section returns None."""
assert serialization._extract_payload_bytes("not-a-dict") is None
def test_no_payload_key_returns_none(self):
"""Missing payload key returns None."""
assert serialization._extract_payload_bytes({}) is None
def test_bytes_payload_returned_directly(self):
"""Raw bytes payload is returned as-is."""
result = serialization._extract_payload_bytes({"payload": b"\x01\x02"})
assert result == b"\x01\x02"
@@ -55,8 +55,38 @@ def _javascript_package_version() -> str:
raise AssertionError("package.json does not expose a string version")
def _flutter_package_version() -> str:
pubspec_path = REPO_ROOT / "app" / "pubspec.yaml"
for line in pubspec_path.read_text(encoding="utf-8").splitlines():
if line.startswith("version:"):
version = line.split(":", 1)[1].strip()
if version:
return version
break
raise AssertionError("pubspec.yaml does not expose a version")
def _rust_package_version() -> str:
cargo_path = REPO_ROOT / "matrix" / "Cargo.toml"
inside_package = False
for line in cargo_path.read_text(encoding="utf-8").splitlines():
stripped = line.strip()
if stripped == "[package]":
inside_package = True
continue
if inside_package and stripped.startswith("[") and stripped.endswith("]"):
break
if inside_package:
literal = re.match(
r'version\s*=\s*["\'](?P<version>[^"\']+)["\']', stripped
)
if literal:
return literal.group("version")
raise AssertionError("Cargo.toml does not expose a package version")
def test_version_identifiers_match_across_languages() -> None:
"""Guard against version drift between Python, Ruby, and JavaScript."""
"""Guard against version drift between Python, Ruby, JavaScript, Flutter, and Rust."""
python_version = getattr(data, "__version__", None)
assert (
@@ -65,5 +95,13 @@ def test_version_identifiers_match_across_languages() -> None:
ruby_version = _ruby_fallback_version()
javascript_version = _javascript_package_version()
flutter_version = _flutter_package_version()
rust_version = _rust_package_version()
assert python_version == ruby_version == javascript_version
assert (
python_version
== ruby_version
== javascript_version
== flutter_version
== rust_version
)
@@ -76,6 +76,7 @@ COPY --chown=potatomesh:potatomesh web/spec ./spec
COPY --chown=potatomesh:potatomesh web/public ./public
COPY --chown=potatomesh:potatomesh web/views ./views
COPY --chown=potatomesh:potatomesh web/scripts ./scripts
COPY --chown=potatomesh:potatomesh web/pages ./pages
# Copy SQL schema files from data directory
COPY --chown=potatomesh:potatomesh data/*.sql /data/
@@ -84,7 +85,8 @@ COPY --chown=potatomesh:potatomesh data/mesh_ingestor/decode_payload.py /app/dat
# Create data and configuration directories with correct ownership
RUN mkdir -p /app/.local/share/potato-mesh \
&& mkdir -p /app/.config/potato-mesh/well-known \
&& chown -R potatomesh:potatomesh /app/.local/share /app/.config
&& mkdir -p /app/pages \
&& chown -R potatomesh:potatomesh /app/.local/share /app/.config /app/pages
# Switch to non-root user
USER potatomesh
@@ -20,6 +20,8 @@ gem "sqlite3", "~> 1.7"
gem "rackup", "~> 2.2"
gem "puma", "~> 7.0"
gem "prometheus-client"
gem "kramdown", "~> 2.4"
gem "kramdown-parser-gfm", "~> 1.1"
group :test do
gem "rspec", "~> 3.12"
@@ -29,3 +31,5 @@ group :test do
gem "simplecov_json_formatter", "~> 0.1", require: false
gem "rspec_junit_formatter", "~> 0.6", require: false
end
gem "sanitize", "7.0.0"
@@ -57,6 +57,8 @@ require_relative "application/meshtastic/cipher"
require_relative "application/meshtastic/payload_decoder"
require_relative "application/data_processing"
require_relative "application/filesystem"
require_relative "application/api_cache"
require_relative "application/pages"
require_relative "application/instances"
require_relative "application/routes/api"
require_relative "application/routes/ingest"
@@ -74,6 +76,7 @@ module PotatoMesh
extend App::Queries
extend App::DataProcessing
extend App::Filesystem
extend App::Pages
helpers App::Helpers
include App::Database
@@ -85,6 +88,7 @@ module PotatoMesh
include App::Queries
include App::DataProcessing
include App::Filesystem
include App::Pages
register App::Routes::Api
register App::Routes::Ingest
@@ -139,7 +143,10 @@ module PotatoMesh
set :public_folder, File.expand_path("../../public", __dir__)
set :views, File.expand_path("../../views", __dir__)
set :federation_thread, nil
set :initial_federation_thread, nil
set :federation_worker_pool, nil
set :federation_shutdown_requested, false
set :federation_shutdown_hook_installed, false
set :port, resolve_port
set :bind, DEFAULT_BIND_ADDRESS
@@ -154,8 +161,8 @@ module PotatoMesh
perform_initial_filesystem_setup!
cleanup_legacy_well_known_artifacts
init_db unless db_schema_present?
ensure_schema_upgrades
init_db unless db_schema_present?
log_instance_domain_resolution
log_instance_public_key
@@ -207,6 +214,7 @@ SELF_INSTANCE_ID = PotatoMesh::Application::SELF_INSTANCE_ID unless defined?(SEL
PotatoMesh::App::Prometheus,
PotatoMesh::App::Queries,
PotatoMesh::App::DataProcessing,
PotatoMesh::App::Pages,
].each do |mod|
Object.include(mod) unless Object < mod
end
@@ -0,0 +1,163 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "digest"
module PotatoMesh
module App
# Thread-safe in-memory cache for serialised API responses.
#
# Each entry is stored with a monotonic expiration time and a pre-computed
# ETag so the route handler can skip recomputing the digest on cache hits.
#
# The cache is bounded to {MAX_ENTRIES} to prevent unbounded memory growth
# from attacker-controlled query parameters. When the limit is reached the
# oldest entry by insertion order is evicted (LRU-ish via Ruby hash ordering).
#
# Invalidation can target a specific prefix (e.g. +"api:nodes:"+) so that an
# ingest POST to +/api/messages+ does not flush the neighbors cache.
# A single-flight guard coalesces concurrent misses for the same key so only
# one thread computes the value while others wait for the result.
module ApiCache
# Hard cap on the number of cached entries to prevent memory exhaustion.
# With the whitelisted protocol values and the known set of limit
# parameters, the realistic key space is ~30 entries; 64 leaves generous headroom.
MAX_ENTRIES = 64
@store = {}
@inflight = {}
@mutex = Mutex.new
class << self
# Retrieve a cached value or compute and store it.
#
# When multiple threads request the same cold key concurrently only one
# executes the block; the others wait for the result (single-flight).
#
# The returned hash contains both +:value+ (the JSON string) and +:etag+
# (pre-computed weak ETag) so callers can set the header without
# re-hashing the body.
#
# @param key [String] cache key incorporating all relevant query
# parameters (limit, protocol, etc.).
# @param ttl_seconds [Numeric] time-to-live for the cached entry.
# @yield Computes the value to cache when the entry is missing or
# expired. The block should return the serialised JSON string.
# @return [Hash{Symbol => String}] +:value+ and +:etag+ of the response.
def fetch(key, ttl_seconds:)
now = monotonic_now
@mutex.synchronize do
entry = @store[key]
if entry && now < entry[:expires_at]
return { value: entry[:value], etag: entry[:etag] }
end
# Single-flight: if another thread is already computing this key,
# wait for it to finish and use its result. The loop guards
# against spurious wakeups from ConditionVariable#wait.
while @inflight.key?(key)
cv = @inflight[key]
cv.wait(@mutex)
entry = @store[key]
if entry && monotonic_now < entry[:expires_at]
return { value: entry[:value], etag: entry[:etag] }
end
end
# Mark this key as in-flight so concurrent requests wait.
@inflight[key] = ConditionVariable.new
end
value = yield
etag = Digest::MD5.hexdigest(value)
@mutex.synchronize do
evict_oldest_if_full
@store[key] = { value: value, etag: etag, expires_at: monotonic_now + ttl_seconds }
cv = @inflight.delete(key)
cv&.broadcast
end
{ value: value, etag: etag }
rescue => e
# On error, unblock any waiters and re-raise.
@mutex.synchronize do
cv = @inflight.delete(key)
cv&.broadcast
end
raise e
end
# Remove entries whose keys start with any of the given prefixes.
#
# Targeted invalidation so that e.g. a messages POST does not flush the
# neighbors or telemetry caches.
#
# @param prefixes [Array<String>] key prefixes to match.
# @return [void]
def invalidate_prefix(*prefixes)
@mutex.synchronize do
@store.reject! do |key, _|
prefixes.any? { |p| key.start_with?(p) }
end
end
end
# Remove all entries from the cache.
#
# @return [void]
def invalidate_all
@mutex.synchronize { @store.clear }
end
# Remove specific entries by exact key.
#
# @param keys [Array<String>] cache keys to evict.
# @return [void]
def invalidate(*keys)
@mutex.synchronize do
keys.each { |k| @store.delete(k) }
end
end
# Return the number of entries currently held in the cache.
#
# @return [Integer] entry count.
def size
@mutex.synchronize { @store.size }
end
private
# Use the monotonic clock so TTL calculations are immune to wall-clock
# adjustments (NTP jumps, DST transitions, etc.).
def monotonic_now
Process.clock_gettime(Process::CLOCK_MONOTONIC)
end
# Evict the oldest entry when the store is at capacity. Ruby hashes
# preserve insertion order, so +first+ is the oldest key.
def evict_oldest_if_full
while @store.size >= MAX_ENTRIES
oldest_key = @store.each_key.first
@store.delete(oldest_key)
end
end
end
end
end
end
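For comparison, the single-flight miss coalescing, monotonic-clock TTL, and insertion-order eviction used above translate directly to other runtimes. A minimal Python sketch of the same pattern (class and method names are assumptions, not part of the codebase):
import hashlib
import threading
import time
class SingleFlightCache:
    MAX_ENTRIES = 64
    def __init__(self):
        self._store = {}     # key -> (value, etag, expires_at)
        self._inflight = {}  # key -> Condition for threads awaiting a compute
        self._lock = threading.Lock()
    def fetch(self, key, ttl, compute):
        with self._lock:
            hit = self._store.get(key)
            if hit and time.monotonic() < hit[2]:  # monotonic: immune to NTP jumps
                return hit[0], hit[1]
            while key in self._inflight:           # another thread is computing
                self._inflight[key].wait()
                hit = self._store.get(key)
                if hit and time.monotonic() < hit[2]:
                    return hit[0], hit[1]
            self._inflight[key] = threading.Condition(self._lock)
        try:
            value = compute()
            etag = hashlib.md5(value.encode()).hexdigest()
            with self._lock:
                while len(self._store) >= self.MAX_ENTRIES:
                    # dicts preserve insertion order, so this evicts the oldest.
                    self._store.pop(next(iter(self._store)))
                self._store[key] = (value, etag, time.monotonic() + ttl)
        finally:
            with self._lock:
                cv = self._inflight.pop(key, None)
                if cv:
                    cv.notify_all()  # wake waiters even if compute() raised
        return value, etag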
@@ -17,6 +17,32 @@
module PotatoMesh
module App
module DataProcessing
# Allowed values for the +telemetry_type+ discriminator column.
VALID_TELEMETRY_TYPES = %w[device environment power air_quality].freeze
# Coerce a Ruby boolean into a SQLite integer (1/0) while passing through
# any other value unchanged. Used when writing boolean node fields.
#
# @param value [Boolean, Object] value to coerce.
# @return [Integer, Object] 1, 0, or the original value.
def coerce_bool(value)
case value
when true then 1
when false then 0
else value
end
end
# Resolve the numeric representation of a node identifier from a packet payload.
#
# The +payload["num"]+ field may arrive as an Integer, a decimal string, or
# a hexadecimal string (with or without an +0x+ prefix). When the field is
# absent or ambiguous the method falls back to decoding the hex portion of
# +node_id+.
#
# @param node_id [String, nil] canonical node identifier in +!xxxxxxxx+ form.
# @param payload [Hash] inbound message payload that may carry a +num+ field.
# @return [Integer, nil] resolved 32-bit node number or +nil+ when undecidable.
def resolve_node_num(node_id, payload)
raw = payload["num"]
@@ -48,6 +74,19 @@ module PotatoMesh
nil
end
# Derive the canonical triplet for a node reference.
#
# Accepts an Integer node number, a hex string with or without the +!+
# sigil, a decimal numeric string, or a +0x+-prefixed hex string. A
# +fallback_num+ may be provided when +node_ref+ is nil.
#
# @param node_ref [Integer, String, nil] raw node identifier from a packet.
# @param fallback_num [Integer, nil] numeric fallback when +node_ref+ is nil.
# @return [Array(String, Integer, String), nil] tuple of
# +[canonical_id, node_num, short_id]+ or +nil+ when the reference cannot
# be resolved. +canonical_id+ is prefixed with +!+ and zero-padded to
# eight lowercase hex digits. +short_id+ is the upper-case last four
# hex digits used for display.
def canonical_node_parts(node_ref, fallback_num = nil)
fallback = coerce_integer(fallback_num)
@@ -118,7 +157,7 @@ module PotatoMesh
normalized == "ffffffff"
end
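Restated as a Python sketch of the triplet contract documented above for canonical_node_parts, accepting only an integer input for brevity (the Ruby method also handles the string forms):
def canonical_node_parts(num):
    num &= 0xFFFFFFFF
    canonical_id = f"!{num:08x}"          # "!" sigil, eight lowercase hex digits
    short_id = canonical_id[-4:].upper()  # upper-case last four digits for display
    return canonical_id, num, short_id
assert canonical_node_parts(0xABCDEF01) == ("!abcdef01", 0xABCDEF01, "EF01")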
def ensure_unknown_node(db, node_ref, fallback_num = nil, heard_time: nil)
def ensure_unknown_node(db, node_ref, fallback_num = nil, heard_time: nil, protocol: "meshtastic")
parts = canonical_node_parts(node_ref, fallback_num)
return unless parts
@@ -131,17 +170,21 @@ module PotatoMesh
)
return if existing
long_name = "Meshtastic #{short_id}"
long_name = "#{protocol_display_label(protocol)} #{short_id}"
default_role = case protocol
when "meshcore" then "COMPANION"
else "CLIENT_HIDDEN"
end
heard_time = coerce_integer(heard_time)
inserted = false
with_busy_retry do
db.execute(
<<~SQL,
INSERT OR IGNORE INTO nodes(node_id,num,short_name,long_name,role,last_heard,first_heard)
VALUES (?,?,?,?,?,?,?)
INSERT OR IGNORE INTO nodes(node_id,num,short_name,long_name,role,last_heard,first_heard,protocol)
VALUES (?,?,?,?,?,?,?,?)
SQL
[node_id, node_num, short_id, long_name, "CLIENT_HIDDEN", heard_time, heard_time],
[node_id, node_num, short_id, long_name, default_role, heard_time, heard_time, protocol],
)
inserted = db.changes.positive?
end
@@ -160,6 +203,27 @@ module PotatoMesh
inserted
end
# Converts a protocol identifier such as +meshtastic+ or +mesh-core+ into
# the display label used in generated node names: capitalised parts joined
# without a separator (e.g. +Meshtastic+, +MeshCore+).
def protocol_display_label(protocol)
protocol.split(/[-_]/).map(&:capitalize).join
end
# Returns true if +long_name+ is the synthetic placeholder generated by
# +ensure_unknown_node+ for the given +node_id+ and +protocol+. Such
# names carry no real information and must not overwrite a known name
# already on record.
def generic_fallback_name?(long_name, node_id, protocol)
return false unless long_name && !long_name.empty?
parts = canonical_node_parts(node_id)
return false unless parts
short_id = parts[2]
long_name == "#{protocol_display_label(protocol)} #{short_id}"
end
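A hypothetical Python restatement of the two helpers' documented behaviour:
import re
def protocol_display_label(protocol):
    return "".join(part.capitalize() for part in re.split(r"[-_]", protocol))
def generic_fallback_name(long_name, short_id, protocol):
    return long_name == f"{protocol_display_label(protocol)} {short_id}"
assert protocol_display_label("meshtastic") == "Meshtastic"
assert protocol_display_label("mesh-core") == "MeshCore"
assert generic_fallback_name("MeshCore EF01", "EF01", "mesh-core")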
def touch_node_last_seen(
db,
node_ref,
@@ -254,11 +318,12 @@ module PotatoMesh
return false unless version
lora_freq = coerce_integer(payload["lora_freq"])
modem_preset = string_or_nil(payload["modem_preset"])
protocol = string_or_nil(payload["protocol"]) || "meshtastic"
with_busy_retry do
db.execute <<~SQL, [node_id, start_time, last_seen_time, version, lora_freq, modem_preset]
INSERT INTO ingestors(node_id, start_time, last_seen_time, version, lora_freq, modem_preset)
VALUES(?,?,?,?,?,?)
db.execute <<~SQL, [node_id, start_time, last_seen_time, version, lora_freq, modem_preset, protocol]
INSERT INTO ingestors(node_id, start_time, last_seen_time, version, lora_freq, modem_preset, protocol)
VALUES(?,?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
start_time = CASE
WHEN excluded.start_time > ingestors.start_time THEN excluded.start_time
@@ -270,7 +335,8 @@ module PotatoMesh
END,
version = COALESCE(excluded.version, ingestors.version),
lora_freq = COALESCE(excluded.lora_freq, ingestors.lora_freq),
modem_preset = COALESCE(excluded.modem_preset, ingestors.modem_preset)
modem_preset = COALESCE(excluded.modem_preset, ingestors.modem_preset),
protocol = excluded.protocol
SQL
end
@@ -286,43 +352,64 @@ module PotatoMesh
false
end
def upsert_node(db, node_id, n)
def upsert_node(db, node_id, n, protocol: "meshtastic")
user = n["user"] || {}
met = n["deviceMetrics"] || {}
pos = n["position"] || {}
role = user["role"] || "CLIENT"
# nil when user info absent; COALESCE in the conflict clause preserves
# the stored role rather than overwriting with a default.
role = user["role"]
lh = coerce_integer(n["lastHeard"])
pt = coerce_integer(pos["time"])
now = Time.now.to_i
pt = nil if pt && pt > now
lh = now if lh && lh > now
lh = pt if pt && (!lh || lh < pt)
# 0 is truthy in Ruby — `lh ||= now` won't replace it, leaving the
# 7-day list filter to evaluate `0 >= now-7days` → false (node hidden).
lh = nil if lh && lh <= 0
# position.time = 0 means no GPS fix; skip it as a last_heard anchor
# (would re-introduce the same zero-timestamp exclusion bug for lh).
lh = pt if pt && pt > 0 && (!lh || lh < pt)
lh ||= now
bool = ->(v) {
case v
when true then 1
when false then 0
else v
end
}
node_num = resolve_node_num(node_id, n)
update_prometheus_metrics(node_id, user, role, met, pos)
lora_freq = coerce_integer(n["lora_freq"] || n["loraFrequency"])
modem_preset = string_or_nil(n["modem_preset"] || n["modemPreset"])
# Synthetic flag: true for placeholder nodes created from channel message
# sender names before the real contact advertisement is received.
synthetic = user["synthetic"] ? 1 : 0
long_name = user["longName"]
# If the incoming long name is a generic placeholder, prefer any real
# name already on record so we never stomp known data with fallback
# text. For new nodes there is nothing to preserve, so the generic
# name is still written via the INSERT VALUES path.
long_name_conflict_sql = if generic_fallback_name?(long_name, node_id, protocol)
# Generic placeholder: keep any real name already on record.
# COALESCE returns nodes.long_name when non-null, otherwise falls
# back to the incoming generic — so brand-new nodes still get it.
"COALESCE(nodes.long_name, excluded.long_name)"
else
# Real name (or nil): use the incoming value, preserving the
# existing name only when the incoming value is nil. A nil
# long_name in the packet carries no information, so falling back
# to what we already have is better than overwriting with NULL.
"COALESCE(excluded.long_name, nodes.long_name)"
end
row = [
node_id,
node_num,
user["shortName"],
user["longName"],
long_name,
user["macaddr"],
user["hwModel"] || n["hwModel"],
role,
user["publicKey"],
bool.call(user["isUnmessagable"]),
bool.call(n["isFavorite"]),
coerce_bool(user["isUnmessagable"]),
coerce_bool(n["isFavorite"]),
n["hopsAway"],
n["snr"],
lh,
@@ -344,24 +431,83 @@ module PotatoMesh
pos["altitude"],
lora_freq,
modem_preset,
protocol,
synthetic,
]
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO nodes(node_id,num,short_name,long_name,macaddr,hw_model,role,public_key,is_unmessagable,is_favorite,
hops_away,snr,last_heard,first_heard,battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,
position_time,location_source,precision_bits,latitude,longitude,altitude,lora_freq,modem_preset)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
num=excluded.num, short_name=excluded.short_name, long_name=excluded.long_name, macaddr=excluded.macaddr,
hw_model=excluded.hw_model, role=excluded.role, public_key=excluded.public_key, is_unmessagable=excluded.is_unmessagable,
is_favorite=excluded.is_favorite, hops_away=excluded.hops_away, snr=excluded.snr, last_heard=excluded.last_heard,
first_heard=COALESCE(nodes.first_heard, excluded.first_heard, excluded.last_heard),
battery_level=excluded.battery_level, voltage=excluded.voltage, channel_utilization=excluded.channel_utilization,
air_util_tx=excluded.air_util_tx, uptime_seconds=excluded.uptime_seconds, position_time=excluded.position_time,
location_source=excluded.location_source, precision_bits=excluded.precision_bits, latitude=excluded.latitude, longitude=excluded.longitude,
altitude=excluded.altitude, lora_freq=excluded.lora_freq, modem_preset=excluded.modem_preset
WHERE COALESCE(excluded.last_heard,0) >= COALESCE(nodes.last_heard,0)
SQL
db.transaction do
db.execute(<<~SQL, row)
INSERT INTO nodes(node_id,num,short_name,long_name,macaddr,hw_model,role,public_key,is_unmessagable,is_favorite,
hops_away,snr,last_heard,first_heard,battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,
position_time,location_source,precision_bits,latitude,longitude,altitude,lora_freq,modem_preset,protocol,synthetic)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
num=COALESCE(excluded.num, nodes.num),
short_name=COALESCE(excluded.short_name, nodes.short_name),
long_name=#{long_name_conflict_sql},
macaddr=COALESCE(excluded.macaddr, nodes.macaddr),
hw_model=COALESCE(excluded.hw_model, nodes.hw_model),
role=COALESCE(excluded.role, nodes.role),
public_key=COALESCE(excluded.public_key, nodes.public_key),
is_unmessagable=COALESCE(excluded.is_unmessagable, nodes.is_unmessagable),
is_favorite=excluded.is_favorite, hops_away=excluded.hops_away, snr=excluded.snr, last_heard=excluded.last_heard,
first_heard=COALESCE(nodes.first_heard, excluded.first_heard, excluded.last_heard),
battery_level=excluded.battery_level, voltage=excluded.voltage, channel_utilization=excluded.channel_utilization,
air_util_tx=excluded.air_util_tx, uptime_seconds=excluded.uptime_seconds,
position_time=COALESCE(excluded.position_time, nodes.position_time),
location_source=COALESCE(excluded.location_source, nodes.location_source),
precision_bits=COALESCE(excluded.precision_bits, nodes.precision_bits),
latitude=COALESCE(excluded.latitude, nodes.latitude),
longitude=COALESCE(excluded.longitude, nodes.longitude),
altitude=COALESCE(excluded.altitude, nodes.altitude),
lora_freq=excluded.lora_freq, modem_preset=excluded.modem_preset,
protocol=COALESCE(NULLIF(nodes.protocol,'meshtastic'), excluded.protocol),
synthetic=MIN(COALESCE(excluded.synthetic,1), COALESCE(nodes.synthetic,1))
WHERE COALESCE(excluded.last_heard,0) >= COALESCE(nodes.last_heard,0)
AND NOT (COALESCE(nodes.synthetic,0) = 0 AND excluded.synthetic = 1)
SQL
# When a real (non-synthetic) node is upserted with a known long
# name, migrate any synthetic placeholder rows that share that name.
# This fires when the MeshCore device finally receives the sender's
# contact advertisement, resolving the placeholder to a real node ID.
if synthetic == 0 && long_name && !long_name.empty?
merge_synthetic_nodes(db, node_id, long_name)
end
end
end
end
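The conflict clause above leans on COALESCE(excluded.column, nodes.column) so that a NULL in the incoming row preserves the stored value instead of erasing it. A standalone demonstration of that preserve-on-NULL semantics, runnable against an in-memory SQLite database with a deliberately simplified schema:
import sqlite3
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes(node_id TEXT PRIMARY KEY, long_name TEXT, snr REAL)")
upsert = """
    INSERT INTO nodes(node_id, long_name, snr) VALUES (?, ?, ?)
    ON CONFLICT(node_id) DO UPDATE SET
        long_name = COALESCE(excluded.long_name, nodes.long_name),
        snr = excluded.snr
"""
db.execute(upsert, ("!abcdef01", "Real Name", -7.5))
db.execute(upsert, ("!abcdef01", None, -6.0))  # NULL name: stored name survives
assert db.execute("SELECT long_name, snr FROM nodes").fetchone() == ("Real Name", -6.0)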
# Migrate messages from synthetic placeholder nodes to a newly confirmed
# real node, then remove the placeholders.
#
# Called inside a transaction from +upsert_node+ when a real (non-synthetic)
# MeshCore node with the same +long_name+ is upserted.
#
# Only +messages.from_id+ is migrated. Synthetic nodes are placeholders
# created solely from parsed channel message sender names, so they cannot
# have associated positions, telemetry, neighbors, or traces — those tables
# are intentionally left untouched.
#
# @param db [SQLite3::Database] open database connection.
# @param real_node_id [String] canonical node ID for the real contact.
# @param long_name [String] long name to match against synthetic rows.
# @return [void]
def merge_synthetic_nodes(db, real_node_id, long_name)
synthetic_ids = db.execute(
"SELECT node_id FROM nodes WHERE long_name = ? AND synthetic = 1 AND protocol = 'meshcore' AND node_id != ?",
[long_name, real_node_id],
).map { |row| row[0] }
synthetic_ids.each do |synthetic_id|
db.execute(
"UPDATE messages SET from_id = ? WHERE from_id = ?",
[real_node_id, synthetic_id],
)
db.execute(
"DELETE FROM nodes WHERE node_id = ? AND synthetic = 1",
[synthetic_id],
)
end
end
@@ -493,7 +639,7 @@ module PotatoMesh
end
end
def insert_position(db, payload)
def insert_position(db, payload, protocol_cache: nil)
pos_id = coerce_integer(payload["id"] || payload["packet_id"])
return unless pos_id
@@ -524,8 +670,10 @@ module PotatoMesh
lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
ensure_unknown_node(db, node_id || node_num, node_num, heard_time: rx_time)
ensure_unknown_node(db, node_id || node_num, node_num, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(
db,
node_id || node_num,
@@ -639,13 +787,15 @@ module PotatoMesh
hop_limit,
bitfield,
payload_b64,
ingestor,
protocol,
]
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO positions(id,node_id,node_num,rx_time,rx_iso,position_time,to_id,latitude,longitude,altitude,location_source,
precision_bits,sats_in_view,pdop,ground_speed,ground_track,snr,rssi,hop_limit,bitfield,payload_b64)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
precision_bits,sats_in_view,pdop,ground_speed,ground_track,snr,rssi,hop_limit,bitfield,payload_b64,ingestor,protocol)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(id) DO UPDATE SET
node_id=COALESCE(excluded.node_id,positions.node_id),
node_num=COALESCE(excluded.node_num,positions.node_num),
@@ -666,7 +816,9 @@ module PotatoMesh
rssi=COALESCE(excluded.rssi,positions.rssi),
hop_limit=COALESCE(excluded.hop_limit,positions.hop_limit),
bitfield=COALESCE(excluded.bitfield,positions.bitfield),
payload_b64=COALESCE(excluded.payload_b64,positions.payload_b64)
payload_b64=COALESCE(excluded.payload_b64,positions.payload_b64),
ingestor=COALESCE(NULLIF(positions.ingestor,''), excluded.ingestor),
protocol=COALESCE(NULLIF(positions.protocol,'meshtastic'), excluded.protocol)
SQL
end
@@ -685,7 +837,7 @@ module PotatoMesh
)
end
def insert_neighbors(db, payload)
def insert_neighbors(db, payload, protocol_cache: nil)
return unless payload.is_a?(Hash)
now = Time.now.to_i
@@ -717,7 +869,10 @@ module PotatoMesh
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id.start_with?("!")
ensure_unknown_node(db, node_id || node_num, node_num, heard_time: rx_time)
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
ensure_unknown_node(db, node_id || node_num, node_num, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(db, node_id || node_num, node_num, rx_time: rx_time, source: :neighborinfo)
neighbor_entries = []
@@ -756,22 +911,43 @@ module PotatoMesh
entry_rx_time = now if entry_rx_time && entry_rx_time > now
snr = coerce_float(neighbor["snr"])
ensure_unknown_node(db, neighbor_id || neighbor_num, neighbor_num, heard_time: entry_rx_time)
touch_node_last_seen(db, neighbor_id || neighbor_num, neighbor_num, rx_time: entry_rx_time, source: :neighborinfo)
ensure_unknown_node(db, neighbor_id || neighbor_num, neighbor_num, heard_time: entry_rx_time, protocol: protocol)
neighbor_entries << [neighbor_id, snr, entry_rx_time]
neighbor_entries << [neighbor_id, snr, entry_rx_time, ingestor, protocol]
end
with_busy_retry do
db.transaction do
db.execute("DELETE FROM neighbors WHERE node_id = ?", [node_id])
neighbor_entries.each do |neighbor_id, snr_value, heard_time|
if neighbor_entries.empty?
db.execute("DELETE FROM neighbors WHERE node_id = ?", [node_id])
else
expected_neighbors = neighbor_entries.map(&:first).uniq
existing_neighbors = db.execute(
"SELECT neighbor_id FROM neighbors WHERE node_id = ?",
[node_id],
).flatten
stale_neighbors = existing_neighbors - expected_neighbors
stale_neighbors.each_slice(500) do |slice|
placeholders = slice.map { "?" }.join(",")
db.execute(
"DELETE FROM neighbors WHERE node_id = ? AND neighbor_id IN (#{placeholders})",
[node_id] + slice,
)
end
end
neighbor_entries.each do |neighbor_id, snr_value, heard_time, reporter_id, proto|
db.execute(
<<~SQL,
INSERT OR REPLACE INTO neighbors(node_id, neighbor_id, snr, rx_time)
VALUES (?, ?, ?, ?)
INSERT INTO neighbors(node_id, neighbor_id, snr, rx_time, ingestor, protocol)
VALUES (?, ?, ?, ?, ?, ?)
ON CONFLICT(node_id, neighbor_id) DO UPDATE SET
snr = excluded.snr,
rx_time = excluded.rx_time,
ingestor = COALESCE(NULLIF(neighbors.ingestor,''), excluded.ingestor),
protocol = COALESCE(NULLIF(neighbors.protocol,'meshtastic'), excluded.protocol)
SQL
[node_id, neighbor_id, snr_value, heard_time],
[node_id, neighbor_id, snr_value, heard_time, reporter_id, proto],
)
end
end
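The stale-row diffing above replaces the old delete-all-then-reinsert approach, which, combined with the ON CONFLICT clause, lets per-row ingestor and protocol attributions survive refreshes. The same chunked IN-clause idea sketched in Python; batches of 500 keep each statement comfortably under SQLite's bound-parameter limit:
import sqlite3
def delete_stale_neighbors(db, node_id, stale_ids, chunk=500):
    for start in range(0, len(stale_ids), chunk):
        batch = stale_ids[start:start + chunk]
        placeholders = ",".join("?" * len(batch))
        db.execute(
            f"DELETE FROM neighbors WHERE node_id = ? AND neighbor_id IN ({placeholders})",
            [node_id, *batch],
        )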
@@ -785,7 +961,8 @@ module PotatoMesh
rx_time,
metrics = {},
lora_freq: nil,
modem_preset: nil
modem_preset: nil,
protocol: "meshtastic"
)
num = coerce_integer(node_num)
id = string_or_nil(node_id)
@@ -795,7 +972,7 @@ module PotatoMesh
id ||= format("!%08x", num & 0xFFFFFFFF) if num
return unless id
ensure_unknown_node(db, id, num, heard_time: rx_time)
ensure_unknown_node(db, id, num, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(
db,
id,
@@ -897,6 +1074,35 @@ module PotatoMesh
private :resolve_numeric_metric
# Look up the protocol registered by a given ingestor node.
#
# @param db [SQLite3::Database] open database handle.
# @param ingestor_node_id [String, nil] the node_id of the reporting ingestor.
# @param cache [Hash, nil] optional per-request memoization hash; pass a shared
# Hash instance across a batch to avoid redundant DB lookups per record.
# @return [String] protocol string; defaults to "meshtastic" when absent or unknown.
def resolve_protocol(db, ingestor_node_id, cache: nil)
return "meshtastic" if ingestor_node_id.nil? || ingestor_node_id.to_s.strip.empty?
if cache
return cache[ingestor_node_id] if cache.key?(ingestor_node_id)
result = db.get_first_value(
"SELECT protocol FROM ingestors WHERE node_id = ? LIMIT 1",
[ingestor_node_id],
) || "meshtastic"
cache[ingestor_node_id] = result
return result
end
db.get_first_value(
"SELECT protocol FROM ingestors WHERE node_id = ? LIMIT 1",
[ingestor_node_id],
) || "meshtastic"
end
private :resolve_protocol
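The optional cache turns one lookup per record into one lookup per distinct ingestor in a batch. The same memoization pattern sketched in Python (names are assumptions):
def resolve_protocol(db, ingestor_node_id, cache=None):
    if not ingestor_node_id or not str(ingestor_node_id).strip():
        return "meshtastic"
    if cache is not None and ingestor_node_id in cache:
        return cache[ingestor_node_id]
    row = db.execute(
        "SELECT protocol FROM ingestors WHERE node_id = ? LIMIT 1",
        (ingestor_node_id,),
    ).fetchone()
    result = (row and row[0]) or "meshtastic"
    if cache is not None:
        cache[ingestor_node_id] = result
    return result
# Share one dict per ingest batch: cache = {}; resolve_protocol(db, nid, cache=cache)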
# Normalise a traceroute hop entry to a numeric node identifier.
#
# @param hop [Object] raw hop entry from the payload.
@@ -935,7 +1141,7 @@ module PotatoMesh
hop_entries.filter_map { |entry| coerce_trace_node_id(entry) }
end
def insert_telemetry(db, payload)
def insert_telemetry(db, payload, protocol_cache: nil)
return unless payload.is_a?(Hash)
telemetry_id = coerce_integer(payload["id"] || payload["packet_id"])
@@ -981,12 +1187,30 @@ module PotatoMesh
payload_b64 = string_or_nil(payload["payload_b64"] || payload["payload"])
lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
telemetry_section = normalize_json_object(payload["telemetry"])
device_metrics = normalize_json_object(payload["device_metrics"] || payload["deviceMetrics"])
device_metrics ||= normalize_json_object(telemetry_section["deviceMetrics"]) if telemetry_section&.key?("deviceMetrics")
environment_metrics = normalize_json_object(payload["environment_metrics"] || payload["environmentMetrics"])
environment_metrics ||= normalize_json_object(telemetry_section["environmentMetrics"]) if telemetry_section&.key?("environmentMetrics")
power_metrics = normalize_json_object(payload["power_metrics"] || payload["powerMetrics"])
power_metrics ||= normalize_json_object(telemetry_section["powerMetrics"]) if telemetry_section&.key?("powerMetrics")
air_quality_metrics = normalize_json_object(payload["air_quality_metrics"] || payload["airQualityMetrics"])
air_quality_metrics ||= normalize_json_object(telemetry_section["airQualityMetrics"]) if telemetry_section&.key?("airQualityMetrics")
telemetry_type = string_or_nil(payload["telemetry_type"])
telemetry_type = nil unless VALID_TELEMETRY_TYPES.include?(telemetry_type)
telemetry_type ||= if device_metrics&.any?
"device"
elsif environment_metrics&.any?
"environment"
elsif power_metrics&.any?
"power"
elsif air_quality_metrics&.any?
"air_quality"
end
sources = {
payload: payload,
@@ -1310,6 +1534,9 @@ module PotatoMesh
rainfall_24h,
soil_moisture,
soil_temperature,
ingestor,
protocol,
telemetry_type,
]
placeholders = Array.new(row.length, "?").join(",")
@@ -1317,7 +1544,7 @@ module PotatoMesh
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO telemetry(id,node_id,node_num,from_id,to_id,rx_time,rx_iso,telemetry_time,channel,portnum,hop_limit,snr,rssi,bitfield,payload_b64,
battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,temperature,relative_humidity,barometric_pressure,gas_resistance,current,iaq,distance,lux,white_lux,ir_lux,uv_lux,wind_direction,wind_speed,weight,wind_gust,wind_lull,radiation,rainfall_1h,rainfall_24h,soil_moisture,soil_temperature)
battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,temperature,relative_humidity,barometric_pressure,gas_resistance,current,iaq,distance,lux,white_lux,ir_lux,uv_lux,wind_direction,wind_speed,weight,wind_gust,wind_lull,radiation,rainfall_1h,rainfall_24h,soil_moisture,soil_temperature,ingestor,protocol,telemetry_type)
VALUES (#{placeholders})
ON CONFLICT(id) DO UPDATE SET
node_id=COALESCE(excluded.node_id,telemetry.node_id),
@@ -1359,7 +1586,10 @@ module PotatoMesh
rainfall_1h=COALESCE(excluded.rainfall_1h,telemetry.rainfall_1h),
rainfall_24h=COALESCE(excluded.rainfall_24h,telemetry.rainfall_24h),
soil_moisture=COALESCE(excluded.soil_moisture,telemetry.soil_moisture),
soil_temperature=COALESCE(excluded.soil_temperature,telemetry.soil_temperature)
soil_temperature=COALESCE(excluded.soil_temperature,telemetry.soil_temperature),
ingestor=COALESCE(NULLIF(telemetry.ingestor,''), excluded.ingestor),
protocol=COALESCE(NULLIF(telemetry.protocol,'meshtastic'), excluded.protocol),
telemetry_type=COALESCE(excluded.telemetry_type,telemetry.telemetry_type)
SQL
end
@@ -1377,6 +1607,7 @@ module PotatoMesh
},
lora_freq: lora_freq,
modem_preset: modem_preset,
protocol: protocol,
)
end
@@ -1385,7 +1616,7 @@ module PotatoMesh
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] traceroute payload as produced by the ingestor.
# @return [void]
def insert_trace(db, payload)
def insert_trace(db, payload, protocol_cache: nil)
return unless payload.is_a?(Hash)
trace_identifier = coerce_integer(payload["id"] || payload["packet_id"] || payload["packetId"])
@@ -1410,20 +1641,22 @@ module PotatoMesh
metrics&.[]("latency_ms") ||
metrics&.[]("latencyMs"),
)
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
hops_value = payload.key?("hops") ? payload["hops"] : payload["path"]
hops = normalize_trace_hops(hops_value)
all_nodes = [src, dest, *hops].compact.uniq
all_nodes.each do |node|
ensure_unknown_node(db, node, node, heard_time: rx_time)
ensure_unknown_node(db, node, node, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(db, node, node, rx_time: rx_time, source: :trace)
end
with_busy_retry do
db.execute <<~SQL, [trace_identifier, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms]
INSERT INTO traces(id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms)
VALUES(?,?,?,?,?,?,?,?,?)
db.execute <<~SQL, [trace_identifier, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor, protocol]
INSERT INTO traces(id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor, protocol)
VALUES(?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(id) DO UPDATE SET
request_id=COALESCE(excluded.request_id,traces.request_id),
src=COALESCE(excluded.src,traces.src),
@@ -1432,7 +1665,9 @@ module PotatoMesh
rx_iso=excluded.rx_iso,
rssi=COALESCE(excluded.rssi,traces.rssi),
snr=COALESCE(excluded.snr,traces.snr),
elapsed_ms=COALESCE(excluded.elapsed_ms,traces.elapsed_ms)
elapsed_ms=COALESCE(excluded.elapsed_ms,traces.elapsed_ms),
ingestor=COALESCE(NULLIF(traces.ingestor,''), excluded.ingestor),
protocol=COALESCE(NULLIF(traces.protocol,'meshtastic'), excluded.protocol)
SQL
trace_id = trace_identifier || db.last_insert_row_id
@@ -1500,7 +1735,7 @@ module PotatoMesh
}
end
def insert_message(db, message)
def insert_message(db, message, protocol_cache: nil)
return unless message.is_a?(Hash)
msg_id = coerce_integer(message["id"] || message["packet_id"])
@@ -1566,7 +1801,6 @@ module PotatoMesh
channel_index = coerce_integer(message["channel"] || message["channel_index"] || message["channelIndex"])
decrypted_payload = nil
decrypted_text = nil
decrypted_portnum = nil
if encrypted && (text.nil? || text.to_s.strip.empty?)
@@ -1581,19 +1815,6 @@ module PotatoMesh
if decrypted
decrypted_payload = decrypted
decrypted_portnum = decrypted[:portnum]
if decrypted[:text]
text = decrypted[:text]
decrypted_text = text
clear_encrypted = true
encrypted = nil
message["text"] = text
message["channel_name"] ||= decrypted[:channel_name]
if portnum.nil? && decrypted_portnum
portnum = decrypted_portnum
message["portnum"] = portnum
end
end
end
end
@@ -1607,6 +1828,8 @@ module PotatoMesh
channel_name = string_or_nil(message["channel_name"] || message["channelName"])
reply_id = coerce_integer(message["reply_id"] || message["replyId"])
emoji = string_or_nil(message["emoji"])
ingestor = string_or_nil(message["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
row = [
msg_id,
@@ -1626,11 +1849,13 @@ module PotatoMesh
channel_name,
reply_id,
emoji,
ingestor,
protocol,
]
with_busy_retry do
existing = db.get_first_row(
"SELECT from_id, to_id, text, encrypted, lora_freq, modem_preset, channel_name, reply_id, emoji, portnum FROM messages WHERE id = ?",
"SELECT from_id, to_id, text, encrypted, lora_freq, modem_preset, channel_name, reply_id, emoji, portnum, ingestor, protocol FROM messages WHERE id = ?",
[msg_id],
)
if existing
@@ -1728,6 +1953,16 @@ module PotatoMesh
updates["emoji"] = emoji if should_update
end
if ingestor
existing_ingestor = existing.is_a?(Hash) ? existing["ingestor"] : existing[10]
existing_ingestor = string_or_nil(existing_ingestor)
updates["ingestor"] = ingestor if existing_ingestor.nil?
end
existing_protocol = existing.is_a?(Hash) ? existing["protocol"] : existing[11]
return if existing_protocol && existing_protocol != "meshtastic" && existing_protocol != protocol
updates["protocol"] = protocol if (existing_protocol.nil? || existing_protocol == "meshtastic") && protocol != "meshtastic"
unless updates.empty?
assignments = updates.keys.map { |column| "#{column} = ?" }.join(", ")
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", updates.values + [msg_id])
@@ -1737,12 +1972,12 @@ module PotatoMesh
begin
db.execute <<~SQL, row
- INSERT INTO messages(id,rx_time,rx_iso,from_id,to_id,channel,portnum,text,encrypted,snr,rssi,hop_limit,lora_freq,modem_preset,channel_name,reply_id,emoji)
- VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
+ INSERT INTO messages(id,rx_time,rx_iso,from_id,to_id,channel,portnum,text,encrypted,snr,rssi,hop_limit,lora_freq,modem_preset,channel_name,reply_id,emoji,ingestor,protocol)
+ VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
SQL
rescue SQLite3::ConstraintException
existing_row = db.get_first_row(
"SELECT text, encrypted FROM messages WHERE id = ?",
"SELECT text, encrypted, ingestor, protocol FROM messages WHERE id = ?",
[msg_id],
)
existing_text = existing_row.is_a?(Hash) ? existing_row["text"] : existing_row&.[](0)
@@ -1750,6 +1985,12 @@ module PotatoMesh
allow_encrypted_update = existing_text_str.nil? || existing_text_str.strip.empty?
existing_encrypted = existing_row.is_a?(Hash) ? existing_row["encrypted"] : existing_row&.[](1)
existing_encrypted_str = existing_encrypted&.to_s
existing_ingestor = existing_row.is_a?(Hash) ? existing_row["ingestor"] : existing_row&.[](2)
existing_ingestor = string_or_nil(existing_ingestor)
existing_fallback_protocol = existing_row.is_a?(Hash) ? existing_row["protocol"] : existing_row&.[](3)
# Guard against cross-protocol contamination in the constraint fallback path,
# mirroring the same guard applied in the primary update path above.
return if existing_fallback_protocol && existing_fallback_protocol != "meshtastic" && existing_fallback_protocol != protocol
decrypted_precedence = text && (clear_encrypted || (existing_encrypted_str && !existing_encrypted_str.strip.empty?))
fallback_updates = {}
@@ -1777,6 +2018,8 @@ module PotatoMesh
end
fallback_updates["reply_id"] = reply_id unless reply_id.nil?
fallback_updates["emoji"] = emoji if emoji
fallback_updates["ingestor"] = ingestor if ingestor && existing_ingestor.nil?
fallback_updates["protocol"] = protocol if (existing_fallback_protocol.nil? || existing_fallback_protocol == "meshtastic") && protocol != "meshtastic"
unless fallback_updates.empty?
assignments = fallback_updates.keys.map { |column| "#{column} = ?" }.join(", ")
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", fallback_updates.values + [msg_id])
@@ -1785,7 +2028,7 @@ module PotatoMesh
end
end
- if clear_encrypted && decrypted_text
+ if clear_encrypted && text
debug_log(
"Stored decrypted text message",
context: "data_processing.insert_message",
@@ -1827,9 +2070,9 @@ module PotatoMesh
)
end
- should_touch_message = !stored_decrypted || decrypted_text
+ should_touch_message = !stored_decrypted
if should_touch_message
- ensure_unknown_node(db, from_id || raw_from_id, message["from_num"], heard_time: rx_time)
+ ensure_unknown_node(db, from_id || raw_from_id, message["from_num"], heard_time: rx_time, protocol: protocol)
touch_node_last_seen(
db,
from_id || raw_from_id || message["from_num"],
@@ -1840,7 +2083,7 @@ module PotatoMesh
modem_preset: modem_preset,
)
- ensure_unknown_node(db, to_id || raw_to_id, message["to_num"], heard_time: rx_time) if to_id || raw_to_id
+ ensure_unknown_node(db, to_id || raw_to_id, message["to_num"], heard_time: rx_time, protocol: protocol) if to_id || raw_to_id
if to_id || raw_to_id || message.key?("to_num")
touch_node_last_seen(
db,
@@ -1893,7 +2136,7 @@ module PotatoMesh
return false unless portnum_value
payload_b64 = Base64.strict_encode64(payload_bytes)
- supported_ports = [3, 67, 70, 71]
+ supported_ports = [3, 4, 67, 70, 71]
return false unless supported_ports.include?(portnum_value)
decoded = PotatoMesh::App::Meshtastic::PayloadDecoder.decode(
@@ -1918,6 +2161,7 @@ module PotatoMesh
"lora_freq" => coerce_integer(message["lora_freq"] || message["loraFrequency"]),
"modem_preset" => string_or_nil(message["modem_preset"] || message["modemPreset"]),
"payload_b64" => payload_b64,
"ingestor" => string_or_nil(message["ingestor"]),
}
case decoded["type"]
@@ -1931,6 +2175,33 @@ module PotatoMesh
portnum: portnum_value,
)
true
when "NODEINFO_APP"
node_payload = normalize_decrypted_nodeinfo_payload(decoded["payload"])
return false unless valid_decrypted_nodeinfo_payload?(node_payload)
node_id = string_or_nil(node_payload["id"]) || from_id
node_num = coerce_integer(node_payload["num"]) ||
coerce_integer(message["from_num"]) ||
resolve_node_num(from_id, message)
node_id ||= format("!%08x", node_num & 0xFFFFFFFF) if node_num
return false unless node_id
payload = node_payload.merge(
"num" => node_num,
"lastHeard" => coerce_integer(node_payload["lastHeard"] || node_payload["last_heard"]) || rx_time,
"snr" => node_payload.key?("snr") ? node_payload["snr"] : snr,
"lora_freq" => common_payload["lora_freq"],
"modem_preset" => common_payload["modem_preset"],
)
upsert_node(db, node_id, payload)
debug_log(
"Stored decrypted node payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
node_id: node_id,
)
true
when "TELEMETRY_APP"
payload = common_payload.merge("telemetry" => decoded["payload"])
insert_telemetry(db, payload)
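# Sketch of the id fallback used in the NODEINFO branch above: when the
# payload lacks a node id, one is derived from the node number in canonical
# "!%08x" form (values here are arbitrary examples):
format("!%08x", 3_735_928_559 & 0xFFFFFFFF)  # => "!deadbeef"
format("!%08x", 0x1F & 0xFFFFFFFF)           # => "!0000001f" (zero padded)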
@@ -1993,6 +2264,92 @@ module PotatoMesh
end
end
# Validate decoded NodeInfo payloads before upserting node records.
#
# @param payload [Object] decoded payload candidate.
# @return [Boolean] true when the payload resembles a Meshtastic NodeInfo.
def valid_decrypted_nodeinfo_payload?(payload)
return false unless payload.is_a?(Hash)
return false if payload.empty?
return false unless payload["user"].is_a?(Hash)
return false if payload.key?("position") && !payload["position"].is_a?(Hash)
return false if payload.key?("deviceMetrics") && !payload["deviceMetrics"].is_a?(Hash)
return false unless nodeinfo_user_has_identifying_fields?(payload["user"])
true
end
# Normalize decoded NodeInfo payload keys for +upsert_node+ compatibility.
#
# The Python decoder preserves protobuf field names, so nested hashes may
# use +snake_case+ keys that +upsert_node+ does not read.
#
# @param payload [Object] decoded NodeInfo payload.
# @return [Hash] normalized payload hash.
def normalize_decrypted_nodeinfo_payload(payload)
return {} unless payload.is_a?(Hash)
user = payload["user"]
normalized_user = user.is_a?(Hash) ? user.dup : nil
if normalized_user
normalized_user["shortName"] ||= normalized_user["short_name"]
normalized_user["longName"] ||= normalized_user["long_name"]
normalized_user["hwModel"] ||= normalized_user["hw_model"]
normalized_user["publicKey"] ||= normalized_user["public_key"]
normalized_user["isUnmessagable"] = normalized_user["is_unmessagable"] if normalized_user.key?("is_unmessagable")
end
metrics = payload["deviceMetrics"] || payload["device_metrics"]
normalized_metrics = metrics.is_a?(Hash) ? metrics.dup : nil
if normalized_metrics
normalized_metrics["batteryLevel"] ||= normalized_metrics["battery_level"]
normalized_metrics["channelUtilization"] ||= normalized_metrics["channel_utilization"]
normalized_metrics["airUtilTx"] ||= normalized_metrics["air_util_tx"]
normalized_metrics["uptimeSeconds"] ||= normalized_metrics["uptime_seconds"]
end
position = payload["position"]
normalized_position = position.is_a?(Hash) ? position.dup : nil
if normalized_position
normalized_position["precisionBits"] ||= normalized_position["precision_bits"]
normalized_position["locationSource"] ||= normalized_position["location_source"]
end
normalized = payload.dup
normalized["user"] = normalized_user if normalized_user
normalized["deviceMetrics"] = normalized_metrics if normalized_metrics
normalized["position"] = normalized_position if normalized_position
normalized["lastHeard"] ||= normalized["last_heard"]
normalized["hopsAway"] ||= normalized["hops_away"]
normalized["isFavorite"] = normalized["is_favorite"] if normalized.key?("is_favorite")
normalized["hwModel"] ||= normalized["hw_model"]
normalized
end
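# Example round trip through the normalizer above, assuming a decoder that
# kept protobuf snake_case field names and that the helper is in scope:
payload = {
  "user" => { "short_name" => "PM", "long_name" => "Potato Node" },
  "device_metrics" => { "battery_level" => 87 },
  "last_heard" => 1_700_000_000,
}
normalize_decrypted_nodeinfo_payload(payload)
# => the user hash gains "shortName"/"longName", the metrics are exposed under
#    "deviceMetrics" with "batteryLevel", and "lastHeard" is backfilled from
#    "last_heard", matching the keys +upsert_node+ reads.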
# Validate that a decoded NodeInfo user section contains identifying data.
#
# @param user [Hash] decoded NodeInfo user payload.
# @return [Boolean] true when at least one identifying field is present.
def nodeinfo_user_has_identifying_fields?(user)
identifying_fields = [
user["id"],
user["shortName"],
user["short_name"],
user["longName"],
user["long_name"],
user["macaddr"],
user["hwModel"],
user["hw_model"],
user["publicKey"],
user["public_key"],
]
identifying_fields.any? do |value|
value.is_a?(String) ? !value.strip.empty? : !value.nil?
end
end
def normalize_node_id(db, node_ref)
return nil if node_ref.nil?
ref_str = node_ref.to_s.strip
+147 -34
@@ -111,51 +111,96 @@ module PotatoMesh
#
# @return [void]
def ensure_schema_upgrades
FileUtils.mkdir_p(File.dirname(PotatoMesh::Config.db_path))
db = open_database
- node_columns = db.execute("PRAGMA table_info(nodes)").map { |row| row[1] }
- unless node_columns.include?("precision_bits")
- db.execute("ALTER TABLE nodes ADD COLUMN precision_bits INTEGER")
- node_columns << "precision_bits"
+ node_table_exists = db.get_first_value(
+ "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='nodes'",
+ ).to_i > 0
+ if node_table_exists
+ node_columns = db.execute("PRAGMA table_info(nodes)").map { |row| row[1] }
+ unless node_columns.include?("precision_bits")
+ db.execute("ALTER TABLE nodes ADD COLUMN precision_bits INTEGER")
+ node_columns << "precision_bits"
+ end
+ unless node_columns.include?("lora_freq")
+ db.execute("ALTER TABLE nodes ADD COLUMN lora_freq INTEGER")
+ end
+ unless node_columns.include?("modem_preset")
+ db.execute("ALTER TABLE nodes ADD COLUMN modem_preset TEXT")
+ end
+ unless node_columns.include?("protocol")
+ db.execute("ALTER TABLE nodes ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
+ db.execute("UPDATE nodes SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
+ end
+ unless node_columns.include?("synthetic")
+ db.execute("ALTER TABLE nodes ADD COLUMN synthetic BOOLEAN NOT NULL DEFAULT 0")
+ end
+ if node_columns.include?("long_name")
+ existing_indexes = db.execute("SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='nodes'").flatten
+ unless existing_indexes.include?("idx_nodes_long_name")
+ db.execute("CREATE INDEX IF NOT EXISTS idx_nodes_long_name ON nodes(long_name)")
+ end
+ end
+ # Backfill #747: ensure_unknown_node previously omitted the protocol
+ # column and hardcoded role=CLIENT_HIDDEN, causing meshcore placeholder
+ # nodes to be stored as meshtastic/CLIENT_HIDDEN. Fix both in one pass.
+ if node_columns.include?("protocol")
+ db.execute("UPDATE nodes SET protocol = 'meshcore' WHERE long_name LIKE 'Meshcore %' AND protocol = 'meshtastic'")
+ db.execute("UPDATE nodes SET role = 'COMPANION' WHERE protocol = 'meshcore' AND role = 'CLIENT_HIDDEN'")
+ end
+ end
- unless node_columns.include?("lora_freq")
- db.execute("ALTER TABLE nodes ADD COLUMN lora_freq INTEGER")
- end
+ message_table_exists = db.get_first_value(
+ "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='messages'",
+ ).to_i > 0
+ message_columns = message_table_exists ? db.execute("PRAGMA table_info(messages)").map { |row| row[1] } : []
- unless node_columns.include?("modem_preset")
- db.execute("ALTER TABLE nodes ADD COLUMN modem_preset TEXT")
- end
+ if message_table_exists
+ unless message_columns.include?("lora_freq")
+ db.execute("ALTER TABLE messages ADD COLUMN lora_freq INTEGER")
+ end
- message_columns = db.execute("PRAGMA table_info(messages)").map { |row| row[1] }
+ unless message_columns.include?("modem_preset")
+ db.execute("ALTER TABLE messages ADD COLUMN modem_preset TEXT")
+ end
- unless message_columns.include?("lora_freq")
- db.execute("ALTER TABLE messages ADD COLUMN lora_freq INTEGER")
- end
+ unless message_columns.include?("channel_name")
+ db.execute("ALTER TABLE messages ADD COLUMN channel_name TEXT")
+ end
- unless message_columns.include?("modem_preset")
- db.execute("ALTER TABLE messages ADD COLUMN modem_preset TEXT")
- end
+ unless message_columns.include?("reply_id")
+ db.execute("ALTER TABLE messages ADD COLUMN reply_id INTEGER")
+ message_columns << "reply_id"
+ end
- unless message_columns.include?("channel_name")
- db.execute("ALTER TABLE messages ADD COLUMN channel_name TEXT")
- end
+ unless message_columns.include?("emoji")
+ db.execute("ALTER TABLE messages ADD COLUMN emoji TEXT")
+ message_columns << "emoji"
+ end
- unless message_columns.include?("reply_id")
- db.execute("ALTER TABLE messages ADD COLUMN reply_id INTEGER")
- message_columns << "reply_id"
- end
+ unless message_columns.include?("ingestor")
+ db.execute("ALTER TABLE messages ADD COLUMN ingestor TEXT")
+ end
- unless message_columns.include?("emoji")
- db.execute("ALTER TABLE messages ADD COLUMN emoji TEXT")
- message_columns << "emoji"
- end
+ unless message_columns.include?("protocol")
+ db.execute("ALTER TABLE messages ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
+ db.execute("UPDATE messages SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
+ end
- reply_index_exists =
- db.get_first_value(
- "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_messages_reply_id'",
- ).to_i > 0
- unless reply_index_exists
- db.execute("CREATE INDEX IF NOT EXISTS idx_messages_reply_id ON messages(reply_id)")
+ reply_index_exists =
+ db.get_first_value(
+ "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_messages_reply_id'",
+ ).to_i > 0
+ unless reply_index_exists
+ db.execute("CREATE INDEX IF NOT EXISTS idx_messages_reply_id ON messages(reply_id)")
+ end
+ end
tables = db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='instances'").flatten
@@ -172,6 +217,17 @@ module PotatoMesh
unless instance_columns.include?("nodes_count")
db.execute("ALTER TABLE instances ADD COLUMN nodes_count INTEGER")
instance_columns << "nodes_count"
end
unless instance_columns.include?("meshcore_nodes_count")
db.execute("ALTER TABLE instances ADD COLUMN meshcore_nodes_count INTEGER")
instance_columns << "meshcore_nodes_count"
end
unless instance_columns.include?("meshtastic_nodes_count")
db.execute("ALTER TABLE instances ADD COLUMN meshtastic_nodes_count INTEGER")
instance_columns << "meshtastic_nodes_count"
end
telemetry_tables =
@@ -188,6 +244,49 @@ module PotatoMesh
db.execute("ALTER TABLE telemetry ADD COLUMN #{name} #{type}")
telemetry_columns << name
end
unless telemetry_columns.include?("ingestor")
db.execute("ALTER TABLE telemetry ADD COLUMN ingestor TEXT")
end
unless telemetry_columns.include?("telemetry_type")
db.execute("ALTER TABLE telemetry ADD COLUMN telemetry_type TEXT")
end
unless telemetry_columns.include?("protocol")
db.execute("ALTER TABLE telemetry ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
db.execute("UPDATE telemetry SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
end
position_tables =
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='positions'").flatten
if position_tables.empty?
positions_schema = File.expand_path("../../../../data/positions.sql", __dir__)
db.execute_batch(File.read(positions_schema))
end
position_columns = db.execute("PRAGMA table_info(positions)").map { |row| row[1] }
unless position_columns.include?("ingestor")
db.execute("ALTER TABLE positions ADD COLUMN ingestor TEXT")
end
unless position_columns.include?("protocol")
db.execute("ALTER TABLE positions ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
db.execute("UPDATE positions SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
end
neighbor_tables =
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='neighbors'").flatten
if neighbor_tables.empty?
neighbors_schema = File.expand_path("../../../../data/neighbors.sql", __dir__)
db.execute_batch(File.read(neighbors_schema))
end
neighbor_columns = db.execute("PRAGMA table_info(neighbors)").map { |row| row[1] }
unless neighbor_columns.include?("ingestor")
db.execute("ALTER TABLE neighbors ADD COLUMN ingestor TEXT")
end
unless neighbor_columns.include?("protocol")
db.execute("ALTER TABLE neighbors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
db.execute("UPDATE neighbors SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
end
trace_tables =
db.execute(
@@ -197,6 +296,15 @@ module PotatoMesh
traces_schema = File.expand_path("../../../../data/traces.sql", __dir__)
db.execute_batch(File.read(traces_schema))
end
trace_columns = db.execute("PRAGMA table_info(traces)").map { |row| row[1] }
unless trace_columns.include?("ingestor")
db.execute("ALTER TABLE traces ADD COLUMN ingestor TEXT")
end
unless trace_columns.include?("protocol")
db.execute("ALTER TABLE traces ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
db.execute("UPDATE traces SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
end
ingestor_tables =
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='ingestors'").flatten
@@ -214,6 +322,11 @@ module PotatoMesh
unless ingestor_columns.include?("modem_preset")
db.execute("ALTER TABLE ingestors ADD COLUMN modem_preset TEXT")
end
unless ingestor_columns.include?("protocol")
db.execute("ALTER TABLE ingestors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
db.execute("UPDATE ingestors SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
end
end
rescue SQLite3::SQLException, Errno::ENOENT => e
warn_log(
+562 -67
@@ -17,6 +17,8 @@
module PotatoMesh
module App
module Federation
FEDERATION_SLEEP_SLICE_SECONDS = 0.2
# Resolve the canonical domain for the running instance.
#
# @return [String, nil] sanitized instance domain or nil outside production.
@@ -61,7 +63,11 @@ module PotatoMesh
def self_instance_attributes
domain = self_instance_domain
last_update = latest_node_update_timestamp || Time.now.to_i
- nodes_count = active_node_count_since(Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age)
+ cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
+ db = open_database(readonly: true)
+ nodes_count = active_node_count_since(cutoff, db: db)
+ mc_count = active_node_count_since_for_protocol(cutoff, "meshcore", db: db)
+ mt_count = active_node_count_since_for_protocol(cutoff, "meshtastic", db: db)
{
id: app_constant(:SELF_INSTANCE_ID),
domain: domain,
@@ -76,7 +82,11 @@ module PotatoMesh
is_private: private_mode?,
contact_link: sanitized_contact_link,
nodes_count: nodes_count,
meshcore_nodes_count: mc_count,
meshtastic_nodes_count: mt_count,
}
ensure
db&.close
end
# Count the number of nodes active since the supplied timestamp.
@@ -105,6 +115,39 @@ module PotatoMesh
handle&.close unless db
end
# Count the number of nodes for a specific protocol active since the
# supplied timestamp.
#
# @param cutoff [Integer] unix timestamp in seconds.
# @param protocol [String] protocol name (e.g. "meshcore", "meshtastic").
# @param db [SQLite3::Database, nil] optional open handle to reuse.
# @return [Integer, nil] node count or nil when unavailable.
def active_node_count_since_for_protocol(cutoff, protocol, db: nil)
return nil unless cutoff && protocol
handle = db || open_database(readonly: true)
count =
with_busy_retry do
handle.get_first_value(
"SELECT COUNT(*) FROM nodes WHERE last_heard >= ? AND protocol = ?",
cutoff.to_i,
protocol,
)
end
Integer(count)
rescue SQLite3::Exception, ArgumentError => e
warn_log(
"Failed to count active nodes for protocol",
context: "instances.protocol_nodes_count",
protocol: protocol,
error_class: e.class.name,
error_message: e.message,
)
nil
ensure
handle&.close unless db
end
def sign_instance_attributes(attributes)
payload = canonical_instance_payload(attributes)
Base64.strict_encode64(
@@ -167,9 +210,22 @@ module PotatoMesh
# Ensure the federation worker pool exists when federation remains enabled.
#
# Threading model: the pool is a fixed-size thread pool backed by a bounded
# queue. A single long-lived announcer thread (started by
# {#start_federation_announcer!}) drives periodic crawl and announcement
# cycles by submitting tasks onto the pool; individual crawl and announce
# jobs then run concurrently on pool threads. The pool is lazily
# instantiated on first use and is memoized on the Sinatra settings object so
# that all requests share the same instance. An +at_exit+ hook
# ({#ensure_federation_shutdown_hook!}) guarantees the pool drains cleanly on
# process termination even when the announcer thread is still alive.
#
# @return [PotatoMesh::App::WorkerPool, nil] active worker pool if created.
def ensure_federation_worker_pool!
return nil unless federation_enabled?
return nil if federation_shutdown_requested?
ensure_federation_shutdown_hook!
existing = settings.respond_to?(:federation_worker_pool) ? settings.federation_worker_pool : nil
return existing if existing&.alive?
@@ -181,16 +237,77 @@ module PotatoMesh
name: "potato-mesh-fed",
)
set(:federation_worker_pool, pool) if respond_to?(:set)
pool
end
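# The lifecycle described above, in miniature (only APIs used elsewhere in
# this changeset; `domain` and `payload_json` are assumed to be in scope):
pool = ensure_federation_worker_pool!
if pool
  task = pool.schedule { announce_instance_to_domain(domain, payload_json) }
  task.wait(timeout: PotatoMesh::Config.federation_task_timeout_seconds)
end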
# Ensure federation background workers are torn down during process exit.
#
# @return [void]
def ensure_federation_shutdown_hook!
application = is_a?(Class) ? self : self.class
return application.ensure_federation_shutdown_hook! unless application.equal?(self)
installed = if respond_to?(:settings) && settings.respond_to?(:federation_shutdown_hook_installed)
settings.federation_shutdown_hook_installed
else
instance_variable_defined?(:@federation_shutdown_hook_installed) && @federation_shutdown_hook_installed
end
return if installed
if respond_to?(:set) && settings.respond_to?(:federation_shutdown_hook_installed=)
set(:federation_shutdown_hook_installed, true)
else
@federation_shutdown_hook_installed = true
end
at_exit do
begin
- pool.shutdown(timeout: PotatoMesh::Config.federation_task_timeout_seconds)
+ application.shutdown_federation_background_work!(timeout: PotatoMesh::Config.federation_shutdown_timeout_seconds)
rescue StandardError
# Suppress shutdown errors during interpreter teardown.
end
end
end
- set(:federation_worker_pool, pool) if respond_to?(:set)
- pool
# Check whether federation workers have received a shutdown request.
#
# @return [Boolean] true when stop has been requested.
def federation_shutdown_requested?
return false unless respond_to?(:settings)
return false unless settings.respond_to?(:federation_shutdown_requested)
settings.federation_shutdown_requested == true
end
# Mark federation background work as shutting down.
#
# @return [void]
def request_federation_shutdown!
set(:federation_shutdown_requested, true) if respond_to?(:set)
end
# Clear any previously requested federation shutdown marker.
#
# @return [void]
def clear_federation_shutdown_request!
set(:federation_shutdown_requested, false) if respond_to?(:set)
end
# Sleep in short intervals so federation loops can react to shutdown.
#
# @param seconds [Numeric] target sleep duration.
# @return [Boolean] true when the full delay elapsed without shutdown.
def federation_sleep_with_shutdown(seconds)
remaining = seconds.to_f
while remaining.positive?
return false if federation_shutdown_requested?
slice = [remaining, FEDERATION_SLEEP_SLICE_SECONDS].min
Kernel.sleep(slice)
remaining -= slice
end
!federation_shutdown_requested?
end
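# Loop shape expected of callers (`interval` and `run_cycle` are
# placeholders for the caller's own schedule and work):
loop do
  break unless federation_sleep_with_shutdown(interval) # false signals shutdown
  run_cycle
end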
# Shutdown and clear the federation worker pool if present.
@@ -214,6 +331,47 @@ module PotatoMesh
end
end
# Gracefully terminate federation background loops and worker pool tasks.
#
# @param timeout [Numeric, nil] maximum join time applied per thread.
# @return [void]
def shutdown_federation_background_work!(timeout: nil)
request_federation_shutdown!
timeout_value = timeout || PotatoMesh::Config.federation_shutdown_timeout_seconds
# Drain the worker pool first so federation threads blocked in
# wait_for_federation_tasks unblock promptly instead of waiting
# for each task's individual timeout to expire.
shutdown_federation_worker_pool!
stop_federation_thread!(:initial_federation_thread, timeout: timeout_value)
stop_federation_thread!(:federation_thread, timeout: timeout_value)
clear_federation_crawl_state!
end
# Stop a specific federation thread setting and clear its reference.
#
# @param setting_name [Symbol] settings key storing the thread object.
# @param timeout [Numeric] seconds to wait for clean thread exit.
# @return [void]
def stop_federation_thread!(setting_name, timeout:)
return unless respond_to?(:settings)
return unless settings.respond_to?(setting_name)
thread = settings.public_send(setting_name)
if thread&.alive?
begin
thread.wakeup if thread.respond_to?(:wakeup)
rescue ThreadError
# The thread may not currently be sleeping; continue shutdown.
end
thread.join(timeout)
if thread.alive?
thread.kill
thread.join(0.1)
end
end
set(setting_name, nil) if respond_to?(:set)
end
def federation_target_domains(self_domain)
normalized_self = sanitize_instance_domain(self_domain)&.downcase
ordered = []
@@ -263,19 +421,24 @@ module PotatoMesh
db&.close
end
# Announce the local instance record to a remote federation peer,
# cycling through resolved IP addresses when transport-level failures
# occur.
#
# @param domain [String] remote peer hostname.
# @param payload_json [String] JSON-encoded announcement body.
# @return [Boolean] true when the announcement was accepted.
def announce_instance_to_domain(domain, payload_json)
return false unless domain && !domain.empty?
return false if federation_shutdown_requested?
https_failures = []
- instance_uri_candidates(domain, "/api/instances").each do |uri|
+ published = instance_uri_candidates(domain, "/api/instances").any? do |uri|
+ break false if federation_shutdown_requested?
begin
- http = build_remote_http_client(uri)
- response = http.start do |connection|
- request = build_federation_http_request(Net::HTTP::Post, uri)
- request.body = payload_json
- connection.request(request)
- end
+ response = perform_announce_request(uri, payload_json)
if response.is_a?(Net::HTTPSuccess)
debug_log(
"Published federation announcement",
@@ -283,14 +446,16 @@ module PotatoMesh
target: uri.to_s,
status: response.code,
)
- return true
+ true
+ else
+ debug_log(
+ "Federation announcement failed",
+ context: "federation.announce",
+ target: uri.to_s,
+ status: response.code,
+ )
+ false
+ end
- debug_log(
- "Federation announcement failed",
- context: "federation.announce",
- target: uri.to_s,
- status: response.code,
- )
rescue StandardError => e
metadata = {
context: "federation.announce",
@@ -305,9 +470,18 @@ module PotatoMesh
**metadata,
)
https_failures << metadata
- next
else
warn_log(
"Federation announcement raised exception",
**metadata,
)
end
+ false
end
end
+ unless published
+ https_failures.each do |metadata|
warn_log(
"Federation announcement raised exception",
**metadata,
@@ -315,14 +489,56 @@ module PotatoMesh
end
end
- https_failures.each do |metadata|
- warn_log(
- "Federation announcement raised exception",
- **metadata,
- )
+ published
end
# Execute a POST announcement request against the supplied URI, cycling
# through resolved IP addresses on connection-level failures.
#
# @param uri [URI::Generic] target endpoint.
# @param payload_json [String] JSON-encoded announcement body.
# @return [Net::HTTPResponse] the HTTP response from the first reachable address.
# @raise [StandardError] when all addresses fail or a non-retryable error occurs.
def perform_announce_request(uri, payload_json)
remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
addresses = remote_addresses.empty? ? [nil] : remote_addresses
last_error = nil
addresses.each do |address|
break if federation_shutdown_requested?
begin
return perform_single_announce_request(uri, payload_json, ip_address: address&.to_s)
rescue StandardError => e
if connection_refused_or_unreachable?(e)
last_error = e
else
raise
end
end
end
- false
+ raise(last_error || StandardError.new("all resolved addresses failed"))
end
# Execute a single POST announcement request, optionally pinning the
# connection to a specific IP address.
#
# @param uri [URI::Generic] target endpoint.
# @param payload_json [String] JSON-encoded announcement body.
# @param ip_address [String, nil] resolved IP address to pin the
# connection to, or +nil+ to let {build_remote_http_client} resolve.
# @return [Net::HTTPResponse] the HTTP response.
# @raise [StandardError] when the request fails.
def perform_single_announce_request(uri, payload_json, ip_address: nil)
http = build_remote_http_client(uri, ip_address: ip_address)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Post, uri)
request.body = payload_json
connection.request(request)
end
end
end
# Determine whether an HTTPS announcement failure should fall back to HTTP.
@@ -340,8 +556,37 @@ module PotatoMesh
false
end
# Determine whether an error indicates a transport-level connection
# failure that may succeed on an alternative resolved address.
#
# Connection refusals, host/network unreachable errors, and TCP open
# timeouts signal that the selected IP address cannot be reached but
# do not rule out alternative addresses for the same hostname.
#
# @param error [StandardError] failure raised during the connection attempt.
# @return [Boolean] true when a retry with a different address is warranted.
def connection_refused_or_unreachable?(error)
retryable_classes = [
Errno::ECONNREFUSED,
Errno::EHOSTUNREACH,
Errno::ENETUNREACH,
Errno::ECONNRESET,
Errno::ETIMEDOUT,
Net::OpenTimeout,
]
current = error
while current
return true if retryable_classes.any? { |klass| current.is_a?(klass) }
current = current.respond_to?(:cause) ? current.cause : nil
end
false
end
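# Why the cause chain is walked: Net::HTTP and surrounding code often
# re-raise socket errors wrapped in another exception. A self-contained
# illustration (assuming the helper above is in scope):
begin
  begin
    raise Errno::ECONNREFUSED
  rescue StandardError
    raise StandardError, "wrapped transport failure" # implicit cause is set
  end
rescue StandardError => error
  connection_refused_or_unreachable?(error) # => true, found via error.cause
end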
def announce_instance_to_all_domains
return unless federation_enabled?
return if federation_shutdown_requested?
attributes, signature = ensure_self_instance_record!
payload_json = JSON.generate(instance_announcement_payload(attributes, signature))
@@ -349,13 +594,15 @@ module PotatoMesh
pool = federation_worker_pool
scheduled = []
- domains.each do |domain|
+ domains.each_with_object(scheduled) do |domain, scheduled_tasks|
+ break if federation_shutdown_requested?
if pool
begin
task = pool.schedule do
announce_instance_to_domain(domain, payload_json)
end
- scheduled << [domain, task]
+ scheduled_tasks << [domain, task]
next
rescue PotatoMesh::App::WorkerPool::QueueFullError
warn_log(
@@ -396,7 +643,9 @@ module PotatoMesh
return if scheduled.empty?
timeout = PotatoMesh::Config.federation_task_timeout_seconds
- scheduled.each do |domain, task|
+ scheduled.all? do |domain, task|
+ break false if federation_shutdown_requested?
begin
task.wait(timeout: timeout)
rescue PotatoMesh::App::WorkerPool::TaskTimeoutError => e
@@ -417,19 +666,23 @@ module PotatoMesh
error_message: e.message,
)
end
true
end
end
def start_federation_announcer!
# Federation broadcasts must not execute when federation support is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!
existing = settings.federation_thread
return existing if existing&.alive?
thread = Thread.new do
loop do
- sleep PotatoMesh::Config.federation_announcement_interval
+ break unless federation_sleep_with_shutdown(PotatoMesh::Config.federation_announcement_interval)
begin
announce_instance_to_all_domains
rescue StandardError => e
@@ -455,6 +708,8 @@ module PotatoMesh
def start_initial_federation_announcement!
# Skip the initial broadcast entirely when federation is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!
existing = settings.respond_to?(:initial_federation_thread) ? settings.initial_federation_thread : nil
return existing if existing&.alive?
@@ -462,7 +717,12 @@ module PotatoMesh
thread = Thread.new do
begin
delay = PotatoMesh::Config.initial_federation_delay_seconds
- Kernel.sleep(delay) if delay.positive?
+ if delay.positive?
+ completed = federation_sleep_with_shutdown(delay)
+ next unless completed
+ end
+ next if federation_shutdown_requested?
announce_instance_to_all_domains
rescue StandardError => e
warn_log(
@@ -522,16 +782,67 @@ module PotatoMesh
[]
end
# Execute a GET request against the supplied federation URI, cycling
# through resolved IP addresses when a transport-level connection
# failure occurs.
#
# DNS resolution is performed once and the resulting addresses are
# sorted with IPv4 first via {sort_addresses_for_connection}. Each
# address is attempted sequentially; when a connection-level error
# (refused, unreachable, timeout) is raised the next address is tried.
# Non-connection errors (SSL failures, HTTP-level errors) are raised
# immediately without trying further addresses.
#
# @param uri [URI::Generic] target endpoint to request.
# @return [String] raw HTTP response body on success.
# @raise [InstanceFetchError] when all addresses are exhausted or a
# non-retryable error occurs.
def perform_instance_http_request(uri)
- http = build_remote_http_client(uri)
- http.start do |connection|
- request = build_federation_http_request(Net::HTTP::Get, uri)
- response = connection.request(request)
- case response
- when Net::HTTPSuccess
- response.body
- else
- raise InstanceFetchError, "unexpected response #{response.code}"
+ raise InstanceFetchError, "federation shutdown requested" if federation_shutdown_requested?
+ remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
+ addresses = remote_addresses.empty? ? [nil] : remote_addresses
+ last_error = nil
+ addresses.each do |address|
+ break if federation_shutdown_requested?
+ begin
+ return perform_single_http_request(uri, ip_address: address&.to_s)
+ rescue InstanceFetchError => e
+ if connection_refused_or_unreachable?(e)
+ last_error = e
+ else
+ raise
+ end
+ end
+ end
+ raise last_error || InstanceFetchError.new("all resolved addresses failed")
+ rescue ArgumentError => e
+ raise_instance_fetch_error(e)
end
# Execute a single HTTP GET request against the supplied URI, optionally
# pinning the connection to a specific IP address.
#
# @param uri [URI::Generic] target endpoint.
# @param ip_address [String, nil] resolved IP address to pin the
# connection to, or +nil+ to let {build_remote_http_client} resolve.
# @return [String] raw HTTP response body.
# @raise [InstanceFetchError] when the request fails.
def perform_single_http_request(uri, ip_address: nil)
http = build_remote_http_client(uri, ip_address: ip_address)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Get, uri)
response = connection.request(request)
case response
when Net::HTTPSuccess
response.body
else
raise InstanceFetchError, "unexpected response #{response.code}"
end
end
end
rescue StandardError => e
@@ -588,8 +899,12 @@ module PotatoMesh
end
def fetch_instance_json(domain, path)
return [nil, ["federation shutdown requested"]] if federation_shutdown_requested?
errors = []
instance_uri_candidates(domain, path).each do |uri|
break if federation_shutdown_requested?
begin
body = perform_instance_http_request(uri)
return [JSON.parse(body), uri] if body
@@ -602,6 +917,34 @@ module PotatoMesh
[nil, errors]
end
# Resolve the best matching active-node count from a remote /api/stats payload.
#
# @param payload [Hash, nil] decoded JSON payload from /api/stats.
# @param max_age_seconds [Integer] activity window, in seconds, used to select the matching active-node bucket.
# @return [Integer, nil] selected active-node count when available.
def remote_active_node_count_from_stats(payload, max_age_seconds:)
return nil unless payload.is_a?(Hash)
active_nodes = payload["active_nodes"]
return nil unless active_nodes.is_a?(Hash)
age = coerce_integer(max_age_seconds) || 0
key = if age <= 3600
"hour"
elsif age <= 86_400
"day"
elsif age <= PotatoMesh::Config.week_seconds
"week"
else
"month"
end
value = coerce_integer(active_nodes[key])
return nil unless value
[value, 0].max
end
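# Bucket selection for two representative windows:
remote_active_node_count_from_stats(
  { "active_nodes" => { "hour" => 5, "day" => 42 } },
  max_age_seconds: 86_400,
) # => 42 (a 24h window maps to the "day" bucket)
remote_active_node_count_from_stats(
  { "active_nodes" => { "hour" => -3 } },
  max_age_seconds: 600,
) # => 0 (negative remote counts are clamped)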
# Parse a remote federation instance payload into canonical attributes.
#
# @param payload [Hash] JSON object describing a remote instance.
@@ -662,51 +1005,149 @@ module PotatoMesh
# @param overall_limit [Integer, nil] maximum unique domains visited.
# @return [Boolean] true when the crawl was scheduled successfully.
def enqueue_federation_crawl(domain, per_response_limit:, overall_limit:)
- pool = federation_worker_pool
+ sanitized_domain = sanitize_instance_domain(domain)
+ unless sanitized_domain
+ warn_log(
+ "Skipped remote instance crawl",
+ context: "federation.instances",
+ domain: domain,
+ reason: "invalid domain",
+ )
+ return false
+ end
+ return false if federation_shutdown_requested?
+ application = is_a?(Class) ? self : self.class
+ pool = application.federation_worker_pool
unless pool
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
- domain: domain,
+ domain: sanitized_domain,
reason: "federation disabled",
)
return false
end
- application = is_a?(Class) ? self : self.class
+ claim_result = application.claim_federation_crawl_slot(sanitized_domain)
+ unless claim_result == :claimed
+ debug_log(
+ "Skipped remote instance crawl",
+ context: "federation.instances",
+ domain: sanitized_domain,
+ reason: claim_result == :in_flight ? "crawl already in flight" : "recent crawl completed",
+ )
+ return false
+ end
pool.schedule do
- db = application.open_database
+ db = nil
+ begin
+ db = application.open_database
application.ingest_known_instances_from!(
db,
- domain,
+ sanitized_domain,
per_response_limit: per_response_limit,
overall_limit: overall_limit,
)
ensure
db&.close
application.release_federation_crawl_slot(sanitized_domain)
end
end
true
rescue PotatoMesh::App::WorkerPool::QueueFullError
- warn_log(
- "Skipped remote instance crawl",
- context: "federation.instances",
- domain: domain,
- reason: "worker queue saturated",
- )
- false
+ application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker queue saturated")
rescue PotatoMesh::App::WorkerPool::ShutdownError
+ application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker pool shut down")
end
# Handle a failed crawl schedule attempt without applying cooldown.
#
# @param domain [String] canonical domain that failed to schedule.
# @param reason [String] human-readable failure reason.
# @return [Boolean] always false because scheduling did not succeed.
def handle_failed_federation_crawl_schedule(domain, reason)
release_federation_crawl_slot(domain, record_completion: false)
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: "worker pool shut down",
reason: reason,
)
false
end
# Initialize shared in-memory state used to deduplicate crawl scheduling.
#
# @return [void]
def initialize_federation_crawl_state!
@federation_crawl_init_mutex ||= Mutex.new
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex
@federation_crawl_init_mutex.synchronize do
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex
@federation_crawl_mutex = Mutex.new
@federation_crawl_in_flight = Set.new
@federation_crawl_last_completed_at = {}
end
end
# Retrieve the cooldown period used for duplicate crawl suppression.
#
# @return [Integer] seconds a domain remains in cooldown after completion.
def federation_crawl_cooldown_seconds
PotatoMesh::Config.federation_crawl_cooldown_seconds
end
# Mark a domain crawl as claimed if no active or recent crawl exists.
#
# @param domain [String] canonical domain name.
# @return [Symbol] +:claimed+, +:in_flight+, or +:cooldown+.
def claim_federation_crawl_slot(domain)
initialize_federation_crawl_state!
now = Time.now.to_i
@federation_crawl_mutex.synchronize do
return :in_flight if @federation_crawl_in_flight.include?(domain)
last_completed = @federation_crawl_last_completed_at[domain]
if last_completed && now - last_completed < federation_crawl_cooldown_seconds
return :cooldown
end
@federation_crawl_in_flight << domain
:claimed
end
end
# Release an in-flight crawl claim and record completion timestamp.
#
# @param domain [String] canonical domain name.
# @param record_completion [Boolean] true to apply cooldown tracking.
# @return [void]
def release_federation_crawl_slot(domain, record_completion: true)
return unless domain
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.delete(domain)
@federation_crawl_last_completed_at[domain] = Time.now.to_i if record_completion
end
end
# Clear all in-memory crawl scheduling state.
#
# @return [void]
def clear_federation_crawl_state!
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.clear
@federation_crawl_last_completed_at.clear
end
end
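# Claim/release lifecycle in miniature ("mesh.example.org" is a placeholder
# domain, not a real peer):
case claim_federation_crawl_slot("mesh.example.org")
when :claimed
  begin
    # ... crawl the remote instance ...
  ensure
    release_federation_crawl_slot("mesh.example.org") # starts the cooldown
  end
when :in_flight, :cooldown
  # Skip: a crawl is already running or one completed too recently.
end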
# Recursively ingest federation records exposed by the supplied domain.
#
# @param db [SQLite3::Database] open database connection used for writes.
@@ -724,6 +1165,7 @@ module PotatoMesh
)
sanitized = sanitize_instance_domain(domain)
return visited || Set.new unless sanitized
return visited || Set.new if federation_shutdown_requested?
visited ||= Set.new
@@ -758,6 +1200,8 @@ module PotatoMesh
processed_entries = 0
recent_cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
payload.each do |entry|
break if federation_shutdown_requested?
if per_response_limit && per_response_limit.positive? && processed_entries >= per_response_limit
debug_log(
"Skipped remote instance entry due to response limit",
@@ -811,21 +1255,41 @@ module PotatoMesh
attributes[:is_private] = false if attributes[:is_private].nil?
stats_payload, stats_metadata = fetch_instance_json(attributes[:domain], "/api/stats")
stats_count = remote_active_node_count_from_stats(
stats_payload,
max_age_seconds: PotatoMesh::Config.remote_instance_max_node_age,
)
attributes[:nodes_count] = stats_count if stats_count
# Extract per-protocol 24h counts (informational, not signed).
if stats_payload.is_a?(Hash)
mc_day = stats_payload.dig("meshcore", "day")
mt_day = stats_payload.dig("meshtastic", "day")
attributes[:meshcore_nodes_count] = coerce_integer(mc_day) if mc_day
attributes[:meshtastic_nodes_count] = coerce_integer(mt_day) if mt_day
end
nodes_since_path = "/api/nodes?since=#{recent_cutoff}&limit=1000"
nodes_since_window, nodes_since_metadata = fetch_instance_json(attributes[:domain], nodes_since_path)
- if nodes_since_window.is_a?(Array)
+ if stats_count.nil? && attributes[:nodes_count].nil? && nodes_since_window.is_a?(Array)
attributes[:nodes_count] = nodes_since_window.length
elsif nodes_since_metadata
warn_log(
"Failed to load remote node window",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(nodes_since_metadata).map(&:to_s).join("; "),
)
end
remote_nodes, node_metadata = fetch_instance_json(attributes[:domain], "/api/nodes")
- remote_nodes ||= nodes_since_window if nodes_since_window.is_a?(Array)
+ remote_nodes = nodes_since_window if remote_nodes.nil? && nodes_since_window.is_a?(Array)
if attributes[:nodes_count].nil? && remote_nodes.is_a?(Array)
attributes[:nodes_count] = remote_nodes.length
end
if stats_count.nil? && Array(stats_metadata).any?
debug_log(
"Remote instance /api/stats unavailable; using node list fallback",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(stats_metadata).map(&:to_s).join("; "),
)
end
unless remote_nodes
warn_log(
"Failed to load remote node data",
@@ -906,15 +1370,41 @@ module PotatoMesh
unrestricted_addresses
end
# Sort resolved addresses so that IPv4 precedes IPv6.
#
# Federation peers with dual-stack DNS may publish addresses where one
# family is unreachable. Placing IPv4 entries first mirrors the
# preference used by {discover_local_ip_address} and improves the
# likelihood that the first connection attempt succeeds.
#
# @param addresses [Array<IPAddr>] resolved IP address list.
# @return [Array<IPAddr>] addresses sorted with IPv4 entries before IPv6.
def sort_addresses_for_connection(addresses)
return addresses if addresses.nil? || addresses.length <= 1
v4, v6 = addresses.partition { |ip| !ip.ipv6? }
v4 + v6
end
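# Ordering example (IPAddr ships with Ruby's standard library):
require "ipaddr"
addresses = [IPAddr.new("2001:db8::1"), IPAddr.new("192.0.2.10")]
sort_addresses_for_connection(addresses).map(&:to_s)
# => ["192.0.2.10", "2001:db8::1"]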
# Build an HTTP client configured for communication with a remote instance.
#
# When +ip_address+ is supplied the client is pinned to that specific
# address, bypassing DNS resolution. Callers that iterate over
# multiple resolved addresses should pass each candidate in turn.
#
# @param uri [URI::Generic] target URI describing the remote endpoint.
# @param ip_address [String, nil] explicit IP address to connect to,
# or +nil+ to resolve via DNS and use the first result.
# @return [Net::HTTP] HTTP client ready to execute the request.
- def build_remote_http_client(uri)
- remote_addresses = resolve_remote_ip_addresses(uri)
+ def build_remote_http_client(uri, ip_address: nil)
http = Net::HTTP.new(uri.host, uri.port)
- if http.respond_to?(:ipaddr=) && remote_addresses.any?
- http.ipaddr = remote_addresses.first.to_s
+ if ip_address
+ http.ipaddr = ip_address if http.respond_to?(:ipaddr=)
+ else
+ remote_addresses = resolve_remote_ip_addresses(uri)
+ if http.respond_to?(:ipaddr=) && remote_addresses.any?
+ http.ipaddr = remote_addresses.first.to_s
+ end
end
http.open_timeout = PotatoMesh::Config.remote_instance_http_timeout
http.read_timeout = PotatoMesh::Config.remote_instance_read_timeout
@@ -1107,8 +1597,9 @@ module PotatoMesh
sql = <<~SQL
INSERT INTO instances (
id, domain, pubkey, name, version, channel, frequency,
- latitude, longitude, last_update_time, is_private, nodes_count, contact_link, signature
- ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+ latitude, longitude, last_update_time, is_private, nodes_count,
+ meshcore_nodes_count, meshtastic_nodes_count, contact_link, signature
+ ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
domain=excluded.domain,
pubkey=excluded.pubkey,
@@ -1121,6 +1612,8 @@ module PotatoMesh
last_update_time=excluded.last_update_time,
is_private=excluded.is_private,
nodes_count=excluded.nodes_count,
meshcore_nodes_count=excluded.meshcore_nodes_count,
meshtastic_nodes_count=excluded.meshtastic_nodes_count,
contact_link=excluded.contact_link,
signature=excluded.signature
SQL
@@ -1139,6 +1632,8 @@ module PotatoMesh
attributes[:last_update_time],
attributes[:is_private] ? 1 : 0,
nodes_count,
coerce_integer(attributes[:meshcore_nodes_count]),
coerce_integer(attributes[:meshtastic_nodes_count]),
attributes[:contact_link],
signature,
]
+4 -453
@@ -14,456 +14,7 @@
# frozen_string_literal: true
module PotatoMesh
module App
# Shared view and controller helper methods. Each helper is documented with
# its intended consumers to ensure consistent behaviour across the Sinatra
# application.
module Helpers
ANNOUNCEMENT_URL_PATTERN = %r{\bhttps?://[^\s<]+}i.freeze
# Fetch an application level constant exposed by {PotatoMesh::Application}.
#
# @param name [Symbol] constant identifier to retrieve.
# @return [Object] constant value stored on the application class.
def app_constant(name)
PotatoMesh::Application.const_get(name)
end
# Retrieve the configured Prometheus report identifiers as an array.
#
# @return [Array<String>] list of report IDs used on the metrics page.
def prom_report_ids
PotatoMesh::Config.prom_report_id_list
end
# Read a text configuration value with a fallback.
#
# @param key [String] environment variable key.
# @param default [String] fallback value when unset.
# @return [String] sanitised configuration string.
def fetch_config_string(key, default)
PotatoMesh::Config.fetch_string(key, default)
end
# Proxy for {PotatoMesh::Sanitizer.string_or_nil}.
#
# @param value [Object] value to sanitise.
# @return [String, nil] cleaned string or nil.
def string_or_nil(value)
PotatoMesh::Sanitizer.string_or_nil(value)
end
# Proxy for {PotatoMesh::Sanitizer.sanitize_instance_domain}.
#
# @param value [Object] candidate domain string.
# @param downcase [Boolean] whether to force lowercase normalisation.
# @return [String, nil] canonical domain or nil.
def sanitize_instance_domain(value, downcase: true)
PotatoMesh::Sanitizer.sanitize_instance_domain(value, downcase: downcase)
end
# Proxy for {PotatoMesh::Sanitizer.instance_domain_host}.
#
# @param domain [String] domain literal.
# @return [String, nil] host portion of the domain.
def instance_domain_host(domain)
PotatoMesh::Sanitizer.instance_domain_host(domain)
end
# Proxy for {PotatoMesh::Sanitizer.ip_from_domain}.
#
# @param domain [String] domain literal.
# @return [IPAddr, nil] parsed address object.
def ip_from_domain(domain)
PotatoMesh::Sanitizer.ip_from_domain(domain)
end
# Proxy for {PotatoMesh::Sanitizer.sanitized_string}.
#
# @param value [Object] arbitrary input.
# @return [String] trimmed string representation.
def sanitized_string(value)
PotatoMesh::Sanitizer.sanitized_string(value)
end
# Retrieve the site name presented to users.
#
# @return [String] sanitised site label.
def sanitized_site_name
PotatoMesh::Sanitizer.sanitized_site_name
end
# Retrieve the configured announcement banner copy.
#
# @return [String, nil] sanitised announcement or nil when unset.
def sanitized_announcement
PotatoMesh::Sanitizer.sanitized_announcement
end
# Render the announcement copy with safe outbound links.
#
# @return [String, nil] escaped HTML snippet or nil when unset.
def announcement_html
announcement = sanitized_announcement
return nil unless announcement
fragments = []
last_index = 0
announcement.to_enum(:scan, ANNOUNCEMENT_URL_PATTERN).each do
match = Regexp.last_match
next unless match
start_index = match.begin(0)
end_index = match.end(0)
if start_index > last_index
fragments << Rack::Utils.escape_html(announcement[last_index...start_index])
end
url = match[0]
escaped_url = Rack::Utils.escape_html(url)
fragments << %(<a href="#{escaped_url}" target="_blank" rel="noopener noreferrer">#{escaped_url}</a>)
last_index = end_index
end
if last_index < announcement.length
fragments << Rack::Utils.escape_html(announcement[last_index..])
end
fragments.join
end
# Retrieve the configured channel.
#
# @return [String] sanitised channel identifier.
def sanitized_channel
PotatoMesh::Sanitizer.sanitized_channel
end
# Retrieve the configured frequency descriptor.
#
# @return [String] sanitised frequency text.
def sanitized_frequency
PotatoMesh::Sanitizer.sanitized_frequency
end
# Build the configuration hash exposed to the frontend application.
#
# @return [Hash] JSON serialisable configuration payload.
def frontend_app_config
{
refreshIntervalSeconds: PotatoMesh::Config.refresh_interval_seconds,
refreshMs: PotatoMesh::Config.refresh_interval_seconds * 1000,
chatEnabled: !private_mode?,
channel: sanitized_channel,
frequency: sanitized_frequency,
contactLink: sanitized_contact_link,
contactLinkUrl: sanitized_contact_link_url,
mapCenter: {
lat: PotatoMesh::Config.map_center_lat,
lon: PotatoMesh::Config.map_center_lon,
},
mapZoom: PotatoMesh::Config.map_zoom,
maxDistanceKm: PotatoMesh::Config.max_distance_km,
tileFilters: PotatoMesh::Config.tile_filters,
instanceDomain: app_constant(:INSTANCE_DOMAIN),
instancesFeatureEnabled: federation_enabled? && !private_mode?,
}
end
# Retrieve the configured contact link or nil when unset.
#
# @return [String, nil] contact link identifier.
def sanitized_contact_link
PotatoMesh::Sanitizer.sanitized_contact_link
end
# Retrieve the hyperlink derived from the configured contact link.
#
# @return [String, nil] hyperlink pointing to the community chat.
def sanitized_contact_link_url
PotatoMesh::Sanitizer.sanitized_contact_link_url
end
# Retrieve the configured maximum node distance in kilometres.
#
# @return [Numeric, nil] maximum distance or nil if disabled.
def sanitized_max_distance_km
PotatoMesh::Sanitizer.sanitized_max_distance_km
end
# Format a kilometre value for human readable output.
#
# @param distance [Numeric] distance in kilometres.
# @return [String] formatted distance value.
def formatted_distance_km(distance)
PotatoMesh::Meta.formatted_distance_km(distance)
end
# Build the canonical node detail path for the supplied identifier.
#
# @param identifier [String, nil] node identifier in ``!xxxx`` notation.
# @return [String, nil] detail path including the canonical ``!`` prefix.
def node_detail_path(identifier)
ident = string_or_nil(identifier)
return nil unless ident && !ident.empty?
trimmed = ident.strip
return nil if trimmed.empty?
body = trimmed.start_with?("!") ? trimmed[1..-1] : trimmed
return nil unless body && !body.empty?
escaped = Rack::Utils.escape_path(body)
"/nodes/!#{escaped}"
end
# Present a version string with a leading ``v`` when missing to keep
# UI labels consistent across tagged and fallback builds.
#
# @param version [String, nil] raw application version string.
# @return [String, nil] version string prefixed with ``v`` when needed.
def display_version(version)
return nil if version.nil? || version.to_s.strip.empty?
text = version.to_s.strip
text.start_with?("v") ? text : "v#{text}"
end
# Render a linked long name pointing to the node detail page.
#
# @param long_name [String] display name for the node.
# @param identifier [String, nil] canonical node identifier.
# @param css_class [String, nil] optional CSS class applied to the anchor.
# @return [String] escaped HTML snippet.
def node_long_name_link(long_name, identifier, css_class: "node-long-link")
text = string_or_nil(long_name)
return "" unless text
href = node_detail_path(identifier)
escaped_text = Rack::Utils.escape_html(text)
return escaped_text unless href
canonical_identifier = canonical_node_identifier(identifier)
class_attr = css_class ? %( class="#{css_class}") : ""
data_attrs = %( data-node-detail-link="true")
if canonical_identifier
escaped_identifier = Rack::Utils.escape_html(canonical_identifier)
data_attrs = %(#{data_attrs} data-node-id="#{escaped_identifier}")
end
%(<a#{class_attr} href="#{href}"#{data_attrs}>#{escaped_text}</a>)
end
# Normalise a node identifier by ensuring the canonical ``!`` prefix.
#
# @param identifier [String, nil] raw identifier string.
# @return [String, nil] canonical identifier or ``nil`` when unavailable.
def canonical_node_identifier(identifier)
ident = string_or_nil(identifier)
return nil unless ident && !ident.empty?
trimmed = ident.strip
return nil if trimmed.empty?
trimmed.start_with?("!") ? trimmed : "!#{trimmed}"
end
# Generate the meta description used in SEO tags.
#
# @return [String] combined descriptive sentence.
def meta_description
PotatoMesh::Meta.description(private_mode: private_mode?)
end
# Generate the structured meta configuration for the UI.
#
# @return [Hash] frozen configuration metadata.
def meta_configuration
PotatoMesh::Meta.configuration(private_mode: private_mode?)
end
# Coerce an arbitrary value into an integer when possible.
#
# @param value [Object] user supplied value.
# @return [Integer, nil] parsed integer or nil when invalid.
def coerce_integer(value)
case value
when Integer
value
when Float
value.finite? ? value.to_i : nil
when Numeric
value.to_i
when String
trimmed = value.strip
return nil if trimmed.empty?
return trimmed.to_i(16) if trimmed.match?(/\A0[xX][0-9A-Fa-f]+\z/)
return trimmed.to_i(10) if trimmed.match?(/\A-?\d+\z/)
begin
float_val = Float(trimmed)
float_val.finite? ? float_val.to_i : nil
rescue ArgumentError
nil
end
else
nil
end
end
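# Coercion behaviour at a glance:
coerce_integer("0x1F")            # => 31 (hex literals accepted)
coerce_integer("  42  ")          # => 42
coerce_integer("3.9")             # => 3 (finite floats truncate)
coerce_integer("oops")            # => nil
coerce_integer(Float::INFINITY)   # => nil (non-finite rejected)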
# Coerce an arbitrary value into a floating point number when possible.
#
# @param value [Object] user supplied value.
# @return [Float, nil] parsed float or nil when invalid.
def coerce_float(value)
case value
when Float
value.finite? ? value : nil
when Integer
value.to_f
when Numeric
value.to_f
when String
trimmed = value.strip
return nil if trimmed.empty?
begin
float_val = Float(trimmed)
float_val.finite? ? float_val : nil
rescue ArgumentError
nil
end
else
nil
end
end
# Coerce an arbitrary value into a boolean according to common truthy
# conventions.
#
# @param value [Object] user supplied value.
# @return [Boolean, nil] boolean interpretation or nil when unknown.
def coerce_boolean(value)
case value
when true, false
value
when String
trimmed = value.strip.downcase
return true if %w[true 1 yes y].include?(trimmed)
return false if %w[false 0 no n].include?(trimmed)
nil
when Numeric
!value.to_i.zero?
else
nil
end
end
# Normalise PEM encoded public key content into LF line endings.
#
# @param value [String, #to_s, nil] raw PEM content.
# @return [String, nil] cleaned PEM string or nil when blank.
def sanitize_public_key_pem(value)
return nil if value.nil?
pem = value.is_a?(String) ? value : value.to_s
pem = pem.gsub(/\r\n?/, "\n")
return nil if pem.strip.empty?
pem
end
# Recursively coerce hash keys to strings and normalise nested arrays.
#
# @param value [Object] JSON compatible value.
# @return [Object] structure with canonical string keys.
def normalize_json_value(value)
case value
when Hash
value.each_with_object({}) do |(key, val), memo|
memo[key.to_s] = normalize_json_value(val)
end
when Array
value.map { |element| normalize_json_value(element) }
else
value
end
end
# Parse JSON payloads or hashes into normalised hashes with string keys.
#
# @param value [Hash, String, nil] raw JSON object or string representation.
# @return [Hash, nil] canonicalised hash or nil when parsing fails.
def normalize_json_object(value)
case value
when Hash
normalize_json_value(value)
when String
trimmed = value.strip
return nil if trimmed.empty?
begin
parsed = JSON.parse(trimmed)
rescue JSON::ParserError
return nil
end
parsed.is_a?(Hash) ? normalize_json_value(parsed) : nil
else
nil
end
end
# Emit a structured debug log entry tagged with the calling context.
#
# @param message [String] text to emit.
# @param context [String] logical source of the message.
# @param metadata [Hash] additional structured key/value data.
# @return [void]
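      #
      # @example Tagged debug entry (illustrative call; the metadata keys are hypothetical)
      #   debug_log("resolved node", context: "federation", node_id: "!deadbeef")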
def debug_log(message, context: "app", **metadata)
logger = PotatoMesh::Logging.logger_for(self)
PotatoMesh::Logging.log(logger, :debug, message, context: context, **metadata)
end
# Emit a structured warning log entry tagged with the calling context.
#
# @param message [String] text to emit.
# @param context [String] logical source of the message.
# @param metadata [Hash] additional structured key/value data.
# @return [void]
def warn_log(message, context: "app", **metadata)
logger = PotatoMesh::Logging.logger_for(self)
PotatoMesh::Logging.log(logger, :warn, message, context: context, **metadata)
end
# Indicate whether private mode has been requested.
#
# @return [Boolean] true when PRIVATE=1.
def private_mode?
PotatoMesh::Config.private_mode_enabled?
end
# Identify whether the Rack environment corresponds to the test suite.
#
# @return [Boolean] true when RACK_ENV is "test".
def test_environment?
ENV["RACK_ENV"] == "test"
end
# Determine whether the application is running in a production environment.
#
# @return [Boolean] true when APP_ENV or RACK_ENV resolves to "production".
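      #
      # @example Either variable can flag production (illustrative, not from this diff)
      #   ENV["APP_ENV"] = "Production"
      #   production_environment? # => true, the comparison downcases first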
def production_environment?
app_env = string_or_nil(ENV["APP_ENV"])&.downcase
rack_env = string_or_nil(ENV["RACK_ENV"])&.downcase
app_env == "production" || rack_env == "production"
end
# Determine whether federation features should be active.
#
# @return [Boolean] true when federation configuration allows it.
def federation_enabled?
PotatoMesh::Config.federation_enabled?
end
# Determine whether federation announcements should run asynchronously.
#
# @return [Boolean] true when announcements are enabled.
def federation_announcements_active?
federation_enabled? && !test_environment?
end
end
end
end
require_relative "helpers/logging_helpers"
require_relative "helpers/html_helpers"
require_relative "helpers/node_helpers"
require_relative "helpers/config_helpers"
@@ -0,0 +1,129 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Helpers
      # Fetch an application-level constant exposed by {PotatoMesh::Application}.
#
# @param name [Symbol] constant identifier to retrieve.
# @return [Object] constant value stored on the application class.
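      #
      # @example Reading the instance domain (as consumed by frontend_app_config below)
      #   app_constant(:INSTANCE_DOMAIN)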
def app_constant(name)
PotatoMesh::Application.const_get(name)
end
# Retrieve the configured Prometheus report identifiers as an array.
#
# @return [Array<String>] list of report IDs used on the metrics page.
def prom_report_ids
PotatoMesh::Config.prom_report_id_list
end
# Read a text configuration value with a fallback.
#
# @param key [String] environment variable key.
# @param default [String] fallback value when unset.
# @return [String] sanitised configuration string.
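      #
      # @example Falling back to a default (the key shown here is hypothetical)
      #   fetch_config_string("SITE_NAME", "Potato Mesh") # => ENV value, or "Potato Mesh" when unset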
def fetch_config_string(key, default)
PotatoMesh::Config.fetch_string(key, default)
end
# Build the configuration hash exposed to the frontend application.
#
# @return [Hash] JSON serialisable configuration payload.
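      #
      # @example Serialising for the client (illustrative; assumes the json stdlib is loaded)
      #   require "json"
      #   frontend_app_config.to_json # embedded into the page for the frontend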
def frontend_app_config
{
refreshIntervalSeconds: PotatoMesh::Config.refresh_interval_seconds,
refreshMs: PotatoMesh::Config.refresh_interval_seconds * 1000,
chatEnabled: !private_mode?,
channel: sanitized_channel,
frequency: sanitized_frequency,
contactLink: sanitized_contact_link,
contactLinkUrl: sanitized_contact_link_url,
mapCenter: {
lat: PotatoMesh::Config.map_center_lat,
lon: PotatoMesh::Config.map_center_lon,
},
mapZoom: PotatoMesh::Config.map_zoom,
maxDistanceKm: PotatoMesh::Config.max_distance_km,
tileFilters: PotatoMesh::Config.tile_filters,
instanceDomain: app_constant(:INSTANCE_DOMAIN),
instancesFeatureEnabled: federation_enabled? && !private_mode?,
}
end
# Generate the meta description used in SEO tags.
#
# @return [String] combined descriptive sentence.
def meta_description
PotatoMesh::Meta.description(private_mode: private_mode?)
end
# Generate the structured meta configuration for the UI.
#
# @return [Hash] frozen configuration metadata.
def meta_configuration
PotatoMesh::Meta.configuration(private_mode: private_mode?)
end
# Indicate whether private mode has been requested.
#
# @return [Boolean] true when PRIVATE=1.
def private_mode?
PotatoMesh::Config.private_mode_enabled?
end
# Identify whether the Rack environment corresponds to the test suite.
#
# @return [Boolean] true when RACK_ENV is "test".
def test_environment?
ENV["RACK_ENV"] == "test"
end
# Determine whether the application is running in a production environment.
#
# @return [Boolean] true when APP_ENV or RACK_ENV resolves to "production".
def production_environment?
app_env = string_or_nil(ENV["APP_ENV"])&.downcase
rack_env = string_or_nil(ENV["RACK_ENV"])&.downcase
app_env == "production" || rack_env == "production"
end
# Determine whether federation features should be active.
#
# @return [Boolean] true when federation configuration allows it.
def federation_enabled?
PotatoMesh::Config.federation_enabled?
end
# Determine whether federation announcements should run asynchronously.
#
# @return [Boolean] true when announcements are enabled.
def federation_announcements_active?
federation_enabled? && !test_environment?
end
      # Format a kilometre value for human-readable output.
#
# @param distance [Numeric] distance in kilometres.
# @return [String] formatted distance value.
def formatted_distance_km(distance)
PotatoMesh::Meta.formatted_distance_km(distance)
end
end
end
end
