Compare commits

..

94 Commits

Author SHA1 Message Date
l5y 13b2ce9067 web: fix meshcore node misclassification (#748)
* web: fix meshcore node misclassification

* web: address review comments

* web: address review comments
2026-04-15 12:38:50 +02:00
l5y 5a73e212a3 web: optimize caching (#744)
* web: optimize caching

* web: address review comments

* web: address review comments

* web: run rufo
2026-04-14 23:29:54 +02:00
l5y 07c8e85caa web: fix federation resolver issue with multi addresses (#743)
* web: fix federation resolver issue with multi addresses

* web: add tests

* web: address review comments
2026-04-14 18:55:40 +02:00
l5y c08b3f2c2d web: restore refresh and protocol buttons (#742)
* web: restore refresh and protocol buttons

* web: restore refresh and protocol buttons

* web: restore refresh and protocol buttons

* web: address review comments
2026-04-14 16:54:57 +02:00
dependabot[bot] 851b2180dd build(deps): bump rand from 0.9.2 to 0.9.4 in /matrix (#741)
Bumps [rand](https://github.com/rust-random/rand) from 0.9.2 to 0.9.4.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/0.9.4/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/rand_core-0.9.2...0.9.4)

---
updated-dependencies:
- dependency-name: rand
  dependency-version: 0.9.4
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-14 08:22:07 +02:00
l5y c175445251 ingestor: fix serial connection failures (#736)
* ingestor: fix serial connection failures

* ingestor: address review comments

* ingestor: address review comments

* ingestor: further hardening

* ingestor: add tests

* ingestor: address review comments

* ingestor: address review comments
2026-04-13 23:42:07 +02:00
l5y b951dbffeb web: per protocol active node counts (#735)
* web: per protocol active node counts

* web: address review comments
2026-04-13 18:26:16 +02:00
l5y 10e6c99196 data: better lora frequency handling for meshtastic (#733)
* data: better lora frequency handling for meshtastic

* ingestor: address review comments
2026-04-12 16:02:15 +02:00
l5y aeb97477f0 chore: bump version to 0.6.1 (#726) 2026-04-09 13:14:20 +02:00
l5y 81e588e44c web: add markdown static pages (#723)
* web: add markdown static pages

* web: add tests and docker

* web: improve wording and configs

* web: add tests

* web: address review comments

* web: address review comments

* Potential fix for pull request finding 'CodeQL / Incomplete multi-character sanitization'

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* web: address review comments

* web: address review comments

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-04-08 16:42:13 +02:00
l5y 083de6418f web: fix federation for multi protocol (#722)
* web: fix federation for multi protocol

* web: fix short name emojis

* web: address review comments

* ci: fix the codeql gap

* ci: fix the codeql gap

* ci: fix the codeql gap

* ci: remove swift
2026-04-08 14:36:43 +02:00
l5y 5b9e6e3d48 data: trace analysis multi ingestor support (#721)
* data: trace analysis multi ingestor support

* address review comments
2026-04-08 11:58:32 +02:00
l5y 4a6ba38e94 chore: prepare codebase for breaking release (#718)
* chore: prepare codebase for breaking release

* docker: fix debug flag in prod matrix bridge
2026-04-08 10:51:38 +02:00
l5y 4d38ddd341 web: facelift (#716)
* web: facelift

* web: facelift

* web: facelift

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments

* web: more css magic

* web: link parsing for chat contact

* web: remove one-letter fallback for shortnames

* Potential fix for pull request finding 'CodeQL / Incomplete multi-character sanitization'

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* web: fix fallback for shortnames

* web: address review comments

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-04-07 21:38:43 +02:00
l5y 267d2ec9e1 data: fix position time updates (#715)
* data: fix position time updates

* data: fix position time updates
2026-04-06 19:29:38 +02:00
l5y 526a0c7246 data: fix meshcore ingestor self reporting (#713)
* data: fix meshcore ingestor self reporting

* data: fix meshcore ingestor self reporting

* address review comments
2026-04-06 15:19:01 +02:00
l5y 95aa1de8a8 web: sort channels by activity not index (#711)
* web: sort channels by activity not index

* web: address review comments
2026-04-06 14:04:47 +02:00
l5y d8b80c2a97 web: reference meshcore nodes in chat (#709)
* web: reference meshcore nodes in chat

* data: add adv_name to messages

* web: address review comments

* derive actual companion from name string

* derive actual companion from name string

* derive actual companion from name string

* web: address review comments

* web: address review comments
2026-04-06 13:39:00 +02:00
l5y 406fa80dd0 web: fix node disappearance role reset (#707)
* web: fix node disappearance role reset

* web: address review comments

* web: address review comments

* web: address review comments
2026-04-05 23:43:36 +02:00
l5y de1ccc5a2e release: v0.6.0 — remove deprecated env var aliases (#704)
* chore: bump version to 0.6.0 and remove deprecated env var aliases

BREAKING CHANGES:
- POTATOMESH_INSTANCE removed — use INSTANCE_DOMAIN
- PROVIDER removed — use PROTOCOL
- MESH_SERIAL removed — use CONNECTION
- PORT config alias removed — use CONNECTION

The _ConfigModule proxy class (which kept PROTOCOL/PROVIDER and
CONNECTION/PORT in sync) is deleted. docker-compose.yml now defaults
INSTANCE_DOMAIN to http://web:41447 so deployments without an explicit
value continue to work.

* tests: run black

* address review comments
2026-04-05 16:49:10 +02:00
l5y 0a479e4517 web: protect real node names from fallback (#702)
* web: protect real node names from fallback

* web: address review comments

* web: address review comments
2026-04-05 13:57:18 +02:00
l5y 8c59396ec8 fix: derive channel probe bound from device max_channels (#701)
Replace the hardcoded max_idx=8 parameter on _ensure_channel_names with
a DEVICE_INFO query (send_device_query → max_channels) so the full range
of configured channels is always probed regardless of firmware variant.
Falls back to _CHANNEL_PROBE_FALLBACK_MAX (32) when the query fails or
the device returns an older firmware that omits max_channels.

Also removes always=True from the warning-severity channel failure log
(redundant — only debug-severity is gated behind the DEBUG flag) and adds
a deferred-import comment in _ensure_channel_names.
2026-04-05 13:46:04 +02:00
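The probe-bound derivation described in #701 can be sketched roughly as follows. This is a minimal illustration only: `send_device_query`, the shape of the `DEVICE_INFO` response, and the function name are assumptions based on the commit message, not the project's actual API; only `_CHANNEL_PROBE_FALLBACK_MAX` and the query/fallback flow come from the commit text.

```python
# Sketch: derive the channel probe bound from the device, with a
# conservative fallback for failed queries or older firmware.
_CHANNEL_PROBE_FALLBACK_MAX = 32  # from the commit message

def resolve_channel_probe_bound(send_device_query):
    """Return how many channel slots to probe on this device."""
    try:
        info = send_device_query("DEVICE_INFO")  # hypothetical query call
        max_channels = info.get("max_channels") if info else None
    except Exception:
        max_channels = None  # query failed entirely
    # Older firmware may omit max_channels; fall back to a safe upper bound.
    if isinstance(max_channels, int) and max_channels > 0:
        return max_channels
    return _CHANNEL_PROBE_FALLBACK_MAX
```

The point of the change is that the bound now tracks the firmware's own report instead of a hardcoded `max_idx=8`, so fully configured devices are never under-probed.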
l5y 3647cb125b web: define meshcore modem presets (#696)
* web: define meshcore modem presets

* web: address review comments
2026-04-05 13:37:58 +02:00
l5y adc122fce0 data: register meshcore channel mappings (#695)
* data: register meshcore channel mappings

* fix: use mc.commands.get_channel for MeshCore channel name probing

MeshCore exposes device commands via the commands sub-object
(CommandHandler), not directly on MeshCore instances.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: probe all channel indices regardless of ERROR responses

Removed the consecutive-error early-stop heuristic from
_ensure_channel_names so sparse channel configurations (e.g. slots 0
and 5 configured with slots 1–4 empty) are fully probed. Only a hard
exception aborts the loop early.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-05 13:36:03 +02:00
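The revised probing loop from the second fix above (no consecutive-error early stop, so sparse slot layouts are fully scanned) might look like this sketch. The `get_channel` callable and its response shape are illustrative assumptions; only the "probe every index, abort only on a hard exception" behavior is taken from the commit message.

```python
def probe_channel_names(get_channel, max_idx):
    """Probe every slot up to max_idx; empty/ERROR slots don't stop the scan."""
    names = {}
    for idx in range(max_idx):
        try:
            resp = get_channel(idx)  # hypothetical device command
        except Exception:
            break  # only a hard exception aborts the loop early
        if resp and resp.get("name"):  # ERROR/empty slots are simply skipped
            names[idx] = resp["name"]
    return names
```

With the old heuristic, a configuration like slots 0 and 5 set with 1-4 empty would have stopped after a few consecutive misses; here slot 5 is still reached.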
l5y d33ebd8f4c data: provide frequency and modem preset for meshcore (#694)
* data: provide frequency and modem preset for meshcore

* data: provide frequency and modem preset for meshcore

* ingestor: address review comments

* fix: remove duplicate _mark_packet_seen entry from handlers __all__

* ci: install meshcore in Python workflow

protocols/meshcore.py now imports meshcore at module level (required to
fix a self-referential import failure after the providers/ → protocols/
rename).  test_provider_unit.py imports that module unconditionally, so
meshcore must be present in the test environment.

* data: run black
2026-04-05 09:13:48 +02:00
l5y 06530f36ff web: add proper short names for meshcore companions (#693)
* web: add proper short names for meshcore companions

* web: address review comments
2026-04-05 09:01:43 +02:00
l5y 3cfa0db7e6 web: distinguish meshcore from meshtastic in frontend (#688)
* web: distinguish meshcore from meshtastic in frontend

* fix mark_packet_seen bug

* web: distinguish meshcore from meshtastic in frontend

* address review comments

* address review comments

* address review comments
2026-04-04 17:14:16 +02:00
l5y d9420ff13b fix: address review comments from PRs #676 and #681 (#689)
* fix: address review comments from PRs #676 and #681

- Introduce ClosedBeforeConnectedError(ConnectionError) subclass so
  callers can distinguish a user-initiated shutdown from a hardware
  failure without string-matching the exception message (#676)
- Add test covering the close-before-connected path: asserts
  isConnected stays False and error_holder contains the typed error
- Add protocolIconPrefixHtml unit tests covering null, meshtastic,
  meshcore, and unknown protocol strings (#681)
- Add buildDisplayContext tests for protocol extraction from trace,
  node, and absent candidate sources (#681)
- Expose buildDisplayContext via _testUtils to make it directly testable
- Add meshcore icon presence assertions to createAnnouncementEntry and
  createMessageChatEntry tests (previously only checked absence of
  meshtastic icon)

* fix: address #689 review comments

- Move createMessageChatEntry meshcore icon test into its own section,
  after the createMessageChatEntry divider where it belongs
- Export ClosedBeforeConnectedError from providers/__init__.py via the
  existing lazy-load __getattr__ so callers outside the providers/
  subpackage can catch it without importing the full meshcore module

* refactor: eliminate test boilerplate to fix SonarCloud duplication gate

Introduce withApp() and innerHtml() helpers in main-protocol.test.js to
replace the 18-repeated setupApp/try/finally/cleanup pattern and the
inconsistent innerHTML extraction expression. No test logic changed.

* refactor: extract stalled-run helpers to fix SonarCloud duplication gate

The two stall-based _run_meshcore tests shared ~20 lines of identical
setup and spin-loop boilerplate. Extract _setup_stalled_run() and
_start_stalled_run() so each test contains only its distinct assertions.
2026-04-04 13:28:26 +02:00
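The typed-exception approach from the first bullet can be illustrated like so. The class name and its `ConnectionError` base follow the commit message; the `handle` helper and its return strings are a hypothetical caller, not the project's code.

```python
class ClosedBeforeConnectedError(ConnectionError):
    """Raised when the link is closed by the user before connecting.

    Subclassing ConnectionError lets callers distinguish a deliberate
    shutdown from a hardware failure without string-matching messages.
    """

def handle(err):
    # Callers branch on the type instead of parsing the message text.
    if isinstance(err, ClosedBeforeConnectedError):
        return "user shutdown"
    if isinstance(err, ConnectionError):
        return "hardware failure"
    raise err
```

The subclass check must come first, since every `ClosedBeforeConnectedError` is also a `ConnectionError`.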
Ben Allfree 7e0ba60a22 fix: get meshcore protocol icon displaying correctly (#681) 2026-04-04 13:00:25 +02:00
Ben Allfree 257e26c996 [meshcore] fix: race condition (#676)
* fix: ensure stop_event is set before connection completion in _run_meshcore

* Fix CancelledError lint in meshcore cancel test
2026-04-04 12:41:56 +02:00
l5y dcb374fbf9 enh: surface meshcore role types (#680) (#685)
* enh: surface meshcore role types (#680)

Map MeshCore ADV_TYPE_* integers to user.role strings so COMPANION,
REPEATER, ROOM_SERVER, and SENSOR roles are surfaced to the dashboard.
Role is omitted when ADV_TYPE_NONE (0) or unknown.

Co-authored-by: Ben Allfree <ben@benallfree.com>

* data: run black

---------

Co-authored-by: Ben Allfree <ben@benallfree.com>
2026-04-04 10:41:06 +02:00
l5y 9c3dae3e7d chore: refactor codebase before meshcore release (#682)
* chore: refactor codebase before meshcore release

* data: run black

* fix: resolve SonarCloud S1244/S5796 reliability issues in test files

Replace floating-point equality comparisons with pytest.approx() to
satisfy S1244, and replace the `is` identity operator with id()-based
comparison to satisfy S5796.

* fix: remove duplicate encrypted_flag assignment in store_packet_dict

The encrypted_flag was computed identically on lines 307 and 345 with no
mutation of `encrypted` between them. Remove the dead second assignment.
2026-04-04 10:22:31 +02:00
Ben Allfree 7806efb2cf meshcore/fix: short name should be 1st 4 hex digits of public key (#679) 2026-04-04 09:40:49 +02:00
Ben Allfree 7a21de7cda chore: update dependencies and configuration files (#674)
* Updated versions and SHA256 checksums for several packages in pubspec.lock.
* Added include statements for Pods configuration in Debug.xcconfig and Release.xcconfig.
2026-04-03 23:21:49 +02:00
Ben Allfree 295d4cf2bb chore: update mesh.sh to use requirements file (#675) 2026-04-03 23:20:48 +02:00
l5y 09ea277a40 data/meshcore: fix ble and enable tcp (#669)
* data/meshcore: fix ble and enable tcp

* ingestor: address review comments

* ingestor: address review comments
2026-04-02 22:31:33 +02:00
l5y 4fa0745d1b data: handle store_forward and router_heartbeat portnum (#667)
* data: handle store_forward and router_heartbeat portnum

* ingestor: address review comments
2026-03-31 23:42:26 +02:00
l5y a62a068c08 feat: implement meshcore provider (#663)
* feat: add meshcore support

* fix: address PR #663 review comments

* fix: address PR #663 review comments

* address review comments
2026-03-31 13:44:05 +02:00
l5y 5c49af5355 ci: update dependabot and codecov settings (#666) 2026-03-31 12:45:07 +02:00
l5y e48c575b9d web: prepare release (#665)
* web: prepare release

* fix: address pre-release review concerns

- Emit invalid telemetry_type warning at severity=warning/always=True so
  it surfaces in production logs, not just under DEBUG=1
- Hoist VALID_TELEMETRY_TYPES to a module-level constant in DataProcessing
  to avoid per-call allocation inside insert_telemetry
- Add Python test covering the invalid-type drop path in store_telemetry_packet
- Add Ruby spec asserting that an invalid telemetry_type in a POST payload
  is discarded and metric-based inference takes over
2026-03-30 23:15:55 +02:00
l5y e03675168b app: only query meshtastic provider (#664)
* app: only query meshtastic provider

* app: address review comments
2026-03-30 19:04:34 +02:00
l5y d6a2e263cc data: prepare ingestor for meshcore (#658)
* data: prepare ingestor for meshcore

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments
2026-03-30 09:17:10 +02:00
l5y f638c79e13 web: fix css issues (#659)
* web: fix css issues

* chore: bump version to 0.5.12
2026-03-30 08:55:35 +02:00
l5y 874e81ab8b web: prepare frontend for multi protocol (#657)
* web: prepare frontend for multi protocol

* web: address review comments

* fix: address review feedback on multi-protocol frontend prep

- Replace iconHtml/innerHTML in renderChatTabs with iconSrc + DOM APIs;
  the img element is now built attribute-by-attribute so no innerHTML trust
  boundary exists even if iconSrc were to receive external input
- Add MESHTASTIC_ICON_SRC / MESHCORE_ICON_SRC constants to protocol-helpers;
  meshtasticIconHtml() and meshcoreIconHtml() reference these so the asset
  path has a single source of truth
- Use meshtasticIconHtml() in the map legend via a temp span to eliminate
  the 7-setAttribute duplication
- Add getRoleColors(protocol) to role-helpers, making meshcoreRoleColors
  reachable through a tested code path rather than a dead export
- Rename __test__ export in main.js to __testUtils for consistency
- Add JSDoc cross-reference on normalizeNodeNameValue vs stringOrNull


* web: address review comments

* web: address review comments

* web: address review comments
2026-03-30 08:21:39 +02:00
l5y a5d0008555 feat: split device and power-sensor telemetry charts (#643) (#656)
* feat: split device and power-sensor telemetry charts (#643)

Add telemetry_type TEXT discriminator column across the full stack so
device_metrics rows no longer mix with power_metrics in the same chart.
Python and Ruby ingestors detect the protobuf subtype at write time;
classifySnapshot() provides field-presence fallback for legacy rows.
'Power metrics' chart split into 'Device health' and 'Power sensor'.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: skip typeFilter for aggregated telemetry; add air_quality coverage

- renderTelemetryChart now skips spec.typeFilter when chartOptions.isAggregated
  is true, preventing mixed-bucket aggregated snapshots from losing series data
- renderTelemetryCharts detects the aggregated vs per-packet path and sets
  isAggregated accordingly; typeFilter still applies for per-packet history
- JS tests: extract makeAggregatedNode/makeHistoryNode helpers to eliminate
  fixture duplication; add aggregated-mixed-bucket regression test; move
  type-separation tests onto the history path where filtering actually applies
- Ruby + Python: add air_quality_metrics telemetry_type tests for coverage

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor: reduce test duplication flagged by Sonar

Hoist CHART_NOW_MS/CHART_NOW_SECONDS constants to eliminate 14 repeated
setup lines across renderTelemetryCharts tests.  Extract
expect_stored_telemetry_type helper in app_spec to replace the four
identical with_db/SELECT/expect blocks in telemetry_type inference tests.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* web: address review comments

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 00:07:24 +02:00
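The field-presence fallback that `classifySnapshot()` provides for legacy rows (per the commit description) might look roughly like this. The field names are illustrative guesses at Meshtastic-style telemetry keys, not the project's actual schema, and the real implementation is JavaScript; this Python sketch only shows the classification idea.

```python
# Hypothetical field-presence classifier for telemetry rows that predate
# the telemetry_type discriminator column. Field sets are assumptions.
DEVICE_FIELDS = {"battery_level", "voltage", "channel_utilization"}
POWER_FIELDS = {"ch1_voltage", "ch1_current", "ch2_voltage", "ch2_current"}

def classify_snapshot(row):
    """Infer a telemetry_type for a legacy row from which fields it carries."""
    present = {k for k, v in row.items() if v is not None}
    if present & POWER_FIELDS:
        return "power_metrics"
    if present & DEVICE_FIELDS:
        return "device_metrics"
    return "unknown"
```

Checking power fields first means a row that somehow carries both kinds lands in the power chart rather than silently mixing back into device health.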
l5y 4d0d6f8565 web: implement a 'protocol' field across systems (#655)
* web: implement a 'protocol' field across systems

* web: address review feedback on multi-protocol support

- Rebase on main (pick up coordinate-clearing bugfix from #654)
- P1: prevent cross-protocol message merges on shared packet IDs
- P2: exclude "ingestor" key when enforcing /api/nodes batch limit
- Extract append_protocol_filter helper + PROTOCOL_CLAUSE constant to
  reduce cognitive complexity and deduplicate SQL fragment in queries.rb
- Extract coerce_bool helper to reduce upsert_node cognitive complexity
- Merge nested if in insert_message protocol update path (Sonar)
- Add explicit UPDATE backfill in ensure_schema_upgrades so any pre-existing
  NULL/empty protocol rows are set to meshtastic on upgrade
- Rename migration file to 20260328_ (correct year)
- Expand protocol_spec.rb: filter tests for all 7 endpoints,
  cross-protocol non-merge test, batch limit test, Sonar constant fixes,
  ENV.fetch, P1 regression test


* web: address review comments
2026-03-29 11:48:32 +02:00
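The `append_protocol_filter` helper the review extracted lives in `queries.rb` (Ruby); a language-neutral sketch of the same idea, with illustrative names, could read:

```python
# Sketch of a shared protocol-filter helper: one SQL fragment constant,
# one function that appends it plus its bind parameter. Names are
# illustrative, not the actual queries.rb API.
PROTOCOL_CLAUSE = "protocol = ?"

def append_protocol_filter(sql, params, protocol):
    """Append a protocol WHERE/AND clause and its bind parameter."""
    if not protocol:
        return sql, params
    joiner = " AND " if " where " in sql.lower() else " WHERE "
    return sql + joiner + PROTOCOL_CLAUSE, params + [protocol]
```

Centralizing the fragment is what removes the duplicated SQL the review flagged, and binding the value (rather than interpolating it) keeps the filter injection-safe.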
l5y 7b1d25e286 fix upsert clearing node coordinates bug (#654) 2026-03-28 21:21:13 +01:00
l5y 5adbe2263e data: resolve circular dependency of deamon.py (#653)
* data: resolve circular dependency of deamon.py

* address review comments

* address review comments

* address review comments
2026-03-28 18:46:21 +01:00
Ben Allfree b1c416d029 first cut (#651) 2026-03-28 17:09:12 +01:00
dependabot[bot] 8305ca588c build(deps): bump rustls-webpki from 0.103.8 to 0.103.10 in /matrix (#649)
Bumps [rustls-webpki](https://github.com/rustls/webpki) from 0.103.8 to 0.103.10.
- [Release notes](https://github.com/rustls/webpki/releases)
- [Commits](https://github.com/rustls/webpki/compare/v/0.103.8...v/0.103.10)

---
updated-dependencies:
- dependency-name: rustls-webpki
  dependency-version: 0.103.10
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-21 12:55:17 +01:00
dependabot[bot] 0cf56b6fba build(deps): bump quinn-proto from 0.11.13 to 0.11.14 in /matrix (#646)
Bumps [quinn-proto](https://github.com/quinn-rs/quinn) from 0.11.13 to 0.11.14.
- [Release notes](https://github.com/quinn-rs/quinn/releases)
- [Commits](https://github.com/quinn-rs/quinn/compare/quinn-proto-0.11.13...quinn-proto-0.11.14)

---
updated-dependencies:
- dependency-name: quinn-proto
  dependency-version: 0.11.14
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-11 14:56:43 +01:00
l5y ecce7f3504 chore: bump version to 0.5.11 (#645)
* chore: bump version to 0.5.11

* data: run black
2026-03-01 21:59:04 +01:00
l5y 17fa183c4f web: limit horizontal size of dropdown (#644)
* web: limit horizontal size of dropdown

* address review comments
2026-03-01 21:49:06 +01:00
l5y 5b0a6f5f8b web: expose node stats in distinct api (#641)
* web: expose node stats in distinct api

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments
2026-02-14 21:14:10 +01:00
l5y 2e8b5ad856 web: do not merge channels by name (#640) 2026-02-14 15:42:14 +01:00
l5y e32b098be4 web: do not merge channels by ID in frontend (#637)
* web: do not merge channels by ID in frontend

* web: address review comments

* web: address review comments
2026-02-14 14:56:25 +01:00
l5y b45629f13c web: do not touch neighbor last seen on neighbor info (#636)
* web: do not touch neighbor last seen on neighbor info

* web: address review comments
2026-02-14 14:43:46 +01:00
l5y 96421c346d ingestor: report self id per packet (#635)
* ingestor: report self id per packet

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments
2026-02-14 14:29:05 +01:00
l5y 724b3e14e5 ci: fix docker compose and docs (#634)
* ci: fix docker compose and docs

* docker: address review comments
2026-02-14 13:25:43 +01:00
l5y e8c83a2774 web: suppress encrypted text messages in frontend (#633)
* web: suppress encrypted text messages in frontend

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments
2026-02-14 13:11:02 +01:00
l5y 5c5a9df5a6 federation: ensure requests timeout properly and can be terminated (#631)
* federation: ensure requests timeout properly and can be terminated

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments
2026-02-14 12:29:01 +01:00
dependabot[bot] 7cb4bbe61b build(deps): bump bytes from 1.11.0 to 1.11.1 in /matrix (#627)
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.11.0 to 1.11.1.
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.11.0...v1.11.1)

---
updated-dependencies:
- dependency-name: bytes
  dependency-version: 1.11.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-06 21:40:49 +01:00
l5y fed8b9e124 matrix: config loading now merges optional TOML with CLI/env/secret inputs (#617)
* matrix: config loading now merges optional TOML with CLI/env/secret inputs

* matrix: fix tests

* matrix: address review comments

* matrix: fix tests

* matrix: cover missing unit test vectors
2026-01-10 23:39:53 +01:00
l5y 60e734086f matrix: logs only non-sensitive config fields (#616)
* matrix: logs only non-sensitive config fields

* matrix: run fmt
2026-01-10 21:06:51 +01:00
l5y c3181e9bd5 web: decrypted takes precedence (#614)
* web: decrypted takes precedence

* web: run rufo

* web: fix tests

* web: fix tests

* web: cover missing unit test vectors

* web: fix tests
2026-01-10 13:13:55 +01:00
l5y f4fa487b2d Add Apache headers to missing sources (#615) 2026-01-10 13:07:47 +01:00
l5y e0237108c6 web: decrypt PSK-1 unencrypted messages on arrival (#611)
* web: decrypt PSK-1 unencrypted messages on arrival

* web: address review comments

* web: use proper psk to decrypt instead of alias

* cover missing unit test vectors

* tests: run black formatter

* web: fix tests

* web: refine decryption data processing logic

* web: address review comments

* web: cover missing unit test vectors

* web: cover missing unit test vectors

* web: cover missing unit test vectors

* web: cover missing unit test vectors
2026-01-10 12:33:59 +01:00
l5y d7a636251d web: daemonize federation worker pool to avoid deadlocks on stuck announcements (#610)
* web: daemonize federation worker pool to avoid deadlocks on stuck announcements

* web: address review comments

* web: address review comments
2026-01-09 09:12:25 +01:00
l5y 108573b100 web: add announcement banner (#609)
* web: add announcement banner

* web: cover missing unit test vectors
2026-01-08 21:17:59 +01:00
l5y 36f55e6b79 l5y chore version 0510 (#608)
* chore: bump version to 0.5.10

* chore: bump version to 0.5.10

* chore: update changelog
2026-01-08 16:20:14 +01:00
l5y b4dd72e7eb matrix: listen for synapse on port 41448 (#607)
* matrix: listen for synapse on port 41448

* matrix: address review comments

* matrix: address review comments

* matrix: cover missing unit test vectors

* matrix: cover missing unit test vectors
2026-01-08 15:51:31 +01:00
l5y f5f2e977a1 web: collapse federation map legend (#604)
* web: collapse federation map legend

* web: cover missing unit test vectors
2026-01-06 17:31:20 +01:00
l5y e9a0dc0d59 web: fix stale node queries (#603) 2026-01-06 16:13:04 +01:00
l5y d75c395514 matrix: move short name to display name (#602)
* matrix: move short name to display name

* matrix: run fmt
2026-01-05 23:24:27 +01:00
l5y b08f951780 ci: update ruby to 4 (#601)
* ci: update ruby to 4

* ci: update dispatch triggers
2026-01-05 23:23:56 +01:00
l5y 955431ac18 web: display traces of last 28 days if available (#599)
* web: display traces of last 28 days if available

* web: address review comments

* web: fix tests

* web: fix tests
2026-01-05 21:22:16 +01:00
l5y 7f40abf92a web: establish menu structure (#597)
* web: establish menu structure

* web: cover missing unit test vectors

* web: fix tests
2026-01-05 21:18:51 +01:00
l5y c157fd481b matrix: fixed the text-message checkpoint regression (#595)
* matrix: fixed the text-message checkpoint regression

* matrix: improve formatting

* matrix: fix tests
2026-01-05 18:20:25 +01:00
l5y a6fc7145bc matrix: cache seen messages by rx_time not id (#594)
* matrix: cache seen messages by rx_time not id

* matrix: fix review comments

* matrix: fix review comments

* matrix: cover missing unit test vectors

* matrix: fix tests
2026-01-05 17:34:54 +01:00
l5y ca05cbb2c5 web: hide the default '0' tab when not active (#593) 2026-01-05 16:26:56 +01:00
l5y 5c79572c4d matrix: fix empty bridge state json (#592)
* matrix: fix empty bridge state json

* matrix: fix tests
2026-01-05 16:11:24 +01:00
l5y 6fd8e5ad12 web: allow certain charts to overflow upper bounds (#585)
* web: allow certain charts to overflow upper bounds

* web: cover missing unit test vectors
2025-12-31 15:15:18 +01:00
l5y 09fbc32e48 ingestor: support ROUTING_APP messages (#584)
* ingestor: support ROUTING_APP messages

* data: cover missing unit test vectors

* data: address review comments

* tests: fix
2025-12-31 13:13:34 +01:00
l5y 4591d5acd6 ci: run nix flake check on ci (#583)
* ci: run nix flake check on ci

* ci: fix tests
2025-12-31 12:58:37 +01:00
l5y 6c711f80b4 web: hide legend by default (#582)
* web: hide legend by default

* web: run rufo
2025-12-31 12:42:53 +01:00
Benjamin Grosse e61e701240 nix flake (#577) 2025-12-31 12:00:11 +01:00
apo-mak 42f4e80a26 Support BLE UUID format for macOS Bluetooth devices (#575)
* Initial plan

* Add BLE UUID support for macOS devices

Co-authored-by: apo-mak <25563515+apo-mak@users.noreply.github.com>

* docs: Add UUID format example for macOS BLE connections

Co-authored-by: apo-mak <25563515+apo-mak@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: apo-mak <25563515+apo-mak@users.noreply.github.com>
2025-12-20 20:21:59 +01:00
l5y 4dc03f33ca web: add mesh.qrp.ro as seed node (#573) 2025-12-17 10:48:51 +01:00
l5y 5572c6cd12 web: ensure unknown nodes for messages and traces (#572) 2025-12-17 10:21:03 +01:00
l5y 4f7e66de82 chore: bump version to 0.5.9 (#569) 2025-12-16 21:14:10 +00:00
l5y c1898037c0 web: add secondary seed node jmrp.io (#568) 2025-12-16 21:38:41 +01:00
l5y efc5f64279 data: implement whitelist for ingestor (#567)
* data: implement whitelist for ingestor

* data: run black

* data: cover missing unit test vectors
2025-12-16 21:11:53 +01:00
l5y 636a203254 web: add ?since= parameter to all apis (#566) 2025-12-16 20:24:31 +01:00
l5y 2e78fa7a3a matrix: fix docker build 2025-12-16 19:26:31 +01:00
251 changed files with 45385 additions and 6662 deletions
+2 -2
@@ -16,5 +16,5 @@ coverage:
   status:
     project:
       default:
-        target: 99%
-        threshold: 1%
+        target: 100%
+        threshold: 10%
+11 -1
@@ -1,3 +1,6 @@
+# Copyright © 2025-26 l5yth & contributors
+# Licensed under the Apache License, Version 2.0 (see LICENSE)
+#
 # PotatoMesh Environment Configuration
 # Copy this file to .env and customize for your setup
@@ -14,7 +17,7 @@ INSTANCE_DOMAIN="mesh.example.org"
 # Generate a secure token: openssl rand -hex 32
 API_TOKEN="your-secure-api-token-here"
-# Meshtastic connection target (required for ingestor)
+# Mesh radio connection target (required for ingestor)
 # Common serial paths:
 # - Linux: /dev/ttyACM0, /dev/ttyUSB0
 # - macOS: /dev/cu.usbserial-*
@@ -23,6 +26,10 @@ API_TOKEN="your-secure-api-token-here"
 # Bluetooth address (e.g. ED:4D:9E:95:CF:60).
 CONNECTION="/dev/ttyACM0"
+# Mesh protocol to use (meshtastic or meshcore)
+# Default: meshtastic
+PROTOCOL="meshtastic"
 # =============================================================================
 # SITE CUSTOMIZATION
 # =============================================================================
@@ -68,6 +75,9 @@ PRIVATE=0
 # Debug mode (0=off, 1=on)
 DEBUG=0
+# Energy saving mode — sleep between ingestion cycles (0=off, 1=on)
+ENERGY_SAVING=0
 # Default map zoom override
 # MAP_ZOOM=15
+16
@@ -19,6 +19,22 @@ updates:
     schedule:
       interval: "weekly"
   - package-ecosystem: "python"
     directory: "/data"
     schedule:
       interval: "weekly"
+  - package-ecosystem: "github-actions"
+    directory: "/"
+    schedule:
+      interval: "weekly"
+  - package-ecosystem: "cargo"
+    directory: "/matrix"
+    schedule:
+      interval: "weekly"
+  - package-ecosystem: "npm"
+    directory: "/web"
+    schedule:
+      interval: "weekly"
+  - package-ecosystem: "pub"
+    directory: "/app"
+    schedule:
+      interval: "weekly"
+3 -9
@@ -1,3 +1,6 @@
+<!-- Copyright © 2025-26 l5yth & contributors -->
+<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
 # GitHub Actions Workflows
 ## Workflows
@@ -10,12 +13,3 @@
 - **`mobile.yml`** - Flutter mobile tests with coverage reporting
 - **`release.yml`** - Tag-triggered Flutter release builds for Android and iOS
-## Usage
-```bash
-# Build locally
-docker-compose build
-# Deploy
-docker-compose up -d
-```
+1 -1
@@ -23,7 +23,7 @@ on:
 jobs:
   analyze:
     name: Analyze (${{ matrix.language }})
-    runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
+    runs-on: ubuntu-latest
     permissions:
       security-events: write
       packages: read
+1 -1
@@ -188,7 +188,7 @@ jobs:
 docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:${{ steps.version.outputs.version }}
 docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:${{ steps.version.outputs.version_with_v }}
 docker run --rm --name ingestor-test \
-  -e POTATOMESH_INSTANCE=http://localhost:41447 \
+  -e INSTANCE_DOMAIN=http://localhost:41447 \
   -e API_TOKEN=test-token \
   -e CONNECTION=mock \
   -e DEBUG=1 \
+1
@@ -20,6 +20,7 @@ on:
   pull_request:
     branches: [ "main" ]
     paths:
+      - '.github/**'
       - 'web/**'
       - 'tests/**'
+1
View File
@@ -20,6 +20,7 @@ on:
pull_request:
branches: [ "main" ]
paths:
- '.github/**'
- 'app/**'
- 'tests/**'
+35
View File
@@ -0,0 +1,35 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Nix
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
jobs:
flake-check:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v30
with:
extra_nix_config: |
experimental-features = nix-command flakes
- name: Run flake checks
run: nix flake check
+2 -1
View File
@@ -20,6 +20,7 @@ on:
pull_request:
branches: [ "main" ]
paths:
- '.github/**'
- 'data/**'
- 'tests/**'
@@ -38,7 +39,7 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install black pytest pytest-cov meshtastic
pip install black pytest pytest-cov meshtastic meshcore
- name: Test with pytest and coverage
run: |
mkdir -p reports
+2 -1
View File
@@ -20,6 +20,7 @@ on:
pull_request:
branches: [ "main" ]
paths:
- '.github/**'
- 'web/**'
- 'tests/**'
@@ -34,7 +35,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
ruby-version: ['3.3', '3.4']
ruby-version: ['3.4', '4.0']
steps:
- uses: actions/checkout@v5
+4
View File
@@ -74,5 +74,9 @@ web/.config
node_modules/
web/node_modules/
# Operator-customised static pages (keep only the shipped default)
web/pages/*.md
# Debug symbols
ignored.txt
ignored-*.txt
-48
View File
@@ -1,48 +0,0 @@
# Repository Guidelines
Keep code well structured, modular, and not monolithic. If modules get too big, consider a submodule structure.
Make sure all tests pass for Python (`pytest`), Ruby (`rspec`), and JavaScript (`npm test`).
Make sure all code is properly inline documented (PDoc, RDoc, JSDoc, etc.). We do not want any undocumented code.
Make sure all code is 100% unit tested. We want all lines, units, and branches to be thoroughly covered by tests.
New source files should have Apache v2 license headers using the exact string `Copyright © 2025-26 l5yth & contributors`.
Run linters for Python (`black`) and Ruby (`rufo`) to ensure consistent code formatting.
## Project Structure & Module Organization
The repository splits runtime and ingestion logic. `web/` holds the Sinatra dashboard (Ruby code in `lib/potato_mesh`, views in `views/`, static bundles in `public/`).
`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.
`matrix/` contains the Rust Matrix bridge; build with `cargo build --release` or `docker build -f matrix/Dockerfile .`, and keep bridge config under `matrix/Config.toml` when running locally.
## Build, Test, and Development Commands
Run dependency installs inside `web/`: `bundle install` for gems and `npm ci` for JavaScript tooling. Start the app with `cd web && API_TOKEN=dev ./app.sh` for local work or `bundle exec rackup -p 41447` when integrating elsewhere.
Prep ingestion with `python -m venv .venv && pip install -r data/requirements.txt`; `./data/mesh.sh` streams from live radios. `docker-compose -f docker-compose.dev.yml up` brings up the full stack.
Container images publish via `.github/workflows/docker.yml` as `potato-mesh-{service}-linux-$arch` (`web`, `ingestor`, `matrix-bridge`), using the Dockerfiles in `web/`, `data/`, and `matrix/`.
## Coding Style & Naming Conventions
Use two-space indentation for Ruby and keep `# frozen_string_literal: true` at the top of new files. Keep Ruby classes/modules in `CamelCase`, filenames in `snake_case.rb`, and feature specs in `*_spec.rb`.
JavaScript follows ES modules under `public/assets/js`; co-locate components with `__tests__` folders and use kebab-case filenames. Format Ruby via `bundle exec rufo .` and Python via `black`. Skip committing generated coverage artifacts.
## Flutter Mobile App (`app/`)
The Flutter client lives in `app/`. Keep only the mobile targets (`android/`, `ios/`) under version control unless you explicitly support other platforms. Do not commit Flutter build outputs or editor cruft (`.dart_tool/`, `.flutter-plugins-dependencies`, `.idea/`, `.metadata`, `*.iml`, `.fvmrc` if unused).
Install dependencies with `cd app && flutter pub get`; format with `dart format .` and lint via `flutter analyze`. Run tests with `cd app && flutter test` and keep widget/unit coverage high—no new code without tests. Commit `pubspec.lock` and analysis options so toolchains stay consistent.
## Testing Guidelines
Ruby specs run with `cd web && bundle exec rspec`, producing SimpleCov output in `coverage/`. Front-end behaviour is verified through Node's test runner: `cd web && npm test` writes V8 coverage and JUnit XML under `reports/`.
The ingestion layer is guarded by `pytest -q tests/test_mesh.py`; leave fixtures in `tests/` untouched so CI can replay them. New features should ship with matching specs and updated integration checks.
## Commit & Pull Request Guidelines
Commits should stay imperative and reference issues the way history does (`Add chat log entries... (#408)`). Squash noisy work-in-progress commits before pushing. Pull requests need a concise summary, screenshots or curl traces for UI/API tweaks, and links to tracked issues. Paste the command output for the test suites you ran and mention configuration toggles (`API_TOKEN`, `PRIVATE`) reviewers must set.
## Security & Configuration Tips
Never commit real API tokens or `.sqlite` dumps; use `.env.local` files ignored by Git. Confirm env defaults (`API_TOKEN`, `INSTANCE_DOMAIN`, `PRIVATE`) before deploying, and set `FEDERATION=0` when staging private nodes. Review `PROMETHEUS.md` when exposing metrics so scrape endpoints stay internal.
+142
View File
@@ -1,5 +1,147 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# CHANGELOG
## v0.6.0
This is a service release of the radio mesh app-suite `potato-mesh` v0.6.0, which introduces new features and overhauls the user interface. The most notable change is multi-protocol support, along with a **Meshcore** implementation in the ingestor, web app, and frontend.
Demo: <https://potatomesh.net/>
### Meshcore
To start ingesting Meshcore data into an upgraded potato-mesh web app, simply set `PROTOCOL="meshcore"` for your ingestor.
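For example, a minimal ingestor environment might look like this (variable names besides `PROTOCOL` are taken from the Docker guide elsewhere in this release; the values are placeholders):

```bash
export PROTOCOL="meshcore"                 # select the Meshcore backend
export CONNECTION="/dev/ttyACM0"           # serial device, TCP endpoint, or BLE target
export INSTANCE_DOMAIN="http://web:41447"  # the upgraded potato-mesh web app
export API_TOKEN="change-me"               # must match the web app's token
# then start the ingestor as usual, e.g. ./data/mesh.sh
```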
### About Pages
The other notable feature is the removal of the "darkmode" and "info" buttons in favor of customizable markdown pages, which allow more flexibility for custom content (info about presets, contact information, etc.) - see `/pages/*.md` in the web app ([#723](https://github.com/l5yth/potato-mesh/pull/723)).
### Breaking Variable Changes
The following deprecated environment variables have finally been removed in this release ([#704](https://github.com/l5yth/potato-mesh/pull/704)):
* ~~POTATOMESH_INSTANCE~~ - please use `INSTANCE_DOMAIN`
* ~~MESH_SERIAL~~ and ~~PORT~~ - please use `CONNECTION`
### Features
* Web: add markdown static pages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/723>
* Data: trace analysis multi-ingestor support by @l5yth in <https://github.com/l5yth/potato-mesh/pull/721>
* Web: facelift by @l5yth in <https://github.com/l5yth/potato-mesh/pull/716>
* Web: sort channels by activity not index by @l5yth in <https://github.com/l5yth/potato-mesh/pull/711>
* Data: derive meshcore channel probe bound from device max_channels by @l5yth in <https://github.com/l5yth/potato-mesh/pull/701>
* Web: define meshcore modem presets by @l5yth in <https://github.com/l5yth/potato-mesh/pull/696>
* Data: register meshcore channel mappings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/695>
* Data: provide frequency and modem preset for meshcore by @l5yth in <https://github.com/l5yth/potato-mesh/pull/694>
* Web: distinguish meshcore from meshtastic in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/688>
* [Meshcore] fix: get meshcore protocol icon displaying correctly by @benallfree in <https://github.com/l5yth/potato-mesh/pull/681>
### Fixes
* Web: fix federation for multi protocol by @l5yth in <https://github.com/l5yth/potato-mesh/pull/722>
* Data: fix position time updates by @l5yth in <https://github.com/l5yth/potato-mesh/pull/715>
* Data: fix meshcore ingestor self reporting by @l5yth in <https://github.com/l5yth/potato-mesh/pull/713>
* Web: reference meshcore nodes in chat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/709>
* Web: fix node disappearance role reset by @l5yth in <https://github.com/l5yth/potato-mesh/pull/707>
* Web: protect real node names from fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/702>
* Web: add proper short names for meshcore companions by @l5yth in <https://github.com/l5yth/potato-mesh/pull/693>
* Fix: address review comments from PRs #676 and #681 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/689>
* [Meshcore] fix: race condition by @benallfree in <https://github.com/l5yth/potato-mesh/pull/676>
### Chores
* Release: v0.6.0 — remove deprecated env var aliases by @l5yth in <https://github.com/l5yth/potato-mesh/pull/704>
* Chore: prepare codebase for breaking release by @l5yth in <https://github.com/l5yth/potato-mesh/pull/718>
## v0.5.12
This is a service release of the app potato-mesh v0.5.12 which improves performance and stability.
Notably, the frontend went through some graphical tweaks to prepare for an upcoming multi-protocol release (meshcore, reticulum, etc.).
* Enh: surface meshcore role types (#680) by @l5yth in https://github.com/l5yth/potato-mesh/pull/685
* Chore: refactor codebase before meshcore release by @l5yth in https://github.com/l5yth/potato-mesh/pull/682
* [Meshcore] enh: short name should be 1st 4 hex digits of public key by @benallfree in https://github.com/l5yth/potato-mesh/pull/679
* Chore: update xcode deps by @benallfree in https://github.com/l5yth/potato-mesh/pull/674
* Chore: update mesh.sh to use requirements file by @benallfree in https://github.com/l5yth/potato-mesh/pull/675
* Data/meshcore: fix ble and enable tcp by @l5yth in https://github.com/l5yth/potato-mesh/pull/669
* Data: handle store_forward and router_heartbeat portnum by @l5yth in https://github.com/l5yth/potato-mesh/pull/667
* Feat: implement meshcore provider by @l5yth in https://github.com/l5yth/potato-mesh/pull/663
* Ci: update dependabot and codecov settings by @l5yth in https://github.com/l5yth/potato-mesh/pull/666
* Web: prepare release by @l5yth in https://github.com/l5yth/potato-mesh/pull/665
* App: only query meshtastic provider by @l5yth in https://github.com/l5yth/potato-mesh/pull/664
* Data: prepare ingestor for meshcore by @l5yth in https://github.com/l5yth/potato-mesh/pull/658
* Web: fix css issues by @l5yth in https://github.com/l5yth/potato-mesh/pull/659
* Web: prepare frontend for multi protocol by @l5yth in https://github.com/l5yth/potato-mesh/pull/657
* Feat: split device and power-sensor telemetry charts (#643) by @l5yth in https://github.com/l5yth/potato-mesh/pull/656
* Web: implement a 'protocol' field across systems by @l5yth in https://github.com/l5yth/potato-mesh/pull/655
* Fix upsert clearing node coordinates bug by @l5yth in https://github.com/l5yth/potato-mesh/pull/654
* Data: resolve circular dependency of deamon.py by @l5yth in https://github.com/l5yth/potato-mesh/pull/653
* Proposal: mesh provider pattern refactor by @benallfree in https://github.com/l5yth/potato-mesh/pull/651
* Build(deps): bump rustls-webpki from 0.103.8 to 0.103.10 in /matrix by @dependabot[bot] in https://github.com/l5yth/potato-mesh/pull/649
* Build(deps): bump quinn-proto from 0.11.13 to 0.11.14 in /matrix by @dependabot[bot] in https://github.com/l5yth/potato-mesh/pull/646
## v0.5.11
* Chore: bump version to 0.5.11 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/645>
* Web: limit horizontal size of dropdown by @l5yth in <https://github.com/l5yth/potato-mesh/pull/644>
## v0.5.10
* Web: expose node stats in distinct api by @l5yth in <https://github.com/l5yth/potato-mesh/pull/641>
* Web: do merge channels by name by @l5yth in <https://github.com/l5yth/potato-mesh/pull/640>
* Web: do not merge channels by ID in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/637>
* Web: do not touch neighbor last seen on neighbor info by @l5yth in <https://github.com/l5yth/potato-mesh/pull/636>
* Ingestor: report self id per packet by @l5yth in <https://github.com/l5yth/potato-mesh/pull/635>
* Ci: fix docker compose and docs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/634>
* Web: suppress encrypted text messages in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/633>
* Federation: ensure requests timeout properly and can be terminated by @l5yth in <https://github.com/l5yth/potato-mesh/pull/631>
* Build(deps): bump bytes from 1.11.0 to 1.11.1 in /matrix by @dependabot[bot] in <https://github.com/l5yth/potato-mesh/pull/627>
* Matrix: config loading now merges optional TOML with CLI/env/secret inputs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/617>
* Matrix: logs only non-sensitive config fields by @l5yth in <https://github.com/l5yth/potato-mesh/pull/616>
* Web: decrypted takes precedence by @l5yth in <https://github.com/l5yth/potato-mesh/pull/614>
* Add Apache 2.0 license headers to missing sources by @l5yth in <https://github.com/l5yth/potato-mesh/pull/615>
* Web: decrypt PSK-1 unencrypted messages on arrival by @l5yth in <https://github.com/l5yth/potato-mesh/pull/611>
* Web: daemonize federation worker pool to avoid deadlocks on stuck announcements by @l5yth in <https://github.com/l5yth/potato-mesh/pull/610>
* Web: add announcement banner by @l5yth in <https://github.com/l5yth/potato-mesh/pull/609>
* L5Y chore version 0510 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/608>
## v0.5.9
* Matrix: listen for synapse on port 41448 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/607>
* Web: collapse federation map legend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/604>
* Web: fix stale node queries by @l5yth in <https://github.com/l5yth/potato-mesh/pull/603>
* Matrix: move short name to display name by @l5yth in <https://github.com/l5yth/potato-mesh/pull/602>
* Ci: update ruby to 4 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/601>
* Web: display traces of last 28 days if available by @l5yth in <https://github.com/l5yth/potato-mesh/pull/599>
* Web: establish menu structure by @l5yth in <https://github.com/l5yth/potato-mesh/pull/597>
* Matrix: fixed the text-message checkpoint regression by @l5yth in <https://github.com/l5yth/potato-mesh/pull/595>
* Matrix: cache seen messages by rx_time not id by @l5yth in <https://github.com/l5yth/potato-mesh/pull/594>
* Web: hide the default '0' tab when not active by @l5yth in <https://github.com/l5yth/potato-mesh/pull/593>
* Matrix: fix empty bridge state json by @l5yth in <https://github.com/l5yth/potato-mesh/pull/592>
* Web: allow certain charts to overflow upper bounds by @l5yth in <https://github.com/l5yth/potato-mesh/pull/585>
* Ingestor: support ROUTING_APP messages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/584>
* Ci: run nix flake check on ci by @l5yth in <https://github.com/l5yth/potato-mesh/pull/583>
* Web: hide legend by default by @l5yth in <https://github.com/l5yth/potato-mesh/pull/582>
* Nix flake by @benjajaja in <https://github.com/l5yth/potato-mesh/pull/577>
* Support BLE UUID format for macOS Bluetooth devices by @apo-mak in <https://github.com/l5yth/potato-mesh/pull/575>
* Web: add mesh.qrp.ro as seed node by @l5yth in <https://github.com/l5yth/potato-mesh/pull/573>
* Web: ensure unknown nodes for messages and traces by @l5yth in <https://github.com/l5yth/potato-mesh/pull/572>
* Chore: bump version to 0.5.9 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/569>
## v0.5.8
* Web: add secondary seed node jmrp.io by @l5yth in <https://github.com/l5yth/potato-mesh/pull/568>
* Data: implement whitelist for ingestor by @l5yth in <https://github.com/l5yth/potato-mesh/pull/567>
* Web: add ?since= parameter to all apis by @l5yth in <https://github.com/l5yth/potato-mesh/pull/566>
* Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/565>
* Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/564>
* Web: fix federation signature validation and create fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/563>
* Chore: update readme by @l5yth in <https://github.com/l5yth/potato-mesh/pull/561>
* Matrix: add docker file for bridge by @l5yth in <https://github.com/l5yth/potato-mesh/pull/556>
* Matrix: add health checks to startup by @l5yth in <https://github.com/l5yth/potato-mesh/pull/555>
* Matrix: omit the api part in base url by @l5yth in <https://github.com/l5yth/potato-mesh/pull/554>
* App: add utility coverage tests for main.dart by @l5yth in <https://github.com/l5yth/potato-mesh/pull/552>
* Data: add thorough daemon unit tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/553>
* Chore: bump version to 0.5.8 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/551>
## v0.5.7
* Data: track ingestors heartbeat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/549>
+68
View File
@@ -0,0 +1,68 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# Repository Guidelines
Keep code as modular as possible to reduce duplication and improve reusability and readability — this applies to tests as well as production code. If a module grows large, split it into a submodule structure. Prefer composing small, single-purpose units over monolithic files.
Make sure all tests pass for Python (`pytest`), Ruby (`rspec`), and JavaScript (`npm test`).
All code must be 100% unit tested — every line, branch, and code path must have a unit test. "100%" is the floor, not the ceiling: smoke tests, integration tests, and end-to-end tests come on top of that. No new code ships without matching unit tests.
All code must be 100% documented according to the language's API-doc standard (PDoc for Python, RDoc for Ruby, JSDoc for JavaScript, rustdoc for Rust, dartdoc for Dart). Documentation must be sufficient to generate complete API docs from source. In addition to API-level docs, add inline comments wherever the logic is not immediately self-evident.
Every file in the repository must carry an Apache v2 license notice using the exact string `Copyright © 2025-26 l5yth & contributors`. **Source-code files** (`.rb`, `.py`, `.js`, `.rs`, `.dart`, etc.) must include the full Apache v2 license header block. **Non-source files** (docs, configs, YAML, TOML, Dockerfiles, etc.) must include a short 2-line Apache v2 notice (copyright line + license reference).
Run linters for Python (`black`) and Ruby (`rufo`) to ensure consistent code formatting.
## Project Structure & Module Organization
The repository splits runtime and ingestion logic. `web/` holds the Sinatra dashboard (Ruby code in `lib/potato_mesh`, views in `views/`, static bundles in `public/`).
`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. The ingestor is structured as the `data/mesh_ingestor/` package with the following key modules: `daemon.py` (main loop), `handlers.py` (packet processing), `interfaces.py` (interface helpers), `config.py` (env-driven config), `events.py` (TypedDict event schemas), `mesh_protocol.py` (MeshProtocol base), `node_identity.py` (canonical node ID utilities), `decode_payload.py` (CLI protobuf decoder), and the `protocols/` subpackage (currently `meshtastic.py`). API contracts for all POST ingest routes are documented in `data/mesh_ingestor/CONTRACTS.md`. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.
`matrix/` contains the Rust Matrix bridge; build with `cargo build --release` or `docker build -f matrix/Dockerfile .`, and keep bridge config under `matrix/Config.toml` when running locally.
## Build, Test, and Development Commands
Run dependency installs inside `web/`: `bundle install` for gems and `npm ci` for JavaScript tooling. Start the app with `cd web && API_TOKEN=dev ./app.sh` for local work or `bundle exec rackup -p 41447` when integrating elsewhere.
Prep ingestion with `python -m venv .venv && pip install -r data/requirements.txt`; `./data/mesh.sh` streams from live radios. `docker-compose -f docker-compose.dev.yml up` brings up the full stack.
Container images publish via `.github/workflows/docker.yml` as `potato-mesh-{service}-linux-$arch` (`web`, `ingestor`, `matrix-bridge`), using the Dockerfiles in `web/`, `data/`, and `matrix/`.
## Coding Style & Naming Conventions
Use two-space indentation for Ruby and keep `# frozen_string_literal: true` at the top of new files. Keep Ruby classes/modules in `CamelCase`, filenames in `snake_case.rb`, and feature specs in `*_spec.rb`.
JavaScript follows ES modules under `public/assets/js`; co-locate components with `__tests__` folders and use kebab-case filenames. Format Ruby via `bundle exec rufo .` and Python via `black`. Skip committing generated coverage artifacts.
## Flutter Mobile App (`app/`)
The Flutter client lives in `app/`. Keep only the mobile targets (`android/`, `ios/`) under version control unless you explicitly support other platforms. Do not commit Flutter build outputs or editor cruft (`.dart_tool/`, `.flutter-plugins-dependencies`, `.idea/`, `.metadata`, `*.iml`, `.fvmrc` if unused).
Install dependencies with `cd app && flutter pub get`; format with `dart format .` and lint via `flutter analyze`. Run tests with `cd app && flutter test` and keep widget/unit coverage high—no new code without tests. Commit `pubspec.lock` and analysis options so toolchains stay consistent.
## Testing Guidelines
Ruby specs run with `cd web && bundle exec rspec`, producing SimpleCov output in `coverage/`. Front-end behaviour is verified through Node's test runner: `cd web && npm test` writes V8 coverage and JUnit XML under `reports/`.
The ingestion layer is tested with `pytest -q tests/`; leave fixtures in `tests/` untouched so CI can replay them. The suite includes both integration tests (`test_mesh.py`) and focused unit tests — `test_events_unit.py` (TypedDict schemas), `test_provider_unit.py` (Provider protocol conformance and `MeshtasticProvider`), `test_node_identity_unit.py` (canonical ID helpers), `test_daemon_unit.py`, `test_serialization_unit.py`, and `test_decode_payload.py`. New features should ship with matching specs and updated integration checks.
## Adding a New Ingestor Protocol
The `data/mesh_ingestor/mesh_protocol.py` module defines a `@runtime_checkable` `MeshProtocol` class with five members: `name` (str), `subscribe()`, `connect(*, active_candidate)`, `extract_host_node_id(iface)`, and `node_snapshot_items(iface)`. To add a new backend (e.g. Reticulum):
1. Create `data/mesh_ingestor/protocols/<name>.py` with a class satisfying the `MeshProtocol` interface.
2. Register it in `data/mesh_ingestor/protocols/__init__.py`.
3. Pass an instance via `daemon.main(provider=...)` or make it the default in `main()`.
4. Cover the protocol with unit tests in `tests/test_provider_unit.py` — at minimum an `isinstance(..., MeshProtocol)` conformance check and any retry/error-handling paths.
Consult `data/mesh_ingestor/CONTRACTS.md` for the canonical event shapes all protocols must emit.
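Under those assumptions, a minimal backend might look like the following sketch. The `MeshProtocol` stub below is a stand-in for the real definition in `data/mesh_ingestor/mesh_protocol.py`, and `ReticulumProtocol` is a hypothetical example with placeholder bodies:

```python
from typing import Iterable, Protocol, Tuple, runtime_checkable

# Stand-in for data/mesh_ingestor/mesh_protocol.MeshProtocol: a sketch with
# the five members listed above, not the project's actual definition.
@runtime_checkable
class MeshProtocol(Protocol):
    name: str

    def subscribe(self) -> None: ...
    def connect(self, *, active_candidate): ...
    def extract_host_node_id(self, iface): ...
    def node_snapshot_items(self, iface): ...


class ReticulumProtocol:
    """Illustrative backend; method bodies are placeholders."""

    name = "reticulum"

    def subscribe(self) -> None:
        pass  # register packet callbacks with the radio stack here

    def connect(self, *, active_candidate):
        return object()  # return a live interface handle

    def extract_host_node_id(self, iface) -> str:
        return "!00000000"  # canonical ID of the attached node

    def node_snapshot_items(self, iface) -> Iterable[Tuple[str, dict]]:
        return []  # (node_id, node_dict) pairs known to the interface


# The conformance check the unit tests should include at minimum:
assert isinstance(ReticulumProtocol(), MeshProtocol)
```

Because `MeshProtocol` is `@runtime_checkable`, the `isinstance` check verifies that all five members are present without requiring inheritance.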
## GitHub Configuration Standards
Every language used in the repository must have a Dependabot entry checking for dependency updates on a **weekly** schedule. Keep the Dependabot config up to date as new languages or package ecosystems are added.
Codecov must be configured with a **100% coverage target** and a **10% threshold** (i.e. a drop of more than 10 percentage points fails the check). The `codecov.yml` should enforce this on both patch and project coverage.
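A `codecov.yml` matching that policy could look roughly like this (a sketch using Codecov's standard `coverage.status` keys; the repository's actual config may carry additional flags):

```yaml
coverage:
  status:
    project:
      default:
        target: 100%    # full project coverage is the goal
        threshold: 10%  # fail if coverage drops more than 10 points
    patch:
      default:
        target: 100%
        threshold: 10%
```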
Every service/component must have at least one GitHub Actions workflow that **builds and runs tests on pull requests against `main` and on direct pushes to `main`**. Workflows should cover all relevant test suites (Python, Ruby, JS, Rust, Flutter) for the components they touch.
## Commit & Pull Request Guidelines
Commits should stay imperative and reference issues the way history does (`Add chat log entries... (#408)`). Squash noisy work-in-progress commits before pushing. Pull requests need a concise summary, screenshots or curl traces for UI/API tweaks, and links to tracked issues. Paste the command output for the test suites you ran and mention configuration toggles (`API_TOKEN`, `PRIVATE`) reviewers must set.
## Security & Configuration Tips
Never commit real API tokens or `.sqlite` dumps; use `.env.local` files ignored by Git. Confirm env defaults (`API_TOKEN`, `INSTANCE_DOMAIN`, `PRIVATE`) before deploying, and set `FEDERATION=0` when staging private nodes. Review `PROMETHEUS.md` when exposing metrics so scrape endpoints stay internal.
+25 -10
View File
@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# PotatoMesh Docker Guide
PotatoMesh publishes ready-to-run container images to the GitHub Packages container
@@ -13,16 +16,16 @@ will pull the latest release images for you.
## Images on GHCR
| Service | Image |
|----------|---------------------------------------------------------------------------------------------------------------|
| Web UI | `ghcr.io/l5yth/potato-mesh-web-linux-amd64:<tag>` (e.g. `latest`, `3.0`, `v3.0`, or `3.1.0-rc1`) |
| Ingestor | `ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:<tag>` (e.g. `latest`, `3.0`, `v3.0`, or `3.1.0-rc1`) |
| Service | Image |
|----------|----------------------------------------------------------------------------------------------------------------|
| Web UI | `ghcr.io/l5yth/potato-mesh-web-linux-amd64:<tag>` (e.g. `latest`, `0.6.0`, `v0.6.0`, or `0.7.0-rc1`) |
| Ingestor | `ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:<tag>` (e.g. `latest`, `0.6.0`, `v0.6.0`, or `0.7.0-rc1`) |
Images are published for every tagged release. Stable builds receive both
semantic version tags (for example `3.0`) and a matching `v`-prefixed tag (for
example `v3.0`), plus a `latest` tag that tracks the newest stable release.
semantic version tags (for example `0.6.0`) and a matching `v`-prefixed tag (for
example `v0.6.0`), plus a `latest` tag that tracks the newest stable release.
Pre-release tags (for example `-rc`, `-beta`, `-alpha`, or `-dev` suffixes) are
published only with their explicit version strings (`3.1.0-rc1` and `v3.1.0-rc1`
published only with their explicit version strings (`0.7.0-rc1` and `v0.7.0-rc1`
in this example) and do **not** advance `latest`. Pin the versioned tags when
you need a specific build.
@@ -53,15 +56,15 @@ Additional environment variables are optional:
| `MAP_ZOOM` | _unset_ | Fixed Leaflet zoom (disables the auto-fit checkbox when set). |
| `MAX_DISTANCE` | `42` | Maximum relationship distance (km) before edges are hidden. |
| `DEBUG` | `0` | Enables verbose logging across services when set to `1`. |
| `ALLOWED_CHANNELS` | _unset_ | Comma-separated channel names the ingestor accepts; other channels are skipped before hidden filters. |
| `HIDDEN_CHANNELS` | _unset_ | Comma-separated channel names the ingestor skips when forwarding packets. |
| `FEDERATION` | `1` | Controls whether the instance announces itself and crawls peers (`1`) or stays isolated (`0`). |
| `PRIVATE` | `0` | Restricts public visibility and disables chat/message endpoints when set to `1`. |
| `CONNECTION` | `/dev/ttyACM0` | Serial device, TCP endpoint, or Bluetooth target used by the ingestor to reach the radio. |
The ingestor posts to the URL configured via `INSTANCE_DOMAIN` (defaulting to
`http://web:41447` in the provided compose file) and still accepts
`POTATOMESH_INSTANCE` as a legacy alias when the primary variable is unset. Use
`CHANNEL_INDEX` to select a LoRa channel on serial or Bluetooth connections.
`http://web:41447` in the provided compose file). Use `CHANNEL_INDEX` to select
a LoRa channel on serial or Bluetooth connections.
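Put together, the ingestor's environment in a Compose file might be sketched as follows (the service name and values are illustrative, not the shipped compose file):

```yaml
services:
  ingestor:
    environment:
      INSTANCE_DOMAIN: "http://web:41447"  # web app ingest endpoint
      API_TOKEN: "change-me"               # must match the web service
      CONNECTION: "/dev/ttyACM0"           # serial, TCP, or Bluetooth target
      CHANNEL_INDEX: "0"                   # LoRa channel for serial/Bluetooth
```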
## Docker Compose file
@@ -78,6 +81,18 @@ the container. This path stores the instance private key and staged
of container lifecycle events, generated credentials are not replaced on reboot
or re-deploy.
The `potatomesh_pages` volume mounts to `/app/pages` and holds operator-managed
Markdown files that are rendered as static content pages in the web UI. On first
start the default `1-about.md` page is copied from the image into the volume.
You can add, edit, or remove `.md` files in this volume to customise your
instance's navigation. To use a host directory instead of a named volume, replace
the volume entry with a bind mount:
```yaml
volumes:
- ./my-pages:/app/pages
```
## Start the stack
From the directory containing the Compose file:
+30 -9
View File
@@ -1,3 +1,4 @@
# syntax=docker/dockerfile:1.6
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -25,6 +26,9 @@ ENV BUNDLE_FORCE_RUBY_PLATFORM=true
# Install build dependencies and SQLite3
RUN apk add --no-cache \
build-base \
python3 \
py3-pip \
py3-virtualenv \
sqlite-dev \
linux-headers \
pkgconfig
@@ -40,11 +44,16 @@ RUN bundle config set --local force_ruby_platform true && \
bundle config set --local without 'development test' && \
bundle install --jobs=4 --retry=3
# Install Meshtastic decoder dependencies in a dedicated venv
RUN python3 -m venv /opt/meshtastic-venv && \
/opt/meshtastic-venv/bin/pip install --no-cache-dir meshtastic protobuf
# Production stage
FROM ruby:3.3-alpine AS production
# Install runtime dependencies
RUN apk add --no-cache \
python3 \
sqlite \
tzdata \
curl
@@ -58,18 +67,27 @@ WORKDIR /app
# Copy installed gems from builder stage
COPY --from=builder /usr/local/bundle /usr/local/bundle
COPY --from=builder /opt/meshtastic-venv /opt/meshtastic-venv
# Copy application code (exclude Dockerfile from web directory)
COPY --chown=potatomesh:potatomesh web/app.rb web/app.sh web/Gemfile web/Gemfile.lock* web/spec/ ./
# Copy application code (excluding the Dockerfile which is not required at runtime)
COPY --chown=potatomesh:potatomesh web/app.rb ./
COPY --chown=potatomesh:potatomesh web/app.sh ./
COPY --chown=potatomesh:potatomesh web/Gemfile ./
COPY --chown=potatomesh:potatomesh web/Gemfile.lock* ./
COPY --chown=potatomesh:potatomesh web/lib ./lib
COPY --chown=potatomesh:potatomesh web/spec ./spec
COPY --chown=potatomesh:potatomesh web/public ./public
COPY --chown=potatomesh:potatomesh web/views/ ./views/
COPY --chown=potatomesh:potatomesh web/views ./views
COPY --chown=potatomesh:potatomesh web/scripts ./scripts
# Copy SQL schema files from data directory
COPY --chown=potatomesh:potatomesh data/*.sql /data/
COPY --chown=potatomesh:potatomesh data/mesh_ingestor/decode_payload.py /app/data/mesh_ingestor/decode_payload.py
# Create data directory for SQLite database
RUN mkdir -p /app/data /app/.local/share/potato-mesh && \
chown -R potatomesh:potatomesh /app/data /app/.local
# Create data and configuration directories with correct ownership
RUN mkdir -p /app/.local/share/potato-mesh \
&& mkdir -p /app/.config/potato-mesh/well-known \
&& chown -R potatomesh:potatomesh /app/.local/share /app/.config
# Switch to non-root user
USER potatomesh
@@ -78,13 +96,16 @@ USER potatomesh
EXPOSE 41447
# Default environment variables (can be overridden by host)
ENV APP_ENV=production \
RACK_ENV=production \
ENV RACK_ENV=production \
APP_ENV=production \
MESHTASTIC_PYTHON=/opt/meshtastic-venv/bin/python \
XDG_DATA_HOME=/app/.local/share \
XDG_CONFIG_HOME=/app/.config \
SITE_NAME="PotatoMesh Demo" \
INSTANCE_DOMAIN="potato.example.com" \
CHANNEL="#LongFast" \
FREQUENCY="915MHz" \
MAP_CENTER="38.761944,-27.090833" \
MAP_ZOOM="" \
MAX_DISTANCE=42 \
CONTACT_LINK="#potatomesh:dod.ngo" \
DEBUG=0
+3
@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# Prometheus Monitoring for PotatoMesh
PotatoMesh exposes runtime telemetry through a dedicated Prometheus endpoint so you can
+109 -13
@@ -1,3 +1,6 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# 🥔 PotatoMesh
[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/l5yth/potato-mesh/ruby.yml?branch=main)](https://github.com/l5yth/potato-mesh/actions)
@@ -7,7 +10,10 @@
[![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/l5yth/potato-mesh/issues)
[![Matrix Chat](https://img.shields.io/badge/matrix-%23potatomesh:dod.ngo-blue)](https://matrix.to/#/#potatomesh:dod.ngo)
A federated, Meshtastic-powered node dashboard for your local community.
[![Meshtastic](https://img.shields.io/badge/Meshtastic-supported-67ea94)](https://meshtastic.org)
[![MeshCore](https://img.shields.io/badge/MeshCore-supported-000000)](https://meshcore.co.uk)
A federated, Meshtastic & MeshCore node dashboard for your local community.
_No MQTT clutter, just local LoRa aether._
* Web dashboard with chat window and map view showing nodes, positions, neighbors,
@@ -17,15 +23,17 @@ _No MQTT clutter, just local LoRa aether._
* Allows searching and filtering for nodes in map and table view.
* Federated: _automatically_ forms a federation with other communities running
Potato Mesh!
* Supports Meshtastic and Meshcore
* Supplemental Python ingestor to feed the POST APIs of the Web app with data remotely.
* Supports multiple ingestors per instance.
* Supports Meshtastic and MeshCore
* Matrix bridge that posts Meshtastic messages to a defined matrix channel (no
radio required).
* Mobile app to _read_ messages on your local aether (no radio required).
Live demo for Berlin #MediumFast: [potatomesh.net](https://potatomesh.net)
Live demo for Berlin: [potatomesh.net](https://potatomesh.net)
![screenshot of the fourth version](./scrot-0.4.png)
![screenshot of the sixth version](./scrot-0.7.png)
## Web App
@@ -88,10 +96,12 @@ The web app can be configured with environment variables (defaults shown):
| `CHANNEL` | `"#LongFast"` | Default channel name displayed in the UI. |
| `FREQUENCY` | `"915MHz"` | Default frequency description displayed in the UI. |
| `CONTACT_LINK` | `"#potatomesh:dod.ngo"` | Chat link or Matrix alias rendered in the footer and overlays. |
| `ANNOUNCEMENT` | _unset_ | Optional announcement banner text rendered above the header on every page. |
| `MAP_CENTER` | `38.761944,-27.090833` | Latitude and longitude that centre the map on load. |
| `MAP_ZOOM` | _unset_ | Fixed Leaflet zoom applied on first load; disables auto-fit when provided. |
| `MAX_DISTANCE` | `42` | Maximum distance (km) before node relationships are hidden on the map. |
| `DEBUG` | `0` | Set to `1` for verbose logging in the web and ingestor services. |
| `ALLOWED_CHANNELS` | _unset_ | Comma-separated channel names the ingestor accepts; when set, packets on any other channel are skipped before hidden-channel filtering is applied. |
| `HIDDEN_CHANNELS` | _unset_ | Comma-separated channel names the ingestor will ignore when forwarding packets. |
| `FEDERATION` | `1` | Set to `1` to announce your instance and crawl peers, or `0` to disable federation. Private mode overrides this. |
| `PRIVATE` | `0` | Set to `1` to hide the chat UI, disable message APIs, and exclude hidden clients from public listings. |
@@ -118,6 +128,28 @@ well-known document is staged in
The database can be found in `$XDG_DATA_HOME/potato-mesh`.
### Custom Pages
Instance operators can publish static content pages (contact details, mesh
protocol information, legal notices, etc.) by placing Markdown files in the
`pages/` directory inside `web/`. Each `.md` file automatically becomes a nav
entry and a route under `/pages/<slug>`.
Files are named `<sort-prefix>-<slug>.md` — the numeric prefix controls
navigation order and the slug becomes the URL path and nav label:
| Filename | Nav Label | URL |
| ---------------------- | -------------- | ----------------------- |
| `1-about.md` | About | `/pages/about` |
| `5-rules.md` | Rules | `/pages/rules` |
| `9-contact.md` | Contact | `/pages/contact` |
| `20-impressum.md` | Impressum | `/pages/impressum` |
A default `1-about.md` ships with the app. In Docker deployments the directory
is exposed as the `potatomesh_pages` volume (mounted at `/app/pages`) so you can
add or edit pages without rebuilding the image. The pages directory can also be
overridden with the `PAGES_DIR` environment variable.
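The filename convention above can be sketched as a small parser. This is illustrative only — `parse_page_filename` is a hypothetical helper, not the actual route handler in `web/`:

```python
import re

def parse_page_filename(filename):
    """Split '<sort-prefix>-<slug>.md' into (sort_key, nav_label, url).

    Hypothetical helper mirroring the naming convention described above;
    the real Ruby implementation may differ in edge cases.
    """
    m = re.fullmatch(r"(\d+)-([a-z0-9-]+)\.md", filename)
    if not m:
        return None  # not a custom page file
    order, slug = int(m.group(1)), m.group(2)
    return order, slug.capitalize(), f"/pages/{slug}"

# Example: build a nav list ordered by the numeric prefix.
files = ["20-impressum.md", "1-about.md", "5-rules.md"]
nav = sorted(filter(None, map(parse_page_filename, files)))
```

Sorting on the numeric prefix (rather than the raw filename string) keeps `20-impressum.md` after `5-rules.md`.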
### Federation
PotatoMesh instances can optionally federate by publishing signed metadata and
@@ -201,23 +233,85 @@ Run the script with `INSTANCE_DOMAIN` and `API_TOKEN` to keep updating
node records and parsing new incoming messages. Enable debug output with `DEBUG=1`,
specify the connection target with `CONNECTION` (default `/dev/ttyACM0`) or set it to
an IP address (for example `192.168.1.20:4403`) to use the Meshtastic TCP
interface. `CONNECTION` also accepts Bluetooth device addresses (e.g.,
`ED:4D:9E:95:CF:60`) and the script attempts a BLE connection if available. To keep
private channels out of the web UI, set `HIDDEN_CHANNELS` to a comma-separated
list of channel names (for example `HIDDEN_CHANNELS="Secret,Ops"`); packets on
those channels are discarded instead of being sent to `/api/messages`.
interface. `CONNECTION` also accepts Bluetooth device addresses in MAC format (e.g.,
`ED:4D:9E:95:CF:60`) or UUID format for macOS (e.g., `C0AEA92F-045E-9B82-C9A6-A1FD822B3A9E`)
and the script attempts a BLE connection if available. To keep
ingestion limited, set `ALLOWED_CHANNELS` to a comma-separated whitelist (for
example `ALLOWED_CHANNELS="Chat,Ops"`); packets on other channels are discarded.
Use `HIDDEN_CHANNELS` to block specific channels from the web UI even when they
appear in the allowlist.
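The interaction of the two variables can be summarized as: the allowlist is checked first, then hidden channels are dropped even when allowlisted. A minimal sketch of that decision (assumed semantics, not the actual ingestor code):

```python
def channel_allowed(name, allowed_csv="", hidden_csv=""):
    """Return True when a packet on channel `name` should be ingested.

    Sketch of the ALLOWED_CHANNELS/HIDDEN_CHANNELS semantics described
    above: an empty allowlist admits every channel, and HIDDEN_CHANNELS
    always wins over the allowlist.
    """
    allowed = [c.strip() for c in allowed_csv.split(",") if c.strip()]
    hidden = [c.strip() for c in hidden_csv.split(",") if c.strip()]
    if allowed and name not in allowed:
        return False  # not on the whitelist
    return name not in hidden
```

For example, with `ALLOWED_CHANNELS="Chat,Ops"` and `HIDDEN_CHANNELS="Ops"`, only `Chat` packets reach the web UI.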
## Nix
For the dev shell, run:
```bash
nix develop
```
The shell provides Ruby plus the Python ingestor dependencies (including `meshtastic`
and `protobuf`). To sanity-check that the ingestor starts, run `python -m data.mesh`
with the usual environment variables (`INSTANCE_DOMAIN`, `API_TOKEN`, `CONNECTION`).
To run the packaged apps directly:
```bash
nix run .#web
nix run .#ingestor
```
Minimal NixOS module snippet:
```nix
services.potato-mesh = {
enable = true;
apiTokenFile = config.sops.secrets.potato-mesh-api-token.path;
dataDir = "/var/lib/potato-mesh";
port = 41447;
instanceDomain = "https://mesh.me";
siteName = "Nix Mesh";
contactLink = "homeserver.mx";
mapCenter = "28.96,-13.56";
frequency = "868MHz";
ingestor = {
enable = true;
connection = "192.168.X.Y:4403";
};
};
```
## Docker
Docker images are published on Github for each release:
Docker images are published on GitHub Container Registry for each release.
Image names and tags follow the workflow format:
`${IMAGE_PREFIX}-${service}-${architecture}:${tag}` (see `.github/workflows/docker.yml`).
```bash
docker pull ghcr.io/l5yth/potato-mesh/web:latest # newest release
docker pull ghcr.io/l5yth/potato-mesh/web:v0.5.5 # pinned historical release
docker pull ghcr.io/l5yth/potato-mesh/ingestor:latest
docker pull ghcr.io/l5yth/potato-mesh/matrix-bridge:latest
docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:latest
docker pull ghcr.io/l5yth/potato-mesh-web-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-web-linux-armv7:latest
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:latest
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-armv7:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-armv7:latest
# version-pinned examples
docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.6.0
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.6.0
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.6.0
```
Note: `latest` is only published for non-prerelease versions. Pre-release tags
such as `-rc`, `-beta`, `-alpha`, or `-dev` are version-tagged only.
When using Compose, set `POTATOMESH_IMAGE_ARCH` in `docker-compose.yml` (or via
environment) so service images resolve to the correct architecture variant and
you avoid manual tag mistakes.
Feel free to run the [configure.sh](./configure.sh) script to set up your
environment. See the [Docker guide](DOCKER.md) for more details and custom
deployment instructions.
@@ -228,6 +322,8 @@ A matrix bridge is currently being worked on. It requests messages from a config
potato-mesh instance and forwards them to a specified matrix channel; see
[matrix/README.md](./matrix/README.md).
![matrix bridge](./scrot-0.6.png)
## Mobile App
A mobile _reader_ app is currently being worked on. Stay tuned for releases and updates.
+6 -2
@@ -1,6 +1,10 @@
# Meshtastic Reader
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
Meshtastic Reader — read-only PotatoMesh chat client for Android and iOS.
# PotatoMesh Mobile
PotatoMesh Mobile — read-only mesh chat client for Android and iOS.
Supports Meshtastic and MeshCore networks.
## Setup
+15
@@ -1,3 +1,18 @@
/*
* Copyright © 2025-26 l5yth & contributors
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
plugins {
id("com.android.application")
id("kotlin-android")
@@ -1,3 +1,16 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package net.potatomesh.reader
import io.flutter.embedding.android.FlutterActivity
+15
@@ -1,3 +1,18 @@
/*
* Copyright © 2025-26 l5yth & contributors
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
allprojects {
repositories {
google()
+15
@@ -1,3 +1,18 @@
/*
* Copyright © 2025-26 l5yth & contributors
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
pluginManagement {
val flutterSdkPath =
run {
+13 -1
@@ -1,5 +1,18 @@
#!/usr/bin/env bash
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
export GIT_TAG="$(git describe --tags --abbrev=0)"
export GIT_COMMITS="$(git rev-list --count ${GIT_TAG}..HEAD)"
export GIT_SHA="$(git rev-parse --short=9 HEAD)"
@@ -12,4 +25,3 @@ flutter run \
--dart-define=GIT_SHA="${GIT_SHA}" \
--dart-define=GIT_DIRTY="${GIT_DIRTY}" \
--device-id 38151FDJH00D4C
+2 -2
@@ -15,11 +15,11 @@
<key>CFBundlePackageType</key>
<string>FMWK</string>
<key>CFBundleShortVersionString</key>
<string>0.5.8</string>
<string>0.6.1</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
<string>0.5.8</string>
<string>0.6.1</string>
<key>MinimumOSVersion</key>
<string>14.0</string>
</dict>
+1
@@ -1 +1,2 @@
#include? "Pods/Target Support Files/Pods-Runner/Pods-Runner.debug.xcconfig"
#include "Generated.xcconfig"
+1
@@ -1 +1,2 @@
#include? "Pods/Target Support Files/Pods-Runner/Pods-Runner.release.xcconfig"
#include "Generated.xcconfig"
+13
@@ -1,3 +1,16 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import Flutter
import UIKit
+13
@@ -1 +1,14 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "GeneratedPluginRegistrant.h"
+13
@@ -1,3 +1,16 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import Flutter
import UIKit
import XCTest
+5 -1
@@ -2944,6 +2944,9 @@ class MeshNode {
}
}
/// The protocol identifier sent to the API to filter results to Meshtastic only.
const String _kProtocolFilter = 'meshtastic';
/// Build a messages API URI for a given domain or absolute URL.
Uri _buildMessagesUri(String domain, {int since = 0, int limit = 1000}) {
final trimmed = domain.trim();
@@ -2951,6 +2954,7 @@ Uri _buildMessagesUri(String domain, {int since = 0, int limit = 1000}) {
'limit': limit.toString(),
'encrypted': 'false',
'since': since.toString(),
'protocol': _kProtocolFilter,
};
if (trimmed.isEmpty) {
return Uri.https('potatomesh.net', '/api/messages', params);
@@ -2988,7 +2992,7 @@ Uri _buildNodeUri(String domain, String nodeId) {
/// Build the bulk nodes API URI for fetching recent nodes.
Uri _buildNodesUri(String domain, {int limit = 1000}) {
final trimmedDomain = domain.trim();
final params = {'limit': limit.toString()};
final params = {'limit': limit.toString(), 'protocol': _kProtocolFilter};
if (trimmedDomain.isEmpty) {
return Uri.https('potatomesh.net', '/api/nodes', params);
+8 -8
@@ -45,10 +45,10 @@ packages:
dependency: transitive
description:
name: characters
sha256: f71061c654a3380576a52b451dd5532377954cf9dbd272a78fc8479606670803
sha256: faf38497bda5ead2a8c7615f4f7939df04333478bf32e4173fcb06d428b5716b
url: "https://pub.dev"
source: hosted
version: "1.4.0"
version: "1.4.1"
checked_yaml:
dependency: transitive
description:
@@ -284,18 +284,18 @@ packages:
dependency: transitive
description:
name: matcher
sha256: dc58c723c3c24bf8d3e2d3ad3f2f9d7bd9cf43ec6feaa64181775e60190153f2
sha256: "12956d0ad8390bbcc63ca2e1469c0619946ccb52809807067a7020d57e647aa6"
url: "https://pub.dev"
source: hosted
version: "0.12.17"
version: "0.12.18"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
sha256: "9c337007e82b1889149c82ed242ed1cb24a66044e30979c44912381e9be4c48b"
url: "https://pub.dev"
source: hosted
version: "0.11.1"
version: "0.13.0"
meta:
dependency: transitive
description:
@@ -497,10 +497,10 @@ packages:
dependency: transitive
description:
name: test_api
sha256: ab2726c1a94d3176a45960b6234466ec367179b87dd74f1611adb1f3b5fb9d55
sha256: "93167629bfc610f71560ab9312acdda4959de4df6fac7492c89ff0d3886f6636"
url: "https://pub.dev"
source: hosted
version: "0.7.7"
version: "0.7.9"
timezone:
dependency: transitive
description:
+1 -1
@@ -1,7 +1,7 @@
name: potato_mesh_reader
description: Meshtastic Reader — read-only view for PotatoMesh messages.
publish_to: "none"
version: 0.5.8
version: 0.6.1
environment:
sdk: ">=3.4.0 <4.0.0"
+13 -1
@@ -1,5 +1,18 @@
#!/usr/bin/env bash
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -euo pipefail
export GIT_TAG="$(git describe --tags --abbrev=0)"
@@ -27,4 +40,3 @@ fi
export APK_DIR="build/app/outputs/flutter-apk"
mv -v "${APK_DIR}/app-release.apk" "${APK_DIR}/potatomesh-reader-android-${TAG_NAME}.apk"
(cd "${APK_DIR}" && sha256sum "potatomesh-reader-android-${TAG_NAME}.apk" > "potatomesh-reader-android-${TAG_NAME}.apk.sha256sum")
+2
@@ -206,8 +206,10 @@ void main() {
expect(calls[0].host, 'mesh.example.org');
expect(calls[0].path, '/api/messages');
expect(calls[0].queryParameters['protocol'], 'meshtastic');
expect(calls[1].scheme, 'https');
expect(calls[1].path, '/api/messages');
expect(calls[1].queryParameters['protocol'], 'meshtastic');
});
});
+1
@@ -145,6 +145,7 @@ void main() {
if (request.url.path == '/api/messages') {
sinces.add(request.url.queryParameters['since'] ?? '');
expect(request.url.queryParameters['limit'], '1000');
expect(request.url.queryParameters['protocol'], 'meshtastic');
if (sinces.length == 1) {
return http.Response(
jsonEncode([
+13
@@ -1,3 +1,16 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// This is a basic Flutter widget test.
//
// To perform an interaction with a widget in your test, use the WidgetTester
+10 -10
@@ -77,6 +77,7 @@ FREQUENCY=$(grep "^FREQUENCY=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' ||
FEDERATION=$(grep "^FEDERATION=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "1")
PRIVATE=$(grep "^PRIVATE=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "0")
HIDDEN_CHANNELS=$(grep "^HIDDEN_CHANNELS=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
ALLOWED_CHANNELS=$(grep "^ALLOWED_CHANNELS=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
MAP_CENTER=$(grep "^MAP_CENTER=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "38.761944,-27.090833")
MAP_ZOOM=$(grep "^MAP_ZOOM=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
MAX_DISTANCE=$(grep "^MAX_DISTANCE=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "42")
@@ -127,6 +128,9 @@ echo "-------------------"
echo "Private mode hides public mesh messages from unauthenticated visitors."
echo "Set to 1 to hide public feeds or 0 to keep them visible."
read_with_default "Enable private mode (1=yes, 0=no)" "$PRIVATE" PRIVATE
echo "Provide a comma-separated whitelist of channel names to ingest (optional)."
echo "When set, only listed channels are ingested unless explicitly hidden below."
read_with_default "Allowed channels" "$ALLOWED_CHANNELS" ALLOWED_CHANNELS
echo "Provide a comma-separated list of channel names to hide from the web UI (optional)."
read_with_default "Hidden channels" "$HIDDEN_CHANNELS" HIDDEN_CHANNELS
@@ -199,6 +203,11 @@ update_env "POTATOMESH_IMAGE_TAG" "$POTATOMESH_IMAGE_TAG"
update_env "FEDERATION" "$FEDERATION"
update_env "PRIVATE" "$PRIVATE"
update_env "CONNECTION" "$CONNECTION"
if [ -n "$ALLOWED_CHANNELS" ]; then
update_env "ALLOWED_CHANNELS" "\"$ALLOWED_CHANNELS\""
else
sed -i.bak '/^ALLOWED_CHANNELS=.*/d' .env
fi
if [ -n "$HIDDEN_CHANNELS" ]; then
update_env "HIDDEN_CHANNELS" "\"$HIDDEN_CHANNELS\""
else
@@ -210,16 +219,6 @@ else
sed -i.bak '/^INSTANCE_DOMAIN=.*/d' .env
fi
# Migrate legacy connection settings and ensure defaults exist
if grep -q "^MESH_SERIAL=" .env; then
legacy_connection=$(grep "^MESH_SERIAL=" .env | head -n1 | cut -d'=' -f2-)
if [ -n "$legacy_connection" ] && ! grep -q "^CONNECTION=" .env; then
echo "♻️ Migrating legacy MESH_SERIAL value to CONNECTION"
update_env "CONNECTION" "$legacy_connection"
fi
sed -i.bak '/^MESH_SERIAL=.*/d' .env
fi
if ! grep -q "^CONNECTION=" .env; then
echo "CONNECTION=/dev/ttyACM0" >> .env
fi
@@ -252,6 +251,7 @@ echo " API Token: ${API_TOKEN:0:8}..."
echo " Docker Image Arch: $POTATOMESH_IMAGE_ARCH"
echo " Docker Image Tag: $POTATOMESH_IMAGE_TAG"
echo " Private Mode: ${PRIVATE}"
echo " Allowed Channels: ${ALLOWED_CHANNELS:-'All'}"
echo " Hidden Channels: ${HIDDEN_CHANNELS:-'None'}"
echo " Instance Domain: ${INSTANCE_DOMAIN:-'Auto-detected'}"
if [ "${FEDERATION:-1}" = "0" ]; then
+6
@@ -50,6 +50,9 @@ USER potatomesh
ENV CONNECTION=/dev/ttyACM0 \
CHANNEL_INDEX=0 \
DEBUG=0 \
PROTOCOL=meshtastic \
ALLOWED_CHANNELS="" \
HIDDEN_CHANNELS="" \
INSTANCE_DOMAIN="" \
API_TOKEN=""
@@ -75,6 +78,9 @@ USER ContainerUser
ENV CONNECTION=/dev/ttyACM0 \
CHANNEL_INDEX=0 \
DEBUG=0 \
PROTOCOL=meshtastic \
ALLOWED_CHANNELS="" \
HIDDEN_CHANNELS="" \
INSTANCE_DOMAIN="" \
API_TOKEN=""
+1 -1
@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
message information before forwarding it to the accompanying web application.
"""
VERSION = "0.5.8"
VERSION = "0.6.1"
"""Semantic version identifier shared with the dashboard and front-end."""
__version__ = VERSION
+2 -1
@@ -20,7 +20,8 @@ CREATE TABLE IF NOT EXISTS ingestors (
last_seen_time INTEGER NOT NULL,
version TEXT,
lora_freq INTEGER,
modem_preset TEXT
modem_preset TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic'
);
CREATE INDEX IF NOT EXISTS idx_ingestors_last_seen ON ingestors(last_seen_time);
+2
@@ -27,6 +27,8 @@ CREATE TABLE IF NOT EXISTS instances (
last_update_time INTEGER,
is_private BOOLEAN NOT NULL DEFAULT 0,
nodes_count INTEGER,
meshcore_nodes_count INTEGER,
meshtastic_nodes_count INTEGER,
contact_link TEXT,
signature TEXT
);
+11 -4
View File
@@ -15,7 +15,14 @@
set -euo pipefail
python -m venv .venv
source .venv/bin/activate
pip install -U meshtastic black pytest
exec python mesh.py
# Recreate the venv only when its embedded Python is missing or points to the
# wrong prefix (e.g. a stale shebang from a sibling project's venv). Avoid
# --clear on every run: it wipes installed packages before each start, so any
# restart during a PyPI outage turns a transient network failure into hard
# ingestor downtime.
if ! .venv/bin/python -c "import sys; exit(0 if '.venv' in sys.prefix else 1)" 2>/dev/null; then
python -m venv --clear .venv
fi
.venv/bin/pip install -U pip
.venv/bin/pip install -r "$(dirname "$0")/requirements.txt"
exec .venv/bin/python mesh.py
+121
@@ -0,0 +1,121 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
## Mesh ingestor contracts (stable interfaces)
This repo's ingestion pipeline is split into:
- **Python collector** (`data/mesh_ingestor/*`) which normalizes packets/events and POSTs JSON to the web app.
- **Sinatra web app** (`web/`) which accepts those payloads on `POST /api/*` ingest routes and persists them into SQLite tables defined under `data/*.sql`.
This document records the **contracts that future protocols must preserve**. The intent is to enable adding new protocols (MeshCore, Reticulum, …) without changing the Ruby/DB/UI read-side.
### Canonical node identity
- **Canonical node id**: `nodes.node_id` is a `TEXT` primary key and is treated as canonical across the system.
- **Format**: `!%08x` (lowercase hex, 8 chars), for example `!abcdef01`.
- **Normalization**:
- Python currently normalizes via `data/mesh_ingestor/serialization.py:_canonical_node_id`.
- Ruby normalizes via `web/lib/potato_mesh/application/data_processing.rb:canonical_node_parts`.
- **Dual addressing**: Ruby routes and queries accept either a canonical `!xxxxxxxx` string or a numeric node id; they normalize to `node_id`.
Note: non-Meshtastic protocols will need a strategy to map their native node identifiers into this `!%08x` space. That mapping is intentionally not standardized in code yet.
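The `!%08x` normalization can be sketched as follows. This is illustrative only — the real logic lives in `data/mesh_ingestor/serialization.py:_canonical_node_id` and `web/lib/potato_mesh/application/data_processing.rb:canonical_node_parts`, and may handle more edge cases:

```python
def canonical_node_id(value):
    """Normalize a numeric or string node id to the canonical '!%08x' form.

    Sketch of the normalization described above: accepts a raw integer
    node number or a (possibly uppercase, possibly '!'-prefixed) hex
    string, and emits lowercase '!xxxxxxxx'.
    """
    if isinstance(value, int):
        return "!%08x" % (value & 0xFFFFFFFF)
    s = str(value).strip().lower()
    if s.startswith("!"):
        s = s[1:]
    return "!%08x" % int(s, 16)
```

Both `0xABCDEF01` and `"!ABCDEF01"` normalize to the same canonical key, which is what lets routes accept either form.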
### Ingest HTTP routes and payload shapes
Future providers should emit payloads that match these shapes (keys + types), which are validated by existing tests (notably `tests/test_mesh.py`).
#### `POST /api/nodes`
Payload is a mapping keyed by canonical node id, with an optional top-level `"ingestor"` key:
- `{ "!abcdef01": { ... node fields ... }, "ingestor": "!ingestornodeid" }`
When `"ingestor"` is present the protocol is inherited from the registered ingestor (see `POST /api/ingestors`); omitting it defaults to `"meshtastic"`.
Node entry fields are "Meshtastic-ish" (camelCase) and may include:
- `num` (int node number)
- `lastHeard` (int unix seconds)
- `snr` (float)
- `hopsAway` (int)
- `isFavorite` (bool)
- `user` (mapping; e.g. `shortName`, `longName`, `macaddr`, `hwModel`, `publicKey`, `isUnmessagable`)
- `role` (optional string) — omit when unknown; known values include Meshtastic role names (e.g. `CLIENT`, `ROUTER`) and MeshCore role names (`COMPANION`, `REPEATER`, `ROOM_SERVER`, `SENSOR`)
- `deviceMetrics` (mapping; e.g. `batteryLevel`, `voltage`, `channelUtilization`, `airUtilTx`, `uptimeSeconds`)
- `position` (mapping; `latitude`, `longitude`, `altitude`, `time`, `locationSource`, `precisionBits`, optional nested `raw`)
- Optional radio metadata: `lora_freq`, `modem_preset`
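A minimal body following the shape above might look like this. The field values are invented for illustration; only the key names and nesting come from the contract:

```python
import json

# Hypothetical POST /api/nodes body: mapping keyed by canonical node id,
# plus the optional top-level "ingestor" key.
payload = {
    "!abcdef01": {
        "num": 0xABCDEF01,
        "lastHeard": 1765000000,
        "snr": 6.25,
        "user": {"shortName": "AB01", "longName": "Demo Node"},
        "position": {"latitude": 38.76, "longitude": -27.09},
    },
    "ingestor": "!00000042",
}
body = json.dumps(payload)
```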
#### `POST /api/messages`
Single message payload:
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Identity: `from_id` (string/int), `to_id` (string/int), `channel` (int), `portnum` (string|nil)
- Payload: `text` (string|nil), `encrypted` (string|nil), `reply_id` (int|nil), `emoji` (string|nil)
- RF: `snr` (float|nil), `rssi` (int|nil), `hop_limit` (int|nil)
- Meta: `channel_name` (string; only when not encrypted and known), `ingestor` (canonical host id), `lora_freq`, `modem_preset`
#### `POST /api/positions`
Single position payload:
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Node: `node_id` (canonical string), `node_num` (int|nil), `num` (int|nil), `from_id` (canonical string), `to_id` (string|nil)
- Position: `latitude`, `longitude`, `altitude` (floats|nil)
- Position time: `position_time` (int|nil)
- Quality: `location_source` (string|nil), `precision_bits` (int|nil), `sats_in_view` (int|nil), `pdop` (float|nil)
- Motion: `ground_speed` (float|nil), `ground_track` (float|nil)
- RF/meta: `snr`, `rssi`, `hop_limit`, `bitfield`, `payload_b64` (string|nil), `raw` (mapping|nil), `ingestor`, `lora_freq`, `modem_preset`
#### `POST /api/telemetry`
Single telemetry payload:
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Node: `node_id` (canonical string|nil), `node_num` (int|nil), `from_id`, `to_id`
- Time: `telemetry_time` (int|nil)
- Packet: `channel` (int), `portnum` (string|nil), `bitfield` (int|nil), `hop_limit` (int|nil)
- RF: `snr` (float|nil), `rssi` (int|nil)
- Raw: `payload_b64` (string; may be empty string when unknown)
- Metrics: many optional snake_case keys (`battery_level`, `voltage`, `temperature`, etc.)
- Subtype: `telemetry_type` (string|nil) — optional discriminator identifying which Meshtastic protobuf oneof was set; one of `"device"`, `"environment"`, `"power"`, or `"air_quality"`. Ingestors that detect the subtype SHOULD include this field; omit rather than send `null` when unknown. The web app infers the type from metric-field presence when absent, so old ingestors remain compatible.
- Meta: `ingestor`, `lora_freq`, `modem_preset`
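The presence-based fallback for `telemetry_type` can be sketched like this. The key-to-type mapping here is an assumption for illustration — the web app's actual inference table may differ:

```python
# Assumed hints: which metric keys suggest which protobuf oneof.
_TYPE_HINTS = {
    "device": ("battery_level", "channel_utilization", "uptime_seconds"),
    "environment": ("temperature", "relative_humidity", "barometric_pressure"),
    "power": ("ch1_voltage", "ch1_current"),
    "air_quality": ("pm10_standard", "pm25_standard"),
}

def infer_telemetry_type(metrics):
    """Guess the telemetry subtype from which metric keys are present.

    Sketch of the fallback described above, used only when the ingestor
    omits the `telemetry_type` discriminator.
    """
    for ttype, keys in _TYPE_HINTS.items():
        if any(k in metrics for k in keys):
            return ttype
    return None  # unknown: leave the subtype unset
```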
#### `POST /api/neighbors`
Neighbors snapshot payload:
- Node: `node_id` (canonical string), `node_num` (int|nil)
- `neighbors`: list of entries with `neighbor_id` (canonical string), `neighbor_num` (int|nil), `snr` (float|nil), `rx_time` (int), `rx_iso` (string)
- Snapshot time: `rx_time`, `rx_iso`
- Optional: `node_broadcast_interval_secs` (int|nil), `last_sent_by_id` (canonical string|nil)
- Meta: `ingestor`, `lora_freq`, `modem_preset`
#### `POST /api/traces`
Single trace payload:
- Identity: `id` (int|nil), `request_id` (int|nil)
- Endpoints: `src` (int|nil), `dest` (int|nil)
- Path: `hops` (list[int])
- Time: `rx_time` (int), `rx_iso` (string)
- Metrics: `rssi` (int|nil), `snr` (float|nil), `elapsed_ms` (int|nil)
- Meta: `ingestor`, `lora_freq`, `modem_preset`
#### `POST /api/ingestors`
Heartbeat payload:
- `node_id` (canonical string)
- `start_time` (int), `last_seen_time` (int)
- `version` (string)
- Optional: `lora_freq`, `modem_preset`
- Optional: `protocol` (string; e.g. `"meshtastic"`, `"meshcore"`) — declares the mesh backend for this ingestor; defaults to `"meshtastic"` when absent
**Protocol propagation**: all event records (`messages`, `positions`, `telemetry`, `traces`, `neighbors`) that reference this ingestor via their `ingestor` field will inherit its `protocol` value at write time.
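The heartbeat and the propagation rule can be sketched together. The snippet below is a simplified model of the described write-time inheritance, not the server implementation:

```python
# Simplified model: event records inherit the protocol declared by their
# ingestor's heartbeat; absent protocols default to "meshtastic".
heartbeats: dict = {}  # node_id -> heartbeat payload

def register_heartbeat(hb: dict) -> None:
    hb.setdefault("protocol", "meshtastic")  # default when absent
    heartbeats[hb["node_id"]] = hb

def write_event(record: dict) -> dict:
    hb = heartbeats.get(record.get("ingestor"), {})
    record["protocol"] = hb.get("protocol", "meshtastic")
    return record

register_heartbeat({
    "node_id": "!a1b2c3d4",
    "start_time": 1760000000,
    "last_seen_time": 1760000300,
    "version": "1.2.3",
    "protocol": "meshcore",
})
event = write_event({"ingestor": "!a1b2c3d4", "id": 1})
```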
### GET endpoint filtering
All collection GET endpoints (`/api/nodes`, `/api/messages`, `/api/positions`, `/api/telemetry`, `/api/traces`, `/api/neighbors`, `/api/ingestors`) accept an optional `?protocol=<value>` query parameter. When present, only records whose `protocol` column matches the given value are returned. The `protocol` field is included in all GET responses.
+4 -4

@@ -25,6 +25,7 @@ from .. import VERSION as _PACKAGE_VERSION
from . import (
channels,
config,
connection,
daemon,
handlers,
ingestors,
@@ -46,7 +47,7 @@ def _reexport(module) -> None:
def _export_constants() -> None:
globals()["json"] = queue.json
globals()["urllib"] = queue.urllib
globals()["glob"] = interfaces.glob
globals()["glob"] = connection.glob
__all__.extend(["json", "urllib", "glob", "threading", "signal"])
@@ -69,7 +70,9 @@ _CONFIG_ATTRS = {
"CHANNEL_INDEX",
"DEBUG",
"INSTANCE",
"INSTANCES",
"API_TOKEN",
"ALLOWED_CHANNELS",
"HIDDEN_CHANNELS",
"LORA_FREQ",
"MODEM_PRESET",
@@ -80,9 +83,6 @@ _CONFIG_ATTRS = {
"_debug_log",
}
# Legacy export maintained for backwards compatibility.
_CONFIG_ATTRS.add("PORT")
_INTERFACE_ATTRS = {"BLEInterface", "SerialInterface", "TCPInterface"}
_QUEUE_ATTRS = set(queue.__all__)
+70
@@ -182,6 +182,9 @@ def capture_from_interface(iface: Any) -> None:
channels_obj = getattr(local_node, "channels", None) if local_node else None
channel_entries: list[tuple[int, str]] = []
# Use a set for O(1) duplicate-index checks; Meshtastic occasionally
# emits the same channel index twice when the channel list is partially
# initialised, so we keep only the first valid entry per index.
seen_indices: set[int] = set()
for candidate in _iter_channel_objects(channels_obj):
result = _channel_tuple(candidate)
@@ -228,6 +231,33 @@ def hidden_channel_names() -> tuple[str, ...]:
return tuple(getattr(config, "HIDDEN_CHANNELS", ()))
def allowed_channel_names() -> tuple[str, ...]:
"""Return the configured set of explicitly allowed channel names."""
return tuple(getattr(config, "ALLOWED_CHANNELS", ()))
def is_allowed_channel(channel_name_value: str | None) -> bool:
"""Return ``True`` when ``channel_name_value`` is permitted by policy."""
allowed = getattr(config, "ALLOWED_CHANNELS", ())
if not allowed:
return True
if channel_name_value is None:
return False
normalized = channel_name_value.strip()
if not normalized:
return False
normalized_casefold = normalized.casefold()
for allowed_name in allowed:
if normalized_casefold == allowed_name.casefold():
return True
return False
def is_hidden_channel(channel_name_value: str | None) -> bool:
"""Return ``True`` when ``channel_name_value`` is configured as hidden."""
@@ -243,6 +273,43 @@ def is_hidden_channel(channel_name_value: str | None) -> bool:
return False
def register_channel(channel_idx: int, channel_name_value: str) -> None:
"""Register a single channel index → name mapping.
Unlike :func:`capture_from_interface`, which scans a complete interface
object in one shot, this function registers entries one at a time. It is
intended for protocols (e.g. MeshCore) that expose channel metadata via
per-index requests rather than a bulk channel list.
Idempotent: silently skips if *channel_idx* is already cached or
*channel_name_value* is blank, matching the first-seen-wins semantics of
:func:`capture_from_interface`.
Parameters:
channel_idx: Zero-based channel index.
channel_name_value: Human-readable channel name reported by the device.
"""
global _CHANNEL_MAPPINGS, _CHANNEL_LOOKUP
if not isinstance(channel_name_value, str) or not channel_name_value.strip():
return
if channel_idx in _CHANNEL_LOOKUP:
return
name = channel_name_value.strip()
_CHANNEL_LOOKUP[channel_idx] = name
_CHANNEL_MAPPINGS = tuple(sorted(_CHANNEL_LOOKUP.items()))
config._debug_log(
"Registered channel",
context="channels.register",
severity="info",
channel_idx=channel_idx,
channel_name=name,
)
def _reset_channel_cache() -> None:
"""Clear cached channel data. Intended for use in tests only."""
@@ -255,7 +322,10 @@ __all__ = [
"capture_from_interface",
"channel_mappings",
"channel_name",
"register_channel",
"allowed_channel_names",
"hidden_channel_names",
"is_allowed_channel",
"is_hidden_channel",
"_reset_channel_cache",
]
+157 -38
@@ -16,10 +16,9 @@
from __future__ import annotations
import math
import os
import sys
from datetime import datetime, timezone
from types import ModuleType
from typing import Any
DEFAULT_SNAPSHOT_SECS = 60
@@ -49,12 +48,14 @@ DEFAULT_ENERGY_SLEEP_SECS = float(6 * 60 * 60)
DEFAULT_INGESTOR_HEARTBEAT_SECS = float(60 * 60)
"""Interval between ingestor heartbeat announcements."""
CONNECTION = os.environ.get("CONNECTION") or os.environ.get("MESH_SERIAL")
DEFAULT_SELF_NODE_REPORT_INTERVAL_SECS = float(60 * 60)
"""Interval between periodic forced self-node re-reports from the daemon."""
CONNECTION = os.environ.get("CONNECTION")
"""Optional connection target for the mesh interface.
When unset, platform-specific defaults will be inferred by the interface
implementations. The legacy :envvar:`MESH_SERIAL` environment variable is still
accepted for backwards compatibility.
implementations.
"""
SNAPSHOT_SECS = DEFAULT_SNAPSHOT_SECS
@@ -65,9 +66,55 @@ CHANNEL_INDEX = int(os.environ.get("CHANNEL_INDEX", str(DEFAULT_CHANNEL_INDEX)))
DEBUG = os.environ.get("DEBUG") == "1"
_KNOWN_PROTOCOLS = ("meshtastic", "meshcore")
def _parse_hidden_channels(raw_value: str | None) -> tuple[str, ...]:
"""Normalise a comma-separated list of hidden channel names.
_raw_protocol = os.environ.get("PROTOCOL", "meshtastic").strip().lower()
if _raw_protocol not in _KNOWN_PROTOCOLS:
raise ValueError(
f"Unknown PROTOCOL={_raw_protocol!r}. "
f"Valid options: {', '.join(_KNOWN_PROTOCOLS)}"
)
PROTOCOL = _raw_protocol
"""Active ingestion protocol, selected via the :envvar:`PROTOCOL` environment variable.
Accepted values are ``meshtastic`` (default) and ``meshcore``.
"""
def _parse_lora_freq_env(raw: str | None) -> float | int | None:
"""Parse the ``FREQUENCY`` environment variable into a numeric LoRa frequency.
Returns an :class:`int` for whole-number strings (e.g. ``"868"``), a
:class:`float` for decimal strings (e.g. ``"869.525"``), or ``None`` when
*raw* is empty, absent, non-numeric, or non-finite (e.g. ``"inf"``).
Non-numeric labels such as ``"EU_868"`` intentionally return ``None`` so
that :data:`LORA_FREQ` is left unset and :func:`~interfaces._ensure_radio_metadata`
can still populate it from the detected radio configuration.
Parameters:
raw: Raw value of the ``FREQUENCY`` environment variable.
Returns:
Numeric frequency value, or ``None``.
"""
if not raw:
return None
stripped = raw.strip()
if not stripped:
return None
try:
as_float = float(stripped)
except ValueError:
return None
if not math.isfinite(as_float):
return None
return int(as_float) if as_float == int(as_float) else as_float
def _parse_channel_names(raw_value: str | None) -> tuple[str, ...]:
"""Normalise a comma-separated list of channel names.
Parameters:
raw_value: Raw environment string containing channel names separated by
@@ -96,23 +143,32 @@ def _parse_hidden_channels(raw_value: str | None) -> tuple[str, ...]:
return tuple(normalized_entries)
def _parse_hidden_channels(raw_value: str | None) -> tuple[str, ...]:
"""Compatibility wrapper that parses hidden channel names."""
return _parse_channel_names(raw_value)
HIDDEN_CHANNELS = _parse_hidden_channels(os.environ.get("HIDDEN_CHANNELS"))
"""Channel names configured to be ignored by the ingestor."""
ALLOWED_CHANNELS = _parse_channel_names(os.environ.get("ALLOWED_CHANNELS"))
"""Explicitly permitted channel names; when set, other channels are ignored."""
def _resolve_instance_domain() -> str:
"""Resolve the configured instance domain from the environment.
The ingestor prefers the :envvar:`INSTANCE_DOMAIN` variable for clarity and
compatibility with the web application. For deployments that still
configure the legacy :envvar:`POTATOMESH_INSTANCE` variable, the resolver
falls back to that value when no primary domain is set.
Reads the :envvar:`INSTANCE_DOMAIN` variable. When the value does not
contain a scheme, ``https://`` is prepended automatically.
.. note::
Kept for backward compatibility with existing tests and callers.
New code should use :func:`_resolve_instance_domains` instead.
"""
instance_domain = os.environ.get("INSTANCE_DOMAIN", "")
legacy_instance = os.environ.get("POTATOMESH_INSTANCE", "")
configured_instance = (instance_domain or legacy_instance).rstrip("/")
configured_instance = os.environ.get("INSTANCE_DOMAIN", "").rstrip("/")
if configured_instance and "://" not in configured_instance:
return f"https://{configured_instance}"
@@ -120,13 +176,91 @@ def _resolve_instance_domain() -> str:
return configured_instance
INSTANCE = _resolve_instance_domain()
API_TOKEN = os.environ.get("API_TOKEN", "")
def _normalise_domain(raw: str) -> str:
"""Strip whitespace and trailing slashes, prepend ``https://`` when needed.
Parameters:
raw: Single domain string to normalise.
Returns:
A URL string with a scheme prefix.
"""
domain = raw.strip().rstrip("/")
if domain and "://" not in domain:
return f"https://{domain}"
return domain
def _resolve_instance_domains() -> tuple[tuple[str, str], ...]:
"""Parse :envvar:`INSTANCE_DOMAIN` and :envvar:`API_TOKEN` into paired tuples.
When ``INSTANCE_DOMAIN`` contains comma-separated values, each entry is
treated as an independent target. ``API_TOKEN`` is either broadcast to
every target (single value) or positionally paired (comma-separated with
a matching count).
Returns:
A tuple of ``(instance_url, api_token)`` pairs, deduplicated by URL.
Raises:
ValueError: When the number of comma-separated tokens exceeds the
number of domains.
"""
raw_domain = os.environ.get("INSTANCE_DOMAIN", "")
raw_token = os.environ.get("API_TOKEN", "")
domains: list[str] = []
seen: set[str] = set()
for part in raw_domain.split(","):
normalised = _normalise_domain(part)
if not normalised:
continue
key = normalised.casefold()
if key in seen:
continue
seen.add(key)
domains.append(normalised)
if not domains:
return ()
tokens = [t.strip() for t in raw_token.split(",")]
# A single token (including empty string) is broadcast to all domains.
if len(tokens) == 1:
token = tokens[0]
return tuple((d, token) for d in domains)
if len(tokens) != len(domains):
raise ValueError(
f"API_TOKEN has {len(tokens)} comma-separated values but "
f"INSTANCE_DOMAIN has {len(domains)}; counts must match or "
f"API_TOKEN must be a single value"
)
return tuple(zip(domains, tokens))
INSTANCES: tuple[tuple[str, str], ...] = _resolve_instance_domains()
"""Paired ``(instance_url, api_token)`` tuples derived from the environment."""
INSTANCE = INSTANCES[0][0] if INSTANCES else _resolve_instance_domain()
"""First configured instance URL, kept for backward compatibility."""
API_TOKEN = INSTANCES[0][1] if INSTANCES else os.environ.get("API_TOKEN", "")
"""API token for the first configured instance, kept for backward compatibility."""
ENERGY_SAVING = os.environ.get("ENERGY_SAVING") == "1"
"""When ``True``, enables the ingestor's energy saving mode."""
LORA_FREQ: float | int | str | None = None
"""Frequency of the local node's configured LoRa region in MHz or raw region label."""
LORA_FREQ: float | int | str | None = _parse_lora_freq_env(os.environ.get("FREQUENCY"))
"""Frequency of the local node's configured LoRa region in MHz or raw region label.
Pre-seeded from the ``FREQUENCY`` environment variable when set to a finite
numeric value, allowing operators to override auto-detected values.
Non-numeric or non-finite values are ignored so that auto-detection from the
radio interface can still fill this in.
"""
MODEM_PRESET: str | None = None
"""CamelCase modem preset name reported by the local node."""
@@ -138,9 +272,7 @@ _INACTIVITY_RECONNECT_SECS = DEFAULT_INACTIVITY_RECONNECT_SECS
_ENERGY_ONLINE_DURATION_SECS = DEFAULT_ENERGY_ONLINE_DURATION_SECS
_ENERGY_SLEEP_SECS = DEFAULT_ENERGY_SLEEP_SECS
_INGESTOR_HEARTBEAT_SECS = DEFAULT_INGESTOR_HEARTBEAT_SECS
# Backwards compatibility shim for legacy imports.
PORT = CONNECTION
_SELF_NODE_REPORT_INTERVAL_SECS = DEFAULT_SELF_NODE_REPORT_INTERVAL_SECS
def _debug_log(
@@ -183,7 +315,9 @@ __all__ = [
"CHANNEL_INDEX",
"DEBUG",
"HIDDEN_CHANNELS",
"ALLOWED_CHANNELS",
"INSTANCE",
"INSTANCES",
"API_TOKEN",
"ENERGY_SAVING",
"LORA_FREQ",
@@ -195,21 +329,6 @@ __all__ = [
"_ENERGY_ONLINE_DURATION_SECS",
"_ENERGY_SLEEP_SECS",
"_INGESTOR_HEARTBEAT_SECS",
"_SELF_NODE_REPORT_INTERVAL_SECS",
"_debug_log",
]
class _ConfigModule(ModuleType):
"""Module proxy that keeps connection aliases synchronised."""
def __setattr__(self, name: str, value: Any) -> None: # type: ignore[override]
"""Propagate CONNECTION/PORT assignments to both attributes."""
if name in {"CONNECTION", "PORT"}:
super().__setattr__("CONNECTION", value)
super().__setattr__("PORT", value)
return
super().__setattr__(name, value)
sys.modules[__name__].__class__ = _ConfigModule
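The broadcast-or-positional token pairing above can be exercised with a standalone copy of the rule, mirroring `_resolve_instance_domains` minus the environment lookup and deduplication:

```python
# Standalone copy of the API_TOKEN pairing rule: a single token (even the
# empty string) is broadcast to all domains; otherwise counts must match.
def pair_tokens(domains: list, raw_token: str) -> list:
    tokens = [t.strip() for t in raw_token.split(",")]
    if len(tokens) == 1:
        return [(d, tokens[0]) for d in domains]
    if len(tokens) != len(domains):
        raise ValueError("token/domain counts must match")
    return list(zip(domains, tokens))

pairs = pair_tokens(["https://a.example", "https://b.example"], "tok-a,tok-b")
```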
+163
@@ -0,0 +1,163 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Provider-agnostic connection target helpers.
This module contains utilities shared by all ingestor providers for
parsing and auto-discovering connection targets. It is intentionally
free of any provider-specific imports so that Meshtastic, MeshCore,
and future providers can all rely on the same logic.
"""
from __future__ import annotations
import glob
import re
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
DEFAULT_TCP_PORT: int = 4403
"""Default TCP port used when no port is explicitly supplied."""
DEFAULT_SERIAL_PATTERNS: tuple[str, ...] = (
"/dev/ttyACM*",
"/dev/ttyUSB*",
"/dev/tty.usbmodem*",
"/dev/tty.usbserial*",
"/dev/cu.usbmodem*",
"/dev/cu.usbserial*",
)
"""Glob patterns for common serial device paths on Linux and macOS."""
# Support both MAC addresses (Linux/Windows) and UUIDs (macOS).
BLE_ADDRESS_RE = re.compile(
r"^(?:"
r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}|" # MAC address format
r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}" # UUID format
r")$"
)
"""Compiled regex matching a BLE MAC address or UUID."""
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def parse_ble_target(value: str) -> str | None:
"""Return a normalised BLE address (MAC or UUID) when ``value`` matches the format.
Parameters:
value: User-provided target string.
Returns:
The normalised MAC address (upper-cased) or UUID, or ``None`` when
the value does not match a recognised BLE address format.
"""
if not value:
return None
value = value.strip()
if not value:
return None
if BLE_ADDRESS_RE.fullmatch(value):
return value.upper()
return None
def parse_tcp_target(value: str) -> tuple[str, int] | None:
"""Parse a TCP ``host:port`` target, accepting both IPs and hostnames.
Unlike the Meshtastic-specific helper in :mod:`interfaces`, hostnames are
accepted here because MeshCore companions may be reached over a local
network by name (e.g. ``meshcore-node.local:4403``).
    BLE MAC addresses (five colons) and bare serial port paths (no colon) are
    correctly rejected because they cannot produce a valid ``host:port`` pair.
Parameters:
value: User-provided target string.
Returns:
``(host, port)`` on success, or ``None`` when *value* does not look
like a TCP target.
"""
if not value:
return None
value = value.strip()
if not value:
return None
# Strip URL scheme prefix (e.g. ``tcp://host:4403`` or ``http://host:4403``).
if "://" in value:
value = value.split("://", 1)[1]
# Handle bracketed IPv6: ``[::1]:4403``.
if value.startswith("["):
bracket_end = value.find("]")
if bracket_end == -1:
return None
host = value[1:bracket_end]
rest = value[bracket_end + 1 :]
if rest.startswith(":"):
try:
port = int(rest[1:])
except ValueError:
return None
if not (1 <= port <= 65535):
return None
else:
port = DEFAULT_TCP_PORT
if not host:
return None
return host, port
# For non-bracketed addresses require exactly one colon so that BLE MACs
# (five colons) and bare serial paths (no colon) are rejected.
colon_count = value.count(":")
if colon_count != 1:
return None
host, _, port_str = value.partition(":")
if not host:
return None
try:
port = int(port_str)
except ValueError:
return None
if not (1 <= port <= 65535):
return None
return host, port
def default_serial_targets() -> list[str]:
"""Return candidate serial device paths for auto-discovery.
Globs for common USB serial device paths on Linux and macOS. Always
includes ``/dev/ttyACM0`` as a final fallback so callers have at least
one candidate even on systems without any attached hardware.
Returns:
Ordered list of candidate device paths, deduplicated.
"""
candidates: list[str] = []
seen: set[str] = set()
for pattern in DEFAULT_SERIAL_PATTERNS:
for path in sorted(glob.glob(pattern)):
if path not in seen:
candidates.append(path)
seen.add(path)
if "/dev/ttyACM0" not in seen:
candidates.append("/dev/ttyACM0")
return candidates
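The rejection rules above (BLE MACs carry five colons, bare serial paths none) can be sanity-checked against a condensed copy of the non-bracketed branch of `parse_tcp_target`:

```python
# Condensed copy of parse_tcp_target's non-bracketed branch, for illustration.
DEFAULT_TCP_PORT = 4403

def parse_tcp_simple(value: str):
    value = value.strip()
    # Strip URL scheme prefixes such as tcp:// or http://.
    if "://" in value:
        value = value.split("://", 1)[1]
    # Exactly one colon rejects BLE MACs (5 colons) and serial paths (0).
    if value.count(":") != 1:
        return None
    host, _, port_str = value.partition(":")
    if not host:
        return None
    try:
        port = int(port_str)
    except ValueError:
        return None
    return (host, port) if 1 <= port <= 65535 else None
```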
+467 -302
@@ -16,6 +16,7 @@
from __future__ import annotations
import dataclasses
import inspect
import signal
import threading
@@ -23,7 +24,9 @@ import time
from pubsub import pub
from . import config, handlers, ingestors, interfaces
from . import config, handlers, ingestors, interfaces, queue
from .mesh_protocol import MeshProtocol
from .utils import _retry_dict_snapshot
_RECEIVE_TOPICS = (
"meshtastic.receive",
@@ -80,10 +83,15 @@ def _subscribe_receive_topics() -> list[str]:
def _node_items_snapshot(
nodes_obj, retries: int = 3
nodes_obj: object, retries: int = 3
) -> list[tuple[str, object]] | None:
"""Snapshot ``nodes_obj`` to avoid iteration errors during updates.
Uses :func:`~data.mesh_ingestor.utils._retry_dict_snapshot` to handle
both dict-like objects (``items()`` callable) and sequence-like objects
(``__iter__`` + ``__getitem__``) that Meshtastic may return depending on
firmware version.
Parameters:
nodes_obj: Meshtastic nodes mapping or iterable.
retries: Number of attempts when encountering "dictionary changed"
@@ -99,25 +107,15 @@ def _node_items_snapshot(
items_callable = getattr(nodes_obj, "items", None)
if callable(items_callable):
for _ in range(max(1, retries)):
try:
return list(items_callable())
except RuntimeError as err:
if "dictionary changed size during iteration" not in str(err):
raise
time.sleep(0)
return None
return _retry_dict_snapshot(lambda: list(items_callable()), retries)
if hasattr(nodes_obj, "__iter__") and hasattr(nodes_obj, "__getitem__"):
for _ in range(max(1, retries)):
try:
keys = list(nodes_obj)
return [(key, nodes_obj[key]) for key in keys]
except RuntimeError as err:
if "dictionary changed size during iteration" not in str(err):
raise
time.sleep(0)
return None
def _snapshot_via_keys() -> list[tuple[str, object]]:
keys = list(nodes_obj)
return [(key, nodes_obj[key]) for key in keys]
return _retry_dict_snapshot(_snapshot_via_keys, retries)
return []
@@ -197,11 +195,6 @@ def _process_ingestor_heartbeat(iface, *, ingestor_announcement_sent: bool) -> b
if heartbeat_sent and not ingestor_announcement_sent:
return True
return ingestor_announcement_sent
iface_cls = getattr(iface_obj, "__class__", None)
if iface_cls is None:
return False
module_name = getattr(iface_cls, "__module__", "") or ""
return "ble_interface" in module_name
def _connected_state(candidate) -> bool | None:
@@ -243,10 +236,403 @@ def _connected_state(candidate) -> bool | None:
return None
def main(existing_interface=None) -> None:
# ---------------------------------------------------------------------------
# Loop state container
# ---------------------------------------------------------------------------
@dataclasses.dataclass
class _DaemonState:
"""All mutable state for the :func:`main` daemon loop."""
provider: MeshProtocol
stop: threading.Event
configured_port: str | None
inactivity_reconnect_secs: float
energy_saving_enabled: bool
energy_online_secs: float
energy_sleep_secs: float
retry_delay: float
last_seen_packet_monotonic: float | None
active_candidate: str | None
    iface: object | None = None
resolved_target: str | None = None
initial_snapshot_sent: bool = False
energy_session_deadline: float | None = None
iface_connected_at: float | None = None
last_inactivity_reconnect: float | None = None
ingestor_announcement_sent: bool = False
announced_target: bool = False
last_self_node_report: float | None = None
# ---------------------------------------------------------------------------
# Per-iteration helpers (each returns True when the caller should `continue`)
# ---------------------------------------------------------------------------
def _advance_retry_delay(current: float) -> float:
"""Return the next exponential-backoff retry delay."""
if config._RECONNECT_MAX_DELAY_SECS <= 0:
return current
# `current == 0` on the very first call (bootstrap); seed from config.
next_delay = current * 2 if current else config._RECONNECT_INITIAL_DELAY_SECS
return min(next_delay, config._RECONNECT_MAX_DELAY_SECS)
def _energy_sleep(state: _DaemonState, reason: str) -> None:
"""Sleep for the configured energy-saving interval."""
if not state.energy_saving_enabled or state.energy_sleep_secs <= 0:
return
if config.DEBUG:
config._debug_log(
f"energy saving: {reason}; sleeping for {state.energy_sleep_secs:g}s"
)
state.stop.wait(state.energy_sleep_secs)
def _try_connect(state: _DaemonState) -> bool:
"""Attempt to establish the mesh interface.
Returns:
``True`` when connected and the loop should proceed; ``False`` when
the connection failed and the caller should ``continue``.
"""
try:
state.iface, state.resolved_target, state.active_candidate = (
state.provider.connect(active_candidate=state.active_candidate)
)
handlers.register_host_node_id(state.provider.extract_host_node_id(state.iface))
ingestors.set_ingestor_node_id(handlers.host_node_id())
state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
state.initial_snapshot_sent = False
state.last_self_node_report = None
if not state.announced_target and state.resolved_target:
config._debug_log(
"Using mesh interface",
context="daemon.interface",
severity="info",
target=state.resolved_target,
)
state.announced_target = True
# Set an absolute monotonic deadline for this energy-saving session.
# When the deadline passes, _check_energy_saving() will close the
# interface and sleep until the next wake interval.
if state.energy_saving_enabled and state.energy_online_secs > 0:
state.energy_session_deadline = time.monotonic() + state.energy_online_secs
else:
state.energy_session_deadline = None
state.iface_connected_at = time.monotonic()
# Seed the inactivity tracking from the connection time so a
# reconnect is given a full inactivity window even when the
# handler still reports the previous packet timestamp.
state.last_seen_packet_monotonic = state.iface_connected_at
state.last_inactivity_reconnect = None
return True
except interfaces.NoAvailableMeshInterface as exc:
config._debug_log(
"No mesh interface available",
context="daemon.interface",
severity="error",
error_message=str(exc),
)
_close_interface(state.iface)
raise SystemExit(1) from exc
except Exception as exc:
config._debug_log(
"Failed to create mesh interface",
context="daemon.interface",
severity="warn",
candidate=state.active_candidate or "auto",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if state.configured_port is None:
state.active_candidate = None
state.announced_target = False
state.stop.wait(state.retry_delay)
state.retry_delay = _advance_retry_delay(state.retry_delay)
return False
def _check_energy_saving(state: _DaemonState) -> bool:
"""Disconnect and sleep when energy-saving conditions are met.
Returns:
``True`` when the interface was closed and the caller should
``continue``; ``False`` otherwise.
"""
if not state.energy_saving_enabled or state.iface is None:
return False
if (
state.energy_session_deadline is not None
and time.monotonic() >= state.energy_session_deadline
):
reason = "disconnected after session"
log_msg = "Energy saving disconnect"
elif (
_is_ble_interface(state.iface)
and getattr(state.iface, "client", object()) is None
):
reason = "BLE client disconnected"
log_msg = "Energy saving BLE disconnect"
else:
return False
config._debug_log(log_msg, context="daemon.energy", severity="info")
_close_interface(state.iface)
state.iface = None
state.announced_target = False
state.initial_snapshot_sent = False
state.last_self_node_report = None
state.energy_session_deadline = None
_energy_sleep(state, reason)
return True
def _try_send_snapshot(state: _DaemonState) -> bool:
"""Send the initial node snapshot via the provider.
Returns:
``True`` when the snapshot succeeded (or no nodes exist yet); ``False``
when a hard error occurred and the caller should ``continue``.
"""
try:
node_items = state.provider.node_snapshot_items(state.iface)
processed_any = False
for node_id, node in node_items:
processed_any = True
try:
handlers.upsert_node(node_id, node)
except Exception as exc:
config._debug_log(
"Failed to update node snapshot",
context="daemon.snapshot",
severity="warn",
node_id=node_id,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if config.DEBUG:
config._debug_log(
"Snapshot node payload",
context="daemon.snapshot",
node=node,
)
if processed_any:
state.initial_snapshot_sent = True
return True
except Exception as exc:
config._debug_log(
"Snapshot refresh failed",
context="daemon.snapshot",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
_close_interface(state.iface)
state.iface = None
state.stop.wait(state.retry_delay)
state.retry_delay = _advance_retry_delay(state.retry_delay)
return False
def _check_inactivity_reconnect(state: _DaemonState) -> bool:
"""Reconnect when the interface has been silent for too long.
Returns:
``True`` when a reconnect was triggered and the caller should
``continue``; ``False`` otherwise.
"""
if state.iface is None or state.inactivity_reconnect_secs <= 0:
return False
now = time.monotonic()
iface_activity = handlers.last_packet_monotonic()
if (
iface_activity is not None
and state.iface_connected_at is not None
and iface_activity < state.iface_connected_at
):
iface_activity = state.iface_connected_at
if iface_activity is not None and (
state.last_seen_packet_monotonic is None
or iface_activity > state.last_seen_packet_monotonic
):
state.last_seen_packet_monotonic = iface_activity
state.last_inactivity_reconnect = None
latest_activity = iface_activity
if latest_activity is None and state.iface_connected_at is not None:
latest_activity = state.iface_connected_at
if latest_activity is None:
latest_activity = now
inactivity_elapsed = now - latest_activity
believed_disconnected = (
_connected_state(getattr(state.iface, "isConnected", None)) is False
)
if (
not believed_disconnected
and inactivity_elapsed < state.inactivity_reconnect_secs
):
return False
if state.last_inactivity_reconnect is not None:
# For explicit disconnects use the shorter max-reconnect-delay window
# so the daemon reconnects promptly without thrashing. For inactivity-
# only triggers retain the full inactivity window as the throttle.
throttle_secs = (
config._RECONNECT_MAX_DELAY_SECS
if believed_disconnected
else state.inactivity_reconnect_secs
)
if now - state.last_inactivity_reconnect < throttle_secs:
return False
reason = (
"disconnected"
if believed_disconnected
else f"no data for {inactivity_elapsed:.0f}s"
)
# Uses the module-level global STATE — acceptable because there is only
# one queue in production, and in tests this is purely informational.
queue_depth = len(queue.STATE.queue)
config._debug_log(
"Mesh interface inactivity detected",
context="daemon.interface",
severity="warn",
reason=reason,
queue_depth=queue_depth,
)
state.last_inactivity_reconnect = now
_close_interface(state.iface)
state.iface = None
state.announced_target = False
state.initial_snapshot_sent = False
state.last_self_node_report = None
state.energy_session_deadline = None
state.iface_connected_at = None
return True
# ---------------------------------------------------------------------------
# Periodic self-node report helper
# ---------------------------------------------------------------------------
def _try_send_self_node(state: _DaemonState) -> None:
"""Re-upsert the host self-node when the provider supports it.
Called once immediately after the initial snapshot and then at most once
per :data:`~data.mesh_ingestor.config._SELF_NODE_REPORT_INTERVAL_SECS`.
This ensures the self-node's protocol and radio metadata are refreshed
even when the ingestor heartbeat races ahead of the first SELF_INFO event
(meshcore) or when the protocol never sends periodic NODEINFO for itself.
Parameters:
state: Current daemon loop state.
Returns:
``None``. Errors are logged and suppressed so a single failure does
not break the main loop.
"""
self_node_fn = getattr(state.provider, "self_node_item", None)
if not callable(self_node_fn):
return
try:
item = self_node_fn(state.iface)
if item is None:
return
node_id, node = item
handlers.upsert_node(node_id, node)
state.last_self_node_report = time.monotonic()
config._debug_log(
"Sent periodic self-node report",
context="daemon.self_node",
severity="info",
node_id=node_id,
)
except Exception as exc:
config._debug_log(
"Self-node re-report failed",
context="daemon.self_node",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
# ---------------------------------------------------------------------------
# Loop iteration helper
# ---------------------------------------------------------------------------
def _loop_iteration(state: _DaemonState) -> bool:
"""Execute one pass of the daemon main loop.
Encapsulates the per-iteration ``continue`` decisions so that
:func:`main` stays within the allowed cognitive-complexity budget.
Returns:
``True`` when the loop should start the next iteration immediately
(equivalent to a ``continue``); ``False`` when the full pass
completed and the caller should sleep before iterating again.
"""
if state.iface is None and not _try_connect(state):
return True
if _check_energy_saving(state):
return True
if not state.initial_snapshot_sent and not _try_send_snapshot(state):
return True
if _check_inactivity_reconnect(state):
return True
state.ingestor_announcement_sent = _process_ingestor_heartbeat(
state.iface, ingestor_announcement_sent=state.ingestor_announcement_sent
)
# Periodically re-upsert the host self-node so that its protocol and radio
# metadata are corrected after the ingestor heartbeat is registered, and
# kept fresh for protocols (e.g. meshcore) that only emit SELF_INFO once.
_now = time.monotonic()
if state.initial_snapshot_sent and (
state.last_self_node_report is None
or _now - state.last_self_node_report >= config._SELF_NODE_REPORT_INTERVAL_SECS
):
_try_send_self_node(state)
state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
return False
# ---------------------------------------------------------------------------
# Entry point
# ---------------------------------------------------------------------------
def main(*, provider: MeshProtocol | None = None) -> None:
"""Run the mesh ingestion daemon until interrupted."""
subscribed = _subscribe_receive_topics()
if provider is None:
if config.PROTOCOL == "meshcore":
from .protocols.meshcore import MeshcoreProvider
provider = MeshcoreProvider()
else:
from .protocols.meshtastic import MeshtasticProvider
provider = MeshtasticProvider()
subscribed = provider.subscribe()
if subscribed:
config._debug_log(
"Subscribed to receive topics",
@@ -255,313 +641,92 @@ def main(existing_interface=None) -> None:
topics=subscribed,
)
    if not config.INSTANCES and not config.INSTANCE:
        config._debug_log(
            "No INSTANCE_DOMAIN configured — cannot forward data; exiting",
            context="daemon.main",
            severity="error",
            always=True,
        )
        return
    queue._start_queue_drainer(queue.STATE)
    state = _DaemonState(
        provider=provider,
        stop=threading.Event(),
        configured_port=config.CONNECTION,
        inactivity_reconnect_secs=max(
            0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
        ),
        energy_saving_enabled=config.ENERGY_SAVING,
        energy_online_secs=max(0.0, config._ENERGY_ONLINE_DURATION_SECS),
        energy_sleep_secs=max(0.0, config._ENERGY_SLEEP_SECS),
        retry_delay=max(0.0, config._RECONNECT_INITIAL_DELAY_SECS),
        last_seen_packet_monotonic=handlers.last_packet_monotonic(),
        active_candidate=config.CONNECTION,
    )

    def handle_sigterm(*_args) -> None:
        """Set the stop flag so the daemon loop exits cleanly on SIGTERM."""
        state.stop.set()

    def handle_sigint(signum, frame) -> None:
        """Handle SIGINT (Ctrl-C) with graceful-first, hard-exit-second behaviour.

        The first SIGINT sets the stop flag and lets the loop finish its
        current iteration. A second SIGINT delegates to the default handler,
        which raises :class:`KeyboardInterrupt` and terminates immediately.
        """
        if state.stop.is_set():
            signal.default_int_handler(signum, frame)
            return
        state.stop.set()

    if threading.current_thread() == threading.main_thread():
        signal.signal(signal.SIGINT, handle_sigint)
        signal.signal(signal.SIGTERM, handle_sigterm)
    instance_label = ", ".join(inst for inst, _ in config.INSTANCES)
    config._debug_log(
        "Mesh daemon starting",
        context="daemon.main",
        severity="info",
        target=instance_label,
        port=config.CONNECTION or "auto",
        channel=config.CHANNEL_INDEX,
    )
    try:
        while not state.stop.is_set():
            if not _loop_iteration(state):
                state.stop.wait(config.SNAPSHOT_SECS)
    except KeyboardInterrupt:  # pragma: no cover - interactive only
        config._debug_log(
            "Received KeyboardInterrupt; shutting down",
            context="daemon.main",
            severity="info",
        )
        state.stop.set()
    finally:
        _close_interface(state.iface)
__all__ = [
    "_RECEIVE_TOPICS",
    "_advance_retry_delay",
    "_loop_iteration",
    "_check_energy_saving",
    "_check_inactivity_reconnect",
    "_connected_state",
    "_energy_sleep",
    "_event_wait_allows_default_timeout",
    "_is_ble_interface",
    "_node_items_snapshot",
    "_process_ingestor_heartbeat",
    "_subscribe_receive_topics",
    "_try_connect",
    "_try_send_self_node",
    "_try_send_snapshot",
    "main",
]
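The `_advance_retry_delay` helper exported above presumably encapsulates the doubling backoff applied when interface creation fails; a standalone sketch of that pattern, with an assumed signature, is:

```python
# Exponential-backoff rule: double on every failure, capped at the configured
# maximum; a zero/absent delay restarts from the initial value. When no cap
# is configured (maximum <= 0) the delay is left untouched.
def advance_retry_delay(delay: float, initial: float, maximum: float) -> float:
    if maximum <= 0:
        return delay
    return min(delay * 2 if delay else initial, maximum)

delays = []
d = 1.0
for _ in range(5):
    d = advance_retry_delay(d, initial=1.0, maximum=10.0)
    delays.append(d)
print(delays)  # [2.0, 4.0, 8.0, 10.0, 10.0]
```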
@@ -0,0 +1,96 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decode Meshtastic protobuf payloads from stdin JSON."""
from __future__ import annotations
import base64
import json
import os
import sys
from typing import Any, Dict, Tuple
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
if SCRIPT_DIR in sys.path:
sys.path.remove(SCRIPT_DIR)
from google.protobuf.json_format import MessageToDict
from meshtastic.protobuf import mesh_pb2, telemetry_pb2
PORTNUM_MAP: Dict[int, Tuple[str, Any]] = {
3: ("POSITION_APP", mesh_pb2.Position),
4: ("NODEINFO_APP", mesh_pb2.NodeInfo),
5: ("ROUTING_APP", mesh_pb2.Routing),
67: ("TELEMETRY_APP", telemetry_pb2.Telemetry),
70: ("TRACEROUTE_APP", mesh_pb2.RouteDiscovery),
71: ("NEIGHBORINFO_APP", mesh_pb2.NeighborInfo),
}
def _decode_payload(portnum: int, payload_b64: str) -> dict[str, Any]:
if portnum not in PORTNUM_MAP:
return {"error": "unsupported-port", "portnum": portnum}
try:
payload_bytes = base64.b64decode(payload_b64, validate=True)
except Exception as exc:
return {"error": f"invalid-payload: {exc}"}
name, message_cls = PORTNUM_MAP[portnum]
msg = message_cls()
try:
msg.ParseFromString(payload_bytes)
except Exception as exc:
return {"error": f"decode-failed: {exc}", "portnum": portnum, "type": name}
decoded = MessageToDict(msg, preserving_proto_field_name=True)
return {"portnum": portnum, "type": name, "payload": decoded}
def main() -> int:
"""Read a JSON request from stdin and write a decoded protobuf response to stdout.
Reads a single JSON object containing ``portnum`` (int) and
``payload_b64`` (base-64 encoded bytes) from standard input, decodes the
protobuf payload via :func:`_decode_payload`, and writes the result as
JSON to standard output.
Returns:
``0`` on success, ``1`` when the input is malformed or required fields
are absent.
"""
raw = sys.stdin.read()
try:
request = json.loads(raw)
except json.JSONDecodeError as exc:
sys.stdout.write(json.dumps({"error": f"invalid-json: {exc}"}))
return 1
portnum = request.get("portnum")
payload_b64 = request.get("payload_b64")
if not isinstance(portnum, int):
sys.stdout.write(json.dumps({"error": "missing-portnum"}))
return 1
if not isinstance(payload_b64, str):
sys.stdout.write(json.dumps({"error": "missing-payload"}))
return 1
result = _decode_payload(portnum, payload_b64)
sys.stdout.write(json.dumps(result))
return 0
if __name__ == "__main__":
raise SystemExit(main())
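The stdin/stdout contract can be exercised without the Meshtastic protobufs installed. This sketch only demonstrates the JSON request shape and the strict base64 validation that `_decode_payload` performs before any protobuf parsing:

```python
import base64
import json

# Hypothetical caller side of the contract: one JSON object in, one out.
# portnum 3 is POSITION_APP in PORTNUM_MAP above; the payload bytes here
# are arbitrary and only illustrate the encoding step.
request = json.dumps(
    {"portnum": 3, "payload_b64": base64.b64encode(b"\x08\x01").decode("ascii")}
)
assert json.loads(request)["portnum"] == 3

# Malformed base64 is rejected up front; b64decode(..., validate=True)
# raises on characters outside the base64 alphabet.
try:
    base64.b64decode("not-base64!!", validate=True)
    rejected = False
except Exception:
    rejected = True
print(rejected)  # True
```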
@@ -0,0 +1,240 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Protocol-agnostic event payload types for ingestion.
The ingestor ultimately POSTs JSON to the web app's ingest routes. These types
capture the *shape* of those payloads so multiple providers can emit the same
events, regardless of how they source or decode packets.
These are intentionally defined as ``TypedDict`` so existing code can continue
to build plain dictionaries without a runtime dependency on dataclasses.
"""
from __future__ import annotations
from typing import NotRequired, TypedDict
class _MessageEventRequired(TypedDict):
"""Required fields shared by all :class:`MessageEvent` payloads."""
id: int
rx_time: int
rx_iso: str
class MessageEvent(_MessageEventRequired, total=False):
"""Payload for the ``/api/messages`` ingest route.
Maps to the ``MessageEvent`` contract described in ``CONTRACTS.md``.
Required fields are inherited from :class:`_MessageEventRequired`;
all other fields are optional.
"""
from_id: object
to_id: object
channel: int
portnum: str | None
text: str | None
encrypted: str | None
snr: float | None
rssi: int | None
hop_limit: int | None
reply_id: int | None
emoji: str | None
channel_name: str
ingestor: str | None
lora_freq: int
modem_preset: str
class _PositionEventRequired(TypedDict):
"""Required fields shared by all :class:`PositionEvent` payloads."""
id: int
rx_time: int
rx_iso: str
class PositionEvent(_PositionEventRequired, total=False):
"""Payload for the ``/api/positions`` ingest route.
Maps to the ``PositionEvent`` contract described in ``CONTRACTS.md``.
Coordinates may be supplied as floating-point degrees or derived from
Meshtastic's integer-scaled ``latitudeI``/``longitudeI`` fields.
"""
node_id: str
node_num: int | None
num: int | None
from_id: str | None
to_id: object
latitude: float | None
longitude: float | None
altitude: float | None
position_time: int | None
location_source: str | None
precision_bits: int | None
sats_in_view: int | None
pdop: float | None
ground_speed: float | None
ground_track: float | None
snr: float | None
rssi: int | None
hop_limit: int | None
bitfield: int | None
payload_b64: str | None
raw: dict
ingestor: str | None
lora_freq: int
modem_preset: str
class _TelemetryEventRequired(TypedDict):
"""Required fields shared by all :class:`TelemetryEvent` payloads."""
id: int
rx_time: int
rx_iso: str
class TelemetryEvent(_TelemetryEventRequired, total=False):
"""Payload for the ``/api/telemetry`` ingest route.
Maps to the ``TelemetryEvent`` contract described in ``CONTRACTS.md``.
Metric keys beyond the required ones are open-ended; the web layer accepts
any additional device, environment, power, or air-quality fields.
"""
node_id: str | None
node_num: int | None
from_id: object
to_id: object
telemetry_time: int | None
channel: int
portnum: str | None
hop_limit: int | None
snr: float | None
rssi: int | None
bitfield: int | None
payload_b64: str
ingestor: str | None
lora_freq: int
modem_preset: str
# Metric keys are intentionally open-ended; the Ruby side is permissive and
# evolves over time.
class _NeighborEntryRequired(TypedDict):
"""Required fields for a single entry within a :class:`NeighborsSnapshot`."""
rx_time: int
rx_iso: str
class NeighborEntry(_NeighborEntryRequired, total=False):
"""A single observed neighbour node within a :class:`NeighborsSnapshot`.
Each entry describes one node heard by the reporting device, including
optional signal-quality metrics.
"""
neighbor_id: str
neighbor_num: int | None
snr: float | None
class _NeighborsSnapshotRequired(TypedDict):
"""Required fields shared by all :class:`NeighborsSnapshot` payloads."""
node_id: str
rx_time: int
rx_iso: str
class NeighborsSnapshot(_NeighborsSnapshotRequired, total=False):
"""Payload for the ``/api/neighbors`` ingest route.
Maps to the ``NeighborsSnapshot`` contract described in ``CONTRACTS.md``.
Encapsulates the full list of neighbours heard by a single reporting node.
"""
node_num: int | None
neighbors: list[NeighborEntry]
node_broadcast_interval_secs: int | None
last_sent_by_id: str | None
ingestor: str | None
lora_freq: int
modem_preset: str
class _TraceEventRequired(TypedDict):
"""Required fields shared by all :class:`TraceEvent` payloads."""
hops: list[int]
rx_time: int
rx_iso: str
class TraceEvent(_TraceEventRequired, total=False):
"""Payload for the ``/api/traceroutes`` ingest route.
Maps to the ``TraceEvent`` contract described in ``CONTRACTS.md``.
The ``hops`` list contains node numbers in transmission order from
source to destination.
"""
id: int | None
request_id: int | None
src: int | None
dest: int | None
rssi: int | None
snr: float | None
elapsed_ms: int | None
ingestor: str | None
lora_freq: int
modem_preset: str
class IngestorHeartbeat(TypedDict):
"""Payload for the ``/api/ingestors`` heartbeat route.
Maps to the ``IngestorHeartbeat`` contract described in ``CONTRACTS.md``.
Sent periodically to signal that the ingestor process is alive and
associated with a particular radio node.
"""
node_id: str
start_time: int
last_seen_time: int
version: str
lora_freq: NotRequired[int]
modem_preset: NotRequired[str]
NodeUpsert = dict[str, dict]
__all__ = [
"IngestorHeartbeat",
"MessageEvent",
"NeighborEntry",
"NeighborsSnapshot",
"NodeUpsert",
"PositionEvent",
"TelemetryEvent",
"TraceEvent",
]
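Because these contracts are `TypedDict`s, providers build them as ordinary dicts with no runtime dependency beyond `typing`. A minimal self-contained illustration with a trimmed field set (only a subset of the real `PositionEvent` keys):

```python
from typing import TypedDict

class _PositionEventRequired(TypedDict):
    id: int
    rx_time: int
    rx_iso: str

class PositionEvent(_PositionEventRequired, total=False):
    node_id: str
    latitude: float
    longitude: float

event: PositionEvent = {
    "id": 123,
    "rx_time": 1_700_000_000,
    "rx_iso": "2023-11-14T22:13:20Z",
    "latitude": 52.52,
    "longitude": 13.405,
}
print(type(event) is dict)  # True; safe to json.dumps and POST as-is
```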
File diff suppressed because it is too large
@@ -0,0 +1,102 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Packet handlers that serialise mesh data and push it to the HTTP queue.
This package is organised into focused submodules:
- :mod:`._state` shared mutable state (host node ID, packet timestamps)
- :mod:`.radio` radio metadata enrichment helpers
- :mod:`.ignored` debug-mode logging of dropped packets
- :mod:`.position` GPS position and traceroute handlers
- :mod:`.telemetry` device/environment telemetry and router heartbeat handlers
- :mod:`.nodeinfo` node information update handler
- :mod:`.neighborinfo` neighbour topology snapshot handler
- :mod:`.generic` packet dispatcher, node upsert, and the main receive callback
All public names from the original flat ``handlers`` module are re-exported
here so existing callers (e.g. ``daemon.py``, ``protocols/``) require no
changes.
"""
from __future__ import annotations
from .. import queue as _queue
from ._state import (
_mark_packet_seen,
host_node_id,
last_packet_monotonic,
register_host_node_id,
)
from .generic import (
_is_encrypted_flag,
_portnum_candidates,
on_receive,
store_packet_dict,
upsert_node,
)
from .ignored import (
_IGNORED_PACKET_LOCK,
_IGNORED_PACKET_LOG_PATH,
_record_ignored_packet,
)
from .neighborinfo import store_neighborinfo_packet
from .nodeinfo import store_nodeinfo_packet
from .position import (
_normalize_trace_hops,
base64_payload,
store_position_packet,
store_traceroute_packet,
)
from .radio import (
_apply_radio_metadata,
_apply_radio_metadata_to_nodes,
_radio_metadata_fields,
)
from .telemetry import (
_VALID_TELEMETRY_TYPES,
store_router_heartbeat_packet,
store_telemetry_packet,
)
# Re-export the queue alias for any callers that reference handlers._queue_post_json
_queue_post_json = _queue._queue_post_json
__all__ = [
"_IGNORED_PACKET_LOCK",
"_IGNORED_PACKET_LOG_PATH",
"_VALID_TELEMETRY_TYPES",
"_apply_radio_metadata",
"_apply_radio_metadata_to_nodes",
"_is_encrypted_flag",
"_mark_packet_seen",
"_normalize_trace_hops",
"_portnum_candidates",
"_queue_post_json",
"_radio_metadata_fields",
"_record_ignored_packet",
"base64_payload",
"host_node_id",
"last_packet_monotonic",
"on_receive",
"register_host_node_id",
"store_neighborinfo_packet",
"store_nodeinfo_packet",
"store_packet_dict",
"store_position_packet",
"store_router_heartbeat_packet",
"store_telemetry_packet",
"store_traceroute_packet",
"upsert_node",
]
@@ -0,0 +1,202 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Shared mutable state and state accessors for the handlers subpackage.
All mutable globals that span multiple handler modules live here so that each
handler submodule can import this module and get a consistent view of state
without risking stale references from bare ``from ... import`` bindings.
"""
from __future__ import annotations
import math
import time
from .. import config
from ..serialization import _canonical_node_id
# ---------------------------------------------------------------------------
# Host device identity
# ---------------------------------------------------------------------------
_host_node_id: str | None = None
"""Canonical ``!xxxxxxxx`` identifier for the connected host device."""
_host_telemetry_last_rx: int | None = None
"""Receive timestamp of the last accepted host telemetry packet."""
_HOST_TELEMETRY_INTERVAL_SECS: int = 60 * 60
"""Minimum interval (seconds) between accepted host telemetry packets.
Meshtastic devices report their own telemetry at regular intervals. Accepting
every packet would overwrite the host's profile too aggressively; this window
throttles updates to at most once per hour.
"""
_host_nodeinfo_last_seen: float | None = None
"""Monotonic timestamp of the last accepted host NODEINFO upsert."""
_HOST_NODEINFO_INTERVAL_SECS: int = 60 * 60
"""Minimum interval (seconds) between accepted host NODEINFO upserts.
The meshtastic library re-broadcasts the local node's NODEINFO to the mesh
periodically. Accepting every broadcast would overwrite the host node record
too aggressively; this window throttles self-NODEINFO upserts to at most once
per hour.
"""
# ---------------------------------------------------------------------------
# Packet receipt tracking
# ---------------------------------------------------------------------------
_last_packet_monotonic: float | None = None
"""Monotonic timestamp of the most recently processed packet."""
# ---------------------------------------------------------------------------
# Public accessors
# ---------------------------------------------------------------------------
def register_host_node_id(node_id: str | None) -> None:
"""Record the canonical identifier for the connected host device.
Resetting the host node also clears the telemetry suppression window so
the first telemetry packet from the new host is always accepted.
Parameters:
node_id: Identifier reported by the connected device. ``None`` clears
the current host assignment.
"""
global _host_node_id, _host_telemetry_last_rx, _host_nodeinfo_last_seen
canonical = _canonical_node_id(node_id)
_host_node_id = canonical
_host_telemetry_last_rx = None
_host_nodeinfo_last_seen = None
if canonical:
config._debug_log(
"Registered host device node id",
context="handlers.host_device",
host_node_id=canonical,
)
def host_node_id() -> str | None:
"""Return the canonical identifier for the connected host device.
Returns:
The canonical ``!xxxxxxxx`` node identifier, or ``None`` when no host
has been registered yet.
"""
return _host_node_id
def _mark_host_telemetry_seen(rx_time: int) -> None:
"""Update the last receive timestamp for the host telemetry window.
Parameters:
rx_time: Unix timestamp of the accepted host telemetry packet.
"""
global _host_telemetry_last_rx
_host_telemetry_last_rx = rx_time
def _host_telemetry_suppressed(rx_time: int) -> tuple[bool, int]:
"""Return suppression state and minutes remaining for host telemetry.
Host telemetry is suppressed when it arrives within
:data:`_HOST_TELEMETRY_INTERVAL_SECS` of the previous accepted packet.
This avoids flooding the API with high-frequency device metrics from the
locally connected node.
Parameters:
rx_time: Unix timestamp of the candidate telemetry packet.
Returns:
A ``(suppressed, minutes_remaining)`` tuple. ``suppressed`` is
``True`` when the packet should be dropped; ``minutes_remaining``
is the whole number of minutes until the next packet will be accepted.
"""
if _host_telemetry_last_rx is None:
return False, 0
remaining_secs = (_host_telemetry_last_rx + _HOST_TELEMETRY_INTERVAL_SECS) - rx_time
if remaining_secs <= 0:
return False, 0
return True, int(math.ceil(remaining_secs / 60.0))
def _host_nodeinfo_suppressed(now: float) -> bool:
"""Return ``True`` when a host NODEINFO upsert should be suppressed.
Self-NODEINFO upserts are throttled to at most once per
:data:`_HOST_NODEINFO_INTERVAL_SECS` to prevent the meshtastic library's
periodic rebroadcast from overwriting the host node record too aggressively.
Parameters:
now: Current :func:`time.monotonic` value.
Returns:
``True`` when the request should be dropped; ``False`` when it should
proceed.
"""
if _host_nodeinfo_last_seen is None:
return False
return (now - _host_nodeinfo_last_seen) < _HOST_NODEINFO_INTERVAL_SECS
def _mark_host_nodeinfo_seen(now: float) -> None:
"""Record that a host NODEINFO upsert was accepted.
Parameters:
now: Current :func:`time.monotonic` value from the accepted upsert.
"""
global _host_nodeinfo_last_seen
_host_nodeinfo_last_seen = now
def last_packet_monotonic() -> float | None:
"""Return the monotonic timestamp of the most recently processed packet.
Returns:
A :func:`time.monotonic` value, or ``None`` before any packet has been
received.
"""
return _last_packet_monotonic
def _mark_packet_seen() -> None:
"""Record that a packet has been processed by updating the monotonic clock."""
global _last_packet_monotonic
_last_packet_monotonic = time.monotonic()
__all__ = [
"_HOST_NODEINFO_INTERVAL_SECS",
"_HOST_TELEMETRY_INTERVAL_SECS",
"_host_nodeinfo_suppressed",
"_host_telemetry_suppressed",
"_mark_host_nodeinfo_seen",
"_mark_host_telemetry_seen",
"_mark_packet_seen",
"host_node_id",
"last_packet_monotonic",
"register_host_node_id",
]
@@ -0,0 +1,478 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generic packet dispatcher, node upsert, and the main receive callback."""
from __future__ import annotations
import base64
import contextlib
import importlib
import json
import sys
import time
from collections.abc import Mapping
from .. import channels, config, queue
from ..serialization import (
_canonical_node_id,
_coerce_int,
_first,
_iso,
_pkt_to_dict,
upsert_payload,
)
from . import _state, ignored as _ignored_mod
from .neighborinfo import store_neighborinfo_packet
from .nodeinfo import store_nodeinfo_packet
from .position import store_position_packet, store_traceroute_packet
from .radio import _apply_radio_metadata, _apply_radio_metadata_to_nodes
from .telemetry import store_router_heartbeat_packet, store_telemetry_packet
def _portnum_candidates(name: str) -> set[int]:
"""Return Meshtastic port number candidates for ``name``.
Meshtastic ships two protobuf module layouts (legacy and modern). Both are
probed so that port-number comparisons work regardless of which firmware
version is installed.
Parameters:
name: Port name to look up in Meshtastic ``PortNum`` enums.
Returns:
Set of integer port numbers resolved from all available Meshtastic
modules.
"""
candidates: set[int] = set()
for module_name in (
"meshtastic.portnums_pb2",
"meshtastic.protobuf.portnums_pb2",
):
module = sys.modules.get(module_name)
if module is None:
with contextlib.suppress(ModuleNotFoundError):
module = importlib.import_module(module_name)
if module is None:
continue
portnum_enum = getattr(module, "PortNum", None)
value_lookup = getattr(portnum_enum, "Value", None) if portnum_enum else None
if callable(value_lookup):
with contextlib.suppress(Exception):
candidate = _coerce_int(value_lookup(name))
if candidate is not None:
candidates.add(candidate)
constant_value = getattr(module, name, None)
candidate = _coerce_int(constant_value)
if candidate is not None:
candidates.add(candidate)
return candidates
def _is_encrypted_flag(value: object) -> bool:
"""Return ``True`` when ``value`` represents an encrypted payload.
Meshtastic may express the encrypted flag as a boolean, an integer, or a
string depending on how the packet was decoded. All representations are
normalised to a Python bool.
Parameters:
value: Raw encrypted field from a Meshtastic packet.
Returns:
``True`` when the payload is considered encrypted, ``False`` otherwise.
"""
if isinstance(value, bool):
return value
if isinstance(value, (int, float)):
return value != 0
if isinstance(value, str):
normalized = value.strip().lower()
if normalized in {"", "0", "false", "no"}:
return False
return True
return bool(value)
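A standalone copy of the normalisation rules above, runnable without the package imports, shows each branch in action:

```python
# Same logic as _is_encrypted_flag: bools pass through, numbers are truthiness,
# strings treat "", "0", "false", "no" (case/whitespace-insensitive) as clear.
def is_encrypted_flag(value: object) -> bool:
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return value != 0
    if isinstance(value, str):
        return value.strip().lower() not in {"", "0", "false", "no"}
    return bool(value)

print([is_encrypted_flag(v) for v in (True, 0, "no", " FALSE ", "aes", 1.5)])
# [True, False, False, False, True, True]
```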
def upsert_node(node_id: object, node: object) -> None:
"""Schedule an upsert for a single node.
Serialises ``node`` via :func:`upsert_payload`, enriches the result with
radio metadata and the current host node identifier, then enqueues a POST
to ``/api/nodes``.
Parameters:
node_id: Canonical identifier for the node in the ``!xxxxxxxx`` format.
node: Node object or mapping to serialise for the API payload.
Returns:
``None``. The payload is forwarded to the shared HTTP queue.
"""
payload = _apply_radio_metadata_to_nodes(upsert_payload(node_id, node))
payload["ingestor"] = _state.host_node_id()
queue._queue_post_json("/api/nodes", payload, priority=queue._NODE_POST_PRIORITY)
if config.DEBUG:
from ..serialization import _get
user = _get(payload[node_id], "user") or {}
short = _get(user, "shortName")
long = _get(user, "longName")
config._debug_log(
"Queued node upsert payload",
context="handlers.upsert_node",
node_id=node_id,
short_name=short,
long_name=long,
)
def store_packet_dict(packet: Mapping) -> None:
"""Route a decoded packet to the appropriate storage handler.
Inspects ``portnum`` (string and integer forms) and the presence of
well-known decoded sub-sections to determine packet type, then delegates
to the corresponding ``store_*`` handler.
Parameters:
packet: Packet dictionary emitted by the mesh interface.
Returns:
``None``. Side-effects depend on the specific handler invoked.
"""
decoded = packet.get("decoded") or {}
portnum_raw = _first(decoded, "portnum", default=None)
portnum = str(portnum_raw).upper() if portnum_raw is not None else None
portnum_int = _coerce_int(portnum_raw)
telemetry_section = (
decoded.get("telemetry") if isinstance(decoded, Mapping) else None
)
if (
portnum == "TELEMETRY_APP"
or portnum_int == 65
or isinstance(telemetry_section, Mapping)
):
store_telemetry_packet(packet, decoded)
return
traceroute_section = (
decoded.get("traceroute") if isinstance(decoded, Mapping) else None
)
traceroute_port_ints = _portnum_candidates("TRACEROUTE_APP")
if (
portnum == "TRACEROUTE_APP"
or (portnum_int is not None and portnum_int in traceroute_port_ints)
or isinstance(traceroute_section, Mapping)
):
store_traceroute_packet(packet, decoded)
return
if portnum in {"5", "NODEINFO_APP"}:
store_nodeinfo_packet(packet, decoded)
return
if portnum in {"4", "POSITION_APP"}:
store_position_packet(packet, decoded)
return
neighborinfo_section = (
decoded.get("neighborinfo") if isinstance(decoded, Mapping) else None
)
if portnum == "NEIGHBORINFO_APP" or isinstance(neighborinfo_section, Mapping):
store_neighborinfo_packet(packet, decoded)
return
store_forward_port_candidates = _portnum_candidates("STORE_FORWARD_APP")
store_forward_section = (
decoded.get("storeforward") if isinstance(decoded, Mapping) else None
)
if portnum == "STORE_FORWARD_APP" or (
portnum_int is not None and portnum_int in store_forward_port_candidates
):
if not isinstance(store_forward_section, Mapping):
_ignored_mod._record_ignored_packet(
packet, reason="unsupported-store-forward"
)
return
rr = str(store_forward_section.get("rr") or "").upper()
if rr == "ROUTER_HEARTBEAT":
store_router_heartbeat_packet(packet)
return
_ignored_mod._record_ignored_packet(
packet, reason="unsupported-store-forward-rr"
)
return
text = _first(decoded, "payload.text", "text", "data.text", default=None)
encrypted = _first(decoded, "payload.encrypted", "encrypted", default=None)
if encrypted is None:
encrypted = _first(packet, "encrypted", default=None)
reply_id_raw = _first(
decoded,
"payload.replyId",
"payload.reply_id",
"data.replyId",
"data.reply_id",
"replyId",
"reply_id",
default=None,
)
reply_id = _coerce_int(reply_id_raw)
emoji_raw = _first(
decoded,
"payload.emoji",
"data.emoji",
"emoji",
default=None,
)
emoji = None
if emoji_raw is not None:
try:
emoji_text = str(emoji_raw)
except Exception:
emoji_text = None
else:
emoji_text = emoji_text.strip()
if emoji_text:
emoji = emoji_text
routing_section = decoded.get("routing") if isinstance(decoded, Mapping) else None
routing_port_candidates = _portnum_candidates("ROUTING_APP")
if text is None and (
portnum == "ROUTING_APP"
or (portnum_int is not None and portnum_int in routing_port_candidates)
or isinstance(routing_section, Mapping)
):
routing_payload = _first(decoded, "payload", "data", default=None)
if routing_payload is not None:
if isinstance(routing_payload, bytes):
text = base64.b64encode(routing_payload).decode("ascii")
elif isinstance(routing_payload, str):
text = routing_payload
else:
try:
text = json.dumps(routing_payload, ensure_ascii=True)
except TypeError:
text = str(routing_payload)
if isinstance(text, str):
text = text.strip() or None
allowed_port_values = {"1", "TEXT_MESSAGE_APP", "REACTION_APP", "ROUTING_APP"}
allowed_port_ints = {1}
reaction_port_candidates = _portnum_candidates("REACTION_APP")
for candidate in reaction_port_candidates:
allowed_port_ints.add(candidate)
allowed_port_values.add(str(candidate))
for candidate in routing_port_candidates:
allowed_port_ints.add(candidate)
allowed_port_values.add(str(candidate))
if isinstance(routing_section, Mapping) and portnum_int is not None:
allowed_port_ints.add(portnum_int)
allowed_port_values.add(str(portnum_int))
is_reaction_packet = portnum == "REACTION_APP" or (
reply_id is not None and emoji is not None
)
if is_reaction_packet and portnum_int is not None:
allowed_port_ints.add(portnum_int)
allowed_port_values.add(str(portnum_int))
if portnum and portnum not in allowed_port_values:
if portnum_int not in allowed_port_ints:
_ignored_mod._record_ignored_packet(packet, reason="unsupported-port")
return
encrypted_flag = _is_encrypted_flag(encrypted)
if not any([text, encrypted_flag, emoji is not None, reply_id is not None]):
_ignored_mod._record_ignored_packet(packet, reason="no-message-payload")
return
channel = _first(decoded, "channel", default=None)
if channel is None:
channel = _first(packet, "channel", default=0)
try:
channel = int(channel)
except Exception:
channel = 0
channel_name_value = channels.channel_name(channel)
pkt_id = _first(packet, "id", "packet_id", "packetId", default=None)
if pkt_id is None:
_ignored_mod._record_ignored_packet(packet, reason="missing-packet-id")
return
rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))
from_id = _first(packet, "fromId", "from_id", "from", default=None)
to_id = _first(packet, "toId", "to_id", "to", default=None)
if (from_id is None or str(from_id) == "") and config.DEBUG:
try:
raw = json.dumps(packet, default=str)
except Exception:
raw = str(packet)
config._debug_log(
"Packet missing from_id",
context="handlers.store_packet_dict",
packet=raw,
)
snr = _first(packet, "snr", "rx_snr", "rxSnr", default=None)
rssi = _first(packet, "rssi", "rx_rssi", "rxRssi", default=None)
hop = _first(packet, "hopLimit", "hop_limit", default=None)
to_id_normalized = str(to_id).strip() if to_id is not None else ""
if (
not is_reaction_packet
and channel == 0
and not encrypted_flag
and to_id_normalized
and to_id_normalized.lower() != "^all"
):
if config.DEBUG:
config._debug_log(
"Skipped direct message on primary channel",
context="handlers.store_packet_dict",
from_id=_canonical_node_id(from_id) or from_id,
to_id=_canonical_node_id(to_id) or to_id,
channel=channel,
)
_ignored_mod._record_ignored_packet(packet, reason="skipped-direct-message")
return
if not channels.is_allowed_channel(channel_name_value):
_ignored_mod._record_ignored_packet(packet, reason="disallowed-channel")
if config.DEBUG:
config._debug_log(
"Ignored packet on disallowed channel",
context="handlers.store_packet_dict",
channel=channel,
channel_name=channel_name_value,
allowed_channels=channels.allowed_channel_names(),
)
return
if channels.is_hidden_channel(channel_name_value):
_ignored_mod._record_ignored_packet(packet, reason="hidden-channel")
if config.DEBUG:
config._debug_log(
"Ignored packet on hidden channel",
context="handlers.store_packet_dict",
channel=channel,
channel_name=channel_name_value,
)
return
message_payload = {
"id": int(pkt_id),
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"from_id": from_id,
"to_id": to_id,
"channel": channel,
"portnum": str(portnum) if portnum is not None else None,
"text": text,
"encrypted": encrypted,
"snr": float(snr) if snr is not None else None,
"rssi": int(rssi) if rssi is not None else None,
"hop_limit": int(hop) if hop is not None else None,
"reply_id": reply_id,
"emoji": emoji,
"ingestor": _state.host_node_id(),
}
if not encrypted_flag and channel_name_value:
message_payload["channel_name"] = channel_name_value
queue._queue_post_json(
"/api/messages",
_apply_radio_metadata(message_payload),
priority=queue._MESSAGE_POST_PRIORITY,
)
if config.DEBUG:
from_label = _canonical_node_id(from_id) or from_id
to_label = _canonical_node_id(to_id) or to_id
        payload_desc = "Encrypted" if text is None and encrypted_flag else text
log_kwargs = {
"context": "handlers.store_packet_dict",
"from_id": from_label,
"to_id": to_label,
"channel": channel,
"channel_display": channel_name_value or channel,
"payload": payload_desc,
}
if channel_name_value:
log_kwargs["channel_name"] = channel_name_value
config._debug_log("Queued message payload", **log_kwargs)
def on_receive(packet: object, interface: object) -> None:
"""Callback registered with Meshtastic to capture incoming packets.
Subscribed to all ``meshtastic.receive.*`` pubsub topics. The packet is
deduplicated via a ``_potatomesh_seen`` flag before being normalised and
dispatched to :func:`store_packet_dict`.
Parameters:
packet: Packet payload supplied by the Meshtastic pubsub topic.
interface: Interface instance that produced the packet. Only used for
compatibility with Meshtastic's callback signature.
Returns:
``None``. Packets are serialised and enqueued asynchronously.
"""
if isinstance(packet, dict):
if packet.get("_potatomesh_seen"):
return
packet["_potatomesh_seen"] = True
_state._mark_packet_seen()
packet_dict = None
try:
packet_dict = _pkt_to_dict(packet)
store_packet_dict(packet_dict)
except Exception as exc:
info = (
list(packet_dict.keys()) if isinstance(packet_dict, dict) else type(packet)
)
config._debug_log(
"Failed to store packet",
context="handlers.on_receive",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
packet_info=info,
)
__all__ = [
"_is_encrypted_flag",
"_portnum_candidates",
"on_receive",
"store_packet_dict",
"upsert_node",
]
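The encrypted-flag normalisation above can be exercised in isolation. The sketch below re-implements the same truth table as a standalone function (the name is local to this example, not the module's API), showing how boolean, numeric, and string spellings all collapse to a plain bool:

```python
def is_encrypted_flag(value: object) -> bool:
    """Normalise Meshtastic's encrypted field to a plain bool."""
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return value != 0
    if isinstance(value, str):
        # Empty strings and common "false" spellings are not encrypted.
        return value.strip().lower() not in {"", "0", "false", "no"}
    return bool(value)

# The same field can arrive in several shapes depending on the decoder:
print(is_encrypted_flag(True))     # booleans pass through
print(is_encrypted_flag("false"))  # string spellings are normalised
print(is_encrypted_flag(1))        # non-zero numbers count as encrypted
```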
@@ -0,0 +1,103 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Debug-mode logging of ignored Meshtastic packets.
When :data:`config.DEBUG` is set the ingestor appends a JSON record for each
packet that is filtered out (unsupported port, missing fields, disallowed
channel, etc.) to a plain-text log file. This aids offline debugging without
adding overhead in production.
"""
from __future__ import annotations
import base64
import json
import threading
from collections.abc import Mapping
from datetime import datetime, timezone
from pathlib import Path
from .. import config
_IGNORED_PACKET_LOG_PATH = (
Path(__file__).resolve().parents[3] / "ignored-meshtastic.txt"
)
"""Filesystem path that stores ignored Meshtastic packets when debug mode is active."""
_IGNORED_PACKET_LOCK = threading.Lock()
"""Lock serialising concurrent appends to :data:`_IGNORED_PACKET_LOG_PATH`."""
def _ignored_packet_default(value: object) -> object:
"""Return a JSON-serialisable representation for an ignored packet value.
Called as the ``default`` argument to :func:`json.dumps` when serialising
ignored packet entries. Handles container types and raw bytes so the log
file contains readable text rather than ``repr()`` fragments.
Parameters:
value: Arbitrary value encountered during packet serialisation.
Returns:
A JSON-compatible object derived from ``value``.
"""
if isinstance(value, (list, tuple, set)):
return list(value)
if isinstance(value, bytes):
return base64.b64encode(value).decode("ascii")
if isinstance(value, Mapping):
return {
str(key): _ignored_packet_default(sub_value)
for key, sub_value in value.items()
}
return str(value)
def _record_ignored_packet(packet: Mapping | object, *, reason: str) -> None:
"""Persist packet details to :data:`_IGNORED_PACKET_LOG_PATH` during debugging.
Does nothing when :data:`config.DEBUG` is ``False``. Each call appends a
single newline-delimited JSON record with a timestamp, drop reason, and a
sanitised copy of the packet.
Parameters:
packet: Packet object or mapping to record.
reason: Short machine-readable label describing why the packet was
ignored (e.g. ``"unsupported-port"``, ``"missing-packet-id"``).
"""
if not config.DEBUG:
return
timestamp = datetime.now(timezone.utc).isoformat()
entry = {
"timestamp": timestamp,
"reason": reason,
"packet": _ignored_packet_default(packet),
}
payload = json.dumps(entry, ensure_ascii=False, sort_keys=True)
with _IGNORED_PACKET_LOCK:
_IGNORED_PACKET_LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
with _IGNORED_PACKET_LOG_PATH.open("a", encoding="utf-8") as handle:
handle.write(f"{payload}\n")
__all__ = [
"_IGNORED_PACKET_LOCK",
"_IGNORED_PACKET_LOG_PATH",
"_ignored_packet_default",
"_record_ignored_packet",
]
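The serialisation rules used for ignored-packet entries can be demonstrated standalone. This sketch mirrors the same `default` callback contract for `json.dumps` (names are local to the example): containers become lists, raw bytes become Base64 text, and mapping keys are stringified recursively so the log stays readable JSON:

```python
import base64
import json
from collections.abc import Mapping


def ignored_packet_default(value: object) -> object:
    """JSON `default` hook: make arbitrary packet values serialisable."""
    if isinstance(value, (list, tuple, set)):
        return list(value)
    if isinstance(value, bytes):
        return base64.b64encode(value).decode("ascii")
    if isinstance(value, Mapping):
        return {str(k): ignored_packet_default(v) for k, v in value.items()}
    return str(value)


entry = {"reason": "unsupported-port", "packet": {"payload": b"\x01\x02", "port": 256}}
# json.dumps only invokes the hook for values it cannot serialise itself,
# here the raw bytes payload.
line = json.dumps(entry, default=ignored_packet_default, sort_keys=True)
print(line)
```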
@@ -0,0 +1,150 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Handler for neighbour-information packets."""
from __future__ import annotations
import time
from collections.abc import Mapping
from .. import config, queue
from ..serialization import (
_canonical_node_id,
_coerce_float,
_coerce_int,
_first,
_iso,
_node_num_from_id,
)
from . import _state
from .radio import _apply_radio_metadata
def store_neighborinfo_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist neighbour information gathered from a packet.
Meshtastic nodes periodically broadcast the set of nodes they can hear
directly along with the observed signal quality. This handler serialises
that snapshot so the web dashboard can render a live RF topology graph.
Parameters:
packet: Raw Meshtastic packet metadata.
decoded: Decoded view containing the ``neighborinfo`` section.
Returns:
``None``. The neighbour snapshot is queued for HTTP submission.
"""
neighbor_section = (
decoded.get("neighborinfo") if isinstance(decoded, Mapping) else None
)
if not isinstance(neighbor_section, Mapping):
return
node_ref = _first(
neighbor_section,
"nodeId",
"node_id",
default=_first(packet, "fromId", "from_id", "from", default=None),
)
node_id = _canonical_node_id(node_ref)
if node_id is None:
return
node_num = _coerce_int(_first(neighbor_section, "nodeId", "node_id", default=None))
if node_num is None:
node_num = _node_num_from_id(node_id)
node_broadcast_interval = _coerce_int(
_first(
neighbor_section,
"nodeBroadcastIntervalSecs",
"node_broadcast_interval_secs",
default=None,
)
)
last_sent_by_ref = _first(
neighbor_section,
"lastSentById",
"last_sent_by_id",
default=None,
)
last_sent_by_id = _canonical_node_id(last_sent_by_ref)
rx_time = _coerce_int(_first(packet, "rxTime", "rx_time", default=time.time()))
if rx_time is None:
rx_time = int(time.time())
neighbors_payload = neighbor_section.get("neighbors")
neighbors_iterable = (
neighbors_payload if isinstance(neighbors_payload, list) else []
)
neighbor_entries: list[dict] = []
for entry in neighbors_iterable:
if not isinstance(entry, Mapping):
continue
neighbor_ref = _first(entry, "nodeId", "node_id", default=None)
neighbor_id = _canonical_node_id(neighbor_ref)
if neighbor_id is None:
continue
neighbor_num = _coerce_int(_first(entry, "nodeId", "node_id", default=None))
if neighbor_num is None:
neighbor_num = _node_num_from_id(neighbor_id)
snr = _coerce_float(_first(entry, "snr", default=None))
entry_rx_time = _coerce_int(_first(entry, "rxTime", "rx_time", default=None))
if entry_rx_time is None:
entry_rx_time = rx_time
neighbor_entries.append(
{
"neighbor_id": neighbor_id,
"neighbor_num": neighbor_num,
"snr": snr,
"rx_time": entry_rx_time,
"rx_iso": _iso(entry_rx_time),
}
)
payload = {
"node_id": node_id,
"node_num": node_num,
"neighbors": neighbor_entries,
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"ingestor": _state.host_node_id(),
}
if node_broadcast_interval is not None:
payload["node_broadcast_interval_secs"] = node_broadcast_interval
if last_sent_by_id is not None:
payload["last_sent_by_id"] = last_sent_by_id
queue._queue_post_json(
"/api/neighbors",
_apply_radio_metadata(payload),
priority=queue._NEIGHBOR_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
"Queued neighborinfo payload",
context="handlers.store_neighborinfo",
node_id=node_id,
neighbors=len(neighbor_entries),
)
__all__ = ["store_neighborinfo_packet"]
@@ -0,0 +1,234 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Handler for node-information packets."""
from __future__ import annotations
import time
from collections.abc import Mapping
from .. import config, queue
from ..serialization import (
_canonical_node_id,
_coerce_int,
_decode_nodeinfo_payload,
_extract_payload_bytes,
_first,
_merge_mappings,
_node_num_from_id,
_node_to_dict,
_nodeinfo_metrics_dict,
_nodeinfo_position_dict,
_nodeinfo_user_dict,
)
from . import _state
from .radio import _apply_radio_metadata_to_nodes
def store_nodeinfo_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist node information updates.
Node info packets carry user profile data (short name, long name, hardware
model, public key) together with optional position and device-metrics
snapshots. When a protobuf payload is present it is decoded first; any
fields missing from the protobuf are filled in from the ``decoded`` dict
so both firmware variants are handled.
Parameters:
packet: Raw packet metadata describing the update.
decoded: Decoded payload that may include ``user`` and ``position``
sections.
Returns:
``None``. The node payload is merged into the API queue.
"""
payload_bytes = _extract_payload_bytes(decoded)
node_info = _decode_nodeinfo_payload(payload_bytes)
decoded_user = decoded.get("user")
user_dict = _nodeinfo_user_dict(node_info, decoded_user)
node_info_fields = set()
if node_info:
node_info_fields = {field_desc.name for field_desc, _ in node_info.ListFields()}
node_id = None
if isinstance(user_dict, Mapping):
node_id = _canonical_node_id(user_dict.get("id"))
if node_id is None:
node_id = _canonical_node_id(
_first(packet, "fromId", "from_id", "from", default=None)
)
if node_id is None:
return
# Throttle self-NODEINFO upserts to at most once per hour. The meshtastic
# library rebroadcasts the local node's NODEINFO periodically; accepting
# every broadcast would overwrite the host node record too aggressively.
if node_id == _state.host_node_id():
_now = time.monotonic()
if _state._host_nodeinfo_suppressed(_now):
if config.DEBUG:
config._debug_log(
"Suppressed host self-NODEINFO update within throttle window",
context="handlers.store_nodeinfo",
node_id=node_id,
)
return
_state._mark_host_nodeinfo_seen(_now)
node_payload: dict = {}
if user_dict:
node_payload["user"] = user_dict
# Resolve node_num from protobuf first, then decoded dict, then from the
# canonical ID as a last resort.
node_num = None
if node_info and "num" in node_info_fields:
try:
node_num = int(node_info.num)
except (TypeError, ValueError):
node_num = None
if node_num is None:
decoded_num = decoded.get("num")
if decoded_num is not None:
try:
node_num = int(decoded_num)
except (TypeError, ValueError):
try:
node_num = int(str(decoded_num).strip(), 0)
except Exception:
node_num = None
if node_num is None:
node_num = _node_num_from_id(node_id)
if node_num is not None:
node_payload["num"] = node_num
rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))
last_heard = None
if node_info and "last_heard" in node_info_fields:
try:
last_heard = int(node_info.last_heard)
except (TypeError, ValueError):
last_heard = None
if last_heard is None:
decoded_last_heard = decoded.get("lastHeard")
if decoded_last_heard is not None:
try:
last_heard = int(decoded_last_heard)
except (TypeError, ValueError):
last_heard = None
if last_heard is None or last_heard < rx_time:
last_heard = rx_time
node_payload["lastHeard"] = last_heard
snr = None
if node_info and "snr" in node_info_fields:
try:
snr = float(node_info.snr)
except (TypeError, ValueError):
snr = None
if snr is None:
snr = _first(packet, "snr", "rx_snr", "rxSnr", default=None)
if snr is not None:
try:
snr = float(snr)
except (TypeError, ValueError):
snr = None
if snr is not None:
node_payload["snr"] = snr
hops = None
if node_info and "hops_away" in node_info_fields:
try:
hops = int(node_info.hops_away)
except (TypeError, ValueError):
hops = None
if hops is None:
hops = decoded.get("hopsAway")
if hops is not None:
try:
hops = int(hops)
except (TypeError, ValueError):
hops = None
if hops is not None:
node_payload["hopsAway"] = hops
if node_info and "channel" in node_info_fields:
try:
node_payload["channel"] = int(node_info.channel)
except (TypeError, ValueError):
pass
if node_info and "via_mqtt" in node_info_fields:
node_payload["viaMqtt"] = bool(node_info.via_mqtt)
if node_info and "is_favorite" in node_info_fields:
node_payload["isFavorite"] = bool(node_info.is_favorite)
elif "isFavorite" in decoded:
node_payload["isFavorite"] = bool(decoded.get("isFavorite"))
if node_info and "is_ignored" in node_info_fields:
node_payload["isIgnored"] = bool(node_info.is_ignored)
if node_info and "is_key_manually_verified" in node_info_fields:
node_payload["isKeyManuallyVerified"] = bool(node_info.is_key_manually_verified)
metrics = _nodeinfo_metrics_dict(node_info)
decoded_metrics = decoded.get("deviceMetrics")
if isinstance(decoded_metrics, Mapping):
metrics = _merge_mappings(metrics, _node_to_dict(decoded_metrics))
if metrics:
node_payload["deviceMetrics"] = metrics
position = _nodeinfo_position_dict(node_info)
decoded_position = decoded.get("position")
if isinstance(decoded_position, Mapping):
position = _merge_mappings(position, _node_to_dict(decoded_position))
if position:
node_payload["position"] = position
hop_limit = _first(packet, "hopLimit", "hop_limit", default=None)
if hop_limit is not None and "hopLimit" not in node_payload:
try:
node_payload["hopLimit"] = int(hop_limit)
except (TypeError, ValueError):
pass
nodes_payload = _apply_radio_metadata_to_nodes({node_id: node_payload})
nodes_payload["ingestor"] = _state.host_node_id()
queue._queue_post_json(
"/api/nodes",
nodes_payload,
priority=queue._NODE_POST_PRIORITY,
)
if config.DEBUG:
short = None
long_name = None
if isinstance(user_dict, Mapping):
short = user_dict.get("shortName")
long_name = user_dict.get("longName")
config._debug_log(
"Queued nodeinfo payload",
context="handlers.store_nodeinfo",
node_id=node_id,
short_name=short,
long_name=long_name,
)
__all__ = ["store_nodeinfo_packet"]
@@ -0,0 +1,413 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Handlers for position and traceroute packets."""
from __future__ import annotations
import base64
import time
from collections.abc import Mapping
from .. import config, queue
from ..serialization import (
_canonical_node_id,
_coerce_float,
_coerce_int,
_extract_payload_bytes,
_first,
_iso,
_node_num_from_id,
_node_to_dict,
_pkt_to_dict,
)
from . import _state
from .ignored import _record_ignored_packet
from .radio import _apply_radio_metadata
def base64_payload(payload_bytes: bytes | None) -> str | None:
"""Encode raw payload bytes as a Base64 string for JSON transport.
Parameters:
payload_bytes: Optional raw bytes to encode. When ``None`` or empty,
``None`` is returned so callers can omit the field.
Returns:
The Base64-encoded ASCII string, or ``None`` when ``payload_bytes`` is
falsy.
"""
if not payload_bytes:
return None
return base64.b64encode(payload_bytes).decode("ascii")
def _normalize_trace_hops(hops_value: object) -> list[int]:
"""Coerce hop entries to integer node numbers, preserving order.
Each hop can arrive as a plain integer, a canonical node-ID string
(``!xxxxxxxx``), or a mapping with a ``nodeId`` / ``node_id`` field.
All forms are normalised to the raw 32-bit node number used by the API.
Parameters:
hops_value: A single hop or list of hops in any supported form.
Returns:
List of integer node numbers with ``None``-coerced entries dropped.
"""
if hops_value is None:
return []
hop_entries = hops_value if isinstance(hops_value, list) else [hops_value]
normalized: list[int] = []
for hop in hop_entries:
hop_value = hop
if isinstance(hop, Mapping):
hop_value = _first(hop, "node_id", "nodeId", "id", "num", default=None)
canonical = _canonical_node_id(hop_value)
hop_id = _node_num_from_id(canonical or hop_value)
if hop_id is None:
hop_id = _coerce_int(hop_value)
if hop_id is not None:
normalized.append(hop_id)
return normalized
def store_position_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist a decoded GPS position packet to the API.
Extracts coordinates from both the integer-scaled (``latitudeI`` /
``longitudeI``) and floating-point (``latitude`` / ``longitude``) forms
that Meshtastic may produce depending on firmware version.
Parameters:
packet: Raw packet metadata emitted by the Meshtastic interface.
decoded: Decoded payload extracted from ``packet['decoded']``.
Returns:
``None``. The formatted position payload is added to the HTTP queue.
"""
node_ref = _first(packet, "fromId", "from_id", "from", default=None)
if node_ref is None:
node_ref = _first(decoded, "num", default=None)
node_id = _canonical_node_id(node_ref)
if node_id is None:
return
node_num = _coerce_int(_first(decoded, "num", default=None))
if node_num is None:
node_num = _node_num_from_id(node_id)
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
if pkt_id is None:
return
rx_time = _coerce_int(_first(packet, "rxTime", "rx_time", default=time.time()))
if rx_time is None:
rx_time = int(time.time())
to_id = _first(packet, "toId", "to_id", "to", default=None)
to_id = to_id if to_id not in {"", None} else None
position_section = decoded.get("position") if isinstance(decoded, Mapping) else None
if not isinstance(position_section, Mapping):
position_section = {}
# Meshtastic firmware may emit coordinates in one of two forms:
# - Floating-point degrees: ``latitude`` / ``longitude``
# - Integer-scaled (1e-7 degrees): ``latitudeI`` / ``longitudeI``
# Try the float form first and fall back to the integer form when absent.
latitude = _coerce_float(
_first(position_section, "latitude", "raw.latitude", default=None)
)
if latitude is None:
lat_i = _coerce_int(
_first(
position_section,
"latitudeI",
"latitude_i",
"raw.latitude_i",
default=None,
)
)
if lat_i is not None:
latitude = lat_i / 1e7
longitude = _coerce_float(
_first(position_section, "longitude", "raw.longitude", default=None)
)
if longitude is None:
lon_i = _coerce_int(
_first(
position_section,
"longitudeI",
"longitude_i",
"raw.longitude_i",
default=None,
)
)
if lon_i is not None:
longitude = lon_i / 1e7
altitude = _coerce_float(
_first(position_section, "altitude", "raw.altitude", default=None)
)
position_time = _coerce_int(
_first(position_section, "time", "raw.time", default=None)
)
location_source = _first(
position_section,
"locationSource",
"location_source",
"raw.location_source",
default=None,
)
location_source = (
str(location_source).strip() if location_source not in {None, ""} else None
)
precision_bits = _coerce_int(
_first(
position_section,
"precisionBits",
"precision_bits",
"raw.precision_bits",
default=None,
)
)
sats_in_view = _coerce_int(
_first(
position_section,
"satsInView",
"sats_in_view",
"raw.sats_in_view",
default=None,
)
)
pdop = _coerce_float(
_first(position_section, "PDOP", "pdop", "raw.PDOP", "raw.pdop", default=None)
)
ground_speed = _coerce_float(
_first(
position_section,
"groundSpeed",
"ground_speed",
"raw.ground_speed",
default=None,
)
)
ground_track = _coerce_float(
_first(
position_section,
"groundTrack",
"ground_track",
"raw.ground_track",
default=None,
)
)
snr = _coerce_float(_first(packet, "snr", "rx_snr", "rxSnr", default=None))
rssi = _coerce_int(_first(packet, "rssi", "rx_rssi", "rxRssi", default=None))
hop_limit = _coerce_int(_first(packet, "hopLimit", "hop_limit", default=None))
bitfield = _coerce_int(_first(decoded, "bitfield", default=None))
payload_bytes = _extract_payload_bytes(decoded)
payload_b64 = base64_payload(payload_bytes)
raw_section = decoded.get("raw") if isinstance(decoded, Mapping) else None
raw_payload = _node_to_dict(raw_section) if raw_section else None
if raw_payload is None and position_section:
raw_position = (
position_section.get("raw")
if isinstance(position_section, Mapping)
else None
)
if raw_position:
raw_payload = _node_to_dict(raw_position)
position_payload = {
"id": pkt_id,
"node_id": node_id or node_ref,
"node_num": node_num,
"num": node_num,
"from_id": node_id,
"to_id": to_id,
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"latitude": latitude,
"longitude": longitude,
"altitude": altitude,
"position_time": position_time,
"location_source": location_source,
"precision_bits": precision_bits,
"sats_in_view": sats_in_view,
"pdop": pdop,
"ground_speed": ground_speed,
"ground_track": ground_track,
"snr": snr,
"rssi": rssi,
"hop_limit": hop_limit,
"bitfield": bitfield,
"payload_b64": payload_b64,
"ingestor": _state.host_node_id(),
}
if raw_payload:
position_payload["raw"] = raw_payload
queue._queue_post_json(
"/api/positions",
_apply_radio_metadata(position_payload),
priority=queue._POSITION_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
"Queued position payload",
context="handlers.store_position",
node_id=node_id,
latitude=latitude,
longitude=longitude,
position_time=position_time,
)
def store_traceroute_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist traceroute details and the observed hop path to the API.
Hop lists can arrive under several key names (``hops``, ``path``,
``route``) and may appear at multiple nesting levels. All candidates are
deduplicated and merged into a single ordered list.
Parameters:
packet: Raw packet metadata from the Meshtastic interface.
decoded: Decoded payload containing the traceroute section.
Returns:
``None``. The traceroute payload is queued for HTTP submission, or
silently dropped when identifiers are entirely absent.
"""
traceroute_section = (
decoded.get("traceroute") if isinstance(decoded, Mapping) else None
)
request_id = _coerce_int(
_first(
traceroute_section,
"requestId",
"request_id",
default=_first(decoded, "req", "requestId", "request_id", default=None),
)
)
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
if pkt_id is None:
pkt_id = request_id
rx_time = _coerce_int(_first(packet, "rxTime", "rx_time", default=time.time()))
if rx_time is None:
rx_time = int(time.time())
src = _coerce_int(
_first(
decoded,
"src",
"source",
default=_first(packet, "fromId", "from_id", "from", default=None),
)
)
dest = _coerce_int(
_first(
decoded,
"dest",
"destination",
default=_first(packet, "toId", "to_id", "to", default=None),
)
)
metrics = traceroute_section if isinstance(traceroute_section, Mapping) else {}
rssi = _coerce_int(
_first(metrics, "rssi", default=_first(packet, "rssi", "rx_rssi", "rxRssi"))
)
snr = _coerce_float(
_first(metrics, "snr", default=_first(packet, "snr", "rx_snr", "rxSnr"))
)
elapsed_ms = _coerce_int(
_first(metrics, "elapsed_ms", "latency_ms", "latencyMs", default=None)
)
# Hops can appear under multiple keys at different nesting levels; collect
# all candidates and deduplicate while preserving first-seen order.
hop_candidates = (
_first(metrics, "hops", default=None),
_first(metrics, "path", default=None),
_first(metrics, "route", default=None),
_first(decoded, "hops", default=None),
_first(decoded, "path", default=None),
(
_first(traceroute_section, "route", default=None)
if isinstance(traceroute_section, Mapping)
else None
),
)
hops: list[int] = []
seen_hops: set[int] = set()
for candidate in hop_candidates:
for hop in _normalize_trace_hops(candidate):
if hop in seen_hops:
continue
seen_hops.add(hop)
hops.append(hop)
if pkt_id is None and request_id is None and not hops:
_record_ignored_packet(packet, reason="traceroute-missing-identifiers")
return
payload = {
"id": pkt_id,
"request_id": request_id,
"src": src,
"dest": dest,
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"hops": hops,
"rssi": rssi,
"snr": snr,
"elapsed_ms": elapsed_ms,
"ingestor": _state.host_node_id(),
}
queue._queue_post_json(
"/api/traces",
_apply_radio_metadata(payload),
priority=queue._TRACE_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
"Queued traceroute payload",
context="handlers.store_traceroute_packet",
request_id=request_id,
src=src,
dest=dest,
hop_count=len(hops),
)
__all__ = [
"base64_payload",
"store_position_packet",
"store_traceroute_packet",
]
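The hop-merging logic in ``store_traceroute_packet`` (collect candidates from several keys, then deduplicate while preserving first-seen order) can be sketched standalone. The function name here is illustrative, not part of the module's API:

```python
def merge_hop_candidates(*candidates):
    """Merge hop lists from several candidate keys, keeping first-seen order."""
    hops: list[int] = []
    seen: set[int] = set()
    for candidate in candidates:
        for hop in candidate or ():  # None candidates are skipped
            if hop in seen:
                continue
            seen.add(hop)
            hops.append(hop)
    return hops
```

For example, ``merge_hop_candidates([1, 2], None, [2, 3])`` yields ``[1, 2, 3]``: the duplicate ``2`` from the second list is dropped, and ordering follows the first occurrence.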
@@ -0,0 +1,94 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Radio metadata helpers for enriching API payloads.
LoRa radio parameters (frequency and modem preset) are captured once at
connection time by :mod:`data.mesh_ingestor.interfaces` and stored on the
:mod:`data.mesh_ingestor.config` module. The helpers here read those cached
values and attach them to outgoing payloads so the web dashboard can display
radio configuration alongside mesh data.
"""
from __future__ import annotations
from .. import config
def _radio_metadata_fields() -> dict[str, object]:
"""Return the shared radio metadata fields for payload enrichment.
Reads ``LORA_FREQ`` and ``MODEM_PRESET`` from :mod:`config` and returns
only the keys that have been populated (i.e. skips ``None`` values).
Returns:
A dictionary containing zero, one, or both of ``lora_freq`` and
``modem_preset`` depending on what is available.
"""
metadata: dict[str, object] = {}
freq = getattr(config, "LORA_FREQ", None)
if freq is not None:
metadata["lora_freq"] = freq
preset = getattr(config, "MODEM_PRESET", None)
if preset is not None:
metadata["modem_preset"] = preset
return metadata
def _apply_radio_metadata(payload: dict) -> dict:
"""Augment a flat payload dict with radio metadata when available.
Parameters:
payload: Mutable dictionary that will receive radio metadata keys.
Returns:
The same ``payload`` dict with radio metadata keys merged in-place.
"""
metadata = _radio_metadata_fields()
if metadata:
payload.update(metadata)
return payload
def _apply_radio_metadata_to_nodes(payload: dict) -> dict:
"""Attach radio metadata to each node entry stored in ``payload``.
Node upsert payloads are keyed by node ID; each value is a dict of node
attributes. This function enriches every node-value dict with radio
metadata so the dashboard can show the radio configuration that was active
when the node was last heard.
Parameters:
        payload: Mapping of ``node_id -> node_dict`` to enrich in-place.
Returns:
The same ``payload`` dict after in-place mutation of its node entries.
"""
metadata = _radio_metadata_fields()
if not metadata:
return payload
for value in payload.values():
if isinstance(value, dict):
value.update(metadata)
return payload
__all__ = [
"_apply_radio_metadata",
"_apply_radio_metadata_to_nodes",
"_radio_metadata_fields",
]
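The node-keyed enrichment above only touches dict values, which matters because node payloads can also carry non-node entries (such as the ``ingestor`` key added by the heartbeat handler). A standalone condensation, with an illustrative name and the metadata passed explicitly rather than read from ``config``:

```python
def apply_radio_metadata_to_nodes(payload: dict, metadata: dict) -> dict:
    """Merge radio metadata into every node-value dict, in place."""
    if not metadata:
        return payload
    for value in payload.values():
        if isinstance(value, dict):  # skip non-node entries like "ingestor"
            value.update(metadata)
    return payload
```

Non-dict values pass through untouched, so mixing node dicts and scalar bookkeeping keys in one payload stays safe.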
@@ -0,0 +1,563 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Handlers for telemetry and router-heartbeat packets."""
from __future__ import annotations
import time
from collections.abc import Mapping
from .. import config, queue
from ..serialization import (
_canonical_node_id,
_coerce_float,
_coerce_int,
_extract_payload_bytes,
_first,
_iso,
_node_num_from_id,
)
from . import _state
from .position import base64_payload
from .radio import _apply_radio_metadata, _apply_radio_metadata_to_nodes
_VALID_TELEMETRY_TYPES: frozenset[str] = frozenset(
{"device", "environment", "power", "air_quality"}
)
"""Allowed discriminator values for the ``telemetry_type`` field.
Meshtastic uses a protobuf ``oneof`` so only one metric sub-object can be
populated per packet. Values outside this set indicate a firmware version
that added a new type not yet handled here; those are logged and dropped to
avoid persisting unexpected data shapes.
"""
def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
"""Persist telemetry metrics extracted from a packet.
Handles all four Meshtastic telemetry sub-types (device, environment,
power, air quality) by extracting common fields first and then
conditionally adding type-specific metric keys.
Host telemetry is rate-limited: if the locally connected node's own
telemetry arrives within the suppression window it is silently dropped to
avoid constant self-updates overwriting other node data.
Parameters:
packet: Packet metadata received from the radio interface.
decoded: Meshtastic-decoded view containing telemetry structures.
Returns:
``None``. The telemetry payload is added to the HTTP queue.
"""
telemetry_section = (
decoded.get("telemetry") if isinstance(decoded, Mapping) else None
)
if not isinstance(telemetry_section, Mapping):
return
pkt_id = _coerce_int(_first(packet, "id", "packet_id", "packetId", default=None))
if pkt_id is None:
return
raw_from = _first(packet, "fromId", "from_id", "from", default=None)
node_id = _canonical_node_id(raw_from)
node_num = _coerce_int(_first(decoded, "num", "node_num", default=None))
if node_num is None:
node_num = _node_num_from_id(node_id or raw_from)
to_id = _first(packet, "toId", "to_id", "to", default=None)
raw_rx_time = _first(packet, "rxTime", "rx_time", default=time.time())
try:
rx_time = int(raw_rx_time)
except (TypeError, ValueError):
rx_time = int(time.time())
rx_iso = _iso(rx_time)
host_id = _state.host_node_id()
# The locally connected node broadcasts its own telemetry frequently.
# Accepting every packet would overwrite the host's profile more often
# than necessary; the suppression window (default 1 h) rate-limits
# self-updates without blocking telemetry from other nodes.
if host_id is not None and node_id == host_id:
suppressed, minutes_remaining = _state._host_telemetry_suppressed(rx_time)
if suppressed:
config._debug_log(
"Suppressed host telemetry update",
context="handlers.store_telemetry",
host_node_id=host_id,
minutes_remaining=minutes_remaining,
)
return
_state._mark_host_telemetry_seen(rx_time)
telemetry_time = _coerce_int(_first(telemetry_section, "time", default=None))
_dm = telemetry_section.get("deviceMetrics") or telemetry_section.get(
"device_metrics"
)
_em = telemetry_section.get("environmentMetrics") or telemetry_section.get(
"environment_metrics"
)
_pm = telemetry_section.get("powerMetrics") or telemetry_section.get(
"power_metrics"
)
_aq = telemetry_section.get("airQualityMetrics") or telemetry_section.get(
"air_quality_metrics"
)
# Priority order matters: deviceMetrics is checked first because the device
# sub-object also carries a voltage field that overlaps with powerMetrics.
# Meshtastic uses a protobuf oneof so only one sub-object can be populated per
# packet; the elif chain handles any hypothetical overlap from future protocols.
if isinstance(_dm, Mapping):
telemetry_type: str | None = "device"
elif isinstance(_em, Mapping):
telemetry_type = "environment"
elif isinstance(_pm, Mapping):
telemetry_type = "power"
elif isinstance(_aq, Mapping):
telemetry_type = "air_quality"
else:
telemetry_type = None
if telemetry_type is not None and telemetry_type not in _VALID_TELEMETRY_TYPES:
config._debug_log(
"Unexpected telemetry_type value; dropping field",
context="handlers.store_telemetry",
severity="warning",
always=True,
telemetry_type=telemetry_type,
)
telemetry_type = None
channel = _coerce_int(_first(decoded, "channel", default=None))
if channel is None:
channel = _coerce_int(_first(packet, "channel", default=None))
if channel is None:
channel = 0
portnum = _first(decoded, "portnum", default=None)
portnum = str(portnum) if portnum not in {None, ""} else None
bitfield = _coerce_int(_first(decoded, "bitfield", default=None))
snr = _coerce_float(_first(packet, "snr", "rx_snr", "rxSnr", default=None))
rssi = _coerce_int(_first(packet, "rssi", "rx_rssi", "rxRssi", default=None))
hop_limit = _coerce_int(_first(packet, "hopLimit", "hop_limit", default=None))
payload_bytes = _extract_payload_bytes(decoded)
payload_b64 = base64_payload(payload_bytes) or ""
battery_level = _coerce_float(
_first(
telemetry_section,
"batteryLevel",
"battery_level",
"deviceMetrics.batteryLevel",
"environmentMetrics.battery_level",
"deviceMetrics.battery_level",
default=None,
)
)
voltage = _coerce_float(
_first(
telemetry_section,
"voltage",
"environmentMetrics.voltage",
"deviceMetrics.voltage",
default=None,
)
)
channel_utilization = _coerce_float(
_first(
telemetry_section,
"channelUtilization",
"channel_utilization",
"deviceMetrics.channelUtilization",
"deviceMetrics.channel_utilization",
default=None,
)
)
air_util_tx = _coerce_float(
_first(
telemetry_section,
"airUtilTx",
"air_util_tx",
"deviceMetrics.airUtilTx",
"deviceMetrics.air_util_tx",
default=None,
)
)
uptime_seconds = _coerce_int(
_first(
telemetry_section,
"uptimeSeconds",
"uptime_seconds",
"deviceMetrics.uptimeSeconds",
"deviceMetrics.uptime_seconds",
default=None,
)
)
temperature = _coerce_float(
_first(
telemetry_section,
"temperature",
"environmentMetrics.temperature",
default=None,
)
)
relative_humidity = _coerce_float(
_first(
telemetry_section,
"relativeHumidity",
"relative_humidity",
"environmentMetrics.relativeHumidity",
"environmentMetrics.relative_humidity",
default=None,
)
)
barometric_pressure = _coerce_float(
_first(
telemetry_section,
"barometricPressure",
"barometric_pressure",
"environmentMetrics.barometricPressure",
"environmentMetrics.barometric_pressure",
default=None,
)
)
current = _coerce_float(
_first(
telemetry_section,
"current",
"deviceMetrics.current",
"deviceMetrics.current_ma",
"deviceMetrics.currentMa",
"environmentMetrics.current",
default=None,
)
)
gas_resistance = _coerce_float(
_first(
telemetry_section,
"gasResistance",
"gas_resistance",
"environmentMetrics.gasResistance",
"environmentMetrics.gas_resistance",
default=None,
)
)
iaq = _coerce_int(
_first(
telemetry_section,
"iaq",
"environmentMetrics.iaq",
"environmentMetrics.iaqIndex",
"environmentMetrics.iaq_index",
default=None,
)
)
distance = _coerce_float(
_first(
telemetry_section,
"distance",
"environmentMetrics.distance",
"environmentMetrics.range",
"environmentMetrics.rangeMeters",
default=None,
)
)
lux = _coerce_float(
_first(
telemetry_section,
"lux",
"environmentMetrics.lux",
"environmentMetrics.illuminance",
default=None,
)
)
white_lux = _coerce_float(
_first(
telemetry_section,
"whiteLux",
"white_lux",
"environmentMetrics.whiteLux",
"environmentMetrics.white_lux",
default=None,
)
)
ir_lux = _coerce_float(
_first(
telemetry_section,
"irLux",
"ir_lux",
"environmentMetrics.irLux",
"environmentMetrics.ir_lux",
default=None,
)
)
uv_lux = _coerce_float(
_first(
telemetry_section,
"uvLux",
"uv_lux",
"environmentMetrics.uvLux",
"environmentMetrics.uv_lux",
"environmentMetrics.uvIndex",
default=None,
)
)
wind_direction = _coerce_int(
_first(
telemetry_section,
"windDirection",
"wind_direction",
"environmentMetrics.windDirection",
"environmentMetrics.wind_direction",
default=None,
)
)
wind_speed = _coerce_float(
_first(
telemetry_section,
"windSpeed",
"wind_speed",
"environmentMetrics.windSpeed",
"environmentMetrics.wind_speed",
"environmentMetrics.windSpeedMps",
default=None,
)
)
wind_gust = _coerce_float(
_first(
telemetry_section,
"windGust",
"wind_gust",
"environmentMetrics.windGust",
"environmentMetrics.wind_gust",
default=None,
)
)
wind_lull = _coerce_float(
_first(
telemetry_section,
"windLull",
"wind_lull",
"environmentMetrics.windLull",
"environmentMetrics.wind_lull",
default=None,
)
)
weight = _coerce_float(
_first(
telemetry_section,
"weight",
"environmentMetrics.weight",
"environmentMetrics.mass",
default=None,
)
)
radiation = _coerce_float(
_first(
telemetry_section,
"radiation",
"environmentMetrics.radiation",
"environmentMetrics.radiationLevel",
default=None,
)
)
rainfall_1h = _coerce_float(
_first(
telemetry_section,
"rainfall1h",
"rainfall_1h",
"environmentMetrics.rainfall1h",
"environmentMetrics.rainfall_1h",
"environmentMetrics.rainfallOneHour",
default=None,
)
)
rainfall_24h = _coerce_float(
_first(
telemetry_section,
"rainfall24h",
"rainfall_24h",
"environmentMetrics.rainfall24h",
"environmentMetrics.rainfall_24h",
"environmentMetrics.rainfallTwentyFourHour",
default=None,
)
)
soil_moisture = _coerce_int(
_first(
telemetry_section,
"soilMoisture",
"soil_moisture",
"environmentMetrics.soilMoisture",
"environmentMetrics.soil_moisture",
default=None,
)
)
soil_temperature = _coerce_float(
_first(
telemetry_section,
"soilTemperature",
"soil_temperature",
"environmentMetrics.soilTemperature",
"environmentMetrics.soil_temperature",
default=None,
)
)
telemetry_payload = {
"id": pkt_id,
"node_id": node_id,
"node_num": node_num,
"from_id": node_id or raw_from,
"to_id": to_id,
"rx_time": rx_time,
"rx_iso": rx_iso,
"telemetry_time": telemetry_time,
"channel": channel,
"portnum": portnum,
"bitfield": bitfield,
"snr": snr,
"rssi": rssi,
"hop_limit": hop_limit,
"payload_b64": payload_b64,
"ingestor": _state.host_node_id(),
}
# Conditionally include metric keys so the API ignores absent fields rather
# than overwriting existing values with null.
if battery_level is not None:
telemetry_payload["battery_level"] = battery_level
if voltage is not None:
telemetry_payload["voltage"] = voltage
if channel_utilization is not None:
telemetry_payload["channel_utilization"] = channel_utilization
if air_util_tx is not None:
telemetry_payload["air_util_tx"] = air_util_tx
if uptime_seconds is not None:
telemetry_payload["uptime_seconds"] = uptime_seconds
if temperature is not None:
telemetry_payload["temperature"] = temperature
if relative_humidity is not None:
telemetry_payload["relative_humidity"] = relative_humidity
if barometric_pressure is not None:
telemetry_payload["barometric_pressure"] = barometric_pressure
if current is not None:
telemetry_payload["current"] = current
if gas_resistance is not None:
telemetry_payload["gas_resistance"] = gas_resistance
if iaq is not None:
telemetry_payload["iaq"] = iaq
if distance is not None:
telemetry_payload["distance"] = distance
if lux is not None:
telemetry_payload["lux"] = lux
if white_lux is not None:
telemetry_payload["white_lux"] = white_lux
if ir_lux is not None:
telemetry_payload["ir_lux"] = ir_lux
if uv_lux is not None:
telemetry_payload["uv_lux"] = uv_lux
if wind_direction is not None:
telemetry_payload["wind_direction"] = wind_direction
if wind_speed is not None:
telemetry_payload["wind_speed"] = wind_speed
if wind_gust is not None:
telemetry_payload["wind_gust"] = wind_gust
if wind_lull is not None:
telemetry_payload["wind_lull"] = wind_lull
if weight is not None:
telemetry_payload["weight"] = weight
if radiation is not None:
telemetry_payload["radiation"] = radiation
if rainfall_1h is not None:
telemetry_payload["rainfall_1h"] = rainfall_1h
if rainfall_24h is not None:
telemetry_payload["rainfall_24h"] = rainfall_24h
if soil_moisture is not None:
telemetry_payload["soil_moisture"] = soil_moisture
if soil_temperature is not None:
telemetry_payload["soil_temperature"] = soil_temperature
if telemetry_type is not None:
telemetry_payload["telemetry_type"] = telemetry_type
queue._queue_post_json(
"/api/telemetry",
_apply_radio_metadata(telemetry_payload),
priority=queue._TELEMETRY_POST_PRIORITY,
)
if config.DEBUG:
config._debug_log(
"Queued telemetry payload",
context="handlers.store_telemetry",
node_id=node_id,
battery_level=battery_level,
voltage=voltage,
)
def store_router_heartbeat_packet(packet: Mapping) -> None:
"""Persist a ``STORE_FORWARD_APP ROUTER_HEARTBEAT`` as a node presence update.
    The heartbeat carries no message payload; the only actionable signal is
    that the store-and-forward router is alive at the observed ``rx_time``.
All other fields are left untouched so the router's existing profile is
not overwritten.
Parameters:
packet: Raw packet metadata.
Returns:
``None``. A minimal node upsert is enqueued at low priority.
"""
node_id = _canonical_node_id(
_first(packet, "fromId", "from_id", "from", default=None)
)
if node_id is None:
return
rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))
node_payload: dict = {"lastHeard": rx_time}
nodes_payload = _apply_radio_metadata_to_nodes({node_id: node_payload})
nodes_payload["ingestor"] = _state.host_node_id()
queue._queue_post_json(
"/api/nodes", nodes_payload, priority=queue._DEFAULT_POST_PRIORITY
)
if config.DEBUG:
config._debug_log(
"Queued router heartbeat node upsert",
context="handlers.store_router_heartbeat",
node_id=node_id,
rx_time=rx_time,
)
__all__ = [
"store_router_heartbeat_packet",
"store_telemetry_packet",
]
@@ -113,6 +113,7 @@ def queue_ingestor_heartbeat(
"start_time": STATE.start_time,
"last_seen_time": now,
"version": INGESTOR_VERSION,
"protocol": getattr(config, "PROTOCOL", "meshtastic") or "meshtastic",
}
if getattr(config, "LORA_FREQ", None) is not None:
payload["lora_freq"] = config.LORA_FREQ
@@ -17,7 +17,6 @@
from __future__ import annotations
import contextlib
import glob
import importlib
import ipaddress
import math
@@ -33,6 +32,13 @@ except Exception: # pragma: no cover - dependency optional in tests
meshtastic = None # type: ignore[assignment]
from . import channels, config, serialization
from .connection import (
BLE_ADDRESS_RE,
DEFAULT_TCP_PORT,
DEFAULT_SERIAL_PATTERNS,
default_serial_targets,
parse_ble_target,
)
def _ensure_mapping(value) -> Mapping | None:
@@ -151,7 +157,21 @@ def _candidate_node_id(mapping: Mapping | None) -> str | None:
def _extract_host_node_id(iface) -> str | None:
"""Return the canonical node identifier for the connected host device."""
"""Return the canonical node identifier for the connected host device.
Searches a sequence of well-known attribute names (``myInfo``,
``my_node_info``, etc.) on ``iface`` for a mapping that contains a
recognisable node identifier, then falls back to the raw ``myNodeNum``
integer attribute.
Parameters:
iface: Live Meshtastic interface object, or any object that exposes
node-identity attributes in one of the expected forms.
Returns:
A canonical ``!xxxxxxxx`` node identifier, or ``None`` when no
identifiable host node information is available.
"""
if iface is None:
return None
@@ -239,6 +259,9 @@ def _patch_meshtastic_nodeinfo_handler() -> None:
with contextlib.suppress(Exception):
mesh_interface_module = importlib.import_module("meshtastic.mesh_interface")
# Replace the module-level handler only once; the sentinel attribute prevents
# re-wrapping if _patch_meshtastic_nodeinfo_handler() is called again after
# the interface module is reloaded or re-imported.
if not getattr(original, "_potato_mesh_safe_wrapper", False):
module._onNodeInfoReceive = _build_safe_nodeinfo_callback(original)
@@ -297,6 +320,22 @@ def _patch_nodeinfo_handler_class(
"""Subclass that guards against missing node identifiers."""
def onReceive(self, iface, packet): # type: ignore[override]
"""Normalise ``packet`` before dispatching to the parent handler.
Injects a canonical ``id`` field when one can be inferred from the
packet's other fields, then delegates to the original
``NodeInfoHandler.onReceive``. A ``KeyError`` on ``"id"`` is
suppressed because some firmware versions omit the field entirely.
Parameters:
iface: The Meshtastic interface that received the packet.
packet: Raw nodeinfo packet dict, possibly lacking an ``id``
key.
Returns:
The return value of the parent handler, or ``None`` when a
missing ``"id"`` key would otherwise raise.
"""
normalised = _normalise_nodeinfo_packet(packet)
if normalised is not None:
packet = normalised
@@ -472,16 +511,96 @@ def _resolve_lora_message(local_config: Any) -> Any | None:
return None
# Maps Meshtastic region enum name to (base_freq_MHz, channel_spacing_MHz).
# Values are derived from the Meshtastic firmware RegionInfo tables.
# Used by _computed_channel_frequency to derive the actual radio frequency
# from the region and channel index.
_REGION_CHANNEL_PARAMS: dict[str, tuple[float, float]] = {
"US": (902.0, 0.25), # 902928 MHz; e.g. ch 52 ≈ 915 MHz at 250 kHz spacing
"EU_433": (433.175, 0.2),
"EU_868": (869.525, 0.5), # actual primary ≈ 869.525 MHz, not 868
"CN": (470.0, 0.2),
"JP": (920.875, 0.5),
"ANZ": (916.0, 0.5),
"KR": (921.9, 0.5),
"TW": (923.0, 0.5),
"RU": (868.9, 0.5),
"IN": (865.0, 0.5),
"NZ_865": (864.0, 0.5),
"TH": (920.0, 0.5),
"LORA_24": (2400.0, 0.5),
"UA_433": (433.175, 0.2),
"UA_868": (868.0, 0.5),
"MY_433": (433.0, 0.2),
"MY_919": (919.0, 0.5),
"SG_923": (923.0, 0.5),
"PH_433": (433.0, 0.2),
"PH_868": (868.0, 0.5),
"PH_915": (915.0, 0.5),
"ANZ_433": (433.0, 0.2),
"KZ_433": (433.0, 0.2),
"KZ_863": (863.125, 0.5),
"NP_865": (865.0, 0.5),
"BR_902": (902.0, 0.25),
# IL (Israel) is absent from meshtastic Python lib 2.7.8 protobufs; the
# enum value is unresolvable at runtime. Operators on IL firmware should
# set the FREQUENCY environment variable to override.
}
def _computed_channel_frequency(
enum_name: str | None,
channel_num: int | None,
) -> int | None:
"""Compute the floor MHz frequency for a known region and channel index.
Looks up *enum_name* in :data:`_REGION_CHANNEL_PARAMS` and returns
``floor(base_freq + channel_num * spacing)``. Returns ``None`` when the
region is not in the table. A missing or negative *channel_num* is
treated as 0 so the base frequency is always usable.
Args:
enum_name: Region enum name as returned by
:func:`_enum_name_from_field`, e.g. ``"EU_868"`` or ``"US"``.
channel_num: Zero-based channel index from the device LoRa config.
Returns:
Floored MHz as :class:`int`, or ``None`` if the region is unknown.
"""
if enum_name is None:
return None
params = _REGION_CHANNEL_PARAMS.get(enum_name)
if params is None:
return None
base, spacing = params
idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
return math.floor(base + idx * spacing)
def _region_frequency(lora_message: Any) -> int | float | str | None:
"""Derive the LoRa region frequency in MHz or the region label from ``lora_message``.
Numeric override values are floored to the nearest MHz to align with the
integer frequencies expected elsewhere in the ingestion pipeline.
Frequency sources are tried in priority order:
    1. ``override_frequency > 0`` → explicit radio override, floored to MHz.
    2. :data:`_REGION_CHANNEL_PARAMS` lookup + ``channel_num`` → actual
       band-plan frequency derived from the device's region and channel index,
       floored to MHz.
    3. Largest digit token ≥ 100 parsed from the region enum name string.
    4. Largest digit token < 100 from the enum name (reversed scan).
    5. Full enum name string, raw integer ≥ 100, or raw string as a label.
Args:
lora_message: A LoRa config protobuf message or compatible object.
Returns:
An integer MHz frequency, a fallback string label, or ``None``.
"""
if lora_message is None:
return None
# Step 1 — explicit radio override
override_frequency = getattr(lora_message, "override_frequency", None)
if override_frequency is not None:
if isinstance(override_frequency, (int, float)):
@@ -494,6 +613,15 @@ def _region_frequency(lora_message: Any) -> int | float | str | None:
if region_value is None:
return None
enum_name = _enum_name_from_field(lora_message, "region", region_value)
# Step 2 — lookup table + channel offset (actual band-plan frequency)
if enum_name:
channel_num = getattr(lora_message, "channel_num", None)
computed = _computed_channel_frequency(enum_name, channel_num)
if computed is not None:
return computed
    # Steps 3–5 — parse digits from enum name (fallback for unknown regions)
if enum_name:
digits = re.findall(r"\d+", enum_name)
for token in digits:
@@ -616,19 +744,13 @@ def _ensure_channel_metadata(iface: Any) -> None:
)
_DEFAULT_TCP_PORT = 4403
_DEFAULT_TCP_TARGET = "http://127.0.0.1"
_DEFAULT_SERIAL_PATTERNS = (
"/dev/ttyACM*",
"/dev/ttyUSB*",
"/dev/tty.usbmodem*",
"/dev/tty.usbserial*",
"/dev/cu.usbmodem*",
"/dev/cu.usbserial*",
)
_BLE_ADDRESS_RE = re.compile(r"^(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$")
# Private aliases so that existing internal callers and monkeypatching in
# tests keep working without modification.
_DEFAULT_TCP_PORT = DEFAULT_TCP_PORT # backward-compat alias
_DEFAULT_SERIAL_PATTERNS = DEFAULT_SERIAL_PATTERNS # backward-compat alias
_BLE_ADDRESS_RE = BLE_ADDRESS_RE # backward-compat alias
class _DummySerialInterface:
@@ -638,27 +760,11 @@ class _DummySerialInterface:
self.nodes: dict = {}
def close(self) -> None: # pragma: no cover - nothing to close
"""No-op: the dummy interface holds no resources to release."""
pass
def _parse_ble_target(value: str) -> str | None:
"""Return an uppercase BLE MAC address when ``value`` matches the format.
Parameters:
value: User-provided target string.
Returns:
The normalised MAC address or ``None`` when validation fails.
"""
if not value:
return None
value = value.strip()
if not value:
return None
if _BLE_ADDRESS_RE.fullmatch(value):
return value.upper()
return None
_parse_ble_target = parse_ble_target # backward-compat alias
def _parse_network_target(value: str) -> tuple[str, int] | None:
@@ -705,6 +811,9 @@ def _parse_network_target(value: str) -> tuple[str, int] | None:
if result:
return result
# For bare "host:port" strings that urlparse may misparse, try a manual
# partition. The `startswith("[")` guard excludes IPv6 bracket notation
# (e.g. "[::1]:8080") because those already succeed via urlparse above.
if value.count(":") == 1 and not value.startswith("["):
host, _, port_text = value.partition(":")
try:
@@ -772,10 +881,13 @@ def _create_serial_interface(port: str) -> tuple[object, str]:
return _DummySerialInterface(), "mock"
ble_target = _parse_ble_target(port_value)
if ble_target:
# Determine if it's a MAC address or UUID
address_type = "MAC" if ":" in ble_target else "UUID"
config._debug_log(
"Using BLE interface",
context="interfaces.ble",
address=ble_target,
address_type=address_type,
)
return _load_ble_interface()(address=ble_target), ble_target
network_target = _parse_network_target(port_value)
@@ -803,19 +915,7 @@ class NoAvailableMeshInterface(RuntimeError):
"""Raised when no default mesh interface can be created."""
def _default_serial_targets() -> list[str]:
"""Return candidate serial device paths for auto-discovery."""
candidates: list[str] = []
seen: set[str] = set()
for pattern in _DEFAULT_SERIAL_PATTERNS:
for path in sorted(glob.glob(pattern)):
if path not in seen:
candidates.append(path)
seen.add(path)
if "/dev/ttyACM0" not in seen:
candidates.append("/dev/ttyACM0")
return candidates
_default_serial_targets = default_serial_targets # backward-compat alias
def _create_default_interface() -> tuple[object, str]:
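The channel-frequency computation above is a small arithmetic rule: ``floor(base + channel_num * spacing)`` with a defensive fall-back to channel 0. A self-contained sketch using two entries from the table (the standalone name and reduced table are illustrative):

```python
import math

# Reduced copy of the region table: region -> (base_freq_MHz, spacing_MHz).
REGION_CHANNEL_PARAMS = {
    "US": (902.0, 0.25),
    "EU_868": (869.525, 0.5),
}

def computed_channel_frequency(enum_name, channel_num):
    """Floor MHz frequency for a known region; None for unknown regions."""
    params = REGION_CHANNEL_PARAMS.get(enum_name)
    if params is None:
        return None
    base, spacing = params
    # Missing or negative channel indexes fall back to the base frequency.
    idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
    return math.floor(base + idx * spacing)
```

With the US parameters, channel 52 gives ``floor(902.0 + 52 * 0.25) = 915``, matching the example in the table comment; an unknown region returns ``None`` so the caller can fall through to the digit-parsing steps.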
@@ -0,0 +1,57 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MeshProtocol interface for ingestion sources.
This module defines the seam so future protocols (MeshCore, Reticulum, ...) can
be added without changing the web app ingest contract.
"""
from __future__ import annotations
from collections.abc import Iterable
from typing import Protocol, runtime_checkable
@runtime_checkable
class MeshProtocol(Protocol):
"""Abstract mesh protocol source."""
name: str
def subscribe(self) -> list[str]:
"""Subscribe to any async receive callbacks and return topic names."""
def connect(
self, *, active_candidate: str | None
) -> tuple[object, str | None, str | None]:
"""Create an interface connection.
Returns:
(iface, resolved_target, next_active_candidate)
"""
def extract_host_node_id(self, iface: object) -> str | None:
"""Best-effort extraction of the connected host node id."""
def node_snapshot_items(self, iface: object) -> Iterable[tuple[str, object]]:
"""Return iterable of (node_id, node_obj) for initial snapshot."""
__all__ = [
"MeshProtocol",
]
# Backwards-compatibility alias — import Provider from here during transition.
Provider = MeshProtocol
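Because the class is decorated with ``@runtime_checkable``, ``isinstance`` checks verify structurally that an object exposes the protocol's members, with no inheritance required. A minimal sketch against a reduced two-method version of the protocol (``FakeMesh`` and the reduced protocol are hypothetical, for illustration only):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class MeshSource(Protocol):
    """Reduced illustration of the MeshProtocol seam."""
    name: str
    def connect(self, *, active_candidate): ...
    def extract_host_node_id(self, iface): ...

class FakeMesh:
    """Duck-typed source: satisfies MeshSource without subclassing it."""
    name = "fake"
    def connect(self, *, active_candidate=None):
        return (object(), "mock", None)
    def extract_host_node_id(self, iface):
        return "!00000001"

assert isinstance(FakeMesh(), MeshSource)
```

Note that ``issubclass`` is not available for protocols with non-method members such as ``name``; only ``isinstance`` checks work, and they test attribute presence, not signatures.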
@@ -0,0 +1,115 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Node identity helpers shared across ingestor providers.
The web application keys nodes by a canonical textual identifier of the form
``!%08x`` (lowercase hex). Both the Python collector and Ruby server accept
several input forms (ints, ``0x`` hex strings, ``!`` hex strings, decimal
strings). This module centralizes that normalization.
"""
from __future__ import annotations
from typing import Final
CANONICAL_PREFIX: Final[str] = "!"
def canonical_node_id(value: object) -> str | None:
"""Convert ``value`` into canonical ``!xxxxxxxx`` form.
Parameters:
value: Node reference which may be an int, float, or string.
Returns:
Canonical node id string or ``None`` when parsing fails.
"""
if value is None:
return None
if isinstance(value, (int, float)):
try:
num = int(value)
except (TypeError, ValueError):
return None
if num < 0:
return None
return f"{CANONICAL_PREFIX}{num & 0xFFFFFFFF:08x}"
if not isinstance(value, str):
return None
trimmed = value.strip()
if not trimmed:
return None
if trimmed.startswith("^"):
# Meshtastic special destinations like "^all" are not node ids; callers
# that already accept them should keep passing them through unchanged.
return trimmed
if trimmed.startswith(CANONICAL_PREFIX):
body = trimmed[1:]
elif trimmed.lower().startswith("0x"):
body = trimmed[2:]
elif trimmed.isdigit():
try:
return f"{CANONICAL_PREFIX}{int(trimmed, 10) & 0xFFFFFFFF:08x}"
except ValueError:
return None
else:
body = trimmed
if not body:
return None
try:
return f"{CANONICAL_PREFIX}{int(body, 16) & 0xFFFFFFFF:08x}"
except ValueError:
return None
def node_num_from_id(node_id: object) -> int | None:
"""Extract the numeric node identifier from a canonical (or near-canonical) id."""
if node_id is None:
return None
if isinstance(node_id, (int, float)):
try:
num = int(node_id)
except (TypeError, ValueError):
return None
return num if num >= 0 else None
if not isinstance(node_id, str):
return None
trimmed = node_id.strip()
if not trimmed:
return None
if trimmed.startswith(CANONICAL_PREFIX):
trimmed = trimmed[1:]
if trimmed.lower().startswith("0x"):
trimmed = trimmed[2:]
try:
return int(trimmed, 16)
except ValueError:
try:
return int(trimmed, 10)
except ValueError:
return None
__all__ = [
"CANONICAL_PREFIX",
"canonical_node_id",
"node_num_from_id",
]
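The normalization rules above can be condensed into a standalone sketch (this is not the module itself; it omits the float and ``^``-destination branches for brevity):

```python
def canonical_node_id(value):
    """Condensed sketch: ints, "0x" hex, "!" hex, and decimal strings
    all collapse to the canonical "!%08x" lowercase-hex form."""
    if isinstance(value, int):
        return f"!{value & 0xFFFFFFFF:08x}" if value >= 0 else None
    if not isinstance(value, str) or not (s := value.strip()):
        return None
    if s.isdigit():
        return f"!{int(s, 10) & 0xFFFFFFFF:08x}"
    # Strip a leading "!" or "0x" and parse the remainder as hex.
    body = s[1:] if s.startswith("!") else s[2:] if s.lower().startswith("0x") else s
    try:
        return f"!{int(body, 16) & 0xFFFFFFFF:08x}"
    except ValueError:
        return None

print(canonical_node_id(305419896))    # !12345678
print(canonical_node_id("0xDEADBEEF")) # !deadbeef
```

Note how every accepted input form converges on the same key, which is what lets the collector and the Ruby server agree on node identity.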
@@ -0,0 +1,44 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Protocol implementations.
This package contains protocol-specific implementations (Meshtastic,
MeshCore, and others in the future).
"""
from __future__ import annotations
from .meshtastic import MeshtasticProvider
def __getattr__(name: str) -> object:
"""Lazy-load protocol classes and exceptions that carry optional heavy dependencies.
``MeshcoreProvider`` and ``ClosedBeforeConnectedError`` are imported on
demand so that the MeshCore library (once wired in) is not loaded at
startup when ``PROTOCOL=meshtastic``.
"""
if name == "MeshcoreProvider":
from .meshcore import MeshcoreProvider
return MeshcoreProvider
if name == "ClosedBeforeConnectedError":
from .meshcore import ClosedBeforeConnectedError
return ClosedBeforeConnectedError
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
__all__ = ["MeshtasticProvider", "MeshcoreProvider", "ClosedBeforeConnectedError"]
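The lazy loading above relies on PEP 562 module-level ``__getattr__``. A minimal standalone sketch of the pattern, using the stdlib ``json`` module as a stand-in for the heavy optional dependency (the ``lazy_demo`` name is illustrative):

```python
import types

# A synthetic module whose attributes are resolved lazily, mirroring
# how the protocols package defers the MeshCore import.
mod = types.ModuleType("lazy_demo")

def _module_getattr(name: str):
    if name == "loads":
        import json  # deferred until first attribute access
        return json.loads
    raise AttributeError(f"module 'lazy_demo' has no attribute {name!r}")

# PEP 562: when normal lookup fails, the module's __getattr__ is called.
mod.__getattr__ = _module_getattr
print(mod.loads("[1, 2]"))  # [1, 2]
```

The payoff is the same as in the package above: importing the module stays cheap, and the optional dependency is only paid for when the corresponding name is actually used.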
File diff suppressed because it is too large.
@@ -0,0 +1,100 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Meshtastic protocol implementation."""
from __future__ import annotations
from pubsub import pub
from .. import config, daemon as _daemon, handlers, interfaces
from ..utils import _retry_dict_snapshot
class MeshtasticProvider:
"""Meshtastic ingestion protocol (current default)."""
name = "meshtastic"
def __init__(self):
self._subscribed: list[str] = []
def subscribe(self) -> list[str]:
"""Subscribe Meshtastic pubsub receive topics."""
if self._subscribed:
return list(self._subscribed)
subscribed = []
for topic in _daemon._RECEIVE_TOPICS:
try:
pub.subscribe(handlers.on_receive, topic)
subscribed.append(topic)
except Exception as exc: # pragma: no cover
config._debug_log(f"failed to subscribe to {topic!r}: {exc}")
self._subscribed = subscribed
return list(subscribed)
def connect(
self, *, active_candidate: str | None
) -> tuple[object, str | None, str | None]:
"""Create a Meshtastic interface using the existing interface helpers."""
iface = None
resolved_target = None
next_candidate = active_candidate
if active_candidate:
iface, resolved_target = interfaces._create_serial_interface(
active_candidate
)
else:
iface, resolved_target = interfaces._create_default_interface()
next_candidate = resolved_target
interfaces._ensure_radio_metadata(iface)
interfaces._ensure_channel_metadata(iface)
return iface, resolved_target, next_candidate
def extract_host_node_id(self, iface: object) -> str | None:
return interfaces._extract_host_node_id(iface)
def node_snapshot_items(self, iface: object) -> list[tuple[str, object]]:
"""Return a stable snapshot of all known nodes from ``iface``.
Uses :func:`~data.mesh_ingestor.utils._retry_dict_snapshot` to
tolerate concurrent modifications from the Meshtastic background
thread.
Parameters:
iface: Live Meshtastic interface whose ``nodes`` dict to snapshot.
Returns:
List of ``(node_id, node_dict)`` tuples, or an empty list when
the snapshot fails after retries.
"""
nodes = getattr(iface, "nodes", {}) or {}
result = _retry_dict_snapshot(lambda: list(nodes.items()))
if result is None:
config._debug_log(
"Skipping node snapshot due to concurrent modification",
context="meshtastic.snapshot",
)
return []
return result
__all__ = ["MeshtasticProvider"]
@@ -73,52 +73,61 @@ def _payload_key_value_pairs(payload: Mapping[str, object]) -> str:
return " ".join(pairs)
_MESSAGE_POST_PRIORITY = 10
_INGESTOR_POST_PRIORITY = 80
_NEIGHBOR_POST_PRIORITY = 20
_TRACE_POST_PRIORITY = 25
_POSITION_POST_PRIORITY = 30
_TELEMETRY_POST_PRIORITY = 40
_NODE_POST_PRIORITY = 50
_INGESTOR_POST_PRIORITY = 0
_CHANNEL_POST_PRIORITY = 10
_NODE_POST_PRIORITY = 20
_MESSAGE_POST_PRIORITY = 30
_NEIGHBOR_POST_PRIORITY = 40
_TRACE_POST_PRIORITY = 50
_POSITION_POST_PRIORITY = 60
_TELEMETRY_POST_PRIORITY = 70
_DEFAULT_POST_PRIORITY = 90
_MAX_SEND_RETRIES = 3
"""Maximum number of times a failed POST item is re-queued before being dropped."""
@dataclass
class QueueState:
"""Mutable state for the HTTP POST priority queue."""
lock: threading.Lock = field(default_factory=threading.Lock)
queue: list[tuple[int, int, str, dict]] = field(default_factory=list)
# Heap tuple: (priority, counter, path, payload, retries).
queue: list[tuple[int, int, str, dict, int]] = field(default_factory=list)
counter: Iterable[int] = field(default_factory=itertools.count)
active: bool = False
# Background drain thread. When the drainer is alive, _queue_post_json
# signals drain_event instead of blocking the caller with HTTP calls.
drain_event: threading.Event = field(default_factory=threading.Event)
drainer: threading.Thread | None = None
# Set to request the drainer thread to exit its loop cleanly.
shutdown: threading.Event = field(default_factory=threading.Event)
STATE = QueueState()
def _post_json(
def _send_single(
instance: str,
api_token: str,
path: str,
payload: dict,
*,
instance: str | None = None,
api_token: str | None = None,
) -> None:
"""Send a JSON payload to the configured web API.
) -> bool:
"""Transmit a single JSON payload to one instance.
Parameters:
path: API path relative to the configured instance root.
instance: Base URL of the target instance.
api_token: Bearer token for this instance (may be empty).
path: API path relative to the instance root.
payload: JSON-serialisable body to transmit.
instance: Optional override for :data:`config.INSTANCE`.
api_token: Optional override for :data:`config.API_TOKEN`.
Returns:
``True`` when the request succeeded, ``False`` on failure.
"""
if instance is None:
instance = config.INSTANCE
if api_token is None:
api_token = config.API_TOKEN
if not instance:
return
return True
url = f"{instance}{path}"
data = json.dumps(payload).encode("utf-8")
@@ -143,15 +152,80 @@ def _post_json(
try:
with urllib.request.urlopen(req, timeout=10) as resp:
resp.read()
except Exception as exc: # pragma: no cover - exercised in production
return True
except Exception as exc:
config._debug_log(
"POST request failed",
context="queue.post_json",
severity="warn",
always=True,
url=url,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
return False
def _post_json(
path: str,
payload: dict,
*,
instance: str | None = None,
api_token: str | None = None,
) -> bool:
"""Send a JSON payload to one or more configured web API instances.
When ``instance`` is provided explicitly the payload is sent to that
single target. Otherwise every ``(url, token)`` pair in
:data:`config.INSTANCES` receives the payload independently so that
one failure does not block delivery to the remaining targets.
Parameters:
path: API path relative to the instance root.
payload: JSON-serialisable body to transmit.
instance: Optional single-instance override.
api_token: Optional token override (only used with ``instance``).
Returns:
``True`` when at least one instance received the payload
successfully, ``False`` when all targets failed. A missing
configuration is not a transient failure and returns ``True``
(retrying would not help).
"""
if instance is not None:
if not instance:
return True
return _send_single(instance, api_token or "", path, payload)
targets: tuple[tuple[str, str], ...] = config.INSTANCES
if not targets:
# Backward-compatible fallback for callers that only set
# config.INSTANCE / config.API_TOKEN directly.
inst = config.INSTANCE
if not inst:
try:
config._debug_log(
"No target instances configured; discarding payload",
context="queue.post_json",
severity="error",
always=True,
path=path,
)
except Exception:
pass
return False
return _send_single(inst, api_token or config.API_TOKEN, path, payload)
any_ok = False
any_attempted = False
for inst, token in targets:
if not inst:
continue
any_attempted = True
if _send_single(inst, token, path, payload):
any_ok = True
return any_ok or not any_attempted
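The fan-out rule implemented above can be isolated into a small sketch (``send`` and the URLs are illustrative stand-ins for ``_send_single`` and real instances):

```python
def fan_out(targets, send):
    """Mirror _post_json's multi-instance rule: success when at least
    one non-blank target accepted the payload; an all-blank target list
    counts as success because retrying would not help."""
    any_ok = any_attempted = False
    for inst, token in targets:
        if not inst:
            continue  # skip blank instance URLs
        any_attempted = True
        if send(inst, token):
            any_ok = True
    return any_ok or not any_attempted

# One of two targets succeeding is enough.
print(fan_out([("https://a", ""), ("https://b", "")],
              lambda inst, tok: inst == "https://b"))  # True
```

This is why one unreachable federation peer cannot block delivery to the others, while a total outage still signals failure so the queue can retry.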
def _enqueue_post_json(
@@ -160,6 +234,7 @@ def _enqueue_post_json(
priority: int,
*,
state: QueueState = STATE,
retries: int = 0,
) -> None:
"""Store a POST request in the priority queue.
@@ -168,11 +243,17 @@ def _enqueue_post_json(
payload: JSON-serialisable body.
priority: Lower values execute first.
state: Shared queue state, injectable for testing.
retries: Number of prior failed send attempts for this item.
"""
with state.lock:
counter = next(state.counter)
heapq.heappush(state.queue, (priority, counter, path, payload))
# Heap tuple: (priority, counter, path, payload, retries). Lower
# priority values are dequeued first (min-heap semantics). The
# monotonically increasing counter breaks ties so equal-priority
# items are processed in FIFO order without comparing the
# non-orderable payload dict.
heapq.heappush(state.queue, (priority, counter, path, payload, retries))
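The heap-tuple ordering described in that comment can be demonstrated directly (the paths are illustrative):

```python
import heapq
import itertools

# (priority, counter, path, payload, retries): lower priority dequeues
# first; the monotonically increasing counter breaks ties FIFO so the
# non-orderable payload dicts are never compared.
counter = itertools.count()
heap: list = []
for prio, path in [(30, "/api/messages"), (0, "/api/ingestors"),
                   (30, "/api/positions")]:
    heapq.heappush(heap, (prio, next(counter), path, {"x": 1}, 0))

order = [heapq.heappop(heap)[2] for _ in range(3)]
print(order)  # ['/api/ingestors', '/api/messages', '/api/positions']
```

Without the counter, two equal-priority tuples would fall through to comparing their payload dicts and raise ``TypeError``.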
def _drain_post_queue(
@@ -180,6 +261,12 @@ def _drain_post_queue(
) -> None:
"""Process queued POST requests in priority order.
When the *send* callable returns ``False`` (transient failure) the item
is re-queued up to :data:`_MAX_SEND_RETRIES` times. Items exceeding
the limit are dropped with a warning. Custom *send* callables that
return ``None`` (the typical test/heartbeat pattern) are never retried;
the ``result is False`` identity check ensures backward compatibility.
Parameters:
state: Queue container holding pending items.
send: Optional callable used to transmit requests.
@@ -194,13 +281,184 @@ def _drain_post_queue(
if not state.queue:
state.active = False
return
_priority, _idx, path, payload = heapq.heappop(state.queue)
send(path, payload)
item = heapq.heappop(state.queue)
# Support both 5-tuple (current) and 4-tuple (legacy/test) items.
if len(item) >= 5:
priority, _idx, path, payload, retries = item[:5]
else:
priority, _idx, path, payload = item[:4]
retries = 0
result = send(path, payload)
# Only retry when the send callable explicitly signals failure
# (returns False). Custom send callables (tests, heartbeat)
# return None and must NOT be treated as failures.
if result is False:
if retries < _MAX_SEND_RETRIES:
_enqueue_post_json(
path, payload, priority, state=state, retries=retries + 1
)
else:
try:
config._debug_log(
"Dropping item after max retries",
context="queue.drain",
severity="warn",
always=True,
path=path,
retries=retries,
)
except Exception:
pass
finally:
with state.lock:
state.active = False
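The drain-with-retry rule can be sketched in isolation (path, payload, and the always-failing ``send`` are illustrative; the constant mirrors ``_MAX_SEND_RETRIES``):

```python
import heapq

MAX_SEND_RETRIES = 3
queue = [(10, 0, "/api/x", {"v": 1}, 0)]
attempts, dropped = [], []

def send(path, payload):
    attempts.append(path)
    return False  # always fail, to exercise the retry/drop path

while queue:
    prio, idx, path, payload, retries = heapq.heappop(queue)
    if send(path, payload) is False:
        if retries < MAX_SEND_RETRIES:
            # Re-queue with the retry count bumped, same priority.
            heapq.heappush(queue, (prio, idx, path, payload, retries + 1))
        else:
            dropped.append(path)  # exceeded the limit: drop with a warning

print(len(attempts), dropped)  # 4 ['/api/x']
```

The item is attempted once plus three retries before being dropped, bounding the work a permanently failing endpoint can consume.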
_QUEUE_DEPTH_WARNING_THRESHOLD = 100
"""Log a warning when the queue grows past this many items."""
def _queue_drainer_loop(state: QueueState = STATE) -> None:
"""Body of the background queue-drain daemon thread.
Blocks on :attr:`QueueState.drain_event`, clears it, then empties the
queue by calling :func:`_drain_post_queue`. The thread is created as a
daemon so it terminates automatically when the process exits.
The loop exits cleanly when :attr:`QueueState.shutdown` is set, allowing
tests (and graceful-shutdown paths) to join the thread instead of leaking
daemon threads that accumulate across a test run.
The loop is deliberately hardened so that **no** :class:`Exception` can
kill the thread. The ``_debug_log`` calls inside the error handler are
themselves wrapped in ``try/except`` to prevent cascading failures
(e.g. ``BrokenPipeError`` from ``print()`` to a closed stdout).
.. note::
There is a benign race between ``drain_event.clear()`` and the end
of :func:`_drain_post_queue`: a signal arriving in that window is
consumed by ``clear()`` but the item is still drained because the
drain loop empties the queue completely. However, an item enqueued
*after* the drain loop finds the queue empty and *before*
``wait()`` re-blocks will sit until the next ``drain_event.set()``
call (i.e. the next enqueue). This is acceptable for a best-effort
ingestor: the maximum extra latency equals the inter-packet interval.
Parameters:
state: Queue state instance to drain.
"""
try:
config._debug_log(
"Queue drainer thread started",
context="queue.drainer",
severity="info",
always=True,
)
except Exception:
pass
while not state.shutdown.is_set():
state.drain_event.wait(timeout=1.0)
if state.shutdown.is_set():
break
state.drain_event.clear()
depth = len(state.queue)
if depth > _QUEUE_DEPTH_WARNING_THRESHOLD:
try:
config._debug_log(
"Queue depth warning",
context="queue.drainer",
severity="warn",
always=True,
depth=depth,
)
except Exception:
pass
try:
_drain_post_queue(state)
except Exception as exc:
try:
config._debug_log(
"Queue drainer error",
context="queue.drainer",
severity="error",
always=True,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
except Exception:
pass
try:
config._debug_log(
"Queue drainer thread exiting",
context="queue.drainer",
severity="info",
always=True,
)
except Exception:
pass
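The wait/clear/drain/shutdown shape of that loop can be reduced to a minimal sketch, with a plain list standing in for the heap:

```python
import threading
import time

work: list = []
drain_event, shutdown = threading.Event(), threading.Event()

def drainer() -> None:
    # Same skeleton as _queue_drainer_loop: block with a timeout so the
    # shutdown flag is observed even without a signal, clear, then drain.
    while not shutdown.is_set():
        drain_event.wait(timeout=0.1)
        if shutdown.is_set():
            break
        drain_event.clear()
        while work:
            work.pop(0)  # "send" each queued item

t = threading.Thread(target=drainer, name="queue-drainer", daemon=True)
t.start()
work.append("item")
drain_event.set()   # wake the drainer instead of sending inline
time.sleep(0.5)
shutdown.set()
drain_event.set()   # unblock wait() so the loop observes shutdown
t.join(timeout=2.0)
print(work)  # []
```

The timeout on ``wait()`` is what makes the benign race noted above bounded: even a missed signal is recovered on the next wakeup.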
def _start_queue_drainer(state: QueueState = STATE) -> None:
"""Idempotently start the background queue-drain thread.
Calling this function when a drainer thread is already alive is a
no-op. The thread is created as a daemon so it does not prevent
process exit. The check-and-start is performed under :attr:`state.lock`
to avoid starting duplicate threads under concurrent callers.
If items are already in the queue when the drainer is started,
:attr:`QueueState.drain_event` is signalled immediately so they are not
stranded waiting for the next packet to arrive.
Parameters:
state: Queue state whose :func:`_queue_drainer_loop` to start.
"""
with state.lock:
if state.drainer is not None and state.drainer.is_alive():
return
# Reset in case the prior thread was stopped or crashed while
# shutdown was already set.
state.shutdown.clear()
t = threading.Thread(
target=_queue_drainer_loop,
args=(state,),
name="queue-drainer",
daemon=True,
)
t.start()
state.drainer = t
if state.queue:
state.drain_event.set()
def _stop_queue_drainer(state: QueueState = STATE, timeout: float = 5.0) -> None:
"""Signal the drainer thread to exit and wait for it to finish.
Sets :attr:`QueueState.shutdown` and :attr:`QueueState.drain_event` so
the loop wakes up, observes the shutdown flag, and terminates. After
joining (up to *timeout* seconds) the drainer reference is cleared.
Safe to call when no drainer is running (no-op).
Parameters:
state: Queue state whose drainer to stop.
timeout: Maximum seconds to wait for the thread to finish.
"""
if state.drainer is None or not state.drainer.is_alive():
return
state.shutdown.set()
state.drain_event.set()
state.drainer.join(timeout=timeout)
state.drainer = None
def _queue_post_json(
path: str,
payload: dict,
@@ -209,14 +467,32 @@ def _queue_post_json(
state: QueueState = STATE,
send: Callable[[str, dict], None] | None = None,
) -> None:
"""Queue a POST request and start processing if idle.
"""Queue a POST request and wake the drain thread (or drain inline).
When a background drainer thread is running (started via
:func:`_start_queue_drainer`), this function enqueues the item and
signals :attr:`QueueState.drain_event` without blocking; the drain
happens on the dedicated thread. This keeps the caller's thread (which
may be the Meshtastic asyncio I/O thread) free to process serial events.
When no background drainer is alive the call falls back to a
synchronous inline drain. This path is used by tests (which pass a
``send`` override via :func:`_fresh_state`) and for any standalone use
without calling :func:`_start_queue_drainer`.
.. note::
The background drainer is used **only** when no custom ``send``
override is provided (i.e. the production ``_post_json`` path).
Any caller that supplies a custom ``send`` (tests, heartbeat
helpers) always gets the synchronous inline drain so its transport
is honoured correctly.
Parameters:
path: API path for the request.
payload: JSON payload to send.
priority: Scheduling priority where lower values run first.
state: Queue container used to store pending requests.
send: Optional transport override, primarily for tests.
send: Optional transport override (synchronous fallback only).
"""
if send is None:
@@ -236,6 +512,42 @@ def _queue_post_json(
)
_enqueue_post_json(path, payload, priority, state=state)
# Use the background drainer only when it is alive AND no custom send
# override is in play. A custom send (used by tests and callers such as
# ingestors.queue_ingestor_heartbeat) must be honoured synchronously
# because the background drainer always calls _drain_post_queue without
# a send override.
#
# The ``is`` check is intentional: _post_json is a module-level function
# so identity comparison reliably detects the "no override" default that
# was assigned at the top of this function.
if send is _post_json:
if state.drainer is not None and state.drainer.is_alive():
state.drain_event.set()
return
# The drainer was previously started but has died (e.g. unhandled
# exception). Restart it so the caller stays non-blocking and the
# MeshCore asyncio event loop is not stalled by inline HTTP calls.
if state.drainer is not None:
try:
config._debug_log(
"Restarting dead queue drainer thread",
context="queue.queue_post_json",
severity="warn",
always=True,
)
except Exception:
pass
_start_queue_drainer(state)
# If the restart succeeded, delegate to the background thread.
if state.drainer is not None and state.drainer.is_alive():
state.drain_event.set()
return
# Synchronous fallback: no drainer was ever started, the restart
# failed, or a custom send override is in play.
with state.lock:
if state.active:
return
@@ -258,17 +570,23 @@ def _clear_post_queue(state: QueueState = STATE) -> None:
__all__ = [
"STATE",
"QueueState",
"_CHANNEL_POST_PRIORITY",
"_DEFAULT_POST_PRIORITY",
"_MESSAGE_POST_PRIORITY",
"_INGESTOR_POST_PRIORITY",
"_MAX_SEND_RETRIES",
"_MESSAGE_POST_PRIORITY",
"_NEIGHBOR_POST_PRIORITY",
"_NODE_POST_PRIORITY",
"_POSITION_POST_PRIORITY",
"_QUEUE_DEPTH_WARNING_THRESHOLD",
"_TRACE_POST_PRIORITY",
"_TELEMETRY_POST_PRIORITY",
"_clear_post_queue",
"_drain_post_queue",
"_enqueue_post_json",
"_post_json",
"_queue_drainer_loop",
"_queue_post_json",
"_start_queue_drainer",
"_stop_queue_drainer",
]
@@ -33,6 +33,9 @@ from google.protobuf.json_format import MessageToDict
from google.protobuf.message import DecodeError
from google.protobuf.message import Message as ProtoMessage
from .node_identity import canonical_node_id as _canonical_node_id
from .node_identity import node_num_from_id as _node_num_from_id
_CLI_ROLE_MODULE_NAMES: tuple[str, ...] = (
"meshtastic.cli.common",
"meshtastic.cli.roles",
@@ -125,6 +128,10 @@ def _load_cli_role_lookup() -> dict[int, str]:
mapping[key_int] = str(value)
return mapping
# Iterate through candidate module paths in preference order. The CLI
# package ships several role-enum locations across versions; we stop at
# the first module that yields a non-empty mapping so we do not silently
# merge partial enums from two different meshtastic-cli releases.
for module_name in _CLI_ROLE_MODULE_NAMES:
try:
module = importlib.import_module(module_name)
@@ -429,91 +436,6 @@ def _pkt_to_dict(packet) -> dict:
return {"_unparsed": str(packet)}
def _canonical_node_id(value) -> str | None:
"""Convert node identifiers into the canonical ``!xxxxxxxx`` format.
Parameters:
value: Input identifier which may be an int, float or string.
Returns:
The canonical identifier or ``None`` if conversion fails.
"""
if value is None:
return None
if isinstance(value, (int, float)):
try:
num = int(value)
except (TypeError, ValueError):
return None
if num < 0:
return None
return f"!{num & 0xFFFFFFFF:08x}"
if not isinstance(value, str):
return None
trimmed = value.strip()
if not trimmed:
return None
if trimmed.startswith("^"):
return trimmed
if trimmed.startswith("!"):
body = trimmed[1:]
elif trimmed.lower().startswith("0x"):
body = trimmed[2:]
elif trimmed.isdigit():
try:
return f"!{int(trimmed, 10) & 0xFFFFFFFF:08x}"
except ValueError:
return None
else:
body = trimmed
if not body:
return None
try:
return f"!{int(body, 16) & 0xFFFFFFFF:08x}"
except ValueError:
return None
def _node_num_from_id(node_id) -> int | None:
"""Extract the numeric node ID from a canonical identifier.
Parameters:
node_id: Identifier value accepted by :func:`_canonical_node_id`.
Returns:
The numeric node ID or ``None`` when parsing fails.
"""
if node_id is None:
return None
if isinstance(node_id, (int, float)):
try:
num = int(node_id)
except (TypeError, ValueError):
return None
return num if num >= 0 else None
if not isinstance(node_id, str):
return None
trimmed = node_id.strip()
if not trimmed:
return None
if trimmed.startswith("!"):
trimmed = trimmed[1:]
if trimmed.lower().startswith("0x"):
trimmed = trimmed[2:]
try:
return int(trimmed, 16)
except ValueError:
try:
return int(trimmed, 10)
except ValueError:
return None
def _merge_mappings(base, extra):
"""Merge two mapping-like objects recursively.
@@ -0,0 +1,56 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Shared utility helpers for the mesh ingestor package."""
from __future__ import annotations
import time
from typing import Callable, TypeVar
_T = TypeVar("_T")
def _retry_dict_snapshot(fn: Callable[[], _T], retries: int = 3) -> _T | None:
"""Call ``fn()`` retrying on concurrent dictionary-modification errors.
Meshtastic's node dictionary is updated on a background thread. Iterating
it can raise a :class:`RuntimeError` with the message "dictionary changed
size during iteration". This helper retries the call up to ``retries``
times, yielding the thread scheduler between attempts via :func:`time.sleep`.
Parameters:
fn: Zero-argument callable that performs the iteration.
retries: Maximum number of attempts before giving up.
Returns:
The return value of ``fn`` on success, or ``None`` when all retries are
exhausted.
"""
for _ in range(max(1, retries)):
try:
return fn()
except RuntimeError as err:
# Only retry the specific concurrent-modification error; re-raise
# anything else so genuine bugs surface immediately.
if "dictionary changed size during iteration" not in str(err):
raise
# Yield to the thread scheduler to let the mutating thread complete
# before we attempt the snapshot again.
time.sleep(0)
return None
__all__ = ["_retry_dict_snapshot"]
@@ -29,7 +29,9 @@ CREATE TABLE IF NOT EXISTS messages (
modem_preset TEXT,
channel_name TEXT,
reply_id INTEGER,
emoji TEXT
emoji TEXT,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic'
);
CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time);
@@ -0,0 +1,39 @@
-- Copyright © 2025-26 l5yth & contributors
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
-- Add a protocol column to every entity and event table so records from
-- different mesh backends (meshtastic, meshcore, reticulum, …) can co-exist
-- in the same database and be queried independently.
--
-- Existing rows default to 'meshtastic' for backward compatibility.
BEGIN;
ALTER TABLE ingestors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE nodes ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE messages ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE positions ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE telemetry ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE traces ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE neighbors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
-- Indices to support ?protocol= filtering on every entity endpoint without
-- full table scans as multi-protocol traffic grows.
CREATE INDEX IF NOT EXISTS idx_ingestors_protocol ON ingestors(protocol);
CREATE INDEX IF NOT EXISTS idx_nodes_protocol ON nodes(protocol);
CREATE INDEX IF NOT EXISTS idx_messages_protocol ON messages(protocol);
CREATE INDEX IF NOT EXISTS idx_positions_protocol ON positions(protocol);
CREATE INDEX IF NOT EXISTS idx_telemetry_protocol ON telemetry(protocol);
CREATE INDEX IF NOT EXISTS idx_traces_protocol ON traces(protocol);
CREATE INDEX IF NOT EXISTS idx_neighbors_protocol ON neighbors(protocol);
COMMIT;
@@ -0,0 +1,47 @@
-- Copyright © 2025-26 l5yth & contributors
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
-- Add telemetry subtype discriminator to enable per-chart type filtering.
-- Backfills existing rows using field-presence heuristics that mirror
-- classifySnapshot() in node-page.js, so historical data is classified
-- consistently regardless of whether the new ingestors are deployed yet.
BEGIN;
ALTER TABLE telemetry ADD COLUMN telemetry_type TEXT;
-- Device metrics: battery/channel fields are exclusive to device_metrics
UPDATE telemetry SET telemetry_type = 'device'
WHERE telemetry_type IS NULL
AND (battery_level IS NOT NULL OR channel_utilization IS NOT NULL
OR air_util_tx IS NOT NULL OR uptime_seconds IS NOT NULL);
-- Power sensor: current is the unambiguous power-sensor discriminator.
-- voltage is intentionally excluded here: device_metrics also stores a voltage
-- reading (~4.2 V for battery), so using voltage alone would misclassify device
-- rows whose four device-discriminator fields (battery_level, channel_utilization,
-- air_util_tx, uptime_seconds) happen to be NULL. Rows that have only voltage
-- and no other classifiable fields are left as NULL (unclassified), which is
-- more accurate than a wrong classification.
UPDATE telemetry SET telemetry_type = 'power'
WHERE telemetry_type IS NULL
AND current IS NOT NULL;
-- Environment: temperature/humidity/pressure
UPDATE telemetry SET telemetry_type = 'environment'
WHERE telemetry_type IS NULL
AND (temperature IS NOT NULL OR relative_humidity IS NOT NULL
OR barometric_pressure IS NOT NULL OR iaq IS NOT NULL
OR gas_resistance IS NOT NULL);
COMMIT;
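The backfill heuristics above can be checked end to end against an in-memory database (a trimmed-down telemetry table with illustrative values, not the real schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE telemetry (
  id INTEGER PRIMARY KEY, battery_level REAL, "current" REAL,
  temperature REAL, voltage REAL, telemetry_type TEXT);
INSERT INTO telemetry (battery_level) VALUES (87.0);
INSERT INTO telemetry ("current")     VALUES (0.12);
INSERT INTO telemetry (temperature)   VALUES (21.5);
INSERT INTO telemetry (voltage)       VALUES (4.2);  -- voltage alone: stays NULL
UPDATE telemetry SET telemetry_type = 'device'
  WHERE telemetry_type IS NULL AND battery_level IS NOT NULL;
UPDATE telemetry SET telemetry_type = 'power'
  WHERE telemetry_type IS NULL AND "current" IS NOT NULL;
UPDATE telemetry SET telemetry_type = 'environment'
  WHERE telemetry_type IS NULL AND temperature IS NOT NULL;
""")
rows = db.execute(
    "SELECT telemetry_type FROM telemetry ORDER BY id").fetchall()
print(rows)  # [('device',), ('power',), ('environment',), (None,)]
```

The voltage-only row staying ``NULL`` is the point of the comment in the migration: an unclassified row is preferred over a wrong ``power`` classification.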
@@ -17,6 +17,8 @@ CREATE TABLE IF NOT EXISTS neighbors (
neighbor_id TEXT NOT NULL,
snr REAL,
rx_time INTEGER NOT NULL,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic',
PRIMARY KEY (node_id, neighbor_id),
FOREIGN KEY (node_id) REFERENCES nodes(node_id) ON DELETE CASCADE,
FOREIGN KEY (neighbor_id) REFERENCES nodes(node_id) ON DELETE CASCADE
@@ -41,9 +41,12 @@ CREATE TABLE IF NOT EXISTS nodes (
longitude REAL,
altitude REAL,
lora_freq INTEGER,
modem_preset TEXT
modem_preset TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic',
synthetic BOOLEAN NOT NULL DEFAULT 0
);
CREATE INDEX IF NOT EXISTS idx_nodes_last_heard ON nodes(last_heard);
CREATE INDEX IF NOT EXISTS idx_nodes_hw_model ON nodes(hw_model);
CREATE INDEX IF NOT EXISTS idx_nodes_latlon ON nodes(latitude, longitude);
CREATE INDEX IF NOT EXISTS idx_nodes_long_name ON nodes(long_name);
@@ -33,7 +33,9 @@ CREATE TABLE IF NOT EXISTS positions (
rssi INTEGER,
hop_limit INTEGER,
bitfield INTEGER,
payload_b64 TEXT
payload_b64 TEXT,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic'
);
CREATE INDEX IF NOT EXISTS idx_positions_rx_time ON positions(rx_time);
@@ -1,5 +1,7 @@
# Production dependencies
meshtastic>=2.5.0
meshcore>=2.3.5
bleak>=0.21.0
protobuf>=5.27.2
# Development dependencies (optional)
@@ -53,7 +53,10 @@ CREATE TABLE IF NOT EXISTS telemetry (
rainfall_1h REAL,
rainfall_24h REAL,
soil_moisture INTEGER,
soil_temperature REAL
soil_temperature REAL,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic',
telemetry_type TEXT
);
CREATE INDEX IF NOT EXISTS idx_telemetry_rx_time ON telemetry(rx_time);
@@ -21,7 +21,9 @@ CREATE TABLE IF NOT EXISTS traces (
rx_iso TEXT NOT NULL,
rssi INTEGER,
snr REAL,
elapsed_ms INTEGER
elapsed_ms INTEGER,
ingestor TEXT,
protocol TEXT NOT NULL DEFAULT 'meshtastic'
);
CREATE TABLE IF NOT EXISTS trace_hops (
@@ -49,3 +49,21 @@ services:
environment:
DEBUG: 0
restart: always
matrix-bridge:
build:
context: .
dockerfile: matrix/Dockerfile
target: runtime
environment:
DEBUG: 0
restart: always
matrix-bridge-bridge:
build:
context: .
dockerfile: matrix/Dockerfile
target: runtime
environment:
DEBUG: 0
restart: always
@@ -34,6 +34,7 @@ x-web-base: &web-base
- potatomesh_data:/app/.local/share/potato-mesh
- potatomesh_config:/app/.config/potato-mesh
- potatomesh_logs:/app/logs
- potatomesh_pages:/app/pages
restart: unless-stopped
deploy:
resources:
@@ -49,11 +50,13 @@ x-ingestor-base: &ingestor-base
environment:
CONNECTION: ${CONNECTION:-/dev/ttyACM0}
CHANNEL_INDEX: ${CHANNEL_INDEX:-0}
ALLOWED_CHANNELS: ${ALLOWED_CHANNELS:-""}
HIDDEN_CHANNELS: ${HIDDEN_CHANNELS:-""}
API_TOKEN: ${API_TOKEN}
INSTANCE_DOMAIN: ${INSTANCE_DOMAIN}
POTATOMESH_INSTANCE: ${POTATOMESH_INSTANCE:-http://web:41447}
INSTANCE_DOMAIN: ${INSTANCE_DOMAIN:-http://web:41447}
DEBUG: ${DEBUG:-0}
PROTOCOL: ${PROTOCOL:-meshtastic}
ENERGY_SAVING: ${ENERGY_SAVING:-0}
FEDERATION: ${FEDERATION:-1}
PRIVATE: ${PRIVATE:-0}
volumes:
@@ -80,7 +83,12 @@ x-matrix-bridge-base: &matrix-bridge-base
image: ghcr.io/l5yth/potato-mesh-matrix-bridge-${POTATOMESH_IMAGE_ARCH:-linux-amd64}:${POTATOMESH_IMAGE_TAG:-latest}
volumes:
- potatomesh_matrix_bridge_state:/app
- ./matrix/Config.toml:/app/Config.toml:ro
- type: bind
source: ./matrix/Config.toml
target: /app/Config.toml
read_only: true
bind:
create_host_path: false
restart: unless-stopped
deploy:
resources:
@@ -127,6 +135,8 @@ services:
matrix-bridge:
<<: *matrix-bridge-base
network_mode: host
profiles:
- matrix
depends_on:
- web
extra_hosts:
@@ -139,6 +149,8 @@ services:
- potatomesh-network
depends_on:
- web-bridge
ports:
- "41448:41448"
profiles:
- bridge
@@ -149,6 +161,8 @@ volumes:
driver: local
potatomesh_logs:
driver: local
potatomesh_pages:
driver: local
potatomesh_matrix_bridge_state:
driver: local
+61
@@ -0,0 +1,61 @@
{
"nodes": {
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1766070988,
"narHash": "sha256-G/WVghka6c4bAzMhTwT2vjLccg/awmHkdKSd2JrycLc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "c6245e83d836d0433170a16eb185cefe0572f8b8",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}
+384
@@ -0,0 +1,384 @@
{
description = "PotatoMesh - A federated, Meshtastic-powered node dashboard";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
flake-utils.url = "github:numtide/flake-utils";
};
outputs = { self, nixpkgs, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
# Python environment for the ingestor
pythonEnv = pkgs.python3.withPackages (ps: with ps; [
meshtastic
protobuf
requests
]);
# Web app wrapper script
webApp = pkgs.writeShellApplication {
name = "potato-mesh-web";
runtimeInputs = [ pkgs.ruby pkgs.bundler pkgs.sqlite pkgs.git pkgs.gnumake pkgs.gcc ];
text = ''
if [ -n "''${XDG_DATA_HOME:-}" ]; then
BASEDIR="$XDG_DATA_HOME"
else
BASEDIR="$HOME/.local/share/potato-mesh"
fi
WORKDIR="$BASEDIR/web"
mkdir -p "$WORKDIR"
# Copy app files if not present or outdated
APP_SRC="${./web}"
DATA_SRC="${./data}"
if [ ! -f "$WORKDIR/.installed" ] || [ "$APP_SRC" != "$(cat "$WORKDIR/.src_path" 2>/dev/null)" ]; then
# Copy web app
cp -rT "$APP_SRC" "$WORKDIR/"
chmod -R u+w "$WORKDIR"
# Copy data directory (contains SQL schemas)
mkdir -p "$BASEDIR/data"
cp -rT "$DATA_SRC" "$BASEDIR/data/"
chmod -R u+w "$BASEDIR/data"
echo "$APP_SRC" > "$WORKDIR/.src_path"
rm -f "$WORKDIR/.installed"
fi
cd "$WORKDIR"
# Install gems if needed
if [ ! -f ".installed" ]; then
bundle config set --local path 'vendor/bundle'
bundle install
touch .installed
fi
exec bundle exec ruby app.rb -p "''${PORT:-41447}" -o "''${HOST:-0.0.0.0}"
'';
};
# Ingestor wrapper script
ingestor = pkgs.writeShellApplication {
name = "potato-mesh-ingestor";
runtimeInputs = [ pythonEnv ];
text = ''
# The ingestor needs to run from parent directory with data/ folder
if [ -n "''${XDG_DATA_HOME:-}" ]; then
BASEDIR="$XDG_DATA_HOME"
else
BASEDIR="$HOME/.local/share/potato-mesh"
fi
if [ ! -d "$BASEDIR/data" ]; then
mkdir -p "$BASEDIR"
cp -rT "${./data}" "$BASEDIR/data/"
chmod -R u+w "$BASEDIR/data"
fi
cd "$BASEDIR"
exec python -m data.mesh
'';
};
in {
packages = {
web = webApp;
ingestor = ingestor;
default = webApp;
};
apps = {
web = {
type = "app";
program = "${webApp}/bin/potato-mesh-web";
};
ingestor = {
type = "app";
program = "${ingestor}/bin/potato-mesh-ingestor";
};
default = self.apps.${system}.web;
};
devShells.default = pkgs.mkShell {
buildInputs = [
pkgs.ruby
pkgs.bundler
pythonEnv
pkgs.sqlite
];
shellHook = ''
echo "PotatoMesh development shell"
echo " - Ruby: $(ruby --version)"
echo " - Python: $(python --version)"
echo ""
echo "To run the web app: cd web && bundle install && ./app.sh"
echo "To run the ingestor: cd data && python mesh.py"
'';
};
checks.potato-mesh-nixos = pkgs.testers.nixosTest {
name = "potato-mesh-data-dir";
nodes.machine = { lib, ... }: {
imports = [ self.nixosModules.default ];
services.potato-mesh = {
enable = true;
apiToken = "test-token";
dataDir = "/var/lib/potato-mesh";
ingestor.enable = true;
};
systemd.services.potato-mesh-ingestor.wantedBy = lib.mkForce [];
};
testScript = ''
machine.start()
machine.succeed("grep -q 'XDG_DATA_HOME=/var/lib/potato-mesh' /etc/systemd/system/potato-mesh-web.service")
machine.succeed("grep -q 'XDG_DATA_HOME=/var/lib/potato-mesh' /etc/systemd/system/potato-mesh-ingestor.service")
machine.succeed("grep -q 'WorkingDirectory=/var/lib/potato-mesh' /etc/systemd/system/potato-mesh-web.service")
machine.succeed("grep -q 'WorkingDirectory=/var/lib/potato-mesh' /etc/systemd/system/potato-mesh-ingestor.service")
'';
};
}
) // {
# NixOS module
nixosModules.default = { config, lib, pkgs, ... }:
let
cfg = config.services.potato-mesh;
in {
options.services.potato-mesh = {
enable = lib.mkEnableOption "PotatoMesh web dashboard";
package = lib.mkOption {
type = lib.types.package;
default = self.packages.${pkgs.system}.web;
description = "The potato-mesh web package to use";
};
port = lib.mkOption {
type = lib.types.port;
default = 41447;
description = "Port to listen on";
};
host = lib.mkOption {
type = lib.types.str;
default = "0.0.0.0";
description = "Host to bind to";
};
apiToken = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Shared secret that authorizes ingestors and API clients making POST requests. Warning: the value is stored world-readable in the Nix store. Prefer apiTokenFile for production.";
};
apiTokenFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = "File containing API_TOKEN=<secret> (recommended for production)";
};
instanceDomain = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Public hostname used for metadata, federation, and generated API links";
};
siteName = lib.mkOption {
type = lib.types.str;
default = "PotatoMesh Demo";
description = "Title and header displayed in the UI";
};
channel = lib.mkOption {
type = lib.types.str;
default = "#LongFast";
description = "Default channel name displayed in the UI";
};
frequency = lib.mkOption {
type = lib.types.str;
default = "915MHz";
description = "Default frequency description displayed in the UI";
};
contactLink = lib.mkOption {
type = lib.types.str;
default = "#potatomesh:dod.ngo";
description = "Chat link or Matrix alias rendered in the footer and overlays";
};
mapCenter = lib.mkOption {
type = lib.types.str;
default = "38.761944,-27.090833";
description = "Latitude and longitude that centre the map on load";
};
mapZoom = lib.mkOption {
type = lib.types.nullOr lib.types.int;
default = null;
description = "Fixed Leaflet zoom applied on first load; disables auto-fit when provided";
};
maxDistance = lib.mkOption {
type = lib.types.int;
default = 42;
description = "Maximum distance (km) before node relationships are hidden on the map";
};
debug = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Enable verbose logging";
};
allowedChannels = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Comma-separated channel names the ingestor accepts";
};
hiddenChannels = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Comma-separated channel names the ingestor will ignore";
};
federation = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Announce instance and crawl peers";
};
private = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Hide chat UI, disable message APIs, and exclude hidden clients from public listings";
};
dataDir = lib.mkOption {
type = lib.types.path;
default = "/var/lib/potato-mesh";
description = "Directory to store database and configuration";
};
user = lib.mkOption {
type = lib.types.str;
default = "potato-mesh";
description = "User to run the service as";
};
group = lib.mkOption {
type = lib.types.str;
default = "potato-mesh";
description = "Group to run the service as";
};
# Ingestor options
ingestor = {
enable = lib.mkEnableOption "PotatoMesh Python ingestor";
package = lib.mkOption {
type = lib.types.package;
default = self.packages.${pkgs.system}.ingestor;
description = "The potato-mesh ingestor package to use";
};
connection = lib.mkOption {
type = lib.types.str;
default = "/dev/ttyACM0";
description = "Connection target: serial port, IP:port for TCP, or Bluetooth address for BLE";
};
};
};
config = lib.mkIf cfg.enable {
users.users.${cfg.user} = {
isSystemUser = true;
group = cfg.group;
home = cfg.dataDir;
createHome = true;
};
users.groups.${cfg.group} = {};
systemd.services.potato-mesh-web = {
description = "PotatoMesh Web Dashboard";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = {
RACK_ENV = "production";
APP_ENV = "production";
PORT = toString cfg.port;
HOST = cfg.host;
SITE_NAME = cfg.siteName;
CHANNEL = cfg.channel;
FREQUENCY = cfg.frequency;
CONTACT_LINK = cfg.contactLink;
MAP_CENTER = cfg.mapCenter;
MAX_DISTANCE = toString cfg.maxDistance;
DEBUG = if cfg.debug then "1" else "0";
FEDERATION = if cfg.federation then "1" else "0";
PRIVATE = if cfg.private then "1" else "0";
XDG_DATA_HOME = cfg.dataDir;
XDG_CONFIG_HOME = "${cfg.dataDir}/config";
} // lib.optionalAttrs (cfg.instanceDomain != null) {
INSTANCE_DOMAIN = cfg.instanceDomain;
} // lib.optionalAttrs (cfg.mapZoom != null) {
MAP_ZOOM = toString cfg.mapZoom;
} // lib.optionalAttrs (cfg.allowedChannels != null) {
ALLOWED_CHANNELS = cfg.allowedChannels;
} // lib.optionalAttrs (cfg.hiddenChannels != null) {
HIDDEN_CHANNELS = cfg.hiddenChannels;
} // lib.optionalAttrs (cfg.apiToken != null) {
API_TOKEN = cfg.apiToken;
};
serviceConfig = {
Type = "simple";
User = cfg.user;
Group = cfg.group;
WorkingDirectory = cfg.dataDir;
ExecStart = "${cfg.package}/bin/potato-mesh-web";
Restart = "always";
RestartSec = 5;
} // lib.optionalAttrs (cfg.apiTokenFile != null) {
EnvironmentFile = cfg.apiTokenFile;
};
};
systemd.services.potato-mesh-ingestor = lib.mkIf cfg.ingestor.enable {
description = "PotatoMesh Python Ingestor";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" "potato-mesh-web.service" ];
requires = [ "potato-mesh-web.service" ];
environment = {
INSTANCE_DOMAIN = "http://127.0.0.1:${toString cfg.port}";
CONNECTION = cfg.ingestor.connection;
DEBUG = if cfg.debug then "1" else "0";
XDG_DATA_HOME = cfg.dataDir;
} // lib.optionalAttrs (cfg.allowedChannels != null) {
ALLOWED_CHANNELS = cfg.allowedChannels;
} // lib.optionalAttrs (cfg.hiddenChannels != null) {
HIDDEN_CHANNELS = cfg.hiddenChannels;
} // lib.optionalAttrs (cfg.apiToken != null) {
API_TOKEN = cfg.apiToken;
};
serviceConfig = {
Type = "simple";
User = cfg.user;
Group = cfg.group;
WorkingDirectory = cfg.dataDir;
ExecStart = "${cfg.ingestor.package}/bin/potato-mesh-ingestor";
Restart = "always";
RestartSec = 10;
} // lib.optionalAttrs (cfg.apiTokenFile != null) {
EnvironmentFile = cfg.apiTokenFile;
};
};
};
};
};
}
+351 -152
File diff suppressed because it is too large.
+4 -1
@@ -14,7 +14,7 @@
[package]
name = "potatomesh-matrix-bridge"
version = "0.5.8"
version = "0.6.1"
edition = "2021"
[dependencies]
@@ -27,8 +27,11 @@ anyhow = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["fmt", "env-filter"] }
urlencoding = "2"
axum = { version = "0.7", features = ["json"] }
clap = { version = "4", features = ["derive"] }
[dev-dependencies]
tempfile = "3"
mockito = "1"
serial_test = "3"
tower = "0.5"
+2 -1
@@ -9,6 +9,8 @@ poll_interval_secs = 60
homeserver = "https://matrix.dod.ngo"
# Appservice access token (from your registration.yaml)
as_token = "INVALID_TOKEN_NOT_WORKING"
# Homeserver token used to authenticate Synapse callbacks
hs_token = "INVALID_TOKEN_NOT_WORKING"
# Server name (domain) part of Matrix user IDs
server_name = "dod.ngo"
# Room ID to send into (must be joined by the appservice / puppets)
@@ -17,4 +19,3 @@ room_id = "!sXabOBXbVObAlZQEUs:c-base.org" # "#potato-bridge:c-base.org"
[state]
# Where to persist last seen message id (optional but recommended)
state_file = "bridge_state.json"
+3 -1
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM rust:1.91-bookworm AS builder
FROM rust:1.92-bookworm AS builder
WORKDIR /app
@@ -37,6 +37,8 @@ COPY --from=builder /app/target/release/potatomesh-matrix-bridge /usr/local/bin/
COPY matrix/Config.toml /app/Config.example.toml
COPY matrix/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
EXPOSE 41448
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
+113 -8
@@ -1,7 +1,12 @@
<!-- Copyright © 2025-26 l5yth & contributors -->
<!-- Licensed under the Apache License, Version 2.0 (see LICENSE) -->
# potatomesh-matrix-bridge
A small Rust daemon that bridges **PotatoMesh** LoRa messages into a **Matrix** room.
![matrix bridge](../scrot-0.6.png)
For each PotatoMesh node, the bridge creates (or uses) a **Matrix puppet user**:
- Matrix localpart: `potato_` + the hex node id (without `!`), e.g. `!67fc83cb` becomes `@potato_67fc83cb:example.org`
@@ -54,9 +59,17 @@ This is **not** a full appservice framework; it just speaks the minimal HTTP nee
## Configuration
All configuration is in `Config.toml` in the project root.
Configuration can come from a TOML file, CLI flags, environment variables, or secret files. The bridge merges inputs in this order (highest to lowest):
Example:
1. CLI flags
2. Environment variables
3. Secret files (`*_FILE` paths or container defaults)
4. TOML config file
5. Container defaults (paths + poll interval)
If no TOML file is provided, required values must be supplied via CLI/env/secret inputs.
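The precedence rule above can be sketched with `Option::or` chaining. This is a minimal illustration of the merge order, not the bridge's actual structs; the `resolve` function and its parameters are hypothetical.

```rust
/// Pick a value by source priority: a higher-priority source wins,
/// and lower-priority sources only fill gaps. Order mirrors the list
/// above: CLI > env > secret file > TOML > defaults.
fn resolve(
    cli: Option<&str>,
    env: Option<&str>,
    secret: Option<&str>,
    toml: Option<&str>,
    default: Option<&str>,
) -> Option<String> {
    cli.or(env).or(secret).or(toml).or(default).map(str::to_string)
}
```

With this shape, `resolve(None, Some("https://env"), None, Some("https://toml"), None)` yields the environment value, because the env source outranks the TOML file.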
Example TOML:
```toml
[potatomesh]
@@ -70,6 +83,8 @@ poll_interval_secs = 10
homeserver = "https://matrix.example.org"
# Appservice access token (from your registration.yaml)
as_token = "YOUR_APPSERVICE_AS_TOKEN"
# Appservice homeserver token (must match registration hs_token)
hs_token = "SECRET_HS_TOKEN"
# Server name (domain) part of Matrix user IDs
server_name = "example.org"
# Room ID to send into (must be joined by the appservice / puppets)
@@ -78,7 +93,93 @@ room_id = "!yourroomid:example.org"
[state]
# Where to persist last seen message id
state_file = "bridge_state.json"
````
```
The `hs_token` is used to validate inbound appservice transactions. Keep it identical in `Config.toml` and your Matrix appservice registration file.
### CLI Flags
Run `potatomesh-matrix-bridge --help` for the full list. Common flags:
* `--config PATH`
* `--state-file PATH`
* `--potatomesh-base-url URL`
* `--potatomesh-poll-interval-secs SECS`
* `--matrix-homeserver URL`
* `--matrix-as-token TOKEN`
* `--matrix-as-token-file PATH`
* `--matrix-hs-token TOKEN`
* `--matrix-hs-token-file PATH`
* `--matrix-server-name NAME`
* `--matrix-room-id ROOM`
* `--container` / `--no-container`
* `--secrets-dir PATH`
### Environment Variables
* `POTATOMESH_CONFIG`
* `POTATOMESH_BASE_URL`
* `POTATOMESH_POLL_INTERVAL_SECS`
* `MATRIX_HOMESERVER`
* `MATRIX_AS_TOKEN`
* `MATRIX_AS_TOKEN_FILE`
* `MATRIX_HS_TOKEN`
* `MATRIX_HS_TOKEN_FILE`
* `MATRIX_SERVER_NAME`
* `MATRIX_ROOM_ID`
* `STATE_FILE`
* `POTATOMESH_CONTAINER`
* `POTATOMESH_SECRETS_DIR`
### Secret Files
If you supply `*_FILE` values, the bridge reads the secret contents and trims whitespace. When running inside a container, the bridge also checks the default secrets directory (default: `/run/secrets`) for:
* `matrix_as_token`
* `matrix_hs_token`
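The `*_FILE` handling described above amounts to reading the file and trimming surrounding whitespace (including the trailing newline most secret managers append). A minimal sketch, assuming only the behaviour stated in this section:

```rust
use std::fs;
use std::path::Path;

/// Read a secret file and trim surrounding whitespace, so that a
/// trailing newline in the file does not end up inside the token.
fn read_secret(path: &Path) -> std::io::Result<String> {
    Ok(fs::read_to_string(path)?.trim().to_string())
}
```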
### Container Defaults
Container detection checks `POTATOMESH_CONTAINER`, `CONTAINER`, and `/proc/1/cgroup`. When detected (or forced with `--container`), defaults shift to:
* Config path: `/app/Config.toml`
* State file: `/app/bridge_state.json`
* Secrets dir: `/run/secrets`
* Poll interval: 15 seconds (if not otherwise configured)
Set `POTATOMESH_CONTAINER=0` or `--no-container` to opt out of container defaults.
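The detection order can be sketched as follows. This is a hedged illustration of the precedence described above, not the bridge's exact logic: the substring checks on the cgroup contents are an assumption, and the real code may weigh the `CONTAINER` hint differently.

```rust
/// Decide whether to apply container defaults: an explicit override
/// (POTATOMESH_CONTAINER / --container / --no-container) always wins,
/// then the CONTAINER env hint, then the contents of /proc/1/cgroup.
fn detect_container(
    override_flag: Option<bool>,
    container_hint: Option<&str>,
    cgroup: Option<&str>,
) -> bool {
    if let Some(forced) = override_flag {
        return forced;
    }
    if container_hint.is_some() {
        return true;
    }
    // Assumed heuristic: common container runtimes leave these markers.
    cgroup.map_or(false, |c| c.contains("docker") || c.contains("kubepods"))
}
```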
### Docker Compose First Run
Before starting Compose, complete this preflight checklist:
1. Ensure `matrix/Config.toml` exists as a regular file on the host (not a directory).
2. Fill required Matrix values in `matrix/Config.toml`:
- `matrix.as_token`
- `matrix.hs_token`
- `matrix.server_name`
- `matrix.room_id`
- `matrix.homeserver`
This is required because the shared Compose anchor `x-matrix-bridge-base` mounts `./matrix/Config.toml` to `/app/Config.toml`.

Then follow the token and namespace requirements in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example).
#### Troubleshooting
| Symptom | Likely cause | What to check |
| --- | --- | --- |
| `Is a directory (os error 21)` | Host mount source became a directory | `matrix/Config.toml` was missing at mount time and got created as a directory on host. |
| `M_UNKNOWN_TOKEN` / `401 Unauthorized` | Matrix appservice token mismatch | Verify `matrix.as_token` matches your appservice registration and setup in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example). |
#### Recovery from accidental `Config.toml` directory creation
```bash
# from repo root
rm -rf matrix/Config.toml
touch matrix/Config.toml
# then edit matrix/Config.toml and set valid matrix.as_token, matrix.hs_token,
# matrix.server_name, matrix.room_id, and matrix.homeserver before starting compose
```
### PotatoMesh API
@@ -134,7 +235,7 @@ A minimal example sketch (you **must** adjust URLs, secrets, namespaces):
```yaml
id: potatomesh-bridge
url: "http://your-bridge-host:8080" # not used by this bridge if it only calls out
url: "http://your-bridge-host:41448"
as_token: "YOUR_APPSERVICE_AS_TOKEN"
hs_token: "SECRET_HS_TOKEN"
sender_localpart: "potatomesh-bridge"
@@ -145,10 +246,12 @@ namespaces:
regex: "@potato_[0-9a-f]{8}:example.org"
```
For this bridge, only the `as_token` and `namespaces.users` actually matter. The bridge does not accept inbound events; it only uses the `as_token` to call the homeserver.
This bridge listens for Synapse appservice callbacks on port `41448` so it can log inbound transaction payloads. It still only forwards messages one way (PotatoMesh → Matrix), so inbound Matrix events are acknowledged but not bridged. The `as_token` and `namespaces.users` entries remain required for outbound calls, and the `url` should point at the listener.
In Synapse's `homeserver.yaml`, add the registration file under `app_service_config_files`, restart, and invite a puppet user to your target room (or use the room ID directly).
The bridge validates inbound appservice callbacks by comparing the `access_token` query param to `hs_token` in `Config.toml`, so keep those values in sync.
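The validation described above boils down to a single comparison between the `access_token` query parameter and the configured `hs_token`. A minimal sketch (the function name is illustrative; the real handler lives in the bridge's axum routes):

```rust
/// Accept an inbound appservice transaction only when the homeserver
/// presented the expected hs_token as its access_token query parameter.
fn authorized(query_token: Option<&str>, hs_token: &str) -> bool {
    query_token == Some(hs_token)
}
```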
---
## Build
@@ -178,10 +281,11 @@ Build the container from the repo root with the included `matrix/Dockerfile`:
docker build -f matrix/Dockerfile -t potatomesh-matrix-bridge .
```
Provide your config at `/app/Config.toml` and persist the bridge state file by mounting volumes. Minimal example:
Provide your config at `/app/Config.toml` (or use CLI/env/secret overrides) and persist the bridge state file by mounting volumes. Minimal example:
```bash
docker run --rm \
-p 41448:41448 \
-v bridge_state:/app \
-v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
potatomesh-matrix-bridge
@@ -191,12 +295,13 @@ If you prefer to isolate the state file from the config, mount it directly inste
```bash
docker run --rm \
-p 41448:41448 \
-v bridge_state:/app \
-v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
potatomesh-matrix-bridge
```
The image ships `Config.example.toml` for reference, but the bridge will exit if `/app/Config.toml` is not provided.
The image ships `Config.example.toml` for reference. If `/app/Config.toml` is absent, set the required values via environment variables, CLI flags, or secrets instead.
---
@@ -234,7 +339,7 @@ Delete `bridge_state.json` if you want it to replay all currently available mess
## Development
Run tests (currently mostly compile checks, no real tests yet):
Run tests:
```bash
cargo test
+7
@@ -15,6 +15,13 @@
set -e
# Default to container-aware configuration paths unless explicitly overridden.
: "${POTATOMESH_CONTAINER:=1}"
: "${POTATOMESH_SECRETS_DIR:=/run/secrets}"
export POTATOMESH_CONTAINER
export POTATOMESH_SECRETS_DIR
# Default state file path from Config.toml unless overridden.
STATE_FILE="${STATE_FILE:-/app/bridge_state.json}"
STATE_DIR="$(dirname "$STATE_FILE")"
+105
@@ -0,0 +1,105 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use clap::{ArgAction, Parser};
#[cfg(not(test))]
use crate::config::{ConfigInputs, ConfigOverrides};
/// CLI arguments for the Matrix bridge.
#[derive(Debug, Parser)]
#[command(
name = "potatomesh-matrix-bridge",
version,
about = "PotatoMesh Matrix bridge"
)]
pub struct Cli {
/// Path to the configuration TOML file.
#[arg(long, value_name = "PATH")]
pub config: Option<String>,
/// Path to the bridge state file.
#[arg(long, value_name = "PATH")]
pub state_file: Option<String>,
/// PotatoMesh base URL.
#[arg(long, value_name = "URL")]
pub potatomesh_base_url: Option<String>,
/// Poll interval in seconds.
#[arg(long, value_name = "SECS")]
pub potatomesh_poll_interval_secs: Option<u64>,
/// Matrix homeserver base URL.
#[arg(long, value_name = "URL")]
pub matrix_homeserver: Option<String>,
/// Matrix appservice access token.
#[arg(long, value_name = "TOKEN")]
pub matrix_as_token: Option<String>,
/// Path to a secret file containing the Matrix appservice access token.
#[arg(long, value_name = "PATH")]
pub matrix_as_token_file: Option<String>,
/// Matrix homeserver token for inbound appservice requests.
#[arg(long, value_name = "TOKEN")]
pub matrix_hs_token: Option<String>,
/// Path to a secret file containing the Matrix homeserver token.
#[arg(long, value_name = "PATH")]
pub matrix_hs_token_file: Option<String>,
/// Matrix server name (domain).
#[arg(long, value_name = "NAME")]
pub matrix_server_name: Option<String>,
/// Matrix room id to forward into.
#[arg(long, value_name = "ROOM")]
pub matrix_room_id: Option<String>,
/// Force container defaults (overrides detection).
#[arg(long, action = ArgAction::SetTrue)]
pub container: bool,
/// Disable container defaults (overrides detection).
#[arg(long, action = ArgAction::SetTrue)]
pub no_container: bool,
/// Directory to search for default secret files.
#[arg(long, value_name = "PATH")]
pub secrets_dir: Option<String>,
}
impl Cli {
/// Convert CLI args into configuration inputs.
#[cfg(not(test))]
pub fn to_inputs(&self) -> ConfigInputs {
ConfigInputs {
config_path: self.config.clone(),
secrets_dir: self.secrets_dir.clone(),
container_override: resolve_container_override(self.container, self.no_container),
container_hint: None,
overrides: ConfigOverrides {
potatomesh_base_url: self.potatomesh_base_url.clone(),
potatomesh_poll_interval_secs: self.potatomesh_poll_interval_secs,
matrix_homeserver: self.matrix_homeserver.clone(),
matrix_as_token: self.matrix_as_token.clone(),
matrix_as_token_file: self.matrix_as_token_file.clone(),
matrix_hs_token: self.matrix_hs_token.clone(),
matrix_hs_token_file: self.matrix_hs_token_file.clone(),
matrix_server_name: self.matrix_server_name.clone(),
matrix_room_id: self.matrix_room_id.clone(),
state_file: self.state_file.clone(),
},
}
}
}
/// Resolve container override flags into an optional boolean.
#[cfg(not(test))]
fn resolve_container_override(container: bool, no_container: bool) -> Option<bool> {
match (container, no_container) {
(true, false) => Some(true),
(false, true) => Some(false),
_ => None,
}
}
+841 -20
@@ -15,25 +15,37 @@
use serde::Deserialize;
use std::{fs, path::Path};
const DEFAULT_CONFIG_PATH: &str = "Config.toml";
const CONTAINER_CONFIG_PATH: &str = "/app/Config.toml";
const DEFAULT_STATE_FILE: &str = "bridge_state.json";
const CONTAINER_STATE_FILE: &str = "/app/bridge_state.json";
const DEFAULT_SECRETS_DIR: &str = "/run/secrets";
const CONTAINER_POLL_INTERVAL_SECS: u64 = 15;
/// PotatoMesh API settings.
#[derive(Debug, Deserialize, Clone)]
pub struct PotatomeshConfig {
pub base_url: String,
pub poll_interval_secs: u64,
}
/// Matrix appservice settings for the bridge.
#[derive(Debug, Deserialize, Clone)]
pub struct MatrixConfig {
pub homeserver: String,
pub as_token: String,
pub hs_token: String,
pub server_name: String,
pub room_id: String,
}
/// State file configuration for the bridge.
#[derive(Debug, Deserialize, Clone)]
pub struct StateConfig {
pub state_file: String,
}
/// Full configuration loaded for the bridge runtime.
#[derive(Debug, Deserialize, Clone)]
pub struct Config {
pub potatomesh: PotatomeshConfig,
@@ -41,19 +53,447 @@ pub struct Config {
pub state: StateConfig,
}
#[derive(Debug, Deserialize, Clone, Default)]
struct PartialPotatomeshConfig {
#[serde(default)]
base_url: Option<String>,
#[serde(default)]
poll_interval_secs: Option<u64>,
}
#[derive(Debug, Deserialize, Clone, Default)]
struct PartialMatrixConfig {
#[serde(default)]
homeserver: Option<String>,
#[serde(default)]
as_token: Option<String>,
#[serde(default)]
hs_token: Option<String>,
#[serde(default)]
server_name: Option<String>,
#[serde(default)]
room_id: Option<String>,
}
#[derive(Debug, Deserialize, Clone, Default)]
struct PartialStateConfig {
#[serde(default)]
state_file: Option<String>,
}
#[derive(Debug, Deserialize, Clone, Default)]
struct PartialConfig {
#[serde(default)]
potatomesh: PartialPotatomeshConfig,
#[serde(default)]
matrix: PartialMatrixConfig,
#[serde(default)]
state: PartialStateConfig,
}
/// Overwrite an optional value when the incoming value is present.
fn merge_option<T>(target: &mut Option<T>, incoming: Option<T>) {
if incoming.is_some() {
*target = incoming;
}
}
/// CLI or environment overrides for configuration fields.
#[derive(Debug, Clone, Default)]
pub struct ConfigOverrides {
pub potatomesh_base_url: Option<String>,
pub potatomesh_poll_interval_secs: Option<u64>,
pub matrix_homeserver: Option<String>,
pub matrix_as_token: Option<String>,
pub matrix_as_token_file: Option<String>,
pub matrix_hs_token: Option<String>,
pub matrix_hs_token_file: Option<String>,
pub matrix_server_name: Option<String>,
pub matrix_room_id: Option<String>,
pub state_file: Option<String>,
}
impl ConfigOverrides {
fn apply_non_token_overrides(&self, cfg: &mut PartialConfig) {
merge_option(
&mut cfg.potatomesh.base_url,
self.potatomesh_base_url.clone(),
);
merge_option(
&mut cfg.potatomesh.poll_interval_secs,
self.potatomesh_poll_interval_secs,
);
merge_option(&mut cfg.matrix.homeserver, self.matrix_homeserver.clone());
merge_option(&mut cfg.matrix.server_name, self.matrix_server_name.clone());
merge_option(&mut cfg.matrix.room_id, self.matrix_room_id.clone());
merge_option(&mut cfg.state.state_file, self.state_file.clone());
}
fn merge(self, higher: ConfigOverrides) -> ConfigOverrides {
let matrix_as_token = if higher.matrix_as_token_file.is_some() {
higher.matrix_as_token
} else {
higher.matrix_as_token.or(self.matrix_as_token)
};
let matrix_hs_token = if higher.matrix_hs_token_file.is_some() {
higher.matrix_hs_token
} else {
higher.matrix_hs_token.or(self.matrix_hs_token)
};
ConfigOverrides {
potatomesh_base_url: higher.potatomesh_base_url.or(self.potatomesh_base_url),
potatomesh_poll_interval_secs: higher
.potatomesh_poll_interval_secs
.or(self.potatomesh_poll_interval_secs),
matrix_homeserver: higher.matrix_homeserver.or(self.matrix_homeserver),
matrix_as_token,
matrix_as_token_file: higher.matrix_as_token_file.or(self.matrix_as_token_file),
matrix_hs_token,
matrix_hs_token_file: higher.matrix_hs_token_file.or(self.matrix_hs_token_file),
matrix_server_name: higher.matrix_server_name.or(self.matrix_server_name),
matrix_room_id: higher.matrix_room_id.or(self.matrix_room_id),
state_file: higher.state_file.or(self.state_file),
}
}
}
/// Inputs gathered from CLI flags or environment variables.
#[derive(Debug, Clone, Default)]
pub struct ConfigInputs {
pub config_path: Option<String>,
pub secrets_dir: Option<String>,
pub container_override: Option<bool>,
pub container_hint: Option<String>,
pub overrides: ConfigOverrides,
}
impl ConfigInputs {
/// Merge two input sets, preferring values from `higher`.
pub fn merge(self, higher: ConfigInputs) -> ConfigInputs {
ConfigInputs {
config_path: higher.config_path.or(self.config_path),
secrets_dir: higher.secrets_dir.or(self.secrets_dir),
container_override: higher.container_override.or(self.container_override),
container_hint: higher.container_hint.or(self.container_hint),
overrides: self.overrides.merge(higher.overrides),
}
}
/// Load configuration inputs from the process environment.
#[cfg(not(test))]
pub fn from_env() -> anyhow::Result<Self> {
let overrides = ConfigOverrides {
potatomesh_base_url: env_var("POTATOMESH_BASE_URL"),
potatomesh_poll_interval_secs: parse_u64_env("POTATOMESH_POLL_INTERVAL_SECS")?,
matrix_homeserver: env_var("MATRIX_HOMESERVER"),
matrix_as_token: env_var("MATRIX_AS_TOKEN"),
matrix_as_token_file: env_var("MATRIX_AS_TOKEN_FILE"),
matrix_hs_token: env_var("MATRIX_HS_TOKEN"),
matrix_hs_token_file: env_var("MATRIX_HS_TOKEN_FILE"),
matrix_server_name: env_var("MATRIX_SERVER_NAME"),
matrix_room_id: env_var("MATRIX_ROOM_ID"),
state_file: env_var("STATE_FILE"),
};
Ok(ConfigInputs {
config_path: env_var("POTATOMESH_CONFIG"),
secrets_dir: env_var("POTATOMESH_SECRETS_DIR"),
container_override: parse_bool_env("POTATOMESH_CONTAINER")?,
container_hint: env_var("CONTAINER"),
overrides,
})
}
}
impl Config {
/// Load a full Config from a TOML file.
#[cfg(test)]
pub fn load_from_file(path: &str) -> anyhow::Result<Self> {
let contents = fs::read_to_string(path)?;
let cfg = toml::from_str(&contents)?;
Ok(cfg)
}
}
pub fn from_default_path() -> anyhow::Result<Self> {
let path = "Config.toml";
if !Path::new(path).exists() {
anyhow::bail!("Config file {path} not found");
/// Load a Config by merging CLI/env overrides with an optional TOML file.
#[cfg(not(test))]
pub fn load(cli_inputs: ConfigInputs) -> anyhow::Result<Config> {
let env_inputs = ConfigInputs::from_env()?;
let cgroup_hint = read_cgroup();
load_from_sources(cli_inputs, env_inputs, cgroup_hint.as_deref())
}
/// Load configuration by merging CLI/env inputs and an optional config file.
fn load_from_sources(
cli_inputs: ConfigInputs,
env_inputs: ConfigInputs,
cgroup_hint: Option<&str>,
) -> anyhow::Result<Config> {
let merged_inputs = env_inputs.merge(cli_inputs);
let container = detect_container(
merged_inputs.container_override,
merged_inputs.container_hint.as_deref(),
cgroup_hint,
);
let defaults = default_paths(container);
let base_cfg = resolve_base_config(&merged_inputs, &defaults)?;
let mut cfg = base_cfg.unwrap_or_default();
merged_inputs.overrides.apply_non_token_overrides(&mut cfg);
let secrets_dir = resolve_secrets_dir(&merged_inputs, container, &defaults);
let as_token = resolve_token(
cfg.matrix.as_token.clone(),
merged_inputs.overrides.matrix_as_token.clone(),
merged_inputs.overrides.matrix_as_token_file.as_deref(),
secrets_dir.as_deref(),
"matrix_as_token",
)?;
let hs_token = resolve_token(
cfg.matrix.hs_token.clone(),
merged_inputs.overrides.matrix_hs_token.clone(),
merged_inputs.overrides.matrix_hs_token_file.as_deref(),
secrets_dir.as_deref(),
"matrix_hs_token",
)?;
if cfg.potatomesh.poll_interval_secs.is_none() && container {
cfg.potatomesh.poll_interval_secs = Some(defaults.poll_interval_secs);
}
if cfg.state.state_file.is_none() {
cfg.state.state_file = Some(defaults.state_file);
}
let missing = collect_missing_fields(&cfg, &as_token, &hs_token);
if !missing.is_empty() {
anyhow::bail!(
"Missing required configuration values: {}",
missing.join(", ")
);
}
Ok(Config {
potatomesh: PotatomeshConfig {
base_url: cfg.potatomesh.base_url.unwrap(),
poll_interval_secs: cfg.potatomesh.poll_interval_secs.unwrap(),
},
matrix: MatrixConfig {
homeserver: cfg.matrix.homeserver.unwrap(),
as_token: as_token.unwrap(),
hs_token: hs_token.unwrap(),
server_name: cfg.matrix.server_name.unwrap(),
room_id: cfg.matrix.room_id.unwrap(),
},
state: StateConfig {
state_file: cfg.state.state_file.unwrap(),
},
})
}
/// Collect the missing required field identifiers for error reporting.
fn collect_missing_fields(
cfg: &PartialConfig,
as_token: &Option<String>,
hs_token: &Option<String>,
) -> Vec<&'static str> {
let mut missing = Vec::new();
if cfg.potatomesh.base_url.is_none() {
missing.push("potatomesh.base_url");
}
if cfg.potatomesh.poll_interval_secs.is_none() {
missing.push("potatomesh.poll_interval_secs");
}
if cfg.matrix.homeserver.is_none() {
missing.push("matrix.homeserver");
}
if as_token.is_none() {
missing.push("matrix.as_token");
}
if hs_token.is_none() {
missing.push("matrix.hs_token");
}
if cfg.matrix.server_name.is_none() {
missing.push("matrix.server_name");
}
if cfg.matrix.room_id.is_none() {
missing.push("matrix.room_id");
}
if cfg.state.state_file.is_none() {
missing.push("state.state_file");
}
missing
}
/// Resolve the base TOML config file, honoring explicit config paths.
fn resolve_base_config(
inputs: &ConfigInputs,
defaults: &DefaultPaths,
) -> anyhow::Result<Option<PartialConfig>> {
if let Some(path) = &inputs.config_path {
return Ok(Some(load_partial_from_file(path)?));
}
let container_path = Path::new(&defaults.config_path);
if container_path.exists() {
return Ok(Some(load_partial_from_file(&defaults.config_path)?));
}
let host_path = Path::new(DEFAULT_CONFIG_PATH);
if host_path.exists() {
return Ok(Some(load_partial_from_file(DEFAULT_CONFIG_PATH)?));
}
Ok(None)
}
/// Decide which secrets directory to use based on inputs and defaults.
fn resolve_secrets_dir(
inputs: &ConfigInputs,
container: bool,
defaults: &DefaultPaths,
) -> Option<String> {
if let Some(explicit) = inputs.secrets_dir.clone() {
return Some(explicit);
}
if container {
return Some(defaults.secrets_dir.clone());
}
None
}
/// Resolve a token value from explicit values, secret files, or config file values.
fn resolve_token(
base_value: Option<String>,
explicit_value: Option<String>,
explicit_file: Option<&str>,
secrets_dir: Option<&str>,
default_secret_name: &str,
) -> anyhow::Result<Option<String>> {
if let Some(value) = explicit_value {
return Ok(Some(value));
}
if let Some(path) = explicit_file {
return Ok(Some(read_secret_file(path)?));
}
if let Some(dir) = secrets_dir {
let default_path = Path::new(dir).join(default_secret_name);
if default_path.exists() {
return Ok(Some(read_secret_file(
default_path
.to_str()
.ok_or_else(|| anyhow::anyhow!("Invalid secret file path"))?,
)?));
}
}
Ok(base_value)
}
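The precedence implemented by `resolve_token` above (explicit value, then explicit file, then a default secret file, then the config-file value) can be sketched in isolation; `pick_token` below is a hypothetical stand-in that models only the ordering, not the file I/O:

```rust
// Token precedence sketch: explicit value wins, then explicit file contents,
// then a default secret file under the secrets dir, then the config value.
fn pick_token(
    base: Option<&str>,
    explicit: Option<&str>,
    from_file: Option<&str>,
    from_default_secret: Option<&str>,
) -> Option<String> {
    explicit
        .or(from_file)
        .or(from_default_secret)
        .or(base)
        .map(str::to_string)
}

fn main() {
    // An explicit value shadows every other source.
    assert_eq!(
        pick_token(Some("cfg"), Some("cli"), Some("file"), None),
        Some("cli".to_string())
    );
    // With nothing else set, the config-file value survives.
    assert_eq!(pick_token(Some("cfg"), None, None, None), Some("cfg".to_string()));
}
```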
/// Read and trim a secret file from disk.
fn read_secret_file(path: &str) -> anyhow::Result<String> {
let contents = fs::read_to_string(path)?;
let trimmed = contents.trim();
if trimmed.is_empty() {
anyhow::bail!("Secret file {path} is empty");
}
Ok(trimmed.to_string())
}
/// Load a partial config from a TOML file.
fn load_partial_from_file(path: &str) -> anyhow::Result<PartialConfig> {
let contents = fs::read_to_string(path)?;
let cfg = toml::from_str(&contents)?;
Ok(cfg)
}
/// Compute default paths and intervals based on container mode.
fn default_paths(container: bool) -> DefaultPaths {
if container {
DefaultPaths {
config_path: CONTAINER_CONFIG_PATH.to_string(),
state_file: CONTAINER_STATE_FILE.to_string(),
secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
}
} else {
DefaultPaths {
config_path: DEFAULT_CONFIG_PATH.to_string(),
state_file: DEFAULT_STATE_FILE.to_string(),
secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
}
}
}
#[derive(Debug, Clone)]
struct DefaultPaths {
config_path: String,
state_file: String,
secrets_dir: String,
poll_interval_secs: u64,
}
/// Detect whether the bridge is running inside a container.
fn detect_container(
override_value: Option<bool>,
env_hint: Option<&str>,
cgroup_hint: Option<&str>,
) -> bool {
if let Some(value) = override_value {
return value;
}
if let Some(hint) = env_hint {
if !hint.trim().is_empty() {
return true;
}
}
if let Some(cgroup) = cgroup_hint {
let haystack = cgroup.to_ascii_lowercase();
return haystack.contains("docker")
|| haystack.contains("kubepods")
|| haystack.contains("containerd")
|| haystack.contains("podman");
}
false
}
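The detection order above (explicit override, then a non-empty env hint, then known runtime substrings in the cgroup contents) can be exercised standalone; `sketch_detect` is a hypothetical mirror of that logic:

```rust
// Mirror of the detection order: explicit override wins, then a non-empty
// env hint, then known container-runtime substrings in /proc/1/cgroup.
fn sketch_detect(override_value: Option<bool>, env_hint: Option<&str>, cgroup: Option<&str>) -> bool {
    if let Some(v) = override_value {
        return v;
    }
    if env_hint.map_or(false, |h| !h.trim().is_empty()) {
        return true;
    }
    cgroup.map_or(false, |c| {
        let hay = c.to_ascii_lowercase();
        ["docker", "kubepods", "containerd", "podman"]
            .iter()
            .any(|needle| hay.contains(*needle))
    })
}

fn main() {
    // The override beats every hint, even contradictory ones.
    assert!(!sketch_detect(Some(false), Some("docker"), Some("docker")));
    // Cgroup matching is case-insensitive substring search.
    assert!(sketch_detect(None, None, Some("0::/KubePods/burstable/pod123")));
}
```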
/// Read the primary cgroup file for container detection.
#[cfg(not(test))]
fn read_cgroup() -> Option<String> {
fs::read_to_string("/proc/1/cgroup").ok()
}
/// Read and trim an environment variable value.
#[cfg(not(test))]
fn env_var(key: &str) -> Option<String> {
std::env::var(key).ok().filter(|v| !v.trim().is_empty())
}
/// Parse a u64 environment variable value.
#[cfg(not(test))]
fn parse_u64_env(key: &str) -> anyhow::Result<Option<u64>> {
match env_var(key) {
None => Ok(None),
Some(value) => value
.parse::<u64>()
.map(Some)
.map_err(|e| anyhow::anyhow!("Invalid {key} value: {e}")),
}
}
/// Parse a boolean environment variable value.
#[cfg(not(test))]
fn parse_bool_env(key: &str) -> anyhow::Result<Option<bool>> {
match env_var(key) {
None => Ok(None),
Some(value) => parse_bool_value(key, &value).map(Some),
}
}
/// Parse a boolean string with standard truthy/falsy values.
#[cfg(not(test))]
fn parse_bool_value(key: &str, value: &str) -> anyhow::Result<bool> {
let normalized = value.trim().to_ascii_lowercase();
match normalized.as_str() {
"1" | "true" | "yes" | "on" => Ok(true),
"0" | "false" | "no" | "off" => Ok(false),
_ => anyhow::bail!("Invalid {key} value: {value}"),
}
}
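The truthy/falsy spellings accepted by `parse_bool_value` can be shown without the env plumbing; this standalone sketch mirrors the same normalization (trim, then ASCII lowercase):

```rust
// Standalone mirror of the boolean normalization: trim, lowercase, then
// match against the accepted spellings; anything else is an error.
fn parse_bool(value: &str) -> Result<bool, String> {
    match value.trim().to_ascii_lowercase().as_str() {
        "1" | "true" | "yes" | "on" => Ok(true),
        "0" | "false" | "no" | "off" => Ok(false),
        other => Err(format!("invalid boolean: {other}")),
    }
}

fn main() {
    assert_eq!(parse_bool(" Yes "), Ok(true));
    assert_eq!(parse_bool("off"), Ok(false));
    assert!(parse_bool("maybe").is_err());
}
```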
@@ -62,6 +502,43 @@ mod tests {
use super::*;
use serial_test::serial;
use std::io::Write;
use std::path::{Path, PathBuf};
struct CwdGuard {
original: PathBuf,
}
impl CwdGuard {
/// Switch to the provided path and restore the original cwd on drop.
fn enter(path: &Path) -> Self {
let original = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("/"));
std::env::set_current_dir(path).unwrap();
Self { original }
}
}
impl Drop for CwdGuard {
fn drop(&mut self) {
if std::env::set_current_dir(&self.original).is_err() {
let _ = std::env::set_current_dir("/");
}
}
}
fn minimal_overrides() -> ConfigOverrides {
ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
potatomesh_poll_interval_secs: Some(10),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_as_token: Some("AS_TOKEN".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
state_file: Some("bridge_state.json".to_string()),
matrix_as_token_file: None,
matrix_hs_token_file: None,
}
}
#[test]
fn parse_minimal_config_from_toml_str() {
@@ -73,6 +550,7 @@ mod tests {
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
@@ -86,6 +564,7 @@ mod tests {
assert_eq!(cfg.matrix.homeserver, "https://matrix.example.org");
assert_eq!(cfg.matrix.as_token, "AS_TOKEN");
assert_eq!(cfg.matrix.hs_token, "HS_TOKEN");
assert_eq!(cfg.matrix.server_name, "example.org");
assert_eq!(cfg.matrix.room_id, "!roomid:example.org");
@@ -108,6 +587,7 @@ mod tests {
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
@@ -121,37 +601,378 @@ mod tests {
}
#[test]
fn detect_container_prefers_override() {
assert!(detect_container(Some(true), None, None));
assert!(!detect_container(
Some(false),
Some("docker"),
Some("docker")
));
}
#[test]
fn detect_container_from_hint_or_cgroup() {
assert!(detect_container(None, Some("docker"), None));
assert!(detect_container(None, None, Some("kubepods")));
assert!(!detect_container(None, None, Some("")));
}
#[test]
fn load_uses_cli_overrides_over_env() {
let toml_str = r#"
[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 5
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#;
let mut file = tempfile::NamedTempFile::new().unwrap();
write!(file, "{}", toml_str).unwrap();
let env_inputs = ConfigInputs {
config_path: Some(file.path().to_str().unwrap().to_string()),
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://env.example/".to_string()),
..minimal_overrides()
},
..ConfigInputs::default()
};
let cli_inputs = ConfigInputs {
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://cli.example/".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, env_inputs, None).unwrap();
assert_eq!(cfg.potatomesh.base_url, "https://cli.example/");
}
#[test]
#[serial]
fn load_uses_container_secret_defaults() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let secrets_dir = tmp_dir.path();
fs::write(secrets_dir.join("matrix_as_token"), "FROM_SECRET").unwrap();
let cli_inputs = ConfigInputs {
secrets_dir: Some(secrets_dir.to_string_lossy().to_string()),
container_override: Some(true),
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
potatomesh_poll_interval_secs: Some(10),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
state_file: Some("bridge_state.json".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
assert_eq!(cfg.matrix.as_token, "FROM_SECRET");
}
#[test]
fn resolve_token_prefers_explicit_value() {
let tmp_dir = tempfile::tempdir().unwrap();
let token_file = tmp_dir.path().join("token");
fs::write(&token_file, "FROM_FILE").unwrap();
let resolved = resolve_token(
Some("FROM_BASE".to_string()),
Some("FROM_EXPLICIT".to_string()),
Some(token_file.to_str().unwrap()),
Some(tmp_dir.path().to_str().unwrap()),
"matrix_as_token",
)
.unwrap();
assert_eq!(resolved, Some("FROM_EXPLICIT".to_string()));
}
#[test]
fn resolve_token_reads_explicit_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let token_file = tmp_dir.path().join("token");
fs::write(&token_file, "FROM_FILE").unwrap();
let resolved = resolve_token(
None,
None,
Some(token_file.to_str().unwrap()),
None,
"matrix_as_token",
)
.unwrap();
assert_eq!(resolved, Some("FROM_FILE".to_string()));
}
#[test]
fn resolve_token_reads_default_secret_file() {
let tmp_dir = tempfile::tempdir().unwrap();
fs::write(tmp_dir.path().join("matrix_hs_token"), "FROM_SECRET").unwrap();
let resolved = resolve_token(
None,
None,
None,
Some(tmp_dir.path().to_str().unwrap()),
"matrix_hs_token",
)
.unwrap();
assert_eq!(resolved, Some("FROM_SECRET".to_string()));
}
#[test]
fn resolve_token_errors_on_empty_secret_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let token_file = tmp_dir.path().join("token");
fs::write(&token_file, " ").unwrap();
let result = resolve_token(
None,
None,
Some(token_file.to_str().unwrap()),
None,
"matrix_as_token",
);
assert!(result.is_err());
}
#[test]
fn resolve_secrets_dir_prefers_explicit() {
let defaults = DefaultPaths {
config_path: "Config.toml".to_string(),
state_file: DEFAULT_STATE_FILE.to_string(),
secrets_dir: "default".to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
};
let inputs = ConfigInputs {
secrets_dir: Some("explicit".to_string()),
..ConfigInputs::default()
};
let resolved = resolve_secrets_dir(&inputs, true, &defaults);
assert_eq!(resolved, Some("explicit".to_string()));
}
#[test]
fn resolve_secrets_dir_container_default() {
let defaults = DefaultPaths {
config_path: "Config.toml".to_string(),
state_file: DEFAULT_STATE_FILE.to_string(),
secrets_dir: "default".to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
};
let inputs = ConfigInputs::default();
let resolved = resolve_secrets_dir(&inputs, true, &defaults);
assert_eq!(resolved, Some("default".to_string()));
assert_eq!(resolve_secrets_dir(&inputs, false, &defaults), None);
}
#[test]
#[serial]
fn resolve_base_config_prefers_explicit_path() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let config_path = tmp_dir.path().join("explicit.toml");
fs::write(
&config_path,
r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
)
.unwrap();
let defaults = default_paths(false);
let inputs = ConfigInputs {
config_path: Some(config_path.to_string_lossy().to_string()),
..ConfigInputs::default()
};
let resolved = resolve_base_config(&inputs, &defaults).unwrap();
assert!(resolved.is_some());
}
#[test]
#[serial]
fn resolve_base_config_uses_container_path_when_present() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let config_path = tmp_dir.path().join("container.toml");
fs::write(
&config_path,
r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
)
.unwrap();
let defaults = DefaultPaths {
config_path: config_path.to_string_lossy().to_string(),
state_file: DEFAULT_STATE_FILE.to_string(),
secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
};
let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
assert!(resolved.is_some());
}
#[test]
#[serial]
fn resolve_base_config_uses_host_path_when_present() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
fs::write(
"Config.toml",
r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
)
.unwrap();
let defaults = default_paths(false);
let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
assert!(resolved.is_some());
}
#[test]
#[serial]
fn resolve_base_config_returns_none_when_missing() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let defaults = default_paths(false);
let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
assert!(resolved.is_none());
}
#[test]
#[serial]
fn load_prefers_cli_token_file_over_env_value() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let token_file = tmp_dir.path().join("as_token");
fs::write(&token_file, "CLI_SECRET").unwrap();
let env_inputs = ConfigInputs {
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
potatomesh_poll_interval_secs: Some(10),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_as_token: Some("ENV_TOKEN".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cli_inputs = ConfigInputs {
overrides: ConfigOverrides {
matrix_as_token_file: Some(token_file.to_string_lossy().to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, env_inputs, None).unwrap();
assert_eq!(cfg.matrix.as_token, "CLI_SECRET");
}
#[test]
#[serial]
fn load_uses_container_default_poll_interval() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let cli_inputs = ConfigInputs {
container_override: Some(true),
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_as_token: Some("AS_TOKEN".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
assert_eq!(
cfg.potatomesh.poll_interval_secs,
CONTAINER_POLL_INTERVAL_SECS
);
}
#[test]
#[serial]
fn load_uses_default_state_path_when_missing() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let cli_inputs = ConfigInputs {
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
potatomesh_poll_interval_secs: Some(10),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_as_token: Some("AS_TOKEN".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
assert_eq!(cfg.state.state_file, DEFAULT_STATE_FILE);
}
}
@@ -12,23 +12,42 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod cli;
mod config;
mod matrix;
mod matrix_server;
mod potatomesh;
use std::{fs, net::SocketAddr, path::Path};
use anyhow::Result;
#[cfg(not(test))]
use clap::Parser;
use tokio::time::Duration;
use tracing::{error, info};
#[cfg(not(test))]
use crate::cli::Cli;
#[cfg(not(test))]
use crate::config::Config;
use crate::matrix::MatrixAppserviceClient;
use crate::matrix_server::run_synapse_listener;
use crate::potatomesh::{FetchParams, PotatoClient, PotatoMessage, PotatoNode};
#[cfg(not(test))]
use tokio::time::sleep;
#[derive(Debug, serde::Serialize, serde::Deserialize, Default)]
pub struct BridgeState {
/// Highest message id processed by the bridge.
last_message_id: Option<u64>,
/// Highest rx_time observed; used to build incremental fetch queries.
#[serde(default)]
last_rx_time: Option<u64>,
/// Message ids seen at the current last_rx_time for de-duplication.
#[serde(default)]
last_rx_time_ids: Vec<u64>,
/// Legacy checkpoint timestamp used before last_rx_time was added.
#[serde(default, skip_serializing)]
last_checked_at: Option<u64>,
}
@@ -38,7 +57,15 @@ impl BridgeState {
return Ok(Self::default());
}
let data = fs::read_to_string(path)?;
// Treat empty/whitespace-only files as a fresh state.
if data.trim().is_empty() {
return Ok(Self::default());
}
let mut s: Self = serde_json::from_str(&data)?;
if s.last_rx_time.is_none() {
s.last_rx_time = s.last_checked_at;
}
s.last_checked_at = None;
Ok(s)
}
@@ -49,17 +76,32 @@ impl BridgeState {
}
fn should_forward(&self, msg: &PotatoMessage) -> bool {
match self.last_rx_time {
None => match self.last_message_id {
None => true,
Some(last_id) => msg.id > last_id,
},
Some(last_ts) => {
if msg.rx_time > last_ts {
true
} else if msg.rx_time < last_ts {
false
} else {
!self.last_rx_time_ids.contains(&msg.id)
}
}
}
}
fn update_with(&mut self, msg: &PotatoMessage) {
self.last_message_id = Some(msg.id);
if self.last_rx_time.is_none() || Some(msg.rx_time) > self.last_rx_time {
self.last_rx_time = Some(msg.rx_time);
self.last_rx_time_ids = vec![msg.id];
} else if Some(msg.rx_time) == self.last_rx_time && !self.last_rx_time_ids.contains(&msg.id)
{
self.last_rx_time_ids.push(msg.id);
}
}
}
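The checkpoint rule in `should_forward` above (forward when rx_time is newer, drop when older, and fall back to the per-timestamp id list on ties) can be exercised in isolation; this hypothetical sketch models just those three cases with plain scalars instead of the full `PotatoMessage`:

```rust
// (last_ts, ids_at_last) models BridgeState's last_rx_time / last_rx_time_ids.
fn should_forward(last_ts: Option<u64>, ids_at_last: &[u64], rx_time: u64, id: u64) -> bool {
    match last_ts {
        None => true, // fresh state forwards everything
        Some(last) => {
            if rx_time > last {
                true // strictly newer receipt time
            } else if rx_time < last {
                false // strictly older: already past this checkpoint
            } else {
                // same timestamp: de-duplicate on message id
                !ids_at_last.contains(&id)
            }
        }
    }
}

fn main() {
    assert!(should_forward(None, &[], 100, 1)); // fresh state
    assert!(should_forward(Some(100), &[10], 100, 9)); // tie, unseen id
    assert!(!should_forward(Some(100), &[10, 9], 100, 10)); // duplicate
    assert!(!should_forward(Some(100), &[10], 50, 7)); // older message
}
```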
@@ -69,7 +111,7 @@ fn build_fetch_params(state: &BridgeState) -> FetchParams {
limit: None,
since: None,
}
} else if let Some(ts) = state.last_rx_time {
FetchParams {
limit: None,
since: Some(ts),
@@ -82,17 +124,29 @@ fn build_fetch_params(state: &BridgeState) -> FetchParams {
}
}
/// Persist the bridge state and log any write errors.
fn persist_state(state: &BridgeState, state_path: &str) {
if let Err(e) = state.save(state_path) {
error!("Error saving state: {:?}", e);
}
}
/// Emit an info log for the latest bridge state snapshot.
fn log_state_update(state: &BridgeState) {
info!("Updated state: {:?}", state);
}
/// Emit a sanitized config log without sensitive tokens.
#[cfg(not(test))]
fn log_config(cfg: &Config) {
info!(
potatomesh_base_url = cfg.potatomesh.base_url.as_str(),
matrix_homeserver = cfg.matrix.homeserver.as_str(),
matrix_server_name = cfg.matrix.server_name.as_str(),
matrix_room_id = cfg.matrix.room_id.as_str(),
state_file = cfg.state.state_file.as_str(),
"Loaded config"
);
}
async fn poll_once(
@@ -100,16 +154,13 @@ async fn poll_once(
matrix: &MatrixAppserviceClient,
state: &mut BridgeState,
state_path: &str,
) {
let params = build_fetch_params(state);
match potato.fetch_messages(params).await {
Ok(mut msgs) => {
// sort by rx_time so we process by actual receipt time
msgs.sort_by_key(|m| m.rx_time);
for msg in &msgs {
if !state.should_forward(msg) {
@@ -120,27 +171,19 @@ async fn poll_once(
if let Some(port) = &msg.portnum {
if port != "TEXT_MESSAGE_APP" {
state.update_with(msg);
log_state_update(state);
persist_state(state, state_path);
continue;
}
}
if let Err(e) = handle_message(potato, matrix, state, msg).await {
error!("Error handling message {}: {:?}", msg.id, e);
continue;
}
}
persist_state(state, state_path);
}
}
Err(e) => {
@@ -149,6 +192,15 @@ async fn poll_once(
}
}
fn spawn_synapse_listener(addr: SocketAddr, token: String) -> tokio::task::JoinHandle<()> {
tokio::spawn(async move {
if let Err(e) = run_synapse_listener(addr, token).await {
error!("Synapse listener failed: {:?}", e);
}
})
}
#[cfg(not(test))]
#[tokio::main]
async fn main() -> Result<()> {
// Logging: RUST_LOG=info,bridge=debug,reqwest=warn ...
@@ -160,8 +212,9 @@ async fn main() -> Result<()> {
)
.init();
let cli = Cli::parse();
let cfg = config::load(cli.to_inputs())?;
log_config(&cfg);
let http = reqwest::Client::builder().build()?;
let potato = PotatoClient::new(http.clone(), cfg.potatomesh.clone());
@@ -169,6 +222,10 @@ async fn main() -> Result<()> {
let matrix = MatrixAppserviceClient::new(http.clone(), cfg.matrix.clone());
matrix.health_check().await?;
let synapse_addr = SocketAddr::from(([0, 0, 0, 0], 41448));
let synapse_token = cfg.matrix.hs_token.clone();
let _synapse_handle = spawn_synapse_listener(synapse_addr, synapse_token);
let state_path = &cfg.state.state_file;
let mut state = BridgeState::load(state_path)?;
info!("Loaded state: {:?}", state);
@@ -176,12 +233,7 @@ async fn main() -> Result<()> {
let poll_interval = Duration::from_secs(cfg.potatomesh.poll_interval_secs);
loop {
poll_once(&potato, &matrix, &mut state, state_path).await;
sleep(poll_interval).await;
}
@@ -199,38 +251,79 @@ async fn handle_message(
// Ensure puppet exists & has display name
matrix.ensure_user_registered(&localpart).await?;
matrix.ensure_user_joined_room(&user_id).await?;
let display_name = display_name_for_node(&node);
matrix.set_display_name(&user_id, &display_name).await?;
// Format the bridged message
let preset_short = modem_preset_short(&msg.modem_preset);
let prefix = format!(
"[{freq}][{preset_short}][{channel}]",
freq = msg.lora_freq,
preset_short = preset_short,
channel = msg.channel_name,
);
let (body, formatted_body) = format_message_bodies(&prefix, &msg.text);
matrix
.send_formatted_message_as(&user_id, &body, &formatted_body)
.await?;
info!("Bridged message: {:?}", msg);
state.update_with(msg);
log_state_update(state);
Ok(())
}
/// Build a compact modem preset label like "LF" for "LongFast".
fn modem_preset_short(preset: &str) -> String {
let letters: String = preset
.chars()
.filter(|ch| ch.is_ascii_uppercase())
.collect();
if letters.is_empty() {
preset.chars().take(2).collect()
} else {
letters
}
}
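The abbreviation rule above keeps only the ASCII uppercase letters and falls back to the first two characters when there are none; a standalone sketch with the same behavior:

```rust
// Keep uppercase letters ("LongFast" -> "LF"); for presets with no
// uppercase letters, fall back to the first two characters.
fn preset_short(preset: &str) -> String {
    let letters: String = preset.chars().filter(|c| c.is_ascii_uppercase()).collect();
    if letters.is_empty() {
        preset.chars().take(2).collect()
    } else {
        letters
    }
}

fn main() {
    assert_eq!(preset_short("LongFast"), "LF");
    assert_eq!(preset_short("shortturbo"), "sh"); // no uppercase: first two chars
}
```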
/// Build plain text + HTML message bodies with inline-code metadata.
fn format_message_bodies(prefix: &str, text: &str) -> (String, String) {
let body = format!("`{}` {}", prefix, text);
let formatted_body = format!("<code>{}</code> {}", escape_html(prefix), escape_html(text));
(body, formatted_body)
}
/// Build the Matrix display name from a node's long/short names.
fn display_name_for_node(node: &PotatoNode) -> String {
match node
.short_name
.as_deref()
.map(str::trim)
.filter(|s| !s.is_empty())
{
Some(short) if short != node.long_name => format!("{} ({})", node.long_name, short),
_ => node.long_name.clone(),
}
}
/// Minimal HTML escaping for Matrix formatted_body payloads.
fn escape_html(input: &str) -> String {
let mut escaped = String::with_capacity(input.len());
for ch in input.chars() {
match ch {
'&' => escaped.push_str("&amp;"),
'<' => escaped.push_str("&lt;"),
'>' => escaped.push_str("&gt;"),
'"' => escaped.push_str("&quot;"),
'\'' => escaped.push_str("&#39;"),
_ => escaped.push(ch),
}
}
escaped
}
#[cfg(test)]
mod tests {
use super::*;
@@ -259,6 +352,54 @@ mod tests {
}
}
fn sample_node(short_name: Option<&str>, long_name: &str) -> PotatoNode {
PotatoNode {
node_id: "!abcd1234".to_string(),
short_name: short_name.map(str::to_string),
long_name: long_name.to_string(),
role: None,
hw_model: None,
last_heard: None,
first_heard: None,
latitude: None,
longitude: None,
altitude: None,
}
}
#[test]
fn modem_preset_short_handles_camelcase() {
assert_eq!(modem_preset_short("LongFast"), "LF");
assert_eq!(modem_preset_short("MediumFast"), "MF");
}
#[test]
fn format_message_bodies_escape_html() {
let (body, formatted) = format_message_bodies("[868][LF]", "Hello <&>");
assert_eq!(body, "`[868][LF]` Hello <&>");
assert_eq!(formatted, "<code>[868][LF]</code> Hello &lt;&amp;&gt;");
}
#[test]
fn escape_html_escapes_quotes() {
assert_eq!(escape_html("a\"b'c"), "a&quot;b&#39;c");
}
#[test]
fn display_name_for_node_includes_short_when_present() {
let node = sample_node(Some("TN"), "Test Node");
assert_eq!(display_name_for_node(&node), "Test Node (TN)");
}
#[test]
fn display_name_for_node_ignores_empty_or_duplicate_short() {
let empty_short = sample_node(Some(""), "Test Node");
assert_eq!(display_name_for_node(&empty_short), "Test Node");
let duplicate_short = sample_node(Some("Test Node"), "Test Node");
assert_eq!(display_name_for_node(&duplicate_short), "Test Node");
}
#[test]
fn bridge_state_initially_forwards_all() {
let state = BridgeState::default();
@@ -268,39 +409,72 @@ mod tests {
}
#[test]
fn bridge_state_tracks_latest_rx_time_and_skips_older() {
let mut state = BridgeState::default();
let m1 = sample_msg(10);
let m2 = sample_msg(20);
let m3 = sample_msg(15);
let m1 = PotatoMessage { rx_time: 10, ..m1 };
let m2 = PotatoMessage { rx_time: 20, ..m2 };
let m3 = PotatoMessage { rx_time: 15, ..m3 };
// First message, should forward
assert!(state.should_forward(&m1));
state.update_with(&m1);
assert_eq!(state.last_rx_time, Some(10));
// Second message, newer rx_time, should forward
assert!(state.should_forward(&m2));
state.update_with(&m2);
assert_eq!(state.last_rx_time, Some(20));
// Third message, older rx_time than last, should NOT forward
assert!(!state.should_forward(&m3));
// state remains unchanged
assert_eq!(state.last_rx_time, Some(20));
}
#[test]
fn bridge_state_uses_legacy_id_filter_when_rx_time_missing() {
let state = BridgeState {
last_message_id: Some(10),
last_rx_time: None,
last_rx_time_ids: vec![],
last_checked_at: None,
};
let older = sample_msg(9);
let newer = sample_msg(11);
assert!(!state.should_forward(&older));
assert!(state.should_forward(&newer));
}
#[test]
fn bridge_state_dedupes_same_timestamp() {
let mut state = BridgeState::default();
let m1 = PotatoMessage {
rx_time: 100,
..sample_msg(10)
};
let m2 = PotatoMessage {
rx_time: 100,
..sample_msg(9)
};
let dup = PotatoMessage {
rx_time: 100,
..sample_msg(10)
};
assert!(state.should_forward(&m1));
state.update_with(&m1);
assert!(state.should_forward(&m2));
state.update_with(&m2);
assert!(!state.should_forward(&dup));
assert_eq!(state.last_rx_time, Some(100));
assert_eq!(state.last_rx_time_ids, vec![10, 9]);
}
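The dedupe behaviour pinned down by the tests above can be modelled as a small standalone sketch. The field names mirror `BridgeState`, but the method bodies are an assumed reconstruction from the test expectations, not the bridge's actual implementation:

```rust
// Assumed model of the rx_time + id checkpoint the tests exercise:
// messages older than the checkpoint are dropped; messages sharing the
// checkpoint timestamp are dropped only if their id was already seen.
#[derive(Default)]
struct Checkpoint {
    last_rx_time: Option<u64>,
    last_rx_time_ids: Vec<u64>,
}

impl Checkpoint {
    fn should_forward(&self, rx_time: u64, id: u64) -> bool {
        match self.last_rx_time {
            None => true,
            Some(t) if rx_time > t => true,
            Some(t) if rx_time == t => !self.last_rx_time_ids.contains(&id),
            _ => false,
        }
    }

    fn update_with(&mut self, rx_time: u64, id: u64) {
        // A newer timestamp resets the seen-id list; an equal one extends it.
        if self.last_rx_time != Some(rx_time) {
            self.last_rx_time = Some(rx_time);
            self.last_rx_time_ids.clear();
        }
        self.last_rx_time_ids.push(id);
    }
}

fn main() {
    let mut cp = Checkpoint::default();
    assert!(cp.should_forward(100, 10));
    cp.update_with(100, 10);
    assert!(cp.should_forward(100, 9)); // same second, unseen id
    cp.update_with(100, 9);
    assert!(!cp.should_forward(100, 10)); // exact duplicate
    assert!(!cp.should_forward(99, 7)); // older timestamp
    assert_eq!(cp.last_rx_time_ids, vec![10, 9]);
    println!("ok");
}
```

This matches the legacy-id fallback tests too: when `last_rx_time` is `None`, a separate id-based filter takes over.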
#[test]
@@ -311,13 +485,17 @@ mod tests {
let state = BridgeState {
last_message_id: Some(12345),
last_checked_at: Some(99),
last_rx_time: Some(99),
last_rx_time_ids: vec![123],
last_checked_at: Some(77),
};
state.save(path_str).unwrap();
let loaded_state = BridgeState::load(path_str).unwrap();
assert_eq!(loaded_state.last_message_id, Some(12345));
assert_eq!(loaded_state.last_checked_at, Some(99));
assert_eq!(loaded_state.last_rx_time, Some(99));
assert_eq!(loaded_state.last_rx_time_ids, vec![123]);
assert_eq!(loaded_state.last_checked_at, None);
}
#[test]
@@ -328,50 +506,50 @@ mod tests {
let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, None);
assert_eq!(state.last_rx_time, None);
assert!(state.last_rx_time_ids.is_empty());
}
#[test]
fn bridge_state_load_empty_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("empty.json");
let path_str = file_path.to_str().unwrap();
fs::write(path_str, "").unwrap();
let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, None);
assert_eq!(state.last_rx_time, None);
assert!(state.last_rx_time_ids.is_empty());
assert_eq!(state.last_checked_at, None);
}
#[test]
fn update_checkpoint_requires_last_message_id() {
let mut state = BridgeState {
last_message_id: None,
last_checked_at: Some(10),
};
fn bridge_state_migrates_legacy_checkpoint() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("legacy_state.json");
let path_str = file_path.to_str().unwrap();
let saved = update_checkpoint(&mut state, true, 123);
assert!(!saved);
assert_eq!(state.last_checked_at, Some(10));
}
fs::write(
path_str,
r#"{"last_message_id":42,"last_checked_at":1710000000}"#,
)
.unwrap();
#[test]
fn update_checkpoint_skips_when_not_delivered() {
let mut state = BridgeState {
last_message_id: Some(5),
last_checked_at: Some(10),
};
let saved = update_checkpoint(&mut state, false, 123);
assert!(!saved);
assert_eq!(state.last_checked_at, Some(10));
}
#[test]
fn update_checkpoint_sets_when_safe() {
let mut state = BridgeState {
last_message_id: Some(5),
last_checked_at: None,
};
let saved = update_checkpoint(&mut state, true, 123);
assert!(saved);
assert_eq!(state.last_checked_at, Some(123));
let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, Some(42));
assert_eq!(state.last_rx_time, Some(1_710_000_000));
assert!(state.last_rx_time_ids.is_empty());
}
#[test]
fn fetch_params_respects_missing_last_message_id() {
let state = BridgeState {
last_message_id: None,
last_checked_at: Some(123),
last_rx_time: Some(123),
last_rx_time_ids: vec![],
last_checked_at: None,
};
let params = build_fetch_params(&state);
@@ -383,7 +561,9 @@ mod tests {
fn fetch_params_uses_since_when_safe() {
let state = BridgeState {
last_message_id: Some(1),
last_checked_at: Some(123),
last_rx_time: Some(123),
last_rx_time_ids: vec![],
last_checked_at: None,
};
let params = build_fetch_params(&state);
@@ -395,6 +575,8 @@ mod tests {
fn fetch_params_defaults_to_small_window() {
let state = BridgeState {
last_message_id: Some(1),
last_rx_time: None,
last_rx_time_ids: vec![],
last_checked_at: None,
};
@@ -403,8 +585,59 @@ mod tests {
assert_eq!(params.since, None);
}
#[test]
fn log_state_update_emits_info() {
let state = BridgeState::default();
log_state_update(&state);
}
#[test]
fn persist_state_writes_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("state.json");
let path_str = file_path.to_str().unwrap();
let state = BridgeState {
last_message_id: Some(42),
last_rx_time: Some(123),
last_rx_time_ids: vec![42],
last_checked_at: None,
};
persist_state(&state, path_str);
let loaded = BridgeState::load(path_str).unwrap();
assert_eq!(loaded.last_message_id, Some(42));
}
#[test]
fn persist_state_logs_on_error() {
let tmp_dir = tempfile::tempdir().unwrap();
let dir_path = tmp_dir.path().to_str().unwrap();
let state = BridgeState::default();
// Writing to a directory path should trigger the error branch.
persist_state(&state, dir_path);
}
#[tokio::test]
async fn poll_once_persists_checkpoint_without_messages() {
async fn spawn_synapse_listener_starts_task() {
let addr = SocketAddr::from(([127, 0, 0, 1], 0));
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
tokio::time::sleep(Duration::from_millis(10)).await;
handle.abort();
}
#[tokio::test]
async fn spawn_synapse_listener_logs_error_on_bind_failure() {
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
let _ = handle.await;
}
#[tokio::test]
async fn poll_once_leaves_state_unchanged_without_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();
@@ -426,6 +659,7 @@ mod tests {
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
@@ -435,18 +669,63 @@ mod tests {
let mut state = BridgeState {
last_message_id: Some(1),
last_rx_time: Some(100),
last_rx_time_ids: vec![1],
last_checked_at: None,
};
poll_once(&potato, &matrix, &mut state, state_str, 123).await;
poll_once(&potato, &matrix, &mut state, state_str).await;
mock_msgs.assert();
// Should have advanced checkpoint and saved it.
assert_eq!(state.last_checked_at, Some(123));
// No new data means state remains unchanged and is not persisted.
assert_eq!(state.last_rx_time, Some(100));
assert_eq!(state.last_rx_time_ids, vec![1]);
assert!(!state_path.exists());
}
#[tokio::test]
async fn poll_once_persists_state_for_non_text_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();
let mut server = mockito::Server::new_async().await;
let mock_msgs = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any)
.with_status(200)
.with_header("content-type", "application/json")
.with_body(
r#"[{"id":1,"rx_time":100,"rx_iso":"2025-11-27T00:00:00Z","from_id":"!abcd1234","to_id":"^all","channel":1,"portnum":"POSITION_APP","text":"","rssi":-100,"hop_limit":1,"lora_freq":868,"modem_preset":"MediumFast","channel_name":"TEST","snr":0.0,"node_id":"!abcd1234"}]"#,
)
.create();
let http_client = reqwest::Client::new();
let potatomesh_cfg = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 1,
};
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
let potato = PotatoClient::new(http_client.clone(), potatomesh_cfg);
let matrix = MatrixAppserviceClient::new(http_client, matrix_cfg);
let mut state = BridgeState::default();
poll_once(&potato, &matrix, &mut state, state_str).await;
mock_msgs.assert();
assert!(state_path.exists());
let loaded = BridgeState::load(state_str).unwrap();
assert_eq!(loaded.last_checked_at, Some(123));
assert_eq!(loaded.last_message_id, Some(1));
assert_eq!(loaded.last_rx_time, Some(100));
assert_eq!(loaded.last_rx_time_ids, vec![1]);
}
#[tokio::test]
@@ -460,6 +739,7 @@ mod tests {
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
@@ -467,6 +747,8 @@ mod tests {
let node_id = "abcd1234";
let user_id = format!("@potato_{}:{}", node_id, matrix_cfg.server_name);
let encoded_user = urlencoding::encode(&user_id);
let room_id = matrix_cfg.room_id.clone();
let encoded_room = urlencoding::encode(&room_id);
let mock_get_node = server
.mock("GET", "/api/nodes/abcd1234")
@@ -477,7 +759,18 @@ mod tests {
let mock_register = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user&access_token=AS_TOKEN")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
let mock_join = server
.mock(
"POST",
format!("/_matrix/client/v3/rooms/{}/join", encoded_room).as_str(),
)
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
@@ -486,14 +779,16 @@ mod tests {
"PUT",
format!("/_matrix/client/v3/profile/{}/displayname", encoded_user).as_str(),
)
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"displayname": "Test Node (TN)"
})))
.with_status(200)
.create();
let http_client = reqwest::Client::new();
let matrix_client = MatrixAppserviceClient::new(http_client.clone(), matrix_cfg);
let room_id = &matrix_client.cfg.room_id;
let encoded_room = urlencoding::encode(room_id);
let txn_id = matrix_client
.txn_counter
.load(std::sync::atomic::Ordering::SeqCst);
@@ -507,7 +802,14 @@ mod tests {
)
.as_str(),
)
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"body": "`[868][MF][TEST]` Ping",
"format": "org.matrix.custom.html",
"formatted_body": "<code>[868][MF][TEST]</code> Ping",
})))
.with_status(200)
.create();
@@ -520,6 +822,7 @@ mod tests {
assert!(result.is_ok());
mock_get_node.assert();
mock_register.assert();
mock_join.assert();
mock_display_name.assert();
mock_send.assert();
+133 -62
@@ -66,10 +66,6 @@ impl MatrixAppserviceClient {
format!("@{}:{}", localpart, self.cfg.server_name)
}
fn auth_query(&self) -> String {
format!("access_token={}", urlencoding::encode(&self.cfg.as_token))
}
/// Ensure the puppet user exists (register via appservice registration).
pub async fn ensure_user_registered(&self, localpart: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
@@ -80,9 +76,8 @@ impl MatrixAppserviceClient {
}
let url = format!(
"{}/_matrix/client/v3/register?kind=user&{}",
self.cfg.homeserver,
self.auth_query()
"{}/_matrix/client/v3/register?kind=user",
self.cfg.homeserver
);
let body = RegisterReq {
@@ -90,7 +85,13 @@ impl MatrixAppserviceClient {
username: localpart,
};
let resp = self.http.post(&url).json(&body).send().await?;
let resp = self
.http
.post(&url)
.bearer_auth(&self.cfg.as_token)
.json(&body)
.send()
.await?;
if resp.status().is_success() {
Ok(())
} else {
@@ -109,18 +110,21 @@ impl MatrixAppserviceClient {
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}&{}",
self.cfg.homeserver,
encoded_user,
encoded_user,
self.auth_query()
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}",
self.cfg.homeserver, encoded_user, encoded_user
);
let body = DisplayNameReq {
displayname: display_name,
};
let resp = self.http.put(&url).json(&body).send().await?;
let resp = self
.http
.put(&url)
.bearer_auth(&self.cfg.as_token)
.json(&body)
.send()
.await?;
if resp.status().is_success() {
Ok(())
} else {
@@ -134,12 +138,53 @@ impl MatrixAppserviceClient {
}
}
/// Send a plain text message into the configured room as puppet user_id.
pub async fn send_text_message_as(&self, user_id: &str, body_text: &str) -> anyhow::Result<()> {
/// Ensure the puppet user is joined to the configured room.
pub async fn ensure_user_joined_room(&self, user_id: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
struct JoinReq {}
let encoded_room = urlencoding::encode(&self.cfg.room_id);
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/rooms/{}/join?user_id={}",
self.cfg.homeserver, encoded_room, encoded_user
);
let resp = self
.http
.post(&url)
.bearer_auth(&self.cfg.as_token)
.json(&JoinReq {})
.send()
.await?;
if resp.status().is_success() {
Ok(())
} else {
let status = resp.status();
let body_snip = resp.text().await.unwrap_or_default();
Err(anyhow::anyhow!(
"Matrix join failed for {} in {} with status {} ({})",
user_id,
self.cfg.room_id,
status,
body_snip
))
}
}
/// Send a text message with HTML formatting into the configured room as puppet user_id.
pub async fn send_formatted_message_as(
&self,
user_id: &str,
body_text: &str,
formatted_body: &str,
) -> anyhow::Result<()> {
#[derive(Serialize)]
struct MsgContent<'a> {
msgtype: &'a str,
body: &'a str,
format: &'a str,
formatted_body: &'a str,
}
let txn_id = self.txn_counter.fetch_add(1, Ordering::SeqCst);
@@ -147,35 +192,36 @@ impl MatrixAppserviceClient {
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}&{}",
self.cfg.homeserver,
encoded_room,
txn_id,
encoded_user,
self.auth_query()
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}",
self.cfg.homeserver, encoded_room, txn_id, encoded_user
);
let content = MsgContent {
msgtype: "m.text",
body: body_text,
format: "org.matrix.custom.html",
formatted_body,
};
let resp = self.http.put(&url).json(&content).send().await?;
let resp = self
.http
.put(&url)
.bearer_auth(&self.cfg.as_token)
.json(&content)
.send()
.await?;
if !resp.status().is_success() {
let status = resp.status();
// optional: pull a short body snippet for debugging
let body_snip = resp.text().await.unwrap_or_default();
// Log for observability
tracing::warn!(
"Failed to send message as {}: status {}, body: {}",
"Failed to send formatted message as {}: status {}, body: {}",
user_id,
status,
body_snip
);
// Propagate an error so callers know this message was NOT delivered
return Err(anyhow::anyhow!(
"Matrix send failed for {} with status {}",
user_id,
@@ -195,6 +241,7 @@ mod tests {
MatrixConfig {
homeserver: "https://matrix.example.org".to_string(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
}
@@ -255,16 +302,6 @@ mod tests {
assert!(result.is_err());
}
#[test]
fn auth_query_contains_access_token() {
let http = reqwest::Client::builder().build().unwrap();
let client = MatrixAppserviceClient::new(http, dummy_cfg());
let q = client.auth_query();
assert!(q.starts_with("access_token="));
assert!(q.contains("AS_TOKEN"));
}
#[test]
fn test_new_matrix_client() {
let http_client = reqwest::Client::new();
@@ -280,7 +317,8 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user&access_token=AS_TOKEN")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
@@ -298,7 +336,8 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user&access_token=AS_TOKEN")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(400) // M_USER_IN_USE
.create();
@@ -316,12 +355,13 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
@@ -339,12 +379,13 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(500)
.create();
@@ -358,40 +399,61 @@ mod tests {
}
#[tokio::test]
async fn test_send_text_message_as_success() {
async fn test_ensure_user_joined_room_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let client = {
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
};
let txn_id = client.txn_counter.load(Ordering::SeqCst);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);
let mock = server
.mock("PUT", path.as_str())
.mock("POST", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
let result = client.send_text_message_as(user_id, "hello").await;
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn test_send_text_message_as_fail() {
async fn test_ensure_user_joined_room_fail() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);
let mock = server
.mock("POST", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(403)
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;
mock.assert();
assert!(result.is_err());
}
#[tokio::test]
async fn test_send_formatted_message_as_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
@@ -405,7 +467,7 @@ mod tests {
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
};
let txn_id = client.txn_counter.load(Ordering::SeqCst);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let query = format!("user_id={}", encoded_user);
let path = format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
@@ -414,12 +476,21 @@ mod tests {
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.with_status(500)
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"body": "`[meta]` hello",
"format": "org.matrix.custom.html",
"formatted_body": "<code>[meta]</code> hello",
})))
.with_status(200)
.create();
let result = client.send_text_message_as(user_id, "hello").await;
let result = client
.send_formatted_message_as(user_id, "`[meta]` hello", "<code>[meta]</code> hello")
.await;
mock.assert();
assert!(result.is_err());
assert!(result.is_ok());
}
}
+289
@@ -0,0 +1,289 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use axum::{
extract::{Path, Query, State},
http::{header::AUTHORIZATION, HeaderMap, StatusCode},
response::IntoResponse,
routing::put,
Json, Router,
};
use serde_json::Value;
use std::net::SocketAddr;
use tracing::info;
#[derive(Clone)]
struct SynapseState {
hs_token: String,
}
#[derive(serde::Deserialize)]
struct AuthQuery {
access_token: Option<String>,
}
/// Pull access tokens from supported auth headers.
fn extract_access_token(headers: &HeaderMap) -> Option<String> {
if let Some(value) = headers.get(AUTHORIZATION) {
if let Ok(raw) = value.to_str() {
if let Some(token) = raw.strip_prefix("Bearer ") {
return Some(token.trim().to_string());
}
if let Some(token) = raw.strip_prefix("bearer ") {
return Some(token.trim().to_string());
}
}
}
if let Some(value) = headers.get("x-access-token") {
if let Ok(raw) = value.to_str() {
return Some(raw.trim().to_string());
}
}
None
}
/// Compare tokens in constant time to avoid timing leakage.
fn constant_time_eq(a: &str, b: &str) -> bool {
let a_bytes = a.as_bytes();
let b_bytes = b.as_bytes();
let max_len = std::cmp::max(a_bytes.len(), b_bytes.len());
let mut diff = (a_bytes.len() ^ b_bytes.len()) as u8;
for idx in 0..max_len {
let left = *a_bytes.get(idx).unwrap_or(&0);
let right = *b_bytes.get(idx).unwrap_or(&0);
diff |= left ^ right;
}
diff == 0
}
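In isolation, the XOR-accumulator comparison above behaves like the following sketch (same algorithm restated standalone, exercised with the `HS_TOKEN` value the tests use):

```rust
// Length mismatch and byte differences are both folded into `diff`,
// so the loop never returns early on the first differing byte.
fn constant_time_eq(a: &str, b: &str) -> bool {
    let (a, b) = (a.as_bytes(), b.as_bytes());
    let mut diff = (a.len() ^ b.len()) as u8;
    for i in 0..a.len().max(b.len()) {
        diff |= a.get(i).unwrap_or(&0) ^ b.get(i).unwrap_or(&0);
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq("HS_TOKEN", "HS_TOKEN"));
    assert!(!constant_time_eq("HS_TOKEN", "HS_TOKE")); // shorter input
    assert!(!constant_time_eq("HS_TOKEN", "HS_TOKEX")); // one byte differs
    println!("ok");
}
```

Note the comparison time still scales with the longer input's length; only content-dependent early exits are avoided.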
/// Captures inbound Synapse transaction payloads for logging.
#[derive(Debug)]
struct SynapseResponse {
txn_id: String,
payload: Value,
}
/// Build the router that handles Synapse appservice transactions.
fn build_router(state: SynapseState) -> Router {
Router::new()
.route(
"/_matrix/appservice/v1/transactions/:txn_id",
put(handle_transaction),
)
.with_state(state)
}
/// Handle inbound transaction callbacks from Synapse.
async fn handle_transaction(
Path(txn_id): Path<String>,
State(state): State<SynapseState>,
Query(auth): Query<AuthQuery>,
headers: HeaderMap,
Json(payload): Json<Value>,
) -> impl IntoResponse {
let header_token = extract_access_token(&headers);
let token_matches = if let Some(token) = header_token.as_deref() {
constant_time_eq(token, &state.hs_token)
} else {
auth.access_token
.as_deref()
.is_some_and(|token| constant_time_eq(token, &state.hs_token))
};
if !token_matches {
return (StatusCode::UNAUTHORIZED, Json(serde_json::json!({})));
}
let response = SynapseResponse { txn_id, payload };
info!("Received Synapse transaction: {:?}", response);
(StatusCode::OK, Json(serde_json::json!({})))
}
/// Listen for Synapse callbacks on the configured address.
pub async fn run_synapse_listener(addr: SocketAddr, hs_token: String) -> anyhow::Result<()> {
let app = build_router(SynapseState { hs_token });
let listener = tokio::net::TcpListener::bind(addr).await?;
info!("Synapse listener bound on {}", addr);
axum::serve(listener, app).await?;
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use axum::body::Body;
use axum::http::Request;
use tokio::time::{sleep, Duration};
use tower::ServiceExt;
#[tokio::test]
async fn transactions_endpoint_accepts_payloads() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "123"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/123")
.header("authorization", "Bearer HS_TOKEN")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::OK);
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
.await
.unwrap();
assert_eq!(body.as_ref(), b"{}");
}
#[tokio::test]
async fn transactions_endpoint_rejects_missing_token() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "123"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/123")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
.await
.unwrap();
assert_eq!(body.as_ref(), b"{}");
}
#[tokio::test]
async fn transactions_endpoint_rejects_wrong_token() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "123"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/123")
.header("authorization", "Bearer NOPE")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
.await
.unwrap();
assert_eq!(body.as_ref(), b"{}");
}
#[tokio::test]
async fn transactions_endpoint_accepts_legacy_query_token() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "125"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/125?access_token=HS_TOKEN")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::OK);
}
#[tokio::test]
async fn transactions_endpoint_accepts_x_access_token_header() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "126"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/126")
.header("x-access-token", "HS_TOKEN")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::OK);
}
#[tokio::test]
async fn run_synapse_listener_starts_and_can_abort() {
let addr = SocketAddr::from(([127, 0, 0, 1], 0));
let handle =
tokio::spawn(async move { run_synapse_listener(addr, "HS_TOKEN".to_string()).await });
sleep(Duration::from_millis(10)).await;
handle.abort();
}
#[tokio::test]
async fn run_synapse_listener_returns_error_on_bind_failure() {
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let result = run_synapse_listener(addr, "HS_TOKEN".to_string()).await;
assert!(result.is_err());
}
}
+22 -4
@@ -19,6 +19,11 @@ use tokio::sync::RwLock;
use crate::config::PotatomeshConfig;
/// Protocol identifier sent as a query parameter to restrict API results to
/// Meshtastic data only. Other protocols (e.g. MeshCore) are excluded until
/// the clients are updated to support them.
const PROTOCOL_FILTER: &str = "meshtastic";
#[allow(dead_code)]
#[derive(Debug, Deserialize, Clone)]
pub struct PotatoMessage {
@@ -131,7 +136,10 @@ impl PotatoClient {
}
pub async fn fetch_messages(&self, params: FetchParams) -> anyhow::Result<Vec<PotatoMessage>> {
let mut req = self.http.get(self.messages_url());
let mut req = self
.http
.get(self.messages_url())
.query(&[("protocol", PROTOCOL_FILTER)]);
if let Some(limit) = params.limit {
req = req.query(&[("limit", limit)]);
}
@@ -336,7 +344,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any) // allow optional query params
.match_query(mockito::Matcher::UrlEncoded(
"protocol".into(),
"meshtastic".into(),
))
.with_status(200)
.with_header("content-type", "application/json")
.with_body(
@@ -427,7 +438,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any)
.match_query(mockito::Matcher::UrlEncoded(
"protocol".into(),
PROTOCOL_FILTER.into(),
))
.with_status(500)
.create();
@@ -448,7 +462,11 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/messages")
.match_query("limit=10&since=123")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("protocol".into(), PROTOCOL_FILTER.into()),
mockito::Matcher::UrlEncoded("limit".into(), "10".into()),
mockito::Matcher::UrlEncoded("since".into(), "123".into()),
]))
.with_status(200)
.with_header("content-type", "application/json")
.with_body("[]")
BIN: new binary image (62 KiB), not shown.
BIN: new binary image (1.5 MiB), not shown.
+71
@@ -0,0 +1,71 @@
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require "base64"
require "meshtastic"
require "openssl"
channel_name = "BerlinMesh"
# === Inputs from your packet ===
cipher_b64 = "Q1R7tgI5yXzMXu/3"
psk_b64 = "Nmh7EooP2Tsc+7pvPwXLcEDDuYhk+fBo2GLnbA1Y1sg="
packet_id = 3_915_687_257
from_id = "!9e95cf60"
channel = 35
# === Decode key and ciphertext ===
key = Base64.decode64(psk_b64) # 32 bytes -> AES-256
ciphertext = Base64.decode64(cipher_b64)
# === Derive numeric node id from Meshtastic-style string ===
hex_str = from_id.sub(/^!/, "") # "9e95cf60"
from_node = hex_str.to_i(16) # 0x9e95cf60
# === Build nonce exactly like Meshtastic CryptoEngine ===
# Little-endian 64-bit packet ID + little-endian 32-bit node ID + 4 zero bytes
nonce = [packet_id].pack("Q<") # uint64, little-endian
nonce += [from_node].pack("L<") # uint32, little-endian
nonce += "\x00" * 4 # extraNonce == 0 for PSK channel msgs
raise "Nonce must be 16 bytes" unless nonce.bytesize == 16
raise "Key must be 32 bytes" unless key.bytesize == 32
# === AES-256-CTR decrypt ===
cipher = OpenSSL::Cipher.new("aes-256-ctr")
cipher.decrypt
cipher.key = key
cipher.iv = nonce
plaintext = cipher.update(ciphertext) + cipher.final
# At this point `plaintext` is the raw Meshtastic protobuf payload
plaintext = plaintext.bytes.pack("C*") # force binary (ASCII-8BIT) encoding before protobuf decode
data = Meshtastic::Data.decode(plaintext)
msg = data.payload.dup.force_encoding("UTF-8")
puts msg
# Derives the channel number from the channel name and PSK (byte-wise XOR hash)
def channel_hash(name, psk_b64)
name_bytes = name.b # UTF-8 bytes
psk_bytes = Base64.decode64(psk_b64)
hn = name_bytes.bytes.reduce(0) { |acc, b| acc ^ b } # XOR over name
hp = psk_bytes.bytes.reduce(0) { |acc, b| acc ^ b } # XOR over PSK
(hn ^ hp) & 0xFF
end
channel_h = channel_hash(channel_name, psk_b64)
puts channel_h
puts channel == channel_h
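The nonce layout and channel-hash scheme the Ruby script walks through can also be restated in Rust. This is a hedged port for illustration only (Meshtastic's own CryptoEngine is the authority; the PSK bytes here are a zeroed placeholder, not the real key):

```rust
// Nonce: little-endian u64 packet id, little-endian u32 node id,
// then 4 zero bytes (extraNonce == 0 for PSK channel messages).
fn build_nonce(packet_id: u64, from_node: u32) -> [u8; 16] {
    let mut nonce = [0u8; 16];
    nonce[..8].copy_from_slice(&packet_id.to_le_bytes());
    nonce[8..12].copy_from_slice(&from_node.to_le_bytes());
    nonce // bytes 12..16 remain zero
}

// Channel number: XOR-fold of the name bytes, XORed with the
// XOR-fold of the PSK bytes, masked to 8 bits by the u8 type.
fn channel_hash(name: &[u8], psk: &[u8]) -> u8 {
    let hn = name.iter().fold(0u8, |acc, b| acc ^ b);
    let hp = psk.iter().fold(0u8, |acc, b| acc ^ b);
    hn ^ hp
}

fn main() {
    // Values from the script: packet id 3_915_687_257, node "!9e95cf60".
    let nonce = build_nonce(3_915_687_257, 0x9e95cf60);
    assert_eq!(nonce.len(), 16);
    assert_eq!(&nonce[8..12], &0x9e95cf60u32.to_le_bytes());
    assert_eq!(&nonce[12..], &[0u8; 4]);
    // Placeholder PSK: a real run would Base64-decode the 32-byte key.
    println!("channel hash: {}", channel_hash(b"BerlinMesh", &[0u8; 32]));
}
```

With the real Base64-decoded PSK substituted in, the hash should reproduce the packet's channel number (35 in the script's example).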
