Compare commits

..

35 Commits

Author SHA1 Message Date
l5y
e9b1c102f5 address review comments 2026-03-28 17:41:29 +01:00
l5y
29be258b57 data: resolve circular dependency of deamon.py 2026-03-28 17:11:38 +01:00
Ben Allfree
b1c416d029 first cut (#651) 2026-03-28 17:09:12 +01:00
dependabot[bot]
8305ca588c build(deps): bump rustls-webpki from 0.103.8 to 0.103.10 in /matrix (#649)
Bumps [rustls-webpki](https://github.com/rustls/webpki) from 0.103.8 to 0.103.10.
- [Release notes](https://github.com/rustls/webpki/releases)
- [Commits](https://github.com/rustls/webpki/compare/v/0.103.8...v/0.103.10)

---
updated-dependencies:
- dependency-name: rustls-webpki
  dependency-version: 0.103.10
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-21 12:55:17 +01:00
dependabot[bot]
0cf56b6fba build(deps): bump quinn-proto from 0.11.13 to 0.11.14 in /matrix (#646)
Bumps [quinn-proto](https://github.com/quinn-rs/quinn) from 0.11.13 to 0.11.14.
- [Release notes](https://github.com/quinn-rs/quinn/releases)
- [Commits](https://github.com/quinn-rs/quinn/compare/quinn-proto-0.11.13...quinn-proto-0.11.14)

---
updated-dependencies:
- dependency-name: quinn-proto
  dependency-version: 0.11.14
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-11 14:56:43 +01:00
l5y
ecce7f3504 chore: bump version to 0.5.11 (#645)
* chore: bump version to 0.5.11

* data: run black
2026-03-01 21:59:04 +01:00
l5y
17fa183c4f web: limit horizontal size of dropdown (#644)
* web: limit horizontal size of dropdown

* address review comments
2026-03-01 21:49:06 +01:00
l5y
5b0a6f5f8b web: expose node stats in distinct api (#641)
* web: expose node stats in distinct api

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments
2026-02-14 21:14:10 +01:00
l5y
2e8b5ad856 web: do not merge channels by name (#640) 2026-02-14 15:42:14 +01:00
l5y
e32b098be4 web: do not merge channels by ID in frontend (#637)
* web: do not merge channels by ID in frontend

* web: address review comments

* web: address review comments
2026-02-14 14:56:25 +01:00
l5y
b45629f13c web: do not touch neighbor last seen on neighbor info (#636)
* web: do not touch neighbor last seen on neighbor info

* web: address review comments
2026-02-14 14:43:46 +01:00
l5y
96421c346d ingestor: report self id per packet (#635)
* ingestor: report self id per packet

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments

* ingestor: address review comments
2026-02-14 14:29:05 +01:00
l5y
724b3e14e5 ci: fix docker compose and docs (#634)
* ci: fix docker compose and docs

* docker: address review comments
2026-02-14 13:25:43 +01:00
l5y
e8c83a2774 web: supress encrypted text messages in frontend (#633)
* web: supress encrypted text messages in frontend

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments
2026-02-14 13:11:02 +01:00
l5y
5c5a9df5a6 federation: ensure requests timeout properly and can be terminated (#631)
* federation: ensure requests timeout properly and can be terminated

* web: address review comments

* web: address review comments

* web: address review comments

* web: address review comments
2026-02-14 12:29:01 +01:00
dependabot[bot]
7cb4bbe61b build(deps): bump bytes from 1.11.0 to 1.11.1 in /matrix (#627)
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.11.0 to 1.11.1.
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.11.0...v1.11.1)

---
updated-dependencies:
- dependency-name: bytes
  dependency-version: 1.11.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-06 21:40:49 +01:00
l5y
fed8b9e124 matrix: config loading now merges optional TOML with CLI/env/secret inputs (#617)
* matrix: config loading now merges optional TOML with CLI/env/secret inputs

* matrix: fix tests

* matrix: address review comments

* matrix: fix tests

* matrix: cover missing unit test vectors
2026-01-10 23:39:53 +01:00
l5y
60e734086f matrix: logs only non-sensitive config fields (#616)
* matrix: logs only non-sensitive config fields

* matrix: run fmt
2026-01-10 21:06:51 +01:00
l5y
c3181e9bd5 web: decrypted takes precedence (#614)
* web: decrypted takes precedence

* web: run rufo

* web: fix tests

* web: fix tests

* web: cover missing unit test vectors

* web: fix tests
2026-01-10 13:13:55 +01:00
l5y
f4fa487b2d Add Apache headers to missing sources (#615) 2026-01-10 13:07:47 +01:00
l5y
e0237108c6 web: decrypt PSK-1 unencrypted messages on arrival (#611)
* web: decrypt PSK-1 unencrypted messages on arrival

* web: address review comments

* web: use proper psk to decrypt instead of alias

* cover missing unit test vectors

* tests: run black formatter

* web: fix tests

* web: refine decryption data processing logic

* web: address review comments

* web: cover missing unit test vectors

* web: cover missing unit test vectors

* web: cover missing unit test vectors

* web: cover missing unit test vectors
2026-01-10 12:33:59 +01:00
l5y
d7a636251d web: daemonize federation worker pool to avoid deadlocks on stuck announcments (#610)
* web: daemonize federation worker pool to avoid deadlocks on stuck announcments

* web: address review comments

* web: address review comments
2026-01-09 09:12:25 +01:00
l5y
108573b100 web: add announcement banner (#609)
* web: add announcement banner

* web: cover missing unit test vectors
2026-01-08 21:17:59 +01:00
l5y
36f55e6b79 l5y chore version 0510 (#608)
* chore: bump version to 0.5.10

* chore: bump version to 0.5.10

* chore: update changelog
2026-01-08 16:20:14 +01:00
l5y
b4dd72e7eb matrix: listen for synapse on port 41448 (#607)
* matrix: listen for synapse on port 41448

* matrix: address review comments

* matrix: address review comments

* matrix: cover missing unit test vectors

* matrix: cover missing unit test vectors
2026-01-08 15:51:31 +01:00
l5y
f5f2e977a1 web: collapse federation map ledgend (#604)
* web: collapse federation map ledgend

* web: cover missing unit test vectors
2026-01-06 17:31:20 +01:00
l5y
e9a0dc0d59 web: fix stale node queries (#603) 2026-01-06 16:13:04 +01:00
l5y
d75c395514 matrix: move short name to display name (#602)
* matrix: move short name to display name

* matrix: run fmt
2026-01-05 23:24:27 +01:00
l5y
b08f951780 ci: update ruby to 4 (#601)
* ci: update ruby to 4

* ci: update dispatch triggers
2026-01-05 23:23:56 +01:00
l5y
955431ac18 web: display traces of last 28 days if available (#599)
* web: display traces of last 28 days if available

* web: address review comments

* web: fix tests

* web: fix tests
2026-01-05 21:22:16 +01:00
l5y
7f40abf92a web: establish menu structure (#597)
* web: establish menu structure

* web: cover missing unit test vectors

* web: fix tests
2026-01-05 21:18:51 +01:00
l5y
c157fd481b matrix: fixed the text-message checkpoint regression (#595)
* matrix: fixed the text-message checkpoint regression

* matrix: improve formatting

* matrix: fix tests
2026-01-05 18:20:25 +01:00
l5y
a6fc7145bc matrix: cache seen messages by rx_time not id (#594)
* matrix: cache seen messages by rx_time not id

* matrix: fix review comments

* matrix: fix review comments

* matrix: cover missing unit test vectors

* matrix: fix tests
2026-01-05 17:34:54 +01:00
l5y
ca05cbb2c5 web: hide the default '0' tab when not active (#593) 2026-01-05 16:26:56 +01:00
l5y
5c79572c4d matrix: fix empty bridge state json (#592)
* matrix: fix empty bridge state json

* matrix: fix tests
2026-01-05 16:11:24 +01:00
115 changed files with 13113 additions and 2232 deletions


@@ -20,6 +20,7 @@ on:
pull_request:
branches: [ "main" ]
paths:
- '.github/**'
- 'web/**'
- 'tests/**'


@@ -20,6 +20,7 @@ on:
pull_request:
branches: [ "main" ]
paths:
- '.github/**'
- 'app/**'
- 'tests/**'


@@ -20,6 +20,7 @@ on:
pull_request:
branches: [ "main" ]
paths:
- '.github/**'
- 'data/**'
- 'tests/**'


@@ -20,6 +20,7 @@ on:
pull_request:
branches: [ "main" ]
paths:
- '.github/**'
- 'web/**'
- 'tests/**'
@@ -34,7 +35,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
ruby-version: ['3.3', '3.4']
ruby-version: ['3.4', '4.0']
steps:
- uses: actions/checkout@v5


@@ -1,5 +1,44 @@
# CHANGELOG
## v0.5.9
* Matrix: listen for synapse on port 41448 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/607>
* Web: collapse federation map ledgend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/604>
* Web: fix stale node queries by @l5yth in <https://github.com/l5yth/potato-mesh/pull/603>
* Matrix: move short name to display name by @l5yth in <https://github.com/l5yth/potato-mesh/pull/602>
* Ci: update ruby to 4 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/601>
* Web: display traces of last 28 days if available by @l5yth in <https://github.com/l5yth/potato-mesh/pull/599>
* Web: establish menu structure by @l5yth in <https://github.com/l5yth/potato-mesh/pull/597>
* Matrix: fixed the text-message checkpoint regression by @l5yth in <https://github.com/l5yth/potato-mesh/pull/595>
* Matrix: cache seen messages by rx_time not id by @l5yth in <https://github.com/l5yth/potato-mesh/pull/594>
* Web: hide the default '0' tab when not active by @l5yth in <https://github.com/l5yth/potato-mesh/pull/593>
* Matrix: fix empty bridge state json by @l5yth in <https://github.com/l5yth/potato-mesh/pull/592>
* Web: allow certain charts to overflow upper bounds by @l5yth in <https://github.com/l5yth/potato-mesh/pull/585>
* Ingestor: support ROUTING_APP messages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/584>
* Ci: run nix flake check on ci by @l5yth in <https://github.com/l5yth/potato-mesh/pull/583>
* Web: hide legend by default by @l5yth in <https://github.com/l5yth/potato-mesh/pull/582>
* Nix flake by @benjajaja in <https://github.com/l5yth/potato-mesh/pull/577>
* Support BLE UUID format for macOS Bluetooth devices by @apo-mak in <https://github.com/l5yth/potato-mesh/pull/575>
* Web: add mesh.qrp.ro as seed node by @l5yth in <https://github.com/l5yth/potato-mesh/pull/573>
* Web: ensure unknown nodes for messages and traces by @l5yth in <https://github.com/l5yth/potato-mesh/pull/572>
* Chore: bump version to 0.5.9 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/569>
## v0.5.8
* Web: add secondary seed node jmrp.io by @l5yth in <https://github.com/l5yth/potato-mesh/pull/568>
* Data: implement whitelist for ingestor by @l5yth in <https://github.com/l5yth/potato-mesh/pull/567>
* Web: add ?since= parameter to all apis by @l5yth in <https://github.com/l5yth/potato-mesh/pull/566>
* Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/565>
* Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/564>
* Web: fix federation signature validation and create fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/563>
* Chore: update readme by @l5yth in <https://github.com/l5yth/potato-mesh/pull/561>
* Matrix: add docker file for bridge by @l5yth in <https://github.com/l5yth/potato-mesh/pull/556>
* Matrix: add health checks to startup by @l5yth in <https://github.com/l5yth/potato-mesh/pull/555>
* Matrix: omit the api part in base url by @l5yth in <https://github.com/l5yth/potato-mesh/pull/554>
* App: add utility coverage tests for main.dart by @l5yth in <https://github.com/l5yth/potato-mesh/pull/552>
* Data: add thorough daemon unit tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/553>
* Chore: bump version to 0.5.8 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/551>
## v0.5.7
* Data: track ingestors heartbeat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/549>


@@ -88,6 +88,7 @@ The web app can be configured with environment variables (defaults shown):
| `CHANNEL` | `"#LongFast"` | Default channel name displayed in the UI. |
| `FREQUENCY` | `"915MHz"` | Default frequency description displayed in the UI. |
| `CONTACT_LINK` | `"#potatomesh:dod.ngo"` | Chat link or Matrix alias rendered in the footer and overlays. |
| `ANNOUNCEMENT` | _unset_ | Optional announcement banner text rendered above the header on every page. |
| `MAP_CENTER` | `38.761944,-27.090833` | Latitude and longitude that centre the map on load. |
| `MAP_ZOOM` | _unset_ | Fixed Leaflet zoom applied on first load; disables auto-fit when provided. |
| `MAX_DISTANCE` | `42` | Maximum distance (km) before node relationships are hidden on the map. |
@@ -251,15 +252,36 @@ services.potato-mesh = {
## Docker
Docker images are published on Github for each release:
Docker images are published on GitHub Container Registry for each release.
Image names and tags follow the workflow format:
`${IMAGE_PREFIX}-${service}-${architecture}:${tag}` (see `.github/workflows/docker.yml`).
```bash
docker pull ghcr.io/l5yth/potato-mesh/web:latest # newest release
docker pull ghcr.io/l5yth/potato-mesh/web:v0.5.5 # pinned historical release
docker pull ghcr.io/l5yth/potato-mesh/ingestor:latest
docker pull ghcr.io/l5yth/potato-mesh/matrix-bridge:latest
docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:latest
docker pull ghcr.io/l5yth/potato-mesh-web-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-web-linux-armv7:latest
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:latest
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-armv7:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-armv7:latest
# version-pinned examples
docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.5.5
docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.5.5
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.5.5
```
Note: `latest` is only published for non-prerelease versions. Pre-release tags
such as `-rc`, `-beta`, `-alpha`, or `-dev` are version-tagged only.
When using Compose, set `POTATOMESH_IMAGE_ARCH` in `docker-compose.yml` (or via
environment) so service images resolve to the correct architecture variant and
you avoid manual tag mistakes.
Feel free to run the [configure.sh](./configure.sh) script to set up your
environment. See the [Docker guide](DOCKER.md) for more details and custom
deployment instructions.
@@ -270,6 +292,8 @@ A matrix bridge is currently being worked on. It requests messages from a config
potato-mesh instance and forwards them to a specified matrix channel; see
[matrix/README.md](./matrix/README.md).
![matrix bridge](./scrot-0.6.png)
## Mobile App
A mobile _reader_ app is currently being worked on. Stay tuned for releases and updates.


@@ -1,3 +1,18 @@
/*
* Copyright © 2025-26 l5yth & contributors
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
plugins {
id("com.android.application")
id("kotlin-android")


@@ -1,3 +1,16 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package net.potatomesh.reader
import io.flutter.embedding.android.FlutterActivity


@@ -1,3 +1,18 @@
/*
* Copyright © 2025-26 l5yth & contributors
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
allprojects {
repositories {
google()


@@ -1,3 +1,18 @@
/*
* Copyright © 2025-26 l5yth & contributors
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
pluginManagement {
val flutterSdkPath =
run {


@@ -1,5 +1,18 @@
#!/usr/bin/env bash
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
export GIT_TAG="$(git describe --tags --abbrev=0)"
export GIT_COMMITS="$(git rev-list --count ${GIT_TAG}..HEAD)"
export GIT_SHA="$(git rev-parse --short=9 HEAD)"
@@ -12,4 +25,3 @@ flutter run \
--dart-define=GIT_SHA="${GIT_SHA}" \
--dart-define=GIT_DIRTY="${GIT_DIRTY}" \
--device-id 38151FDJH00D4C


@@ -15,11 +15,11 @@
<key>CFBundlePackageType</key>
<string>FMWK</string>
<key>CFBundleShortVersionString</key>
<string>0.5.9</string>
<string>0.5.11</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
<string>0.5.9</string>
<string>0.5.11</string>
<key>MinimumOSVersion</key>
<string>14.0</string>
</dict>


@@ -1,3 +1,16 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import Flutter
import UIKit


@@ -1 +1,14 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "GeneratedPluginRegistrant.h"


@@ -1,3 +1,16 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import Flutter
import UIKit
import XCTest


@@ -1,7 +1,7 @@
name: potato_mesh_reader
description: Meshtastic Reader — read-only view for PotatoMesh messages.
publish_to: "none"
version: 0.5.9
version: 0.5.11
environment:
sdk: ">=3.4.0 <4.0.0"


@@ -1,5 +1,18 @@
#!/usr/bin/env bash
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -euo pipefail
export GIT_TAG="$(git describe --tags --abbrev=0)"
@@ -27,4 +40,3 @@ fi
export APK_DIR="build/app/outputs/flutter-apk"
mv -v "${APK_DIR}/app-release.apk" "${APK_DIR}/potatomesh-reader-android-${TAG_NAME}.apk"
(cd "${APK_DIR}" && sha256sum "potatomesh-reader-android-${TAG_NAME}.apk" > "potatomesh-reader-android-${TAG_NAME}.apk.sha256sum")


@@ -1,3 +1,16 @@
// Copyright © 2025-26 l5yth & contributors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// This is a basic Flutter widget test.
//
// To perform an interaction with a widget in your test, use the WidgetTester


@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
message information before forwarding it to the accompanying web application.
"""
VERSION = "0.5.9"
VERSION = "0.5.11"
"""Semantic version identifier shared with the dashboard and front-end."""
__version__ = VERSION


@@ -0,0 +1,107 @@
## Mesh ingestor contracts (stable interfaces)
This repo's ingestion pipeline is split into:
- **Python collector** (`data/mesh_ingestor/*`) which normalizes packets/events and POSTs JSON to the web app.
- **Sinatra web app** (`web/`) which accepts those payloads on `POST /api/*` ingest routes and persists them into SQLite tables defined under `data/*.sql`.
This document records the **contracts that future providers must preserve**. The intent is to enable adding new providers (MeshCore, Reticulum, …) without changing the Ruby/DB/UI read-side.
### Canonical node identity
- **Canonical node id**: `nodes.node_id` is a `TEXT` primary key and is treated as canonical across the system.
- **Format**: `!%08x` (lowercase hex, 8 chars), for example `!abcdef01`.
- **Normalization**:
- Python currently normalizes via `data/mesh_ingestor/serialization.py:_canonical_node_id`.
- Ruby normalizes via `web/lib/potato_mesh/application/data_processing.rb:canonical_node_parts`.
- **Dual addressing**: Ruby routes and queries accept either a canonical `!xxxxxxxx` string or a numeric node id; they normalize to `node_id`.
Note: non-Meshtastic providers will need a strategy to map their native node identifiers into this `!%08x` space. That mapping is intentionally not standardized in code yet.
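The dual-addressing rule above can be sketched as a standalone Python helper. This is an illustrative approximation only, not the project's actual `_canonical_node_id` / `canonical_node_parts` implementations, which may handle more edge cases:

```python
def canonical_node_id(value) -> str:
    """Normalize a node identifier to the canonical `!%08x` form.

    Accepts either an existing `!xxxxxxxx` string or a numeric node id,
    mirroring the dual addressing described above. Illustrative sketch only.
    """
    if isinstance(value, str) and value.startswith("!"):
        # Lowercase the hex digits and left-pad to 8 characters.
        return "!" + value[1:].lower().zfill(8)
    # Numeric ids are masked to 32 bits and rendered as lowercase hex.
    return "!%08x" % (int(value) & 0xFFFFFFFF)

print(canonical_node_id(0xABCDEF01))   # -> !abcdef01
print(canonical_node_id("!ABCDEF01"))  # -> !abcdef01
print(canonical_node_id(1))            # -> !00000001
```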
### Ingest HTTP routes and payload shapes
Future providers should emit payloads that match these shapes (keys + types), which are validated by existing tests (notably `tests/test_mesh.py`).
#### `POST /api/nodes`
Payload is a mapping keyed by canonical node id:
- `{ "!abcdef01": { ... node fields ... } }`
Node entry fields are “Meshtastic-ish” (camelCase) and may include:
- `num` (int node number)
- `lastHeard` (int unix seconds)
- `snr` (float)
- `hopsAway` (int)
- `isFavorite` (bool)
- `user` (mapping; e.g. `shortName`, `longName`, `macaddr`, `hwModel`, `role`, `publicKey`, `isUnmessagable`)
- `deviceMetrics` (mapping; e.g. `batteryLevel`, `voltage`, `channelUtilization`, `airUtilTx`, `uptimeSeconds`)
- `position` (mapping; `latitude`, `longitude`, `altitude`, `time`, `locationSource`, `precisionBits`, optional nested `raw`)
- Optional radio metadata: `lora_freq`, `modem_preset`
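A hypothetical payload matching this shape, written as a Python literal; the field values (names, hardware model, coordinates) are illustrative, not taken from a real device:

```python
# Mapping keyed by canonical node id, per the spec above.
nodes_payload = {
    "!abcdef01": {
        "num": 2882400001,      # 0xABCDEF01 as an int node number
        "lastHeard": 1767225600,
        "snr": 6.25,
        "hopsAway": 1,
        "isFavorite": False,
        "user": {
            "shortName": "PTT",
            "longName": "Potato One",
            "hwModel": "HELTEC_V3",
            "role": "CLIENT",
        },
        "deviceMetrics": {
            "batteryLevel": 87,
            "voltage": 4.01,
            "channelUtilization": 3.4,
            "airUtilTx": 0.9,
            "uptimeSeconds": 86400,
        },
        "position": {
            "latitude": 38.7619,
            "longitude": -27.0908,
            "altitude": 35,
            "time": 1767225500,
            "locationSource": "LOC_INTERNAL",
            "precisionBits": 17,
        },
        "lora_freq": "915MHz",
        "modem_preset": "LONG_FAST",
    }
}
```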
#### `POST /api/messages`
Single message payload:
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Identity: `from_id` (string/int), `to_id` (string/int), `channel` (int), `portnum` (string|nil)
- Payload: `text` (string|nil), `encrypted` (string|nil), `reply_id` (int|nil), `emoji` (string|nil)
- RF: `snr` (float|nil), `rssi` (int|nil), `hop_limit` (int|nil)
- Meta: `channel_name` (string; only when not encrypted and known), `ingestor` (canonical host id), `lora_freq`, `modem_preset`
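A hypothetical single-message payload assembled from the field list above; all values are illustrative (`^all` as a broadcast destination follows the usual Meshtastic convention, but check the ingestor code for the exact string emitted):

```python
message_payload = {
    # Required
    "id": 123456789,
    "rx_time": 1767225600,
    "rx_iso": "2026-01-01T00:00:00+00:00",
    # Identity
    "from_id": "!abcdef01",
    "to_id": "^all",
    "channel": 0,
    "portnum": "TEXT_MESSAGE_APP",
    # Payload
    "text": "hello mesh",
    "encrypted": None,
    "reply_id": None,
    "emoji": None,
    # RF
    "snr": 5.5,
    "rssi": -92,
    "hop_limit": 3,
    # Meta (channel_name only present because the message is not encrypted)
    "channel_name": "#LongFast",
    "ingestor": "!00000042",
    "lora_freq": "915MHz",
    "modem_preset": "LONG_FAST",
}
```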
#### `POST /api/positions`
Single position payload:
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Node: `node_id` (canonical string), `node_num` (int|nil), `num` (int|nil), `from_id` (canonical string), `to_id` (string|nil)
- Position: `latitude`, `longitude`, `altitude` (floats|nil)
- Position time: `position_time` (int|nil)
- Quality: `location_source` (string|nil), `precision_bits` (int|nil), `sats_in_view` (int|nil), `pdop` (float|nil)
- Motion: `ground_speed` (float|nil), `ground_track` (float|nil)
- RF/meta: `snr`, `rssi`, `hop_limit`, `bitfield`, `payload_b64` (string|nil), `raw` (mapping|nil), `ingestor`, `lora_freq`, `modem_preset`
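A hypothetical position payload following the same field list; values are illustrative only:

```python
position_payload = {
    # Required
    "id": 555000111,
    "rx_time": 1767225600,
    "rx_iso": "2026-01-01T00:00:00+00:00",
    # Node
    "node_id": "!abcdef01",
    "node_num": 2882400001,
    "num": 2882400001,
    "from_id": "!abcdef01",
    "to_id": None,
    # Position
    "latitude": 38.7619,
    "longitude": -27.0908,
    "altitude": 35.0,
    "position_time": 1767225580,
    # Quality
    "location_source": "LOC_INTERNAL",
    "precision_bits": 17,
    "sats_in_view": 9,
    "pdop": 1.8,
    # Motion
    "ground_speed": 0.0,
    "ground_track": None,
    # RF/meta
    "snr": 5.5,
    "rssi": -92,
    "hop_limit": 3,
    "bitfield": 1,
    "payload_b64": None,
    "raw": None,
    "ingestor": "!00000042",
    "lora_freq": "915MHz",
    "modem_preset": "LONG_FAST",
}
```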
#### `POST /api/telemetry`
Single telemetry payload:
- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Node: `node_id` (canonical string|nil), `node_num` (int|nil), `from_id`, `to_id`
- Time: `telemetry_time` (int|nil)
- Packet: `channel` (int), `portnum` (string|nil), `bitfield` (int|nil), `hop_limit` (int|nil)
- RF: `snr` (float|nil), `rssi` (int|nil)
- Raw: `payload_b64` (string; may be empty string when unknown)
- Metrics: many optional snake_case keys (`battery_level`, `voltage`, `temperature`, etc.)
- Meta: `ingestor`, `lora_freq`, `modem_preset`
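A hypothetical telemetry payload; only a few of the many optional snake_case metric keys are shown, and all values are illustrative:

```python
telemetry_payload = {
    # Required
    "id": 444000222,
    "rx_time": 1767225600,
    "rx_iso": "2026-01-01T00:00:00+00:00",
    # Node / time
    "node_id": "!abcdef01",
    "node_num": 2882400001,
    "from_id": "!abcdef01",
    "to_id": "^all",
    "telemetry_time": 1767225590,
    # Packet / RF
    "channel": 0,
    "portnum": "TELEMETRY_APP",
    "bitfield": None,
    "hop_limit": 3,
    "snr": 5.5,
    "rssi": -92,
    # Raw payload may be the empty string when unknown, per the spec above.
    "payload_b64": "",
    # Optional metrics
    "battery_level": 87,
    "voltage": 4.01,
    "temperature": 21.5,
    # Meta
    "ingestor": "!00000042",
    "lora_freq": "915MHz",
    "modem_preset": "LONG_FAST",
}
```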
#### `POST /api/neighbors`
Neighbors snapshot payload:
- Node: `node_id` (canonical string), `node_num` (int|nil)
- `neighbors`: list of entries with `neighbor_id` (canonical string), `neighbor_num` (int|nil), `snr` (float|nil), `rx_time` (int), `rx_iso` (string)
- Snapshot time: `rx_time`, `rx_iso`
- Optional: `node_broadcast_interval_secs` (int|nil), `last_sent_by_id` (canonical string|nil)
- Meta: `ingestor`, `lora_freq`, `modem_preset`
#### `POST /api/traces`
Single trace payload:
- Identity: `id` (int|nil), `request_id` (int|nil)
- Endpoints: `src` (int|nil), `dest` (int|nil)
- Path: `hops` (list[int])
- Time: `rx_time` (int), `rx_iso` (string)
- Metrics: `rssi` (int|nil), `snr` (float|nil), `elapsed_ms` (int|nil)
- Meta: `ingestor`, `lora_freq`, `modem_preset`
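A hypothetical trace payload; whether `hops` includes the endpoints is an assumption made for this example, and all values are illustrative:

```python
trace_payload = {
    # Identity
    "id": 987654321,
    "request_id": 987654320,
    # Endpoints (numeric node ids)
    "src": 2882400001,
    "dest": 305419896,
    # Path; shown here including both endpoints (assumption)
    "hops": [2882400001, 19088743, 305419896],
    # Time
    "rx_time": 1767225600,
    "rx_iso": "2026-01-01T00:00:00+00:00",
    # Metrics
    "rssi": -95,
    "snr": 4.75,
    "elapsed_ms": 1820,
    # Meta
    "ingestor": "!00000042",
    "lora_freq": "915MHz",
    "modem_preset": "LONG_FAST",
}
```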
#### `POST /api/ingestors`
Heartbeat payload:
- `node_id` (canonical string)
- `start_time` (int), `last_seen_time` (int)
- `version` (string)
- Optional: `lora_freq`, `modem_preset`
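A minimal heartbeat-sender sketch, assuming the web app accepts plain JSON over HTTP at `POST /api/ingestors` with no extra auth headers; the base URL and transport details are assumptions, so check the real ingestor code before relying on this:

```python
import json
import time
import urllib.request


def build_heartbeat(node_id: str, start_time: int, version: str) -> dict:
    """Assemble the /api/ingestors heartbeat payload described above."""
    return {
        "node_id": node_id,            # canonical !%08x id
        "start_time": start_time,      # unix seconds when the ingestor started
        "last_seen_time": int(time.time()),
        "version": version,
    }


def post_heartbeat(base_url: str, payload: dict):
    """POST the heartbeat as JSON (transport handling here is an assumption)."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/ingestors",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)  # caller handles errors/retries
```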

View File

@@ -16,6 +16,7 @@
from __future__ import annotations
import dataclasses
import inspect
import signal
import threading
@@ -24,6 +25,7 @@ import time
from pubsub import pub
from . import config, handlers, ingestors, interfaces
from .provider import Provider
_RECEIVE_TOPICS = (
"meshtastic.receive",
@@ -197,11 +199,6 @@ def _process_ingestor_heartbeat(iface, *, ingestor_announcement_sent: bool) -> b
if heartbeat_sent and not ingestor_announcement_sent:
return True
return ingestor_announcement_sent
iface_cls = getattr(iface_obj, "__class__", None)
if iface_cls is None:
return False
module_name = getattr(iface_cls, "__module__", "") or ""
return "ble_interface" in module_name
def _connected_state(candidate) -> bool | None:
@@ -243,10 +240,286 @@ def _connected_state(candidate) -> bool | None:
return None
def main(existing_interface=None) -> None:
# ---------------------------------------------------------------------------
# Loop state container
# ---------------------------------------------------------------------------
@dataclasses.dataclass
class _DaemonState:
"""All mutable state for the :func:`main` daemon loop."""
provider: Provider
stop: threading.Event
configured_port: str | None
inactivity_reconnect_secs: float
energy_saving_enabled: bool
energy_online_secs: float
energy_sleep_secs: float
retry_delay: float
last_seen_packet_monotonic: float | None
active_candidate: str | None
iface: object = None
resolved_target: str | None = None
initial_snapshot_sent: bool = False
energy_session_deadline: float | None = None
iface_connected_at: float | None = None
last_inactivity_reconnect: float | None = None
ingestor_announcement_sent: bool = False
announced_target: bool = False
# ---------------------------------------------------------------------------
# Per-iteration helpers (each returns True when the caller should `continue`)
# ---------------------------------------------------------------------------
def _advance_retry_delay(current: float) -> float:
"""Return the next exponential-backoff retry delay."""
if config._RECONNECT_MAX_DELAY_SECS <= 0:
return current
next_delay = current * 2 if current else config._RECONNECT_INITIAL_DELAY_SECS
return min(next_delay, config._RECONNECT_MAX_DELAY_SECS)
def _energy_sleep(state: _DaemonState, reason: str) -> None:
"""Sleep for the configured energy-saving interval."""
if not state.energy_saving_enabled or state.energy_sleep_secs <= 0:
return
if config.DEBUG:
config._debug_log(
f"energy saving: {reason}; sleeping for {state.energy_sleep_secs:g}s"
)
state.stop.wait(state.energy_sleep_secs)
def _try_connect(state: _DaemonState) -> bool:
"""Attempt to establish the mesh interface.
Returns:
``True`` when connected and the loop should proceed; ``False`` when
the connection failed and the caller should ``continue``.
"""
try:
state.iface, state.resolved_target, state.active_candidate = (
state.provider.connect(active_candidate=state.active_candidate)
)
handlers.register_host_node_id(state.provider.extract_host_node_id(state.iface))
ingestors.set_ingestor_node_id(handlers.host_node_id())
state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
state.initial_snapshot_sent = False
if not state.announced_target and state.resolved_target:
config._debug_log(
"Using mesh interface",
context="daemon.interface",
severity="info",
target=state.resolved_target,
)
state.announced_target = True
if state.energy_saving_enabled and state.energy_online_secs > 0:
state.energy_session_deadline = time.monotonic() + state.energy_online_secs
else:
state.energy_session_deadline = None
state.iface_connected_at = time.monotonic()
# Seed the inactivity tracking from the connection time so a
# reconnect is given a full inactivity window even when the
# handler still reports the previous packet timestamp.
state.last_seen_packet_monotonic = state.iface_connected_at
state.last_inactivity_reconnect = None
return True
except interfaces.NoAvailableMeshInterface as exc:
config._debug_log(
"No mesh interface available",
context="daemon.interface",
severity="error",
error_message=str(exc),
)
_close_interface(state.iface)
raise SystemExit(1) from exc
except Exception as exc:
config._debug_log(
"Failed to create mesh interface",
context="daemon.interface",
severity="warn",
candidate=state.active_candidate or "auto",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if state.configured_port is None:
state.active_candidate = None
state.announced_target = False
state.stop.wait(state.retry_delay)
state.retry_delay = _advance_retry_delay(state.retry_delay)
return False
def _check_energy_saving(state: _DaemonState) -> bool:
"""Disconnect and sleep when energy-saving conditions are met.
Returns:
``True`` when the interface was closed and the caller should
``continue``; ``False`` otherwise.
"""
if not state.energy_saving_enabled or state.iface is None:
return False
session_expired = (
state.energy_session_deadline is not None
and time.monotonic() >= state.energy_session_deadline
)
ble_dropped = (
_is_ble_interface(state.iface)
and getattr(state.iface, "client", object()) is None
)
if not session_expired and not ble_dropped:
return False
reason = "disconnected after session" if session_expired else "BLE client disconnected"
log_msg = "Energy saving disconnect" if session_expired else "Energy saving BLE disconnect"
config._debug_log(log_msg, context="daemon.energy", severity="info")
_close_interface(state.iface)
state.iface = None
state.announced_target = False
state.initial_snapshot_sent = False
state.energy_session_deadline = None
_energy_sleep(state, reason)
return True
def _try_send_snapshot(state: _DaemonState) -> bool:
"""Send the initial node snapshot via the provider.
Returns:
``True`` when the snapshot succeeded (or no nodes exist yet); ``False``
when a hard error occurred and the caller should ``continue``.
"""
try:
node_items = state.provider.node_snapshot_items(state.iface)
processed_any = False
for node_id, node in node_items:
processed_any = True
try:
handlers.upsert_node(node_id, node)
except Exception as exc:
config._debug_log(
"Failed to update node snapshot",
context="daemon.snapshot",
severity="warn",
node_id=node_id,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if config.DEBUG:
config._debug_log(
"Snapshot node payload",
context="daemon.snapshot",
node=node,
)
if processed_any:
state.initial_snapshot_sent = True
return True
except Exception as exc:
config._debug_log(
"Snapshot refresh failed",
context="daemon.snapshot",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
_close_interface(state.iface)
state.iface = None
state.stop.wait(state.retry_delay)
state.retry_delay = _advance_retry_delay(state.retry_delay)
return False
def _check_inactivity_reconnect(state: _DaemonState) -> bool:
"""Reconnect when the interface has been silent for too long.
Returns:
``True`` when a reconnect was triggered and the caller should
``continue``; ``False`` otherwise.
"""
if state.iface is None or state.inactivity_reconnect_secs <= 0:
return False
now = time.monotonic()
iface_activity = handlers.last_packet_monotonic()
if (
iface_activity is not None
and state.iface_connected_at is not None
and iface_activity < state.iface_connected_at
):
iface_activity = state.iface_connected_at
if iface_activity is not None and (
state.last_seen_packet_monotonic is None
or iface_activity > state.last_seen_packet_monotonic
):
state.last_seen_packet_monotonic = iface_activity
state.last_inactivity_reconnect = None
latest_activity = iface_activity
if latest_activity is None and state.iface_connected_at is not None:
latest_activity = state.iface_connected_at
if latest_activity is None:
latest_activity = now
inactivity_elapsed = now - latest_activity
believed_disconnected = _connected_state(getattr(state.iface, "isConnected", None)) is False
if not believed_disconnected and inactivity_elapsed < state.inactivity_reconnect_secs:
return False
if (
state.last_inactivity_reconnect is not None
and now - state.last_inactivity_reconnect < state.inactivity_reconnect_secs
):
return False
reason = (
"disconnected"
if believed_disconnected
else f"no data for {inactivity_elapsed:.0f}s"
)
config._debug_log(
"Mesh interface inactivity detected",
context="daemon.interface",
severity="warn",
reason=reason,
)
state.last_inactivity_reconnect = now
_close_interface(state.iface)
state.iface = None
state.announced_target = False
state.initial_snapshot_sent = False
state.energy_session_deadline = None
state.iface_connected_at = None
return True
# ---------------------------------------------------------------------------
# Entry point
# ---------------------------------------------------------------------------
def main(*, provider: Provider | None = None) -> None:
"""Run the mesh ingestion daemon until interrupted."""
subscribed = _subscribe_receive_topics()
if provider is None:
from .providers.meshtastic import MeshtasticProvider
provider = MeshtasticProvider()
subscribed = provider.subscribe()
if subscribed:
config._debug_log(
"Subscribed to receive topics",
@@ -255,313 +528,83 @@ def main(existing_interface=None) -> None:
topics=subscribed,
)
iface = existing_interface
resolved_target = None
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
stop = threading.Event()
initial_snapshot_sent = False
energy_session_deadline = None
iface_connected_at: float | None = None
last_seen_packet_monotonic = handlers.last_packet_monotonic()
last_inactivity_reconnect: float | None = None
inactivity_reconnect_secs = max(
0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
state = _DaemonState(
provider=provider,
stop=threading.Event(),
configured_port=config.CONNECTION,
inactivity_reconnect_secs=max(
0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
),
energy_saving_enabled=config.ENERGY_SAVING,
energy_online_secs=max(0.0, config._ENERGY_ONLINE_DURATION_SECS),
energy_sleep_secs=max(0.0, config._ENERGY_SLEEP_SECS),
retry_delay=max(0.0, config._RECONNECT_INITIAL_DELAY_SECS),
last_seen_packet_monotonic=handlers.last_packet_monotonic(),
active_candidate=config.CONNECTION,
)
ingestor_announcement_sent = False
energy_saving_enabled = config.ENERGY_SAVING
energy_online_secs = max(0.0, config._ENERGY_ONLINE_DURATION_SECS)
energy_sleep_secs = max(0.0, config._ENERGY_SLEEP_SECS)
def _energy_sleep(reason: str) -> None:
if not energy_saving_enabled or energy_sleep_secs <= 0:
return
if config.DEBUG:
config._debug_log(
f"energy saving: {reason}; sleeping for {energy_sleep_secs:g}s"
)
stop.wait(energy_sleep_secs)
def handle_sigterm(*_args) -> None:
stop.set()
state.stop.set()
def handle_sigint(signum, frame) -> None:
if stop.is_set():
if state.stop.is_set():
signal.default_int_handler(signum, frame)
return
stop.set()
state.stop.set()
if threading.current_thread() == threading.main_thread():
signal.signal(signal.SIGINT, handle_sigint)
signal.signal(signal.SIGTERM, handle_sigterm)
target = config.INSTANCE or "(no INSTANCE_DOMAIN configured)"
configured_port = config.CONNECTION
active_candidate = configured_port
announced_target = False
config._debug_log(
"Mesh daemon starting",
context="daemon.main",
severity="info",
target=target,
port=configured_port or "auto",
target=config.INSTANCE or "(no INSTANCE_DOMAIN configured)",
port=config.CONNECTION or "auto",
channel=config.CHANNEL_INDEX,
)
try:
while not stop.is_set():
if iface is None:
try:
if active_candidate:
iface, resolved_target = interfaces._create_serial_interface(
active_candidate
)
else:
iface, resolved_target = interfaces._create_default_interface()
active_candidate = resolved_target
interfaces._ensure_radio_metadata(iface)
interfaces._ensure_channel_metadata(iface)
handlers.register_host_node_id(
interfaces._extract_host_node_id(iface)
)
ingestors.set_ingestor_node_id(handlers.host_node_id())
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
initial_snapshot_sent = False
if not announced_target and resolved_target:
config._debug_log(
"Using mesh interface",
context="daemon.interface",
severity="info",
target=resolved_target,
)
announced_target = True
if energy_saving_enabled and energy_online_secs > 0:
energy_session_deadline = time.monotonic() + energy_online_secs
else:
energy_session_deadline = None
iface_connected_at = time.monotonic()
# Seed the inactivity tracking from the connection time so a
# reconnect is given a full inactivity window even when the
# handler still reports the previous packet timestamp.
last_seen_packet_monotonic = iface_connected_at
last_inactivity_reconnect = None
except interfaces.NoAvailableMeshInterface as exc:
config._debug_log(
"No mesh interface available",
context="daemon.interface",
severity="error",
error_message=str(exc),
)
_close_interface(iface)
raise SystemExit(1) from exc
except Exception as exc:
candidate_desc = active_candidate or "auto"
config._debug_log(
"Failed to create mesh interface",
context="daemon.interface",
severity="warn",
candidate=candidate_desc,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if configured_port is None:
active_candidate = None
announced_target = False
stop.wait(retry_delay)
if config._RECONNECT_MAX_DELAY_SECS > 0:
retry_delay = min(
(
retry_delay * 2
if retry_delay
else config._RECONNECT_INITIAL_DELAY_SECS
),
config._RECONNECT_MAX_DELAY_SECS,
)
continue
if energy_saving_enabled and iface is not None:
if (
energy_session_deadline is not None
and time.monotonic() >= energy_session_deadline
):
config._debug_log(
"Energy saving disconnect",
context="daemon.energy",
severity="info",
)
_close_interface(iface)
iface = None
announced_target = False
initial_snapshot_sent = False
energy_session_deadline = None
_energy_sleep("disconnected after session")
continue
if (
_is_ble_interface(iface)
and getattr(iface, "client", object()) is None
):
config._debug_log(
"Energy saving BLE disconnect",
context="daemon.energy",
severity="info",
)
_close_interface(iface)
iface = None
announced_target = False
initial_snapshot_sent = False
energy_session_deadline = None
_energy_sleep("BLE client disconnected")
continue
if not initial_snapshot_sent:
try:
nodes = getattr(iface, "nodes", {}) or {}
node_items = _node_items_snapshot(nodes)
if node_items is None:
config._debug_log(
"Skipping node snapshot due to concurrent modification",
context="daemon.snapshot",
)
else:
processed_snapshot_item = False
for node_id, node in node_items:
processed_snapshot_item = True
try:
handlers.upsert_node(node_id, node)
except Exception as exc:
config._debug_log(
"Failed to update node snapshot",
context="daemon.snapshot",
severity="warn",
node_id=node_id,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if config.DEBUG:
config._debug_log(
"Snapshot node payload",
context="daemon.snapshot",
node=node,
)
if processed_snapshot_item:
initial_snapshot_sent = True
except Exception as exc:
config._debug_log(
"Snapshot refresh failed",
context="daemon.snapshot",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
_close_interface(iface)
iface = None
stop.wait(retry_delay)
if config._RECONNECT_MAX_DELAY_SECS > 0:
retry_delay = min(
(
retry_delay * 2
if retry_delay
else config._RECONNECT_INITIAL_DELAY_SECS
),
config._RECONNECT_MAX_DELAY_SECS,
)
continue
if iface is not None and inactivity_reconnect_secs > 0:
now_monotonic = time.monotonic()
iface_activity = handlers.last_packet_monotonic()
if (
iface_activity is not None
and iface_connected_at is not None
and iface_activity < iface_connected_at
):
iface_activity = iface_connected_at
if iface_activity is not None and (
last_seen_packet_monotonic is None
or iface_activity > last_seen_packet_monotonic
):
last_seen_packet_monotonic = iface_activity
last_inactivity_reconnect = None
latest_activity = iface_activity
if latest_activity is None and iface_connected_at is not None:
latest_activity = iface_connected_at
if latest_activity is None:
latest_activity = now_monotonic
inactivity_elapsed = now_monotonic - latest_activity
connected_attr = getattr(iface, "isConnected", None)
believed_disconnected = False
connected_state = _connected_state(connected_attr)
if connected_state is None:
if callable(connected_attr):
try:
believed_disconnected = not bool(connected_attr())
except Exception:
believed_disconnected = False
elif connected_attr is not None:
try:
believed_disconnected = not bool(connected_attr)
except Exception: # pragma: no cover - defensive guard
believed_disconnected = False
else:
believed_disconnected = not connected_state
should_reconnect = believed_disconnected or (
inactivity_elapsed >= inactivity_reconnect_secs
)
if should_reconnect:
if (
last_inactivity_reconnect is None
or now_monotonic - last_inactivity_reconnect
>= inactivity_reconnect_secs
):
reason = (
"disconnected"
if believed_disconnected
else f"no data for {inactivity_elapsed:.0f}s"
)
config._debug_log(
"Mesh interface inactivity detected",
context="daemon.interface",
severity="warn",
reason=reason,
)
last_inactivity_reconnect = now_monotonic
_close_interface(iface)
iface = None
announced_target = False
initial_snapshot_sent = False
energy_session_deadline = None
iface_connected_at = None
continue
ingestor_announcement_sent = _process_ingestor_heartbeat(
iface, ingestor_announcement_sent=ingestor_announcement_sent
while not state.stop.is_set():
if state.iface is None and not _try_connect(state):
continue
if _check_energy_saving(state):
continue
if not state.initial_snapshot_sent and not _try_send_snapshot(state):
continue
if _check_inactivity_reconnect(state):
continue
state.ingestor_announcement_sent = _process_ingestor_heartbeat(
state.iface, ingestor_announcement_sent=state.ingestor_announcement_sent
)
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
stop.wait(config.SNAPSHOT_SECS)
state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
state.stop.wait(config.SNAPSHOT_SECS)
except KeyboardInterrupt: # pragma: no cover - interactive only
config._debug_log(
"Received KeyboardInterrupt; shutting down",
context="daemon.main",
severity="info",
)
stop.set()
state.stop.set()
finally:
_close_interface(iface)
_close_interface(state.iface)
__all__ = [
"_RECEIVE_TOPICS",
"_event_wait_allows_default_timeout",
"_node_items_snapshot",
"_subscribe_receive_topics",
"_is_ble_interface",
"_process_ingestor_heartbeat",
"_DaemonState",
"_advance_retry_delay",
"_check_energy_saving",
"_check_inactivity_reconnect",
"_connected_state",
"_energy_sleep",
"_event_wait_allows_default_timeout",
"_is_ble_interface",
"_node_items_snapshot",
"_process_ingestor_heartbeat",
"_subscribe_receive_topics",
"_try_connect",
"_try_send_snapshot",
"main",
]



@@ -0,0 +1,85 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decode Meshtastic protobuf payloads from stdin JSON."""
from __future__ import annotations
import base64
import json
import os
import sys
from typing import Any, Dict, Tuple
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
if SCRIPT_DIR in sys.path:
sys.path.remove(SCRIPT_DIR)
from google.protobuf.json_format import MessageToDict
from meshtastic.protobuf import mesh_pb2, telemetry_pb2
PORTNUM_MAP: Dict[int, Tuple[str, Any]] = {
3: ("POSITION_APP", mesh_pb2.Position),
4: ("NODEINFO_APP", mesh_pb2.NodeInfo),
5: ("ROUTING_APP", mesh_pb2.Routing),
67: ("TELEMETRY_APP", telemetry_pb2.Telemetry),
70: ("TRACEROUTE_APP", mesh_pb2.RouteDiscovery),
71: ("NEIGHBORINFO_APP", mesh_pb2.NeighborInfo),
}
def _decode_payload(portnum: int, payload_b64: str) -> dict[str, Any]:
if portnum not in PORTNUM_MAP:
return {"error": "unsupported-port", "portnum": portnum}
try:
payload_bytes = base64.b64decode(payload_b64, validate=True)
except Exception as exc:
return {"error": f"invalid-payload: {exc}"}
name, message_cls = PORTNUM_MAP[portnum]
msg = message_cls()
try:
msg.ParseFromString(payload_bytes)
except Exception as exc:
return {"error": f"decode-failed: {exc}", "portnum": portnum, "type": name}
decoded = MessageToDict(msg, preserving_proto_field_name=True)
return {"portnum": portnum, "type": name, "payload": decoded}
def main() -> int:
raw = sys.stdin.read()
try:
request = json.loads(raw)
except json.JSONDecodeError as exc:
sys.stdout.write(json.dumps({"error": f"invalid-json: {exc}"}))
return 1
portnum = request.get("portnum")
payload_b64 = request.get("payload_b64")
if not isinstance(portnum, int):
sys.stdout.write(json.dumps({"error": "missing-portnum"}))
return 1
if not isinstance(payload_b64, str):
sys.stdout.write(json.dumps({"error": "missing-payload"}))
return 1
result = _decode_payload(portnum, payload_b64)
sys.stdout.write(json.dumps(result))
return 0
if __name__ == "__main__":
raise SystemExit(main())
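The decoder is a one-shot filter: it reads a single JSON object from stdin and writes a single JSON object to stdout. A sketch of the request a caller would pipe in (the payload bytes here are arbitrary illustrative data, not a real `Position` message, and the invocation path is an assumption):

```python
import base64
import json

# Build the request the script expects on stdin; portnum 3 = POSITION_APP.
payload_bytes = b"\x08\x01"  # illustrative bytes only
request = {
    "portnum": 3,
    "payload_b64": base64.b64encode(payload_bytes).decode("ascii"),
}
line = json.dumps(request)
# e.g.: subprocess.run([sys.executable, "decode.py"], input=line, text=True, ...)
echoed = json.loads(line)
print(echoed["portnum"])
```

On success the script replies with `{"portnum": ..., "type": ..., "payload": {...}}`; on failure it replies with an `{"error": ...}` object and exits non-zero.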


@@ -0,0 +1,182 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Protocol-agnostic event payload types for ingestion.
The ingestor ultimately POSTs JSON to the web app's ingest routes. These types
capture the *shape* of those payloads so multiple providers can emit the same
events, regardless of how they source or decode packets.
These are intentionally defined as ``TypedDict`` so existing code can continue
to build plain dictionaries without a runtime dependency on dataclasses.
"""
from __future__ import annotations
from typing import NotRequired, TypedDict
class _MessageEventRequired(TypedDict):
id: int
rx_time: int
rx_iso: str
class MessageEvent(_MessageEventRequired, total=False):
from_id: object
to_id: object
channel: int
portnum: str | None
text: str | None
encrypted: str | None
snr: float | None
rssi: int | None
hop_limit: int | None
reply_id: int | None
emoji: str | None
channel_name: str
ingestor: str | None
lora_freq: int
modem_preset: str
class _PositionEventRequired(TypedDict):
id: int
rx_time: int
rx_iso: str
class PositionEvent(_PositionEventRequired, total=False):
node_id: str
node_num: int | None
num: int | None
from_id: str | None
to_id: object
latitude: float | None
longitude: float | None
altitude: float | None
position_time: int | None
location_source: str | None
precision_bits: int | None
sats_in_view: int | None
pdop: float | None
ground_speed: float | None
ground_track: float | None
snr: float | None
rssi: int | None
hop_limit: int | None
bitfield: int | None
payload_b64: str | None
raw: dict
ingestor: str | None
lora_freq: int
modem_preset: str
class _TelemetryEventRequired(TypedDict):
id: int
rx_time: int
rx_iso: str
class TelemetryEvent(_TelemetryEventRequired, total=False):
node_id: str | None
node_num: int | None
from_id: object
to_id: object
telemetry_time: int | None
channel: int
portnum: str | None
hop_limit: int | None
snr: float | None
rssi: int | None
bitfield: int | None
payload_b64: str
ingestor: str | None
lora_freq: int
modem_preset: str
# Metric keys are intentionally open-ended; the Ruby side is permissive and
# evolves over time.
class _NeighborEntryRequired(TypedDict):
rx_time: int
rx_iso: str
class NeighborEntry(_NeighborEntryRequired, total=False):
neighbor_id: str
neighbor_num: int | None
snr: float | None
class _NeighborsSnapshotRequired(TypedDict):
node_id: str
rx_time: int
rx_iso: str
class NeighborsSnapshot(_NeighborsSnapshotRequired, total=False):
node_num: int | None
neighbors: list[NeighborEntry]
node_broadcast_interval_secs: int | None
last_sent_by_id: str | None
ingestor: str | None
lora_freq: int
modem_preset: str
class _TraceEventRequired(TypedDict):
hops: list[int]
rx_time: int
rx_iso: str
class TraceEvent(_TraceEventRequired, total=False):
id: int | None
request_id: int | None
src: int | None
dest: int | None
rssi: int | None
snr: float | None
elapsed_ms: int | None
ingestor: str | None
lora_freq: int
modem_preset: str
class IngestorHeartbeat(TypedDict):
node_id: str
start_time: int
last_seen_time: int
version: str
lora_freq: NotRequired[int]
modem_preset: NotRequired[str]
NodeUpsert = dict[str, dict]
__all__ = [
"IngestorHeartbeat",
"MessageEvent",
"NeighborEntry",
"NeighborsSnapshot",
"NodeUpsert",
"PositionEvent",
"TelemetryEvent",
"TraceEvent",
]
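At runtime these `TypedDict` types are ordinary dictionaries, so a provider can assemble an event without importing them. A minimal sketch of a `MessageEvent`-shaped payload (field values are illustrative):

```python
import json

# Required keys per _MessageEventRequired: id, rx_time, rx_iso.
# Everything else is optional (total=False).
event = {
    "id": 305419896,
    "rx_time": 1767225600,
    "rx_iso": "2026-01-01T00:00:00Z",
    "text": "hello mesh",  # illustrative optional fields
    "channel": 0,
}
payload = json.dumps(event, sort_keys=True)
print(payload)
```

Static type checkers still enforce the required/optional split when the dict is annotated as `MessageEvent`, which is the point of using `TypedDict` over dataclasses here.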


@@ -424,6 +424,7 @@ def store_position_packet(packet: Mapping, decoded: Mapping) -> None:
"hop_limit": hop_limit,
"bitfield": bitfield,
"payload_b64": payload_b64,
"ingestor": host_node_id(),
}
if raw_payload:
position_payload["raw"] = raw_payload
@@ -568,6 +569,7 @@ def store_traceroute_packet(packet: Mapping, decoded: Mapping) -> None:
"rssi": rssi,
"snr": snr,
"elapsed_ms": elapsed_ms,
"ingestor": host_node_id(),
}
_queue_post_json(
@@ -935,6 +937,7 @@ def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
"rssi": rssi,
"hop_limit": hop_limit,
"payload_b64": payload_b64,
"ingestor": host_node_id(),
}
if battery_level is not None:
@@ -1263,6 +1266,7 @@ def store_neighborinfo_packet(packet: Mapping, decoded: Mapping) -> None:
"neighbors": neighbor_entries,
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"ingestor": host_node_id(),
}
if node_broadcast_interval is not None:
@@ -1520,6 +1524,7 @@ def store_packet_dict(packet: Mapping) -> None:
"hop_limit": int(hop) if hop is not None else None,
"reply_id": reply_id,
"emoji": emoji,
"ingestor": host_node_id(),
}
if not encrypted_flag and channel_name_value:


@@ -0,0 +1,117 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Node identity helpers shared across ingestor providers.
The web application keys nodes by a canonical textual identifier of the form
``!%08x`` (lowercase hex). Both the Python collector and Ruby server accept
several input forms (ints, ``0x`` hex strings, ``!`` hex strings, decimal
strings). This module centralizes that normalization.
"""
from __future__ import annotations
from typing import Final
CANONICAL_PREFIX: Final[str] = "!"
def canonical_node_id(value: object) -> str | None:
"""Convert ``value`` into canonical ``!xxxxxxxx`` form.
Parameters:
value: Node reference which may be an int, float, or string.
Returns:
Canonical node id string or ``None`` when parsing fails.
"""
if value is None:
return None
if isinstance(value, (int, float)):
try:
num = int(value)
except (TypeError, ValueError):
return None
if num < 0:
return None
return f"{CANONICAL_PREFIX}{num & 0xFFFFFFFF:08x}"
if not isinstance(value, str):
return None
trimmed = value.strip()
if not trimmed:
return None
if trimmed.startswith("^"):
# Meshtastic special destinations like "^all" are not node ids; callers
# that already accept them should keep passing them through unchanged.
return trimmed
if trimmed.startswith(CANONICAL_PREFIX):
body = trimmed[1:]
elif trimmed.lower().startswith("0x"):
body = trimmed[2:]
elif trimmed.isdigit():
try:
return f"{CANONICAL_PREFIX}{int(trimmed, 10) & 0xFFFFFFFF:08x}"
except ValueError:
return None
else:
body = trimmed
if not body:
return None
try:
return f"{CANONICAL_PREFIX}{int(body, 16) & 0xFFFFFFFF:08x}"
except ValueError:
return None
def node_num_from_id(node_id: object) -> int | None:
"""Extract the numeric node identifier from a canonical (or near-canonical) id."""
if node_id is None:
return None
if isinstance(node_id, (int, float)):
try:
num = int(node_id)
except (TypeError, ValueError):
return None
return num if num >= 0 else None
if not isinstance(node_id, str):
return None
trimmed = node_id.strip()
if not trimmed:
return None
if trimmed.startswith(CANONICAL_PREFIX):
trimmed = trimmed[1:]
if trimmed.lower().startswith("0x"):
trimmed = trimmed[2:]
try:
return int(trimmed, 16)
except ValueError:
try:
return int(trimmed, 10)
except ValueError:
return None
__all__ = [
"CANONICAL_PREFIX",
"canonical_node_id",
"node_num_from_id",
]


@@ -0,0 +1,65 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Provider interface for ingestion sources.
Today the repo ships only a Meshtastic provider. This module defines the seam so
future providers (MeshCore, Reticulum, ...) can be added without changing the
web app's ingest contract.
"""
from __future__ import annotations
import enum
from collections.abc import Iterable
from typing import Protocol
class ProviderCapability(enum.Flag):
"""Feature flags describing what a provider can supply."""
NONE = 0
NODE_SNAPSHOT = enum.auto()
HEARTBEATS = enum.auto()
class Provider(Protocol):
"""Abstract source of mesh observations."""
name: str
def subscribe(self) -> list[str]:
"""Subscribe to any async receive callbacks and return topic names."""
def connect(
self, *, active_candidate: str | None
) -> tuple[object, str | None, str | None]:
"""Create an interface connection.
Returns:
(iface, resolved_target, next_active_candidate)
"""
def extract_host_node_id(self, iface: object) -> str | None:
"""Best-effort extraction of the connected host node id."""
def node_snapshot_items(self, iface: object) -> Iterable[tuple[str, object]]:
"""Return iterable of (node_id, node_obj) for initial snapshot."""
__all__ = [
"Provider",
"ProviderCapability",
]


@@ -0,0 +1,26 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Provider implementations.
This package contains protocol-specific provider implementations (Meshtastic
today, others in the future).
"""
from __future__ import annotations
from .meshtastic import MeshtasticProvider
__all__ = ["MeshtasticProvider"]


@@ -0,0 +1,83 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Meshtastic provider implementation."""
from __future__ import annotations
import time
from collections.abc import Iterable
from .. import config, daemon as _daemon, interfaces
class MeshtasticProvider:
"""Meshtastic ingestion provider (current default)."""
name = "meshtastic"
def __init__(self):
self._subscribed: list[str] = []
def subscribe(self) -> list[str]:
"""Subscribe Meshtastic pubsub receive topics."""
if self._subscribed:
return list(self._subscribed)
topics = _daemon._subscribe_receive_topics()
self._subscribed = topics
return list(topics)
def connect(
self, *, active_candidate: str | None
) -> tuple[object, str | None, str | None]:
"""Create a Meshtastic interface using the existing interface helpers."""
iface = None
resolved_target = None
next_candidate = active_candidate
if active_candidate:
iface, resolved_target = interfaces._create_serial_interface(active_candidate)
else:
iface, resolved_target = interfaces._create_default_interface()
next_candidate = resolved_target
interfaces._ensure_radio_metadata(iface)
interfaces._ensure_channel_metadata(iface)
return iface, resolved_target, next_candidate
def extract_host_node_id(self, iface: object) -> str | None:
return interfaces._extract_host_node_id(iface)
def node_snapshot_items(self, iface: object) -> list[tuple[str, object]]:
nodes = getattr(iface, "nodes", {}) or {}
for _ in range(3):
try:
return list(nodes.items())
except RuntimeError as err:
if "dictionary changed size during iteration" not in str(err):
raise
time.sleep(0)
config._debug_log(
"Skipping node snapshot due to concurrent modification",
context="meshtastic.snapshot",
)
return []
__all__ = ["MeshtasticProvider"]
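The retry loop in `node_snapshot_items` guards against the `nodes` dict mutating while the receive thread updates it mid-iteration. The pattern in isolation (a sketch, not the provider itself):

```python
def snapshot_items(nodes: dict, attempts: int = 3) -> list[tuple]:
    """Snapshot dict items, retrying when the dict mutates under us."""
    for _ in range(attempts):
        try:
            return list(nodes.items())
        except RuntimeError as err:
            # Only swallow the concurrent-modification error; re-raise others.
            if "dictionary changed size during iteration" not in str(err):
                raise
    return []  # give up after a few attempts; caller skips this snapshot

print(snapshot_items({"!0000002a": {"snr": 7.5}}))
```

`time.sleep(0)` between attempts in the real provider yields the GIL so the writer thread can finish its update before the next try.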


@@ -33,6 +33,9 @@ from google.protobuf.json_format import MessageToDict
from google.protobuf.message import DecodeError
from google.protobuf.message import Message as ProtoMessage
from .node_identity import canonical_node_id as _canonical_node_id
from .node_identity import node_num_from_id as _node_num_from_id
_CLI_ROLE_MODULE_NAMES: tuple[str, ...] = (
"meshtastic.cli.common",
"meshtastic.cli.roles",
@@ -429,91 +432,6 @@ def _pkt_to_dict(packet) -> dict:
return {"_unparsed": str(packet)}
def _canonical_node_id(value) -> str | None:
    """Convert node identifiers into the canonical ``!xxxxxxxx`` format.

    Parameters:
        value: Input identifier which may be an int, float or string.

    Returns:
        The canonical identifier or ``None`` if conversion fails.
    """
    if value is None:
        return None
    if isinstance(value, (int, float)):
        try:
            num = int(value)
        except (TypeError, ValueError):
            return None
        if num < 0:
            return None
        return f"!{num & 0xFFFFFFFF:08x}"
    if not isinstance(value, str):
        return None
    trimmed = value.strip()
    if not trimmed:
        return None
    if trimmed.startswith("^"):
        return trimmed
    if trimmed.startswith("!"):
        body = trimmed[1:]
    elif trimmed.lower().startswith("0x"):
        body = trimmed[2:]
    elif trimmed.isdigit():
        try:
            return f"!{int(trimmed, 10) & 0xFFFFFFFF:08x}"
        except ValueError:
            return None
    else:
        body = trimmed
    if not body:
        return None
    try:
        return f"!{int(body, 16) & 0xFFFFFFFF:08x}"
    except ValueError:
        return None


def _node_num_from_id(node_id) -> int | None:
    """Extract the numeric node ID from a canonical identifier.

    Parameters:
        node_id: Identifier value accepted by :func:`_canonical_node_id`.

    Returns:
        The numeric node ID or ``None`` when parsing fails.
    """
    if node_id is None:
        return None
    if isinstance(node_id, (int, float)):
        try:
            num = int(node_id)
        except (TypeError, ValueError):
            return None
        return num if num >= 0 else None
    if not isinstance(node_id, str):
        return None
    trimmed = node_id.strip()
    if not trimmed:
        return None
    if trimmed.startswith("!"):
        trimmed = trimmed[1:]
    if trimmed.lower().startswith("0x"):
        trimmed = trimmed[2:]
    try:
        return int(trimmed, 16)
    except ValueError:
        try:
            return int(trimmed, 10)
        except ValueError:
            return None
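These helpers now live in `node_identity` and are imported under their old names. A condensed sketch of the round trip they guarantee (the `^` channel-id and plain-decimal branches above are elided here for brevity):

```python
def canonical_node_id(value):
    """Normalize ints and hex strings to the "!xxxxxxxx" form, as above."""
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        num = int(value)
        return f"!{num & 0xFFFFFFFF:08x}" if num >= 0 else None
    if not isinstance(value, str):
        return None
    body = value.strip().lstrip("!")
    if body.lower().startswith("0x"):
        body = body[2:]
    try:
        return f"!{int(body, 16) & 0xFFFFFFFF:08x}"
    except ValueError:
        return None

def node_num_from_id(node_id):
    """Inverse direction: canonical id back to the numeric node number."""
    canonical = canonical_node_id(node_id)
    return int(canonical[1:], 16) if canonical else None

assert canonical_node_id(0x67FC83CB) == "!67fc83cb"
assert canonical_node_id("0x67FC83CB") == "!67fc83cb"
assert node_num_from_id("!67fc83cb") == 0x67FC83CB
```

The masking with `0xFFFFFFFF` keeps identifiers within the 32-bit node-number space, so any int spelling of the same node collapses to one canonical string.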
def _merge_mappings(base, extra):
"""Merge two mapping-like objects recursively.


@@ -29,7 +29,8 @@ CREATE TABLE IF NOT EXISTS messages (
modem_preset TEXT,
channel_name TEXT,
reply_id INTEGER,
emoji TEXT
emoji TEXT,
ingestor TEXT
);
CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time);
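Each of these schema changes adds the same nullable `ingestor TEXT` column so a row records which ingest path wrote it. A minimal sqlite3 sketch, with the table trimmed to a few illustrative columns rather than the full schema above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS messages (
        id INTEGER PRIMARY KEY,
        rx_time INTEGER,
        text TEXT,
        ingestor TEXT  -- new: which ingest daemon stored this row
    )"""
)
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time)"
)
conn.execute(
    "INSERT INTO messages (rx_time, text, ingestor) VALUES (?, ?, ?)",
    (1764322800, "hello mesh", "meshtastic-serial"),
)
print(conn.execute("SELECT text, ingestor FROM messages").fetchone())
# ('hello mesh', 'meshtastic-serial')
```

Because the column is nullable, rows written before the migration simply carry `NULL` for `ingestor`.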


@@ -17,6 +17,7 @@ CREATE TABLE IF NOT EXISTS neighbors (
neighbor_id TEXT NOT NULL,
snr REAL,
rx_time INTEGER NOT NULL,
ingestor TEXT,
PRIMARY KEY (node_id, neighbor_id),
FOREIGN KEY (node_id) REFERENCES nodes(node_id) ON DELETE CASCADE,
FOREIGN KEY (neighbor_id) REFERENCES nodes(node_id) ON DELETE CASCADE


@@ -33,7 +33,8 @@ CREATE TABLE IF NOT EXISTS positions (
rssi INTEGER,
hop_limit INTEGER,
bitfield INTEGER,
payload_b64 TEXT
payload_b64 TEXT,
ingestor TEXT
);
CREATE INDEX IF NOT EXISTS idx_positions_rx_time ON positions(rx_time);


@@ -53,7 +53,8 @@ CREATE TABLE IF NOT EXISTS telemetry (
rainfall_1h REAL,
rainfall_24h REAL,
soil_moisture INTEGER,
soil_temperature REAL
soil_temperature REAL,
ingestor TEXT
);
CREATE INDEX IF NOT EXISTS idx_telemetry_rx_time ON telemetry(rx_time);


@@ -21,7 +21,8 @@ CREATE TABLE IF NOT EXISTS traces (
rx_iso TEXT NOT NULL,
rssi INTEGER,
snr REAL,
elapsed_ms INTEGER
elapsed_ms INTEGER,
ingestor TEXT
);
CREATE TABLE IF NOT EXISTS trace_hops (


@@ -81,7 +81,12 @@ x-matrix-bridge-base: &matrix-bridge-base
image: ghcr.io/l5yth/potato-mesh-matrix-bridge-${POTATOMESH_IMAGE_ARCH:-linux-amd64}:${POTATOMESH_IMAGE_TAG:-latest}
volumes:
- potatomesh_matrix_bridge_state:/app
- ./matrix/Config.toml:/app/Config.toml:ro
- type: bind
source: ./matrix/Config.toml
target: /app/Config.toml
read_only: true
bind:
create_host_path: false
restart: unless-stopped
deploy:
resources:
@@ -128,6 +133,8 @@ services:
matrix-bridge:
<<: *matrix-bridge-base
network_mode: host
profiles:
- matrix
depends_on:
- web
extra_hosts:
@@ -140,6 +147,8 @@ services:
- potatomesh-network
depends_on:
- web-bridge
ports:
- "41448:41448"
profiles:
- bridge

matrix/Cargo.lock (generated, 499 lines changed): diff suppressed because it is too large.


@@ -14,7 +14,7 @@
[package]
name = "potatomesh-matrix-bridge"
version = "0.5.9"
version = "0.5.11"
edition = "2021"
[dependencies]
@@ -27,8 +27,11 @@ anyhow = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["fmt", "env-filter"] }
urlencoding = "2"
axum = { version = "0.7", features = ["json"] }
clap = { version = "4", features = ["derive"] }
[dev-dependencies]
tempfile = "3"
mockito = "1"
serial_test = "3"
tower = "0.5"


@@ -9,6 +9,8 @@ poll_interval_secs = 60
homeserver = "https://matrix.dod.ngo"
# Appservice access token (from your registration.yaml)
as_token = "INVALID_TOKEN_NOT_WORKING"
# Homeserver token used to authenticate Synapse callbacks
hs_token = "INVALID_TOKEN_NOT_WORKING"
# Server name (domain) part of Matrix user IDs
server_name = "dod.ngo"
# Room ID to send into (must be joined by the appservice / puppets)
@@ -17,4 +19,3 @@ room_id = "!sXabOBXbVObAlZQEUs:c-base.org" # "#potato-bridge:c-base.org"
[state]
# Where to persist last seen message id (optional but recommended)
state_file = "bridge_state.json"


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM rust:1.91-bookworm AS builder
FROM rust:1.92-bookworm AS builder
WORKDIR /app
@@ -37,6 +37,8 @@ COPY --from=builder /app/target/release/potatomesh-matrix-bridge /usr/local/bin/
COPY matrix/Config.toml /app/Config.example.toml
COPY matrix/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
EXPOSE 41448
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]


@@ -2,6 +2,8 @@
A small Rust daemon that bridges **PotatoMesh** LoRa messages into a **Matrix** room.
![matrix bridge](../scrot-0.6.png)
For each PotatoMesh node, the bridge creates (or uses) a **Matrix puppet user**:
- Matrix localpart: `potato_` + the hex node id (without `!`), e.g. `!67fc83cb` becomes `@potato_67fc83cb:example.org`
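The localpart derivation can be sketched as follows (the helper name is this sketch's, not the bridge's):

```python
def puppet_mxid(node_id: str, server_name: str) -> str:
    """Map a canonical node id like "!67fc83cb" to its puppet Matrix user id."""
    localpart = "potato_" + node_id.removeprefix("!")
    return f"@{localpart}:{server_name}"

print(puppet_mxid("!67fc83cb", "example.org"))  # @potato_67fc83cb:example.org
```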
@@ -54,9 +56,17 @@ This is **not** a full appservice framework; it just speaks the minimal HTTP nee
## Configuration
All configuration is in `Config.toml` in the project root.
Configuration can come from a TOML file, CLI flags, environment variables, or secret files. The bridge merges inputs in this order (highest to lowest):
Example:
1. CLI flags
2. Environment variables
3. Secret files (`*_FILE` paths or container defaults)
4. TOML config file
5. Container defaults (paths + poll interval)
If no TOML file is provided, required values must be supplied via CLI/env/secret inputs.
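A hypothetical Python restatement of this first-match-wins layering (the real merge lives in the bridge's Rust config code; the layer names here are illustrative):

```python
# Highest to lowest precedence, mirroring the list above.
PRECEDENCE = ["cli", "env", "secret_file", "toml", "container_default"]

def resolve(key, layers):
    """Return the value from the highest-precedence layer that defines it."""
    for name in PRECEDENCE:
        value = layers.get(name, {}).get(key)
        if value is not None:
            return value
    return None

layers = {
    "toml": {"poll_interval_secs": 60, "homeserver": "https://matrix.example.org"},
    "env": {"poll_interval_secs": 10},
}
print(resolve("poll_interval_secs", layers))  # 10: env beats the TOML value
print(resolve("homeserver", layers))          # falls through to the TOML layer
```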
Example TOML:
```toml
[potatomesh]
@@ -70,6 +80,8 @@ poll_interval_secs = 10
homeserver = "https://matrix.example.org"
# Appservice access token (from your registration.yaml)
as_token = "YOUR_APPSERVICE_AS_TOKEN"
# Appservice homeserver token (must match registration hs_token)
hs_token = "SECRET_HS_TOKEN"
# Server name (domain) part of Matrix user IDs
server_name = "example.org"
# Room ID to send into (must be joined by the appservice / puppets)
@@ -80,6 +92,92 @@ room_id = "!yourroomid:example.org"
state_file = "bridge_state.json"
```
The `hs_token` is used to validate inbound appservice transactions. Keep it identical in `Config.toml` and your Matrix appservice registration file.
### CLI Flags
Run `potatomesh-matrix-bridge --help` for the full list. Common flags:
* `--config PATH`
* `--state-file PATH`
* `--potatomesh-base-url URL`
* `--potatomesh-poll-interval-secs SECS`
* `--matrix-homeserver URL`
* `--matrix-as-token TOKEN`
* `--matrix-as-token-file PATH`
* `--matrix-hs-token TOKEN`
* `--matrix-hs-token-file PATH`
* `--matrix-server-name NAME`
* `--matrix-room-id ROOM`
* `--container` / `--no-container`
* `--secrets-dir PATH`
### Environment Variables
* `POTATOMESH_CONFIG`
* `POTATOMESH_BASE_URL`
* `POTATOMESH_POLL_INTERVAL_SECS`
* `MATRIX_HOMESERVER`
* `MATRIX_AS_TOKEN`
* `MATRIX_AS_TOKEN_FILE`
* `MATRIX_HS_TOKEN`
* `MATRIX_HS_TOKEN_FILE`
* `MATRIX_SERVER_NAME`
* `MATRIX_ROOM_ID`
* `STATE_FILE`
* `POTATOMESH_CONTAINER`
* `POTATOMESH_SECRETS_DIR`
### Secret Files
If you supply `*_FILE` values, the bridge reads the secret contents and trims whitespace. When running inside a container, the bridge also checks the default secrets directory (default: `/run/secrets`) for:
* `matrix_as_token`
* `matrix_hs_token`
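The read-and-trim behaviour can be sketched like this; an all-whitespace secret is rejected rather than silently used as an empty token:

```python
from pathlib import Path

def read_secret(path):
    """Read a secret file and strip surrounding whitespace, as the bridge does."""
    value = Path(path).read_text().strip()
    if not value:
        raise ValueError(f"Secret file {path} is empty")
    return value
```

Trimming matters in practice: Docker and Compose secrets frequently end with a trailing newline that would otherwise corrupt the token.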
### Container Defaults
Container detection checks `POTATOMESH_CONTAINER`, `CONTAINER`, and `/proc/1/cgroup`. When detected (or forced with `--container`), defaults shift to:
* Config path: `/app/Config.toml`
* State file: `/app/bridge_state.json`
* Secrets dir: `/run/secrets`
* Poll interval: 15 seconds (if not otherwise configured)
Set `POTATOMESH_CONTAINER=0` or `--no-container` to opt out of container defaults.
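The detection order described above can be restated as a small sketch (parameter names are this sketch's; the bridge implements this in Rust):

```python
def container_mode(override, env_hint, cgroup):
    """Explicit override first, then env hint, then cgroup contents."""
    if override is not None:   # --container / --no-container / POTATOMESH_CONTAINER
        return override
    if env_hint:               # CONTAINER env var set to anything non-empty
        return True
    if cgroup:                 # /proc/1/cgroup mentions a container runtime
        return any(k in cgroup.lower()
                   for k in ("docker", "kubepods", "containerd", "podman"))
    return False

assert container_mode(False, "docker", "docker") is False  # override wins
assert container_mode(None, None, "0::/kubepods/pod123") is True
```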
### Docker Compose First Run
Before starting Compose, complete this preflight checklist:
1. Ensure `matrix/Config.toml` exists as a regular file on the host (not a directory).
2. Fill required Matrix values in `matrix/Config.toml`:
- `matrix.as_token`
- `matrix.hs_token`
- `matrix.server_name`
- `matrix.room_id`
- `matrix.homeserver`
This is required because the shared Compose anchor `x-matrix-bridge-base` mounts `./matrix/Config.toml` to `/app/Config.toml`.
Then follow the token and namespace requirements in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example).
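A small preflight sketch for step 1; the function name is hypothetical, but the failure mode it guards against (Docker creating the missing mount source as a directory) is the one described in the troubleshooting table:

```python
from pathlib import Path

def compose_preflight(config_path="matrix/Config.toml"):
    """Return a problem description, or None when the mount source looks sane."""
    p = Path(config_path)
    if not p.exists():
        return f"{p} is missing; create it before `docker compose up`"
    if p.is_dir():
        return f"{p} is a directory; Docker likely created it at mount time"
    return None
```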
#### Troubleshooting
| Symptom | Likely cause | What to check |
| --- | --- | --- |
| `Is a directory (os error 21)` | Host mount source became a directory | `matrix/Config.toml` was missing at mount time and Docker created it as a directory on the host. |
| `M_UNKNOWN_TOKEN` / `401 Unauthorized` | Matrix appservice token mismatch | Verify `matrix.as_token` matches your appservice registration and setup in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example). |
#### Recovery from accidental `Config.toml` directory creation
```bash
# from repo root
rm -rf matrix/Config.toml
touch matrix/Config.toml
# then edit matrix/Config.toml and set valid matrix.as_token, matrix.hs_token,
# matrix.server_name, matrix.room_id, and matrix.homeserver before starting compose
```
### PotatoMesh API
The bridge assumes:
@@ -134,7 +232,7 @@ A minimal example sketch (you **must** adjust URLs, secrets, namespaces):
```yaml
id: potatomesh-bridge
url: "http://your-bridge-host:8080" # not used by this bridge if it only calls out
url: "http://your-bridge-host:41448"
as_token: "YOUR_APPSERVICE_AS_TOKEN"
hs_token: "SECRET_HS_TOKEN"
sender_localpart: "potatomesh-bridge"
@@ -145,10 +243,12 @@ namespaces:
regex: "@potato_[0-9a-f]{8}:example.org"
```
For this bridge, only the `as_token` and `namespaces.users` actually matter. The bridge does not accept inbound events; it only uses the `as_token` to call the homeserver.
This bridge listens for Synapse appservice callbacks on port `41448` so it can log inbound transaction payloads. It still only forwards messages one way (PotatoMesh → Matrix), so inbound Matrix events are acknowledged but not bridged. The `as_token` and `namespaces.users` entries remain required for outbound calls, and the `url` should point at the listener.
In Synapse's `homeserver.yaml`, add the registration file under `app_service_config_files`, restart, and invite a puppet user to your target room (or use the room ID directly).
The bridge validates inbound appservice callbacks by comparing the `access_token` query param to `hs_token` in `Config.toml`, so keep those values in sync.
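A hedged sketch of that check: the bridge compares the `access_token` query parameter against the configured `hs_token` (the constant-time comparison below is this sketch's choice, not necessarily what the Rust code does):

```python
import hmac

def callback_authorized(query_params: dict, hs_token: str) -> bool:
    """Accept a Synapse appservice callback only when the tokens match."""
    supplied = query_params.get("access_token")
    return supplied is not None and hmac.compare_digest(supplied, hs_token)

assert callback_authorized({"access_token": "SECRET_HS_TOKEN"}, "SECRET_HS_TOKEN")
assert not callback_authorized({}, "SECRET_HS_TOKEN")
```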
---
## Build
@@ -178,10 +278,11 @@ Build the container from the repo root with the included `matrix/Dockerfile`:
docker build -f matrix/Dockerfile -t potatomesh-matrix-bridge .
```
Provide your config at `/app/Config.toml` and persist the bridge state file by mounting volumes. Minimal example:
Provide your config at `/app/Config.toml` (or use CLI/env/secret overrides) and persist the bridge state file by mounting volumes. Minimal example:
```bash
docker run --rm \
-p 41448:41448 \
-v bridge_state:/app \
-v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
potatomesh-matrix-bridge
@@ -191,12 +292,13 @@ If you prefer to isolate the state file from the config, mount it directly inste
```bash
docker run --rm \
-p 41448:41448 \
-v bridge_state:/app \
-v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
potatomesh-matrix-bridge
```
The image ships `Config.example.toml` for reference, but the bridge will exit if `/app/Config.toml` is not provided.
The image ships `Config.example.toml` for reference. If `/app/Config.toml` is absent, set the required values via environment variables, CLI flags, or secrets instead.
---
@@ -234,7 +336,7 @@ Delete `bridge_state.json` if you want it to replay all currently available mess
## Development
Run tests (currently mostly compile checks, no real tests yet):
Run tests:
```bash
cargo test


@@ -15,6 +15,13 @@
set -e
# Default to container-aware configuration paths unless explicitly overridden.
: "${POTATOMESH_CONTAINER:=1}"
: "${POTATOMESH_SECRETS_DIR:=/run/secrets}"
export POTATOMESH_CONTAINER
export POTATOMESH_SECRETS_DIR
# Default state file path from Config.toml unless overridden.
STATE_FILE="${STATE_FILE:-/app/bridge_state.json}"
STATE_DIR="$(dirname "$STATE_FILE")"

matrix/src/cli.rs (new file, 105 lines)

@@ -0,0 +1,105 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use clap::{ArgAction, Parser};

#[cfg(not(test))]
use crate::config::{ConfigInputs, ConfigOverrides};

/// CLI arguments for the Matrix bridge.
#[derive(Debug, Parser)]
#[command(
    name = "potatomesh-matrix-bridge",
    version,
    about = "PotatoMesh Matrix bridge"
)]
pub struct Cli {
    /// Path to the configuration TOML file.
    #[arg(long, value_name = "PATH")]
    pub config: Option<String>,

    /// Path to the bridge state file.
    #[arg(long, value_name = "PATH")]
    pub state_file: Option<String>,

    /// PotatoMesh base URL.
    #[arg(long, value_name = "URL")]
    pub potatomesh_base_url: Option<String>,

    /// Poll interval in seconds.
    #[arg(long, value_name = "SECS")]
    pub potatomesh_poll_interval_secs: Option<u64>,

    /// Matrix homeserver base URL.
    #[arg(long, value_name = "URL")]
    pub matrix_homeserver: Option<String>,

    /// Matrix appservice access token.
    #[arg(long, value_name = "TOKEN")]
    pub matrix_as_token: Option<String>,

    /// Path to a secret file containing the Matrix appservice access token.
    #[arg(long, value_name = "PATH")]
    pub matrix_as_token_file: Option<String>,

    /// Matrix homeserver token for inbound appservice requests.
    #[arg(long, value_name = "TOKEN")]
    pub matrix_hs_token: Option<String>,

    /// Path to a secret file containing the Matrix homeserver token.
    #[arg(long, value_name = "PATH")]
    pub matrix_hs_token_file: Option<String>,

    /// Matrix server name (domain).
    #[arg(long, value_name = "NAME")]
    pub matrix_server_name: Option<String>,

    /// Matrix room id to forward into.
    #[arg(long, value_name = "ROOM")]
    pub matrix_room_id: Option<String>,

    /// Force container defaults (overrides detection).
    #[arg(long, action = ArgAction::SetTrue)]
    pub container: bool,

    /// Disable container defaults (overrides detection).
    #[arg(long, action = ArgAction::SetTrue)]
    pub no_container: bool,

    /// Directory to search for default secret files.
    #[arg(long, value_name = "PATH")]
    pub secrets_dir: Option<String>,
}

impl Cli {
    /// Convert CLI args into configuration inputs.
    #[cfg(not(test))]
    pub fn to_inputs(&self) -> ConfigInputs {
        ConfigInputs {
            config_path: self.config.clone(),
            secrets_dir: self.secrets_dir.clone(),
            container_override: resolve_container_override(self.container, self.no_container),
            container_hint: None,
            overrides: ConfigOverrides {
                potatomesh_base_url: self.potatomesh_base_url.clone(),
                potatomesh_poll_interval_secs: self.potatomesh_poll_interval_secs,
                matrix_homeserver: self.matrix_homeserver.clone(),
                matrix_as_token: self.matrix_as_token.clone(),
                matrix_as_token_file: self.matrix_as_token_file.clone(),
                matrix_hs_token: self.matrix_hs_token.clone(),
                matrix_hs_token_file: self.matrix_hs_token_file.clone(),
                matrix_server_name: self.matrix_server_name.clone(),
                matrix_room_id: self.matrix_room_id.clone(),
                state_file: self.state_file.clone(),
            },
        }
    }
}

/// Resolve container override flags into an optional boolean.
#[cfg(not(test))]
fn resolve_container_override(container: bool, no_container: bool) -> Option<bool> {
    match (container, no_container) {
        (true, false) => Some(true),
        (false, true) => Some(false),
        _ => None,
    }
}


@@ -15,25 +15,37 @@
use serde::Deserialize;
use std::{fs, path::Path};
const DEFAULT_CONFIG_PATH: &str = "Config.toml";
const CONTAINER_CONFIG_PATH: &str = "/app/Config.toml";
const DEFAULT_STATE_FILE: &str = "bridge_state.json";
const CONTAINER_STATE_FILE: &str = "/app/bridge_state.json";
const DEFAULT_SECRETS_DIR: &str = "/run/secrets";
const CONTAINER_POLL_INTERVAL_SECS: u64 = 15;
/// PotatoMesh API settings.
#[derive(Debug, Deserialize, Clone)]
pub struct PotatomeshConfig {
pub base_url: String,
pub poll_interval_secs: u64,
}
/// Matrix appservice settings for the bridge.
#[derive(Debug, Deserialize, Clone)]
pub struct MatrixConfig {
pub homeserver: String,
pub as_token: String,
pub hs_token: String,
pub server_name: String,
pub room_id: String,
}
/// State file configuration for the bridge.
#[derive(Debug, Deserialize, Clone)]
pub struct StateConfig {
pub state_file: String,
}
/// Full configuration loaded for the bridge runtime.
#[derive(Debug, Deserialize, Clone)]
pub struct Config {
pub potatomesh: PotatomeshConfig,
@@ -41,19 +53,447 @@ pub struct Config {
pub state: StateConfig,
}
#[derive(Debug, Deserialize, Clone, Default)]
struct PartialPotatomeshConfig {
#[serde(default)]
base_url: Option<String>,
#[serde(default)]
poll_interval_secs: Option<u64>,
}
#[derive(Debug, Deserialize, Clone, Default)]
struct PartialMatrixConfig {
#[serde(default)]
homeserver: Option<String>,
#[serde(default)]
as_token: Option<String>,
#[serde(default)]
hs_token: Option<String>,
#[serde(default)]
server_name: Option<String>,
#[serde(default)]
room_id: Option<String>,
}
#[derive(Debug, Deserialize, Clone, Default)]
struct PartialStateConfig {
#[serde(default)]
state_file: Option<String>,
}
#[derive(Debug, Deserialize, Clone, Default)]
struct PartialConfig {
#[serde(default)]
potatomesh: PartialPotatomeshConfig,
#[serde(default)]
matrix: PartialMatrixConfig,
#[serde(default)]
state: PartialStateConfig,
}
/// Overwrite an optional value when the incoming value is present.
fn merge_option<T>(target: &mut Option<T>, incoming: Option<T>) {
if incoming.is_some() {
*target = incoming;
}
}
/// CLI or environment overrides for configuration fields.
#[derive(Debug, Clone, Default)]
pub struct ConfigOverrides {
pub potatomesh_base_url: Option<String>,
pub potatomesh_poll_interval_secs: Option<u64>,
pub matrix_homeserver: Option<String>,
pub matrix_as_token: Option<String>,
pub matrix_as_token_file: Option<String>,
pub matrix_hs_token: Option<String>,
pub matrix_hs_token_file: Option<String>,
pub matrix_server_name: Option<String>,
pub matrix_room_id: Option<String>,
pub state_file: Option<String>,
}
impl ConfigOverrides {
fn apply_non_token_overrides(&self, cfg: &mut PartialConfig) {
merge_option(
&mut cfg.potatomesh.base_url,
self.potatomesh_base_url.clone(),
);
merge_option(
&mut cfg.potatomesh.poll_interval_secs,
self.potatomesh_poll_interval_secs,
);
merge_option(&mut cfg.matrix.homeserver, self.matrix_homeserver.clone());
merge_option(&mut cfg.matrix.server_name, self.matrix_server_name.clone());
merge_option(&mut cfg.matrix.room_id, self.matrix_room_id.clone());
merge_option(&mut cfg.state.state_file, self.state_file.clone());
}
fn merge(self, higher: ConfigOverrides) -> ConfigOverrides {
let matrix_as_token = if higher.matrix_as_token_file.is_some() {
higher.matrix_as_token
} else {
higher.matrix_as_token.or(self.matrix_as_token)
};
let matrix_hs_token = if higher.matrix_hs_token_file.is_some() {
higher.matrix_hs_token
} else {
higher.matrix_hs_token.or(self.matrix_hs_token)
};
ConfigOverrides {
potatomesh_base_url: higher.potatomesh_base_url.or(self.potatomesh_base_url),
potatomesh_poll_interval_secs: higher
.potatomesh_poll_interval_secs
.or(self.potatomesh_poll_interval_secs),
matrix_homeserver: higher.matrix_homeserver.or(self.matrix_homeserver),
matrix_as_token,
matrix_as_token_file: higher.matrix_as_token_file.or(self.matrix_as_token_file),
matrix_hs_token,
matrix_hs_token_file: higher.matrix_hs_token_file.or(self.matrix_hs_token_file),
matrix_server_name: higher.matrix_server_name.or(self.matrix_server_name),
matrix_room_id: higher.matrix_room_id.or(self.matrix_room_id),
state_file: higher.state_file.or(self.state_file),
}
}
}
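The token-merge rule above is subtle: a higher-precedence `*_file` reference suppresses a lower layer's inline token even when the higher layer carries no inline token itself, so the secret file gets a chance to supply the value. A hypothetical Python restatement:

```python
def merge_token(lower_token, higher_token, higher_token_file):
    """Higher inline token wins; a higher *_file blocks fallback to lower."""
    if higher_token_file is not None:
        return higher_token  # may be None: the secret file will supply the value
    return higher_token if higher_token is not None else lower_token

assert merge_token("toml-token", None, None) == "toml-token"
assert merge_token("toml-token", "env-token", None) == "env-token"
assert merge_token("toml-token", None, "/run/secrets/matrix_as_token") is None
```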
/// Inputs gathered from CLI flags or environment variables.
#[derive(Debug, Clone, Default)]
pub struct ConfigInputs {
pub config_path: Option<String>,
pub secrets_dir: Option<String>,
pub container_override: Option<bool>,
pub container_hint: Option<String>,
pub overrides: ConfigOverrides,
}
impl ConfigInputs {
/// Merge two input sets, preferring values from `higher`.
pub fn merge(self, higher: ConfigInputs) -> ConfigInputs {
ConfigInputs {
config_path: higher.config_path.or(self.config_path),
secrets_dir: higher.secrets_dir.or(self.secrets_dir),
container_override: higher.container_override.or(self.container_override),
container_hint: higher.container_hint.or(self.container_hint),
overrides: self.overrides.merge(higher.overrides),
}
}
/// Load configuration inputs from the process environment.
#[cfg(not(test))]
pub fn from_env() -> anyhow::Result<Self> {
let overrides = ConfigOverrides {
potatomesh_base_url: env_var("POTATOMESH_BASE_URL"),
potatomesh_poll_interval_secs: parse_u64_env("POTATOMESH_POLL_INTERVAL_SECS")?,
matrix_homeserver: env_var("MATRIX_HOMESERVER"),
matrix_as_token: env_var("MATRIX_AS_TOKEN"),
matrix_as_token_file: env_var("MATRIX_AS_TOKEN_FILE"),
matrix_hs_token: env_var("MATRIX_HS_TOKEN"),
matrix_hs_token_file: env_var("MATRIX_HS_TOKEN_FILE"),
matrix_server_name: env_var("MATRIX_SERVER_NAME"),
matrix_room_id: env_var("MATRIX_ROOM_ID"),
state_file: env_var("STATE_FILE"),
};
Ok(ConfigInputs {
config_path: env_var("POTATOMESH_CONFIG"),
secrets_dir: env_var("POTATOMESH_SECRETS_DIR"),
container_override: parse_bool_env("POTATOMESH_CONTAINER")?,
container_hint: env_var("CONTAINER"),
overrides,
})
}
}
impl Config {
/// Load a full Config from a TOML file.
#[cfg(test)]
pub fn load_from_file(path: &str) -> anyhow::Result<Self> {
let contents = fs::read_to_string(path)?;
let cfg = toml::from_str(&contents)?;
Ok(cfg)
}
}
pub fn from_default_path() -> anyhow::Result<Self> {
let path = "Config.toml";
if !Path::new(path).exists() {
anyhow::bail!("Config file {path} not found");
/// Load a Config by merging CLI/env overrides with an optional TOML file.
#[cfg(not(test))]
pub fn load(cli_inputs: ConfigInputs) -> anyhow::Result<Config> {
let env_inputs = ConfigInputs::from_env()?;
let cgroup_hint = read_cgroup();
load_from_sources(cli_inputs, env_inputs, cgroup_hint.as_deref())
}
/// Load configuration by merging CLI/env inputs and an optional config file.
fn load_from_sources(
cli_inputs: ConfigInputs,
env_inputs: ConfigInputs,
cgroup_hint: Option<&str>,
) -> anyhow::Result<Config> {
let merged_inputs = env_inputs.merge(cli_inputs);
let container = detect_container(
merged_inputs.container_override,
merged_inputs.container_hint.as_deref(),
cgroup_hint,
);
let defaults = default_paths(container);
let base_cfg = resolve_base_config(&merged_inputs, &defaults)?;
let mut cfg = base_cfg.unwrap_or_default();
merged_inputs.overrides.apply_non_token_overrides(&mut cfg);
let secrets_dir = resolve_secrets_dir(&merged_inputs, container, &defaults);
let as_token = resolve_token(
cfg.matrix.as_token.clone(),
merged_inputs.overrides.matrix_as_token.clone(),
merged_inputs.overrides.matrix_as_token_file.as_deref(),
secrets_dir.as_deref(),
"matrix_as_token",
)?;
let hs_token = resolve_token(
cfg.matrix.hs_token.clone(),
merged_inputs.overrides.matrix_hs_token.clone(),
merged_inputs.overrides.matrix_hs_token_file.as_deref(),
secrets_dir.as_deref(),
"matrix_hs_token",
)?;
if cfg.potatomesh.poll_interval_secs.is_none() && container {
cfg.potatomesh.poll_interval_secs = Some(defaults.poll_interval_secs);
}
if cfg.state.state_file.is_none() {
cfg.state.state_file = Some(defaults.state_file);
}
let missing = collect_missing_fields(&cfg, &as_token, &hs_token);
if !missing.is_empty() {
anyhow::bail!(
"Missing required configuration values: {}",
missing.join(", ")
);
}
Ok(Config {
potatomesh: PotatomeshConfig {
base_url: cfg.potatomesh.base_url.unwrap(),
poll_interval_secs: cfg.potatomesh.poll_interval_secs.unwrap(),
},
matrix: MatrixConfig {
homeserver: cfg.matrix.homeserver.unwrap(),
as_token: as_token.unwrap(),
hs_token: hs_token.unwrap(),
server_name: cfg.matrix.server_name.unwrap(),
room_id: cfg.matrix.room_id.unwrap(),
},
state: StateConfig {
state_file: cfg.state.state_file.unwrap(),
},
})
}
/// Collect the missing required field identifiers for error reporting.
fn collect_missing_fields(
cfg: &PartialConfig,
as_token: &Option<String>,
hs_token: &Option<String>,
) -> Vec<&'static str> {
let mut missing = Vec::new();
if cfg.potatomesh.base_url.is_none() {
missing.push("potatomesh.base_url");
}
if cfg.potatomesh.poll_interval_secs.is_none() {
missing.push("potatomesh.poll_interval_secs");
}
if cfg.matrix.homeserver.is_none() {
missing.push("matrix.homeserver");
}
if as_token.is_none() {
missing.push("matrix.as_token");
}
if hs_token.is_none() {
missing.push("matrix.hs_token");
}
if cfg.matrix.server_name.is_none() {
missing.push("matrix.server_name");
}
if cfg.matrix.room_id.is_none() {
missing.push("matrix.room_id");
}
if cfg.state.state_file.is_none() {
missing.push("state.state_file");
}
missing
}
/// Resolve the base TOML config file, honoring explicit config paths.
fn resolve_base_config(
inputs: &ConfigInputs,
defaults: &DefaultPaths,
) -> anyhow::Result<Option<PartialConfig>> {
if let Some(path) = &inputs.config_path {
return Ok(Some(load_partial_from_file(path)?));
}
let container_path = Path::new(&defaults.config_path);
if container_path.exists() {
return Ok(Some(load_partial_from_file(&defaults.config_path)?));
}
let host_path = Path::new(DEFAULT_CONFIG_PATH);
if host_path.exists() {
return Ok(Some(load_partial_from_file(DEFAULT_CONFIG_PATH)?));
}
Ok(None)
}
/// Decide which secrets directory to use based on inputs and defaults.
fn resolve_secrets_dir(
inputs: &ConfigInputs,
container: bool,
defaults: &DefaultPaths,
) -> Option<String> {
if let Some(explicit) = inputs.secrets_dir.clone() {
return Some(explicit);
}
if container {
return Some(defaults.secrets_dir.clone());
}
None
}
/// Resolve a token value from explicit values, secret files, or config file values.
fn resolve_token(
base_value: Option<String>,
explicit_value: Option<String>,
explicit_file: Option<&str>,
secrets_dir: Option<&str>,
default_secret_name: &str,
) -> anyhow::Result<Option<String>> {
if let Some(value) = explicit_value {
return Ok(Some(value));
}
if let Some(path) = explicit_file {
return Ok(Some(read_secret_file(path)?));
}
if let Some(dir) = secrets_dir {
let default_path = Path::new(dir).join(default_secret_name);
if default_path.exists() {
return Ok(Some(read_secret_file(
default_path
.to_str()
.ok_or_else(|| anyhow::anyhow!("Invalid secret file path"))?,
)?));
}
Self::load_from_file(path)
}
Ok(base_value)
}
/// Read and trim a secret file from disk.
fn read_secret_file(path: &str) -> anyhow::Result<String> {
let contents = fs::read_to_string(path)?;
let trimmed = contents.trim();
if trimmed.is_empty() {
anyhow::bail!("Secret file {path} is empty");
}
Ok(trimmed.to_string())
}
/// Load a partial config from a TOML file.
fn load_partial_from_file(path: &str) -> anyhow::Result<PartialConfig> {
let contents = fs::read_to_string(path)?;
let cfg = toml::from_str(&contents)?;
Ok(cfg)
}
/// Compute default paths and intervals based on container mode.
fn default_paths(container: bool) -> DefaultPaths {
if container {
DefaultPaths {
config_path: CONTAINER_CONFIG_PATH.to_string(),
state_file: CONTAINER_STATE_FILE.to_string(),
secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
}
} else {
DefaultPaths {
config_path: DEFAULT_CONFIG_PATH.to_string(),
state_file: DEFAULT_STATE_FILE.to_string(),
secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
}
}
}
#[derive(Debug, Clone)]
struct DefaultPaths {
config_path: String,
state_file: String,
secrets_dir: String,
poll_interval_secs: u64,
}
/// Detect whether the bridge is running inside a container.
fn detect_container(
override_value: Option<bool>,
env_hint: Option<&str>,
cgroup_hint: Option<&str>,
) -> bool {
if let Some(value) = override_value {
return value;
}
if let Some(hint) = env_hint {
if !hint.trim().is_empty() {
return true;
}
}
if let Some(cgroup) = cgroup_hint {
let haystack = cgroup.to_ascii_lowercase();
return haystack.contains("docker")
|| haystack.contains("kubepods")
|| haystack.contains("containerd")
|| haystack.contains("podman");
}
false
}
/// Read the primary cgroup file for container detection.
#[cfg(not(test))]
fn read_cgroup() -> Option<String> {
fs::read_to_string("/proc/1/cgroup").ok()
}
/// Read and trim an environment variable value.
#[cfg(not(test))]
fn env_var(key: &str) -> Option<String> {
std::env::var(key).ok().filter(|v| !v.trim().is_empty())
}
/// Parse a u64 environment variable value.
#[cfg(not(test))]
fn parse_u64_env(key: &str) -> anyhow::Result<Option<u64>> {
match env_var(key) {
None => Ok(None),
Some(value) => value
.parse::<u64>()
.map(Some)
.map_err(|e| anyhow::anyhow!("Invalid {key} value: {e}")),
}
}
/// Parse a boolean environment variable value.
#[cfg(not(test))]
fn parse_bool_env(key: &str) -> anyhow::Result<Option<bool>> {
match env_var(key) {
None => Ok(None),
Some(value) => parse_bool_value(key, &value).map(Some),
}
}
/// Parse a boolean string with standard truthy/falsy values.
#[cfg(not(test))]
fn parse_bool_value(key: &str, value: &str) -> anyhow::Result<bool> {
let normalized = value.trim().to_ascii_lowercase();
match normalized.as_str() {
"1" | "true" | "yes" | "on" => Ok(true),
"0" | "false" | "no" | "off" => Ok(false),
_ => anyhow::bail!("Invalid {key} value: {value}"),
}
}
@@ -62,6 +502,43 @@ mod tests {
use super::*;
use serial_test::serial;
use std::io::Write;
use std::path::{Path, PathBuf};
struct CwdGuard {
original: PathBuf,
}
impl CwdGuard {
/// Switch to the provided path and restore the original cwd on drop.
fn enter(path: &Path) -> Self {
let original = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("/"));
std::env::set_current_dir(path).unwrap();
Self { original }
}
}
impl Drop for CwdGuard {
fn drop(&mut self) {
if std::env::set_current_dir(&self.original).is_err() {
let _ = std::env::set_current_dir("/");
}
}
}
fn minimal_overrides() -> ConfigOverrides {
ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
potatomesh_poll_interval_secs: Some(10),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_as_token: Some("AS_TOKEN".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
state_file: Some("bridge_state.json".to_string()),
matrix_as_token_file: None,
matrix_hs_token_file: None,
}
}
#[test]
fn parse_minimal_config_from_toml_str() {
@@ -73,6 +550,7 @@ mod tests {
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
@@ -86,6 +564,7 @@ mod tests {
assert_eq!(cfg.matrix.homeserver, "https://matrix.example.org");
assert_eq!(cfg.matrix.as_token, "AS_TOKEN");
assert_eq!(cfg.matrix.hs_token, "HS_TOKEN");
assert_eq!(cfg.matrix.server_name, "example.org");
assert_eq!(cfg.matrix.room_id, "!roomid:example.org");
@@ -108,6 +587,7 @@ mod tests {
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
@@ -121,37 +601,378 @@ mod tests {
}
#[test]
#[serial]
fn from_default_path_not_found() {
let tmp_dir = tempfile::tempdir().unwrap();
std::env::set_current_dir(tmp_dir.path()).unwrap();
let result = Config::from_default_path();
assert!(result.is_err());
fn detect_container_prefers_override() {
assert!(detect_container(Some(true), None, None));
assert!(!detect_container(
Some(false),
Some("docker"),
Some("docker")
));
}
#[test]
#[serial]
fn from_default_path_found() {
fn detect_container_from_hint_or_cgroup() {
assert!(detect_container(None, Some("docker"), None));
assert!(detect_container(None, None, Some("kubepods")));
assert!(!detect_container(None, None, Some("")));
}
#[test]
fn load_uses_cli_overrides_over_env() {
let toml_str = r#"
[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
poll_interval_secs = 5
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#;
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("Config.toml");
let mut file = std::fs::File::create(file_path).unwrap();
let mut file = tempfile::NamedTempFile::new().unwrap();
write!(file, "{}", toml_str).unwrap();
std::env::set_current_dir(tmp_dir.path()).unwrap();
let result = Config::from_default_path();
assert!(result.is_ok());
let env_inputs = ConfigInputs {
config_path: Some(file.path().to_str().unwrap().to_string()),
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://env.example/".to_string()),
..minimal_overrides()
},
..ConfigInputs::default()
};
let cli_inputs = ConfigInputs {
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://cli.example/".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, env_inputs, None).unwrap();
assert_eq!(cfg.potatomesh.base_url, "https://cli.example/");
}
#[test]
#[serial]
fn load_uses_container_secret_defaults() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let secrets_dir = tmp_dir.path();
fs::write(secrets_dir.join("matrix_as_token"), "FROM_SECRET").unwrap();
let cli_inputs = ConfigInputs {
secrets_dir: Some(secrets_dir.to_string_lossy().to_string()),
container_override: Some(true),
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
potatomesh_poll_interval_secs: Some(10),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
state_file: Some("bridge_state.json".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
assert_eq!(cfg.matrix.as_token, "FROM_SECRET");
}
#[test]
fn resolve_token_prefers_explicit_value() {
let tmp_dir = tempfile::tempdir().unwrap();
let token_file = tmp_dir.path().join("token");
fs::write(&token_file, "FROM_FILE").unwrap();
let resolved = resolve_token(
Some("FROM_BASE".to_string()),
Some("FROM_EXPLICIT".to_string()),
Some(token_file.to_str().unwrap()),
Some(tmp_dir.path().to_str().unwrap()),
"matrix_as_token",
)
.unwrap();
assert_eq!(resolved, Some("FROM_EXPLICIT".to_string()));
}
#[test]
fn resolve_token_reads_explicit_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let token_file = tmp_dir.path().join("token");
fs::write(&token_file, "FROM_FILE").unwrap();
let resolved = resolve_token(
None,
None,
Some(token_file.to_str().unwrap()),
None,
"matrix_as_token",
)
.unwrap();
assert_eq!(resolved, Some("FROM_FILE".to_string()));
}
#[test]
fn resolve_token_reads_default_secret_file() {
let tmp_dir = tempfile::tempdir().unwrap();
fs::write(tmp_dir.path().join("matrix_hs_token"), "FROM_SECRET").unwrap();
let resolved = resolve_token(
None,
None,
None,
Some(tmp_dir.path().to_str().unwrap()),
"matrix_hs_token",
)
.unwrap();
assert_eq!(resolved, Some("FROM_SECRET".to_string()));
}
#[test]
fn resolve_token_errors_on_empty_secret_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let token_file = tmp_dir.path().join("token");
fs::write(&token_file, " ").unwrap();
let result = resolve_token(
None,
None,
Some(token_file.to_str().unwrap()),
None,
"matrix_as_token",
);
assert!(result.is_err());
}
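The precedence the `resolve_token` tests above encode — explicit value, then explicit token file, then `<secrets_dir>/<name>`, then the base value, with empty secret files rejected — can be condensed into a stand-alone sketch. The signature is hypothetical and simplified relative to the real function:

```rust
use std::fs;

// Hypothetical condensed version of the precedence the tests above assert:
// explicit value > explicit token file > <secrets_dir>/<name> > base value.
// An empty or whitespace-only secret file is an error, never a silent blank.
fn resolve_token(
    base: Option<String>,
    explicit: Option<String>,
    file: Option<&str>,
    secrets_dir: Option<&str>,
    name: &str,
) -> Result<Option<String>, String> {
    if let Some(v) = explicit {
        return Ok(Some(v));
    }
    let read = |path: &str| -> Result<String, String> {
        let data = fs::read_to_string(path).map_err(|e| e.to_string())?;
        let trimmed = data.trim();
        if trimmed.is_empty() {
            Err(format!("secret file {} is empty", path))
        } else {
            Ok(trimmed.to_string())
        }
    };
    if let Some(path) = file {
        return read(path).map(Some);
    }
    if let Some(dir) = secrets_dir {
        let candidate = format!("{}/{}", dir, name);
        if fs::metadata(&candidate).is_ok() {
            return read(&candidate).map(Some);
        }
    }
    Ok(base)
}

fn main() {
    // Explicit value wins even when a token file is configured.
    let r = resolve_token(
        Some("BASE".into()),
        Some("EXPLICIT".into()),
        Some("/missing"),
        None,
        "matrix_as_token",
    );
    assert_eq!(r.unwrap(), Some("EXPLICIT".to_string()));
    // With nothing else set, the base value is returned untouched.
    let r = resolve_token(Some("BASE".into()), None, None, None, "matrix_as_token");
    assert_eq!(r.unwrap(), Some("BASE".to_string()));
    println!("ok");
}
```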
#[test]
fn resolve_secrets_dir_prefers_explicit() {
let defaults = DefaultPaths {
config_path: "Config.toml".to_string(),
state_file: DEFAULT_STATE_FILE.to_string(),
secrets_dir: "default".to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
};
let inputs = ConfigInputs {
secrets_dir: Some("explicit".to_string()),
..ConfigInputs::default()
};
let resolved = resolve_secrets_dir(&inputs, true, &defaults);
assert_eq!(resolved, Some("explicit".to_string()));
}
#[test]
fn resolve_secrets_dir_container_default() {
let defaults = DefaultPaths {
config_path: "Config.toml".to_string(),
state_file: DEFAULT_STATE_FILE.to_string(),
secrets_dir: "default".to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
};
let inputs = ConfigInputs::default();
let resolved = resolve_secrets_dir(&inputs, true, &defaults);
assert_eq!(resolved, Some("default".to_string()));
assert_eq!(resolve_secrets_dir(&inputs, false, &defaults), None);
}
#[test]
#[serial]
fn resolve_base_config_prefers_explicit_path() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let config_path = tmp_dir.path().join("explicit.toml");
fs::write(
&config_path,
r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
)
.unwrap();
let defaults = default_paths(false);
let inputs = ConfigInputs {
config_path: Some(config_path.to_string_lossy().to_string()),
..ConfigInputs::default()
};
let resolved = resolve_base_config(&inputs, &defaults).unwrap();
assert!(resolved.is_some());
}
#[test]
#[serial]
fn resolve_base_config_uses_container_path_when_present() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let config_path = tmp_dir.path().join("container.toml");
fs::write(
&config_path,
r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
)
.unwrap();
let defaults = DefaultPaths {
config_path: config_path.to_string_lossy().to_string(),
state_file: DEFAULT_STATE_FILE.to_string(),
secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
};
let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
assert!(resolved.is_some());
}
#[test]
#[serial]
fn resolve_base_config_uses_host_path_when_present() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
fs::write(
"Config.toml",
r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
)
.unwrap();
let defaults = default_paths(false);
let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
assert!(resolved.is_some());
}
#[test]
#[serial]
fn resolve_base_config_returns_none_when_missing() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let defaults = default_paths(false);
let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
assert!(resolved.is_none());
}
#[test]
#[serial]
fn load_prefers_cli_token_file_over_env_value() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let token_file = tmp_dir.path().join("as_token");
fs::write(&token_file, "CLI_SECRET").unwrap();
let env_inputs = ConfigInputs {
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
potatomesh_poll_interval_secs: Some(10),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_as_token: Some("ENV_TOKEN".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cli_inputs = ConfigInputs {
overrides: ConfigOverrides {
matrix_as_token_file: Some(token_file.to_string_lossy().to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, env_inputs, None).unwrap();
assert_eq!(cfg.matrix.as_token, "CLI_SECRET");
}
#[test]
#[serial]
fn load_uses_container_default_poll_interval() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let cli_inputs = ConfigInputs {
container_override: Some(true),
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_as_token: Some("AS_TOKEN".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
assert_eq!(
cfg.potatomesh.poll_interval_secs,
CONTAINER_POLL_INTERVAL_SECS
);
}
#[test]
#[serial]
fn load_uses_default_state_path_when_missing() {
let tmp_dir = tempfile::tempdir().unwrap();
let _guard = CwdGuard::enter(tmp_dir.path());
let cli_inputs = ConfigInputs {
overrides: ConfigOverrides {
potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
potatomesh_poll_interval_secs: Some(10),
matrix_homeserver: Some("https://matrix.example.org".to_string()),
matrix_as_token: Some("AS_TOKEN".to_string()),
matrix_hs_token: Some("HS_TOKEN".to_string()),
matrix_server_name: Some("example.org".to_string()),
matrix_room_id: Some("!roomid:example.org".to_string()),
..ConfigOverrides::default()
},
..ConfigInputs::default()
};
let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
assert_eq!(cfg.state.state_file, DEFAULT_STATE_FILE);
}
}


@@ -12,23 +12,42 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod cli;
mod config;
mod matrix;
mod matrix_server;
mod potatomesh;
use std::{fs, path::Path};
use std::{fs, net::SocketAddr, path::Path};
use anyhow::Result;
use tokio::time::{sleep, Duration};
#[cfg(not(test))]
use clap::Parser;
use tokio::time::Duration;
use tracing::{error, info};
#[cfg(not(test))]
use crate::cli::Cli;
#[cfg(not(test))]
use crate::config::Config;
use crate::matrix::MatrixAppserviceClient;
use crate::potatomesh::{FetchParams, PotatoClient, PotatoMessage};
use crate::matrix_server::run_synapse_listener;
use crate::potatomesh::{FetchParams, PotatoClient, PotatoMessage, PotatoNode};
#[cfg(not(test))]
use tokio::time::sleep;
#[derive(Debug, serde::Serialize, serde::Deserialize, Default)]
pub struct BridgeState {
/// Id of the last message processed by the bridge (legacy ordering filter).
last_message_id: Option<u64>,
/// Highest rx_time observed; used to build incremental fetch queries.
#[serde(default)]
last_rx_time: Option<u64>,
/// Message ids seen at the current last_rx_time for de-duplication.
#[serde(default)]
last_rx_time_ids: Vec<u64>,
/// Legacy checkpoint timestamp used before last_rx_time was added.
#[serde(default, skip_serializing)]
last_checked_at: Option<u64>,
}
@@ -38,7 +57,15 @@ impl BridgeState {
return Ok(Self::default());
}
let data = fs::read_to_string(path)?;
let s: Self = serde_json::from_str(&data)?;
// Treat empty/whitespace-only files as a fresh state.
if data.trim().is_empty() {
return Ok(Self::default());
}
let mut s: Self = serde_json::from_str(&data)?;
if s.last_rx_time.is_none() {
s.last_rx_time = s.last_checked_at;
}
s.last_checked_at = None;
Ok(s)
}
@@ -49,17 +76,32 @@ impl BridgeState {
}
fn should_forward(&self, msg: &PotatoMessage) -> bool {
match self.last_message_id {
None => true,
Some(last) => msg.id > last,
match self.last_rx_time {
None => match self.last_message_id {
None => true,
Some(last_id) => msg.id > last_id,
},
Some(last_ts) => {
if msg.rx_time > last_ts {
true
} else if msg.rx_time < last_ts {
false
} else {
!self.last_rx_time_ids.contains(&msg.id)
}
}
}
}
fn update_with(&mut self, msg: &PotatoMessage) {
self.last_message_id = Some(match self.last_message_id {
None => msg.id,
Some(last) => last.max(msg.id),
});
self.last_message_id = Some(msg.id);
if self.last_rx_time.is_none() || Some(msg.rx_time) > self.last_rx_time {
self.last_rx_time = Some(msg.rx_time);
self.last_rx_time_ids = vec![msg.id];
} else if Some(msg.rx_time) == self.last_rx_time && !self.last_rx_time_ids.contains(&msg.id)
{
self.last_rx_time_ids.push(msg.id);
}
}
}
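The rx_time checkpoint scheme above can be exercised in isolation: the bridge keeps the newest rx_time plus the set of ids seen at that second, so messages sharing a timestamp are de-duplicated without being dropped. A minimal stand-alone sketch, with names that are illustrative rather than the bridge's API:

```rust
// Checkpoint sketch mirroring should_forward/update_with above:
// forward anything newer, drop anything older, and use the id set
// to de-duplicate messages that share the checkpoint's timestamp.
#[derive(Default)]
struct Checkpoint {
    last_rx_time: Option<u64>,
    ids_at_last: Vec<u64>,
}

impl Checkpoint {
    fn should_forward(&self, rx_time: u64, id: u64) -> bool {
        match self.last_rx_time {
            None => true,
            Some(last) if rx_time > last => true,
            Some(last) if rx_time < last => false,
            Some(_) => !self.ids_at_last.contains(&id),
        }
    }

    fn advance(&mut self, rx_time: u64, id: u64) {
        if Some(rx_time) > self.last_rx_time {
            self.last_rx_time = Some(rx_time);
            self.ids_at_last = vec![id];
        } else if Some(rx_time) == self.last_rx_time && !self.ids_at_last.contains(&id) {
            self.ids_at_last.push(id);
        }
    }
}

fn main() {
    let mut cp = Checkpoint::default();
    assert!(cp.should_forward(100, 1));
    cp.advance(100, 1);
    assert!(cp.should_forward(100, 2)); // same second, new id
    cp.advance(100, 2);
    assert!(!cp.should_forward(100, 1)); // duplicate at checkpoint time
    assert!(!cp.should_forward(99, 7)); // older than checkpoint
    println!("ok");
}
```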
@@ -69,7 +111,7 @@ fn build_fetch_params(state: &BridgeState) -> FetchParams {
limit: None,
since: None,
}
} else if let Some(ts) = state.last_checked_at {
} else if let Some(ts) = state.last_rx_time {
FetchParams {
limit: None,
since: Some(ts),
@@ -82,17 +124,29 @@ fn build_fetch_params(state: &BridgeState) -> FetchParams {
}
}
fn update_checkpoint(state: &mut BridgeState, delivered_all: bool, now_secs: u64) -> bool {
if !delivered_all {
return false;
/// Persist the bridge state and log any write errors.
fn persist_state(state: &BridgeState, state_path: &str) {
if let Err(e) = state.save(state_path) {
error!("Error saving state: {:?}", e);
}
}
if state.last_message_id.is_some() {
state.last_checked_at = Some(now_secs);
true
} else {
false
}
/// Emit an info log for the latest bridge state snapshot.
fn log_state_update(state: &BridgeState) {
info!("Updated state: {:?}", state);
}
/// Emit a sanitized config log without sensitive tokens.
#[cfg(not(test))]
fn log_config(cfg: &Config) {
info!(
potatomesh_base_url = cfg.potatomesh.base_url.as_str(),
matrix_homeserver = cfg.matrix.homeserver.as_str(),
matrix_server_name = cfg.matrix.server_name.as_str(),
matrix_room_id = cfg.matrix.room_id.as_str(),
state_file = cfg.state.state_file.as_str(),
"Loaded config"
);
}
async fn poll_once(
@@ -100,16 +154,13 @@ async fn poll_once(
matrix: &MatrixAppserviceClient,
state: &mut BridgeState,
state_path: &str,
now_secs: u64,
) {
let params = build_fetch_params(state);
match potato.fetch_messages(params).await {
Ok(mut msgs) => {
// sort by id ascending so we process in order
msgs.sort_by_key(|m| m.id);
let mut delivered_all = true;
// sort by rx_time so we process by actual receipt time
msgs.sort_by_key(|m| m.rx_time);
for msg in &msgs {
if !state.should_forward(msg) {
@@ -120,27 +171,19 @@ async fn poll_once(
if let Some(port) = &msg.portnum {
if port != "TEXT_MESSAGE_APP" {
state.update_with(msg);
log_state_update(state);
persist_state(state, state_path);
continue;
}
}
if let Err(e) = handle_message(potato, matrix, state, msg).await {
error!("Error handling message {}: {:?}", msg.id, e);
delivered_all = false;
continue;
}
// persist after each processed message
if let Err(e) = state.save(state_path) {
error!("Error saving state: {:?}", e);
}
}
// Only advance checkpoint after successful delivery and a known last_message_id.
if update_checkpoint(state, delivered_all, now_secs) {
if let Err(e) = state.save(state_path) {
error!("Error saving state: {:?}", e);
}
persist_state(state, state_path);
}
}
Err(e) => {
@@ -149,6 +192,15 @@ async fn poll_once(
}
}
fn spawn_synapse_listener(addr: SocketAddr, token: String) -> tokio::task::JoinHandle<()> {
tokio::spawn(async move {
if let Err(e) = run_synapse_listener(addr, token).await {
error!("Synapse listener failed: {:?}", e);
}
})
}
#[cfg(not(test))]
#[tokio::main]
async fn main() -> Result<()> {
// Logging: RUST_LOG=info,bridge=debug,reqwest=warn ...
@@ -160,8 +212,9 @@ async fn main() -> Result<()> {
)
.init();
let cfg = Config::from_default_path()?;
info!("Loaded config: {:?}", cfg);
let cli = Cli::parse();
let cfg = config::load(cli.to_inputs())?;
log_config(&cfg);
let http = reqwest::Client::builder().build()?;
let potato = PotatoClient::new(http.clone(), cfg.potatomesh.clone());
@@ -169,6 +222,10 @@ async fn main() -> Result<()> {
let matrix = MatrixAppserviceClient::new(http.clone(), cfg.matrix.clone());
matrix.health_check().await?;
let synapse_addr = SocketAddr::from(([0, 0, 0, 0], 41448));
let synapse_token = cfg.matrix.hs_token.clone();
let _synapse_handle = spawn_synapse_listener(synapse_addr, synapse_token);
let state_path = &cfg.state.state_file;
let mut state = BridgeState::load(state_path)?;
info!("Loaded state: {:?}", state);
@@ -176,12 +233,7 @@ async fn main() -> Result<()> {
let poll_interval = Duration::from_secs(cfg.potatomesh.poll_interval_secs);
loop {
let now_secs = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
poll_once(&potato, &matrix, &mut state, state_path, now_secs).await;
poll_once(&potato, &matrix, &mut state, state_path).await;
sleep(poll_interval).await;
}
@@ -199,38 +251,79 @@ async fn handle_message(
// Ensure puppet exists & has display name
matrix.ensure_user_registered(&localpart).await?;
matrix.set_display_name(&user_id, &node.long_name).await?;
matrix.ensure_user_joined_room(&user_id).await?;
let display_name = display_name_for_node(&node);
matrix.set_display_name(&user_id, &display_name).await?;
// Format the bridged message
let short = node
.short_name
.clone()
.unwrap_or_else(|| node.long_name.clone());
let body = format!(
"[{short}] {text}\n({from_id}{to_id}, {rssi}, {snr}, {chan}/{preset})",
short = short,
text = msg.text,
from_id = msg.from_id,
to_id = msg.to_id,
rssi = msg
.rssi
.map(|v| format!("RSSI {v} dB"))
.unwrap_or_else(|| "RSSI n/a".to_string()),
snr = msg
.snr
.map(|v| format!("SNR {v} dB"))
.unwrap_or_else(|| "SNR n/a".to_string()),
chan = msg.channel_name,
preset = msg.modem_preset,
let preset_short = modem_preset_short(&msg.modem_preset);
let prefix = format!(
"[{freq}][{preset_short}][{channel}]",
freq = msg.lora_freq,
preset_short = preset_short,
channel = msg.channel_name,
);
let (body, formatted_body) = format_message_bodies(&prefix, &msg.text);
matrix.send_text_message_as(&user_id, &body).await?;
matrix
.send_formatted_message_as(&user_id, &body, &formatted_body)
.await?;
info!("Bridged message: {:?}", msg);
state.update_with(msg);
log_state_update(state);
Ok(())
}
/// Build a compact modem preset label like "LF" for "LongFast".
fn modem_preset_short(preset: &str) -> String {
let letters: String = preset
.chars()
.filter(|ch| ch.is_ascii_uppercase())
.collect();
if letters.is_empty() {
preset.chars().take(2).collect()
} else {
letters
}
}
/// Build plain-text and HTML message bodies, rendering the metadata prefix as inline code.
fn format_message_bodies(prefix: &str, text: &str) -> (String, String) {
let body = format!("`{}` {}", prefix, text);
let formatted_body = format!("<code>{}</code> {}", escape_html(prefix), escape_html(text));
(body, formatted_body)
}
/// Build the Matrix display name from a node's long/short names.
fn display_name_for_node(node: &PotatoNode) -> String {
match node
.short_name
.as_deref()
.map(str::trim)
.filter(|s| !s.is_empty())
{
Some(short) if short != node.long_name => format!("{} ({})", node.long_name, short),
_ => node.long_name.clone(),
}
}
/// Minimal HTML escaping for Matrix formatted_body payloads.
fn escape_html(input: &str) -> String {
let mut escaped = String::with_capacity(input.len());
for ch in input.chars() {
match ch {
'&' => escaped.push_str("&amp;"),
'<' => escaped.push_str("&lt;"),
'>' => escaped.push_str("&gt;"),
'"' => escaped.push_str("&quot;"),
'\'' => escaped.push_str("&#39;"),
_ => escaped.push(ch),
}
}
escaped
}
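Put together, the helpers above produce a paired plain-text/HTML body for each bridged message. A self-contained sketch with the two helpers inlined (the frequency, preset, and channel values are illustrative):

```rust
// Inlined copies of the helpers above, for a runnable end-to-end example.
fn modem_preset_short(preset: &str) -> String {
    let letters: String = preset.chars().filter(|c| c.is_ascii_uppercase()).collect();
    if letters.is_empty() {
        preset.chars().take(2).collect()
    } else {
        letters
    }
}

fn escape_html(input: &str) -> String {
    input
        .chars()
        .map(|ch| match ch {
            '&' => "&amp;".to_string(),
            '<' => "&lt;".to_string(),
            '>' => "&gt;".to_string(),
            '"' => "&quot;".to_string(),
            '\'' => "&#39;".to_string(),
            _ => ch.to_string(),
        })
        .collect()
}

fn main() {
    let prefix = format!("[{}][{}][{}]", 868, modem_preset_short("LongFast"), "TEST");
    let text = "Hello <world>";
    // Plain body uses backticks; the HTML body escapes and wraps in <code>.
    let body = format!("`{}` {}", prefix, text);
    let formatted = format!("<code>{}</code> {}", escape_html(&prefix), escape_html(text));
    assert_eq!(body, "`[868][LF][TEST]` Hello <world>");
    assert_eq!(formatted, "<code>[868][LF][TEST]</code> Hello &lt;world&gt;");
    println!("ok");
}
```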
#[cfg(test)]
mod tests {
use super::*;
@@ -259,6 +352,54 @@ mod tests {
}
}
fn sample_node(short_name: Option<&str>, long_name: &str) -> PotatoNode {
PotatoNode {
node_id: "!abcd1234".to_string(),
short_name: short_name.map(str::to_string),
long_name: long_name.to_string(),
role: None,
hw_model: None,
last_heard: None,
first_heard: None,
latitude: None,
longitude: None,
altitude: None,
}
}
#[test]
fn modem_preset_short_handles_camelcase() {
assert_eq!(modem_preset_short("LongFast"), "LF");
assert_eq!(modem_preset_short("MediumFast"), "MF");
}
#[test]
fn format_message_bodies_escape_html() {
let (body, formatted) = format_message_bodies("[868][LF]", "Hello <&>");
assert_eq!(body, "`[868][LF]` Hello <&>");
assert_eq!(formatted, "<code>[868][LF]</code> Hello &lt;&amp;&gt;");
}
#[test]
fn escape_html_escapes_quotes() {
assert_eq!(escape_html("a\"b'c"), "a&quot;b&#39;c");
}
#[test]
fn display_name_for_node_includes_short_when_present() {
let node = sample_node(Some("TN"), "Test Node");
assert_eq!(display_name_for_node(&node), "Test Node (TN)");
}
#[test]
fn display_name_for_node_ignores_empty_or_duplicate_short() {
let empty_short = sample_node(Some(""), "Test Node");
assert_eq!(display_name_for_node(&empty_short), "Test Node");
let duplicate_short = sample_node(Some("Test Node"), "Test Node");
assert_eq!(display_name_for_node(&duplicate_short), "Test Node");
}
#[test]
fn bridge_state_initially_forwards_all() {
let state = BridgeState::default();
@@ -268,39 +409,72 @@ mod tests {
}
#[test]
fn bridge_state_tracks_highest_id_and_skips_older() {
fn bridge_state_tracks_latest_rx_time_and_skips_older() {
let mut state = BridgeState::default();
let m1 = sample_msg(10);
let m2 = sample_msg(20);
let m3 = sample_msg(15);
let m1 = PotatoMessage { rx_time: 10, ..m1 };
let m2 = PotatoMessage { rx_time: 20, ..m2 };
let m3 = PotatoMessage { rx_time: 15, ..m3 };
// First message, should forward
assert!(state.should_forward(&m1));
state.update_with(&m1);
assert_eq!(state.last_message_id, Some(10));
assert_eq!(state.last_rx_time, Some(10));
// Second message, newer rx_time, should forward
assert!(state.should_forward(&m2));
state.update_with(&m2);
assert_eq!(state.last_message_id, Some(20));
assert_eq!(state.last_rx_time, Some(20));
// Third message, older rx_time than last, should NOT forward
assert!(!state.should_forward(&m3));
// state remains unchanged
assert_eq!(state.last_message_id, Some(20));
assert_eq!(state.last_rx_time, Some(20));
}
#[test]
fn bridge_state_update_is_monotonic() {
let mut state = BridgeState {
last_message_id: Some(50),
fn bridge_state_uses_legacy_id_filter_when_rx_time_missing() {
let state = BridgeState {
last_message_id: Some(10),
last_rx_time: None,
last_rx_time_ids: vec![],
last_checked_at: None,
};
let m = sample_msg(40);
let older = sample_msg(9);
let newer = sample_msg(11);
state.update_with(&m); // id is lower than current
// last_message_id must stay at 50
assert_eq!(state.last_message_id, Some(50));
assert!(!state.should_forward(&older));
assert!(state.should_forward(&newer));
}
#[test]
fn bridge_state_dedupes_same_timestamp() {
let mut state = BridgeState::default();
let m1 = PotatoMessage {
rx_time: 100,
..sample_msg(10)
};
let m2 = PotatoMessage {
rx_time: 100,
..sample_msg(9)
};
let dup = PotatoMessage {
rx_time: 100,
..sample_msg(10)
};
assert!(state.should_forward(&m1));
state.update_with(&m1);
assert!(state.should_forward(&m2));
state.update_with(&m2);
assert!(!state.should_forward(&dup));
assert_eq!(state.last_rx_time, Some(100));
assert_eq!(state.last_rx_time_ids, vec![10, 9]);
}
#[test]
@@ -311,13 +485,17 @@ mod tests {
let state = BridgeState {
last_message_id: Some(12345),
last_checked_at: Some(99),
last_rx_time: Some(99),
last_rx_time_ids: vec![123],
last_checked_at: Some(77),
};
state.save(path_str).unwrap();
let loaded_state = BridgeState::load(path_str).unwrap();
assert_eq!(loaded_state.last_message_id, Some(12345));
assert_eq!(loaded_state.last_checked_at, Some(99));
assert_eq!(loaded_state.last_rx_time, Some(99));
assert_eq!(loaded_state.last_rx_time_ids, vec![123]);
assert_eq!(loaded_state.last_checked_at, None);
}
#[test]
@@ -328,50 +506,50 @@ mod tests {
let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, None);
assert_eq!(state.last_rx_time, None);
assert!(state.last_rx_time_ids.is_empty());
}
#[test]
fn bridge_state_load_empty_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("empty.json");
let path_str = file_path.to_str().unwrap();
fs::write(path_str, "").unwrap();
let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, None);
assert_eq!(state.last_rx_time, None);
assert!(state.last_rx_time_ids.is_empty());
assert_eq!(state.last_checked_at, None);
}
#[test]
fn update_checkpoint_requires_last_message_id() {
let mut state = BridgeState {
last_message_id: None,
last_checked_at: Some(10),
};
fn bridge_state_migrates_legacy_checkpoint() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("legacy_state.json");
let path_str = file_path.to_str().unwrap();
let saved = update_checkpoint(&mut state, true, 123);
assert!(!saved);
assert_eq!(state.last_checked_at, Some(10));
}
fs::write(
path_str,
r#"{"last_message_id":42,"last_checked_at":1710000000}"#,
)
.unwrap();
#[test]
fn update_checkpoint_skips_when_not_delivered() {
let mut state = BridgeState {
last_message_id: Some(5),
last_checked_at: Some(10),
};
let saved = update_checkpoint(&mut state, false, 123);
assert!(!saved);
assert_eq!(state.last_checked_at, Some(10));
}
#[test]
fn update_checkpoint_sets_when_safe() {
let mut state = BridgeState {
last_message_id: Some(5),
last_checked_at: None,
};
let saved = update_checkpoint(&mut state, true, 123);
assert!(saved);
assert_eq!(state.last_checked_at, Some(123));
let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, Some(42));
assert_eq!(state.last_rx_time, Some(1_710_000_000));
assert!(state.last_rx_time_ids.is_empty());
}
#[test]
fn fetch_params_respects_missing_last_message_id() {
let state = BridgeState {
last_message_id: None,
last_checked_at: Some(123),
last_rx_time: Some(123),
last_rx_time_ids: vec![],
last_checked_at: None,
};
let params = build_fetch_params(&state);
@@ -383,7 +561,9 @@ mod tests {
fn fetch_params_uses_since_when_safe() {
let state = BridgeState {
last_message_id: Some(1),
last_checked_at: Some(123),
last_rx_time: Some(123),
last_rx_time_ids: vec![],
last_checked_at: None,
};
let params = build_fetch_params(&state);
@@ -395,6 +575,8 @@ mod tests {
fn fetch_params_defaults_to_small_window() {
let state = BridgeState {
last_message_id: Some(1),
last_rx_time: None,
last_rx_time_ids: vec![],
last_checked_at: None,
};
@@ -403,8 +585,59 @@ mod tests {
assert_eq!(params.since, None);
}
#[test]
fn log_state_update_emits_info() {
let state = BridgeState::default();
log_state_update(&state);
}
#[test]
fn persist_state_writes_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("state.json");
let path_str = file_path.to_str().unwrap();
let state = BridgeState {
last_message_id: Some(42),
last_rx_time: Some(123),
last_rx_time_ids: vec![42],
last_checked_at: None,
};
persist_state(&state, path_str);
let loaded = BridgeState::load(path_str).unwrap();
assert_eq!(loaded.last_message_id, Some(42));
}
#[test]
fn persist_state_logs_on_error() {
let tmp_dir = tempfile::tempdir().unwrap();
let dir_path = tmp_dir.path().to_str().unwrap();
let state = BridgeState::default();
// Writing to a directory path should trigger the error branch.
persist_state(&state, dir_path);
}
#[tokio::test]
async fn poll_once_persists_checkpoint_without_messages() {
async fn spawn_synapse_listener_starts_task() {
let addr = SocketAddr::from(([127, 0, 0, 1], 0));
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
tokio::time::sleep(Duration::from_millis(10)).await;
handle.abort();
}
#[tokio::test]
async fn spawn_synapse_listener_logs_error_on_bind_failure() {
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
let _ = handle.await;
}
#[tokio::test]
async fn poll_once_leaves_state_unchanged_without_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();
@@ -426,6 +659,7 @@ mod tests {
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
@@ -435,18 +669,63 @@ mod tests {
let mut state = BridgeState {
last_message_id: Some(1),
last_rx_time: Some(100),
last_rx_time_ids: vec![1],
last_checked_at: None,
};
poll_once(&potato, &matrix, &mut state, state_str, 123).await;
poll_once(&potato, &matrix, &mut state, state_str).await;
mock_msgs.assert();
// Should have advanced checkpoint and saved it.
assert_eq!(state.last_checked_at, Some(123));
// No new data means state remains unchanged and is not persisted.
assert_eq!(state.last_rx_time, Some(100));
assert_eq!(state.last_rx_time_ids, vec![1]);
assert!(!state_path.exists());
}
#[tokio::test]
async fn poll_once_persists_state_for_non_text_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();
let mut server = mockito::Server::new_async().await;
let mock_msgs = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any)
.with_status(200)
.with_header("content-type", "application/json")
.with_body(
r#"[{"id":1,"rx_time":100,"rx_iso":"2025-11-27T00:00:00Z","from_id":"!abcd1234","to_id":"^all","channel":1,"portnum":"POSITION_APP","text":"","rssi":-100,"hop_limit":1,"lora_freq":868,"modem_preset":"MediumFast","channel_name":"TEST","snr":0.0,"node_id":"!abcd1234"}]"#,
)
.create();
let http_client = reqwest::Client::new();
let potatomesh_cfg = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 1,
};
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
let potato = PotatoClient::new(http_client.clone(), potatomesh_cfg);
let matrix = MatrixAppserviceClient::new(http_client, matrix_cfg);
let mut state = BridgeState::default();
poll_once(&potato, &matrix, &mut state, state_str).await;
mock_msgs.assert();
assert!(state_path.exists());
let loaded = BridgeState::load(state_str).unwrap();
assert_eq!(loaded.last_checked_at, Some(123));
assert_eq!(loaded.last_message_id, Some(1));
assert_eq!(loaded.last_rx_time, Some(100));
assert_eq!(loaded.last_rx_time_ids, vec![1]);
}
#[tokio::test]
@@ -460,6 +739,7 @@ mod tests {
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
@@ -467,6 +747,8 @@ mod tests {
let node_id = "abcd1234";
let user_id = format!("@potato_{}:{}", node_id, matrix_cfg.server_name);
let encoded_user = urlencoding::encode(&user_id);
let room_id = matrix_cfg.room_id.clone();
let encoded_room = urlencoding::encode(&room_id);
let mock_get_node = server
.mock("GET", "/api/nodes/abcd1234")
@@ -477,7 +759,18 @@ mod tests {
let mock_register = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user&access_token=AS_TOKEN")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
let mock_join = server
.mock(
"POST",
format!("/_matrix/client/v3/rooms/{}/join", encoded_room).as_str(),
)
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
@@ -486,14 +779,16 @@ mod tests {
"PUT",
format!("/_matrix/client/v3/profile/{}/displayname", encoded_user).as_str(),
)
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"displayname": "Test Node (TN)"
})))
.with_status(200)
.create();
let http_client = reqwest::Client::new();
let matrix_client = MatrixAppserviceClient::new(http_client.clone(), matrix_cfg);
let room_id = &matrix_client.cfg.room_id;
let encoded_room = urlencoding::encode(room_id);
let txn_id = matrix_client
.txn_counter
.load(std::sync::atomic::Ordering::SeqCst);
@@ -507,7 +802,14 @@ mod tests {
)
.as_str(),
)
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"body": "`[868][MF][TEST]` Ping",
"format": "org.matrix.custom.html",
"formatted_body": "<code>[868][MF][TEST]</code> Ping",
})))
.with_status(200)
.create();
@@ -520,6 +822,7 @@ mod tests {
assert!(result.is_ok());
mock_get_node.assert();
mock_register.assert();
mock_join.assert();
mock_display_name.assert();
mock_send.assert();


@@ -66,10 +66,6 @@ impl MatrixAppserviceClient {
format!("@{}:{}", localpart, self.cfg.server_name)
}
fn auth_query(&self) -> String {
format!("access_token={}", urlencoding::encode(&self.cfg.as_token))
}
/// Ensure the puppet user exists (register via appservice registration).
pub async fn ensure_user_registered(&self, localpart: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
@@ -80,9 +76,8 @@ impl MatrixAppserviceClient {
}
let url = format!(
"{}/_matrix/client/v3/register?kind=user&{}",
self.cfg.homeserver,
self.auth_query()
"{}/_matrix/client/v3/register?kind=user",
self.cfg.homeserver
);
let body = RegisterReq {
@@ -90,7 +85,13 @@ impl MatrixAppserviceClient {
username: localpart,
};
let resp = self.http.post(&url).json(&body).send().await?;
let resp = self
.http
.post(&url)
.bearer_auth(&self.cfg.as_token)
.json(&body)
.send()
.await?;
if resp.status().is_success() {
Ok(())
} else {
@@ -109,18 +110,21 @@ impl MatrixAppserviceClient {
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}&{}",
self.cfg.homeserver,
encoded_user,
encoded_user,
self.auth_query()
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}",
self.cfg.homeserver, encoded_user, encoded_user
);
let body = DisplayNameReq {
displayname: display_name,
};
let resp = self.http.put(&url).json(&body).send().await?;
let resp = self
.http
.put(&url)
.bearer_auth(&self.cfg.as_token)
.json(&body)
.send()
.await?;
if resp.status().is_success() {
Ok(())
} else {
@@ -134,12 +138,53 @@ impl MatrixAppserviceClient {
}
}
/// Send a plain text message into the configured room as puppet user_id.
pub async fn send_text_message_as(&self, user_id: &str, body_text: &str) -> anyhow::Result<()> {
/// Ensure the puppet user is joined to the configured room.
pub async fn ensure_user_joined_room(&self, user_id: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
struct JoinReq {}
let encoded_room = urlencoding::encode(&self.cfg.room_id);
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/rooms/{}/join?user_id={}",
self.cfg.homeserver, encoded_room, encoded_user
);
let resp = self
.http
.post(&url)
.bearer_auth(&self.cfg.as_token)
.json(&JoinReq {})
.send()
.await?;
if resp.status().is_success() {
Ok(())
} else {
let status = resp.status();
let body_snip = resp.text().await.unwrap_or_default();
Err(anyhow::anyhow!(
"Matrix join failed for {} in {} with status {} ({})",
user_id,
self.cfg.room_id,
status,
body_snip
))
}
}
/// Send a text message with HTML formatting into the configured room as puppet user_id.
pub async fn send_formatted_message_as(
&self,
user_id: &str,
body_text: &str,
formatted_body: &str,
) -> anyhow::Result<()> {
#[derive(Serialize)]
struct MsgContent<'a> {
msgtype: &'a str,
body: &'a str,
format: &'a str,
formatted_body: &'a str,
}
let txn_id = self.txn_counter.fetch_add(1, Ordering::SeqCst);
@@ -147,35 +192,36 @@ impl MatrixAppserviceClient {
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}&{}",
self.cfg.homeserver,
encoded_room,
txn_id,
encoded_user,
self.auth_query()
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}",
self.cfg.homeserver, encoded_room, txn_id, encoded_user
);
let content = MsgContent {
msgtype: "m.text",
body: body_text,
format: "org.matrix.custom.html",
formatted_body,
};
let resp = self.http.put(&url).json(&content).send().await?;
let resp = self
.http
.put(&url)
.bearer_auth(&self.cfg.as_token)
.json(&content)
.send()
.await?;
if !resp.status().is_success() {
let status = resp.status();
// optional: pull a short body snippet for debugging
let body_snip = resp.text().await.unwrap_or_default();
// Log for observability
tracing::warn!(
"Failed to send message as {}: status {}, body: {}",
"Failed to send formatted message as {}: status {}, body: {}",
user_id,
status,
body_snip
);
// Propagate an error so callers know this message was NOT delivered
return Err(anyhow::anyhow!(
"Matrix send failed for {} with status {}",
user_id,
@@ -195,6 +241,7 @@ mod tests {
MatrixConfig {
homeserver: "https://matrix.example.org".to_string(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
}
@@ -255,16 +302,6 @@ mod tests {
assert!(result.is_err());
}
#[test]
fn auth_query_contains_access_token() {
let http = reqwest::Client::builder().build().unwrap();
let client = MatrixAppserviceClient::new(http, dummy_cfg());
let q = client.auth_query();
assert!(q.starts_with("access_token="));
assert!(q.contains("AS_TOKEN"));
}
#[test]
fn test_new_matrix_client() {
let http_client = reqwest::Client::new();
@@ -280,7 +317,8 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user&access_token=AS_TOKEN")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
@@ -298,7 +336,8 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user&access_token=AS_TOKEN")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(400) // M_USER_IN_USE
.create();
@@ -316,12 +355,13 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
@@ -339,12 +379,13 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(500)
.create();
@@ -358,40 +399,61 @@ mod tests {
}
#[tokio::test]
async fn test_send_text_message_as_success() {
async fn test_ensure_user_joined_room_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let client = {
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
};
let txn_id = client.txn_counter.load(Ordering::SeqCst);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);
let mock = server
.mock("PUT", path.as_str())
.mock("POST", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();
let result = client.send_text_message_as(user_id, "hello").await;
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn test_send_text_message_as_fail() {
async fn test_ensure_user_joined_room_fail() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);
let mock = server
.mock("POST", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(403)
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;
mock.assert();
assert!(result.is_err());
}
#[tokio::test]
async fn test_send_formatted_message_as_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
@@ -405,7 +467,7 @@ mod tests {
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
};
let txn_id = client.txn_counter.load(Ordering::SeqCst);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let query = format!("user_id={}", encoded_user);
let path = format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
@@ -414,12 +476,21 @@ mod tests {
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.with_status(500)
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"body": "`[meta]` hello",
"format": "org.matrix.custom.html",
"formatted_body": "<code>[meta]</code> hello",
})))
.with_status(200)
.create();
let result = client.send_text_message_as(user_id, "hello").await;
let result = client
.send_formatted_message_as(user_id, "`[meta]` hello", "<code>[meta]</code> hello")
.await;
mock.assert();
assert!(result.is_err());
assert!(result.is_ok());
}
}
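The diff above moves the appservice token out of the URL (`access_token=` query parameter) into an `Authorization: Bearer` header, while the puppet identity stays in the `user_id` query parameter. A minimal standard-library sketch of the resulting URL shape — the `send_url` helper and the tiny percent-encoder are hypothetical stand-ins (the real code uses `format!` inline plus the `urlencoding` crate):

```rust
// Hypothetical helper mirroring the URL shape this diff converges on:
// the puppet identity travels as a `user_id` query parameter, while the
// AS token now goes into an Authorization header (not shown here).
fn send_url(homeserver: &str, room_id: &str, txn_id: u64, user_id: &str) -> String {
    format!(
        "{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}",
        homeserver,
        urlencode(room_id),
        txn_id,
        urlencode(user_id)
    )
}

// Tiny percent-encoder standing in for `urlencoding::encode` so the
// sketch compiles with the standard library alone; it keeps the same
// unreserved set (ALPHA / DIGIT / "-" / "_" / "." / "~").
fn urlencode(s: &str) -> String {
    s.bytes()
        .map(|b| match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                (b as char).to_string()
            }
            _ => format!("%{:02X}", b),
        })
        .collect()
}

fn main() {
    // Room and user IDs need encoding: `!` -> %21, `:` -> %3A, `@` -> %40.
    println!(
        "{}",
        send_url("https://hs", "!roomid:example.org", 0, "@test:example.org")
    );
}
```

This is why the mockito expectations above match `user_id={}` as the only query parameter and assert the token separately via `match_header("authorization", "Bearer AS_TOKEN")`.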

289
matrix/src/matrix_server.rs Normal file

@@ -0,0 +1,289 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use axum::{
extract::{Path, Query, State},
http::{header::AUTHORIZATION, HeaderMap, StatusCode},
response::IntoResponse,
routing::put,
Json, Router,
};
use serde_json::Value;
use std::net::SocketAddr;
use tracing::info;
#[derive(Clone)]
struct SynapseState {
hs_token: String,
}
#[derive(serde::Deserialize)]
struct AuthQuery {
access_token: Option<String>,
}
/// Pull access tokens from supported auth headers.
fn extract_access_token(headers: &HeaderMap) -> Option<String> {
if let Some(value) = headers.get(AUTHORIZATION) {
if let Ok(raw) = value.to_str() {
if let Some(token) = raw.strip_prefix("Bearer ") {
return Some(token.trim().to_string());
}
if let Some(token) = raw.strip_prefix("bearer ") {
return Some(token.trim().to_string());
}
}
}
if let Some(value) = headers.get("x-access-token") {
if let Ok(raw) = value.to_str() {
return Some(raw.trim().to_string());
}
}
None
}
/// Compare tokens in constant time to avoid timing leakage.
fn constant_time_eq(a: &str, b: &str) -> bool {
let a_bytes = a.as_bytes();
let b_bytes = b.as_bytes();
let max_len = std::cmp::max(a_bytes.len(), b_bytes.len());
let mut diff = (a_bytes.len() ^ b_bytes.len()) as u8;
for idx in 0..max_len {
let left = *a_bytes.get(idx).unwrap_or(&0);
let right = *b_bytes.get(idx).unwrap_or(&0);
diff |= left ^ right;
}
diff == 0
}
/// Captures inbound Synapse transaction payloads for logging.
#[derive(Debug)]
struct SynapseResponse {
txn_id: String,
payload: Value,
}
/// Build the router that handles Synapse appservice transactions.
fn build_router(state: SynapseState) -> Router {
Router::new()
.route(
"/_matrix/appservice/v1/transactions/:txn_id",
put(handle_transaction),
)
.with_state(state)
}
/// Handle inbound transaction callbacks from Synapse.
async fn handle_transaction(
Path(txn_id): Path<String>,
State(state): State<SynapseState>,
Query(auth): Query<AuthQuery>,
headers: HeaderMap,
Json(payload): Json<Value>,
) -> impl IntoResponse {
let header_token = extract_access_token(&headers);
let token_matches = if let Some(token) = header_token.as_deref() {
constant_time_eq(token, &state.hs_token)
} else {
auth.access_token
.as_deref()
.is_some_and(|token| constant_time_eq(token, &state.hs_token))
};
if !token_matches {
return (StatusCode::UNAUTHORIZED, Json(serde_json::json!({})));
}
let response = SynapseResponse { txn_id, payload };
info!(
"Status response: SynapseResponse {{ txn_id: {}, payload: {:?} }}",
response.txn_id, response.payload
);
(StatusCode::OK, Json(serde_json::json!({})))
}
/// Listen for Synapse callbacks on the configured address.
pub async fn run_synapse_listener(addr: SocketAddr, hs_token: String) -> anyhow::Result<()> {
let app = build_router(SynapseState { hs_token });
let listener = tokio::net::TcpListener::bind(addr).await?;
info!("Synapse listener bound on {}", addr);
axum::serve(listener, app).await?;
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use axum::body::Body;
use axum::http::Request;
use tokio::time::{sleep, Duration};
use tower::ServiceExt;
#[tokio::test]
async fn transactions_endpoint_accepts_payloads() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "123"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/123")
.header("authorization", "Bearer HS_TOKEN")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::OK);
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
.await
.unwrap();
assert_eq!(body.as_ref(), b"{}");
}
#[tokio::test]
async fn transactions_endpoint_rejects_missing_token() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "123"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/123")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
.await
.unwrap();
assert_eq!(body.as_ref(), b"{}");
}
#[tokio::test]
async fn transactions_endpoint_rejects_wrong_token() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "123"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/123")
.header("authorization", "Bearer NOPE")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
.await
.unwrap();
assert_eq!(body.as_ref(), b"{}");
}
#[tokio::test]
async fn transactions_endpoint_accepts_legacy_query_token() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "125"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/125?access_token=HS_TOKEN")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::OK);
}
#[tokio::test]
async fn transactions_endpoint_accepts_x_access_token_header() {
let app = build_router(SynapseState {
hs_token: "HS_TOKEN".to_string(),
});
let payload = serde_json::json!({
"events": [],
"txn_id": "126"
});
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/_matrix/appservice/v1/transactions/126")
.header("x-access-token", "HS_TOKEN")
.header("content-type", "application/json")
.body(Body::from(payload.to_string()))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::OK);
}
#[tokio::test]
async fn run_synapse_listener_starts_and_can_abort() {
let addr = SocketAddr::from(([127, 0, 0, 1], 0));
let handle =
tokio::spawn(async move { run_synapse_listener(addr, "HS_TOKEN".to_string()).await });
sleep(Duration::from_millis(10)).await;
handle.abort();
}
#[tokio::test]
async fn run_synapse_listener_returns_error_on_bind_failure() {
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let result = run_synapse_listener(addr, "HS_TOKEN".to_string()).await;
assert!(result.is_err());
}
}
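The `constant_time_eq` helper in this new file is the interesting part of the token check: it always scans `max(len_a, len_b)` bytes and folds every difference (including a length mismatch) into a single accumulator, so the running time does not reveal where the first wrong byte sits. A self-contained sketch of the same logic, runnable outside the axum handler:

```rust
// Sketch of the constant-time comparison used by handle_transaction above.
// Length difference and byte differences are OR-ed into one accumulator;
// the loop never exits early on a mismatch.
fn constant_time_eq(a: &str, b: &str) -> bool {
    let (a, b) = (a.as_bytes(), b.as_bytes());
    let max_len = a.len().max(b.len());
    let mut diff = (a.len() ^ b.len()) as u8;
    for i in 0..max_len {
        // Out-of-range indices read as 0 so both inputs are walked fully.
        diff |= a.get(i).unwrap_or(&0) ^ b.get(i).unwrap_or(&0);
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq("HS_TOKEN", "HS_TOKEN"));
    assert!(!constant_time_eq("HS_TOKEN", "NOPE"));
    assert!(!constant_time_eq("HS_TOKEN", "HS_TOKEN_LONGER"));
    println!("ok");
}
```

Note that a plain `a == b` on `&str` short-circuits at the first differing byte, which is exactly the timing side channel this avoids.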

BIN
scrot-0.6.png Normal file

Binary file not shown.


71
tests/cypher.rb Normal file

@@ -0,0 +1,71 @@
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require "base64"
require "meshtastic"
require "openssl"
channel_name = "BerlinMesh"
# === Inputs from your packet ===
cipher_b64 = "Q1R7tgI5yXzMXu/3"
psk_b64 = "Nmh7EooP2Tsc+7pvPwXLcEDDuYhk+fBo2GLnbA1Y1sg="
packet_id = 3_915_687_257
from_id = "!9e95cf60"
channel = 35
# === Decode key and ciphertext ===
key = Base64.decode64(psk_b64) # 32 bytes -> AES-256
ciphertext = Base64.decode64(cipher_b64)
# === Derive numeric node id from Meshtastic-style string ===
hex_str = from_id.sub(/^!/, "") # "9e95cf60"
from_node = hex_str.to_i(16) # 0x9e95cf60
# === Build nonce exactly like Meshtastic CryptoEngine ===
# Little-endian 64-bit packet ID + little-endian 32-bit node ID + 4 zero bytes
nonce = [packet_id].pack("Q<") # uint64, little-endian
nonce += [from_node].pack("L<") # uint32, little-endian
nonce += "\x00" * 4 # extraNonce == 0 for PSK channel msgs
raise "Nonce must be 16 bytes" unless nonce.bytesize == 16
raise "Key must be 32 bytes" unless key.bytesize == 32
# === AES-256-CTR decrypt ===
cipher = OpenSSL::Cipher.new("aes-256-ctr")
cipher.decrypt
cipher.key = key
cipher.iv = nonce
plaintext = cipher.update(ciphertext) + cipher.final
# At this point `plaintext` is the raw Meshtastic protobuf payload
plaintext = plaintext.bytes.pack("C*")
data = Meshtastic::Data.decode(plaintext)
msg = data.payload.dup.force_encoding("UTF-8")
puts msg
# Gets channel number from name and psk
def channel_hash(name, psk_b64)
name_bytes = name.b # UTF-8 bytes
psk_bytes = Base64.decode64(psk_b64)
hn = name_bytes.bytes.reduce(0) { |acc, b| acc ^ b } # XOR over name
hp = psk_bytes.bytes.reduce(0) { |acc, b| acc ^ b } # XOR over PSK
(hn ^ hp) & 0xFF
end
channel_h = channel_hash(channel_name, psk_b64)
puts channel_h
puts channel == channel_h

491
tests/rainbow.csv Normal file

@@ -0,0 +1,491 @@
hash,name
0,Mesh1
1,DEMO
1,Downlink1
1,NightNet
1,Sideband1
2,CommsNet
2,Mesh3
2,PulseNet
3,LightNet
3,Mesh2
3,WestStar
3,WolfMesh
4,Mesh5
4,OPERATIONS
4,Rescue1
4,SignalFire
5,Base2
5,DeltaNet
5,Mesh4
5,MeshMunich
6,Base1
7,MeshTest
7,Rescue2
7,ZuluMesh
8,CourierNet
8,Fire2
8,Grid2
8,LongFast
8,RescueTeam
9,AlphaNet
9,MeshGrid
10,TestBerlin
10,WaWi
11,Fire1
11,Grid1
12,FoxNet
12,MeshRuhr
12,RadioNet
13,Signal1
13,Zone1
14,BetaBerlin
14,Signal2
14,TangoNet
14,Zone2
15,BerlinMesh
15,LongSlow
15,MeshBerlin
15,Zone3
16,CQ
16,EchoMesh
16,Freq2
16,KiloMesh
16,Node2
16,PhoenixNet
16,Repeater2
17,FoxtrotNet
17,Node3
18,LoRa
19,Freq1
19,HarmonyNet
19,Node1
19,RavenNet
19,Repeater1
20,NomadNet
20,SENSOR
20,TEST
20,test
21,BravoNet
21,EastStar
21,MeshCollective
21,SunNet
22,Node4
22,Uplink1
23,EagleNet
23,MeshHessen
23,Node5
24,MediumSlow
24,Router1
25,Checkpoint1
25,HAMNet
26,Checkpoint2
26,GhostNet
27,HQ
27,Router2
31,DemoBerlin
31,FieldNet
31,MediumFast
32,Clinic
32,Convoy
32,Daylight
32,Town
33,Callisto
33,CQ1
33,Daybreak
33,Demo
33,East
33,LoRaMesh
33,Mist
34,CQ2
34,Freq
34,Gold
34,Link
34,Repeater
35,Aquila
35,Doctor
35,Echo
35,Kilo
35,Public
35,Wyvern
36,District
36,Hessen
36,Io
36,LoRaTest
36,Operations
36,Shadow
36,Unit
37,Campfire
37,City
37,Outsider
37,Sync
38,Beacon
38,Collective
38,Harbor
38,Lion
38,Meteor
39,Firebird
39,Fireteam
39,Quasar
39,Snow
39,Universe
39,Uplink
40,Checkpoint
40,Galaxy
40,Jaguar
40,Sunset
40,Zeta
41,Hinterland
41,HQ2
41,Main
41,Meshtastic
41,Router
41,Valley
41,Wander
41,Wolfpack
42,HQ1
42,Lizard
42,Packet
42,Sahara
42,Tunnel
43,Anaconda
43,Basalt
43,Blackout
43,Crow
43,Dusk
43,Falcon
43,Lima
43,Müggelberg
44,Arctic
44,Backup
44,Bronze
44,Corvus
44,Cosmos
44,LoRaBerlin
44,Neukölln
44,Safari
45,Breeze
45,Burrow
45,Gale
45,Saturn
46,Border
46,Nest
47,Borealis
47,Mars
47,Path
47,Ranger
48,Beat
48,Berg
48,Beta
48,Downlink
48,Hive
48,Rhythm
48,Saxony
48,Sideband
48,Wolf
49,Asteroid
49,Carbon
49,Mesh
50,Blizzard
50,Runner
51,Callsign
51,Carpet
51,Desert
51,Dragon
51,Friedrichshain
51,Help
51,Nebula
51,Safe
52,Amazon
52,Fireline
52,Haze
52,LoRaHessen
52,Platinum
52,Sensor
52,Test
52,Zulu
53,Nord
53,Rescue
53,Secure
53,Silver
54,Bear
54,Hospital
54,Munich
54,Python
54,Rain
54,Wind
54,Wolves
55,Base
55,Bolt
55,Hawk
55,Mirage
55,Nightwatch
55,Obsidian
55,Rock
55,Victor
55,West
56,Aurora
56,Dune
56,Iron
56,Lava
56,Nomads
57,Copper
57,Core
57,Spectrum
57,Summit
58,Colony
58,Fire
58,Ganymede
58,Grid
58,Kraken
58,Road
58,Solstice
58,Tundra
59,911
59,Forest
59,Pack
60,Berlin
60,Chat
60,Sierra
60,Signal
60,Wald
60,Zone
61,Alpine
61,Bridge
61,Camp
61,Dortmund
61,Frontier
61,Jungle
61,Peak
62,Burner
62,Dawn
62,Europa
62,Midnight
62,Nightshift
62,Prenzlauer
62,Safety
62,Sector
62,Wanderer
63,Distress
63,Kiez
63,Ruhr
63,Team
64,Epsilon
64,Field
64,Granite
64,Orbit
64,Trail
64,Whisper
65,Central
65,Cologne
65,Layer
65,Relay
65,Runners
65,Stone
65,Tempo
66,Polar
66,Woods
67,Highway
67,Kreuzberg
67,Leopard
67,Metro
67,Omega
67,Phantom
68,Hamburg
68,Hydra
68,Medic
68,Titan
69,Command
69,Control
69,Gamma
69,Ghost
69,Mercury
69,Oasis
70,Diamond
70,Ham
70,HAM
70,Leipzig
70,Paramedic
70,Savanna
71,Frankfurt
71,Gecko
71,Jupiter
71,Sensors
71,SENSORS
71,Sunrise
72,Chameleon
72,Eagle
72,Hilltop
72,Teufelsberg
73,Firefly
73,Steel
74,Bravo
74,Caravan
74,Ost
74,Süd
75,Emergency
75,EMERGENCY
75,Nomad
75,Watch
76,Alert
76,Bavaria
76,Fog
76,Harmony
76,Raven
77,Admin
77,ADMIN
77,Den
77,Ice
77,LoRaNet
77,North
77,SOS
77,Sos
77,Wanderers
78,Foxtrot
78,Med
78,Ops
79,Flock
79,Phoenix
79,PRIVATE
79,Private
79,Signals
79,Tiger
80,Commune
80,Freedom
80,Pluto
80,Snake
80,Squad
80,Stuttgart
81,Grassland
81,Tango
81,Union
82,Comet
82,Flash
82,Lightning
83,Cloud
83,Equinox
83,Firewatch
83,Fox
83,Radio
83,Shelter
84,Cheetah
84,General
84,Outpost
84,Volcano
85,Glacier
85,Storm
86,Alpha
86,Owl
86,Panther
86,Prairie
86,Thunder
87,Courier
87,Nexus
87,South
88,Ash
88,River
88,Syndicate
89,Amateur
89,Astro
89,Avalanche
89,Bonfire
89,Draco
89,Griffin
89,Nightfall
89,Shade
89,Venus
90,Charlie
90,Delta
90,Stratum
90,Viper
91,Bison
91,Tal
92,Network
92,Scout
93,Comms
93,Fluss
93,Group
93,Hub
93,Pulse
93,Smoke
94,Frost
94,Rover
94,Village
95,Cobra
95,Liberty
95,Ridge
97,DarkNet
97,NightshiftNet
97,Radio2
97,Shelter2
98,CampNet
98,Radio1
98,Shelter1
98,TangoMesh
99,BaseAlpha
99,BerlinNet
99,SouthStar
100,CourierMesh
100,Storm1
101,Courier2
101,GridNet
101,OpsCenter
102,Courier1
103,Storm2
104,HawkNet
105,BearNet
105,StarNet
107,emergency
107,ZuluNet
108,Comms1
108,DragonNet
108,Hub1
109,admin
109,NightMesh
110,MeshNet
111,BaseCharlie
111,Comms2
111,GridSouth
111,Hub2
111,MeshNetwork
111,WolfNet
112,Layer1
112,Relay1
112,ShortFast
113,OpsRoom
114,Layer3
114,MeshCologne
115,Layer2
115,Relay2
115,SOSBerlin
116,Command1
116,Control1
116,CrowNet
116,MeshFrankfurt
117,EmergencyBerlin
117,GridNorth
117,MeshLeipzig
117,PacketNet
119,Command2
119,Control2
119,MeshHamburg
120,NomadMesh
121,NorthStar
121,Watch2
122,CommandRoom
122,ControlRoom
122,SyncNet
122,Watch1
123,PacketRadio
123,ShadowNet
124,EchoNet
124,KiloNet
124,Med2
124,Ops2
125,FoxtrotMesh
125,RepeaterHub
126,MoonNet
127,BaseBravo
127,Med1
127,Ops1
127,WolfDen

736
tests/rainbow.json Normal file

@@ -0,0 +1,736 @@
{
"59": [
"911",
"Forest",
"Pack"
],
"77": [
"Admin",
"ADMIN",
"Den",
"Ice",
"LoRaNet",
"North",
"SOS",
"Sos",
"Wanderers"
],
"109": [
"admin",
"NightMesh"
],
"76": [
"Alert",
"Bavaria",
"Fog",
"Harmony",
"Raven"
],
"86": [
"Alpha",
"Owl",
"Panther",
"Prairie",
"Thunder"
],
"9": [
"AlphaNet",
"MeshGrid"
],
"61": [
"Alpine",
"Bridge",
"Camp",
"Dortmund",
"Frontier",
"Jungle",
"Peak"
],
"89": [
"Amateur",
"Astro",
"Avalanche",
"Bonfire",
"Draco",
"Griffin",
"Nightfall",
"Shade",
"Venus"
],
"52": [
"Amazon",
"Fireline",
"Haze",
"LoRaHessen",
"Platinum",
"Sensor",
"Test",
"Zulu"
],
"43": [
"Anaconda",
"Basalt",
"Blackout",
"Crow",
"Dusk",
"Falcon",
"Lima",
"Müggelberg"
],
"35": [
"Aquila",
"Doctor",
"Echo",
"Kilo",
"Public",
"Wyvern"
],
"44": [
"Arctic",
"Backup",
"Bronze",
"Corvus",
"Cosmos",
"LoRaBerlin",
"Neukölln",
"Safari"
],
"88": [
"Ash",
"River",
"Syndicate"
],
"49": [
"Asteroid",
"Carbon",
"Mesh"
],
"56": [
"Aurora",
"Dune",
"Iron",
"Lava",
"Nomads"
],
"55": [
"Base",
"Bolt",
"Hawk",
"Mirage",
"Nightwatch",
"Obsidian",
"Rock",
"Victor",
"West"
],
"6": [
"Base1"
],
"5": [
"Base2",
"DeltaNet",
"Mesh4",
"MeshMunich"
],
"99": [
"BaseAlpha",
"BerlinNet",
"SouthStar"
],
"127": [
"BaseBravo",
"Med1",
"Ops1",
"WolfDen"
],
"111": [
"BaseCharlie",
"Comms2",
"GridSouth",
"Hub2",
"MeshNetwork",
"WolfNet"
],
"38": [
"Beacon",
"Collective",
"Harbor",
"Lion",
"Meteor"
],
"54": [
"Bear",
"Hospital",
"Munich",
"Python",
"Rain",
"Wind",
"Wolves"
],
"105": [
"BearNet",
"StarNet"
],
"48": [
"Beat",
"Berg",
"Beta",
"Downlink",
"Hive",
"Rhythm",
"Saxony",
"Sideband",
"Wolf"
],
"60": [
"Berlin",
"Chat",
"Sierra",
"Signal",
"Wald",
"Zone"
],
"15": [
"BerlinMesh",
"LongSlow",
"MeshBerlin",
"Zone3"
],
"14": [
"BetaBerlin",
"Signal2",
"TangoNet",
"Zone2"
],
"91": [
"Bison",
"Tal"
],
"50": [
"Blizzard",
"Runner"
],
"46": [
"Border",
"Nest"
],
"47": [
"Borealis",
"Mars",
"Path",
"Ranger"
],
"74": [
"Bravo",
"Caravan",
"Ost",
"Süd"
],
"21": [
"BravoNet",
"EastStar",
"MeshCollective",
"SunNet"
],
"45": [
"Breeze",
"Burrow",
"Gale",
"Saturn"
],
"62": [
"Burner",
"Dawn",
"Europa",
"Midnight",
"Nightshift",
"Prenzlauer",
"Safety",
"Sector",
"Wanderer"
],
"33": [
"Callisto",
"CQ1",
"Daybreak",
"Demo",
"East",
"LoRaMesh",
"Mist"
],
"51": [
"Callsign",
"Carpet",
"Desert",
"Dragon",
"Friedrichshain",
"Help",
"Nebula",
"Safe"
],
"37": [
"Campfire",
"City",
"Outsider",
"Sync"
],
"98": [
"CampNet",
"Radio1",
"Shelter1",
"TangoMesh"
],
"65": [
"Central",
"Cologne",
"Layer",
"Relay",
"Runners",
"Stone",
"Tempo"
],
"72": [
"Chameleon",
"Eagle",
"Hilltop",
"Teufelsberg"
],
"90": [
"Charlie",
"Delta",
"Stratum",
"Viper"
],
"40": [
"Checkpoint",
"Galaxy",
"Jaguar",
"Sunset",
"Zeta"
],
"25": [
"Checkpoint1",
"HAMNet"
],
"26": [
"Checkpoint2",
"GhostNet"
],
"84": [
"Cheetah",
"General",
"Outpost",
"Volcano"
],
"32": [
"Clinic",
"Convoy",
"Daylight",
"Town"
],
"83": [
"Cloud",
"Equinox",
"Firewatch",
"Fox",
"Radio",
"Shelter"
],
"95": [
"Cobra",
"Liberty",
"Ridge"
],
"58": [
"Colony",
"Fire",
"Ganymede",
"Grid",
"Kraken",
"Road",
"Solstice",
"Tundra"
],
"82": [
"Comet",
"Flash",
"Lightning"
],
"69": [
"Command",
"Control",
"Gamma",
"Ghost",
"Mercury",
"Oasis"
],
"116": [
"Command1",
"Control1",
"CrowNet",
"MeshFrankfurt"
],
"119": [
"Command2",
"Control2",
"MeshHamburg"
],
"122": [
"CommandRoom",
"ControlRoom",
"SyncNet",
"Watch1"
],
"93": [
"Comms",
"Fluss",
"Group",
"Hub",
"Pulse",
"Smoke"
],
"108": [
"Comms1",
"DragonNet",
"Hub1"
],
"2": [
"CommsNet",
"Mesh3",
"PulseNet"
],
"80": [
"Commune",
"Freedom",
"Pluto",
"Snake",
"Squad",
"Stuttgart"
],
"57": [
"Copper",
"Core",
"Spectrum",
"Summit"
],
"87": [
"Courier",
"Nexus",
"South"
],
"102": [
"Courier1"
],
"101": [
"Courier2",
"GridNet",
"OpsCenter"
],
"100": [
"CourierMesh",
"Storm1"
],
"8": [
"CourierNet",
"Fire2",
"Grid2",
"LongFast",
"RescueTeam"
],
"16": [
"CQ",
"EchoMesh",
"Freq2",
"KiloMesh",
"Node2",
"PhoenixNet",
"Repeater2"
],
"34": [
"CQ2",
"Freq",
"Gold",
"Link",
"Repeater"
],
"97": [
"DarkNet",
"NightshiftNet",
"Radio2",
"Shelter2"
],
"1": [
"DEMO",
"Downlink1",
"NightNet",
"Sideband1"
],
"31": [
"DemoBerlin",
"FieldNet",
"MediumFast"
],
"70": [
"Diamond",
"Ham",
"HAM",
"Leipzig",
"Paramedic",
"Savanna"
],
"63": [
"Distress",
"Kiez",
"Ruhr",
"Team"
],
"36": [
"District",
"Hessen",
"Io",
"LoRaTest",
"Operations",
"Shadow",
"Unit"
],
"23": [
"EagleNet",
"MeshHessen",
"Node5"
],
"124": [
"EchoNet",
"KiloNet",
"Med2",
"Ops2"
],
"75": [
"Emergency",
"EMERGENCY",
"Nomad",
"Watch"
],
"107": [
"emergency",
"ZuluNet"
],
"117": [
"EmergencyBerlin",
"GridNorth",
"MeshLeipzig",
"PacketNet"
],
"64": [
"Epsilon",
"Field",
"Granite",
"Orbit",
"Trail",
"Whisper"
],
"11": [
"Fire1",
"Grid1"
],
"39": [
"Firebird",
"Fireteam",
"Quasar",
"Snow",
"Universe",
"Uplink"
],
"73": [
"Firefly",
"Steel"
],
"79": [
"Flock",
"Phoenix",
"PRIVATE",
"Private",
"Signals",
"Tiger"
],
"12": [
"FoxNet",
"MeshRuhr",
"RadioNet"
],
"78": [
"Foxtrot",
"Med",
"Ops"
],
"125": [
"FoxtrotMesh",
"RepeaterHub"
],
"17": [
"FoxtrotNet",
"Node3"
],
"71": [
"Frankfurt",
"Gecko",
"Jupiter",
"Sensors",
"SENSORS",
"Sunrise"
],
"19": [
"Freq1",
"HarmonyNet",
"Node1",
"RavenNet",
"Repeater1"
],
"94": [
"Frost",
"Rover",
"Village"
],
"85": [
"Glacier",
"Storm"
],
"81": [
"Grassland",
"Tango",
"Union"
],
"68": [
"Hamburg",
"Hydra",
"Medic",
"Titan"
],
"104": [
"HawkNet"
],
"67": [
"Highway",
"Kreuzberg",
"Leopard",
"Metro",
"Omega",
"Phantom"
],
"41": [
"Hinterland",
"HQ2",
"Main",
"Meshtastic",
"Router",
"Valley",
"Wander",
"Wolfpack"
],
"27": [
"HQ",
"Router2"
],
"42": [
"HQ1",
"Lizard",
"Packet",
"Sahara",
"Tunnel"
],
"112": [
"Layer1",
"Relay1",
"ShortFast"
],
"115": [
"Layer2",
"Relay2",
"SOSBerlin"
],
"114": [
"Layer3",
"MeshCologne"
],
"3": [
"LightNet",
"Mesh2",
"WestStar",
"WolfMesh"
],
"18": [
"LoRa"
],
"24": [
"MediumSlow",
"Router1"
],
"0": [
"Mesh1"
],
"4": [
"Mesh5",
"OPERATIONS",
"Rescue1",
"SignalFire"
],
"110": [
"MeshNet"
],
"7": [
"MeshTest",
"Rescue2",
"ZuluMesh"
],
"126": [
"MoonNet"
],
"92": [
"Network",
"Scout"
],
"22": [
"Node4",
"Uplink1"
],
"120": [
"NomadMesh"
],
"20": [
"NomadNet",
"SENSOR",
"TEST",
"test"
],
"53": [
"Nord",
"Rescue",
"Secure",
"Silver"
],
"121": [
"NorthStar",
"Watch2"
],
"113": [
"OpsRoom"
],
"123": [
"PacketRadio",
"ShadowNet"
],
"66": [
"Polar",
"Woods"
],
"13": [
"Signal1",
"Zone1"
],
"103": [
"Storm2"
],
"10": [
"TestBerlin",
"WaWi"
]
}

134
tests/rainbow.rb Normal file

@@ -0,0 +1,134 @@
#!/usr/bin/env ruby
# frozen_string_literal: true
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require "base64"
require "json"
require "csv"
# --- CONFIG --------------------------------------------------------
# The PSK to build the table for. Default: the public-mesh alias "AQ==" (0x01).
PSK_B64 = ENV.fetch("PSK_B64", "AQ==")
# Candidate channel names used as preimages for the rainbow-table indices.
CANDIDATE_NAMES = %w[
911 Admin ADMIN admin Alert Alpha AlphaNet Alpine Amateur Amazon Anaconda Aquila Arctic Ash Asteroid Astro Aurora Avalanche Backup Basalt Base Base1 Base2 BaseAlpha BaseBravo BaseCharlie Bavaria Beacon Bear BearNet Beat Berg Berlin BerlinMesh BerlinNet Beta BetaBerlin Bison Blackout Blizzard Bolt Bonfire Border Borealis Bravo BravoNet Breeze Bridge Bronze Burner Burrow Callisto Callsign Camp Campfire CampNet Caravan Carbon Carpet Central Chameleon Charlie Chat Checkpoint Checkpoint1 Checkpoint2 Cheetah City Clinic Cloud Cobra Collective Cologne Colony Comet Command Command1 Command2 CommandRoom Comms Comms1 Comms2 CommsNet Commune Control Control1 Control2 ControlRoom Convoy Copper Core Corvus Cosmos Courier Courier1 Courier2 CourierMesh CourierNet CQ CQ1 CQ2 Crow CrowNet DarkNet Dawn Daybreak Daylight Delta DeltaNet Demo DEMO DemoBerlin Den Desert Diamond Distress District Doctor Dortmund Downlink Downlink1 Draco Dragon DragonNet Dune Dusk Eagle EagleNet East EastStar Echo EchoMesh EchoNet Emergency emergency EMERGENCY EmergencyBerlin Epsilon Equinox Europa Falcon Field FieldNet Fire Fire1 Fire2 Firebird Firefly Fireline Fireteam Firewatch Flash Flock Fluss Fog Forest Fox FoxNet Foxtrot FoxtrotMesh FoxtrotNet Frankfurt Freedom Freq Freq1 Freq2 Friedrichshain Frontier Frost Galaxy Gale Gamma Ganymede Gecko General Ghost GhostNet Glacier Gold Granite Grassland Grid Grid1 Grid2 GridNet GridNorth GridSouth Griffin Group Ham HAM Hamburg HAMNet Harbor Harmony HarmonyNet Hawk HawkNet Haze Help Hessen Highway Hilltop Hinterland Hive Hospital HQ HQ1 HQ2 Hub Hub1 Hub2 Hydra Ice Io Iron Jaguar Jungle Jupiter Kiez Kilo KiloMesh KiloNet Kraken Kreuzberg Lava Layer Layer1 Layer2 Layer3 Leipzig Leopard Liberty LightNet Lightning Lima Link Lion Lizard LongFast LongSlow LoRa LoRaBerlin LoRaHessen LoRaMesh LoRaNet LoRaTest Main Mars Med Med1 Med2 Medic MediumFast MediumSlow Mercury Mesh Mesh1 Mesh2 Mesh3 Mesh4 Mesh5 MeshBerlin MeshCollective MeshCologne MeshFrankfurt MeshGrid 
MeshHamburg MeshHessen MeshLeipzig MeshMunich MeshNet MeshNetwork MeshRuhr Meshtastic MeshTest Meteor Metro Midnight Mirage Mist MoonNet Munich Müggelberg Nebula Nest Network Neukölln Nexus Nightfall NightMesh NightNet Nightshift NightshiftNet Nightwatch Node1 Node2 Node3 Node4 Node5 Nomad NomadMesh NomadNet Nomads Nord North NorthStar Oasis Obsidian Omega Operations OPERATIONS Ops Ops1 Ops2 OpsCenter OpsRoom Orbit Ost Outpost Outsider Owl Pack Packet PacketNet PacketRadio Panther Paramedic Path Peak Phantom Phoenix PhoenixNet Platinum Pluto Polar Prairie Prenzlauer PRIVATE Private Public Pulse PulseNet Python Quasar Radio Radio1 Radio2 RadioNet Rain Ranger Raven RavenNet Relay Relay1 Relay2 Repeater Repeater1 Repeater2 RepeaterHub Rescue Rescue1 Rescue2 RescueTeam Rhythm Ridge River Road Rock Router Router1 Router2 Rover Ruhr Runner Runners Safari Safe Safety Sahara Saturn Savanna Saxony Scout Sector Secure Sensor SENSOR Sensors SENSORS Shade Shadow ShadowNet Shelter Shelter1 Shelter2 ShortFast Sideband Sideband1 Sierra Signal Signal1 Signal2 SignalFire Signals Silver Smoke Snake Snow Solstice SOS Sos SOSBerlin South SouthStar Spectrum Squad StarNet Steel Stone Storm Storm1 Storm2 Stratum Stuttgart Summit SunNet Sunrise Sunset Sync SyncNet Syndicate Süd Tal Tango TangoMesh TangoNet Team Tempo Test TEST test TestBerlin Teufelsberg Thunder Tiger Titan Town Trail Tundra Tunnel Union Unit Universe Uplink Uplink1 Valley Venus Victor Village Viper Volcano Wald Wander Wanderer Wanderers Watch Watch1 Watch2 WaWi West WestStar Whisper Wind Wolf WolfDen WolfMesh WolfNet Wolfpack Wolves Woods Wyvern Zeta Zone Zone1 Zone2 Zone3 Zulu ZuluMesh ZuluNet
]
# Output filenames
CSV_OUT = ENV.fetch("CSV_OUT", "rainbow.csv")
JSON_OUT = ENV.fetch("JSON_OUT", "rainbow.json")
# --- HASH FUNCTION -------------------------------------------------
def xor_bytes(str_or_bytes)
bytes = str_or_bytes.is_a?(String) ? str_or_bytes.bytes : str_or_bytes
bytes.reduce(0) { |acc, b| (acc ^ b) & 0xFF }
end
def expanded_key(psk_b64)
raw = Base64.decode64(psk_b64 || "")
case raw.bytesize
when 0
# no encryption: length 0, xor = 0
"".b
when 1
alias_index = raw.bytes.first
alias_keys = {
1 => [
0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01,
].pack("C*"),
2 => [
0x38, 0x4B, 0xBC, 0xC0, 0x1D, 0xC0, 0x22, 0xD1,
0x81, 0xBF, 0x36, 0xB8, 0x61, 0x21, 0xE1, 0xFB,
0x96, 0xB7, 0x2E, 0x55, 0xBF, 0x74, 0x22, 0x7E,
0x9D, 0x6A, 0xFB, 0x48, 0xD6, 0x4C, 0xB1, 0xA1,
].pack("C*"),
}
alias_keys.fetch(alias_index) { raise "Unknown PSK alias #{alias_index}" }
when 2..15
# pad to 16 (AES128)
(raw.bytes + [0] * (16 - raw.bytesize)).pack("C*")
when 16
raw
when 17..31
# pad to 32 (AES256)
(raw.bytes + [0] * (32 - raw.bytesize)).pack("C*")
when 32
raw
else
raise "PSK too long (#{raw.bytesize} bytes)"
end
end
def channel_hash(name, psk_b64)
effective_name = name.b
key = expanded_key(psk_b64)
h_name = xor_bytes(effective_name)
h_key = xor_bytes(key)
(h_name ^ h_key) & 0xFF
end
# --- BUILD RAINBOW TABLE -------------------------------------------
psk_b64 = PSK_B64
puts "Using PSK_B64=#{psk_b64.inspect}"
hash_to_names = Hash.new { |h, k| h[k] = [] }
CANDIDATE_NAMES.each do |name|
h = channel_hash(name, psk_b64)
hash_to_names[h] << name
end
# --- WRITE CSV (hash,name) -----------------------------------------
CSV.open(CSV_OUT, "w") do |csv|
csv << %w[hash name]
hash_to_names.keys.sort.each do |h|
hash_to_names[h].each do |name|
csv << [h, name]
end
end
end
puts "Wrote CSV rainbow table to #{CSV_OUT}"
# --- WRITE JSON ({hash: [names...]}) -------------------------------
json_hash = hash_to_names.transform_keys(&:to_s)
File.write(JSON_OUT, JSON.pretty_generate(json_hash))
puts "Wrote JSON rainbow table to #{JSON_OUT}"
# --- OPTIONAL: interactive query -----------------------------------
if ARGV.first == "query"
target = Integer(ARGV[1] || raise("Usage: #{File.basename($0)} query <hash>"))
# Use fetch with a default so querying a missing hash does not insert an
# empty entry via the Hash's default block.
names = hash_to_names.fetch(target, [])
if names.empty?
puts "No names for hash #{target}"
else
puts "Names for hash #{target}:"
names.each { |n| puts " - #{n}" }
end
else
puts "Run again with: #{File.basename($0)} query <hash> # to inspect a specific hash"
end


@@ -0,0 +1,185 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import base64
import io
import json
import sys
import pytest
mesh_pb2 = pytest.importorskip("meshtastic.protobuf.mesh_pb2")
telemetry_pb2 = pytest.importorskip("meshtastic.protobuf.telemetry_pb2")
from data.mesh_ingestor import decode_payload
def run_main_with_input(payload: dict) -> tuple[int, dict]:
stdin = io.StringIO(json.dumps(payload))
stdout = io.StringIO()
original_stdin = sys.stdin
original_stdout = sys.stdout
try:
sys.stdin = stdin
sys.stdout = stdout
status = decode_payload.main()
finally:
sys.stdin = original_stdin
sys.stdout = original_stdout
output = json.loads(stdout.getvalue() or "{}")
return status, output
def test_decode_payload_position_success():
position = mesh_pb2.Position()
position.latitude_i = 525598720
position.longitude_i = 136577024
position.altitude = 11
position.precision_bits = 13
payload_b64 = base64.b64encode(position.SerializeToString()).decode("ascii")
result = decode_payload._decode_payload(3, payload_b64)
assert result["type"] == "POSITION_APP"
assert result["payload"]["latitude_i"] == 525598720
assert result["payload"]["longitude_i"] == 136577024
assert result["payload"]["altitude"] == 11
def test_decode_payload_rejects_invalid_payload():
result = decode_payload._decode_payload(3, "not-base64")
    assert result["error"].startswith("invalid-payload")
def test_decode_payload_rejects_unsupported_port():
result = decode_payload._decode_payload(
999, base64.b64encode(b"ok").decode("ascii")
)
assert result["error"] == "unsupported-port"
assert result["portnum"] == 999
def test_main_handles_invalid_json():
stdin = io.StringIO("nope")
stdout = io.StringIO()
original_stdin = sys.stdin
original_stdout = sys.stdout
try:
sys.stdin = stdin
sys.stdout = stdout
status = decode_payload.main()
finally:
sys.stdin = original_stdin
sys.stdout = original_stdout
result = json.loads(stdout.getvalue())
assert status == 1
assert result["error"].startswith("invalid-json")
def test_main_requires_portnum():
status, result = run_main_with_input(
{"payload_b64": base64.b64encode(b"ok").decode("ascii")}
)
assert status == 1
assert result["error"] == "missing-portnum"
def test_main_requires_integer_portnum():
status, result = run_main_with_input(
{"portnum": "3", "payload_b64": base64.b64encode(b"ok").decode("ascii")}
)
assert status == 1
assert result["error"] == "missing-portnum"
def test_main_requires_payload():
status, result = run_main_with_input({"portnum": 3})
assert status == 1
assert result["error"] == "missing-payload"
def test_main_requires_string_payload():
status, result = run_main_with_input({"portnum": 3, "payload_b64": 123})
assert status == 1
assert result["error"] == "missing-payload"
def test_main_success_position_payload():
position = mesh_pb2.Position()
position.latitude_i = 525598720
position.longitude_i = 136577024
payload_b64 = base64.b64encode(position.SerializeToString()).decode("ascii")
status, result = run_main_with_input({"portnum": 3, "payload_b64": payload_b64})
assert status == 0
assert result["type"] == "POSITION_APP"
assert result["payload"]["latitude_i"] == 525598720
def test_decode_payload_handles_parse_failure():
class BrokenMessage:
def ParseFromString(self, _payload):
raise ValueError("boom")
decode_payload.PORTNUM_MAP[99] = ("BROKEN", BrokenMessage)
payload_b64 = base64.b64encode(b"\x00").decode("ascii")
result = decode_payload._decode_payload(99, payload_b64)
assert result["error"].startswith("decode-failed")
assert result["type"] == "BROKEN"
decode_payload.PORTNUM_MAP.pop(99, None)
def test_main_entrypoint_executes():
import runpy
payload = {"portnum": 3, "payload_b64": base64.b64encode(b"").decode("ascii")}
stdin = io.StringIO(json.dumps(payload))
stdout = io.StringIO()
original_stdin = sys.stdin
original_stdout = sys.stdout
try:
sys.stdin = stdin
sys.stdout = stdout
try:
runpy.run_module("data.mesh_ingestor.decode_payload", run_name="__main__")
except SystemExit as exc:
assert exc.code == 0
finally:
sys.stdin = original_stdin
sys.stdout = original_stdout
def test_decode_payload_telemetry_success():
telemetry = telemetry_pb2.Telemetry()
telemetry.time = 123
payload_b64 = base64.b64encode(telemetry.SerializeToString()).decode("ascii")
result = decode_payload._decode_payload(67, payload_b64)
assert result["type"] == "TELEMETRY_APP"
assert result["payload"]["time"] == 123

167
tests/test_events_unit.py Normal file

@@ -0,0 +1,167 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.events`."""
from __future__ import annotations
import sys
from pathlib import Path
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
from data.mesh_ingestor.events import ( # noqa: E402 - path setup
IngestorHeartbeat,
MessageEvent,
NeighborEntry,
NeighborsSnapshot,
PositionEvent,
TelemetryEvent,
TraceEvent,
)
def test_message_event_requires_id_rx_time_rx_iso():
event: MessageEvent = {"id": 1, "rx_time": 1700000000, "rx_iso": "2023-11-14T00:00:00Z"}
assert event["id"] == 1
assert event["rx_time"] == 1700000000
assert event["rx_iso"] == "2023-11-14T00:00:00Z"
def test_message_event_accepts_optional_fields():
event: MessageEvent = {
"id": 2,
"rx_time": 1700000001,
"rx_iso": "2023-11-14T00:00:01Z",
"text": "hello",
"from_id": "!aabbccdd",
"snr": 4.5,
"rssi": -90,
}
assert event["text"] == "hello"
assert event["snr"] == 4.5
def test_position_event_required_fields():
event: PositionEvent = {"id": 10, "rx_time": 1700000002, "rx_iso": "2023-11-14T00:00:02Z"}
assert event["id"] == 10
def test_position_event_optional_fields():
event: PositionEvent = {
"id": 11,
"rx_time": 1700000003,
"rx_iso": "2023-11-14T00:00:03Z",
"latitude": 37.7749,
"longitude": -122.4194,
"altitude": 10.0,
"node_id": "!aabbccdd",
}
assert event["latitude"] == 37.7749
def test_telemetry_event_required_fields():
event: TelemetryEvent = {"id": 20, "rx_time": 1700000004, "rx_iso": "2023-11-14T00:00:04Z"}
assert event["id"] == 20
def test_telemetry_event_optional_fields():
event: TelemetryEvent = {
"id": 21,
"rx_time": 1700000005,
"rx_iso": "2023-11-14T00:00:05Z",
"channel": 0,
"payload_b64": "AAEC",
"snr": 3.0,
}
assert event["payload_b64"] == "AAEC"
def test_neighbor_entry_required_fields():
entry: NeighborEntry = {"rx_time": 1700000006, "rx_iso": "2023-11-14T00:00:06Z"}
assert entry["rx_time"] == 1700000006
def test_neighbor_entry_optional_fields():
entry: NeighborEntry = {
"rx_time": 1700000007,
"rx_iso": "2023-11-14T00:00:07Z",
"neighbor_id": "!11223344",
"snr": 6.0,
}
assert entry["neighbor_id"] == "!11223344"
def test_neighbors_snapshot_required_fields():
snap: NeighborsSnapshot = {
"node_id": "!aabbccdd",
"rx_time": 1700000008,
"rx_iso": "2023-11-14T00:00:08Z",
}
assert snap["node_id"] == "!aabbccdd"
def test_neighbors_snapshot_optional_fields():
snap: NeighborsSnapshot = {
"node_id": "!aabbccdd",
"rx_time": 1700000009,
"rx_iso": "2023-11-14T00:00:09Z",
"neighbors": [],
"node_broadcast_interval_secs": 900,
}
assert snap["node_broadcast_interval_secs"] == 900
def test_trace_event_required_fields():
event: TraceEvent = {
"hops": [1, 2, 3],
"rx_time": 1700000010,
"rx_iso": "2023-11-14T00:00:10Z",
}
assert event["hops"] == [1, 2, 3]
def test_trace_event_optional_fields():
event: TraceEvent = {
"hops": [4, 5],
"rx_time": 1700000011,
"rx_iso": "2023-11-14T00:00:11Z",
"elapsed_ms": 42,
"snr": 2.5,
}
assert event["elapsed_ms"] == 42
def test_ingestor_heartbeat_all_fields():
hb: IngestorHeartbeat = {
"node_id": "!aabbccdd",
"start_time": 1700000000,
"last_seen_time": 1700000012,
"version": "0.5.11",
"lora_freq": 906875,
"modem_preset": "LONG_FAST",
}
assert hb["version"] == "0.5.11"
assert hb["lora_freq"] == 906875
def test_ingestor_heartbeat_without_optional_fields():
hb: IngestorHeartbeat = {
"node_id": "!aabbccdd",
"start_time": 1700000000,
"last_seen_time": 1700000013,
"version": "0.5.11",
}
assert "lora_freq" not in hb


@@ -788,6 +788,7 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
mesh.register_host_node_id("!f00dbabe")
packet = {
"id": 123,
@@ -823,6 +824,7 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
assert payload["rssi"] == -70
assert payload["reply_id"] is None
assert payload["emoji"] is None
assert payload["ingestor"] == "!f00dbabe"
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
assert priority == mesh._MESSAGE_POST_PRIORITY
@@ -879,6 +881,7 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
mesh.register_host_node_id("!f00dbabe")
packet = {
"id": 200498337,
@@ -946,6 +949,7 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
)
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
assert payload["ingestor"] == "!f00dbabe"
assert payload["raw"]["time"] == 1_758_624_189
@@ -960,6 +964,7 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
mesh.register_host_node_id("!f00dbabe")
packet = {
"id": 2049886869,
@@ -1004,6 +1009,7 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
assert neighbors[2]["neighbor_num"] == 0x0BAD_C0DE
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
assert payload["ingestor"] == "!f00dbabe"
def test_store_packet_dict_handles_nodeinfo_packet(mesh_module, monkeypatch):
@@ -2282,6 +2288,7 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
mesh.config.LORA_FREQ = 868
mesh.config.MODEM_PRESET = "MediumFast"
mesh.register_host_node_id("!f00dbabe")
packet = {
"id": 1_256_091_342,
@@ -2334,6 +2341,7 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
assert payload["current"] == pytest.approx(0.0715)
assert payload["lora_freq"] == 868
assert payload["modem_preset"] == "MediumFast"
assert payload["ingestor"] == "!f00dbabe"
def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatch):
@@ -2477,6 +2485,7 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):
mesh.config.LORA_FREQ = 915
mesh.config.MODEM_PRESET = "LongFast"
mesh.register_host_node_id("!f00dbabe")
packet = {
"id": 2_934_054_466,
@@ -2518,6 +2527,7 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):
assert "elapsed_ms" in payload
assert payload["lora_freq"] == 915
assert payload["modem_preset"] == "LongFast"
assert payload["ingestor"] == "!f00dbabe"
def test_traceroute_hop_normalization_supports_mappings(mesh_module, monkeypatch):


@@ -0,0 +1,76 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.node_identity`."""
from __future__ import annotations
import sys
from pathlib import Path
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
from data.mesh_ingestor.node_identity import ( # noqa: E402 - path setup
canonical_node_id,
node_num_from_id,
)
def test_canonical_node_id_accepts_numeric():
assert canonical_node_id(1) == "!00000001"
assert canonical_node_id(0xABCDEF01) == "!abcdef01"
assert canonical_node_id(1.0) == "!00000001"
def test_canonical_node_id_accepts_string_forms():
assert canonical_node_id("!ABCDEF01") == "!abcdef01"
assert canonical_node_id("0xABCDEF01") == "!abcdef01"
assert canonical_node_id("abcdef01") == "!abcdef01"
assert canonical_node_id("123") == "!0000007b"
def test_canonical_node_id_passthrough_caret_destinations():
assert canonical_node_id("^all") == "^all"
def test_node_num_from_id_parses_canonical_and_hex():
assert node_num_from_id("!abcdef01") == 0xABCDEF01
assert node_num_from_id("abcdef01") == 0xABCDEF01
assert node_num_from_id("0xabcdef01") == 0xABCDEF01
assert node_num_from_id(123) == 123
def test_canonical_node_id_rejects_none_and_empty():
assert canonical_node_id(None) is None
assert canonical_node_id("") is None
assert canonical_node_id(" ") is None
def test_canonical_node_id_rejects_negative():
assert canonical_node_id(-1) is None
assert canonical_node_id(-0xABCDEF01) is None
def test_canonical_node_id_truncates_overflow():
# Values wider than 32 bits are masked, not rejected.
assert canonical_node_id(0x1_ABCDEF01) == "!abcdef01"
def test_node_num_from_id_rejects_none_and_empty():
assert node_num_from_id(None) is None
assert node_num_from_id("") is None
assert node_num_from_id("not-hex") is None

149
tests/test_provider_unit.py Normal file

@@ -0,0 +1,149 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.provider` integration seams."""
from __future__ import annotations
import sys
import types
from pathlib import Path
import pytest
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
from data.mesh_ingestor import daemon # noqa: E402 - path setup
from data.mesh_ingestor.provider import Provider # noqa: E402 - path setup
from data.mesh_ingestor.providers.meshtastic import ( # noqa: E402 - path setup
MeshtasticProvider,
)
def test_meshtastic_provider_satisfies_protocol():
"""MeshtasticProvider must structurally satisfy the Provider Protocol."""
required = {"name", "subscribe", "connect", "extract_host_node_id", "node_snapshot_items"}
missing = required - set(dir(MeshtasticProvider))
assert not missing, f"MeshtasticProvider is missing Protocol members: {missing}"
def test_daemon_main_uses_provider_connect(monkeypatch):
calls = {"connect": 0}
class FakeProvider(MeshtasticProvider):
def subscribe(self):
return []
def connect(self, *, active_candidate): # type: ignore[override]
calls["connect"] += 1
# Return a minimal iface and stop immediately via Event.
class Iface:
nodes = {}
def close(self):
return None
return Iface(), "serial0", active_candidate
def extract_host_node_id(self, iface): # type: ignore[override]
return "!host"
def node_snapshot_items(self, iface): # type: ignore[override]
return []
# Make the loop exit quickly.
class AutoStopEvent:
def __init__(self):
self._set = False
def set(self):
self._set = True
def is_set(self):
return self._set
def wait(self, _timeout=None):
self._set = True
return True
monkeypatch.setattr(daemon.config, "SNAPSHOT_SECS", 0)
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
monkeypatch.setattr(daemon.config, "_CLOSE_TIMEOUT_SECS", 0)
monkeypatch.setattr(daemon.config, "_INGESTOR_HEARTBEAT_SECS", 0)
monkeypatch.setattr(daemon.config, "ENERGY_SAVING", False)
monkeypatch.setattr(daemon.config, "_INACTIVITY_RECONNECT_SECS", 0)
monkeypatch.setattr(daemon.config, "CONNECTION", "serial0")
monkeypatch.setattr(
daemon,
"threading",
types.SimpleNamespace(
Event=AutoStopEvent,
current_thread=daemon.threading.current_thread,
main_thread=daemon.threading.main_thread,
),
)
monkeypatch.setattr(daemon.handlers, "register_host_node_id", lambda *_a, **_k: None)
monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: "!host")
monkeypatch.setattr(daemon.handlers, "upsert_node", lambda *_a, **_k: None)
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
monkeypatch.setattr(daemon.ingestors, "set_ingestor_node_id", lambda *_a, **_k: None)
monkeypatch.setattr(daemon.ingestors, "queue_ingestor_heartbeat", lambda *_a, **_k: True)
daemon.main(provider=FakeProvider())
assert calls["connect"] >= 1

def test_node_snapshot_items_retries_on_concurrent_mutation(monkeypatch):
    """node_snapshot_items must retry on dict-mutation RuntimeError, not raise."""
    from data.mesh_ingestor.providers.meshtastic import MeshtasticProvider

    attempt = {"n": 0}

    class MutatingNodes:
        def items(self):
            attempt["n"] += 1
            if attempt["n"] < 3:
                raise RuntimeError("dictionary changed size during iteration")
            return [("!aabbccdd", {"num": 1})]

    class FakeIface:
        nodes = MutatingNodes()

    monkeypatch.setattr("time.sleep", lambda _: None)
    result = MeshtasticProvider().node_snapshot_items(FakeIface())
    assert result == [("!aabbccdd", {"num": 1})]
    assert attempt["n"] == 3


def test_node_snapshot_items_returns_empty_after_retry_exhaustion(monkeypatch):
    """node_snapshot_items returns [] (non-fatal) when all retries fail."""
    from data.mesh_ingestor.providers.meshtastic import MeshtasticProvider
    import data.mesh_ingestor.providers.meshtastic as _mod

    class AlwaysMutating:
        def items(self):
            raise RuntimeError("dictionary changed size during iteration")

    class FakeIface:
        nodes = AlwaysMutating()

    monkeypatch.setattr("time.sleep", lambda _: None)
    monkeypatch.setattr(_mod.config, "_debug_log", lambda *_a, **_k: None)
    result = MeshtasticProvider().node_snapshot_items(FakeIface())
    assert result == []
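
The retry behaviour these tests pin down can be sketched standalone — a minimal sketch, assuming a helper named `snapshot_items` with a retry count of 3 and a configurable delay (the provider's real implementation may differ):

```python
import time

def snapshot_items(nodes, retries=3, delay=0.0):
    """Snapshot a dict-like object that another thread may mutate mid-iteration."""
    for attempt in range(retries):
        try:
            return list(nodes.items())
        except RuntimeError:  # "dictionary changed size during iteration"
            if attempt == retries - 1:
                return []  # non-fatal: skip this snapshot cycle
            time.sleep(delay)
    return []
```

Returning `[]` on exhaustion keeps a transient race from taking down the ingestor loop; the next cycle simply tries again.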

View File

@@ -55,8 +55,38 @@ def _javascript_package_version() -> str:
raise AssertionError("package.json does not expose a string version")
def _flutter_package_version() -> str:
pubspec_path = REPO_ROOT / "app" / "pubspec.yaml"
for line in pubspec_path.read_text(encoding="utf-8").splitlines():
if line.startswith("version:"):
version = line.split(":", 1)[1].strip()
if version:
return version
break
raise AssertionError("pubspec.yaml does not expose a version")
def _rust_package_version() -> str:
cargo_path = REPO_ROOT / "matrix" / "Cargo.toml"
inside_package = False
for line in cargo_path.read_text(encoding="utf-8").splitlines():
stripped = line.strip()
if stripped == "[package]":
inside_package = True
continue
if inside_package and stripped.startswith("[") and stripped.endswith("]"):
break
if inside_package:
literal = re.match(
r'version\s*=\s*["\'](?P<version>[^"\']+)["\']', stripped
)
if literal:
return literal.group("version")
raise AssertionError("Cargo.toml does not expose a package version")
def test_version_identifiers_match_across_languages() -> None:
"""Guard against version drift between Python, Ruby, and JavaScript."""
"""Guard against version drift between Python, Ruby, JavaScript, Flutter, and Rust."""
python_version = getattr(data, "__version__", None)
assert (
@@ -65,5 +95,13 @@ def test_version_identifiers_match_across_languages() -> None:
ruby_version = _ruby_fallback_version()
javascript_version = _javascript_package_version()
flutter_version = _flutter_package_version()
rust_version = _rust_package_version()
assert python_version == ruby_version == javascript_version
assert (
python_version
== ruby_version
== javascript_version
== flutter_version
== rust_version
)
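
The `[package]`-scoped scan in `_rust_package_version` matters because `version = "…"` keys also appear under `[dependencies]`; stopping at the next table header keeps the parser from reading a dependency's version. A standalone sketch of the same logic (returning `None` instead of raising, unlike the test helper):

```python
import re

def cargo_package_version(text):
    """Return the version declared in Cargo.toml's [package] table, else None."""
    inside_package = False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped == "[package]":
            inside_package = True
            continue
        if inside_package and stripped.startswith("[") and stripped.endswith("]"):
            break  # left the [package] table without finding a version
        if inside_package:
            match = re.match(r'version\s*=\s*["\'](?P<version>[^"\']+)["\']', stripped)
            if match:
                return match.group("version")
    return None
```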

View File

@@ -23,6 +23,9 @@ ENV BUNDLE_FORCE_RUBY_PLATFORM=true
# Install build dependencies and SQLite3
RUN apk add --no-cache \
build-base \
python3 \
py3-pip \
py3-virtualenv \
sqlite-dev \
linux-headers \
pkgconfig
@@ -38,11 +41,16 @@ RUN bundle config set --local force_ruby_platform true && \
bundle config set --local without 'development test' && \
bundle install --jobs=4 --retry=3
# Install Meshtastic decoder dependencies in a dedicated venv
RUN python3 -m venv /opt/meshtastic-venv && \
/opt/meshtastic-venv/bin/pip install --no-cache-dir meshtastic protobuf
# Production stage
FROM ruby:3.3-alpine AS production
# Install runtime dependencies
RUN apk add --no-cache \
python3 \
sqlite \
tzdata \
curl
@@ -56,6 +64,7 @@ WORKDIR /app
# Copy installed gems from builder stage
COPY --from=builder /usr/local/bundle /usr/local/bundle
COPY --from=builder /opt/meshtastic-venv /opt/meshtastic-venv
# Copy application code (excluding the Dockerfile which is not required at runtime)
COPY --chown=potatomesh:potatomesh web/app.rb ./
@@ -70,6 +79,7 @@ COPY --chown=potatomesh:potatomesh web/scripts ./scripts
# Copy SQL schema files from data directory
COPY --chown=potatomesh:potatomesh data/*.sql /data/
COPY --chown=potatomesh:potatomesh data/mesh_ingestor/decode_payload.py /app/data/mesh_ingestor/decode_payload.py
# Create data and configuration directories with correct ownership
RUN mkdir -p /app/.local/share/potato-mesh \
@@ -85,6 +95,7 @@ EXPOSE 41447
# Default environment variables (can be overridden by host)
ENV RACK_ENV=production \
APP_ENV=production \
MESHTASTIC_PYTHON=/opt/meshtastic-venv/bin/python \
XDG_DATA_HOME=/app/.local/share \
XDG_CONFIG_HOME=/app/.config \
SITE_NAME="PotatoMesh Demo" \

View File

@@ -49,6 +49,12 @@ require_relative "application/worker_pool"
require_relative "application/federation"
require_relative "application/prometheus"
require_relative "application/queries"
require_relative "application/meshtastic/channel_names"
require_relative "application/meshtastic/channel_hash"
require_relative "application/meshtastic/protobuf"
require_relative "application/meshtastic/rainbow_table"
require_relative "application/meshtastic/cipher"
require_relative "application/meshtastic/payload_decoder"
require_relative "application/data_processing"
require_relative "application/filesystem"
require_relative "application/instances"
@@ -133,7 +139,10 @@ module PotatoMesh
set :public_folder, File.expand_path("../../public", __dir__)
set :views, File.expand_path("../../views", __dir__)
set :federation_thread, nil
set :initial_federation_thread, nil
set :federation_worker_pool, nil
set :federation_shutdown_requested, false
set :federation_shutdown_hook_installed, false
set :port, resolve_port
set :bind, DEFAULT_BIND_ADDRESS

View File

@@ -160,7 +160,15 @@ module PotatoMesh
inserted
end
def touch_node_last_seen(db, node_ref, fallback_num = nil, rx_time: nil, source: nil)
def touch_node_last_seen(
db,
node_ref,
fallback_num = nil,
rx_time: nil,
source: nil,
lora_freq: nil,
modem_preset: nil
)
timestamp = coerce_integer(rx_time)
return unless timestamp
@@ -185,15 +193,19 @@ module PotatoMesh
return if broadcast_node_ref?(node_id, fallback_num)
return unless node_id
lora_freq = coerce_integer(lora_freq)
modem_preset = string_or_nil(modem_preset)
updated = false
with_busy_retry do
db.execute <<~SQL, [timestamp, timestamp, timestamp, node_id]
db.execute <<~SQL, [timestamp, timestamp, timestamp, lora_freq, modem_preset, node_id]
UPDATE nodes
SET last_heard = CASE
WHEN COALESCE(last_heard, 0) >= ? THEN last_heard
ELSE ?
END,
first_heard = COALESCE(first_heard, ?)
first_heard = COALESCE(first_heard, ?),
lora_freq = COALESCE(?, lora_freq),
modem_preset = COALESCE(?, modem_preset)
WHERE node_id = ?
SQL
updated ||= db.changes.positive?
@@ -206,6 +218,8 @@ module PotatoMesh
node_id: node_id,
timestamp: timestamp,
source: source || :unknown,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
end
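
The intent of the widened UPDATE can be checked in isolation: `last_heard` only moves forward, `first_heard` is set once, and the new `lora_freq`/`modem_preset` columns are only touched when the packet actually carried a value. A runnable sketch against an in-memory SQLite table (schema reduced to the columns involved; the busy-retry wrapper is omitted):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE nodes(node_id TEXT PRIMARY KEY, last_heard INTEGER,"
    " first_heard INTEGER, lora_freq INTEGER, modem_preset TEXT)"
)
con.execute("INSERT INTO nodes VALUES ('!aabbccdd', 100, 100, NULL, NULL)")

def touch(ts, lora_freq=None, modem_preset=None):
    # Mirrors the UPDATE above: monotonic last_heard, write-once first_heard,
    # and COALESCE(?, col) so a NULL parameter leaves the column untouched.
    con.execute(
        """
        UPDATE nodes
           SET last_heard = CASE WHEN COALESCE(last_heard, 0) >= ? THEN last_heard ELSE ? END,
               first_heard = COALESCE(first_heard, ?),
               lora_freq = COALESCE(?, lora_freq),
               modem_preset = COALESCE(?, modem_preset)
         WHERE node_id = '!aabbccdd'
        """,
        (ts, ts, ts, lora_freq, modem_preset),
    )

touch(50)                                            # stale packet: nothing moves backwards
touch(200, lora_freq=868, modem_preset="LONG_FAST")  # newer packet also fills radio metadata
```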
@@ -490,20 +504,37 @@ module PotatoMesh
rx_iso ||= Time.at(rx_time).utc.iso8601
raw_node_id = payload["node_id"] || payload["from_id"] || payload["from"]
node_id = string_or_nil(raw_node_id)
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])
node_id ||= format("!%08x", raw_node_num & 0xFFFFFFFF) if node_id.nil? && raw_node_num
payload_for_num = payload.is_a?(Hash) ? payload.dup : {}
payload_for_num["num"] ||= raw_node_num if raw_node_num
node_num = resolve_node_num(node_id, payload_for_num)
node_num ||= raw_node_num
canonical = normalize_node_id(db, node_id || node_num)
node_id = canonical if canonical
canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
if canonical_parts
node_id, node_num, = canonical_parts
else
node_id = string_or_nil(raw_node_id)
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
node_id ||= format("!%08x", raw_node_num & 0xFFFFFFFF) if node_id.nil? && raw_node_num
payload_for_num = payload.is_a?(Hash) ? payload.dup : {}
payload_for_num["num"] ||= raw_node_num if raw_node_num
node_num = resolve_node_num(node_id, payload_for_num)
node_num ||= raw_node_num
canonical = normalize_node_id(db, node_id || node_num)
node_id = canonical if canonical
end
lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
ensure_unknown_node(db, node_id || node_num, node_num, heard_time: rx_time)
touch_node_last_seen(db, node_id || node_num, node_num, rx_time: rx_time, source: :position)
touch_node_last_seen(
db,
node_id || node_num,
node_num,
rx_time: rx_time,
source: :position,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
to_id = string_or_nil(payload["to_id"] || payload["to"])
@@ -585,6 +616,7 @@ module PotatoMesh
payload_b64 = string_or_nil(payload["payload_b64"] || payload["payload"])
payload_b64 ||= string_or_nil(position_section.dig("payload", "__bytes_b64__"))
ingestor = string_or_nil(payload["ingestor"])
row = [
pos_id,
@@ -608,13 +640,14 @@ module PotatoMesh
hop_limit,
bitfield,
payload_b64,
ingestor,
]
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO positions(id,node_id,node_num,rx_time,rx_iso,position_time,to_id,latitude,longitude,altitude,location_source,
precision_bits,sats_in_view,pdop,ground_speed,ground_track,snr,rssi,hop_limit,bitfield,payload_b64)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
precision_bits,sats_in_view,pdop,ground_speed,ground_track,snr,rssi,hop_limit,bitfield,payload_b64,ingestor)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(id) DO UPDATE SET
node_id=COALESCE(excluded.node_id,positions.node_id),
node_num=COALESCE(excluded.node_num,positions.node_num),
@@ -635,7 +668,8 @@ module PotatoMesh
rssi=COALESCE(excluded.rssi,positions.rssi),
hop_limit=COALESCE(excluded.hop_limit,positions.hop_limit),
bitfield=COALESCE(excluded.bitfield,positions.bitfield),
payload_b64=COALESCE(excluded.payload_b64,positions.payload_b64)
payload_b64=COALESCE(excluded.payload_b64,positions.payload_b64),
ingestor=COALESCE(NULLIF(positions.ingestor,''), excluded.ingestor)
SQL
end
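
`COALESCE(NULLIF(positions.ingestor,''), excluded.ingestor)` gives first-reporter-wins semantics for the new `ingestor` column, while `NULLIF` still lets a later upsert fill in an empty-string value. A minimal demonstration with the table reduced to the two columns involved (requires SQLite's `ON CONFLICT … DO UPDATE`, available since 3.24):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE positions(id INTEGER PRIMARY KEY, ingestor TEXT)")

def upsert(pos_id, ingestor):
    con.execute(
        """
        INSERT INTO positions(id, ingestor) VALUES (?, ?)
        ON CONFLICT(id) DO UPDATE SET
          ingestor = COALESCE(NULLIF(positions.ingestor, ''), excluded.ingestor)
        """,
        (pos_id, ingestor),
    )

upsert(1, "!ingestor-a")  # first reporter is recorded
upsert(1, "!ingestor-b")  # later duplicates do not overwrite it
upsert(2, "")             # empty string counts as "not yet attributed"...
upsert(2, "!ingestor-c")  # ...so the next reporter may fill it in
```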
@@ -690,6 +724,7 @@ module PotatoMesh
touch_node_last_seen(db, node_id || node_num, node_num, rx_time: rx_time, source: :neighborinfo)
neighbor_entries = []
ingestor = string_or_nil(payload["ingestor"])
neighbors_payload = payload["neighbors"]
neighbors_list = neighbors_payload.is_a?(Array) ? neighbors_payload : []
@@ -726,28 +761,56 @@ module PotatoMesh
snr = coerce_float(neighbor["snr"])
ensure_unknown_node(db, neighbor_id || neighbor_num, neighbor_num, heard_time: entry_rx_time)
touch_node_last_seen(db, neighbor_id || neighbor_num, neighbor_num, rx_time: entry_rx_time, source: :neighborinfo)
neighbor_entries << [neighbor_id, snr, entry_rx_time]
neighbor_entries << [neighbor_id, snr, entry_rx_time, ingestor]
end
with_busy_retry do
db.transaction do
db.execute("DELETE FROM neighbors WHERE node_id = ?", [node_id])
neighbor_entries.each do |neighbor_id, snr_value, heard_time|
if neighbor_entries.empty?
db.execute("DELETE FROM neighbors WHERE node_id = ?", [node_id])
else
expected_neighbors = neighbor_entries.map(&:first).uniq
existing_neighbors = db.execute(
"SELECT neighbor_id FROM neighbors WHERE node_id = ?",
[node_id],
).flatten
stale_neighbors = existing_neighbors - expected_neighbors
stale_neighbors.each_slice(500) do |slice|
placeholders = slice.map { "?" }.join(",")
db.execute(
"DELETE FROM neighbors WHERE node_id = ? AND neighbor_id IN (#{placeholders})",
[node_id] + slice,
)
end
end
neighbor_entries.each do |neighbor_id, snr_value, heard_time, reporter_id|
db.execute(
<<~SQL,
INSERT OR REPLACE INTO neighbors(node_id, neighbor_id, snr, rx_time)
VALUES (?, ?, ?, ?)
INSERT INTO neighbors(node_id, neighbor_id, snr, rx_time, ingestor)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(node_id, neighbor_id) DO UPDATE SET
snr = excluded.snr,
rx_time = excluded.rx_time,
ingestor = COALESCE(NULLIF(neighbors.ingestor,''), excluded.ingestor)
SQL
[node_id, neighbor_id, snr_value, heard_time],
[node_id, neighbor_id, snr_value, heard_time, reporter_id],
)
end
end
end
end
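
The replacement for the blanket `DELETE FROM neighbors` only removes neighbors that vanished from the latest report, chunked into `IN (…)` batches of 500 to keep the bound-parameter count within SQLite's limit. The set arithmetic and batching can be sketched separately (hypothetical helper name; `chunk=500` mirrors `each_slice(500)`):

```python
def stale_neighbor_batches(existing, expected, node_id, chunk=500):
    """Build (sql, params) DELETE statements for neighbors no longer reported."""
    expected_set = set(expected)
    stale = [n for n in existing if n not in expected_set]
    batches = []
    for i in range(0, len(stale), chunk):
        batch = stale[i:i + chunk]
        placeholders = ",".join("?" * len(batch))
        batches.append((
            f"DELETE FROM neighbors WHERE node_id = ? AND neighbor_id IN ({placeholders})",
            [node_id] + batch,
        ))
    return batches
```

Deleting only the stale rows (instead of delete-all-then-reinsert) keeps the `ingestor` attribution on surviving neighbor rows intact across reports.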
def update_node_from_telemetry(db, node_id, node_num, rx_time, metrics = {})
def update_node_from_telemetry(
db,
node_id,
node_num,
rx_time,
metrics = {},
lora_freq: nil,
modem_preset: nil
)
num = coerce_integer(node_num)
id = string_or_nil(node_id)
if id&.start_with?("!")
@@ -757,7 +820,15 @@ module PotatoMesh
return unless id
ensure_unknown_node(db, id, num, heard_time: rx_time)
touch_node_last_seen(db, id, num, rx_time: rx_time, source: :telemetry)
touch_node_last_seen(
db,
id,
num,
rx_time: rx_time,
source: :telemetry,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
battery = coerce_float(metrics[:battery_level] || metrics["battery_level"])
voltage = coerce_float(metrics[:voltage] || metrics["voltage"])
@@ -901,17 +972,23 @@ module PotatoMesh
rx_iso ||= Time.at(rx_time).utc.iso8601
raw_node_id = payload["node_id"] || payload["from_id"] || payload["from"]
node_id = string_or_nil(raw_node_id)
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])
payload_for_num = payload.dup
payload_for_num["num"] ||= raw_node_num if raw_node_num
node_num = resolve_node_num(node_id, payload_for_num)
node_num ||= raw_node_num
canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
if canonical_parts
node_id, node_num, = canonical_parts
else
node_id = string_or_nil(raw_node_id)
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
canonical = normalize_node_id(db, node_id || node_num)
node_id = canonical if canonical
payload_for_num = payload.dup
payload_for_num["num"] ||= raw_node_num if raw_node_num
node_num = resolve_node_num(node_id, payload_for_num)
node_num ||= raw_node_num
canonical = normalize_node_id(db, node_id || node_num)
node_id = canonical if canonical
end
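
`canonical_node_parts` centralises the `!`-prefixed-hex normalisation that both branches previously duplicated inline. Its observable contract can be sketched like this — a hypothetical reimplementation inferred from the fallback branch (`downcase`, `format("!%08x", num & 0xFFFFFFFF)`), not the Ruby helper itself:

```python
def canonical_node_parts(raw_id, raw_num=None):
    """Return a ("!xxxxxxxx", num) pair for a node id or numeric id, else None."""
    if isinstance(raw_id, str) and raw_id.startswith("!"):
        try:
            num = int(raw_id[1:], 16)
        except ValueError:
            return None  # not a hex node id
        num &= 0xFFFFFFFF
        return ("!%08x" % num, num)
    if raw_num is not None:
        num = int(raw_num) & 0xFFFFFFFF
        return ("!%08x" % num, num)
    return None
```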
from_id = string_or_nil(payload["from_id"]) || node_id
to_id = string_or_nil(payload["to_id"] || payload["to"])
@@ -926,6 +1003,9 @@ module PotatoMesh
rssi = coerce_integer(payload["rssi"])
bitfield = coerce_integer(payload["bitfield"])
payload_b64 = string_or_nil(payload["payload_b64"] || payload["payload"])
lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
ingestor = string_or_nil(payload["ingestor"])
telemetry_section = normalize_json_object(payload["telemetry"])
device_metrics = normalize_json_object(payload["device_metrics"] || payload["deviceMetrics"])
@@ -1255,6 +1335,7 @@ module PotatoMesh
rainfall_24h,
soil_moisture,
soil_temperature,
ingestor,
]
placeholders = Array.new(row.length, "?").join(",")
@@ -1262,7 +1343,7 @@ module PotatoMesh
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO telemetry(id,node_id,node_num,from_id,to_id,rx_time,rx_iso,telemetry_time,channel,portnum,hop_limit,snr,rssi,bitfield,payload_b64,
battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,temperature,relative_humidity,barometric_pressure,gas_resistance,current,iaq,distance,lux,white_lux,ir_lux,uv_lux,wind_direction,wind_speed,weight,wind_gust,wind_lull,radiation,rainfall_1h,rainfall_24h,soil_moisture,soil_temperature)
battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,temperature,relative_humidity,barometric_pressure,gas_resistance,current,iaq,distance,lux,white_lux,ir_lux,uv_lux,wind_direction,wind_speed,weight,wind_gust,wind_lull,radiation,rainfall_1h,rainfall_24h,soil_moisture,soil_temperature,ingestor)
VALUES (#{placeholders})
ON CONFLICT(id) DO UPDATE SET
node_id=COALESCE(excluded.node_id,telemetry.node_id),
@@ -1304,17 +1385,26 @@ module PotatoMesh
rainfall_1h=COALESCE(excluded.rainfall_1h,telemetry.rainfall_1h),
rainfall_24h=COALESCE(excluded.rainfall_24h,telemetry.rainfall_24h),
soil_moisture=COALESCE(excluded.soil_moisture,telemetry.soil_moisture),
soil_temperature=COALESCE(excluded.soil_temperature,telemetry.soil_temperature)
soil_temperature=COALESCE(excluded.soil_temperature,telemetry.soil_temperature),
ingestor=COALESCE(NULLIF(telemetry.ingestor,''), excluded.ingestor)
SQL
end
update_node_from_telemetry(db, node_id, node_num, rx_time, {
battery_level: battery_level,
voltage: voltage,
channel_utilization: channel_utilization,
air_util_tx: air_util_tx,
uptime_seconds: uptime_seconds,
})
update_node_from_telemetry(
db,
node_id,
node_num,
rx_time,
{
battery_level: battery_level,
voltage: voltage,
channel_utilization: channel_utilization,
air_util_tx: air_util_tx,
uptime_seconds: uptime_seconds,
},
lora_freq: lora_freq,
modem_preset: modem_preset,
)
end
# Persist a traceroute observation and its hop path.
@@ -1347,6 +1437,7 @@ module PotatoMesh
metrics&.[]("latency_ms") ||
metrics&.[]("latencyMs"),
)
ingestor = string_or_nil(payload["ingestor"])
hops_value = payload.key?("hops") ? payload["hops"] : payload["path"]
hops = normalize_trace_hops(hops_value)
@@ -1358,9 +1449,9 @@ module PotatoMesh
end
with_busy_retry do
db.execute <<~SQL, [trace_identifier, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms]
INSERT INTO traces(id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms)
VALUES(?,?,?,?,?,?,?,?,?)
db.execute <<~SQL, [trace_identifier, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor]
INSERT INTO traces(id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor)
VALUES(?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(id) DO UPDATE SET
request_id=COALESCE(excluded.request_id,traces.request_id),
src=COALESCE(excluded.src,traces.src),
@@ -1369,7 +1460,8 @@ module PotatoMesh
rx_iso=excluded.rx_iso,
rssi=COALESCE(excluded.rssi,traces.rssi),
snr=COALESCE(excluded.snr,traces.snr),
elapsed_ms=COALESCE(excluded.elapsed_ms,traces.elapsed_ms)
elapsed_ms=COALESCE(excluded.elapsed_ms,traces.elapsed_ms),
ingestor=COALESCE(NULLIF(traces.ingestor,''), excluded.ingestor)
SQL
trace_id = trace_identifier || db.last_insert_row_id
@@ -1385,6 +1477,58 @@ module PotatoMesh
end
end
# Attempt to decrypt an encrypted Meshtastic message payload.
#
# @param message [Hash] message payload supplied by the ingestor.
# @param packet_id [Integer] message packet identifier.
# @param from_id [String, nil] canonical node identifier when available.
# @param from_num [Integer, nil] numeric node identifier when available.
# @param channel_index [Integer, nil] channel hash index.
# @return [Hash, nil] decrypted payload metadata when parsing succeeds.
def decrypt_meshtastic_message(message, packet_id, from_id, from_num, channel_index)
return nil unless message.is_a?(Hash)
cipher_b64 = string_or_nil(message["encrypted"])
return nil unless cipher_b64
if (ENV["RACK_ENV"] == "test" || ENV["APP_ENV"] == "test" || defined?(RSpec)) &&
ENV["MESHTASTIC_PSK_B64"].nil?
return nil
end
node_num = coerce_integer(from_num)
if node_num.nil?
parts = canonical_node_parts(from_id)
node_num = parts[1] if parts
end
return nil unless node_num
psk_b64 = PotatoMesh::Config.meshtastic_psk_b64
data = PotatoMesh::App::Meshtastic::Cipher.decrypt_data(
cipher_b64: cipher_b64,
packet_id: packet_id,
from_id: from_id,
from_num: node_num,
psk_b64: psk_b64,
)
return nil unless data
channel_name = nil
if channel_index.is_a?(Integer)
candidates = PotatoMesh::App::Meshtastic::RainbowTable.channel_names_for(
channel_index,
psk_b64: psk_b64,
)
channel_name = candidates.first if candidates.any?
end
{
text: data[:text],
portnum: data[:portnum],
payload: data[:payload],
channel_name: channel_name,
}
end
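
`Cipher.decrypt_data` needs both the packet id and the sender's node number because Meshtastic derives the AES-CTR nonce from them: as commonly described in the Meshtastic encryption docs, the 16-byte nonce is the 64-bit packet id followed by the 32-bit sender number, both little-endian, zero-padded to the block size. A sketch of just that packing (an assumption about the wire format, not the Ruby cipher module's code):

```python
import struct

def ctr_nonce(packet_id, from_num):
    """Pack packet id (u64 LE) + sender num (u32 LE), zero-padded to 16 bytes."""
    return struct.pack(
        "<QI", packet_id & 0xFFFFFFFFFFFFFFFF, from_num & 0xFFFFFFFF
    ) + b"\x00" * 4
```

This is why the method bails out early when no node number can be recovered from `from_num` or `from_id`: without it there is no nonce to attempt decryption with.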
def insert_message(db, message)
return unless message.is_a?(Hash)
@@ -1415,6 +1559,14 @@ module PotatoMesh
from_id = canonical_from_id
end
end
if from_id && !from_id.start_with?("^")
canonical_parts = canonical_node_parts(from_id, message["from_num"])
if canonical_parts && !from_id.start_with?("!")
from_id = canonical_parts[0]
message["from_num"] ||= canonical_parts[1]
end
end
sender_present = !from_id.nil? || !coerce_integer(message["from_num"]).nil? || !trimmed_from_id.nil?
raw_to_id = message["to_id"]
raw_to_id = message["to"] if raw_to_id.nil? || raw_to_id.to_s.strip.empty?
@@ -1428,27 +1580,41 @@ module PotatoMesh
to_id = canonical_to_id
end
end
if to_id && !to_id.start_with?("^")
canonical_parts = canonical_node_parts(to_id, message["to_num"])
if canonical_parts && !to_id.start_with?("!")
to_id = canonical_parts[0]
message["to_num"] ||= canonical_parts[1]
end
end
encrypted = string_or_nil(message["encrypted"])
text = message["text"]
portnum = message["portnum"]
clear_encrypted = false
channel_index = coerce_integer(message["channel"] || message["channel_index"] || message["channelIndex"])
ensure_unknown_node(db, from_id || raw_from_id, message["from_num"], heard_time: rx_time)
touch_node_last_seen(
db,
from_id || raw_from_id || message["from_num"],
message["from_num"],
rx_time: rx_time,
source: :message,
)
decrypted_payload = nil
decrypted_portnum = nil
ensure_unknown_node(db, to_id || raw_to_id, message["to_num"], heard_time: rx_time) if to_id || raw_to_id
if to_id || raw_to_id || message.key?("to_num")
touch_node_last_seen(
db,
to_id || raw_to_id || message["to_num"],
message["to_num"],
rx_time: rx_time,
source: :message,
if encrypted && (text.nil? || text.to_s.strip.empty?)
decrypted = decrypt_meshtastic_message(
message,
msg_id,
from_id,
message["from_num"],
channel_index,
)
if decrypted
decrypted_payload = decrypted
decrypted_portnum = decrypted[:portnum]
end
end
if encrypted && (text.nil? || text.to_s.strip.empty?)
portnum = nil
message.delete("portnum")
end
lora_freq = coerce_integer(message["lora_freq"] || message["loraFrequency"])
@@ -1456,6 +1622,7 @@ module PotatoMesh
channel_name = string_or_nil(message["channel_name"] || message["channelName"])
reply_id = coerce_integer(message["reply_id"] || message["replyId"])
emoji = string_or_nil(message["emoji"])
ingestor = string_or_nil(message["ingestor"])
row = [
msg_id,
@@ -1464,8 +1631,8 @@ module PotatoMesh
from_id,
to_id,
message["channel"],
message["portnum"],
message["text"],
portnum,
text,
encrypted,
message["snr"],
message["rssi"],
@@ -1475,19 +1642,27 @@ module PotatoMesh
channel_name,
reply_id,
emoji,
ingestor,
]
with_busy_retry do
existing = db.get_first_row(
"SELECT from_id, to_id, encrypted, lora_freq, modem_preset, channel_name, reply_id, emoji FROM messages WHERE id = ?",
"SELECT from_id, to_id, text, encrypted, lora_freq, modem_preset, channel_name, reply_id, emoji, portnum, ingestor FROM messages WHERE id = ?",
[msg_id],
)
if existing
updates = {}
existing_text = existing.is_a?(Hash) ? existing["text"] : existing[2]
existing_text_str = existing_text&.to_s
existing_has_text = existing_text_str && !existing_text_str.strip.empty?
existing_from = existing.is_a?(Hash) ? existing["from_id"] : existing[0]
existing_from_str = existing_from&.to_s
return if !sender_present && (existing_from_str.nil? || existing_from_str.strip.empty?)
existing_encrypted = existing.is_a?(Hash) ? existing["encrypted"] : existing[3]
existing_encrypted_str = existing_encrypted&.to_s
decrypted_precedence = text && (clear_encrypted || (existing_encrypted_str && !existing_encrypted_str.strip.empty?))
if from_id
existing_from = existing.is_a?(Hash) ? existing["from_id"] : existing[0]
existing_from_str = existing_from&.to_s
should_update = existing_from_str.nil? || existing_from_str.strip.empty?
should_update ||= existing_from != from_id
updates["from_id"] = from_id if should_update
@@ -1501,21 +1676,48 @@ module PotatoMesh
updates["to_id"] = to_id if should_update
end
if encrypted
existing_encrypted = existing.is_a?(Hash) ? existing["encrypted"] : existing[2]
existing_encrypted_str = existing_encrypted&.to_s
if clear_encrypted || (decrypted_precedence && existing_encrypted_str && !existing_encrypted_str.strip.empty?)
updates["encrypted"] = nil if existing_encrypted
elsif encrypted && !existing_has_text
should_update = existing_encrypted_str.nil? || existing_encrypted_str.strip.empty?
should_update ||= existing_encrypted != encrypted
updates["encrypted"] = encrypted if should_update
end
if text
should_update = existing_text_str.nil? || existing_text_str.strip.empty?
should_update ||= existing_text != text
updates["text"] = text if should_update
end
if decrypted_precedence
updates["channel"] = message["channel"] if message.key?("channel")
updates["snr"] = message["snr"] if message.key?("snr")
updates["rssi"] = message["rssi"] if message.key?("rssi")
updates["hop_limit"] = message["hop_limit"] if message.key?("hop_limit")
updates["lora_freq"] = lora_freq unless lora_freq.nil?
updates["modem_preset"] = modem_preset if modem_preset
updates["channel_name"] = channel_name if channel_name
updates["rx_time"] = rx_time if rx_time
updates["rx_iso"] = rx_iso if rx_iso
end
if portnum
existing_portnum = existing.is_a?(Hash) ? existing["portnum"] : existing[9]
existing_portnum_str = existing_portnum&.to_s
should_update = existing_portnum_str.nil? || existing_portnum_str.strip.empty?
should_update ||= existing_portnum != portnum
should_update ||= decrypted_precedence
updates["portnum"] = portnum if should_update
end
unless lora_freq.nil?
existing_lora = existing.is_a?(Hash) ? existing["lora_freq"] : existing[3]
existing_lora = existing.is_a?(Hash) ? existing["lora_freq"] : existing[4]
updates["lora_freq"] = lora_freq if existing_lora != lora_freq
end
if modem_preset
existing_preset = existing.is_a?(Hash) ? existing["modem_preset"] : existing[4]
existing_preset = existing.is_a?(Hash) ? existing["modem_preset"] : existing[5]
existing_preset_str = existing_preset&.to_s
should_update = existing_preset_str.nil? || existing_preset_str.strip.empty?
should_update ||= existing_preset != modem_preset
@@ -1523,7 +1725,7 @@ module PotatoMesh
end
if channel_name
existing_channel = existing.is_a?(Hash) ? existing["channel_name"] : existing[5]
existing_channel = existing.is_a?(Hash) ? existing["channel_name"] : existing[6]
existing_channel_str = existing_channel&.to_s
should_update = existing_channel_str.nil? || existing_channel_str.strip.empty?
should_update ||= existing_channel != channel_name
@@ -1531,18 +1733,24 @@ module PotatoMesh
end
unless reply_id.nil?
existing_reply = existing.is_a?(Hash) ? existing["reply_id"] : existing[6]
existing_reply = existing.is_a?(Hash) ? existing["reply_id"] : existing[7]
updates["reply_id"] = reply_id if existing_reply != reply_id
end
if emoji
existing_emoji = existing.is_a?(Hash) ? existing["emoji"] : existing[7]
existing_emoji = existing.is_a?(Hash) ? existing["emoji"] : existing[8]
existing_emoji_str = existing_emoji&.to_s
should_update = existing_emoji_str.nil? || existing_emoji_str.strip.empty?
should_update ||= existing_emoji != emoji
updates["emoji"] = emoji if should_update
end
if ingestor
existing_ingestor = existing.is_a?(Hash) ? existing["ingestor"] : existing[10]
existing_ingestor = string_or_nil(existing_ingestor)
updates["ingestor"] = ingestor if existing_ingestor.nil?
end
unless updates.empty?
assignments = updates.keys.map { |column| "#{column} = ?" }.join(", ")
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", updates.values + [msg_id])
@@ -1552,19 +1760,49 @@ module PotatoMesh
begin
db.execute <<~SQL, row
INSERT INTO messages(id,rx_time,rx_iso,from_id,to_id,channel,portnum,text,encrypted,snr,rssi,hop_limit,lora_freq,modem_preset,channel_name,reply_id,emoji)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
INSERT INTO messages(id,rx_time,rx_iso,from_id,to_id,channel,portnum,text,encrypted,snr,rssi,hop_limit,lora_freq,modem_preset,channel_name,reply_id,emoji,ingestor)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
SQL
rescue SQLite3::ConstraintException
existing_row = db.get_first_row(
"SELECT text, encrypted, ingestor FROM messages WHERE id = ?",
[msg_id],
)
existing_text = existing_row.is_a?(Hash) ? existing_row["text"] : existing_row&.[](0)
existing_text_str = existing_text&.to_s
allow_encrypted_update = existing_text_str.nil? || existing_text_str.strip.empty?
existing_encrypted = existing_row.is_a?(Hash) ? existing_row["encrypted"] : existing_row&.[](1)
existing_encrypted_str = existing_encrypted&.to_s
existing_ingestor = existing_row.is_a?(Hash) ? existing_row["ingestor"] : existing_row&.[](2)
existing_ingestor = string_or_nil(existing_ingestor)
decrypted_precedence = text && (clear_encrypted || (existing_encrypted_str && !existing_encrypted_str.strip.empty?))
fallback_updates = {}
fallback_updates["from_id"] = from_id if from_id
fallback_updates["to_id"] = to_id if to_id
fallback_updates["encrypted"] = encrypted if encrypted
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
fallback_updates["modem_preset"] = modem_preset if modem_preset
fallback_updates["channel_name"] = channel_name if channel_name
fallback_updates["text"] = text if text
fallback_updates["encrypted"] = encrypted if encrypted && allow_encrypted_update
fallback_updates["encrypted"] = nil if clear_encrypted
fallback_updates["portnum"] = portnum if portnum
if decrypted_precedence
fallback_updates["channel"] = message["channel"] if message.key?("channel")
fallback_updates["snr"] = message["snr"] if message.key?("snr")
fallback_updates["rssi"] = message["rssi"] if message.key?("rssi")
fallback_updates["hop_limit"] = message["hop_limit"] if message.key?("hop_limit")
fallback_updates["portnum"] = portnum if portnum
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
fallback_updates["modem_preset"] = modem_preset if modem_preset
fallback_updates["channel_name"] = channel_name if channel_name
fallback_updates["rx_time"] = rx_time if rx_time
fallback_updates["rx_iso"] = rx_iso if rx_iso
else
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
fallback_updates["modem_preset"] = modem_preset if modem_preset
fallback_updates["channel_name"] = channel_name if channel_name
end
fallback_updates["reply_id"] = reply_id unless reply_id.nil?
fallback_updates["emoji"] = emoji if emoji
fallback_updates["ingestor"] = ingestor if ingestor && existing_ingestor.nil?
unless fallback_updates.empty?
assignments = fallback_updates.keys.map { |column| "#{column} = ?" }.join(", ")
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", fallback_updates.values + [msg_id])
@@ -1572,6 +1810,327 @@ module PotatoMesh
end
end
end
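
Both the happy path and the `ConstraintException` fallback finish by turning an updates hash into a single parameterised UPDATE. That assignment-building step is identical in both; a minimal sketch (hypothetical helper name):

```python
def build_message_update(updates, msg_id):
    """Turn a column -> value mapping into one parameterised UPDATE statement."""
    assignments = ", ".join(f"{column} = ?" for column in updates)
    sql = f"UPDATE messages SET {assignments} WHERE id = ?"
    return sql, list(updates.values()) + [msg_id]
```

Collecting every conditional change into one dict means a message upsert issues at most one UPDATE, regardless of how many precedence rules (decrypted text, first-writer `ingestor`, radio metadata) fired.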
if clear_encrypted && text
debug_log(
"Stored decrypted text message",
context: "data_processing.insert_message",
message_id: msg_id,
channel: message["channel"],
channel_name: message["channel_name"],
portnum: portnum,
)
end
stored_decrypted = nil
if decrypted_payload
stored_decrypted = store_decrypted_payload(
db,
message,
msg_id,
decrypted_payload,
rx_time: rx_time,
rx_iso: rx_iso,
from_id: from_id,
to_id: to_id,
channel: message["channel"],
portnum: portnum || decrypted_portnum,
hop_limit: message["hop_limit"],
snr: message["snr"],
rssi: message["rssi"],
)
end
if stored_decrypted && encrypted
with_busy_retry do
db.execute("UPDATE messages SET encrypted = NULL WHERE id = ?", [msg_id])
end
debug_log(
"Cleared encrypted payload after decoding",
context: "data_processing.insert_message",
message_id: msg_id,
portnum: portnum || decrypted_portnum,
)
end
should_touch_message = !stored_decrypted
if should_touch_message
ensure_unknown_node(db, from_id || raw_from_id, message["from_num"], heard_time: rx_time)
touch_node_last_seen(
db,
from_id || raw_from_id || message["from_num"],
message["from_num"],
rx_time: rx_time,
source: :message,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
ensure_unknown_node(db, to_id || raw_to_id, message["to_num"], heard_time: rx_time) if to_id || raw_to_id
if to_id || raw_to_id || message.key?("to_num")
touch_node_last_seen(
db,
to_id || raw_to_id || message["to_num"],
message["to_num"],
rx_time: rx_time,
source: :message,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
end
end
end
# Decode and store decrypted payloads in domain-specific tables.
#
# @param db [SQLite3::Database] open database handle.
# @param message [Hash] original message payload.
# @param packet_id [Integer] packet identifier for the message.
# @param decrypted [Hash] decrypted payload metadata.
# @param rx_time [Integer] receive time.
# @param rx_iso [String] ISO 8601 receive timestamp.
# @param from_id [String, nil] canonical sender identifier.
# @param to_id [String, nil] destination identifier.
# @param channel [Integer, nil] channel index.
# @param portnum [Object, nil] port number identifier.
# @param hop_limit [Integer, nil] hop limit value.
# @param snr [Numeric, nil] signal-to-noise ratio.
# @param rssi [Integer, nil] RSSI value.
# @return [void]
def store_decrypted_payload(
db,
message,
packet_id,
decrypted,
rx_time:,
rx_iso:,
from_id:,
to_id:,
channel:,
portnum:,
hop_limit:,
snr:,
rssi:
)
payload_bytes = decrypted[:payload]
return false unless payload_bytes
portnum_value = coerce_integer(portnum || decrypted[:portnum])
return false unless portnum_value
payload_b64 = Base64.strict_encode64(payload_bytes)
supported_ports = [3, 4, 67, 70, 71]
return false unless supported_ports.include?(portnum_value)
decoded = PotatoMesh::App::Meshtastic::PayloadDecoder.decode(
portnum: portnum_value,
payload_b64: payload_b64,
)
return false unless decoded.is_a?(Hash)
return false unless decoded["payload"].is_a?(Hash)
common_payload = {
"id" => packet_id,
"packet_id" => packet_id,
"rx_time" => rx_time,
"rx_iso" => rx_iso,
"from_id" => from_id,
"to_id" => to_id,
"channel" => channel,
"portnum" => portnum_value.to_s,
"hop_limit" => hop_limit,
"snr" => snr,
"rssi" => rssi,
"lora_freq" => coerce_integer(message["lora_freq"] || message["loraFrequency"]),
"modem_preset" => string_or_nil(message["modem_preset"] || message["modemPreset"]),
"payload_b64" => payload_b64,
"ingestor" => string_or_nil(message["ingestor"]),
}
case decoded["type"]
when "POSITION_APP"
payload = common_payload.merge("position" => decoded["payload"])
insert_position(db, payload)
debug_log(
"Stored decrypted position payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
when "NODEINFO_APP"
node_payload = normalize_decrypted_nodeinfo_payload(decoded["payload"])
return false unless valid_decrypted_nodeinfo_payload?(node_payload)
node_id = string_or_nil(node_payload["id"]) || from_id
node_num = coerce_integer(node_payload["num"]) ||
coerce_integer(message["from_num"]) ||
resolve_node_num(from_id, message)
node_id ||= format("!%08x", node_num & 0xFFFFFFFF) if node_num
return false unless node_id
payload = node_payload.merge(
"num" => node_num,
"lastHeard" => coerce_integer(node_payload["lastHeard"] || node_payload["last_heard"]) || rx_time,
"snr" => node_payload.key?("snr") ? node_payload["snr"] : snr,
"lora_freq" => common_payload["lora_freq"],
"modem_preset" => common_payload["modem_preset"],
)
upsert_node(db, node_id, payload)
debug_log(
"Stored decrypted node payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
node_id: node_id,
)
true
when "TELEMETRY_APP"
payload = common_payload.merge("telemetry" => decoded["payload"])
insert_telemetry(db, payload)
debug_log(
"Stored decrypted telemetry payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
when "NEIGHBORINFO_APP"
neighbor_payload = decoded["payload"]
neighbors = neighbor_payload["neighbors"]
neighbors = [] unless neighbors.is_a?(Array)
normalized_neighbors = neighbors.map do |neighbor|
next unless neighbor.is_a?(Hash)
{
"neighbor_id" => neighbor["node_id"] || neighbor["nodeId"] || neighbor["id"],
"snr" => neighbor["snr"],
"rx_time" => neighbor["last_rx_time"],
}.compact
end.compact
return false if normalized_neighbors.empty?
payload = common_payload.merge(
"node_id" => neighbor_payload["node_id"] || from_id,
"neighbors" => normalized_neighbors,
"node_broadcast_interval_secs" => neighbor_payload["node_broadcast_interval_secs"],
"last_sent_by_id" => neighbor_payload["last_sent_by_id"],
)
insert_neighbors(db, payload)
debug_log(
"Stored decrypted neighbor payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
when "TRACEROUTE_APP"
route = decoded["payload"]["route"]
route_back = decoded["payload"]["route_back"]
hops = route.is_a?(Array) ? route : (route_back.is_a?(Array) ? route_back : [])
dest = hops.last if hops.is_a?(Array) && !hops.empty?
src_num = coerce_integer(message["from_num"]) || resolve_node_num(from_id, message)
payload = common_payload.merge(
"src" => src_num,
"dest" => dest,
"hops" => hops,
)
insert_trace(db, payload)
debug_log(
"Stored decrypted traceroute payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
else
false
end
end
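The admission gate at the top of `store_decrypted_payload` can be sketched in isolation: a payload is persisted only when it carries bytes and its port is one of the five decodable apps (3 POSITION_APP, 4 NODEINFO_APP, 67 TELEMETRY_APP, 70 TRACEROUTE_APP, 71 NEIGHBORINFO_APP). A minimal standalone sketch; `storable_payload?` is an illustrative name, not part of the codebase.

```ruby
# Illustrative sketch of the port gate used by store_decrypted_payload.
# The five supported port numbers mirror the case branches above.
SUPPORTED_PORTS = [3, 4, 67, 70, 71].freeze

def storable_payload?(payload_bytes, portnum)
  # Reject missing or empty payloads before consulting the port list.
  return false unless payload_bytes && !payload_bytes.empty?
  SUPPORTED_PORTS.include?(portnum)
end
```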
# Validate decoded NodeInfo payloads before upserting node records.
#
# @param payload [Object] decoded payload candidate.
# @return [Boolean] true when the payload resembles a Meshtastic NodeInfo.
def valid_decrypted_nodeinfo_payload?(payload)
return false unless payload.is_a?(Hash)
return false if payload.empty?
return false unless payload["user"].is_a?(Hash)
return false if payload.key?("position") && !payload["position"].is_a?(Hash)
return false if payload.key?("deviceMetrics") && !payload["deviceMetrics"].is_a?(Hash)
return false unless nodeinfo_user_has_identifying_fields?(payload["user"])
true
end
# Normalize decoded NodeInfo payload keys for +upsert_node+ compatibility.
#
# The Python decoder preserves protobuf field names, so nested hashes may
# use +snake_case+ keys that +upsert_node+ does not read.
#
# @param payload [Object] decoded NodeInfo payload.
# @return [Hash] normalized payload hash.
def normalize_decrypted_nodeinfo_payload(payload)
return {} unless payload.is_a?(Hash)
user = payload["user"]
normalized_user = user.is_a?(Hash) ? user.dup : nil
if normalized_user
normalized_user["shortName"] ||= normalized_user["short_name"]
normalized_user["longName"] ||= normalized_user["long_name"]
normalized_user["hwModel"] ||= normalized_user["hw_model"]
normalized_user["publicKey"] ||= normalized_user["public_key"]
normalized_user["isUnmessagable"] = normalized_user["is_unmessagable"] if normalized_user.key?("is_unmessagable")
end
metrics = payload["deviceMetrics"] || payload["device_metrics"]
normalized_metrics = metrics.is_a?(Hash) ? metrics.dup : nil
if normalized_metrics
normalized_metrics["batteryLevel"] ||= normalized_metrics["battery_level"]
normalized_metrics["channelUtilization"] ||= normalized_metrics["channel_utilization"]
normalized_metrics["airUtilTx"] ||= normalized_metrics["air_util_tx"]
normalized_metrics["uptimeSeconds"] ||= normalized_metrics["uptime_seconds"]
end
position = payload["position"]
normalized_position = position.is_a?(Hash) ? position.dup : nil
if normalized_position
normalized_position["precisionBits"] ||= normalized_position["precision_bits"]
normalized_position["locationSource"] ||= normalized_position["location_source"]
end
normalized = payload.dup
normalized["user"] = normalized_user if normalized_user
normalized["deviceMetrics"] = normalized_metrics if normalized_metrics
normalized["position"] = normalized_position if normalized_position
normalized["lastHeard"] ||= normalized["last_heard"]
normalized["hopsAway"] ||= normalized["hops_away"]
normalized["isFavorite"] = normalized["is_favorite"] if normalized.key?("is_favorite")
normalized["hwModel"] ||= normalized["hw_model"]
normalized
end
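The normalization above repeatedly applies the same snake_case to camelCase aliasing with `||=` so an existing camelCase value always wins. A standalone sketch of that pattern on a flat hash; the `alias_keys` helper and its alias table are illustrative only:

```ruby
# Illustrative snake_case -> camelCase aliasing, mirroring the ||= pattern
# used by normalize_decrypted_nodeinfo_payload above.
KEY_ALIASES = {
  "short_name" => "shortName",
  "long_name"  => "longName",
  "hw_model"   => "hwModel",
  "public_key" => "publicKey",
}.freeze

def alias_keys(hash)
  out = hash.dup
  # Only fill the camelCase key when it is absent or nil.
  KEY_ALIASES.each { |snake, camel| out[camel] ||= out[snake] }
  out
end
```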
# Validate that a decoded NodeInfo user section contains identifying data.
#
# @param user [Hash] decoded NodeInfo user payload.
# @return [Boolean] true when at least one identifying field is present.
def nodeinfo_user_has_identifying_fields?(user)
identifying_fields = [
user["id"],
user["shortName"],
user["short_name"],
user["longName"],
user["long_name"],
user["macaddr"],
user["hwModel"],
user["hw_model"],
user["publicKey"],
user["public_key"],
]
identifying_fields.any? do |value|
value.is_a?(String) ? !value.strip.empty? : !value.nil?
end
end
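When a decoded NodeInfo lacks an `id`, the code above falls back to formatting the node number as the canonical `!xxxxxxxx` identifier, masked to 32 bits. A minimal sketch of that fallback; the `fallback_node_id` helper name is hypothetical:

```ruby
# Illustrative sketch of the node-id fallback used above:
# format the 32-bit node number as a lowercase-hex "!xxxxxxxx" id.
def fallback_node_id(node_num)
  return nil unless node_num
  format("!%08x", node_num & 0xFFFFFFFF)
end
```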
def normalize_node_id(db, node_ref)

View File

@@ -149,6 +149,9 @@ module PotatoMesh
db.execute("ALTER TABLE messages ADD COLUMN emoji TEXT")
message_columns << "emoji"
end
unless message_columns.include?("ingestor")
db.execute("ALTER TABLE messages ADD COLUMN ingestor TEXT")
end
reply_index_exists =
db.get_first_value(
@@ -188,6 +191,31 @@ module PotatoMesh
db.execute("ALTER TABLE telemetry ADD COLUMN #{name} #{type}")
telemetry_columns << name
end
unless telemetry_columns.include?("ingestor")
db.execute("ALTER TABLE telemetry ADD COLUMN ingestor TEXT")
end
position_tables =
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='positions'").flatten
if position_tables.empty?
positions_schema = File.expand_path("../../../../data/positions.sql", __dir__)
db.execute_batch(File.read(positions_schema))
end
position_columns = db.execute("PRAGMA table_info(positions)").map { |row| row[1] }
unless position_columns.include?("ingestor")
db.execute("ALTER TABLE positions ADD COLUMN ingestor TEXT")
end
neighbor_tables =
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='neighbors'").flatten
if neighbor_tables.empty?
neighbors_schema = File.expand_path("../../../../data/neighbors.sql", __dir__)
db.execute_batch(File.read(neighbors_schema))
end
neighbor_columns = db.execute("PRAGMA table_info(neighbors)").map { |row| row[1] }
unless neighbor_columns.include?("ingestor")
db.execute("ALTER TABLE neighbors ADD COLUMN ingestor TEXT")
end
trace_tables =
db.execute(
@@ -197,6 +225,10 @@ module PotatoMesh
traces_schema = File.expand_path("../../../../data/traces.sql", __dir__)
db.execute_batch(File.read(traces_schema))
end
trace_columns = db.execute("PRAGMA table_info(traces)").map { |row| row[1] }
unless trace_columns.include?("ingestor")
db.execute("ALTER TABLE traces ADD COLUMN ingestor TEXT")
end
ingestor_tables =
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='ingestors'").flatten

View File

@@ -17,6 +17,8 @@
module PotatoMesh
module App
module Federation
FEDERATION_SLEEP_SLICE_SECONDS = 0.2
# Resolve the canonical domain for the running instance.
#
# @return [String, nil] sanitized instance domain or nil outside production.
@@ -170,6 +172,9 @@ module PotatoMesh
# @return [PotatoMesh::App::WorkerPool, nil] active worker pool if created.
def ensure_federation_worker_pool!
return nil unless federation_enabled?
return nil if federation_shutdown_requested?
ensure_federation_shutdown_hook!
existing = settings.respond_to?(:federation_worker_pool) ? settings.federation_worker_pool : nil
return existing if existing&.alive?
@@ -177,19 +182,81 @@ module PotatoMesh
pool = PotatoMesh::App::WorkerPool.new(
size: PotatoMesh::Config.federation_worker_pool_size,
max_queue: PotatoMesh::Config.federation_worker_queue_capacity,
task_timeout: PotatoMesh::Config.federation_task_timeout_seconds,
name: "potato-mesh-fed",
)
set(:federation_worker_pool, pool) if respond_to?(:set)
pool
end
# Ensure federation background workers are torn down during process exit.
#
# @return [void]
def ensure_federation_shutdown_hook!
application = is_a?(Class) ? self : self.class
return application.ensure_federation_shutdown_hook! unless application.equal?(self)
installed = if respond_to?(:settings) && settings.respond_to?(:federation_shutdown_hook_installed)
settings.federation_shutdown_hook_installed
else
instance_variable_defined?(:@federation_shutdown_hook_installed) && @federation_shutdown_hook_installed
end
return if installed
if respond_to?(:set) && settings.respond_to?(:federation_shutdown_hook_installed=)
set(:federation_shutdown_hook_installed, true)
else
@federation_shutdown_hook_installed = true
end
at_exit do
begin
application.shutdown_federation_background_work!(timeout: PotatoMesh::Config.federation_shutdown_timeout_seconds)
rescue StandardError
# Suppress shutdown errors during interpreter teardown.
end
end
end
# Check whether federation workers have received a shutdown request.
#
# @return [Boolean] true when stop has been requested.
def federation_shutdown_requested?
return false unless respond_to?(:settings)
return false unless settings.respond_to?(:federation_shutdown_requested)
settings.federation_shutdown_requested == true
end
# Mark federation background work as shutting down.
#
# @return [void]
def request_federation_shutdown!
set(:federation_shutdown_requested, true) if respond_to?(:set)
end
# Clear any previously requested federation shutdown marker.
#
# @return [void]
def clear_federation_shutdown_request!
set(:federation_shutdown_requested, false) if respond_to?(:set)
end
# Sleep in short intervals so federation loops can react to shutdown.
#
# @param seconds [Numeric] target sleep duration.
# @return [Boolean] true when the full delay elapsed without shutdown.
def federation_sleep_with_shutdown(seconds)
remaining = seconds.to_f
while remaining.positive?
return false if federation_shutdown_requested?
slice = [remaining, FEDERATION_SLEEP_SLICE_SECONDS].min
Kernel.sleep(slice)
remaining -= slice
end
!federation_shutdown_requested?
end
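`federation_sleep_with_shutdown` sleeps in 0.2 s slices so a shutdown request interrupts a long wait within at most one slice. The pattern, sketched standalone with an injectable stop flag; `sleep_with_stop` is an illustrative name:

```ruby
# Illustrative sliced-sleep pattern: sleep in short slices, checking a
# stop flag between slices, and report whether the full delay elapsed.
SLEEP_SLICE = 0.05

def sleep_with_stop(seconds, stop_flag)
  remaining = seconds.to_f
  while remaining > 0
    return false if stop_flag.call
    slice = [remaining, SLEEP_SLICE].min
    sleep(slice)
    remaining -= slice
  end
  !stop_flag.call
end
```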
# Shutdown and clear the federation worker pool if present.
@@ -213,6 +280,44 @@ module PotatoMesh
end
end
# Gracefully terminate federation background loops and worker pool tasks.
#
# @param timeout [Numeric, nil] maximum join time applied per thread.
# @return [void]
def shutdown_federation_background_work!(timeout: nil)
request_federation_shutdown!
timeout_value = timeout || PotatoMesh::Config.federation_shutdown_timeout_seconds
stop_federation_thread!(:initial_federation_thread, timeout: timeout_value)
stop_federation_thread!(:federation_thread, timeout: timeout_value)
shutdown_federation_worker_pool!
clear_federation_crawl_state!
end
# Stop a specific federation thread setting and clear its reference.
#
# @param setting_name [Symbol] settings key storing the thread object.
# @param timeout [Numeric] seconds to wait for clean thread exit.
# @return [void]
def stop_federation_thread!(setting_name, timeout:)
return unless respond_to?(:settings)
return unless settings.respond_to?(setting_name)
thread = settings.public_send(setting_name)
if thread&.alive?
begin
thread.wakeup if thread.respond_to?(:wakeup)
rescue ThreadError
# The thread may not currently be sleeping; continue shutdown.
end
thread.join(timeout)
if thread.alive?
thread.kill
thread.join(0.1)
end
end
set(setting_name, nil) if respond_to?(:set)
end
def federation_target_domains(self_domain)
normalized_self = sanitize_instance_domain(self_domain)&.downcase
ordered = []
@@ -264,16 +369,21 @@ module PotatoMesh
def announce_instance_to_domain(domain, payload_json)
return false unless domain && !domain.empty?
return false if federation_shutdown_requested?
https_failures = []
published = instance_uri_candidates(domain, "/api/instances").any? do |uri|
break false if federation_shutdown_requested?
begin
http = build_remote_http_client(uri)
response = Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Post, uri)
request.body = payload_json
connection.request(request)
end
end
if response.is_a?(Net::HTTPSuccess)
debug_log(
@@ -282,14 +392,16 @@ module PotatoMesh
target: uri.to_s,
status: response.code,
)
true
else
debug_log(
"Federation announcement failed",
context: "federation.announce",
target: uri.to_s,
status: response.code,
)
false
end
rescue StandardError => e
metadata = {
context: "federation.announce",
@@ -304,9 +416,18 @@ module PotatoMesh
**metadata,
)
https_failures << metadata
else
warn_log(
"Federation announcement raised exception",
**metadata,
)
end
false
end
end
unless published
https_failures.each do |metadata|
warn_log(
"Federation announcement raised exception",
**metadata,
@@ -314,14 +435,7 @@ module PotatoMesh
)
end
end
published
end
# Determine whether an HTTPS announcement failure should fall back to HTTP.
@@ -341,6 +455,7 @@ module PotatoMesh
def announce_instance_to_all_domains
return unless federation_enabled?
return if federation_shutdown_requested?
attributes, signature = ensure_self_instance_record!
payload_json = JSON.generate(instance_announcement_payload(attributes, signature))
@@ -348,13 +463,15 @@ module PotatoMesh
pool = federation_worker_pool
scheduled = []
domains.each_with_object(scheduled) do |domain, scheduled_tasks|
break if federation_shutdown_requested?
if pool
begin
task = pool.schedule do
announce_instance_to_domain(domain, payload_json)
end
scheduled_tasks << [domain, task]
next
rescue PotatoMesh::App::WorkerPool::QueueFullError
warn_log(
@@ -395,7 +512,9 @@ module PotatoMesh
return if scheduled.empty?
timeout = PotatoMesh::Config.federation_task_timeout_seconds
scheduled.all? do |domain, task|
break false if federation_shutdown_requested?
begin
task.wait(timeout: timeout)
rescue PotatoMesh::App::WorkerPool::TaskTimeoutError => e
@@ -416,19 +535,23 @@ module PotatoMesh
error_message: e.message,
)
end
true
end
end
def start_federation_announcer!
# Federation broadcasts must not execute when federation support is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!
existing = settings.federation_thread
return existing if existing&.alive?
thread = Thread.new do
loop do
break unless federation_sleep_with_shutdown(PotatoMesh::Config.federation_announcement_interval)
begin
announce_instance_to_all_domains
rescue StandardError => e
@@ -442,6 +565,8 @@ module PotatoMesh
end
end
thread.name = "potato-mesh-federation" if thread.respond_to?(:name=)
# Allow shutdown even if the announcement loop is still sleeping.
thread.daemon = true if thread.respond_to?(:daemon=)
set(:federation_thread, thread)
thread
end
@@ -452,6 +577,8 @@ module PotatoMesh
def start_initial_federation_announcement!
# Skip the initial broadcast entirely when federation is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!
existing = settings.respond_to?(:initial_federation_thread) ? settings.initial_federation_thread : nil
return existing if existing&.alive?
@@ -459,7 +586,12 @@ module PotatoMesh
thread = Thread.new do
begin
delay = PotatoMesh::Config.initial_federation_delay_seconds
if delay.positive?
completed = federation_sleep_with_shutdown(delay)
next unless completed
end
next if federation_shutdown_requested?
announce_instance_to_all_domains
rescue StandardError => e
warn_log(
@@ -474,6 +606,8 @@ module PotatoMesh
end
thread.name = "potato-mesh-federation-initial" if thread.respond_to?(:name=)
thread.report_on_exception = false if thread.respond_to?(:report_on_exception=)
# Avoid blocking process shutdown during delayed startup announcements.
thread.daemon = true if thread.respond_to?(:daemon=)
set(:initial_federation_thread, thread)
thread
end
@@ -518,15 +652,19 @@ module PotatoMesh
end
def perform_instance_http_request(uri)
raise InstanceFetchError, "federation shutdown requested" if federation_shutdown_requested?
http = build_remote_http_client(uri)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Get, uri)
response = connection.request(request)
case response
when Net::HTTPSuccess
response.body
else
raise InstanceFetchError, "unexpected response #{response.code}"
end
end
end
rescue StandardError => e
@@ -583,8 +721,12 @@ module PotatoMesh
end
def fetch_instance_json(domain, path)
return [nil, ["federation shutdown requested"]] if federation_shutdown_requested?
errors = []
instance_uri_candidates(domain, path).each do |uri|
break if federation_shutdown_requested?
begin
body = perform_instance_http_request(uri)
return [JSON.parse(body), uri] if body
@@ -597,6 +739,34 @@ module PotatoMesh
[nil, errors]
end
# Resolve the best matching active-node count from a remote /api/stats payload.
#
# @param payload [Hash, nil] decoded JSON payload from /api/stats.
# @param max_age_seconds [Integer] activity window currently expected for federation freshness.
# @return [Integer, nil] selected active-node count when available.
def remote_active_node_count_from_stats(payload, max_age_seconds:)
return nil unless payload.is_a?(Hash)
active_nodes = payload["active_nodes"]
return nil unless active_nodes.is_a?(Hash)
age = coerce_integer(max_age_seconds) || 0
key = if age <= 3600
"hour"
elsif age <= 86_400
"day"
elsif age <= PotatoMesh::Config.week_seconds
"week"
else
"month"
end
value = coerce_integer(active_nodes[key])
return nil unless value
[value, 0].max
end
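The window selection in `remote_active_node_count_from_stats` buckets the freshness window into the remote stats keys `hour`, `day`, `week`, and `month`. A standalone sketch of that bucketing, assuming `Config.week_seconds` means seven days:

```ruby
# Illustrative bucketing of a freshness window (seconds) into the
# /api/stats active_nodes keys used above. WEEK_SECONDS is an assumption.
WEEK_SECONDS = 7 * 86_400

def stats_window_key(max_age_seconds)
  age = max_age_seconds.to_i
  if age <= 3600
    "hour"
  elsif age <= 86_400
    "day"
  elsif age <= WEEK_SECONDS
    "week"
  else
    "month"
  end
end
```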
# Parse a remote federation instance payload into canonical attributes.
#
# @param payload [Hash] JSON object describing a remote instance.
@@ -657,51 +827,149 @@ module PotatoMesh
# @param overall_limit [Integer, nil] maximum unique domains visited.
# @return [Boolean] true when the crawl was scheduled successfully.
def enqueue_federation_crawl(domain, per_response_limit:, overall_limit:)
sanitized_domain = sanitize_instance_domain(domain)
unless sanitized_domain
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: "invalid domain",
)
return false
end
return false if federation_shutdown_requested?
application = is_a?(Class) ? self : self.class
pool = application.federation_worker_pool
unless pool
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
reason: "federation disabled",
)
return false
end
claim_result = application.claim_federation_crawl_slot(sanitized_domain)
unless claim_result == :claimed
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
reason: claim_result == :in_flight ? "crawl already in flight" : "recent crawl completed",
)
return false
end
pool.schedule do
db = nil
begin
db = application.open_database
application.ingest_known_instances_from!(
db,
sanitized_domain,
per_response_limit: per_response_limit,
overall_limit: overall_limit,
)
ensure
db&.close
application.release_federation_crawl_slot(sanitized_domain)
end
end
true
rescue PotatoMesh::App::WorkerPool::QueueFullError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker queue saturated")
rescue PotatoMesh::App::WorkerPool::ShutdownError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker pool shut down")
end
# Handle a failed crawl schedule attempt without applying cooldown.
#
# @param domain [String] canonical domain that failed to schedule.
# @param reason [String] human-readable failure reason.
# @return [Boolean] always false because scheduling did not succeed.
def handle_failed_federation_crawl_schedule(domain, reason)
release_federation_crawl_slot(domain, record_completion: false)
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: "worker pool shut down",
reason: reason,
)
false
end
# Initialize shared in-memory state used to deduplicate crawl scheduling.
#
# @return [void]
def initialize_federation_crawl_state!
@federation_crawl_init_mutex ||= Mutex.new
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex
@federation_crawl_init_mutex.synchronize do
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex
@federation_crawl_mutex = Mutex.new
@federation_crawl_in_flight = Set.new
@federation_crawl_last_completed_at = {}
end
end
# Retrieve the cooldown period used for duplicate crawl suppression.
#
# @return [Integer] seconds a domain remains in cooldown after completion.
def federation_crawl_cooldown_seconds
PotatoMesh::Config.federation_crawl_cooldown_seconds
end
# Mark a domain crawl as claimed if no active or recent crawl exists.
#
# @param domain [String] canonical domain name.
# @return [Symbol] +:claimed+, +:in_flight+, or +:cooldown+.
def claim_federation_crawl_slot(domain)
initialize_federation_crawl_state!
now = Time.now.to_i
@federation_crawl_mutex.synchronize do
return :in_flight if @federation_crawl_in_flight.include?(domain)
last_completed = @federation_crawl_last_completed_at[domain]
if last_completed && now - last_completed < federation_crawl_cooldown_seconds
return :cooldown
end
@federation_crawl_in_flight << domain
:claimed
end
end
# Release an in-flight crawl claim and record completion timestamp.
#
# @param domain [String] canonical domain name.
# @param record_completion [Boolean] true to apply cooldown tracking.
# @return [void]
def release_federation_crawl_slot(domain, record_completion: true)
return unless domain
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.delete(domain)
@federation_crawl_last_completed_at[domain] = Time.now.to_i if record_completion
end
end
# Clear all in-memory crawl scheduling state.
#
# @return [void]
def clear_federation_crawl_state!
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.clear
@federation_crawl_last_completed_at.clear
end
end
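The claim/release protocol above serializes at most one in-flight crawl per domain and applies a cooldown after completion. A self-contained sketch of that protocol under those assumptions; the class name and API are illustrative:

```ruby
require "set"

# Illustrative per-domain crawl deduplication: one in-flight claim per
# domain, plus a completion cooldown, guarded by a single mutex.
class CrawlSlots
  def initialize(cooldown:)
    @mutex = Mutex.new
    @in_flight = Set.new
    @last_done = {}
    @cooldown = cooldown
  end

  # Returns :claimed, :in_flight, or :cooldown, mirroring the states above.
  def claim(domain, now: Time.now.to_i)
    @mutex.synchronize do
      return :in_flight if @in_flight.include?(domain)
      last = @last_done[domain]
      return :cooldown if last && now - last < @cooldown
      @in_flight << domain
      :claimed
    end
  end

  def release(domain, now: Time.now.to_i, record_completion: true)
    @mutex.synchronize do
      @in_flight.delete(domain)
      @last_done[domain] = now if record_completion
    end
  end
end
```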
# Recursively ingest federation records exposed by the supplied domain.
#
# @param db [SQLite3::Database] open database connection used for writes.
@@ -719,6 +987,7 @@ module PotatoMesh
)
sanitized = sanitize_instance_domain(domain)
return visited || Set.new unless sanitized
return visited || Set.new if federation_shutdown_requested?
visited ||= Set.new
@@ -753,6 +1022,8 @@ module PotatoMesh
processed_entries = 0
recent_cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
payload.each do |entry|
break if federation_shutdown_requested?
if per_response_limit && per_response_limit.positive? && processed_entries >= per_response_limit
debug_log(
"Skipped remote instance entry due to response limit",
@@ -806,21 +1077,33 @@ module PotatoMesh
attributes[:is_private] = false if attributes[:is_private].nil?
stats_payload, stats_metadata = fetch_instance_json(attributes[:domain], "/api/stats")
stats_count = remote_active_node_count_from_stats(
stats_payload,
max_age_seconds: PotatoMesh::Config.remote_instance_max_node_age,
)
attributes[:nodes_count] = stats_count if stats_count
nodes_since_path = "/api/nodes?since=#{recent_cutoff}&limit=1000"
nodes_since_window, nodes_since_metadata = fetch_instance_json(attributes[:domain], nodes_since_path)
if stats_count.nil? && attributes[:nodes_count].nil? && nodes_since_window.is_a?(Array)
attributes[:nodes_count] = nodes_since_window.length
elsif nodes_since_metadata
warn_log(
"Failed to load remote node window",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(nodes_since_metadata).map(&:to_s).join("; "),
)
end
remote_nodes, node_metadata = fetch_instance_json(attributes[:domain], "/api/nodes")
remote_nodes = nodes_since_window if remote_nodes.nil? && nodes_since_window.is_a?(Array)
if attributes[:nodes_count].nil? && remote_nodes.is_a?(Array)
attributes[:nodes_count] = remote_nodes.length
end
if stats_count.nil? && Array(stats_metadata).any?
debug_log(
"Remote instance /api/stats unavailable; using node list fallback",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(stats_metadata).map(&:to_s).join("; "),
)
end
unless remote_nodes
warn_log(
"Failed to load remote node data",

View File

@@ -20,6 +20,8 @@ module PotatoMesh
# its intended consumers to ensure consistent behaviour across the Sinatra
# application.
module Helpers
ANNOUNCEMENT_URL_PATTERN = %r{\bhttps?://[^\s<]+}i.freeze
# Fetch an application level constant exposed by {PotatoMesh::Application}.
#
# @param name [Symbol] constant identifier to retrieve.
@@ -92,6 +94,47 @@ module PotatoMesh
PotatoMesh::Sanitizer.sanitized_site_name
end
# Retrieve the configured announcement banner copy.
#
# @return [String, nil] sanitised announcement or nil when unset.
def sanitized_announcement
PotatoMesh::Sanitizer.sanitized_announcement
end
# Render the announcement copy with safe outbound links.
#
# @return [String, nil] escaped HTML snippet or nil when unset.
def announcement_html
announcement = sanitized_announcement
return nil unless announcement
fragments = []
last_index = 0
announcement.to_enum(:scan, ANNOUNCEMENT_URL_PATTERN).each do
match = Regexp.last_match
next unless match
start_index = match.begin(0)
end_index = match.end(0)
if start_index > last_index
fragments << Rack::Utils.escape_html(announcement[last_index...start_index])
end
url = match[0]
escaped_url = Rack::Utils.escape_html(url)
fragments << %(<a href="#{escaped_url}" target="_blank" rel="noopener noreferrer">#{escaped_url}</a>)
last_index = end_index
end
if last_index < announcement.length
fragments << Rack::Utils.escape_html(announcement[last_index..])
end
fragments.join
end
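`announcement_html` walks the announcement with `to_enum(:scan, …)` so `Regexp.last_match` exposes match offsets, escaping the surrounding prose and wrapping each URL in a hardened anchor. A standalone sketch of the same walk using stdlib `CGI.escapeHTML` in place of `Rack::Utils.escape_html`:

```ruby
require "cgi"

# Illustrative linkification: escape prose, wrap URLs in rel="noopener
# noreferrer" anchors. The pattern mirrors ANNOUNCEMENT_URL_PATTERN above.
URL_PATTERN = %r{\bhttps?://[^\s<]+}i

def linkify(text)
  out = +""
  last = 0
  # to_enum(:scan, ...) lets us read MatchData offsets via Regexp.last_match.
  text.to_enum(:scan, URL_PATTERN).each do
    m = Regexp.last_match
    out << CGI.escapeHTML(text[last...m.begin(0)]) if m.begin(0) > last
    url = CGI.escapeHTML(m[0])
    out << %(<a href="#{url}" target="_blank" rel="noopener noreferrer">#{url}</a>)
    last = m.end(0)
  end
  out << CGI.escapeHTML(text[last..]) if last < text.length
  out
end
```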
# Retrieve the configured channel.
#
# @return [String] sanitised channel identifier.

View File

@@ -0,0 +1,102 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "base64"
module PotatoMesh
module App
module Meshtastic
# Compute Meshtastic channel hashes from a name and pre-shared key.
module ChannelHash
module_function
DEFAULT_PSK_ALIAS_KEYS = {
1 => [
0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01,
].pack("C*"),
2 => [
0x38, 0x4B, 0xBC, 0xC0, 0x1D, 0xC0, 0x22, 0xD1,
0x81, 0xBF, 0x36, 0xB8, 0x61, 0x21, 0xE1, 0xFB,
0x96, 0xB7, 0x2E, 0x55, 0xBF, 0x74, 0x22, 0x7E,
0x9D, 0x6A, 0xFB, 0x48, 0xD6, 0x4C, 0xB1, 0xA1,
].pack("C*"),
}.freeze
# Calculate the Meshtastic channel hash for the given name and PSK.
#
# @param name [String] channel name candidate.
# @param psk_b64 [String, nil] base64-encoded PSK or PSK alias.
# @return [Integer, nil] channel hash byte or nil when inputs are invalid.
def channel_hash(name, psk_b64)
return nil unless name
key = expanded_key(psk_b64)
return nil unless key
h_name = xor_bytes(name.b)
h_key = xor_bytes(key)
(h_name ^ h_key) & 0xFF
end
# Expand the provided PSK into a valid AES key length.
#
# @param psk_b64 [String, nil] base64 PSK value.
# @return [String, nil] expanded key bytes or nil when invalid.
def expanded_key(psk_b64)
raw = Base64.decode64(psk_b64.to_s)
case raw.bytesize
when 0
"".b
when 1
default_key_for_alias(raw.bytes.first)
when 2..15
(raw.bytes + [0] * (16 - raw.bytesize)).pack("C*")
when 16
raw
when 17..31
(raw.bytes + [0] * (32 - raw.bytesize)).pack("C*")
when 32
raw
else
nil
end
end
# Map PSK alias bytes to their default key material.
#
# @param alias_index [Integer, nil] alias identifier for the PSK.
# @return [String, nil] key bytes or nil when unknown.
def default_key_for_alias(alias_index)
return nil unless alias_index
DEFAULT_PSK_ALIAS_KEYS[alias_index]&.dup
end
# XOR all bytes in the given string or byte array.
#
# @param value [String, Array<Integer>] input byte sequence.
# @return [Integer] XOR of all bytes.
def xor_bytes(value)
bytes = value.is_a?(String) ? value.bytes : value
bytes.reduce(0) { |acc, byte| (acc ^ byte) & 0xFF }
end
end
end
end
end
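The hashing scheme above is small enough to verify by hand; a standalone sketch using the tabled key for the default PSK alias 1 (`"AQ=="`):

```ruby
# Standalone reproduction of the channel hash: XOR-fold the name bytes,
# XOR-fold the expanded key bytes, then XOR the two folds together.
def xor_fold(bytes)
  bytes.reduce(0) { |acc, b| (acc ^ b) & 0xFF }
end

# Expanded key for the default PSK alias 1, as listed in the table above.
default_key = [
  0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
  0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01,
]

hash = (xor_fold("LongFast".bytes) ^ xor_fold(default_key)) & 0xFF
puts hash # → 8, the well-known hash of the default LongFast channel
```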

View File

@@ -0,0 +1,28 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Meshtastic
# Canonical list of candidate channel names used to build rainbow tables.
module ChannelNames
CHANNEL_NAME_CANDIDATES = %w[
911 Admin ADMIN admin Alert Alpha AlphaNet Alpine Amateur Amazon Anaconda Aquila Arctic Ash Asteroid Astro Aurora Avalanche Backup Basalt Base Base1 Base2 BaseAlpha BaseBravo BaseCharlie Bavaria Beacon Bear BearNet Beat Berg Berlin BerlinMesh BerlinNet Beta BetaBerlin Bison Blackout Blizzard Bolt Bonfire Border Borealis Bravo BravoNet Breeze Bridge Bronze Burner Burrow Callisto Callsign Camp Campfire CampNet Caravan Carbon Carpet Central Chameleon Charlie Chat Checkpoint Checkpoint1 Checkpoint2 Cheetah City Clinic Cloud Cobra Collective Cologne Colony Comet Command Command1 Command2 CommandRoom Comms Comms1 Comms2 CommsNet Commune Control Control1 Control2 ControlRoom Convoy Copper Core Corvus Cosmos Courier Courier1 Courier2 CourierMesh CourierNet CQ CQ1 CQ2 Crow CrowNet DarkNet Dawn Daybreak Daylight Delta DeltaNet Demo DEMO DemoBerlin Den Desert Diamond Distress District Doctor Dortmund Downlink Downlink1 Draco Dragon DragonNet Dune Dusk Eagle EagleNet East EastStar Echo EchoMesh EchoNet Emergency emergency EMERGENCY EmergencyBerlin Epsilon Equinox Europa Falcon Field FieldNet Fire Fire1 Fire2 Firebird Firefly Fireline Fireteam Firewatch Flash Flock Fluss Fog Forest Fox FoxNet Foxtrot FoxtrotMesh FoxtrotNet Frankfurt Freedom Freq Freq1 Freq2 Friedrichshain Frontier Frost Galaxy Gale Gamma Ganymede Gecko General Ghost GhostNet Glacier Gold Granite Grassland Grid Grid1 Grid2 GridNet GridNorth GridSouth Griffin Group Ham HAM Hamburg HAMNet Harbor Harmony HarmonyNet Hawk HawkNet Haze Help Hessen Highway Hilltop Hinterland Hive Hospital HQ HQ1 HQ2 Hub Hub1 Hub2 Hydra Ice Io Iron Jaguar Jungle Jupiter Kiez Kilo KiloMesh KiloNet Kraken Kreuzberg Lava Layer Layer1 Layer2 Layer3 Leipzig Leopard Liberty LightNet Lightning Lima Link Lion Lizard LongFast LongSlow LoRa LoRaBerlin LoRaHessen LoRaMesh LoRaNet LoRaTest Main Mars Med Med1 Med2 Medic MediumFast MediumSlow Mercury Mesh Mesh1 Mesh2 Mesh3 Mesh4 Mesh5 MeshBerlin MeshCollective MeshCologne MeshFrankfurt MeshGrid 
MeshHamburg MeshHessen MeshLeipzig MeshMunich MeshNet MeshNetwork MeshRuhr Meshtastic MeshTest Meteor Metro Midnight Mirage Mist MoonNet Munich Müggelberg Nebula Nest Network Neukölln Nexus Nightfall NightMesh NightNet Nightshift NightshiftNet Nightwatch Node1 Node2 Node3 Node4 Node5 Nomad NomadMesh NomadNet Nomads Nord North NorthStar Oasis Obsidian Omega Operations OPERATIONS Ops Ops1 Ops2 OpsCenter OpsRoom Orbit Ost Outpost Outsider Owl Pack Packet PacketNet PacketRadio Panther Paramedic Path Peak Phantom Phoenix PhoenixNet Platinum Pluto Polar Prairie Prenzlauer PRIVATE Private Public PUBLIC Pulse PulseNet Python Quasar Radio Radio1 Radio2 RadioNet Rain Ranger Raven RavenNet Relay Relay1 Relay2 Repeater Repeater1 Repeater2 RepeaterHub Rescue Rescue1 Rescue2 RescueTeam Rhythm Ridge River Road Rock Router Router1 Router2 Rover Ruhr Runner Runners Safari Safe Safety Sahara Saturn Savanna Saxony Scout Sector Secure Sensor SENSOR Sensors SENSORS Shade Shadow ShadowNet Shelter Shelter1 Shelter2 ShortFast Sideband Sideband1 Sierra Signal Signal1 Signal2 SignalFire Signals Silver Smoke Snake Snow Solstice SOS Sos SOSBerlin South SouthStar Spectrum Squad StarNet Steel Stone Storm Storm1 Storm2 Stratum Stuttgart Summit SunNet Sunrise Sunset Sync SyncNet Syndicate Süd Tal Tango TangoMesh TangoNet Team Tempo Test TEST test TestBerlin Teufelsberg Thunder Tiger Titan Town Trail Tundra Tunnel Union Unit Universe Uplink Uplink1 Valley Venus Victor Village Viper Volcano Wald Wander Wanderer Wanderers Watch Watch1 Watch2 WaWi West WestStar Whisper Wind Wolf WolfDen WolfMesh WolfNet Wolfpack Wolves Woods Wyvern Zeta Zone Zone1 Zone2 Zone3 Zulu ZuluMesh ZuluNet
].freeze
end
end
end
end

View File

@@ -0,0 +1,183 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "base64"
require "openssl"
require_relative "channel_hash"
require_relative "protobuf"
module PotatoMesh
module App
module Meshtastic
# Decrypt Meshtastic payloads with AES-CTR using Meshtastic nonce rules.
module Cipher
module_function
DEFAULT_PSK_B64 = "AQ=="
TEXT_MESSAGE_PORTNUM = 1
# Decrypt an encrypted Meshtastic payload into UTF-8 text.
#
# @param cipher_b64 [String] base64-encoded encrypted payload.
# @param packet_id [Integer] packet identifier used for the nonce.
# @param from_id [String, nil] Meshtastic node identifier (e.g. "!9e95cf60").
# @param from_num [Integer, nil] numeric node identifier override.
# @param psk_b64 [String, nil] base64 PSK or alias.
# @return [String, nil] decrypted text or nil when decryption fails.
def decrypt_text(cipher_b64:, packet_id:, from_id: nil, from_num: nil, psk_b64: DEFAULT_PSK_B64)
data = decrypt_data(
cipher_b64: cipher_b64,
packet_id: packet_id,
from_id: from_id,
from_num: from_num,
psk_b64: psk_b64,
)
data && data[:text]
end
# Decrypt the Meshtastic data protobuf payload.
#
# @param cipher_b64 [String] base64-encoded encrypted payload.
# @param packet_id [Integer] packet identifier used for the nonce.
# @param from_id [String, nil] Meshtastic node identifier.
# @param from_num [Integer, nil] numeric node identifier override.
# @param psk_b64 [String, nil] base64 PSK or alias.
# @return [Hash, nil] decrypted data payload details or nil when decryption fails.
def decrypt_data(cipher_b64:, packet_id:, from_id: nil, from_num: nil, psk_b64: DEFAULT_PSK_B64)
ciphertext = Base64.strict_decode64(cipher_b64)
key = ChannelHash.expanded_key(psk_b64)
return nil unless key
return nil unless [16, 32].include?(key.bytesize)
packet_value = normalize_packet_id(packet_id)
return nil unless packet_value
from_value = normalize_node_num(from_id, from_num)
return nil unless from_value
nonce = build_nonce(packet_value, from_value)
plaintext = decrypt_aes_ctr(ciphertext, key, nonce)
return nil unless plaintext
data = Protobuf.parse_data(plaintext)
return nil unless data
text = nil
if data[:portnum] == TEXT_MESSAGE_PORTNUM
candidate = data[:payload].dup.force_encoding("UTF-8")
text = candidate if candidate.valid_encoding? && !candidate.empty?
end
{ portnum: data[:portnum], payload: data[:payload], text: text }
rescue ArgumentError, OpenSSL::Cipher::CipherError
nil
end
# Decrypt the Meshtastic data protobuf payload bytes.
#
# @param cipher_b64 [String] base64-encoded encrypted payload.
# @param packet_id [Integer] packet identifier used for the nonce.
# @param from_id [String, nil] Meshtastic node identifier.
# @param from_num [Integer, nil] numeric node identifier override.
# @param psk_b64 [String, nil] base64 PSK or alias.
# @return [String, nil] payload bytes or nil when decryption fails.
def decrypt_payload_bytes(cipher_b64:, packet_id:, from_id: nil, from_num: nil, psk_b64: DEFAULT_PSK_B64)
data = decrypt_data(
cipher_b64: cipher_b64,
packet_id: packet_id,
from_id: from_id,
from_num: from_num,
psk_b64: psk_b64,
)
data && data[:payload]
end
# Build the Meshtastic AES nonce from packet and node identifiers.
#
# @param packet_id [Integer] packet identifier.
# @param from_num [Integer] numeric node identifier.
# @return [String] 16-byte nonce.
def build_nonce(packet_id, from_num)
[packet_id].pack("Q<") + [from_num].pack("L<") + ("\x00" * 4)
end
# Decrypt data using AES-CTR with the derived nonce.
#
# @param ciphertext [String] encrypted payload bytes.
# @param key [String] expanded AES key bytes.
# @param nonce [String] 16-byte nonce.
# @return [String] decrypted plaintext bytes.
def decrypt_aes_ctr(ciphertext, key, nonce)
cipher_name = key.bytesize == 16 ? "aes-128-ctr" : "aes-256-ctr"
cipher = OpenSSL::Cipher.new(cipher_name)
cipher.decrypt
cipher.key = key
cipher.iv = nonce
cipher.update(ciphertext) + cipher.final
end
# Normalise the packet identifier into an integer.
#
# @param packet_id [Integer, nil] packet identifier.
# @return [Integer, nil] validated packet id or nil when invalid.
def normalize_packet_id(packet_id)
return packet_id if packet_id.is_a?(Integer) && packet_id >= 0
return nil if packet_id.nil?
if packet_id.is_a?(Numeric)
return nil if packet_id.negative?
return packet_id.to_i
end
return nil unless packet_id.respond_to?(:to_s)
trimmed = packet_id.to_s.strip
return nil if trimmed.empty?
return trimmed.to_i(10) if trimmed.match?(/\A\d+\z/)
nil
end
# Resolve the node number from any of the supported identifiers.
#
# @param from_id [String, nil] Meshtastic node identifier.
# @param from_num [Integer, nil] numeric node identifier override.
# @return [Integer, nil] node number or nil when invalid.
def normalize_node_num(from_id, from_num)
if from_num.is_a?(Integer)
return from_num & 0xFFFFFFFF
elsif from_num.is_a?(Numeric)
return from_num.to_i & 0xFFFFFFFF
end
return nil unless from_id
trimmed = from_id.to_s.strip
return nil if trimmed.empty?
hex = trimmed.delete_prefix("!")
hex = hex[2..] if hex.start_with?("0x", "0X")
return nil unless hex.match?(/\A[0-9A-Fa-f]+\z/)
hex.to_i(16) & 0xFFFFFFFF
end
end
end
end
end
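Since CTR mode applies the same keystream in both directions, the scheme above can be sanity-checked with a standalone round trip (key and identifiers here are illustrative values, not real traffic):

```ruby
require "openssl"

# AES-128-CTR round trip with the Meshtastic nonce layout used above:
# packet id as u64 little-endian, sender node number as u32
# little-endian, and a zeroed 32-bit block counter — 16 bytes total.
def aes_ctr(key, nonce, data)
  cipher = OpenSSL::Cipher.new("aes-128-ctr")
  cipher.encrypt # CTR is symmetric; one direction covers both operations
  cipher.key = key
  cipher.iv = nonce
  cipher.update(data) + cipher.final
end

key = [0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
       0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01].pack("C*")
nonce = [0xDEADBEEF].pack("Q<") + [0x9E95CF60].pack("L<") + ("\x00" * 4)

ciphertext = aes_ctr(key, nonce, "hello mesh")
puts aes_ctr(key, nonce, ciphertext) # → "hello mesh"
```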

View File

@@ -0,0 +1,120 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "json"
require "open3"
module PotatoMesh
module App
module Meshtastic
# Decode Meshtastic protobuf payloads via the Python helper script.
module PayloadDecoder
module_function
PYTHON_ENV_KEY = "MESHTASTIC_PYTHON"
DEFAULT_PYTHON_RELATIVE = File.join("data", ".venv", "bin", "python")
DEFAULT_DECODER_RELATIVE = File.join("data", "mesh_ingestor", "decode_payload.py")
FALLBACK_PYTHON_NAMES = ["python3", "python"].freeze
# Decode a protobuf payload using the Meshtastic helper.
#
# @param portnum [Integer] Meshtastic port number.
# @param payload_b64 [String] base64-encoded payload bytes.
# @return [Hash, nil] decoded payload hash or nil when decoding fails.
def decode(portnum:, payload_b64:)
return nil unless portnum && payload_b64
decoder_path = decoder_script_path
python_path = python_executable_path
return nil unless decoder_path && python_path
input = JSON.generate({ portnum: portnum, payload_b64: payload_b64 })
stdout, stderr, status = Open3.capture3(python_path, decoder_path, stdin_data: input)
return nil unless status.success?
parsed = JSON.parse(stdout)
return nil unless parsed.is_a?(Hash)
return nil if parsed["error"]
parsed
rescue JSON::ParserError
nil
rescue Errno::ENOENT
nil
rescue ArgumentError
nil
end
# Resolve the configured Python executable for Meshtastic decoding.
#
# @return [String, nil] python path or nil when missing.
def python_executable_path
configured = ENV[PYTHON_ENV_KEY]
return configured if configured && !configured.strip.empty?
candidate = File.expand_path(DEFAULT_PYTHON_RELATIVE, repo_root)
return candidate if File.exist?(candidate)
FALLBACK_PYTHON_NAMES.each do |name|
found = find_executable(name)
return found if found
end
nil
end
# Resolve the Meshtastic payload decoder script path.
#
# @return [String, nil] script path or nil when missing.
def decoder_script_path
repo_candidate = File.expand_path(DEFAULT_DECODER_RELATIVE, repo_root)
return repo_candidate if File.exist?(repo_candidate)
web_candidate = File.expand_path(DEFAULT_DECODER_RELATIVE, web_root)
return web_candidate if File.exist?(web_candidate)
nil
end
# Resolve the repository root directory from the application config.
#
# @return [String] absolute path to the repository root.
def repo_root
PotatoMesh::Config.repo_root
end
# Resolve the web application root directory from the application config.
#
# @return [String] absolute path to the web root.
def web_root
PotatoMesh::Config.web_root
end
# Locate an executable in PATH without invoking a subshell.
#
# @param name [String] executable name to resolve.
# @return [String, nil] full path when found.
def find_executable(name)
ENV.fetch("PATH", "").split(File::PATH_SEPARATOR).each do |path|
candidate = File.join(path, name)
return candidate if File.file?(candidate) && File.executable?(candidate)
end
nil
end
private_class_method :find_executable
end
end
end
end
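The stdin/stdout JSON handshake with the helper process can be sketched in isolation; here a Ruby one-liner stands in for the Python decoder (the echo script is purely illustrative):

```ruby
require "json"
require "open3"
require "rbconfig"

# Feed a JSON request on stdin and parse the subprocess's JSON reply,
# mirroring the decode flow above with a trivial echo subprocess.
input = JSON.generate({ portnum: 1, payload_b64: "aGVsbG8=" })
stdout, _stderr, status = Open3.capture3(
  RbConfig.ruby, "-e", 'require "json"; puts JSON.parse($stdin.read).to_json',
  stdin_data: input,
)
parsed = status.success? ? JSON.parse(stdout) : nil
puts parsed["portnum"] # → 1
```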

View File

@@ -0,0 +1,140 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Meshtastic
# Minimal protobuf helpers for extracting payload bytes from Meshtastic data.
module Protobuf
module_function
WIRE_TYPE_VARINT = 0
WIRE_TYPE_64BIT = 1
WIRE_TYPE_LENGTH_DELIMITED = 2
WIRE_TYPE_32BIT = 5
DATA_PORTNUM_FIELD = 1
DATA_PAYLOAD_FIELD = 2
# Extract a length-delimited field from a protobuf message.
#
# @param payload [String] raw protobuf-encoded bytes.
# @param field_number [Integer] field to extract.
# @return [String, nil] field bytes or nil when absent/invalid.
def extract_field_bytes(payload, field_number)
return nil unless payload && field_number
bytes = payload.bytes
index = 0
while index < bytes.length
tag, index = read_varint(bytes, index)
return nil unless tag
field = tag >> 3
wire = tag & 0x7
case wire
when WIRE_TYPE_VARINT
_, index = read_varint(bytes, index)
return nil unless index
when WIRE_TYPE_64BIT
index += 8
when WIRE_TYPE_LENGTH_DELIMITED
length, index = read_varint(bytes, index)
return nil unless length
return nil if index + length > bytes.length
value = bytes[index, length].pack("C*")
index += length
return value if field == field_number
when WIRE_TYPE_32BIT
index += 4
else
return nil
end
end
nil
end
# Parse a Meshtastic Data message for the port number and payload.
#
# @param payload [String] raw protobuf-encoded bytes.
# @return [Hash, nil] parsed port number and payload bytes.
def parse_data(payload)
return nil unless payload
bytes = payload.bytes
index = 0
portnum = nil
data_payload = nil
while index < bytes.length
tag, index = read_varint(bytes, index)
return nil unless tag
field = tag >> 3
wire = tag & 0x7
case wire
when WIRE_TYPE_VARINT
value, index = read_varint(bytes, index)
return nil unless value
portnum = value if field == DATA_PORTNUM_FIELD
when WIRE_TYPE_64BIT
index += 8
when WIRE_TYPE_LENGTH_DELIMITED
length, index = read_varint(bytes, index)
return nil unless length
return nil if index + length > bytes.length
value = bytes[index, length].pack("C*")
index += length
data_payload = value if field == DATA_PAYLOAD_FIELD
when WIRE_TYPE_32BIT
index += 4
else
return nil
end
end
return nil unless portnum && data_payload
{ portnum: portnum, payload: data_payload }
end
# Read a protobuf varint from a byte array.
#
# @param bytes [Array<Integer>] byte stream.
# @param index [Integer] read offset.
# @return [Array(Integer, Integer), nil] value and new index or nil when invalid.
def read_varint(bytes, index)
shift = 0
value = 0
while index < bytes.length
byte = bytes[index]
index += 1
value |= (byte & 0x7F) << shift
return [value, index] if (byte & 0x80).zero?
shift += 7
return nil if shift > 63
end
nil
end
end
end
end
end
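The varint loop above can be traced on a known encoding — seven payload bits per byte, least-significant group first, with the high bit as the continuation flag:

```ruby
# Same reader as above, standalone: accumulate 7-bit groups until a byte
# with a clear high bit terminates the value.
def read_varint(bytes, index)
  shift = 0
  value = 0
  while index < bytes.length
    byte = bytes[index]
    index += 1
    value |= (byte & 0x7F) << shift
    return [value, index] if (byte & 0x80).zero?
    shift += 7
    return nil if shift > 63
  end
  nil
end

# 300 encodes as [0xAC, 0x02]: 0x2C | (0x02 << 7) = 44 + 256.
value, next_index = read_varint([0xAC, 0x02], 0)
puts [value, next_index].inspect # → [300, 2]
```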

View File

@@ -0,0 +1,68 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require_relative "channel_hash"
require_relative "channel_names"
module PotatoMesh
module App
module Meshtastic
# Resolve candidate channel names for a hashed channel index.
module RainbowTable
module_function
@tables = {}
# Lookup candidate channel names for a hashed channel index.
#
# @param index [Integer, nil] channel hash byte.
# @param psk_b64 [String, nil] base64 PSK or alias.
# @return [Array<String>] list of candidate names.
def channel_names_for(index, psk_b64:)
return [] unless index.is_a?(Integer)
table_for(psk_b64)[index] || []
end
# Build or retrieve the cached rainbow table for the given PSK.
#
# @param psk_b64 [String, nil] base64 PSK or alias.
# @return [Hash{Integer=>Array<String>}] mapping of hash bytes to names.
def table_for(psk_b64)
key = psk_b64.to_s
@tables[key] ||= build_table(psk_b64)
end
# Build a hash-to-name mapping for the provided PSK.
#
# @param psk_b64 [String, nil] base64 PSK or alias.
# @return [Hash{Integer=>Array<String>}] mapping of hash bytes to names.
def build_table(psk_b64)
mapping = Hash.new { |hash, key| hash[key] = [] }
ChannelNames::CHANNEL_NAME_CANDIDATES.each do |name|
hash = ChannelHash.channel_hash(name, psk_b64)
next unless hash
mapping[hash] << name
end
mapping
end
end
end
end
end
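A minimal self-contained sketch of the table construction: fold each candidate into its hash byte and group collisions, mirroring `build_table` (candidate list truncated to four names for illustration):

```ruby
# Group candidate names by their channel hash under the default PSK
# alias key; several names can collide on the same single hash byte.
def xor_fold(bytes)
  bytes.reduce(0) { |acc, b| (acc ^ b) & 0xFF }
end

DEFAULT_KEY = [0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
               0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01].freeze
key_fold = xor_fold(DEFAULT_KEY)

table = Hash.new { |hash, key| hash[key] = [] }
%w[LongFast LongSlow MediumFast Admin].each do |name|
  table[(xor_fold(name.bytes) ^ key_fold) & 0xFF] << name
end

p table[8] # → ["LongFast"]
```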

View File

@@ -127,6 +127,43 @@ module PotatoMesh
[threshold, floor].max
end
# Return exact active-node counts across common activity windows.
#
# Counts are resolved directly in SQL as COUNT(*) subqueries with cutoff
# thresholds against +nodes.last_heard+, avoiding the sampling bias that
# list endpoint limits would otherwise introduce.
#
# @param now [Integer] reference unix timestamp in seconds.
# @param db [SQLite3::Database, nil] optional open database handle to reuse.
# @return [Hash{String => Integer}] counts keyed by hour/day/week/month.
def query_active_node_stats(now: Time.now.to_i, db: nil)
handle = db || open_database(readonly: true)
handle.results_as_hash = true
reference_now = coerce_integer(now) || Time.now.to_i
hour_cutoff = reference_now - 3600
day_cutoff = reference_now - 86_400
week_cutoff = reference_now - PotatoMesh::Config.week_seconds
month_cutoff = reference_now - (30 * 24 * 60 * 60)
private_filter = private_mode? ? " AND (role IS NULL OR role <> 'CLIENT_HIDDEN')" : ""
sql = <<~SQL
SELECT
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS hour_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS day_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS week_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS month_count
SQL
row = with_busy_retry do
handle.get_first_row(sql, [hour_cutoff, day_cutoff, week_cutoff, month_cutoff])
end || {}
{
"hour" => row["hour_count"].to_i,
"day" => row["day_count"].to_i,
"week" => row["week_count"].to_i,
"month" => row["month_count"].to_i,
}
ensure
handle&.close unless db
end
def node_reference_tokens(node_ref)
parts = canonical_node_parts(node_ref)
canonical_id, numeric_id = parts ? parts[0, 2] : [nil, nil]
@@ -213,7 +250,7 @@ module PotatoMesh
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to narrow results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
# @return [Array<Hash>] compacted node rows suitable for API responses.
def query_nodes(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
@@ -221,7 +258,8 @@ module PotatoMesh
db.results_as_hash = true
now = Time.now.to_i
min_last_heard = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_last_heard)
since_floor = node_ref ? 0 : min_last_heard
since_threshold = normalize_since_threshold(since, floor: since_floor)
params = []
where_clauses = []
@@ -283,7 +321,7 @@ module PotatoMesh
# Fetch ingestor heartbeats with optional freshness filtering.
#
# @param limit [Integer] maximum number of ingestors to return.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
# @return [Array<Hash>] compacted ingestor rows suitable for API responses.
def query_ingestors(limit, since: 0)
limit = coerce_query_limit(limit)
@@ -356,7 +394,7 @@ module PotatoMesh
SELECT m.id, m.rx_time, m.rx_iso, m.from_id, m.to_id, m.channel,
m.portnum, m.text, m.encrypted, m.rssi, m.hop_limit,
m.lora_freq, m.modem_preset, m.channel_name, m.snr,
m.reply_id, m.emoji
m.reply_id, m.emoji, m.ingestor
FROM messages m
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n"
@@ -370,6 +408,9 @@ module PotatoMesh
r.delete_if { |key, _| key.is_a?(Integer) }
r["reply_id"] = coerce_integer(r["reply_id"]) if r.key?("reply_id")
r["emoji"] = string_or_nil(r["emoji"]) if r.key?("emoji")
if string_or_nil(r["encrypted"])
r.delete("portnum")
end
if PotatoMesh::Config.debug? && (r["from_id"].nil? || r["from_id"].to_s.strip.empty?)
raw = db.execute("SELECT * FROM messages WHERE id = ?", [r["id"]]).first
debug_log(
@@ -422,7 +463,8 @@ module PotatoMesh
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
since_floor = node_ref ? 0 : min_rx_time
since_threshold = normalize_since_threshold(since, floor: since_floor)
where_clauses << "COALESCE(rx_time, position_time, 0) >= ?"
params << since_threshold
@@ -470,7 +512,7 @@ module PotatoMesh
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
# @return [Array<Hash>] compacted neighbor rows suitable for API responses.
def query_neighbors(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
@@ -480,7 +522,8 @@ module PotatoMesh
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
since_floor = node_ref ? 0 : min_rx_time
since_threshold = normalize_since_threshold(since, floor: since_floor)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << since_threshold
@@ -517,7 +560,7 @@ module PotatoMesh
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
# @return [Array<Hash>] compacted telemetry rows suitable for API responses.
def query_telemetry(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
@@ -527,7 +570,8 @@ module PotatoMesh
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
since_floor = node_ref ? 0 : min_rx_time
since_threshold = normalize_since_threshold(since, floor: since_floor)
where_clauses << "COALESCE(rx_time, telemetry_time, 0) >= ?"
params << since_threshold
@@ -734,7 +778,7 @@ module PotatoMesh
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
min_rx_time = now - PotatoMesh::Config.trace_neighbor_window_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << since_threshold
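The since-floor rule threaded through the queries above can be sketched in isolation. This is a simplified stand-in for `normalize_since_threshold` (the real helper also coerces its input), built on the `[threshold, floor].max` clamp shown earlier:

```ruby
# Collection queries clamp `since` to the rolling-window floor, while
# node-scoped queries (node_ref present) use a floor of zero so any
# caller-provided threshold passes through unchanged.
def normalize_since_threshold(since, floor:)
  [since.to_i, floor].max
end

now = 1_700_000_000
week_floor = now - 7 * 24 * 60 * 60

puts normalize_since_threshold(0, floor: week_floor) # clamped up to the window floor
puts normalize_since_threshold(0, floor: 0)          # → 0, node-scoped passes through
```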

View File

@@ -67,6 +67,14 @@ module PotatoMesh
query_nodes(limit, since: params["since"]).to_json
end
app.get "/api/stats" do
content_type :json
{
active_nodes: query_active_node_stats,
sampled: false,
}.to_json
end
app.get "/api/nodes/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
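The new `/api/stats` endpoint therefore serves a payload shaped like the following (counts are illustrative):

```json
{
  "active_nodes": { "hour": 12, "day": 87, "week": 240, "month": 311 },
  "sampled": false
}
```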

View File

@@ -14,6 +14,8 @@
# frozen_string_literal: true
require "timeout"
module PotatoMesh
module App
# WorkerPool executes submitted blocks using a bounded set of Ruby threads.
@@ -124,8 +126,9 @@ module PotatoMesh
#
# @param size [Integer] number of worker threads to spawn.
# @param max_queue [Integer, nil] optional upper bound on queued jobs.
# @param task_timeout [Numeric, nil] optional per-task execution timeout.
# @param name [String] prefix assigned to worker thread names.
def initialize(size:, max_queue: nil, name: "worker-pool")
def initialize(size:, max_queue: nil, task_timeout: nil, name: "worker-pool")
raise ArgumentError, "size must be positive" unless size.is_a?(Integer) && size.positive?
@name = name
@@ -133,6 +136,7 @@ module PotatoMesh
@threads = []
@stopped = false
@mutex = Mutex.new
@task_timeout = normalize_task_timeout(task_timeout)
spawn_workers(size)
end
@@ -192,23 +196,45 @@ module PotatoMesh
worker = Thread.new do
Thread.current.name = "#{@name}-#{index}" if Thread.current.respond_to?(:name=)
Thread.current.report_on_exception = false if Thread.current.respond_to?(:report_on_exception=)
# Daemon threads, where the Ruby implementation supports them, allow the process to exit even if a job is stuck.
Thread.current.daemon = true if Thread.current.respond_to?(:daemon=)
loop do
task, block = @queue.pop
break if task.equal?(STOP_SIGNAL)
begin
result = block.call
result = if @task_timeout
Timeout.timeout(@task_timeout, TaskTimeoutError, "task exceeded timeout") do
block.call
end
else
block.call
end
task.fulfill(result)
rescue StandardError => e
task.reject(e)
end
end
end
@threads << worker
end
end
# Normalize the per-task timeout into a positive float value.
#
# @param task_timeout [Numeric, nil] candidate timeout value.
# @return [Float, nil] positive timeout in seconds or nil when disabled.
def normalize_task_timeout(task_timeout)
return nil if task_timeout.nil?
value = Float(task_timeout)
return nil unless value.positive?
value
rescue ArgumentError, TypeError
nil
end
end
end
end
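The timeout path in the worker loop can be exercised standalone; `TaskTimeoutError` is assumed here to be a plain `StandardError` subclass defined by the pool:

```ruby
require "timeout"

# Per-task timeout pattern used by the worker loop above: a dedicated
# error class keeps pool-imposed timeouts distinguishable from any other
# Timeout usage inside the task body.
class TaskTimeoutError < StandardError; end

def run_with_timeout(task_timeout, &block)
  return block.call unless task_timeout
  Timeout.timeout(task_timeout, TaskTimeoutError, "task exceeded timeout", &block)
end

begin
  run_with_timeout(0.05) { sleep 1 }
rescue TaskTimeoutError => e
  puts e.message # → "task exceeded timeout"
end
```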

View File

@@ -32,15 +32,19 @@ module PotatoMesh
DEFAULT_MAP_CENTER = "#{DEFAULT_MAP_CENTER_LAT},#{DEFAULT_MAP_CENTER_LON}"
DEFAULT_CHANNEL = "#LongFast"
DEFAULT_FREQUENCY = "915MHz"
DEFAULT_MESHTASTIC_PSK_B64 = "AQ=="
DEFAULT_CONTACT_LINK = "#potatomesh:dod.ngo"
DEFAULT_MAX_DISTANCE_KM = 42.0
DEFAULT_REMOTE_INSTANCE_CONNECT_TIMEOUT = 15
DEFAULT_REMOTE_INSTANCE_READ_TIMEOUT = 60
DEFAULT_REMOTE_INSTANCE_REQUEST_TIMEOUT = 30
DEFAULT_FEDERATION_MAX_INSTANCES_PER_RESPONSE = 64
DEFAULT_FEDERATION_MAX_DOMAINS_PER_CRAWL = 256
DEFAULT_FEDERATION_WORKER_POOL_SIZE = 4
DEFAULT_FEDERATION_WORKER_QUEUE_CAPACITY = 128
DEFAULT_FEDERATION_TASK_TIMEOUT_SECONDS = 120
DEFAULT_FEDERATION_SHUTDOWN_TIMEOUT_SECONDS = 3
DEFAULT_FEDERATION_CRAWL_COOLDOWN_SECONDS = 300
DEFAULT_INITIAL_FEDERATION_DELAY_SECONDS = 2
DEFAULT_FEDERATION_SEED_DOMAINS = %w[potatomesh.net potatomesh.jmrp.io mesh.qrp.ro].freeze
@@ -158,6 +162,13 @@ module PotatoMesh
7 * 24 * 60 * 60
end
# Rolling retention window in seconds for trace and neighbor API queries.
#
# @return [Integer] seconds in twenty-eight days.
def trace_neighbor_window_seconds
28 * 24 * 60 * 60
end
# Default upper bound for accepted JSON payload sizes.
#
# @return [Integer] byte ceiling for HTTP request bodies.
@@ -176,7 +187,7 @@ module PotatoMesh
#
# @return [String] semantic version identifier.
def version_fallback
"0.5.9"
"0.5.11"
end
# Default refresh interval for frontend polling routines.
@@ -342,6 +353,16 @@ module PotatoMesh
)
end
# End-to-end timeout applied to each outbound federation HTTP request.
#
# @return [Integer] maximum request duration in seconds.
def remote_instance_request_timeout
fetch_positive_integer(
"REMOTE_INSTANCE_REQUEST_TIMEOUT",
DEFAULT_REMOTE_INSTANCE_REQUEST_TIMEOUT,
)
end
# Limit the number of remote instances processed from a single response.
#
# @return [Integer] maximum entries processed per /api/instances payload.
@@ -392,6 +413,26 @@ module PotatoMesh
)
end
# Determine how long shutdown waits before forcing federation thread exit.
#
# @return [Integer] per-thread shutdown timeout in seconds.
def federation_shutdown_timeout_seconds
fetch_positive_integer(
"FEDERATION_SHUTDOWN_TIMEOUT",
DEFAULT_FEDERATION_SHUTDOWN_TIMEOUT_SECONDS,
)
end
# Define how long finished crawl domains remain on cooldown.
#
# @return [Integer] cooldown window in seconds.
def federation_crawl_cooldown_seconds
fetch_positive_integer(
"FEDERATION_CRAWL_COOLDOWN",
DEFAULT_FEDERATION_CRAWL_COOLDOWN_SECONDS,
)
end
# Maximum acceptable age for remote node data.
#
# @return [Integer] seconds before remote nodes are considered stale.
@@ -437,6 +478,13 @@ module PotatoMesh
fetch_string("SITE_NAME", "PotatoMesh Demo")
end
# Retrieve the configured announcement banner copy.
#
# @return [String, nil] announcement string when configured.
def announcement
fetch_string("ANNOUNCEMENT", nil)
end
# Retrieve the default radio channel label.
#
# @return [String] channel name from configuration.
@@ -451,6 +499,13 @@ module PotatoMesh
fetch_string("FREQUENCY", DEFAULT_FREQUENCY)
end
# Retrieve the Meshtastic PSK used for decrypting channel messages.
#
# @return [String] base64-encoded PSK or alias.
def meshtastic_psk_b64
fetch_string("MESHTASTIC_PSK_B64", DEFAULT_MESHTASTIC_PSK_B64)
end
# Parse the configured map centre coordinates.
#
# @return [Hash{Symbol=>Float}] latitude and longitude in decimal degrees.

View File

@@ -199,6 +199,14 @@ module PotatoMesh
sanitized_string(Config.site_name)
end
# Retrieve the configured announcement banner copy and normalise blank values to nil.
#
# @return [String, nil] announcement copy or +nil+ when blank.
def sanitized_announcement
value = sanitized_string(Config.announcement)
value.empty? ? nil : value
end
# Retrieve the configured channel as a cleaned string.
#
# @return [String] trimmed configuration value.

web/package-lock.json generated

@@ -1,16 +1,12 @@
{
"name": "potato-mesh",
"version": "0.5.9",
"version": "0.5.11",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "potato-mesh",
"version": "0.5.9",
"hasInstallScript": true,
"dependencies": {
"uplot": "^1.6.30"
},
"version": "0.5.11",
"devDependencies": {
"istanbul-lib-coverage": "^3.2.2",
"istanbul-lib-report": "^3.0.1",
@@ -158,12 +154,6 @@
"node": ">=8"
}
},
"node_modules/uplot": {
"version": "1.6.32",
"resolved": "https://registry.npmjs.org/uplot/-/uplot-1.6.32.tgz",
"integrity": "sha512-KIMVnG68zvu5XXUbC4LQEPnhwOxBuLyW1AHtpm6IKTXImkbLgkMy+jabjLgSLMasNuGGzQm/ep3tOkyTxpiQIw==",
"license": "MIT"
},
"node_modules/v8-to-istanbul": {
"version": "9.3.0",
"resolved": "https://registry.npmjs.org/v8-to-istanbul/-/v8-to-istanbul-9.3.0.tgz",


@@ -1,15 +1,11 @@
{
"name": "potato-mesh",
"version": "0.5.9",
"version": "0.5.11",
"type": "module",
"private": true,
"scripts": {
"postinstall": "node ./scripts/copy-uplot.js",
"test": "mkdir -p reports coverage && NODE_V8_COVERAGE=coverage node --test --experimental-test-coverage --test-reporter=spec --test-reporter-destination=stdout --test-reporter=junit --test-reporter-destination=reports/javascript-junit.xml && node ./scripts/export-coverage.js"
},
"dependencies": {
"uplot": "^1.6.30"
},
"devDependencies": {
"istanbul-lib-coverage": "^3.2.2",
"istanbul-lib-report": "^3.0.1",


@@ -80,19 +80,13 @@ test('initializeChartsPage renders the telemetry charts when snapshots are avail
},
]);
let receivedOptions = null;
let mountedModels = null;
const createCharts = (node, options) => {
const renderCharts = (node, options) => {
receivedOptions = options;
return { chartsHtml: '<section class="node-detail__charts">Charts</section>', chartModels: [{ id: 'power' }] };
return '<section class="node-detail__charts">Charts</section>';
};
const mountCharts = (chartModels, options) => {
mountedModels = { chartModels, options };
return [];
};
const result = await initializeChartsPage({ document: documentStub, fetchImpl, createCharts, mountCharts });
const result = await initializeChartsPage({ document: documentStub, fetchImpl, renderCharts });
assert.equal(result, true);
assert.equal(container.innerHTML.includes('node-detail__charts'), true);
assert.equal(mountedModels.chartModels.length, 1);
assert.ok(receivedOptions);
assert.equal(receivedOptions.chartOptions.windowMs, 604_800_000);
assert.equal(typeof receivedOptions.chartOptions.lineReducer, 'function');
@@ -124,8 +118,8 @@ test('initializeChartsPage shows an error message when fetching fails', async ()
const fetchImpl = async () => {
throw new Error('network');
};
const createCharts = () => ({ chartsHtml: '<section>unused</section>', chartModels: [] });
const result = await initializeChartsPage({ document: documentStub, fetchImpl, createCharts });
const renderCharts = () => '<section>unused</section>';
const result = await initializeChartsPage({ document: documentStub, fetchImpl, renderCharts });
assert.equal(result, false);
assert.equal(container.innerHTML.includes('Failed to load telemetry charts.'), true);
});
@@ -142,8 +136,8 @@ test('initializeChartsPage handles missing containers and empty telemetry snapsh
},
};
const fetchImpl = async () => createResponse(200, []);
const createCharts = () => ({ chartsHtml: '', chartModels: [] });
const result = await initializeChartsPage({ document: documentStub, fetchImpl, createCharts });
const renderCharts = () => '';
const result = await initializeChartsPage({ document: documentStub, fetchImpl, renderCharts });
assert.equal(result, true);
assert.equal(container.innerHTML.includes('Telemetry snapshots are unavailable.'), true);
});
@@ -161,8 +155,8 @@ test('initializeChartsPage shows a status when rendering produces no markup', as
aggregates: { voltage: { avg: 3.9 } },
},
]);
const createCharts = () => ({ chartsHtml: '', chartModels: [] });
const result = await initializeChartsPage({ document: documentStub, fetchImpl, createCharts });
const renderCharts = () => '';
const result = await initializeChartsPage({ document: documentStub, fetchImpl, renderCharts });
assert.equal(result, true);
assert.equal(container.innerHTML.includes('Telemetry snapshots are unavailable.'), true);
});
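The refactor above collapses the old `createCharts`/`mountCharts` pair into a single `renderCharts` hook that returns markup (or an empty string when there is nothing to draw). A minimal sketch of the control flow these tests assert, with assumed names for the container selector, the endpoint, and the options shape — the real `initializeChartsPage` lives in the web sources:

```javascript
// Sketch only: '[data-node-charts]' and '/api/telemetry' are assumptions,
// as is the { snapshots } options shape passed to renderCharts.
async function initializeChartsPage({ document, fetchImpl, renderCharts }) {
  const container = document.querySelector('[data-node-charts]');
  if (!container) return false;
  try {
    const response = await fetchImpl('/api/telemetry');
    const snapshots = await response.json();
    const html = renderCharts(container, { snapshots });
    if (!html) {
      // Covers both an empty payload and a renderer that yields no markup.
      container.innerHTML = 'Telemetry snapshots are unavailable.';
      return true;
    }
    container.innerHTML = html;
    return true;
  } catch {
    container.innerHTML = 'Failed to load telemetry charts.';
    return false;
  }
}
```

Passing the renderer in as a parameter keeps the page initializer testable without a DOM or network, which is exactly how the stubs above exercise it.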


@@ -62,6 +62,22 @@ function buildModel(overrides = {}) {
});
}
function findChannelByLabel(model, label) {
return model.channels.find(channel => channel.label === label);
}
function assertChannelMessages(model, { label, id, index, messageIds }) {
const channel = findChannelByLabel(model, label);
assert.ok(channel);
if (id instanceof RegExp) {
assert.match(channel.id, id);
} else {
assert.equal(channel.id, id);
}
assert.equal(channel.index, index);
assert.deepEqual(channel.entries.map(entry => entry.message.id), messageIds);
}
test('buildChatTabModel returns sorted nodes and channel buckets', () => {
const model = buildModel();
assert.equal(model.logEntries.length, 3);
@@ -75,12 +91,13 @@ test('buildChatTabModel returns sorted nodes and channel buckets', () => {
['recent-node', 'iso-node', 'encrypted']
);
assert.equal(model.channels.length, 5);
assert.equal(model.channels.length, 6);
assert.deepEqual(model.channels.map(channel => channel.label), [
'EnvDefault',
'Fallback',
'MediumFast',
'ShortFast',
'1',
'BerlinMesh'
]);
@@ -106,18 +123,21 @@ test('buildChatTabModel returns sorted nodes and channel buckets', () => {
assert.equal(presetChannel.id, 'channel-0-shortfast');
assert.deepEqual(presetChannel.entries.map(entry => entry.message.id), ['primary-preset']);
const unnamedSecondaryChannel = channelByLabel['1'];
assert.equal(unnamedSecondaryChannel.index, 1);
assert.equal(unnamedSecondaryChannel.id, 'channel-1');
assert.deepEqual(unnamedSecondaryChannel.entries.map(entry => entry.message.id), ['iso-ts']);
const secondaryChannel = channelByLabel.BerlinMesh;
assert.equal(secondaryChannel.index, 1);
assert.equal(secondaryChannel.id, 'channel-secondary-berlinmesh');
assert.equal(secondaryChannel.entries.length, 2);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), ['iso-ts', 'recent-alt']);
assert.match(secondaryChannel.id, /^channel-secondary-name-berlinmesh-[a-z0-9]+$/);
assert.equal(secondaryChannel.entries.length, 1);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), ['recent-alt']);
});
test('buildChatTabModel always includes channel zero bucket', () => {
test('buildChatTabModel skips channel buckets when there are no messages', () => {
const model = buildChatTabModel({ nodes: [], messages: [], nowSeconds: NOW, windowSeconds: WINDOW });
assert.equal(model.channels.length, 1);
assert.equal(model.channels[0].index, 0);
assert.equal(model.channels[0].entries.length, 0);
assert.equal(model.channels.length, 0);
});
test('buildChatTabModel falls back to numeric label when no metadata provided', () => {
@@ -174,14 +194,13 @@ test('buildChatTabModel includes telemetry, position, and neighbor events', () =
windowSeconds: WINDOW
});
assert.deepEqual(model.logEntries.map(entry => entry.type), [
CHAT_LOG_ENTRY_TYPES.NODE_NEW,
CHAT_LOG_ENTRY_TYPES.NODE_INFO,
CHAT_LOG_ENTRY_TYPES.TELEMETRY,
CHAT_LOG_ENTRY_TYPES.POSITION,
CHAT_LOG_ENTRY_TYPES.NEIGHBOR,
CHAT_LOG_ENTRY_TYPES.TRACE
]);
const types = model.logEntries.map(entry => entry.type);
assert.equal(types[0], CHAT_LOG_ENTRY_TYPES.NODE_NEW);
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.NODE_INFO));
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.TELEMETRY));
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.POSITION));
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.NEIGHBOR));
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.TRACE));
assert.equal(model.logEntries[0].nodeId, nodeId);
const neighborEntry = model.logEntries.find(entry => entry.type === CHAT_LOG_ENTRY_TYPES.NEIGHBOR);
assert.ok(neighborEntry);
@@ -275,7 +294,7 @@ test('buildChatTabModel ignores plaintext log-only entries', () => {
assert.equal(encryptedEntries[0]?.message?.id, 'enc');
});
test('buildChatTabModel merges secondary channels with matching labels regardless of index', () => {
test('buildChatTabModel merges secondary channels with matching labels across indexes', () => {
const primaryId = 'primary';
const secondaryFirstId = 'secondary-one';
const secondarySecondId = 'secondary-two';
@@ -299,55 +318,139 @@ test('buildChatTabModel merges secondary channels with matching labels regardles
assert.equal(primaryChannel.entries.length, 1);
assert.equal(primaryChannel.entries[0]?.message?.id, primaryId);
const secondaryChannel = meshChannels.find(channel => channel.index > 0);
assert.ok(secondaryChannel);
assert.equal(secondaryChannel.id, 'channel-secondary-meshtown');
assert.equal(secondaryChannel.index, 3);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), [secondaryFirstId, secondarySecondId]);
const mergedSecondaryChannel = meshChannels.find(channel => channel.index === 3);
assert.ok(mergedSecondaryChannel);
assert.match(mergedSecondaryChannel.id, /^channel-secondary-name-meshtown-[a-z0-9]+$/);
assert.deepEqual(
mergedSecondaryChannel.entries.map(entry => entry.message.id),
[secondaryFirstId, secondarySecondId]
);
});
test('buildChatTabModel rekeys unnamed secondary buckets when a label later arrives', () => {
const unnamedId = 'unnamed';
const namedId = 'named';
const label = 'SideMesh';
const index = 4;
test('buildChatTabModel keeps unnamed secondary buckets separate when a label later arrives', () => {
const scenarios = [
{
index: 4,
label: 'SideMesh',
messages: [
{ id: 'unnamed', rx_time: NOW - 15, channel: 4 },
{ id: 'named', rx_time: NOW - 10, channel: 4, channel_name: 'SideMesh' }
],
namedId: /^channel-secondary-name-sidemesh-[a-z0-9]+$/,
namedMessages: ['named'],
unnamedMessages: ['unnamed']
},
{
index: 5,
label: 'MeshNorth',
messages: [
{ id: 'named', rx_time: NOW - 12, channel: 5, channel_name: 'MeshNorth' },
{ id: 'unlabeled', rx_time: NOW - 8, channel: 5 }
],
namedId: /^channel-secondary-name-meshnorth-[a-z0-9]+$/,
namedMessages: ['named'],
unnamedMessages: ['unlabeled']
}
];
for (const scenario of scenarios) {
const model = buildChatTabModel({
nodes: [],
messages: scenario.messages,
nowSeconds: NOW,
windowSeconds: WINDOW
});
const secondaryChannels = model.channels.filter(channel => channel.index === scenario.index);
assert.equal(secondaryChannels.length, 2);
assertChannelMessages(model, {
label: scenario.label,
id: scenario.namedId,
index: scenario.index,
messageIds: scenario.namedMessages
});
assertChannelMessages(model, {
label: String(scenario.index),
id: `channel-${scenario.index}`,
index: scenario.index,
messageIds: scenario.unnamedMessages
});
}
});
test('buildChatTabModel keeps same-index channels with different names in separate tabs', () => {
const model = buildChatTabModel({
nodes: [],
messages: [
{ id: unnamedId, rx_time: NOW - 15, channel: index },
{ id: namedId, rx_time: NOW - 10, channel: index, channel_name: label }
{ id: 'public-msg', rx_time: NOW - 12, channel: 1, channel_name: 'PUBLIC' },
{ id: 'berlin-msg', rx_time: NOW - 8, channel: 1, channel_name: 'BerlinMesh' }
],
nowSeconds: NOW,
windowSeconds: WINDOW
});
const secondaryChannels = model.channels.filter(channel => channel.index === index);
assert.equal(secondaryChannels.length, 1);
const [secondaryChannel] = secondaryChannels;
assert.equal(secondaryChannel.id, 'channel-secondary-sidemesh');
assert.equal(secondaryChannel.label, label);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), [unnamedId, namedId]);
assertChannelMessages(model, {
label: 'PUBLIC',
id: /^channel-secondary-name-public-[a-z0-9]+$/,
index: 1,
messageIds: ['public-msg']
});
assertChannelMessages(model, {
label: 'BerlinMesh',
id: /^channel-secondary-name-berlinmesh-[a-z0-9]+$/,
index: 1,
messageIds: ['berlin-msg']
});
});
test('buildChatTabModel merges unlabeled secondary messages into existing named buckets by index', () => {
const namedId = 'named';
const unlabeledId = 'unlabeled';
const label = 'MeshNorth';
const index = 5;
test('buildChatTabModel merges same-name channels even when indexes differ', () => {
const model = buildChatTabModel({
nodes: [],
messages: [
{ id: namedId, rx_time: NOW - 12, channel: index, channel_name: label },
{ id: unlabeledId, rx_time: NOW - 8, channel: index }
{ id: 'test-1', rx_time: NOW - 12, channel: 1, channel_name: 'TEST' },
{ id: 'test-2', rx_time: NOW - 8, channel: 2, channel_name: 'TEST' }
],
nowSeconds: NOW,
windowSeconds: WINDOW
});
const secondaryChannels = model.channels.filter(channel => channel.index === index);
assert.equal(secondaryChannels.length, 1);
const [secondaryChannel] = secondaryChannels;
assert.equal(secondaryChannel.id, 'channel-secondary-meshnorth');
assert.equal(secondaryChannel.label, label);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), [namedId, unlabeledId]);
assertChannelMessages(model, {
label: 'TEST',
id: /^channel-secondary-name-test-[a-z0-9]+$/,
index: 1,
messageIds: ['test-1', 'test-2']
});
});
test('buildChatTabModel keeps same-index slug-colliding labels on distinct tab ids', () => {
const model = buildChatTabModel({
nodes: [],
messages: [
{ id: 'foo-space', rx_time: NOW - 10, channel: 1, channel_name: 'Foo Bar' },
{ id: 'foo-dash', rx_time: NOW - 8, channel: 1, channel_name: 'Foo-Bar' }
],
nowSeconds: NOW,
windowSeconds: WINDOW
});
const fooSpaceChannel = findChannelByLabel(model, 'Foo Bar');
const fooDashChannel = findChannelByLabel(model, 'Foo-Bar');
assert.ok(fooSpaceChannel);
assert.ok(fooDashChannel);
assert.match(fooSpaceChannel.id, /^channel-secondary-name-foo-bar-[a-z0-9]+$/);
assert.match(fooDashChannel.id, /^channel-secondary-name-foo-bar-[a-z0-9]+$/);
assert.notEqual(fooSpaceChannel.id, fooDashChannel.id);
});
test('buildChatTabModel falls back to hashed id for unsluggable secondary labels', () => {
const model = buildChatTabModel({
nodes: [],
messages: [{ id: 'hash-fallback', rx_time: NOW - 5, channel: 2, channel_name: '###' }],
nowSeconds: NOW,
windowSeconds: WINDOW
});
const channel = findChannelByLabel(model, '###');
assert.ok(channel);
assert.equal(channel.index, 2);
assert.ok(channel.id.startsWith('channel-secondary-name-'));
assert.ok(channel.id.length > 'channel-secondary-name-'.length);
});


@@ -0,0 +1,64 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
filterDisplayableFederationInstances,
isSuppressedFederationSiteName,
resolveFederationInstanceLabel,
resolveFederationInstanceSortValue,
resolveFederationSiteNameForDisplay,
shouldDisplayFederationInstance,
truncateFederationSiteName
} from '../federation-instance-display.js';
test('isSuppressedFederationSiteName detects URL-like advertising names', () => {
assert.equal(isSuppressedFederationSiteName('http://spam.example offer'), true);
assert.equal(isSuppressedFederationSiteName('Visit www.spam.example today'), true);
assert.equal(isSuppressedFederationSiteName('Mesh Collective'), false);
assert.equal(isSuppressedFederationSiteName(''), false);
assert.equal(isSuppressedFederationSiteName(null), false);
});
test('truncateFederationSiteName shortens names longer than 32 characters', () => {
assert.equal(truncateFederationSiteName('Short Mesh'), 'Short Mesh');
assert.equal(
truncateFederationSiteName('abcdefghijklmnopqrstuvwxyz1234567890'),
'abcdefghijklmnopqrstuvwxyz123...'
);
assert.equal(truncateFederationSiteName('abcdefghijklmnopqrstuvwxyz123456').length, 32);
assert.equal(truncateFederationSiteName(null), '');
});
test('display helpers filter suppressed names and preserve original domains', () => {
const entries = [
{ name: 'Normal Mesh', domain: 'normal.mesh' },
{ name: 'https://spam.example promo', domain: 'spam.mesh' },
{ domain: 'unnamed.mesh' }
];
assert.equal(shouldDisplayFederationInstance(entries[0]), true);
assert.equal(shouldDisplayFederationInstance(entries[1]), false);
assert.deepEqual(filterDisplayableFederationInstances(entries), [
{ name: 'Normal Mesh', domain: 'normal.mesh' },
{ domain: 'unnamed.mesh' }
]);
assert.equal(resolveFederationSiteNameForDisplay(entries[0]), 'Normal Mesh');
assert.equal(resolveFederationInstanceLabel(entries[2]), 'unnamed.mesh');
assert.equal(resolveFederationInstanceSortValue(entries[0]), 'Normal Mesh');
});
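The two name helpers exercised first in this suite can be sketched directly from the asserted behaviour — hide URL-like advertising names, and cap visible labels at 32 characters with a `...` suffix. The real implementations live in `federation-instance-display.js`; the exact suppression regex here is an assumption consistent with the cases above:

```javascript
const MAX_SITE_NAME_LENGTH = 32;

// Treat names containing URL-ish fragments as advertising and suppress them.
// Blank or non-string names are not suppressed; they fall back to the domain.
function isSuppressedFederationSiteName(name) {
  if (typeof name !== 'string' || name === '') return false;
  return /(?:https?:\/\/|www\.)/i.test(name);
}

// Truncate to 32 characters total, reserving three for the ellipsis.
function truncateFederationSiteName(name) {
  if (typeof name !== 'string') return '';
  if (name.length <= MAX_SITE_NAME_LENGTH) return name;
  return `${name.slice(0, MAX_SITE_NAME_LENGTH - 3)}...`;
}
```

Note the ordering the later sort test relies on: sorting happens on the full name and truncation is applied only to the visible label afterwards.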


@@ -21,12 +21,83 @@ import { createDomEnvironment } from './dom-environment.js';
import { initializeFederationPage } from '../federation-page.js';
import { roleColors } from '../role-helpers.js';
function createBasicFederationPageHarness() {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: false });
const { document, createElement, registerElement } = env;
const mapEl = createElement('div', 'map');
registerElement('map', mapEl);
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);
const tableEl = createElement('table', 'instances');
const tbodyEl = createElement('tbody');
registerElement('instances', tableEl);
tableEl.appendChild(tbodyEl);
const configEl = createElement('div');
configEl.setAttribute('data-app-config', JSON.stringify({ mapCenter: { lat: 0, lon: 0 }, mapZoom: 3 }));
document.querySelector = selector => {
if (selector === '[data-app-config]') return configEl;
if (selector === '#instances tbody') return tbodyEl;
return null;
};
return { ...env, statusEl, tbodyEl };
}
function createBasicLeafletStub(options = {}) {
const { markerPopups = null, fitBounds = false } = options;
return {
map() {
return {
setView() {},
on() {},
fitBounds: fitBounds ? () => {} : undefined,
getPane() {
return null;
}
};
},
tileLayer() {
return {
addTo() {
return this;
},
getContainer() {
return null;
},
on() {}
};
},
layerGroup() {
return {
addLayer() {},
addTo() {
return this;
}
};
},
circleMarker() {
return {
bindPopup(html) {
markerPopups?.push(html);
return this;
}
};
}
};
}
test('federation map centers on configured coordinates and follows theme filters', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: true });
const { document, window, createElement, registerElement, cleanup } = env;
const mapEl = createElement('div', 'map');
registerElement('map', mapEl);
const mapPanel = createElement('div', 'mapPanel');
mapPanel.dataset.legendCollapsed = 'true';
registerElement('mapPanel', mapPanel);
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);
const tableEl = createElement('table', 'instances');
@@ -408,47 +479,141 @@ test('federation table sorting, contact rendering, and legend creation', async (
assert.deepEqual(mapSetViewCalls[0], [[0, 0], 3]);
assert.equal(mapFitBoundsCalls[0][0].length, 3);
assert.equal(legendContainers.length, 1);
const legend = legendContainers[0];
assert.ok(legend.className.includes('legend'));
assert.equal(legendContainers.length, 2);
const legend = legendContainers.find(container => container.className.includes('legend--instances'));
assert.ok(legend);
assert.ok(legend.className.includes('legend-hidden'));
const legendHeader = legend.children.find(child => child.className === 'legend-header');
const legendTitle = legendHeader && Array.isArray(legendHeader.children)
? legendHeader.children.find(child => child.className === 'legend-title')
: null;
assert.ok(legendTitle);
assert.equal(legendTitle.textContent, 'Active nodes');
const legendToggle = legendContainers.find(container => container.className.includes('legend-toggle'));
assert.ok(legendToggle);
} finally {
cleanup();
}
});
test('federation page tolerates fetch failures', async () => {
test('federation legend toggle respects media query changes', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: false });
const { document, createElement, registerElement, cleanup } = env;
const mapEl = createElement('div', 'map');
registerElement('map', mapEl);
const mapPanel = createElement('div', 'mapPanel');
mapPanel.setAttribute('data-legend-collapsed', 'false');
registerElement('mapPanel', mapPanel);
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);
const tableEl = createElement('table', 'instances');
const tbodyEl = createElement('tbody');
registerElement('instances', tableEl);
tableEl.appendChild(tbodyEl);
const configPayload = {
mapCenter: { lat: 0, lon: 0 },
mapZoom: 3,
tileFilters: { light: 'none', dark: 'invert(1)' }
};
const configEl = createElement('div');
configEl.setAttribute('data-app-config', JSON.stringify({}));
configEl.setAttribute('data-app-config', JSON.stringify(configPayload));
document.querySelector = selector => {
if (selector === '[data-app-config]') return configEl;
if (selector === '#instances tbody') return tbodyEl;
return null;
};
let mediaQueryHandler = null;
window.matchMedia = () => ({
matches: false,
addListener(handler) {
mediaQueryHandler = handler;
}
});
const legendContainers = [];
const legendButtons = [];
const DomUtil = {
create(tag, className, parent) {
const classSet = new Set(className ? className.split(/\s+/).filter(Boolean) : []);
const el = {
tagName: tag,
className,
classList: {
toggle(name, force) {
const shouldAdd = typeof force === 'boolean' ? force : !classSet.has(name);
if (shouldAdd) {
classSet.add(name);
} else {
classSet.delete(name);
}
el.className = Array.from(classSet).join(' ');
}
},
children: [],
style: {},
textContent: '',
attributes: new Map(),
setAttribute(name, value) {
this.attributes.set(name, String(value));
},
appendChild(child) {
this.children.push(child);
return child;
},
addEventListener(event, handler) {
if (event === 'click') {
this._clickHandler = handler;
}
},
querySelector() {
return null;
}
};
if (parent && parent.appendChild) parent.appendChild(el);
if (className && className.includes('legend-toggle-button')) {
legendButtons.push(el);
}
return el;
}
};
const controlStub = () => {
const ctrl = {
onAdd: null,
container: null,
addTo(map) {
this.container = this.onAdd ? this.onAdd(map) : null;
legendContainers.push(this.container);
return this;
},
getContainer() {
return this.container;
}
};
return ctrl;
};
const markersLayer = {
addLayer() {
return null;
},
addTo() {
return this;
}
};
const leafletStub = {
map() {
return {
setView() {},
on() {},
getPane() {
return null;
}
fitBounds() {}
};
},
tileLayer() {
@@ -463,17 +628,184 @@ test('federation page tolerates fetch failures', async () => {
};
},
layerGroup() {
return { addLayer() {}, addTo() { return this; } };
return markersLayer;
},
circleMarker() {
return { bindPopup() { return this; } };
return {
bindPopup() {
return this;
}
};
},
control: controlStub,
DomUtil,
DomEvent: {
disableClickPropagation() {},
disableScrollPropagation() {}
}
};
const fetchImpl = async () => ({
ok: true,
json: async () => []
});
try {
await initializeFederationPage({ config: configPayload, fetchImpl, leaflet: leafletStub });
const legend = legendContainers.find(container => container.className.includes('legend--instances'));
assert.ok(legend);
assert.ok(!legend.className.includes('legend-hidden'));
assert.equal(legendButtons.length, 1);
legendButtons[0]._clickHandler?.({ preventDefault() {}, stopPropagation() {} });
assert.ok(legend.className.includes('legend-hidden'));
if (mediaQueryHandler) {
mediaQueryHandler({ matches: false });
assert.ok(!legend.className.includes('legend-hidden'));
}
} finally {
cleanup();
}
});
test('federation page tolerates fetch failures', async () => {
const { cleanup } = createBasicFederationPageHarness();
const fetchImpl = async () => {
throw new Error('boom');
};
const leafletStub = createBasicLeafletStub();
await initializeFederationPage({ config: {}, fetchImpl, leaflet: leafletStub });
cleanup();
});
test('federation page suppresses spammy site names and truncates long names in visible UI', async () => {
const { cleanup, statusEl, tbodyEl } = createBasicFederationPageHarness();
const markerPopups = [];
const leafletStub = createBasicLeafletStub({ markerPopups, fitBounds: true });
const fetchImpl = async () => ({
ok: true,
json: async () => [
{
domain: 'visible.mesh',
name: 'abcdefghijklmnopqrstuvwxyz1234567890',
latitude: 1,
longitude: 1,
lastUpdateTime: Math.floor(Date.now() / 1000) - 30
},
{
domain: 'spam.mesh',
name: 'www.spam.example buy now',
latitude: 2,
longitude: 2,
lastUpdateTime: Math.floor(Date.now() / 1000) - 60
}
]
});
try {
await initializeFederationPage({ config: {}, fetchImpl, leaflet: leafletStub });
assert.equal(statusEl.textContent, '1 instances');
assert.equal(tbodyEl.childNodes.length, 1);
assert.match(tbodyEl.childNodes[0].innerHTML, /abcdefghijklmnopqrstuvwxyz123\.\.\./);
assert.doesNotMatch(tbodyEl.childNodes[0].innerHTML, /spam\.mesh/);
assert.equal(markerPopups.length, 1);
assert.match(markerPopups[0], /abcdefghijklmnopqrstuvwxyz123\.\.\./);
assert.doesNotMatch(markerPopups[0], /www\.spam\.example/);
} finally {
cleanup();
}
});
test('federation page sorts by full site names before truncating visible labels', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: false });
const { document, createElement, registerElement, cleanup } = env;
const sharedPrefix = 'abcdefghijklmnopqrstuvwxyz123';
const mapEl = createElement('div', 'map');
registerElement('map', mapEl);
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);
const tableEl = createElement('table', 'instances');
const tbodyEl = createElement('tbody');
registerElement('instances', tableEl);
tableEl.appendChild(tbodyEl);
const headerNameTh = createElement('th');
const headerName = createElement('span');
headerName.classList.add('sort-header');
headerName.dataset.sortKey = 'name';
headerName.dataset.sortLabel = 'Name';
headerNameTh.appendChild(headerName);
const ths = [headerNameTh];
const headers = [headerName];
const headerHandlers = new Map();
headers.forEach(header => {
header.addEventListener = (event, handler) => {
const existing = headerHandlers.get(header) || {};
existing[event] = handler;
headerHandlers.set(header, existing);
};
header.closest = () => ths.find(th => th.childNodes.includes(header));
header.querySelector = () => null;
});
tableEl.querySelectorAll = selector => {
if (selector === 'thead .sort-header[data-sort-key]') return headers;
if (selector === 'thead th') return ths;
return [];
};
const configEl = createElement('div');
configEl.setAttribute('data-app-config', JSON.stringify({ mapCenter: { lat: 0, lon: 0 }, mapZoom: 3 }));
document.querySelector = selector => {
if (selector === '[data-app-config]') return configEl;
if (selector === '#instances tbody') return tbodyEl;
return null;
};
const fetchImpl = async () => ({
ok: true,
json: async () => [
{
domain: 'zeta.mesh',
name: `${sharedPrefix}zeta suffix`,
latitude: 1,
longitude: 1,
lastUpdateTime: Math.floor(Date.now() / 1000) - 30
},
{
domain: 'alpha.mesh',
name: `${sharedPrefix}alpha suffix`,
latitude: 2,
longitude: 2,
lastUpdateTime: Math.floor(Date.now() / 1000) - 60
}
]
});
try {
await initializeFederationPage({
config: {},
fetchImpl,
leaflet: createBasicLeafletStub({ fitBounds: true })
});
const nameHandlers = headerHandlers.get(headerName);
nameHandlers.click();
assert.match(tbodyEl.childNodes[0].innerHTML, /alpha\.mesh/);
assert.match(tbodyEl.childNodes[1].innerHTML, /zeta\.mesh/);
assert.match(tbodyEl.childNodes[0].innerHTML, /abcdefghijklmnopqrstuvwxyz123\.\.\./);
assert.match(tbodyEl.childNodes[1].innerHTML, /abcdefghijklmnopqrstuvwxyz123\.\.\./);
} finally {
cleanup();
}
});


@@ -20,7 +20,7 @@ import { createDomEnvironment } from './dom-environment.js';
import { buildInstanceUrl, initializeInstanceSelector, __test__ } from '../instance-selector.js';
const { resolveInstanceLabel } = __test__;
const { resolveInstanceLabel, updateFederationNavCount } = __test__;
function setupSelectElement(document) {
const select = document.createElement('select');
@@ -154,6 +154,75 @@ test('initializeInstanceSelector populates options alphabetically and selects th
}
});
test('initializeInstanceSelector hides suppressed names and truncates long labels', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);
const navLink = env.document.createElement('a');
navLink.classList.add('js-federation-nav');
navLink.textContent = 'Federation';
env.document.body.appendChild(navLink);
const fetchImpl = async () => ({
ok: true,
async json() {
return [
{ name: 'Visit https://spam.example now', domain: 'spam.mesh' },
{ name: 'abcdefghijklmnopqrstuvwxyz1234567890', domain: 'long.mesh' },
{ name: 'Alpha Mesh', domain: 'alpha.mesh' }
];
}
});
try {
await initializeInstanceSelector({
selectElement: select,
fetchImpl,
windowObject: env.window,
documentObject: env.document
});
assert.equal(select.options.length, 3);
assert.equal(select.options[1].textContent, 'abcdefghijklmnopqrstuvwxyz123...');
assert.equal(select.options[2].textContent, 'Alpha Mesh');
assert.equal(navLink.textContent, 'Federation (2)');
assert.equal(select.options.some(option => option.value === 'spam.mesh'), false);
} finally {
env.cleanup();
}
});
test('initializeInstanceSelector sorts by full site names before truncating labels', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);
const sharedPrefix = 'abcdefghijklmnopqrstuvwxyz123';
const fetchImpl = async () => ({
ok: true,
async json() {
return [
{ name: `${sharedPrefix}zeta suffix`, domain: 'zeta.mesh' },
{ name: `${sharedPrefix}alpha suffix`, domain: 'alpha.mesh' }
];
}
});
try {
await initializeInstanceSelector({
selectElement: select,
fetchImpl,
windowObject: env.window,
documentObject: env.document
});
assert.equal(select.options[1].value, 'alpha.mesh');
assert.equal(select.options[2].value, 'zeta.mesh');
assert.equal(select.options[1].textContent, 'abcdefghijklmnopqrstuvwxyz123...');
assert.equal(select.options[2].textContent, 'abcdefghijklmnopqrstuvwxyz123...');
} finally {
env.cleanup();
}
});
test('initializeInstanceSelector navigates to the chosen instance domain', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);
@@ -191,3 +260,65 @@ test('initializeInstanceSelector navigates to the chosen instance domain', async
env.cleanup();
}
});
test('initializeInstanceSelector updates federation navigation labels with instance count', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);
const navLink = env.document.createElement('a');
navLink.classList.add('js-federation-nav');
navLink.textContent = 'Federation';
env.document.body.appendChild(navLink);
const fetchImpl = async () => ({
ok: true,
async json() {
return [{ domain: 'alpha.mesh' }, { domain: 'beta.mesh' }];
}
});
try {
await initializeInstanceSelector({
selectElement: select,
fetchImpl,
windowObject: env.window,
documentObject: env.document
});
assert.equal(navLink.textContent, 'Federation (2)');
} finally {
env.cleanup();
}
});
test('updateFederationNavCount prefers stored labels and normalizes counts', () => {
const env = createDomEnvironment();
const navLink = env.document.createElement('a');
navLink.classList.add('js-federation-nav');
navLink.textContent = 'Federation';
navLink.dataset.federationLabel = 'Community';
env.document.body.appendChild(navLink);
try {
updateFederationNavCount({ documentObject: env.document, count: -3 });
assert.equal(navLink.textContent, 'Community (0)');
} finally {
env.cleanup();
}
});
test('updateFederationNavCount falls back to existing link text when no dataset label', () => {
const env = createDomEnvironment();
const navLink = env.document.createElement('a');
navLink.classList.add('js-federation-nav');
navLink.textContent = 'Federation (9)';
env.document.body.appendChild(navLink);
try {
updateFederationNavCount({ documentObject: env.document, count: 4 });
assert.equal(navLink.textContent, 'Federation (4)');
} finally {
env.cleanup();
}
});
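The two tests above fully describe the label handling: a stored `data-federation-label` wins, otherwise any previous `" (n)"` suffix is stripped before the new count is appended, and negative counts render as zero. A minimal sketch consistent with those assertions (the clamping rules and selector are inferred from the tests, not copied from the module) could look like:

```javascript
// Hypothetical re-implementation of updateFederationNavCount, inferred from
// the tests above; coercion details are assumptions, not the project's code.
function updateFederationNavCount({ documentObject, count = 0 } = {}) {
  // Clamp to a non-negative integer so a count of -3 renders as "(0)".
  const safeCount = Math.max(0, Math.trunc(Number(count) || 0));
  for (const link of documentObject.querySelectorAll('.js-federation-nav')) {
    // Prefer the stored label; otherwise strip any existing " (n)" suffix
    // so repeated updates do not stack counts onto the link text.
    const label =
      link.dataset.federationLabel ||
      link.textContent.replace(/\s*\(\d+\)\s*$/, '').trim();
    link.textContent = `${label} (${safeCount})`;
  }
}
```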

View File

@@ -0,0 +1,210 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
computeLocalActiveNodeStats,
fetchActiveNodeStats,
formatActiveNodeStatsText,
normaliseActiveNodeStatsPayload,
} from '../main.js';
const NOW = 1_700_000_000;
test('computeLocalActiveNodeStats calculates local hour/day/week/month counts', () => {
const nodes = [
{ last_heard: NOW - 60 },
{ last_heard: NOW - 4_000 },
{ last_heard: NOW - 90_000 },
{ last_heard: NOW - (8 * 86_400) },
{ last_heard: NOW - (20 * 86_400) },
];
const stats = computeLocalActiveNodeStats(nodes, NOW);
assert.deepEqual(stats, {
hour: 1,
day: 2,
week: 3,
month: 5,
sampled: true,
});
});
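The expected counts above imply cumulative windows: a node heard in the last hour also counts toward the day, week, and month totals. A sketch matching those expectations (the 1h/24h/7d/30d window sizes and the `sampled: true` marker are inferred from the test, not taken from `main.js`):

```javascript
// Assumed local fallback counter for active nodes, derived from the test
// expectations above.
function computeLocalActiveNodeStats(nodes, nowSeconds) {
  const WINDOWS = { hour: 3_600, day: 86_400, week: 7 * 86_400, month: 30 * 86_400 };
  const stats = { hour: 0, day: 0, week: 0, month: 0, sampled: true };
  for (const node of nodes) {
    const age = nowSeconds - Number(node?.last_heard);
    if (!Number.isFinite(age) || age < 0) continue;
    // Windows are cumulative, so one node can increment several buckets.
    for (const [key, span] of Object.entries(WINDOWS)) {
      if (age <= span) stats[key] += 1;
    }
  }
  return stats;
}
```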
test('normaliseActiveNodeStatsPayload validates and normalizes API payload', () => {
const payload = {
active_nodes: {
hour: '11',
day: 22,
week: 33,
month: 44,
},
sampled: false,
};
assert.deepEqual(normaliseActiveNodeStatsPayload(payload), {
hour: 11,
day: 22,
week: 33,
month: 44,
sampled: false,
});
assert.equal(normaliseActiveNodeStatsPayload({}), null);
});
test('normaliseActiveNodeStatsPayload rejects malformed stat values', () => {
assert.equal(
normaliseActiveNodeStatsPayload({ active_nodes: { hour: 'x', day: 1, week: 1, month: 1 } }),
null
);
assert.equal(
normaliseActiveNodeStatsPayload({ active_nodes: null }),
null
);
});
test('normaliseActiveNodeStatsPayload clamps negatives and truncates floats', () => {
assert.deepEqual(
normaliseActiveNodeStatsPayload({
active_nodes: { hour: -1.9, day: 2.8, week: 3.1, month: 4.9 },
sampled: 1
}),
{ hour: 0, day: 2, week: 3, month: 4, sampled: true }
);
});
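Taken together, the three normalisation tests pin down the coercion rules: numeric strings are accepted, fractional counts are truncated, negatives clamp to zero, `sampled` is coerced to a boolean, and any missing or non-numeric window invalidates the whole payload. A hedged sketch of such a normaliser (the exact rules are assumptions derived from the assertions, not the module's code):

```javascript
// Assumed payload normaliser matching the tests above.
function normaliseActiveNodeStatsPayload(payload) {
  const active = payload?.active_nodes;
  if (!active || typeof active !== 'object') return null;
  const result = { sampled: Boolean(payload.sampled) };
  for (const key of ['hour', 'day', 'week', 'month']) {
    const value = Number(active[key]);
    // Reject the payload outright if any window is missing or non-numeric.
    if (!Number.isFinite(value)) return null;
    // Truncate fractional counts and clamp negatives to zero.
    result[key] = Math.max(0, Math.trunc(value));
  }
  return result;
}
```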
test('fetchActiveNodeStats uses /api/stats when available', async () => {
const calls = [];
const fetchImpl = async (url) => {
calls.push(url);
return {
ok: true,
async json() {
return {
active_nodes: { hour: 5, day: 15, week: 25, month: 35 },
sampled: false,
};
},
};
};
const stats = await fetchActiveNodeStats({ nodes: [], nowSeconds: NOW, fetchImpl });
assert.equal(calls[0], '/api/stats');
assert.deepEqual(stats, {
hour: 5,
day: 15,
week: 25,
month: 35,
sampled: false,
});
});
test('fetchActiveNodeStats reuses cached /api/stats response for repeated calls', async () => {
const calls = [];
const fetchImpl = async (url) => {
calls.push(url);
return {
ok: true,
async json() {
return {
active_nodes: { hour: 2, day: 4, week: 6, month: 8 },
sampled: false,
};
},
};
};
const first = await fetchActiveNodeStats({ nodes: [], nowSeconds: NOW, fetchImpl });
const second = await fetchActiveNodeStats({ nodes: [], nowSeconds: NOW, fetchImpl });
assert.equal(calls.length, 1);
assert.deepEqual(first, second);
});
test('fetchActiveNodeStats falls back to local counts when stats fetch fails', async () => {
const nodes = [
{ last_heard: NOW - 120 },
{ last_heard: NOW - (10 * 86_400) },
];
const fetchImpl = async () => {
throw new Error('network down');
};
const stats = await fetchActiveNodeStats({ nodes, nowSeconds: NOW, fetchImpl });
assert.deepEqual(stats, {
hour: 1,
day: 1,
week: 1,
month: 2,
sampled: true,
});
});
test('fetchActiveNodeStats falls back to local counts on non-OK HTTP responses', async () => {
const stats = await fetchActiveNodeStats({
nodes: [{ last_heard: NOW - 10 }],
nowSeconds: NOW,
fetchImpl: async () => ({ ok: false, status: 503 })
});
assert.equal(stats.sampled, true);
assert.equal(stats.hour, 1);
});
test('fetchActiveNodeStats falls back to local counts on invalid payloads', async () => {
const stats = await fetchActiveNodeStats({
nodes: [{ last_heard: NOW - (31 * 86_400) }],
nowSeconds: NOW,
fetchImpl: async () => ({
ok: true,
async json() {
return { active_nodes: { hour: 'bad' } };
}
})
});
assert.equal(stats.sampled, true);
assert.equal(stats.month, 0);
});
test('formatActiveNodeStatsText emits expected dashboard string', () => {
const text = formatActiveNodeStatsText({
channel: 'LongFast',
frequency: '868MHz',
stats: { hour: 1, day: 2, week: 3, month: 4, sampled: false },
});
assert.equal(
text,
'LongFast (868MHz) — active nodes: 1/hour, 2/day, 3/week, 4/month.'
);
});
test('formatActiveNodeStatsText appends sampled marker when local fallback is used', () => {
const text = formatActiveNodeStatsText({
channel: 'LongFast',
frequency: '868MHz',
stats: { hour: 9, day: 8, week: 7, month: 6, sampled: true },
});
assert.equal(
text,
'LongFast (868MHz) — active nodes: 9/hour, 8/day, 7/week, 6/month (sampled).'
);
});
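The two formatting tests fully determine the output string, including the "(sampled)" marker that flags the local fallback path. An assumed implementation consistent with them:

```javascript
// Sketch of the dashboard stats formatter; the template is pinned down by
// the tests above (the em dash is U+2014 in the expected strings).
function formatActiveNodeStatsText({ channel, frequency, stats }) {
  const counts = `${stats.hour}/hour, ${stats.day}/day, ${stats.week}/week, ${stats.month}/month`;
  const suffix = stats.sampled ? ' (sampled)' : '';
  return `${channel} (${frequency}) \u2014 active nodes: ${counts}${suffix}.`;
}
```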

View File

@@ -0,0 +1,455 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { __test__, initializeMobileMenu } from '../mobile-menu.js';
const { createMobileMenuController, resolveFocusableElements } = __test__;
function createClassList() {
const values = new Set();
return {
add(...names) {
names.forEach(name => values.add(name));
},
remove(...names) {
names.forEach(name => values.delete(name));
},
contains(name) {
return values.has(name);
}
};
}
function createElement(tagName = 'div', initialId = '') {
const listeners = new Map();
const attributes = new Map();
if (initialId) {
attributes.set('id', String(initialId));
}
return {
tagName: tagName.toUpperCase(),
attributes,
classList: createClassList(),
dataset: {},
hidden: false,
parentNode: null,
nextSibling: null,
setAttribute(name, value) {
attributes.set(name, String(value));
},
getAttribute(name) {
return attributes.has(name) ? attributes.get(name) : null;
},
addEventListener(event, handler) {
listeners.set(event, handler);
},
dispatchEvent(event) {
const key = typeof event === 'string' ? event : event?.type;
const handler = listeners.get(key);
if (handler) {
handler(event);
}
},
appendChild(node) {
this.lastAppended = node;
return node;
},
insertBefore(node, nextSibling) {
this.lastInserted = { node, nextSibling };
return node;
},
focus() {
globalThis.document.activeElement = this;
},
querySelector() {
return null;
},
querySelectorAll() {
return [];
}
};
}
function createDomStub() {
const originalDocument = globalThis.document;
const registry = new Map();
const documentStub = {
body: createElement('body'),
activeElement: null,
querySelectorAll() {
return [];
},
getElementById(id) {
return registry.get(id) || null;
}
};
globalThis.document = documentStub;
return {
documentStub,
registry,
cleanup() {
globalThis.document = originalDocument;
}
};
}
function createWindowStub(matches = true) {
const listeners = new Map();
const mediaListeners = new Map();
return {
matchMedia() {
return {
matches,
addEventListener(event, handler) {
mediaListeners.set(event, handler);
}
};
},
addEventListener(event, handler) {
listeners.set(event, handler);
},
dispatchEvent(event) {
const key = typeof event === 'string' ? event : event?.type;
const handler = listeners.get(key);
if (handler) {
handler(event);
}
},
dispatchMediaChange() {
const handler = mediaListeners.get('change');
if (handler) {
handler();
}
}
};
}
function createWindowStubWithListener(matches = true) {
const listeners = new Map();
let mediaHandler = null;
return {
matchMedia() {
return {
matches,
addListener(handler) {
mediaHandler = handler;
}
};
},
addEventListener(event, handler) {
listeners.set(event, handler);
},
dispatchMediaChange() {
if (mediaHandler) {
mediaHandler();
}
}
};
}
test('mobile menu toggles open state and aria-expanded', () => {
const { documentStub, registry, cleanup } = createDomStub();
const windowStub = createWindowStub(true);
const menuToggle = createElement('button');
const menu = createElement('div');
const menuPanel = createElement('div');
const closeButton = createElement('button');
const navLink = createElement('a');
menu.hidden = true;
menuPanel.classList.add('mobile-menu__panel');
menu.querySelector = selector => {
if (selector === '.mobile-menu__panel') return menuPanel;
return null;
};
menu.querySelectorAll = selector => {
if (selector === '[data-mobile-menu-close]') return [closeButton];
if (selector === 'a') return [navLink];
return [];
};
menuPanel.querySelectorAll = () => [closeButton, navLink];
registry.set('mobileMenuToggle', menuToggle);
registry.set('mobileMenu', menu);
try {
const controller = createMobileMenuController({
documentObject: documentStub,
windowObject: windowStub
});
controller.initialize();
windowStub.dispatchMediaChange();
menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
assert.equal(menu.hidden, false);
assert.equal(menuToggle.getAttribute('aria-expanded'), 'true');
assert.equal(documentStub.body.classList.contains('menu-open'), true);
navLink.dispatchEvent({ type: 'click' });
assert.equal(menu.hidden, true);
closeButton.dispatchEvent({ type: 'click' });
assert.equal(menu.hidden, true);
assert.equal(menuToggle.getAttribute('aria-expanded'), 'false');
} finally {
cleanup();
}
});
test('mobile menu closes on escape and route changes', () => {
const { documentStub, registry, cleanup } = createDomStub();
const windowStub = createWindowStub(true);
const menuToggle = createElement('button');
const menu = createElement('div');
const menuPanel = createElement('div');
const closeButton = createElement('button');
menu.hidden = true;
menuPanel.classList.add('mobile-menu__panel');
menu.querySelector = selector => {
if (selector === '.mobile-menu__panel') return menuPanel;
return null;
};
menu.querySelectorAll = selector => {
if (selector === '[data-mobile-menu-close]') return [closeButton];
return [];
};
menuPanel.querySelectorAll = () => [closeButton];
registry.set('mobileMenuToggle', menuToggle);
registry.set('mobileMenu', menu);
try {
const controller = createMobileMenuController({
documentObject: documentStub,
windowObject: windowStub
});
controller.initialize();
menuPanel.dispatchEvent({ type: 'keydown', key: 'Escape', preventDefault() {} });
assert.equal(menu.hidden, true);
menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
assert.equal(menu.hidden, false);
menuPanel.dispatchEvent({ type: 'keydown', key: 'ArrowDown' });
assert.equal(menu.hidden, false);
menuPanel.dispatchEvent({ type: 'keydown', key: 'Escape', preventDefault() {} });
assert.equal(menu.hidden, true);
menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
windowStub.dispatchEvent({ type: 'hashchange' });
assert.equal(menu.hidden, true);
menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
windowStub.dispatchEvent({ type: 'popstate' });
assert.equal(menu.hidden, true);
} finally {
cleanup();
}
});
test('mobile menu traps focus within the panel', () => {
const { documentStub, registry, cleanup } = createDomStub();
const windowStub = createWindowStub(true);
const menuToggle = createElement('button');
const menu = createElement('div');
const menuPanel = createElement('div');
const firstLink = createElement('a');
const lastButton = createElement('button');
menuPanel.classList.add('mobile-menu__panel');
menuPanel.querySelectorAll = () => [firstLink, lastButton];
menu.querySelector = selector => {
if (selector === '.mobile-menu__panel') return menuPanel;
return null;
};
menu.querySelectorAll = () => [];
registry.set('mobileMenuToggle', menuToggle);
registry.set('mobileMenu', menu);
try {
const controller = createMobileMenuController({
documentObject: documentStub,
windowObject: windowStub
});
controller.initialize();
menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
documentStub.activeElement = lastButton;
menuPanel.dispatchEvent({ type: 'keydown', key: 'Tab', preventDefault() {}, shiftKey: false });
assert.equal(documentStub.activeElement, firstLink);
documentStub.activeElement = firstLink;
menuPanel.dispatchEvent({ type: 'keydown', key: 'Tab', preventDefault() {}, shiftKey: true });
assert.equal(documentStub.activeElement, lastButton);
} finally {
cleanup();
}
});
test('resolveFocusableElements filters out aria-hidden nodes', () => {
const hiddenButton = createElement('button');
hiddenButton.getAttribute = name => (name === 'aria-hidden' ? 'true' : null);
const openLink = createElement('a');
const bareNode = { tagName: 'DIV' };
const container = {
querySelectorAll() {
return [hiddenButton, bareNode, openLink];
}
};
const focusables = resolveFocusableElements(container);
assert.equal(focusables.length, 1);
assert.equal(focusables[0], openLink);
});
test('resolveFocusableElements handles empty containers', () => {
assert.deepEqual(resolveFocusableElements(null), []);
assert.deepEqual(resolveFocusableElements({}), []);
});
test('mobile menu focuses the panel when no focusables exist', () => {
const { documentStub, registry, cleanup } = createDomStub();
const windowStub = createWindowStub(true);
const menuToggle = createElement('button');
const menu = createElement('div');
const menuPanel = createElement('div');
const lastActive = createElement('button');
menuPanel.classList.add('mobile-menu__panel');
menuPanel.querySelectorAll = () => [];
menu.querySelector = selector => {
if (selector === '.mobile-menu__panel') return menuPanel;
return null;
};
menu.querySelectorAll = () => [];
registry.set('mobileMenuToggle', menuToggle);
registry.set('mobileMenu', menu);
documentStub.activeElement = lastActive;
try {
const controller = createMobileMenuController({
documentObject: documentStub,
windowObject: windowStub
});
controller.initialize();
menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
assert.equal(documentStub.activeElement, menuPanel);
menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
assert.equal(documentStub.activeElement, lastActive);
} finally {
cleanup();
}
});
test('mobile menu registers legacy media query listeners', () => {
const { documentStub, registry, cleanup } = createDomStub();
const windowStub = createWindowStubWithListener(true);
const menuToggle = createElement('button');
const menu = createElement('div');
const menuPanel = createElement('div');
menuPanel.classList.add('mobile-menu__panel');
menu.querySelector = selector => {
if (selector === '.mobile-menu__panel') return menuPanel;
return null;
};
menu.querySelectorAll = () => [];
registry.set('mobileMenuToggle', menuToggle);
registry.set('mobileMenu', menu);
try {
const controller = createMobileMenuController({
documentObject: documentStub,
windowObject: windowStub
});
controller.initialize();
windowStub.dispatchMediaChange();
assert.equal(menuToggle.getAttribute('aria-expanded'), 'false');
} finally {
cleanup();
}
});
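The two window stubs (`createWindowStub` vs `createWindowStubWithListener`) differ only in how the `MediaQueryList` exposes change notifications, so the controller must feature-detect `addEventListener` and fall back to the deprecated `addListener`. A typical shim for that (assumed, not the project's exact code):

```javascript
// Feature-detecting media-query subscription, as exercised by the legacy
// listener test above.
function subscribeToMediaChanges(windowObject, query, handler) {
  const mediaQuery = windowObject.matchMedia(query);
  if (typeof mediaQuery.addEventListener === 'function') {
    // Modern engines: MediaQueryList is an EventTarget.
    mediaQuery.addEventListener('change', handler);
  } else if (typeof mediaQuery.addListener === 'function') {
    // Older Safari and legacy engines only support addListener/removeListener.
    mediaQuery.addListener(handler);
  }
  return mediaQuery;
}
```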
test('mobile menu safely no-ops without required nodes', () => {
const { documentStub, cleanup } = createDomStub();
const windowStub = createWindowStub(true);
try {
const controller = createMobileMenuController({
documentObject: documentStub,
windowObject: windowStub
});
controller.initialize();
controller.openMenu();
controller.closeMenu();
controller.syncLayout();
assert.equal(documentStub.body.classList.contains('menu-open'), false);
} finally {
cleanup();
}
});
test('initializeMobileMenu returns a controller', () => {
const { documentStub, registry, cleanup } = createDomStub();
const windowStub = createWindowStub(true);
const menuToggle = createElement('button');
const menu = createElement('div');
const menuPanel = createElement('div');
menuPanel.classList.add('mobile-menu__panel');
menu.querySelector = selector => {
if (selector === '.mobile-menu__panel') return menuPanel;
return null;
};
menu.querySelectorAll = () => [];
registry.set('mobileMenuToggle', menuToggle);
registry.set('mobileMenu', menu);
try {
const controller = initializeMobileMenu({
documentObject: documentStub,
windowObject: windowStub
});
assert.equal(typeof controller.openMenu, 'function');
} finally {
cleanup();
}
});

View File

@@ -111,26 +111,6 @@ test('createNodeDetailOverlayManager renders fetched markup and restores focus',
assert.equal(focusTarget.focusCalled, true);
});
test('createNodeDetailOverlayManager mounts telemetry charts for overlay content', async () => {
const { document, content } = createOverlayHarness();
const chartModels = [{ id: 'power' }];
let mountCall = null;
const manager = createNodeDetailOverlayManager({
document,
fetchNodeDetail: async () => ({ html: '<section class="node-detail">Charts</section>', chartModels }),
mountCharts: (models, options) => {
mountCall = { models, options };
return [];
},
});
assert.ok(manager);
await manager.open({ nodeId: '!alpha' });
assert.equal(content.innerHTML.includes('Charts'), true);
assert.ok(mountCall);
assert.equal(mountCall.models, chartModels);
assert.equal(mountCall.options.root, content);
});
test('createNodeDetailOverlayManager surfaces errors and supports escape closing', async () => {
const { document, overlay, content } = createOverlayHarness();
const errors = [];

View File

@@ -47,9 +47,7 @@ const {
categoriseNeighbors,
renderNeighborGroups,
renderSingleNodeTable,
createTelemetryCharts,
renderTelemetryCharts,
buildUPlotChartConfig,
renderMessages,
renderTraceroutes,
renderTracePath,
@@ -156,12 +154,14 @@ test('additional format helpers provide table friendly output', () => {
channel_name: 'Primary',
node: { short_name: 'SRCE', role: 'ROUTER', node_id: '!src' },
},
{ text: ' GAA= ', encrypted: true, rx_time: 1_700_000_405 },
{ emoji: '😊', rx_time: 1_700_000_401 },
],
renderShortHtml,
nodeContext,
);
assert.equal(messagesHtml.includes('hello'), true);
assert.equal(messagesHtml.includes('GAA='), false);
assert.equal(messagesHtml.includes('😊'), true);
assert.match(messagesHtml, /\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}\]\[868\]/);
assert.equal(messagesHtml.includes('[868]'), true);
@@ -388,10 +388,23 @@ test('renderTelemetryCharts renders condensed scatter charts when telemetry exis
},
};
const html = renderTelemetryCharts(node, { nowMs });
const fmt = new Date(nowMs);
const expectedDate = String(fmt.getDate()).padStart(2, '0');
assert.equal(html.includes('node-detail__charts'), true);
assert.equal(html.includes('Power metrics'), true);
assert.equal(html.includes('Environmental telemetry'), true);
assert.equal(html.includes('node-detail__chart-plot'), true);
assert.equal(html.includes('Battery (%)'), true);
assert.equal(html.includes('Voltage (V)'), true);
assert.equal(html.includes('Current (A)'), true);
assert.equal(html.includes('Channel utilization (%)'), true);
assert.equal(html.includes('Air util TX (%)'), true);
assert.equal(html.includes('Utilization (%)'), true);
assert.equal(html.includes('Gas resistance (\u03a9)'), true);
assert.equal(html.includes('Air quality'), true);
assert.equal(html.includes('IAQ index'), true);
assert.equal(html.includes('Temperature (\u00b0C)'), true);
assert.equal(html.includes(expectedDate), true);
assert.equal(html.includes('node-detail__chart-point'), true);
});
test('renderTelemetryCharts expands upper bounds when overflow metrics exceed defaults', () => {
@@ -422,18 +435,12 @@ test('renderTelemetryCharts expands upper bounds when overflow metrics exceed de
},
},
};
const { chartModels } = createTelemetryCharts(node, { nowMs });
const powerChart = chartModels.find(model => model.id === 'power');
const environmentChart = chartModels.find(model => model.id === 'environment');
const airChart = chartModels.find(model => model.id === 'airQuality');
const powerConfig = buildUPlotChartConfig(powerChart);
const envConfig = buildUPlotChartConfig(environmentChart);
const airConfig = buildUPlotChartConfig(airChart);
assert.equal(powerConfig.options.scales.voltage.range()[1], 7.2);
assert.equal(powerConfig.options.scales.current.range()[1], 3.6);
assert.equal(envConfig.options.scales.temperature.range()[1], 45);
assert.equal(airConfig.options.scales.iaq.range()[1], 650);
assert.equal(airConfig.options.scales.pressure.range()[1], 1100);
const html = renderTelemetryCharts(node, { nowMs });
assert.match(html, />7\.2<\/text>/);
assert.match(html, />3\.6<\/text>/);
assert.match(html, />45<\/text>/);
assert.match(html, />650<\/text>/);
assert.match(html, />1100<\/text>/);
});
test('renderTelemetryCharts keeps default bounds when metrics stay within limits', () => {
@@ -464,17 +471,11 @@ test('renderTelemetryCharts keeps default bounds when metrics stay within limits
},
},
};
const { chartModels } = createTelemetryCharts(node, { nowMs });
const powerChart = chartModels.find(model => model.id === 'power');
const environmentChart = chartModels.find(model => model.id === 'environment');
const airChart = chartModels.find(model => model.id === 'airQuality');
const powerConfig = buildUPlotChartConfig(powerChart);
const envConfig = buildUPlotChartConfig(environmentChart);
const airConfig = buildUPlotChartConfig(airChart);
assert.equal(powerConfig.options.scales.voltage.range()[1], 6);
assert.equal(powerConfig.options.scales.current.range()[1], 3);
assert.equal(envConfig.options.scales.temperature.range()[1], 40);
assert.equal(airConfig.options.scales.iaq.range()[1], 500);
const html = renderTelemetryCharts(node, { nowMs });
assert.match(html, />6\.0<\/text>/);
assert.match(html, />3\.0<\/text>/);
assert.match(html, />40<\/text>/);
assert.match(html, />500<\/text>/);
});
test('renderNodeDetailHtml composes the table, neighbors, and messages', () => {
@@ -590,18 +591,17 @@ test('fetchNodeDetailHtml renders the node layout for overlays', async () => {
neighbors: [],
rawSources: { node: { node_id: '!alpha', role: 'CLIENT', short_name: 'ALPH' } },
});
const result = await fetchNodeDetailHtml(reference, {
const html = await fetchNodeDetailHtml(reference, {
refreshImpl,
fetchImpl,
renderShortHtml: short => `<span class="short-name">${short}</span>`,
returnState: true,
});
assert.equal(calledUrls.some(url => url.includes('/api/messages/!alpha')), true);
assert.equal(calledUrls.some(url => url.includes('/api/traces/!alpha')), true);
assert.equal(result.html.includes('Example Alpha'), true);
assert.equal(result.html.includes('Overlay hello'), true);
assert.equal(result.html.includes('Traceroutes'), true);
assert.equal(result.html.includes('node-detail__table'), true);
assert.equal(html.includes('Example Alpha'), true);
assert.equal(html.includes('Overlay hello'), true);
assert.equal(html.includes('Traceroutes'), true);
assert.equal(html.includes('node-detail__table'), true);
});
test('fetchNodeDetailHtml hydrates traceroute nodes with API metadata', async () => {
@@ -639,17 +639,16 @@ test('fetchNodeDetailHtml hydrates traceroute nodes with API metadata', async ()
rawSources: { node: { node_id: '!origin', role: 'CLIENT', short_name: 'ORIG' } },
});
const result = await fetchNodeDetailHtml(reference, {
const html = await fetchNodeDetailHtml(reference, {
refreshImpl,
fetchImpl,
renderShortHtml: short => `<span class="short-name">${short}</span>`,
returnState: true,
});
assert.equal(calledUrls.some(url => url.includes('/api/nodes/!relay')), true);
assert.equal(calledUrls.some(url => url.includes('/api/nodes/!target')), true);
assert.equal(result.html.includes('RLY1'), true);
assert.equal(result.html.includes('TGT1'), true);
assert.equal(html.includes('RLY1'), true);
assert.equal(html.includes('TGT1'), true);
});
test('fetchNodeDetailHtml requires a node identifier reference', async () => {
@@ -949,13 +948,19 @@ test('initializeNodeDetailPage reports an error when refresh fails', async () =>
throw new Error('boom');
};
const renderShortHtml = short => `<span>${short}</span>`;
const result = await initializeNodeDetailPage({
document: documentStub,
refreshImpl,
renderShortHtml,
});
assert.equal(result, false);
assert.equal(element.innerHTML.includes('Failed to load'), true);
const originalError = console.error;
console.error = () => {};
try {
const result = await initializeNodeDetailPage({
document: documentStub,
refreshImpl,
renderShortHtml,
});
assert.equal(result, false);
assert.equal(element.innerHTML.includes('Failed to load'), true);
} finally {
console.error = originalError;
}
});
test('initializeNodeDetailPage handles missing reference payloads', async () => {

View File

@@ -1,360 +0,0 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { __testUtils } from '../node-page.js';
import { buildMovingAverageSeries } from '../charts-page.js';
const {
createTelemetryCharts,
buildUPlotChartConfig,
mountTelemetryCharts,
mountTelemetryChartsWithRetry,
} = __testUtils;
test('uPlot chart config preserves axes, colors, and tick labels for node telemetry', () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const node = {
rawSources: {
telemetry: {
snapshots: [
{
rx_time: nowSeconds - 60,
device_metrics: {
battery_level: 80,
voltage: 4.1,
current: 0.75,
},
},
{
rx_time: nowSeconds - 3_600,
device_metrics: {
battery_level: 78,
voltage: 4.05,
current: 0.65,
},
},
],
},
},
};
const { chartModels } = createTelemetryCharts(node, {
nowMs,
chartOptions: {
xAxisTickBuilder: () => [nowMs],
xAxisTickFormatter: () => '08',
},
});
const powerChart = chartModels.find(model => model.id === 'power');
const { options, data } = buildUPlotChartConfig(powerChart);
assert.deepEqual(options.scales.battery.range(), [0, 100]);
assert.deepEqual(options.scales.voltage.range(), [0, 6]);
assert.deepEqual(options.scales.current.range(), [0, 3]);
assert.equal(options.series[1].stroke, '#8856a7');
assert.equal(options.series[2].stroke, '#9ebcda');
assert.equal(options.series[3].stroke, '#3182bd');
assert.deepEqual(options.axes[0].values(null, [nowMs]), ['08']);
assert.equal(options.axes[0].stroke, '#5c6773');
assert.deepEqual(data[0].slice(0, 2), [nowMs - 3_600_000, nowMs - 60_000]);
assert.deepEqual(data[1].slice(0, 2), [78, 80]);
});
test('uPlot chart config maps moving averages and raw points for aggregated telemetry', () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const snapshots = [
{
rx_time: nowSeconds - 3_600,
device_metrics: { battery_level: 10 },
},
{
rx_time: nowSeconds - 1_800,
device_metrics: { battery_level: 20 },
},
];
const node = { rawSources: { telemetry: { snapshots } } };
const { chartModels } = createTelemetryCharts(node, {
nowMs,
chartOptions: {
lineReducer: points => buildMovingAverageSeries(points, 3_600_000),
},
});
const powerChart = chartModels.find(model => model.id === 'power');
const { options, data } = buildUPlotChartConfig(powerChart);
assert.equal(options.series.length, 3);
assert.equal(options.series[1].stroke.startsWith('rgba('), true);
assert.equal(options.series[2].stroke, '#8856a7');
assert.deepEqual(data[1].slice(0, 2), [10, 15]);
assert.deepEqual(data[2].slice(0, 2), [10, 20]);
});
test('buildUPlotChartConfig applies axis color overrides', () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const node = {
rawSources: {
telemetry: {
snapshots: [
{
rx_time: nowSeconds - 60,
device_metrics: { battery_level: 80 },
},
],
},
},
};
const { chartModels } = createTelemetryCharts(node, { nowMs });
const powerChart = chartModels.find(model => model.id === 'power');
const { options } = buildUPlotChartConfig(powerChart, {
axisColor: '#ffffff',
gridColor: '#222222',
});
assert.equal(options.axes[0].stroke, '#ffffff');
assert.equal(options.axes[0].grid.stroke, '#222222');
});
test('environment chart renders humidity axis on the right side', () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const node = {
rawSources: {
telemetry: {
snapshots: [
{
rx_time: nowSeconds - 60,
environment_metrics: {
temperature: 19.5,
relative_humidity: 55,
},
},
],
},
},
};
const { chartModels } = createTelemetryCharts(node, { nowMs });
const envChart = chartModels.find(model => model.id === 'environment');
const { options } = buildUPlotChartConfig(envChart);
const humidityAxis = options.axes.find(axis => axis.scale === 'humidity');
assert.ok(humidityAxis);
assert.equal(humidityAxis.side, 1);
assert.equal(humidityAxis.show, true);
});
test('channel utilization chart includes a right-side utilization axis', () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const node = {
rawSources: {
telemetry: {
snapshots: [
{
rx_time: nowSeconds - 60,
device_metrics: {
channel_utilization: 40,
air_util_tx: 22,
},
},
],
},
},
};
const { chartModels } = createTelemetryCharts(node, { nowMs });
const channelChart = chartModels.find(model => model.id === 'channel');
const { options } = buildUPlotChartConfig(channelChart);
const rightAxis = options.axes.find(axis => axis.scale === 'channelSecondary');
assert.ok(rightAxis);
assert.equal(rightAxis.side, 1);
assert.equal(rightAxis.show, true);
});
test('createTelemetryCharts returns empty markup when snapshots are missing', () => {
const { chartsHtml, chartModels } = createTelemetryCharts({ rawSources: { telemetry: { snapshots: [] } } });
assert.equal(chartsHtml, '');
assert.equal(chartModels.length, 0);
});
test('mountTelemetryCharts instantiates uPlot for chart containers', () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const node = {
rawSources: {
telemetry: {
snapshots: [
{
rx_time: nowSeconds - 60,
device_metrics: { battery_level: 80 },
},
],
},
},
};
const { chartModels } = createTelemetryCharts(node, { nowMs });
const [model] = chartModels;
const plotRoot = { innerHTML: 'placeholder' };
const chartContainer = {
querySelector(selector) {
return selector === '[data-telemetry-plot]' ? plotRoot : null;
},
};
const root = {
querySelector(selector) {
return selector === `[data-telemetry-chart-id="${model.id}"]` ? chartContainer : null;
},
};
class UPlotStub {
constructor(options, data, container) {
this.options = options;
this.data = data;
this.container = container;
}
}
const instances = mountTelemetryCharts(chartModels, { root, uPlotImpl: UPlotStub });
assert.equal(plotRoot.innerHTML, '');
assert.equal(instances.length, 1);
assert.equal(instances[0].container, plotRoot);
});
test('mountTelemetryCharts responds to window resize events', async () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const node = {
rawSources: {
telemetry: {
snapshots: [
{
rx_time: nowSeconds - 60,
device_metrics: { battery_level: 80 },
},
],
},
},
};
const { chartModels } = createTelemetryCharts(node, { nowMs });
const [model] = chartModels;
const plotRoot = {
innerHTML: '',
clientWidth: 320,
clientHeight: 180,
getBoundingClientRect() {
return { width: this.clientWidth, height: this.clientHeight };
},
};
const chartContainer = {
querySelector(selector) {
return selector === '[data-telemetry-plot]' ? plotRoot : null;
},
};
const root = {
querySelector(selector) {
return selector === `[data-telemetry-chart-id="${model.id}"]` ? chartContainer : null;
},
};
const previousResizeObserver = globalThis.ResizeObserver;
const previousAddEventListener = globalThis.addEventListener;
let resizeHandler = null;
globalThis.ResizeObserver = undefined;
globalThis.addEventListener = (event, handler) => {
if (event === 'resize') {
resizeHandler = handler;
}
};
const sizeCalls = [];
class UPlotStub {
constructor(options, data, container) {
this.options = options;
this.data = data;
this.container = container;
this.root = container;
}
setSize(size) {
sizeCalls.push(size);
}
}
mountTelemetryCharts(chartModels, { root, uPlotImpl: UPlotStub });
assert.ok(resizeHandler);
plotRoot.clientWidth = 480;
plotRoot.clientHeight = 240;
resizeHandler();
await new Promise(resolve => setTimeout(resolve, 150));
assert.ok(sizeCalls.length >= 1);
assert.deepEqual(sizeCalls[sizeCalls.length - 1], { width: 480, height: 240 });
globalThis.ResizeObserver = previousResizeObserver;
globalThis.addEventListener = previousAddEventListener;
});
test('mountTelemetryChartsWithRetry loads uPlot when missing', async () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const node = {
rawSources: {
telemetry: {
snapshots: [
{
rx_time: nowSeconds - 60,
device_metrics: { battery_level: 80 },
},
],
},
},
};
const { chartModels } = createTelemetryCharts(node, { nowMs });
const [model] = chartModels;
const plotRoot = { innerHTML: '', clientWidth: 400, clientHeight: 200 };
const chartContainer = {
querySelector(selector) {
return selector === '[data-telemetry-plot]' ? plotRoot : null;
},
};
const root = {
ownerDocument: {
body: {},
querySelector: () => null,
},
querySelector(selector) {
return selector === `[data-telemetry-chart-id="${model.id}"]` ? chartContainer : null;
},
};
const previousUPlot = globalThis.uPlot;
const instances = [];
class UPlotStub {
constructor(options, data, container) {
this.options = options;
this.data = data;
this.container = container;
instances.push(this);
}
}
let loadCalled = false;
const loadUPlot = ({ onLoad }) => {
loadCalled = true;
globalThis.uPlot = UPlotStub;
if (typeof onLoad === 'function') {
onLoad();
}
return true;
};
mountTelemetryChartsWithRetry(chartModels, { root, loadUPlot });
await new Promise(resolve => setTimeout(resolve, 0));
assert.equal(loadCalled, true);
assert.equal(instances.length, 1);
globalThis.uPlot = previousUPlot;
});


@@ -14,7 +14,7 @@
* limitations under the License.
*/
import { createTelemetryCharts, mountTelemetryChartsWithRetry } from './node-page.js';
import { renderTelemetryCharts } from './node-page.js';
const TELEMETRY_BUCKET_SECONDS = 60 * 60;
const HOUR_MS = 60 * 60 * 1000;
@@ -193,21 +193,6 @@ export async function fetchAggregatedTelemetry({
.filter(snapshot => snapshot != null);
}
/**
* Fetch and render aggregated telemetry charts.
*
* @param {{
* document?: Document,
* rootId?: string,
* fetchImpl?: Function,
* bucketSeconds?: number,
* windowMs?: number,
* createCharts?: Function,
* mountCharts?: Function,
* uPlotImpl?: Function,
* }} options Optional overrides for testing.
* @returns {Promise<boolean>} ``true`` when charts were rendered successfully.
*/
export async function initializeChartsPage(options = {}) {
const documentRef = options.document ?? globalThis.document;
if (!documentRef || typeof documentRef.getElementById !== 'function') {
@@ -219,8 +204,7 @@ export async function initializeChartsPage(options = {}) {
return false;
}
const createCharts = typeof options.createCharts === 'function' ? options.createCharts : createTelemetryCharts;
const mountCharts = typeof options.mountCharts === 'function' ? options.mountCharts : mountTelemetryChartsWithRetry;
const renderCharts = typeof options.renderCharts === 'function' ? options.renderCharts : renderTelemetryCharts;
const fetchImpl = options.fetchImpl ?? globalThis.fetch;
const bucketSeconds = options.bucketSeconds ?? TELEMETRY_BUCKET_SECONDS;
const windowMs = options.windowMs ?? CHART_WINDOW_MS;
@@ -234,7 +218,7 @@ export async function initializeChartsPage(options = {}) {
return true;
}
const node = { rawSources: { telemetry: { snapshots } } };
const chartState = createCharts(node, {
const chartsHtml = renderCharts(node, {
nowMs: Date.now(),
chartOptions: {
windowMs,
@@ -244,12 +228,11 @@ export async function initializeChartsPage(options = {}) {
lineReducer: points => buildMovingAverageSeries(points, HOUR_MS),
},
});
if (!chartState.chartsHtml) {
if (!chartsHtml) {
container.innerHTML = renderStatus('Telemetry snapshots are unavailable.');
return true;
}
container.innerHTML = chartState.chartsHtml;
mountCharts(chartState.chartModels, { root: container, uPlotImpl: options.uPlotImpl });
container.innerHTML = chartsHtml;
return true;
} catch (error) {
console.error('Failed to render aggregated telemetry charts', error);


@@ -20,7 +20,7 @@ import { extractModemMetadata } from './node-modem-metadata.js';
* Highest channel index that should be represented within the tab view.
* @type {number}
*/
export const MAX_CHANNEL_INDEX = 9;
export const MAX_CHANNEL_INDEX = 255;
/**
* Discrete event types that can appear in the chat activity log.
@@ -65,7 +65,8 @@ function resolveSnapshotList(entry) {
* Build a data model describing the content for chat tabs.
*
* Entries outside the recent activity window, encrypted messages, and
* channels above {@link MAX_CHANNEL_INDEX} are filtered out.
* channels above {@link MAX_CHANNEL_INDEX} are filtered out. Channel
* buckets are only created when messages are present for that channel.
*
* @param {{
* nodes?: Array<Object>,
@@ -102,11 +103,29 @@ export function buildChatTabModel({
const logEntries = [];
const channelBuckets = new Map();
const primaryChannelEnvLabel = normalisePrimaryChannelEnvLabel(primaryChannelFallbackLabel);
const nodeById = new Map();
const nodeByNum = new Map();
const nodeInfoKeys = new Set();
const buildNodeInfoKey = (nodeId, nodeNum, ts) => `${nodeId ?? ''}:${nodeNum ?? ''}:${ts ?? ''}`;
const recordNodeInfoEntry = (ts, nodeId, nodeNum) => {
if (ts == null) return;
const key = buildNodeInfoKey(nodeId, nodeNum, ts);
if (nodeInfoKeys.has(key)) return;
const node = nodeId && nodeById.has(nodeId)
? nodeById.get(nodeId)
: (nodeNum != null && nodeByNum.has(nodeNum) ? nodeByNum.get(nodeNum) : null);
if (!node) return;
nodeInfoKeys.add(key);
logEntries.push({ ts, type: CHAT_LOG_ENTRY_TYPES.NODE_INFO, node, nodeId, nodeNum });
};
for (const node of nodes || []) {
if (!node) continue;
const nodeId = normaliseNodeId(node);
const nodeNum = normaliseNodeNum(node);
if (nodeId) nodeById.set(nodeId, node);
if (nodeNum != null) nodeByNum.set(nodeNum, node);
const firstTs = resolveTimestampSeconds(node.first_heard ?? node.firstHeard, node.first_heard_iso ?? node.firstHeardIso);
if (firstTs != null && firstTs >= cutoff) {
logEntries.push({ ts: firstTs, type: CHAT_LOG_ENTRY_TYPES.NODE_NEW, node, nodeId, nodeNum });
@@ -114,6 +133,7 @@ export function buildChatTabModel({
const lastTs = resolveTimestampSeconds(node.last_heard ?? node.lastHeard, node.last_seen_iso ?? node.lastSeenIso);
if (lastTs != null && lastTs >= cutoff) {
logEntries.push({ ts: lastTs, type: CHAT_LOG_ENTRY_TYPES.NODE_INFO, node, nodeId, nodeNum });
nodeInfoKeys.add(buildNodeInfoKey(nodeId, nodeNum, lastTs));
}
}
@@ -129,6 +149,7 @@ export function buildChatTabModel({
const nodeId = normaliseNodeId(snapshot);
const nodeNum = normaliseNodeNum(snapshot);
logEntries.push({ ts, type: CHAT_LOG_ENTRY_TYPES.TELEMETRY, telemetry: snapshot, nodeId, nodeNum });
recordNodeInfoEntry(ts, nodeId, nodeNum);
}
}
@@ -144,6 +165,7 @@ export function buildChatTabModel({
const nodeId = normaliseNodeId(snapshot);
const nodeNum = normaliseNodeNum(snapshot);
logEntries.push({ ts, type: CHAT_LOG_ENTRY_TYPES.POSITION, position: snapshot, nodeId, nodeNum });
recordNodeInfoEntry(ts, nodeId, nodeNum);
}
}
@@ -157,6 +179,7 @@ export function buildChatTabModel({
const nodeNum = normaliseNodeNum(snapshot);
const neighborId = normaliseNeighborId(snapshot);
logEntries.push({ ts, type: CHAT_LOG_ENTRY_TYPES.NEIGHBOR, neighbor: snapshot, nodeId, nodeNum, neighborId });
recordNodeInfoEntry(ts, nodeId, nodeNum);
}
}
@@ -186,6 +209,7 @@ export function buildChatTabModel({
nodeId: firstHop.id ?? null,
nodeNum: firstHop.num ?? null
});
recordNodeInfoEntry(ts, firstHop.id ?? null, firstHop.num ?? null);
}
const encryptedLogEntries = [];
@@ -221,28 +245,12 @@ export function buildChatTabModel({
modemPreset,
envFallbackLabel: primaryChannelEnvLabel
});
const nameBucketKey = safeIndex > 0 ? buildSecondaryNameBucketKey(labelInfo) : null;
const nameBucketKey = safeIndex > 0 ? buildSecondaryNameBucketKey(safeIndex, labelInfo) : null;
const primaryBucketKey = safeIndex === 0 && labelInfo.label !== '0' ? buildPrimaryBucketKey(labelInfo.label) : '0';
let bucketKey = safeIndex === 0 ? primaryBucketKey : nameBucketKey ?? String(safeIndex);
const bucketKey = safeIndex === 0 ? primaryBucketKey : nameBucketKey ?? String(safeIndex);
let bucket = channelBuckets.get(bucketKey);
if (!bucket && safeIndex > 0) {
const existingBucketKey = findExistingBucketKeyByIndex(channelBuckets, safeIndex);
if (existingBucketKey) {
bucketKey = existingBucketKey;
bucket = channelBuckets.get(existingBucketKey);
}
}
if (bucket && nameBucketKey && bucket.key !== nameBucketKey) {
channelBuckets.delete(bucket.key);
bucket.key = nameBucketKey;
bucket.id = buildChannelTabId(nameBucketKey);
channelBuckets.set(nameBucketKey, bucket);
bucketKey = nameBucketKey;
}
if (!bucket) {
bucket = {
key: bucketKey,
@@ -287,26 +295,6 @@ export function buildChatTabModel({
logEntries.sort((a, b) => a.ts - b.ts);
let hasPrimaryBucket = false;
for (const bucket of channelBuckets.values()) {
if (bucket.index === 0) {
hasPrimaryBucket = true;
break;
}
}
if (!hasPrimaryBucket) {
const bucketKey = '0';
channelBuckets.set(bucketKey, {
key: bucketKey,
id: buildChannelTabId(bucketKey),
index: 0,
label: '0',
entries: [],
labelPriority: CHANNEL_LABEL_PRIORITY.INDEX,
isPrimaryFallback: true
});
}
const channels = Array.from(channelBuckets.values()).sort((a, b) => {
if (a.index !== b.index) {
return a.index - b.index;
@@ -565,43 +553,42 @@ function buildPrimaryBucketKey(primaryChannelLabel) {
return '0';
}
function buildSecondaryNameBucketKey(labelInfo) {
function buildSecondaryNameBucketKey(index, labelInfo) {
const label = labelInfo?.label ?? null;
const priority = labelInfo?.priority ?? CHANNEL_LABEL_PRIORITY.INDEX;
if (priority !== CHANNEL_LABEL_PRIORITY.NAME || !label) {
if (!Number.isFinite(index) || index <= 0 || priority !== CHANNEL_LABEL_PRIORITY.NAME || !label) {
return null;
}
const trimmedLabel = label.trim().toLowerCase();
if (!trimmedLabel.length) {
return null;
}
return `secondary::${trimmedLabel}`;
}
function findExistingBucketKeyByIndex(channelBuckets, targetIndex) {
if (!channelBuckets || !Number.isFinite(targetIndex) || targetIndex <= 0) {
return null;
}
const normalizedTarget = Math.trunc(targetIndex);
for (const [key, bucket] of channelBuckets.entries()) {
if (!bucket || !Number.isFinite(bucket.index)) {
continue;
}
if (Math.trunc(bucket.index) !== normalizedTarget) {
continue;
}
if (bucket.index === 0) {
continue;
}
return key;
}
return null;
return `secondary-name::${trimmedLabel}`;
}
function buildChannelTabId(bucketKey) {
if (bucketKey === '0') {
return 'channel-0';
}
const secondaryNameParts = /^secondary-name::(.+)$/.exec(String(bucketKey));
if (secondaryNameParts) {
const secondaryLabelSlug = slugify(secondaryNameParts[1]);
const secondaryHash = hashChannelKey(bucketKey);
if (secondaryLabelSlug) {
return `channel-secondary-name-${secondaryLabelSlug}-${secondaryHash}`;
}
return `channel-secondary-name-${secondaryHash}`;
}
const secondaryParts = /^secondary::(\d+)::(.+)$/.exec(String(bucketKey));
if (secondaryParts) {
const secondaryIndex = secondaryParts[1];
const secondaryLabelSlug = slugify(secondaryParts[2]);
const secondaryHash = hashChannelKey(bucketKey);
if (secondaryLabelSlug) {
return `channel-secondary-${secondaryIndex}-${secondaryLabelSlug}-${secondaryHash}`;
}
return `channel-secondary-${secondaryIndex}-${secondaryHash}`;
}
const slug = slugify(bucketKey);
if (slug) {
if (slug !== '0') {


@@ -0,0 +1,172 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
const MAX_VISIBLE_SITE_NAME_LENGTH = 32;
const TRUNCATION_SUFFIX = '...';
const TRUNCATED_SITE_NAME_LENGTH = MAX_VISIBLE_SITE_NAME_LENGTH - TRUNCATION_SUFFIX.length;
const SUPPRESSED_SITE_NAME_PATTERN = /(?:^|[^a-z0-9])(?:https?:\/\/|www\.)\S+/i;
/**
* Read a federated instance site name as a trimmed string.
*
* @param {{ name?: string } | null | undefined} entry Federation instance payload entry.
* @returns {string} Trimmed site name or an empty string when absent.
*/
function readSiteName(entry) {
if (!entry || typeof entry !== 'object') {
return '';
}
return typeof entry.name === 'string' ? entry.name.trim() : '';
}
/**
* Read a federated instance domain as a trimmed string.
*
* @param {{ domain?: string } | null | undefined} entry Federation instance payload entry.
* @returns {string} Trimmed domain or an empty string when absent.
*/
function readDomain(entry) {
if (!entry || typeof entry !== 'object') {
return '';
}
return typeof entry.domain === 'string' ? entry.domain.trim() : '';
}
/**
* Determine whether a remote site name should be suppressed from frontend displays.
*
* @param {string} name Remote site name.
* @returns {boolean} true when the name contains a URL-like advertising token.
*/
export function isSuppressedFederationSiteName(name) {
if (typeof name !== 'string') {
return false;
}
const trimmed = name.trim();
if (!trimmed) {
return false;
}
return SUPPRESSED_SITE_NAME_PATTERN.test(trimmed);
}
/**
* Truncate an instance site name for frontend display without mutating source data.
*
 * Names longer than 32 characters are shortened so that the result, including
 * the trailing ellipsis, still fits within the 32-character budget.
*
* @param {string} name Remote site name.
* @returns {string} Display-ready site name.
*/
export function truncateFederationSiteName(name) {
if (typeof name !== 'string') {
return '';
}
const trimmed = name.trim();
if (trimmed.length <= MAX_VISIBLE_SITE_NAME_LENGTH) {
return trimmed;
}
return `${trimmed.slice(0, TRUNCATED_SITE_NAME_LENGTH)}${TRUNCATION_SUFFIX}`;
}
/**
* Determine whether an instance should remain visible in frontend federation views.
*
* @param {{ name?: string } | null | undefined} entry Federation instance payload entry.
* @returns {boolean} true when the entry should be shown to users.
*/
export function shouldDisplayFederationInstance(entry) {
return !isSuppressedFederationSiteName(readSiteName(entry));
}
/**
* Resolve a frontend display name for a federation instance.
*
* @param {{ name?: string } | null | undefined} entry Federation instance payload entry.
* @returns {string} Display-ready site name or an empty string when absent.
*/
export function resolveFederationSiteNameForDisplay(entry) {
const siteName = readSiteName(entry);
return siteName ? truncateFederationSiteName(siteName) : '';
}
/**
* Resolve the original trimmed site name for a federation instance.
*
* @param {{ name?: string } | null | undefined} entry Federation instance payload entry.
* @returns {string} Full trimmed site name or an empty string when absent.
*/
export function resolveFederationSiteName(entry) {
return readSiteName(entry);
}
/**
* Determine the full sort value for an instance selector entry.
*
* Sorting must use the original trimmed site name so truncation does not collapse
* multiple entries into the same comparison key.
*
* @param {{ name?: string, domain?: string } | null | undefined} entry Federation instance payload entry.
* @returns {string} Full trimmed site name falling back to the domain.
*/
export function resolveFederationInstanceSortValue(entry) {
const siteName = resolveFederationSiteName(entry);
return siteName || readDomain(entry);
}
/**
* Determine the most suitable display label for an instance list entry.
*
* @param {{ name?: string, domain?: string } | null | undefined} entry Federation instance payload entry.
* @returns {string} Display label falling back to the domain.
*/
export function resolveFederationInstanceLabel(entry) {
const siteName = resolveFederationSiteNameForDisplay(entry);
if (siteName) {
return siteName;
}
return readDomain(entry);
}
/**
* Filter a federation payload down to the instances that should remain visible.
*
* @param {Array<object>} entries Federation payload from the API.
* @returns {Array<object>} Visible instances for frontend rendering.
*/
export function filterDisplayableFederationInstances(entries) {
if (!Array.isArray(entries)) {
return [];
}
return entries.filter(shouldDisplayFederationInstance);
}
export const __test__ = {
MAX_VISIBLE_SITE_NAME_LENGTH,
TRUNCATION_SUFFIX,
TRUNCATED_SITE_NAME_LENGTH,
readDomain,
readSiteName,
SUPPRESSED_SITE_NAME_PATTERN
};
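Taken together, the suppression and truncation rules above can be exercised with a small standalone sketch. The constants and the `displayName` helper below restate the module's logic locally for illustration; they are not the module's actual exports.

```javascript
// Standalone sketch mirroring the suppression and truncation rules above.
const MAX_LEN = 32;
const SUFFIX = '...';
const URL_LIKE = /(?:^|[^a-z0-9])(?:https?:\/\/|www\.)\S+/i;

function displayName(name) {
  const trimmed = String(name ?? '').trim();
  // Suppressed (URL-like advertising token) or empty names render as ''.
  if (!trimmed || URL_LIKE.test(trimmed)) return '';
  if (trimmed.length <= MAX_LEN) return trimmed;
  // Shorten so the result, including the suffix, is exactly 32 characters.
  return trimmed.slice(0, MAX_LEN - SUFFIX.length) + SUFFIX;
}

console.log(displayName('Berlin Mesh'));                    // 'Berlin Mesh'
console.log(displayName('Join us at https://example.com')); // '' (suppressed)
console.log(displayName('A'.repeat(40)));                   // 29 chars + '...'
```

Sorting, by contrast, deliberately uses the untruncated name (see `resolveFederationInstanceSortValue`), so two long names that truncate identically still compare as distinct keys.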



@@ -15,6 +15,12 @@
*/
import { readAppConfig } from './config.js';
import {
filterDisplayableFederationInstances,
resolveFederationSiteName,
resolveFederationSiteNameForDisplay
} from './federation-instance-display.js';
import { resolveLegendVisibility } from './map-legend-visibility.js';
import { mergeConfig } from './settings.js';
import { roleColors } from './role-helpers.js';
@@ -204,6 +210,31 @@ function hasNumberValue(value) {
return toFiniteNumber(value) != null;
}
/**
* Toggle the legend hidden class on a container element.
*
* @param {HTMLElement|{ classList?: { toggle?: Function }, className?: string }} container Legend container.
* @param {boolean} hidden Whether the legend should be hidden.
* @returns {void}
*/
function toggleLegendHiddenClass(container, hidden) {
if (!container) return;
if (container.classList && typeof container.classList.toggle === 'function') {
container.classList.toggle('legend-hidden', hidden);
return;
}
if (typeof container.className === 'string') {
const classes = container.className.split(/\s+/).filter(Boolean);
const hasHidden = classes.includes('legend-hidden');
if (hidden && !hasHidden) {
classes.push('legend-hidden');
} else if (!hidden && hasHidden) {
classes.splice(classes.indexOf('legend-hidden'), 1);
}
container.className = classes.join(' ');
}
}
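The `className` fallback above matters for tests that stub DOM nodes as plain objects without a `classList`. A condensed, self-contained sketch of that path (the `toggleHidden` name is illustrative, not the module's export):

```javascript
// Condensed sketch of the classList-or-className toggle used above.
function toggleHidden(container, hidden) {
  if (container.classList?.toggle) {
    container.classList.toggle('legend-hidden', hidden);
    return;
  }
  // Fallback for plain-object stubs: edit the className string directly.
  const classes = (container.className || '').split(/\s+/).filter(Boolean);
  const has = classes.includes('legend-hidden');
  if (hidden && !has) classes.push('legend-hidden');
  else if (!hidden && has) classes.splice(classes.indexOf('legend-hidden'), 1);
  container.className = classes.join(' ');
}

const stub = { className: 'legend legend--instances' };
toggleHidden(stub, true);
console.log(stub.className); // 'legend legend--instances legend-hidden'
toggleHidden(stub, false);
console.log(stub.className); // 'legend legend--instances'
```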
const TILE_LAYER_URL = 'https://{s}.tile.openstreetmap.fr/hot/{z}/{x}/{y}.png';
/**
@@ -223,6 +254,7 @@ export async function initializeFederationPage(options = {}) {
const fetchImpl = options.fetchImpl || fetch;
const leaflet = options.leaflet || (typeof window !== 'undefined' ? window.L : null);
const mapContainer = document.getElementById('map');
const mapPanel = document.getElementById('mapPanel');
const tableEl = document.getElementById('instances');
const tableBody = document.querySelector('#instances tbody');
const statusEl = document.getElementById('status');
@@ -239,8 +271,20 @@ export async function initializeFederationPage(options = {}) {
let map = null;
let markersLayer = null;
let tileLayer = null;
let legendContainer = null;
let legendToggleButton = null;
let legendVisible = true;
const legendCollapsedValue = mapPanel ? mapPanel.getAttribute('data-legend-collapsed') : null;
const legendDefaultCollapsed = legendCollapsedValue == null
? true
: legendCollapsedValue.trim() !== 'false';
const tableSorters = {
name: { getValue: inst => inst.name ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
name: {
getValue: inst => resolveFederationSiteName(inst),
compare: compareString,
hasValue: hasStringValue,
defaultDirection: 'asc'
},
domain: { getValue: inst => inst.domain ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
contact: { getValue: inst => inst.contactLink ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
version: { getValue: inst => inst.version ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
@@ -329,7 +373,8 @@ export async function initializeFederationPage(options = {}) {
for (const instance of sorted) {
const tr = document.createElement('tr');
const url = buildInstanceUrl(instance.domain);
const nameHtml = instance.name ? escapeHtml(instance.name) : '<em>—</em>';
const displayName = resolveFederationSiteNameForDisplay(instance);
const nameHtml = displayName ? escapeHtml(displayName) : '<em>—</em>';
const domainHtml = url
? `<a href="${escapeHtml(url)}" target="_blank" rel="noopener">${escapeHtml(instance.domain || '')}</a>`
: escapeHtml(instance.domain || '');
@@ -357,6 +402,37 @@ export async function initializeFederationPage(options = {}) {
syncSortIndicators();
};
/**
* Update the pressed state of the legend visibility toggle button.
*
* @returns {void}
*/
const updateLegendToggleState = () => {
if (!legendToggleButton) return;
const baseLabel = legendVisible ? 'Hide map legend' : 'Show map legend';
const baseText = legendVisible ? 'Hide legend' : 'Show legend';
legendToggleButton.setAttribute('aria-pressed', legendVisible ? 'true' : 'false');
legendToggleButton.setAttribute('aria-label', baseLabel);
legendToggleButton.textContent = baseText;
};
/**
* Show or hide the map legend component.
*
* @param {boolean} visible Whether the legend should be displayed.
* @returns {void}
*/
const setLegendVisibility = visible => {
legendVisible = Boolean(visible);
if (legendContainer) {
toggleLegendHiddenClass(legendContainer, !legendVisible);
if (typeof legendContainer.setAttribute === 'function') {
legendContainer.setAttribute('aria-hidden', legendVisible ? 'false' : 'true');
}
}
updateLegendToggleState();
};
/**
* Wire up click and keyboard handlers for sortable headers.
*
@@ -464,7 +540,7 @@ export async function initializeFederationPage(options = {}) {
credentials: 'omit'
});
if (response.ok) {
instances = await response.json();
instances = filterDisplayableFederationInstances(await response.json());
}
} catch (err) {
console.warn('Failed to fetch federation instances', err);
@@ -483,6 +559,15 @@ export async function initializeFederationPage(options = {}) {
const canRenderLegend =
typeof leaflet.control === 'function' && leaflet.DomUtil && typeof leaflet.DomUtil.create === 'function';
if (canRenderLegend) {
const legendMediaQuery = typeof window !== 'undefined' && window.matchMedia
? window.matchMedia('(max-width: 1024px)')
: null;
const initialLegendVisible = resolveLegendVisibility({
defaultCollapsed: legendDefaultCollapsed,
mediaQueryMatches: legendMediaQuery ? legendMediaQuery.matches : false
});
legendVisible = initialLegendVisible;
const legendStops = NODE_COUNT_COLOR_STOPS.map((stop, index) => {
const lower = index === 0 ? 0 : NODE_COUNT_COLOR_STOPS[index - 1].limit;
const upper = stop.limit - 1;
@@ -495,7 +580,11 @@ export async function initializeFederationPage(options = {}) {
const legend = leaflet.control({ position: 'bottomright' });
legend.onAdd = function onAdd() {
const container = leaflet.DomUtil.create('div', 'legend legend--instances');
container.id = 'federationLegend';
container.setAttribute('aria-label', 'Active nodes legend');
container.setAttribute('role', 'region');
container.setAttribute('aria-hidden', initialLegendVisible ? 'false' : 'true');
toggleLegendHiddenClass(container, !initialLegendVisible);
const header = leaflet.DomUtil.create('div', 'legend-header', container);
const title = leaflet.DomUtil.create('span', 'legend-title', header);
title.textContent = 'Active nodes';
@@ -508,9 +597,46 @@ export async function initializeFederationPage(options = {}) {
const label = leaflet.DomUtil.create('span', 'legend-label', item);
label.textContent = stop.label;
});
legendContainer = container;
return container;
};
legend.addTo(map);
const legendToggleControl = leaflet.control({ position: 'bottomright' });
legendToggleControl.onAdd = function onAdd() {
const container = leaflet.DomUtil.create('div', 'leaflet-control legend-toggle');
const button = leaflet.DomUtil.create('button', 'legend-toggle-button', container);
button.type = 'button';
button.setAttribute('aria-controls', 'federationLegend');
button.addEventListener?.('click', event => {
event.preventDefault();
event.stopPropagation();
setLegendVisibility(!legendVisible);
});
legendToggleButton = button;
updateLegendToggleState();
if (leaflet.DomEvent && typeof leaflet.DomEvent.disableClickPropagation === 'function') {
leaflet.DomEvent.disableClickPropagation(container);
}
if (leaflet.DomEvent && typeof leaflet.DomEvent.disableScrollPropagation === 'function') {
leaflet.DomEvent.disableScrollPropagation(container);
}
return container;
};
legendToggleControl.addTo(map);
setLegendVisibility(initialLegendVisible);
if (legendMediaQuery) {
const changeHandler = event => {
if (legendDefaultCollapsed) return;
setLegendVisibility(!event.matches);
};
if (typeof legendMediaQuery.addEventListener === 'function') {
legendMediaQuery.addEventListener('change', changeHandler);
} else if (typeof legendMediaQuery.addListener === 'function') {
legendMediaQuery.addListener(changeHandler);
}
}
}
for (const instance of instances) {
@@ -521,7 +647,8 @@ export async function initializeFederationPage(options = {}) {
bounds.push([lat, lon]);
const name = instance.name || instance.domain || 'Unknown';
const displayName = resolveFederationSiteNameForDisplay(instance);
const name = displayName || instance.domain || 'Unknown';
const url = buildInstanceUrl(instance.domain);
const nodeCountValue = toFiniteNumber(instance.nodesCount ?? instance.nodes_count);
const popupLines = [


@@ -14,6 +14,12 @@
* limitations under the License.
*/
import {
filterDisplayableFederationInstances,
resolveFederationInstanceLabel,
resolveFederationInstanceSortValue
} from './federation-instance-display.js';
/**
* Determine the most suitable label for an instance list entry.
*
@@ -21,17 +27,51 @@
* @returns {string} Preferred display label falling back to the domain.
*/
function resolveInstanceLabel(entry) {
if (!entry || typeof entry !== 'object') {
return '';
return resolveFederationInstanceLabel(entry);
}
/**
* Update federation navigation labels with the instance count.
*
* @param {{
* documentObject?: Document | null,
* count: number
* }} options Configuration for updating the navigation labels.
* @returns {void}
*/
function updateFederationNavCount(options) {
const { documentObject, count } = options;
if (!documentObject || typeof count !== 'number' || !Number.isFinite(count)) {
return;
}
const name = typeof entry.name === 'string' ? entry.name.trim() : '';
if (name.length > 0) {
return name;
const normalizedCount = Math.max(0, Math.floor(count));
const root = typeof documentObject.querySelectorAll === 'function'
? documentObject
: documentObject.body;
if (!root || typeof root.querySelectorAll !== 'function') {
return;
}
const domain = typeof entry.domain === 'string' ? entry.domain.trim() : '';
return domain;
const links = Array.from(root.querySelectorAll('.js-federation-nav'));
links.forEach(link => {
if (!link || typeof link !== 'object') {
return;
}
const dataset = link.dataset || {};
const storedLabel = typeof dataset.federationLabel === 'string' ? dataset.federationLabel.trim() : '';
const fallbackLabel = typeof link.textContent === 'string'
? link.textContent.split('(')[0].trim()
: 'Federation';
const label = storedLabel || fallbackLabel || 'Federation';
dataset.federationLabel = label;
link.textContent = `${label} (${normalizedCount})`;
});
}
/**
@@ -162,21 +202,21 @@ export async function initializeInstanceSelector(options) {
return;
}
if (!Array.isArray(payload)) {
return;
}
const visibleEntries = filterDisplayableFederationInstances(payload);
updateFederationNavCount({ documentObject: doc, count: visibleEntries.length });
const sanitizedDomain = typeof instanceDomain === 'string' ? instanceDomain.trim().toLowerCase() : null;
const sortedEntries = payload
const sortedEntries = visibleEntries
.filter(entry => entry && typeof entry.domain === 'string' && entry.domain.trim() !== '')
.map(entry => ({
domain: entry.domain.trim(),
label: resolveInstanceLabel(entry),
sortLabel: resolveFederationInstanceSortValue(entry),
}))
.sort((a, b) => {
const labelA = a.label || a.domain;
const labelB = b.label || b.domain;
const labelA = a.sortLabel || a.domain;
const labelB = b.sortLabel || b.domain;
return labelA.localeCompare(labelB, undefined, { sensitivity: 'base' });
});
@@ -238,4 +278,4 @@ export async function initializeInstanceSelector(options) {
});
}
export const __test__ = { resolveInstanceLabel };
export const __test__ = { resolveInstanceLabel, updateFederationNavCount };


@@ -44,6 +44,7 @@ import {
formatChatPresetTag
} from './chat-format.js';
import { initializeInstanceSelector } from './instance-selector.js';
import { initializeMobileMenu } from './mobile-menu.js';
import { MESSAGE_LIMIT, normaliseMessageLimit } from './message-limit.js';
import { CHAT_LOG_ENTRY_TYPES, buildChatTabModel, MAX_CHANNEL_INDEX } from './chat-log-tabs.js';
import { renderChatTabs } from './chat-tabs.js';
@@ -68,6 +69,144 @@ import {
roleRenderOrder,
} from './role-helpers.js';
/**
* Compute active-node counts from a local node array.
*
* @param {Array<Object>} nodes Node payloads.
* @param {number} nowSeconds Reference timestamp.
* @returns {{hour: number, day: number, week: number, month: number, sampled: boolean}} Local count snapshot.
*/
export function computeLocalActiveNodeStats(nodes, nowSeconds) {
const safeNodes = Array.isArray(nodes) ? nodes : [];
const referenceNow = Number.isFinite(nowSeconds) ? nowSeconds : Date.now() / 1000;
const windows = [
{ key: 'hour', secs: 3600 },
{ key: 'day', secs: 86_400 },
{ key: 'week', secs: 7 * 86_400 },
{ key: 'month', secs: 30 * 86_400 }
];
const counts = { sampled: true };
for (const window of windows) {
counts[window.key] = safeNodes.filter(node => {
const lastHeard = Number(node?.last_heard);
return Number.isFinite(lastHeard) && referenceNow - lastHeard <= window.secs;
}).length;
}
return counts;
}
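As a quick illustration of the windowed counting above, here is the filter inlined into a standalone helper (`countActive` is a local name for this sketch, not an export):

```javascript
// Inlined copy of the per-window count: a node is active when its last_heard
// parses to a finite number within the window.
function countActive(nodes, nowSeconds, windowSecs) {
  return nodes.filter(node => {
    const lastHeard = Number(node?.last_heard);
    return Number.isFinite(lastHeard) && nowSeconds - lastHeard <= windowSecs;
  }).length;
}

const now = 1_700_000_000;
const nodes = [
  { last_heard: now - 600 },        // 10 minutes ago: hour, day, week, month
  { last_heard: now - 2 * 86_400 }, // 2 days ago: week, month only
  { last_heard: 'n/a' },            // unparsable: never counted
];
console.log(countActive(nodes, now, 3600));       // 1
console.log(countActive(nodes, now, 7 * 86_400)); // 2
```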
/**
* Parse and validate the `/api/stats` payload.
*
* @param {*} payload Candidate JSON object from the stats endpoint.
* @returns {{hour: number, day: number, week: number, month: number, sampled: boolean}|null} Normalized stats or null.
*/
export function normaliseActiveNodeStatsPayload(payload) {
const activeNodes = payload && typeof payload === 'object' ? payload.active_nodes : null;
if (!activeNodes || typeof activeNodes !== 'object') {
return null;
}
const hour = Number(activeNodes.hour);
const day = Number(activeNodes.day);
const week = Number(activeNodes.week);
const month = Number(activeNodes.month);
if (![hour, day, week, month].every(Number.isFinite)) {
return null;
}
return {
hour: Math.max(0, Math.trunc(hour)),
day: Math.max(0, Math.trunc(day)),
week: Math.max(0, Math.trunc(week)),
month: Math.max(0, Math.trunc(month)),
sampled: Boolean(payload.sampled)
};
}
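The all-or-nothing validation above can be seen in a small inlined sketch: every window must parse to a finite number, and accepted values are clamped to non-negative integers. `normaliseStats` here is a local restatement for illustration, not the exported function.

```javascript
// Inlined version of the payload validation: reject partial payloads,
// clamp accepted values to non-negative integers.
function normaliseStats(payload) {
  const active = payload && typeof payload === 'object' ? payload.active_nodes : null;
  if (!active || typeof active !== 'object') return null;
  const values = ['hour', 'day', 'week', 'month'].map(key => Number(active[key]));
  if (!values.every(Number.isFinite)) return null;
  const [hour, day, week, month] = values.map(v => Math.max(0, Math.trunc(v)));
  return { hour, day, week, month, sampled: Boolean(payload.sampled) };
}

console.log(normaliseStats({ active_nodes: { hour: '3', day: 10.9, week: -1, month: 40 } }));
// -> { hour: 3, day: 10, week: 0, month: 40, sampled: false }
console.log(normaliseStats({ active_nodes: { hour: 1 } })); // -> null (missing windows)
```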
const ACTIVE_NODE_STATS_CACHE_TTL_MS = 30_000;
let activeNodeStatsCache = null;
let activeNodeStatsFetchPromise = null;
let activeNodeStatsFetchImpl = null;
/**
* Fetch active-node stats from the dedicated API endpoint with short-lived caching.
*
* @param {Function} fetchImpl Fetch implementation.
* @returns {Promise<{hour: number, day: number, week: number, month: number, sampled: boolean} | null>} Normalized stats or null.
*/
async function fetchRemoteActiveNodeStats(fetchImpl) {
const nowMs = Date.now();
if (activeNodeStatsCache?.fetchImpl === fetchImpl && activeNodeStatsCache.expiresAt > nowMs) {
return activeNodeStatsCache.stats;
}
if (activeNodeStatsFetchPromise && activeNodeStatsFetchImpl === fetchImpl) {
return activeNodeStatsFetchPromise;
}
activeNodeStatsFetchImpl = fetchImpl;
activeNodeStatsFetchPromise = (async () => {
const response = await fetchImpl('/api/stats', { cache: 'no-store' });
if (!response?.ok) {
throw new Error(`stats HTTP ${response?.status ?? 'unknown'}`);
}
const payload = await response.json();
const normalized = normaliseActiveNodeStatsPayload(payload);
if (!normalized) {
throw new Error('invalid stats payload');
}
activeNodeStatsCache = {
fetchImpl,
expiresAt: Date.now() + ACTIVE_NODE_STATS_CACHE_TTL_MS,
stats: normalized
};
return normalized;
})();
try {
return await activeNodeStatsFetchPromise;
} finally {
activeNodeStatsFetchPromise = null;
activeNodeStatsFetchImpl = null;
}
}
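The caching strategy above combines two pieces: a short TTL cache for successful results, and de-duplication of in-flight requests so concurrent callers share one promise while failures are never cached. The pattern in isolation, with a hypothetical loader (`createCachedFetcher` is an illustrative name, not part of the module above):

```javascript
// TTL cache plus in-flight de-duplication: concurrent callers share one
// pending promise; a successful result is reused until it expires; a
// failure leaves the cache empty so the next call retries.
function createCachedFetcher(loader, ttlMs) {
  let cache = null;     // { value, expiresAt }
  let inFlight = null;  // shared pending promise
  return async function get() {
    if (cache && cache.expiresAt > Date.now()) return cache.value;
    if (inFlight) return inFlight;
    inFlight = (async () => {
      const value = await loader();
      cache = { value, expiresAt: Date.now() + ttlMs };
      return value;
    })();
    try {
      return await inFlight;
    } finally {
      inFlight = null;
    }
  };
}

let calls = 0;
const get = createCachedFetcher(async () => ++calls, 30_000);
```

Calling `get()` twice concurrently triggers the loader once; a later call within the TTL is served from the cache without touching the loader again.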
/**
* Fetch active-node stats from the dedicated API endpoint with local fallback.
*
* @param {{
* nodes: Array<Object>,
* nowSeconds: number,
* fetchImpl?: Function
* }} params Fetch parameters.
* @returns {Promise<{hour: number, day: number, week: number, month: number, sampled: boolean}>} Stats snapshot.
*/
export async function fetchActiveNodeStats({ nodes, nowSeconds, fetchImpl = fetch }) {
try {
const normalized = await fetchRemoteActiveNodeStats(fetchImpl);
if (normalized) return normalized;
throw new Error('invalid stats payload');
} catch (error) {
console.debug('Failed to fetch /api/stats; using local active-node counts.', error);
return computeLocalActiveNodeStats(nodes, nowSeconds);
}
}
/**
* Format the dashboard refresh-info sentence for active-node counts.
*
* @param {{channel: string, frequency: string, stats: {hour:number,day:number,week:number,month:number,sampled:boolean}}} params Formatting data.
* @returns {string} User-visible sentence for the dashboard header.
*/
export function formatActiveNodeStatsText({ channel, frequency, stats }) {
const parts = [
`${Number(stats?.hour) || 0}/hour`,
`${Number(stats?.day) || 0}/day`,
`${Number(stats?.week) || 0}/week`,
`${Number(stats?.month) || 0}/month`
];
const suffix = stats?.sampled ? ' (sampled)' : '';
return `${channel} (${frequency}) — active nodes: ${parts.join(', ')}${suffix}.`;
}
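To illustrate the resulting header sentence, here is a standalone copy of the formatter with hypothetical channel and frequency values:

```javascript
// Standalone copy of the formatter above, to show the output shape.
function formatStatsText({ channel, frequency, stats }) {
  const parts = [
    `${Number(stats?.hour) || 0}/hour`,
    `${Number(stats?.day) || 0}/day`,
    `${Number(stats?.week) || 0}/week`,
    `${Number(stats?.month) || 0}/month`,
  ];
  const suffix = stats?.sampled ? ' (sampled)' : '';
  return `${channel} (${frequency}) — active nodes: ${parts.join(', ')}${suffix}.`;
}

const text = formatStatsText({
  channel: 'MediumFast', // hypothetical channel name
  frequency: '868MHz',   // hypothetical frequency label
  stats: { hour: 4, day: 12, week: 30, month: 55, sampled: true },
});
// → "MediumFast (868MHz) — active nodes: 4/hour, 12/day, 30/week, 55/month (sampled)."
```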
/**
* Entry point for the interactive dashboard. Wires up event listeners,
* initializes the map, and triggers the first data refresh cycle.
@@ -119,6 +258,8 @@ export function initializeApp(config) {
const isChatView = bodyClassList ? bodyClassList.contains('view-chat') : false;
const isMapView = bodyClassList ? bodyClassList.contains('view-map') : false;
const mapZoomOverride = Number.isFinite(config.mapZoom) ? Number(config.mapZoom) : null;
initializeMobileMenu({ documentObject: document, windowObject: window });
/**
* Column sorter configuration for the node table.
*
@@ -192,7 +333,7 @@ export function initializeApp(config) {
});
const NODE_LIMIT = 1000;
const TRACE_LIMIT = 200;
const TRACE_MAX_AGE_SECONDS = 28 * 24 * 60 * 60;
const SNAPSHOT_LIMIT = SNAPSHOT_WINDOW;
const CHAT_LIMIT = MESSAGE_LIMIT;
const CHAT_RECENT_WINDOW_SECONDS = 7 * 24 * 60 * 60;
@@ -219,6 +360,7 @@ export function initializeApp(config) {
/** @type {ReturnType<typeof setTimeout>|null} */
let refreshTimer = null;
let refreshInfoRequestId = 0;
/**
* Close any open short-info overlays that do not contain the provided anchor.
@@ -2907,6 +3049,14 @@ export function initializeApp(config) {
* @returns {HTMLElement|null} Chat log element, or null when the entry is suppressed.
*/
function createMessageChatEntry(m) {
let plainText = '';
if (m?.text != null) {
plainText = String(m.text).trim();
}
if (m?.encrypted && plainText === 'GAA=') {
return null;
}
const div = document.createElement('div');
const tsSeconds = resolveTimestampSeconds(
m.rx_time ?? m.rxTime,
@@ -3138,18 +3288,28 @@ export function initializeApp(config) {
}
const getDivider = createDateDividerFactory();
const limitedEntries = entries.slice(Math.max(entries.length - CHAT_LIMIT, 0));
let renderedEntries = 0;
for (const entry of limitedEntries) {
if (!entry || typeof entry.ts !== 'number') {
continue;
}
if (typeof renderEntry !== 'function') {
continue;
}
const node = renderEntry(entry);
if (!node) {
continue;
}
const divider = getDivider(entry.ts);
if (divider) fragment.appendChild(divider);
fragment.appendChild(node);
renderedEntries += 1;
}
if (renderedEntries === 0 && emptyLabel) {
const empty = document.createElement('p');
empty.className = 'chat-empty';
empty.textContent = emptyLabel;
fragment.appendChild(empty);
}
return fragment;
}
@@ -4374,15 +4534,16 @@ export function initializeApp(config) {
if (!refreshInfo || !isDashboardView) {
return;
}
const requestId = ++refreshInfoRequestId;
void fetchActiveNodeStats({ nodes, nowSeconds: nowSec }).then(stats => {
if (requestId !== refreshInfoRequestId) {
return;
}
refreshInfo.textContent = formatActiveNodeStatsText({
channel: config.channel,
frequency: config.frequency,
stats
});
});
}
}

View File

@@ -0,0 +1,271 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
const MOBILE_MENU_MEDIA_QUERY = '(max-width: 900px)';
const FOCUSABLE_SELECTOR = [
'a[href]',
'button:not([disabled])',
'input:not([disabled])',
'select:not([disabled])',
'textarea:not([disabled])',
'[tabindex]:not([tabindex="-1"])'
].join(', ');
/**
* Collect the elements that can receive focus within a container.
*
* @param {?Element} container DOM node hosting focusable descendants.
* @returns {Array<Element>} Ordered list of focusable elements.
*/
function resolveFocusableElements(container) {
if (!container || typeof container.querySelectorAll !== 'function') {
return [];
}
const candidates = Array.from(container.querySelectorAll(FOCUSABLE_SELECTOR));
return candidates.filter(candidate => {
if (!candidate || typeof candidate.getAttribute !== 'function') {
return false;
}
return candidate.getAttribute('aria-hidden') !== 'true';
});
}
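Because the helper only relies on `querySelectorAll` and `getAttribute`, it can be exercised with plain stub objects and no DOM. A sketch with an abbreviated selector (the stub container and elements are hypothetical):

```javascript
// Standalone copy of the focus-collection rules above, driven by stubs:
// take the selector matches, drop anything marked aria-hidden="true".
const SELECTOR = 'a[href], button:not([disabled])'; // abbreviated selector
function focusables(container) {
  if (!container || typeof container.querySelectorAll !== 'function') return [];
  return Array.from(container.querySelectorAll(SELECTOR)).filter(el =>
    el && typeof el.getAttribute === 'function' && el.getAttribute('aria-hidden') !== 'true');
}

// Stub element: only implements getAttribute.
const makeEl = hidden => ({
  getAttribute: name => (name === 'aria-hidden' && hidden ? 'true' : null),
});
// Stub container: pretends two elements matched the selector, one aria-hidden.
const container = { querySelectorAll: () => [makeEl(false), makeEl(true)] };
// focusables(container).length === 1; focusables(null) is []
```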
/**
* Build a menu controller for handling toggle state, focus trapping, and
* responsive layout swapping.
*
* @param {{
* documentObject?: Document,
* windowObject?: Window
* }} [options]
* @returns {{
* initialize: () => void,
* openMenu: () => void,
* closeMenu: () => void,
* syncLayout: () => void
* }}
*/
function createMobileMenuController(options = {}) {
const documentObject = options.documentObject || document;
const windowObject = options.windowObject || window;
const menuToggle = documentObject.getElementById('mobileMenuToggle');
const menu = documentObject.getElementById('mobileMenu');
const menuPanel = menu ? menu.querySelector('.mobile-menu__panel') : null;
const closeTriggers = menu ? Array.from(menu.querySelectorAll('[data-mobile-menu-close]')) : [];
const menuLinks = menu ? Array.from(menu.querySelectorAll('a')) : [];
const body = documentObject.body;
const mediaQuery = windowObject.matchMedia
? windowObject.matchMedia(MOBILE_MENU_MEDIA_QUERY)
: null;
let isOpen = false;
let lastActive = null;
/**
* Toggle the ``aria-expanded`` state on the menu trigger.
*
* @param {boolean} expanded Whether the menu is open.
* @returns {void}
*/
function setExpandedState(expanded) {
if (!menuToggle || typeof menuToggle.setAttribute !== 'function') {
return;
}
menuToggle.setAttribute('aria-expanded', expanded ? 'true' : 'false');
}
/**
* Synchronize layout with the active media query. Currently a no-op, kept as
* a stable hook for the toggle handler and media-query listeners.
*
* @returns {void}
*/
function syncLayout() {
return;
}
/**
* Open the slide-in menu and trap focus within the panel.
*
* @returns {void}
*/
function openMenu() {
if (!menu || !menuToggle || !menuPanel) {
return;
}
syncLayout();
menu.hidden = false;
menu.classList.add('is-open');
if (body && body.classList) {
body.classList.add('menu-open');
}
setExpandedState(true);
isOpen = true;
lastActive = documentObject.activeElement || null;
const focusables = resolveFocusableElements(menuPanel);
const focusTarget = focusables[0] || menuPanel;
if (focusTarget && typeof focusTarget.focus === 'function') {
focusTarget.focus();
}
}
/**
* Close the menu and restore focus to the trigger.
*
* @returns {void}
*/
function closeMenu() {
if (!menu || !menuToggle) {
return;
}
menu.classList.remove('is-open');
menu.hidden = true;
if (body && body.classList) {
body.classList.remove('menu-open');
}
setExpandedState(false);
isOpen = false;
if (lastActive && typeof lastActive.focus === 'function') {
lastActive.focus();
}
}
/**
* Toggle open or closed based on the trigger interaction.
*
* @param {Event} event Click event originating from the trigger.
* @returns {void}
*/
function handleToggleClick(event) {
if (event && typeof event.preventDefault === 'function') {
event.preventDefault();
}
if (isOpen) {
closeMenu();
} else {
openMenu();
}
}
/**
* Trap tab focus within the menu panel while open.
*
* @param {KeyboardEvent} event Keydown event from the panel.
* @returns {void}
*/
function handleKeydown(event) {
if (!isOpen || !event) {
return;
}
if (event.key === 'Escape') {
event.preventDefault();
closeMenu();
return;
}
if (event.key !== 'Tab') {
return;
}
const focusables = resolveFocusableElements(menuPanel);
if (!focusables.length) {
return;
}
const first = focusables[0];
const last = focusables[focusables.length - 1];
const active = documentObject.activeElement;
if (event.shiftKey && active === first) {
event.preventDefault();
last.focus();
} else if (!event.shiftKey && active === last) {
event.preventDefault();
first.focus();
}
}
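The Tab wrap-around in the handler above reduces to a pure index computation, which can be sketched outside the DOM (`trapTarget` is an illustrative name, not a function in this module):

```javascript
// Given the focusable count, the active element's index, and the Shift
// state, return the index to force-focus when tabbing would escape the
// panel, or null when default browser tabbing is fine. Mirrors the
// wrap-around branches of handleKeydown above.
function trapTarget(count, activeIndex, shiftKey) {
  if (count === 0) return null;
  if (shiftKey && activeIndex === 0) return count - 1; // Shift+Tab on first → last
  if (!shiftKey && activeIndex === count - 1) return 0; // Tab on last → first
  return null; // stay in the natural tab order
}
// trapTarget(3, 0, true) === 2
// trapTarget(3, 2, false) === 0
// trapTarget(3, 1, false) === null
```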
/**
* Close the menu when navigation state changes.
*
* @returns {void}
*/
function handleRouteChange() {
if (isOpen) {
closeMenu();
}
}
/**
* Attach event listeners and sync initial layout.
*
* @returns {void}
*/
function initialize() {
if (!menuToggle || !menu) {
return;
}
menuToggle.addEventListener('click', handleToggleClick);
closeTriggers.forEach(trigger => {
trigger.addEventListener('click', closeMenu);
});
menuLinks.forEach(link => {
link.addEventListener('click', closeMenu);
});
if (menuPanel && typeof menuPanel.addEventListener === 'function') {
menuPanel.addEventListener('keydown', handleKeydown);
}
if (mediaQuery) {
if (typeof mediaQuery.addEventListener === 'function') {
mediaQuery.addEventListener('change', syncLayout);
} else if (typeof mediaQuery.addListener === 'function') {
mediaQuery.addListener(syncLayout);
}
}
if (windowObject && typeof windowObject.addEventListener === 'function') {
windowObject.addEventListener('hashchange', handleRouteChange);
windowObject.addEventListener('popstate', handleRouteChange);
}
syncLayout();
setExpandedState(false);
}
return {
initialize,
openMenu,
closeMenu,
syncLayout,
};
}
/**
* Initialize the mobile menu using the live DOM environment.
*
* @param {{
* documentObject?: Document,
* windowObject?: Window
* }} [options]
* @returns {{
* initialize: () => void,
* openMenu: () => void,
* closeMenu: () => void,
* syncLayout: () => void
* }}
*/
export function initializeMobileMenu(options = {}) {
const controller = createMobileMenuController(options);
controller.initialize();
return controller;
}
export const __test__ = {
createMobileMenuController,
resolveFocusableElements,
};

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
import { fetchNodeDetailHtml } from './node-page.js';
/**
* Escape a string for safe HTML injection.
@@ -68,9 +68,6 @@ function hasValidReference(reference) {
* fetchImpl?: Function,
* refreshImpl?: Function,
* renderShortHtml?: Function,
* mountCharts?: Function,
* uPlotImpl?: Function,
* loadUPlot?: Function,
* privateMode?: boolean,
* logger?: Console
* }} [options] Behaviour overrides.
@@ -104,9 +101,6 @@ export function createNodeDetailOverlayManager(options = {}) {
const fetchImpl = options.fetchImpl;
const refreshImpl = options.refreshImpl;
const renderShortHtml = options.renderShortHtml;
const mountCharts = typeof options.mountCharts === 'function' ? options.mountCharts : mountTelemetryChartsWithRetry;
const uPlotImpl = options.uPlotImpl;
const loadUPlot = options.loadUPlot;
let requestToken = 0;
let lastTrigger = null;
@@ -204,21 +198,16 @@ export function createNodeDetailOverlayManager(options = {}) {
}
const currentToken = ++requestToken;
try {
const html = await fetchDetail(reference, {
fetchImpl,
refreshImpl,
renderShortHtml,
privateMode,
});
if (currentToken !== requestToken) {
return;
}
content.innerHTML = html;
if (typeof closeButton.focus === 'function') {
closeButton.focus();
}

View File

@@ -124,15 +124,6 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
ticks: 4,
color: '#2ca25f',
},
{
id: 'channelSecondary',
position: 'right',
label: 'Utilization (%)',
min: 0,
max: 100,
ticks: 4,
color: '#2ca25f',
},
],
series: [
{
@@ -146,7 +137,7 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
},
{
id: 'air',
axis: 'channel',
color: '#99d8c9',
label: 'Air util tx',
legend: 'Air util TX (%)',
@@ -171,13 +162,13 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
},
{
id: 'humidity',
position: 'left',
label: 'Humidity (%)',
min: 0,
max: 100,
ticks: 4,
color: '#91bfdb',
visible: false,
},
],
series: [
@@ -866,6 +857,67 @@ function createChartDimensions(spec) {
};
}
/**
* Compute the horizontal drawing position for an axis descriptor.
*
* @param {string} position Axis position keyword.
* @param {Object} dims Chart dimensions.
* @returns {number} X coordinate for the axis baseline.
*/
function resolveAxisX(position, dims) {
switch (position) {
case 'leftSecondary':
return dims.margin.left - 32;
case 'right':
return dims.width - dims.margin.right;
case 'rightSecondary':
return dims.width - dims.margin.right + 32;
case 'left':
default:
return dims.margin.left;
}
}
/**
* Compute the X coordinate for a timestamp constrained to the rolling window.
*
* @param {number} timestamp Timestamp in milliseconds.
* @param {number} domainStart Start of the window in milliseconds.
* @param {number} domainEnd End of the window in milliseconds.
* @param {Object} dims Chart dimensions.
* @returns {number} X coordinate inside the SVG viewport.
*/
function scaleTimestamp(timestamp, domainStart, domainEnd, dims) {
const safeStart = Math.min(domainStart, domainEnd);
const safeEnd = Math.max(domainStart, domainEnd);
const span = Math.max(1, safeEnd - safeStart);
const clamped = clamp(timestamp, safeStart, safeEnd);
const ratio = (clamped - safeStart) / span;
return dims.margin.left + ratio * dims.innerWidth;
}
/**
* Convert a value bound to a specific axis into a Y coordinate.
*
* @param {number} value Series value.
* @param {Object} axis Axis descriptor.
* @param {Object} dims Chart dimensions.
* @returns {number} Y coordinate.
*/
function scaleValueToAxis(value, axis, dims) {
if (!axis) return dims.chartBottom;
if (axis.scale === 'log') {
const minLog = Math.log10(axis.min);
const maxLog = Math.log10(axis.max);
const safe = clamp(value, axis.min, axis.max);
const ratio = (Math.log10(safe) - minLog) / (maxLog - minLog);
return dims.chartBottom - ratio * dims.innerHeight;
}
const safe = clamp(value, axis.min, axis.max);
const ratio = (safe - axis.min) / (axis.max - axis.min || 1);
return dims.chartBottom - ratio * dims.innerHeight;
}
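Worked numbers make the two scaling helpers above easier to verify. A standalone sketch of the linear cases with hypothetical chart dimensions (the log branch follows the same shape with `Math.log10` applied first):

```javascript
// Standalone copies of the linear scaling maths above, with made-up dims.
const dims = { margin: { left: 40, right: 20 }, innerWidth: 200, innerHeight: 100, chartBottom: 120 };
const clamp = (v, lo, hi) => Math.min(Math.max(v, lo), hi);

// Timestamp → X: clamp into the window, then map the ratio onto innerWidth.
function xFor(ts, start, end) {
  const span = Math.max(1, end - start);
  return dims.margin.left + ((clamp(ts, start, end) - start) / span) * dims.innerWidth;
}
// Value → Y: clamp into the axis range, then measure up from chartBottom.
function yFor(value, axis) {
  const safe = clamp(value, axis.min, axis.max);
  const ratio = (safe - axis.min) / (axis.max - axis.min || 1);
  return dims.chartBottom - ratio * dims.innerHeight;
}

// Midpoint of the time window lands in the middle of the plot area:
// xFor(50, 0, 100) === 140  (40 + 0.5 * 200)
// A value at the axis maximum draws at the top of the plot:
// yFor(100, { min: 0, max: 100 }) === 20  (120 - 100)
// Out-of-range values are clamped to the axis, not drawn outside it:
// yFor(-5, { min: 0, max: 100 }) === 120
```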
/**
* Collect candidate containers that may hold telemetry values for a snapshot.
*
@@ -982,15 +1034,129 @@ function resolveAxisMax(axis, seriesEntries) {
}
/**
* Render a telemetry series as circles plus an optional translucent guide line.
*
* @param {Object} seriesConfig Series metadata.
* @param {Array<{timestamp: number, value: number}>} points Series points.
* @param {Object} axis Axis descriptor.
* @param {Object} dims Chart dimensions.
* @param {number} domainStart Window start timestamp.
* @param {number} domainEnd Window end timestamp.
* @param {{lineReducer?: Function}} [options] Optional reducer that thins the points used for the guide line.
* @returns {string} SVG markup for the series.
*/
function renderTelemetrySeries(seriesConfig, points, axis, dims, domainStart, domainEnd, { lineReducer } = {}) {
if (!Array.isArray(points) || points.length === 0) {
return '';
}
const convertPoint = point => {
const cx = scaleTimestamp(point.timestamp, domainStart, domainEnd, dims);
const cy = scaleValueToAxis(point.value, axis, dims);
return { cx, cy, value: point.value };
};
const circleEntries = points.map(point => {
const coords = convertPoint(point);
const tooltip = formatSeriesPointValue(seriesConfig, point.value);
const titleMarkup = tooltip ? `<title>${escapeHtml(tooltip)}</title>` : '';
return `<circle class="node-detail__chart-point" cx="${coords.cx.toFixed(2)}" cy="${coords.cy.toFixed(2)}" r="3.2" fill="${seriesConfig.color}" aria-hidden="true">${titleMarkup}</circle>`;
});
const lineSource = typeof lineReducer === 'function' ? lineReducer(points) : points;
const linePoints = Array.isArray(lineSource) && lineSource.length > 0 ? lineSource : points;
const coordinates = linePoints.map(convertPoint);
let line = '';
if (coordinates.length > 1) {
const path = coordinates
.map((coord, idx) => `${idx === 0 ? 'M' : 'L'}${coord.cx.toFixed(2)} ${coord.cy.toFixed(2)}`)
.join(' ');
line = `<path class="node-detail__chart-trend" d="${path}" fill="none" stroke="${hexToRgba(seriesConfig.color, 0.5)}" stroke-width="1.5" aria-hidden="true"></path>`;
}
return `${line}${circleEntries.join('')}`;
}
/**
* Render a vertical axis when visible.
*
* @param {Object} axis Axis descriptor.
* @param {Object} dims Chart dimensions.
* @returns {string} SVG markup for the axis or an empty string.
*/
function renderYAxis(axis, dims) {
if (!axis || axis.visible === false) {
return '';
}
const x = resolveAxisX(axis.position, dims);
const ticks = axis.scale === 'log'
? buildLogTicks(axis.min, axis.max)
: buildLinearTicks(axis.min, axis.max, axis.ticks);
const tickElements = ticks
.map(value => {
const y = scaleValueToAxis(value, axis, dims);
const tickLength = axis.position === 'left' || axis.position === 'leftSecondary' ? -4 : 4;
const textAnchor = axis.position === 'left' || axis.position === 'leftSecondary' ? 'end' : 'start';
const textOffset = axis.position === 'left' || axis.position === 'leftSecondary' ? -6 : 6;
return `
<g class="node-detail__chart-tick" aria-hidden="true">
<line x1="${x}" y1="${y.toFixed(2)}" x2="${(x + tickLength).toFixed(2)}" y2="${y.toFixed(2)}"></line>
<text x="${(x + textOffset).toFixed(2)}" y="${(y + 3).toFixed(2)}" text-anchor="${textAnchor}" dominant-baseline="middle">${escapeHtml(formatAxisTick(value, axis))}</text>
</g>
`;
})
.join('');
const labelPadding = axis.position === 'left' || axis.position === 'leftSecondary' ? -56 : 56;
const labelX = x + labelPadding;
const labelY = (dims.chartTop + dims.chartBottom) / 2;
const labelTransform = `rotate(-90 ${labelX.toFixed(2)} ${labelY.toFixed(2)})`;
return `
<g class="node-detail__chart-axis node-detail__chart-axis--y" aria-hidden="true">
<line x1="${x}" y1="${dims.chartTop}" x2="${x}" y2="${dims.chartBottom}"></line>
${tickElements}
<text class="node-detail__chart-axis-label" x="${labelX.toFixed(2)}" y="${labelY.toFixed(2)}" text-anchor="middle" dominant-baseline="middle" transform="${labelTransform}">${escapeHtml(axis.label)}</text>
</g>
`;
}
/**
* Render the horizontal floating seven-day axis with midnight ticks.
*
* @param {Object} dims Chart dimensions.
* @param {number} domainStart Window start timestamp.
* @param {number} domainEnd Window end timestamp.
* @param {Array<number>} tickTimestamps Midnight tick timestamps.
* @param {{labelFormatter?: Function}} [options] Optional tick label formatter; defaults to formatCompactDate.
* @returns {string} SVG markup for the X axis.
*/
function renderXAxis(dims, domainStart, domainEnd, tickTimestamps, { labelFormatter = formatCompactDate } = {}) {
const y = dims.chartBottom;
const ticks = tickTimestamps
.map(ts => {
const x = scaleTimestamp(ts, domainStart, domainEnd, dims);
const labelY = y + 18;
const xStr = x.toFixed(2);
const yStr = labelY.toFixed(2);
const label = labelFormatter(ts);
return `
<g class="node-detail__chart-tick" aria-hidden="true">
<line class="node-detail__chart-grid-line" x1="${xStr}" y1="${dims.chartTop}" x2="${xStr}" y2="${dims.chartBottom}"></line>
<text x="${xStr}" y="${yStr}" text-anchor="end" dominant-baseline="central" transform="rotate(-90 ${xStr} ${yStr})">${escapeHtml(label)}</text>
</g>
`;
})
.join('');
return `
<g class="node-detail__chart-axis node-detail__chart-axis--x" aria-hidden="true">
<line x1="${dims.margin.left}" y1="${y}" x2="${dims.width - dims.margin.right}" y2="${y}"></line>
${ticks}
</g>
`;
}
/**
* Render a single telemetry chart defined by ``spec``.
*
* @param {Object} spec Chart specification.
* @param {Array<{timestamp: number, snapshot: Object}>} entries Telemetry entries.
* @param {number} nowMs Reference timestamp.
* @param {Object} chartOptions Rendering overrides.
* @returns {string} Rendered chart markup or an empty string.
*/
function renderTelemetryChart(spec, entries, nowMs, chartOptions = {}) {
const windowMs = Number.isFinite(chartOptions.windowMs) && chartOptions.windowMs > 0 ? chartOptions.windowMs : TELEMETRY_WINDOW_MS;
const timeRangeLabel = stringOrNull(chartOptions.timeRangeLabel) ?? 'Last 7 days';
const domainEnd = nowMs;
@@ -1004,7 +1170,7 @@ function buildTelemetryChartModel(spec, entries, nowMs, chartOptions = {}) {
})
.filter(entry => entry != null);
if (seriesEntries.length === 0) {
return '';
}
const adjustedAxes = spec.axes.map(axis => {
const resolvedMax = resolveAxisMax(axis, seriesEntries);
@@ -1022,33 +1188,22 @@ function buildTelemetryChartModel(spec, entries, nowMs, chartOptions = {}) {
})
.filter(entry => entry != null);
if (plottedSeries.length === 0) {
return '';
}
const axesMarkup = adjustedAxes.map(axis => renderYAxis(axis, dims)).join('');
const tickBuilder = typeof chartOptions.xAxisTickBuilder === 'function' ? chartOptions.xAxisTickBuilder : buildMidnightTicks;
const tickFormatter = typeof chartOptions.xAxisTickFormatter === 'function' ? chartOptions.xAxisTickFormatter : formatCompactDate;
const ticks = tickBuilder(nowMs, windowMs);
const xAxisMarkup = renderXAxis(dims, domainStart, domainEnd, ticks, { labelFormatter: tickFormatter });
const seriesMarkup = plottedSeries
.map(series =>
renderTelemetrySeries(series.config, series.points, series.axis, dims, domainStart, domainEnd, {
lineReducer: chartOptions.lineReducer,
}),
)
.join('');
const legendItems = plottedSeries
.map(series => {
const legendLabel = stringOrNull(series.config.legend) ?? series.config.label;
return `
@@ -1062,428 +1217,22 @@ function renderTelemetryChartMarkup(model) {
const legendMarkup = legendItems
? `<div class="node-detail__chart-legend" aria-hidden="true">${legendItems}</div>`
: '';
return `
<figure class="node-detail__chart">
<figcaption class="node-detail__chart-header">
<h4>${escapeHtml(spec.title)}</h4>
<span>${escapeHtml(timeRangeLabel)}</span>
</figcaption>
<svg viewBox="0 0 ${dims.width} ${dims.height}" preserveAspectRatio="xMidYMid meet" role="img" aria-label="${escapeHtml(`${spec.title} over last seven days`)}">
${axesMarkup}
${xAxisMarkup}
${seriesMarkup}
</svg>
${legendMarkup}
</figure>
`;
}
/**
* Build a sorted timestamp index shared across series entries.
*
* @param {Array<Object>} seriesEntries Plotted series entries.
* @param {Function|null} lineReducer Optional line reducer.
* @returns {{timestamps: Array<number>, indexByTimestamp: Map<number, number>}} Timestamp index.
*/
function buildChartTimestampIndex(seriesEntries, lineReducer) {
const timestampSet = new Set();
for (const entry of seriesEntries) {
if (!entry || !Array.isArray(entry.points)) continue;
entry.points.forEach(point => {
if (point && Number.isFinite(point.timestamp)) {
timestampSet.add(point.timestamp);
}
});
if (typeof lineReducer === 'function') {
const reduced = lineReducer(entry.points);
if (Array.isArray(reduced)) {
reduced.forEach(point => {
if (point && Number.isFinite(point.timestamp)) {
timestampSet.add(point.timestamp);
}
});
}
}
}
const timestamps = Array.from(timestampSet).sort((a, b) => a - b);
const indexByTimestamp = new Map(timestamps.map((ts, idx) => [ts, idx]));
return { timestamps, indexByTimestamp };
}
/**
* Convert a list of points into an aligned values array.
*
* @param {Array<{timestamp: number, value: number}>} points Series points.
* @param {Map<number, number>} indexByTimestamp Timestamp index.
* @param {number} length Length of the output array.
* @returns {Array<number|null>} Values aligned to timestamps.
*/
function mapSeriesValues(points, indexByTimestamp, length) {
const values = Array.from({ length }, () => null);
if (!Array.isArray(points)) {
return values;
}
for (const point of points) {
if (!point || !Number.isFinite(point.timestamp)) continue;
const idx = indexByTimestamp.get(point.timestamp);
if (idx == null) continue;
values[idx] = Number.isFinite(point.value) ? point.value : null;
}
return values;
}
/**
* Build uPlot series and data arrays for a chart model.
*
* @param {Object} model Chart model.
* @returns {{data: Array<Array<number|null>>, series: Array<Object>}} uPlot data and series config.
*/
function buildTelemetryChartData(model) {
const { timestamps, indexByTimestamp } = buildChartTimestampIndex(model.seriesEntries, model.lineReducer);
const data = [timestamps];
const series = [{ label: 'Time' }];
model.seriesEntries.forEach(entry => {
const baseConfig = {
label: entry.config.label,
scale: entry.axis.id,
};
if (model.lineReducer) {
const reducedPoints = model.lineReducer(entry.points);
const linePoints = Array.isArray(reducedPoints) && reducedPoints.length > 0 ? reducedPoints : entry.points;
const lineValues = mapSeriesValues(linePoints, indexByTimestamp, timestamps.length);
series.push({
...baseConfig,
stroke: hexToRgba(entry.config.color, 0.5),
width: 1.5,
points: { show: false },
});
data.push(lineValues);
const pointValues = mapSeriesValues(entry.points, indexByTimestamp, timestamps.length);
series.push({
...baseConfig,
stroke: entry.config.color,
width: 0,
points: { show: true, size: 6, width: 1 },
});
data.push(pointValues);
} else {
const values = mapSeriesValues(entry.points, indexByTimestamp, timestamps.length);
series.push({
...baseConfig,
stroke: entry.config.color,
width: 1.5,
points: { show: true, size: 6, width: 1 },
});
data.push(values);
}
});
return { data, series };
}
/**
* Build uPlot chart configuration and data for a telemetry chart.
*
* @param {Object} model Chart model.
* @returns {{options: Object, data: Array<Array<number|null>>}} uPlot config and data.
*/
function buildUPlotChartConfig(model, { width, height, axisColor, gridColor } = {}) {
const { data, series } = buildTelemetryChartData(model);
const fallbackWidth = Math.round(model.dims.width * 1.8);
const resolvedWidth = Number.isFinite(width) && width > 0 ? width : fallbackWidth;
const resolvedHeight = Number.isFinite(height) && height > 0 ? height : model.dims.height;
const axisStroke = stringOrNull(axisColor) ?? '#5c6773';
const gridStroke = stringOrNull(gridColor) ?? 'rgba(12, 15, 18, 0.08)';
const axes = [
{
scale: 'x',
side: 2,
stroke: axisStroke,
grid: { show: true, stroke: gridStroke },
splits: () => model.ticks,
values: (u, splits) => splits.map(value => model.tickFormatter(value)),
},
];
const scales = {
x: {
time: true,
range: () => [model.domainStart, model.domainEnd],
},
};
model.axes.forEach(axis => {
const ticks = axis.scale === 'log'
? buildLogTicks(axis.min, axis.max)
: buildLinearTicks(axis.min, axis.max, axis.ticks);
const side = axis.position === 'right' || axis.position === 'rightSecondary' ? 1 : 3;
axes.push({
scale: axis.id,
side,
show: axis.visible !== false,
stroke: axisStroke,
grid: { show: false },
label: axis.label,
splits: () => ticks,
values: (u, splits) => splits.map(value => formatAxisTick(value, axis)),
});
scales[axis.id] = {
distr: axis.scale === 'log' ? 3 : 1,
log: axis.scale === 'log' ? 10 : undefined,
range: () => [axis.min, axis.max],
};
});
return {
options: {
width: resolvedWidth,
height: resolvedHeight,
padding: [
model.dims.margin.top,
model.dims.margin.right,
model.dims.margin.bottom,
model.dims.margin.left,
],
legend: { show: false },
series,
axes,
scales,
},
data,
};
}
/**
* Instantiate uPlot charts for the provided chart models.
*
* @param {Array<Object>} chartModels Chart models to render.
* @param {{root?: ParentNode, uPlotImpl?: Function}} [options] Rendering options.
* @returns {Array<Object>} Instantiated uPlot charts.
*/
export function mountTelemetryCharts(chartModels, { root, uPlotImpl } = {}) {
if (!Array.isArray(chartModels) || chartModels.length === 0) {
return [];
}
const host = root ?? globalThis.document;
if (!host || typeof host.querySelector !== 'function') {
return [];
}
const uPlotCtor = typeof uPlotImpl === 'function' ? uPlotImpl : globalThis.uPlot;
if (typeof uPlotCtor !== 'function') {
console.warn('uPlot is unavailable; telemetry charts will not render.');
return [];
}
const instances = [];
const colorRoot = host?.ownerDocument?.body ?? host?.body ?? globalThis.document?.body ?? null;
const axisColor = colorRoot && typeof globalThis.getComputedStyle === 'function'
? globalThis.getComputedStyle(colorRoot).getPropertyValue('--muted').trim()
: null;
const gridColor = colorRoot && typeof globalThis.getComputedStyle === 'function'
? globalThis.getComputedStyle(colorRoot).getPropertyValue('--line').trim()
: null;
chartModels.forEach(model => {
const container = host.querySelector(`[data-telemetry-chart-id="${model.id}"]`);
if (!container) return;
const plotRoot = container.querySelector('[data-telemetry-plot]');
if (!plotRoot) return;
plotRoot.innerHTML = '';
const plotWidth = plotRoot.clientWidth || plotRoot.getBoundingClientRect?.().width;
const plotHeight = plotRoot.clientHeight || plotRoot.getBoundingClientRect?.().height;
const { options, data } = buildUPlotChartConfig(model, {
width: plotWidth ? Math.round(plotWidth) : undefined,
height: plotHeight ? Math.round(plotHeight) : undefined,
axisColor: axisColor || undefined,
gridColor: gridColor || undefined,
});
const instance = new uPlotCtor(options, data, plotRoot);
instance.__potatoMeshRoot = plotRoot;
instances.push(instance);
});
registerTelemetryChartResize(instances);
return instances;
}
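Containers are matched purely by the `data-telemetry-chart-id` attribute, so models without a corresponding element are skipped silently rather than throwing. A sketch of that lookup against a stub host (all stub names here are hypothetical test doubles, not part of the diff):

```javascript
// Stub host: resolves attribute selectors against a fixed container list,
// standing in for document.querySelector during tests.
function createStubHost(ids) {
  const containers = new Map(ids.map(id => [`[data-telemetry-chart-id="${id}"]`, { id }]));
  return { querySelector: selector => containers.get(selector) ?? null };
}

const host = createStubHost(['battery', 'voltage']);
const models = [{ id: 'battery' }, { id: 'airtime' }];
const mountedContainers = models
  .map(model => host.querySelector(`[data-telemetry-chart-id="${model.id}"]`))
  .filter(container => container != null);
// Only 'battery' has a container; 'airtime' is skipped without an error.
```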
const telemetryResizeRegistry = new Set();
const telemetryResizeObservers = new WeakMap();
let telemetryResizeListenerAttached = false;
let telemetryResizeDebounceId = null;
const TELEMETRY_RESIZE_DEBOUNCE_MS = 120;
function resizeUPlotInstance(instance) {
if (!instance || typeof instance.setSize !== 'function') {
return;
}
const root = instance.__potatoMeshRoot ?? instance.root ?? null;
if (!root) return;
const rect = typeof root.getBoundingClientRect === 'function' ? root.getBoundingClientRect() : null;
const width = Number.isFinite(root.clientWidth) ? root.clientWidth : rect?.width;
const height = Number.isFinite(root.clientHeight) ? root.clientHeight : rect?.height;
if (!width || !height) return;
instance.setSize({ width: Math.round(width), height: Math.round(height) });
}
function registerTelemetryChartResize(instances) {
if (!Array.isArray(instances) || instances.length === 0) {
return;
}
const scheduleResize = () => {
if (telemetryResizeDebounceId != null) {
clearTimeout(telemetryResizeDebounceId);
}
telemetryResizeDebounceId = setTimeout(() => {
telemetryResizeDebounceId = null;
telemetryResizeRegistry.forEach(instance => resizeUPlotInstance(instance));
}, TELEMETRY_RESIZE_DEBOUNCE_MS);
};
instances.forEach(instance => {
telemetryResizeRegistry.add(instance);
resizeUPlotInstance(instance);
if (typeof globalThis.ResizeObserver === 'function') {
if (telemetryResizeObservers.has(instance)) return;
const observer = new globalThis.ResizeObserver(scheduleResize);
telemetryResizeObservers.set(instance, observer);
const root = instance.__potatoMeshRoot ?? instance.root ?? null;
if (root && typeof observer.observe === 'function') {
observer.observe(root);
}
}
});
if (!telemetryResizeListenerAttached && typeof globalThis.addEventListener === 'function') {
globalThis.addEventListener('resize', () => {
scheduleResize();
});
telemetryResizeListenerAttached = true;
}
}
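The resize path above coalesces bursts of resize events through a trailing-edge timeout keyed on `TELEMETRY_RESIZE_DEBOUNCE_MS`. The same pattern, reduced to a standalone sketch (the `debounce` helper is hypothetical, not part of the diff):

```javascript
// Trailing-edge debounce: collapse a burst of calls into a single
// invocation after delayMs of quiet, as the resize scheduler above does.
function debounce(fn, delayMs) {
  let timerId = null;
  return (...args) => {
    if (timerId != null) clearTimeout(timerId);
    timerId = setTimeout(() => {
      timerId = null;
      fn(...args);
    }, delayMs);
  };
}

// Rapid calls schedule only one trailing run; only the last args survive.
const calls = [];
const record = debounce(value => calls.push(value), 20);
record(1);
record(2);
record(3);
```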
function defaultLoadUPlot({ documentRef, onLoad }) {
if (!documentRef || typeof documentRef.querySelector !== 'function') {
return false;
}
const existing = documentRef.querySelector('script[data-uplot-loader="true"]');
if (existing) {
if (existing.dataset.loaded === 'true' && typeof onLoad === 'function') {
onLoad();
} else if (typeof existing.addEventListener === 'function' && typeof onLoad === 'function') {
existing.addEventListener('load', onLoad, { once: true });
}
return true;
}
if (typeof documentRef.createElement !== 'function') {
return false;
}
const script = documentRef.createElement('script');
script.src = '/assets/vendor/uplot/uPlot.iife.min.js';
script.defer = true;
script.dataset.uplotLoader = 'true';
if (typeof script.addEventListener === 'function') {
script.addEventListener('load', () => {
script.dataset.loaded = 'true';
if (typeof onLoad === 'function') {
onLoad();
}
});
}
const head = documentRef.head ?? documentRef.body;
if (head && typeof head.appendChild === 'function') {
head.appendChild(script);
return true;
}
return false;
}
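The loader above is single-flight: the first caller injects the script tag, later callers either attach to its pending `load` event or fire immediately once `dataset.loaded` is set. That idempotence can be exercised against a stub document (every stub name below is a hypothetical test double):

```javascript
// Stub just enough of the DOM for a single-flight script loader.
function createStubDocument() {
  const scripts = [];
  return {
    scripts,
    querySelector: () => scripts[0] ?? null,
    createElement: () => {
      const listeners = [];
      return {
        dataset: {},
        addEventListener: (type, cb) => listeners.push(cb),
        fireLoad: () => listeners.forEach(cb => cb()),
      };
    },
    head: { appendChild: script => scripts.push(script) },
  };
}

// Same shape as defaultLoadUPlot: reuse an existing tag, else inject one.
function loadOnce(documentRef, onLoad) {
  const existing = documentRef.querySelector();
  if (existing) {
    if (existing.dataset.loaded === 'true') onLoad();
    else existing.addEventListener('load', onLoad);
    return;
  }
  const script = documentRef.createElement();
  script.dataset.uplotLoader = 'true';
  script.addEventListener('load', () => {
    script.dataset.loaded = 'true';
    onLoad();
  });
  documentRef.head.appendChild(script);
}

const doc = createStubDocument();
const loads = [];
loadOnce(doc, () => loads.push('first'));
loadOnce(doc, () => loads.push('second')); // attaches to the pending tag
doc.scripts[0].fireLoad(); // simulate the script finishing
```

After the simulated `load`, a third caller sees `dataset.loaded === 'true'` and runs synchronously; only one tag is ever appended.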
/**
* Mount telemetry charts, retrying briefly if uPlot has not loaded yet.
*
* @param {Array<Object>} chartModels Chart models to render.
* @param {{root?: ParentNode, uPlotImpl?: Function, loadUPlot?: Function}} [options] Rendering options.
* @returns {Array<Object>} Instantiated uPlot charts.
*/
export function mountTelemetryChartsWithRetry(chartModels, { root, uPlotImpl, loadUPlot } = {}) {
const instances = mountTelemetryCharts(chartModels, { root, uPlotImpl });
if (instances.length > 0 || typeof uPlotImpl === 'function') {
return instances;
}
const host = root ?? globalThis.document;
if (!host || typeof host.querySelector !== 'function') {
return instances;
}
let mounted = false;
let attempts = 0;
const maxAttempts = 10;
const retryDelayMs = 50;
const retry = () => {
if (mounted) return;
attempts += 1;
const next = mountTelemetryCharts(chartModels, { root, uPlotImpl });
if (next.length > 0) {
mounted = true;
return;
}
if (attempts >= maxAttempts) {
return;
}
setTimeout(retry, retryDelayMs);
};
const loadFn = typeof loadUPlot === 'function' ? loadUPlot : defaultLoadUPlot;
loadFn({
documentRef: host.ownerDocument ?? globalThis.document,
onLoad: () => {
const next = mountTelemetryCharts(chartModels, { root, uPlotImpl });
if (next.length > 0) {
mounted = true;
}
},
});
setTimeout(retry, 0);
return instances;
}
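`mountTelemetryChartsWithRetry` polls with a capped attempt count instead of waiting indefinitely for the vendored script. The same bounded-retry shape as a standalone sketch (the `retryUntil` helper is hypothetical):

```javascript
// Retry a predicate up to maxAttempts times with a fixed delay,
// resolving true on success and false once the budget is exhausted.
function retryUntil(predicate, { maxAttempts = 10, delayMs = 50 } = {}) {
  return new Promise(resolve => {
    let attempts = 0;
    const tick = () => {
      attempts += 1;
      if (predicate()) {
        resolve(true);
        return;
      }
      if (attempts >= maxAttempts) {
        resolve(false);
        return;
      }
      setTimeout(tick, delayMs);
    };
    tick();
  });
}

// Succeeds on the third probe, well inside the attempt budget.
let probes = 0;
const ready = retryUntil(() => { probes += 1; return probes >= 3; }, { delayMs: 5 });
```

Capping attempts matters here because the page should degrade to chartless markup, not spin forever, when the script never loads.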
/**
* Create chart markup and models for telemetry charts.
*
* @param {Object} node Normalised node payload.
* @param {{ nowMs?: number, chartOptions?: Object }} [options] Rendering options.
* @returns {{chartsHtml: string, chartModels: Array<Object>}} Chart markup and models.
*/
export function createTelemetryCharts(node, { nowMs = Date.now(), chartOptions = {} } = {}) {
const telemetrySource = node?.rawSources?.telemetry;
const snapshotHistory = Array.isArray(node?.rawSources?.telemetrySnapshots) && node.rawSources.telemetrySnapshots.length > 0
? node.rawSources.telemetrySnapshots
: null;
const aggregatedSnapshots = Array.isArray(telemetrySource?.snapshots)
? telemetrySource.snapshots
: null;
const rawSnapshots = snapshotHistory ?? aggregatedSnapshots;
if (!Array.isArray(rawSnapshots) || rawSnapshots.length === 0) {
return { chartsHtml: '', chartModels: [] };
}
const entries = rawSnapshots
.map(snapshot => {
const timestamp = resolveSnapshotTimestamp(snapshot);
if (timestamp == null) return null;
return { timestamp, snapshot };
})
.filter(entry => entry != null && entry.timestamp >= nowMs - TELEMETRY_WINDOW_MS && entry.timestamp <= nowMs)
.sort((a, b) => a.timestamp - b.timestamp);
if (entries.length === 0) {
return { chartsHtml: '', chartModels: [] };
}
const chartModels = TELEMETRY_CHART_SPECS
.map(spec => buildTelemetryChartModel(spec, entries, nowMs, chartOptions))
.filter(model => model != null);
if (chartModels.length === 0) {
return { chartsHtml: '', chartModels: [] };
}
const chartsHtml = `
<section class="node-detail__charts">
<div class="node-detail__charts-grid">
${chartModels.map(model => renderTelemetryChartMarkup(model)).join('')}
</div>
</section>
`;
return { chartsHtml, chartModels };
}
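The entry pipeline above (resolve a timestamp, drop out-of-window samples, sort ascending) is the core of the chart data preparation. A standalone sketch, under the simplifying assumption that snapshots carry a millisecond `timestamp` field directly (the real code resolves it via `resolveSnapshotTimestamp`):

```javascript
// Keep only snapshots inside the trailing window [nowMs - windowMs, nowMs],
// ordered oldest -> newest, as the charting code expects.
function windowSnapshots(snapshots, nowMs, windowMs) {
  return snapshots
    .map(snapshot => {
      const timestamp = Number.isFinite(snapshot?.timestamp) ? snapshot.timestamp : null;
      if (timestamp == null) return null;
      return { timestamp, snapshot };
    })
    .filter(entry => entry != null
      && entry.timestamp >= nowMs - windowMs
      && entry.timestamp <= nowMs)
    .sort((a, b) => a.timestamp - b.timestamp);
}

const now = 1_000_000;
const windowed = windowSnapshots(
  [
    { timestamp: now - 10 },
    { timestamp: now - 5_000_000 }, // outside the window, dropped
    { timestamp: now - 500 },
    { note: 'no timestamp' },       // unresolvable, dropped
  ],
  now,
  60_000,
);
// windowed keeps two entries, ordered now-500 then now-10
```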
/**
* Render the telemetry charts for the supplied node when telemetry snapshots
* exist.
@@ -1493,7 +1242,41 @@ export function createTelemetryCharts(node, { nowMs = Date.now(), chartOptions =
* @returns {string} Chart grid markup or an empty string.
*/
export function renderTelemetryCharts(node, { nowMs = Date.now(), chartOptions = {} } = {}) {
return createTelemetryCharts(node, { nowMs, chartOptions }).chartsHtml;
const telemetrySource = node?.rawSources?.telemetry;
const snapshotHistory = Array.isArray(node?.rawSources?.telemetrySnapshots) && node.rawSources.telemetrySnapshots.length > 0
? node.rawSources.telemetrySnapshots
: null;
const aggregatedSnapshots = Array.isArray(telemetrySource?.snapshots)
? telemetrySource.snapshots
: null;
const rawSnapshots = snapshotHistory ?? aggregatedSnapshots;
if (!Array.isArray(rawSnapshots) || rawSnapshots.length === 0) {
return '';
}
const entries = rawSnapshots
.map(snapshot => {
const timestamp = resolveSnapshotTimestamp(snapshot);
if (timestamp == null) return null;
return { timestamp, snapshot };
})
.filter(entry => entry != null && entry.timestamp >= nowMs - TELEMETRY_WINDOW_MS && entry.timestamp <= nowMs)
.sort((a, b) => a.timestamp - b.timestamp);
if (entries.length === 0) {
return '';
}
const charts = TELEMETRY_CHART_SPECS
.map(spec => renderTelemetryChart(spec, entries, nowMs, chartOptions))
.filter(chart => stringOrNull(chart));
if (charts.length === 0) {
return '';
}
return `
<section class="node-detail__charts">
<div class="node-detail__charts-grid">
${charts.join('')}
</div>
</section>
`;
}
/**
@@ -2273,6 +2056,7 @@ function renderMessages(messages, renderShortHtml, node) {
if (!message || typeof message !== 'object') return null;
const text = stringOrNull(message.text) || stringOrNull(message.emoji);
if (!text) return null;
+ if (message.encrypted && String(text).trim() === 'GAA=') return null;
const timestamp = formatMessageTimestamp(message.rx_time, message.rx_iso);
const metadata = extractChatMessageMetadata(message);
@@ -2515,7 +2299,6 @@ function renderTraceroutes(traces, renderShortHtml, { roleIndex = null, node = n
* messages?: Array<Object>,
* traces?: Array<Object>,
* renderShortHtml: Function,
- * chartsHtml?: string,
* }} options Rendering options.
* @returns {string} HTML fragment representing the detail view.
*/
@@ -2525,7 +2308,6 @@ function renderNodeDetailHtml(node, {
traces = [],
renderShortHtml,
roleIndex = null,
- chartsHtml = null,
chartNowMs = Date.now(),
} = {}) {
const roleAwareBadge = renderRoleAwareBadge(renderShortHtml, {
@@ -2539,7 +2321,7 @@ function renderNodeDetailHtml(node, {
const longName = stringOrNull(node.longName ?? node.long_name);
const identifier = stringOrNull(node.nodeId ?? node.node_id);
const tableHtml = renderSingleNodeTable(node, renderShortHtml);
- const telemetryChartsHtml = stringOrNull(chartsHtml) ?? renderTelemetryCharts(node, { nowMs: chartNowMs });
+ const chartsHtml = renderTelemetryCharts(node, { nowMs: chartNowMs });
const neighborsHtml = renderNeighborGroups(node, neighbors, renderShortHtml, { roleIndex });
const tracesHtml = renderTraceroutes(traces, renderShortHtml, { roleIndex, node });
const messagesHtml = renderMessages(messages, renderShortHtml, node);
@@ -2565,7 +2347,7 @@ function renderNodeDetailHtml(node, {
<header class="node-detail__header">
<h2 class="node-detail__title">${badgeHtml}${nameHtml}${identifierHtml}</h2>
</header>
- ${telemetryChartsHtml ?? ''}
+ ${chartsHtml ?? ''}
${tableSection}
${contentHtml}
`;
@@ -2679,17 +2461,15 @@ async function fetchTracesForNode(identifier, { fetchImpl } = {}) {
}
/**
* Fetch node detail data and render the HTML fragment.
* Initialise the node detail page by hydrating the DOM with fetched data.
*
* @param {{
* document?: Document,
* fetchImpl?: Function,
* refreshImpl?: Function,
* renderShortHtml?: Function,
* chartNowMs?: number,
* chartOptions?: Object,
* }} options Optional overrides for testing.
* @returns {Promise<string|{html: string, chartModels: Array<Object>}>} Rendered markup or chart models when requested.
* @returns {Promise<boolean>} ``true`` when the node was rendered successfully.
*/
export async function fetchNodeDetailHtml(referenceData, options = {}) {
if (!referenceData || typeof referenceData !== 'object') {
@@ -2719,38 +2499,15 @@ export async function fetchNodeDetailHtml(referenceData, options = {}) {
fetchTracesForNode(messageIdentifier, { fetchImpl: options.fetchImpl }),
]);
const roleIndex = await buildTraceRoleIndex(traces, neighborRoleIndex, { fetchImpl: options.fetchImpl });
const chartNowMs = Number.isFinite(options.chartNowMs) ? options.chartNowMs : Date.now();
- const chartState = createTelemetryCharts(node, {
-   nowMs: chartNowMs,
-   chartOptions: options.chartOptions ?? {},
- });
- const html = renderNodeDetailHtml(node, {
+ return renderNodeDetailHtml(node, {
neighbors: node.neighbors,
messages,
traces,
renderShortHtml,
roleIndex,
- chartsHtml: chartState.chartsHtml,
chartNowMs,
});
- if (options.returnState === true) {
-   return { html, chartModels: chartState.chartModels };
- }
- return html;
}
/**
* Initialise the standalone node detail page and mount telemetry charts.
*
* @param {{
* document?: Document,
* fetchImpl?: Function,
* refreshImpl?: Function,
* renderShortHtml?: Function,
* uPlotImpl?: Function,
* }} options Optional overrides for testing.
* @returns {Promise<boolean>} ``true`` when the node was rendered successfully.
*/
export async function initializeNodeDetailPage(options = {}) {
const documentRef = options.document ?? globalThis.document;
if (!documentRef || typeof documentRef.querySelector !== 'function') {
@@ -2787,15 +2544,13 @@ export async function initializeNodeDetailPage(options = {}) {
const privateMode = (root.dataset?.privateMode ?? '').toLowerCase() === 'true';
try {
- const result = await fetchNodeDetailHtml(referenceData, {
+ const html = await fetchNodeDetailHtml(referenceData, {
fetchImpl: options.fetchImpl,
refreshImpl,
renderShortHtml: options.renderShortHtml,
privateMode,
- returnState: true,
});
- root.innerHTML = result.html;
- mountTelemetryChartsWithRetry(result.chartModels, { root, uPlotImpl: options.uPlotImpl });
+ root.innerHTML = html;
return true;
} catch (error) {
console.error('Failed to render node detail page', error);
@@ -2832,11 +2587,7 @@ export const __testUtils = {
categoriseNeighbors,
renderNeighborGroups,
renderSingleNodeTable,
- createTelemetryCharts,
renderTelemetryCharts,
- mountTelemetryCharts,
- mountTelemetryChartsWithRetry,
- buildUPlotChartConfig,
renderMessages,
renderTraceroutes,
renderTracePath,


@@ -30,6 +30,9 @@
--input-border: rgba(12, 15, 18, 0.18);
--input-placeholder: rgba(12, 15, 18, 0.45);
--control-accent: var(--accent);
--announcement-bg: #fff4d6;
--announcement-fg: #7a3f00;
--announcement-border: #f0c05b;
--pad: 16px;
--map-tile-filter-light: grayscale(1) saturate(0) brightness(0.92) contrast(1.05);
--map-tile-filter-dark: grayscale(1) invert(1) brightness(0.9) contrast(1.08);
@@ -59,6 +62,9 @@ body.dark {
--input-border: rgba(230, 235, 240, 0.24);
--input-placeholder: rgba(230, 235, 240, 0.55);
--control-accent: var(--accent);
--announcement-bg: #3b2500;
--announcement-fg: #ffd184;
--announcement-border: #a56a00;
}
html,
@@ -215,25 +221,237 @@ h1 {
.site-header {
display: flex;
flex-wrap: wrap;
align-items: center;
justify-content: space-between;
gap: 16px;
min-height: 56px;
padding: 4px 0;
margin-bottom: 8px;
}
.site-header__left,
.site-header__right {
display: flex;
align-items: center;
gap: 12px;
margin-bottom: 8px;
}
.site-header__left {
flex: 1 1 auto;
min-width: 0;
}
.site-header__right {
flex: 0 0 auto;
margin-left: auto;
}
.announcement-banner {
display: flex;
align-items: center;
justify-content: center;
height: 1.6em;
padding: 0 var(--pad);
border-radius: 999px;
background: var(--announcement-bg);
color: var(--announcement-fg);
border: 1px solid var(--announcement-border);
box-sizing: border-box;
overflow: hidden;
}
.announcement-banner__content {
margin: 0;
line-height: 1.6;
text-align: center;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.site-title {
display: inline-flex;
align-items: center;
gap: 12px;
min-width: 0;
}
.site-title-text {
min-width: 0;
max-width: 100%;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.site-title img {
- width: 52px;
- height: 52px;
+ width: 36px;
+ height: 36px;
display: block;
border-radius: 12px;
}
.site-nav {
display: flex;
align-items: center;
gap: 8px;
flex-wrap: wrap;
}
.site-nav__link {
display: inline-flex;
align-items: center;
gap: 6px;
padding: 6px 12px;
border-radius: 999px;
color: var(--fg);
text-decoration: none;
border: 1px solid transparent;
font-size: 14px;
}
.site-nav__link:hover {
background: var(--card);
}
.site-nav__link.is-active {
border-color: var(--accent);
color: var(--accent);
background: transparent;
font-weight: 600;
}
.site-nav__link:focus-visible {
outline: 2px solid var(--accent);
outline-offset: 2px;
}
.menu-toggle {
display: none;
}
.menu-toggle:focus-visible {
outline: 2px solid var(--accent);
outline-offset: 2px;
}
.mobile-menu {
position: fixed;
inset: 0;
z-index: 1200;
display: flex;
justify-content: flex-end;
pointer-events: none;
}
.mobile-menu[hidden] {
display: none;
}
.mobile-menu__backdrop {
flex: 1 1 auto;
background: rgba(0, 0, 0, 0.4);
opacity: 0;
transition: opacity 200ms ease;
}
.mobile-menu__panel {
width: min(320px, 86vw);
background: var(--bg2);
color: var(--fg);
padding: 16px;
display: flex;
flex-direction: column;
gap: 16px;
height: 100%;
overflow-y: auto;
transform: translateX(100%);
transition: transform 220ms ease;
box-shadow: -12px 0 32px rgba(0, 0, 0, 0.3);
}
.mobile-menu__header {
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
}
.mobile-menu__title {
margin: 0;
font-size: 16px;
}
.mobile-menu__close:focus-visible {
outline: 2px solid var(--accent);
outline-offset: 2px;
}
.mobile-nav {
display: flex;
flex-direction: column;
gap: 8px;
}
.mobile-nav__link {
display: inline-flex;
align-items: center;
padding: 8px 10px;
border-radius: 10px;
color: var(--fg);
text-decoration: none;
border: 1px solid transparent;
}
.mobile-nav__link.is-active {
border-color: var(--accent);
color: var(--accent);
font-weight: 600;
}
.mobile-nav__link:focus-visible {
outline: 2px solid var(--accent);
outline-offset: 2px;
}
.mobile-menu.is-open {
pointer-events: auto;
}
.mobile-menu.is-open .mobile-menu__backdrop {
opacity: 1;
}
.mobile-menu.is-open .mobile-menu__panel {
transform: translateX(0);
}
.menu-open {
overflow: hidden;
}
.section-link {
display: inline-flex;
align-items: center;
gap: 6px;
padding: 6px 10px;
border-radius: 999px;
border: 1px solid var(--line);
color: var(--fg);
text-decoration: none;
font-size: 14px;
}
.section-link:hover {
border-color: var(--accent);
color: var(--accent);
}
.section-link:focus-visible {
outline: 2px solid var(--accent);
outline-offset: 2px;
}
.meta {
color: #555;
margin-bottom: 12px;
@@ -282,11 +500,29 @@ h1 {
@media (max-width: 900px) {
.site-header {
flex-direction: column;
align-items: flex-start;
margin-bottom: 4px;
}
.site-header__left {
flex-wrap: nowrap;
}
.site-header__left--federation {
flex-wrap: wrap;
}
.site-nav {
display: none;
}
.menu-toggle {
display: inline-flex;
}
.instance-selector {
flex: 0 1 auto;
}
.instance-selector,
.instance-select {
width: 100%;
}
@@ -296,6 +532,7 @@ h1 {
}
}
.pill {
display: inline-block;
padding: 2px 8px;
@@ -994,7 +1231,7 @@ body.dark .node-detail-overlay__close:hover {
.node-detail__charts-grid {
display: grid;
gap: 24px;
- grid-template-columns: repeat(auto-fit, minmax(min(100%, 1152px), 1fr));
+ grid-template-columns: repeat(auto-fit, minmax(min(100%, 640px), 1fr));
}
.node-detail__chart {
@@ -1026,45 +1263,10 @@ body.dark .node-detail-overlay__close:hover {
font-size: 1rem;
}
.node-detail__chart-plot {
.node-detail__chart svg {
width: 100%;
height: clamp(240px, 50vw, 360px);
height: auto;
max-height: 420px;
overflow: hidden;
}
.node-detail__chart-plot .uplot {
width: 100%;
height: 100%;
margin: 0;
line-height: 0;
position: relative;
}
.node-detail__chart-plot .uplot .u-wrap,
.node-detail__chart-plot .uplot .u-under,
.node-detail__chart-plot .uplot .u-over {
top: 0;
left: 0;
}
.node-detail__chart-plot .u-axis,
.node-detail__chart-plot .u-axis .u-label,
.node-detail__chart-plot .u-axis .u-value,
.node-detail__chart-plot .u-axis text,
.node-detail__chart-plot .u-axis-label {
color: var(--muted) !important;
fill: var(--muted) !important;
font-size: 0.95rem;
}
.node-detail__chart-plot .u-grid {
stroke: rgba(12, 15, 18, 0.08);
stroke-width: 1;
}
body.dark .node-detail__chart-plot .u-grid {
stroke: rgba(255, 255, 255, 0.15);
}
.node-detail__chart-axis line {
@@ -1729,10 +1931,6 @@ input[type="radio"] {
gap: 12px;
}
.controls--full-screen {
grid-template-columns: minmax(0, 1fr) auto;
}
.controls .filter-input {
width: 100%;
}

File diff suppressed because one or more lines are too long


@@ -1 +0,0 @@
.uplot, .uplot *, .uplot *::before, .uplot *::after {box-sizing: border-box;}.uplot {font-family: system-ui, -apple-system, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";line-height: 1.5;width: min-content;}.u-title {text-align: center;font-size: 18px;font-weight: bold;}.u-wrap {position: relative;user-select: none;}.u-over, .u-under {position: absolute;}.u-under {overflow: hidden;}.uplot canvas {display: block;position: relative;width: 100%;height: 100%;}.u-axis {position: absolute;}.u-legend {font-size: 14px;margin: auto;text-align: center;}.u-inline {display: block;}.u-inline * {display: inline-block;}.u-inline tr {margin-right: 16px;}.u-legend th {font-weight: 600;}.u-legend th > * {vertical-align: middle;display: inline-block;}.u-legend .u-marker {width: 1em;height: 1em;margin-right: 4px;background-clip: padding-box !important;}.u-inline.u-live th::after {content: ":";vertical-align: middle;}.u-inline:not(.u-live) .u-value {display: none;}.u-series > * {padding: 4px;}.u-series th {cursor: pointer;}.u-legend .u-off > * {opacity: 0.3;}.u-select {background: rgba(0,0,0,0.07);position: absolute;pointer-events: none;}.u-cursor-x, .u-cursor-y {position: absolute;left: 0;top: 0;pointer-events: none;will-change: transform;}.u-hz .u-cursor-x, .u-vt .u-cursor-y {height: 100%;border-right: 1px dashed #607D8B;}.u-hz .u-cursor-y, .u-vt .u-cursor-x {width: 100%;border-bottom: 1px dashed #607D8B;}.u-cursor-pt {position: absolute;top: 0;left: 0;border-radius: 50%;border: 0 solid;pointer-events: none;will-change: transform;/*this has to be !important since we set inline "background" shorthand */background-clip: padding-box !important;}.u-axis.u-off, .u-select.u-off, .u-cursor-x.u-off, .u-cursor-y.u-off, .u-cursor-pt.u-off {display: none;}


@@ -1,55 +0,0 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { mkdir, copyFile, access } from 'node:fs/promises';
import { constants as fsConstants } from 'node:fs';
import path from 'node:path';
import { fileURLToPath } from 'node:url';
/**
* Resolve an absolute path relative to this script location.
*
* @param {string[]} segments Path segments to append.
* @returns {string} Absolute path resolved from this script.
*/
function resolvePath(...segments) {
const scriptDir = path.dirname(fileURLToPath(import.meta.url));
return path.resolve(scriptDir, ...segments);
}
/**
* Ensure the uPlot assets are available within the public asset tree.
*
* @returns {Promise<void>} Resolves once files have been copied.
*/
async function copyUPlotAssets() {
const sourceDir = resolvePath('..', 'node_modules', 'uplot', 'dist');
const targetDir = resolvePath('..', 'public', 'assets', 'vendor', 'uplot');
const assets = ['uPlot.iife.min.js', 'uPlot.min.css'];
await access(sourceDir, fsConstants.R_OK);
await mkdir(targetDir, { recursive: true });
await Promise.all(
assets.map(async asset => {
const source = path.join(sourceDir, asset);
const target = path.join(targetDir, asset);
await copyFile(source, target);
}),
);
}
await copyUPlotAssets();

Some files were not shown because too many files have changed in this diff.