Compare commits

..

42 Commits

Author SHA1 Message Date
l5y c157fd481b matrix: fixed the text-message checkpoint regression (#595)
* matrix: fixed the text-message checkpoint regression

* matrix: improve formatting

* matrix: fix tests
2026-01-05 18:20:25 +01:00
l5y a6fc7145bc matrix: cache seen messages by rx_time not id (#594)
* matrix: cache seen messages by rx_time not id

* matrix: fix review comments

* matrix: fix review comments

* matrix: cover missing unit test vectors

* matrix: fix tests
2026-01-05 17:34:54 +01:00
l5y ca05cbb2c5 web: hide the default '0' tab when not active (#593) 2026-01-05 16:26:56 +01:00
l5y 5c79572c4d matrix: fix empty bridge state json (#592)
* matrix: fix empty bridge state json

* matrix: fix tests
2026-01-05 16:11:24 +01:00
l5y 6fd8e5ad12 web: allow certain charts to overflow upper bounds (#585)
* web: allow certain charts to overflow upper bounds

* web: cover missing unit test vectors
2025-12-31 15:15:18 +01:00
l5y 09fbc32e48 ingestor: support ROUTING_APP messages (#584)
* ingestor: support ROUTING_APP messages

* data: cover missing unit test vectors

* data: address review comments

* tests: fix
2025-12-31 13:13:34 +01:00
l5y 4591d5acd6 ci: run nix flake check on ci (#583)
* ci: run nix flake check on ci

* ci: fix tests
2025-12-31 12:58:37 +01:00
l5y 6c711f80b4 web: hide legend by default (#582)
* web: hide legend my default

* web: run rufo
2025-12-31 12:42:53 +01:00
Benjamin Grosse e61e701240 nix flake (#577) 2025-12-31 12:00:11 +01:00
apo-mak 42f4e80a26 Support BLE UUID format for macOS Bluetooth devices (#575)
* Initial plan

* Add BLE UUID support for macOS devices

Co-authored-by: apo-mak <25563515+apo-mak@users.noreply.github.com>

* docs: Add UUID format example for macOS BLE connections

Co-authored-by: apo-mak <25563515+apo-mak@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: apo-mak <25563515+apo-mak@users.noreply.github.com>
2025-12-20 20:21:59 +01:00
l5y 4dc03f33ca web: add mesh.qrp.ro as seed node (#573) 2025-12-17 10:48:51 +01:00
l5y 5572c6cd12 web: ensure unknown nodes for messages and traces (#572) 2025-12-17 10:21:03 +01:00
l5y 4f7e66de82 chore: bump version to 0.5.9 (#569) 2025-12-16 21:14:10 +00:00
l5y c1898037c0 web: add secondary seed node jmrp.io (#568) 2025-12-16 21:38:41 +01:00
l5y efc5f64279 data: implement whitelist for ingestor (#567)
* data: implement whitelist for ingestor

* data: run black

* data: cover missing unit test vectors
2025-12-16 21:11:53 +01:00
l5y 636a203254 web: add ?since= parameter to all apis (#566) 2025-12-16 20:24:31 +01:00
l5y 2e78fa7a3a matrix: fix docker build 2025-12-16 19:26:31 +01:00
l5y e74f985630 matrix: fix docker build (#564) 2025-12-16 18:52:07 +01:00
l5y e4facd7f26 web: fix federation signature validation and create fallback (#563)
* web: fix federation signature validation and create fallback

* web: cover missing unit test vectors
2025-12-16 10:52:59 +01:00
l5y f533362f8a chore: update readme (#561) 2025-12-16 08:54:31 +01:00
l5y 175a8f368f matrix: add docker file for bridge (#556)
* matrix: add docker file for bridge

* matrix: address review comments

* matrix: address review comments

* matrix: address review comments

* matrix: address review comments

* matrix: address review comments
2025-12-16 08:53:01 +01:00
l5y 872bcbd529 matrix: add health checks to startup (#555)
* matrix: add health checks to startup

* matrix: address review comments

* matrix: cover missing unit test vectors

* matrix: cover missing unit test vectors
2025-12-15 22:53:32 +01:00
l5y 8811f71e53 matrix: omit the api part in base url (#554)
* matrix: omit the api part in base url

* matrix: address review comments
2025-12-15 22:04:01 +01:00
l5y fec649a159 app: add utility coverage tests for main.dart (#552)
* Add utility coverage tests for main.dart

* Add channel names to message sorting tests

* Fix MeshMessage sort test construction

* chore: run dart formatter
2025-12-15 11:03:51 +01:00
l5y 9e3f481401 Add unit tests for daemon helpers (#553) 2025-12-15 08:43:13 +01:00
l5y 1a497864a7 chore: bump version to 0.5.8 (#551)
* chore: bump version to 0.5.8

* chore: add missing license headers
2025-12-15 08:29:27 +01:00
l5y 06fb90513f data: track ingestors heartbeat (#549)
* data: track ingestors heartbeat

* data: address review comments

* cover missing unit test vectors

* cover missing unit test vectors
2025-12-14 18:42:17 +01:00
l5y b5eecb1ec1 Harden instance selector navigation URLs (#550)
* Harden instance selector navigation URLs

* Cover malformed instance URL handling
2025-12-14 18:40:41 +01:00
l5y 0e211aebdd data: hide channels that have been flag for ignoring (#548)
* data: hide channels that have been flag for ignoring

* data: address review comments
2025-12-14 16:47:44 +01:00
l5y 96b62d7e14 web: fix limit when counting remote nodes (#547) 2025-12-14 15:05:19 +01:00
l5y baf6ffff0b web: improve instances map and table view (#546)
* web: improve instances map and table view

* web: address review comments

* run rufo
2025-12-14 14:35:55 +01:00
l5y 135de0863c web: fix traces submission with optional fields on udp (#545) 2025-12-14 13:27:07 +01:00
l5y 074a61baac chore: bump version to 0.5.7 (#542)
* chore: bump version to 0.5.7

* Change version to 0.5.7 in AppFrameworkInfo.plist

Updated version numbers to 0.5.7.
2025-12-08 20:39:58 +01:00
l5y 209cc948bf Handle zero telemetry aggregates (#538)
* Handle zero telemetry aggregates

* Fix telemetry aggregation to drop zero readings
2025-12-08 20:31:32 +01:00
l5y cc108f2f49 web: fix telemetry api to return current in amperes (#541)
* web: fix telemetry api to return current in amperes

* web: address review comments
2025-12-08 20:18:10 +01:00
l5y 844204f64d web: fix traces rendering (#535)
* web: fix traces rendering

* web: remove icon shortcuts

* web: further refine the trace routes
2025-12-08 19:48:33 +01:00
l5y 88f699f4ec Normalize numeric roles in node snapshots (#539) 2025-12-08 19:47:50 +01:00
l5y d1b9196f47 Use INSTANCE_DOMAIN env for ingestor (#536)
* Use INSTANCE_DOMAIN env for ingestor

* Normalize instance domain handling
2025-12-07 11:05:13 +01:00
l5y 8181fc8e03 web: further refine the federation page (#534)
* web: further refine the federation page

* web: address review comments

* web: address review comments
2025-12-04 13:31:23 +01:00
apo-mak 5be2ac417a Add Federation Map (#532)
* Add Federation Map

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: l5y <220195275+l5yth@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-04 12:24:54 +01:00
apo-mak 6acb1c833c add contact link to the instance data (#533) 2025-12-04 12:17:26 +01:00
l5y 2bd69415c1 matrix: create potato-matrix-bridge (#528)
* matrix: create potato-matrix-bridge

* matrix: add unit tests

* matrix: address review comments

* ci: condition github actions to only run on paths affected...

* Add comprehensive unit tests for config, matrix, potatomesh, and main modules

* Revert "Add comprehensive unit tests for config, matrix, potatomesh, and main modules"

This reverts commit 212522b4a2.

* matrix: add unit tests

* matrix: add unit tests

* matrix: add unit tests
2025-11-29 08:52:20 +01:00
86 changed files with 10676 additions and 219 deletions
+2
@@ -36,6 +36,8 @@ jobs:
include:
- language: python
build-mode: none
+ - language: rust
+ build-mode: none
- language: ruby
build-mode: none
- language: javascript-typescript
+27 -5
@@ -43,7 +43,7 @@ jobs:
strategy:
matrix:
- service: [web, ingestor]
+ service: [web, ingestor, matrix-bridge]
architecture:
- { name: linux-amd64, platform: linux/amd64, label: "Linux x86_64", os: linux, architecture: amd64 }
- { name: linux-arm64, platform: linux/arm64, label: "Linux ARM64", os: linux, architecture: arm64 }
@@ -109,8 +109,8 @@ jobs:
uses: docker/build-push-action@v5
with:
context: .
- file: ./${{ matrix.service == 'web' && 'web/Dockerfile' || 'data/Dockerfile' }}
- target: production
+ file: ${{ matrix.service == 'web' && './web/Dockerfile' || matrix.service == 'ingestor' && './data/Dockerfile' || './matrix/Dockerfile' }}
+ target: ${{ matrix.service == 'matrix-bridge' && 'runtime' || 'production' }}
platforms: ${{ matrix.architecture.platform }}
push: true
tags: |
@@ -119,12 +119,12 @@ jobs:
${{ steps.tagging.outputs.include_latest == 'true' && format('{0}/{1}-{2}-{3}:latest', env.REGISTRY, env.IMAGE_PREFIX, matrix.service, matrix.architecture.name) || '' }}
labels: |
org.opencontainers.image.source=https://github.com/${{ github.repository }}
- org.opencontainers.image.description=PotatoMesh ${{ matrix.service == 'web' && 'Web Application' || 'Python Ingestor' }} for ${{ matrix.architecture.label }}
+ org.opencontainers.image.description=PotatoMesh ${{ matrix.service == 'web' && 'Web Application' || matrix.service == 'ingestor' && 'Python Ingestor' || 'Matrix Bridge' }} for ${{ matrix.architecture.label }}
org.opencontainers.image.licenses=Apache-2.0
org.opencontainers.image.version=${{ steps.version.outputs.version }}
org.opencontainers.image.created=${{ github.event.head_commit.timestamp }}
org.opencontainers.image.revision=${{ github.sha }}
- org.opencontainers.image.title=PotatoMesh ${{ matrix.service == 'web' && 'Web' || 'Ingestor' }} (${{ matrix.architecture.label }})
+ org.opencontainers.image.title=PotatoMesh ${{ matrix.service == 'web' && 'Web' || matrix.service == 'ingestor' && 'Ingestor' || 'Matrix Bridge' }} (${{ matrix.architecture.label }})
org.opencontainers.image.vendor=PotatoMesh
org.opencontainers.image.architecture=${{ matrix.architecture.architecture }}
org.opencontainers.image.os=${{ matrix.architecture.os }}
@@ -208,6 +208,19 @@ jobs:
VERSION=${GITHUB_REF#refs/tags/v}
echo "version=$VERSION" >> $GITHUB_OUTPUT
+ - name: Determine tagging strategy
+ id: tagging
+ run: |
+ VERSION="${{ steps.version.outputs.version }}"
+ if echo "$VERSION" | grep -E -- '-(rc|beta|alpha|dev)'; then
+ INCLUDE_LATEST=false
+ else
+ INCLUDE_LATEST=true
+ fi
+ echo "include_latest=$INCLUDE_LATEST" >> $GITHUB_OUTPUT
- name: Publish release summary
run: |
echo "## 🚀 PotatoMesh Images Published to GHCR" >> $GITHUB_STEP_SUMMARY
@@ -234,4 +247,13 @@ jobs:
echo "- \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-armv7:latest\` - Linux ARMv7" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
fi
+ # Matrix bridge images
+ echo "### 🧩 Matrix Bridge" >> $GITHUB_STEP_SUMMARY
+ if [ "${{ steps.tagging.outputs.include_latest }}" = "true" ]; then
+ echo "- \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-matrix-bridge-linux-amd64:latest\` - Linux x86_64" >> $GITHUB_STEP_SUMMARY
+ echo "- \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-matrix-bridge-linux-arm64:latest\` - Linux ARM64" >> $GITHUB_STEP_SUMMARY
+ echo "- \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-matrix-bridge-linux-armv7:latest\` - Linux ARMv7" >> $GITHUB_STEP_SUMMARY
+ echo "" >> $GITHUB_STEP_SUMMARY
+ fi
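The tagging step added above decides whether `:latest` is published by checking the version for a pre-release suffix. A standalone sketch of that check, with the CI variable plumbing simplified into an illustrative function wrapper:

```shell
#!/bin/sh
# Standalone sketch of the pre-release check used by the tagging step:
# versions carrying an -rc/-beta/-alpha/-dev suffix should not be tagged
# :latest. The function is for local experimentation, not the workflow itself.
include_latest() {
  # '--' ends option parsing so the pattern's leading '-' is safe
  if echo "$1" | grep -q -E -- '-(rc|beta|alpha|dev)'; then
    echo "false"
  else
    echo "true"
  fi
}

include_latest "0.5.9"        # stable release
include_latest "0.6.0-rc1"    # release candidate
include_latest "1.0.0-beta.2" # pre-release
```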
+4
@@ -19,6 +19,9 @@ on:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
+ paths:
+ - 'web/**'
+ - 'tests/**'
permissions:
contents: read
@@ -47,6 +50,7 @@ jobs:
files: web/reports/javascript-coverage.json
flags: frontend
name: frontend
+ fail_ci_if_error: false
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
- name: Upload test results to Codecov
+4
@@ -19,6 +19,9 @@ on:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
+ paths:
+ - 'app/**'
+ - 'tests/**'
permissions:
contents: read
@@ -63,5 +66,6 @@ jobs:
files: coverage/lcov.info
flags: flutter-mobile
name: flutter-mobile
+ fail_ci_if_error: false
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+35
@@ -0,0 +1,35 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Nix
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
jobs:
flake-check:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v30
with:
extra_nix_config: |
experimental-features = nix-command flakes
- name: Run flake checks
run: nix flake check
+4
@@ -19,6 +19,9 @@ on:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
+ paths:
+ - 'data/**'
+ - 'tests/**'
permissions:
contents: read
@@ -47,6 +50,7 @@ jobs:
token: ${{ secrets.CODECOV_TOKEN }}
files: reports/python-coverage.xml
flags: python-ingestor
+ fail_ci_if_error: false
name: python-ingestor
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+3
@@ -19,6 +19,9 @@ on:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
+ paths:
+ - 'web/**'
+ - 'tests/**'
permissions:
contents: read
+78
@@ -0,0 +1,78 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Rust
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
paths:
- '.github/**'
- 'matrix/**'
- 'tests/**'
permissions:
contents: read
jobs:
matrix:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@stable
- name: Cache Cargo registry
uses: actions/cache@v4
with:
path: |
~/.cargo/registry
~/.cargo/git
./matrix/target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.toml', '**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-
- name: Show rustc version
run: rustc --version
- name: Install llvm-tools-preview component
run: rustup component add llvm-tools-preview --toolchain stable
- name: Install cargo-llvm-cov
working-directory: ./matrix
run: cargo install cargo-llvm-cov --locked
- name: Check formatting
working-directory: ./matrix
run: cargo fmt --all -- --check
- name: Clippy lint
working-directory: ./matrix
run: cargo clippy --all-targets --all-features -- -D warnings
- name: Build
working-directory: ./matrix
run: cargo build --all --all-features
- name: Test
working-directory: ./matrix
run: cargo test --all --all-features --verbose
- name: Run tests with coverage
working-directory: ./matrix
run: |
cargo llvm-cov --all-features --workspace --lcov --output-path coverage.lcov
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./matrix/coverage.lcov
flags: matrix-bridge
name: matrix-bridge
fail_ci_if_error: false
+4
@@ -17,11 +17,15 @@ The repository splits runtime and ingestion logic. `web/` holds the Sinatra dash
`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.
`matrix/` contains the Rust Matrix bridge; build with `cargo build --release` or `docker build -f matrix/Dockerfile .`, and keep bridge config under `matrix/Config.toml` when running locally.
## Build, Test, and Development Commands
Run dependency installs inside `web/`: `bundle install` for gems and `npm ci` for JavaScript tooling. Start the app with `cd web && API_TOKEN=dev ./app.sh` for local work or `bundle exec rackup -p 41447` when integrating elsewhere.
Prep ingestion with `python -m venv .venv && pip install -r data/requirements.txt`; `./data/mesh.sh` streams from live radios. `docker-compose -f docker-compose.dev.yml up` brings up the full stack.
Container images publish via `.github/workflows/docker.yml` as `potato-mesh-{service}-linux-$arch` (`web`, `ingestor`, `matrix-bridge`), using the Dockerfiles in `web/`, `data/`, and `matrix/`.
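The `potato-mesh-{service}-linux-$arch` convention above can be sketched as a small helper. The `ghcr.io/l5yth` registry prefix here is an assumption for illustration — the workflow actually composes names from its `env.REGISTRY` and `env.IMAGE_PREFIX` variables:

```shell
#!/bin/sh
# Illustrative helper composing an image reference following the
# potato-mesh-{service}-linux-{arch} convention described above.
# The "ghcr.io/l5yth" prefix is an assumption for the example only.
image_ref() {
  service="$1"; arch="$2"; tag="$3"
  echo "ghcr.io/l5yth/potato-mesh-${service}-linux-${arch}:${tag}"
}

image_ref web amd64 latest
image_ref matrix-bridge arm64 v0.5.9
```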
## Coding Style & Naming Conventions
Use two-space indentation for Ruby and keep `# frozen_string_literal: true` at the top of new files. Keep Ruby classes/modules in `CamelCase`, filenames in `snake_case.rb`, and feature specs in `*_spec.rb`.
+62
@@ -1,5 +1,67 @@
# CHANGELOG
## v0.5.7
* Data: track ingestors heartbeat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/549>
* Harden instance selector navigation URLs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/550>
* Data: hide channels that have been flag for ignoring by @l5yth in <https://github.com/l5yth/potato-mesh/pull/548>
* Web: fix limit when counting remote nodes by @l5yth in <https://github.com/l5yth/potato-mesh/pull/547>
* Web: improve instances map and table view by @l5yth in <https://github.com/l5yth/potato-mesh/pull/546>
* Web: fix traces submission with optional fields on udp by @l5yth in <https://github.com/l5yth/potato-mesh/pull/545>
* Chore: bump version to 0.5.7 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/542>
* Handle zero telemetry aggregates by @l5yth in <https://github.com/l5yth/potato-mesh/pull/538>
* Web: fix telemetry api to return current in amperes by @l5yth in <https://github.com/l5yth/potato-mesh/pull/541>
* Web: fix traces rendering by @l5yth in <https://github.com/l5yth/potato-mesh/pull/535>
* Normalize numeric node roles to canonical labels by @l5yth in <https://github.com/l5yth/potato-mesh/pull/539>
* Use INSTANCE_DOMAIN env for ingestor by @l5yth in <https://github.com/l5yth/potato-mesh/pull/536>
* Web: further refine the federation page by @l5yth in <https://github.com/l5yth/potato-mesh/pull/534>
* Add Federation Map by @apo-mak in <https://github.com/l5yth/potato-mesh/pull/532>
* Add contact link to the instance data by @apo-mak in <https://github.com/l5yth/potato-mesh/pull/533>
* Matrix: create potato-matrix-bridge by @l5yth in <https://github.com/l5yth/potato-mesh/pull/528>
## v0.5.6
* Web: display sats in view by @l5yth in <https://github.com/l5yth/potato-mesh/pull/523>
* Web: display air quality in separate chart by @l5yth in <https://github.com/l5yth/potato-mesh/pull/521>
* Ci: Add macOS and Ubuntu builds to Flutter workflow by @l5yth in <https://github.com/l5yth/potato-mesh/pull/519>
* Web: add current to charts by @l5yth in <https://github.com/l5yth/potato-mesh/pull/520>
* App: fix notification icon by @l5yth in <https://github.com/l5yth/potato-mesh/pull/518>
* Spec: update test fixtures by @l5yth in <https://github.com/l5yth/potato-mesh/pull/517>
* App: generate proper icons by @l5yth in <https://github.com/l5yth/potato-mesh/pull/516>
* Web: fix favicon by @l5yth in <https://github.com/l5yth/potato-mesh/pull/515>
* Web: add ?since= parameter to api/messages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/512>
* App: implement notifications by @l5yth in <https://github.com/l5yth/potato-mesh/pull/511>
* App: add theme selector by @l5yth in <https://github.com/l5yth/potato-mesh/pull/507>
* App: further harden refresh logic and prefer local first by @l5yth in <https://github.com/l5yth/potato-mesh/pull/506>
* Ci: fix app artifacts for tags by @l5yth in <https://github.com/l5yth/potato-mesh/pull/504>
* Ci: build app artifacts for tags by @l5yth in <https://github.com/l5yth/potato-mesh/pull/503>
* App: add persistance by @l5yth in <https://github.com/l5yth/potato-mesh/pull/501>
* App: instance and chat mvp by @l5yth in <https://github.com/l5yth/potato-mesh/pull/498>
* App: add instance selector to settings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/497>
* App: add scaffholding gitignore by @l5yth in <https://github.com/l5yth/potato-mesh/pull/496>
* Handle reaction app packets without reply id by @l5yth in <https://github.com/l5yth/potato-mesh/pull/495>
* Render reaction multiplier counts by @l5yth in <https://github.com/l5yth/potato-mesh/pull/494>
* Add comprehensive tests for Flutter reader by @l5yth in <https://github.com/l5yth/potato-mesh/pull/491>
* Map numeric role ids to canonical Meshtastic roles by @l5yth in <https://github.com/l5yth/potato-mesh/pull/489>
* Update node detail hydration for traces by @l5yth in <https://github.com/l5yth/potato-mesh/pull/490>
* Add mobile Flutter CI workflow by @l5yth in <https://github.com/l5yth/potato-mesh/pull/488>
* Align OCI labels in docker workflow by @l5yth in <https://github.com/l5yth/potato-mesh/pull/487>
* Add Meshtastic reader Flutter app by @l5yth in <https://github.com/l5yth/potato-mesh/pull/483>
* Handle pre-release Docker tagging by @l5yth in <https://github.com/l5yth/potato-mesh/pull/486>
* Web: remove range from charts labels by @l5yth in <https://github.com/l5yth/potato-mesh/pull/485>
* Floor override frequencies to MHz integers by @l5yth in <https://github.com/l5yth/potato-mesh/pull/476>
* Prevent message ids from being treated as node identifiers by @l5yth in <https://github.com/l5yth/potato-mesh/pull/475>
* Fix 1 after emojis in reply. by @Alexkurd in <https://github.com/l5yth/potato-mesh/pull/464>
* Add frequency and preset to node table by @l5yth in <https://github.com/l5yth/potato-mesh/pull/472>
* Subscribe to traceroute app pubsub topic by @l5yth in <https://github.com/l5yth/potato-mesh/pull/471>
* Aggregate telemetry over the last 7 days by @l5yth in <https://github.com/l5yth/potato-mesh/pull/470>
* Address missing id field ingestor bug by @l5yth in <https://github.com/l5yth/potato-mesh/pull/469>
* Merge secondary channels by name by @l5yth in <https://github.com/l5yth/potato-mesh/pull/468>
* Rate limit host device telemetry by @l5yth in <https://github.com/l5yth/potato-mesh/pull/467>
* Add traceroutes to frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/466>
* Feat: implement traceroute app packet handling across the stack by @l5yth in <https://github.com/l5yth/potato-mesh/pull/463>
* Bump version and update changelog by @l5yth in <https://github.com/l5yth/potato-mesh/pull/462>
## v0.5.5
* Added comprehensive helper unit tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/457>
+6 -3
@@ -53,13 +53,16 @@ Additional environment variables are optional:
| `MAP_ZOOM` | _unset_ | Fixed Leaflet zoom (disables the auto-fit checkbox when set). |
| `MAX_DISTANCE` | `42` | Maximum relationship distance (km) before edges are hidden. |
| `DEBUG` | `0` | Enables verbose logging across services when set to `1`. |
| `ALLOWED_CHANNELS` | _unset_ | Comma-separated channel names the ingestor accepts; other channels are skipped before hidden filters. |
| `HIDDEN_CHANNELS` | _unset_ | Comma-separated channel names the ingestor skips when forwarding packets. |
| `FEDERATION` | `1` | Controls whether the instance announces itself and crawls peers (`1`) or stays isolated (`0`). |
| `PRIVATE` | `0` | Restricts public visibility and disables chat/message endpoints when set to `1`. |
| `CONNECTION` | `/dev/ttyACM0` | Serial device, TCP endpoint, or Bluetooth target used by the ingestor to reach the radio. |
- The ingestor also respects supporting variables such as `POTATOMESH_INSTANCE`
- (defaults to `http://web:41447`) for remote posting and `CHANNEL_INDEX` when
- selecting a LoRa channel on serial or Bluetooth connections.
+ The ingestor posts to the URL configured via `INSTANCE_DOMAIN` (defaulting to
+ `http://web:41447` in the provided compose file) and still accepts
+ `POTATOMESH_INSTANCE` as a legacy alias when the primary variable is unset. Use
+ `CHANNEL_INDEX` to select a LoRa channel on serial or Bluetooth connections.
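The precedence described above amounts to a shell fallback chain. A sketch of that resolution — an illustrative reimplementation, not the ingestor's actual code:

```shell
#!/bin/sh
# Sketch of the documented precedence: INSTANCE_DOMAIN wins, the legacy
# POTATOMESH_INSTANCE alias applies only when the primary is unset or empty,
# and http://web:41447 is the compose-file default. Illustrative only.
resolve_target() {
  instance_domain="$1"
  potatomesh_instance="$2"
  # ':-' treats an empty value the same as unset, matching the fallback
  echo "${instance_domain:-${potatomesh_instance:-http://web:41447}}"
}

resolve_target "https://potatomesh.net" "http://legacy:41447"  # primary wins
resolve_target "" "http://legacy:41447"                        # legacy alias used
resolve_target "" ""                                           # compose default
```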
## Docker Compose file
+73 -8
@@ -7,13 +7,20 @@
[![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/l5yth/potato-mesh/issues)
[![Matrix Chat](https://img.shields.io/badge/matrix-%23potatomesh:dod.ngo-blue)](https://matrix.to/#/#potatomesh:dod.ngo)
- A federated Meshtastic-powered node dashboard for your local community. _No MQTT clutter, just local LoRa aether._
+ A federated, Meshtastic-powered node dashboard for your local community.
+ _No MQTT clutter, just local LoRa aether._
- * Web app with chat window and map view showing nodes, neighbors, telemetry, and messages.
- * API to POST (authenticated) and to GET nodes and messages.
- * Shows new node notifications (first seen) in chat.
+ * Web dashboard with chat window and map view showing nodes, positions, neighbors,
+ trace routes, telemetry, and messages.
+ * API to POST (authenticated) and to GET nodes, messages, and telemetry.
+ * Shows new node notifications (first seen) and telemetry logs in chat.
* Allows searching and filtering for nodes in map and table view.
* Federated: _automatically_ forms a federation with other communities running
Potato Mesh!
* Supplemental Python ingestor to feed the POST APIs of the Web app with data remotely.
* Supports multiple ingestors per instance.
* Matrix bridge that posts Meshtastic messages to a defined Matrix channel (no
radio required).
* Mobile app to _read_ messages on your local aether (no radio required).
Live demo for Berlin #MediumFast: [potatomesh.net](https://potatomesh.net)
@@ -58,6 +65,7 @@ RACK_ENV="production" \
APP_ENV="production" \
API_TOKEN="SuperSecureTokenReally" \
INSTANCE_DOMAIN="https://potatomesh.net" \
MAP_CENTER="53.55,13.42" \
exec ruby app.rb -p 41447 -o 0.0.0.0
```
@@ -68,6 +76,7 @@ exec ruby app.rb -p 41447 -o 0.0.0.0
* Provide a strong `API_TOKEN` value to authorize POST requests against the API.
* Configure `INSTANCE_DOMAIN` with the public URL of your deployment so vanity
links and generated metadata resolve correctly.
* Don't forget to set a `MAP_CENTER` to point to your local region.
The web app can be configured with environment variables (defaults shown):
@@ -83,6 +92,8 @@ The web app can be configured with environment variables (defaults shown):
| `MAP_ZOOM` | _unset_ | Fixed Leaflet zoom applied on first load; disables auto-fit when provided. |
| `MAX_DISTANCE` | `42` | Maximum distance (km) before node relationships are hidden on the map. |
| `DEBUG` | `0` | Set to `1` for verbose logging in the web and ingestor services. |
| `ALLOWED_CHANNELS` | _unset_ | Comma-separated channel names the ingestor accepts; when set, all other channels are skipped before hidden filters. |
| `HIDDEN_CHANNELS` | _unset_ | Comma-separated channel names the ingestor will ignore when forwarding packets. |
| `FEDERATION` | `1` | Set to `1` to announce your instance and crawl peers, or `0` to disable federation. Private mode overrides this. |
| `PRIVATE` | `0` | Set to `1` to hide the chat UI, disable message APIs, and exclude hidden clients from public listings. |
@@ -133,7 +144,9 @@ The web app contains an API:
* GET `/api/messages?limit=100&encrypted=false&since=0` - returns the latest 100 messages newer than the provided unix timestamp (defaults to `since=0` to return full history; disabled when `PRIVATE=1`)
* GET `/api/telemetry?limit=100` - returns the latest 100 telemetry records
* GET `/api/neighbors?limit=100` - returns the latest 100 neighbor tuples
* GET `/api/traces?limit=100` - returns the latest 100 captured trace routes
* GET `/api/instances` - returns known potato-mesh instances in other locations
* GET `/api/ingestors` - returns active potato-mesh Python ingestors that feed data
* GET `/metrics` - metrics for the Prometheus endpoint
* GET `/version` - information about the potato-mesh instance
* POST `/api/nodes` - upserts nodes provided as JSON object mapping node ids to node data (requires `Authorization: Bearer <API_TOKEN>`)
@@ -141,6 +154,7 @@ The web app contains an API:
* POST `/api/messages` - appends messages provided as a JSON object or array (requires `Authorization: Bearer <API_TOKEN>`; disabled when `PRIVATE=1`)
* POST `/api/telemetry` - appends telemetry provided as a JSON object or array (requires `Authorization: Bearer <API_TOKEN>`)
* POST `/api/neighbors` - appends neighbor tuples provided as a JSON object or array (requires `Authorization: Bearer <API_TOKEN>`)
* POST `/api/traces` - appends captured trace routes provided as a JSON object or array (requires `Authorization: Bearer <API_TOKEN>`)
The `API_TOKEN` environment variable must be set to a non-empty value and match the token supplied in the `Authorization` header for `POST` requests.
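The call shapes can be illustrated without a live instance — the snippet below prints example curl invocations rather than executing them; the base URL, token, and payload file are placeholders:

```shell
#!/bin/sh
# Builds example curl invocations for the API above without running them,
# so no live instance or real credentials are needed. BASE, TOKEN, and
# nodes.json are placeholders.
BASE="http://127.0.0.1:41447"
TOKEN="dev-token"

# GET endpoints are unauthenticated
GET_CMD="curl -s '$BASE/api/messages?limit=100&since=0'"

# POST endpoints require the bearer token
POST_CMD="curl -s -X POST '$BASE/api/nodes' \
  -H 'Authorization: Bearer $TOKEN' \
  -H 'Content-Type: application/json' \
  -d @nodes.json"

echo "$GET_CMD"
echo "$POST_CMD"
```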
@@ -176,7 +190,7 @@ to the configured potato-mesh instance.
Check out `mesh.sh` ingestor script in the `./data` directory.
```bash
- POTATOMESH_INSTANCE=http://127.0.0.1:41447 API_TOKEN=1eb140fd-cab4-40be-b862-41c607762246 CONNECTION=/dev/ttyACM0 DEBUG=1 ./mesh.sh
+ INSTANCE_DOMAIN=http://127.0.0.1:41447 API_TOKEN=1eb140fd-cab4-40be-b862-41c607762246 CONNECTION=/dev/ttyACM0 DEBUG=1 ./mesh.sh
[2025-02-20T12:34:56.789012Z] [potato-mesh] [info] channel=0 context=daemon.main port='41447' target='http://127.0.0.1' Mesh daemon starting
[...]
[2025-02-20T12:34:57.012345Z] [potato-mesh] [debug] context=handlers.upsert_node node_id=!849b7154 short_name='7154' long_name='7154' Queued node upsert payload
@@ -184,12 +198,56 @@ POTATOMESH_INSTANCE=http://127.0.0.1:41447 API_TOKEN=1eb140fd-cab4-40be-b862-41c
[2025-02-20T12:34:58.001122Z] [potato-mesh] [debug] context=handlers.store_packet_dict channel=0 from_id='!9ee71c38' payload='Guten Morgen!' to_id='^all' Queued message payload
```
- Run the script with `POTATOMESH_INSTANCE` and `API_TOKEN` to keep updating
+ Run the script with `INSTANCE_DOMAIN` and `API_TOKEN` to keep updating
node records and parsing new incoming messages. Enable debug output with `DEBUG=1`,
specify the connection target with `CONNECTION` (default `/dev/ttyACM0`) or set it to
an IP address (for example `192.168.1.20:4403`) to use the Meshtastic TCP
- interface. `CONNECTION` also accepts Bluetooth device addresses (e.g.,
- `ED:4D:9E:95:CF:60`) and the script attempts a BLE connection if available.
+ interface. `CONNECTION` also accepts Bluetooth device addresses in MAC format (e.g.,
+ `ED:4D:9E:95:CF:60`) or UUID format for macOS (e.g., `C0AEA92F-045E-9B82-C9A6-A1FD822B3A9E`)
+ and the script attempts a BLE connection if available. To keep
+ ingestion limited, set `ALLOWED_CHANNELS` to a comma-separated whitelist (for
+ example `ALLOWED_CHANNELS="Chat,Ops"`); packets on other channels are discarded.
+ Use `HIDDEN_CHANNELS` to block specific channels from the web UI even when they
+ appear in the allowlist.
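The accepted `CONNECTION` formats can be told apart mechanically. This classifier only mirrors the formats documented above — the ingestor's actual detection code may differ:

```shell
#!/bin/sh
# Illustrative classifier for the CONNECTION formats documented above:
# serial device path, Bluetooth MAC, macOS BLE UUID, or host:port TCP.
# Not the ingestor's real detection logic.
connection_kind() {
  if echo "$1" | grep -q -E '^/dev/'; then
    echo "serial"
  elif echo "$1" | grep -q -E '^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$'; then
    echo "ble-mac"
  elif echo "$1" | grep -q -E '^[0-9A-Fa-f]{8}(-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12}$'; then
    echo "ble-uuid"
  else
    echo "tcp"   # anything else, e.g. 192.168.1.20:4403
  fi
}

connection_kind "/dev/ttyACM0"
connection_kind "ED:4D:9E:95:CF:60"
connection_kind "C0AEA92F-045E-9B82-C9A6-A1FD822B3A9E"
connection_kind "192.168.1.20:4403"
```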
## Nix
For the dev shell, run:
```bash
nix develop
```
The shell provides Ruby plus the Python ingestor dependencies (including `meshtastic`
and `protobuf`). To sanity-check that the ingestor starts, run `python -m data.mesh`
with the usual environment variables (`INSTANCE_DOMAIN`, `API_TOKEN`, `CONNECTION`).
To run the packaged apps directly:
```bash
nix run .#web
nix run .#ingestor
```
Minimal NixOS module snippet:
```nix
services.potato-mesh = {
enable = true;
apiTokenFile = config.sops.secrets.potato-mesh-api-token.path;
dataDir = "/var/lib/potato-mesh";
port = 41447;
instanceDomain = "https://mesh.me";
siteName = "Nix Mesh";
contactLink = "homeserver.mx";
mapCenter = "28.96,-13.56";
frequency = "868MHz";
ingestor = {
enable = true;
connection = "192.168.X.Y:4403";
};
};
```
## Docker
@@ -199,12 +257,19 @@ Docker images are published on Github for each release:
docker pull ghcr.io/l5yth/potato-mesh/web:latest # newest release
docker pull ghcr.io/l5yth/potato-mesh/web:v0.5.5 # pinned historical release
docker pull ghcr.io/l5yth/potato-mesh/ingestor:latest
docker pull ghcr.io/l5yth/potato-mesh/matrix-bridge:latest
```
Feel free to run the [configure.sh](./configure.sh) script to set up your
environment. See the [Docker guide](DOCKER.md) for more details and custom
deployment instructions.
## Matrix Bridge
A Matrix bridge is currently being worked on. It requests messages from a configured
potato-mesh instance and forwards them to a specified Matrix channel; see
[matrix/README.md](./matrix/README.md).
## Mobile App
A mobile _reader_ app is currently being worked on. Stay tuned for releases and updates.
+2 -2
@@ -15,11 +15,11 @@
<key>CFBundlePackageType</key>
<string>FMWK</string>
<key>CFBundleShortVersionString</key>
<string>1.0</string>
<string>0.5.9</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
<string>1.0</string>
<string>0.5.9</string>
<key>MinimumOSVersion</key>
<string>14.0</string>
</dict>
+1 -1
@@ -1,7 +1,7 @@
name: potato_mesh_reader
description: Meshtastic Reader — read-only view for PotatoMesh messages.
publish_to: "none"
version: 0.5.6
version: 0.5.9
environment:
sdk: ">=3.4.0 <4.0.0"
+128
@@ -0,0 +1,128 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:potato_mesh_reader/main.dart';
void main() {
TestWidgetsFlutterBinding.ensureInitialized();
test('BootstrapProgress renders stage, counts, and detail', () {
const progress = BootstrapProgress(
stage: 'Downloading',
current: 2,
total: 5,
detail: 'instances',
);
expect(progress.label, 'Downloading 2/5 • instances');
const fallback = BootstrapProgress(stage: 'Starting');
expect(fallback.label, 'Starting');
});
test('InstanceVersion summary prefers populated fields', () {
const populated = InstanceVersion(
name: 'BerlinMesh',
channel: '#MediumFast',
frequency: '868MHz',
instanceDomain: 'potatomesh.net',
);
expect(populated.summary, 'BerlinMesh · #MediumFast · 868MHz');
const minimal = InstanceVersion(
name: '',
channel: null,
frequency: null,
instanceDomain: null,
);
expect(minimal.summary, 'Unknown');
});
test('sortMessagesByRxTime keeps unknown timestamps in place', () {
MeshMessage buildMessage({
required int id,
required String text,
required String rxIso,
DateTime? rxTime,
}) {
return MeshMessage(
id: id,
rxTime: rxTime,
rxIso: rxIso,
fromId: '!$id',
nodeId: '!$id',
toId: '^',
channelName: '#general',
channel: 1,
portnum: 'TEXT',
text: text,
rssi: -50,
snr: 1.0,
hopLimit: 1,
);
}
final withTime = buildMessage(
id: 2,
rxTime: DateTime.utc(2024, 1, 1, 12, 1),
rxIso: '2024-01-01T12:01:00Z',
text: 'timed',
);
final withoutTime = buildMessage(
id: 1,
rxTime: null,
rxIso: 'unknown',
text: 'unknown',
);
final laterTime = buildMessage(
id: 3,
rxTime: DateTime.utc(2024, 1, 1, 12, 5),
rxIso: '2024-01-01T12:05:00Z',
text: 'later',
);
final sorted = sortMessagesByRxTime([withoutTime, laterTime, withTime]);
expect(sorted.first.id, withoutTime.id,
reason: 'messages without rxTime should retain position');
expect(sorted[1].id, withTime.id,
reason: 'messages with timestamps should be ordered chronologically');
expect(sorted.last.id, laterTime.id);
});
testWidgets('LoadingScreen displays progress label and icon', (tester) async {
const screen = LoadingScreen(
progress: BootstrapProgress(stage: 'Fetching'),
);
await tester.pumpWidget(const MaterialApp(home: screen));
expect(find.byType(CircularProgressIndicator), findsOneWidget);
expect(find.text('Fetching'), findsOneWidget);
expect(find.bySemanticsLabel('PotatoMesh'), findsOneWidget);
});
testWidgets('LoadingScreen surfaces errors', (tester) async {
const screen = LoadingScreen(
progress: BootstrapProgress(stage: 'Loading'),
error: 'boom',
);
await tester.pumpWidget(const MaterialApp(home: screen));
expect(find.textContaining('Failed to load: boom'), findsOneWidget);
});
}
+19
@@ -76,6 +76,8 @@ CHANNEL=$(grep "^CHANNEL=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo
FREQUENCY=$(grep "^FREQUENCY=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "915MHz")
FEDERATION=$(grep "^FEDERATION=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "1")
PRIVATE=$(grep "^PRIVATE=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "0")
HIDDEN_CHANNELS=$(grep "^HIDDEN_CHANNELS=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
ALLOWED_CHANNELS=$(grep "^ALLOWED_CHANNELS=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
MAP_CENTER=$(grep "^MAP_CENTER=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "38.761944,-27.090833")
MAP_ZOOM=$(grep "^MAP_ZOOM=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
MAX_DISTANCE=$(grep "^MAX_DISTANCE=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "42")
@@ -126,6 +128,11 @@ echo "-------------------"
echo "Private mode hides public mesh messages from unauthenticated visitors."
echo "Set to 1 to hide public feeds or 0 to keep them visible."
read_with_default "Enable private mode (1=yes, 0=no)" "$PRIVATE" PRIVATE
echo "Provide a comma-separated whitelist of channel names to ingest (optional)."
echo "When set, only listed channels are ingested unless explicitly hidden below."
read_with_default "Allowed channels" "$ALLOWED_CHANNELS" ALLOWED_CHANNELS
echo "Provide a comma-separated list of channel names to hide from the web UI (optional)."
read_with_default "Hidden channels" "$HIDDEN_CHANNELS" HIDDEN_CHANNELS
echo ""
echo "🛠 Docker Settings"
@@ -196,6 +203,16 @@ update_env "POTATOMESH_IMAGE_TAG" "$POTATOMESH_IMAGE_TAG"
update_env "FEDERATION" "$FEDERATION"
update_env "PRIVATE" "$PRIVATE"
update_env "CONNECTION" "$CONNECTION"
if [ -n "$ALLOWED_CHANNELS" ]; then
update_env "ALLOWED_CHANNELS" "\"$ALLOWED_CHANNELS\""
else
sed -i.bak '/^ALLOWED_CHANNELS=.*/d' .env
fi
if [ -n "$HIDDEN_CHANNELS" ]; then
update_env "HIDDEN_CHANNELS" "\"$HIDDEN_CHANNELS\""
else
sed -i.bak '/^HIDDEN_CHANNELS=.*/d' .env
fi
if [ -n "$INSTANCE_DOMAIN" ]; then
update_env "INSTANCE_DOMAIN" "$INSTANCE_DOMAIN"
else
@@ -244,6 +261,8 @@ echo " API Token: ${API_TOKEN:0:8}..."
echo " Docker Image Arch: $POTATOMESH_IMAGE_ARCH"
echo " Docker Image Tag: $POTATOMESH_IMAGE_TAG"
echo " Private Mode: ${PRIVATE}"
echo " Allowed Channels: ${ALLOWED_CHANNELS:-'All'}"
echo " Hidden Channels: ${HIDDEN_CHANNELS:-'None'}"
echo " Instance Domain: ${INSTANCE_DOMAIN:-'Auto-detected'}"
if [ "${FEDERATION:-1}" = "0" ]; then
echo " Federation: Disabled"
+6 -2
@@ -50,7 +50,9 @@ USER potatomesh
ENV CONNECTION=/dev/ttyACM0 \
CHANNEL_INDEX=0 \
DEBUG=0 \
POTATOMESH_INSTANCE="" \
ALLOWED_CHANNELS="" \
HIDDEN_CHANNELS="" \
INSTANCE_DOMAIN="" \
API_TOKEN=""
CMD ["python", "-m", "data.mesh"]
@@ -75,7 +77,9 @@ USER ContainerUser
ENV CONNECTION=/dev/ttyACM0 \
CHANNEL_INDEX=0 \
DEBUG=0 \
POTATOMESH_INSTANCE="" \
ALLOWED_CHANNELS="" \
HIDDEN_CHANNELS="" \
INSTANCE_DOMAIN="" \
API_TOKEN=""
CMD ["python", "-m", "data.mesh"]
+1 -1
@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
message information before forwarding it to the accompanying web application.
"""
VERSION = "0.5.6"
VERSION = "0.5.9"
"""Semantic version identifier shared with the dashboard and front-end."""
__version__ = VERSION
+26
@@ -0,0 +1,26 @@
-- Copyright © 2025-26 l5yth & contributors
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
PRAGMA journal_mode=WAL;
CREATE TABLE IF NOT EXISTS ingestors (
node_id TEXT PRIMARY KEY,
start_time INTEGER NOT NULL,
last_seen_time INTEGER NOT NULL,
version TEXT,
lora_freq INTEGER,
modem_preset TEXT
);
CREATE INDEX IF NOT EXISTS idx_ingestors_last_seen ON ingestors(last_seen_time);
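The ingestor heartbeats elsewhere in this change POST rows shaped like this table; a hedged sketch of how a receiver might upsert them (the web app's actual statement is not part of this diff, and `record_heartbeat` is a hypothetical helper name):

```python
# Hedged sketch: upserting heartbeat rows into the `ingestors` table above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE IF NOT EXISTS ingestors (
        node_id TEXT PRIMARY KEY,
        start_time INTEGER NOT NULL,
        last_seen_time INTEGER NOT NULL,
        version TEXT,
        lora_freq INTEGER,
        modem_preset TEXT
    );
    """
)

def record_heartbeat(db, payload):
    # node_id is the primary key, so repeat heartbeats update in place.
    db.execute(
        """
        INSERT INTO ingestors (node_id, start_time, last_seen_time, version)
        VALUES (:node_id, :start_time, :last_seen_time, :version)
        ON CONFLICT(node_id) DO UPDATE SET
            last_seen_time = excluded.last_seen_time,
            version = excluded.version
        """,
        payload,
    )

record_heartbeat(conn, {"node_id": "!a1b2c3d4", "start_time": 100,
                        "last_seen_time": 100, "version": "0.5.9"})
record_heartbeat(conn, {"node_id": "!a1b2c3d4", "start_time": 100,
                        "last_seen_time": 200, "version": "0.5.9"})
rows = conn.execute("SELECT node_id, last_seen_time FROM ingestors").fetchall()
print(rows)  # a single row carrying the newer last_seen_time
```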
+2
@@ -26,6 +26,8 @@ CREATE TABLE IF NOT EXISTS instances (
longitude REAL,
last_update_time INTEGER,
is_private BOOLEAN NOT NULL DEFAULT 0,
nodes_count INTEGER,
contact_link TEXT,
signature TEXT
);
+38 -2
@@ -21,7 +21,17 @@ import threading as threading # re-exported for compatibility
import sys
import types
from . import channels, config, daemon, handlers, interfaces, queue, serialization
from .. import VERSION as _PACKAGE_VERSION
from . import (
channels,
config,
daemon,
handlers,
ingestors,
interfaces,
queue,
serialization,
)
__all__: list[str] = []
@@ -40,7 +50,15 @@ def _export_constants() -> None:
__all__.extend(["json", "urllib", "glob", "threading", "signal"])
for _module in (channels, daemon, handlers, interfaces, queue, serialization):
for _module in (
channels,
daemon,
handlers,
interfaces,
queue,
serialization,
ingestors,
):
_reexport(_module)
_export_constants()
@@ -52,11 +70,14 @@ _CONFIG_ATTRS = {
"DEBUG",
"INSTANCE",
"API_TOKEN",
"ALLOWED_CHANNELS",
"HIDDEN_CHANNELS",
"LORA_FREQ",
"MODEM_PRESET",
"_RECONNECT_INITIAL_DELAY_SECS",
"_RECONNECT_MAX_DELAY_SECS",
"_CLOSE_TIMEOUT_SECS",
"_INGESTOR_HEARTBEAT_SECS",
"_debug_log",
}
@@ -70,9 +91,16 @@ _HANDLER_ATTRS = set(handlers.__all__)
_DAEMON_ATTRS = set(daemon.__all__)
_SERIALIZATION_ATTRS = set(serialization.__all__)
_INTERFACE_EXPORTS = set(interfaces.__all__)
_INGESTOR_ATTRS = set(ingestors.__all__)
# Re-export the package version for callers that previously referenced
# data.mesh_ingestor.VERSION directly.
VERSION = _PACKAGE_VERSION
__all__.append("VERSION")
__all__.extend(sorted(_CONFIG_ATTRS))
__all__.extend(sorted(_INTERFACE_ATTRS))
__all__.append("VERSION")
class _MeshIngestorModule(types.ModuleType):
@@ -87,6 +115,10 @@ class _MeshIngestorModule(types.ModuleType):
return getattr(interfaces, name)
if name in _INTERFACE_EXPORTS:
return getattr(interfaces, name)
if name in _INGESTOR_ATTRS:
return getattr(ingestors, name)
if name == "VERSION":
return VERSION
raise AttributeError(name)
def __setattr__(self, name: str, value): # type: ignore[override]
@@ -121,6 +153,10 @@ class _MeshIngestorModule(types.ModuleType):
setattr(serialization, name, value)
super().__setattr__(name, getattr(serialization, name, value))
handled = True
if name in _INGESTOR_ATTRS:
setattr(ingestors, name, value)
super().__setattr__(name, getattr(ingestors, name, value))
handled = True
if handled:
return
super().__setattr__(name, value)
+52
@@ -222,6 +222,54 @@ def channel_name(channel_index: int | None) -> str | None:
return _CHANNEL_LOOKUP.get(int(channel_index))
def hidden_channel_names() -> tuple[str, ...]:
"""Return the configured set of hidden channel names."""
return tuple(getattr(config, "HIDDEN_CHANNELS", ()))
def allowed_channel_names() -> tuple[str, ...]:
"""Return the configured set of explicitly allowed channel names."""
return tuple(getattr(config, "ALLOWED_CHANNELS", ()))
def is_allowed_channel(channel_name_value: str | None) -> bool:
"""Return ``True`` when ``channel_name_value`` is permitted by policy."""
allowed = getattr(config, "ALLOWED_CHANNELS", ())
if not allowed:
return True
if channel_name_value is None:
return False
normalized = channel_name_value.strip()
if not normalized:
return False
normalized_casefold = normalized.casefold()
for allowed_name in allowed:
if normalized_casefold == allowed_name.casefold():
return True
return False
def is_hidden_channel(channel_name_value: str | None) -> bool:
"""Return ``True`` when ``channel_name_value`` is configured as hidden."""
if channel_name_value is None:
return False
normalized = channel_name_value.strip()
if not normalized:
return False
normalized_casefold = normalized.casefold()
for hidden in getattr(config, "HIDDEN_CHANNELS", ()):
if normalized_casefold == hidden.casefold():
return True
return False
def _reset_channel_cache() -> None:
"""Clear cached channel data. Intended for use in tests only."""
@@ -234,5 +282,9 @@ __all__ = [
"capture_from_interface",
"channel_mappings",
"channel_name",
"allowed_channel_names",
"hidden_channel_names",
"is_allowed_channel",
"is_hidden_channel",
"_reset_channel_cache",
]
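Taken together, `is_allowed_channel` and `is_hidden_channel` implement an allow-then-hide policy (an empty allowlist permits everything; hidden names are dropped even when allowlisted). A standalone sketch of that combined check, outside the module's config plumbing:

```python
def passes_channel_policy(name, allowed=(), hidden=()):
    """Sketch of the policy above: allowlist check first, then hidden list."""
    if allowed:
        # With an allowlist configured, missing or blank names are rejected.
        if name is None or not name.strip():
            return False
        if name.strip().casefold() not in {a.casefold() for a in allowed}:
            return False
    if name is not None and name.strip():
        if name.strip().casefold() in {h.casefold() for h in hidden}:
            return False
    return True

print(passes_channel_policy("chat", allowed=("Chat", "Ops")))                 # True
print(passes_channel_policy("Ops", allowed=("Chat", "Ops"), hidden=("ops",))) # False
print(passes_channel_policy(None))  # True: no allowlist means everything passes
```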
+73 -1
@@ -46,6 +46,9 @@ DEFAULT_ENERGY_ONLINE_DURATION_SECS = 300.0
DEFAULT_ENERGY_SLEEP_SECS = float(6 * 60 * 60)
"""Sleep duration used when energy saving mode is active."""
DEFAULT_INGESTOR_HEARTBEAT_SECS = float(60 * 60)
"""Interval between ingestor heartbeat announcements."""
CONNECTION = os.environ.get("CONNECTION") or os.environ.get("MESH_SERIAL")
"""Optional connection target for the mesh interface.
@@ -61,7 +64,72 @@ CHANNEL_INDEX = int(os.environ.get("CHANNEL_INDEX", str(DEFAULT_CHANNEL_INDEX)))
"""Index of the LoRa channel to select when connecting."""
DEBUG = os.environ.get("DEBUG") == "1"
INSTANCE = os.environ.get("POTATOMESH_INSTANCE", "").rstrip("/")
def _parse_channel_names(raw_value: str | None) -> tuple[str, ...]:
"""Normalise a comma-separated list of channel names.
Parameters:
raw_value: Raw environment string containing channel names separated by
commas. ``None`` and empty segments are ignored.
Returns:
A tuple of unique, non-empty channel names preserving input order while
deduplicating case-insensitively.
"""
if not raw_value:
return ()
normalized_entries: list[str] = []
seen: set[str] = set()
for part in raw_value.split(","):
name = part.strip()
if not name:
continue
key = name.casefold()
if key in seen:
continue
seen.add(key)
normalized_entries.append(name)
return tuple(normalized_entries)
def _parse_hidden_channels(raw_value: str | None) -> tuple[str, ...]:
"""Compatibility wrapper that parses hidden channel names."""
return _parse_channel_names(raw_value)
HIDDEN_CHANNELS = _parse_hidden_channels(os.environ.get("HIDDEN_CHANNELS"))
"""Channel names configured to be ignored by the ingestor."""
ALLOWED_CHANNELS = _parse_channel_names(os.environ.get("ALLOWED_CHANNELS"))
"""Explicitly permitted channel names; when set, other channels are ignored."""
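The parser above preserves input order while deduplicating case-insensitively and dropping empty segments; a self-contained replica to illustrate the behaviour:

```python
def parse_channel_names(raw_value):
    """Replica of the `_parse_channel_names` behaviour shown above."""
    if not raw_value:
        return ()
    entries, seen = [], set()
    for part in raw_value.split(","):
        name = part.strip()
        if not name:
            continue  # skip empty segments such as ",,"
        key = name.casefold()
        if key in seen:
            continue  # case-insensitive deduplication, first spelling wins
        seen.add(key)
        entries.append(name)
    return tuple(entries)

print(parse_channel_names("Chat, ops,CHAT,, Ops "))  # ('Chat', 'ops')
print(parse_channel_names(None))                     # ()
```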
def _resolve_instance_domain() -> str:
"""Resolve the configured instance domain from the environment.
The ingestor prefers the :envvar:`INSTANCE_DOMAIN` variable for clarity and
compatibility with the web application. For deployments that still
configure the legacy :envvar:`POTATOMESH_INSTANCE` variable, the resolver
falls back to that value when no primary domain is set.
"""
instance_domain = os.environ.get("INSTANCE_DOMAIN", "")
legacy_instance = os.environ.get("POTATOMESH_INSTANCE", "")
configured_instance = (instance_domain or legacy_instance).rstrip("/")
if configured_instance and "://" not in configured_instance:
return f"https://{configured_instance}"
return configured_instance
INSTANCE = _resolve_instance_domain()
API_TOKEN = os.environ.get("API_TOKEN", "")
ENERGY_SAVING = os.environ.get("ENERGY_SAVING") == "1"
"""When ``True``, enables the ingestor's energy saving mode."""
@@ -78,6 +146,7 @@ _CLOSE_TIMEOUT_SECS = DEFAULT_CLOSE_TIMEOUT_SECS
_INACTIVITY_RECONNECT_SECS = DEFAULT_INACTIVITY_RECONNECT_SECS
_ENERGY_ONLINE_DURATION_SECS = DEFAULT_ENERGY_ONLINE_DURATION_SECS
_ENERGY_SLEEP_SECS = DEFAULT_ENERGY_SLEEP_SECS
_INGESTOR_HEARTBEAT_SECS = DEFAULT_INGESTOR_HEARTBEAT_SECS
# Backwards compatibility shim for legacy imports.
PORT = CONNECTION
@@ -122,6 +191,8 @@ __all__ = [
"SNAPSHOT_SECS",
"CHANNEL_INDEX",
"DEBUG",
"HIDDEN_CHANNELS",
"ALLOWED_CHANNELS",
"INSTANCE",
"API_TOKEN",
"ENERGY_SAVING",
@@ -133,6 +204,7 @@ __all__ = [
"_INACTIVITY_RECONNECT_SECS",
"_ENERGY_ONLINE_DURATION_SECS",
"_ENERGY_SLEEP_SECS",
"_INGESTOR_HEARTBEAT_SECS",
"_debug_log",
]
+44 -2
@@ -23,7 +23,7 @@ import time
from pubsub import pub
from . import config, handlers, interfaces
from . import config, handlers, ingestors, interfaces
_RECEIVE_TOPICS = (
"meshtastic.receive",
@@ -169,6 +169,41 @@ def _is_ble_interface(iface_obj) -> bool:
return "ble_interface" in module_name
def _process_ingestor_heartbeat(iface, *, ingestor_announcement_sent: bool) -> bool:
"""Send ingestor liveness heartbeats when a host id is known.
Parameters:
iface: Active mesh interface used to extract a host node id when absent.
ingestor_announcement_sent: Whether an initial heartbeat has already
been sent during the current session.
Returns:
Updated ``ingestor_announcement_sent`` flag reflecting whether an
initial heartbeat was transmitted.
"""
host_id = handlers.host_node_id()
if host_id is None and iface is not None:
extracted = interfaces._extract_host_node_id(iface)
if extracted:
handlers.register_host_node_id(extracted)
host_id = handlers.host_node_id()
if host_id:
ingestors.set_ingestor_node_id(host_id)
heartbeat_sent = ingestors.queue_ingestor_heartbeat(
force=not ingestor_announcement_sent
)
if heartbeat_sent and not ingestor_announcement_sent:
return True
return ingestor_announcement_sent
iface_cls = getattr(iface_obj, "__class__", None)
if iface_cls is None:
return False
module_name = getattr(iface_cls, "__module__", "") or ""
return "ble_interface" in module_name
def _connected_state(candidate) -> bool | None:
"""Return the connection state advertised by ``candidate``.
@@ -233,6 +268,7 @@ def main(existing_interface=None) -> None:
inactivity_reconnect_secs = max(
0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
)
ingestor_announcement_sent = False
energy_saving_enabled = config.ENERGY_SAVING
energy_online_secs = max(0.0, config._ENERGY_ONLINE_DURATION_SECS)
@@ -260,7 +296,7 @@ def main(existing_interface=None) -> None:
signal.signal(signal.SIGINT, handle_sigint)
signal.signal(signal.SIGTERM, handle_sigterm)
target = config.INSTANCE or "(no POTATOMESH_INSTANCE)"
target = config.INSTANCE or "(no INSTANCE_DOMAIN configured)"
configured_port = config.CONNECTION
active_candidate = configured_port
announced_target = False
@@ -288,6 +324,7 @@ def main(existing_interface=None) -> None:
handlers.register_host_node_id(
interfaces._extract_host_node_id(iface)
)
ingestors.set_ingestor_node_id(handlers.host_node_id())
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
initial_snapshot_sent = False
if not announced_target and resolved_target:
@@ -501,6 +538,10 @@ def main(existing_interface=None) -> None:
iface_connected_at = None
continue
ingestor_announcement_sent = _process_ingestor_heartbeat(
iface, ingestor_announcement_sent=ingestor_announcement_sent
)
retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
stop.wait(config.SNAPSHOT_SECS)
except KeyboardInterrupt: # pragma: no cover - interactive only
@@ -520,6 +561,7 @@ __all__ = [
"_node_items_snapshot",
"_subscribe_receive_topics",
"_is_ble_interface",
"_process_ingestor_heartbeat",
"_connected_state",
"main",
]
+94 -51
@@ -100,6 +100,41 @@ from .serialization import (
)
def _portnum_candidates(name: str) -> set[int]:
"""Return Meshtastic port number candidates for ``name``.
Parameters:
name: Port name to look up in Meshtastic ``PortNum`` enums.
Returns:
Set of integer port numbers resolved from Meshtastic modules.
"""
candidates: set[int] = set()
for module_name in (
"meshtastic.portnums_pb2",
"meshtastic.protobuf.portnums_pb2",
):
module = sys.modules.get(module_name)
if module is None:
with contextlib.suppress(ModuleNotFoundError):
module = importlib.import_module(module_name)
if module is None:
continue
portnum_enum = getattr(module, "PortNum", None)
value_lookup = getattr(portnum_enum, "Value", None) if portnum_enum else None
if callable(value_lookup):
with contextlib.suppress(Exception):
candidate = _coerce_int(value_lookup(name))
if candidate is not None:
candidates.add(candidate)
constant_value = getattr(module, name, None)
candidate = _coerce_int(constant_value)
if candidate is not None:
candidates.add(candidate)
return candidates
def register_host_node_id(node_id: str | None) -> None:
"""Record the canonical identifier for the connected host device.
@@ -1280,28 +1315,7 @@ def store_packet_dict(packet: Mapping) -> None:
traceroute_section = (
decoded.get("traceroute") if isinstance(decoded, Mapping) else None
)
traceroute_port_ints: set[int] = set()
for module_name in (
"meshtastic.portnums_pb2",
"meshtastic.protobuf.portnums_pb2",
):
module = sys.modules.get(module_name)
if module is None:
with contextlib.suppress(ModuleNotFoundError):
module = importlib.import_module(module_name)
if module is None:
continue
portnum_enum = getattr(module, "PortNum", None)
value_lookup = getattr(portnum_enum, "Value", None) if portnum_enum else None
if callable(value_lookup):
with contextlib.suppress(Exception):
candidate = _coerce_int(value_lookup("TRACEROUTE_APP"))
if candidate is not None:
traceroute_port_ints.add(candidate)
constant_value = getattr(module, "TRACEROUTE_APP", None)
candidate = _coerce_int(constant_value)
if candidate is not None:
traceroute_port_ints.add(candidate)
traceroute_port_ints = _portnum_candidates("TRACEROUTE_APP")
if (
portnum == "TRACEROUTE_APP"
@@ -1359,36 +1373,43 @@ def store_packet_dict(packet: Mapping) -> None:
if emoji_text:
emoji = emoji_text
allowed_port_values = {"1", "TEXT_MESSAGE_APP", "REACTION_APP"}
routing_section = decoded.get("routing") if isinstance(decoded, Mapping) else None
routing_port_candidates = _portnum_candidates("ROUTING_APP")
if text is None and (
portnum == "ROUTING_APP"
or (portnum_int is not None and portnum_int in routing_port_candidates)
or isinstance(routing_section, Mapping)
):
routing_payload = _first(decoded, "payload", "data", default=None)
if routing_payload is not None:
if isinstance(routing_payload, bytes):
text = base64.b64encode(routing_payload).decode("ascii")
elif isinstance(routing_payload, str):
text = routing_payload
else:
try:
text = json.dumps(routing_payload, ensure_ascii=True)
except TypeError:
text = str(routing_payload)
if isinstance(text, str):
text = text.strip() or None
allowed_port_values = {"1", "TEXT_MESSAGE_APP", "REACTION_APP", "ROUTING_APP"}
allowed_port_ints = {1}
reaction_port_candidates: set[int] = set()
for module_name in (
"meshtastic.portnums_pb2",
"meshtastic.protobuf.portnums_pb2",
):
module = sys.modules.get(module_name)
if module is None:
with contextlib.suppress(ModuleNotFoundError):
module = importlib.import_module(module_name)
if module is None:
continue
portnum_enum = getattr(module, "PortNum", None)
value_lookup = getattr(portnum_enum, "Value", None) if portnum_enum else None
if callable(value_lookup):
with contextlib.suppress(Exception):
candidate = _coerce_int(value_lookup("REACTION_APP"))
if candidate is not None:
reaction_port_candidates.add(candidate)
constant_value = getattr(module, "REACTION_APP", None)
candidate = _coerce_int(constant_value)
if candidate is not None:
reaction_port_candidates.add(candidate)
reaction_port_candidates = _portnum_candidates("REACTION_APP")
for candidate in reaction_port_candidates:
allowed_port_ints.add(candidate)
allowed_port_values.add(str(candidate))
for candidate in routing_port_candidates:
allowed_port_ints.add(candidate)
allowed_port_values.add(str(candidate))
if isinstance(routing_section, Mapping) and portnum_int is not None:
allowed_port_ints.add(portnum_int)
allowed_port_values.add(str(portnum_int))
is_reaction_packet = portnum == "REACTION_APP" or (
reply_id is not None and emoji is not None
)
@@ -1414,6 +1435,8 @@ def store_packet_dict(packet: Mapping) -> None:
except Exception:
channel = 0
channel_name_value = channels.channel_name(channel)
pkt_id = _first(packet, "id", "packet_id", "packetId", default=None)
if pkt_id is None:
_record_ignored_packet(packet, reason="missing-packet-id")
@@ -1459,6 +1482,29 @@ def store_packet_dict(packet: Mapping) -> None:
_record_ignored_packet(packet, reason="skipped-direct-message")
return
if not channels.is_allowed_channel(channel_name_value):
_record_ignored_packet(packet, reason="disallowed-channel")
if config.DEBUG:
config._debug_log(
"Ignored packet on disallowed channel",
context="handlers.store_packet_dict",
channel=channel,
channel_name=channel_name_value,
allowed_channels=channels.allowed_channel_names(),
)
return
if channels.is_hidden_channel(channel_name_value):
_record_ignored_packet(packet, reason="hidden-channel")
if config.DEBUG:
config._debug_log(
"Ignored packet on hidden channel",
context="handlers.store_packet_dict",
channel=channel,
channel_name=channel_name_value,
)
return
message_payload = {
"id": int(pkt_id),
"rx_time": rx_time,
@@ -1476,11 +1522,8 @@ def store_packet_dict(packet: Mapping) -> None:
"emoji": emoji,
}
channel_name_value = None
if not encrypted_flag:
channel_name_value = channels.channel_name(channel)
if channel_name_value:
message_payload["channel_name"] = channel_name_value
if not encrypted_flag and channel_name_value:
message_payload["channel_name"] = channel_name_value
_queue_post_json(
"/api/messages",
_apply_radio_metadata(message_payload),
+139
@@ -0,0 +1,139 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Helpers for tracking ingestor identity and liveness announcements."""
from __future__ import annotations
import time
from dataclasses import dataclass, field
from typing import Callable
from .. import VERSION as INGESTOR_VERSION
from . import config, queue
from .serialization import _canonical_node_id
HEARTBEAT_INTERVAL_SECS = 60 * 60
"""Default interval between ingestor heartbeat announcements."""
@dataclass
class _IngestorState:
"""Mutable ingestor identity and heartbeat tracking data."""
start_time: int = field(default_factory=lambda: int(time.time()))
last_heartbeat: int | None = None
node_id: str | None = None
STATE = _IngestorState()
"""Shared ingestor identity state."""
# Alias retained for clarity without exporting into the top-level mesh module to
# avoid colliding with the HTTP queue state.
INGESTOR_STATE = STATE
def ingestor_start_time() -> int:
"""Return the unix timestamp representing when the ingestor booted."""
return STATE.start_time
def set_ingestor_node_id(node_id: str | None) -> str | None:
"""Record the canonical host node identifier for the ingestor.
Parameters:
node_id: Raw node identifier reported by the connected device.
Returns:
Canonical node identifier in ``!xxxxxxxx`` form or ``None`` when the
provided value cannot be normalised.
"""
canonical = _canonical_node_id(node_id)
if canonical is None:
return None
if STATE.node_id != canonical:
STATE.node_id = canonical
STATE.last_heartbeat = None
return canonical
def queue_ingestor_heartbeat(
*,
force: bool = False,
send: Callable[[str, dict], None] | None = None,
node_id: str | None = None,
) -> bool:
"""Queue a heartbeat payload advertising ingestor liveness.
Parameters:
force: When ``True``, bypasses the heartbeat interval guard so an
announcement is queued immediately.
send: Optional transport callable used for tests; defaults to the queue
dispatcher.
node_id: Optional node identifier to register before sending. When
omitted the previously recorded identifier is reused.
Returns:
``True`` when a heartbeat payload was queued, ``False`` otherwise.
"""
canonical = _canonical_node_id(node_id) if node_id is not None else None
if canonical:
set_ingestor_node_id(canonical)
canonical = STATE.node_id
if canonical is None:
return False
now = int(time.time())
interval = max(
0, int(getattr(config, "_INGESTOR_HEARTBEAT_SECS", HEARTBEAT_INTERVAL_SECS))
)
last = STATE.last_heartbeat
if not force and last is not None and now - last < interval:
return False
payload = {
"node_id": canonical,
"start_time": STATE.start_time,
"last_seen_time": now,
"version": INGESTOR_VERSION,
}
if getattr(config, "LORA_FREQ", None) is not None:
payload["lora_freq"] = config.LORA_FREQ
if getattr(config, "MODEM_PRESET", None) is not None:
payload["modem_preset"] = config.MODEM_PRESET
queue._queue_post_json(
"/api/ingestors",
payload,
priority=getattr(
queue, "_INGESTOR_POST_PRIORITY", queue._DEFAULT_POST_PRIORITY
),
send=send,
)
STATE.last_heartbeat = now
return True
__all__ = [
"HEARTBEAT_INTERVAL_SECS",
"INGESTOR_STATE",
"ingestor_start_time",
"queue_ingestor_heartbeat",
"set_ingestor_node_id",
]
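The interval guard in `queue_ingestor_heartbeat` throttles announcements to at most one per configured interval unless `force` is set. A minimal standalone sketch of that pattern (an illustration, not the module itself):

```python
import time

class HeartbeatGuard:
    """Allow one heartbeat per `interval_secs` unless forced."""

    def __init__(self, interval_secs):
        self.interval = interval_secs
        self.last = None

    def should_send(self, *, force=False, now=None):
        now = int(time.time()) if now is None else now
        # Suppress heartbeats that arrive before the interval has elapsed.
        if not force and self.last is not None and now - self.last < self.interval:
            return False
        self.last = now
        return True

guard = HeartbeatGuard(3600)
print(guard.should_send(now=0))                  # True: first heartbeat always goes out
print(guard.should_send(now=1800))               # False: still inside the interval
print(guard.should_send(now=1800, force=True))   # True: force bypasses the guard
```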
+12 -3
@@ -628,7 +628,13 @@ _DEFAULT_SERIAL_PATTERNS = (
"/dev/cu.usbserial*",
)
_BLE_ADDRESS_RE = re.compile(r"^(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$")
# Support both MAC addresses (Linux/Windows) and UUIDs (macOS)
_BLE_ADDRESS_RE = re.compile(
r"^(?:"
r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}|" # MAC address format
r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}" # UUID format
r")$"
)
class _DummySerialInterface:
@@ -642,13 +648,13 @@ class _DummySerialInterface:
def _parse_ble_target(value: str) -> str | None:
"""Return an uppercase BLE MAC address when ``value`` matches the format.
"""Return a normalized BLE address (MAC or UUID) when ``value`` matches the format.
Parameters:
value: User-provided target string.
Returns:
The normalised MAC address or ``None`` when validation fails.
The normalised MAC address or UUID, or ``None`` when validation fails.
"""
if not value:
@@ -772,10 +778,13 @@ def _create_serial_interface(port: str) -> tuple[object, str]:
return _DummySerialInterface(), "mock"
ble_target = _parse_ble_target(port_value)
if ble_target:
# Determine if it's a MAC address or UUID
address_type = "MAC" if ":" in ble_target else "UUID"
config._debug_log(
"Using BLE interface",
context="interfaces.ble",
address=ble_target,
address_type=address_type,
)
return _load_ble_interface()(address=ble_target), ble_target
network_target = _parse_network_target(port_value)
+2
@@ -74,6 +74,7 @@ def _payload_key_value_pairs(payload: Mapping[str, object]) -> str:
_MESSAGE_POST_PRIORITY = 10
_INGESTOR_POST_PRIORITY = 80
_NEIGHBOR_POST_PRIORITY = 20
_TRACE_POST_PRIORITY = 25
_POSITION_POST_PRIORITY = 30
@@ -259,6 +260,7 @@ __all__ = [
"QueueState",
"_DEFAULT_POST_PRIORITY",
"_MESSAGE_POST_PRIORITY",
"_INGESTOR_POST_PRIORITY",
"_NEIGHBOR_POST_PRIORITY",
"_NODE_POST_PRIORITY",
"_POSITION_POST_PRIORITY",
+38 -1
@@ -49,9 +49,11 @@ x-ingestor-base: &ingestor-base
environment:
CONNECTION: ${CONNECTION:-/dev/ttyACM0}
CHANNEL_INDEX: ${CHANNEL_INDEX:-0}
POTATOMESH_INSTANCE: ${POTATOMESH_INSTANCE:-http://web:41447}
ALLOWED_CHANNELS: ${ALLOWED_CHANNELS:-""}
HIDDEN_CHANNELS: ${HIDDEN_CHANNELS:-""}
API_TOKEN: ${API_TOKEN}
INSTANCE_DOMAIN: ${INSTANCE_DOMAIN}
POTATOMESH_INSTANCE: ${POTATOMESH_INSTANCE:-http://web:41447}
DEBUG: ${DEBUG:-0}
FEDERATION: ${FEDERATION:-1}
PRIVATE: ${PRIVATE:-0}
@@ -75,6 +77,21 @@ x-ingestor-base: &ingestor-base
memory: 128M
cpus: '0.1'
x-matrix-bridge-base: &matrix-bridge-base
image: ghcr.io/l5yth/potato-mesh-matrix-bridge-${POTATOMESH_IMAGE_ARCH:-linux-amd64}:${POTATOMESH_IMAGE_TAG:-latest}
volumes:
- potatomesh_matrix_bridge_state:/app
- ./matrix/Config.toml:/app/Config.toml:ro
restart: unless-stopped
deploy:
resources:
limits:
memory: 128M
cpus: '0.1'
reservations:
memory: 64M
cpus: '0.05'
services:
web:
<<: *web-base
@@ -108,6 +125,24 @@ services:
profiles:
- bridge
matrix-bridge:
<<: *matrix-bridge-base
network_mode: host
depends_on:
- web
extra_hosts:
- "web:127.0.0.1"
matrix-bridge-bridge:
<<: *matrix-bridge-base
container_name: potatomesh-matrix-bridge
networks:
- potatomesh-network
depends_on:
- web-bridge
profiles:
- bridge
volumes:
potatomesh_data:
driver: local
@@ -115,6 +150,8 @@ volumes:
driver: local
potatomesh_logs:
driver: local
potatomesh_matrix_bridge_state:
driver: local
networks:
potatomesh-network:
Generated
+61
View File
@@ -0,0 +1,61 @@
{
"nodes": {
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1766070988,
"narHash": "sha256-G/WVghka6c4bAzMhTwT2vjLccg/awmHkdKSd2JrycLc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "c6245e83d836d0433170a16eb185cefe0572f8b8",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}
+384
View File
@@ -0,0 +1,384 @@
{
description = "PotatoMesh - A federated, Meshtastic-powered node dashboard";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
flake-utils.url = "github:numtide/flake-utils";
};
outputs = { self, nixpkgs, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
# Python environment for the ingestor
pythonEnv = pkgs.python3.withPackages (ps: with ps; [
meshtastic
protobuf
requests
]);
# Web app wrapper script
webApp = pkgs.writeShellApplication {
name = "potato-mesh-web";
runtimeInputs = [ pkgs.ruby pkgs.bundler pkgs.sqlite pkgs.git pkgs.gnumake pkgs.gcc ];
text = ''
if [ -n "''${XDG_DATA_HOME:-}" ]; then
BASEDIR="$XDG_DATA_HOME"
else
BASEDIR="$HOME/.local/share/potato-mesh"
fi
WORKDIR="$BASEDIR/web"
mkdir -p "$WORKDIR"
# Copy app files if not present or outdated
APP_SRC="${./web}"
DATA_SRC="${./data}"
if [ ! -f "$WORKDIR/.installed" ] || [ "$APP_SRC" != "$(cat "$WORKDIR/.src_path" 2>/dev/null)" ]; then
# Copy web app
cp -rT "$APP_SRC" "$WORKDIR/"
chmod -R u+w "$WORKDIR"
# Copy data directory (contains SQL schemas)
mkdir -p "$BASEDIR/data"
cp -rT "$DATA_SRC" "$BASEDIR/data/"
chmod -R u+w "$BASEDIR/data"
echo "$APP_SRC" > "$WORKDIR/.src_path"
rm -f "$WORKDIR/.installed"
fi
cd "$WORKDIR"
# Install gems if needed
if [ ! -f ".installed" ]; then
bundle config set --local path 'vendor/bundle'
bundle install
touch .installed
fi
exec bundle exec ruby app.rb -p "''${PORT:-41447}" -o "''${HOST:-0.0.0.0}"
'';
};
# Ingestor wrapper script
ingestor = pkgs.writeShellApplication {
name = "potato-mesh-ingestor";
runtimeInputs = [ pythonEnv ];
text = ''
# The ingestor needs to run from parent directory with data/ folder
if [ -n "''${XDG_DATA_HOME:-}" ]; then
BASEDIR="$XDG_DATA_HOME"
else
BASEDIR="$HOME/.local/share/potato-mesh"
fi
if [ ! -d "$BASEDIR/data" ]; then
mkdir -p "$BASEDIR"
cp -rT "${./data}" "$BASEDIR/data/"
chmod -R u+w "$BASEDIR/data"
fi
cd "$BASEDIR"
exec python -m data.mesh
'';
};
in {
packages = {
web = webApp;
ingestor = ingestor;
default = webApp;
};
apps = {
web = {
type = "app";
program = "${webApp}/bin/potato-mesh-web";
};
ingestor = {
type = "app";
program = "${ingestor}/bin/potato-mesh-ingestor";
};
default = self.apps.${system}.web;
};
devShells.default = pkgs.mkShell {
buildInputs = [
pkgs.ruby
pkgs.bundler
pythonEnv
pkgs.sqlite
];
shellHook = ''
echo "PotatoMesh development shell"
echo " - Ruby: $(ruby --version)"
echo " - Python: $(python --version)"
echo ""
echo "To run the web app: cd web && bundle install && ./app.sh"
echo "To run the ingestor: cd data && python mesh.py"
'';
};
checks.potato-mesh-nixos = pkgs.testers.nixosTest {
name = "potato-mesh-data-dir";
nodes.machine = { lib, ... }: {
imports = [ self.nixosModules.default ];
services.potato-mesh = {
enable = true;
apiToken = "test-token";
dataDir = "/var/lib/potato-mesh";
ingestor.enable = true;
};
systemd.services.potato-mesh-ingestor.wantedBy = lib.mkForce [];
};
testScript = ''
machine.start()
machine.succeed("grep -q 'XDG_DATA_HOME=/var/lib/potato-mesh' /etc/systemd/system/potato-mesh-web.service")
machine.succeed("grep -q 'XDG_DATA_HOME=/var/lib/potato-mesh' /etc/systemd/system/potato-mesh-ingestor.service")
machine.succeed("grep -q 'WorkingDirectory=/var/lib/potato-mesh' /etc/systemd/system/potato-mesh-web.service")
machine.succeed("grep -q 'WorkingDirectory=/var/lib/potato-mesh' /etc/systemd/system/potato-mesh-ingestor.service")
'';
};
}
) // {
# NixOS module
nixosModules.default = { config, lib, pkgs, ... }:
let
cfg = config.services.potato-mesh;
in {
options.services.potato-mesh = {
enable = lib.mkEnableOption "PotatoMesh web dashboard";
package = lib.mkOption {
type = lib.types.package;
default = self.packages.${pkgs.system}.web;
description = "The potato-mesh web package to use";
};
port = lib.mkOption {
type = lib.types.port;
default = 41447;
description = "Port to listen on";
};
host = lib.mkOption {
type = lib.types.str;
default = "0.0.0.0";
description = "Host to bind to";
};
apiToken = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Shared secret that authorizes ingestors and API clients making POST requests. Warning: visible in nix store. Prefer apiTokenFile for production.";
};
apiTokenFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = "File containing API_TOKEN=<secret> (recommended for production)";
};
instanceDomain = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Public hostname used for metadata, federation, and generated API links";
};
siteName = lib.mkOption {
type = lib.types.str;
default = "PotatoMesh Demo";
description = "Title and header displayed in the UI";
};
channel = lib.mkOption {
type = lib.types.str;
default = "#LongFast";
description = "Default channel name displayed in the UI";
};
frequency = lib.mkOption {
type = lib.types.str;
default = "915MHz";
description = "Default frequency description displayed in the UI";
};
contactLink = lib.mkOption {
type = lib.types.str;
default = "#potatomesh:dod.ngo";
description = "Chat link or Matrix alias rendered in the footer and overlays";
};
mapCenter = lib.mkOption {
type = lib.types.str;
default = "38.761944,-27.090833";
description = "Latitude and longitude that centre the map on load";
};
mapZoom = lib.mkOption {
type = lib.types.nullOr lib.types.int;
default = null;
description = "Fixed Leaflet zoom applied on first load; disables auto-fit when provided";
};
maxDistance = lib.mkOption {
type = lib.types.int;
default = 42;
description = "Maximum distance (km) before node relationships are hidden on the map";
};
debug = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Enable verbose logging";
};
allowedChannels = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Comma-separated channel names the ingestor accepts";
};
hiddenChannels = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Comma-separated channel names the ingestor will ignore";
};
federation = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Announce instance and crawl peers";
};
private = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Hide chat UI, disable message APIs, and exclude hidden clients from public listings";
};
dataDir = lib.mkOption {
type = lib.types.path;
default = "/var/lib/potato-mesh";
description = "Directory to store database and configuration";
};
user = lib.mkOption {
type = lib.types.str;
default = "potato-mesh";
description = "User to run the service as";
};
group = lib.mkOption {
type = lib.types.str;
default = "potato-mesh";
description = "Group to run the service as";
};
# Ingestor options
ingestor = {
enable = lib.mkEnableOption "PotatoMesh Python ingestor";
package = lib.mkOption {
type = lib.types.package;
default = self.packages.${pkgs.system}.ingestor;
description = "The potato-mesh ingestor package to use";
};
connection = lib.mkOption {
type = lib.types.str;
default = "/dev/ttyACM0";
description = "Connection target: serial port, IP:port for TCP, or Bluetooth address for BLE";
};
};
};
config = lib.mkIf cfg.enable {
users.users.${cfg.user} = {
isSystemUser = true;
group = cfg.group;
home = cfg.dataDir;
createHome = true;
};
users.groups.${cfg.group} = {};
systemd.services.potato-mesh-web = {
description = "PotatoMesh Web Dashboard";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = {
RACK_ENV = "production";
APP_ENV = "production";
PORT = toString cfg.port;
HOST = cfg.host;
SITE_NAME = cfg.siteName;
CHANNEL = cfg.channel;
FREQUENCY = cfg.frequency;
CONTACT_LINK = cfg.contactLink;
MAP_CENTER = cfg.mapCenter;
MAX_DISTANCE = toString cfg.maxDistance;
DEBUG = if cfg.debug then "1" else "0";
FEDERATION = if cfg.federation then "1" else "0";
PRIVATE = if cfg.private then "1" else "0";
XDG_DATA_HOME = cfg.dataDir;
XDG_CONFIG_HOME = "${cfg.dataDir}/config";
} // lib.optionalAttrs (cfg.instanceDomain != null) {
INSTANCE_DOMAIN = cfg.instanceDomain;
} // lib.optionalAttrs (cfg.mapZoom != null) {
MAP_ZOOM = toString cfg.mapZoom;
} // lib.optionalAttrs (cfg.allowedChannels != null) {
ALLOWED_CHANNELS = cfg.allowedChannels;
} // lib.optionalAttrs (cfg.hiddenChannels != null) {
HIDDEN_CHANNELS = cfg.hiddenChannels;
} // lib.optionalAttrs (cfg.apiToken != null) {
API_TOKEN = cfg.apiToken;
};
serviceConfig = {
Type = "simple";
User = cfg.user;
Group = cfg.group;
WorkingDirectory = cfg.dataDir;
ExecStart = "${cfg.package}/bin/potato-mesh-web";
Restart = "always";
RestartSec = 5;
} // lib.optionalAttrs (cfg.apiTokenFile != null) {
EnvironmentFile = cfg.apiTokenFile;
};
};
systemd.services.potato-mesh-ingestor = lib.mkIf cfg.ingestor.enable {
description = "PotatoMesh Python Ingestor";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" "potato-mesh-web.service" ];
requires = [ "potato-mesh-web.service" ];
environment = {
INSTANCE_DOMAIN = "http://127.0.0.1:${toString cfg.port}";
CONNECTION = cfg.ingestor.connection;
DEBUG = if cfg.debug then "1" else "0";
XDG_DATA_HOME = cfg.dataDir;
} // lib.optionalAttrs (cfg.allowedChannels != null) {
ALLOWED_CHANNELS = cfg.allowedChannels;
} // lib.optionalAttrs (cfg.hiddenChannels != null) {
HIDDEN_CHANNELS = cfg.hiddenChannels;
} // lib.optionalAttrs (cfg.apiToken != null) {
API_TOKEN = cfg.apiToken;
};
serviceConfig = {
Type = "simple";
User = cfg.user;
Group = cfg.group;
WorkingDirectory = cfg.dataDir;
ExecStart = "${cfg.ingestor.package}/bin/potato-mesh-ingestor";
Restart = "always";
RestartSec = 10;
} // lib.optionalAttrs (cfg.apiTokenFile != null) {
EnvironmentFile = cfg.apiTokenFile;
};
};
};
};
};
}
+3
View File
@@ -0,0 +1,3 @@
target/
coverage.lcov
bridge_state.json
+2117
View File
File diff suppressed because it is too large
+34
View File
@@ -0,0 +1,34 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[package]
name = "potatomesh-matrix-bridge"
version = "0.5.9"
edition = "2021"
[dependencies]
tokio = { version = "1", features = ["macros", "rt-multi-thread", "time"] }
reqwest = { version = "0.12", features = ["json", "rustls-tls"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
toml = "0.9"
anyhow = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["fmt", "env-filter"] }
urlencoding = "2"
[dev-dependencies]
tempfile = "3"
mockito = "1"
serial_test = "3"
+20
View File
@@ -0,0 +1,20 @@
[potatomesh]
# Base domain (with or without trailing slash)
base_url = "https://potatomesh.net"
# Poll interval in seconds
poll_interval_secs = 60
[matrix]
# Homeserver base URL (client API) without trailing slash
homeserver = "https://matrix.dod.ngo"
# Appservice access token (from your registration.yaml)
as_token = "INVALID_TOKEN_NOT_WORKING"
# Server name (domain) part of Matrix user IDs
server_name = "dod.ngo"
# Room ID to send into (must be joined by the appservice / puppets)
room_id = "!sXabOBXbVObAlZQEUs:c-base.org" # "#potato-bridge:c-base.org"
[state]
# Where to persist last seen message id (optional but recommended)
state_file = "bridge_state.json"
+42
View File
@@ -0,0 +1,42 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM rust:1.91-bookworm AS builder
WORKDIR /app
COPY matrix/Cargo.toml matrix/Cargo.lock ./
COPY matrix/src ./src
RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/usr/local/cargo/git \
cargo build --release --locked
FROM debian:bookworm-slim AS runtime
RUN apt-get update \
&& apt-get install -y --no-install-recommends ca-certificates gosu \
&& rm -rf /var/lib/apt/lists/*
RUN useradd --create-home --uid 10001 --shell /usr/sbin/nologin potatomesh
WORKDIR /app
COPY --from=builder /app/target/release/potatomesh-matrix-bridge /usr/local/bin/potatomesh-matrix-bridge
COPY matrix/Config.toml /app/Config.example.toml
COPY matrix/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
+279
View File
@@ -0,0 +1,279 @@
# potatomesh-matrix-bridge
A small Rust daemon that bridges **PotatoMesh** LoRa messages into a **Matrix** room.
For each PotatoMesh node, the bridge creates (or uses) a **Matrix puppet user**:
- Matrix localpart: `potato_` + the hex node id (without `!`), e.g. `!67fc83cb` becomes `@potato_67fc83cb:example.org`
- Matrix display name: the node's `long_name` from the PotatoMesh API
Messages from PotatoMesh are periodically fetched and forwarded to a single Matrix room as those puppet users.
---
## Features
- Polls `https://potatomesh.net/api/messages` (deriving `/api` from the configured base domain)
- Looks up node metadata via `GET /api/nodes/{hex}` and caches it
- One Matrix user per node:
- username: `potato_{hex node id}`
- display name: `long_name`
- Forwards `TEXT_MESSAGE_APP` messages into a single Matrix room
- Persists last-seen message ID to avoid duplicates across restarts
---
## Architecture Overview
- **PotatoMesh side**
- `GET /api/messages` returns an array of messages
- `GET /api/nodes/{hex}` returns node metadata (including `long_name`)
- **Matrix side**
- Uses the Matrix Client-Server API with an **appservice access token**
- Impersonates puppet users via `user_id=@potato_{hex}:{server_name}&access_token={as_token}`
- Sends `m.room.message` events into a configured room
This is **not** a full appservice framework; it just speaks the minimal HTTP needed.
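The impersonation mechanism boils down to one URL shape: the standard send-event endpoint with a `user_id` query parameter asserting the puppet identity. A minimal sketch (the helper names `encode_id` and `send_message_url` are illustrative, not the bridge's actual API in `src/matrix.rs`; production code would use a proper percent-encoding crate such as `urlencoding`):

```rust
/// Percent-encode the few reserved characters that appear in Matrix IDs.
fn encode_id(id: &str) -> String {
    id.chars()
        .map(|c| match c {
            '!' => "%21".to_string(),
            '@' => "%40".to_string(),
            ':' => "%3A".to_string(),
            '#' => "%23".to_string(),
            other => other.to_string(),
        })
        .collect()
}

/// Build the send-event URL that impersonates a puppet via `?user_id=`.
/// The `as_token` itself goes in the `Authorization: Bearer` header
/// (or an `access_token` query parameter on older homeservers).
fn send_message_url(homeserver: &str, room_id: &str, txn_id: u64, user_id: &str) -> String {
    format!(
        "{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}",
        homeserver,
        encode_id(room_id),
        txn_id,
        encode_id(user_id)
    )
}

fn main() {
    println!(
        "{}",
        send_message_url(
            "https://matrix.example.org",
            "!yourroomid:example.org",
            1,
            "@potato_67fc83cb:example.org"
        )
    );
}
```

The transaction ID segment must be unique per request so the homeserver can deduplicate retries.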
---
## Requirements
- Rust (stable) and `cargo`
- A Matrix homeserver you control (e.g. Synapse)
- An **application service registration** on your homeserver that:
- Whitelists the puppet user namespace (e.g. `@potato_[0-9a-f]{8}:example.org`)
- Provides an `as_token` the bridge can use
- Network access from the bridge host to:
- `https://potatomesh.net/` (bridge appends `/api`)
- Your Matrix homeserver (`https://matrix.example.org`)
---
## Configuration
All configuration is in `Config.toml` in the project root.
Example:
```toml
[potatomesh]
# Base domain (bridge will call {base_url}/api)
base_url = "https://potatomesh.net/"
# Poll interval in seconds
poll_interval_secs = 10
[matrix]
# Homeserver base URL (client API) without trailing slash
homeserver = "https://matrix.example.org"
# Appservice access token (from your registration.yaml)
as_token = "YOUR_APPSERVICE_AS_TOKEN"
# Server name (domain) part of Matrix user IDs
server_name = "example.org"
# Room ID to send into (must be joined by the appservice / puppets)
room_id = "!yourroomid:example.org"
[state]
# Where to persist last seen message id
state_file = "bridge_state.json"
```
### PotatoMesh API
The bridge assumes:
* Messages: `GET {base_url}/api/messages` → JSON array, for example:
```json
[
{
"id": 2947676906,
"rx_time": 1764241436,
"rx_iso": "2025-11-27T11:03:56Z",
"from_id": "!da6556d4",
"to_id": "^all",
"channel": 1,
"portnum": "TEXT_MESSAGE_APP",
"text": "Ping",
"rssi": -111,
"hop_limit": 1,
"lora_freq": 868,
"modem_preset": "MediumFast",
"channel_name": "TEST",
"snr": -9.0,
"node_id": "!06871773"
}
]
```
* Nodes: `GET {base_url}/api/nodes/{hex}` → JSON, for example:
```json
{
"node_id": "!67fc83cb",
"short_name": "83CB",
"long_name": "Meshtastic 83CB",
"role": "CLIENT_HIDDEN",
"last_heard": 1764250515,
"first_heard": 1758993817,
"last_seen_iso": "2025-11-27T13:35:15Z"
}
```
Node hex ID is derived from `node_id` by stripping the leading `!` and using the remainder inside the puppet localpart prefix (`potato_{hex}`).
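That derivation is small enough to sketch directly (the function name mirrors the bridge's `localpart_from_node_id`; the body here is an illustration):

```rust
/// Derive the puppet localpart from a PotatoMesh node id such as "!67fc83cb".
/// Ids without a leading '!' pass through unchanged.
fn localpart_from_node_id(node_id: &str) -> String {
    let hex = node_id.strip_prefix('!').unwrap_or(node_id);
    format!("potato_{}", hex)
}

fn main() {
    println!("{}", localpart_from_node_id("!67fc83cb")); // potato_67fc83cb
}
```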
---
## Matrix Appservice Setup (Synapse example)
You need an appservice registration file (e.g. `potatomesh-bridge.yaml`) configured in Synapse.
A minimal example sketch (you **must** adjust URLs, secrets, namespaces):
```yaml
id: potatomesh-bridge
url: "http://your-bridge-host:8080" # not used by this bridge if it only calls out
as_token: "YOUR_APPSERVICE_AS_TOKEN"
hs_token: "SECRET_HS_TOKEN"
sender_localpart: "potatomesh-bridge"
rate_limited: false
namespaces:
users:
- exclusive: true
regex: "@potato_[0-9a-f]{8}:example.org"
```
For this bridge, only the `as_token` and `namespaces.users` actually matter. The bridge does not accept inbound events; it only uses the `as_token` to call the homeserver.
In Synapse's `homeserver.yaml`, add the registration file under `app_service_config_files`, restart, and invite a puppet user to your target room (or use the room ID directly).
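For reference, puppet registration uses the standard appservice flow: `POST /_matrix/client/v3/register` with the `as_token` in the `Authorization: Bearer` header and a body asserting the appservice login type. A sketch of the request body (the `register_body` helper is illustrative, not the bridge's code; it assumes localparts never contain characters needing JSON escaping, which holds for `potato_{hex}`):

```rust
/// Build the JSON body for registering an appservice puppet user.
/// Sent to POST /_matrix/client/v3/register with the as_token as a
/// Bearer token; the homeserver checks the localpart against the
/// appservice's user namespace.
fn register_body(localpart: &str) -> String {
    format!(
        r#"{{"type":"m.login.application_service","username":"{}"}}"#,
        localpart
    )
}

fn main() {
    println!("{}", register_body("potato_67fc83cb"));
}
```

A `M_USER_IN_USE` error response on this call is benign: it just means the puppet already exists.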
---
## Build
```bash
# clone
git clone https://github.com/YOUR_USER/potatomesh-matrix-bridge.git
cd potatomesh-matrix-bridge
# build
cargo build --release
```
The resulting binary will be at:
```bash
target/release/potatomesh-matrix-bridge
```
---
## Docker
Build the container from the repo root with the included `matrix/Dockerfile`:
```bash
docker build -f matrix/Dockerfile -t potatomesh-matrix-bridge .
```
Provide your config at `/app/Config.toml` and persist the bridge state file by mounting volumes. Minimal example:
```bash
docker run --rm \
-v bridge_state:/app \
-v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
potatomesh-matrix-bridge
```
If you prefer to isolate the state file from the config, mount it directly instead of the whole `/app` directory (create the file first, otherwise Docker will create a directory at that path):
```bash
touch bridge_state.json
docker run --rm \
  -v "$(pwd)/bridge_state.json:/app/bridge_state.json" \
  -v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
  potatomesh-matrix-bridge
```
The image ships `Config.example.toml` for reference, but the bridge will exit if `/app/Config.toml` is not provided.
---
## Run
Ensure `Config.toml` is present and valid, then:
```bash
./target/release/potatomesh-matrix-bridge
```
Environment variables you may care about:
* `RUST_LOG` for logging, e.g.:
```bash
RUST_LOG=info,reqwest=warn ./target/release/potatomesh-matrix-bridge
```
The bridge will:
1. Load state from `bridge_state.json` (if present).
2. Poll PotatoMesh every `poll_interval_secs`.
3. For each new `TEXT_MESSAGE_APP`:
* Fetch node info.
* Ensure puppet is registered (`@potato_{hex}:{server_name}`).
* Set puppet display name to `long_name`.
* Send a formatted text message into `room_id` as that puppet.
* Update and persist `bridge_state.json`.
Delete `bridge_state.json` if you want it to replay all currently available messages.
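After a few messages the state file looks roughly like this (field names come from the `BridgeState` struct in `src/main.rs`; the values here are illustrative, taken from the example message above):

```json
{
  "last_message_id": 2947676906,
  "last_rx_time": 1764241436,
  "last_rx_time_ids": [2947676906]
}
```

`last_rx_time_ids` holds every message id already seen at the newest `rx_time`, so messages sharing a timestamp are not double-posted across restarts.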
---
## Development
Run the test suite:
```bash
cargo test
```
Format code:
```bash
cargo fmt
```
Lint (optional but recommended):
```bash
cargo clippy -- -D warnings
```
---
## GitHub Actions CI
This repository includes a GitHub Actions workflow (`.github/workflows/ci.yml`) that:
* runs on pushes and pull requests
* caches Cargo dependencies
* runs:
* `cargo fmt --check`
* `cargo clippy`
* `cargo test`
See the workflow file for details.
---
## Caveats & Future Work
* No E2EE: this bridge posts into unencrypted (or server-side managed) rooms. For encrypted rooms, you'd need real E2EE support and key management.
* No inbound Matrix → PotatoMesh direction yet. This is a one-way bridge (PotatoMesh → Matrix).
* No pagination or `since` support on the PotatoMesh API. The bridge simply deduplicates by message `id` and stores the highest seen.
If you change the PotatoMesh API, adjust the types in `src/potatomesh.rs` accordingly.
+33
View File
@@ -0,0 +1,33 @@
#!/bin/sh
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
# Default state file path from Config.toml unless overridden.
STATE_FILE="${STATE_FILE:-/app/bridge_state.json}"
STATE_DIR="$(dirname "$STATE_FILE")"
# Ensure state directory exists and is writable by the non-root user without
# touching the read-only config bind mount.
if [ ! -d "$STATE_DIR" ]; then
mkdir -p "$STATE_DIR"
fi
# Best-effort ownership fix; ignore if the underlying volume is read-only.
chown potatomesh:potatomesh "$STATE_DIR" 2>/dev/null || true
touch "$STATE_FILE" 2>/dev/null || true
chown potatomesh:potatomesh "$STATE_FILE" 2>/dev/null || true
exec gosu potatomesh potatomesh-matrix-bridge "$@"
+157
View File
@@ -0,0 +1,157 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use serde::Deserialize;
use std::{fs, path::Path};
#[derive(Debug, Deserialize, Clone)]
pub struct PotatomeshConfig {
pub base_url: String,
pub poll_interval_secs: u64,
}
#[derive(Debug, Deserialize, Clone)]
pub struct MatrixConfig {
pub homeserver: String,
pub as_token: String,
pub server_name: String,
pub room_id: String,
}
#[derive(Debug, Deserialize, Clone)]
pub struct StateConfig {
pub state_file: String,
}
#[derive(Debug, Deserialize, Clone)]
pub struct Config {
pub potatomesh: PotatomeshConfig,
pub matrix: MatrixConfig,
pub state: StateConfig,
}
impl Config {
pub fn load_from_file(path: &str) -> anyhow::Result<Self> {
let contents = fs::read_to_string(path)?;
let cfg = toml::from_str(&contents)?;
Ok(cfg)
}
pub fn from_default_path() -> anyhow::Result<Self> {
let path = "Config.toml";
if !Path::new(path).exists() {
anyhow::bail!("Config file {path} not found");
}
Self::load_from_file(path)
}
}
#[cfg(test)]
mod tests {
use super::*;
use serial_test::serial;
use std::io::Write;
#[test]
fn parse_minimal_config_from_toml_str() {
let toml_str = r#"
[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#;
let cfg: Config = toml::from_str(toml_str).expect("toml should parse");
assert_eq!(cfg.potatomesh.base_url, "https://potatomesh.net/");
assert_eq!(cfg.potatomesh.poll_interval_secs, 10);
assert_eq!(cfg.matrix.homeserver, "https://matrix.example.org");
assert_eq!(cfg.matrix.as_token, "AS_TOKEN");
assert_eq!(cfg.matrix.server_name, "example.org");
assert_eq!(cfg.matrix.room_id, "!roomid:example.org");
assert_eq!(cfg.state.state_file, "bridge_state.json");
}
#[test]
fn load_from_file_not_found() {
let result = Config::load_from_file("file_that_does_not_exist.toml");
assert!(result.is_err());
}
#[test]
fn load_from_file_valid_file() {
let toml_str = r#"
[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#;
let mut file = tempfile::NamedTempFile::new().unwrap();
write!(file, "{}", toml_str).unwrap();
let result = Config::load_from_file(file.path().to_str().unwrap());
assert!(result.is_ok());
}
#[test]
#[serial]
fn from_default_path_not_found() {
let tmp_dir = tempfile::tempdir().unwrap();
std::env::set_current_dir(tmp_dir.path()).unwrap();
let result = Config::from_default_path();
assert!(result.is_err());
}
#[test]
#[serial]
fn from_default_path_found() {
let toml_str = r#"
[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#;
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("Config.toml");
let mut file = std::fs::File::create(file_path).unwrap();
write!(file, "{}", toml_str).unwrap();
std::env::set_current_dir(tmp_dir.path()).unwrap();
let result = Config::from_default_path();
assert!(result.is_ok());
}
}
+683
View File
@@ -0,0 +1,683 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod config;
mod matrix;
mod potatomesh;
use std::{fs, path::Path};
use anyhow::Result;
use tokio::time::{sleep, Duration};
use tracing::{error, info};
use crate::config::Config;
use crate::matrix::MatrixAppserviceClient;
use crate::potatomesh::{FetchParams, PotatoClient, PotatoMessage};
#[derive(Debug, serde::Serialize, serde::Deserialize, Default)]
pub struct BridgeState {
/// Highest message id processed by the bridge.
last_message_id: Option<u64>,
/// Highest rx_time observed; used to build incremental fetch queries.
#[serde(default)]
last_rx_time: Option<u64>,
/// Message ids seen at the current last_rx_time for de-duplication.
#[serde(default)]
last_rx_time_ids: Vec<u64>,
/// Legacy checkpoint timestamp used before last_rx_time was added.
#[serde(default, skip_serializing)]
last_checked_at: Option<u64>,
}
impl BridgeState {
fn load(path: &str) -> Result<Self> {
if !Path::new(path).exists() {
return Ok(Self::default());
}
let data = fs::read_to_string(path)?;
// Treat empty/whitespace-only files as a fresh state.
if data.trim().is_empty() {
return Ok(Self::default());
}
let mut s: Self = serde_json::from_str(&data)?;
if s.last_rx_time.is_none() {
s.last_rx_time = s.last_checked_at;
}
s.last_checked_at = None;
Ok(s)
}
fn save(&self, path: &str) -> Result<()> {
let data = serde_json::to_string_pretty(self)?;
fs::write(path, data)?;
Ok(())
}
fn should_forward(&self, msg: &PotatoMessage) -> bool {
match self.last_rx_time {
None => match self.last_message_id {
None => true,
Some(last_id) => msg.id > last_id,
},
Some(last_ts) => {
if msg.rx_time > last_ts {
true
} else if msg.rx_time < last_ts {
false
} else {
!self.last_rx_time_ids.contains(&msg.id)
}
}
}
}
fn update_with(&mut self, msg: &PotatoMessage) {
self.last_message_id = Some(msg.id);
if self.last_rx_time.is_none() || Some(msg.rx_time) > self.last_rx_time {
self.last_rx_time = Some(msg.rx_time);
self.last_rx_time_ids = vec![msg.id];
} else if Some(msg.rx_time) == self.last_rx_time && !self.last_rx_time_ids.contains(&msg.id)
{
self.last_rx_time_ids.push(msg.id);
}
}
}
fn build_fetch_params(state: &BridgeState) -> FetchParams {
if state.last_message_id.is_none() {
FetchParams {
limit: None,
since: None,
}
} else if let Some(ts) = state.last_rx_time {
FetchParams {
limit: None,
since: Some(ts),
}
} else {
FetchParams {
limit: Some(10),
since: None,
}
}
}
async fn poll_once(
potato: &PotatoClient,
matrix: &MatrixAppserviceClient,
state: &mut BridgeState,
state_path: &str,
) {
let params = build_fetch_params(state);
match potato.fetch_messages(params).await {
Ok(mut msgs) => {
// sort by rx_time so we process by actual receipt time
msgs.sort_by_key(|m| m.rx_time);
for msg in &msgs {
if !state.should_forward(msg) {
continue;
}
// Filter to the ports you care about
if let Some(port) = &msg.portnum {
if port != "TEXT_MESSAGE_APP" {
state.update_with(msg);
if let Err(e) = state.save(state_path) {
error!("Error saving state: {:?}", e);
}
continue;
}
}
if let Err(e) = handle_message(potato, matrix, state, msg).await {
error!("Error handling message {}: {:?}", msg.id, e);
continue;
}
state.update_with(msg);
// persist after each processed message
if let Err(e) = state.save(state_path) {
error!("Error saving state: {:?}", e);
}
}
}
Err(e) => {
error!("Error fetching PotatoMesh messages: {:?}", e);
}
}
}
#[tokio::main]
async fn main() -> Result<()> {
// Logging: RUST_LOG=info,bridge=debug,reqwest=warn ...
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::from_default_env()
.add_directive("potatomesh_matrix_bridge=info".parse().unwrap_or_default())
.add_directive("reqwest=warn".parse().unwrap_or_default()),
)
.init();
let cfg = Config::from_default_path()?;
info!("Loaded config: {:?}", cfg);
let http = reqwest::Client::builder().build()?;
let potato = PotatoClient::new(http.clone(), cfg.potatomesh.clone());
potato.health_check().await?;
let matrix = MatrixAppserviceClient::new(http.clone(), cfg.matrix.clone());
matrix.health_check().await?;
let state_path = &cfg.state.state_file;
let mut state = BridgeState::load(state_path)?;
info!("Loaded state: {:?}", state);
let poll_interval = Duration::from_secs(cfg.potatomesh.poll_interval_secs);
loop {
poll_once(&potato, &matrix, &mut state, state_path).await;
sleep(poll_interval).await;
}
}
async fn handle_message(
potato: &PotatoClient,
matrix: &MatrixAppserviceClient,
state: &mut BridgeState,
msg: &PotatoMessage,
) -> Result<()> {
let node = potato.get_node(&msg.node_id).await?;
let localpart = MatrixAppserviceClient::localpart_from_node_id(&msg.node_id);
let user_id = matrix.user_id(&localpart);
// Ensure puppet exists & has display name
matrix.ensure_user_registered(&localpart).await?;
matrix.ensure_user_joined_room(&user_id).await?;
matrix.set_display_name(&user_id, &node.long_name).await?;
// Format the bridged message
let short = node
.short_name
.clone()
.unwrap_or_else(|| node.long_name.clone());
let preset_short = modem_preset_short(&msg.modem_preset);
let prefix = format!(
"[{freq}][{preset_short}][{channel}][{short}]",
freq = msg.lora_freq,
preset_short = preset_short,
channel = msg.channel_name,
short = short,
);
let (body, formatted_body) = format_message_bodies(&prefix, &msg.text);
matrix
.send_formatted_message_as(&user_id, &body, &formatted_body)
.await?;
state.update_with(msg);
Ok(())
}
/// Build a compact modem preset label like "LF" for "LongFast".
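///
/// Illustrative mapping (the `"turbo"` input is a hypothetical example of the
/// no-uppercase fallback, not a real Meshtastic preset):
///
/// ```text
/// "LongFast"   -> "LF"
/// "MediumFast" -> "MF"
/// "turbo"      -> "tu"   (no uppercase letters: first two chars)
/// ```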
fn modem_preset_short(preset: &str) -> String {
let letters: String = preset
.chars()
.filter(|ch| ch.is_ascii_uppercase())
.collect();
if letters.is_empty() {
preset.chars().take(2).collect()
} else {
letters
}
}
/// Build plain text + HTML message bodies with inline-code metadata.
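///
/// Illustrative input/output sketch (mirrors the behavior exercised in the
/// unit tests below):
///
/// ```text
/// ("[868][LF]", "Hi <all>")
///   -> body:           `[868][LF]` Hi <all>
///   -> formatted_body: <code>[868][LF]</code> Hi &lt;all&gt;
/// ```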
fn format_message_bodies(prefix: &str, text: &str) -> (String, String) {
let body = format!("`{}` {}", prefix, text);
let formatted_body = format!("<code>{}</code> {}", escape_html(prefix), escape_html(text));
(body, formatted_body)
}
/// Minimal HTML escaping for Matrix formatted_body payloads.
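///
/// Illustrative example:
///
/// ```text
/// <b>&  ->  &lt;b&gt;&amp;
/// ```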
fn escape_html(input: &str) -> String {
let mut escaped = String::with_capacity(input.len());
for ch in input.chars() {
match ch {
'&' => escaped.push_str("&amp;"),
'<' => escaped.push_str("&lt;"),
'>' => escaped.push_str("&gt;"),
'"' => escaped.push_str("&quot;"),
'\'' => escaped.push_str("&#39;"),
_ => escaped.push(ch),
}
}
escaped
}
#[cfg(test)]
mod tests {
use super::*;
use crate::config::{MatrixConfig, PotatomeshConfig};
use crate::matrix::MatrixAppserviceClient;
use crate::potatomesh::PotatoClient;
fn sample_msg(id: u64) -> PotatoMessage {
PotatoMessage {
id,
rx_time: 0,
rx_iso: "2025-11-27T00:00:00Z".to_string(),
from_id: "!abcd1234".to_string(),
to_id: "^all".to_string(),
channel: 1,
portnum: Some("TEXT_MESSAGE_APP".to_string()),
text: "Ping".to_string(),
rssi: Some(-100),
hop_limit: Some(1),
lora_freq: 868,
modem_preset: "MediumFast".to_string(),
channel_name: "TEST".to_string(),
snr: Some(0.0),
reply_id: None,
node_id: "!abcd1234".to_string(),
}
}
#[test]
fn modem_preset_short_handles_camelcase() {
assert_eq!(modem_preset_short("LongFast"), "LF");
assert_eq!(modem_preset_short("MediumFast"), "MF");
}
#[test]
fn format_message_bodies_escape_html() {
let (body, formatted) = format_message_bodies("[868][LF]", "Hello <&>");
assert_eq!(body, "`[868][LF]` Hello <&>");
assert_eq!(formatted, "<code>[868][LF]</code> Hello &lt;&amp;&gt;");
}
#[test]
fn escape_html_escapes_quotes() {
assert_eq!(escape_html("a\"b'c"), "a&quot;b&#39;c");
}
#[test]
fn bridge_state_initially_forwards_all() {
let state = BridgeState::default();
let msg = sample_msg(42);
assert!(state.should_forward(&msg));
}
#[test]
fn bridge_state_tracks_latest_rx_time_and_skips_older() {
let mut state = BridgeState::default();
let m1 = sample_msg(10);
let m2 = sample_msg(20);
let m3 = sample_msg(15);
let m1 = PotatoMessage { rx_time: 10, ..m1 };
let m2 = PotatoMessage { rx_time: 20, ..m2 };
let m3 = PotatoMessage { rx_time: 15, ..m3 };
// First message, should forward
assert!(state.should_forward(&m1));
state.update_with(&m1);
assert_eq!(state.last_message_id, Some(10));
assert_eq!(state.last_rx_time, Some(10));
// Second message, higher id, should forward
assert!(state.should_forward(&m2));
state.update_with(&m2);
assert_eq!(state.last_message_id, Some(20));
assert_eq!(state.last_rx_time, Some(20));
// Third message, lower than last, should NOT forward
assert!(!state.should_forward(&m3));
// state remains unchanged
assert_eq!(state.last_message_id, Some(20));
assert_eq!(state.last_rx_time, Some(20));
}
#[test]
fn bridge_state_uses_legacy_id_filter_when_rx_time_missing() {
let state = BridgeState {
last_message_id: Some(10),
last_rx_time: None,
last_rx_time_ids: vec![],
last_checked_at: None,
};
let older = sample_msg(9);
let newer = sample_msg(11);
assert!(!state.should_forward(&older));
assert!(state.should_forward(&newer));
}
#[test]
fn bridge_state_dedupes_same_timestamp() {
let mut state = BridgeState::default();
let m1 = PotatoMessage {
rx_time: 100,
..sample_msg(10)
};
let m2 = PotatoMessage {
rx_time: 100,
..sample_msg(9)
};
let dup = PotatoMessage {
rx_time: 100,
..sample_msg(10)
};
assert!(state.should_forward(&m1));
state.update_with(&m1);
assert!(state.should_forward(&m2));
state.update_with(&m2);
assert!(!state.should_forward(&dup));
assert_eq!(state.last_rx_time, Some(100));
assert_eq!(state.last_rx_time_ids, vec![10, 9]);
}
#[test]
fn bridge_state_load_save_roundtrip() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("state.json");
let path_str = file_path.to_str().unwrap();
let state = BridgeState {
last_message_id: Some(12345),
last_rx_time: Some(99),
last_rx_time_ids: vec![123],
last_checked_at: Some(77),
};
state.save(path_str).unwrap();
let loaded_state = BridgeState::load(path_str).unwrap();
assert_eq!(loaded_state.last_message_id, Some(12345));
assert_eq!(loaded_state.last_rx_time, Some(99));
assert_eq!(loaded_state.last_rx_time_ids, vec![123]);
assert_eq!(loaded_state.last_checked_at, None);
}
#[test]
fn bridge_state_load_nonexistent() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("nonexistent.json");
let path_str = file_path.to_str().unwrap();
let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, None);
assert_eq!(state.last_rx_time, None);
assert!(state.last_rx_time_ids.is_empty());
}
#[test]
fn bridge_state_load_empty_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("empty.json");
let path_str = file_path.to_str().unwrap();
fs::write(path_str, "").unwrap();
let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, None);
assert_eq!(state.last_rx_time, None);
assert!(state.last_rx_time_ids.is_empty());
assert_eq!(state.last_checked_at, None);
}
#[test]
fn bridge_state_migrates_legacy_checkpoint() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("legacy_state.json");
let path_str = file_path.to_str().unwrap();
fs::write(
path_str,
r#"{"last_message_id":42,"last_checked_at":1710000000}"#,
)
.unwrap();
let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, Some(42));
assert_eq!(state.last_rx_time, Some(1_710_000_000));
assert!(state.last_rx_time_ids.is_empty());
}
#[test]
fn fetch_params_respects_missing_last_message_id() {
let state = BridgeState {
last_message_id: None,
last_rx_time: Some(123),
last_rx_time_ids: vec![],
last_checked_at: None,
};
let params = build_fetch_params(&state);
assert_eq!(params.limit, None);
assert_eq!(params.since, None);
}
#[test]
fn fetch_params_uses_since_when_safe() {
let state = BridgeState {
last_message_id: Some(1),
last_rx_time: Some(123),
last_rx_time_ids: vec![],
last_checked_at: None,
};
let params = build_fetch_params(&state);
assert_eq!(params.limit, None);
assert_eq!(params.since, Some(123));
}
#[test]
fn fetch_params_defaults_to_small_window() {
let state = BridgeState {
last_message_id: Some(1),
last_rx_time: None,
last_rx_time_ids: vec![],
last_checked_at: None,
};
let params = build_fetch_params(&state);
assert_eq!(params.limit, Some(10));
assert_eq!(params.since, None);
}
#[tokio::test]
async fn poll_once_leaves_state_unchanged_without_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();
let mut server = mockito::Server::new_async().await;
let mock_msgs = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any)
.with_status(200)
.with_header("content-type", "application/json")
.with_body("[]")
.create();
let http_client = reqwest::Client::new();
let potatomesh_cfg = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 1,
};
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
let potato = PotatoClient::new(http_client.clone(), potatomesh_cfg);
let matrix = MatrixAppserviceClient::new(http_client, matrix_cfg);
let mut state = BridgeState {
last_message_id: Some(1),
last_rx_time: Some(100),
last_rx_time_ids: vec![1],
last_checked_at: None,
};
poll_once(&potato, &matrix, &mut state, state_str).await;
mock_msgs.assert();
// No new data means state remains unchanged and is not persisted.
assert_eq!(state.last_rx_time, Some(100));
assert_eq!(state.last_rx_time_ids, vec![1]);
assert!(!state_path.exists());
}
#[tokio::test]
async fn poll_once_persists_state_for_non_text_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();
let mut server = mockito::Server::new_async().await;
let mock_msgs = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any)
.with_status(200)
.with_header("content-type", "application/json")
.with_body(
r#"[{"id":1,"rx_time":100,"rx_iso":"2025-11-27T00:00:00Z","from_id":"!abcd1234","to_id":"^all","channel":1,"portnum":"POSITION_APP","text":"","rssi":-100,"hop_limit":1,"lora_freq":868,"modem_preset":"MediumFast","channel_name":"TEST","snr":0.0,"node_id":"!abcd1234"}]"#,
)
.create();
let http_client = reqwest::Client::new();
let potatomesh_cfg = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 1,
};
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
let potato = PotatoClient::new(http_client.clone(), potatomesh_cfg);
let matrix = MatrixAppserviceClient::new(http_client, matrix_cfg);
let mut state = BridgeState::default();
poll_once(&potato, &matrix, &mut state, state_str).await;
mock_msgs.assert();
assert!(state_path.exists());
let loaded = BridgeState::load(state_str).unwrap();
assert_eq!(loaded.last_message_id, Some(1));
assert_eq!(loaded.last_rx_time, Some(100));
assert_eq!(loaded.last_rx_time_ids, vec![1]);
}
#[tokio::test]
async fn test_handle_message() {
let mut server = mockito::Server::new_async().await;
let potatomesh_cfg = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 1,
};
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
let node_id = "abcd1234";
let user_id = format!("@potato_{}:{}", node_id, matrix_cfg.server_name);
let encoded_user = urlencoding::encode(&user_id);
let room_id = matrix_cfg.room_id.clone();
let encoded_room = urlencoding::encode(&room_id);
let mock_get_node = server
.mock("GET", "/api/nodes/abcd1234")
.with_status(200)
.with_header("content-type", "application/json")
.with_body(r#"{"node_id": "!abcd1234", "long_name": "Test Node", "short_name": "TN"}"#)
.create();
let mock_register = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user&access_token=AS_TOKEN")
.with_status(200)
.create();
let mock_join = server
.mock(
"POST",
format!("/_matrix/client/v3/rooms/{}/join", encoded_room).as_str(),
)
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.with_status(200)
.create();
let mock_display_name = server
.mock(
"PUT",
format!("/_matrix/client/v3/profile/{}/displayname", encoded_user).as_str(),
)
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.with_status(200)
.create();
let http_client = reqwest::Client::new();
let matrix_client = MatrixAppserviceClient::new(http_client.clone(), matrix_cfg);
let txn_id = matrix_client
.txn_counter
.load(std::sync::atomic::Ordering::SeqCst);
let mock_send = server
.mock(
"PUT",
format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
)
.as_str(),
)
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"format": "org.matrix.custom.html",
})))
.with_status(200)
.create();
let potato_client = PotatoClient::new(http_client.clone(), potatomesh_cfg);
let mut state = BridgeState::default();
let msg = sample_msg(100);
let result = handle_message(&potato_client, &matrix_client, &mut state, &msg).await;
assert!(result.is_ok());
mock_get_node.assert();
mock_register.assert();
mock_join.assert();
mock_display_name.assert();
mock_send.assert();
assert_eq!(state.last_message_id, Some(100));
}
}
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use serde::Serialize;
use std::sync::{
atomic::{AtomicU64, Ordering},
Arc,
};
use crate::config::MatrixConfig;
#[derive(Clone)]
pub struct MatrixAppserviceClient {
http: reqwest::Client,
pub cfg: MatrixConfig,
pub txn_counter: Arc<AtomicU64>,
}
impl MatrixAppserviceClient {
pub fn new(http: reqwest::Client, cfg: MatrixConfig) -> Self {
let start = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as u64;
Self {
http,
cfg,
txn_counter: Arc::new(AtomicU64::new(start)),
}
}
/// Basic liveness check against the homeserver.
pub async fn health_check(&self) -> anyhow::Result<()> {
let url = format!("{}/_matrix/client/versions", self.cfg.homeserver);
let resp = self.http.get(&url).send().await?;
if resp.status().is_success() {
tracing::info!("Matrix homeserver healthy at {}", self.cfg.homeserver);
Ok(())
} else {
Err(anyhow::anyhow!(
"Matrix homeserver versions check failed with status {}",
resp.status()
))
}
}
/// Convert a node_id like "!deadbeef" into Matrix localpart "potato_deadbeef".
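    ///
    /// Illustrative mapping (a bare hex id without the leading `!` passes
    /// through unchanged):
    ///
    /// ```text
    /// "!deadbeef" -> "potato_deadbeef"
    /// "cafebabe"  -> "potato_cafebabe"
    /// ```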
pub fn localpart_from_node_id(node_id: &str) -> String {
format!("potato_{}", node_id.trim_start_matches('!'))
}
/// Build a full Matrix user_id from localpart.
pub fn user_id(&self, localpart: &str) -> String {
format!("@{}:{}", localpart, self.cfg.server_name)
}
fn auth_query(&self) -> String {
format!("access_token={}", urlencoding::encode(&self.cfg.as_token))
}
/// Ensure the puppet user exists (register via appservice registration).
pub async fn ensure_user_registered(&self, localpart: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
struct RegisterReq<'a> {
#[serde(rename = "type")]
typ: &'a str,
username: &'a str,
}
let url = format!(
"{}/_matrix/client/v3/register?kind=user&{}",
self.cfg.homeserver,
self.auth_query()
);
let body = RegisterReq {
typ: "m.login.application_service",
username: localpart,
};
let resp = self.http.post(&url).json(&body).send().await?;
if resp.status().is_success() {
Ok(())
} else {
            // Homeservers (e.g. Synapse) typically answer 400 M_USER_IN_USE when the
            // puppet already exists; treat any non-success as "already registered".
Ok(())
}
}
/// Set display name for puppet user.
pub async fn set_display_name(&self, user_id: &str, display_name: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
struct DisplayNameReq<'a> {
displayname: &'a str,
}
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}&{}",
self.cfg.homeserver,
encoded_user,
encoded_user,
self.auth_query()
);
let body = DisplayNameReq {
displayname: display_name,
};
let resp = self.http.put(&url).json(&body).send().await?;
if resp.status().is_success() {
Ok(())
} else {
// Non-fatal.
tracing::warn!(
"Failed to set display name for {}: {}",
user_id,
resp.status()
);
Ok(())
}
}
/// Ensure the puppet user is joined to the configured room.
pub async fn ensure_user_joined_room(&self, user_id: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
struct JoinReq {}
let encoded_room = urlencoding::encode(&self.cfg.room_id);
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/rooms/{}/join?user_id={}&{}",
self.cfg.homeserver,
encoded_room,
encoded_user,
self.auth_query()
);
let resp = self.http.post(&url).json(&JoinReq {}).send().await?;
if resp.status().is_success() {
Ok(())
} else {
let status = resp.status();
let body_snip = resp.text().await.unwrap_or_default();
Err(anyhow::anyhow!(
"Matrix join failed for {} in {} with status {} ({})",
user_id,
self.cfg.room_id,
status,
body_snip
))
}
}
/// Send a text message with HTML formatting into the configured room as puppet user_id.
pub async fn send_formatted_message_as(
&self,
user_id: &str,
body_text: &str,
formatted_body: &str,
) -> anyhow::Result<()> {
#[derive(Serialize)]
struct MsgContent<'a> {
msgtype: &'a str,
body: &'a str,
format: &'a str,
formatted_body: &'a str,
}
let txn_id = self.txn_counter.fetch_add(1, Ordering::SeqCst);
let encoded_room = urlencoding::encode(&self.cfg.room_id);
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}&{}",
self.cfg.homeserver,
encoded_room,
txn_id,
encoded_user,
self.auth_query()
);
let content = MsgContent {
msgtype: "m.text",
body: body_text,
format: "org.matrix.custom.html",
formatted_body,
};
let resp = self.http.put(&url).json(&content).send().await?;
if !resp.status().is_success() {
let status = resp.status();
let body_snip = resp.text().await.unwrap_or_default();
tracing::warn!(
"Failed to send formatted message as {}: status {}, body: {}",
user_id,
status,
body_snip
);
return Err(anyhow::anyhow!(
"Matrix send failed for {} with status {}",
user_id,
status
));
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
fn dummy_cfg() -> MatrixConfig {
MatrixConfig {
homeserver: "https://matrix.example.org".to_string(),
as_token: "AS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
}
}
#[test]
fn localpart_strips_bang_correctly() {
assert_eq!(
MatrixAppserviceClient::localpart_from_node_id("!deadbeef"),
"potato_deadbeef"
);
assert_eq!(
MatrixAppserviceClient::localpart_from_node_id("cafebabe"),
"potato_cafebabe"
);
}
#[test]
fn user_id_builds_from_localpart_and_server_name() {
let http = reqwest::Client::builder().build().unwrap();
let client = MatrixAppserviceClient::new(http, dummy_cfg());
let uid = client.user_id("potato_deadbeef");
assert_eq!(uid, "@potato_deadbeef:example.org");
}
#[tokio::test]
async fn health_check_success() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/_matrix/client/versions")
.with_status(200)
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.health_check().await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn health_check_failure() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/_matrix/client/versions")
.with_status(500)
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.health_check().await;
mock.assert();
assert!(result.is_err());
}
#[test]
fn auth_query_contains_access_token() {
let http = reqwest::Client::builder().build().unwrap();
let client = MatrixAppserviceClient::new(http, dummy_cfg());
let q = client.auth_query();
assert!(q.starts_with("access_token="));
assert!(q.contains("AS_TOKEN"));
}
#[test]
fn test_new_matrix_client() {
let http_client = reqwest::Client::new();
let config = dummy_cfg();
let client = MatrixAppserviceClient::new(http_client, config);
assert_eq!(client.cfg.homeserver, "https://matrix.example.org");
assert_eq!(client.cfg.as_token, "AS_TOKEN");
assert!(client.txn_counter.load(Ordering::SeqCst) > 0);
}
#[tokio::test]
async fn test_ensure_user_registered_success() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user&access_token=AS_TOKEN")
.with_status(200)
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_registered("testuser").await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn test_ensure_user_registered_user_in_use() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user&access_token=AS_TOKEN")
.with_status(400) // M_USER_IN_USE
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_registered("testuser").await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn test_set_display_name_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.with_status(200)
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.set_display_name(user_id, "Test Name").await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn test_set_display_name_fail_is_ok() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.with_status(500)
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.set_display_name(user_id, "Test Name").await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn test_ensure_user_joined_room_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);
let mock = server
.mock("POST", path.as_str())
.match_query(query.as_str())
.with_status(200)
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn test_ensure_user_joined_room_fail() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);
let mock = server
.mock("POST", path.as_str())
.match_query(query.as_str())
.with_status(403)
.create();
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;
mock.assert();
assert!(result.is_err());
}
#[tokio::test]
async fn test_send_formatted_message_as_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let client = {
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
};
let txn_id = client.txn_counter.load(Ordering::SeqCst);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
);
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"body": "`[meta]` hello",
"format": "org.matrix.custom.html",
"formatted_body": "<code>[meta]</code> hello",
})))
.with_status(200)
.create();
let result = client
.send_formatted_message_as(user_id, "`[meta]` hello", "<code>[meta]</code> hello")
.await;
mock.assert();
assert!(result.is_ok());
}
}
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use serde::Deserialize;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
use crate::config::PotatomeshConfig;
#[allow(dead_code)]
#[derive(Debug, Deserialize, Clone)]
pub struct PotatoMessage {
pub id: u64,
pub rx_time: u64,
pub rx_iso: String,
pub from_id: String,
pub to_id: String,
pub channel: u8,
#[serde(default)]
pub portnum: Option<String>,
pub text: String,
#[serde(default)]
pub rssi: Option<i16>,
#[serde(default)]
pub hop_limit: Option<u8>,
pub lora_freq: u32,
pub modem_preset: String,
pub channel_name: String,
#[serde(default)]
pub snr: Option<f32>,
#[serde(default)]
pub reply_id: Option<u64>,
pub node_id: String,
}
#[derive(Debug, Default, Clone)]
pub struct FetchParams {
pub limit: Option<u32>,
pub since: Option<u64>,
}
#[allow(dead_code)]
#[derive(Debug, Deserialize, Clone)]
pub struct PotatoNode {
pub node_id: String,
#[serde(default)]
pub short_name: Option<String>,
pub long_name: String,
#[serde(default)]
pub role: Option<String>,
#[serde(default)]
pub hw_model: Option<String>,
#[serde(default)]
pub last_heard: Option<u64>,
#[serde(default)]
pub first_heard: Option<u64>,
#[serde(default)]
pub latitude: Option<f64>,
#[serde(default)]
pub longitude: Option<f64>,
#[serde(default)]
pub altitude: Option<f64>,
}
#[derive(Clone)]
pub struct PotatoClient {
http: reqwest::Client,
cfg: PotatomeshConfig,
// simple in-memory cache for node metadata
nodes_cache: Arc<RwLock<HashMap<String, PotatoNode>>>,
}
impl PotatoClient {
pub fn new(http: reqwest::Client, cfg: PotatomeshConfig) -> Self {
Self {
http,
cfg,
nodes_cache: Arc::new(RwLock::new(HashMap::new())),
}
}
/// Build the API root; accept either a bare domain or one already ending in `/api`.
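    ///
    /// Illustrative mapping (URLs are hypothetical examples):
    ///
    /// ```text
    /// "https://potatomesh.net"      -> "https://potatomesh.net/api"
    /// "https://potatomesh.net/api/" -> "https://potatomesh.net/api"
    /// ```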
fn api_base(&self) -> String {
let trimmed = self.cfg.base_url.trim_end_matches('/');
if trimmed.ends_with("/api") {
trimmed.to_string()
} else {
format!("{}/api", trimmed)
}
}
fn messages_url(&self) -> String {
format!("{}/messages", self.api_base())
}
fn node_url(&self, hex_id: &str) -> String {
// e.g. https://potatomesh.net/api/nodes/67fc83cb
format!("{}/nodes/{}", self.api_base(), hex_id)
}
/// Basic liveness check against the PotatoMesh API.
pub async fn health_check(&self) -> anyhow::Result<()> {
let base = self
.cfg
.base_url
.trim_end_matches('/')
.trim_end_matches("/api");
let url = format!("{}/version", base);
let resp = self.http.get(&url).send().await?;
if resp.status().is_success() {
tracing::info!("PotatoMesh API healthy at {}", self.cfg.base_url);
Ok(())
} else {
Err(anyhow::anyhow!(
"PotatoMesh health check failed with status {}",
resp.status()
))
}
}
pub async fn fetch_messages(&self, params: FetchParams) -> anyhow::Result<Vec<PotatoMessage>> {
let mut req = self.http.get(self.messages_url());
if let Some(limit) = params.limit {
req = req.query(&[("limit", limit)]);
}
if let Some(since) = params.since {
req = req.query(&[("since", since)]);
}
let resp = req.send().await?.error_for_status()?;
let msgs: Vec<PotatoMessage> = resp.json().await?;
Ok(msgs)
}
pub async fn get_node(&self, node_id_with_bang: &str) -> anyhow::Result<PotatoNode> {
// node_id is like "!67fc83cb" → we need "67fc83cb"
let hex = node_id_with_bang.trim_start_matches('!').to_string();
{
let cache = self.nodes_cache.read().await;
if let Some(n) = cache.get(&hex) {
return Ok(n.clone());
}
}
let url = self.node_url(&hex);
let resp = self.http.get(url).send().await?.error_for_status()?;
let node: PotatoNode = resp.json().await?;
{
let mut cache = self.nodes_cache.write().await;
cache.insert(hex, node.clone());
}
Ok(node)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn deserialize_sample_message_array() {
let json = r#"
[
{
"id": 2947676906,
"rx_time": 1764241436,
"rx_iso": "2025-11-27T11:03:56Z",
"from_id": "!da6556d4",
"to_id": "^all",
"channel": 1,
"portnum": "TEXT_MESSAGE_APP",
"text": "Ping",
"rssi": -111,
"hop_limit": 1,
"lora_freq": 868,
"modem_preset": "MediumFast",
"channel_name": "TEST",
"snr": -9.0,
"node_id": "!06871773"
}
]
"#;
let msgs: Vec<PotatoMessage> = serde_json::from_str(json).expect("valid message json");
assert_eq!(msgs.len(), 1);
let m = &msgs[0];
assert_eq!(m.id, 2947676906);
assert_eq!(m.from_id, "!da6556d4");
assert_eq!(m.node_id, "!06871773");
assert_eq!(m.portnum.as_deref(), Some("TEXT_MESSAGE_APP"));
assert_eq!(m.lora_freq, 868);
assert!((m.snr.unwrap() - (-9.0)).abs() < f32::EPSILON);
}
#[test]
fn deserialize_message_with_missing_optional_fields() {
let json = r#"
[
{
"id": 1,
"rx_time": 0,
"rx_iso": "2025-11-27T11:03:56Z",
"from_id": "!abcd1234",
"to_id": "^all",
"channel": 1,
"text": "Ping",
"lora_freq": 868,
"modem_preset": "MediumFast",
"channel_name": "TEST",
"node_id": "!abcd1234"
}
]
"#;
let msgs: Vec<PotatoMessage> = serde_json::from_str(json).expect("valid message json");
assert_eq!(msgs.len(), 1);
let m = &msgs[0];
assert!(m.portnum.is_none());
assert!(m.rssi.is_none());
assert!(m.hop_limit.is_none());
assert!(m.snr.is_none());
}
#[test]
fn deserialize_sample_node() {
let json = r#"
{
"node_id": "!67fc83cb",
"short_name": "83CB",
"long_name": "Meshtastic 83CB",
"role": "CLIENT_HIDDEN",
"last_heard": 1764250515,
"first_heard": 1758993817,
"last_seen_iso": "2025-11-27T13:35:15Z"
}
"#;
let node: PotatoNode = serde_json::from_str(json).expect("valid node json");
assert_eq!(node.node_id, "!67fc83cb");
assert_eq!(node.short_name.as_deref(), Some("83CB"));
assert_eq!(node.long_name, "Meshtastic 83CB");
assert_eq!(node.role.as_deref(), Some("CLIENT_HIDDEN"));
assert_eq!(node.last_heard, Some(1764250515));
assert_eq!(node.first_heard, Some(1758993817));
assert!(node.latitude.is_none());
}
#[test]
fn node_hex_id_is_stripped_correctly() {
let with_bang = "!deadbeef";
let hex = with_bang.trim_start_matches('!');
assert_eq!(hex, "deadbeef");
let already_hex = "cafebabe";
let hex2 = already_hex.trim_start_matches('!');
assert_eq!(hex2, "cafebabe");
}
#[test]
fn test_new_potato_client() {
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: "http://localhost:8080".to_string(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
assert_eq!(client.cfg.base_url, "http://localhost:8080");
assert_eq!(client.cfg.poll_interval_secs, 60);
}
#[test]
fn test_messages_url() {
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: "http://localhost:8080".to_string(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
assert_eq!(client.messages_url(), "http://localhost:8080/api/messages");
}
#[test]
fn test_messages_url_with_trailing_slash() {
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: "http://localhost:8080/".to_string(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
assert_eq!(client.messages_url(), "http://localhost:8080/api/messages");
}
#[test]
fn test_messages_url_with_existing_api_suffix() {
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: "http://localhost:8080/api/".to_string(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
assert_eq!(client.messages_url(), "http://localhost:8080/api/messages");
}
#[test]
fn test_node_url() {
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: "http://localhost:8080".to_string(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
assert_eq!(
client.node_url("!1234"),
"http://localhost:8080/api/nodes/!1234"
);
}
#[tokio::test]
async fn test_fetch_messages_success() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any) // allow optional query params
.with_status(200)
.with_header("content-type", "application/json")
.with_body(
r#"
[
{
"id": 2947676906, "rx_time": 1764241436, "rx_iso": "2025-11-27T11:03:56Z",
"from_id": "!da6556d4", "to_id": "^all", "channel": 1,
"portnum": "TEXT_MESSAGE_APP", "text": "Ping", "rssi": -111,
"hop_limit": 1, "lora_freq": 868, "modem_preset": "MediumFast",
"channel_name": "TEST", "snr": -9.0, "node_id": "!06871773"
}
]
"#,
)
.create();
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
let result = client.fetch_messages(FetchParams::default()).await;
mock.assert();
assert!(result.is_ok());
let messages = result.unwrap();
assert_eq!(messages.len(), 1);
assert_eq!(messages[0].id, 2947676906);
}
#[tokio::test]
async fn test_health_check_success() {
let mut server = mockito::Server::new_async().await;
let mock = server.mock("GET", "/version").with_status(200).create();
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
let result = client.health_check().await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn test_health_check_strips_api_suffix() {
let mut server = mockito::Server::new_async().await;
let mock = server.mock("GET", "/version").with_status(200).create();
let http_client = reqwest::Client::new();
let mut base = server.url();
base.push_str("/api");
let config = PotatomeshConfig {
base_url: base,
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
let result = client.health_check().await;
mock.assert();
assert!(result.is_ok());
}
#[tokio::test]
async fn test_health_check_failure() {
let mut server = mockito::Server::new_async().await;
let mock = server.mock("GET", "/version").with_status(500).create();
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
let result = client.health_check().await;
mock.assert();
assert!(result.is_err());
}
#[tokio::test]
async fn test_fetch_messages_error() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any)
.with_status(500)
.create();
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
let result = client.fetch_messages(FetchParams::default()).await;
mock.assert();
assert!(result.is_err());
}
#[tokio::test]
async fn test_fetch_messages_with_limit_and_since() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/messages")
.match_query("limit=10&since=123")
.with_status(200)
.with_header("content-type", "application/json")
.with_body("[]")
.create();
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
let params = FetchParams {
limit: Some(10),
since: Some(123),
};
let result = client.fetch_messages(params).await;
mock.assert();
assert!(result.is_ok());
assert!(result.unwrap().is_empty());
}
#[tokio::test]
async fn test_get_node_cache_hit() {
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: "http://localhost:8080".to_string(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
let node = PotatoNode {
node_id: "!1234".to_string(),
short_name: Some("test".to_string()),
long_name: "test node".to_string(),
role: None,
hw_model: None,
last_heard: None,
first_heard: None,
latitude: None,
longitude: None,
altitude: None,
};
client
.nodes_cache
.write()
.await
.insert("1234".to_string(), node.clone());
let result = client.get_node("!1234").await;
assert!(result.is_ok());
let got = result.unwrap();
assert_eq!(got.node_id, "!1234");
assert_eq!(got.short_name.unwrap(), "test");
}
#[tokio::test]
async fn test_get_node_cache_miss() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/nodes/1234")
.match_query(mockito::Matcher::Any)
.with_status(200)
.with_header("content-type", "application/json")
.with_body(
r#"
{
"node_id": "!1234", "short_name": "test", "long_name": "test node",
"role": "test", "hw_model": "test", "last_heard": 1, "first_heard": 1,
"latitude": 1.0, "longitude": 1.0, "altitude": 1.0
}
"#,
)
.create();
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
// first call, should miss cache and hit the server
let result = client.get_node("!1234").await;
mock.assert();
assert!(result.is_ok());
// second call, should hit cache
let result2 = client.get_node("!1234").await;
assert!(result2.is_ok());
// mock.assert() above already verified exactly one request reached the server;
// the second call must therefore have been served from the cache
}
#[tokio::test]
async fn test_get_node_error() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/nodes/1234")
.with_status(500)
.create();
let http_client = reqwest::Client::new();
let config = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 60,
};
let client = PotatoClient::new(http_client, config);
let result = client.get_node("!1234").await;
mock.assert();
assert!(result.is_err());
}
}
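The `test_messages_url*` cases above all expect the same joined endpoint regardless of whether the configured base URL carries a trailing slash or an existing `/api` suffix. A minimal Python sketch of that normalization (illustrative only; the crate's actual Rust implementation may differ):

```python
def messages_url(base_url: str) -> str:
    """Join the messages endpoint onto a normalized base URL."""
    # Drop any trailing slash, then an existing "/api" suffix, so the
    # endpoint path is appended exactly once.
    trimmed = base_url.rstrip("/")
    if trimmed.endswith("/api"):
        trimmed = trimmed[: -len("/api")]
    return trimmed + "/api/messages"
```

All three configured forms collapse to the same URL, which is what lets the three tests share one expected string.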
@@ -0,0 +1,437 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.daemon`."""
from __future__ import annotations
import sys
import threading
import types
from pathlib import Path
from typing import Any
import pytest
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
from data.mesh_ingestor import daemon
class FakeEvent:
"""Test double for :class:`threading.Event` that can auto-set itself."""
instances: list["FakeEvent"] = []
def __init__(self, *, auto_set_on_wait: bool = False):
self._is_set = False
self._auto_set_on_wait = auto_set_on_wait
self.wait_calls: list[Any] = []
FakeEvent.instances.append(self)
def set(self) -> None:
"""Mark the event as set."""
self._is_set = True
def is_set(self) -> bool:
"""Return whether the event is currently set."""
return self._is_set
def wait(self, timeout: float | None = None) -> bool:
"""Record waits and optionally auto-set the flag."""
self.wait_calls.append(timeout)
if self._auto_set_on_wait:
self._is_set = True
return self._is_set
class AutoSetEvent(FakeEvent):
"""Event variant that automatically sets on each wait call."""
def __init__(self): # noqa: D401 - short initializer docstring handled by class
super().__init__(auto_set_on_wait=True)
@pytest.fixture(autouse=True)
def reset_fake_events():
"""Ensure :class:`FakeEvent` registry is cleared between tests."""
FakeEvent.instances.clear()
yield
FakeEvent.instances.clear()
def test_event_wait_default_detection(monkeypatch):
"""``_event_wait_allows_default_timeout`` matches defaulted signatures."""
assert daemon._event_wait_allows_default_timeout() is True
class _NoDefaultEvent:
def wait(self, timeout): # type: ignore[override]
return bool(timeout)
monkeypatch.setattr(
daemon, "threading", types.SimpleNamespace(Event=_NoDefaultEvent)
)
assert daemon._event_wait_allows_default_timeout() is False
def test_subscribe_receive_topics(monkeypatch):
"""Subscribing to receive topics returns the exact topic list."""
subscribed: list[str] = []
def _record_subscription(_handler, topic):
subscribed.append(topic)
monkeypatch.setattr(
daemon, "pub", types.SimpleNamespace(subscribe=_record_subscription)
)
assert daemon._subscribe_receive_topics() == list(daemon._RECEIVE_TOPICS)
assert subscribed == list(daemon._RECEIVE_TOPICS)
def test_node_items_snapshot_handles_mutation(monkeypatch):
"""Snapshots tolerate temporary runtime errors while iterating."""
class MutatingMapping(dict):
def __bool__(self):
return True
def items(self): # type: ignore[override]
raise RuntimeError("dictionary changed size during iteration")
monkeypatch.setattr(daemon.time, "sleep", lambda _: None)
assert daemon._node_items_snapshot({"a": 1}) == [("a", 1)]
assert daemon._node_items_snapshot(MutatingMapping(), retries=1) is None
class IteratingMapping:
def __init__(self):
self.calls = 0
self._data = {"x": 10, "y": 20}
def __iter__(self):
self.calls += 1
if self.calls == 1:
raise RuntimeError("dictionary changed size during iteration")
return iter(self._data)
def __getitem__(self, key):
return self._data[key]
mapping = IteratingMapping()
assert daemon._node_items_snapshot(mapping, retries=2) == [("x", 10), ("y", 20)]
def test_close_interface_respects_timeout(monkeypatch):
"""Long-running close calls emit a timeout debug log."""
log_calls = []
monkeypatch.setattr(daemon.config, "_CLOSE_TIMEOUT_SECS", 0.01)
monkeypatch.setattr(
daemon.config, "_debug_log", lambda *args, **kwargs: log_calls.append(kwargs)
)
blocker = threading.Event()
class SlowInterface:
def close(self):
blocker.wait(timeout=0.1)
daemon._close_interface(SlowInterface())
assert any("timeout_seconds" in entry for entry in log_calls)
def test_close_interface_immediate_path(monkeypatch):
"""A zero timeout calls ``close`` inline without threading."""
flags = {"called": False}
monkeypatch.setattr(daemon.config, "_CLOSE_TIMEOUT_SECS", 0)
class ImmediateInterface:
def close(self):
flags["called"] = True
daemon._close_interface(ImmediateInterface())
assert flags["called"] is True
def test_ble_interface_detection():
"""Detect BLE module names reliably."""
class BLE:
__module__ = "meshtastic.ble_interface"
class NonBLE:
__module__ = "meshtastic.serial"
assert daemon._is_ble_interface(BLE()) is True
assert daemon._is_ble_interface(NonBLE()) is False
assert daemon._is_ble_interface(None) is False
def test_process_ingestor_heartbeat_with_extracted_host(monkeypatch):
"""Host id extraction triggers heartbeat announcement flag updates."""
host_ids: list[str | None] = [None]
ingestor_ids: list[str | None] = []
queued: list[bool] = []
monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: host_ids[0])
monkeypatch.setattr(
daemon.interfaces, "_extract_host_node_id", lambda iface: "!abcd"
)
monkeypatch.setattr(
daemon.handlers,
"register_host_node_id",
lambda node: host_ids.__setitem__(0, node),
)
monkeypatch.setattr(daemon.ingestors, "set_ingestor_node_id", ingestor_ids.append)
monkeypatch.setattr(
daemon.ingestors,
"queue_ingestor_heartbeat",
lambda force: queued.append(force) or True,
)
assert (
daemon._process_ingestor_heartbeat(object(), ingestor_announcement_sent=False)
is True
)
assert host_ids[0] == "!abcd"
assert ingestor_ids[-1] == "!abcd"
assert queued[-1] is True
monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: "!abcd")
monkeypatch.setattr(
daemon.ingestors,
"queue_ingestor_heartbeat",
lambda force: queued.append(force) or False,
)
assert (
daemon._process_ingestor_heartbeat(object(), ingestor_announcement_sent=True)
is True
)
assert queued[-1] is False
def test_connected_state_branches(monkeypatch):
"""Connection state resolves across multiple attribute forms."""
event = threading.Event()
event.set()
assert daemon._connected_state(event) is True
class CallableCandidate:
def __call__(self):
return False
assert daemon._connected_state(CallableCandidate()) is False
class BooleanCandidate:
def __bool__(self):
raise RuntimeError("cannot bool")
assert daemon._connected_state(BooleanCandidate()) is None
class HasIsSet:
def is_set(self):
raise RuntimeError("broken")
assert daemon._connected_state(HasIsSet()) is None
def _configure_common_defaults(
monkeypatch, *, energy_saving: bool = False, inactivity: float = 0.0
):
"""Set fast configuration defaults shared by daemon integration tests."""
monkeypatch.setattr(daemon.config, "SNAPSHOT_SECS", 0)
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
monkeypatch.setattr(daemon.config, "_CLOSE_TIMEOUT_SECS", 0)
monkeypatch.setattr(daemon.config, "ENERGY_SAVING", energy_saving)
monkeypatch.setattr(
daemon.config, "_ENERGY_ONLINE_DURATION_SECS", 0 if energy_saving else 0.0
)
monkeypatch.setattr(daemon.config, "_ENERGY_SLEEP_SECS", 0.0)
monkeypatch.setattr(daemon.config, "_INGESTOR_HEARTBEAT_SECS", 0)
monkeypatch.setattr(daemon.config, "_INACTIVITY_RECONNECT_SECS", inactivity)
monkeypatch.setattr(daemon.config, "CONNECTION", "serial0")
class DummyInterface:
"""Lightweight mesh interface stand-in used for daemon integration tests."""
def __init__(self, *, nodes=None, is_connected=True, client_present=True):
self.nodes = nodes if nodes is not None else {"!node": {"id": 1}}
self.isConnected = is_connected
self.client = object() if client_present else None
def close(self):
return None
def test_main_happy_path(monkeypatch):
"""The main loop processes snapshots and heartbeats once before stopping."""
_configure_common_defaults(monkeypatch)
monkeypatch.setattr(
daemon,
"threading",
types.SimpleNamespace(
Event=AutoSetEvent,
current_thread=threading.current_thread,
main_thread=threading.main_thread,
),
)
monkeypatch.setattr(
daemon, "pub", types.SimpleNamespace(subscribe=lambda *_args, **_kwargs: None)
)
monkeypatch.setattr(
daemon.interfaces,
"_create_serial_interface",
lambda candidate: (DummyInterface(), candidate),
)
monkeypatch.setattr(daemon.interfaces, "_ensure_radio_metadata", lambda iface: None)
monkeypatch.setattr(
daemon.interfaces, "_ensure_channel_metadata", lambda iface: None
)
monkeypatch.setattr(
daemon.interfaces, "_extract_host_node_id", lambda iface: "!host"
)
host_id = {"value": None}
monkeypatch.setattr(
daemon.handlers,
"register_host_node_id",
lambda node: host_id.__setitem__("value", node),
)
monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: host_id["value"])
monkeypatch.setattr(daemon.handlers, "upsert_node", lambda *_args, **_kwargs: None)
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
heartbeats: list[bool] = []
monkeypatch.setattr(
daemon.ingestors, "set_ingestor_node_id", lambda *_args, **_kwargs: None
)
monkeypatch.setattr(
daemon.ingestors,
"queue_ingestor_heartbeat",
lambda force: heartbeats.append(force) or True,
)
daemon.main()
assert heartbeats
assert host_id["value"] == "!host"
assert FakeEvent.instances and FakeEvent.instances[0].is_set() is True
def test_main_energy_saving_disconnect(monkeypatch):
"""Energy saving mode disconnects and sleeps when deadlines expire."""
_configure_common_defaults(monkeypatch, energy_saving=True)
monkeypatch.setattr(
daemon,
"threading",
types.SimpleNamespace(
Event=AutoSetEvent,
current_thread=threading.current_thread,
main_thread=threading.main_thread,
),
)
monkeypatch.setattr(
daemon, "pub", types.SimpleNamespace(subscribe=lambda *_args, **_kwargs: None)
)
monkeypatch.setattr(
daemon.interfaces,
"_create_serial_interface",
lambda candidate: (DummyInterface(), candidate),
)
monkeypatch.setattr(daemon.interfaces, "_ensure_radio_metadata", lambda iface: None)
monkeypatch.setattr(
daemon.interfaces, "_ensure_channel_metadata", lambda iface: None
)
monkeypatch.setattr(
daemon.interfaces, "_extract_host_node_id", lambda iface: "!host"
)
monkeypatch.setattr(
daemon.handlers, "register_host_node_id", lambda *_args, **_kwargs: None
)
monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: "!host")
monkeypatch.setattr(daemon.handlers, "upsert_node", lambda *_args, **_kwargs: None)
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
monkeypatch.setattr(
daemon.ingestors, "set_ingestor_node_id", lambda *_args, **_kwargs: None
)
monkeypatch.setattr(
daemon.ingestors, "queue_ingestor_heartbeat", lambda *_args, **_kwargs: True
)
daemon.main()
assert FakeEvent.instances and FakeEvent.instances[0].is_set() is True
def test_main_inactivity_reconnect(monkeypatch):
"""Inactivity triggers reconnect attempts and respects stop events."""
_configure_common_defaults(monkeypatch, inactivity=0.5)
monkeypatch.setattr(
daemon,
"threading",
types.SimpleNamespace(
Event=AutoSetEvent,
current_thread=threading.current_thread,
main_thread=threading.main_thread,
),
)
monkeypatch.setattr(
daemon, "pub", types.SimpleNamespace(subscribe=lambda *_args, **_kwargs: None)
)
interface_cycle = iter(
[DummyInterface(is_connected=False), DummyInterface(is_connected=True)]
)
monkeypatch.setattr(
daemon.interfaces,
"_create_serial_interface",
lambda candidate: (next(interface_cycle), candidate),
)
monkeypatch.setattr(daemon.interfaces, "_ensure_radio_metadata", lambda iface: None)
monkeypatch.setattr(
daemon.interfaces, "_ensure_channel_metadata", lambda iface: None
)
monkeypatch.setattr(
daemon.interfaces, "_extract_host_node_id", lambda iface: "!host"
)
monkeypatch.setattr(
daemon.handlers, "register_host_node_id", lambda *_args, **_kwargs: None
)
monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: "!host")
monkeypatch.setattr(daemon.handlers, "upsert_node", lambda *_args, **_kwargs: None)
monotonic_calls = iter([0.0, 1.0, 2.0, 3.0, 4.0])
monkeypatch.setattr(daemon.time, "monotonic", lambda: next(monotonic_calls))
monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: 0.0)
monkeypatch.setattr(
daemon.ingestors, "set_ingestor_node_id", lambda *_args, **_kwargs: None
)
monkeypatch.setattr(
daemon.ingestors, "queue_ingestor_heartbeat", lambda *_args, **_kwargs: True
)
daemon.main()
assert any(event.is_set() for event in FakeEvent.instances)
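The `AutoSetEvent` double used throughout these tests makes a daemon-style polling loop terminate after exactly one iteration, because the first `wait()` flips the stop flag. A self-contained sketch of the pattern (the loop and names are illustrative, not the daemon's actual code):

```python
class AutoSetEvent:
    """Stop-event double: the first wait() call sets the flag."""

    def __init__(self):
        self._is_set = False
        self.wait_calls = []

    def set(self):
        self._is_set = True

    def is_set(self):
        return self._is_set

    def wait(self, timeout=None):
        # Record the timeout for later assertions, then auto-set so the
        # loop under test exits after one pass.
        self.wait_calls.append(timeout)
        self._is_set = True
        return self._is_set


def run_loop(stop_event, work):
    """Run work() until stop_event is set; returns the iteration count."""
    iterations = 0
    while not stop_event.is_set():
        work()
        iterations += 1
        stop_event.wait(timeout=1.0)
    return iterations
```

Injecting the double via `monkeypatch.setattr(daemon, "threading", ...)` as above keeps the production loop untouched while making `main()` deterministic and fast.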
@@ -20,6 +20,7 @@ import re
import sys
import threading
import types
import time
"""End-to-end tests covering the mesh ingestion package."""
@@ -214,6 +215,9 @@ def mesh_module(monkeypatch):
if attr in module.__dict__:
delattr(module, attr)
module.channels._reset_channel_cache()
module.ingestors.STATE.start_time = int(time.time())
module.ingestors.STATE.last_heartbeat = None
module.ingestors.STATE.node_id = None
yield module
@@ -223,6 +227,117 @@ def mesh_module(monkeypatch):
sys.modules.pop(module_name, None)
def test_instance_domain_prefers_primary_env(mesh_module, monkeypatch):
"""Ensure the ingestor prefers ``INSTANCE_DOMAIN`` over the legacy variable."""
monkeypatch.setenv("INSTANCE_DOMAIN", "https://new.example")
monkeypatch.setenv("POTATOMESH_INSTANCE", "https://legacy.example")
try:
refreshed_instance = mesh_module.config._resolve_instance_domain()
mesh_module.config.INSTANCE = refreshed_instance
mesh_module.INSTANCE = refreshed_instance
assert refreshed_instance == "https://new.example"
assert mesh_module.INSTANCE == "https://new.example"
finally:
monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
mesh_module.config.INSTANCE = mesh_module.config._resolve_instance_domain()
mesh_module.INSTANCE = mesh_module.config.INSTANCE
def test_instance_domain_falls_back_to_legacy(mesh_module, monkeypatch):
"""Verify ``POTATOMESH_INSTANCE`` is used when ``INSTANCE_DOMAIN`` is unset."""
monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
monkeypatch.setenv("POTATOMESH_INSTANCE", "https://legacy-only.example")
try:
refreshed_instance = mesh_module.config._resolve_instance_domain()
mesh_module.config.INSTANCE = refreshed_instance
mesh_module.INSTANCE = refreshed_instance
assert refreshed_instance == "https://legacy-only.example"
assert mesh_module.INSTANCE == "https://legacy-only.example"
finally:
monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
mesh_module.config.INSTANCE = mesh_module.config._resolve_instance_domain()
mesh_module.INSTANCE = mesh_module.config.INSTANCE
def test_instance_domain_infers_scheme_for_hostnames(mesh_module, monkeypatch):
"""Ensure bare hostnames are promoted to HTTPS URLs for ingestion."""
monkeypatch.setenv("INSTANCE_DOMAIN", "mesh.example.org")
monkeypatch.delenv("POTATOMESH_INSTANCE", raising=False)
try:
refreshed_instance = mesh_module.config._resolve_instance_domain()
mesh_module.config.INSTANCE = refreshed_instance
mesh_module.INSTANCE = refreshed_instance
assert refreshed_instance == "https://mesh.example.org"
assert mesh_module.INSTANCE == "https://mesh.example.org"
finally:
monkeypatch.delenv("INSTANCE_DOMAIN", raising=False)
mesh_module.config.INSTANCE = mesh_module.config._resolve_instance_domain()
mesh_module.INSTANCE = mesh_module.config.INSTANCE
def test_parse_channel_names_applies_allowlist(mesh_module):
"""Ensure allowlists reuse the shared channel parser."""
mesh = mesh_module
previous_allowed = mesh.ALLOWED_CHANNELS
try:
parsed = mesh.config._parse_channel_names(" Primary ,Chat ,primary , Ops ")
mesh.ALLOWED_CHANNELS = parsed
assert parsed == ("Primary", "Chat", "Ops")
assert mesh.channels.allowed_channel_names() == ("Primary", "Chat", "Ops")
assert mesh.channels.is_allowed_channel("chat")
assert mesh.channels.is_allowed_channel(" ops ")
assert not mesh.channels.is_allowed_channel("unknown")
assert not mesh.channels.is_allowed_channel(None)
assert mesh.config._parse_channel_names("") == ()
finally:
mesh.ALLOWED_CHANNELS = previous_allowed
def test_allowed_channel_defaults_allow_all(mesh_module):
"""Ensure unset allowlists do not block any channels."""
mesh = mesh_module
previous_allowed = mesh.ALLOWED_CHANNELS
try:
mesh.ALLOWED_CHANNELS = ()
assert mesh.channels.is_allowed_channel("Any")
finally:
mesh.ALLOWED_CHANNELS = previous_allowed
def test_parse_hidden_channels_deduplicates_names(mesh_module):
"""Ensure hidden channel parsing strips blanks and deduplicates."""
mesh = mesh_module
previous_hidden = mesh.HIDDEN_CHANNELS
try:
parsed = mesh.config._parse_hidden_channels(" Chat , ,Secret ,chat")
mesh.HIDDEN_CHANNELS = parsed
assert parsed == ("Chat", "Secret")
assert mesh.channels.hidden_channel_names() == ("Chat", "Secret")
assert mesh.channels.is_hidden_channel(" chat ")
assert not mesh.channels.is_hidden_channel("unknown")
assert mesh.config._parse_hidden_channels("") == ()
finally:
mesh.HIDDEN_CHANNELS = previous_hidden
def test_subscribe_receive_topics_covers_all_handlers(mesh_module, monkeypatch):
mesh = mesh_module
daemon_mod = sys.modules["data.mesh_ingestor.daemon"]
@@ -1814,6 +1929,110 @@ def test_store_packet_dict_allows_primary_channel_broadcast(mesh_module, monkeyp
assert priority == mesh._MESSAGE_POST_PRIORITY
def test_store_packet_dict_accepts_routing_app_messages(mesh_module, monkeypatch):
"""Ensure routing app payloads are treated as message posts."""
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
packet = {
"id": 333,
"rxTime": 999,
"fromId": "!node",
"toId": "^all",
"channel": 0,
"decoded": {"payload": "GAA=", "portnum": "ROUTING_APP"},
}
mesh.store_packet_dict(packet)
assert captured, "Expected routing packet to be stored"
path, payload, priority = captured[0]
assert path == "/api/messages"
assert payload["portnum"] == "ROUTING_APP"
assert payload["text"] == "GAA="
assert payload["channel"] == 0
assert payload["encrypted"] is None
assert priority == mesh._MESSAGE_POST_PRIORITY
def test_store_packet_dict_serializes_routing_payloads(mesh_module, monkeypatch):
"""Ensure routing payloads are serialized when text is absent."""
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
packet = {
"id": 334,
"rxTime": 1000,
"fromId": "!node",
"toId": "^all",
"channel": 0,
"decoded": {
"payload": b"\x01\x02",
"portnum": "ROUTING_APP",
},
}
mesh.store_packet_dict(packet)
assert captured, "Expected routing packet to be stored"
_, payload, _ = captured[0]
assert payload["text"] == "AQI="
captured.clear()
packet["decoded"]["payload"] = {"kind": "ack"}
mesh.store_packet_dict(packet)
assert captured, "Expected routing packet to be stored"
_, payload, _ = captured[0]
assert payload["text"] == '{"kind": "ack"}'
captured.clear()
packet["decoded"]["portnum"] = 7
packet["decoded"]["payload"] = b"\x00"
packet["decoded"]["routing"] = {"errorReason": "NONE"}
mesh.store_packet_dict(packet)
assert captured, "Expected numeric routing packet to be stored"
_, payload, _ = captured[0]
assert payload["text"] == "AA=="
def test_portnum_candidates_reads_enum_values(mesh_module, monkeypatch):
"""Ensure portnum candidates include enum and constants when available."""
mesh = mesh_module
module_name = "meshtastic.portnums_pb2"
class DummyPortNum:
@staticmethod
def Value(name):
if name == "ROUTING_APP":
return 7
raise KeyError(name)
dummy_module = types.SimpleNamespace(PortNum=DummyPortNum, ROUTING_APP=8)
monkeypatch.setitem(sys.modules, module_name, dummy_module)
candidates = mesh.handlers._portnum_candidates("ROUTING_APP")
assert 7 in candidates
assert 8 in candidates
def test_store_packet_dict_appends_channel_name(mesh_module, monkeypatch, capsys):
mesh = mesh_module
mesh.channels._reset_channel_cache()
@@ -1874,6 +2093,146 @@ def test_store_packet_dict_appends_channel_name(mesh_module, monkeypatch, capsys
assert "channel_display='Chat'" in log_output
def test_store_packet_dict_skips_hidden_channel(mesh_module, monkeypatch, capsys):
mesh = mesh_module
mesh.channels._reset_channel_cache()
mesh.config.MODEM_PRESET = None
class DummyInterface:
def __init__(self) -> None:
self.localNode = SimpleNamespace(
channels=[
SimpleNamespace(
role=1,
settings=SimpleNamespace(name="Primary"),
),
SimpleNamespace(
role=2,
index=5,
settings=SimpleNamespace(name="Chat"),
),
]
)
def waitForConfig(self):
return None
mesh.channels.capture_from_interface(DummyInterface())
capsys.readouterr()
captured: list[tuple[str, dict, int]] = []
ignored: list[str] = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
monkeypatch.setattr(
mesh.handlers,
"_record_ignored_packet",
lambda packet, *, reason: ignored.append(reason),
)
previous_debug = mesh.config.DEBUG
previous_hidden = mesh.HIDDEN_CHANNELS
previous_allowed = mesh.ALLOWED_CHANNELS
mesh.config.DEBUG = True
mesh.DEBUG = True
mesh.ALLOWED_CHANNELS = ("Chat",)
mesh.HIDDEN_CHANNELS = ("Chat",)
try:
packet = {
"id": "999",
"rxTime": 24_680,
"from": "!sender",
"to": "^all",
"channel": 5,
"decoded": {"text": "hidden msg", "portnum": 1},
}
mesh.store_packet_dict(packet)
assert captured == []
assert ignored == ["hidden-channel"]
assert "Ignored packet on hidden channel" in capsys.readouterr().out
finally:
mesh.HIDDEN_CHANNELS = previous_hidden
mesh.ALLOWED_CHANNELS = previous_allowed
mesh.config.DEBUG = previous_debug
mesh.DEBUG = previous_debug
def test_store_packet_dict_skips_disallowed_channel(mesh_module, monkeypatch, capsys):
mesh = mesh_module
mesh.channels._reset_channel_cache()
mesh.config.MODEM_PRESET = None
class DummyInterface:
def __init__(self) -> None:
self.localNode = SimpleNamespace(
channels=[
SimpleNamespace(
role=1,
settings=SimpleNamespace(name="Primary"),
),
SimpleNamespace(
role=2,
index=5,
settings=SimpleNamespace(name="Chat"),
),
]
)
def waitForConfig(self):
return None
mesh.channels.capture_from_interface(DummyInterface())
capsys.readouterr()
captured: list[tuple[str, dict, int]] = []
ignored: list[str] = []
monkeypatch.setattr(
mesh,
"_queue_post_json",
lambda path, payload, *, priority: captured.append((path, payload, priority)),
)
monkeypatch.setattr(
mesh.handlers,
"_record_ignored_packet",
lambda packet, *, reason: ignored.append(reason),
)
previous_debug = mesh.config.DEBUG
previous_allowed = mesh.ALLOWED_CHANNELS
previous_hidden = mesh.HIDDEN_CHANNELS
mesh.config.DEBUG = True
mesh.DEBUG = True
mesh.ALLOWED_CHANNELS = ("Primary",)
mesh.HIDDEN_CHANNELS = ()
try:
packet = {
"id": "1001",
"rxTime": 25_680,
"from": "!sender",
"to": "^all",
"channel": 5,
"decoded": {"text": "disallowed msg", "portnum": 1},
}
mesh.store_packet_dict(packet)
assert captured == []
assert ignored == ["disallowed-channel"]
assert "Ignored packet on disallowed channel" in capsys.readouterr().out
finally:
mesh.ALLOWED_CHANNELS = previous_allowed
mesh.HIDDEN_CHANNELS = previous_hidden
mesh.config.DEBUG = previous_debug
mesh.DEBUG = previous_debug
def test_store_packet_dict_includes_encrypted_payload(mesh_module, monkeypatch):
mesh = mesh_module
captured = []
@@ -2385,6 +2744,62 @@ def test_parse_ble_target_rejects_invalid_values(mesh_module):
assert mesh._parse_ble_target("zz:zz:zz:zz:zz:zz") is None
def test_parse_ble_target_accepts_mac_addresses(mesh_module):
"""Test that _parse_ble_target accepts valid MAC address format (Linux/Windows)."""
mesh = mesh_module
# Valid MAC addresses should be accepted and normalized to uppercase
assert mesh._parse_ble_target("ED:4D:9E:95:CF:60") == "ED:4D:9E:95:CF:60"
assert mesh._parse_ble_target("ed:4d:9e:95:cf:60") == "ED:4D:9E:95:CF:60"
assert mesh._parse_ble_target("AA:BB:CC:DD:EE:FF") == "AA:BB:CC:DD:EE:FF"
assert mesh._parse_ble_target("00:11:22:33:44:55") == "00:11:22:33:44:55"
# With whitespace
assert mesh._parse_ble_target(" ED:4D:9E:95:CF:60 ") == "ED:4D:9E:95:CF:60"
# Invalid MAC addresses should be rejected
assert mesh._parse_ble_target("ED:4D:9E:95:CF") is None # Too short
assert mesh._parse_ble_target("ED:4D:9E:95:CF:60:AB") is None # Too long
assert mesh._parse_ble_target("GG:HH:II:JJ:KK:LL") is None # Invalid hex
def test_parse_ble_target_accepts_uuids(mesh_module):
"""Test that _parse_ble_target accepts valid UUID format (macOS)."""
mesh = mesh_module
# Valid UUIDs should be accepted and normalized to uppercase
assert (
mesh._parse_ble_target("C0AEA92F-045E-9B82-C9A6-A1FD822B3A9E")
== "C0AEA92F-045E-9B82-C9A6-A1FD822B3A9E"
)
assert (
mesh._parse_ble_target("c0aea92f-045e-9b82-c9a6-a1fd822b3a9e")
== "C0AEA92F-045E-9B82-C9A6-A1FD822B3A9E"
)
assert (
mesh._parse_ble_target("12345678-1234-5678-9ABC-DEF012345678")
== "12345678-1234-5678-9ABC-DEF012345678"
)
# With whitespace
assert (
mesh._parse_ble_target(" C0AEA92F-045E-9B82-C9A6-A1FD822B3A9E ")
== "C0AEA92F-045E-9B82-C9A6-A1FD822B3A9E"
)
# Invalid UUIDs should be rejected
assert mesh._parse_ble_target("C0AEA92F-045E-9B82-C9A6") is None # Too short
assert (
mesh._parse_ble_target("C0AEA92F-045E-9B82-C9A6-A1FD822B3A9E-EXTRA") is None
) # Too long
assert (
mesh._parse_ble_target("GGGGGGGG-GGGG-GGGG-GGGG-GGGGGGGGGGGG") is None
) # Invalid hex
assert (
mesh._parse_ble_target("C0AEA92F:045E:9B82:C9A6:A1FD822B3A9E") is None
) # Wrong separator
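The test vectors above pin down the `_parse_ble_target` contract: accept colon-separated MAC addresses (Linux/Windows) or dash-separated UUIDs (macOS), trim whitespace, normalize to uppercase, and reject everything else. A minimal sketch of a validator satisfying those vectors (the name `parse_ble_target` and the regexes are illustrative, not the project's implementation):

```python
import re

# MAC: six colon-separated hex octets; UUID: 8-4-4-4-12 hex groups.
_MAC_RE = re.compile(r"^[0-9A-Fa-f]{2}(:[0-9A-Fa-f]{2}){5}$")
_UUID_RE = re.compile(
    r"^[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}"
    r"-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}$"
)

def parse_ble_target(value):
    """Return the normalized BLE target, or None when the format is invalid."""
    trimmed = value.strip()
    if _MAC_RE.match(trimmed) or _UUID_RE.match(trimmed):
        return trimmed.upper()
    return None
```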
def test_parse_network_target_additional_cases(mesh_module):
mesh = mesh_module
@@ -2517,6 +2932,133 @@ def test_queue_post_json_skips_when_active(mesh_module, monkeypatch):
mesh._clear_post_queue()
def test_process_ingestor_heartbeat_updates_flag(mesh_module, monkeypatch):
mesh = mesh_module
mesh.ingestors.STATE.last_heartbeat = None
mesh.ingestors.STATE.node_id = None
mesh.handlers.register_host_node_id(None)
recorded = {"force": None, "count": 0}
def fake_queue_ingestor_heartbeat(*, force):
recorded["force"] = force
recorded["count"] += 1
return True
monkeypatch.setattr(
mesh.ingestors, "queue_ingestor_heartbeat", fake_queue_ingestor_heartbeat
)
class DummyIface:
def __init__(self):
self.myNodeNum = 0xCAFEBABE
updated = mesh._process_ingestor_heartbeat(
DummyIface(), ingestor_announcement_sent=False
)
assert updated is True
assert recorded["force"] is True
assert recorded["count"] == 1
assert mesh.handlers.host_node_id() == "!cafebabe"
def test_process_ingestor_heartbeat_skips_without_host(mesh_module, monkeypatch):
mesh = mesh_module
mesh.handlers.register_host_node_id(None)
mesh.ingestors.STATE.node_id = None
mesh.ingestors.STATE.last_heartbeat = None
monkeypatch.setattr(mesh.ingestors, "queue_ingestor_heartbeat", lambda **_: False)
updated = mesh._process_ingestor_heartbeat(None, ingestor_announcement_sent=False)
assert updated is False
assert mesh.ingestors.STATE.node_id is None
assert mesh.ingestors.STATE.last_heartbeat is None
def test_ingestor_heartbeat_respects_interval_override(mesh_module, monkeypatch):
mesh = mesh_module
mesh.ingestors.STATE.start_time = 100
mesh.ingestors.STATE.last_heartbeat = 1_000
mesh.ingestors.STATE.node_id = "!abcd0001"
mesh._INGESTOR_HEARTBEAT_SECS = 10_000
monkeypatch.setattr(mesh.ingestors.time, "time", lambda: 2_000)
sent = mesh.ingestors.queue_ingestor_heartbeat()
assert sent is False
assert mesh.ingestors.STATE.last_heartbeat == 1_000
def test_setting_ingestor_attr_propagates(mesh_module):
mesh = mesh_module
mesh._INGESTOR_HEARTBEAT_SECS = 123
assert mesh.config._INGESTOR_HEARTBEAT_SECS == 123
def test_queue_ingestor_heartbeat_requires_node_id(mesh_module, monkeypatch):
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh.queue,
"_queue_post_json",
lambda path, payload, *, priority, send=None: captured.append(
(path, payload, priority)
),
)
mesh.ingestors.STATE.node_id = None
mesh.ingestors.STATE.last_heartbeat = None
queued = mesh.ingestors.queue_ingestor_heartbeat(force=True)
assert queued is False
assert captured == []
def test_queue_ingestor_heartbeat_enqueues_and_throttles(mesh_module, monkeypatch):
mesh = mesh_module
captured = []
monkeypatch.setattr(
mesh.queue,
"_queue_post_json",
lambda path, payload, *, priority, send=None: captured.append(
(path, payload, priority)
),
)
mesh.ingestors.STATE.start_time = 1_700_000_000
mesh.ingestors.STATE.last_heartbeat = None
mesh.ingestors.STATE.node_id = None
mesh.config.LORA_FREQ = 915
mesh.config.MODEM_PRESET = "LongFast"
mesh.ingestors.set_ingestor_node_id("!CAFEBABE")
first = mesh.ingestors.queue_ingestor_heartbeat(force=True)
second = mesh.ingestors.queue_ingestor_heartbeat()
assert first is True
assert second is False
assert len(captured) == 1
path, payload, priority = captured[0]
assert path == "/api/ingestors"
assert payload["node_id"] == "!cafebabe"
assert payload["start_time"] == 1_700_000_000
assert payload["last_seen_time"] >= payload["start_time"]
assert payload["version"] == mesh.VERSION
assert payload["lora_freq"] == 915
assert payload["modem_preset"] == "LongFast"
assert priority == mesh.queue._INGESTOR_POST_PRIORITY
def test_mesh_version_export_matches_package(mesh_module):
import data
mesh = mesh_module
assert mesh.VERSION == data.VERSION
def test_node_to_dict_handles_proto_fallback(mesh_module, monkeypatch):
mesh = mesh_module
@@ -110,11 +110,20 @@ module PotatoMesh
["!#{canonical_hex}", parsed, short_id]
end
def broadcast_node_ref?(node_ref, fallback_num = nil)
return true if fallback_num == 0xFFFFFFFF
trimmed = string_or_nil(node_ref)
return false unless trimmed
normalized = trimmed.delete_prefix("!").strip.downcase
normalized == "ffffffff"
end
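The guard above treats the mesh broadcast address (numeric `0xFFFFFFFF`, or the string form `"ffffffff"` with or without the `!` prefix) as a non-node so it is never persisted. A Python sketch of the same predicate, under the assumption that `string_or_nil` reduces to a stripped non-empty string check:

```python
BROADCAST_NUM = 0xFFFFFFFF

def broadcast_node_ref(node_ref, fallback_num=None):
    """True when the reference denotes the broadcast address, not a real node."""
    if fallback_num == BROADCAST_NUM:
        return True
    if not isinstance(node_ref, str) or not node_ref.strip():
        return False
    normalized = node_ref.strip().removeprefix("!").strip().lower()
    return normalized == "ffffffff"
```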
def ensure_unknown_node(db, node_ref, fallback_num = nil, heard_time: nil)
parts = canonical_node_parts(node_ref, fallback_num)
return unless parts
node_id, node_num, short_id = parts
return if broadcast_node_ref?(node_id, node_num)
existing = db.get_first_value(
"SELECT 1 FROM nodes WHERE node_id = ? LIMIT 1",
@@ -158,7 +167,10 @@ module PotatoMesh
node_id = nil
parts = canonical_node_parts(node_ref, fallback_num)
node_id, = parts if parts
if parts
node_id, node_num = parts
return if broadcast_node_ref?(node_id, node_num)
end
unless node_id
trimmed = string_or_nil(node_ref)
@@ -170,6 +182,7 @@ module PotatoMesh
end
end
return if broadcast_node_ref?(node_id, fallback_num)
return unless node_id
updated = false
@@ -199,6 +212,66 @@ module PotatoMesh
updated
end
# Insert or update an ingestor heartbeat payload.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] ingestor payload from the collector.
# @return [Boolean] true when persistence succeeded.
def upsert_ingestor(db, payload)
return false unless payload.is_a?(Hash)
parts = canonical_node_parts(payload["node_id"] || payload["id"])
return false unless parts
node_id, = parts
now = Time.now.to_i
start_time = coerce_integer(payload["start_time"] || payload["startTime"]) || now
last_seen_time =
coerce_integer(payload["last_seen_time"] || payload["lastSeenTime"]) || start_time
start_time = 0 if start_time.negative?
last_seen_time = 0 if last_seen_time.negative?
start_time = now if start_time > now
last_seen_time = now if last_seen_time > now
last_seen_time = start_time if last_seen_time < start_time
version = string_or_nil(payload["version"] || payload["ingestorVersion"])
return false unless version
lora_freq = coerce_integer(payload["lora_freq"])
modem_preset = string_or_nil(payload["modem_preset"])
with_busy_retry do
db.execute <<~SQL, [node_id, start_time, last_seen_time, version, lora_freq, modem_preset]
INSERT INTO ingestors(node_id, start_time, last_seen_time, version, lora_freq, modem_preset)
VALUES(?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
start_time = CASE
WHEN excluded.start_time > ingestors.start_time THEN excluded.start_time
ELSE ingestors.start_time
END,
last_seen_time = CASE
WHEN excluded.last_seen_time > ingestors.last_seen_time THEN excluded.last_seen_time
ELSE ingestors.last_seen_time
END,
version = COALESCE(excluded.version, ingestors.version),
lora_freq = COALESCE(excluded.lora_freq, ingestors.lora_freq),
modem_preset = COALESCE(excluded.modem_preset, ingestors.modem_preset)
SQL
end
true
rescue SQLite3::SQLException => e
warn_log(
"Failed to upsert ingestor record",
context: "data_processing.ingestors",
node_id: node_id,
error_class: e.class.name,
error_message: e.message,
)
false
end
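The `ON CONFLICT` clause above makes both timestamps move only forward (each `CASE` keeps the larger of the stored and incoming value) while `COALESCE` preserves previously known metadata when a later heartbeat omits it. The same semantics can be checked with a pared-down table in Python's `sqlite3`:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE ingestors ("
    " node_id TEXT PRIMARY KEY, start_time INTEGER,"
    " last_seen_time INTEGER, version TEXT)"
)

UPSERT = """
INSERT INTO ingestors(node_id, start_time, last_seen_time, version)
VALUES(?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
  start_time = CASE WHEN excluded.start_time > ingestors.start_time
                    THEN excluded.start_time ELSE ingestors.start_time END,
  last_seen_time = CASE WHEN excluded.last_seen_time > ingestors.last_seen_time
                        THEN excluded.last_seen_time ELSE ingestors.last_seen_time END,
  version = COALESCE(excluded.version, ingestors.version)
"""

db.execute(UPSERT, ("!cafebabe", 100, 200, "1.0"))
# Replayed heartbeat: older start_time, newer last_seen_time, version missing.
db.execute(UPSERT, ("!cafebabe", 50, 300, None))
row = db.execute(
    "SELECT start_time, last_seen_time, version FROM ingestors"
).fetchone()
# Timestamps stay monotonic and the known version survives the NULL.
```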
def upsert_node(db, node_id, n)
user = n["user"] || {}
met = n["deviceMetrics"] || {}
@@ -1262,7 +1335,7 @@ module PotatoMesh
rx_time = now if rx_time.nil? || rx_time > now
rx_iso = string_or_nil(payload["rx_iso"]) || Time.at(rx_time).utc.iso8601
metrics = normalize_json_object(payload["metrics"])
metrics = normalize_json_object(payload["metrics"]) || {}
src = coerce_integer(payload["src"] || payload["source"] || payload["from"])
dest = coerce_integer(payload["dest"] || payload["destination"] || payload["to"])
rssi = coerce_integer(payload["rssi"]) || coerce_integer(metrics["rssi"])
@@ -1367,6 +1440,17 @@ module PotatoMesh
source: :message,
)
ensure_unknown_node(db, to_id || raw_to_id, message["to_num"], heard_time: rx_time) if to_id || raw_to_id
if to_id || raw_to_id || message.key?("to_num")
touch_node_last_seen(
db,
to_id || raw_to_id || message["to_num"],
message["to_num"],
rx_time: rx_time,
source: :message,
)
end
lora_freq = coerce_integer(message["lora_freq"] || message["loraFrequency"])
modem_preset = string_or_nil(message["modem_preset"] || message["modemPreset"])
channel_name = string_or_nil(message["channel_name"] || message["channelName"])
@@ -81,10 +81,10 @@ module PotatoMesh
return false unless File.exist?(PotatoMesh::Config.db_path)
db = open_database(readonly: true)
required = %w[nodes messages positions telemetry neighbors instances traces trace_hops]
required = %w[nodes messages positions telemetry neighbors instances traces trace_hops ingestors]
tables =
db.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name IN ('nodes','messages','positions','telemetry','neighbors','instances','traces','trace_hops')",
"SELECT name FROM sqlite_master WHERE type='table' AND name IN ('nodes','messages','positions','telemetry','neighbors','instances','traces','trace_hops','ingestors')",
).flatten
(required - tables).empty?
rescue SQLite3::Exception
@@ -99,7 +99,7 @@ module PotatoMesh
def init_db
FileUtils.mkdir_p(File.dirname(PotatoMesh::Config.db_path))
db = open_database
%w[nodes messages positions telemetry neighbors instances traces].each do |schema|
%w[nodes messages positions telemetry neighbors instances traces ingestors].each do |schema|
sql_file = File.expand_path("../../../../data/#{schema}.sql", __dir__)
db.execute_batch(File.read(sql_file))
end
@@ -164,6 +164,16 @@ module PotatoMesh
db.execute_batch(File.read(sql_file))
end
instance_columns = db.execute("PRAGMA table_info(instances)").map { |row| row[1] }
unless instance_columns.include?("contact_link")
db.execute("ALTER TABLE instances ADD COLUMN contact_link TEXT")
instance_columns << "contact_link"
end
unless instance_columns.include?("nodes_count")
db.execute("ALTER TABLE instances ADD COLUMN nodes_count INTEGER")
end
telemetry_tables =
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='telemetry'").flatten
if telemetry_tables.empty?
@@ -187,6 +197,24 @@ module PotatoMesh
traces_schema = File.expand_path("../../../../data/traces.sql", __dir__)
db.execute_batch(File.read(traces_schema))
end
ingestor_tables =
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='ingestors'").flatten
if ingestor_tables.empty?
ingestors_schema = File.expand_path("../../../../data/ingestors.sql", __dir__)
db.execute_batch(File.read(ingestors_schema))
else
ingestor_columns = db.execute("PRAGMA table_info(ingestors)").map { |row| row[1] }
unless ingestor_columns.include?("version")
db.execute("ALTER TABLE ingestors ADD COLUMN version TEXT")
end
unless ingestor_columns.include?("lora_freq")
db.execute("ALTER TABLE ingestors ADD COLUMN lora_freq INTEGER")
end
unless ingestor_columns.include?("modem_preset")
db.execute("ALTER TABLE ingestors ADD COLUMN modem_preset TEXT")
end
end
rescue SQLite3::SQLException, Errno::ENOENT => e
warn_log(
"Failed to apply schema upgrade",
@@ -61,6 +61,7 @@ module PotatoMesh
def self_instance_attributes
domain = self_instance_domain
last_update = latest_node_update_timestamp || Time.now.to_i
nodes_count = active_node_count_since(Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age)
{
id: app_constant(:SELF_INSTANCE_ID),
domain: domain,
@@ -73,9 +74,37 @@ module PotatoMesh
longitude: PotatoMesh::Config.map_center_lon,
last_update_time: last_update,
is_private: private_mode?,
contact_link: sanitized_contact_link,
nodes_count: nodes_count,
}
end
# Count the number of nodes active since the supplied timestamp.
#
# @param cutoff [Integer] unix timestamp in seconds.
# @param db [SQLite3::Database, nil] optional open handle to reuse.
# @return [Integer, nil] node count or nil when unavailable.
def active_node_count_since(cutoff, db: nil)
return nil unless cutoff
handle = db || open_database(readonly: true)
count =
with_busy_retry do
handle.get_first_value("SELECT COUNT(*) FROM nodes WHERE last_heard >= ?", cutoff.to_i)
end
Integer(count)
rescue SQLite3::Exception, ArgumentError => e
warn_log(
"Failed to count active nodes",
context: "instances.nodes_count",
error_class: e.class.name,
error_message: e.message,
)
nil
ensure
handle&.close unless db
end
def sign_instance_attributes(attributes)
payload = canonical_instance_payload(attributes)
Base64.strict_encode64(
@@ -96,6 +125,7 @@ module PotatoMesh
"longitude" => attributes[:longitude],
"lastUpdateTime" => attributes[:last_update_time],
"isPrivate" => attributes[:is_private],
"contactLink" => attributes[:contact_link],
"signature" => signature,
}
payload.reject { |_, value| value.nil? }
@@ -450,6 +480,7 @@ module PotatoMesh
def canonical_instance_payload(attributes)
data = {}
data["contactLink"] = attributes[:contact_link] if attributes[:contact_link]
data["id"] = attributes[:id] if attributes[:id]
data["domain"] = attributes[:domain] if attributes[:domain]
data["pubkey"] = attributes[:pubkey] if attributes[:pubkey]
@@ -611,6 +642,7 @@ module PotatoMesh
longitude: coerce_float(payload["longitude"]),
last_update_time: coerce_integer(payload["lastUpdateTime"]),
is_private: private_flag,
contact_link: string_or_nil(payload["contactLink"]),
}
[attributes, signature, nil]
@@ -719,6 +751,7 @@ module PotatoMesh
end
processed_entries = 0
recent_cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
payload.each do |entry|
if per_response_limit && per_response_limit.positive? && processed_entries >= per_response_limit
debug_log(
@@ -773,13 +806,27 @@ module PotatoMesh
attributes[:is_private] = false if attributes[:is_private].nil?
nodes_since_path = "/api/nodes?since=#{recent_cutoff}&limit=1000"
nodes_since_window, nodes_since_metadata = fetch_instance_json(attributes[:domain], nodes_since_path)
if nodes_since_window.is_a?(Array)
attributes[:nodes_count] = nodes_since_window.length
elsif nodes_since_metadata
warn_log(
"Failed to load remote node window",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(nodes_since_metadata).map(&:to_s).join("; "),
)
end
remote_nodes, node_metadata = fetch_instance_json(attributes[:domain], "/api/nodes")
remote_nodes ||= nodes_since_window if nodes_since_window.is_a?(Array)
unless remote_nodes
warn_log(
"Failed to load remote node data",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(node_metadata).map(&:to_s).join("; "),
reason: Array(node_metadata || nodes_since_metadata).map(&:to_s).join("; "),
)
next
end
@@ -1055,8 +1102,8 @@ module PotatoMesh
sql = <<~SQL
INSERT INTO instances (
id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, signature
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
latitude, longitude, last_update_time, is_private, nodes_count, contact_link, signature
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
domain=excluded.domain,
pubkey=excluded.pubkey,
@@ -1068,9 +1115,12 @@ module PotatoMesh
longitude=excluded.longitude,
last_update_time=excluded.last_update_time,
is_private=excluded.is_private,
nodes_count=excluded.nodes_count,
contact_link=excluded.contact_link,
signature=excluded.signature
SQL
nodes_count = coerce_integer(attributes[:nodes_count])
params = [
attributes[:id],
normalized_domain,
@@ -1083,6 +1133,8 @@ module PotatoMesh
attributes[:longitude],
attributes[:last_update_time],
attributes[:is_private] ? 1 : 0,
nodes_count,
attributes[:contact_link],
signature,
]
@@ -143,6 +143,8 @@ module PotatoMesh
"longitude" => coerce_float(row["longitude"]),
"lastUpdateTime" => last_update_time,
"isPrivate" => private_flag,
"nodesCount" => coerce_integer(row["nodes_count"]),
"contactLink" => string_or_nil(row["contact_link"]),
"signature" => signature,
}
@@ -173,7 +175,7 @@ module PotatoMesh
min_last_update_time = now - PotatoMesh::Config.week_seconds
sql = <<~SQL
SELECT id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, signature
latitude, longitude, last_update_time, is_private, nodes_count, contact_link, signature
FROM instances
WHERE domain IS NOT NULL AND TRIM(domain) != ''
AND pubkey IS NOT NULL AND TRIM(pubkey) != ''
@@ -20,6 +20,7 @@ module PotatoMesh
MAX_QUERY_LIMIT = 1000
DEFAULT_TELEMETRY_WINDOW_SECONDS = 86_400
DEFAULT_TELEMETRY_BUCKET_SECONDS = 300
TELEMETRY_ZERO_INVALID_COLUMNS = %w[battery_level voltage].freeze
TELEMETRY_AGGREGATE_COLUMNS =
%w[
battery_level
@@ -48,6 +49,9 @@ module PotatoMesh
soil_moisture
soil_temperature
].freeze
TELEMETRY_AGGREGATE_SCALERS = {
"current" => 0.001,
}.freeze
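The scaler table above converts stored current readings (milliamps in the database, per the `current_ma / 1000.0` conversion later in this file) to amps when aggregates are emitted. A sketch of how such a table is applied (names are illustrative):

```python
# Assumed mirror of TELEMETRY_AGGREGATE_SCALERS: current is stored in
# milliamps and scaled to amps on the way out of the API.
AGGREGATE_SCALERS = {"current": 0.001}

def scale_metric(column, value):
    """Apply the per-column scale factor, passing through unknown columns and None."""
    scale = AGGREGATE_SCALERS.get(column)
    if value is None or scale is None:
        return value
    return value * scale
```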
# Remove nil or empty values from an API response hash to reduce payload size
# while preserving legitimate zero-valued measurements.
@@ -78,6 +82,19 @@ module PotatoMesh
end
end
# Treat zero-valued telemetry measurements that are known to be invalid
# (such as battery level or voltage) as missing data so they are omitted
# from API responses. Metrics that can legitimately be zero will remain
# untouched when routed through this helper.
#
# @param value [Numeric, nil] telemetry measurement.
# @return [Numeric, nil] nil when the value is zero, otherwise the original value.
def nil_if_zero(value)
return nil if value.respond_to?(:zero?) && value.zero?
value
end
# Normalise a caller-provided limit to a sane, positive integer.
#
# @param limit [Object] value coerced to an integer.
@@ -99,6 +116,17 @@ module PotatoMesh
coerced
end
# Normalise a caller-supplied timestamp for API pagination windows.
#
# @param since [Object] requested lower bound expressed as seconds since the epoch.
# @param floor [Integer] minimum allowable timestamp used to clamp the value.
# @return [Integer] non-negative timestamp greater than or equal to +floor+.
def normalize_since_threshold(since, floor: 0)
threshold = coerce_integer(since)
threshold = 0 if threshold.nil? || threshold.negative?
[threshold, floor].max
end
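`normalize_since_threshold` above clamps a caller-supplied `since` twice: invalid or negative input collapses to zero, and the result can never undercut the rolling-window floor. A direct Python transliteration, assuming `coerce_integer` returns `None` for non-numeric input:

```python
def normalize_since_threshold(since, floor=0):
    """Clamp a caller-supplied timestamp to a non-negative value >= floor."""
    try:
        threshold = int(since)
    except (TypeError, ValueError):
        threshold = 0
    if threshold < 0:
        threshold = 0
    return max(threshold, floor)
```

The effect is that `?since=` can only narrow a query window, never widen it past the retention floor.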
def node_reference_tokens(node_ref)
parts = canonical_node_parts(node_ref)
canonical_id, numeric_id = parts ? parts[0, 2] : [nil, nil]
@@ -181,12 +209,19 @@ module PotatoMesh
["(#{clauses.join(" OR ")})", params]
end
def query_nodes(limit, node_ref: nil)
# Fetch node state optionally scoped by identifier and timestamp.
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to narrow results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted node rows suitable for API responses.
def query_nodes(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
now = Time.now.to_i
min_last_heard = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_last_heard)
params = []
where_clauses = []
@@ -197,7 +232,7 @@ module PotatoMesh
params.concat(clause.last)
else
where_clauses << "last_heard >= ?"
params << min_last_heard
params << since_threshold
end
if private_mode?
@@ -225,7 +260,7 @@ module PotatoMesh
.map { |value| coerce_integer(value) }
.compact
.max
last_candidate && last_candidate >= min_last_heard
last_candidate && last_candidate >= since_threshold
end
rows.each do |r|
r["role"] ||= "CLIENT"
@@ -245,6 +280,47 @@ module PotatoMesh
db&.close
end
# Fetch ingestor heartbeats with optional freshness filtering.
#
# @param limit [Integer] maximum number of ingestors to return.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted ingestor rows suitable for API responses.
def query_ingestors(limit, since: 0)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
now = Time.now.to_i
cutoff = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: cutoff)
sql = <<~SQL
SELECT node_id, start_time, last_seen_time, version, lora_freq, modem_preset
FROM ingestors
WHERE last_seen_time >= ?
ORDER BY last_seen_time DESC
LIMIT ?
SQL
rows = db.execute(sql, [since_threshold, limit])
rows.each do |row|
row.delete_if { |key, _| key.is_a?(Integer) }
start_time = coerce_integer(row["start_time"])
last_seen_time = coerce_integer(row["last_seen_time"])
start_time = now if start_time && start_time > now
last_seen_time = now if last_seen_time && last_seen_time > now
if start_time && last_seen_time && last_seen_time < start_time
last_seen_time = start_time
end
row["start_time"] = start_time
row["last_seen_time"] = last_seen_time
row["start_time_iso"] = Time.at(start_time).utc.iso8601 if start_time
row["last_seen_iso"] = Time.at(last_seen_time).utc.iso8601 if last_seen_time
end
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end
# Fetch chat messages with optional filtering.
#
# @param limit [Integer] maximum number of rows to return.
@@ -254,8 +330,7 @@ module PotatoMesh
# @return [Array<Hash>] compacted message rows safe for API responses.
def query_messages(limit, node_ref: nil, include_encrypted: false, since: 0)
limit = coerce_query_limit(limit)
since_threshold = coerce_integer(since)
since_threshold = 0 if since_threshold.nil? || since_threshold.negative?
since_threshold = normalize_since_threshold(since, floor: 0)
db = open_database(readonly: true)
db.results_as_hash = true
params = []
@@ -333,7 +408,13 @@ module PotatoMesh
db&.close
end
def query_positions(limit, node_ref: nil)
# Fetch positions optionally scoped by node and timestamp.
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted position rows suitable for API responses.
def query_positions(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
@@ -341,8 +422,9 @@ module PotatoMesh
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, position_time, 0) >= ?"
params << min_rx_time
params << since_threshold
if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
@@ -384,7 +466,13 @@ module PotatoMesh
db&.close
end
def query_neighbors(limit, node_ref: nil)
# Fetch neighbor relationships optionally scoped by node and timestamp.
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted neighbor rows suitable for API responses.
def query_neighbors(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
@@ -392,8 +480,9 @@ module PotatoMesh
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << min_rx_time
params << since_threshold
if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id", "neighbor_id"])
@@ -424,7 +513,13 @@ module PotatoMesh
db&.close
end
def query_telemetry(limit, node_ref: nil)
# Fetch telemetry packets optionally scoped by node and timestamp.
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted telemetry rows suitable for API responses.
def query_telemetry(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
@@ -432,8 +527,9 @@ module PotatoMesh
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, telemetry_time, 0) >= ?"
params << min_rx_time
params << since_threshold
if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
@@ -470,8 +566,8 @@ module PotatoMesh
r["rssi"] = coerce_integer(r["rssi"])
r["bitfield"] = coerce_integer(r["bitfield"])
r["snr"] = coerce_float(r["snr"])
r["battery_level"] = coerce_float(r["battery_level"])
r["voltage"] = coerce_float(r["voltage"])
r["battery_level"] = sanitize_zero_invalid_metric("battery_level", coerce_float(r["battery_level"]))
r["voltage"] = sanitize_zero_invalid_metric("voltage", coerce_float(r["voltage"]))
r["channel_utilization"] = coerce_float(r["channel_utilization"])
r["air_util_tx"] = coerce_float(r["air_util_tx"])
r["uptime_seconds"] = coerce_integer(r["uptime_seconds"])
@@ -479,7 +575,8 @@ module PotatoMesh
r["relative_humidity"] = coerce_float(r["relative_humidity"])
r["barometric_pressure"] = coerce_float(r["barometric_pressure"])
r["gas_resistance"] = coerce_float(r["gas_resistance"])
r["current"] = coerce_float(r["current"])
current_ma = coerce_float(r["current"])
r["current"] = current_ma.nil? ? nil : current_ma / 1000.0
r["iaq"] = coerce_integer(r["iaq"])
r["distance"] = coerce_float(r["distance"])
r["lux"] = coerce_float(r["lux"])
@@ -502,7 +599,13 @@ module PotatoMesh
db&.close
end
def query_telemetry_buckets(window_seconds:, bucket_seconds:)
# Aggregate telemetry metrics into time buckets.
#
# @param window_seconds [Integer] duration expressed in seconds to include in the query.
# @param bucket_seconds [Integer] size of each aggregation bucket in seconds.
# @param since [Integer] unix timestamp threshold applied in addition to the requested window.
# @return [Array<Hash>] aggregated telemetry metrics grouped by bucket start time.
def query_telemetry_buckets(window_seconds:, bucket_seconds:, since: 0)
window = coerce_integer(window_seconds) || DEFAULT_TELEMETRY_WINDOW_SECONDS
window = DEFAULT_TELEMETRY_WINDOW_SECONDS if window <= 0
bucket = coerce_integer(bucket_seconds) || DEFAULT_TELEMETRY_BUCKET_SECONDS
@@ -512,6 +615,7 @@ module PotatoMesh
db.results_as_hash = true
now = Time.now.to_i
min_timestamp = now - window
since_threshold = normalize_since_threshold(since, floor: min_timestamp)
bucket_expression = "((COALESCE(rx_time, telemetry_time) / ?) * ?)"
select_clauses = [
"#{bucket_expression} AS bucket_start",
@@ -521,9 +625,10 @@ module PotatoMesh
]
TELEMETRY_AGGREGATE_COLUMNS.each do |column|
select_clauses << "AVG(#{column}) AS #{column}_avg"
select_clauses << "MIN(#{column}) AS #{column}_min"
select_clauses << "MAX(#{column}) AS #{column}_max"
aggregate_source = telemetry_aggregate_source(column)
select_clauses << "AVG(#{aggregate_source}) AS #{column}_avg"
select_clauses << "MIN(#{aggregate_source}) AS #{column}_min"
select_clauses << "MAX(#{aggregate_source}) AS #{column}_max"
end
sql = <<~SQL
@@ -536,7 +641,7 @@ module PotatoMesh
ORDER BY bucket_start ASC
LIMIT ?
SQL
params = [bucket, bucket, min_timestamp, MAX_QUERY_LIMIT]
params = [bucket, bucket, since_threshold, MAX_QUERY_LIMIT]
rows = db.execute(sql, params)
rows.map do |row|
bucket_start = coerce_integer(row["bucket_start"])
@@ -549,8 +654,18 @@ module PotatoMesh
avg = coerce_float(row["#{column}_avg"])
min_value = coerce_float(row["#{column}_min"])
max_value = coerce_float(row["#{column}_max"])
scale = TELEMETRY_AGGREGATE_SCALERS[column]
if scale
avg *= scale unless avg.nil?
min_value *= scale unless min_value.nil?
max_value *= scale unless max_value.nil?
end
metrics = {}
avg = sanitize_zero_invalid_metric(column, avg)
min_value = sanitize_zero_invalid_metric(column, min_value)
max_value = sanitize_zero_invalid_metric(column, max_value)
metrics["avg"] = avg unless avg.nil?
metrics["min"] = min_value unless min_value.nil?
metrics["max"] = max_value unless max_value.nil?
@@ -578,12 +693,51 @@ module PotatoMesh
db&.close
end
def query_traces(limit, node_ref: nil)
# Normalise telemetry metrics that cannot legitimately be zero so API
# consumers do not mistake absent readings for valid measurements. Values
# for fields such as battery level and voltage are treated as missing data
# when they are zero.
#
# @param column [String] telemetry metric name.
# @param value [Numeric, nil] raw metric value.
# @return [Numeric, nil] metric value or nil when zero is invalid.
def sanitize_zero_invalid_metric(column, value)
return nil_if_zero(value) if TELEMETRY_ZERO_INVALID_COLUMNS.include?(column)
value
end
# Choose the SQL expression used to aggregate telemetry metrics. Metrics
# that cannot legitimately be zero are wrapped in a NULLIF to ensure
# invalid zero readings are ignored by aggregate functions such as AVG,
# MIN, and MAX, aligning the database semantics with API-level
# zero-as-missing handling.
#
# @param column [String] telemetry metric name.
# @return [String] SQL fragment used in aggregate expressions.
def telemetry_aggregate_source(column)
return "NULLIF(#{column}, 0)" if TELEMETRY_ZERO_INVALID_COLUMNS.include?(column)
column
end
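Wrapping a zero-invalid column in `NULLIF(column, 0)` makes SQLite's aggregate functions skip the bogus zero readings, since `AVG`, `MIN`, and `MAX` ignore NULLs. The difference is easy to observe directly:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE telemetry (voltage REAL)")
db.executemany("INSERT INTO telemetry VALUES (?)", [(3.8,), (0.0,), (4.2,)])

raw_avg = db.execute("SELECT AVG(voltage) FROM telemetry").fetchone()[0]
clean_avg = db.execute(
    "SELECT AVG(NULLIF(voltage, 0)) FROM telemetry"
).fetchone()[0]
# The raw average is dragged down by the invalid zero reading;
# NULLIF excludes it from the aggregate entirely.
```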
# Fetch trace records optionally scoped by node and timestamp.
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted trace rows suitable for API responses.
def query_traces(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
db = open_database(readonly: true)
db.results_as_hash = true
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << since_threshold
if node_ref
tokens = node_reference_tokens(node_ref)
@@ -64,7 +64,7 @@ module PotatoMesh
app.get "/api/nodes" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_nodes(limit).to_json
query_nodes(limit, since: params["since"]).to_json
end
app.get "/api/nodes/:id" do
@@ -72,11 +72,17 @@ module PotatoMesh
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
rows = query_nodes(limit, node_ref: node_ref)
rows = query_nodes(limit, node_ref: node_ref, since: params["since"])
halt 404, { error: "not found" }.to_json if rows.empty?
rows.first.to_json
end
app.get "/api/ingestors" do
content_type :json
limit = coerce_query_limit(params["limit"])
query_ingestors(limit, since: params["since"]).to_json
end
app.get "/api/messages" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
@@ -105,7 +111,7 @@ module PotatoMesh
app.get "/api/positions" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_positions(limit).to_json
query_positions(limit, since: params["since"]).to_json
end
app.get "/api/positions/:id" do
@@ -113,13 +119,13 @@ module PotatoMesh
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_positions(limit, node_ref: node_ref).to_json
query_positions(limit, node_ref: node_ref, since: params["since"]).to_json
end
app.get "/api/neighbors" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_neighbors(limit).to_json
query_neighbors(limit, since: params["since"]).to_json
end
app.get "/api/neighbors/:id" do
@@ -127,13 +133,13 @@ module PotatoMesh
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_neighbors(limit, node_ref: node_ref).to_json
query_neighbors(limit, node_ref: node_ref, since: params["since"]).to_json
end
app.get "/api/telemetry" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_telemetry(limit).to_json
query_telemetry(limit, since: params["since"]).to_json
end
app.get "/api/telemetry/aggregated" do
@@ -164,7 +170,11 @@ module PotatoMesh
halt 400, { error: "bucketSeconds too small for requested window" }.to_json
end
query_telemetry_buckets(window_seconds: window_seconds, bucket_seconds: bucket_seconds).to_json
query_telemetry_buckets(
window_seconds: window_seconds,
bucket_seconds: bucket_seconds,
since: params["since"],
).to_json
end
app.get "/api/telemetry/:id" do
@@ -172,13 +182,13 @@ module PotatoMesh
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_telemetry(limit, node_ref: node_ref).to_json
query_telemetry(limit, node_ref: node_ref, since: params["since"]).to_json
end
app.get "/api/traces" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_traces(limit).to_json
query_traces(limit, since: params["since"]).to_json
end
app.get "/api/traces/:id" do
@@ -186,7 +196,7 @@ module PotatoMesh
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_traces(limit, node_ref: node_ref).to_json
query_traces(limit, node_ref: node_ref, since: params["since"]).to_json
end
app.get "/api/instances" do
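The hunks above thread an optional `since` query parameter through every read endpoint, backed by the `normalize_since_threshold` clamp visible at the top of this diff. A minimal sketch of what such a helper could look like (hypothetical implementation; the project's actual normaliser may differ):

```ruby
# Hypothetical sketch of a "since" normaliser: coerce the raw query value
# into an integer UNIX timestamp and clamp it to a floor, so callers can
# never request rows older than the retention window.
def normalize_since_threshold(raw, floor: 0)
  value = Integer(raw, exception: false) || Float(raw, exception: false)
  return floor if value.nil?
  [value.to_i, floor].max
end

normalize_since_threshold("1700000000", floor: 0) # => 1700000000
normalize_since_threshold("garbage", floor: 42)   # => 42
```

Pairing the clamped value with `COALESCE(rx_time, 0) >= ?` (as in the first hunk) keeps rows with a missing `rx_time` from slipping past the filter.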
@@ -65,6 +65,25 @@ module PotatoMesh
db&.close
end
app.post "/api/ingestors" do
require_token!
content_type :json
begin
payload = JSON.parse(read_json_body)
rescue JSON::ParserError
halt 400, { error: "invalid JSON" }.to_json
end
unless payload.is_a?(Hash)
halt 400, { error: "invalid payload" }.to_json
end
db = open_database
stored = upsert_ingestor(db, payload)
halt 400, { error: "invalid payload" }.to_json unless stored
{ status: "ok" }.to_json
ensure
db&.close
end
app.post "/api/instances" do
content_type :json
begin
@@ -113,6 +132,7 @@ module PotatoMesh
raw_private = payload.key?("isPrivate") ? payload["isPrivate"] : payload["is_private"]
is_private = coerce_boolean(raw_private)
signature = string_or_nil(payload["signature"])
contact_link = string_or_nil(payload["contactLink"])
attributes = {
id: id,
@@ -126,6 +146,7 @@ module PotatoMesh
longitude: longitude,
last_update_time: last_update_time,
is_private: is_private,
contact_link: contact_link,
}
if [attributes[:id], attributes[:domain], attributes[:pubkey], signature, attributes[:last_update_time]].any?(&:nil?)
@@ -138,6 +159,10 @@ module PotatoMesh
end
signature_valid = verify_instance_signature(attributes, signature, attributes[:pubkey])
if !signature_valid && contact_link
stripped_attributes = attributes.merge(contact_link: nil)
signature_valid = verify_instance_signature(stripped_attributes, signature, attributes[:pubkey])
end
# Some remote peers sign payloads using a canonicalised lowercase
# domain while still sending a mixed-case domain. Retry signature
# verification with the original casing when the first attempt
@@ -145,6 +170,10 @@ module PotatoMesh
if !signature_valid && raw_domain && normalized_domain && raw_domain.casecmp?(normalized_domain) && raw_domain != normalized_domain
alternate_attributes = attributes.merge(domain: raw_domain)
signature_valid = verify_instance_signature(alternate_attributes, signature, attributes[:pubkey])
if !signature_valid && contact_link
stripped_alternate = alternate_attributes.merge(contact_link: nil)
signature_valid = verify_instance_signature(stripped_alternate, signature, attributes[:pubkey])
end
end
unless signature_valid
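The added branches build a small fallback chain: verify the payload as sent, retry without `contact_link` (for peers that signed before the field existed), then retry with the original domain casing, again with and without `contact_link`. The pattern can be sketched as follows (helper name and shape are assumptions, not the project's actual code):

```ruby
# Hypothetical sketch of the verification fallback chain. `verifier` stands
# in for verify_instance_signature; raw_domain is the pre-normalisation value.
def verify_with_fallbacks(attributes, signature, pubkey, raw_domain:, verifier:)
  candidates = [attributes]
  candidates << attributes.merge(contact_link: nil) if attributes[:contact_link]
  if raw_domain && attributes[:domain] &&
     raw_domain.casecmp?(attributes[:domain]) && raw_domain != attributes[:domain]
    alt = attributes.merge(domain: raw_domain)
    candidates << alt
    candidates << alt.merge(contact_link: nil) if attributes[:contact_link]
  end
  # Accept the payload if any canonicalisation verifies.
  candidates.any? { |attrs| verifier.call(attrs, signature, pubkey) }
end
```

This keeps older peers (no `contact_link`, mixed-case domains) interoperable without weakening verification: every candidate is still checked against the same signature and public key.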
@@ -186,6 +186,11 @@ module PotatoMesh
render_root_view(:charts, view_mode: :charts)
end
app.get %r{/federation/?} do
halt 404 unless federation_enabled?
render_root_view(:federation, view_mode: :federation)
end
app.get "/nodes/:id" do
node_ref = params.fetch("id", nil)
reference_payload = build_node_detail_reference(node_ref)
@@ -42,6 +42,7 @@ module PotatoMesh
DEFAULT_FEDERATION_WORKER_QUEUE_CAPACITY = 128
DEFAULT_FEDERATION_TASK_TIMEOUT_SECONDS = 120
DEFAULT_INITIAL_FEDERATION_DELAY_SECONDS = 2
DEFAULT_FEDERATION_SEED_DOMAINS = %w[potatomesh.net potatomesh.jmrp.io mesh.qrp.ro].freeze
# Retrieve the configured API token used for authenticated requests.
#
@@ -175,7 +176,7 @@ module PotatoMesh
#
# @return [String] semantic version identifier.
def version_fallback
"0.5.6"
"0.5.9"
end
# Default refresh interval for frontend polling routines.
@@ -409,7 +410,7 @@ module PotatoMesh
#
# @return [Array<String>] list of default seed domains.
def federation_seed_domains
["potatomesh.net"].freeze
DEFAULT_FEDERATION_SEED_DOMAINS
end
# Determine how often we broadcast federation announcements.
@@ -1,12 +1,12 @@
{
"name": "potato-mesh",
"version": "0.5.6",
"version": "0.5.9",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "potato-mesh",
"version": "0.5.6",
"version": "0.5.9",
"devDependencies": {
"istanbul-lib-coverage": "^3.2.2",
"istanbul-lib-report": "^3.0.1",
@@ -1,6 +1,6 @@
{
"name": "potato-mesh",
"version": "0.5.6",
"version": "0.5.9",
"type": "module",
"private": true,
"scripts": {
@@ -113,11 +113,9 @@ test('buildChatTabModel returns sorted nodes and channel buckets', () => {
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), ['iso-ts', 'recent-alt']);
});
test('buildChatTabModel always includes channel zero bucket', () => {
test('buildChatTabModel skips channel buckets when there are no messages', () => {
const model = buildChatTabModel({ nodes: [], messages: [], nowSeconds: NOW, windowSeconds: WINDOW });
assert.equal(model.channels.length, 1);
assert.equal(model.channels[0].index, 0);
assert.equal(model.channels[0].entries.length, 0);
assert.equal(model.channels.length, 0);
});
test('buildChatTabModel falls back to numeric label when no metadata provided', () => {
@@ -168,6 +166,7 @@ test('buildChatTabModel includes telemetry, position, and neighbor events', () =
telemetry: [{ node_id: nodeId, rx_time: NOW - 30 }],
positions: [{ node_id: nodeId, rx_time: NOW - 20 }],
neighbors: [{ node_id: nodeId, neighbor_id: neighborId, rx_time: NOW - 10 }],
traces: [{ id: 5_000, src: nodeId, hops: [neighborId], dest: '!charlie', rx_time: NOW - 5 }],
messages: [],
nowSeconds: NOW,
windowSeconds: WINDOW
@@ -178,11 +177,35 @@ test('buildChatTabModel includes telemetry, position, and neighbor events', () =
CHAT_LOG_ENTRY_TYPES.NODE_INFO,
CHAT_LOG_ENTRY_TYPES.TELEMETRY,
CHAT_LOG_ENTRY_TYPES.POSITION,
CHAT_LOG_ENTRY_TYPES.NEIGHBOR
CHAT_LOG_ENTRY_TYPES.NEIGHBOR,
CHAT_LOG_ENTRY_TYPES.TRACE
]);
assert.equal(model.logEntries[0].nodeId, nodeId);
const lastEntry = model.logEntries[model.logEntries.length - 1];
assert.equal(lastEntry.neighborId, neighborId);
const neighborEntry = model.logEntries.find(entry => entry.type === CHAT_LOG_ENTRY_TYPES.NEIGHBOR);
assert.ok(neighborEntry);
assert.equal(neighborEntry.neighborId, neighborId);
const traceEntry = model.logEntries.find(entry => entry.type === CHAT_LOG_ENTRY_TYPES.TRACE);
assert.ok(traceEntry);
assert.deepEqual(traceEntry.traceLabels, [nodeId, neighborId, '!charlie']);
});
test('buildChatTabModel normalises numeric traceroute hops into canonical IDs', () => {
const source = 0xabcdef01;
const hops = ['0xABCDEF02', '!abcdef03', 123];
const dest = 0xabcdef04;
const model = buildChatTabModel({
nodes: [],
traces: [{ rx_time: NOW - 5, src: source, hops, dest }],
nowSeconds: NOW,
windowSeconds: WINDOW
});
const traceEntry = model.logEntries.find(entry => entry.type === CHAT_LOG_ENTRY_TYPES.TRACE);
assert.ok(traceEntry);
assert.equal(traceEntry.nodeId, '!abcdef01');
assert.deepEqual(
traceEntry.tracePath.map(hop => hop.id),
['!abcdef01', '!abcdef02', '!abcdef03', '!0000007b', '!abcdef04']
);
});
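The test above pins down the canonicalisation rule: numeric and `0x`-prefixed hop references all collapse to `!` followed by eight lowercase hex digits. The same rule could be sketched in Ruby as (hypothetical helper, not the project's actual code):

```ruby
# Collapse mixed node references (Integer, "0x…" string, "!…" string)
# into the canonical "!xxxxxxxx" form the chat model expects.
def canonical_node_id(ref)
  case ref
  when Integer          then format("!%08x", ref)
  when /\A0x\h+\z/i     then format("!%08x", ref.to_i(16))
  when /\A!/            then ref.downcase
  else ref
  end
end

canonical_node_id(0xabcdef01)   # => "!abcdef01"
canonical_node_id("0xABCDEF02") # => "!abcdef02"
canonical_node_id(123)          # => "!0000007b"
```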
test('buildChatTabModel merges dedicated encrypted log feed without altering channels', () => {
@@ -74,6 +74,18 @@ test('chatLogEntryMatchesQuery inspects neighbor node context', () => {
assert.equal(chatLogEntryMatchesQuery(entry, query), true);
});
test('chatLogEntryMatchesQuery inspects traceroute hop labels', () => {
const entry = {
type: CHAT_LOG_ENTRY_TYPES.TRACE,
traceLabels: ['!alpha', '!bravo', '!charlie'],
tracePath: [{ id: '!alpha' }, { id: '!bravo' }, { id: '!charlie' }]
};
const query = normaliseChatFilterQuery('bravo');
assert.equal(chatLogEntryMatchesQuery(entry, query), true);
const missQuery = normaliseChatFilterQuery('delta');
assert.equal(chatLogEntryMatchesQuery(entry, missQuery), false);
});
test('filterChatModel filters both log entries and channel messages', () => {
const model = {
logEntries: [
@@ -104,6 +104,7 @@ class MockElement {
this.style = {};
this.textContent = '';
this.classList = new MockClassList();
this.childNodes = [];
}
/**
@@ -129,6 +130,113 @@ class MockElement {
getAttribute(name) {
return this.attributes.has(name) ? this.attributes.get(name) : null;
}
/**
* Remove an attribute from the element.
*
* @param {string} name Attribute identifier.
* @returns {void}
*/
removeAttribute(name) {
this.attributes.delete(name);
}
/**
* Append a child node to this element.
*
* @param {Object} node Child node to append.
* @returns {Object} Appended node.
*/
appendChild(node) {
this.childNodes.push(node);
return node;
}
/**
* Replace all existing children with the provided nodes.
*
* @param {...Object} nodes Child nodes to set on the element.
* @returns {void}
*/
replaceChildren(...nodes) {
const expanded = [];
nodes.forEach(node => {
if (node && node.tagName === 'FRAGMENT' && Array.isArray(node.childNodes)) {
expanded.push(...node.childNodes);
} else {
expanded.push(node);
}
});
this.childNodes = expanded;
}
/**
* Serialize the element's children into a naive HTML string for test
* assertions. This intentionally covers only the subset of markup produced
* in unit tests.
*
* @returns {string} Serialized HTML content.
*/
get innerHTML() {
return this.childNodes
.map(node => {
if (typeof node === 'string') return node;
if (node && node.tagName) {
const attrs = [];
if (node.attributes.size) {
node.attributes.forEach((value, key) => {
attrs.push(`${key}="${value}"`);
});
}
const classAttr = node.classList && node.classList._values && node.classList._values.size
? `class="${Array.from(node.classList._values).join(' ')}"`
: null;
if (classAttr) attrs.push(classAttr);
const children = node.innerHTML || '';
return `<${node.tagName.toLowerCase()}${attrs.length ? ' ' + attrs.join(' ') : ''}>${children}</${node.tagName.toLowerCase()}>`;
}
return '';
})
.join('');
}
/**
* Setter to overwrite children from a raw HTML string in tests. This is a
* minimal stub and only supports plain text content insertion.
*
* @param {string} value Raw HTML content.
* @returns {void}
*/
set innerHTML(value) {
this.childNodes = [String(value)];
}
/**
* Very small querySelectorAll implementation that supports ``.class`` lookups
* used in unit tests.
*
* @param {string} selector CSS selector (class names only).
* @returns {Array<MockElement>} Matching child nodes.
*/
querySelectorAll(selector) {
if (!selector || typeof selector !== 'string') return [];
const classMatch = selector.match(/^\.(.+)$/);
if (!classMatch) return [];
const className = classMatch[1];
const matches = [];
const visit = node => {
if (node && node.classList && typeof node.classList.contains === 'function') {
if (node.classList.contains(className)) {
matches.push(node);
}
}
if (node && Array.isArray(node.childNodes)) {
node.childNodes.forEach(child => visit(child));
}
};
visit(this);
return matches;
}
}
/**
@@ -182,8 +290,9 @@ export function createDomEnvironment(options = {}) {
documentListeners.delete(event);
},
dispatchEvent(event) {
const handler = documentListeners.get(event);
if (handler) handler();
const key = typeof event === 'string' ? event : event?.type;
const handler = documentListeners.get(key);
if (handler) handler(event);
},
getElementById(id) {
return registry.get(id) || null;
@@ -193,6 +302,18 @@ export function createDomEnvironment(options = {}) {
},
createElement(tagName) {
return new MockElement(tagName, registry);
},
createDocumentFragment() {
const fragment = new MockElement('fragment', null);
fragment.childNodes = [];
fragment.appendChild = node => {
fragment.childNodes.push(node);
return node;
};
fragment.replaceChildren = (...nodes) => {
fragment.childNodes = [...nodes];
};
return fragment;
}
};
@@ -218,8 +339,9 @@ export function createDomEnvironment(options = {}) {
windowListeners.delete(event);
},
dispatchEvent(event) {
const handler = windowListeners.get(event);
if (handler) handler();
const key = typeof event === 'string' ? event : event?.type;
const handler = windowListeners.get(key);
if (handler) handler(event);
},
getComputedStyle(target) {
if (typeof computedStyleImpl === 'function') {
@@ -0,0 +1,54 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { createDomEnvironment } from './dom-environment.js';
test('dom environment supports class queries and innerHTML setter', () => {
const env = createDomEnvironment({ includeBody: true });
const { document, createElement, cleanup } = env;
const parent = createElement('div');
const child = createElement('span');
child.classList.add('leaflet-tile');
child.setAttribute('data-test', 'ok');
parent.appendChild(child);
const matches = parent.querySelectorAll('.leaflet-tile');
assert.equal(matches.length, 1);
assert.equal(matches[0], child);
const target = createElement('div');
target.innerHTML = '<b>hello</b>';
assert.match(target.innerHTML, /hello/);
const fragment = document.createDocumentFragment();
fragment.replaceChildren(createElement('p'));
const container = createElement('section');
const decorated = createElement('span');
decorated.setAttribute('data-id', '123');
decorated.classList.add('foo');
container.appendChild(decorated);
assert.match(container.innerHTML, /data-id="123"/);
assert.match(container.innerHTML, /class="foo"/);
container.replaceChildren(createElement('div')); // cover non-fragment path
container.childNodes.push({}); // cover empty serialization branch
assert.ok(container.innerHTML.includes('<div'));
cleanup();
});
@@ -0,0 +1,479 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { createDomEnvironment } from './dom-environment.js';
import { initializeFederationPage } from '../federation-page.js';
import { roleColors } from '../role-helpers.js';
test('federation map centers on configured coordinates and follows theme filters', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: true });
const { document, window, createElement, registerElement, cleanup } = env;
const mapEl = createElement('div', 'map');
registerElement('map', mapEl);
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);
const tableEl = createElement('table', 'instances');
const tbodyEl = createElement('tbody');
registerElement('instances', tableEl);
const configPayload = {
mapCenter: { lat: 10, lon: 20 },
mapZoom: 7,
tileFilters: { light: 'brightness(1)', dark: 'invert(1)' }
};
const configEl = createElement('div');
configEl.setAttribute('data-app-config', JSON.stringify(configPayload));
document.querySelector = selector => {
if (selector === '[data-app-config]') return configEl;
if (selector === '#instances tbody') return tbodyEl;
return null;
};
const tileContainer = createElement('div');
const tilePane = createElement('div');
const tileImage = createElement('img');
tileImage.classList.add('leaflet-tile');
tileContainer.appendChild(tileImage);
tilePane.appendChild(tileImage);
const mapSetViewCalls = [];
const mapFitBoundsCalls = [];
const circleMarkerCalls = [];
const tileLayerStub = {
addTo() {
return this;
},
getContainer() {
return tileContainer;
},
on(event, handler) {
if (event === 'load') {
this._onLoad = handler;
}
}
};
const mapStub = {
setView(...args) {
mapSetViewCalls.push(args);
},
on() {},
getPane(name) {
return name === 'tilePane' ? tilePane : null;
},
fitBounds(...args) {
mapFitBoundsCalls.push(args);
}
};
const leafletStub = {
map() {
return mapStub;
},
tileLayer() {
return tileLayerStub;
},
layerGroup() {
return {
addLayer() {},
addTo() {
return this;
}
};
},
circleMarker(latlng, options) {
circleMarkerCalls.push({ latlng, options });
return {
bindPopup() {
return this;
}
};
}
};
const fetchImpl = async () => ({
ok: true,
json: async () => [
{
domain: 'alpha.mesh',
contactLink: 'https://chat.alpha',
version: '1.0.0',
latitude: 10.12345,
longitude: -20.98765,
lastUpdateTime: Math.floor(Date.now() / 1000) - 90,
nodesCount: 12
},
{
domain: 'bravo.mesh',
contactLink: null,
version: '2.0.0',
lastUpdateTime: Math.floor(Date.now() / 1000) - (2 * 86400),
nodesCount: 2
}
]
});
try {
await initializeFederationPage({ config: configPayload, fetchImpl, leaflet: leafletStub });
assert.deepEqual(mapSetViewCalls[0], [[10, 20], 7]);
assert.equal(tileContainer.style.filter, 'invert(1)');
assert.equal(tilePane.style.filter, 'invert(1)');
assert.equal(tileImage.style.filter, 'invert(1)');
document.body.classList.remove('dark');
document.documentElement.setAttribute('data-theme', 'light');
window.dispatchEvent({ type: 'themechange', detail: { theme: 'light' } });
assert.equal(tileContainer.style.filter, 'brightness(1)');
assert.equal(tilePane.style.filter, 'brightness(1)');
assert.equal(tileImage.style.filter, 'brightness(1)');
document.documentElement.removeAttribute('data-theme');
document.body.classList.remove('dark');
window.dispatchEvent({ type: 'themechange', detail: { theme: null } });
assert.equal(tileContainer.style.filter, 'invert(1)');
const rows = tbodyEl.childNodes;
assert.equal(rows.length, 2);
const firstRowHtml = rows[0].innerHTML;
assert.match(firstRowHtml, /alpha\.mesh/);
assert.match(firstRowHtml, /https:\/\/chat\.alpha/);
assert.match(firstRowHtml, /10\.12345/);
assert.match(firstRowHtml, /-20\.98765/);
assert.match(firstRowHtml, />12</);
assert.match(firstRowHtml, /ago/);
const secondRowHtml = rows[1].innerHTML;
assert.match(secondRowHtml, /bravo\.mesh/);
assert.match(secondRowHtml, /<em>—<\/em>/); // no contact link
assert.match(secondRowHtml, /2\.0\.0/);
assert.match(secondRowHtml, />2</);
assert.match(secondRowHtml, /d ago/);
assert.deepEqual(mapFitBoundsCalls[0][0], [[10.12345, -20.98765]]);
assert.equal(circleMarkerCalls[0].options.fillColor, roleColors.CLIENT_HIDDEN);
} catch (error) {
console.error('federation sorting test error', error);
throw error;
} finally {
cleanup();
}
});
test('federation table sorting, contact rendering, and legend creation', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: false });
const { document, createElement, registerElement, cleanup } = env;
const mapEl = createElement('div', 'map');
registerElement('map', mapEl);
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);
const tableEl = createElement('table', 'instances');
const tbodyEl = createElement('tbody');
registerElement('instances', tableEl);
tableEl.appendChild(tbodyEl);
const headerNameTh = createElement('th');
const headerName = createElement('span');
headerName.classList.add('sort-header');
headerName.dataset.sortKey = 'name';
headerName.dataset.sortLabel = 'Name';
headerNameTh.appendChild(headerName);
const headerDomainTh = createElement('th');
const headerDomain = createElement('span');
headerDomain.classList.add('sort-header');
headerDomain.dataset.sortKey = 'domain';
headerDomain.dataset.sortLabel = 'Domain';
headerDomainTh.appendChild(headerDomain);
const ths = [headerNameTh, headerDomainTh];
const headers = [headerName, headerDomain];
const headerHandlers = new Map();
headers.forEach(header => {
header.addEventListener = (event, handler) => {
const existing = headerHandlers.get(header) || {};
existing[event] = handler;
headerHandlers.set(header, existing);
};
header.closest = () => ths.find(th => th.childNodes.includes(header));
header.querySelector = selector => {
if (selector === '.sort-indicator') {
const span = createElement('span');
span.classList.add('sort-indicator');
return span;
}
return null;
};
});
tableEl.querySelectorAll = selector => {
if (selector === 'thead .sort-header[data-sort-key]') return headers;
if (selector === 'thead th') return ths;
return [];
};
const configPayload = {
mapCenter: { lat: 0, lon: 0 },
mapZoom: 3,
tileFilters: { light: 'none', dark: 'invert(1)' }
};
const configEl = createElement('div');
configEl.setAttribute('data-app-config', JSON.stringify(configPayload));
document.querySelector = selector => {
if (selector === '[data-app-config]') return configEl;
if (selector === '#instances tbody') return tbodyEl;
return null;
};
const legendContainers = [];
const mapSetViewCalls = [];
const mapFitBoundsCalls = [];
const circleMarkerCalls = [];
const DomUtil = {
create(tag, className, parent) {
const el = {
tagName: tag,
className,
children: [],
style: {},
textContent: '',
setAttribute() {},
appendChild(child) {
this.children.push(child);
return child;
},
};
if (parent && parent.appendChild) parent.appendChild(el);
return el;
}
};
const controlStub = () => {
const ctrl = {
onAdd: null,
container: null,
addTo(map) {
this.container = this.onAdd ? this.onAdd(map) : null;
legendContainers.push(this.container);
return this;
},
getContainer() {
return this.container;
}
};
return ctrl;
};
const markersLayer = {
layers: [],
addLayer(marker) {
this.layers.push(marker);
return marker;
},
addTo() {
return this;
}
};
const mapStub = {
addedControls: [],
setView(...args) {
mapSetViewCalls.push(args);
},
on() {},
fitBounds(...args) {
mapFitBoundsCalls.push(args);
},
addLayer(layer) {
this.addedControls.push(layer);
return layer;
}
};
const leafletStub = {
map() {
return mapStub;
},
tileLayer() {
return {
addTo() {
return this;
},
getContainer() {
return null;
},
on() {}
};
},
layerGroup() {
return markersLayer;
},
circleMarker(latlng, options) {
circleMarkerCalls.push({ latlng, options });
return {
bindPopup() {
return this;
},
addTo() {
return this;
}
};
},
control: controlStub,
DomUtil
};
const now = Math.floor(Date.now() / 1000);
const fetchImpl = async () => ({
ok: true,
json: async () => [
{
domain: 'c.mesh',
name: 'Charlie',
contactLink: 'https://charlie.example\nmatrix:#c:mesh',
version: '3.0.0',
latitude: 1,
longitude: 1,
lastUpdateTime: now - 10,
nodesCount: 0
},
{
domain: 'b.mesh',
contactLink: '',
version: '2.0.0',
latitude: 2,
longitude: 2,
lastUpdateTime: now - 60,
nodesCount: 650
},
{
domain: 'a.mesh',
name: 'Alpha',
contactLink: 'mailto:alpha@mesh',
version: '1.0.0',
latitude: 3,
longitude: 3,
lastUpdateTime: now - 30,
nodesCount: 5
}
]
});
try {
await initializeFederationPage({ config: configPayload, fetchImpl, leaflet: leafletStub });
const rows = tbodyEl.childNodes.map(node => String(node.childNodes[0]));
assert.match(rows[0], /c\.mesh/);
assert.match(rows[0], /0</);
assert.match(rows[0], /https:\/\/charlie\.example/);
assert.match(rows[0], /matrix:#c:mesh/);
assert.match(rows[1], /a\.mesh/);
assert.match(rows[2], /b\.mesh/);
const nameHandlers = headerHandlers.get(headerName);
nameHandlers.click();
const afterNameSort = tbodyEl.childNodes.map(node => String(node.childNodes[0]));
assert.match(afterNameSort[0], /a\.mesh/);
assert.match(afterNameSort[1], /c\.mesh/);
assert.match(afterNameSort[2], /b\.mesh/);
nameHandlers.click();
const descSort = tbodyEl.childNodes.map(node => String(node.childNodes[0]));
assert.match(descSort[0], /c\.mesh/);
assert.match(descSort[1], /a\.mesh/);
assert.match(descSort[2], /b\.mesh/);
assert.equal(headerName.closest().attributes.get('aria-sort'), 'descending');
assert.equal(circleMarkerCalls[0].options.fillColor, roleColors.CLIENT_HIDDEN);
assert.equal(circleMarkerCalls[1].options.fillColor, roleColors.REPEATER);
assert.deepEqual(mapSetViewCalls[0], [[0, 0], 3]);
assert.equal(mapFitBoundsCalls[0][0].length, 3);
assert.equal(legendContainers.length, 1);
const legend = legendContainers[0];
assert.ok(legend.className.includes('legend'));
const legendHeader = legend.children.find(child => child.className === 'legend-header');
const legendTitle = legendHeader && Array.isArray(legendHeader.children)
? legendHeader.children.find(child => child.className === 'legend-title')
: null;
assert.ok(legendTitle);
assert.equal(legendTitle.textContent, 'Active nodes');
} finally {
cleanup();
}
});
test('federation page tolerates fetch failures', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: false });
const { document, createElement, registerElement, cleanup } = env;
const mapEl = createElement('div', 'map');
registerElement('map', mapEl);
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);
const tableEl = createElement('table', 'instances');
const tbodyEl = createElement('tbody');
registerElement('instances', tableEl);
const configEl = createElement('div');
configEl.setAttribute('data-app-config', JSON.stringify({}));
document.querySelector = selector => {
if (selector === '[data-app-config]') return configEl;
if (selector === '#instances tbody') return tbodyEl;
return null;
};
const leafletStub = {
map() {
return {
setView() {},
on() {},
getPane() {
return null;
}
};
},
tileLayer() {
return {
addTo() {
return this;
},
getContainer() {
return null;
},
on() {}
};
},
layerGroup() {
return { addLayer() {}, addTo() { return this; } };
},
circleMarker() {
return { bindPopup() { return this; } };
}
};
const fetchImpl = async () => {
throw new Error('boom');
};
await initializeFederationPage({ config: {}, fetchImpl, leaflet: leafletStub });
cleanup();
});
@@ -90,10 +90,29 @@ test('resolveInstanceLabel falls back to the domain when the name is missing', (
test('buildInstanceUrl normalises domains into navigable HTTPS URLs', () => {
assert.equal(buildInstanceUrl('mesh.example'), 'https://mesh.example');
assert.equal(buildInstanceUrl(' https://mesh.example '), 'https://mesh.example');
assert.equal(buildInstanceUrl('https://mesh.example/path?query#fragment'), 'https://mesh.example');
assert.equal(buildInstanceUrl('javascript:alert(1)'), null);
assert.equal(buildInstanceUrl('ftp://mesh.example'), null);
assert.equal(buildInstanceUrl('mesh.example:8080'), 'https://mesh.example:8080');
assert.equal(buildInstanceUrl('mesh.example<script>'), null);
assert.equal(buildInstanceUrl(''), null);
assert.equal(buildInstanceUrl(null), null);
});
test('buildInstanceUrl rejects malformed HTTP URLs safely', () => {
const originalWarn = console.warn;
const warnings = [];
console.warn = message => warnings.push(message);
try {
assert.equal(buildInstanceUrl('http://[::1'), null);
assert.equal(buildInstanceUrl('https://bad host.example'), null);
assert.ok(warnings.length >= 1);
} finally {
console.warn = originalWarn;
}
});
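These assertions describe the normalisation contract: bare domains get an `https://` prefix, paths/queries/fragments are stripped, non-default ports survive, and anything with a non-HTTPS scheme or a malformed host yields `null`. A comparable sketch in Ruby (an assumption about equivalent logic, not the project's JavaScript implementation):

```ruby
require "uri"

# Sketch of instance-URL normalisation: accept bare domains and https URLs,
# reduce them to scheme + host (+ non-default port), reject everything else.
def build_instance_url(domain)
  raw = domain.to_s.strip
  return nil if raw.empty?
  if raw.include?("://") || raw.match?(/\A[a-z][a-z0-9+.-]*:[^\d]/i)
    # Already carries a scheme; only https is navigable here.
    return nil unless raw.match?(%r{\Ahttps://}i)
    candidate = raw
  else
    candidate = "https://#{raw}"
  end
  uri = URI.parse(candidate)
  return nil unless uri.host && !uri.host.empty?
  port = uri.port == 443 ? "" : ":#{uri.port}"
  "https://#{uri.host}#{port}"
rescue URI::InvalidURIError
  nil # malformed hosts such as "mesh.example<script>" land here
end
```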
test('initializeInstanceSelector populates options alphabetically and selects the configured domain', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);
@@ -0,0 +1,41 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { resolveLegendVisibility } from '../map-legend-visibility.js';
test('resolveLegendVisibility hides when a default collapse is requested', () => {
assert.equal(resolveLegendVisibility({ defaultCollapsed: true, mediaQueryMatches: false }), false);
assert.equal(resolveLegendVisibility({ defaultCollapsed: true, mediaQueryMatches: true }), false);
});
test('resolveLegendVisibility hides for dashboard and map views', () => {
assert.equal(
resolveLegendVisibility({ defaultCollapsed: false, mediaQueryMatches: false, viewMode: 'dashboard' }),
false
);
assert.equal(
resolveLegendVisibility({ defaultCollapsed: false, mediaQueryMatches: false, viewMode: 'map' }),
false
);
});
test('resolveLegendVisibility follows the media query when not forced', () => {
assert.equal(resolveLegendVisibility({ defaultCollapsed: false, mediaQueryMatches: false }), true);
assert.equal(resolveLegendVisibility({ defaultCollapsed: false, mediaQueryMatches: true }), false);
});
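Taken together, these tests fix the visibility rule: a forced default collapse always hides the legend, the dashboard and map views hide it, and otherwise the media query decides (a match means a small viewport, so hidden). The rule fits in a few lines, sketched here in Ruby:

```ruby
# Visibility rule mirrored from the tests above: collapse wins,
# dashboard/map views hide, otherwise follow the media query.
def resolve_legend_visibility(default_collapsed:, media_query_matches:, view_mode: nil)
  return false if default_collapsed
  return false if %w[dashboard map].include?(view_mode.to_s)
  !media_query_matches
end
```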
@@ -405,6 +405,77 @@ test('renderTelemetryCharts renders condensed scatter charts when telemetry exis
assert.equal(html.includes('node-detail__chart-point'), true);
});
test('renderTelemetryCharts expands upper bounds when overflow metrics exceed defaults', () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const node = {
rawSources: {
telemetry: {
snapshots: [
{
rx_time: nowSeconds - 120,
device_metrics: {
battery_level: 90,
voltage: 7.2,
current: 3.6,
channel_utilization: 45,
air_util_tx: 18,
},
environment_metrics: {
temperature: 45,
relative_humidity: 48,
barometric_pressure: 1250,
gas_resistance: 1200,
iaq: 650,
},
},
],
},
},
};
const html = renderTelemetryCharts(node, { nowMs });
assert.match(html, />7\.2<\/text>/);
assert.match(html, />3\.6<\/text>/);
assert.match(html, />45<\/text>/);
assert.match(html, />650<\/text>/);
assert.match(html, />1100<\/text>/);
});
test('renderTelemetryCharts keeps default bounds when metrics stay within limits', () => {
const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
const nowSeconds = Math.floor(nowMs / 1000);
const node = {
rawSources: {
telemetry: {
snapshots: [
{
rx_time: nowSeconds - 180,
device_metrics: {
battery_level: 70,
voltage: 4.5,
current: 1.5,
channel_utilization: 35,
air_util_tx: 15,
},
environment_metrics: {
temperature: 25,
relative_humidity: 50,
barometric_pressure: 1015,
gas_resistance: 1500,
iaq: 200,
},
},
],
},
},
};
const html = renderTelemetryCharts(node, { nowMs });
assert.match(html, />6\.0<\/text>/);
assert.match(html, />3\.0<\/text>/);
assert.match(html, />40<\/text>/);
assert.match(html, />500<\/text>/);
});
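The two tests above describe an overflow rule: each axis keeps its default upper bound until a sample exceeds it, at which point the bound expands past the sample. A minimal sketch of that rule (the padding factor is an assumption; the real chart code may compute bounds differently):

```ruby
# Keep the default axis maximum unless a sample overflows it, in which
# case pad above the largest sample so the point stays inside the chart.
def chart_upper_bound(values, default_max, padding: 1.1)
  max = values.compact.max
  return default_max if max.nil? || max <= default_max
  max * padding
end

chart_upper_bound([4.5, 1.5], 6.0) # => 6.0 (within the default bound)
chart_upper_bound([7.2], 6.0)      # expands above the 7.2 sample
```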
test('renderNodeDetailHtml composes the table, neighbors, and messages', () => {
const html = renderNodeDetailHtml(
{
@@ -62,6 +62,16 @@ test('normalizeNodeCollection applies canonical forms to all nodes', () => {
assert.equal(nodes[1].air_util_tx, 5.5);
});
test('normalizeNodeSnapshot maps numeric roles to canonical identifiers', () => {
const roleNode = { role: '12', node_id: '!role' };
const numberRoleNode = { role: 12, nodeId: '!number-role' };
normalizeNodeCollection([roleNode, numberRoleNode]);
assert.equal(roleNode.role, 'CLIENT_BASE');
assert.equal(numberRoleNode.role, 'CLIENT_BASE');
});
test('normaliser helpers coerce primitive values consistently', () => {
assert.equal(normalizeNumber('42.1'), 42.1);
assert.equal(normalizeNumber('not-a-number'), null);
@@ -19,8 +19,13 @@ import assert from 'node:assert/strict';
import { buildTraceSegments, __testUtils } from '../trace-paths.js';
const {
coerceFiniteNumber,
findNode,
resolveNodeCoordinates,
canonicalNodeIdFromNumeric,
buildNodeIndex
} = __testUtils;
test('buildTraceSegments connects source, hops, and destination when coordinates exist', () => {
const traces = [
@@ -43,6 +48,29 @@ test('buildTraceSegments connects source, hops, and destination when coordinates
assert.equal(segments[0].color, 'color:ROUTER');
assert.equal(segments[1].color, 'color:CLIENT');
assert.equal(segments[0].rxTime, 1700);
assert.deepEqual(
segments[0].pathNodes.map(node => node.node_id),
['2658361180', '19088743', '4242424242']
);
});
test('buildTraceSegments links traces to canonical node IDs when numeric references are provided', () => {
const traces = [
{ id: 9_010, src: 0xbead_f00d, hops: [0xcafe_babe], dest: 0xfeed_c0de, rx_time: 1900 },
];
const nodes = [
{ node_id: '!beadf00d', latitude: 0, longitude: 0, role: 'ROUTER' },
{ node_id: '!cafebabe', latitude: 1, longitude: 1, role: 'CLIENT' },
{ node_id: '!feedc0de', latitude: 2, longitude: 2, role: 'CLIENT' },
];
const segments = buildTraceSegments(traces, nodes, { colorForNode: () => '#abcdef' });
assert.equal(segments.length, 2);
assert.deepEqual(segments[0].latlngs, [[0, 0], [1, 1]]);
assert.deepEqual(segments[1].latlngs, [[1, 1], [2, 2]]);
assert.equal(segments[0].color, '#abcdef');
assert.equal(segments[1].color, '#abcdef');
});
test('buildTraceSegments drops paths through hops without locations', () => {
@@ -98,13 +126,24 @@ test('helper utilities coerce values and locate nodes', () => {
assert.equal(coerceFiniteNumber(null), null);
assert.equal(coerceFiniteNumber(' '), null);
assert.equal(coerceFiniteNumber('7'), 7);
assert.equal(coerceFiniteNumber('!beadf00d'), 0xbeadf00d);
assert.equal(coerceFiniteNumber('0x10'), 16);
const byId = new Map([
['!id', { node_id: '!id', latitude: 1, longitude: 2 }],
['!beadf00d', { node_id: '!beadf00d', latitude: 3, longitude: 4 }]
]);
const byNum = new Map([
[99, { node_id: '!other', latitude: 0, longitude: 0 }],
[0xbeadf00d, { node_id: '!beadf00d', latitude: 3, longitude: 4 }]
]);
assert.equal(findNode(byId, byNum, '!id').node_id, '!id');
assert.equal(findNode(byId, byNum, 99).node_id, '!other');
assert.equal(findNode(byId, new Map(), 0xbeadf00d).node_id, '!beadf00d');
assert.equal(findNode(byId, byNum, 100), null);
assert.equal(canonicalNodeIdFromNumeric(0xbeadf00d), '!beadf00d');
const coords = resolveNodeCoordinates({ latitude: 5, longitude: 6, distance_km: 10 }, { limitDistance: true, maxDistanceKm: 15 });
assert.deepEqual(coords, [5, 6]);
const outOfRange = resolveNodeCoordinates({ latitude: 0, longitude: 0, distance_km: 20 }, { limitDistance: true, maxDistanceKm: 15 });
@@ -30,7 +30,8 @@ export const MAX_CHANNEL_INDEX = 9;
* NODE_INFO: 'node-info',
* TELEMETRY: 'telemetry',
* POSITION: 'position',
* NEIGHBOR: 'neighbor'
* NEIGHBOR: 'neighbor',
* TRACE: 'trace'
* }}
*/
export const CHAT_LOG_ENTRY_TYPES = Object.freeze({
@@ -39,6 +40,7 @@ export const CHAT_LOG_ENTRY_TYPES = Object.freeze({
TELEMETRY: 'telemetry',
POSITION: 'position',
NEIGHBOR: 'neighbor',
TRACE: 'trace',
MESSAGE_ENCRYPTED: 'message-encrypted'
});
@@ -63,13 +65,15 @@ function resolveSnapshotList(entry) {
* Build a data model describing the content for chat tabs.
*
* Entries outside the recent activity window, encrypted messages, and
* channels above {@link MAX_CHANNEL_INDEX} are filtered out. Channel
* buckets are only created when messages are present for that channel.
*
* @param {{
* nodes?: Array<Object>,
* telemetry?: Array<Object>,
* positions?: Array<Object>,
* neighbors?: Array<Object>,
* traces?: Array<Object>,
* messages?: Array<Object>,
* logOnlyMessages?: Array<Object>,
* nowSeconds: number,
@@ -87,6 +91,7 @@ export function buildChatTabModel({
telemetry = [],
positions = [],
neighbors = [],
traces = [],
messages = [],
logOnlyMessages = [],
nowSeconds,
@@ -156,6 +161,34 @@ export function buildChatTabModel({
}
}
for (const trace of traces || []) {
if (!trace) continue;
const ts = resolveTimestampSeconds(trace.rx_time ?? trace.rxTime, trace.rx_iso ?? trace.rxIso);
if (ts == null || ts < cutoff) continue;
const path = buildTracePath(trace);
if (path.length < 2) continue;
const firstHop = path[0] || {};
const traceLabels = path
.map(hop => {
if (!hop || typeof hop !== 'object') return null;
const candidates = [hop.id, hop.raw];
if (Number.isFinite(hop.num)) {
candidates.push(String(hop.num));
}
return candidates.find(val => val != null && String(val).trim().length > 0) ?? null;
})
.filter(value => value != null && value !== '');
logEntries.push({
ts,
type: CHAT_LOG_ENTRY_TYPES.TRACE,
trace,
tracePath: path,
traceLabels,
nodeId: firstHop.id ?? null,
nodeNum: firstHop.num ?? null
});
}
const encryptedLogEntries = [];
const encryptedLogKeys = new Set();
@@ -255,26 +288,6 @@ export function buildChatTabModel({
logEntries.sort((a, b) => a.ts - b.ts);
let hasPrimaryBucket = false;
for (const bucket of channelBuckets.values()) {
if (bucket.index === 0) {
hasPrimaryBucket = true;
break;
}
}
if (!hasPrimaryBucket) {
const bucketKey = '0';
channelBuckets.set(bucketKey, {
key: bucketKey,
id: buildChannelTabId(bucketKey),
index: 0,
label: '0',
entries: [],
labelPriority: CHANNEL_LABEL_PRIORITY.INDEX,
isPrimaryFallback: true
});
}
const channels = Array.from(channelBuckets.values()).sort((a, b) => {
if (a.index !== b.index) {
return a.index - b.index;
@@ -345,10 +358,59 @@ function pickFirstPropertyValue(source, keys) {
* @param {*} value Arbitrary payload candidate.
* @returns {?string} Canonical node identifier.
*/
function coerceFiniteNumber(value) {
if (value == null) return null;
if (typeof value === 'number') {
return Number.isFinite(value) ? value : null;
}
if (typeof value === 'string') {
const trimmed = value.trim();
if (!trimmed) return null;
if (trimmed.startsWith('!')) {
const hex = trimmed.slice(1);
if (!/^[0-9A-Fa-f]+$/.test(hex)) return null;
const parsedHex = Number.parseInt(hex, 16);
return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
}
if (/^0[xX][0-9A-Fa-f]+$/.test(trimmed)) {
const parsedHex = Number.parseInt(trimmed, 16);
return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
}
const parsed = Number(trimmed);
return Number.isFinite(parsed) ? parsed : null;
}
const parsed = Number(value);
return Number.isFinite(parsed) ? parsed : null;
}
function canonicalNodeIdFromNumeric(ref) {
if (!Number.isFinite(ref)) return null;
const unsigned = ref >>> 0;
const hex = unsigned.toString(16).padStart(8, '0');
return `!${hex}`;
}
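Together with `coerceFiniteNumber`, this helper round-trips between the `!hex` node-ID form and unsigned 32-bit numeric references. A minimal standalone sketch of that round-trip (mirroring the helpers above rather than importing them; `toNodeNum`/`toNodeId` are illustrative names):

```javascript
// Sketch mirroring the helpers above: parse a node reference into an
// unsigned 32-bit number, then render it back as a canonical `!hex` ID.
function toNodeNum(ref) {
  if (typeof ref === 'number') return Number.isFinite(ref) ? ref >>> 0 : null;
  const trimmed = String(ref).trim();
  if (trimmed.startsWith('!')) {
    const hex = trimmed.slice(1);
    if (!/^[0-9A-Fa-f]+$/.test(hex)) return null;
    return Number.parseInt(hex, 16) >>> 0; // >>> 0 forces unsigned 32-bit
  }
  const parsed = Number(trimmed);
  return Number.isFinite(parsed) ? parsed >>> 0 : null;
}

function toNodeId(num) {
  if (!Number.isFinite(num)) return null;
  return `!${(num >>> 0).toString(16).padStart(8, '0')}`;
}

console.log(toNodeId(toNodeNum('!beadf00d'))); // round-trips to '!beadf00d'
console.log(toNodeId(19088743));               // 0x01234567 -> '!01234567'
```

The `padStart(8, '0')` keeps short numeric IDs (like 19088743) at the full eight hex digits the canonical form expects.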
function normaliseNodeId(value) {
if (value == null) return null;
if (typeof value === 'number') {
return canonicalNodeIdFromNumeric(value);
}
if (typeof value === 'string') {
const trimmed = value.trim();
if (!trimmed) return null;
const canonicalFromNumeric = canonicalNodeIdFromNumeric(coerceFiniteNumber(trimmed));
return canonicalFromNumeric ?? trimmed;
}
if (typeof value !== 'object') return null;
const rawId = value.node_id ?? value.nodeId ?? null;
if (rawId != null) {
const canonical = normaliseNodeId(rawId);
if (canonical) return canonical;
}
const numericRef = value.node_num ?? value.nodeNum ?? value.num;
const numericId = canonicalNodeIdFromNumeric(coerceFiniteNumber(numericRef));
if (numericId) return numericId;
return null;
}
/**
@@ -366,6 +428,29 @@ function normaliseNeighborId(value) {
return null;
}
/**
* Build an ordered trace path of node identifiers and numeric references.
*
* @param {Object} trace Trace payload.
* @returns {Array<{id: ?string, num: ?number, raw: *}>} Ordered hop descriptors.
*/
function buildTracePath(trace) {
const path = [];
const append = value => {
if (value == null || value === '') return;
const id = normaliseNodeId(value);
const num = normaliseNodeNum({ num: value });
path.push({ id, num, raw: value });
};
append(trace.src ?? trace.source ?? trace.from);
const hops = Array.isArray(trace.hops) ? trace.hops : [];
for (const hop of hops) {
append(hop);
}
append(trace.dest ?? trace.destination ?? trace.to);
return path;
}
/**
* Extract a finite node number from a payload when available.
*
@@ -373,14 +458,17 @@ function normaliseNeighborId(value) {
* @returns {?number} Canonical numeric identifier.
*/
function normaliseNodeNum(value) {
if (Number.isFinite(value)) {
return Math.trunc(value);
}
const fromObject = value && typeof value === 'object'
? coerceFiniteNumber(value.node_num ?? value.nodeNum ?? value.num)
: null;
if (fromObject != null) {
return Math.trunc(fromObject);
}
const parsed = coerceFiniteNumber(value);
return parsed != null ? Math.trunc(parsed) : null;
}
/**
@@ -110,6 +110,7 @@ export function chatLogEntryMatchesQuery(entry, query) {
candidates.push(...collectSearchValues(entry.position));
candidates.push(...collectSearchValues(entry.neighbor));
candidates.push(...collectSearchValues(entry.neighborNode));
candidates.push(...(Array.isArray(entry.traceLabels) ? entry.traceLabels : []));
if (entry.nodeId) candidates.push(entry.nodeId);
if (entry.nodeNum != null && entry.nodeNum !== '') candidates.push(entry.nodeNum);
if (entry.neighborId) candidates.push(entry.neighborId);
@@ -0,0 +1,565 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { readAppConfig } from './config.js';
import { mergeConfig } from './settings.js';
import { roleColors } from './role-helpers.js';
/**
* Escape HTML special characters to prevent XSS.
*
* @param {string} str Raw string to escape.
* @returns {string} Escaped string safe for HTML insertion.
*/
function escapeHtml(str) {
if (typeof str !== 'string') return '';
return str
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&#039;');
}
/**
* Format a coordinate value to fixed decimal places.
*
* @param {number|null|undefined} v Coordinate value.
* @param {number} d Decimal places (default 5).
* @returns {string} Formatted coordinate or empty string.
*/
function fmtCoords(v, d = 5) {
if (v == null || v === '') return '';
const n = Number(v);
if (!Number.isFinite(n)) return '';
return n.toFixed(d);
}
/**
* Convert a Unix timestamp to a human-readable relative time string.
*
* @param {number|null|undefined} unixSec Unix timestamp in seconds.
* @param {number} nowSec Current timestamp in seconds.
* @returns {string} Relative time string or empty string.
*/
function timeAgo(unixSec, nowSec = Date.now() / 1000) {
if (unixSec == null || unixSec === '') return '';
const ts = Number(unixSec);
if (!Number.isFinite(ts) || ts <= 0) return '';
const diff = Math.max(0, Math.floor(nowSec - ts));
if (diff < 60) return `${diff}s ago`;
if (diff < 3600) return `${Math.floor(diff / 60)}m ago`;
if (diff < 86400) return `${Math.floor(diff / 3600)}h ago`;
return `${Math.floor(diff / 86400)}d ago`;
}
/**
* Build a navigable URL for an instance domain.
*
* @param {string} domain Instance domain.
* @returns {string|null} Navigable URL or null.
*/
function buildInstanceUrl(domain) {
if (typeof domain !== 'string' || !domain.trim()) return null;
const trimmed = domain.trim();
if (/^https?:\/\//i.test(trimmed)) return trimmed;
return `https://${trimmed}`;
}
const NODE_COUNT_COLOR_STOPS = [
{ limit: 100, color: roleColors.CLIENT_HIDDEN },
{ limit: 200, color: roleColors.SENSOR },
{ limit: 300, color: roleColors.TRACKER },
{ limit: 400, color: roleColors.CLIENT_MUTE },
{ limit: 500, color: roleColors.CLIENT },
{ limit: 600, color: roleColors.CLIENT_BASE },
{ limit: 700, color: roleColors.REPEATER },
{ limit: 800, color: roleColors.ROUTER_LATE },
{ limit: 900, color: roleColors.ROUTER }
];
const DEFAULT_INSTANCE_COLOR = roleColors.LOST_AND_FOUND || '#3388ff';
/**
* Determine the marker colour for an instance based on its active node count.
*
* @param {*} count Raw node count value from the API.
* @returns {string} CSS colour string.
*/
function colorForNodeCount(count) {
const numeric = Number(count);
if (!Number.isFinite(numeric) || numeric < 0) return DEFAULT_INSTANCE_COLOR;
const stop = NODE_COUNT_COLOR_STOPS.find(entry => numeric < entry.limit);
return stop && stop.color ? stop.color : DEFAULT_INSTANCE_COLOR;
}
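Because `NODE_COUNT_COLOR_STOPS` is ordered by ascending `limit`, `Array.prototype.find` returns the first stop whose limit exceeds the count, which makes the lookup a simple threshold bucketing. A self-contained sketch of the same pattern (the colors here are placeholders, not the real `roleColors` values):

```javascript
// Threshold bucketing with Array.prototype.find: stops are ordered by
// ascending limit, so the first stop whose limit exceeds the count wins.
// Colors are placeholders; the real map derives them from roleColors.
const STOPS = [
  { limit: 100, color: 'green' },
  { limit: 500, color: 'orange' },
  { limit: 900, color: 'red' }
];
const FALLBACK = 'gray';

function colorForCount(count) {
  const numeric = Number(count);
  if (!Number.isFinite(numeric) || numeric < 0) return FALLBACK;
  const stop = STOPS.find(entry => numeric < entry.limit);
  return stop ? stop.color : FALLBACK; // counts >= last limit use the fallback
}

console.log(colorForCount(42));   // 'green'
console.log(colorForCount(250));  // 'orange'
console.log(colorForCount(1200)); // 'gray'
```

Note that counts at or above the last limit fall through `find` and get the fallback color, which is why the legend needs an explicit final entry.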
/**
* Render arbitrary contact text while hyperlinking recognised URL-like segments.
*
* @param {*} contact Raw contact value from the API.
* @returns {string} HTML markup safe for insertion.
*/
function renderContactHtml(contact) {
if (typeof contact !== 'string') return '';
const trimmed = contact.trim();
if (!trimmed) return '';
const urlPattern = /(https?:\/\/[^\s]+|mailto:[^\s]+|matrix:[^\s]+)/gi;
const parts = [];
let lastIndex = 0;
let match;
while ((match = urlPattern.exec(trimmed)) !== null) {
const textBefore = trimmed.slice(lastIndex, match.index);
if (textBefore) {
parts.push(escapeHtml(textBefore));
}
const url = match[0];
const safeUrl = escapeHtml(url);
parts.push(`<a href="${safeUrl}" target="_blank" rel="noopener noreferrer">${safeUrl}</a>`);
lastIndex = match.index + url.length;
}
const trailing = trimmed.slice(lastIndex);
if (trailing) {
parts.push(escapeHtml(trailing));
}
const html = parts.join('');
return html.replace(/\r?\n/g, '<br>');
}
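The linkify loop above advances a global regex with `exec()`, escaping the plain text between matches and wrapping each match in an anchor. A reduced standalone sketch of that pattern (escaping is trimmed to `& < >` here for brevity):

```javascript
// Standalone sketch of the linkify loop in renderContactHtml: a global
// regex is advanced with exec(), plain text between matches is escaped,
// and each match becomes an anchor.
function escapeText(str) {
  return str.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

function linkify(text) {
  const urlPattern = /https?:\/\/[^\s]+/gi;
  const parts = [];
  let lastIndex = 0;
  let match;
  while ((match = urlPattern.exec(text)) !== null) {
    parts.push(escapeText(text.slice(lastIndex, match.index)));
    parts.push(`<a href="${escapeText(match[0])}">${escapeText(match[0])}</a>`);
    lastIndex = match.index + match[0].length; // resume after the match
  }
  parts.push(escapeText(text.slice(lastIndex))); // trailing plain text
  return parts.join('');
}

console.log(linkify('see https://example.com for details'));
```

Escaping both the visible text and the `href` value is what keeps attacker-controlled contact strings from injecting markup.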
/**
* Convert a value into a finite number or null when invalid.
*
* @param {*} value Raw value to convert.
* @returns {number|null} Finite number or null.
*/
function toFiniteNumber(value) {
const num = Number(value);
return Number.isFinite(num) ? num : null;
}
/**
* Compare two string-like values ignoring case.
*
* @param {*} a Left-hand operand.
* @param {*} b Right-hand operand.
* @returns {number} Comparator result.
*/
function compareString(a, b) {
const left = typeof a === 'string' ? a.toLowerCase() : String(a ?? '').toLowerCase();
const right = typeof b === 'string' ? b.toLowerCase() : String(b ?? '').toLowerCase();
return left.localeCompare(right);
}
/**
* Compare two numeric values.
*
* @param {*} a Left-hand operand.
* @param {*} b Right-hand operand.
* @returns {number} Comparator result.
*/
function compareNumber(a, b) {
const left = toFiniteNumber(a);
const right = toFiniteNumber(b);
if (left == null && right == null) return 0;
if (left == null) return 1;
if (right == null) return -1;
if (left === right) return 0;
return left < right ? -1 : 1;
}
/**
* Determine whether a string-like value is present.
*
* @param {*} value Candidate value.
* @returns {boolean} true when present.
*/
function hasStringValue(value) {
if (value == null) return false;
if (typeof value === 'string') return value.trim() !== '';
return String(value).trim() !== '';
}
/**
* Determine whether a numeric value is present.
*
* @param {*} value Candidate value.
* @returns {boolean} true when present.
*/
function hasNumberValue(value) {
return toFiniteNumber(value) != null;
}
const TILE_LAYER_URL = 'https://{s}.tile.openstreetmap.fr/hot/{z}/{x}/{y}.png';
/**
* Initialize the federation page by fetching instances, rendering the map,
* and populating the table.
*
* @param {{
* config?: object,
* fetchImpl?: typeof fetch,
* leaflet?: typeof L
* }} [options] Optional overrides for testing.
* @returns {Promise<void>}
*/
export async function initializeFederationPage(options = {}) {
const rawConfig = options.config || readAppConfig();
const config = mergeConfig(rawConfig);
const fetchImpl = options.fetchImpl || fetch;
const leaflet = options.leaflet || (typeof window !== 'undefined' ? window.L : null);
const mapContainer = document.getElementById('map');
const tableEl = document.getElementById('instances');
const tableBody = document.querySelector('#instances tbody');
const statusEl = document.getElementById('status');
const sortHeaders = tableEl
? Array.from(tableEl.querySelectorAll('thead .sort-header[data-sort-key]'))
: [];
const hasLeaflet =
typeof leaflet === 'object' &&
leaflet &&
typeof leaflet.map === 'function' &&
typeof leaflet.tileLayer === 'function';
let map = null;
let markersLayer = null;
let tileLayer = null;
const tableSorters = {
name: { getValue: inst => inst.name ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
domain: { getValue: inst => inst.domain ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
contact: { getValue: inst => inst.contactLink ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
version: { getValue: inst => inst.version ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
channel: { getValue: inst => inst.channel ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
frequency: { getValue: inst => inst.frequency ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
nodesCount: {
getValue: inst => toFiniteNumber(inst.nodesCount ?? inst.nodes_count),
compare: compareNumber,
hasValue: hasNumberValue,
defaultDirection: 'desc'
},
latitude: { getValue: inst => toFiniteNumber(inst.latitude), compare: compareNumber, hasValue: hasNumberValue, defaultDirection: 'asc' },
longitude: { getValue: inst => toFiniteNumber(inst.longitude), compare: compareNumber, hasValue: hasNumberValue, defaultDirection: 'asc' },
lastUpdateTime: {
getValue: inst => toFiniteNumber(inst.lastUpdateTime),
compare: compareNumber,
hasValue: hasNumberValue,
defaultDirection: 'desc'
}
};
let sortState = {
key: 'lastUpdateTime',
direction: tableSorters.lastUpdateTime ? tableSorters.lastUpdateTime.defaultDirection : 'desc'
};
/**
* Sort instances using the active sort configuration.
*
* @param {Array<Object>} data Instance rows.
* @returns {Array<Object>} sorted rows.
*/
const sortInstancesData = data => {
const sorter = tableSorters[sortState.key];
if (!sorter) return Array.isArray(data) ? [...data] : [];
const dir = sortState.direction === 'asc' ? 1 : -1;
return [...(data || [])].sort((a, b) => {
const aVal = sorter.getValue(a);
const bVal = sorter.getValue(b);
const aHas = sorter.hasValue ? sorter.hasValue(aVal) : hasStringValue(aVal);
const bHas = sorter.hasValue ? sorter.hasValue(bVal) : hasStringValue(bVal);
if (aHas && bHas) {
return sorter.compare(aVal, bVal) * dir;
}
if (aHas) return -1;
if (bHas) return 1;
return 0;
});
};
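The comparator above only applies the direction multiplier when both rows have a usable value, so rows with missing values sink to the bottom whether the sort is ascending or descending. A generic self-contained sketch of that "blanks always last" pattern (`sortWithBlanksLast` is an illustrative name):

```javascript
// Sketch of the "missing values always sort last" pattern used by
// sortInstancesData: the direction multiplier only applies when both rows
// have a value, so blanks stay at the bottom either way.
function sortWithBlanksLast(rows, getValue, direction = 'asc') {
  const dir = direction === 'asc' ? 1 : -1;
  return [...rows].sort((a, b) => {
    const aVal = getValue(a);
    const bVal = getValue(b);
    const aHas = aVal != null;
    const bHas = bVal != null;
    if (aHas && bHas) return (aVal < bVal ? -1 : aVal > bVal ? 1 : 0) * dir;
    if (aHas) return -1; // a has a value, b does not -> a first
    if (bHas) return 1;
    return 0;
  });
}

const rows = [{ n: 3 }, { n: null }, { n: 1 }];
console.log(sortWithBlanksLast(rows, r => r.n, 'desc').map(r => r.n)); // [3, 1, null]
```

Had the direction multiplier been applied to the missing-value branches too, toggling the sort order would have flipped blanks to the top of the table.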
/**
* Update the visual sort indicators for the active column.
*
* @returns {void}
*/
const syncSortIndicators = () => {
if (!tableEl || !sortHeaders.length) return;
tableEl.querySelectorAll('thead th').forEach(th => th.removeAttribute('aria-sort'));
sortHeaders.forEach(header => {
header.removeAttribute('data-sort-active');
const indicator = header.querySelector('.sort-indicator');
if (indicator) indicator.textContent = '';
});
const active = sortHeaders.find(header => header.dataset.sortKey === sortState.key);
if (!active) return;
const indicator = active.querySelector('.sort-indicator');
if (indicator) indicator.textContent = sortState.direction === 'asc' ? '▲' : '▼';
active.setAttribute('data-sort-active', 'true');
const th = active.closest('th');
if (th) {
th.setAttribute('aria-sort', sortState.direction === 'asc' ? 'ascending' : 'descending');
}
};
/**
* Render the instances table body with sorting applied.
*
* @param {Array<Object>} data Instance rows.
* @param {number} nowSec Reference timestamp for relative time rendering.
* @returns {void}
*/
const renderTableRows = (data, nowSec) => {
if (!tableBody) return;
const frag = document.createDocumentFragment();
const sorted = sortInstancesData(data);
for (const instance of sorted) {
const tr = document.createElement('tr');
const url = buildInstanceUrl(instance.domain);
const nameHtml = instance.name ? escapeHtml(instance.name) : '<em>—</em>';
const domainHtml = url
? `<a href="${escapeHtml(url)}" target="_blank" rel="noopener">${escapeHtml(instance.domain || '')}</a>`
: escapeHtml(instance.domain || '');
const contactHtml = renderContactHtml(instance.contactLink);
const nodesCountValue = toFiniteNumber(instance.nodesCount ?? instance.nodes_count);
const nodesCountText = nodesCountValue == null ? '<em>—</em>' : escapeHtml(String(nodesCountValue));
tr.innerHTML = `
<td class="instances-col instances-col--name">${nameHtml}</td>
<td class="instances-col instances-col--domain mono">${domainHtml}</td>
<td class="instances-col instances-col--contact">${contactHtml || '<em>—</em>'}</td>
<td class="instances-col instances-col--version mono">${escapeHtml(instance.version || '')}</td>
<td class="instances-col instances-col--channel">${escapeHtml(instance.channel || '')}</td>
<td class="instances-col instances-col--frequency">${escapeHtml(instance.frequency || '')}</td>
<td class="instances-col instances-col--nodes mono">${nodesCountText}</td>
<td class="instances-col instances-col--latitude mono">${fmtCoords(instance.latitude)}</td>
<td class="instances-col instances-col--longitude mono">${fmtCoords(instance.longitude)}</td>
<td class="instances-col instances-col--last-update mono">${timeAgo(instance.lastUpdateTime, nowSec)}</td>
`;
frag.appendChild(tr);
}
tableBody.replaceChildren(frag);
syncSortIndicators();
};
/**
* Wire up click and keyboard handlers for sortable headers.
*
* @param {Function} rerender Callback to refresh the table.
* @returns {void}
*/
const attachSortHandlers = rerender => {
if (!sortHeaders.length) return;
const applySortKey = key => {
if (!key) return;
if (sortState.key === key) {
sortState = { key, direction: sortState.direction === 'asc' ? 'desc' : 'asc' };
} else {
const defaultDir = tableSorters[key]?.defaultDirection || 'asc';
sortState = { key, direction: defaultDir };
}
rerender();
};
sortHeaders.forEach(header => {
const key = header.dataset.sortKey;
header.addEventListener('click', () => applySortKey(key));
header.addEventListener('keydown', event => {
if (event.key === 'Enter' || event.key === ' ') {
event.preventDefault();
applySortKey(key);
}
});
});
};
/**
* Resolve the active theme based on the DOM state.
*
* @returns {'dark' | 'light'}
*/
const resolveTheme = () => {
if (document.body && document.body.classList.contains('dark')) return 'dark';
const htmlTheme = document.documentElement?.getAttribute('data-theme');
if (htmlTheme === 'dark' || htmlTheme === 'light') return htmlTheme;
return 'dark';
};
/**
* Apply the configured CSS filter to the active tile container.
*
* @returns {void}
*/
const applyTileFilter = () => {
if (!tileLayer) return;
const theme = resolveTheme();
const filterValue = theme === 'dark' ? config.tileFilters.dark : config.tileFilters.light;
const container =
typeof tileLayer.getContainer === 'function' ? tileLayer.getContainer() : null;
if (container && container.style) {
container.style.filter = filterValue;
container.style.webkitFilter = filterValue;
}
const tilePane = map && typeof map.getPane === 'function' ? map.getPane('tilePane') : null;
if (tilePane && tilePane.style) {
tilePane.style.filter = filterValue;
tilePane.style.webkitFilter = filterValue;
}
const tileNodes = [];
if (container && typeof container.querySelectorAll === 'function') {
tileNodes.push(...container.querySelectorAll('.leaflet-tile'));
}
if (tilePane && typeof tilePane.querySelectorAll === 'function') {
tileNodes.push(...tilePane.querySelectorAll('.leaflet-tile'));
}
tileNodes.forEach(tile => {
if (tile && tile.style) {
tile.style.filter = filterValue;
tile.style.webkitFilter = filterValue;
}
});
};
// Initialize the map if Leaflet is available
if (hasLeaflet && mapContainer) {
const initialZoom = Number.isFinite(config.mapZoom) ? config.mapZoom : 5;
map = leaflet.map(mapContainer, { worldCopyJump: true, attributionControl: false });
map.setView([config.mapCenter.lat, config.mapCenter.lon], initialZoom);
tileLayer = leaflet
.tileLayer(TILE_LAYER_URL, {
maxZoom: 19,
className: 'map-tiles',
crossOrigin: 'anonymous'
})
.addTo(map);
tileLayer.on?.('load', applyTileFilter);
applyTileFilter();
window.addEventListener('themechange', applyTileFilter);
markersLayer = leaflet.layerGroup().addTo(map);
}
// Fetch instances data
let instances = [];
try {
const response = await fetchImpl('/api/instances', {
headers: { Accept: 'application/json' },
credentials: 'omit'
});
if (response.ok) {
instances = await response.json();
}
} catch (err) {
console.warn('Failed to fetch federation instances', err);
}
if (statusEl) {
statusEl.textContent = `${instances.length} instances`;
statusEl.classList.remove('pill--loading');
}
const nowSec = Date.now() / 1000;
// Render map markers
if (map && markersLayer && hasLeaflet && Array.isArray(instances)) {
const bounds = [];
const canRenderLegend =
typeof leaflet.control === 'function' && leaflet.DomUtil && typeof leaflet.DomUtil.create === 'function';
if (canRenderLegend) {
const legendStops = NODE_COUNT_COLOR_STOPS.map((stop, index) => {
const lower = index === 0 ? 0 : NODE_COUNT_COLOR_STOPS[index - 1].limit;
const upper = stop.limit - 1;
const label = index === 0 ? `< ${stop.limit} nodes` : `${lower}-${upper} nodes`;
return { color: stop.color || DEFAULT_INSTANCE_COLOR, label };
});
const lastLimit = NODE_COUNT_COLOR_STOPS[NODE_COUNT_COLOR_STOPS.length - 1]?.limit || 900;
legendStops.push({ color: DEFAULT_INSTANCE_COLOR, label: `${lastLimit}+ nodes` });
const legend = leaflet.control({ position: 'bottomright' });
legend.onAdd = function onAdd() {
const container = leaflet.DomUtil.create('div', 'legend legend--instances');
container.setAttribute('aria-label', 'Active nodes legend');
const header = leaflet.DomUtil.create('div', 'legend-header', container);
const title = leaflet.DomUtil.create('span', 'legend-title', header);
title.textContent = 'Active nodes';
const items = leaflet.DomUtil.create('div', 'legend-items', container);
legendStops.forEach(stop => {
const item = leaflet.DomUtil.create('div', 'legend-item', items);
item.setAttribute('aria-hidden', 'true');
const swatch = leaflet.DomUtil.create('span', 'legend-swatch', item);
swatch.style.background = stop.color;
const label = leaflet.DomUtil.create('span', 'legend-label', item);
label.textContent = stop.label;
});
return container;
};
legend.addTo(map);
}
for (const instance of instances) {
const lat = Number(instance.latitude);
const lon = Number(instance.longitude);
if (!Number.isFinite(lat) || !Number.isFinite(lon)) continue;
bounds.push([lat, lon]);
const name = instance.name || instance.domain || 'Unknown';
const url = buildInstanceUrl(instance.domain);
const nodeCountValue = toFiniteNumber(instance.nodesCount ?? instance.nodes_count);
const popupLines = [
url
? `<strong><a href="${escapeHtml(url)}" target="_blank" rel="noopener">${escapeHtml(name)}</a></strong>`
: `<strong>${escapeHtml(name)}</strong>`,
`<span class="mono">${escapeHtml(instance.domain || '')}</span>`,
instance.channel ? `Channel: ${escapeHtml(instance.channel)}` : '',
instance.frequency ? `Frequency: ${escapeHtml(instance.frequency)}` : '',
instance.version ? `Version: ${escapeHtml(instance.version)}` : '',
nodeCountValue != null ? `Active nodes (24h): ${escapeHtml(String(nodeCountValue))}` : ''
].filter(Boolean);
const marker = leaflet.circleMarker([lat, lon], {
radius: 9,
fillColor: colorForNodeCount(nodeCountValue),
color: '#000',
weight: 1,
opacity: 0.8,
fillOpacity: 0.75
});
marker.bindPopup(popupLines.join('<br>'));
markersLayer.addLayer(marker);
}
if (bounds.length > 0 && typeof map.fitBounds === 'function') {
try {
map.fitBounds(bounds, { padding: [50, 50], maxZoom: 10 });
} catch (err) {
console.warn('Failed to fit federation map bounds', err);
}
}
}
// Render table
if (tableBody && Array.isArray(instances)) {
attachSortHandlers(() => renderTableRows(instances, nowSec));
renderTableRows(instances, nowSec);
}
}
@@ -34,12 +34,15 @@ function resolveInstanceLabel(entry) {
return domain;
}
/**
* Construct a navigable URL for the provided instance domain.
*
* The returned URL is guaranteed to use HTTP(S) and a host-only component to avoid
* interpreting arbitrary DOM-controlled text as executable content.
*
* @param {string} domain Instance domain as returned by the federation catalog.
* @returns {string|null} Navigable absolute URL or ``null`` when the domain is empty or unsafe.
*/
export function buildInstanceUrl(domain) {
if (typeof domain !== 'string') {
return null;
@@ -50,8 +53,29 @@ export function buildInstanceUrl(domain) {
return null;
}
const allowedHostPattern = /^[a-zA-Z0-9.-]+(?::\d{1,5})?$/;
if (/^https?:\/\//i.test(trimmed)) {
try {
const parsed = new URL(trimmed);
if (!['http:', 'https:'].includes(parsed.protocol)) {
return null;
}
const sanitizedHost = parsed.host.trim();
if (!allowedHostPattern.test(sanitizedHost)) {
return null;
}
return `${parsed.protocol}//${sanitizedHost}`;
} catch (error) {
console.warn('Rejected invalid instance URL', error);
return null;
}
}
if (!allowedHostPattern.test(trimmed)) {
return null;
}
return `https://${trimmed}`;
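The hardened helper thus does two things: it parses explicit `http(s)://` inputs with the WHATWG `URL` class and rebuilds the result from the host alone (discarding path, query, and credentials), and it rejects any bare value that is not a plain host. A condensed standalone sketch of the same logic (`sanitizeInstanceUrl` is an illustrative name):

```javascript
// Sketch of the sanitization in buildInstanceUrl: parse with the WHATWG URL
// class, accept only http(s), and rebuild from the host alone so untrusted
// path, query, and credential components are discarded.
const HOST_PATTERN = /^[a-zA-Z0-9.-]+(?::\d{1,5})?$/;

function sanitizeInstanceUrl(input) {
  const trimmed = String(input ?? '').trim();
  if (!trimmed) return null;
  if (/^https?:\/\//i.test(trimmed)) {
    try {
      const parsed = new URL(trimmed);
      if (!HOST_PATTERN.test(parsed.host)) return null;
      return `${parsed.protocol}//${parsed.host}`; // host-only rebuild
    } catch {
      return null; // unparseable URL
    }
  }
  // Bare domains must look like a plain host (optionally with a port).
  return HOST_PATTERN.test(trimmed) ? `https://${trimmed}` : null;
}

console.log(sanitizeInstanceUrl('example.com'));                   // 'https://example.com'
console.log(sanitizeInstanceUrl('https://example.com/some/path')); // 'https://example.com'
console.log(sanitizeInstanceUrl('javascript://alert(1)'));         // null
```

This is the change in behavior the new doc comment describes: schemes other than HTTP(S), which the old regex would have passed through verbatim, now return null.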
@@ -18,6 +18,7 @@ import { computeBoundingBox, computeBoundsForPoints, haversineDistanceKm } from
import { createMapAutoFitController } from './map-auto-fit-controller.js';
import { resolveAutoFitBoundsConfig } from './map-auto-fit-settings.js';
import { attachNodeInfoRefreshToMarker, overlayToPopupNode } from './map-marker-node-info.js';
import { resolveLegendVisibility } from './map-legend-visibility.js';
import { createMapFocusHandler, DEFAULT_NODE_FOCUS_ZOOM } from './nodes-map-focus.js';
import { enhanceCoordinateCell } from './nodes-coordinate-links.js';
import { createShortInfoOverlayStack } from './short-info-overlay-manager.js';
@@ -116,6 +117,7 @@ export function initializeApp(config) {
: false;
const isDashboardView = bodyClassList ? bodyClassList.contains('view-dashboard') : false;
const isChatView = bodyClassList ? bodyClassList.contains('view-chat') : false;
const isMapView = bodyClassList ? bodyClassList.contains('view-map') : false;
const mapZoomOverride = Number.isFinite(config.mapZoom) ? Number(config.mapZoom) : null;
/**
* Column sorter configuration for the node table.
@@ -190,6 +192,7 @@ export function initializeApp(config) {
});
const NODE_LIMIT = 1000;
const TRACE_LIMIT = 200;
const TRACE_MAX_AGE_SECONDS = 7 * 24 * 60 * 60;
const SNAPSHOT_LIMIT = SNAPSHOT_WINDOW;
const CHAT_LIMIT = MESSAGE_LIMIT;
const CHAT_RECENT_WINDOW_SECONDS = 7 * 24 * 60 * 60;
@@ -433,6 +436,8 @@ export function initializeApp(config) {
const mapPanel = document.getElementById('mapPanel');
const mapFullscreenToggle = document.getElementById('mapFullscreenToggle');
const fullscreenContainer = mapPanel || mapContainer;
const isFederationView = bodyClassList ? bodyClassList.contains('view-federation') : false;
const legendDefaultCollapsed = mapPanel ? mapPanel.dataset.legendCollapsed === 'true' : false;
let mapStatusEl = null;
let map = null;
let mapCenterLatLng = null;
@@ -451,8 +456,11 @@ export function initializeApp(config) {
const AUTO_FIT_PADDING_PX = 12;
const MAX_INITIAL_ZOOM = 13;
let neighborLinesLayer = null;
let traceLinesLayer = null;
let neighborLinesVisible = true;
let traceLinesVisible = true;
let neighborLinesToggleButton = null;
let traceLinesToggleButton = null;
let markersLayer = null;
let tileDomObserver = null;
const fullscreenChangeEvents = [
@@ -1170,7 +1178,9 @@ export function initializeApp(config) {
applyFiltersToAllTiles();
}
if (hasLeaflet && mapContainer) {
const mapAlreadyInitialized = mapContainer && mapContainer._leaflet_id;
if (hasLeaflet && mapContainer && !isFederationView && !mapAlreadyInitialized) {
map = L.map(mapContainer, { worldCopyJump: true, attributionControl: false });
showMapStatus('Loading map tiles…');
tiles = L.tileLayer(TILE_LAYER_URL, {
@@ -1241,12 +1251,13 @@ export function initializeApp(config) {
});
neighborLinesLayer = L.layerGroup().addTo(map);
traceLinesLayer = L.layerGroup().addTo(map);
markersLayer = L.layerGroup().addTo(map);
if (typeof navigator !== 'undefined' && navigator && navigator.onLine === false) {
activateOfflineTiles('Offline mode detected. Using placeholder basemap.');
}
} else if (mapContainer) {
} else if (mapContainer && !isFederationView) {
setMapPlaceholder('Leaflet assets are unavailable. Data will continue to refresh without a live map.');
}
@@ -1333,6 +1344,38 @@ export function initializeApp(config) {
updateNeighborLinesToggleState();
}
/**
* Synchronise the traceroute line toggle button with the active state.
*
* @returns {void}
*/
function updateTraceLinesToggleState() {
if (!traceLinesToggleButton) return;
const label = traceLinesVisible ? 'Hide trace lines' : 'Show trace lines';
traceLinesToggleButton.textContent = label;
traceLinesToggleButton.setAttribute('aria-pressed', traceLinesVisible ? 'true' : 'false');
traceLinesToggleButton.setAttribute('aria-label', label);
}
/**
* Toggle the Leaflet layer that renders traceroute connections.
*
* @param {boolean} visible Whether to show traceroute paths.
* @returns {void}
*/
function setTraceLinesVisibility(visible) {
traceLinesVisible = Boolean(visible);
if (traceLinesLayer && map) {
const hasLayer = map.hasLayer(traceLinesLayer);
if (traceLinesVisible && !hasLayer) {
traceLinesLayer.addTo(map);
} else if (!traceLinesVisible && hasLayer) {
map.removeLayer(traceLinesLayer);
}
}
updateTraceLinesToggleState();
}
/**
* Refresh the legend buttons to reflect the active role filters.
*
@@ -1430,6 +1473,15 @@ export function initializeApp(config) {
});
updateNeighborLinesToggleState();
traceLinesToggleButton = L.DomUtil.create('button', 'legend-item legend-toggle-traces', toggle);
traceLinesToggleButton.type = 'button';
traceLinesToggleButton.addEventListener('click', event => {
event.preventDefault();
event.stopPropagation();
setTraceLinesVisibility(!traceLinesVisible);
});
updateTraceLinesToggleState();
const resetButton = L.DomUtil.create('button', 'legend-item legend-reset', toggle);
resetButton.type = 'button';
resetButton.textContent = 'Clear filters';
@@ -1477,8 +1529,14 @@ export function initializeApp(config) {
legendToggleControl.addTo(map);
const legendMediaQuery = window.matchMedia('(max-width: 1024px)');
setLegendVisibility(!legendMediaQuery.matches);
const initialLegendVisible = resolveLegendVisibility({
defaultCollapsed: legendDefaultCollapsed,
mediaQueryMatches: legendMediaQuery.matches,
viewMode: isDashboardView ? 'dashboard' : (isMapView ? 'map' : undefined)
});
setLegendVisibility(initialLegendVisible);
legendMediaQuery.addEventListener('change', event => {
if (legendDefaultCollapsed || isDashboardView || isMapView) return;
setLegendVisibility(!event.matches);
});
} else if (mapContainer && !hasLeaflet) {
@@ -1761,6 +1819,37 @@ export function initializeApp(config) {
return value.replace(/[^a-zA-Z0-9_-]/g, chr => `\\${chr}`);
}
/**
* Parse a node identifier or numeric reference into a finite number.
*
* @param {*} ref Identifier or numeric reference.
* @returns {number|null} Parsed number or ``null``.
*/
function parseNodeNumericRef(ref) {
if (ref == null) return null;
if (typeof ref === 'number') {
return Number.isFinite(ref) ? ref : null;
}
if (typeof ref === 'string') {
const trimmed = ref.trim();
if (!trimmed) return null;
if (trimmed.startsWith('!')) {
const hex = trimmed.slice(1);
if (!/^[0-9A-Fa-f]+$/.test(hex)) return null;
const parsedHex = Number.parseInt(hex, 16);
return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
}
if (/^0[xX][0-9A-Fa-f]+$/.test(trimmed)) {
const parsedHex = Number.parseInt(trimmed, 16);
return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
}
const parsed = Number(trimmed);
return Number.isFinite(parsed) ? parsed : null;
}
const parsed = Number(ref);
return Number.isFinite(parsed) ? parsed : null;
}
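The helper accepts `!hex` node identifiers, `0x` literals, and plain decimals, all mapped onto unsigned 32-bit space. Restated verbatim so the accepted formats can be exercised standalone:

```javascript
function parseNodeNumericRef(ref) {
  if (ref == null) return null;
  if (typeof ref === 'number') {
    return Number.isFinite(ref) ? ref : null;
  }
  if (typeof ref === 'string') {
    const trimmed = ref.trim();
    if (!trimmed) return null;
    // '!hex' identifiers map to unsigned 32-bit numbers via '>>> 0'.
    if (trimmed.startsWith('!')) {
      const hex = trimmed.slice(1);
      if (!/^[0-9A-Fa-f]+$/.test(hex)) return null;
      const parsedHex = Number.parseInt(hex, 16);
      return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
    }
    // Explicit '0x' literals get the same hex treatment.
    if (/^0[xX][0-9A-Fa-f]+$/.test(trimmed)) {
      const parsedHex = Number.parseInt(trimmed, 16);
      return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
    }
    const parsed = Number(trimmed);
    return Number.isFinite(parsed) ? parsed : null;
  }
  const parsed = Number(ref);
  return Number.isFinite(parsed) ? parsed : null;
}

console.log(parseNodeNumericRef('!a1b2c3d4')); // 2712847316
console.log(parseNodeNumericRef('0x10'));      // 16
```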
/**
* Populate the ``nodesById`` index for quick lookups.
*
@@ -1778,9 +1867,13 @@ export function initializeApp(config) {
: (typeof node.nodeId === 'string' ? node.nodeId : null);
if (nodeIdRaw) {
nodesById.set(nodeIdRaw.trim(), node);
const numericFromId = parseNodeNumericRef(nodeIdRaw);
if (numericFromId != null && !nodesByNum.has(numericFromId)) {
nodesByNum.set(numericFromId, node);
}
}
const nodeNumRaw = node.num ?? node.node_num ?? node.nodeNum;
const nodeNum = typeof nodeNumRaw === 'number' ? nodeNumRaw : Number(nodeNumRaw);
const nodeNum = parseNodeNumericRef(nodeNumRaw);
if (Number.isFinite(nodeNum)) {
nodesByNum.set(nodeNum, node);
}
@@ -2397,6 +2490,8 @@ export function initializeApp(config) {
return createPositionChatEntry(entry, context);
case CHAT_LOG_ENTRY_TYPES.NEIGHBOR:
return createNeighborChatEntry(entry, context);
case CHAT_LOG_ENTRY_TYPES.TRACE:
return createTraceChatEntry(entry, context);
case CHAT_LOG_ENTRY_TYPES.MESSAGE_ENCRYPTED:
return entry?.message ? createMessageChatEntry(entry.message) : null;
default:
@@ -2443,6 +2538,138 @@ export function initializeApp(config) {
return div;
}
/**
* Convert a trace path into user-friendly labels using cached node metadata.
*
* @param {Array<{id: ?string, num: ?number, raw: *}>} tracePath Ordered hop references.
* @returns {Array<string>} Display labels for each hop.
*/
function formatTracePathLabels(tracePath) {
if (!Array.isArray(tracePath)) return [];
const labels = [];
for (const hop of tracePath) {
if (!hop || typeof hop !== 'object') continue;
const node = resolveNodeForHop(hop);
const fallbackId = hop.id ?? (Number.isFinite(hop.num) ? String(hop.num) : (hop.raw != null ? String(hop.raw) : ''));
const shortName = node ? normalizeNodeNameValue(node.short_name ?? node.shortName) : null;
const label = shortName || (node ? (getNodeDisplayNameForOverlay(node) || fallbackId) : fallbackId);
if (label) {
labels.push(String(label));
}
}
return labels;
}
function createTraceChatEntry(entry, context) {
if (!entry || !Array.isArray(entry.tracePath) || entry.tracePath.length < 2) {
return null;
}
const sourceHop = entry.tracePath[0] || null;
const sourceNode = resolveNodeForHop(sourceHop);
const labels = formatTracePathLabels(entry.tracePath);
const labelText = labels.length ? labels.join(', ') : 'Traceroute';
const labelSuffix = `: ${escapeHtml(labelText)}`;
return createAnnouncementEntry({
timestampSeconds: entry?.ts ?? null,
shortName: context.shortName,
longName: context.longName || context.nodeId || labels[0] || 'Traceroute',
role: context.role,
metadataSource: sourceNode || context.metadataSource,
nodeData: sourceNode || context.nodeData,
messageHtml: `${renderEmojiHtml('👣')} ${renderAnnouncementCopy('Caught trace', labelSuffix)}`
});
}
/**
* Build tooltip HTML showing styled short-name badges for a trace path.
*
* @param {Array<Object>} pathNodes Ordered node payloads along the trace.
* @returns {string} HTML fragment or ``''`` when unavailable.
*/
function buildTraceTooltipHtml(pathNodes) {
if (!Array.isArray(pathNodes) || pathNodes.length < 2) {
return '';
}
const parts = pathNodes
.map(node => {
if (!node || typeof node !== 'object') {
return null;
}
const short = normalizeNodeNameValue(node.short_name ?? node.shortName) || (typeof node.node_id === 'string' ? node.node_id : '');
const long = normalizeNodeNameValue(node.long_name ?? node.longName) || '';
return renderShortHtml(short, node.role, long, node);
})
.filter(Boolean);
if (!parts.length) return '';
const arrow = '<span class="trace-tooltip__arrow" aria-hidden="true">→</span>';
return `<div class="trace-tooltip__content">${parts.join(arrow)}</div>`;
}
/**
* Build tooltip HTML for a neighbor segment showing styled short-name badges.
*
* @param {{sourceNode?: Object, targetNode?: Object, sourceShortName?: string, targetShortName?: string, sourceRole?: string, targetRole?: string}} segment Neighbor segment descriptor.
* @returns {string} HTML fragment or ``''`` when unavailable.
*/
function buildNeighborTooltipHtml(segment) {
if (!segment) return '';
const sourceNode = segment.sourceNode || null;
const targetNode = segment.targetNode || null;
const sourceShort = normalizeNodeNameValue(
segment.sourceShortName ||
(sourceNode ? sourceNode.short_name ?? sourceNode.shortName : null) ||
(sourceNode && typeof sourceNode.node_id === 'string' ? sourceNode.node_id : '')
);
const targetShort = normalizeNodeNameValue(
segment.targetShortName ||
(targetNode ? targetNode.short_name ?? targetNode.shortName : null) ||
(targetNode && typeof targetNode.node_id === 'string' ? targetNode.node_id : '')
);
if (!sourceShort || !targetShort) return '';
const sourceLong = normalizeNodeNameValue(sourceNode?.long_name ?? sourceNode?.longName) || '';
const targetLong = normalizeNodeNameValue(targetNode?.long_name ?? targetNode?.longName) || '';
const sourceHtml = renderShortHtml(sourceShort, segment.sourceRole, sourceLong, sourceNode || {});
const targetHtml = renderShortHtml(targetShort, segment.targetRole, targetLong, targetNode || {});
const arrow = '<span class="trace-tooltip__arrow" aria-hidden="true">→</span>';
return `<div class="trace-tooltip__content">${sourceHtml}${arrow}${targetHtml}</div>`;
}
/**
* Resolve a node reference for a trace hop using cached node indices.
*
* @param {{id?: string, num?: number}|null} hop Trace hop descriptor.
* @returns {?Object} Node payload when available.
*/
function resolveNodeForHop(hop) {
if (!hop || typeof hop !== 'object') {
return null;
}
const id = typeof hop.id === 'string' ? hop.id.trim() : null;
const idCandidates = [];
if (id) {
idCandidates.push(id);
idCandidates.push(id.toUpperCase());
idCandidates.push(id.toLowerCase());
}
for (const candidate of idCandidates) {
if (candidate && nodesById instanceof Map && nodesById.has(candidate)) {
return nodesById.get(candidate);
}
}
const numericCandidates = [];
if (Number.isFinite(hop.num)) numericCandidates.push(hop.num);
const parsedFromId = parseNodeNumericRef(id);
if (parsedFromId != null) numericCandidates.push(parsedFromId);
const parsedFromNum = parseNodeNumericRef(hop.num);
if (parsedFromNum != null) numericCandidates.push(parsedFromNum);
for (const numeric of numericCandidates) {
if (Number.isFinite(numeric) && nodesByNum instanceof Map && nodesByNum.has(numeric)) {
return nodesByNum.get(numeric);
}
}
return null;
}
/**
* Derive display context for a chat log entry by inspecting node payloads.
*
@@ -2806,6 +3033,7 @@ export function initializeApp(config) {
* telemetryEntries?: Array<Object>,
* positionEntries?: Array<Object>,
* neighborEntries?: Array<Object>,
* traceEntries?: Array<Object>,
* filterQuery?: string
* }} params Render inputs.
* @returns {void}
@@ -2817,6 +3045,7 @@ export function initializeApp(config) {
telemetryEntries = [],
positionEntries = [],
neighborEntries = [],
traceEntries = [],
filterQuery = ''
}) {
if (!CHAT_ENABLED || !chatEl) return;
@@ -2831,6 +3060,7 @@ export function initializeApp(config) {
telemetry: telemetryEntries,
positions: positionEntries,
neighbors: neighborEntries,
traces: traceEntries,
messages,
logOnlyMessages: encryptedMessages,
nowSeconds,
@@ -3280,7 +3510,8 @@ export function initializeApp(config) {
const effectiveLimit = Math.min(safeLimit, NODE_LIMIT);
const r = await fetch(`/api/traces?limit=${effectiveLimit}`, { cache: 'no-store' });
if (!r.ok) throw new Error('HTTP ' + r.status);
return r.json();
const traces = await r.json();
return filterRecentTraces(traces, TRACE_MAX_AGE_SECONDS);
}
/**
@@ -3340,6 +3571,28 @@ export function initializeApp(config) {
return null;
}
/**
* Filter trace entries to discard packets older than the configured window.
*
* @param {Array<Object>} traces Trace payloads.
* @param {number} [maxAgeSeconds=TRACE_MAX_AGE_SECONDS] Maximum allowed age in seconds.
* @returns {Array<Object>} Recent trace entries.
*/
function filterRecentTraces(traces, maxAgeSeconds = TRACE_MAX_AGE_SECONDS) {
if (!Array.isArray(traces)) {
return [];
}
if (!Number.isFinite(maxAgeSeconds) || maxAgeSeconds <= 0) {
return [...traces];
}
const nowSeconds = Math.floor(Date.now() / 1000);
const cutoff = nowSeconds - maxAgeSeconds;
return traces.filter(trace => {
const rxTime = resolveTimestampSeconds(trace?.rx_time ?? trace?.rxTime, trace?.rx_iso ?? trace?.rxIso);
return rxTime != null && rxTime >= cutoff;
});
}
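A simplified sketch of the age filter, assuming each trace exposes a numeric `rx_time` in epoch seconds (the real helper also accepts an ISO fallback via `resolveTimestampSeconds`):

```javascript
const TRACE_MAX_AGE_SECONDS = 7 * 24 * 60 * 60; // one-week window, matching the patch

function filterRecentTraces(traces, maxAgeSeconds = TRACE_MAX_AGE_SECONDS) {
  if (!Array.isArray(traces)) return [];
  // A non-positive or non-finite window disables filtering entirely.
  if (!Number.isFinite(maxAgeSeconds) || maxAgeSeconds <= 0) return [...traces];
  const cutoff = Math.floor(Date.now() / 1000) - maxAgeSeconds;
  return traces.filter(trace => Number.isFinite(trace?.rx_time) && trace.rx_time >= cutoff);
}
```

Traces without a resolvable timestamp drop out, so stale server-side data older than a week never reaches the map layer.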
/**
* Merge recent position packets into the node list.
*
@@ -3620,6 +3873,9 @@ export function initializeApp(config) {
if (neighborLinesLayer) {
neighborLinesLayer.clearLayers();
}
if (traceLinesLayer) {
traceLinesLayer.clearLayers();
}
markersLayer.clearLayers();
const pts = [];
const nodesById = new Map();
@@ -3629,7 +3885,7 @@ export function initializeApp(config) {
if (typeof nodeId !== 'string' || nodeId.length === 0) continue;
nodesById.set(nodeId, node);
}
const traceSegments = neighborLinesLayer
const traceSegments = traceLinesLayer
? buildTraceSegments(allTraces, nodes, {
limitDistance: LIMIT_DISTANCE,
maxDistanceKm: MAX_DISTANCE_KM,
@@ -3721,6 +3977,21 @@ export function initializeApp(config) {
opacity: 0.42,
className: 'neighbor-connection-line'
}).addTo(neighborLinesLayer);
if (polyline && typeof polyline.bindTooltip === 'function') {
const tooltipHtml = buildNeighborTooltipHtml({
...segment,
sourceNode: nodesById.get(segment.sourceId),
targetNode: nodesById.get(segment.targetId)
});
if (tooltipHtml) {
polyline.bindTooltip(tooltipHtml, {
direction: 'center',
opacity: 0.92,
sticky: true,
className: 'trace-tooltip'
});
}
}
if (polyline && typeof polyline.on === 'function') {
polyline.on('click', event => {
if (event && event.originalEvent) {
@@ -3738,6 +4009,13 @@ export function initializeApp(config) {
? event.originalEvent.target
: null;
const anchorEl = polyline.getElement() || clickTarget;
if (polyline && typeof polyline.isTooltipOpen === 'function' && typeof polyline.openTooltip === 'function') {
if (polyline.isTooltipOpen()) {
polyline.closeTooltip();
} else {
polyline.openTooltip();
}
}
if (!anchorEl) return;
if (overlayStack.isOpen(anchorEl)) {
overlayStack.close(anchorEl);
@@ -3749,7 +4027,7 @@ export function initializeApp(config) {
});
}
if (neighborLinesLayer && traceSegments.length) {
if (traceLinesLayer && traceSegments.length) {
traceSegments
.sort((a, b) => {
const rxA = Number.isFinite(a.rxTime) ? a.rxTime : -Infinity;
@@ -3758,13 +4036,43 @@ export function initializeApp(config) {
return rxA - rxB;
})
.forEach(segment => {
L.polyline(segment.latlngs, {
const polyline = L.polyline(segment.latlngs, {
color: segment.color,
weight: 2,
opacity: 0.42,
dashArray: '6 6',
className: 'neighbor-connection-line trace-connection-line'
}).addTo(neighborLinesLayer);
}).addTo(traceLinesLayer);
if (polyline && typeof polyline.bindTooltip === 'function') {
const tooltipHtml = buildTraceTooltipHtml(segment.pathNodes);
if (tooltipHtml) {
polyline.bindTooltip(tooltipHtml, {
direction: 'center',
opacity: 0.92,
sticky: true,
className: 'trace-tooltip'
});
}
}
if (polyline && typeof polyline.on === 'function') {
polyline.on('click', event => {
if (event && event.originalEvent) {
if (typeof event.originalEvent.preventDefault === 'function') {
event.originalEvent.preventDefault();
}
if (typeof event.originalEvent.stopPropagation === 'function') {
event.originalEvent.stopPropagation();
}
}
if (polyline && typeof polyline.isTooltipOpen === 'function' && typeof polyline.openTooltip === 'function') {
if (polyline.isTooltipOpen()) {
polyline.closeTooltip();
} else {
polyline.openTooltip();
}
}
});
}
});
}
@@ -3914,6 +4222,7 @@ export function initializeApp(config) {
telemetryEntries: allTelemetryEntries,
positionEntries: allPositionEntries,
neighborEntries: allNeighbors,
traceEntries: allTraces,
filterQuery
});
}
@@ -0,0 +1,26 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Resolve the initial visibility of the map legend.
*
 * @param {{ defaultCollapsed: boolean, mediaQueryMatches: boolean, viewMode?: string }} options Resolution inputs.
* @returns {boolean} True when the legend should be visible.
*/
export function resolveLegendVisibility({ defaultCollapsed, mediaQueryMatches, viewMode }) {
if (defaultCollapsed || viewMode === 'dashboard' || viewMode === 'map') return false;
return !mediaQueryMatches;
}
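Since the helper is pure, its behaviour is easy to pin down; restated with a few illustrative calls:

```javascript
function resolveLegendVisibility({ defaultCollapsed, mediaQueryMatches, viewMode }) {
  if (defaultCollapsed || viewMode === 'dashboard' || viewMode === 'map') return false;
  return !mediaQueryMatches;
}

// Wide viewport, default view: legend starts visible.
console.log(resolveLegendVisibility({ defaultCollapsed: false, mediaQueryMatches: false })); // true

// Dashboard and map views always start collapsed, regardless of viewport.
console.log(resolveLegendVisibility({ defaultCollapsed: false, mediaQueryMatches: false, viewMode: 'map' })); // false
```

The server-driven `defaultCollapsed` flag and the dedicated view modes take precedence; only when neither applies does the `(max-width: 1024px)` media query decide.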
@@ -68,6 +68,7 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
max: 6,
ticks: 3,
color: '#9ebcda',
allowUpperOverflow: true,
},
{
id: 'current',
@@ -77,6 +78,7 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
max: 3,
ticks: 3,
color: '#3182bd',
allowUpperOverflow: true,
},
],
series: [
@@ -156,6 +158,7 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
max: 40,
ticks: 4,
color: '#fc8d59',
allowUpperOverflow: true,
},
{
id: 'humidity',
@@ -220,6 +223,7 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
max: 500,
ticks: 5,
color: '#636363',
allowUpperOverflow: true,
},
],
series: [
@@ -1004,6 +1008,31 @@ function buildSeriesPoints(entries, fields, domainStart, domainEnd) {
return points;
}
/**
* Resolve the effective axis maximum when upper overflow is allowed.
*
* @param {Object} axis Axis descriptor.
* @param {Array<{axisId: string, points: Array<{timestamp: number, value: number}>}>} seriesEntries Series entries.
* @returns {number} Effective axis max.
*/
function resolveAxisMax(axis, seriesEntries) {
if (!axis || axis.allowUpperOverflow !== true) {
return axis?.max;
}
let observedMax = null;
for (const entry of seriesEntries) {
if (!entry || entry.axisId !== axis.id || !Array.isArray(entry.points)) continue;
for (const point of entry.points) {
if (!point || !Number.isFinite(point.value)) continue;
observedMax = observedMax == null ? point.value : Math.max(observedMax, point.value);
}
}
if (observedMax != null && Number.isFinite(axis.max) && observedMax > axis.max) {
return observedMax;
}
return axis.max;
}
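Worked example, restating the helper: an axis capped at 6 stretches to the observed maximum only when `allowUpperOverflow` is set, and never shrinks below its configured `max`:

```javascript
function resolveAxisMax(axis, seriesEntries) {
  if (!axis || axis.allowUpperOverflow !== true) {
    return axis?.max;
  }
  let observedMax = null;
  for (const entry of seriesEntries) {
    if (!entry || entry.axisId !== axis.id || !Array.isArray(entry.points)) continue;
    for (const point of entry.points) {
      if (!point || !Number.isFinite(point.value)) continue;
      observedMax = observedMax == null ? point.value : Math.max(observedMax, point.value);
    }
  }
  // Only stretch upward; readings below the cap keep the configured scale.
  if (observedMax != null && Number.isFinite(axis.max) && observedMax > axis.max) {
    return observedMax;
  }
  return axis.max;
}

const voltageAxis = { id: 'voltage', max: 6, allowUpperOverflow: true };
const series = [{ axisId: 'voltage', points: [{ timestamp: 0, value: 7.4 }] }];
console.log(resolveAxisMax(voltageAxis, series)); // 7.4
```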
/**
* Render a telemetry series as circles plus an optional translucent guide line.
*
@@ -1133,33 +1162,48 @@ function renderTelemetryChart(spec, entries, nowMs, chartOptions = {}) {
const domainEnd = nowMs;
const domainStart = nowMs - windowMs;
const dims = createChartDimensions(spec);
const axisMap = new Map(spec.axes.map(axis => [axis.id, axis]));
const seriesEntries = spec.series
.map(series => {
const axis = axisMap.get(series.axis);
if (!axis) return null;
const points = buildSeriesPoints(entries, series.fields, domainStart, domainEnd);
if (points.length === 0) return null;
return { config: series, axis, points };
return { config: series, axisId: series.axis, points };
})
.filter(entry => entry != null);
if (seriesEntries.length === 0) {
return '';
}
const axesMarkup = spec.axes.map(axis => renderYAxis(axis, dims)).join('');
const adjustedAxes = spec.axes.map(axis => {
const resolvedMax = resolveAxisMax(axis, seriesEntries);
if (resolvedMax != null && resolvedMax !== axis.max) {
return { ...axis, max: resolvedMax };
}
return axis;
});
const axisMap = new Map(adjustedAxes.map(axis => [axis.id, axis]));
const plottedSeries = seriesEntries
.map(series => {
const axis = axisMap.get(series.axisId);
if (!axis) return null;
return { config: series.config, axis, points: series.points };
})
.filter(entry => entry != null);
if (plottedSeries.length === 0) {
return '';
}
const axesMarkup = adjustedAxes.map(axis => renderYAxis(axis, dims)).join('');
const tickBuilder = typeof chartOptions.xAxisTickBuilder === 'function' ? chartOptions.xAxisTickBuilder : buildMidnightTicks;
const tickFormatter = typeof chartOptions.xAxisTickFormatter === 'function' ? chartOptions.xAxisTickFormatter : formatCompactDate;
const ticks = tickBuilder(nowMs, windowMs);
const xAxisMarkup = renderXAxis(dims, domainStart, domainEnd, ticks, { labelFormatter: tickFormatter });
const seriesMarkup = seriesEntries
const seriesMarkup = plottedSeries
.map(series =>
renderTelemetrySeries(series.config, series.points, series.axis, dims, domainStart, domainEnd, {
lineReducer: chartOptions.lineReducer,
}),
)
.join('');
const legendItems = seriesEntries
const legendItems = plottedSeries
.map(series => {
const legendLabel = stringOrNull(series.config.legend) ?? series.config.label;
return `
@@ -14,6 +14,8 @@
* limitations under the License.
*/
import { translateRoleId } from './role-helpers.js';
/**
* Determine whether the supplied value acts like an object instance.
*
@@ -36,6 +38,18 @@ function normalizeString(value) {
return str.length === 0 ? null : str;
}
/**
* Convert a raw role value into a canonical identifier.
*
* @param {*} value Raw role candidate from the API or cached snapshots.
* @returns {string|null} Canonical role string or ``null`` when blank.
*/
function normalizeRole(value) {
if (value == null) return null;
const translated = translateRoleId(value);
return normalizeString(translated);
}
/**
* Convert a raw value into a finite number when possible.
*
@@ -61,7 +75,7 @@ const FIELD_ALIASES = Object.freeze([
{ keys: ['node_num', 'nodeNum', 'num'], normalise: normalizeNumber },
{ keys: ['short_name', 'shortName'], normalise: normalizeString },
{ keys: ['long_name', 'longName'], normalise: normalizeString },
{ keys: ['role'], normalise: normalizeString },
{ keys: ['role'], normalise: normalizeRole },
{ keys: ['hw_model', 'hwModel'], normalise: normalizeString },
{ keys: ['modem_preset', 'modemPreset'], normalise: normalizeString },
{ keys: ['lora_freq', 'loraFreq'], normalise: normalizeNumber },
@@ -22,8 +22,26 @@
*/
function coerceFiniteNumber(value) {
if (value == null) return null;
if (typeof value === 'string' && value.trim().length === 0) return null;
const num = typeof value === 'number' ? value : Number(value);
if (typeof value === 'number') {
return Number.isFinite(value) ? value : null;
}
if (typeof value === 'string') {
const trimmed = value.trim();
if (trimmed.length === 0) return null;
if (trimmed.startsWith('!')) {
const hex = trimmed.slice(1);
if (!/^[0-9A-Fa-f]+$/.test(hex)) return null;
const parsedHex = Number.parseInt(hex, 16);
return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
}
if (/^0[xX][0-9A-Fa-f]+$/.test(trimmed)) {
const parsedHex = Number.parseInt(trimmed, 16);
return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
}
const parsed = Number(trimmed);
return Number.isFinite(parsed) ? parsed : null;
}
const num = Number(value);
return Number.isFinite(num) ? num : null;
}
@@ -64,6 +82,19 @@ function buildNodeIndex(nodes) {
return { byId, byNum };
}
/**
* Convert a numeric node reference into the canonical hex-prefixed identifier.
*
* @param {number} ref Numeric node identifier.
* @returns {string|null} Canonical identifier or ``null`` when invalid.
*/
function canonicalNodeIdFromNumeric(ref) {
if (!Number.isFinite(ref)) return null;
const unsigned = ref >>> 0;
const hex = unsigned.toString(16).padStart(8, '0');
return `!${hex}`;
}
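Restated for illustration: the unsigned shift maps negative inputs onto their 32-bit hex form, and the padding guarantees the eight-digit `!xxxxxxxx` shape used by the string index:

```javascript
function canonicalNodeIdFromNumeric(ref) {
  if (!Number.isFinite(ref)) return null;
  const unsigned = ref >>> 0;                    // fold into unsigned 32-bit space
  const hex = unsigned.toString(16).padStart(8, '0');
  return `!${hex}`;
}

console.log(canonicalNodeIdFromNumeric(305419896)); // '!12345678'
console.log(canonicalNodeIdFromNumeric(-1));        // '!ffffffff'
```

This is the bridge that lets `findNode` fall back from a numeric reference to the `byId` index when the `byNum` index misses.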
/**
* Locate a node by either string identifier or numeric reference.
*
@@ -80,6 +111,12 @@ function findNode(byId, byNum, ref) {
}
if (numeric != null) {
if (byNum.has(numeric)) return byNum.get(numeric) || null;
const canonicalId = canonicalNodeIdFromNumeric(numeric);
if (canonicalId) {
if (byId.has(canonicalId)) return byId.get(canonicalId) || null;
const canonicalUpper = canonicalId.toUpperCase();
if (byId.has(canonicalUpper)) return byId.get(canonicalUpper) || null;
}
const asString = String(numeric);
if (byId.has(asString)) return byId.get(asString) || null;
}
@@ -163,6 +200,8 @@ export function buildTraceSegments(traces, nodes, { limitDistance = false, maxDi
const path = extractTracePath(trace);
if (path.length < 2) continue;
const rxTime = coerceFiniteNumber(trace.rx_time ?? trace.rxTime);
const nodesWithCoords = [];
const segmentsForTrace = [];
let previous = null;
for (const ref of path) {
@@ -172,8 +211,9 @@ export function buildTraceSegments(traces, nodes, { limitDistance = false, maxDi
previous = null;
continue;
}
nodesWithCoords.push(node);
if (previous) {
segments.push({
segmentsForTrace.push({
latlngs: [previous.coords, coords],
color: colorResolver(previous.node),
traceId: trace.id ?? trace.packet_id ?? trace.trace_id,
@@ -182,6 +222,14 @@ export function buildTraceSegments(traces, nodes, { limitDistance = false, maxDi
}
previous = { node, coords };
}
if (segmentsForTrace.length) {
const pathNodes = nodesWithCoords.slice();
segmentsForTrace.forEach(segment => {
segment.pathNodes = pathNodes;
});
segments.push(...segmentsForTrace);
}
}
return segments;
@@ -193,4 +241,5 @@ export const __testUtils = {
findNode,
resolveNodeCoordinates,
extractTracePath,
canonicalNodeIdFromNumeric,
};
@@ -121,6 +121,25 @@ tbody tr:nth-child(even) td {
stroke-dasharray: 6 6;
}
.leaflet-tooltip.trace-tooltip {
background: var(--bg2);
color: var(--fg);
border: 1px solid var(--line);
box-shadow: 0 2px 6px rgba(0, 0, 0, 0.18);
padding: 6px 8px;
font-size: 13px;
}
.trace-tooltip__content {
display: inline-flex;
align-items: center;
gap: 6px;
}
.trace-tooltip__arrow {
opacity: 0.7;
}
.neighbor-snr {
margin-left: 4px;
color: var(--muted);
@@ -1354,6 +1373,19 @@ button:not(.chat-tab):not(.sort-button):hover {
outline-offset: 2px;
}
.sort-header {
display: inline-flex;
align-items: center;
gap: 4px;
cursor: pointer;
user-select: none;
}
.sort-header:focus-visible {
outline: 2px solid #4a90e2;
outline-offset: 2px;
}
.sort-indicator {
font-size: 0.75em;
opacity: 0.6;
@@ -1831,6 +1863,10 @@ body.dark .sort-button {
color: inherit;
}
body.dark .sort-header {
color: inherit;
}
body.dark .sort-button:hover {
background: none;
}
@@ -1948,3 +1984,154 @@ body.dark #map .leaflet-tile.map-tiles {
filter: var(--map-tiles-filter, var(--map-tile-filter-dark));
-webkit-filter: var(--map-tiles-filter, var(--map-tile-filter-dark));
}
/* ===========================
Federation Page Styles
=========================== */
.header-federation {
display: flex;
align-items: center;
gap: 12px;
}
.federation-link {
display: inline-flex;
align-items: center;
justify-content: center;
font-size: 1.5rem;
text-decoration: none;
padding: 4px 8px;
border-radius: 8px;
transition: background-color 160ms ease, transform 160ms ease;
}
.federation-link:hover {
background: var(--hover-bg, rgba(255, 255, 255, 0.1));
transform: scale(1.1);
}
.federation-page {
padding: 24px var(--pad) 48px;
max-width: none;
margin: 0;
width: 100%;
}
.federation-page--full-width {
padding-top: 0;
}
.federation-page__content {
display: flex;
flex-direction: column;
gap: 24px;
}
.federation-page__map-row {
width: 100%;
height: 400px;
border-radius: 8px;
overflow: hidden;
border: 1px solid var(--border);
}
.federation-page__map-row .map-panel {
height: 100%;
}
.federation-page__map-row #map {
height: 100%;
width: 100%;
}
.instances-table-wrapper {
width: 100%;
overflow-x: auto;
}
#instances {
width: 100%;
border-collapse: collapse;
font-size: 0.95rem;
}
#instances th,
#instances td {
padding: 10px 12px;
text-align: left;
border-bottom: 1px solid var(--border);
}
#instances th {
font-weight: 600;
background: var(--table-header-bg, var(--surface));
position: sticky;
top: 0;
z-index: 1;
}
#instances tbody tr:hover {
background: var(--hover-bg, rgba(255, 255, 255, 0.05));
}
#instances a {
color: var(--accent);
text-decoration: none;
}
#instances a:hover {
text-decoration: underline;
}
.instances-col--name {
min-width: 140px;
}
.instances-col--domain {
min-width: 180px;
}
.instances-col--contact {
min-width: 160px;
white-space: pre-wrap;
word-break: break-word;
}
.instances-col--version {
min-width: 80px;
}
.instances-col--channel,
.instances-col--frequency {
min-width: 100px;
}
.instances-col--nodes {
min-width: 110px;
}
.instances-col--latitude,
.instances-col--longitude {
min-width: 100px;
}
.instances-col--last-update {
min-width: 90px;
}
@media (max-width: 900px) {
.federation-page__map-row {
height: 300px;
}
.header-federation {
flex-direction: column;
align-items: flex-start;
gap: 8px;
}
.federation-link {
align-self: flex-end;
}
}
@@ -103,6 +103,7 @@ RSpec.describe "Potato Mesh Sinatra app" do
db.execute("DELETE FROM nodes")
db.execute("DELETE FROM positions")
db.execute("DELETE FROM telemetry")
db.execute("DELETE FROM ingestors")
end
ensure_self_instance_record!
end
@@ -1079,7 +1080,8 @@ RSpec.describe "Potato Mesh Sinatra app" do
targets = application_class.federation_target_domains("self.mesh")
expect(targets.first).to eq("potatomesh.net")
seed_domains = PotatoMesh::Config.federation_seed_domains.map(&:downcase)
expect(targets.first(seed_domains.length)).to eq(seed_domains)
expect(targets).to include("remote.mesh")
expect(targets).not_to include("self.mesh")
end
@@ -1089,7 +1091,7 @@ RSpec.describe "Potato Mesh Sinatra app" do
targets = application_class.federation_target_domains("self.mesh")
expect(targets).to eq(["potatomesh.net"])
expect(targets).to eq(PotatoMesh::Config.federation_seed_domains.map(&:downcase))
end
it "ignores remote instances that have not updated within a week" do
@@ -1117,7 +1119,7 @@ RSpec.describe "Potato Mesh Sinatra app" do
targets = application_class.federation_target_domains("self.mesh")
expect(targets).to eq(["potatomesh.net"])
expect(targets).to eq(PotatoMesh::Config.federation_seed_domains.map(&:downcase))
end
end
@@ -1337,6 +1339,38 @@ RSpec.describe "Potato Mesh Sinatra app" do
end
end
describe "GET /federation" do
it "returns 404 when federation is disabled" do
allow(PotatoMesh::Config).to receive(:federation_enabled?).and_return(false)
get "/federation"
expect(last_response.status).to eq(404)
end
it "renders the federation subpage when enabled" do
allow(PotatoMesh::Config).to receive(:federation_enabled?).and_return(true)
get "/federation"
expect(last_response).to be_ok
expect(last_response.body).to include('class="federation-page federation-page--full-width"')
expect(last_response.body).to include("initializeFederationPage")
end
it "hides dashboard-only refresh controls while keeping manual refresh and theme toggle" do
allow(PotatoMesh::Config).to receive(:federation_enabled?).and_return(true)
get "/federation"
expect(last_response).to be_ok
expect(last_response.body).not_to include('id="autoRefresh"')
expect(last_response.body).not_to include('id="filterInput"')
expect(last_response.body).to include('id="refreshBtn"')
expect(last_response.body).to include('id="themeToggle"')
end
end
describe "GET /chat" do
it "renders the chat container when chat is enabled" do
get "/chat"
@@ -1574,6 +1608,161 @@ RSpec.describe "Potato Mesh Sinatra app" do
end
end
it "accepts registrations when contactLink is part of the signed payload" do
contact_link = "https://example.test/contact"
linked_attributes = instance_attributes.merge(contact_link: contact_link)
linked_signature_payload = canonical_instance_payload(linked_attributes)
linked_signature = Base64.strict_encode64(
instance_key.sign(OpenSSL::Digest::SHA256.new, linked_signature_payload),
)
linked_payload = instance_payload.merge(
"contactLink" => contact_link,
"signature" => linked_signature,
)
post "/api/instances", linked_payload.to_json, { "CONTENT_TYPE" => "application/json" }
expect(last_response.status).to eq(201)
expect(JSON.parse(last_response.body)).to eq("status" => "registered")
with_db(readonly: true) do |db|
db.results_as_hash = true
row = db.get_first_row(
"SELECT contact_link, signature FROM instances WHERE id = ?",
[instance_attributes[:id]],
)
expect(row).not_to be_nil
expect(row["contact_link"]).to eq(contact_link)
expect(row["signature"]).to eq(linked_signature)
end
end
it "accepts instance announcement payloads produced by the application including contactLink" do
contact_link = "https://example.test/contact"
announcement_attributes = instance_attributes.merge(contact_link: contact_link)
announcement_signature = Base64.strict_encode64(
instance_key.sign(
OpenSSL::Digest::SHA256.new,
canonical_instance_payload(announcement_attributes),
),
)
announcement_payload = application_class.instance_announcement_payload(
announcement_attributes,
announcement_signature,
)
post "/api/instances", announcement_payload.to_json, { "CONTENT_TYPE" => "application/json" }
expect(last_response.status).to eq(201)
expect(JSON.parse(last_response.body)).to eq("status" => "registered")
with_db(readonly: true) do |db|
db.results_as_hash = true
row = db.get_first_row(
"SELECT contact_link, signature FROM instances WHERE id = ?",
[instance_attributes[:id]],
)
expect(row).not_to be_nil
expect(row["contact_link"]).to eq(contact_link)
expect(row["signature"]).to eq(announcement_signature)
end
end
it "accepts signatures that omit contactLink for backwards compatibility" do
contact_link = "https://legacy.example/contact"
legacy_signature_payload = canonical_instance_payload(instance_attributes)
legacy_signature = Base64.strict_encode64(
instance_key.sign(OpenSSL::Digest::SHA256.new, legacy_signature_payload),
)
legacy_payload = instance_payload.merge(
"contactLink" => contact_link,
"signature" => legacy_signature,
)
post "/api/instances", legacy_payload.to_json, { "CONTENT_TYPE" => "application/json" }
expect(last_response.status).to eq(201)
expect(JSON.parse(last_response.body)).to eq("status" => "registered")
with_db(readonly: true) do |db|
db.results_as_hash = true
row = db.get_first_row(
"SELECT contact_link, signature FROM instances WHERE id = ?",
[instance_attributes[:id]],
)
expect(row).not_to be_nil
expect(row["contact_link"]).to eq(contact_link)
expect(row["signature"]).to eq(legacy_signature)
end
end
it "accepts mixed-case domains when the signature omits contactLink but the payload includes it" do
raw_domain = "Mesh.Example"
normalized_domain = raw_domain.downcase
contact_link = "https://mixed.example/contact"
mixed_attributes = instance_attributes.merge(domain: raw_domain)
mixed_signature_payload = canonical_instance_payload(mixed_attributes)
mixed_signature = Base64.strict_encode64(
instance_key.sign(OpenSSL::Digest::SHA256.new, mixed_signature_payload),
)
mixed_payload = instance_payload.merge(
"domain" => raw_domain,
"contactLink" => contact_link,
"signature" => mixed_signature,
)
mixed_remote_payload = JSON.generate(
{
"publicKey" => pubkey,
"name" => instance_attributes[:name],
"version" => instance_attributes[:version],
"domain" => normalized_domain,
"lastUpdate" => last_update_time,
},
sort_keys: true,
)
mixed_document = well_known_document.merge(
"domain" => normalized_domain,
"signedPayload" => Base64.strict_encode64(mixed_remote_payload),
"signature" => Base64.strict_encode64(
instance_key.sign(OpenSSL::Digest::SHA256.new, mixed_remote_payload),
),
)
allow_any_instance_of(Sinatra::Application).to receive(:fetch_instance_json) do |_instance, host, path|
case path
when "/.well-known/potato-mesh"
[mixed_document, URI("https://#{host}#{path}")]
when "/api/nodes"
[remote_nodes, URI("https://#{host}#{path}")]
else
[nil, []]
end
end
post "/api/instances", mixed_payload.to_json, { "CONTENT_TYPE" => "application/json" }
expect(last_response.status).to eq(201)
expect(JSON.parse(last_response.body)).to eq("status" => "registered")
with_db(readonly: true) do |db|
db.results_as_hash = true
row = db.get_first_row(
"SELECT domain, contact_link, signature FROM instances WHERE id = ?",
[mixed_attributes[:id]],
)
expect(row).not_to be_nil
expect(row["domain"]).to eq(normalized_domain)
expect(row["contact_link"]).to eq(contact_link)
expect(row["signature"]).to eq(mixed_signature)
end
end
it "rejects registrations with invalid domains" do
invalid_payload = instance_payload.merge("domain" => "mesh-instance")
@@ -2828,6 +3017,69 @@ RSpec.describe "Potato Mesh Sinatra app" do
end
end
it "ignores broadcast identifiers when creating placeholders" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
payload = {
"id" => 10_002,
"rx_time" => reference_time.to_i,
"from_id" => "!ffffffff",
"channel" => 0,
"text" => "broadcast",
}
post "/api/messages", payload.to_json, auth_headers
expect(last_response).to be_ok
with_db(readonly: true) do |db|
count = db.get_first_value("SELECT COUNT(*) FROM nodes WHERE node_id = '!ffffffff'")
expect(count).to eq(0)
end
end
it "creates hidden nodes for unseen message participants" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
payload = {
"id" => 10_001,
"rx_time" => reference_time.to_i,
"from_id" => "!cafef00d",
"to_id" => "!deadbeef",
"channel" => 0,
"portnum" => "TEXT_MESSAGE_APP",
"text" => "Spec participant placeholder",
}
post "/api/messages", payload.to_json, auth_headers
expect(last_response).to be_ok
expect(JSON.parse(last_response.body)).to eq("status" => "ok")
with_db(readonly: true) do |db|
db.results_as_hash = true
rows = db.execute(
<<~SQL,
SELECT node_id, num, short_name, long_name, role, last_heard, first_heard
FROM nodes
WHERE node_id IN ("!cafef00d", "!deadbeef")
ORDER BY node_id
SQL
)
expect(rows.map { |row| row["node_id"] }).to contain_exactly("!cafef00d", "!deadbeef")
rows.each do |row|
expect(row["num"]).to be_an(Integer)
expect(row["role"]).to eq("CLIENT_HIDDEN")
expect(row["short_name"]).to eq(row["node_id"][-4, 4].upcase)
expect(row["long_name"]).to eq("Meshtastic #{row["short_name"]}")
expect(row["last_heard"]).to eq(reference_time.to_i)
expect(row["first_heard"]).to eq(reference_time.to_i)
end
end
end
it "returns 400 when the payload is not valid JSON" do
post "/api/messages", "{", auth_headers
@@ -3435,6 +3687,43 @@ RSpec.describe "Potato Mesh Sinatra app" do
end
end
it "accepts traceroutes without metrics or RSSI fields" do
allow(Time).to receive(:now).and_return(reference_time)
payload = [
{
"id" => 9_003,
"request_id" => 42,
"src" => 0xAAAA0001,
"dest" => 0xAAAA0002,
"rx_time" => reference_time.to_i - 1,
"hops" => [0xAAAA0001, 0xAAAA0003, 0xAAAA0002],
},
]
post "/api/traces", payload.to_json, auth_headers
expect(last_response).to be_ok
expect(JSON.parse(last_response.body)).to eq("status" => "ok")
with_db(readonly: true) do |db|
db.results_as_hash = true
stored = db.get_first_row("SELECT * FROM traces WHERE id = ?", [payload.first["id"]])
expect(stored["rx_time"]).to eq(payload.first["rx_time"])
expect(stored["rx_iso"]).to eq(Time.at(payload.first["rx_time"]).utc.iso8601)
expect(stored["rssi"]).to be_nil
expect(stored["snr"]).to be_nil
expect(stored["elapsed_ms"]).to be_nil
hops = db.execute(
"SELECT hop_index, node_id FROM trace_hops WHERE trace_id = ? ORDER BY hop_index",
[stored["id"]],
)
expect(hops.map { |row| row["node_id"] }).to eq(payload.first["hops"])
end
end
it "returns 400 when the payload is not valid JSON" do
post "/api/traces", "{", auth_headers
@@ -3881,6 +4170,39 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(payload["node_id"]).to eq("!fresh-node")
end
it "filters node results using the since parameter for collections and single lookups" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
now = reference_time.to_i
older_last_heard = now - 120
recent_last_heard = now - 30
with_db do |db|
db.execute(
"INSERT INTO nodes(node_id, short_name, long_name, hw_model, role, snr, last_heard, first_heard) VALUES(?,?,?,?,?,?,?,?)",
["!older-node", "old", "Older", "TBEAM", "CLIENT", 0.0, older_last_heard, older_last_heard],
)
db.execute(
"INSERT INTO nodes(node_id, short_name, long_name, hw_model, role, snr, last_heard, first_heard) VALUES(?,?,?,?,?,?,?,?)",
["!recent-node", "new", "Recent", "TBEAM", "CLIENT", 0.0, recent_last_heard, recent_last_heard],
)
end
get "/api/nodes?since=#{recent_last_heard}"
expect(last_response).to be_ok
payload = JSON.parse(last_response.body)
expect(payload.map { |row| row["node_id"] }).to eq(["!recent-node"])
get "/api/nodes/!older-node?since=#{recent_last_heard}"
expect(last_response.status).to eq(404)
get "/api/nodes/!recent-node?since=#{recent_last_heard}"
expect(last_response).to be_ok
detail = JSON.parse(last_response.body)
expect(detail["node_id"]).to eq("!recent-node")
end
it "omits blank values from node responses" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
@@ -4242,6 +4564,37 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(filtered.map { |row| row["id"] }).to eq([2])
end
it "filters positions using the since parameter for both global and node queries" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
now = reference_time.to_i
older_rx = now - 180
recent_rx = now - 15
with_db do |db|
db.execute(
"INSERT INTO positions(id, node_id, node_num, rx_time, rx_iso, position_time, latitude, longitude) VALUES(?,?,?,?,?,?,?,?)",
[10, "!pos-since", 101, older_rx, Time.at(older_rx).utc.iso8601, older_rx - 5, 52.0, 13.0],
)
db.execute(
"INSERT INTO positions(id, node_id, node_num, rx_time, rx_iso, position_time, latitude, longitude) VALUES(?,?,?,?,?,?,?,?)",
[11, "!pos-since", 101, recent_rx, Time.at(recent_rx).utc.iso8601, recent_rx - 5, 53.0, 14.0],
)
end
get "/api/positions?since=#{recent_rx}"
expect(last_response).to be_ok
payload = JSON.parse(last_response.body)
expect(payload.map { |row| row["id"] }).to eq([11])
get "/api/positions/!pos-since?since=#{recent_rx}"
expect(last_response).to be_ok
filtered = JSON.parse(last_response.body)
expect(filtered.map { |row| row["id"] }).to eq([11])
end
it "omits blank values from position responses" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
@@ -4340,6 +4693,49 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(filtered.first["rx_time"]).to eq(fresh_rx)
end
it "honours the since parameter for neighbor queries" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
now = reference_time.to_i
older_rx = now - 300
recent_rx = now - 30
with_db do |db|
db.execute(
"INSERT INTO nodes(node_id, short_name, long_name, hw_model, role, snr, last_heard, first_heard) VALUES(?,?,?,?,?,?,?,?)",
["!origin-since", "orig", "Origin", "TBEAM", "CLIENT", 0.0, now, now],
)
db.execute(
"INSERT INTO nodes(node_id, short_name, long_name, hw_model, role, snr, last_heard, first_heard) VALUES(?,?,?,?,?,?,?,?)",
["!neighbor-old", "oldn", "Neighbor Old", "TBEAM", "CLIENT", 0.0, now, now],
)
db.execute(
"INSERT INTO nodes(node_id, short_name, long_name, hw_model, role, snr, last_heard, first_heard) VALUES(?,?,?,?,?,?,?,?)",
["!neighbor-new", "newn", "Neighbor New", "TBEAM", "CLIENT", 0.0, now, now],
)
db.execute(
"INSERT INTO neighbors(node_id, neighbor_id, snr, rx_time) VALUES(?,?,?,?)",
["!origin-since", "!neighbor-old", 1.5, older_rx],
)
db.execute(
"INSERT INTO neighbors(node_id, neighbor_id, snr, rx_time) VALUES(?,?,?,?)",
["!origin-since", "!neighbor-new", 7.5, recent_rx],
)
end
get "/api/neighbors?since=#{recent_rx}"
expect(last_response).to be_ok
payload = JSON.parse(last_response.body)
expect(payload.map { |row| row["neighbor_id"] }).to eq(["!neighbor-new"])
get "/api/neighbors/!origin-since?since=#{recent_rx}"
expect(last_response).to be_ok
filtered = JSON.parse(last_response.body)
expect(filtered.map { |row| row["neighbor_id"] }).to eq(["!neighbor-new"])
end
it "omits blank values from neighbor responses" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
@@ -4402,7 +4798,8 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(first_entry["telemetry_time_iso"]).to eq(Time.at(latest["telemetry_time"]).utc.iso8601)
expect(first_entry).not_to have_key("device_metrics")
expect_same_value(first_entry["battery_level"], telemetry_metric(latest, "battery_level"))
expect_same_value(first_entry["current"], telemetry_metric(latest, "current"))
expected_current = telemetry_metric(latest, "current")
expect_same_value(first_entry["current"], expected_current.nil? ? nil : expected_current / 1000.0)
expect_same_value(first_entry["distance"], telemetry_metric(latest, "distance"))
expect_same_value(first_entry["lux"], telemetry_metric(latest, "lux"))
expect_same_value(first_entry["wind_direction"], telemetry_metric(latest, "wind_direction"))
@@ -4469,6 +4866,37 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(filtered.map { |row| row["id"] }).to eq([2])
end
it "filters telemetry rows using the since parameter for both global and node-scoped queries" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
now = reference_time.to_i
older_rx = now - 300
recent_rx = now - 60
with_db do |db|
db.execute(
"INSERT INTO telemetry(id, node_id, node_num, rx_time, rx_iso, telemetry_time, battery_level, voltage) VALUES(?,?,?,?,?,?,?,?)",
[10, "!tele-since", 21, older_rx, Time.at(older_rx).utc.iso8601, older_rx - 5, 20.0, 3.9],
)
db.execute(
"INSERT INTO telemetry(id, node_id, node_num, rx_time, rx_iso, telemetry_time, battery_level, voltage) VALUES(?,?,?,?,?,?,?,?)",
[11, "!tele-since", 21, recent_rx, Time.at(recent_rx).utc.iso8601, recent_rx - 5, 80.0, 4.1],
)
end
get "/api/telemetry?since=#{recent_rx}"
expect(last_response).to be_ok
payload = JSON.parse(last_response.body)
expect(payload.map { |row| row["id"] }).to eq([11])
get "/api/telemetry/!tele-since?since=#{recent_rx}"
expect(last_response).to be_ok
filtered = JSON.parse(last_response.body)
expect(filtered.map { |row| row["id"] }).to eq([11])
end
it "omits blank values from telemetry responses" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
@@ -4523,6 +4951,51 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(filtered.first).not_to have_key("battery_level")
expect(filtered.first).not_to have_key("portnum")
end
it "omits zero-valued battery and voltage metrics from telemetry responses" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
now = reference_time.to_i
with_db do |db|
db.execute(
"INSERT INTO telemetry(id, node_id, rx_time, rx_iso, telemetry_time, battery_level, voltage, uptime_seconds, channel_utilization) VALUES(?,?,?,?,?,?,?,?,?)",
[
88,
"!tele-zero",
now,
Time.at(now).utc.iso8601,
now - 60,
0,
0,
0,
0.5,
],
)
end
get "/api/telemetry"
expect(last_response).to be_ok
rows = JSON.parse(last_response.body)
expect(rows.length).to eq(1)
entry = rows.first
expect(entry["node_id"]).to eq("!tele-zero")
expect(entry["rx_time"]).to eq(now)
expect(entry["telemetry_time"]).to eq(now - 60)
expect(entry).not_to have_key("battery_level")
expect(entry).not_to have_key("voltage")
expect(entry["uptime_seconds"]).to eq(0)
expect(entry["channel_utilization"]).to eq(0.5)
get "/api/telemetry/!tele-zero"
expect(last_response).to be_ok
scoped_rows = JSON.parse(last_response.body)
expect(scoped_rows.length).to eq(1)
expect(scoped_rows.first).not_to have_key("battery_level")
expect(scoped_rows.first).not_to have_key("voltage")
end
end
describe "GET /api/telemetry/aggregated" do
@@ -4544,6 +5017,35 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(a_bucket["aggregates"]).to have_key("battery_level")
expect(a_bucket["aggregates"]["battery_level"]).to include("avg")
expect(a_bucket).not_to have_key("device_metrics")
buckets_by_start = {}
buckets.each do |bucket|
start_time = bucket["bucket_start"]
buckets_by_start[start_time] = bucket if start_time
end
bucket_seconds = 300
current_by_bucket = Hash.new { |hash, key| hash[key] = [] }
telemetry_fixture.each do |entry|
timestamp = entry["rx_time"] || entry["telemetry_time"]
next unless timestamp
bucket_start = (timestamp / bucket_seconds) * bucket_seconds
current_value = telemetry_metric(entry, "current")
next if current_value.nil?
current_by_bucket[bucket_start] << current_value
end
current_by_bucket.each do |bucket_start, values|
bucket = buckets_by_start[bucket_start]
next unless bucket
aggregates = bucket.fetch("aggregates", {})
metrics = aggregates["current"]
expect(metrics).not_to be_nil
expect_same_value(metrics["avg"], values.sum / values.length / 1000.0)
expect_same_value(metrics["min"], values.min / 1000.0)
expect_same_value(metrics["max"], values.max / 1000.0)
end
end
it "applies default window and bucket sizes when parameters are omitted" do
@@ -4558,6 +5060,114 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(buckets.first["bucket_seconds"]).to eq(PotatoMesh::App::Queries::DEFAULT_TELEMETRY_BUCKET_SECONDS)
end
it "filters aggregated telemetry buckets using the since parameter" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
now = reference_time.to_i
older_rx = now - 1800
recent_rx = now - 120
with_db do |db|
db.execute(
"INSERT INTO telemetry(id, node_id, rx_time, rx_iso, telemetry_time, battery_level) VALUES(?,?,?,?,?,?)",
[801, "!agg-since", older_rx, Time.at(older_rx).utc.iso8601, older_rx - 30, 30.0],
)
db.execute(
"INSERT INTO telemetry(id, node_id, rx_time, rx_iso, telemetry_time, battery_level) VALUES(?,?,?,?,?,?)",
[802, "!agg-since", recent_rx, Time.at(recent_rx).utc.iso8601, recent_rx - 30, 80.0],
)
end
get "/api/telemetry/aggregated?windowSeconds=3600&bucketSeconds=300&since=#{recent_rx}"
expect(last_response).to be_ok
buckets = JSON.parse(last_response.body)
expect(buckets.length).to eq(1)
aggregates = buckets.first.fetch("aggregates")
expect(aggregates).to have_key("battery_level")
expect_same_value(aggregates.dig("battery_level", "avg"), 80.0)
end
it "omits zero-valued battery and voltage aggregates" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
now = reference_time.to_i
with_db do |db|
db.execute(
"INSERT INTO telemetry(id, node_id, rx_time, rx_iso, telemetry_time, battery_level, voltage, channel_utilization) VALUES(?,?,?,?,?,?,?,?)",
[
991,
"!tele-agg-zero",
now,
Time.at(now).utc.iso8601,
now - 30,
0,
0,
0.25,
],
)
end
get "/api/telemetry/aggregated?windowSeconds=3600&bucketSeconds=300"
expect(last_response).to be_ok
buckets = JSON.parse(last_response.body)
expect(buckets.length).to eq(1)
aggregates = buckets.first.fetch("aggregates")
expect(aggregates).not_to have_key("battery_level")
expect(aggregates).not_to have_key("voltage")
expect(aggregates.dig("channel_utilization", "avg")).to eq(0.25)
end
it "ignores zero-valued telemetry when aggregating mixed buckets" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
now = reference_time.to_i
with_db do |db|
db.execute(
"INSERT INTO telemetry(id, node_id, rx_time, rx_iso, telemetry_time, battery_level, voltage) VALUES(?,?,?,?,?,?,?)",
[
992,
"!tele-agg-mixed",
now,
Time.at(now).utc.iso8601,
now - 120,
0,
0,
],
)
db.execute(
"INSERT INTO telemetry(id, node_id, rx_time, rx_iso, telemetry_time, battery_level, voltage) VALUES(?,?,?,?,?,?,?)",
[
993,
"!tele-agg-mixed",
now,
Time.at(now).utc.iso8601,
now - 60,
80.0,
3.7,
],
)
end
get "/api/telemetry/aggregated?windowSeconds=3600&bucketSeconds=300"
expect(last_response).to be_ok
buckets = JSON.parse(last_response.body)
expect(buckets.length).to eq(1)
aggregates = buckets.first.fetch("aggregates")
expect(aggregates).to have_key("battery_level")
expect(aggregates.dig("battery_level", "avg")).to eq(80.0)
expect(aggregates.dig("battery_level", "min")).to eq(80.0)
expect(aggregates.dig("battery_level", "max")).to eq(80.0)
expect(aggregates.dig("voltage", "avg")).to eq(3.7)
expect(aggregates.dig("voltage", "min")).to eq(3.7)
expect(aggregates.dig("voltage", "max")).to eq(3.7)
end
it "rejects invalid bucket and window parameters" do
get "/api/telemetry/aggregated?windowSeconds=0&bucketSeconds=300"
expect(last_response.status).to eq(400)
@@ -4625,6 +5235,57 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(last_response).to be_ok
expect(JSON.parse(last_response.body)).to eq([])
end
it "excludes traces older than one week" do
clear_database
now = Time.now.to_i
recent_rx = now - (PotatoMesh::Config.week_seconds / 2)
stale_rx = now - (PotatoMesh::Config.week_seconds + 60)
payload = [
{ "id" => 50_001, "src" => 1, "dest" => 2, "rx_time" => recent_rx, "metrics" => {} },
{ "id" => 50_002, "src" => 3, "dest" => 4, "rx_time" => stale_rx, "metrics" => {} },
]
post "/api/traces", payload.to_json, auth_headers
expect(last_response).to be_ok
get "/api/traces"
expect(last_response).to be_ok
ids = JSON.parse(last_response.body).map { |row| row["id"] }
expect(ids).to eq([50_001])
end
it "filters traces using the since parameter for collection and scoped requests" do
clear_database
allow(Time).to receive(:now).and_return(reference_time)
now = reference_time.to_i
older_rx = now - 300
recent_rx = now - 25
with_db do |db|
db.execute(
"INSERT INTO traces(id, src, dest, rx_time, rx_iso) VALUES(?,?,?,?,?)",
[60_001, 123, 456, older_rx, Time.at(older_rx).utc.iso8601],
)
db.execute(
"INSERT INTO traces(id, src, dest, rx_time, rx_iso) VALUES(?,?,?,?,?)",
[60_002, 123, 456, recent_rx, Time.at(recent_rx).utc.iso8601],
)
end
get "/api/traces?since=#{recent_rx}"
expect(last_response).to be_ok
payload = JSON.parse(last_response.body)
expect(payload.map { |row| row["id"] }).to eq([60_002])
get "/api/traces/123?since=#{recent_rx}"
expect(last_response).to be_ok
scoped = JSON.parse(last_response.body)
expect(scoped.map { |row| row["id"] }).to eq([60_002])
end
end
describe "GET /nodes/:id" do
@@ -183,4 +183,20 @@ RSpec.describe PotatoMesh::App::Database do
hop_columns = column_names_for("trace_hops")
expect(hop_columns).to include("trace_id", "hop_index", "node_id")
end
it "adds the contact_link column to existing instances tables" do
SQLite3::Database.new(PotatoMesh::Config.db_path) do |db|
db.execute("CREATE TABLE nodes(node_id TEXT)")
db.execute("CREATE TABLE messages(id INTEGER PRIMARY KEY)")
db.execute(
"CREATE TABLE instances(id TEXT PRIMARY KEY, domain TEXT, pubkey TEXT, last_update_time INTEGER, is_private INTEGER)",
)
end
expect(column_names_for("instances")).not_to include("contact_link")
harness_class.ensure_schema_upgrades
expect(column_names_for("instances")).to include("contact_link")
end
end
@@ -17,6 +17,7 @@
require "spec_helper"
require "net/http"
require "openssl"
require "sqlite3"
require "set"
require "uri"
require "socket"
@@ -320,6 +321,146 @@ RSpec.describe PotatoMesh::App::Federation do
expect(visited).not_to include(attributes_list[1][:domain], attributes_list[2][:domain])
expect(federation_helpers.debug_messages).to include(a_string_including("crawl limit"))
end
it "requests an expanded recent node window when counting remote activity" do
now = Time.at(1_700_000_000)
allow(Time).to receive(:now).and_return(now)
allow(PotatoMesh::Config).to receive(:remote_instance_max_node_age).and_return(900)
recent_cutoff = now.to_i - 900
mapping = { [seed_domain, "/api/instances"] => [payload_entries, :instances] }
attributes_list.each_with_index do |attributes, index|
mapping[[attributes[:domain], "/api/nodes?since=#{recent_cutoff}&limit=1000"]] = [node_payload, :nodes]
mapping[[attributes[:domain], "/api/nodes"]] = [node_payload, :nodes]
mapping[[attributes[:domain], "/api/instances"]] = [[], :instances]
allow(federation_helpers).to receive(:remote_instance_attributes_from_payload).with(payload_entries[index]).and_return([attributes, "signature-#{index}", nil])
end
captured_paths = []
allow(federation_helpers).to receive(:fetch_instance_json) do |host, path|
captured_paths << [host, path]
mapping.fetch([host, path]) { [nil, []] }
end
allow(federation_helpers).to receive(:verify_instance_signature).and_return(true)
allow(federation_helpers).to receive(:validate_remote_nodes).and_return([true, nil])
allow(federation_helpers).to receive(:upsert_instance_record)
federation_helpers.ingest_known_instances_from!(db, seed_domain)
expect(captured_paths).to include(
[attributes_list[0][:domain], "/api/nodes?since=#{recent_cutoff}&limit=1000"],
[attributes_list[1][:domain], "/api/nodes?since=#{recent_cutoff}&limit=1000"],
[attributes_list[2][:domain], "/api/nodes?since=#{recent_cutoff}&limit=1000"],
)
expect(attributes_list.map { |attrs| attrs[:nodes_count] }).to all(eq(node_payload.length))
end
end
describe ".upsert_instance_record" do
let(:application_class) { PotatoMesh::Application }
let(:base_attributes) do
{
id: "remote-instance",
domain: "Remote.Mesh",
pubkey: PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
name: "Remote Mesh",
version: "1.0.0",
channel: "longfox",
frequency: "915",
latitude: 45.0,
longitude: -122.0,
last_update_time: Time.now.to_i,
is_private: false,
contact_link: "https://example.org/contact",
}
end
def with_db
db = SQLite3::Database.new(PotatoMesh::Config.db_path)
db.busy_timeout = PotatoMesh::Config.db_busy_timeout_ms
db.execute("PRAGMA foreign_keys = ON")
yield db
ensure
db&.close
end
before do
FileUtils.mkdir_p(File.dirname(PotatoMesh::Config.db_path))
application_class.init_db unless application_class.db_schema_present?
application_class.ensure_schema_upgrades
with_db do |db|
db.execute("DELETE FROM instances")
end
allow(federation_helpers).to receive(:ip_from_domain).and_return(nil)
end
it "inserts the contact_link for new records" do
with_db do |db|
federation_helpers.send(:upsert_instance_record, db, base_attributes, "sig-1")
stored = db.get_first_value("SELECT contact_link FROM instances WHERE id = ?", base_attributes[:id])
expect(stored).to eq("https://example.org/contact")
end
end
it "updates the contact_link on conflict" do
with_db do |db|
federation_helpers.send(:upsert_instance_record, db, base_attributes, "sig-1")
federation_helpers.send(
:upsert_instance_record,
db,
base_attributes.merge(contact_link: "https://example.org/new-contact", name: "Renamed Mesh"),
"sig-2",
)
row =
db.get_first_row("SELECT contact_link, name, signature FROM instances WHERE id = ?", base_attributes[:id])
expect(row[0]).to eq("https://example.org/new-contact")
expect(row[1]).to eq("Renamed Mesh")
expect(row[2]).to eq("sig-2")
end
end
it "allows the contact_link to be cleared" do
with_db do |db|
federation_helpers.send(:upsert_instance_record, db, base_attributes, "sig-1")
federation_helpers.send(:upsert_instance_record, db, base_attributes.merge(contact_link: nil), "sig-3")
row = db.get_first_row("SELECT contact_link, signature FROM instances WHERE id = ?", base_attributes[:id])
expect(row[0]).to be_nil
expect(row[1]).to eq("sig-3")
end
end
it "stores the nodes_count for new records" do
with_db do |db|
federation_helpers.send(:upsert_instance_record, db, base_attributes.merge(nodes_count: 77), "sig-1")
stored = db.get_first_value("SELECT nodes_count FROM instances WHERE id = ?", base_attributes[:id])
expect(stored).to eq(77)
end
end
it "updates the nodes_count on conflict" do
with_db do |db|
federation_helpers.send(:upsert_instance_record, db, base_attributes.merge(nodes_count: 12), "sig-1")
federation_helpers.send(
:upsert_instance_record,
db,
base_attributes.merge(nodes_count: 99, name: "Renamed Mesh"),
"sig-2",
)
row =
db.get_first_row("SELECT nodes_count, name, signature FROM instances WHERE id = ?", base_attributes[:id])
expect(row[0]).to eq(99)
expect(row[1]).to eq("Renamed Mesh")
expect(row[2]).to eq("sig-2")
end
end
end
describe ".federation_user_agent_header" do
@@ -0,0 +1,206 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "spec_helper"
require "json"
require "time"
RSpec.describe "Ingestor endpoints" do
let(:app) { Sinatra::Application }
let(:api_token) { "secret-token" }
let(:auth_headers) do
{
"CONTENT_TYPE" => "application/json",
"HTTP_AUTHORIZATION" => "Bearer #{api_token}",
}
end
before do
@original_token = ENV["API_TOKEN"]
ENV["API_TOKEN"] = api_token
clear_ingestors_table
end
after do
ENV["API_TOKEN"] = @original_token
clear_ingestors_table
end
def clear_ingestors_table
with_db do |db|
db.execute("DELETE FROM ingestors")
db.execute("VACUUM")
end
end
def with_db(readonly: false)
db = PotatoMesh::Application.open_database(readonly: readonly)
db.busy_timeout = PotatoMesh::Config.db_busy_timeout_ms
db.execute("PRAGMA foreign_keys = ON")
yield db
ensure
db&.close
end
def ingestor_payload(overrides = {})
now = Time.now.to_i
{
node_id: "!abc12345",
start_time: now - 120,
last_seen_time: now - 60,
version: "0.5.9",
lora_freq: 915,
modem_preset: "LongFast",
}.merge(overrides)
end
describe "POST /api/ingestors" do
it "requires a bearer token" do
post "/api/ingestors", ingestor_payload.to_json, { "CONTENT_TYPE" => "application/json" }
expect(last_response.status).to eq(403)
end
it "upserts ingestor state without regressing start time" do
payload = ingestor_payload
post "/api/ingestors", payload.to_json, auth_headers
expect(last_response.status).to eq(200)
newer_last_seen = payload[:last_seen_time] + 3_600
older_start = payload[:start_time] - 500
post "/api/ingestors",
payload.merge(last_seen_time: newer_last_seen, start_time: older_start).to_json,
auth_headers
expect(last_response.status).to eq(200)
with_db(readonly: true) do |db|
row = db.get_first_row(
"SELECT node_id, start_time, last_seen_time, version, lora_freq, modem_preset FROM ingestors WHERE node_id = ?",
[payload[:node_id]],
)
expect(row[0]).to eq(payload[:node_id])
expect(row[1]).to eq(payload[:start_time])
expect(row[2]).to be >= payload[:last_seen_time]
expect(row[2]).to be <= Time.now.to_i
expect(row[3]).to eq(payload[:version])
expect(row[4]).to eq(payload[:lora_freq])
expect(row[5]).to eq(payload[:modem_preset])
end
end
it "rejects payloads missing required fields" do
post "/api/ingestors", { node_id: "!abcd0001" }.to_json, auth_headers
expect(last_response.status).to eq(400)
end
it "rejects invalid JSON" do
post "/api/ingestors", "{", auth_headers
expect(last_response.status).to eq(400)
end
it "rejects payloads missing version" do
post "/api/ingestors", ingestor_payload(version: nil).to_json, auth_headers
expect(last_response.status).to eq(400)
end
it "rejects non-object payloads" do
post "/api/ingestors", [].to_json, auth_headers
expect(last_response.status).to eq(400)
end
end
describe "GET /api/ingestors" do
it "returns recent ingestors and omits stale rows" do
now = Time.now.to_i
with_db do |db|
db.execute(
"INSERT INTO ingestors(node_id, start_time, last_seen_time, version) VALUES(?,?,?,?)",
["!fresh000", now - 100, now - 10, "0.5.9"],
)
db.execute(
"INSERT INTO ingestors(node_id, start_time, last_seen_time, version) VALUES(?,?,?,?)",
["!stale000", now - (9 * 24 * 60 * 60), now - (9 * 24 * 60 * 60), "0.5.6"],
)
db.execute(
"INSERT INTO ingestors(node_id, start_time, last_seen_time, version, lora_freq, modem_preset) VALUES(?,?,?,?,?,?)",
["!rich000", now - 200, now - 100, "0.5.9", 915, "MediumFast"],
)
end
get "/api/ingestors"
expect(last_response.status).to eq(200)
payload = JSON.parse(last_response.body)
expect(payload).to all(include("node_id", "start_time", "last_seen_time", "version"))
node_ids = payload.map { |entry| entry["node_id"] }
expect(node_ids).to include("!fresh000")
expect(node_ids).not_to include("!stale000")
rich = payload.find { |row| row["node_id"] == "!rich000" }
expect(rich["lora_freq"]).to eq(915)
expect(rich["modem_preset"]).to eq("MediumFast")
expect(rich["start_time_iso"]).to be_a(String)
expect(rich["last_seen_iso"]).to be_a(String)
end
it "filters ingestors using the since parameter" do
frozen_time = Time.at(1_700_000_000)
allow(Time).to receive(:now).and_return(frozen_time)
now = frozen_time.to_i
recent_cutoff = now - 120
with_db do |db|
db.execute(
"INSERT INTO ingestors(node_id, start_time, last_seen_time, version) VALUES(?,?,?,?)",
["!old-ingestor", now - 600, now - 300, "0.5.5"],
)
db.execute(
"INSERT INTO ingestors(node_id, start_time, last_seen_time, version) VALUES(?,?,?,?)",
["!new-ingestor", now - 60, now - 30, "0.5.9"],
)
end
get "/api/ingestors?since=#{recent_cutoff}"
expect(last_response).to be_ok
payload = JSON.parse(last_response.body)
expect(payload.map { |entry| entry["node_id"] }).to eq(["!new-ingestor"])
end
end
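The two GET examples pin down the endpoint's recency filtering: stale rows are dropped by default, and an explicit `?since=` cutoff overrides that. A pure-Ruby sketch of that selection logic — the one-week default window is an assumption inferred from the 9-day-old "stale" fixture, not confirmed by the source:

```ruby
# Sketch of the recency filtering the GET /api/ingestors examples exercise.
# DEFAULT_INGESTOR_WINDOW (one week) is an assumption; an explicit since:
# cutoff takes precedence, matching the ?since= query parameter.
DEFAULT_INGESTOR_WINDOW = 7 * 24 * 60 * 60

def recent_ingestors(rows, now:, since: nil)
  cutoff = since || (now - DEFAULT_INGESTOR_WINDOW)
  rows.select { |row| row[:last_seen_time] >= cutoff }
end

now = 1_700_000_000
rows = [
  { node_id: "!fresh000", last_seen_time: now - 10 },
  { node_id: "!stale000", last_seen_time: now - (9 * 24 * 60 * 60) },
]
# Only the fresh row survives the default window.
p recent_ingestors(rows, now: now).map { |r| r[:node_id] }
```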
describe "schema migrations" do
it "creates the ingestors table with frequency and modem columns" do
tmp_db = File.join(SPEC_TMPDIR, "ingestor-migrate.db")
FileUtils.rm_f(tmp_db)
original = PotatoMesh::Config.db_path
allow(PotatoMesh::Config).to receive(:db_path).and_return(tmp_db)
begin
PotatoMesh::Application.init_db
with_db(readonly: true) do |db|
columns = db.execute("PRAGMA table_info(ingestors)").map { |row| row[1] }
expect(columns).to include("lora_freq", "modem_preset", "version")
end
ensure
allow(PotatoMesh::Config).to receive(:db_path).and_return(original)
end
end
end
end
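The migration spec above only checks that `PRAGMA table_info(ingestors)` reports the new columns after `init_db`. One common way to implement that kind of additive upgrade is to diff the existing column list against the required set and emit `ALTER TABLE` statements for anything missing — a sketch, not the project's actual code:

```ruby
# Sketch (assumption, not the project's actual migration code) of an additive
# schema upgrade: compare PRAGMA table_info output against the desired column
# set and generate ALTER TABLE statements for whatever is missing.
REQUIRED_INGESTOR_COLUMNS = {
  "lora_freq"    => "INTEGER",
  "modem_preset" => "TEXT",
  "version"      => "TEXT",
}.freeze

def missing_ingestor_columns(existing_columns)
  REQUIRED_INGESTOR_COLUMNS
    .reject { |name, _type| existing_columns.include?(name) }
    .map { |name, type| "ALTER TABLE ingestors ADD COLUMN #{name} #{type}" }
end

# On a legacy table that predates the frequency/modem columns:
p missing_ingestor_columns(%w[node_id start_time last_seen_time version])
```

Running the generated statements is idempotent in the sense that a second pass finds nothing missing and emits no DDL.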
+81
View File
@@ -38,6 +38,7 @@ RSpec.describe PotatoMesh::App::Instances do
before do
FileUtils.mkdir_p(File.dirname(PotatoMesh::Config.db_path))
application_class.init_db unless application_class.db_schema_present?
application_class.ensure_schema_upgrades
with_db do |db|
db.execute("DELETE FROM instances")
end
@@ -95,5 +96,85 @@ RSpec.describe PotatoMesh::App::Instances do
expect(domains).not_to include("missing.mesh.test")
expect(payload.all? { |row| row["lastUpdateTime"] >= lower_bound }).to be(true)
end
it "exposes contactLink when present and omits blank values" do
fixed_time = Time.utc(2025, 2, 1, 12, 0, 0)
allow(Time).to receive(:now).and_return(fixed_time)
with_db do |db|
db.execute(
"INSERT INTO instances (id, domain, pubkey, last_update_time, is_private, contact_link) VALUES (?, ?, ?, ?, ?, ?)",
[
"instance-with-contact",
"alpha.mesh.test",
PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
fixed_time.to_i,
0,
" https://example.org/contact ",
],
)
db.execute(
"INSERT INTO instances (id, domain, pubkey, last_update_time, is_private, contact_link) VALUES (?, ?, ?, ?, ?, ?)",
[
"instance-without-contact",
"beta.mesh.test",
PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
fixed_time.to_i,
0,
" \t ",
],
)
end
payload = application_class.load_instances_for_api
with_contact = payload.find { |row| row["domain"] == "alpha.mesh.test" }
without_contact = payload.find { |row| row["domain"] == "beta.mesh.test" }
expect(with_contact["contactLink"]).to eq("https://example.org/contact")
expect(without_contact.key?("contactLink")).to be(false)
end
it "includes nodesCount values, preserving zeros" do
fixed_time = Time.utc(2025, 2, 2, 8, 0, 0)
allow(Time).to receive(:now).and_return(fixed_time)
with_db do |db|
db.execute(
<<~SQL,
INSERT INTO instances (id, domain, pubkey, last_update_time, is_private, nodes_count)
VALUES (?, ?, ?, ?, ?, ?)
SQL
[
"instance-with-nodes",
"gamma.mesh.test",
PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
fixed_time.to_i,
0,
42,
],
)
db.execute(
<<~SQL,
INSERT INTO instances (id, domain, pubkey, last_update_time, is_private, nodes_count)
VALUES (?, ?, ?, ?, ?, ?)
SQL
[
"instance-with-zero",
"delta.mesh.test",
PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
fixed_time.to_i,
0,
0,
],
)
end
payload = application_class.load_instances_for_api
with_nodes = payload.find { |row| row["domain"] == "gamma.mesh.test" }
zero_nodes = payload.find { |row| row["domain"] == "delta.mesh.test" }
expect(with_nodes["nodesCount"]).to eq(42)
expect(zero_nodes["nodesCount"]).to eq(0)
end
end
end
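Taken together, the two new instances examples pin down the API row shaping: `contactLink` is exposed only when the stored value is non-blank after stripping, while `nodesCount` passes through unchanged, zeros included. A sketch of that shaping under those assumptions (the helper name is illustrative):

```ruby
# Sketch (assumed shape, not the project's actual helper) of the row shaping
# the instances specs exercise: contactLink appears only when the stored
# value is non-blank after stripping, while nodesCount keeps explicit zeros.
def shape_instance_row(row)
  shaped = {
    "domain"     => row[:domain],
    "nodesCount" => row[:nodes_count],
  }
  contact = row[:contact_link].to_s.strip
  shaped["contactLink"] = contact unless contact.empty?
  shaped
end

p shape_instance_row(domain: "alpha.mesh.test", nodes_count: 42,
                     contact_link: " https://example.org/contact ")
p shape_instance_row(domain: "beta.mesh.test", nodes_count: 0,
                     contact_link: " \t ")
```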
+27
View File
@@ -0,0 +1,27 @@
<!--
Copyright © 2025-26 l5yth & contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<section class="federation-page federation-page--full-width">
<div class="federation-page__content">
<div class="federation-page__map-row">
<%= erb :"shared/_map_panel", locals: { full_screen: true } %>
</div>
<%= erb :"shared/_instances_table" %>
</div>
</section>
<script type="module">
import { initializeFederationPage } from '/assets/js/app/federation-page.js';
initializeFederationPage();
</script>
+1 -1
View File
@@ -17,7 +17,7 @@
<% unless private_mode %>
<%= erb :"shared/_chat_panel", locals: { full_screen: false } %>
<% end %>
<%= erb :"shared/_map_panel", locals: { full_screen: false } %>
<%= erb :"shared/_map_panel", locals: { full_screen: false, legend_collapsed: true } %>
</div>
<%= erb :"shared/_nodes_table", locals: { full_screen: false } %>
+15 -9
View File
@@ -77,12 +77,14 @@
main_classes << "page-main--full-screen" if full_screen_view
show_header = !full_screen_view
show_meta_info = true
show_auto_refresh_controls = true
show_auto_refresh_controls = view_mode != :federation
show_auto_fit_toggle = %i[dashboard map].include?(view_mode)
map_zoom_override = defined?(map_zoom) ? map_zoom : nil
show_info_button = !full_screen_view
show_footer = !full_screen_view
show_filter_input = !%i[node_detail charts].include?(view_mode)
show_filter_input = !%i[node_detail charts federation].include?(view_mode)
show_auto_refresh_toggle = show_auto_refresh_controls
show_refresh_actions = show_auto_refresh_controls || view_mode == :federation
controls_classes = ["controls"]
controls_classes << "controls--full-screen" if full_screen_view
refresh_row_classes = ["refresh-row"]
@@ -104,11 +106,13 @@
<span class="site-title-text"><%= site_name %></span>
</h1>
<% if !private_mode && federation_enabled %>
<div class="instance-selector">
<label class="visually-hidden" for="instanceSelect">Select a region</label>
<select id="instanceSelect" class="instance-select" aria-label="Select instance region">
<option value=""><%= Rack::Utils.escape_html("Select region ...") %></option>
</select>
<div class="header-federation">
<div class="instance-selector">
<label class="visually-hidden" for="instanceSelect">Select a region</label>
<select id="instanceSelect" class="instance-select" aria-label="Select instance region">
<option value=""><%= Rack::Utils.escape_html("Select region ...") %></option>
</select>
</div>
</div>
<% end %>
</header>
@@ -123,9 +127,11 @@
<% else %>
<p id="refreshInfo" class="<%= refresh_info_classes.join(" ") %>" aria-live="polite"></p>
<% end %>
<% if show_auto_refresh_controls %>
<% if show_refresh_actions %>
<div class="refresh-actions">
<label class="auto-refresh-toggle"><input type="checkbox" id="autoRefresh" checked /> Auto-refresh every <%= refresh_interval_seconds %> seconds</label>
<% if show_auto_refresh_toggle %>
<label class="auto-refresh-toggle"><input type="checkbox" id="autoRefresh" checked /> Auto-refresh every <%= refresh_interval_seconds %> seconds</label>
<% end %>
<button id="refreshBtn" type="button">Refresh now</button>
<span id="status" class="pill">loading…</span>
</div>
+1 -1
View File
@@ -14,5 +14,5 @@
limitations under the License.
-->
<section class="full-screen-section full-screen-section--map">
<%= erb :"shared/_map_panel", locals: { full_screen: true } %>
<%= erb :"shared/_map_panel", locals: { full_screen: true, legend_collapsed: true } %>
</section>
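Both the dashboard and the full-screen map template now pass an optional `legend_collapsed` local into `_map_panel`. One way to turn such an optional local into a data attribute without assuming it is defined — a sketch only; the real partial builds the attribute inline in ERB:

```ruby
# Sketch of mapping an optional legend_collapsed local to a data attribute
# for the map panel. The helper name is illustrative; when the local is not
# supplied at all, no attribute is emitted.
def legend_data_attr(locals)
  return "" unless locals.key?(:legend_collapsed)
  %(data-legend-collapsed="#{locals[:legend_collapsed] ? 'true' : 'false'}")
end

puts legend_data_attr(legend_collapsed: true)
puts legend_data_attr({}).inspect
```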
+34
View File
@@ -0,0 +1,34 @@
<!--
Copyright © 2025-26 l5yth & contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<div class="instances-table-wrapper">
<table id="instances">
<thead>
<tr>
<th class="instances-col instances-col--name" data-sort-key="name"><span class="sort-header" role="button" tabindex="0" data-sort-key="name" data-sort-label="Name">Name <span class="sort-indicator" aria-hidden="true"></span></span></th>
<th class="instances-col instances-col--domain" data-sort-key="domain"><span class="sort-header" role="button" tabindex="0" data-sort-key="domain" data-sort-label="Domain">Domain <span class="sort-indicator" aria-hidden="true"></span></span></th>
<th class="instances-col instances-col--contact" data-sort-key="contact"><span class="sort-header" role="button" tabindex="0" data-sort-key="contact" data-sort-label="Contact">Contact <span class="sort-indicator" aria-hidden="true"></span></span></th>
<th class="instances-col instances-col--version" data-sort-key="version"><span class="sort-header" role="button" tabindex="0" data-sort-key="version" data-sort-label="Version">Version <span class="sort-indicator" aria-hidden="true"></span></span></th>
<th class="instances-col instances-col--channel" data-sort-key="channel"><span class="sort-header" role="button" tabindex="0" data-sort-key="channel" data-sort-label="Channel">Channel <span class="sort-indicator" aria-hidden="true"></span></span></th>
<th class="instances-col instances-col--frequency" data-sort-key="frequency"><span class="sort-header" role="button" tabindex="0" data-sort-key="frequency" data-sort-label="Frequency">Frequency <span class="sort-indicator" aria-hidden="true"></span></span></th>
<th class="instances-col instances-col--nodes" data-sort-key="nodesCount"><span class="sort-header" role="button" tabindex="0" data-sort-key="nodesCount" data-sort-label="Active Nodes (24h)">Active Nodes (24h) <span class="sort-indicator" aria-hidden="true"></span></span></th>
<th class="instances-col instances-col--latitude" data-sort-key="latitude"><span class="sort-header" role="button" tabindex="0" data-sort-key="latitude" data-sort-label="Latitude">Latitude <span class="sort-indicator" aria-hidden="true"></span></span></th>
<th class="instances-col instances-col--longitude" data-sort-key="longitude"><span class="sort-header" role="button" tabindex="0" data-sort-key="longitude" data-sort-label="Longitude">Longitude <span class="sort-indicator" aria-hidden="true"></span></span></th>
<th class="instances-col instances-col--last-update" data-sort-key="lastUpdateTime"><span class="sort-header" role="button" tabindex="0" data-sort-key="lastUpdateTime" data-sort-label="Last Update">Last Update <span class="sort-indicator" aria-hidden="true"></span></span></th>
</tr>
</thead>
<tbody></tbody>
</table>
</div>
+6 -2
View File
@@ -14,8 +14,12 @@
limitations under the License.
-->
<% map_classes = ["map-panel"]
map_classes << "map-panel--full" if defined?(full_screen) && full_screen %>
<div class="<%= map_classes.join(" ") %>" id="mapPanel">
map_classes << "map-panel--full" if defined?(full_screen) && full_screen
data_attrs = []
if defined?(legend_collapsed)
data_attrs << "data-legend-collapsed=\"#{legend_collapsed ? 'true' : 'false'}\""
end %>
<div class="<%= map_classes.join(" ") %>" id="mapPanel" <%= data_attrs.join(" ") %>>
<div id="map" role="region" aria-label="Nodes map"></div>
<div class="map-toolbar" role="group" aria-label="Map view controls">
<button id="mapFullscreenToggle" type="button" aria-pressed="false" aria-label="Enter full screen map view">