mirror of https://github.com/l5yth/potato-mesh.git
synced 2026-05-15 13:55:51 +02:00

Compare commits (1 commit)

| Author | SHA1 | Date |
|---|---|---|
| | 858e9fa189 | |
+2
-2

@@ -16,5 +16,5 @@ coverage:
   status:
     project:
       default:
-        target: 100%
-        threshold: 10%
+        target: 99%
+        threshold: 1%
@@ -19,22 +19,6 @@ updates:
    schedule:
      interval: "weekly"
  - package-ecosystem: "python"
    directory: "/data"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "cargo"
    directory: "/matrix"
    schedule:
      interval: "weekly"
  - package-ecosystem: "npm"
    directory: "/web"
    schedule:
      interval: "weekly"
  - package-ecosystem: "pub"
    directory: "/app"
    schedule:
      interval: "weekly"
@@ -20,7 +20,6 @@ on:
   pull_request:
     branches: [ "main" ]
     paths:
       - '.github/**'
       - 'web/**'
       - 'tests/**'
@@ -20,7 +20,6 @@ on:
   pull_request:
     branches: [ "main" ]
     paths:
       - '.github/**'
       - 'app/**'
       - 'tests/**'
@@ -20,7 +20,6 @@ on:
   pull_request:
     branches: [ "main" ]
     paths:
       - '.github/**'
       - 'data/**'
       - 'tests/**'
@@ -20,7 +20,6 @@ on:
   pull_request:
     branches: [ "main" ]
     paths:
       - '.github/**'
       - 'web/**'
       - 'tests/**'

@@ -35,7 +34,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        ruby-version: ['3.4', '4.0']
+        ruby-version: ['3.3', '3.4']

     steps:
       - uses: actions/checkout@v5
@@ -76,4 +76,3 @@ web/node_modules/

 # Debug symbols
 ignored.txt
 ignored-*.txt
+2
-12

@@ -15,7 +15,7 @@ Run linters for Python (`black`) and Ruby (`rufo`) to ensure consistent code for

 ## Project Structure & Module Organization

 The repository splits runtime and ingestion logic. `web/` holds the Sinatra dashboard (Ruby code in `lib/potato_mesh`, views in `views/`, static bundles in `public/`).

-`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. The ingestor is structured as the `data/mesh_ingestor/` package with the following key modules: `daemon.py` (main loop), `handlers.py` (packet processing), `interfaces.py` (interface helpers), `config.py` (env-driven config), `events.py` (TypedDict event schemas), `provider.py` (Provider protocol), `node_identity.py` (canonical node ID utilities), `decode_payload.py` (CLI protobuf decoder), and the `providers/` subpackage (currently `meshtastic.py`). API contracts for all POST ingest routes are documented in `data/mesh_ingestor/CONTRACTS.md`. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.
+`data/` hosts the Python Meshtastic ingestor plus migrations and CLI scripts. API fixtures and end-to-end harnesses live in `tests/`. Dockerfiles and compose files support containerized workflows.

 `matrix/` contains the Rust Matrix bridge; build with `cargo build --release` or `docker build -f matrix/Dockerfile .`, and keep bridge config under `matrix/Config.toml` when running locally.

@@ -39,17 +39,7 @@ Install dependencies with `cd app && flutter pub get`; format with `dart format

 ## Testing Guidelines

 Ruby specs run with `cd web && bundle exec rspec`, producing SimpleCov output in `coverage/`. Front-end behaviour is verified through Node’s test runner: `cd web && npm test` writes V8 coverage and JUnit XML under `reports/`.

-The ingestion layer is tested with `pytest -q tests/`; leave fixtures in `tests/` untouched so CI can replay them. The suite includes both integration tests (`test_mesh.py`) and focused unit tests — `test_events_unit.py` (TypedDict schemas), `test_provider_unit.py` (Provider protocol conformance and `MeshtasticProvider`), `test_node_identity_unit.py` (canonical ID helpers), `test_daemon_unit.py`, `test_serialization_unit.py`, and `test_decode_payload.py`. New features should ship with matching specs and updated integration checks.
-
-## Adding a New Ingestor Provider
-
-The `data/mesh_ingestor/provider.py` module defines a `@runtime_checkable` `Provider` Protocol with five members: `name` (str), `subscribe()`, `connect(*, active_candidate)`, `extract_host_node_id(iface)`, and `node_snapshot_items(iface)`. To add a new backend (e.g. Reticulum, MeshCore):
-
-1. Create `data/mesh_ingestor/providers/<name>.py` with a class satisfying the Protocol.
-2. Register it in `data/mesh_ingestor/providers/__init__.py`.
-3. Pass an instance via `daemon.main(provider=...)` or make it the default in `main()`.
-4. Cover the provider with unit tests in `tests/test_provider_unit.py` — at minimum an `isinstance(..., Provider)` conformance check and any retry/error-handling paths.
-
-Consult `data/mesh_ingestor/CONTRACTS.md` for the canonical event shapes all providers must emit.
+The ingestion layer is guarded by `pytest -q tests/test_mesh.py`; leave fixtures in `tests/` untouched so CI can replay them. New features should ship with matching specs and updated integration checks.

## Commit & Pull Request Guidelines

Commits should stay imperative and reference issues the way history does (`Add chat log entries... (#408)`). Squash noisy work-in-progress commits before pushing. Pull requests need a concise summary, screenshots or curl traces for UI/API tweaks, and links to tracked issues. Paste the command output for the test suites you ran and mention configuration toggles (`API_TOKEN`, `PRIVATE`) reviewers must set.
@@ -1,44 +1,5 @@
|
||||
# CHANGELOG
|
||||
|
||||
## v0.5.9
|
||||
|
||||
* Matrix: listen for synapse on port 41448 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/607>
|
||||
* Web: collapse federation map ledgend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/604>
|
||||
* Web: fix stale node queries by @l5yth in <https://github.com/l5yth/potato-mesh/pull/603>
|
||||
* Matrix: move short name to display name by @l5yth in <https://github.com/l5yth/potato-mesh/pull/602>
|
||||
* Ci: update ruby to 4 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/601>
|
||||
* Web: display traces of last 28 days if available by @l5yth in <https://github.com/l5yth/potato-mesh/pull/599>
|
||||
* Web: establish menu structure by @l5yth in <https://github.com/l5yth/potato-mesh/pull/597>
|
||||
* Matrix: fixed the text-message checkpoint regression by @l5yth in <https://github.com/l5yth/potato-mesh/pull/595>
|
||||
* Matrix: cache seen messages by rx_time not id by @l5yth in <https://github.com/l5yth/potato-mesh/pull/594>
|
||||
* Web: hide the default '0' tab when not active by @l5yth in <https://github.com/l5yth/potato-mesh/pull/593>
|
||||
* Matrix: fix empty bridge state json by @l5yth in <https://github.com/l5yth/potato-mesh/pull/592>
|
||||
* Web: allow certain charts to overflow upper bounds by @l5yth in <https://github.com/l5yth/potato-mesh/pull/585>
|
||||
* Ingestor: support ROUTING_APP messages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/584>
|
||||
* Ci: run nix flake check on ci by @l5yth in <https://github.com/l5yth/potato-mesh/pull/583>
|
||||
* Web: hide legend by default by @l5yth in <https://github.com/l5yth/potato-mesh/pull/582>
|
||||
* Nix flake by @benjajaja in <https://github.com/l5yth/potato-mesh/pull/577>
|
||||
* Support BLE UUID format for macOS Bluetooth devices by @apo-mak in <https://github.com/l5yth/potato-mesh/pull/575>
|
||||
* Web: add mesh.qrp.ro as seed node by @l5yth in <https://github.com/l5yth/potato-mesh/pull/573>
|
||||
* Web: ensure unknown nodes for messages and traces by @l5yth in <https://github.com/l5yth/potato-mesh/pull/572>
|
||||
* Chore: bump version to 0.5.9 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/569>
|
||||
|
||||
## v0.5.8
|
||||
|
||||
* Web: add secondary seed node jmrp.io by @l5yth in <https://github.com/l5yth/potato-mesh/pull/568>
|
||||
* Data: implement whitelist for ingestor by @l5yth in <https://github.com/l5yth/potato-mesh/pull/567>
|
||||
* Web: add ?since= parameter to all apis by @l5yth in <https://github.com/l5yth/potato-mesh/pull/566>
|
||||
* Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/565>
|
||||
* Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/564>
|
||||
* Web: fix federation signature validation and create fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/563>
|
||||
* Chore: update readme by @l5yth in <https://github.com/l5yth/potato-mesh/pull/561>
|
||||
* Matrix: add docker file for bridge by @l5yth in <https://github.com/l5yth/potato-mesh/pull/556>
|
||||
* Matrix: add health checks to startup by @l5yth in <https://github.com/l5yth/potato-mesh/pull/555>
|
||||
* Matrix: omit the api part in base url by @l5yth in <https://github.com/l5yth/potato-mesh/pull/554>
|
||||
* App: add utility coverage tests for main.dart by @l5yth in <https://github.com/l5yth/potato-mesh/pull/552>
|
||||
* Data: add thorough daemon unit tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/553>
|
||||
* Chore: bump version to 0.5.8 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/551>
|
||||
|
||||
## v0.5.7
|
||||
|
||||
* Data: track ingestors heartbeat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/549>
|
||||
|
||||
@@ -88,7 +88,6 @@ The web app can be configured with environment variables (defaults shown):

| `CHANNEL` | `"#LongFast"` | Default channel name displayed in the UI. |
| `FREQUENCY` | `"915MHz"` | Default frequency description displayed in the UI. |
| `CONTACT_LINK` | `"#potatomesh:dod.ngo"` | Chat link or Matrix alias rendered in the footer and overlays. |
| `ANNOUNCEMENT` | _unset_ | Optional announcement banner text rendered above the header on every page. |
| `MAP_CENTER` | `38.761944,-27.090833` | Latitude and longitude that centre the map on load. |
| `MAP_ZOOM` | _unset_ | Fixed Leaflet zoom applied on first load; disables auto-fit when provided. |
| `MAX_DISTANCE` | `42` | Maximum distance (km) before node relationships are hidden on the map. |
@@ -252,36 +251,15 @@ services.potato-mesh = {

## Docker

-Docker images are published on GitHub Container Registry for each release.
-Image names and tags follow the workflow format:
-`${IMAGE_PREFIX}-${service}-${architecture}:${tag}` (see `.github/workflows/docker.yml`).
+Docker images are published on Github for each release:

```bash
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:latest
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-arm64:latest
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-armv7:latest
-
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:latest
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-arm64:latest
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-armv7:latest
-
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:latest
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-arm64:latest
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-armv7:latest
-
-# version-pinned examples
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.5.5
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.5.5
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.5.5
+docker pull ghcr.io/l5yth/potato-mesh/web:latest        # newest release
+docker pull ghcr.io/l5yth/potato-mesh/web:v0.5.5        # pinned historical release
+docker pull ghcr.io/l5yth/potato-mesh/ingestor:latest
+docker pull ghcr.io/l5yth/potato-mesh/matrix-bridge:latest
```

Note: `latest` is only published for non-prerelease versions. Pre-release tags
such as `-rc`, `-beta`, `-alpha`, or `-dev` are version-tagged only.

-When using Compose, set `POTATOMESH_IMAGE_ARCH` in `docker-compose.yml` (or via
-environment) so service images resolve to the correct architecture variant and
-you avoid manual tag mistakes.

Feel free to run the [configure.sh](./configure.sh) script to set up your
environment. See the [Docker guide](DOCKER.md) for more details and custom
deployment instructions.
@@ -292,8 +270,6 @@ A matrix bridge is currently being worked on. It requests messages from a config
 potato-mesh instance and forwards it to a specified matrix channel; see
 [matrix/README.md](./matrix/README.md).

-![matrix](./docs/matrix.png)

 ## Mobile App

 A mobile _reader_ app is currently being worked on. Stay tuned for releases and updates.
@@ -1,18 +1,3 @@
-/*
- * Copyright © 2025-26 l5yth & contributors
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
 plugins {
     id("com.android.application")
     id("kotlin-android")
@@ -1,16 +1,3 @@
-// Copyright © 2025-26 l5yth & contributors
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
 package net.potatomesh.reader

 import io.flutter.embedding.android.FlutterActivity
@@ -1,18 +1,3 @@
-/*
- * Copyright © 2025-26 l5yth & contributors
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
 allprojects {
     repositories {
         google()
@@ -1,18 +1,3 @@
-/*
- * Copyright © 2025-26 l5yth & contributors
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
 pluginManagement {
     val flutterSdkPath =
         run {
+1
-13

@@ -1,18 +1,5 @@
 #!/usr/bin/env bash

-# Copyright © 2025-26 l5yth & contributors
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
 export GIT_TAG="$(git describe --tags --abbrev=0)"
 export GIT_COMMITS="$(git rev-list --count ${GIT_TAG}..HEAD)"
 export GIT_SHA="$(git rev-parse --short=9 HEAD)"

@@ -25,3 +12,4 @@ flutter run \
     --dart-define=GIT_SHA="${GIT_SHA}" \
     --dart-define=GIT_DIRTY="${GIT_DIRTY}" \
     --device-id 38151FDJH00D4C
@@ -15,11 +15,11 @@
 <key>CFBundlePackageType</key>
 <string>FMWK</string>
 <key>CFBundleShortVersionString</key>
-<string>0.5.12</string>
+<string>0.5.9</string>
 <key>CFBundleSignature</key>
 <string>????</string>
 <key>CFBundleVersion</key>
-<string>0.5.12</string>
+<string>0.5.9</string>
 <key>MinimumOSVersion</key>
 <string>14.0</string>
 </dict>
@@ -1,16 +1,3 @@
-// Copyright © 2025-26 l5yth & contributors
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
 import Flutter
 import UIKit
@@ -1,14 +1 @@
-// Copyright © 2025-26 l5yth & contributors
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
 #import "GeneratedPluginRegistrant.h"
@@ -1,16 +1,3 @@
-// Copyright © 2025-26 l5yth & contributors
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
 import Flutter
 import UIKit
 import XCTest
+1
-5

@@ -2944,9 +2944,6 @@ class MeshNode {
   }
 }

-/// The protocol identifier sent to the API to filter results to Meshtastic only.
-const String _kProtocolFilter = 'meshtastic';
-
 /// Build a messages API URI for a given domain or absolute URL.
 Uri _buildMessagesUri(String domain, {int since = 0, int limit = 1000}) {
   final trimmed = domain.trim();

@@ -2954,7 +2951,6 @@ Uri _buildMessagesUri(String domain, {int since = 0, int limit = 1000}) {
     'limit': limit.toString(),
     'encrypted': 'false',
     'since': since.toString(),
-    'protocol': _kProtocolFilter,
   };
   if (trimmed.isEmpty) {
     return Uri.https('potatomesh.net', '/api/messages', params);

@@ -2992,7 +2988,7 @@ Uri _buildNodeUri(String domain, String nodeId) {
 /// Build the bulk nodes API URI for fetching recent nodes.
 Uri _buildNodesUri(String domain, {int limit = 1000}) {
   final trimmedDomain = domain.trim();
-  final params = {'limit': limit.toString(), 'protocol': _kProtocolFilter};
+  final params = {'limit': limit.toString()};

   if (trimmedDomain.isEmpty) {
     return Uri.https('potatomesh.net', '/api/nodes', params);
+1
-1

@@ -1,7 +1,7 @@
 name: potato_mesh_reader
 description: Meshtastic Reader — read-only view for PotatoMesh messages.
 publish_to: "none"
-version: 0.5.12
+version: 0.5.9

 environment:
   sdk: ">=3.4.0 <4.0.0"
+1
-13

@@ -1,18 +1,5 @@
 #!/usr/bin/env bash

-# Copyright © 2025-26 l5yth & contributors
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
 set -euo pipefail

 export GIT_TAG="$(git describe --tags --abbrev=0)"

@@ -40,3 +27,4 @@ fi
 export APK_DIR="build/app/outputs/flutter-apk"
 mv -v "${APK_DIR}/app-release.apk" "${APK_DIR}/potatomesh-reader-android-${TAG_NAME}.apk"
 (cd "${APK_DIR}" && sha256sum "potatomesh-reader-android-${TAG_NAME}.apk" > "potatomesh-reader-android-${TAG_NAME}.apk.sha256sum")
@@ -206,10 +206,8 @@ void main() {

       expect(calls[0].host, 'mesh.example.org');
       expect(calls[0].path, '/api/messages');
-      expect(calls[0].queryParameters['protocol'], 'meshtastic');
       expect(calls[1].scheme, 'https');
       expect(calls[1].path, '/api/messages');
-      expect(calls[1].queryParameters['protocol'], 'meshtastic');
     });
   });
@@ -145,7 +145,6 @@ void main() {
       if (request.url.path == '/api/messages') {
         sinces.add(request.url.queryParameters['since'] ?? '');
         expect(request.url.queryParameters['limit'], '1000');
-        expect(request.url.queryParameters['protocol'], 'meshtastic');
         if (sinces.length == 1) {
           return http.Response(
             jsonEncode([
@@ -1,16 +1,3 @@
-// Copyright © 2025-26 l5yth & contributors
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
 // This is a basic Flutter widget test.
 //
 // To perform an interaction with a widget in your test, use the WidgetTester
+1
-1

@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
 message information before forwarding it to the accompanying web application.
 """

-VERSION = "0.5.12"
+VERSION = "0.5.9"
 """Semantic version identifier shared with the dashboard and front-end."""

 __version__ = VERSION
+1
-2

@@ -20,8 +20,7 @@ CREATE TABLE IF NOT EXISTS ingestors (
     last_seen_time INTEGER NOT NULL,
     version TEXT,
     lora_freq INTEGER,
-    modem_preset TEXT,
-    protocol TEXT NOT NULL DEFAULT 'meshtastic'
+    modem_preset TEXT
 );

 CREATE INDEX IF NOT EXISTS idx_ingestors_last_seen ON ingestors(last_seen_time);
@@ -1,117 +0,0 @@
## Mesh ingestor contracts (stable interfaces)

This repo’s ingestion pipeline is split into:

- **Python collector** (`data/mesh_ingestor/*`) which normalizes packets/events and POSTs JSON to the web app.
- **Sinatra web app** (`web/`) which accepts those payloads on `POST /api/*` ingest routes and persists them into SQLite tables defined under `data/*.sql`.

This document records the **contracts that future providers must preserve**. The intent is to enable adding new providers (MeshCore, Reticulum, …) without changing the Ruby/DB/UI read-side.

### Canonical node identity

- **Canonical node id**: `nodes.node_id` is a `TEXT` primary key and is treated as canonical across the system.
- **Format**: `!%08x` (lowercase hex, 8 chars), for example `!abcdef01`.
- **Normalization**:
  - Python currently normalizes via `data/mesh_ingestor/serialization.py:_canonical_node_id`.
  - Ruby normalizes via `web/lib/potato_mesh/application/data_processing.rb:canonical_node_parts`.
- **Dual addressing**: Ruby routes and queries accept either a canonical `!xxxxxxxx` string or a numeric node id; they normalize to `node_id`.

Note: non-Meshtastic providers will need a strategy to map their native node identifiers into this `!%08x` space. That mapping is intentionally not standardized in code yet.
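As a sketch, the `!%08x` normalization described above amounts to the following. The helper names here are illustrative; the repository's real implementation lives in `serialization.py:_canonical_node_id` and its Ruby counterpart.

```python
def canonical_node_id(node_num: int) -> str:
    """Render a numeric node id in the canonical `!%08x` form (lowercase hex, 8 chars)."""
    return f"!{node_num & 0xFFFFFFFF:08x}"


def node_num_from_id(node_id: str) -> int:
    """Invert the mapping, for routes that accept either the `!xxxxxxxx` or numeric form."""
    return int(node_id.lstrip("!"), 16)


print(canonical_node_id(0xABCDEF01))  # -> !abcdef01
```

Masking with `0xFFFFFFFF` keeps the id inside the 32-bit space the format implies; the round trip `node_num_from_id(canonical_node_id(n)) == n` holds for any 32-bit `n`.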
### Ingest HTTP routes and payload shapes

Future providers should emit payloads that match these shapes (keys + types), which are validated by existing tests (notably `tests/test_mesh.py`).

#### `POST /api/nodes`

Payload is a mapping keyed by canonical node id, with an optional top-level `"ingestor"` key:

- `{ "!abcdef01": { ... node fields ... }, "ingestor": "!ingestornodeid" }`

When `"ingestor"` is present the protocol is inherited from the registered ingestor (see `POST /api/ingestors`); omitting it defaults to `"meshtastic"`.

Node entry fields are “Meshtastic-ish” (camelCase) and may include:

- `num` (int node number)
- `lastHeard` (int unix seconds)
- `snr` (float)
- `hopsAway` (int)
- `isFavorite` (bool)
- `user` (mapping; e.g. `shortName`, `longName`, `macaddr`, `hwModel`, `role`, `publicKey`, `isUnmessagable`)
- `deviceMetrics` (mapping; e.g. `batteryLevel`, `voltage`, `channelUtilization`, `airUtilTx`, `uptimeSeconds`)
- `position` (mapping; `latitude`, `longitude`, `altitude`, `time`, `locationSource`, `precisionBits`, optional nested `raw`)
- Optional radio metadata: `lora_freq`, `modem_preset`
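A hedged example of a payload matching this shape; every value below (node number, names, coordinates, frequencies) is invented for illustration, not taken from a real mesh.

```python
import json

# Hypothetical POST /api/nodes body: a mapping keyed by canonical node id,
# plus the optional top-level "ingestor" key.
nodes_payload = {
    "!abcdef01": {
        "num": 2882400001,  # == 0xabcdef01
        "lastHeard": 1700000000,
        "snr": 7.25,
        "hopsAway": 1,
        "isFavorite": False,
        "user": {"shortName": "AB01", "longName": "Example Node", "hwModel": "TBEAM"},
        "deviceMetrics": {"batteryLevel": 87, "voltage": 4.01},
        "position": {"latitude": 52.52, "longitude": 13.405, "time": 1700000000},
        "lora_freq": 868,
        "modem_preset": "LONG_FAST",
    },
    # Optional: makes the records inherit this ingestor's protocol.
    "ingestor": "!00000001",
}

body = json.dumps(nodes_payload)
```

Note that the node fields are camelCase while the radio metadata (`lora_freq`, `modem_preset`) is snake_case, exactly as the field list above specifies.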
#### `POST /api/messages`

Single message payload:

- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Identity: `from_id` (string/int), `to_id` (string/int), `channel` (int), `portnum` (string|nil)
- Payload: `text` (string|nil), `encrypted` (string|nil), `reply_id` (int|nil), `emoji` (string|nil)
- RF: `snr` (float|nil), `rssi` (int|nil), `hop_limit` (int|nil)
- Meta: `channel_name` (string; only when not encrypted and known), `ingestor` (canonical host id), `lora_freq`, `modem_preset`
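An illustrative message payload built from these fields; the ids, text, and RF numbers are made up, and deriving `rx_iso` from `rx_time` as shown is just one reasonable convention.

```python
from datetime import datetime, timezone

rx_time = 1700000000
message_payload = {
    # Required
    "id": 123456789,  # invented packet id
    "rx_time": rx_time,
    "rx_iso": datetime.fromtimestamp(rx_time, tz=timezone.utc).isoformat(),
    # Identity
    "from_id": "!abcdef01",
    "to_id": "!ffffffff",  # invented destination
    "channel": 0,
    "portnum": "TEXT_MESSAGE_APP",
    # Payload
    "text": "hello mesh",
    "encrypted": None,
    "reply_id": None,
    "emoji": None,
    # RF
    "snr": 6.25,
    "rssi": -95,
    "hop_limit": 3,
    # Meta (channel_name only because this message is not encrypted)
    "channel_name": "#LongFast",
    "ingestor": "!00000001",
}
```

Keeping `rx_time` and `rx_iso` derived from the same instant avoids the two clocks drifting apart in stored records.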
#### `POST /api/positions`

Single position payload:

- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Node: `node_id` (canonical string), `node_num` (int|nil), `num` (int|nil), `from_id` (canonical string), `to_id` (string|nil)
- Position: `latitude`, `longitude`, `altitude` (floats|nil)
- Position time: `position_time` (int|nil)
- Quality: `location_source` (string|nil), `precision_bits` (int|nil), `sats_in_view` (int|nil), `pdop` (float|nil)
- Motion: `ground_speed` (float|nil), `ground_track` (float|nil)
- RF/meta: `snr`, `rssi`, `hop_limit`, `bitfield`, `payload_b64` (string|nil), `raw` (mapping|nil), `ingestor`, `lora_freq`, `modem_preset`

#### `POST /api/telemetry`

Single telemetry payload:

- Required: `id` (int), `rx_time` (int), `rx_iso` (string)
- Node: `node_id` (canonical string|nil), `node_num` (int|nil), `from_id`, `to_id`
- Time: `telemetry_time` (int|nil)
- Packet: `channel` (int), `portnum` (string|nil), `bitfield` (int|nil), `hop_limit` (int|nil)
- RF: `snr` (float|nil), `rssi` (int|nil)
- Raw: `payload_b64` (string; may be empty string when unknown)
- Metrics: many optional snake_case keys (`battery_level`, `voltage`, `temperature`, etc.)
- Subtype: `telemetry_type` (string|nil) — optional discriminator identifying which Meshtastic protobuf oneof was set; one of `"device"`, `"environment"`, `"power"`, or `"air_quality"`. Ingestors that detect the subtype SHOULD include this field; omit rather than send `null` when unknown. The web app infers the type from metric-field presence when absent, so old ingestors remain compatible.
- Meta: `ingestor`, `lora_freq`, `modem_preset`
#### `POST /api/neighbors`

Neighbors snapshot payload:

- Node: `node_id` (canonical string), `node_num` (int|nil)
- `neighbors`: list of entries with `neighbor_id` (canonical string), `neighbor_num` (int|nil), `snr` (float|nil), `rx_time` (int), `rx_iso` (string)
- Snapshot time: `rx_time`, `rx_iso`
- Optional: `node_broadcast_interval_secs` (int|nil), `last_sent_by_id` (canonical string|nil)
- Meta: `ingestor`, `lora_freq`, `modem_preset`

#### `POST /api/traces`

Single trace payload:

- Identity: `id` (int|nil), `request_id` (int|nil)
- Endpoints: `src` (int|nil), `dest` (int|nil)
- Path: `hops` (list[int])
- Time: `rx_time` (int), `rx_iso` (string)
- Metrics: `rssi` (int|nil), `snr` (float|nil), `elapsed_ms` (int|nil)
- Meta: `ingestor`, `lora_freq`, `modem_preset`

#### `POST /api/ingestors`

Heartbeat payload:

- `node_id` (canonical string)
- `start_time` (int), `last_seen_time` (int)
- `version` (string)
- Optional: `lora_freq`, `modem_preset`
- Optional: `protocol` (string; e.g. `"meshtastic"`, `"meshcore"`) — declares the mesh backend for this ingestor; defaults to `"meshtastic"` when absent

**Protocol propagation**: all event records (`messages`, `positions`, `telemetry`, `traces`, `neighbors`) that reference this ingestor via their `ingestor` field will inherit its `protocol` value at write time.

### GET endpoint filtering

All collection GET endpoints (`/api/nodes`, `/api/messages`, `/api/positions`, `/api/telemetry`, `/api/traces`, `/api/neighbors`, `/api/ingestors`) accept an optional `?protocol=<value>` query parameter. When present, only records whose `protocol` column matches the given value are returned. The `protocol` field is included in all GET responses.
|
||||
|
||||
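For example, a filtered fetch against one of the collection endpoints can be built like this; the host name is hypothetical:

```python
from urllib.parse import urlencode

# Build a protocol-filtered collection URL. The instance domain is a
# placeholder; the endpoint path and query parameter come from the docs.
base = "https://mesh.example.org/api/nodes"
query = urlencode({"protocol": "meshcore"})
url = f"{base}?{query}"
print(url)  # https://mesh.example.org/api/nodes?protocol=meshcore
```

Dropping the query string returns records for all protocols.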
@@ -25,7 +25,6 @@ from .. import VERSION as _PACKAGE_VERSION
 from . import (
     channels,
     config,
-    connection,
     daemon,
     handlers,
     ingestors,
@@ -47,7 +46,7 @@ def _reexport(module) -> None:
 def _export_constants() -> None:
     globals()["json"] = queue.json
     globals()["urllib"] = queue.urllib
-    globals()["glob"] = connection.glob
+    globals()["glob"] = interfaces.glob
     __all__.extend(["json", "urllib", "glob", "threading", "signal"])
@@ -65,21 +65,6 @@ CHANNEL_INDEX = int(os.environ.get("CHANNEL_INDEX", str(DEFAULT_CHANNEL_INDEX)))
 
 DEBUG = os.environ.get("DEBUG") == "1"
 
-_KNOWN_PROVIDERS = ("meshtastic", "meshcore")
-
-_raw_provider = os.environ.get("PROVIDER", "meshtastic").strip().lower()
-if _raw_provider not in _KNOWN_PROVIDERS:
-    raise ValueError(
-        f"Unknown PROVIDER={_raw_provider!r}. "
-        f"Valid options: {', '.join(_KNOWN_PROVIDERS)}"
-    )
-
-PROVIDER = _raw_provider
-"""Active ingestion provider, selected via the :envvar:`PROVIDER` environment variable.
-
-Accepted values are ``meshtastic`` (default) and ``meshcore``.
-"""
-
 
 def _parse_channel_names(raw_value: str | None) -> tuple[str, ...]:
     """Normalise a comma-separated list of channel names.
@@ -1,163 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Provider-agnostic connection target helpers.

This module contains utilities shared by all ingestor providers for
parsing and auto-discovering connection targets. It is intentionally
free of any provider-specific imports so that Meshtastic, MeshCore,
and future providers can all rely on the same logic.
"""

from __future__ import annotations

import glob
import re

# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------

DEFAULT_TCP_PORT: int = 4403
"""Default TCP port used when no port is explicitly supplied."""

DEFAULT_SERIAL_PATTERNS: tuple[str, ...] = (
    "/dev/ttyACM*",
    "/dev/ttyUSB*",
    "/dev/tty.usbmodem*",
    "/dev/tty.usbserial*",
    "/dev/cu.usbmodem*",
    "/dev/cu.usbserial*",
)
"""Glob patterns for common serial device paths on Linux and macOS."""

# Support both MAC addresses (Linux/Windows) and UUIDs (macOS).
BLE_ADDRESS_RE = re.compile(
    r"^(?:"
    r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}|"  # MAC address format
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"  # UUID format
    r")$"
)
"""Compiled regex matching a BLE MAC address or UUID."""

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def parse_ble_target(value: str) -> str | None:
    """Return a normalised BLE address (MAC or UUID) when ``value`` matches the format.

    Parameters:
        value: User-provided target string.

    Returns:
        The normalised MAC address (upper-cased) or UUID, or ``None`` when
        the value does not match a recognised BLE address format.
    """
    if not value:
        return None
    value = value.strip()
    if not value:
        return None
    if BLE_ADDRESS_RE.fullmatch(value):
        return value.upper()
    return None


def parse_tcp_target(value: str) -> tuple[str, int] | None:
    """Parse a TCP ``host:port`` target, accepting both IPs and hostnames.

    Unlike the Meshtastic-specific helper in :mod:`interfaces`, hostnames are
    accepted here because MeshCore companions may be reached over a local
    network by name (e.g. ``meshcore-node.local:4403``).

    BLE MAC addresses (five colons) and bare serial port paths (no colon) are
    correctly rejected — they cannot produce a valid ``host:port`` pair.

    Parameters:
        value: User-provided target string.

    Returns:
        ``(host, port)`` on success, or ``None`` when *value* does not look
        like a TCP target.
    """
    if not value:
        return None
    value = value.strip()
    if not value:
        return None

    # Strip URL scheme prefix (e.g. ``tcp://host:4403`` or ``http://host:4403``).
    if "://" in value:
        value = value.split("://", 1)[1]

    # Handle bracketed IPv6: ``[::1]:4403``.
    if value.startswith("["):
        bracket_end = value.find("]")
        if bracket_end == -1:
            return None
        host = value[1:bracket_end]
        rest = value[bracket_end + 1 :]
        if rest.startswith(":"):
            try:
                port = int(rest[1:])
            except ValueError:
                return None
            if not (1 <= port <= 65535):
                return None
        else:
            port = DEFAULT_TCP_PORT
        if not host:
            return None
        return host, port

    # For non-bracketed addresses require exactly one colon so that BLE MACs
    # (five colons) and bare serial paths (no colon) are rejected.
    colon_count = value.count(":")
    if colon_count != 1:
        return None

    host, _, port_str = value.partition(":")
    if not host:
        return None
    try:
        port = int(port_str)
    except ValueError:
        return None
    if not (1 <= port <= 65535):
        return None
    return host, port


def default_serial_targets() -> list[str]:
    """Return candidate serial device paths for auto-discovery.

    Globs for common USB serial device paths on Linux and macOS. Always
    includes ``/dev/ttyACM0`` as a final fallback so callers have at least
    one candidate even on systems without any attached hardware.

    Returns:
        Ordered list of candidate device paths, deduplicated.
    """
    candidates: list[str] = []
    seen: set[str] = set()
    for pattern in DEFAULT_SERIAL_PATTERNS:
        for path in sorted(glob.glob(pattern)):
            if path not in seen:
                candidates.append(path)
                seen.add(path)
    if "/dev/ttyACM0" not in seen:
        candidates.append("/dev/ttyACM0")
    return candidates
+282 -358
@@ -16,7 +16,6 @@
 
 from __future__ import annotations
 
-import dataclasses
 import inspect
 import signal
 import threading
@@ -25,7 +24,6 @@ import time
 from pubsub import pub
 
 from . import config, handlers, ingestors, interfaces
-from .provider import Provider
 
 _RECEIVE_TOPICS = (
     "meshtastic.receive",
@@ -199,6 +197,11 @@ def _process_ingestor_heartbeat(iface, *, ingestor_announcement_sent: bool) -> b
     if heartbeat_sent and not ingestor_announcement_sent:
         return True
     return ingestor_announcement_sent
+    iface_cls = getattr(iface_obj, "__class__", None)
+    if iface_cls is None:
+        return False
+    module_name = getattr(iface_cls, "__module__", "") or ""
+    return "ble_interface" in module_name
 
 
 def _connected_state(candidate) -> bool | None:
@@ -240,330 +243,10 @@ def _connected_state(candidate) -> bool | None:
     return None
 
 
-# ---------------------------------------------------------------------------
-# Loop state container
-# ---------------------------------------------------------------------------
-
-
-@dataclasses.dataclass
-class _DaemonState:
-    """All mutable state for the :func:`main` daemon loop."""
-
-    provider: Provider
-    stop: threading.Event
-    configured_port: str | None
-    inactivity_reconnect_secs: float
-    energy_saving_enabled: bool
-    energy_online_secs: float
-    energy_sleep_secs: float
-    retry_delay: float
-    last_seen_packet_monotonic: float | None
-    active_candidate: str | None
-
-    iface: object = None
-    resolved_target: str | None = None
-    initial_snapshot_sent: bool = False
-    energy_session_deadline: float | None = None
-    iface_connected_at: float | None = None
-    last_inactivity_reconnect: float | None = None
-    ingestor_announcement_sent: bool = False
-    announced_target: bool = False
-
-
-# ---------------------------------------------------------------------------
-# Per-iteration helpers (each returns True when the caller should `continue`)
-# ---------------------------------------------------------------------------
-
-
-def _advance_retry_delay(current: float) -> float:
-    """Return the next exponential-backoff retry delay."""
-
-    if config._RECONNECT_MAX_DELAY_SECS <= 0:
-        return current
-    # `current == 0` on the very first call (bootstrap); seed from config.
-    next_delay = current * 2 if current else config._RECONNECT_INITIAL_DELAY_SECS
-    return min(next_delay, config._RECONNECT_MAX_DELAY_SECS)
-
-
-def _energy_sleep(state: _DaemonState, reason: str) -> None:
-    """Sleep for the configured energy-saving interval."""
-
-    if not state.energy_saving_enabled or state.energy_sleep_secs <= 0:
-        return
-    if config.DEBUG:
-        config._debug_log(
-            f"energy saving: {reason}; sleeping for {state.energy_sleep_secs:g}s"
-        )
-    state.stop.wait(state.energy_sleep_secs)
-
-
-def _try_connect(state: _DaemonState) -> bool:
-    """Attempt to establish the mesh interface.
-
-    Returns:
-        ``True`` when connected and the loop should proceed; ``False`` when
-        the connection failed and the caller should ``continue``.
-    """
-
-    try:
-        state.iface, state.resolved_target, state.active_candidate = (
-            state.provider.connect(active_candidate=state.active_candidate)
-        )
-        handlers.register_host_node_id(state.provider.extract_host_node_id(state.iface))
-        ingestors.set_ingestor_node_id(handlers.host_node_id())
-        state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
-        state.initial_snapshot_sent = False
-        if not state.announced_target and state.resolved_target:
-            config._debug_log(
-                "Using mesh interface",
-                context="daemon.interface",
-                severity="info",
-                target=state.resolved_target,
-            )
-            state.announced_target = True
-        if state.energy_saving_enabled and state.energy_online_secs > 0:
-            state.energy_session_deadline = time.monotonic() + state.energy_online_secs
-        else:
-            state.energy_session_deadline = None
-        state.iface_connected_at = time.monotonic()
-        # Seed the inactivity tracking from the connection time so a
-        # reconnect is given a full inactivity window even when the
-        # handler still reports the previous packet timestamp.
-        state.last_seen_packet_monotonic = state.iface_connected_at
-        state.last_inactivity_reconnect = None
-        return True
-    except interfaces.NoAvailableMeshInterface as exc:
-        config._debug_log(
-            "No mesh interface available",
-            context="daemon.interface",
-            severity="error",
-            error_message=str(exc),
-        )
-        _close_interface(state.iface)
-        raise SystemExit(1) from exc
-    except Exception as exc:
-        config._debug_log(
-            "Failed to create mesh interface",
-            context="daemon.interface",
-            severity="warn",
-            candidate=state.active_candidate or "auto",
-            error_class=exc.__class__.__name__,
-            error_message=str(exc),
-        )
-        if state.configured_port is None:
-            state.active_candidate = None
-            state.announced_target = False
-        state.stop.wait(state.retry_delay)
-        state.retry_delay = _advance_retry_delay(state.retry_delay)
-        return False
-
-
-def _check_energy_saving(state: _DaemonState) -> bool:
-    """Disconnect and sleep when energy-saving conditions are met.
-
-    Returns:
-        ``True`` when the interface was closed and the caller should
-        ``continue``; ``False`` otherwise.
-    """
-
-    if not state.energy_saving_enabled or state.iface is None:
-        return False
-
-    if (
-        state.energy_session_deadline is not None
-        and time.monotonic() >= state.energy_session_deadline
-    ):
-        reason = "disconnected after session"
-        log_msg = "Energy saving disconnect"
-    elif (
-        _is_ble_interface(state.iface)
-        and getattr(state.iface, "client", object()) is None
-    ):
-        reason = "BLE client disconnected"
-        log_msg = "Energy saving BLE disconnect"
-    else:
-        return False
-    config._debug_log(log_msg, context="daemon.energy", severity="info")
-    _close_interface(state.iface)
-    state.iface = None
-    state.announced_target = False
-    state.initial_snapshot_sent = False
-    state.energy_session_deadline = None
-    _energy_sleep(state, reason)
-    return True
-
-
-def _try_send_snapshot(state: _DaemonState) -> bool:
-    """Send the initial node snapshot via the provider.
-
-    Returns:
-        ``True`` when the snapshot succeeded (or no nodes exist yet); ``False``
-        when a hard error occurred and the caller should ``continue``.
-    """
-
-    try:
-        node_items = state.provider.node_snapshot_items(state.iface)
-        processed_any = False
-        for node_id, node in node_items:
-            processed_any = True
-            try:
-                handlers.upsert_node(node_id, node)
-            except Exception as exc:
-                config._debug_log(
-                    "Failed to update node snapshot",
-                    context="daemon.snapshot",
-                    severity="warn",
-                    node_id=node_id,
-                    error_class=exc.__class__.__name__,
-                    error_message=str(exc),
-                )
-            if config.DEBUG:
-                config._debug_log(
-                    "Snapshot node payload",
-                    context="daemon.snapshot",
-                    node=node,
-                )
-        if processed_any:
-            state.initial_snapshot_sent = True
-        return True
-    except Exception as exc:
-        config._debug_log(
-            "Snapshot refresh failed",
-            context="daemon.snapshot",
-            severity="warn",
-            error_class=exc.__class__.__name__,
-            error_message=str(exc),
-        )
-        _close_interface(state.iface)
-        state.iface = None
-        state.stop.wait(state.retry_delay)
-        state.retry_delay = _advance_retry_delay(state.retry_delay)
-        return False
-
-
-def _check_inactivity_reconnect(state: _DaemonState) -> bool:
-    """Reconnect when the interface has been silent for too long.
-
-    Returns:
-        ``True`` when a reconnect was triggered and the caller should
-        ``continue``; ``False`` otherwise.
-    """
-
-    if state.iface is None or state.inactivity_reconnect_secs <= 0:
-        return False
-
-    now = time.monotonic()
-    iface_activity = handlers.last_packet_monotonic()
-
-    if (
-        iface_activity is not None
-        and state.iface_connected_at is not None
-        and iface_activity < state.iface_connected_at
-    ):
-        iface_activity = state.iface_connected_at
-
-    if iface_activity is not None and (
-        state.last_seen_packet_monotonic is None
-        or iface_activity > state.last_seen_packet_monotonic
-    ):
-        state.last_seen_packet_monotonic = iface_activity
-        state.last_inactivity_reconnect = None
-
-    latest_activity = iface_activity
-    if latest_activity is None and state.iface_connected_at is not None:
-        latest_activity = state.iface_connected_at
-    if latest_activity is None:
-        latest_activity = now
-
-    inactivity_elapsed = now - latest_activity
-    believed_disconnected = (
-        _connected_state(getattr(state.iface, "isConnected", None)) is False
-    )
-
-    if (
-        not believed_disconnected
-        and inactivity_elapsed < state.inactivity_reconnect_secs
-    ):
-        return False
-
-    if (
-        state.last_inactivity_reconnect is not None
-        and now - state.last_inactivity_reconnect < state.inactivity_reconnect_secs
-    ):
-        return False
-
-    reason = (
-        "disconnected"
-        if believed_disconnected
-        else f"no data for {inactivity_elapsed:.0f}s"
-    )
-    config._debug_log(
-        "Mesh interface inactivity detected",
-        context="daemon.interface",
-        severity="warn",
-        reason=reason,
-    )
-    state.last_inactivity_reconnect = now
-    _close_interface(state.iface)
-    state.iface = None
-    state.announced_target = False
-    state.initial_snapshot_sent = False
-    state.energy_session_deadline = None
-    state.iface_connected_at = None
-    return True
-
-
-# ---------------------------------------------------------------------------
-# Loop iteration helper
-# ---------------------------------------------------------------------------
-
-
-def _loop_iteration(state: _DaemonState) -> bool:
-    """Execute one pass of the daemon main loop.
-
-    Encapsulates the per-iteration ``continue`` decisions so that
-    :func:`main` stays within the allowed cognitive-complexity budget.
-
-    Returns:
-        ``True`` when the loop should start the next iteration immediately
-        (equivalent to a ``continue``); ``False`` when the full pass
-        completed and the caller should sleep before iterating again.
-    """
-
-    if state.iface is None and not _try_connect(state):
-        return True
-    if _check_energy_saving(state):
-        return True
-    if not state.initial_snapshot_sent and not _try_send_snapshot(state):
-        return True
-    if _check_inactivity_reconnect(state):
-        return True
-    state.ingestor_announcement_sent = _process_ingestor_heartbeat(
-        state.iface, ingestor_announcement_sent=state.ingestor_announcement_sent
-    )
-    state.retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
-    return False
-
-
 # ---------------------------------------------------------------------------
 # Entry point
 # ---------------------------------------------------------------------------
 
 
-def main(*, provider: Provider | None = None) -> None:
+def main(existing_interface=None) -> None:
     """Run the mesh ingestion daemon until interrupted."""
 
-    if provider is None:
-        if config.PROVIDER == "meshcore":
-            from .providers.meshcore import MeshcoreProvider
-
-            provider = MeshcoreProvider()
-        else:
-            from .providers.meshtastic import MeshtasticProvider
-
-            provider = MeshtasticProvider()
-
-    subscribed = provider.subscribe()
+    subscribed = _subscribe_receive_topics()
     if subscribed:
         config._debug_log(
             "Subscribed to receive topics",
@@ -572,72 +255,313 @@ def main(existing_interface=None) -> None:
             topics=subscribed,
         )
 
-    state = _DaemonState(
-        provider=provider,
-        stop=threading.Event(),
-        configured_port=config.CONNECTION,
-        inactivity_reconnect_secs=max(
-            0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
-        ),
-        energy_saving_enabled=config.ENERGY_SAVING,
-        energy_online_secs=max(0.0, config._ENERGY_ONLINE_DURATION_SECS),
-        energy_sleep_secs=max(0.0, config._ENERGY_SLEEP_SECS),
-        retry_delay=max(0.0, config._RECONNECT_INITIAL_DELAY_SECS),
-        last_seen_packet_monotonic=handlers.last_packet_monotonic(),
-        active_candidate=config.CONNECTION,
-    )
+    iface = existing_interface
+    resolved_target = None
+    retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
+
+    stop = threading.Event()
+    initial_snapshot_sent = False
+    energy_session_deadline = None
+    iface_connected_at: float | None = None
+    last_seen_packet_monotonic = handlers.last_packet_monotonic()
+    last_inactivity_reconnect: float | None = None
+    inactivity_reconnect_secs = max(
+        0.0, getattr(config, "_INACTIVITY_RECONNECT_SECS", 0.0)
+    )
+    ingestor_announcement_sent = False
+
+    energy_saving_enabled = config.ENERGY_SAVING
+    energy_online_secs = max(0.0, config._ENERGY_ONLINE_DURATION_SECS)
+    energy_sleep_secs = max(0.0, config._ENERGY_SLEEP_SECS)
+
+    def _energy_sleep(reason: str) -> None:
+        if not energy_saving_enabled or energy_sleep_secs <= 0:
+            return
+        if config.DEBUG:
+            config._debug_log(
+                f"energy saving: {reason}; sleeping for {energy_sleep_secs:g}s"
+            )
+        stop.wait(energy_sleep_secs)
 
     def handle_sigterm(*_args) -> None:
-        state.stop.set()
+        stop.set()
 
     def handle_sigint(signum, frame) -> None:
-        if state.stop.is_set():
+        if stop.is_set():
             signal.default_int_handler(signum, frame)
             return
-        state.stop.set()
+        stop.set()
 
     if threading.current_thread() == threading.main_thread():
         signal.signal(signal.SIGINT, handle_sigint)
         signal.signal(signal.SIGTERM, handle_sigterm)
 
+    target = config.INSTANCE or "(no INSTANCE_DOMAIN configured)"
+    configured_port = config.CONNECTION
+    active_candidate = configured_port
+    announced_target = False
     config._debug_log(
         "Mesh daemon starting",
         context="daemon.main",
         severity="info",
-        target=config.INSTANCE or "(no INSTANCE_DOMAIN configured)",
-        port=config.CONNECTION or "auto",
+        target=target,
+        port=configured_port or "auto",
         channel=config.CHANNEL_INDEX,
     )
 
     try:
-        while not state.stop.is_set():
-            if not _loop_iteration(state):
-                state.stop.wait(config.SNAPSHOT_SECS)
+        while not stop.is_set():
+            if iface is None:
+                try:
+                    if active_candidate:
+                        iface, resolved_target = interfaces._create_serial_interface(
+                            active_candidate
+                        )
+                    else:
+                        iface, resolved_target = interfaces._create_default_interface()
+                        active_candidate = resolved_target
+                    interfaces._ensure_radio_metadata(iface)
+                    interfaces._ensure_channel_metadata(iface)
+                    handlers.register_host_node_id(
+                        interfaces._extract_host_node_id(iface)
+                    )
+                    ingestors.set_ingestor_node_id(handlers.host_node_id())
+                    retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
+                    initial_snapshot_sent = False
+                    if not announced_target and resolved_target:
+                        config._debug_log(
+                            "Using mesh interface",
+                            context="daemon.interface",
+                            severity="info",
+                            target=resolved_target,
+                        )
+                        announced_target = True
+                    if energy_saving_enabled and energy_online_secs > 0:
+                        energy_session_deadline = time.monotonic() + energy_online_secs
+                    else:
+                        energy_session_deadline = None
+                    iface_connected_at = time.monotonic()
+                    # Seed the inactivity tracking from the connection time so a
+                    # reconnect is given a full inactivity window even when the
+                    # handler still reports the previous packet timestamp.
+                    last_seen_packet_monotonic = iface_connected_at
+                    last_inactivity_reconnect = None
+                except interfaces.NoAvailableMeshInterface as exc:
+                    config._debug_log(
+                        "No mesh interface available",
+                        context="daemon.interface",
+                        severity="error",
+                        error_message=str(exc),
+                    )
+                    _close_interface(iface)
+                    raise SystemExit(1) from exc
+                except Exception as exc:
+                    candidate_desc = active_candidate or "auto"
+                    config._debug_log(
+                        "Failed to create mesh interface",
+                        context="daemon.interface",
+                        severity="warn",
+                        candidate=candidate_desc,
+                        error_class=exc.__class__.__name__,
+                        error_message=str(exc),
+                    )
+                    if configured_port is None:
+                        active_candidate = None
+                        announced_target = False
+                    stop.wait(retry_delay)
+                    if config._RECONNECT_MAX_DELAY_SECS > 0:
+                        retry_delay = min(
+                            (
+                                retry_delay * 2
+                                if retry_delay
+                                else config._RECONNECT_INITIAL_DELAY_SECS
+                            ),
+                            config._RECONNECT_MAX_DELAY_SECS,
+                        )
+                    continue
+
+            if energy_saving_enabled and iface is not None:
+                if (
+                    energy_session_deadline is not None
+                    and time.monotonic() >= energy_session_deadline
+                ):
+                    config._debug_log(
+                        "Energy saving disconnect",
+                        context="daemon.energy",
+                        severity="info",
+                    )
+                    _close_interface(iface)
+                    iface = None
+                    announced_target = False
+                    initial_snapshot_sent = False
+                    energy_session_deadline = None
+                    _energy_sleep("disconnected after session")
+                    continue
+                if (
+                    _is_ble_interface(iface)
+                    and getattr(iface, "client", object()) is None
+                ):
+                    config._debug_log(
+                        "Energy saving BLE disconnect",
+                        context="daemon.energy",
+                        severity="info",
+                    )
+                    _close_interface(iface)
+                    iface = None
+                    announced_target = False
+                    initial_snapshot_sent = False
+                    energy_session_deadline = None
+                    _energy_sleep("BLE client disconnected")
+                    continue
+
+            if not initial_snapshot_sent:
+                try:
+                    nodes = getattr(iface, "nodes", {}) or {}
+                    node_items = _node_items_snapshot(nodes)
+                    if node_items is None:
+                        config._debug_log(
+                            "Skipping node snapshot due to concurrent modification",
+                            context="daemon.snapshot",
+                        )
+                    else:
+                        processed_snapshot_item = False
+                        for node_id, node in node_items:
+                            processed_snapshot_item = True
+                            try:
+                                handlers.upsert_node(node_id, node)
+                            except Exception as exc:
+                                config._debug_log(
+                                    "Failed to update node snapshot",
+                                    context="daemon.snapshot",
+                                    severity="warn",
+                                    node_id=node_id,
+                                    error_class=exc.__class__.__name__,
+                                    error_message=str(exc),
+                                )
+                            if config.DEBUG:
+                                config._debug_log(
+                                    "Snapshot node payload",
+                                    context="daemon.snapshot",
+                                    node=node,
+                                )
+                        if processed_snapshot_item:
+                            initial_snapshot_sent = True
+                except Exception as exc:
+                    config._debug_log(
+                        "Snapshot refresh failed",
+                        context="daemon.snapshot",
+                        severity="warn",
+                        error_class=exc.__class__.__name__,
+                        error_message=str(exc),
+                    )
+                    _close_interface(iface)
+                    iface = None
+                    stop.wait(retry_delay)
+                    if config._RECONNECT_MAX_DELAY_SECS > 0:
+                        retry_delay = min(
+                            (
+                                retry_delay * 2
+                                if retry_delay
+                                else config._RECONNECT_INITIAL_DELAY_SECS
+                            ),
+                            config._RECONNECT_MAX_DELAY_SECS,
+                        )
+                    continue
+
+            if iface is not None and inactivity_reconnect_secs > 0:
+                now_monotonic = time.monotonic()
+                iface_activity = handlers.last_packet_monotonic()
+                if (
+                    iface_activity is not None
+                    and iface_connected_at is not None
+                    and iface_activity < iface_connected_at
+                ):
+                    iface_activity = iface_connected_at
+                if iface_activity is not None and (
+                    last_seen_packet_monotonic is None
+                    or iface_activity > last_seen_packet_monotonic
+                ):
+                    last_seen_packet_monotonic = iface_activity
+                    last_inactivity_reconnect = None
+
+                latest_activity = iface_activity
+                if latest_activity is None and iface_connected_at is not None:
+                    latest_activity = iface_connected_at
+                if latest_activity is None:
+                    latest_activity = now_monotonic
+
+                inactivity_elapsed = now_monotonic - latest_activity
+
+                connected_attr = getattr(iface, "isConnected", None)
+                believed_disconnected = False
+                connected_state = _connected_state(connected_attr)
+                if connected_state is None:
+                    if callable(connected_attr):
+                        try:
+                            believed_disconnected = not bool(connected_attr())
+                        except Exception:
+                            believed_disconnected = False
+                    elif connected_attr is not None:
+                        try:
+                            believed_disconnected = not bool(connected_attr)
+                        except Exception:  # pragma: no cover - defensive guard
+                            believed_disconnected = False
+                else:
+                    believed_disconnected = not connected_state
+
+                should_reconnect = believed_disconnected or (
+                    inactivity_elapsed >= inactivity_reconnect_secs
+                )
+
+                if should_reconnect:
+                    if (
+                        last_inactivity_reconnect is None
+                        or now_monotonic - last_inactivity_reconnect
+                        >= inactivity_reconnect_secs
+                    ):
+                        reason = (
+                            "disconnected"
+                            if believed_disconnected
+                            else f"no data for {inactivity_elapsed:.0f}s"
+                        )
+                        config._debug_log(
+                            "Mesh interface inactivity detected",
+                            context="daemon.interface",
+                            severity="warn",
+                            reason=reason,
+                        )
+                        last_inactivity_reconnect = now_monotonic
+                        _close_interface(iface)
+                        iface = None
+                        announced_target = False
+                        initial_snapshot_sent = False
+                        energy_session_deadline = None
+                        iface_connected_at = None
+                        continue
+
+            ingestor_announcement_sent = _process_ingestor_heartbeat(
+                iface, ingestor_announcement_sent=ingestor_announcement_sent
+            )
+
+            retry_delay = max(0.0, config._RECONNECT_INITIAL_DELAY_SECS)
+            stop.wait(config.SNAPSHOT_SECS)
     except KeyboardInterrupt:  # pragma: no cover - interactive only
         config._debug_log(
             "Received KeyboardInterrupt; shutting down",
             context="daemon.main",
            severity="info",
         )
-        state.stop.set()
+        stop.set()
     finally:
-        _close_interface(state.iface)
+        _close_interface(iface)
 
 
 __all__ = [
     "_RECEIVE_TOPICS",
-    "_advance_retry_delay",
-    "_loop_iteration",
-    "_check_energy_saving",
-    "_check_inactivity_reconnect",
-    "_connected_state",
-    "_energy_sleep",
     "_event_wait_allows_default_timeout",
-    "_is_ble_interface",
     "_node_items_snapshot",
-    "_process_ingestor_heartbeat",
-    "_subscribe_receive_topics",
-    "_try_connect",
-    "_try_send_snapshot",
+    "_subscribe_receive_topics",
+    "_is_ble_interface",
+    "_process_ingestor_heartbeat",
+    "_connected_state",
     "main",
 ]
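The reconnect backoff removed above (double the delay on each failed attempt, seed from an initial value on the first failure, cap at a maximum) can be sketched standalone; the constant names and values here are illustrative, not the project's `config` attributes:

```python
# Standalone sketch of the daemon's exponential reconnect backoff.
# Constant values are made up for illustration.
RECONNECT_INITIAL_DELAY_SECS = 1.0
RECONNECT_MAX_DELAY_SECS = 60.0


def advance_retry_delay(current: float) -> float:
    if RECONNECT_MAX_DELAY_SECS <= 0:
        return current  # backoff disabled: keep the delay unchanged
    # `current == 0` on the very first failure; seed from the initial delay.
    next_delay = current * 2 if current else RECONNECT_INITIAL_DELAY_SECS
    return min(next_delay, RECONNECT_MAX_DELAY_SECS)


delays = []
d = 0.0
for _ in range(8):
    d = advance_retry_delay(d)
    delays.append(d)
print(delays)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```

A successful connection resets the delay back to the initial value, which is why both versions of the loop reassign `retry_delay` to the configured initial delay after each healthy pass.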
@@ -1,85 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Decode Meshtastic protobuf payloads from stdin JSON."""

from __future__ import annotations

import base64
import json
import os
import sys
from typing import Any, Dict, Tuple

SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
if SCRIPT_DIR in sys.path:
    sys.path.remove(SCRIPT_DIR)

from google.protobuf.json_format import MessageToDict
from meshtastic.protobuf import mesh_pb2, telemetry_pb2

PORTNUM_MAP: Dict[int, Tuple[str, Any]] = {
    3: ("POSITION_APP", mesh_pb2.Position),
    4: ("NODEINFO_APP", mesh_pb2.NodeInfo),
    5: ("ROUTING_APP", mesh_pb2.Routing),
    67: ("TELEMETRY_APP", telemetry_pb2.Telemetry),
    70: ("TRACEROUTE_APP", mesh_pb2.RouteDiscovery),
    71: ("NEIGHBORINFO_APP", mesh_pb2.NeighborInfo),
}


def _decode_payload(portnum: int, payload_b64: str) -> dict[str, Any]:
    if portnum not in PORTNUM_MAP:
        return {"error": "unsupported-port", "portnum": portnum}
    try:
        payload_bytes = base64.b64decode(payload_b64, validate=True)
    except Exception as exc:
        return {"error": f"invalid-payload: {exc}"}

    name, message_cls = PORTNUM_MAP[portnum]
    msg = message_cls()
    try:
        msg.ParseFromString(payload_bytes)
    except Exception as exc:
        return {"error": f"decode-failed: {exc}", "portnum": portnum, "type": name}

    decoded = MessageToDict(msg, preserving_proto_field_name=True)
    return {"portnum": portnum, "type": name, "payload": decoded}


def main() -> int:
    raw = sys.stdin.read()
    try:
        request = json.loads(raw)
    except json.JSONDecodeError as exc:
        sys.stdout.write(json.dumps({"error": f"invalid-json: {exc}"}))
        return 1

    portnum = request.get("portnum")
    payload_b64 = request.get("payload_b64")

    if not isinstance(portnum, int):
        sys.stdout.write(json.dumps({"error": "missing-portnum"}))
        return 1
    if not isinstance(payload_b64, str):
        sys.stdout.write(json.dumps({"error": "missing-payload"}))
        return 1

    result = _decode_payload(portnum, payload_b64)
    sys.stdout.write(json.dumps(result))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
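The deleted decoder reads one JSON request from stdin and validates field types before touching the protobufs. A minimal sketch of that input-validation contract, reimplemented locally so it runs without the `meshtastic`/`protobuf` dependencies (the helper name is illustrative, not from the repo):

```python
import json


def validate_decode_request(raw: str) -> dict:
    """Mirror the stdin validation in the decoder's main(): parse JSON and
    require an int "portnum" and a str "payload_b64"."""
    try:
        request = json.loads(raw)
    except json.JSONDecodeError as exc:
        return {"error": f"invalid-json: {exc}"}
    if not isinstance(request.get("portnum"), int):
        return {"error": "missing-portnum"}
    if not isinstance(request.get("payload_b64"), str):
        return {"error": "missing-payload"}
    return {"ok": True, "portnum": request["portnum"]}


print(validate_decode_request('{"portnum": 67, "payload_b64": "AA=="}'))
print(validate_decode_request('{"payload_b64": "AA=="}'))  # missing portnum
```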
@@ -1,181 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Protocol-agnostic event payload types for ingestion.

The ingestor ultimately POSTs JSON to the web app's ingest routes. These types
capture the *shape* of those payloads so multiple providers can emit the same
events, regardless of how they source or decode packets.

These are intentionally defined as ``TypedDict`` so existing code can continue
to build plain dictionaries without a runtime dependency on dataclasses.
"""

from __future__ import annotations

from typing import NotRequired, TypedDict


class _MessageEventRequired(TypedDict):
    id: int
    rx_time: int
    rx_iso: str


class MessageEvent(_MessageEventRequired, total=False):
    from_id: object
    to_id: object
    channel: int
    portnum: str | None
    text: str | None
    encrypted: str | None
    snr: float | None
    rssi: int | None
    hop_limit: int | None
    reply_id: int | None
    emoji: str | None
    channel_name: str
    ingestor: str | None
    lora_freq: int
    modem_preset: str


class _PositionEventRequired(TypedDict):
    id: int
    rx_time: int
    rx_iso: str


class PositionEvent(_PositionEventRequired, total=False):
    node_id: str
    node_num: int | None
    num: int | None
    from_id: str | None
    to_id: object
    latitude: float | None
    longitude: float | None
    altitude: float | None
    position_time: int | None
    location_source: str | None
    precision_bits: int | None
    sats_in_view: int | None
    pdop: float | None
    ground_speed: float | None
    ground_track: float | None
    snr: float | None
    rssi: int | None
    hop_limit: int | None
    bitfield: int | None
    payload_b64: str | None
    raw: dict
    ingestor: str | None
    lora_freq: int
    modem_preset: str


class _TelemetryEventRequired(TypedDict):
    id: int
    rx_time: int
    rx_iso: str


class TelemetryEvent(_TelemetryEventRequired, total=False):
    node_id: str | None
    node_num: int | None
    from_id: object
    to_id: object
    telemetry_time: int | None
    channel: int
    portnum: str | None
    hop_limit: int | None
    snr: float | None
    rssi: int | None
    bitfield: int | None
    payload_b64: str
    ingestor: str | None
    lora_freq: int
    modem_preset: str

    # Metric keys are intentionally open-ended; the Ruby side is permissive and
    # evolves over time.


class _NeighborEntryRequired(TypedDict):
    rx_time: int
    rx_iso: str


class NeighborEntry(_NeighborEntryRequired, total=False):
    neighbor_id: str
    neighbor_num: int | None
    snr: float | None


class _NeighborsSnapshotRequired(TypedDict):
    node_id: str
    rx_time: int
    rx_iso: str


class NeighborsSnapshot(_NeighborsSnapshotRequired, total=False):
    node_num: int | None
    neighbors: list[NeighborEntry]
    node_broadcast_interval_secs: int | None
    last_sent_by_id: str | None
    ingestor: str | None
    lora_freq: int
    modem_preset: str


class _TraceEventRequired(TypedDict):
    hops: list[int]
    rx_time: int
    rx_iso: str


class TraceEvent(_TraceEventRequired, total=False):
    id: int | None
    request_id: int | None
    src: int | None
    dest: int | None
    rssi: int | None
    snr: float | None
    elapsed_ms: int | None
    ingestor: str | None
    lora_freq: int
    modem_preset: str


class IngestorHeartbeat(TypedDict):
    node_id: str
    start_time: int
    last_seen_time: int
    version: str
    lora_freq: NotRequired[int]
    modem_preset: NotRequired[str]


NodeUpsert = dict[str, dict]


__all__ = [
    "IngestorHeartbeat",
    "MessageEvent",
    "NeighborEntry",
    "NeighborsSnapshot",
    "NodeUpsert",
    "PositionEvent",
    "TelemetryEvent",
    "TraceEvent",
]
@@ -30,19 +30,12 @@ from pathlib import Path

from . import channels, config, queue

_IGNORED_PACKET_LOG_PATH = (
    Path(__file__).resolve().parents[2] / "ignored-meshtastic.txt"
)
"""Filesystem path that stores ignored Meshtastic packets when debugging."""
_IGNORED_PACKET_LOG_PATH = Path(__file__).resolve().parents[2] / "ignored.txt"
"""Filesystem path that stores ignored packets when debugging."""

_IGNORED_PACKET_LOCK = threading.Lock()
"""Lock guarding writes to :data:`_IGNORED_PACKET_LOG_PATH`."""

_VALID_TELEMETRY_TYPES: frozenset[str] = frozenset(
    {"device", "environment", "power", "air_quality"}
)
"""Allowed values for the ``telemetry_type`` discriminator field."""

_HOST_TELEMETRY_INTERVAL_SECS = 60 * 60
"""Minimum interval between accepted host telemetry packets."""

@@ -69,7 +62,7 @@ def _ignored_packet_default(value: object) -> object:


def _record_ignored_packet(packet: Mapping | object, *, reason: str) -> None:
    """Persist packet details to :data:`ignored-meshtastic.txt` during debugging."""
    """Persist packet details to :data:`ignored.txt` during debugging."""

    if not config.DEBUG:
        return
@@ -247,7 +240,6 @@ def upsert_node(node_id, node) -> None:
    """

    payload = _apply_radio_metadata_to_nodes(upsert_payload(node_id, node))
    payload["ingestor"] = host_node_id()
    _queue_post_json("/api/nodes", payload, priority=queue._NODE_POST_PRIORITY)

    if config.DEBUG:
@@ -432,7 +424,6 @@ def store_position_packet(packet: Mapping, decoded: Mapping) -> None:
        "hop_limit": hop_limit,
        "bitfield": bitfield,
        "payload_b64": payload_b64,
        "ingestor": host_node_id(),
    }
    if raw_payload:
        position_payload["raw"] = raw_payload
@@ -577,7 +568,6 @@ def store_traceroute_packet(packet: Mapping, decoded: Mapping) -> None:
        "rssi": rssi,
        "snr": snr,
        "elapsed_ms": elapsed_ms,
        "ingestor": host_node_id(),
    }

    _queue_post_json(
@@ -648,43 +638,6 @@ def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:

    telemetry_time = _coerce_int(_first(telemetry_section, "time", default=None))

    _dm = telemetry_section.get("deviceMetrics") or telemetry_section.get(
        "device_metrics"
    )
    _em = telemetry_section.get("environmentMetrics") or telemetry_section.get(
        "environment_metrics"
    )
    _pm = telemetry_section.get("powerMetrics") or telemetry_section.get(
        "power_metrics"
    )
    _aq = telemetry_section.get("airQualityMetrics") or telemetry_section.get(
        "air_quality_metrics"
    )
    # Priority order matters: deviceMetrics is checked first because the device
    # sub-object also carries a voltage field that overlaps with powerMetrics.
    # Meshtastic uses a protobuf oneof so only one sub-object can be populated per
    # packet; the elif chain handles any hypothetical overlap from future providers.
    if isinstance(_dm, Mapping):
        telemetry_type: str | None = "device"
    elif isinstance(_em, Mapping):
        telemetry_type = "environment"
    elif isinstance(_pm, Mapping):
        telemetry_type = "power"
    elif isinstance(_aq, Mapping):
        telemetry_type = "air_quality"
    else:
        telemetry_type = None

    if telemetry_type is not None and telemetry_type not in _VALID_TELEMETRY_TYPES:
        config._debug_log(
            "Unexpected telemetry_type value; dropping field",
            context="handlers.store_telemetry",
            severity="warning",
            always=True,
            telemetry_type=telemetry_type,
        )
        telemetry_type = None

    channel = _coerce_int(_first(decoded, "channel", default=None))
    if channel is None:
        channel = _coerce_int(_first(packet, "channel", default=None))
@@ -982,7 +935,6 @@ def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
        "rssi": rssi,
        "hop_limit": hop_limit,
        "payload_b64": payload_b64,
        "ingestor": host_node_id(),
    }

    if battery_level is not None:
@@ -1037,8 +989,6 @@ def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
        telemetry_payload["soil_moisture"] = soil_moisture
    if soil_temperature is not None:
        telemetry_payload["soil_temperature"] = soil_temperature
    if telemetry_type is not None:
        telemetry_payload["telemetry_type"] = telemetry_type

    _queue_post_json(
        "/api/telemetry",
@@ -1056,43 +1006,6 @@ def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
    )


def store_router_heartbeat_packet(packet: Mapping) -> None:
    """Persist a STORE_FORWARD_APP ``ROUTER_HEARTBEAT`` as a node presence update.

    The heartbeat carries no message payload — the only actionable signal is
    that the store-and-forward router is alive at the observed ``rx_time``.
    All other fields are left untouched so the router's existing profile is
    not overwritten.

    Parameters:
        packet: Raw packet metadata.

    Returns:
        ``None``. A minimal node upsert is enqueued at low priority.
    """

    node_id = _canonical_node_id(
        _first(packet, "fromId", "from_id", "from", default=None)
    )
    if node_id is None:
        return

    rx_time = int(_first(packet, "rxTime", "rx_time", default=time.time()))

    node_payload: dict = {"lastHeard": rx_time}
    nodes_payload = _apply_radio_metadata_to_nodes({node_id: node_payload})
    nodes_payload["ingestor"] = host_node_id()
    _queue_post_json("/api/nodes", nodes_payload, priority=queue._DEFAULT_POST_PRIORITY)

    if config.DEBUG:
        config._debug_log(
            "Queued router heartbeat node upsert",
            context="handlers.store_router_heartbeat",
            node_id=node_id,
            rx_time=rx_time,
        )


def store_nodeinfo_packet(packet: Mapping, decoded: Mapping) -> None:
    """Persist node information updates.

@@ -1241,11 +1154,9 @@ def store_nodeinfo_packet(packet: Mapping, decoded: Mapping) -> None:
    except (TypeError, ValueError):
        pass

    nodes_payload = _apply_radio_metadata_to_nodes({node_id: node_payload})
    nodes_payload["ingestor"] = host_node_id()
    _queue_post_json(
        "/api/nodes",
        nodes_payload,
        _apply_radio_metadata_to_nodes({node_id: node_payload}),
        priority=queue._NODE_POST_PRIORITY,
    )

@@ -1352,7 +1263,6 @@ def store_neighborinfo_packet(packet: Mapping, decoded: Mapping) -> None:
        "neighbors": neighbor_entries,
        "rx_time": rx_time,
        "rx_iso": _iso(rx_time),
        "ingestor": host_node_id(),
    }

    if node_broadcast_interval is not None:
@@ -1430,23 +1340,6 @@ def store_packet_dict(packet: Mapping) -> None:
        store_neighborinfo_packet(packet, decoded)
        return

    store_forward_port_candidates = _portnum_candidates("STORE_FORWARD_APP")
    store_forward_section = (
        decoded.get("storeforward") if isinstance(decoded, Mapping) else None
    )
    if portnum == "STORE_FORWARD_APP" or (
        portnum_int is not None and portnum_int in store_forward_port_candidates
    ):
        if not isinstance(store_forward_section, Mapping):
            _record_ignored_packet(packet, reason="unsupported-store-forward")
            return
        rr = str(store_forward_section.get("rr") or "").upper()
        if rr == "ROUTER_HEARTBEAT":
            store_router_heartbeat_packet(packet)
            return
        _record_ignored_packet(packet, reason="unsupported-store-forward-rr")
        return

    text = _first(decoded, "payload.text", "text", "data.text", default=None)
    encrypted = _first(decoded, "payload.encrypted", "encrypted", default=None)
    if encrypted is None:
@@ -1627,7 +1520,6 @@ def store_packet_dict(packet: Mapping) -> None:
        "hop_limit": int(hop) if hop is not None else None,
        "reply_id": reply_id,
        "emoji": emoji,
        "ingestor": host_node_id(),
    }

    if not encrypted_flag and channel_name_value:
@@ -1718,7 +1610,6 @@ __all__ = [
    "store_nodeinfo_packet",
    "store_packet_dict",
    "store_position_packet",
    "store_router_heartbeat_packet",
    "store_telemetry_packet",
    "upsert_node",
]

@@ -113,7 +113,6 @@ def queue_ingestor_heartbeat(
        "start_time": STATE.start_time,
        "last_seen_time": now,
        "version": INGESTOR_VERSION,
        "protocol": getattr(config, "PROVIDER", "meshtastic") or "meshtastic",
    }
    if getattr(config, "LORA_FREQ", None) is not None:
        payload["lora_freq"] = config.LORA_FREQ

@@ -17,6 +17,7 @@
from __future__ import annotations

import contextlib
import glob
import importlib
import ipaddress
import math
@@ -32,13 +33,6 @@ except Exception:  # pragma: no cover - dependency optional in tests
    meshtastic = None  # type: ignore[assignment]

from . import channels, config, serialization
from .connection import (
    BLE_ADDRESS_RE,
    DEFAULT_TCP_PORT,
    DEFAULT_SERIAL_PATTERNS,
    default_serial_targets,
    parse_ble_target,
)


def _ensure_mapping(value) -> Mapping | None:
@@ -622,13 +616,25 @@ def _ensure_channel_metadata(iface: Any) -> None:
    )


_DEFAULT_TCP_PORT = 4403
_DEFAULT_TCP_TARGET = "http://127.0.0.1"

# Private aliases so that existing internal callers and monkeypatching in
# tests keep working without modification.
_DEFAULT_TCP_PORT = DEFAULT_TCP_PORT  # backward-compat alias
_DEFAULT_SERIAL_PATTERNS = DEFAULT_SERIAL_PATTERNS  # backward-compat alias
_BLE_ADDRESS_RE = BLE_ADDRESS_RE  # backward-compat alias
_DEFAULT_SERIAL_PATTERNS = (
    "/dev/ttyACM*",
    "/dev/ttyUSB*",
    "/dev/tty.usbmodem*",
    "/dev/tty.usbserial*",
    "/dev/cu.usbmodem*",
    "/dev/cu.usbserial*",
)

# Support both MAC addresses (Linux/Windows) and UUIDs (macOS)
_BLE_ADDRESS_RE = re.compile(
    r"^(?:"
    r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}|"  # MAC address format
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"  # UUID format
    r")$"
)


class _DummySerialInterface:
@@ -641,7 +647,24 @@ class _DummySerialInterface:
        pass


_parse_ble_target = parse_ble_target  # backward-compat alias
def _parse_ble_target(value: str) -> str | None:
    """Return a normalized BLE address (MAC or UUID) when ``value`` matches the format.

    Parameters:
        value: User-provided target string.

    Returns:
        The normalised MAC address or UUID, or ``None`` when validation fails.
    """

    if not value:
        return None
    value = value.strip()
    if not value:
        return None
    if _BLE_ADDRESS_RE.fullmatch(value):
        return value.upper()
    return None


def _parse_network_target(value: str) -> tuple[str, int] | None:
@@ -789,7 +812,19 @@ class NoAvailableMeshInterface(RuntimeError):
    """Raised when no default mesh interface can be created."""


_default_serial_targets = default_serial_targets  # backward-compat alias
def _default_serial_targets() -> list[str]:
    """Return candidate serial device paths for auto-discovery."""

    candidates: list[str] = []
    seen: set[str] = set()
    for pattern in _DEFAULT_SERIAL_PATTERNS:
        for path in sorted(glob.glob(pattern)):
            if path not in seen:
                candidates.append(path)
                seen.add(path)
    if "/dev/ttyACM0" not in seen:
        candidates.append("/dev/ttyACM0")
    return candidates


def _create_default_interface() -> tuple[object, str]:

@@ -1,115 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Node identity helpers shared across ingestor providers.

The web application keys nodes by a canonical textual identifier of the form
``!%08x`` (lowercase hex). Both the Python collector and Ruby server accept
several input forms (ints, ``0x`` hex strings, ``!`` hex strings, decimal
strings). This module centralizes that normalization.
"""

from __future__ import annotations

from typing import Final

CANONICAL_PREFIX: Final[str] = "!"


def canonical_node_id(value: object) -> str | None:
    """Convert ``value`` into canonical ``!xxxxxxxx`` form.

    Parameters:
        value: Node reference which may be an int, float, or string.

    Returns:
        Canonical node id string or ``None`` when parsing fails.
    """

    if value is None:
        return None
    if isinstance(value, (int, float)):
        try:
            num = int(value)
        except (TypeError, ValueError):
            return None
        if num < 0:
            return None
        return f"{CANONICAL_PREFIX}{num & 0xFFFFFFFF:08x}"
    if not isinstance(value, str):
        return None

    trimmed = value.strip()
    if not trimmed:
        return None
    if trimmed.startswith("^"):
        # Meshtastic special destinations like "^all" are not node ids; callers
        # that already accept them should keep passing them through unchanged.
        return trimmed
    if trimmed.startswith(CANONICAL_PREFIX):
        body = trimmed[1:]
    elif trimmed.lower().startswith("0x"):
        body = trimmed[2:]
    elif trimmed.isdigit():
        try:
            return f"{CANONICAL_PREFIX}{int(trimmed, 10) & 0xFFFFFFFF:08x}"
        except ValueError:
            return None
    else:
        body = trimmed

    if not body:
        return None
    try:
        return f"{CANONICAL_PREFIX}{int(body, 16) & 0xFFFFFFFF:08x}"
    except ValueError:
        return None


def node_num_from_id(node_id: object) -> int | None:
    """Extract the numeric node identifier from a canonical (or near-canonical) id."""

    if node_id is None:
        return None
    if isinstance(node_id, (int, float)):
        try:
            num = int(node_id)
        except (TypeError, ValueError):
            return None
        return num if num >= 0 else None
    if not isinstance(node_id, str):
        return None

    trimmed = node_id.strip()
    if not trimmed:
        return None
    if trimmed.startswith(CANONICAL_PREFIX):
        trimmed = trimmed[1:]
    if trimmed.lower().startswith("0x"):
        trimmed = trimmed[2:]
    try:
        return int(trimmed, 16)
    except ValueError:
        try:
            return int(trimmed, 10)
        except ValueError:
            return None


__all__ = [
    "CANONICAL_PREFIX",
    "canonical_node_id",
    "node_num_from_id",
]
@@ -1,55 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Provider interface for ingestion sources.

Today the repo ships a Meshtastic provider only. This module defines the seam so
future providers (MeshCore, Reticulum, ...) can be added without changing the
web app ingest contract.
"""

from __future__ import annotations

from collections.abc import Iterable
from typing import Protocol, runtime_checkable


@runtime_checkable
class Provider(Protocol):
    """Abstract source of mesh observations."""

    name: str

    def subscribe(self) -> list[str]:
        """Subscribe to any async receive callbacks and return topic names."""

    def connect(
        self, *, active_candidate: str | None
    ) -> tuple[object, str | None, str | None]:
        """Create an interface connection.

        Returns:
            (iface, resolved_target, next_active_candidate)
        """

    def extract_host_node_id(self, iface: object) -> str | None:
        """Best-effort extraction of the connected host node id."""

    def node_snapshot_items(self, iface: object) -> Iterable[tuple[str, object]]:
        """Return iterable of (node_id, node_obj) for initial snapshot."""


__all__ = [
    "Provider",
]
@@ -1,39 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Provider implementations.

This package contains protocol-specific provider implementations (Meshtastic,
MeshCore, and others in the future).
"""

from __future__ import annotations

from .meshtastic import MeshtasticProvider


def __getattr__(name: str) -> object:
    """Lazy-load provider classes that carry optional heavy dependencies.

    ``MeshcoreProvider`` is imported on demand so that the MeshCore library
    (once wired in) is not loaded at startup when ``PROVIDER=meshtastic``.
    """
    if name == "MeshcoreProvider":
        from .meshcore import MeshcoreProvider

        return MeshcoreProvider
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")


__all__ = ["MeshtasticProvider", "MeshcoreProvider"]
@@ -1,789 +0,0 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""MeshCore provider implementation.
|
||||
|
||||
This module defines :class:`MeshcoreProvider`, which satisfies the
|
||||
:class:`~data.mesh_ingestor.provider.Provider` protocol for MeshCore nodes
|
||||
connected via serial port, BLE, or TCP/IP.
|
||||
|
||||
The provider runs MeshCore's ``asyncio`` event loop in a background daemon
|
||||
thread so that incoming events are dispatched without blocking the
|
||||
synchronous daemon loop. Received contacts, channel messages, and direct
|
||||
messages are forwarded to the shared HTTP ingest queue via the same
|
||||
:mod:`~data.mesh_ingestor.handlers` helpers used by the Meshtastic provider.
|
||||
|
||||
Connection type is detected automatically from the target string:
|
||||
|
||||
* **BLE** — MAC address (``AA:BB:CC:DD:EE:FF``) or UUID (macOS format).
|
||||
* **TCP** — ``host:port`` or ``[ipv6]:port`` (accepts hostnames).
|
||||
* **Serial** — any other non-empty string (e.g. ``/dev/ttyUSB0``).
|
||||
* **Auto** — ``None`` or empty: tries serial candidates from
|
||||
:func:`~data.mesh_ingestor.connection.default_serial_targets`.
|
||||
|
||||
Node identities are derived from the first four bytes (eight hex characters)
|
||||
of each contact's 32-byte public key, formatted as ``!xxxxxxxx`` to match
|
||||
the canonical node-ID schema used across the system.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import base64
|
||||
import hashlib
|
||||
import json
|
||||
import threading
|
||||
import time
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
|
||||
from .. import config
|
||||
from ..connection import default_serial_targets, parse_ble_target, parse_tcp_target
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Debug log file
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
_IGNORED_MESSAGE_LOG_PATH = Path(__file__).resolve().parents[3] / "ignored-meshcore.txt"
|
||||
"""Filesystem path that stores raw MeshCore messages when ``DEBUG=1``."""
|
||||
|
||||
_IGNORED_MESSAGE_LOCK = threading.Lock()
|
||||
"""Lock guarding writes to :data:`_IGNORED_MESSAGE_LOG_PATH`."""
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Connection constants
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
_CONNECT_TIMEOUT_SECS: float = 30.0
|
||||
"""Seconds to wait for the MeshCore node to respond to the appstart handshake."""
|
||||
|
||||
_DEFAULT_BAUDRATE: int = 115200
|
||||
"""Default baud rate for MeshCore serial connections."""
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _derive_message_id(sender_ts: int, discriminator: str, text: str) -> int:
    """Derive a stable 32-bit message ID from available MeshCore fields.

    MeshCore does not assign firmware-side packet IDs. This function
    produces a deterministic 32-bit integer so that re-delivered messages
    resolve to the same database row via the UPSERT ON CONFLICT path, while
    messages that differ in timestamp, channel/peer, or text content produce
    distinct IDs.

    Parameters:
        sender_ts: Unix timestamp from the sender's clock.
        discriminator: Channel index (``"c<N>"`` for channel messages) or
            pubkey prefix (for direct messages) to separate messages with
            the same timestamp.
        text: Message text.

    Returns:
        A non-negative 32-bit integer suitable for the ``id`` column.
    """
    data = f"{sender_ts}:{discriminator}:{text}".encode("utf-8", errors="replace")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

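As a standalone sketch (re-implemented here for illustration; `derive_message_id` is a hypothetical local name mirroring the function above), the derivation is deterministic in its three inputs:

```python
import hashlib

def derive_message_id(sender_ts: int, discriminator: str, text: str) -> int:
    # Hash "<ts>:<discriminator>:<text>" and keep the first four bytes as a
    # big-endian unsigned 32-bit integer.
    data = f"{sender_ts}:{discriminator}:{text}".encode("utf-8", errors="replace")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

a = derive_message_id(1700000000, "c0", "hello")
b = derive_message_id(1700000000, "c0", "hello")  # re-delivery: same ID
c = derive_message_id(1700000000, "c1", "hello")  # different channel: new ID
```

Re-delivered messages hash to the same row ID, so the UPSERT path collapses duplicates while distinct channel/peer/text combinations stay separate.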
def _meshcore_node_id(public_key_hex: str | None) -> str | None:
    """Derive a canonical ``!xxxxxxxx`` node ID from a MeshCore public key.

    Uses the first four bytes (eight hex characters) of the 32-byte public
    key, formatted as ``!xxxxxxxx``.

    Parameters:
        public_key_hex: 64-character lowercase hex string for the node's
            public key as returned by the MeshCore library.

    Returns:
        Canonical ``!xxxxxxxx`` node ID string, or ``None`` when the key is
        absent or too short.
    """
    if not public_key_hex or len(public_key_hex) < 8:
        return None
    return "!" + public_key_hex[:8].lower()


def _pubkey_prefix_to_node_id(contacts: dict, pubkey_prefix: str) -> str | None:
    """Look up a canonical node ID by six-byte public-key prefix.

    Parameters:
        contacts: Mapping of full ``public_key`` hex strings to contact dicts.
        pubkey_prefix: Twelve-character hex string (six bytes) as used in
            MeshCore direct-message events.

    Returns:
        Canonical ``!xxxxxxxx`` node ID for the first matching contact, or
        ``None`` when no contact's public key starts with *pubkey_prefix*.
    """
    for pub_key in contacts:
        if pub_key.startswith(pubkey_prefix):
            return _meshcore_node_id(pub_key)
    return None

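A minimal sketch of the prefix lookup (hypothetical standalone name `meshcore_node_id` and a sample key, mirroring the two helpers above):

```python
def meshcore_node_id(public_key_hex):
    # First four bytes (eight hex chars) of the public key, "!"-prefixed.
    if not public_key_hex or len(public_key_hex) < 8:
        return None
    return "!" + public_key_hex[:8].lower()

# A 64-char hex public key (sample value) and the 12-char prefix a
# direct-message event would carry.
contacts = {"a1b2c3d4e5f60718293a4b5c6d7e8f90" + "0" * 32: {"adv_name": "Alpha"}}
prefix = "a1b2c3d4e5f6"
match = next(
    (meshcore_node_id(k) for k in contacts if k.startswith(prefix)), None
)
```

The first matching contact wins, which is safe because six bytes of key prefix make accidental collisions on a single mesh vanishingly unlikely.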
def _contact_to_node_dict(contact: dict) -> dict:
    """Convert a MeshCore contact dict to a Meshtastic-ish node dict.

    Parameters:
        contact: Contact dict from the MeshCore library. Expected keys
            include ``public_key``, ``adv_name``, ``last_advert``,
            ``adv_lat``, and ``adv_lon``.

    Returns:
        Node dict compatible with the ``POST /api/nodes`` payload format.
    """
    pub_key = contact.get("public_key", "")
    name = (contact.get("adv_name") or "").strip()
    node: dict = {
        "lastHeard": contact.get("last_advert"),
        "user": {
            "longName": name,
            "shortName": name[:4] if name else "",
            "publicKey": pub_key,
        },
    }
    lat = contact.get("adv_lat")
    lon = contact.get("adv_lon")
    if lat is not None and lon is not None and (lat or lon):
        node["position"] = {"latitude": lat, "longitude": lon}
    return node


def _self_info_to_node_dict(self_info: dict) -> dict:
    """Convert a MeshCore ``SELF_INFO`` payload to a Meshtastic-ish node dict.

    Parameters:
        self_info: Payload dict from the ``SELF_INFO`` event. Expected keys
            include ``name``, ``public_key``, ``adv_lat``, and ``adv_lon``.

    Returns:
        Node dict compatible with the ``POST /api/nodes`` payload format.
    """
    name = (self_info.get("name") or "").strip()
    pub_key = self_info.get("public_key", "")
    node: dict = {
        "lastHeard": int(time.time()),
        "user": {
            "longName": name,
            "shortName": name[:4] if name else "",
            "publicKey": pub_key,
        },
    }
    lat = self_info.get("adv_lat")
    lon = self_info.get("adv_lon")
    if lat is not None and lon is not None and (lat or lon):
        node["position"] = {"latitude": lat, "longitude": lon}
    return node

def _to_json_safe(value: object) -> object:
    """Recursively convert *value* to a JSON-serialisable form.

    Handles the common types present in mesh protocol messages: dicts, lists,
    bytes (base64-encoded), and primitives. Anything else is coerced via
    ``str()``.
    """
    if isinstance(value, dict):
        return {str(k): _to_json_safe(v) for k, v in value.items()}
    if isinstance(value, (list, tuple, set)):
        return [_to_json_safe(v) for v in value]
    if isinstance(value, bytes):
        return base64.b64encode(value).decode("ascii")
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value
    return str(value)

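The conversion can be sketched standalone (hypothetical `to_json_safe`, same branch order as the helper above):

```python
import base64
import json

def to_json_safe(value):
    # Recurse into containers, base64-encode bytes, pass primitives through,
    # and fall back to str() for anything else.
    if isinstance(value, dict):
        return {str(k): to_json_safe(v) for k, v in value.items()}
    if isinstance(value, (list, tuple, set)):
        return [to_json_safe(v) for v in value]
    if isinstance(value, bytes):
        return base64.b64encode(value).decode("ascii")
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value
    return str(value)

safe = to_json_safe({"raw": b"\x01\x02", "nested": [1, (2, 3)]})
dumped = json.dumps(safe, sort_keys=True)  # now serialisable
```

Tuples and sets come back as lists, which is the only shape JSON can express for them.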
def _record_meshcore_message(message: object, *, source: str) -> None:
    """Persist a MeshCore message to ``ignored-meshcore.txt`` when ``DEBUG=1``.

    When ``DEBUG`` is not set the function returns immediately without any
    I/O so that production deployments are not burdened by file writes.

    Parameters:
        message: The raw message object received from the MeshCore node.
        source: A short label describing where the message originated (e.g.
            a serial port path or BLE address).
    """
    if not config.DEBUG:
        return

    timestamp = datetime.now(timezone.utc).isoformat()
    entry = {
        "message": _to_json_safe(message),
        "source": source,
        "timestamp": timestamp,
    }
    payload = json.dumps(entry, ensure_ascii=False, sort_keys=True)
    with _IGNORED_MESSAGE_LOCK:
        _IGNORED_MESSAGE_LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
        with _IGNORED_MESSAGE_LOG_PATH.open("a", encoding="utf-8") as fh:
            fh.write(f"{payload}\n")


# ---------------------------------------------------------------------------
# Interface
# ---------------------------------------------------------------------------


class _MeshcoreInterface:
    """Live MeshCore interface managing an asyncio event loop in a background thread.

    Holds connection state, a thread-safe snapshot of known contacts, and the
    handles needed to shut down cleanly when the daemon requests a disconnect.
    """

    host_node_id: str | None = None
    """Canonical ``!xxxxxxxx`` identifier for the connected host device."""

    def __init__(self, *, target: str | None) -> None:
        """Initialise the interface with the connection *target*."""
        self._target = target
        self._mc: object | None = None
        self._loop: asyncio.AbstractEventLoop | None = None
        self._thread: threading.Thread | None = None
        self._stop_event: asyncio.Event | None = None
        self._contacts_lock = threading.Lock()
        self._contacts: dict = {}
        self.isConnected: bool = False

    # ------------------------------------------------------------------
    # Contact management (called from the asyncio thread)
    # ------------------------------------------------------------------

    def _update_contact(self, contact: dict) -> None:
        """Thread-safely add or update a contact in the local snapshot.

        Parameters:
            contact: Contact dict from a ``CONTACTS``, ``NEW_CONTACT``, or
                ``NEXT_CONTACT`` event.
        """
        pub_key = contact.get("public_key")
        if pub_key:
            with self._contacts_lock:
                self._contacts[pub_key] = contact

    def contacts_snapshot(self) -> list[tuple[str, dict]]:
        """Return a thread-safe snapshot of all known contacts as node entries.

        Returns:
            List of ``(canonical_node_id, node_dict)`` pairs, skipping any
            contact whose public key cannot be mapped to a valid node ID.
        """
        with self._contacts_lock:
            items = list(self._contacts.items())
        result = []
        for pub_key, contact in items:
            node_id = _meshcore_node_id(pub_key)
            if node_id is not None:
                result.append((node_id, _contact_to_node_dict(contact)))
        return result

    def lookup_node_id(self, pubkey_prefix: str) -> str | None:
        """Return the canonical node ID for the contact matching *pubkey_prefix*.

        Parameters:
            pubkey_prefix: Twelve-character hex string (six bytes) from a
                ``CONTACT_MSG_RECV`` event.

        Returns:
            Canonical ``!xxxxxxxx`` node ID, or ``None`` when no match.
        """
        with self._contacts_lock:
            return _pubkey_prefix_to_node_id(self._contacts, pubkey_prefix)

    # ------------------------------------------------------------------
    # Lifecycle
    # ------------------------------------------------------------------

    def close(self) -> None:
        """Signal the background event loop to stop and wait for the thread.

        Safe to call multiple times and from any thread.
        """
        self.isConnected = False
        loop = self._loop
        stop_event = self._stop_event
        if loop is not None and not loop.is_closed():
            try:
                if stop_event is not None:
                    loop.call_soon_threadsafe(stop_event.set)
                else:
                    loop.call_soon_threadsafe(loop.stop)
            except RuntimeError:
                pass
        thread = self._thread
        if thread is not None and thread.is_alive():
            thread.join(timeout=5.0)

# ---------------------------------------------------------------------------
# Handler logic helpers (module-level to keep _make_event_handlers lean)
# ---------------------------------------------------------------------------


def _process_self_info(
    payload: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
    """Apply a ``SELF_INFO`` payload: set host_node_id and upsert the host node.

    Parameters:
        payload: Event payload dict containing at minimum ``public_key`` and
            optionally ``name``, ``adv_lat``, ``adv_lon``.
        iface: Active interface whose :attr:`host_node_id` will be updated.
        handlers: Module reference for :func:`~data.mesh_ingestor.handlers`
            functions (passed to avoid circular-import issues).
    """
    pub_key = payload.get("public_key", "")
    node_id = _meshcore_node_id(pub_key)
    if node_id:
        iface.host_node_id = node_id
        handlers.register_host_node_id(node_id)
        handlers.upsert_node(node_id, _self_info_to_node_dict(payload))
        handlers._mark_packet_seen()
        config._debug_log(
            "MeshCore self-info received",
            context="meshcore.self_info",
            node_id=node_id,
            name=payload.get("name"),
        )


def _process_contacts(
    contacts: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
    """Apply a bulk ``CONTACTS`` payload: update the local snapshot and upsert nodes.

    Parameters:
        contacts: Mapping of full ``public_key`` hex strings to contact dicts.
        iface: Active interface whose contact snapshot will be updated.
        handlers: Module reference for :func:`~data.mesh_ingestor.handlers`.
    """
    for pub_key, contact in contacts.items():
        node_id = _meshcore_node_id(pub_key)
        if node_id is None:
            continue
        iface._update_contact(contact)
        handlers.upsert_node(node_id, _contact_to_node_dict(contact))
    handlers._mark_packet_seen()


def _process_contact_update(
    contact: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
    """Apply a single ``NEW_CONTACT`` or ``NEXT_CONTACT`` event.

    Parameters:
        contact: Contact dict containing at minimum ``public_key``.
        iface: Active interface whose contact snapshot will be updated.
        handlers: Module reference for :func:`~data.mesh_ingestor.handlers`.
    """
    pub_key = contact.get("public_key", "")
    node_id = _meshcore_node_id(pub_key)
    if node_id is None:
        return
    iface._update_contact(contact)
    handlers.upsert_node(node_id, _contact_to_node_dict(contact))
    handlers._mark_packet_seen()
    config._debug_log(
        "MeshCore contact updated",
        context="meshcore.contact",
        node_id=node_id,
        name=contact.get("adv_name"),
    )


# ---------------------------------------------------------------------------
# Async event handlers
# ---------------------------------------------------------------------------


def _make_event_handlers(iface: _MeshcoreInterface, target: str | None) -> dict:
    """Build async callbacks for each relevant MeshCore event type.

    All callbacks are closures over *iface* and *target* so they can update
    connection state and forward data to the ingest queue without global state.

    Parameters:
        iface: The active :class:`_MeshcoreInterface` instance.
        target: Human-readable connection target for log messages.

    Returns:
        Mapping of ``EventType`` member name → async callback coroutine.
    """
    # Deferred import to avoid a circular dependency: meshcore.py is imported by
    # providers/__init__.py which is imported by the top-level mesh_ingestor
    # package, while handlers.py imports from that same package.
    from .. import handlers as _handlers

    async def on_self_info(evt) -> None:
        _process_self_info(evt.payload or {}, iface, _handlers)

    async def on_contacts(evt) -> None:
        _process_contacts(evt.payload or {}, iface, _handlers)

    async def on_contact_update(evt) -> None:
        _process_contact_update(evt.payload or {}, iface, _handlers)

    async def on_channel_msg(evt) -> None:
        payload = evt.payload or {}
        sender_ts = payload.get("sender_timestamp")
        text = payload.get("text")
        if sender_ts is None or not text:
            return

        rx_time = int(time.time())
        channel_idx = payload.get("channel_idx", 0)

        packet = {
            "id": _derive_message_id(sender_ts, f"c{channel_idx}", text),
            "rxTime": rx_time,
            "rx_time": rx_time,
            "from_id": None,
            "to_id": "^all",
            "channel": channel_idx,
            "snr": payload.get("SNR"),
            "rssi": payload.get("RSSI"),
            "protocol": "meshcore",
            "decoded": {
                "portnum": "TEXT_MESSAGE_APP",
                "text": text,
                "channel": channel_idx,
            },
        }
        _handlers._mark_packet_seen()
        _handlers.store_packet_dict(packet)
        config._debug_log(
            "MeshCore channel message",
            context="meshcore.channel_msg",
            channel=channel_idx,
        )

    async def on_contact_msg(evt) -> None:
        payload = evt.payload or {}
        sender_ts = payload.get("sender_timestamp")
        text = payload.get("text")
        if sender_ts is None or not text:
            return

        rx_time = int(time.time())
        pubkey_prefix = payload.get("pubkey_prefix", "")
        from_id = iface.lookup_node_id(pubkey_prefix)

        packet = {
            "id": _derive_message_id(sender_ts, pubkey_prefix or "", text),
            "rxTime": rx_time,
            "rx_time": rx_time,
            "from_id": from_id,
            "to_id": iface.host_node_id,
            "channel": 0,
            "snr": payload.get("SNR"),
            "protocol": "meshcore",
            "decoded": {
                "portnum": "TEXT_MESSAGE_APP",
                "text": text,
                "channel": 0,
            },
        }
        _handlers._mark_packet_seen()
        _handlers.store_packet_dict(packet)

    async def on_disconnected(evt) -> None:
        iface.isConnected = False
        config._debug_log(
            "MeshCore node disconnected",
            context="meshcore.disconnect",
            target=target or "unknown",
            severity="warning",
            always=True,
        )

    return {
        "SELF_INFO": on_self_info,
        "CONTACTS": on_contacts,
        "NEW_CONTACT": on_contact_update,
        "NEXT_CONTACT": on_contact_update,
        "CHANNEL_MSG_RECV": on_channel_msg,
        "CONTACT_MSG_RECV": on_contact_msg,
        "DISCONNECTED": on_disconnected,
    }

# ---------------------------------------------------------------------------
# Asyncio entry point (runs inside background thread)
# ---------------------------------------------------------------------------


def _make_connection(target: str, baudrate: int) -> object:
    """Create the appropriate MeshCore connection object for *target*.

    Routes to the correct ``meshcore`` connection class based on the target
    string format:

    * BLE MAC / UUID → :class:`meshcore.BLEConnection`
    * ``host:port`` / ``[ipv6]:port`` → :class:`meshcore.TCPConnection`
    * anything else → :class:`meshcore.SerialConnection`

    Parameters:
        target: Resolved, non-empty connection target.
        baudrate: Baud rate for serial connections (ignored for BLE/TCP).

    Returns:
        An unconnected ``meshcore`` connection object.
    """
    from meshcore import BLEConnection, SerialConnection, TCPConnection

    ble_addr = parse_ble_target(target)
    if ble_addr:
        return BLEConnection(address=ble_addr)

    tcp_target = parse_tcp_target(target)
    if tcp_target:
        host, port = tcp_target
        return TCPConnection(host, port)

    return SerialConnection(target, baudrate)

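The routing contract can be sketched with a hypothetical classifier (`classify_target` is illustrative; the real parsing lives in `parse_ble_target` / `parse_tcp_target`):

```python
import re

def classify_target(target: str) -> str:
    # BLE MAC address → ble; host:port or [ipv6]:port → tcp; anything else
    # is treated as a serial device path.
    if re.fullmatch(r"(?i)([0-9a-f]{2}:){5}[0-9a-f]{2}", target):
        return "ble"
    if re.fullmatch(r"[^:\s]+:\d+", target) or re.fullmatch(
        r"\[[0-9a-fA-F:]+\]:\d+", target
    ):
        return "tcp"
    return "serial"
```

Checking BLE before TCP matters: a MAC address contains colons, so it must be claimed before the `host:port` test runs.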
async def _run_meshcore(
    iface: _MeshcoreInterface,
    target: str,
    connected_event: threading.Event,
    error_holder: list,
) -> None:
    """Connect to a MeshCore node and keep the event loop running until closed.

    This coroutine is the single entry point for the background asyncio thread.
    It connects the MeshCore library, registers event handlers, fetches the
    initial contact list, starts auto-message polling, and then waits for the
    :attr:`_MeshcoreInterface._stop_event` to be set.

    Parameters:
        iface: Shared interface object for state and contact tracking.
        target: Resolved, non-empty connection target (serial, BLE, or TCP).
        connected_event: Threading event signalled when the connection
            succeeds or fails, to unblock the calling ``connect()`` method.
        error_holder: Single-element list; set to the raised exception when
            the connection attempt fails so the caller can re-raise it.
    """
    from meshcore import EventType, MeshCore

    mc: MeshCore | None = None
    try:
        cx = _make_connection(target, _DEFAULT_BAUDRATE)
        mc = MeshCore(cx)
        iface._mc = mc

        handlers_map = _make_event_handlers(iface, target)
        for event_name, callback in handlers_map.items():
            mc.subscribe(EventType[event_name], callback)

        _handled_types = frozenset(EventType[n] for n in handlers_map)
        # Bookkeeping events that require no action and should not be logged.
        _silent_types = frozenset(
            {
                EventType.CONNECTED,
                EventType.ACK,
                EventType.OK,
                EventType.ERROR,
                EventType.NO_MORE_MSGS,
                EventType.MESSAGES_WAITING,
                EventType.MSG_SENT,
                EventType.CURRENT_TIME,
            }
        )

        async def _on_unhandled(evt) -> None:
            if evt.type in _handled_types or evt.type in _silent_types:
                return
            _record_meshcore_message(
                evt.payload,
                source=f"{target or 'auto'}:{evt.type.name}",
            )

        mc.subscribe(None, _on_unhandled)

        result = await mc.connect()
        if result is None:
            raise ConnectionError(
                f"MeshCore node at {target!r} did not respond to the appstart "
                "handshake. Ensure the device is running MeshCore companion-mode "
                "firmware."
            )

        iface.isConnected = True
        connected_event.set()

        try:
            await mc.ensure_contacts()
        except Exception as exc:
            config._debug_log(
                "Failed to fetch initial contacts",
                context="meshcore.contacts",
                severity="warning",
                always=True,
                error=str(exc),
            )

        await mc.start_auto_message_fetching()

        stop_event = asyncio.Event()
        iface._stop_event = stop_event
        await stop_event.wait()

    except Exception as exc:
        if not connected_event.is_set():
            error_holder[0] = exc
        connected_event.set()
    finally:
        if mc is not None:
            try:
                await mc.disconnect()
            except Exception:
                pass

# ---------------------------------------------------------------------------
# Provider
# ---------------------------------------------------------------------------


class MeshcoreProvider:
    """MeshCore ingestion provider.

    Connects to a MeshCore node via serial port, BLE, or TCP/IP. The
    connection type is inferred from the target string; see :meth:`connect`
    for routing rules.

    The provider runs MeshCore's ``asyncio`` event loop in a background daemon
    thread. Incoming ``SELF_INFO``, ``CONTACTS``, ``NEW_CONTACT``,
    ``CHANNEL_MSG_RECV``, and ``CONTACT_MSG_RECV`` events are forwarded to the
    HTTP ingest queue via the shared handler functions.
    """

    name = "meshcore"

    def subscribe(self) -> list[str]:
        """Return subscribed topic names.

        MeshCore uses an ``asyncio`` event system rather than a pubsub bus,
        so there are no topics to register at startup.
        """
        return []

    def connect(
        self, *, active_candidate: str | None
    ) -> tuple[object, str | None, str | None]:
        """Connect to a MeshCore node via serial, BLE, or TCP.

        Starts an asyncio event loop in a background daemon thread, performs
        the MeshCore companion-protocol handshake, and blocks until the node's
        self-info is received or the timeout expires.

        Connection type is inferred from *active_candidate* (or
        :data:`~data.mesh_ingestor.config.CONNECTION`):

        * BLE MAC / UUID → :class:`meshcore.BLEConnection`
        * ``host:port`` → :class:`meshcore.TCPConnection`
        * serial path → :class:`meshcore.SerialConnection`
        * ``None`` / empty → first candidate from
          :func:`~data.mesh_ingestor.connection.default_serial_targets`

        Parameters:
            active_candidate: Previously resolved connection target, or
                ``None`` to fall back to
                :data:`~data.mesh_ingestor.config.CONNECTION`.

        Returns:
            ``(iface, resolved_target, next_active_candidate)`` matching the
            :class:`~data.mesh_ingestor.provider.Provider` contract.

        Raises:
            ConnectionError: When the node does not complete the handshake
                within :data:`_CONNECT_TIMEOUT_SECS` seconds.
        """
        target: str | None = active_candidate or config.CONNECTION

        if not target:
            candidates = default_serial_targets()
            target = candidates[0] if candidates else "/dev/ttyACM0"

        config._debug_log(
            "Connecting to MeshCore node",
            context="meshcore.connect",
            target=target,
        )

        iface = _MeshcoreInterface(target=target)
        connected_event = threading.Event()
        error_holder: list = [None]

        def _run_loop() -> None:
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            iface._loop = loop
            try:
                loop.run_until_complete(
                    _run_meshcore(iface, target, connected_event, error_holder)
                )
            finally:
                loop.close()

        thread = threading.Thread(target=_run_loop, name="meshcore-loop", daemon=True)
        iface._thread = thread
        thread.start()

        if not connected_event.wait(timeout=_CONNECT_TIMEOUT_SECS):
            iface.close()
            raise ConnectionError(
                f"Timed out waiting for MeshCore node at {target!r} "
                f"after {_CONNECT_TIMEOUT_SECS:g}s."
            )

        if error_holder[0] is not None:
            iface.close()
            raise error_holder[0]

        return iface, target, target

    def extract_host_node_id(self, iface: object) -> str | None:
        """Return the canonical ``!xxxxxxxx`` host node ID from the interface.

        Parameters:
            iface: Active :class:`_MeshcoreInterface` returned by
                :meth:`connect`.
        """
        return getattr(iface, "host_node_id", None)

    def node_snapshot_items(self, iface: object) -> list[tuple[str, dict]]:
        """Return a snapshot of all known MeshCore contacts as node entries.

        Parameters:
            iface: Active :class:`_MeshcoreInterface` instance. Any other
                object type causes an empty list to be returned.

        Returns:
            List of ``(canonical_node_id, node_dict)`` pairs suitable for
            passing to :func:`~data.mesh_ingestor.handlers.upsert_node`.
        """
        if not isinstance(iface, _MeshcoreInterface):
            return []
        return iface.contacts_snapshot()


__all__ = ["MeshcoreProvider"]

@@ -1,91 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Meshtastic provider implementation."""

from __future__ import annotations

import time

from pubsub import pub

from .. import config, daemon as _daemon, handlers, interfaces


class MeshtasticProvider:
    """Meshtastic ingestion provider (current default)."""

    name = "meshtastic"

    def __init__(self):
        self._subscribed: list[str] = []

    def subscribe(self) -> list[str]:
        """Subscribe Meshtastic pubsub receive topics."""

        if self._subscribed:
            return list(self._subscribed)

        subscribed = []
        for topic in _daemon._RECEIVE_TOPICS:
            try:
                pub.subscribe(handlers.on_receive, topic)
                subscribed.append(topic)
            except Exception as exc:  # pragma: no cover
                config._debug_log(f"failed to subscribe to {topic!r}: {exc}")
        self._subscribed = subscribed
        return list(subscribed)

    def connect(
        self, *, active_candidate: str | None
    ) -> tuple[object, str | None, str | None]:
        """Create a Meshtastic interface using the existing interface helpers."""

        iface = None
        resolved_target = None
        next_candidate = active_candidate

        if active_candidate:
            iface, resolved_target = interfaces._create_serial_interface(
                active_candidate
            )
        else:
            iface, resolved_target = interfaces._create_default_interface()
            next_candidate = resolved_target

        interfaces._ensure_radio_metadata(iface)
        interfaces._ensure_channel_metadata(iface)

        return iface, resolved_target, next_candidate

    def extract_host_node_id(self, iface: object) -> str | None:
        return interfaces._extract_host_node_id(iface)

    def node_snapshot_items(self, iface: object) -> list[tuple[str, object]]:
        nodes = getattr(iface, "nodes", {}) or {}
        for _ in range(3):
            try:
                return list(nodes.items())
            except RuntimeError as err:
                if "dictionary changed size during iteration" not in str(err):
                    raise
                time.sleep(0)
        config._debug_log(
            "Skipping node snapshot due to concurrent modification",
            context="meshtastic.snapshot",
        )
        return []


__all__ = ["MeshtasticProvider"]

@@ -33,9 +33,6 @@ from google.protobuf.json_format import MessageToDict
from google.protobuf.message import DecodeError
from google.protobuf.message import Message as ProtoMessage

from .node_identity import canonical_node_id as _canonical_node_id
from .node_identity import node_num_from_id as _node_num_from_id

_CLI_ROLE_MODULE_NAMES: tuple[str, ...] = (
    "meshtastic.cli.common",
    "meshtastic.cli.roles",
@@ -432,6 +429,91 @@ def _pkt_to_dict(packet) -> dict:
    return {"_unparsed": str(packet)}


def _canonical_node_id(value) -> str | None:
    """Convert node identifiers into the canonical ``!xxxxxxxx`` format.

    Parameters:
        value: Input identifier which may be an int, float or string.

    Returns:
        The canonical identifier or ``None`` if conversion fails.
    """

    if value is None:
        return None
    if isinstance(value, (int, float)):
        try:
            num = int(value)
        except (TypeError, ValueError):
            return None
        if num < 0:
            return None
        return f"!{num & 0xFFFFFFFF:08x}"
    if not isinstance(value, str):
        return None

    trimmed = value.strip()
    if not trimmed:
        return None
    if trimmed.startswith("^"):
        return trimmed
    if trimmed.startswith("!"):
        body = trimmed[1:]
    elif trimmed.lower().startswith("0x"):
        body = trimmed[2:]
    elif trimmed.isdigit():
        try:
            return f"!{int(trimmed, 10) & 0xFFFFFFFF:08x}"
        except ValueError:
            return None
    else:
        body = trimmed

    if not body:
        return None
    try:
        return f"!{int(body, 16) & 0xFFFFFFFF:08x}"
    except ValueError:
        return None

def _node_num_from_id(node_id) -> int | None:
    """Extract the numeric node ID from a canonical identifier.

    Parameters:
        node_id: Identifier value accepted by :func:`_canonical_node_id`.

    Returns:
        The numeric node ID or ``None`` when parsing fails.
    """

    if node_id is None:
        return None
    if isinstance(node_id, (int, float)):
        try:
            num = int(node_id)
        except (TypeError, ValueError):
            return None
        return num if num >= 0 else None
    if not isinstance(node_id, str):
        return None

    trimmed = node_id.strip()
    if not trimmed:
        return None
    if trimmed.startswith("!"):
        trimmed = trimmed[1:]
    if trimmed.lower().startswith("0x"):
        trimmed = trimmed[2:]
    try:
        return int(trimmed, 16)
    except ValueError:
        try:
            return int(trimmed, 10)
        except ValueError:
            return None

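A condensed round-trip sketch of the two converters (hypothetical `*_sketch` names; this drops the float and decimal-string branches handled above):

```python
def canonical_node_id_sketch(value):
    # Integers: mask to 32 bits and render as !xxxxxxxx.
    if isinstance(value, int):
        return None if value < 0 else f"!{value & 0xFFFFFFFF:08x}"
    body = value.strip()
    if body.startswith("!"):
        body = body[1:]
    elif body.lower().startswith("0x"):
        body = body[2:]
    try:
        return f"!{int(body, 16) & 0xFFFFFFFF:08x}"
    except ValueError:
        return None

def node_num_sketch(node_id):
    # Inverse direction: canonical string back to the numeric node ID.
    try:
        return int(node_id.lstrip("!"), 16)
    except ValueError:
        return None
```

The two functions are inverses on the canonical range, so IDs can be stored either way and converted losslessly.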
def _merge_mappings(base, extra):
    """Merge two mapping-like objects recursively.

+1
-3
@@ -29,9 +29,7 @@ CREATE TABLE IF NOT EXISTS messages (
    modem_preset TEXT,
    channel_name TEXT,
    reply_id INTEGER,
    emoji TEXT,
    ingestor TEXT,
    protocol TEXT NOT NULL DEFAULT 'meshtastic'
    emoji TEXT
);

CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time);

@@ -1,39 +0,0 @@
-- Copyright © 2025-26 l5yth & contributors
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
--     http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.

-- Add a protocol column to every entity and event table so records from
-- different mesh backends (meshtastic, meshcore, reticulum, …) can co-exist
-- in the same database and be queried independently.
--
-- Existing rows default to 'meshtastic' for backward compatibility.

BEGIN;
ALTER TABLE ingestors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE nodes ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE messages ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE positions ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE telemetry ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE traces ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';
ALTER TABLE neighbors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic';

-- Indices to support ?protocol= filtering on every entity endpoint without
-- full table scans as multi-protocol traffic grows.
CREATE INDEX IF NOT EXISTS idx_ingestors_protocol ON ingestors(protocol);
CREATE INDEX IF NOT EXISTS idx_nodes_protocol ON nodes(protocol);
CREATE INDEX IF NOT EXISTS idx_messages_protocol ON messages(protocol);
CREATE INDEX IF NOT EXISTS idx_positions_protocol ON positions(protocol);
CREATE INDEX IF NOT EXISTS idx_telemetry_protocol ON telemetry(protocol);
CREATE INDEX IF NOT EXISTS idx_traces_protocol ON traces(protocol);
CREATE INDEX IF NOT EXISTS idx_neighbors_protocol ON neighbors(protocol);
||||
COMMIT;
|
||||
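The migration above (deleted by this commit) relies on SQLite backfilling a non-null default into existing rows when a column is added. A minimal sketch of that behavior, using an in-memory database and an illustrative single-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (node_id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO nodes VALUES ('!67fc83cb')")
# Adding a NOT NULL column is legal here because a default is supplied;
# the pre-existing row is backfilled with 'meshtastic'.
conn.execute(
    "ALTER TABLE nodes ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'"
)
conn.execute("CREATE INDEX IF NOT EXISTS idx_nodes_protocol ON nodes(protocol)")
rows = conn.execute(
    "SELECT node_id, protocol FROM nodes WHERE protocol = 'meshtastic'"
).fetchall()
```

The index mirrors the migration's intent: `?protocol=` filters hit `idx_nodes_protocol` instead of scanning the table.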
@@ -1,47 +0,0 @@
-- Copyright © 2025-26 l5yth & contributors
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.

-- Add telemetry subtype discriminator to enable per-chart type filtering.
-- Backfills existing rows using field-presence heuristics that mirror
-- classifySnapshot() in node-page.js, so historical data is classified
-- consistently regardless of whether the new ingestors are deployed yet.

BEGIN;
ALTER TABLE telemetry ADD COLUMN telemetry_type TEXT;

-- Device metrics: battery/channel fields are exclusive to device_metrics
UPDATE telemetry SET telemetry_type = 'device'
WHERE telemetry_type IS NULL
  AND (battery_level IS NOT NULL OR channel_utilization IS NOT NULL
       OR air_util_tx IS NOT NULL OR uptime_seconds IS NOT NULL);

-- Power sensor: current is the unambiguous power-sensor discriminator.
-- voltage is intentionally excluded here: device_metrics also stores a voltage
-- reading (~4.2 V for battery), so using voltage alone would misclassify device
-- rows whose four device-discriminator fields (battery_level, channel_utilization,
-- air_util_tx, uptime_seconds) happen to be NULL. Rows that have only voltage
-- and no other classifiable fields are left as NULL (unclassified), which is
-- more accurate than a wrong classification.
UPDATE telemetry SET telemetry_type = 'power'
WHERE telemetry_type IS NULL
  AND current IS NOT NULL;

-- Environment: temperature/humidity/pressure
UPDATE telemetry SET telemetry_type = 'environment'
WHERE telemetry_type IS NULL
  AND (temperature IS NOT NULL OR relative_humidity IS NOT NULL
       OR barometric_pressure IS NOT NULL OR iaq IS NOT NULL
       OR gas_resistance IS NOT NULL);

COMMIT;
@@ -17,8 +17,6 @@ CREATE TABLE IF NOT EXISTS neighbors (
     neighbor_id TEXT NOT NULL,
     snr REAL,
     rx_time INTEGER NOT NULL,
-    ingestor TEXT,
-    protocol TEXT NOT NULL DEFAULT 'meshtastic',
     PRIMARY KEY (node_id, neighbor_id),
     FOREIGN KEY (node_id) REFERENCES nodes(node_id) ON DELETE CASCADE,
     FOREIGN KEY (neighbor_id) REFERENCES nodes(node_id) ON DELETE CASCADE
+1 -2
@@ -41,8 +41,7 @@ CREATE TABLE IF NOT EXISTS nodes (
     longitude REAL,
     altitude REAL,
     lora_freq INTEGER,
-    modem_preset TEXT,
-    protocol TEXT NOT NULL DEFAULT 'meshtastic'
+    modem_preset TEXT
 );

 CREATE INDEX IF NOT EXISTS idx_nodes_last_heard ON nodes(last_heard);
+1 -3
@@ -33,9 +33,7 @@ CREATE TABLE IF NOT EXISTS positions (
     rssi INTEGER,
     hop_limit INTEGER,
     bitfield INTEGER,
-    payload_b64 TEXT,
-    ingestor TEXT,
-    protocol TEXT NOT NULL DEFAULT 'meshtastic'
+    payload_b64 TEXT
 );

 CREATE INDEX IF NOT EXISTS idx_positions_rx_time ON positions(rx_time);
@@ -1,7 +1,5 @@
 # Production dependencies
 meshtastic>=2.5.0
-meshcore>=2.3.5
-bleak>=0.21.0
 protobuf>=5.27.2

 # Development dependencies (optional)
+1 -4
@@ -53,10 +53,7 @@ CREATE TABLE IF NOT EXISTS telemetry (
     rainfall_1h REAL,
     rainfall_24h REAL,
     soil_moisture INTEGER,
-    soil_temperature REAL,
-    ingestor TEXT,
-    protocol TEXT NOT NULL DEFAULT 'meshtastic',
-    telemetry_type TEXT
+    soil_temperature REAL
 );

 CREATE INDEX IF NOT EXISTS idx_telemetry_rx_time ON telemetry(rx_time);
+1 -3
@@ -21,9 +21,7 @@ CREATE TABLE IF NOT EXISTS traces (
     rx_iso TEXT NOT NULL,
     rssi INTEGER,
     snr REAL,
-    elapsed_ms INTEGER,
-    ingestor TEXT,
-    protocol TEXT NOT NULL DEFAULT 'meshtastic'
+    elapsed_ms INTEGER
 );

 CREATE TABLE IF NOT EXISTS trace_hops (
+1 -10
@@ -81,12 +81,7 @@ x-matrix-bridge-base: &matrix-bridge-base
   image: ghcr.io/l5yth/potato-mesh-matrix-bridge-${POTATOMESH_IMAGE_ARCH:-linux-amd64}:${POTATOMESH_IMAGE_TAG:-latest}
   volumes:
     - potatomesh_matrix_bridge_state:/app
-    - type: bind
-      source: ./matrix/Config.toml
-      target: /app/Config.toml
-      read_only: true
-      bind:
-        create_host_path: false
+    - ./matrix/Config.toml:/app/Config.toml:ro
   restart: unless-stopped
   deploy:
     resources:
@@ -133,8 +128,6 @@ services:
   matrix-bridge:
     <<: *matrix-bridge-base
     network_mode: host
     profiles:
       - matrix
-    depends_on:
-      - web
     extra_hosts:
@@ -147,8 +140,6 @@ services:
       - potatomesh-network
     depends_on:
       - web-bridge
-    ports:
-      - "41448:41448"
     profiles:
       - bridge
Generated
+149 -348
File diff suppressed because it is too large
+1 -4
@@ -14,7 +14,7 @@

 [package]
 name = "potatomesh-matrix-bridge"
-version = "0.5.12"
+version = "0.5.9"
 edition = "2021"

 [dependencies]
@@ -27,11 +27,8 @@ anyhow = "1"
 tracing = "0.1"
 tracing-subscriber = { version = "0.3", features = ["fmt", "env-filter"] }
 urlencoding = "2"
-axum = { version = "0.7", features = ["json"] }
-clap = { version = "4", features = ["derive"] }

 [dev-dependencies]
 tempfile = "3"
 mockito = "1"
 serial_test = "3"
-tower = "0.5"
+1 -2
@@ -9,8 +9,6 @@ poll_interval_secs = 60
 homeserver = "https://matrix.dod.ngo"
 # Appservice access token (from your registration.yaml)
 as_token = "INVALID_TOKEN_NOT_WORKING"
-# Homeserver token used to authenticate Synapse callbacks
-hs_token = "INVALID_TOKEN_NOT_WORKING"
 # Server name (domain) part of Matrix user IDs
 server_name = "dod.ngo"
 # Room ID to send into (must be joined by the appservice / puppets)
@@ -19,3 +17,4 @@ room_id = "!sXabOBXbVObAlZQEUs:c-base.org" # "#potato-bridge:c-base.org"
 [state]
 # Where to persist last seen message id (optional but recommended)
 state_file = "bridge_state.json"
+
+1 -3
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-FROM rust:1.92-bookworm AS builder
+FROM rust:1.91-bookworm AS builder

 WORKDIR /app

@@ -37,8 +37,6 @@ COPY --from=builder /app/target/release/potatomesh-matrix-bridge /usr/local/bin/
 COPY matrix/Config.toml /app/Config.example.toml
 COPY matrix/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh

-EXPOSE 41448
-
 RUN chmod +x /usr/local/bin/docker-entrypoint.sh

 ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
+7 -109
@@ -2,8 +2,6 @@

 A small Rust daemon that bridges **PotatoMesh** LoRa messages into a **Matrix** room.

-
-
 For each PotatoMesh node, the bridge creates (or uses) a **Matrix puppet user**:

 - Matrix localpart: `potato_` + the hex node id (without `!`), e.g. `!67fc83cb` → `@potato_67fc83cb:example.org`
@@ -56,17 +54,9 @@ This is **not** a full appservice framework; it just speaks the minimal HTTP nee

 ## Configuration

-Configuration can come from a TOML file, CLI flags, environment variables, or secret files. The bridge merges inputs in this order (highest to lowest):
+All configuration is in `Config.toml` in the project root.

-1. CLI flags
-2. Environment variables
-3. Secret files (`*_FILE` paths or container defaults)
-4. TOML config file
-5. Container defaults (paths + poll interval)
-
-If no TOML file is provided, required values must be supplied via CLI/env/secret inputs.
-
-Example TOML:
+Example:

 ```toml
 [potatomesh]
@@ -80,8 +70,6 @@ poll_interval_secs = 10
 homeserver = "https://matrix.example.org"
 # Appservice access token (from your registration.yaml)
 as_token = "YOUR_APPSERVICE_AS_TOKEN"
-# Appservice homeserver token (must match registration hs_token)
-hs_token = "SECRET_HS_TOKEN"
 # Server name (domain) part of Matrix user IDs
 server_name = "example.org"
 # Room ID to send into (must be joined by the appservice / puppets)
@@ -92,92 +80,6 @@ room_id = "!yourroomid:example.org"

 state_file = "bridge_state.json"
````
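The layered precedence this README describes (CLI over environment over secret files over TOML over container defaults) is a highest-wins merge. A simplified model, with illustrative dict layers standing in for the real sources:

```python
def resolve_config(*layers: dict) -> dict:
    # Layers are passed highest-priority first; the first non-None
    # value seen for each key wins.
    resolved: dict = {}
    for layer in layers:
        for key, value in layer.items():
            if value is not None and key not in resolved:
                resolved[key] = value
    return resolved


cli = {"room_id": "!cli:example.org", "homeserver": None}
env = {"homeserver": "https://env.example.org"}
toml_file = {"homeserver": "https://toml.example.org", "state_file": "bridge_state.json"}
cfg = resolve_config(cli, env, toml_file)
```

Here `room_id` comes from the CLI layer, `homeserver` falls through to the environment layer (the CLI left it unset), and `state_file` falls all the way through to the TOML layer.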
-
-The `hs_token` is used to validate inbound appservice transactions. Keep it identical in `Config.toml` and your Matrix appservice registration file.
-
-### CLI Flags
-
-Run `potatomesh-matrix-bridge --help` for the full list. Common flags:
-
-* `--config PATH`
-* `--state-file PATH`
-* `--potatomesh-base-url URL`
-* `--potatomesh-poll-interval-secs SECS`
-* `--matrix-homeserver URL`
-* `--matrix-as-token TOKEN`
-* `--matrix-as-token-file PATH`
-* `--matrix-hs-token TOKEN`
-* `--matrix-hs-token-file PATH`
-* `--matrix-server-name NAME`
-* `--matrix-room-id ROOM`
-* `--container` / `--no-container`
-* `--secrets-dir PATH`
-
-### Environment Variables
-
-* `POTATOMESH_CONFIG`
-* `POTATOMESH_BASE_URL`
-* `POTATOMESH_POLL_INTERVAL_SECS`
-* `MATRIX_HOMESERVER`
-* `MATRIX_AS_TOKEN`
-* `MATRIX_AS_TOKEN_FILE`
-* `MATRIX_HS_TOKEN`
-* `MATRIX_HS_TOKEN_FILE`
-* `MATRIX_SERVER_NAME`
-* `MATRIX_ROOM_ID`
-* `STATE_FILE`
-* `POTATOMESH_CONTAINER`
-* `POTATOMESH_SECRETS_DIR`
-
-### Secret Files
-
-If you supply `*_FILE` values, the bridge reads the secret contents and trims whitespace. When running inside a container, the bridge also checks the default secrets directory (default: `/run/secrets`) for:
-
-* `matrix_as_token`
-* `matrix_hs_token`
-
-### Container Defaults
-
-Container detection checks `POTATOMESH_CONTAINER`, `CONTAINER`, and `/proc/1/cgroup`. When detected (or forced with `--container`), defaults shift to:
-
-* Config path: `/app/Config.toml`
-* State file: `/app/bridge_state.json`
-* Secrets dir: `/run/secrets`
-* Poll interval: 15 seconds (if not otherwise configured)
-
-Set `POTATOMESH_CONTAINER=0` or `--no-container` to opt out of container defaults.
-
-### Docker Compose First Run
-
-Before starting Compose, complete this preflight checklist:
-
-1. Ensure `matrix/Config.toml` exists as a regular file on the host (not a directory).
-2. Fill required Matrix values in `matrix/Config.toml`:
-   - `matrix.as_token`
-   - `matrix.hs_token`
-   - `matrix.server_name`
-   - `matrix.room_id`
-   - `matrix.homeserver`
-
-This is required because the shared Compose anchor `x-matrix-bridge-base` mounts `./matrix/Config.toml` to `/app/Config.toml`.
-Then follow the token and namespace requirements in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example).
-
-#### Troubleshooting
-
-| Symptom | Likely cause | What to check |
-| --- | --- | --- |
-| `Is a directory (os error 21)` | Host mount source became a directory | `matrix/Config.toml` was missing at mount time and got created as a directory on host. |
-| `M_UNKNOWN_TOKEN` / `401 Unauthorized` | Matrix appservice token mismatch | Verify `matrix.as_token` matches your appservice registration and setup in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example). |
-
-#### Recovery from accidental `Config.toml` directory creation
-
-```bash
-# from repo root
-rm -rf matrix/Config.toml
-touch matrix/Config.toml
-# then edit matrix/Config.toml and set valid matrix.as_token, matrix.hs_token,
-# matrix.server_name, matrix.room_id, and matrix.homeserver before starting compose
-```
-
 ### PotatoMesh API

 The bridge assumes:
@@ -232,7 +134,7 @@ A minimal example sketch (you **must** adjust URLs, secrets, namespaces):

 ```yaml
 id: potatomesh-bridge
-url: "http://your-bridge-host:41448"
+url: "http://your-bridge-host:8080" # not used by this bridge if it only calls out
 as_token: "YOUR_APPSERVICE_AS_TOKEN"
 hs_token: "SECRET_HS_TOKEN"
 sender_localpart: "potatomesh-bridge"
@@ -243,12 +145,10 @@ namespaces:
     regex: "@potato_[0-9a-f]{8}:example.org"
 ```

-This bridge listens for Synapse appservice callbacks on port `41448` so it can log inbound transaction payloads. It still only forwards messages one way (PotatoMesh → Matrix), so inbound Matrix events are acknowledged but not bridged. The `as_token` and `namespaces.users` entries remain required for outbound calls, and the `url` should point at the listener.
+For this bridge, only the `as_token` and `namespaces.users` actually matter. The bridge does not accept inbound events; it only uses the `as_token` to call the homeserver.

 In Synapse’s `homeserver.yaml`, add the registration file under `app_service_config_files`, restart, and invite a puppet user to your target room (or use room ID directly).

-The bridge validates inbound appservice callbacks by comparing the `access_token` query param to `hs_token` in `Config.toml`, so keep those values in sync.
-
 ---

 ## Build
@@ -278,11 +178,10 @@ Build the container from the repo root with the included `matrix/Dockerfile`:
 docker build -f matrix/Dockerfile -t potatomesh-matrix-bridge .
 ```

-Provide your config at `/app/Config.toml` (or use CLI/env/secret overrides) and persist the bridge state file by mounting volumes. Minimal example:
+Provide your config at `/app/Config.toml` and persist the bridge state file by mounting volumes. Minimal example:

 ```bash
 docker run --rm \
-  -p 41448:41448 \
   -v bridge_state:/app \
   -v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
   potatomesh-matrix-bridge
@@ -292,13 +191,12 @@ If you prefer to isolate the state file from the config, mount it directly inste

 ```bash
 docker run --rm \
-  -p 41448:41448 \
   -v bridge_state:/app \
   -v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
   potatomesh-matrix-bridge
 ```

-The image ships `Config.example.toml` for reference. If `/app/Config.toml` is absent, set the required values via environment variables, CLI flags, or secrets instead.
+The image ships `Config.example.toml` for reference, but the bridge will exit if `/app/Config.toml` is not provided.

 ---
@@ -336,7 +234,7 @@ Delete `bridge_state.json` if you want it to replay all currently available mess

 ## Development

-Run tests:
+Run tests (currently mostly compile checks, no real tests yet):

 ```bash
 cargo test
@@ -15,13 +15,6 @@

 set -e

-# Default to container-aware configuration paths unless explicitly overridden.
-: "${POTATOMESH_CONTAINER:=1}"
-: "${POTATOMESH_SECRETS_DIR:=/run/secrets}"
-
-export POTATOMESH_CONTAINER
-export POTATOMESH_SECRETS_DIR
-
 # Default state file path from Config.toml unless overridden.
 STATE_FILE="${STATE_FILE:-/app/bridge_state.json}"
 STATE_DIR="$(dirname "$STATE_FILE")"
@@ -1,105 +0,0 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use clap::{ArgAction, Parser};

#[cfg(not(test))]
use crate::config::{ConfigInputs, ConfigOverrides};

/// CLI arguments for the Matrix bridge.
#[derive(Debug, Parser)]
#[command(
    name = "potatomesh-matrix-bridge",
    version,
    about = "PotatoMesh Matrix bridge"
)]
pub struct Cli {
    /// Path to the configuration TOML file.
    #[arg(long, value_name = "PATH")]
    pub config: Option<String>,
    /// Path to the bridge state file.
    #[arg(long, value_name = "PATH")]
    pub state_file: Option<String>,
    /// PotatoMesh base URL.
    #[arg(long, value_name = "URL")]
    pub potatomesh_base_url: Option<String>,
    /// Poll interval in seconds.
    #[arg(long, value_name = "SECS")]
    pub potatomesh_poll_interval_secs: Option<u64>,
    /// Matrix homeserver base URL.
    #[arg(long, value_name = "URL")]
    pub matrix_homeserver: Option<String>,
    /// Matrix appservice access token.
    #[arg(long, value_name = "TOKEN")]
    pub matrix_as_token: Option<String>,
    /// Path to a secret file containing the Matrix appservice access token.
    #[arg(long, value_name = "PATH")]
    pub matrix_as_token_file: Option<String>,
    /// Matrix homeserver token for inbound appservice requests.
    #[arg(long, value_name = "TOKEN")]
    pub matrix_hs_token: Option<String>,
    /// Path to a secret file containing the Matrix homeserver token.
    #[arg(long, value_name = "PATH")]
    pub matrix_hs_token_file: Option<String>,
    /// Matrix server name (domain).
    #[arg(long, value_name = "NAME")]
    pub matrix_server_name: Option<String>,
    /// Matrix room id to forward into.
    #[arg(long, value_name = "ROOM")]
    pub matrix_room_id: Option<String>,
    /// Force container defaults (overrides detection).
    #[arg(long, action = ArgAction::SetTrue)]
    pub container: bool,
    /// Disable container defaults (overrides detection).
    #[arg(long, action = ArgAction::SetTrue)]
    pub no_container: bool,
    /// Directory to search for default secret files.
    #[arg(long, value_name = "PATH")]
    pub secrets_dir: Option<String>,
}

impl Cli {
    /// Convert CLI args into configuration inputs.
    #[cfg(not(test))]
    pub fn to_inputs(&self) -> ConfigInputs {
        ConfigInputs {
            config_path: self.config.clone(),
            secrets_dir: self.secrets_dir.clone(),
            container_override: resolve_container_override(self.container, self.no_container),
            container_hint: None,
            overrides: ConfigOverrides {
                potatomesh_base_url: self.potatomesh_base_url.clone(),
                potatomesh_poll_interval_secs: self.potatomesh_poll_interval_secs,
                matrix_homeserver: self.matrix_homeserver.clone(),
                matrix_as_token: self.matrix_as_token.clone(),
                matrix_as_token_file: self.matrix_as_token_file.clone(),
                matrix_hs_token: self.matrix_hs_token.clone(),
                matrix_hs_token_file: self.matrix_hs_token_file.clone(),
                matrix_server_name: self.matrix_server_name.clone(),
                matrix_room_id: self.matrix_room_id.clone(),
                state_file: self.state_file.clone(),
            },
        }
    }
}

/// Resolve container override flags into an optional boolean.
#[cfg(not(test))]
fn resolve_container_override(container: bool, no_container: bool) -> Option<bool> {
    match (container, no_container) {
        (true, false) => Some(true),
        (false, true) => Some(false),
        _ => None,
    }
}
+20 -841
@@ -15,37 +15,25 @@
use serde::Deserialize;
use std::{fs, path::Path};

const DEFAULT_CONFIG_PATH: &str = "Config.toml";
const CONTAINER_CONFIG_PATH: &str = "/app/Config.toml";
const DEFAULT_STATE_FILE: &str = "bridge_state.json";
const CONTAINER_STATE_FILE: &str = "/app/bridge_state.json";
const DEFAULT_SECRETS_DIR: &str = "/run/secrets";
const CONTAINER_POLL_INTERVAL_SECS: u64 = 15;

/// PotatoMesh API settings.
#[derive(Debug, Deserialize, Clone)]
pub struct PotatomeshConfig {
    pub base_url: String,
    pub poll_interval_secs: u64,
}

/// Matrix appservice settings for the bridge.
#[derive(Debug, Deserialize, Clone)]
pub struct MatrixConfig {
    pub homeserver: String,
    pub as_token: String,
    pub hs_token: String,
    pub server_name: String,
    pub room_id: String,
}

/// State file configuration for the bridge.
#[derive(Debug, Deserialize, Clone)]
pub struct StateConfig {
    pub state_file: String,
}

/// Full configuration loaded for the bridge runtime.
#[derive(Debug, Deserialize, Clone)]
pub struct Config {
    pub potatomesh: PotatomeshConfig,
@@ -53,447 +41,19 @@
    pub state: StateConfig,
}

#[derive(Debug, Deserialize, Clone, Default)]
struct PartialPotatomeshConfig {
    #[serde(default)]
    base_url: Option<String>,
    #[serde(default)]
    poll_interval_secs: Option<u64>,
}

#[derive(Debug, Deserialize, Clone, Default)]
struct PartialMatrixConfig {
    #[serde(default)]
    homeserver: Option<String>,
    #[serde(default)]
    as_token: Option<String>,
    #[serde(default)]
    hs_token: Option<String>,
    #[serde(default)]
    server_name: Option<String>,
    #[serde(default)]
    room_id: Option<String>,
}

#[derive(Debug, Deserialize, Clone, Default)]
struct PartialStateConfig {
    #[serde(default)]
    state_file: Option<String>,
}

#[derive(Debug, Deserialize, Clone, Default)]
struct PartialConfig {
    #[serde(default)]
    potatomesh: PartialPotatomeshConfig,
    #[serde(default)]
    matrix: PartialMatrixConfig,
    #[serde(default)]
    state: PartialStateConfig,
}

/// Overwrite an optional value when the incoming value is present.
fn merge_option<T>(target: &mut Option<T>, incoming: Option<T>) {
    if incoming.is_some() {
        *target = incoming;
    }
}

/// CLI or environment overrides for configuration fields.
#[derive(Debug, Clone, Default)]
pub struct ConfigOverrides {
    pub potatomesh_base_url: Option<String>,
    pub potatomesh_poll_interval_secs: Option<u64>,
    pub matrix_homeserver: Option<String>,
    pub matrix_as_token: Option<String>,
    pub matrix_as_token_file: Option<String>,
    pub matrix_hs_token: Option<String>,
    pub matrix_hs_token_file: Option<String>,
    pub matrix_server_name: Option<String>,
    pub matrix_room_id: Option<String>,
    pub state_file: Option<String>,
}

impl ConfigOverrides {
    fn apply_non_token_overrides(&self, cfg: &mut PartialConfig) {
        merge_option(
            &mut cfg.potatomesh.base_url,
            self.potatomesh_base_url.clone(),
        );
        merge_option(
            &mut cfg.potatomesh.poll_interval_secs,
            self.potatomesh_poll_interval_secs,
        );
        merge_option(&mut cfg.matrix.homeserver, self.matrix_homeserver.clone());
        merge_option(&mut cfg.matrix.server_name, self.matrix_server_name.clone());
        merge_option(&mut cfg.matrix.room_id, self.matrix_room_id.clone());
        merge_option(&mut cfg.state.state_file, self.state_file.clone());
    }

    fn merge(self, higher: ConfigOverrides) -> ConfigOverrides {
        let matrix_as_token = if higher.matrix_as_token_file.is_some() {
            higher.matrix_as_token
        } else {
            higher.matrix_as_token.or(self.matrix_as_token)
        };
        let matrix_hs_token = if higher.matrix_hs_token_file.is_some() {
            higher.matrix_hs_token
        } else {
            higher.matrix_hs_token.or(self.matrix_hs_token)
        };
        ConfigOverrides {
            potatomesh_base_url: higher.potatomesh_base_url.or(self.potatomesh_base_url),
            potatomesh_poll_interval_secs: higher
                .potatomesh_poll_interval_secs
                .or(self.potatomesh_poll_interval_secs),
            matrix_homeserver: higher.matrix_homeserver.or(self.matrix_homeserver),
            matrix_as_token,
            matrix_as_token_file: higher.matrix_as_token_file.or(self.matrix_as_token_file),
            matrix_hs_token,
            matrix_hs_token_file: higher.matrix_hs_token_file.or(self.matrix_hs_token_file),
            matrix_server_name: higher.matrix_server_name.or(self.matrix_server_name),
            matrix_room_id: higher.matrix_room_id.or(self.matrix_room_id),
            state_file: higher.state_file.or(self.state_file),
        }
    }
}
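The `merge` above has one subtlety worth restating: when the higher-priority layer supplies a token *file*, the merged result does not inherit a plain token from the lower layer, so the secret is resolved from the file rather than a stale literal. A Python model of that rule (field names follow the Rust struct; the helper itself is illustrative):

```python
def merge_overrides(lower: dict, higher: dict) -> dict:
    # Plain highest-wins merge for every field the higher layer sets.
    merged = dict(lower)
    for key, value in higher.items():
        if value is not None:
            merged[key] = value
    # A token file in the higher layer suppresses any inherited plain token.
    for token, token_file in (
        ("matrix_as_token", "matrix_as_token_file"),
        ("matrix_hs_token", "matrix_hs_token_file"),
    ):
        if higher.get(token_file) is not None and higher.get(token) is None:
            merged[token] = None
    return merged


lower = {"matrix_as_token": "stale-literal"}
higher = {"matrix_as_token_file": "/run/secrets/matrix_as_token"}
merged = merge_overrides(lower, higher)
```

After the merge, `merged["matrix_as_token"]` is `None` and only the file path survives, mirroring the `if higher.matrix_as_token_file.is_some()` branch in the Rust.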

/// Inputs gathered from CLI flags or environment variables.
#[derive(Debug, Clone, Default)]
pub struct ConfigInputs {
    pub config_path: Option<String>,
    pub secrets_dir: Option<String>,
    pub container_override: Option<bool>,
    pub container_hint: Option<String>,
    pub overrides: ConfigOverrides,
}

impl ConfigInputs {
    /// Merge two input sets, preferring values from `higher`.
    pub fn merge(self, higher: ConfigInputs) -> ConfigInputs {
        ConfigInputs {
            config_path: higher.config_path.or(self.config_path),
            secrets_dir: higher.secrets_dir.or(self.secrets_dir),
            container_override: higher.container_override.or(self.container_override),
            container_hint: higher.container_hint.or(self.container_hint),
            overrides: self.overrides.merge(higher.overrides),
        }
    }

    /// Load configuration inputs from the process environment.
    #[cfg(not(test))]
    pub fn from_env() -> anyhow::Result<Self> {
        let overrides = ConfigOverrides {
            potatomesh_base_url: env_var("POTATOMESH_BASE_URL"),
            potatomesh_poll_interval_secs: parse_u64_env("POTATOMESH_POLL_INTERVAL_SECS")?,
            matrix_homeserver: env_var("MATRIX_HOMESERVER"),
            matrix_as_token: env_var("MATRIX_AS_TOKEN"),
            matrix_as_token_file: env_var("MATRIX_AS_TOKEN_FILE"),
            matrix_hs_token: env_var("MATRIX_HS_TOKEN"),
            matrix_hs_token_file: env_var("MATRIX_HS_TOKEN_FILE"),
            matrix_server_name: env_var("MATRIX_SERVER_NAME"),
            matrix_room_id: env_var("MATRIX_ROOM_ID"),
            state_file: env_var("STATE_FILE"),
        };
        Ok(ConfigInputs {
            config_path: env_var("POTATOMESH_CONFIG"),
            secrets_dir: env_var("POTATOMESH_SECRETS_DIR"),
            container_override: parse_bool_env("POTATOMESH_CONTAINER")?,
            container_hint: env_var("CONTAINER"),
            overrides,
        })
    }
}

impl Config {
    /// Load a full Config from a TOML file.
    #[cfg(test)]
    pub fn load_from_file(path: &str) -> anyhow::Result<Self> {
        let contents = fs::read_to_string(path)?;
        let cfg = toml::from_str(&contents)?;
        Ok(cfg)
    }
}

/// Load a Config by merging CLI/env overrides with an optional TOML file.
#[cfg(not(test))]
pub fn load(cli_inputs: ConfigInputs) -> anyhow::Result<Config> {
    let env_inputs = ConfigInputs::from_env()?;
    let cgroup_hint = read_cgroup();
    load_from_sources(cli_inputs, env_inputs, cgroup_hint.as_deref())
}

/// Load configuration by merging CLI/env inputs and an optional config file.
fn load_from_sources(
    cli_inputs: ConfigInputs,
    env_inputs: ConfigInputs,
    cgroup_hint: Option<&str>,
) -> anyhow::Result<Config> {
    let merged_inputs = env_inputs.merge(cli_inputs);
    let container = detect_container(
        merged_inputs.container_override,
        merged_inputs.container_hint.as_deref(),
        cgroup_hint,
    );
    let defaults = default_paths(container);

    let base_cfg = resolve_base_config(&merged_inputs, &defaults)?;
    let mut cfg = base_cfg.unwrap_or_default();
    merged_inputs.overrides.apply_non_token_overrides(&mut cfg);

    let secrets_dir = resolve_secrets_dir(&merged_inputs, container, &defaults);
    let as_token = resolve_token(
        cfg.matrix.as_token.clone(),
        merged_inputs.overrides.matrix_as_token.clone(),
        merged_inputs.overrides.matrix_as_token_file.as_deref(),
        secrets_dir.as_deref(),
        "matrix_as_token",
    )?;
    let hs_token = resolve_token(
        cfg.matrix.hs_token.clone(),
        merged_inputs.overrides.matrix_hs_token.clone(),
        merged_inputs.overrides.matrix_hs_token_file.as_deref(),
        secrets_dir.as_deref(),
        "matrix_hs_token",
    )?;

    if cfg.potatomesh.poll_interval_secs.is_none() && container {
        cfg.potatomesh.poll_interval_secs = Some(defaults.poll_interval_secs);
    }

    if cfg.state.state_file.is_none() {
        cfg.state.state_file = Some(defaults.state_file);
    }

    let missing = collect_missing_fields(&cfg, &as_token, &hs_token);
    if !missing.is_empty() {
        anyhow::bail!(
            "Missing required configuration values: {}",
            missing.join(", ")
        );
    }

    Ok(Config {
        potatomesh: PotatomeshConfig {
            base_url: cfg.potatomesh.base_url.unwrap(),
            poll_interval_secs: cfg.potatomesh.poll_interval_secs.unwrap(),
        },
        matrix: MatrixConfig {
            homeserver: cfg.matrix.homeserver.unwrap(),
            as_token: as_token.unwrap(),
            hs_token: hs_token.unwrap(),
            server_name: cfg.matrix.server_name.unwrap(),
            room_id: cfg.matrix.room_id.unwrap(),
        },
        state: StateConfig {
            state_file: cfg.state.state_file.unwrap(),
        },
    })
}

/// Collect the missing required field identifiers for error reporting.
fn collect_missing_fields(
    cfg: &PartialConfig,
    as_token: &Option<String>,
    hs_token: &Option<String>,
) -> Vec<&'static str> {
    let mut missing = Vec::new();
    if cfg.potatomesh.base_url.is_none() {
        missing.push("potatomesh.base_url");
    }
    if cfg.potatomesh.poll_interval_secs.is_none() {
        missing.push("potatomesh.poll_interval_secs");
    }
    if cfg.matrix.homeserver.is_none() {
        missing.push("matrix.homeserver");
    }
    if as_token.is_none() {
        missing.push("matrix.as_token");
    }
    if hs_token.is_none() {
        missing.push("matrix.hs_token");
    }
    if cfg.matrix.server_name.is_none() {
        missing.push("matrix.server_name");
    }
if cfg.matrix.room_id.is_none() {
|
||||
missing.push("matrix.room_id");
|
||||
}
|
||||
if cfg.state.state_file.is_none() {
|
||||
missing.push("state.state_file");
|
||||
}
|
||||
missing
|
||||
}
|
||||
|
||||
/// Resolve the base TOML config file, honoring explicit config paths.
|
||||
fn resolve_base_config(
|
||||
inputs: &ConfigInputs,
|
||||
defaults: &DefaultPaths,
|
||||
) -> anyhow::Result<Option<PartialConfig>> {
|
||||
if let Some(path) = &inputs.config_path {
|
||||
return Ok(Some(load_partial_from_file(path)?));
|
||||
}
|
||||
let container_path = Path::new(&defaults.config_path);
|
||||
if container_path.exists() {
|
||||
return Ok(Some(load_partial_from_file(&defaults.config_path)?));
|
||||
}
|
||||
let host_path = Path::new(DEFAULT_CONFIG_PATH);
|
||||
if host_path.exists() {
|
||||
return Ok(Some(load_partial_from_file(DEFAULT_CONFIG_PATH)?));
|
||||
}
|
||||
Ok(None)
|
||||
}
|
||||
|
||||
/// Decide which secrets directory to use based on inputs and defaults.
|
||||
fn resolve_secrets_dir(
|
||||
inputs: &ConfigInputs,
|
||||
container: bool,
|
||||
defaults: &DefaultPaths,
|
||||
) -> Option<String> {
|
||||
if let Some(explicit) = inputs.secrets_dir.clone() {
|
||||
return Some(explicit);
|
||||
}
|
||||
if container {
|
||||
return Some(defaults.secrets_dir.clone());
|
||||
}
|
||||
None
|
||||
}
|
||||
|
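The secrets-directory precedence in `resolve_secrets_dir` (explicit flag wins, container mode falls back to the default directory, host mode gets none) can be sketched as a standalone pure function. `pick_secrets_dir` and the example paths below are illustrative stand-ins, not part of the bridge:

```rust
// Minimal sketch of the resolve_secrets_dir precedence shown above:
// an explicit directory always wins, container mode falls back to the
// configured default, and host mode uses no secrets directory at all.
fn pick_secrets_dir(explicit: Option<&str>, container: bool, default_dir: &str) -> Option<String> {
    if let Some(dir) = explicit {
        return Some(dir.to_string());
    }
    if container {
        return Some(default_dir.to_string());
    }
    None
}

fn main() {
    // explicit flag beats the container default
    assert_eq!(
        pick_secrets_dir(Some("/run/secrets"), true, "/etc/secrets"),
        Some("/run/secrets".to_string())
    );
    // container mode without a flag uses the default directory
    assert_eq!(
        pick_secrets_dir(None, true, "/etc/secrets"),
        Some("/etc/secrets".to_string())
    );
    // host mode without a flag resolves to no secrets directory
    assert_eq!(pick_secrets_dir(None, false, "/etc/secrets"), None);
}
```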
/// Resolve a token value from explicit values, secret files, or config file values.
fn resolve_token(
    base_value: Option<String>,
    explicit_value: Option<String>,
    explicit_file: Option<&str>,
    secrets_dir: Option<&str>,
    default_secret_name: &str,
) -> anyhow::Result<Option<String>> {
    if let Some(value) = explicit_value {
        return Ok(Some(value));
    }
    if let Some(path) = explicit_file {
        return Ok(Some(read_secret_file(path)?));
    }
    if let Some(dir) = secrets_dir {
        let default_path = Path::new(dir).join(default_secret_name);
        if default_path.exists() {
            return Ok(Some(read_secret_file(
                default_path
                    .to_str()
                    .ok_or_else(|| anyhow::anyhow!("Invalid secret file path"))?,
            )?));
    pub fn from_default_path() -> anyhow::Result<Self> {
        let path = "Config.toml";
        if !Path::new(path).exists() {
            anyhow::bail!("Config file {path} not found");
        }
    }
    Ok(base_value)
}

/// Read and trim a secret file from disk.
fn read_secret_file(path: &str) -> anyhow::Result<String> {
    let contents = fs::read_to_string(path)?;
    let trimmed = contents.trim();
    if trimmed.is_empty() {
        anyhow::bail!("Secret file {path} is empty");
    }
    Ok(trimmed.to_string())
}

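The validation step in `read_secret_file` above (trim the contents, reject whitespace-only secrets) can be shown against an in-memory string so it runs without touching disk; `validate_secret` is a hypothetical helper for illustration only:

```rust
// Mirrors the trim-and-reject-empty validation of read_secret_file above,
// applied to a string instead of a file so the sketch is self-contained.
fn validate_secret(contents: &str) -> Result<String, String> {
    let trimmed = contents.trim();
    if trimmed.is_empty() {
        return Err("secret is empty".to_string());
    }
    Ok(trimmed.to_string())
}

fn main() {
    // surrounding whitespace and the trailing newline are stripped
    assert_eq!(validate_secret("  AS_TOKEN\n"), Ok("AS_TOKEN".to_string()));
    // a whitespace-only secret is rejected, matching the bail! above
    assert!(validate_secret("   \n").is_err());
}
```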
/// Load a partial config from a TOML file.
fn load_partial_from_file(path: &str) -> anyhow::Result<PartialConfig> {
    let contents = fs::read_to_string(path)?;
    let cfg = toml::from_str(&contents)?;
    Ok(cfg)
}

/// Compute default paths and intervals based on container mode.
fn default_paths(container: bool) -> DefaultPaths {
    if container {
        DefaultPaths {
            config_path: CONTAINER_CONFIG_PATH.to_string(),
            state_file: CONTAINER_STATE_FILE.to_string(),
            secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
            poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
        }
    } else {
        DefaultPaths {
            config_path: DEFAULT_CONFIG_PATH.to_string(),
            state_file: DEFAULT_STATE_FILE.to_string(),
            secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
            poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
        }
    }
}

#[derive(Debug, Clone)]
struct DefaultPaths {
    config_path: String,
    state_file: String,
    secrets_dir: String,
    poll_interval_secs: u64,
}

/// Detect whether the bridge is running inside a container.
fn detect_container(
    override_value: Option<bool>,
    env_hint: Option<&str>,
    cgroup_hint: Option<&str>,
) -> bool {
    if let Some(value) = override_value {
        return value;
    }
    if let Some(hint) = env_hint {
        if !hint.trim().is_empty() {
            return true;
        }
    }
    if let Some(cgroup) = cgroup_hint {
        let haystack = cgroup.to_ascii_lowercase();
        return haystack.contains("docker")
            || haystack.contains("kubepods")
            || haystack.contains("containerd")
            || haystack.contains("podman");
    }
    false
}

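The detection precedence above (explicit override first, then any non-empty `CONTAINER` env hint, then cgroup sniffing) can be exercised standalone. This sketch copies the pure function as shown so the checks are runnable; the sample cgroup paths are made up for illustration:

```rust
// Copy of detect_container from the diff above, kept pure so the
// precedence can be demonstrated without a real /proc/1/cgroup.
fn detect_container(
    override_value: Option<bool>,
    env_hint: Option<&str>,
    cgroup_hint: Option<&str>,
) -> bool {
    if let Some(value) = override_value {
        return value;
    }
    if let Some(hint) = env_hint {
        if !hint.trim().is_empty() {
            return true;
        }
    }
    if let Some(cgroup) = cgroup_hint {
        let haystack = cgroup.to_ascii_lowercase();
        return haystack.contains("docker")
            || haystack.contains("kubepods")
            || haystack.contains("containerd")
            || haystack.contains("podman");
    }
    false
}

fn main() {
    // an explicit override beats every hint
    assert!(!detect_container(Some(false), Some("docker"), Some("docker")));
    // any non-empty CONTAINER env hint is enough on its own
    assert!(detect_container(None, Some("podman"), None));
    // otherwise fall back to sniffing known runtimes in the cgroup path
    assert!(detect_container(None, None, Some("0::/kubepods/burstable/pod123")));
    assert!(!detect_container(None, None, Some("0::/init.scope")));
}
```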
/// Read the primary cgroup file for container detection.
#[cfg(not(test))]
fn read_cgroup() -> Option<String> {
    fs::read_to_string("/proc/1/cgroup").ok()
}

/// Read and trim an environment variable value.
#[cfg(not(test))]
fn env_var(key: &str) -> Option<String> {
    std::env::var(key).ok().filter(|v| !v.trim().is_empty())
}

/// Parse a u64 environment variable value.
#[cfg(not(test))]
fn parse_u64_env(key: &str) -> anyhow::Result<Option<u64>> {
    match env_var(key) {
        None => Ok(None),
        Some(value) => value
            .parse::<u64>()
            .map(Some)
            .map_err(|e| anyhow::anyhow!("Invalid {key} value: {e}")),
    }
}

/// Parse a boolean environment variable value.
#[cfg(not(test))]
fn parse_bool_env(key: &str) -> anyhow::Result<Option<bool>> {
    match env_var(key) {
        None => Ok(None),
        Some(value) => parse_bool_value(key, &value).map(Some),
    }
}

/// Parse a boolean string with standard truthy/falsy values.
#[cfg(not(test))]
fn parse_bool_value(key: &str, value: &str) -> anyhow::Result<bool> {
    let normalized = value.trim().to_ascii_lowercase();
    match normalized.as_str() {
        "1" | "true" | "yes" | "on" => Ok(true),
        "0" | "false" | "no" | "off" => Ok(false),
        _ => anyhow::bail!("Invalid {key} value: {value}"),
        Self::load_from_file(path)
    }
}

@@ -502,43 +62,6 @@ mod tests {
    use super::*;
    use serial_test::serial;
    use std::io::Write;
    use std::path::{Path, PathBuf};

    struct CwdGuard {
        original: PathBuf,
    }

    impl CwdGuard {
        /// Switch to the provided path and restore the original cwd on drop.
        fn enter(path: &Path) -> Self {
            let original = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("/"));
            std::env::set_current_dir(path).unwrap();
            Self { original }
        }
    }

    impl Drop for CwdGuard {
        fn drop(&mut self) {
            if std::env::set_current_dir(&self.original).is_err() {
                let _ = std::env::set_current_dir("/");
            }
        }
    }

    fn minimal_overrides() -> ConfigOverrides {
        ConfigOverrides {
            potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
            potatomesh_poll_interval_secs: Some(10),
            matrix_homeserver: Some("https://matrix.example.org".to_string()),
            matrix_as_token: Some("AS_TOKEN".to_string()),
            matrix_hs_token: Some("HS_TOKEN".to_string()),
            matrix_server_name: Some("example.org".to_string()),
            matrix_room_id: Some("!roomid:example.org".to_string()),
            state_file: Some("bridge_state.json".to_string()),
            matrix_as_token_file: None,
            matrix_hs_token_file: None,
        }
    }

    #[test]
    fn parse_minimal_config_from_toml_str() {
@@ -550,7 +73,6 @@ mod tests {
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"

@@ -564,7 +86,6 @@ mod tests {

        assert_eq!(cfg.matrix.homeserver, "https://matrix.example.org");
        assert_eq!(cfg.matrix.as_token, "AS_TOKEN");
        assert_eq!(cfg.matrix.hs_token, "HS_TOKEN");
        assert_eq!(cfg.matrix.server_name, "example.org");
        assert_eq!(cfg.matrix.room_id, "!roomid:example.org");

@@ -587,7 +108,6 @@ mod tests {
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"

@@ -601,378 +121,37 @@ mod tests {
    }

    #[test]
    fn detect_container_prefers_override() {
        assert!(detect_container(Some(true), None, None));
        assert!(!detect_container(
            Some(false),
            Some("docker"),
            Some("docker")
        ));
    #[serial]
    fn from_default_path_not_found() {
        let tmp_dir = tempfile::tempdir().unwrap();
        std::env::set_current_dir(tmp_dir.path()).unwrap();
        let result = Config::from_default_path();
        assert!(result.is_err());
    }

    #[test]
    fn detect_container_from_hint_or_cgroup() {
        assert!(detect_container(None, Some("docker"), None));
        assert!(detect_container(None, None, Some("kubepods")));
        assert!(!detect_container(None, None, Some("")));
    }

    #[test]
    fn load_uses_cli_overrides_over_env() {
    #[serial]
    fn from_default_path_found() {
        let toml_str = r#"
[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 5
poll_interval_secs = 10

[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"

[state]
state_file = "bridge_state.json"
"#;
        let mut file = tempfile::NamedTempFile::new().unwrap();
        let tmp_dir = tempfile::tempdir().unwrap();
        let file_path = tmp_dir.path().join("Config.toml");
        let mut file = std::fs::File::create(file_path).unwrap();
        write!(file, "{}", toml_str).unwrap();

        let env_inputs = ConfigInputs {
            config_path: Some(file.path().to_str().unwrap().to_string()),
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://env.example/".to_string()),
                ..minimal_overrides()
            },
            ..ConfigInputs::default()
        };
        let cli_inputs = ConfigInputs {
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://cli.example/".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, env_inputs, None).unwrap();
        assert_eq!(cfg.potatomesh.base_url, "https://cli.example/");
    }

    #[test]
    #[serial]
    fn load_uses_container_secret_defaults() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        let secrets_dir = tmp_dir.path();
        fs::write(secrets_dir.join("matrix_as_token"), "FROM_SECRET").unwrap();

        let cli_inputs = ConfigInputs {
            secrets_dir: Some(secrets_dir.to_string_lossy().to_string()),
            container_override: Some(true),
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
                potatomesh_poll_interval_secs: Some(10),
                matrix_homeserver: Some("https://matrix.example.org".to_string()),
                matrix_hs_token: Some("HS_TOKEN".to_string()),
                matrix_server_name: Some("example.org".to_string()),
                matrix_room_id: Some("!roomid:example.org".to_string()),
                state_file: Some("bridge_state.json".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
        assert_eq!(cfg.matrix.as_token, "FROM_SECRET");
    }

    #[test]
    fn resolve_token_prefers_explicit_value() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let token_file = tmp_dir.path().join("token");
        fs::write(&token_file, "FROM_FILE").unwrap();

        let resolved = resolve_token(
            Some("FROM_BASE".to_string()),
            Some("FROM_EXPLICIT".to_string()),
            Some(token_file.to_str().unwrap()),
            Some(tmp_dir.path().to_str().unwrap()),
            "matrix_as_token",
        )
        .unwrap();

        assert_eq!(resolved, Some("FROM_EXPLICIT".to_string()));
    }

    #[test]
    fn resolve_token_reads_explicit_file() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let token_file = tmp_dir.path().join("token");
        fs::write(&token_file, "FROM_FILE").unwrap();

        let resolved = resolve_token(
            None,
            None,
            Some(token_file.to_str().unwrap()),
            None,
            "matrix_as_token",
        )
        .unwrap();

        assert_eq!(resolved, Some("FROM_FILE".to_string()));
    }

    #[test]
    fn resolve_token_reads_default_secret_file() {
        let tmp_dir = tempfile::tempdir().unwrap();
        fs::write(tmp_dir.path().join("matrix_hs_token"), "FROM_SECRET").unwrap();

        let resolved = resolve_token(
            None,
            None,
            None,
            Some(tmp_dir.path().to_str().unwrap()),
            "matrix_hs_token",
        )
        .unwrap();

        assert_eq!(resolved, Some("FROM_SECRET".to_string()));
    }

    #[test]
    fn resolve_token_errors_on_empty_secret_file() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let token_file = tmp_dir.path().join("token");
        fs::write(&token_file, " ").unwrap();

        let result = resolve_token(
            None,
            None,
            Some(token_file.to_str().unwrap()),
            None,
            "matrix_as_token",
        );

        assert!(result.is_err());
    }

    #[test]
    fn resolve_secrets_dir_prefers_explicit() {
        let defaults = DefaultPaths {
            config_path: "Config.toml".to_string(),
            state_file: DEFAULT_STATE_FILE.to_string(),
            secrets_dir: "default".to_string(),
            poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
        };
        let inputs = ConfigInputs {
            secrets_dir: Some("explicit".to_string()),
            ..ConfigInputs::default()
        };

        let resolved = resolve_secrets_dir(&inputs, true, &defaults);
        assert_eq!(resolved, Some("explicit".to_string()));
    }

    #[test]
    fn resolve_secrets_dir_container_default() {
        let defaults = DefaultPaths {
            config_path: "Config.toml".to_string(),
            state_file: DEFAULT_STATE_FILE.to_string(),
            secrets_dir: "default".to_string(),
            poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
        };
        let inputs = ConfigInputs::default();

        let resolved = resolve_secrets_dir(&inputs, true, &defaults);
        assert_eq!(resolved, Some("default".to_string()));
        assert_eq!(resolve_secrets_dir(&inputs, false, &defaults), None);
    }

    #[test]
    #[serial]
    fn resolve_base_config_prefers_explicit_path() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        let config_path = tmp_dir.path().join("explicit.toml");
        fs::write(
            &config_path,
            r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
        )
        .unwrap();

        let defaults = default_paths(false);
        let inputs = ConfigInputs {
            config_path: Some(config_path.to_string_lossy().to_string()),
            ..ConfigInputs::default()
        };

        let resolved = resolve_base_config(&inputs, &defaults).unwrap();
        assert!(resolved.is_some());
    }

    #[test]
    #[serial]
    fn resolve_base_config_uses_container_path_when_present() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        let config_path = tmp_dir.path().join("container.toml");
        fs::write(
            &config_path,
            r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
        )
        .unwrap();

        let defaults = DefaultPaths {
            config_path: config_path.to_string_lossy().to_string(),
            state_file: DEFAULT_STATE_FILE.to_string(),
            secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
            poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
        };

        let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
        assert!(resolved.is_some());
    }

    #[test]
    #[serial]
    fn resolve_base_config_uses_host_path_when_present() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        fs::write(
            "Config.toml",
            r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
        )
        .unwrap();

        let defaults = default_paths(false);
        let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
        assert!(resolved.is_some());
    }

    #[test]
    #[serial]
    fn resolve_base_config_returns_none_when_missing() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        let defaults = default_paths(false);
        let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
        assert!(resolved.is_none());
    }

    #[test]
    #[serial]
    fn load_prefers_cli_token_file_over_env_value() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());

        let token_file = tmp_dir.path().join("as_token");
        fs::write(&token_file, "CLI_SECRET").unwrap();

        let env_inputs = ConfigInputs {
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
                potatomesh_poll_interval_secs: Some(10),
                matrix_homeserver: Some("https://matrix.example.org".to_string()),
                matrix_as_token: Some("ENV_TOKEN".to_string()),
                matrix_hs_token: Some("HS_TOKEN".to_string()),
                matrix_server_name: Some("example.org".to_string()),
                matrix_room_id: Some("!roomid:example.org".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };
        let cli_inputs = ConfigInputs {
            overrides: ConfigOverrides {
                matrix_as_token_file: Some(token_file.to_string_lossy().to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, env_inputs, None).unwrap();
        assert_eq!(cfg.matrix.as_token, "CLI_SECRET");
    }

    #[test]
    #[serial]
    fn load_uses_container_default_poll_interval() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());

        let cli_inputs = ConfigInputs {
            container_override: Some(true),
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
                matrix_homeserver: Some("https://matrix.example.org".to_string()),
                matrix_as_token: Some("AS_TOKEN".to_string()),
                matrix_hs_token: Some("HS_TOKEN".to_string()),
                matrix_server_name: Some("example.org".to_string()),
                matrix_room_id: Some("!roomid:example.org".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
        assert_eq!(
            cfg.potatomesh.poll_interval_secs,
            CONTAINER_POLL_INTERVAL_SECS
        );
    }

    #[test]
    #[serial]
    fn load_uses_default_state_path_when_missing() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());

        let cli_inputs = ConfigInputs {
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
                potatomesh_poll_interval_secs: Some(10),
                matrix_homeserver: Some("https://matrix.example.org".to_string()),
                matrix_as_token: Some("AS_TOKEN".to_string()),
                matrix_hs_token: Some("HS_TOKEN".to_string()),
                matrix_server_name: Some("example.org".to_string()),
                matrix_room_id: Some("!roomid:example.org".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
        assert_eq!(cfg.state.state_file, DEFAULT_STATE_FILE);
        std::env::set_current_dir(tmp_dir.path()).unwrap();
        let result = Config::from_default_path();
        assert!(result.is_ok());
    }
}

+124
-427
@@ -12,42 +12,23 @@
// See the License for the specific language governing permissions and
// limitations under the License.

mod cli;
mod config;
mod matrix;
mod matrix_server;
mod potatomesh;

use std::{fs, net::SocketAddr, path::Path};
use std::{fs, path::Path};

use anyhow::Result;
#[cfg(not(test))]
use clap::Parser;
use tokio::time::Duration;
use tokio::time::{sleep, Duration};
use tracing::{error, info};

#[cfg(not(test))]
use crate::cli::Cli;
#[cfg(not(test))]
use crate::config::Config;
use crate::matrix::MatrixAppserviceClient;
use crate::matrix_server::run_synapse_listener;
use crate::potatomesh::{FetchParams, PotatoClient, PotatoMessage, PotatoNode};
#[cfg(not(test))]
use tokio::time::sleep;
use crate::potatomesh::{FetchParams, PotatoClient, PotatoMessage};

#[derive(Debug, serde::Serialize, serde::Deserialize, Default)]
pub struct BridgeState {
    /// Highest message id processed by the bridge.
    last_message_id: Option<u64>,
    /// Highest rx_time observed; used to build incremental fetch queries.
    #[serde(default)]
    last_rx_time: Option<u64>,
    /// Message ids seen at the current last_rx_time for de-duplication.
    #[serde(default)]
    last_rx_time_ids: Vec<u64>,
    /// Legacy checkpoint timestamp used before last_rx_time was added.
    #[serde(default, skip_serializing)]
    last_checked_at: Option<u64>,
}

@@ -57,15 +38,7 @@ impl BridgeState {
            return Ok(Self::default());
        }
        let data = fs::read_to_string(path)?;
        // Treat empty/whitespace-only files as a fresh state.
        if data.trim().is_empty() {
            return Ok(Self::default());
        }
        let mut s: Self = serde_json::from_str(&data)?;
        if s.last_rx_time.is_none() {
            s.last_rx_time = s.last_checked_at;
        }
        s.last_checked_at = None;
        let s: Self = serde_json::from_str(&data)?;
        Ok(s)
    }

@@ -76,32 +49,17 @@ impl BridgeState {
    }

    fn should_forward(&self, msg: &PotatoMessage) -> bool {
        match self.last_rx_time {
            None => match self.last_message_id {
                None => true,
                Some(last_id) => msg.id > last_id,
            },
            Some(last_ts) => {
                if msg.rx_time > last_ts {
                    true
                } else if msg.rx_time < last_ts {
                    false
                } else {
                    !self.last_rx_time_ids.contains(&msg.id)
                }
            }
        match self.last_message_id {
            None => true,
            Some(last) => msg.id > last,
        }
    }

    fn update_with(&mut self, msg: &PotatoMessage) {
        self.last_message_id = Some(msg.id);
        if self.last_rx_time.is_none() || Some(msg.rx_time) > self.last_rx_time {
            self.last_rx_time = Some(msg.rx_time);
            self.last_rx_time_ids = vec![msg.id];
        } else if Some(msg.rx_time) == self.last_rx_time && !self.last_rx_time_ids.contains(&msg.id)
        {
            self.last_rx_time_ids.push(msg.id);
        }
        self.last_message_id = Some(match self.last_message_id {
            None => msg.id,
            Some(last) => last.max(msg.id),
        });
    }
}

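The rx_time-based de-duplication that this hunk strips out of `should_forward` can be sketched in isolation: messages strictly newer than the checkpoint are forwarded, strictly older ones are dropped, and ties are resolved by the set of ids already seen at that timestamp. The `Checkpoint` struct below is a hypothetical stand-in for the relevant `BridgeState` fields, not bridge code:

```rust
// Standalone sketch of the rx_time + seen-id de-duplication removed by
// the hunk above; field names follow the BridgeState struct in the diff.
struct Checkpoint {
    last_rx_time: Option<u64>,
    last_rx_time_ids: Vec<u64>,
}

impl Checkpoint {
    fn should_forward(&self, id: u64, rx_time: u64) -> bool {
        match self.last_rx_time {
            None => true,                             // no checkpoint yet: forward everything
            Some(last) if rx_time > last => true,     // strictly newer timestamp
            Some(last) if rx_time < last => false,    // strictly older timestamp
            Some(_) => !self.last_rx_time_ids.contains(&id), // tie: dedupe by id
        }
    }
}

fn main() {
    let cp = Checkpoint {
        last_rx_time: Some(100),
        last_rx_time_ids: vec![7],
    };
    assert!(cp.should_forward(8, 101));  // newer timestamp is forwarded
    assert!(!cp.should_forward(6, 99));  // older timestamp is dropped
    assert!(!cp.should_forward(7, 100)); // same timestamp, id already seen
    assert!(cp.should_forward(9, 100));  // same timestamp, unseen id
}
```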
||||
@@ -111,7 +69,7 @@ fn build_fetch_params(state: &BridgeState) -> FetchParams {
|
||||
limit: None,
|
||||
since: None,
|
||||
}
|
||||
} else if let Some(ts) = state.last_rx_time {
|
||||
} else if let Some(ts) = state.last_checked_at {
|
||||
FetchParams {
|
||||
limit: None,
|
||||
since: Some(ts),
|
||||
@@ -124,29 +82,17 @@ fn build_fetch_params(state: &BridgeState) -> FetchParams {
|
||||
}
|
||||
}
|
||||
|
||||
/// Persist the bridge state and log any write errors.
|
||||
fn persist_state(state: &BridgeState, state_path: &str) {
|
||||
if let Err(e) = state.save(state_path) {
|
||||
error!("Error saving state: {:?}", e);
|
||||
fn update_checkpoint(state: &mut BridgeState, delivered_all: bool, now_secs: u64) -> bool {
|
||||
if !delivered_all {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
/// Emit an info log for the latest bridge state snapshot.
|
||||
fn log_state_update(state: &BridgeState) {
|
||||
info!("Updated state: {:?}", state);
|
||||
}
|
||||
|
||||
/// Emit a sanitized config log without sensitive tokens.
|
||||
#[cfg(not(test))]
|
||||
fn log_config(cfg: &Config) {
|
||||
info!(
|
||||
potatomesh_base_url = cfg.potatomesh.base_url.as_str(),
|
||||
matrix_homeserver = cfg.matrix.homeserver.as_str(),
|
||||
matrix_server_name = cfg.matrix.server_name.as_str(),
|
||||
matrix_room_id = cfg.matrix.room_id.as_str(),
|
||||
state_file = cfg.state.state_file.as_str(),
|
||||
"Loaded config"
|
||||
);
|
||||
if state.last_message_id.is_some() {
|
||||
state.last_checked_at = Some(now_secs);
|
||||
true
|
||||
} else {
|
||||
false
|
||||
}
|
||||
}
|
||||
|
||||
async fn poll_once(
|
||||
@@ -154,13 +100,16 @@ async fn poll_once(
|
||||
matrix: &MatrixAppserviceClient,
|
||||
state: &mut BridgeState,
|
||||
state_path: &str,
|
||||
now_secs: u64,
|
||||
) {
|
||||
let params = build_fetch_params(state);
|
||||
|
||||
match potato.fetch_messages(params).await {
|
||||
Ok(mut msgs) => {
|
||||
// sort by rx_time so we process by actual receipt time
|
||||
msgs.sort_by_key(|m| m.rx_time);
|
||||
// sort by id ascending so we process in order
|
||||
msgs.sort_by_key(|m| m.id);
|
||||
|
||||
let mut delivered_all = true;
|
||||
|
||||
for msg in &msgs {
|
||||
if !state.should_forward(msg) {
|
||||
@@ -171,19 +120,27 @@ async fn poll_once(
|
||||
if let Some(port) = &msg.portnum {
|
||||
if port != "TEXT_MESSAGE_APP" {
|
||||
state.update_with(msg);
|
||||
log_state_update(state);
|
||||
persist_state(state, state_path);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
if let Err(e) = handle_message(potato, matrix, state, msg).await {
|
||||
error!("Error handling message {}: {:?}", msg.id, e);
|
||||
delivered_all = false;
|
||||
continue;
|
||||
}
|
||||
|
||||
// persist after each processed message
persist_state(state, state_path);
if let Err(e) = state.save(state_path) {
error!("Error saving state: {:?}", e);
}
}

// Only advance checkpoint after successful delivery and a known last_message_id.
if update_checkpoint(state, delivered_all, now_secs) {
if let Err(e) = state.save(state_path) {
error!("Error saving state: {:?}", e);
}
}
}
Err(e) => {
@@ -192,15 +149,6 @@ async fn poll_once(
}
}

fn spawn_synapse_listener(addr: SocketAddr, token: String) -> tokio::task::JoinHandle<()> {
tokio::spawn(async move {
if let Err(e) = run_synapse_listener(addr, token).await {
error!("Synapse listener failed: {:?}", e);
}
})
}

#[cfg(not(test))]
#[tokio::main]
async fn main() -> Result<()> {
// Logging: RUST_LOG=info,bridge=debug,reqwest=warn ...
@@ -212,9 +160,8 @@ async fn main() -> Result<()> {
)
.init();

let cli = Cli::parse();
let cfg = config::load(cli.to_inputs())?;
log_config(&cfg);
let cfg = Config::from_default_path()?;
info!("Loaded config: {:?}", cfg);

let http = reqwest::Client::builder().build()?;
let potato = PotatoClient::new(http.clone(), cfg.potatomesh.clone());
@@ -222,10 +169,6 @@ async fn main() -> Result<()> {
let matrix = MatrixAppserviceClient::new(http.clone(), cfg.matrix.clone());
matrix.health_check().await?;

let synapse_addr = SocketAddr::from(([0, 0, 0, 0], 41448));
let synapse_token = cfg.matrix.hs_token.clone();
let _synapse_handle = spawn_synapse_listener(synapse_addr, synapse_token);

let state_path = &cfg.state.state_file;
let mut state = BridgeState::load(state_path)?;
info!("Loaded state: {:?}", state);
@@ -233,7 +176,12 @@ async fn main() -> Result<()> {
let poll_interval = Duration::from_secs(cfg.potatomesh.poll_interval_secs);

loop {
poll_once(&potato, &matrix, &mut state, state_path).await;
let now_secs = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs();

poll_once(&potato, &matrix, &mut state, state_path, now_secs).await;

sleep(poll_interval).await;
}
@@ -251,77 +199,36 @@ async fn handle_message(

// Ensure puppet exists & has display name
matrix.ensure_user_registered(&localpart).await?;
matrix.ensure_user_joined_room(&user_id).await?;
let display_name = display_name_for_node(&node);
matrix.set_display_name(&user_id, &display_name).await?;
matrix.set_display_name(&user_id, &node.long_name).await?;

// Format the bridged message
let preset_short = modem_preset_short(&msg.modem_preset);
let prefix = format!(
"[{freq}][{preset_short}][{channel}]",
freq = msg.lora_freq,
preset_short = preset_short,
channel = msg.channel_name,
);
let (body, formatted_body) = format_message_bodies(&prefix, &msg.text);

matrix
.send_formatted_message_as(&user_id, &body, &formatted_body)
.await?;

info!("Bridged message: {:?}", msg);
state.update_with(msg);
log_state_update(state);
Ok(())
}

/// Build a compact modem preset label like "LF" for "LongFast".
fn modem_preset_short(preset: &str) -> String {
let letters: String = preset
.chars()
.filter(|ch| ch.is_ascii_uppercase())
.collect();
if letters.is_empty() {
preset.chars().take(2).collect()
} else {
letters
}
}

/// Build plain text + HTML message bodies with inline-code metadata.
fn format_message_bodies(prefix: &str, text: &str) -> (String, String) {
let body = format!("`{}` {}", prefix, text);
let formatted_body = format!("<code>{}</code> {}", escape_html(prefix), escape_html(text));
(body, formatted_body)
}

/// Build the Matrix display name from a node's long/short names.
fn display_name_for_node(node: &PotatoNode) -> String {
match node
let short = node
.short_name
.as_deref()
.map(str::trim)
.filter(|s| !s.is_empty())
{
Some(short) if short != node.long_name => format!("{} ({})", node.long_name, short),
_ => node.long_name.clone(),
}
}
.clone()
.unwrap_or_else(|| node.long_name.clone());

/// Minimal HTML escaping for Matrix formatted_body payloads.
fn escape_html(input: &str) -> String {
let mut escaped = String::with_capacity(input.len());
for ch in input.chars() {
match ch {
'&' => escaped.push_str("&amp;"),
'<' => escaped.push_str("&lt;"),
'>' => escaped.push_str("&gt;"),
'"' => escaped.push_str("&quot;"),
'\'' => escaped.push_str("&#x27;"),
_ => escaped.push(ch),
}
}
escaped
let body = format!(
"[{short}] {text}\n({from_id} → {to_id}, {rssi}, {snr}, {chan}/{preset})",
short = short,
text = msg.text,
from_id = msg.from_id,
to_id = msg.to_id,
rssi = msg
.rssi
.map(|v| format!("RSSI {v} dB"))
.unwrap_or_else(|| "RSSI n/a".to_string()),
snr = msg
.snr
.map(|v| format!("SNR {v} dB"))
.unwrap_or_else(|| "SNR n/a".to_string()),
chan = msg.channel_name,
preset = msg.modem_preset,
);

matrix.send_text_message_as(&user_id, &body).await?;

state.update_with(msg);
Ok(())
}

#[cfg(test)]
@@ -352,54 +259,6 @@ mod tests {
}
}

fn sample_node(short_name: Option<&str>, long_name: &str) -> PotatoNode {
PotatoNode {
node_id: "!abcd1234".to_string(),
short_name: short_name.map(str::to_string),
long_name: long_name.to_string(),
role: None,
hw_model: None,
last_heard: None,
first_heard: None,
latitude: None,
longitude: None,
altitude: None,
}
}

#[test]
fn modem_preset_short_handles_camelcase() {
assert_eq!(modem_preset_short("LongFast"), "LF");
assert_eq!(modem_preset_short("MediumFast"), "MF");
}

#[test]
fn format_message_bodies_escape_html() {
let (body, formatted) = format_message_bodies("[868][LF]", "Hello <&>");
assert_eq!(body, "`[868][LF]` Hello <&>");
assert_eq!(formatted, "<code>[868][LF]</code> Hello &lt;&amp;&gt;");
}

#[test]
fn escape_html_escapes_quotes() {
assert_eq!(escape_html("a\"b'c"), "a&quot;b&#x27;c");
}

#[test]
fn display_name_for_node_includes_short_when_present() {
let node = sample_node(Some("TN"), "Test Node");
assert_eq!(display_name_for_node(&node), "Test Node (TN)");
}

#[test]
fn display_name_for_node_ignores_empty_or_duplicate_short() {
let empty_short = sample_node(Some(""), "Test Node");
assert_eq!(display_name_for_node(&empty_short), "Test Node");

let duplicate_short = sample_node(Some("Test Node"), "Test Node");
assert_eq!(display_name_for_node(&duplicate_short), "Test Node");
}

#[test]
fn bridge_state_initially_forwards_all() {
let state = BridgeState::default();
@@ -409,72 +268,39 @@ mod tests {
}

#[test]
fn bridge_state_tracks_latest_rx_time_and_skips_older() {
fn bridge_state_tracks_highest_id_and_skips_older() {
let mut state = BridgeState::default();
let m1 = sample_msg(10);
let m2 = sample_msg(20);
let m3 = sample_msg(15);
let m1 = PotatoMessage { rx_time: 10, ..m1 };
let m2 = PotatoMessage { rx_time: 20, ..m2 };
let m3 = PotatoMessage { rx_time: 15, ..m3 };

// First message, should forward
assert!(state.should_forward(&m1));
state.update_with(&m1);
assert_eq!(state.last_message_id, Some(10));
assert_eq!(state.last_rx_time, Some(10));

// Second message, higher id, should forward
assert!(state.should_forward(&m2));
state.update_with(&m2);
assert_eq!(state.last_message_id, Some(20));
assert_eq!(state.last_rx_time, Some(20));

// Third message, lower than last, should NOT forward
assert!(!state.should_forward(&m3));
// state remains unchanged
assert_eq!(state.last_message_id, Some(20));
assert_eq!(state.last_rx_time, Some(20));
}

#[test]
fn bridge_state_uses_legacy_id_filter_when_rx_time_missing() {
let state = BridgeState {
last_message_id: Some(10),
last_rx_time: None,
last_rx_time_ids: vec![],
fn bridge_state_update_is_monotonic() {
let mut state = BridgeState {
last_message_id: Some(50),
last_checked_at: None,
};
let older = sample_msg(9);
let newer = sample_msg(11);
let m = sample_msg(40);

assert!(!state.should_forward(&older));
assert!(state.should_forward(&newer));
}

#[test]
fn bridge_state_dedupes_same_timestamp() {
let mut state = BridgeState::default();
let m1 = PotatoMessage {
rx_time: 100,
..sample_msg(10)
};
let m2 = PotatoMessage {
rx_time: 100,
..sample_msg(9)
};
let dup = PotatoMessage {
rx_time: 100,
..sample_msg(10)
};

assert!(state.should_forward(&m1));
state.update_with(&m1);
assert!(state.should_forward(&m2));
state.update_with(&m2);
assert!(!state.should_forward(&dup));
assert_eq!(state.last_rx_time, Some(100));
assert_eq!(state.last_rx_time_ids, vec![10, 9]);
state.update_with(&m); // id is lower than current
// last_message_id must stay at 50
assert_eq!(state.last_message_id, Some(50));
}

#[test]
@@ -485,17 +311,13 @@ mod tests {

let state = BridgeState {
last_message_id: Some(12345),
last_rx_time: Some(99),
last_rx_time_ids: vec![123],
last_checked_at: Some(77),
last_checked_at: Some(99),
};
state.save(path_str).unwrap();

let loaded_state = BridgeState::load(path_str).unwrap();
assert_eq!(loaded_state.last_message_id, Some(12345));
assert_eq!(loaded_state.last_rx_time, Some(99));
assert_eq!(loaded_state.last_rx_time_ids, vec![123]);
assert_eq!(loaded_state.last_checked_at, None);
assert_eq!(loaded_state.last_checked_at, Some(99));
}

#[test]
@@ -506,50 +328,50 @@ mod tests {

let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, None);
assert_eq!(state.last_rx_time, None);
assert!(state.last_rx_time_ids.is_empty());
}

#[test]
fn bridge_state_load_empty_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("empty.json");
let path_str = file_path.to_str().unwrap();

fs::write(path_str, "").unwrap();

let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, None);
assert_eq!(state.last_rx_time, None);
assert!(state.last_rx_time_ids.is_empty());
assert_eq!(state.last_checked_at, None);
}

#[test]
fn bridge_state_migrates_legacy_checkpoint() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("legacy_state.json");
let path_str = file_path.to_str().unwrap();
fn update_checkpoint_requires_last_message_id() {
let mut state = BridgeState {
last_message_id: None,
last_checked_at: Some(10),
};

fs::write(
path_str,
r#"{"last_message_id":42,"last_checked_at":1710000000}"#,
)
.unwrap();
let saved = update_checkpoint(&mut state, true, 123);
assert!(!saved);
assert_eq!(state.last_checked_at, Some(10));
}

let state = BridgeState::load(path_str).unwrap();
assert_eq!(state.last_message_id, Some(42));
assert_eq!(state.last_rx_time, Some(1_710_000_000));
assert!(state.last_rx_time_ids.is_empty());
#[test]
fn update_checkpoint_skips_when_not_delivered() {
let mut state = BridgeState {
last_message_id: Some(5),
last_checked_at: Some(10),
};

let saved = update_checkpoint(&mut state, false, 123);
assert!(!saved);
assert_eq!(state.last_checked_at, Some(10));
}

#[test]
fn update_checkpoint_sets_when_safe() {
let mut state = BridgeState {
last_message_id: Some(5),
last_checked_at: None,
};

let saved = update_checkpoint(&mut state, true, 123);
assert!(saved);
assert_eq!(state.last_checked_at, Some(123));
}

#[test]
fn fetch_params_respects_missing_last_message_id() {
let state = BridgeState {
last_message_id: None,
last_rx_time: Some(123),
last_rx_time_ids: vec![],
last_checked_at: None,
last_checked_at: Some(123),
};

let params = build_fetch_params(&state);
@@ -561,9 +383,7 @@ mod tests {
fn fetch_params_uses_since_when_safe() {
let state = BridgeState {
last_message_id: Some(1),
last_rx_time: Some(123),
last_rx_time_ids: vec![],
last_checked_at: None,
last_checked_at: Some(123),
};

let params = build_fetch_params(&state);
@@ -575,8 +395,6 @@ mod tests {
fn fetch_params_defaults_to_small_window() {
let state = BridgeState {
last_message_id: Some(1),
last_rx_time: None,
last_rx_time_ids: vec![],
last_checked_at: None,
};

@@ -585,59 +403,8 @@ mod tests {
assert_eq!(params.since, None);
}

#[test]
fn log_state_update_emits_info() {
let state = BridgeState::default();
log_state_update(&state);
}

#[test]
fn persist_state_writes_file() {
let tmp_dir = tempfile::tempdir().unwrap();
let file_path = tmp_dir.path().join("state.json");
let path_str = file_path.to_str().unwrap();

let state = BridgeState {
last_message_id: Some(42),
last_rx_time: Some(123),
last_rx_time_ids: vec![42],
last_checked_at: None,
};

persist_state(&state, path_str);

let loaded = BridgeState::load(path_str).unwrap();
assert_eq!(loaded.last_message_id, Some(42));
}

#[test]
fn persist_state_logs_on_error() {
let tmp_dir = tempfile::tempdir().unwrap();
let dir_path = tmp_dir.path().to_str().unwrap();
let state = BridgeState::default();

// Writing to a directory path should trigger the error branch.
persist_state(&state, dir_path);
}

#[tokio::test]
async fn spawn_synapse_listener_starts_task() {
let addr = SocketAddr::from(([127, 0, 0, 1], 0));
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
tokio::time::sleep(Duration::from_millis(10)).await;
handle.abort();
}

#[tokio::test]
async fn spawn_synapse_listener_logs_error_on_bind_failure() {
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
let _ = handle.await;
}

#[tokio::test]
async fn poll_once_leaves_state_unchanged_without_messages() {
async fn poll_once_persists_checkpoint_without_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();
@@ -659,7 +426,6 @@ mod tests {
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
@@ -669,63 +435,18 @@ mod tests {

let mut state = BridgeState {
last_message_id: Some(1),
last_rx_time: Some(100),
last_rx_time_ids: vec![1],
last_checked_at: None,
};

poll_once(&potato, &matrix, &mut state, state_str).await;
poll_once(&potato, &matrix, &mut state, state_str, 123).await;

mock_msgs.assert();

// No new data means state remains unchanged and is not persisted.
assert_eq!(state.last_rx_time, Some(100));
assert_eq!(state.last_rx_time_ids, vec![1]);
assert!(!state_path.exists());
}

#[tokio::test]
async fn poll_once_persists_state_for_non_text_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();

let mut server = mockito::Server::new_async().await;
let mock_msgs = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any)
.with_status(200)
.with_header("content-type", "application/json")
.with_body(
r#"[{"id":1,"rx_time":100,"rx_iso":"2025-11-27T00:00:00Z","from_id":"!abcd1234","to_id":"^all","channel":1,"portnum":"POSITION_APP","text":"","rssi":-100,"hop_limit":1,"lora_freq":868,"modem_preset":"MediumFast","channel_name":"TEST","snr":0.0,"node_id":"!abcd1234"}]"#,
)
.create();

let http_client = reqwest::Client::new();
let potatomesh_cfg = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 1,
};
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};

let potato = PotatoClient::new(http_client.clone(), potatomesh_cfg);
let matrix = MatrixAppserviceClient::new(http_client, matrix_cfg);
let mut state = BridgeState::default();

poll_once(&potato, &matrix, &mut state, state_str).await;

mock_msgs.assert();
assert!(state_path.exists());
// Should have advanced checkpoint and saved it.
assert_eq!(state.last_checked_at, Some(123));
let loaded = BridgeState::load(state_str).unwrap();
assert_eq!(loaded.last_checked_at, Some(123));
assert_eq!(loaded.last_message_id, Some(1));
assert_eq!(loaded.last_rx_time, Some(100));
assert_eq!(loaded.last_rx_time_ids, vec![1]);
}

#[tokio::test]
@@ -739,7 +460,6 @@ mod tests {
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
@@ -747,8 +467,6 @@ mod tests {
let node_id = "abcd1234";
let user_id = format!("@potato_{}:{}", node_id, matrix_cfg.server_name);
let encoded_user = urlencoding::encode(&user_id);
let room_id = matrix_cfg.room_id.clone();
let encoded_room = urlencoding::encode(&room_id);

let mock_get_node = server
.mock("GET", "/api/nodes/abcd1234")
@@ -759,18 +477,7 @@ mod tests {

let mock_register = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();

let mock_join = server
.mock(
"POST",
format!("/_matrix/client/v3/rooms/{}/join", encoded_room).as_str(),
)
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_query("kind=user&access_token=AS_TOKEN")
.with_status(200)
.create();

@@ -779,16 +486,14 @@ mod tests {
"PUT",
format!("/_matrix/client/v3/profile/{}/displayname", encoded_user).as_str(),
)
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"displayname": "Test Node (TN)"
})))
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.with_status(200)
.create();

let http_client = reqwest::Client::new();
let matrix_client = MatrixAppserviceClient::new(http_client.clone(), matrix_cfg);
let room_id = &matrix_client.cfg.room_id;
let encoded_room = urlencoding::encode(room_id);
let txn_id = matrix_client
.txn_counter
.load(std::sync::atomic::Ordering::SeqCst);
@@ -802,14 +507,7 @@ mod tests {
)
.as_str(),
)
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"body": "`[868][MF][TEST]` Ping",
"format": "org.matrix.custom.html",
"formatted_body": "<code>[868][MF][TEST]</code> Ping",
})))
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.with_status(200)
.create();

@@ -822,7 +520,6 @@ mod tests {
assert!(result.is_ok());
mock_get_node.assert();
mock_register.assert();
mock_join.assert();
mock_display_name.assert();
mock_send.assert();

+77 -148
@@ -66,6 +66,10 @@ impl MatrixAppserviceClient {
format!("@{}:{}", localpart, self.cfg.server_name)
}

fn auth_query(&self) -> String {
format!("access_token={}", urlencoding::encode(&self.cfg.as_token))
}

/// Ensure the puppet user exists (register via appservice registration).
pub async fn ensure_user_registered(&self, localpart: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
@@ -76,8 +80,9 @@ impl MatrixAppserviceClient {
}

let url = format!(
"{}/_matrix/client/v3/register?kind=user",
self.cfg.homeserver
"{}/_matrix/client/v3/register?kind=user&{}",
self.cfg.homeserver,
self.auth_query()
);

let body = RegisterReq {
@@ -85,13 +90,7 @@ impl MatrixAppserviceClient {
username: localpart,
};

let resp = self
.http
.post(&url)
.bearer_auth(&self.cfg.as_token)
.json(&body)
.send()
.await?;
let resp = self.http.post(&url).json(&body).send().await?;
if resp.status().is_success() {
Ok(())
} else {
@@ -110,21 +109,18 @@ impl MatrixAppserviceClient {

let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}",
self.cfg.homeserver, encoded_user, encoded_user
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}&{}",
self.cfg.homeserver,
encoded_user,
encoded_user,
self.auth_query()
);

let body = DisplayNameReq {
displayname: display_name,
};

let resp = self
.http
.put(&url)
.bearer_auth(&self.cfg.as_token)
.json(&body)
.send()
.await?;
let resp = self.http.put(&url).json(&body).send().await?;
if resp.status().is_success() {
Ok(())
} else {
@@ -138,53 +134,12 @@ impl MatrixAppserviceClient {
}
}

/// Ensure the puppet user is joined to the configured room.
pub async fn ensure_user_joined_room(&self, user_id: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
struct JoinReq {}

let encoded_room = urlencoding::encode(&self.cfg.room_id);
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/rooms/{}/join?user_id={}",
self.cfg.homeserver, encoded_room, encoded_user
);

let resp = self
.http
.post(&url)
.bearer_auth(&self.cfg.as_token)
.json(&JoinReq {})
.send()
.await?;
if resp.status().is_success() {
Ok(())
} else {
let status = resp.status();
let body_snip = resp.text().await.unwrap_or_default();
Err(anyhow::anyhow!(
"Matrix join failed for {} in {} with status {} ({})",
user_id,
self.cfg.room_id,
status,
body_snip
))
}
}

/// Send a text message with HTML formatting into the configured room as puppet user_id.
pub async fn send_formatted_message_as(
&self,
user_id: &str,
body_text: &str,
formatted_body: &str,
) -> anyhow::Result<()> {
/// Send a plain text message into the configured room as puppet user_id.
pub async fn send_text_message_as(&self, user_id: &str, body_text: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
struct MsgContent<'a> {
msgtype: &'a str,
body: &'a str,
format: &'a str,
formatted_body: &'a str,
}

let txn_id = self.txn_counter.fetch_add(1, Ordering::SeqCst);
@@ -192,36 +147,35 @@ impl MatrixAppserviceClient {
let encoded_user = urlencoding::encode(user_id);

let url = format!(
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}",
self.cfg.homeserver, encoded_room, txn_id, encoded_user
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}&{}",
self.cfg.homeserver,
encoded_room,
txn_id,
encoded_user,
self.auth_query()
);

let content = MsgContent {
msgtype: "m.text",
body: body_text,
format: "org.matrix.custom.html",
formatted_body,
};

let resp = self
.http
.put(&url)
.bearer_auth(&self.cfg.as_token)
.json(&content)
.send()
.await?;
let resp = self.http.put(&url).json(&content).send().await?;

if !resp.status().is_success() {
let status = resp.status();
// optional: pull a short body snippet for debugging
let body_snip = resp.text().await.unwrap_or_default();

// Log for observability
tracing::warn!(
"Failed to send formatted message as {}: status {}, body: {}",
"Failed to send message as {}: status {}, body: {}",
user_id,
status,
body_snip
);

// Propagate an error so callers know this message was NOT delivered
return Err(anyhow::anyhow!(
"Matrix send failed for {} with status {}",
user_id,
@@ -241,7 +195,6 @@ mod tests {
MatrixConfig {
homeserver: "https://matrix.example.org".to_string(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
}
@@ -302,6 +255,16 @@ mod tests {
assert!(result.is_err());
}

#[test]
fn auth_query_contains_access_token() {
let http = reqwest::Client::builder().build().unwrap();
let client = MatrixAppserviceClient::new(http, dummy_cfg());

let q = client.auth_query();
assert!(q.starts_with("access_token="));
assert!(q.contains("AS_TOKEN"));
}

#[test]
fn test_new_matrix_client() {
let http_client = reqwest::Client::new();
@@ -317,8 +280,7 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.match_query("kind=user&access_token=AS_TOKEN")
.with_status(200)
.create();

@@ -336,8 +298,7 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.match_query("kind=user&access_token=AS_TOKEN")
.with_status(400) // M_USER_IN_USE
.create();

@@ -355,13 +316,12 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}", encoded_user);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);

let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();

@@ -379,13 +339,12 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}", encoded_user);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);

let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(500)
.create();

@@ -399,61 +358,7 @@ mod tests {
}

#[tokio::test]
async fn test_ensure_user_joined_room_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);

let mock = server
.mock("POST", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();

let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;

mock.assert();
assert!(result.is_ok());
}

#[tokio::test]
async fn test_ensure_user_joined_room_fail() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);

let mock = server
.mock("POST", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(403)
.create();

let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;

mock.assert();
assert!(result.is_err());
}

#[tokio::test]
async fn test_send_formatted_message_as_success() {
async fn test_send_text_message_as_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
@@ -467,7 +372,7 @@ mod tests {
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
};
let txn_id = client.txn_counter.load(Ordering::SeqCst);
let query = format!("user_id={}", encoded_user);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
@@ -476,21 +381,45 @@ mod tests {
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"body": "`[meta]` hello",
"format": "org.matrix.custom.html",
"formatted_body": "<code>[meta]</code> hello",
})))
.with_status(200)
.create();

let result = client
.send_formatted_message_as(user_id, "`[meta]` hello", "<code>[meta]</code> hello")
.await;
let result = client.send_text_message_as(user_id, "hello").await;

mock.assert();
assert!(result.is_ok());
}

#[tokio::test]
async fn test_send_text_message_as_fail() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);

let client = {
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
};
let txn_id = client.txn_counter.load(Ordering::SeqCst);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
);

let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.with_status(500)
.create();

let result = client.send_text_message_as(user_id, "hello").await;

mock.assert();
assert!(result.is_err());
}
}

@@ -1,289 +0,0 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use axum::{
    extract::{Path, Query, State},
    http::{header::AUTHORIZATION, HeaderMap, StatusCode},
    response::IntoResponse,
    routing::put,
    Json, Router,
};
use serde_json::Value;
use std::net::SocketAddr;
use tracing::info;

#[derive(Clone)]
struct SynapseState {
    hs_token: String,
}

#[derive(serde::Deserialize)]
struct AuthQuery {
    access_token: Option<String>,
}

/// Pull access tokens from supported auth headers.
fn extract_access_token(headers: &HeaderMap) -> Option<String> {
    if let Some(value) = headers.get(AUTHORIZATION) {
        if let Ok(raw) = value.to_str() {
            if let Some(token) = raw.strip_prefix("Bearer ") {
                return Some(token.trim().to_string());
            }
            if let Some(token) = raw.strip_prefix("bearer ") {
                return Some(token.trim().to_string());
            }
        }
    }
    if let Some(value) = headers.get("x-access-token") {
        if let Ok(raw) = value.to_str() {
            return Some(raw.trim().to_string());
        }
    }
    None
}

/// Compare tokens in constant time to avoid timing leakage.
fn constant_time_eq(a: &str, b: &str) -> bool {
    let a_bytes = a.as_bytes();
    let b_bytes = b.as_bytes();
    let max_len = std::cmp::max(a_bytes.len(), b_bytes.len());
    let mut diff = (a_bytes.len() ^ b_bytes.len()) as u8;

    for idx in 0..max_len {
        let left = *a_bytes.get(idx).unwrap_or(&0);
        let right = *b_bytes.get(idx).unwrap_or(&0);
        diff |= left ^ right;
    }

    diff == 0
}

/// Captures inbound Synapse transaction payloads for logging.
#[derive(Debug)]
struct SynapseResponse {
    txn_id: String,
    payload: Value,
}

/// Build the router that handles Synapse appservice transactions.
fn build_router(state: SynapseState) -> Router {
    Router::new()
        .route(
            "/_matrix/appservice/v1/transactions/:txn_id",
            put(handle_transaction),
        )
        .with_state(state)
}

/// Handle inbound transaction callbacks from Synapse.
async fn handle_transaction(
    Path(txn_id): Path<String>,
    State(state): State<SynapseState>,
    Query(auth): Query<AuthQuery>,
    headers: HeaderMap,
    Json(payload): Json<Value>,
) -> impl IntoResponse {
    let header_token = extract_access_token(&headers);
    let token_matches = if let Some(token) = header_token.as_deref() {
        constant_time_eq(token, &state.hs_token)
    } else {
        auth.access_token
            .as_deref()
            .is_some_and(|token| constant_time_eq(token, &state.hs_token))
    };
    if !token_matches {
        return (StatusCode::UNAUTHORIZED, Json(serde_json::json!({})));
    }
    let response = SynapseResponse { txn_id, payload };
    info!(
        "Status response: SynapseResponse {{ txn_id: {}, payload: {:?} }}",
        response.txn_id, response.payload
    );
    (StatusCode::OK, Json(serde_json::json!({})))
}

/// Listen for Synapse callbacks on the configured address.
pub async fn run_synapse_listener(addr: SocketAddr, hs_token: String) -> anyhow::Result<()> {
    let app = build_router(SynapseState { hs_token });
    let listener = tokio::net::TcpListener::bind(addr).await?;
    info!("Synapse listener bound on {}", addr);
    axum::serve(listener, app).await?;
    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;
    use axum::body::Body;
    use axum::http::Request;
    use tokio::time::{sleep, Duration};
    use tower::ServiceExt;

    #[tokio::test]
    async fn transactions_endpoint_accepts_payloads() {
        let app = build_router(SynapseState {
            hs_token: "HS_TOKEN".to_string(),
        });
        let payload = serde_json::json!({
            "events": [],
            "txn_id": "123"
        });

        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/_matrix/appservice/v1/transactions/123")
                    .header("authorization", "Bearer HS_TOKEN")
                    .header("content-type", "application/json")
                    .body(Body::from(payload.to_string()))
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::OK);
        let body = axum::body::to_bytes(response.into_body(), usize::MAX)
            .await
            .unwrap();
        assert_eq!(body.as_ref(), b"{}");
    }

    #[tokio::test]
    async fn transactions_endpoint_rejects_missing_token() {
        let app = build_router(SynapseState {
            hs_token: "HS_TOKEN".to_string(),
        });
        let payload = serde_json::json!({
            "events": [],
            "txn_id": "123"
        });

        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/_matrix/appservice/v1/transactions/123")
                    .header("content-type", "application/json")
                    .body(Body::from(payload.to_string()))
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
        let body = axum::body::to_bytes(response.into_body(), usize::MAX)
            .await
            .unwrap();
        assert_eq!(body.as_ref(), b"{}");
    }

    #[tokio::test]
    async fn transactions_endpoint_rejects_wrong_token() {
        let app = build_router(SynapseState {
            hs_token: "HS_TOKEN".to_string(),
        });
        let payload = serde_json::json!({
            "events": [],
            "txn_id": "123"
        });

        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/_matrix/appservice/v1/transactions/123")
                    .header("authorization", "Bearer NOPE")
                    .header("content-type", "application/json")
                    .body(Body::from(payload.to_string()))
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
        let body = axum::body::to_bytes(response.into_body(), usize::MAX)
            .await
            .unwrap();
        assert_eq!(body.as_ref(), b"{}");
    }

    #[tokio::test]
    async fn transactions_endpoint_accepts_legacy_query_token() {
        let app = build_router(SynapseState {
            hs_token: "HS_TOKEN".to_string(),
        });
        let payload = serde_json::json!({
            "events": [],
            "txn_id": "125"
        });

        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/_matrix/appservice/v1/transactions/125?access_token=HS_TOKEN")
                    .header("content-type", "application/json")
                    .body(Body::from(payload.to_string()))
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::OK);
    }

    #[tokio::test]
    async fn transactions_endpoint_accepts_x_access_token_header() {
        let app = build_router(SynapseState {
            hs_token: "HS_TOKEN".to_string(),
        });
        let payload = serde_json::json!({
            "events": [],
            "txn_id": "126"
        });

        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/_matrix/appservice/v1/transactions/126")
                    .header("x-access-token", "HS_TOKEN")
                    .header("content-type", "application/json")
                    .body(Body::from(payload.to_string()))
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::OK);
    }

    #[tokio::test]
    async fn run_synapse_listener_starts_and_can_abort() {
        let addr = SocketAddr::from(([127, 0, 0, 1], 0));
        let handle =
            tokio::spawn(async move { run_synapse_listener(addr, "HS_TOKEN".to_string()).await });
        sleep(Duration::from_millis(10)).await;
        handle.abort();
    }

    #[tokio::test]
    async fn run_synapse_listener_returns_error_on_bind_failure() {
        let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
        let addr = listener.local_addr().unwrap();
        let result = run_synapse_listener(addr, "HS_TOKEN".to_string()).await;
        assert!(result.is_err());
    }
}
@@ -19,11 +19,6 @@ use tokio::sync::RwLock;

use crate::config::PotatomeshConfig;

/// Protocol identifier sent as a query parameter to restrict API results to
/// Meshtastic data only. Other protocols (e.g. MeshCore) are excluded until
/// the clients are updated to support them.
const PROTOCOL_FILTER: &str = "meshtastic";

#[allow(dead_code)]
#[derive(Debug, Deserialize, Clone)]
pub struct PotatoMessage {
@@ -136,10 +131,7 @@ impl PotatoClient {
    }

    pub async fn fetch_messages(&self, params: FetchParams) -> anyhow::Result<Vec<PotatoMessage>> {
        let mut req = self
            .http
            .get(self.messages_url())
            .query(&[("protocol", PROTOCOL_FILTER)]);
        let mut req = self.http.get(self.messages_url());
        if let Some(limit) = params.limit {
            req = req.query(&[("limit", limit)]);
        }
@@ -344,10 +336,7 @@ mod tests {
        let mut server = mockito::Server::new_async().await;
        let mock = server
            .mock("GET", "/api/messages")
            .match_query(mockito::Matcher::UrlEncoded(
                "protocol".into(),
                "meshtastic".into(),
            ))
            .match_query(mockito::Matcher::Any) // allow optional query params
            .with_status(200)
            .with_header("content-type", "application/json")
            .with_body(
@@ -438,10 +427,7 @@ mod tests {
        let mut server = mockito::Server::new_async().await;
        let mock = server
            .mock("GET", "/api/messages")
            .match_query(mockito::Matcher::UrlEncoded(
                "protocol".into(),
                PROTOCOL_FILTER.into(),
            ))
            .match_query(mockito::Matcher::Any)
            .with_status(500)
            .create();

@@ -462,11 +448,7 @@ mod tests {
        let mut server = mockito::Server::new_async().await;
        let mock = server
            .mock("GET", "/api/messages")
            .match_query(mockito::Matcher::AllOf(vec![
                mockito::Matcher::UrlEncoded("protocol".into(), PROTOCOL_FILTER.into()),
                mockito::Matcher::UrlEncoded("limit".into(), "10".into()),
                mockito::Matcher::UrlEncoded("since".into(), "123".into()),
            ]))
            .match_query("limit=10&since=123")
            .with_status(200)
            .with_header("content-type", "application/json")
            .with_body("[]")
Binary file not shown.
Before: 62 KiB
@@ -1,71 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require "base64"
require "meshtastic"
require "openssl"

channel_name = "BerlinMesh"

# === Inputs from your packet ===
cipher_b64 = "Q1R7tgI5yXzMXu/3"
psk_b64 = "Nmh7EooP2Tsc+7pvPwXLcEDDuYhk+fBo2GLnbA1Y1sg="
packet_id = 3_915_687_257
from_id = "!9e95cf60"
channel = 35

# === Decode key and ciphertext ===
key = Base64.decode64(psk_b64) # 32 bytes -> AES-256
ciphertext = Base64.decode64(cipher_b64)

# === Derive numeric node id from Meshtastic-style string ===
hex_str = from_id.sub(/^!/, "") # "9e95cf60"
from_node = hex_str.to_i(16) # 0x9e95cf60

# === Build nonce exactly like Meshtastic CryptoEngine ===
# Little-endian 64-bit packet ID + little-endian 32-bit node ID + 4 zero bytes
nonce = [packet_id].pack("Q<") # uint64, little-endian
nonce += [from_node].pack("L<") # uint32, little-endian
nonce += "\x00" * 4 # extraNonce == 0 for PSK channel msgs

raise "Nonce must be 16 bytes" unless nonce.bytesize == 16
raise "Key must be 32 bytes" unless key.bytesize == 32

# === AES-256-CTR decrypt ===
cipher = OpenSSL::Cipher.new("aes-256-ctr")
cipher.decrypt
cipher.key = key
cipher.iv = nonce

plaintext = cipher.update(ciphertext) + cipher.final

# At this point `plaintext` is the raw Meshtastic protobuf payload
plaintext = plaintext.bytes.pack("C*") # normalize to BINARY encoding
data = Meshtastic::Data.decode(plaintext)
msg = data.payload.dup.force_encoding("UTF-8")
puts msg

# Derives the channel hash from the channel name and PSK
def channel_hash(name, psk_b64)
  name_bytes = name.b # UTF-8 bytes
  psk_bytes = Base64.decode64(psk_b64)

  hn = name_bytes.bytes.reduce(0) { |acc, b| acc ^ b } # XOR over name
  hp = psk_bytes.bytes.reduce(0) { |acc, b| acc ^ b } # XOR over PSK

  (hn ^ hp) & 0xFF
end

channel_h = channel_hash(channel_name, psk_b64)
puts channel_h
puts channel == channel_h
@@ -1,491 +0,0 @@
hash,name
0,Mesh1
1,DEMO
1,Downlink1
1,NightNet
1,Sideband1
2,CommsNet
2,Mesh3
2,PulseNet
3,LightNet
3,Mesh2
3,WestStar
3,WolfMesh
4,Mesh5
4,OPERATIONS
4,Rescue1
4,SignalFire
5,Base2
5,DeltaNet
5,Mesh4
5,MeshMunich
6,Base1
7,MeshTest
7,Rescue2
7,ZuluMesh
8,CourierNet
8,Fire2
8,Grid2
8,LongFast
8,RescueTeam
9,AlphaNet
9,MeshGrid
10,TestBerlin
10,WaWi
11,Fire1
11,Grid1
12,FoxNet
12,MeshRuhr
12,RadioNet
13,Signal1
13,Zone1
14,BetaBerlin
14,Signal2
14,TangoNet
14,Zone2
15,BerlinMesh
15,LongSlow
15,MeshBerlin
15,Zone3
16,CQ
16,EchoMesh
16,Freq2
16,KiloMesh
16,Node2
16,PhoenixNet
16,Repeater2
17,FoxtrotNet
17,Node3
18,LoRa
19,Freq1
19,HarmonyNet
19,Node1
19,RavenNet
19,Repeater1
20,NomadNet
20,SENSOR
20,TEST
20,test
21,BravoNet
21,EastStar
21,MeshCollective
21,SunNet
22,Node4
22,Uplink1
23,EagleNet
23,MeshHessen
23,Node5
24,MediumSlow
24,Router1
25,Checkpoint1
25,HAMNet
26,Checkpoint2
26,GhostNet
27,HQ
27,Router2
31,DemoBerlin
31,FieldNet
31,MediumFast
32,Clinic
32,Convoy
32,Daylight
32,Town
33,Callisto
33,CQ1
33,Daybreak
33,Demo
33,East
33,LoRaMesh
33,Mist
34,CQ2
34,Freq
34,Gold
34,Link
34,Repeater
35,Aquila
35,Doctor
35,Echo
35,Kilo
35,Public
35,Wyvern
36,District
36,Hessen
36,Io
36,LoRaTest
36,Operations
36,Shadow
36,Unit
37,Campfire
37,City
37,Outsider
37,Sync
38,Beacon
38,Collective
38,Harbor
38,Lion
38,Meteor
39,Firebird
39,Fireteam
39,Quasar
39,Snow
39,Universe
39,Uplink
40,Checkpoint
40,Galaxy
40,Jaguar
40,Sunset
40,Zeta
41,Hinterland
41,HQ2
41,Main
41,Meshtastic
41,Router
41,Valley
41,Wander
41,Wolfpack
42,HQ1
42,Lizard
42,Packet
42,Sahara
42,Tunnel
43,Anaconda
43,Basalt
43,Blackout
43,Crow
43,Dusk
43,Falcon
43,Lima
43,Müggelberg
44,Arctic
44,Backup
44,Bronze
44,Corvus
44,Cosmos
44,LoRaBerlin
44,Neukölln
44,Safari
45,Breeze
45,Burrow
45,Gale
45,Saturn
46,Border
46,Nest
47,Borealis
47,Mars
47,Path
47,Ranger
48,Beat
48,Berg
48,Beta
48,Downlink
48,Hive
48,Rhythm
48,Saxony
48,Sideband
48,Wolf
49,Asteroid
49,Carbon
49,Mesh
50,Blizzard
50,Runner
51,Callsign
51,Carpet
51,Desert
51,Dragon
51,Friedrichshain
51,Help
51,Nebula
51,Safe
52,Amazon
52,Fireline
52,Haze
52,LoRaHessen
52,Platinum
52,Sensor
52,Test
52,Zulu
53,Nord
53,Rescue
53,Secure
53,Silver
54,Bear
54,Hospital
54,Munich
54,Python
54,Rain
54,Wind
54,Wolves
55,Base
55,Bolt
55,Hawk
55,Mirage
55,Nightwatch
55,Obsidian
55,Rock
55,Victor
55,West
56,Aurora
56,Dune
56,Iron
56,Lava
56,Nomads
57,Copper
57,Core
57,Spectrum
57,Summit
58,Colony
58,Fire
58,Ganymede
58,Grid
58,Kraken
58,Road
58,Solstice
58,Tundra
59,911
59,Forest
59,Pack
60,Berlin
60,Chat
60,Sierra
60,Signal
60,Wald
60,Zone
61,Alpine
61,Bridge
61,Camp
61,Dortmund
61,Frontier
61,Jungle
61,Peak
62,Burner
62,Dawn
62,Europa
62,Midnight
62,Nightshift
62,Prenzlauer
62,Safety
62,Sector
62,Wanderer
63,Distress
63,Kiez
63,Ruhr
63,Team
64,Epsilon
64,Field
64,Granite
64,Orbit
64,Trail
64,Whisper
65,Central
65,Cologne
65,Layer
65,Relay
65,Runners
65,Stone
65,Tempo
66,Polar
66,Woods
67,Highway
67,Kreuzberg
67,Leopard
67,Metro
67,Omega
67,Phantom
68,Hamburg
68,Hydra
68,Medic
68,Titan
69,Command
69,Control
69,Gamma
69,Ghost
69,Mercury
69,Oasis
70,Diamond
70,Ham
70,HAM
70,Leipzig
70,Paramedic
70,Savanna
71,Frankfurt
71,Gecko
71,Jupiter
71,Sensors
71,SENSORS
71,Sunrise
72,Chameleon
72,Eagle
72,Hilltop
72,Teufelsberg
73,Firefly
73,Steel
74,Bravo
74,Caravan
74,Ost
74,Süd
75,Emergency
75,EMERGENCY
75,Nomad
75,Watch
76,Alert
76,Bavaria
76,Fog
76,Harmony
76,Raven
77,Admin
77,ADMIN
77,Den
77,Ice
77,LoRaNet
77,North
77,SOS
77,Sos
77,Wanderers
78,Foxtrot
78,Med
78,Ops
79,Flock
79,Phoenix
79,PRIVATE
79,Private
79,Signals
79,Tiger
80,Commune
80,Freedom
80,Pluto
80,Snake
80,Squad
80,Stuttgart
81,Grassland
81,Tango
81,Union
82,Comet
82,Flash
82,Lightning
83,Cloud
83,Equinox
83,Firewatch
83,Fox
83,Radio
83,Shelter
84,Cheetah
84,General
84,Outpost
84,Volcano
85,Glacier
85,Storm
86,Alpha
86,Owl
86,Panther
86,Prairie
86,Thunder
87,Courier
87,Nexus
87,South
88,Ash
88,River
88,Syndicate
89,Amateur
89,Astro
89,Avalanche
89,Bonfire
89,Draco
89,Griffin
89,Nightfall
89,Shade
89,Venus
90,Charlie
90,Delta
90,Stratum
90,Viper
91,Bison
91,Tal
92,Network
92,Scout
93,Comms
93,Fluss
93,Group
93,Hub
93,Pulse
93,Smoke
94,Frost
94,Rover
94,Village
95,Cobra
95,Liberty
95,Ridge
97,DarkNet
97,NightshiftNet
97,Radio2
97,Shelter2
98,CampNet
98,Radio1
98,Shelter1
98,TangoMesh
99,BaseAlpha
99,BerlinNet
99,SouthStar
100,CourierMesh
100,Storm1
101,Courier2
101,GridNet
101,OpsCenter
102,Courier1
103,Storm2
104,HawkNet
105,BearNet
105,StarNet
107,emergency
107,ZuluNet
108,Comms1
108,DragonNet
108,Hub1
109,admin
109,NightMesh
110,MeshNet
111,BaseCharlie
111,Comms2
111,GridSouth
111,Hub2
111,MeshNetwork
111,WolfNet
112,Layer1
112,Relay1
112,ShortFast
113,OpsRoom
114,Layer3
114,MeshCologne
115,Layer2
115,Relay2
115,SOSBerlin
116,Command1
116,Control1
116,CrowNet
116,MeshFrankfurt
117,EmergencyBerlin
117,GridNorth
117,MeshLeipzig
117,PacketNet
119,Command2
119,Control2
119,MeshHamburg
120,NomadMesh
121,NorthStar
121,Watch2
122,CommandRoom
122,ControlRoom
122,SyncNet
122,Watch1
123,PacketRadio
123,ShadowNet
124,EchoNet
124,KiloNet
124,Med2
124,Ops2
125,FoxtrotMesh
125,RepeaterHub
126,MoonNet
127,BaseBravo
127,Med1
127,Ops1
127,WolfDen
@@ -1,736 +0,0 @@
{
  "59": [
    "911",
    "Forest",
    "Pack"
  ],
  "77": [
    "Admin",
    "ADMIN",
    "Den",
    "Ice",
    "LoRaNet",
    "North",
    "SOS",
    "Sos",
    "Wanderers"
  ],
  "109": [
    "admin",
    "NightMesh"
  ],
  "76": [
    "Alert",
    "Bavaria",
    "Fog",
    "Harmony",
    "Raven"
  ],
  "86": [
    "Alpha",
    "Owl",
    "Panther",
    "Prairie",
    "Thunder"
  ],
  "9": [
    "AlphaNet",
    "MeshGrid"
  ],
  "61": [
    "Alpine",
    "Bridge",
    "Camp",
    "Dortmund",
    "Frontier",
    "Jungle",
    "Peak"
  ],
  "89": [
    "Amateur",
    "Astro",
    "Avalanche",
    "Bonfire",
    "Draco",
    "Griffin",
    "Nightfall",
    "Shade",
    "Venus"
  ],
  "52": [
    "Amazon",
    "Fireline",
    "Haze",
    "LoRaHessen",
    "Platinum",
    "Sensor",
    "Test",
    "Zulu"
  ],
  "43": [
    "Anaconda",
    "Basalt",
    "Blackout",
    "Crow",
    "Dusk",
    "Falcon",
    "Lima",
    "Müggelberg"
  ],
  "35": [
    "Aquila",
    "Doctor",
    "Echo",
    "Kilo",
    "Public",
    "Wyvern"
  ],
  "44": [
    "Arctic",
    "Backup",
    "Bronze",
    "Corvus",
    "Cosmos",
    "LoRaBerlin",
    "Neukölln",
    "Safari"
  ],
  "88": [
    "Ash",
    "River",
    "Syndicate"
  ],
  "49": [
    "Asteroid",
    "Carbon",
    "Mesh"
  ],
  "56": [
    "Aurora",
    "Dune",
    "Iron",
    "Lava",
    "Nomads"
  ],
  "55": [
    "Base",
    "Bolt",
    "Hawk",
    "Mirage",
    "Nightwatch",
    "Obsidian",
    "Rock",
    "Victor",
    "West"
  ],
  "6": [
    "Base1"
  ],
  "5": [
    "Base2",
    "DeltaNet",
    "Mesh4",
    "MeshMunich"
  ],
  "99": [
    "BaseAlpha",
    "BerlinNet",
    "SouthStar"
  ],
  "127": [
    "BaseBravo",
    "Med1",
    "Ops1",
    "WolfDen"
  ],
  "111": [
    "BaseCharlie",
    "Comms2",
    "GridSouth",
    "Hub2",
    "MeshNetwork",
    "WolfNet"
  ],
  "38": [
    "Beacon",
    "Collective",
    "Harbor",
    "Lion",
    "Meteor"
  ],
  "54": [
    "Bear",
    "Hospital",
    "Munich",
    "Python",
    "Rain",
    "Wind",
    "Wolves"
  ],
  "105": [
    "BearNet",
    "StarNet"
  ],
  "48": [
    "Beat",
    "Berg",
    "Beta",
    "Downlink",
    "Hive",
    "Rhythm",
    "Saxony",
    "Sideband",
    "Wolf"
  ],
  "60": [
    "Berlin",
    "Chat",
    "Sierra",
    "Signal",
    "Wald",
    "Zone"
  ],
  "15": [
    "BerlinMesh",
    "LongSlow",
    "MeshBerlin",
    "Zone3"
  ],
  "14": [
    "BetaBerlin",
    "Signal2",
    "TangoNet",
    "Zone2"
  ],
  "91": [
    "Bison",
    "Tal"
  ],
  "50": [
    "Blizzard",
    "Runner"
  ],
  "46": [
    "Border",
    "Nest"
  ],
  "47": [
    "Borealis",
    "Mars",
    "Path",
    "Ranger"
  ],
  "74": [
    "Bravo",
    "Caravan",
    "Ost",
    "Süd"
  ],
  "21": [
    "BravoNet",
    "EastStar",
    "MeshCollective",
    "SunNet"
  ],
  "45": [
    "Breeze",
    "Burrow",
    "Gale",
    "Saturn"
  ],
  "62": [
    "Burner",
    "Dawn",
    "Europa",
    "Midnight",
    "Nightshift",
    "Prenzlauer",
    "Safety",
    "Sector",
    "Wanderer"
  ],
  "33": [
    "Callisto",
    "CQ1",
    "Daybreak",
    "Demo",
    "East",
    "LoRaMesh",
    "Mist"
  ],
  "51": [
    "Callsign",
    "Carpet",
    "Desert",
    "Dragon",
    "Friedrichshain",
    "Help",
    "Nebula",
    "Safe"
  ],
  "37": [
    "Campfire",
    "City",
    "Outsider",
    "Sync"
  ],
  "98": [
    "CampNet",
    "Radio1",
    "Shelter1",
    "TangoMesh"
  ],
  "65": [
    "Central",
    "Cologne",
    "Layer",
    "Relay",
    "Runners",
    "Stone",
    "Tempo"
  ],
  "72": [
    "Chameleon",
    "Eagle",
    "Hilltop",
    "Teufelsberg"
  ],
  "90": [
    "Charlie",
    "Delta",
    "Stratum",
    "Viper"
  ],
  "40": [
    "Checkpoint",
    "Galaxy",
    "Jaguar",
    "Sunset",
    "Zeta"
  ],
  "25": [
    "Checkpoint1",
    "HAMNet"
  ],
  "26": [
    "Checkpoint2",
    "GhostNet"
  ],
  "84": [
    "Cheetah",
    "General",
    "Outpost",
    "Volcano"
  ],
  "32": [
    "Clinic",
    "Convoy",
    "Daylight",
    "Town"
  ],
  "83": [
    "Cloud",
    "Equinox",
    "Firewatch",
    "Fox",
    "Radio",
    "Shelter"
  ],
  "95": [
    "Cobra",
    "Liberty",
    "Ridge"
  ],
  "58": [
    "Colony",
    "Fire",
    "Ganymede",
    "Grid",
    "Kraken",
    "Road",
    "Solstice",
    "Tundra"
  ],
  "82": [
    "Comet",
    "Flash",
    "Lightning"
  ],
  "69": [
    "Command",
    "Control",
    "Gamma",
    "Ghost",
    "Mercury",
    "Oasis"
  ],
  "116": [
    "Command1",
    "Control1",
    "CrowNet",
    "MeshFrankfurt"
  ],
  "119": [
    "Command2",
    "Control2",
    "MeshHamburg"
  ],
  "122": [
    "CommandRoom",
    "ControlRoom",
    "SyncNet",
    "Watch1"
  ],
  "93": [
    "Comms",
    "Fluss",
    "Group",
    "Hub",
    "Pulse",
    "Smoke"
  ],
  "108": [
    "Comms1",
    "DragonNet",
    "Hub1"
  ],
  "2": [
    "CommsNet",
    "Mesh3",
    "PulseNet"
  ],
  "80": [
    "Commune",
    "Freedom",
    "Pluto",
    "Snake",
    "Squad",
    "Stuttgart"
  ],
  "57": [
    "Copper",
    "Core",
    "Spectrum",
    "Summit"
  ],
  "87": [
    "Courier",
    "Nexus",
    "South"
  ],
  "102": [
    "Courier1"
  ],
  "101": [
    "Courier2",
    "GridNet",
    "OpsCenter"
  ],
  "100": [
    "CourierMesh",
    "Storm1"
  ],
  "8": [
    "CourierNet",
    "Fire2",
    "Grid2",
    "LongFast",
    "RescueTeam"
  ],
  "16": [
    "CQ",
    "EchoMesh",
    "Freq2",
    "KiloMesh",
    "Node2",
    "PhoenixNet",
    "Repeater2"
  ],
  "34": [
    "CQ2",
    "Freq",
    "Gold",
    "Link",
    "Repeater"
  ],
  "97": [
    "DarkNet",
    "NightshiftNet",
    "Radio2",
    "Shelter2"
  ],
  "1": [
    "DEMO",
    "Downlink1",
    "NightNet",
    "Sideband1"
  ],
  "31": [
    "DemoBerlin",
    "FieldNet",
    "MediumFast"
  ],
  "70": [
    "Diamond",
    "Ham",
    "HAM",
    "Leipzig",
    "Paramedic",
    "Savanna"
  ],
  "63": [
    "Distress",
    "Kiez",
    "Ruhr",
    "Team"
  ],
  "36": [
    "District",
    "Hessen",
    "Io",
    "LoRaTest",
    "Operations",
    "Shadow",
    "Unit"
  ],
  "23": [
    "EagleNet",
    "MeshHessen",
    "Node5"
  ],
  "124": [
    "EchoNet",
    "KiloNet",
    "Med2",
    "Ops2"
  ],
  "75": [
    "Emergency",
    "EMERGENCY",
    "Nomad",
    "Watch"
  ],
  "107": [
    "emergency",
    "ZuluNet"
  ],
  "117": [
    "EmergencyBerlin",
    "GridNorth",
    "MeshLeipzig",
    "PacketNet"
  ],
  "64": [
    "Epsilon",
    "Field",
    "Granite",
    "Orbit",
    "Trail",
    "Whisper"
  ],
  "11": [
    "Fire1",
    "Grid1"
  ],
  "39": [
    "Firebird",
    "Fireteam",
    "Quasar",
    "Snow",
    "Universe",
    "Uplink"
  ],
  "73": [
    "Firefly",
    "Steel"
  ],
  "79": [
    "Flock",
    "Phoenix",
    "PRIVATE",
    "Private",
    "Signals",
    "Tiger"
  ],
  "12": [
    "FoxNet",
    "MeshRuhr",
    "RadioNet"
  ],
  "78": [
    "Foxtrot",
    "Med",
    "Ops"
  ],
  "125": [
    "FoxtrotMesh",
    "RepeaterHub"
  ],
  "17": [
    "FoxtrotNet",
    "Node3"
  ],
  "71": [
    "Frankfurt",
    "Gecko",
    "Jupiter",
    "Sensors",
    "SENSORS",
    "Sunrise"
  ],
  "19": [
    "Freq1",
    "HarmonyNet",
    "Node1",
    "RavenNet",
    "Repeater1"
  ],
  "94": [
    "Frost",
    "Rover",
    "Village"
  ],
  "85": [
    "Glacier",
    "Storm"
  ],
  "81": [
    "Grassland",
    "Tango",
    "Union"
  ],
  "68": [
    "Hamburg",
    "Hydra",
    "Medic",
    "Titan"
  ],
  "104": [
    "HawkNet"
||||
],
|
||||
"67": [
|
||||
"Highway",
|
||||
"Kreuzberg",
|
||||
"Leopard",
|
||||
"Metro",
|
||||
"Omega",
|
||||
"Phantom"
|
||||
],
|
||||
"41": [
|
||||
"Hinterland",
|
||||
"HQ2",
|
||||
"Main",
|
||||
"Meshtastic",
|
||||
"Router",
|
||||
"Valley",
|
||||
"Wander",
|
||||
"Wolfpack"
|
||||
],
|
||||
"27": [
|
||||
"HQ",
|
||||
"Router2"
|
||||
],
|
||||
"42": [
|
||||
"HQ1",
|
||||
"Lizard",
|
||||
"Packet",
|
||||
"Sahara",
|
||||
"Tunnel"
|
||||
],
|
||||
"112": [
|
||||
"Layer1",
|
||||
"Relay1",
|
||||
"ShortFast"
|
||||
],
|
||||
"115": [
|
||||
"Layer2",
|
||||
"Relay2",
|
||||
"SOSBerlin"
|
||||
],
|
||||
"114": [
|
||||
"Layer3",
|
||||
"MeshCologne"
|
||||
],
|
||||
"3": [
|
||||
"LightNet",
|
||||
"Mesh2",
|
||||
"WestStar",
|
||||
"WolfMesh"
|
||||
],
|
||||
"18": [
|
||||
"LoRa"
|
||||
],
|
||||
"24": [
|
||||
"MediumSlow",
|
||||
"Router1"
|
||||
],
|
||||
"0": [
|
||||
"Mesh1"
|
||||
],
|
||||
"4": [
|
||||
"Mesh5",
|
||||
"OPERATIONS",
|
||||
"Rescue1",
|
||||
"SignalFire"
|
||||
],
|
||||
"110": [
|
||||
"MeshNet"
|
||||
],
|
||||
"7": [
|
||||
"MeshTest",
|
||||
"Rescue2",
|
||||
"ZuluMesh"
|
||||
],
|
||||
"126": [
|
||||
"MoonNet"
|
||||
],
|
||||
"92": [
|
||||
"Network",
|
||||
"Scout"
|
||||
],
|
||||
"22": [
|
||||
"Node4",
|
||||
"Uplink1"
|
||||
],
|
||||
"120": [
|
||||
"NomadMesh"
|
||||
],
|
||||
"20": [
|
||||
"NomadNet",
|
||||
"SENSOR",
|
||||
"TEST",
|
||||
"test"
|
||||
],
|
||||
"53": [
|
||||
"Nord",
|
||||
"Rescue",
|
||||
"Secure",
|
||||
"Silver"
|
||||
],
|
||||
"121": [
|
||||
"NorthStar",
|
||||
"Watch2"
|
||||
],
|
||||
"113": [
|
||||
"OpsRoom"
|
||||
],
|
||||
"123": [
|
||||
"PacketRadio",
|
||||
"ShadowNet"
|
||||
],
|
||||
"66": [
|
||||
"Polar",
|
||||
"Woods"
|
||||
],
|
||||
"13": [
|
||||
"Signal1",
|
||||
"Zone1"
|
||||
],
|
||||
"103": [
|
||||
"Storm2"
|
||||
],
|
||||
"10": [
|
||||
"TestBerlin",
|
||||
"WaWi"
|
||||
]
|
||||
}
|
||||
@@ -1,134 +0,0 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require "base64"
require "json"
require "csv"

# --- CONFIG --------------------------------------------------------

# The PSK you want. Here: the public mesh default, "AQ==" (0x01).
PSK_B64 = ENV.fetch("PSK_B64", "AQ==")

# Candidate channel names for the rainbow indices.
CANDIDATE_NAMES = %w[
  911 Admin ADMIN admin Alert Alpha AlphaNet Alpine Amateur Amazon Anaconda Aquila Arctic Ash Asteroid Astro Aurora Avalanche Backup Basalt Base Base1 Base2 BaseAlpha BaseBravo BaseCharlie Bavaria Beacon Bear BearNet Beat Berg Berlin BerlinMesh BerlinNet Beta BetaBerlin Bison Blackout Blizzard Bolt Bonfire Border Borealis Bravo BravoNet Breeze Bridge Bronze Burner Burrow Callisto Callsign Camp Campfire CampNet Caravan Carbon Carpet Central Chameleon Charlie Chat Checkpoint Checkpoint1 Checkpoint2 Cheetah City Clinic Cloud Cobra Collective Cologne Colony Comet Command Command1 Command2 CommandRoom Comms Comms1 Comms2 CommsNet Commune Control Control1 Control2 ControlRoom Convoy Copper Core Corvus Cosmos Courier Courier1 Courier2 CourierMesh CourierNet CQ CQ1 CQ2 Crow CrowNet DarkNet Dawn Daybreak Daylight Delta DeltaNet Demo DEMO DemoBerlin Den Desert Diamond Distress District Doctor Dortmund Downlink Downlink1 Draco Dragon DragonNet Dune Dusk Eagle EagleNet East EastStar Echo EchoMesh EchoNet Emergency emergency EMERGENCY EmergencyBerlin Epsilon Equinox Europa Falcon Field FieldNet Fire Fire1 Fire2 Firebird Firefly Fireline Fireteam Firewatch Flash Flock Fluss Fog Forest Fox FoxNet Foxtrot FoxtrotMesh FoxtrotNet Frankfurt Freedom Freq Freq1 Freq2 Friedrichshain Frontier Frost Galaxy Gale Gamma Ganymede Gecko General Ghost GhostNet Glacier Gold Granite Grassland Grid Grid1 Grid2 GridNet GridNorth GridSouth Griffin Group Ham HAM Hamburg HAMNet Harbor Harmony HarmonyNet Hawk HawkNet Haze Help Hessen Highway Hilltop Hinterland Hive Hospital HQ HQ1 HQ2 Hub Hub1 Hub2 Hydra Ice Io Iron Jaguar Jungle Jupiter Kiez Kilo KiloMesh KiloNet Kraken Kreuzberg Lava Layer Layer1 Layer2 Layer3 Leipzig Leopard Liberty LightNet Lightning Lima Link Lion Lizard LongFast LongSlow LoRa LoRaBerlin LoRaHessen LoRaMesh LoRaNet LoRaTest Main Mars Med Med1 Med2 Medic MediumFast MediumSlow Mercury Mesh Mesh1 Mesh2 Mesh3 Mesh4 Mesh5 MeshBerlin MeshCollective MeshCologne MeshFrankfurt MeshGrid
  MeshHamburg MeshHessen MeshLeipzig MeshMunich MeshNet MeshNetwork MeshRuhr Meshtastic MeshTest Meteor Metro Midnight Mirage Mist MoonNet Munich Müggelberg Nebula Nest Network Neukölln Nexus Nightfall NightMesh NightNet Nightshift NightshiftNet Nightwatch Node1 Node2 Node3 Node4 Node5 Nomad NomadMesh NomadNet Nomads Nord North NorthStar Oasis Obsidian Omega Operations OPERATIONS Ops Ops1 Ops2 OpsCenter OpsRoom Orbit Ost Outpost Outsider Owl Pack Packet PacketNet PacketRadio Panther Paramedic Path Peak Phantom Phoenix PhoenixNet Platinum Pluto Polar Prairie Prenzlauer PRIVATE Private Public Pulse PulseNet Python Quasar Radio Radio1 Radio2 RadioNet Rain Ranger Raven RavenNet Relay Relay1 Relay2 Repeater Repeater1 Repeater2 RepeaterHub Rescue Rescue1 Rescue2 RescueTeam Rhythm Ridge River Road Rock Router Router1 Router2 Rover Ruhr Runner Runners Safari Safe Safety Sahara Saturn Savanna Saxony Scout Sector Secure Sensor SENSOR Sensors SENSORS Shade Shadow ShadowNet Shelter Shelter1 Shelter2 ShortFast Sideband Sideband1 Sierra Signal Signal1 Signal2 SignalFire Signals Silver Smoke Snake Snow Solstice SOS Sos SOSBerlin South SouthStar Spectrum Squad StarNet Steel Stone Storm Storm1 Storm2 Stratum Stuttgart Summit SunNet Sunrise Sunset Sync SyncNet Syndicate Süd Tal Tango TangoMesh TangoNet Team Tempo Test TEST test TestBerlin Teufelsberg Thunder Tiger Titan Town Trail Tundra Tunnel Union Unit Universe Uplink Uplink1 Valley Venus Victor Village Viper Volcano Wald Wander Wanderer Wanderers Watch Watch1 Watch2 WaWi West WestStar Whisper Wind Wolf WolfDen WolfMesh WolfNet Wolfpack Wolves Woods Wyvern Zeta Zone Zone1 Zone2 Zone3 Zulu ZuluMesh ZuluNet
]

# Output filenames
CSV_OUT = ENV.fetch("CSV_OUT", "rainbow.csv")
JSON_OUT = ENV.fetch("JSON_OUT", "rainbow.json")

# --- HASH FUNCTION -------------------------------------------------

def xor_bytes(str_or_bytes)
  bytes = str_or_bytes.is_a?(String) ? str_or_bytes.bytes : str_or_bytes
  bytes.reduce(0) { |acc, b| (acc ^ b) & 0xFF }
end

def expanded_key(psk_b64)
  raw = Base64.decode64(psk_b64 || "")

  case raw.bytesize
  when 0
    # no encryption: length 0, xor = 0
    "".b
  when 1
    alias_index = raw.bytes.first
    alias_keys = {
      1 => [
        0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
        0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01,
      ].pack("C*"),
      2 => [
        0x38, 0x4B, 0xBC, 0xC0, 0x1D, 0xC0, 0x22, 0xD1,
        0x81, 0xBF, 0x36, 0xB8, 0x61, 0x21, 0xE1, 0xFB,
        0x96, 0xB7, 0x2E, 0x55, 0xBF, 0x74, 0x22, 0x7E,
        0x9D, 0x6A, 0xFB, 0x48, 0xD6, 0x4C, 0xB1, 0xA1,
      ].pack("C*"),
    }
    alias_keys.fetch(alias_index) { raise "Unknown PSK alias #{alias_index}" }
  when 2..15
    # pad to 16 (AES128)
    (raw.bytes + [0] * (16 - raw.bytesize)).pack("C*")
  when 16
    raw
  when 17..31
    # pad to 32 (AES256)
    (raw.bytes + [0] * (32 - raw.bytesize)).pack("C*")
  when 32
    raw
  else
    raise "PSK too long (#{raw.bytesize} bytes)"
  end
end

def channel_hash(name, psk_b64)
  effective_name = name.b
  key = expanded_key(psk_b64)

  h_name = xor_bytes(effective_name)
  h_key = xor_bytes(key)

  (h_name ^ h_key) & 0xFF
end

# --- BUILD RAINBOW TABLE -------------------------------------------

psk_b64 = PSK_B64
puts "Using PSK_B64=#{psk_b64.inspect}"

hash_to_names = Hash.new { |h, k| h[k] = [] }

CANDIDATE_NAMES.each do |name|
  h = channel_hash(name, psk_b64)
  hash_to_names[h] << name
end

# --- WRITE CSV (hash,name) -----------------------------------------

CSV.open(CSV_OUT, "w") do |csv|
  csv << %w[hash name]
  hash_to_names.keys.sort.each do |h|
    hash_to_names[h].each do |name|
      csv << [h, name]
    end
  end
end

puts "Wrote CSV rainbow table to #{CSV_OUT}"

# --- WRITE JSON ({hash: [names...]}) -------------------------------

json_hash = hash_to_names.transform_keys(&:to_s)
File.write(JSON_OUT, JSON.pretty_generate(json_hash))

puts "Wrote JSON rainbow table to #{JSON_OUT}"

# --- OPTIONAL: interactive query -----------------------------------

if ARGV.first == "query"
  target = Integer(ARGV[1] || raise("Usage: #{File.basename($0)} query <hash>"))
  names = hash_to_names[target]
  if names.empty?
    puts "No names for hash #{target}"
  else
    puts "Names for hash #{target}:"
    names.each { |n| puts " - #{n}" }
  end
else
  puts "Run again with: #{File.basename($0)} query <hash>  # to inspect a specific hash"
end
@@ -1,256 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.connection`."""

from __future__ import annotations

import sys
from pathlib import Path
from unittest.mock import patch

import pytest

REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(REPO_ROOT))

from data.mesh_ingestor.connection import (  # noqa: E402
    BLE_ADDRESS_RE,
    DEFAULT_TCP_PORT,
    default_serial_targets,
    parse_ble_target,
    parse_tcp_target,
)

# ---------------------------------------------------------------------------
# parse_ble_target
# ---------------------------------------------------------------------------


@pytest.mark.parametrize(
    "value,expected",
    [
        # MAC addresses — returned upper-cased
        ("AA:BB:CC:DD:EE:FF", "AA:BB:CC:DD:EE:FF"),
        ("aa:bb:cc:dd:ee:ff", "AA:BB:CC:DD:EE:FF"),
        ("AA:BB:CC:DD:EE:12", "AA:BB:CC:DD:EE:12"),
        # UUID (macOS format)
        (
            "12345678-1234-1234-1234-123456789abc",
            "12345678-1234-1234-1234-123456789ABC",
        ),
        (
            "12345678-1234-1234-1234-123456789ABC",
            "12345678-1234-1234-1234-123456789ABC",
        ),
    ],
)
def test_parse_ble_target_accepts_ble_addresses(value, expected):
    """parse_ble_target must return the normalised address for valid BLE formats."""
    assert parse_ble_target(value) == expected


@pytest.mark.parametrize(
    "value",
    [
        "/dev/ttyUSB0",
        "/dev/ttyACM0",
        "COM3",
        "hostname:4403",
        "192.168.1.1:4403",
        "",
        " ",
        "AA:BB:CC:DD:EE",  # too short — only 5 groups
        "ZZ:BB:CC:DD:EE:FF",  # invalid hex
    ],
)
def test_parse_ble_target_rejects_non_ble(value):
    """parse_ble_target must return None for serial paths, TCP targets, and malformed inputs."""
    assert parse_ble_target(value) is None


def test_parse_ble_target_none_input():
    """parse_ble_target must return None for None input."""
    assert parse_ble_target(None) is None  # type: ignore[arg-type]


# ---------------------------------------------------------------------------
# parse_tcp_target
# ---------------------------------------------------------------------------


@pytest.mark.parametrize(
    "value,expected_host,expected_port",
    [
        # hostname:port
        ("meshcore-node.local:4403", "meshcore-node.local", 4403),
        ("meshnode.local:4403", "meshnode.local", 4403),
        ("hostname:1234", "hostname", 1234),
        ("otherhost:80", "otherhost", 80),
        # IP:port
        ("192.168.1.1:4403", "192.168.1.1", 4403),
        ("10.0.0.1:9000", "10.0.0.1", 9000),
        # With scheme prefix
        ("tcp://meshnode.local:4403", "meshnode.local", 4403),
        ("http://192.168.1.1:4403", "192.168.1.1", 4403),
        # IPv6 with brackets
        ("[::1]:4403", "::1", 4403),
        ("[2001:db8::1]:8080", "2001:db8::1", 8080),
    ],
)
def test_parse_tcp_target_accepts_tcp(value, expected_host, expected_port):
    """parse_tcp_target must return (host, port) for valid TCP target strings."""
    result = parse_tcp_target(value)
    assert result is not None
    host, port = result
    assert host == expected_host
    assert port == expected_port


@pytest.mark.parametrize(
    "value",
    [
        # Serial paths
        "/dev/ttyUSB0",
        "/dev/ttyACM0",
        "COM3",
        # BLE MACs — multiple colons, no valid port
        "AA:BB:CC:DD:EE:FF",
        "AA:BB:CC:DD:EE:12",
        # UUIDs — hyphens, no colon
        "12345678-1234-1234-1234-123456789abc",
        # Bare hostname without port
        "meshcore-node.local",
        # Empty / whitespace
        "",
        " ",
        # Port out of range
        "host:0",
        "host:65536",
        # Non-numeric port
        "host:notaport",
    ],
)
def test_parse_tcp_target_rejects_non_tcp(value):
    """parse_tcp_target must return None for serial paths, BLE addresses, and malformed inputs."""
    assert parse_tcp_target(value) is None


def test_parse_tcp_target_none_input():
    """parse_tcp_target must return None for None input."""
    assert parse_tcp_target(None) is None  # type: ignore[arg-type]


def test_parse_tcp_target_default_port_for_bracketed_ipv6_no_port():
    """parse_tcp_target must use DEFAULT_TCP_PORT for bracketed IPv6 without port."""
    result = parse_tcp_target("[::1]")
    assert result == ("::1", DEFAULT_TCP_PORT)


@pytest.mark.parametrize(
    "value",
    [
        "[::1",  # no closing bracket
        "[]:4403",  # empty host in brackets
        "[::1]:abc",  # non-numeric port after bracket
        "[::1]:0",  # port out of range (low)
        "[::1]:65536",  # port out of range (high)
    ],
)
def test_parse_tcp_target_rejects_malformed_ipv6(value):
    """parse_tcp_target must return None for malformed bracketed IPv6 targets."""
    assert parse_tcp_target(value) is None


# ---------------------------------------------------------------------------
# default_serial_targets
# ---------------------------------------------------------------------------


def test_default_serial_targets_returns_list():
    """default_serial_targets must return a non-empty list."""
    targets = default_serial_targets()
    assert isinstance(targets, list)
    assert len(targets) > 0


def test_default_serial_targets_includes_fallback():
    """default_serial_targets always includes /dev/ttyACM0 as a fallback."""
    targets = default_serial_targets()
    assert "/dev/ttyACM0" in targets


def test_default_serial_targets_no_duplicates():
    """default_serial_targets must not return duplicate paths."""
    targets = default_serial_targets()
    assert len(targets) == len(set(targets))


def test_default_serial_targets_deduplicates_glob_results():
    """default_serial_targets must deduplicate paths returned by multiple globs."""

    def _fake_glob(pattern):
        if "ttyACM" in pattern:
            return ["/dev/ttyACM0", "/dev/ttyACM1"]
        if "ttyUSB" in pattern:
            return ["/dev/ttyACM0"]  # intentional duplicate across patterns
        return []

    with patch("data.mesh_ingestor.connection.glob.glob", side_effect=_fake_glob):
        targets = default_serial_targets()

    assert targets.count("/dev/ttyACM0") == 1
    assert "/dev/ttyACM1" in targets
    # ttyACM0 already found by glob so fallback append must not re-add it
    assert targets.count("/dev/ttyACM0") == 1


def test_default_serial_targets_omits_fallback_when_ttyacm0_found():
    """default_serial_targets must not append /dev/ttyACM0 when glob already found it."""

    def _fake_glob(pattern):
        if "ttyACM" in pattern:
            return ["/dev/ttyACM0"]
        return []

    with patch("data.mesh_ingestor.connection.glob.glob", side_effect=_fake_glob):
        targets = default_serial_targets()

    # present exactly once — from glob, not appended again
    assert targets.count("/dev/ttyACM0") == 1


# ---------------------------------------------------------------------------
# BLE_ADDRESS_RE sanity
# ---------------------------------------------------------------------------


def test_ble_address_re_mac():
    """BLE_ADDRESS_RE matches a canonical 6-byte MAC address."""
    assert BLE_ADDRESS_RE.fullmatch("AA:BB:CC:DD:EE:FF") is not None


def test_ble_address_re_uuid():
    """BLE_ADDRESS_RE matches a standard 128-bit UUID."""
    assert BLE_ADDRESS_RE.fullmatch("12345678-1234-1234-1234-123456789abc") is not None


def test_ble_address_re_rejects_tcp():
    """BLE_ADDRESS_RE must not match a hostname:port string."""
    assert BLE_ADDRESS_RE.fullmatch("hostname:4403") is None


def test_ble_address_re_rejects_partial_mac():
    """BLE_ADDRESS_RE must not match an incomplete MAC address."""
    assert BLE_ADDRESS_RE.fullmatch("AA:BB:CC:DD:EE") is None
+1
-515
@@ -15,7 +15,6 @@
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import importlib
|
||||
import sys
|
||||
import threading
|
||||
import types
|
||||
@@ -28,8 +27,7 @@ REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
if str(REPO_ROOT) not in sys.path:
|
||||
sys.path.insert(0, str(REPO_ROOT))
|
||||
|
||||
from data.mesh_ingestor import daemon # noqa: E402 - path setup
|
||||
import data.mesh_ingestor.config as _cfg_module # noqa: E402 - path setup
|
||||
from data.mesh_ingestor import daemon
|
||||
|
||||
|
||||
class FakeEvent:
|
||||
@@ -437,515 +435,3 @@ def test_main_inactivity_reconnect(monkeypatch):
|
||||
|
||||
daemon.main()
|
||||
assert any(event.is_set() for event in FakeEvent.instances)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Helper: build a minimal _DaemonState for unit tests
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _make_state(**overrides):
|
||||
"""Return a :class:`daemon._DaemonState` with sensible defaults.
|
||||
|
||||
Any keyword argument is forwarded as a field override via ``setattr``
|
||||
after construction, so callers only need to supply fields under test.
|
||||
"""
|
||||
state = daemon._DaemonState(
|
||||
provider=None, # type: ignore[arg-type]
|
||||
stop=FakeEvent(), # type: ignore[arg-type]
|
||||
configured_port=None,
|
||||
inactivity_reconnect_secs=0.0,
|
||||
energy_saving_enabled=False,
|
||||
energy_online_secs=0.0,
|
||||
energy_sleep_secs=0.0,
|
||||
retry_delay=0.0,
|
||||
last_seen_packet_monotonic=None,
|
||||
active_candidate=None,
|
||||
)
|
||||
for key, val in overrides.items():
|
||||
setattr(state, key, val)
|
||||
return state
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _advance_retry_delay
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_advance_retry_delay_disabled(monkeypatch):
|
||||
"""Returns current delay unchanged when the max is zero."""
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
|
||||
assert daemon._advance_retry_delay(5.0) == 5.0
|
||||
|
||||
|
||||
def test_advance_retry_delay_bootstrap(monkeypatch):
|
||||
"""Seeds from initial config when current delay is zero (first call)."""
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 60.0)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 3.0)
|
||||
assert daemon._advance_retry_delay(0.0) == 3.0
|
||||
|
||||
|
||||
def test_advance_retry_delay_doubles_and_caps(monkeypatch):
|
||||
"""Doubles current delay and caps at the configured maximum."""
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 10.0)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 1.0)
|
||||
assert daemon._advance_retry_delay(3.0) == 6.0
|
||||
assert daemon._advance_retry_delay(7.0) == 10.0
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _energy_sleep
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_energy_sleep_no_op_when_disabled():
|
||||
"""No wait issued when energy saving is disabled."""
|
||||
state = _make_state(energy_saving_enabled=False, energy_sleep_secs=1.0)
|
||||
daemon._energy_sleep(state, "reason")
|
||||
assert not state.stop.wait_calls
|
||||
|
||||
|
||||
def test_energy_sleep_no_op_when_zero_secs():
|
||||
"""No wait issued when sleep duration is zero."""
|
||||
state = _make_state(energy_saving_enabled=True, energy_sleep_secs=0.0)
|
||||
daemon._energy_sleep(state, "reason")
|
||||
assert not state.stop.wait_calls
|
||||
|
||||
|
||||
def test_energy_sleep_emits_debug_log(monkeypatch):
|
||||
"""Debug log is emitted when DEBUG is enabled."""
|
||||
state = _make_state(energy_saving_enabled=True, energy_sleep_secs=2.0)
|
||||
logged = []
|
||||
monkeypatch.setattr(daemon.config, "DEBUG", True)
|
||||
monkeypatch.setattr(
|
||||
daemon.config, "_debug_log", lambda msg, **_kw: logged.append(msg)
|
||||
)
|
||||
daemon._energy_sleep(state, "wake up")
|
||||
assert any("wake up" in m for m in logged)
|
||||
assert state.stop.wait_calls == [2.0]
|
||||
|
||||
|
||||
def test_energy_sleep_waits_when_debug_off(monkeypatch):
|
||||
"""Wait is issued for the configured duration when DEBUG is off."""
|
||||
state = _make_state(energy_saving_enabled=True, energy_sleep_secs=1.5)
|
||||
monkeypatch.setattr(daemon.config, "DEBUG", False)
|
||||
daemon._energy_sleep(state, "reason")
|
||||
assert state.stop.wait_calls == [1.5]
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _try_connect
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_try_connect_no_available_interface_raises_system_exit(monkeypatch):
|
||||
"""NoAvailableMeshInterface propagates as SystemExit(1)."""
|
||||
|
||||
class _NoIface:
|
||||
def connect(self, *, active_candidate):
|
||||
raise daemon.interfaces.NoAvailableMeshInterface("none")
|
||||
|
||||
def extract_host_node_id(self, iface):
|
||||
return None
|
||||
|
||||
state = _make_state(active_candidate="serial0", configured_port="serial0")
|
||||
state.provider = _NoIface() # type: ignore[assignment]
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
with pytest.raises(SystemExit):
|
||||
daemon._try_connect(state)
|
||||
|
||||
|
||||
def test_try_connect_generic_failure_resets_candidate(monkeypatch):
|
||||
"""Connect failure in auto-detect mode clears the active candidate."""
|
||||
|
||||
class _FailProvider:
|
||||
def connect(self, *, active_candidate):
|
||||
raise OSError("device busy")
|
||||
|
||||
def extract_host_node_id(self, iface):
|
||||
return None
|
||||
|
||||
state = _make_state(active_candidate="serial0", configured_port=None)
|
||||
state.provider = _FailProvider() # type: ignore[assignment]
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
|
||||
|
||||
result = daemon._try_connect(state)
|
||||
assert result is False
|
||||
assert state.active_candidate is None
|
||||
assert state.announced_target is False
|
||||
|
||||
|
||||
def test_try_connect_sets_energy_session_deadline(monkeypatch):
|
||||
"""Energy-saving deadline is assigned when online duration is positive."""
|
||||
|
||||
class _OkProvider:
|
||||
def connect(self, *, active_candidate):
|
||||
return DummyInterface(), active_candidate, active_candidate
|
||||
|
||||
def extract_host_node_id(self, iface):
|
||||
return "!host"
|
||||
|
||||
state = _make_state(
|
||||
active_candidate="serial0",
|
||||
configured_port="serial0",
|
||||
energy_saving_enabled=True,
|
||||
energy_online_secs=30.0,
|
||||
)
|
||||
state.provider = _OkProvider() # type: ignore[assignment]
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
|
||||
monkeypatch.setattr(
|
||||
daemon.handlers, "register_host_node_id", lambda *_a, **_k: None
|
||||
)
|
||||
monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: "!host")
|
||||
monkeypatch.setattr(
|
||||
daemon.ingestors, "set_ingestor_node_id", lambda *_a, **_k: None
|
||||
)
|
||||
|
||||
result = daemon._try_connect(state)
|
||||
assert result is True
|
||||
assert state.energy_session_deadline is not None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _check_energy_saving
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_check_energy_saving_session_expired(monkeypatch):
|
||||
"""Iface is closed and True returned when the session deadline has passed."""
|
||||
state = _make_state(energy_saving_enabled=True)
|
||||
state.iface = DummyInterface()
|
||||
state.energy_session_deadline = 0.0
|
||||
monkeypatch.setattr(daemon.time, "monotonic", lambda: 1.0)
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
|
||||
result = daemon._check_energy_saving(state)
|
||||
assert result is True
|
||||
assert state.iface is None
|
||||
assert state.energy_session_deadline is None
|
||||
|
||||
|
||||
def test_check_energy_saving_ble_client_disconnected(monkeypatch):
|
||||
"""Iface is closed and True returned when the BLE client reference is gone."""
|
||||
state = _make_state(energy_saving_enabled=True)
|
||||
state.iface = DummyInterface(client_present=False)
|
||||
state.energy_session_deadline = None
|
||||
monkeypatch.setattr(daemon, "_is_ble_interface", lambda _: True)
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
|
||||
result = daemon._check_energy_saving(state)
|
||||
assert result is True
|
||||
assert state.iface is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _try_send_snapshot
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_try_send_snapshot_empty_nodes():
|
||||
"""Returns True without setting initial_snapshot_sent when no nodes exist."""
|
||||
|
||||
class _EmptyProvider:
|
||||
def node_snapshot_items(self, iface):
|
||||
return []
|
||||
|
||||
state = _make_state()
|
||||
state.iface = DummyInterface(nodes={})
|
||||
state.provider = _EmptyProvider() # type: ignore[assignment]
|
||||
|
||||
result = daemon._try_send_snapshot(state)
|
||||
assert result is True
|
||||
assert state.initial_snapshot_sent is False
|
||||
|
||||
|
||||
def test_try_send_snapshot_upsert_failure_is_non_fatal(monkeypatch):
|
||||
"""Upsert errors are logged but do not abort the snapshot pass."""
|
||||
|
||||
class _OneNodeProvider:
|
||||
def node_snapshot_items(self, iface):
|
||||
return [("!node1", {"id": 1})]
|
||||
|
||||
def _raise(*_a, **_k):
|
||||
raise ValueError("bad node")
|
||||
|
||||
state = _make_state()
|
||||
state.iface = DummyInterface()
|
||||
state.provider = _OneNodeProvider() # type: ignore[assignment]
|
||||
logged = []
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *a, **kw: logged.append(kw))
|
||||
monkeypatch.setattr(daemon.config, "DEBUG", False)
|
||||
monkeypatch.setattr(daemon.handlers, "upsert_node", _raise)
|
||||
|
||||
result = daemon._try_send_snapshot(state)
|
||||
assert result is True
|
||||
assert state.initial_snapshot_sent is True
|
||||
assert any(c.get("context") == "daemon.snapshot" for c in logged)
|
||||
|
||||
|
||||
def test_try_send_snapshot_upsert_failure_debug_payload(monkeypatch):
|
||||
"""The node payload is logged when DEBUG is enabled and upsert fails."""
|
||||
|
||||
class _OneNodeProvider:
|
||||
def node_snapshot_items(self, iface):
|
||||
return [("!node1", {"id": 1})]
|
||||
|
||||
def _raise(*_a, **_k):
|
||||
raise ValueError("bad")
|
||||
|
||||
state = _make_state()
|
||||
state.iface = DummyInterface()
|
||||
state.provider = _OneNodeProvider() # type: ignore[assignment]
|
||||
logged = []
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *a, **kw: logged.append(kw))
|
||||
monkeypatch.setattr(daemon.config, "DEBUG", True)
|
||||
monkeypatch.setattr(daemon.handlers, "upsert_node", _raise)
|
||||
|
||||
daemon._try_send_snapshot(state)
|
||||
assert any("node" in c for c in logged)
|
||||
|
||||
|
||||
def test_try_send_snapshot_outer_exception_resets_iface(monkeypatch):
|
||||
"""An exception from node_snapshot_items resets the interface and returns False."""
|
||||
|
||||
class _BrokenProvider:
|
||||
def node_snapshot_items(self, iface):
|
||||
raise RuntimeError("boom")
|
||||
|
||||
state = _make_state()
|
||||
state.iface = DummyInterface()
|
||||
state.provider = _BrokenProvider() # type: ignore[assignment]
|
||||
monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
|
||||
monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
|
||||
|
||||
result = daemon._try_send_snapshot(state)
|
||||
assert result is False
|
||||
assert state.iface is None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
# _check_inactivity_reconnect (additional branches)
# ---------------------------------------------------------------------------


def test_check_inactivity_reconnect_throttles_rapid_reconnects(monkeypatch):
    """A reconnect within the inactivity window is suppressed."""
    state = _make_state(inactivity_reconnect_secs=60.0)
    state.iface = DummyInterface(is_connected=False)
    state.iface_connected_at = 0.0
    state.last_inactivity_reconnect = 1.0  # recent

    monkeypatch.setattr(daemon.time, "monotonic", lambda: 10.0)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)

    assert daemon._check_inactivity_reconnect(state) is False


def test_check_inactivity_reconnect_uses_connected_at_when_no_packets(monkeypatch):
    """Uses iface_connected_at as the activity baseline when no packets seen."""
    state = _make_state(inactivity_reconnect_secs=60.0)
    state.iface = DummyInterface(is_connected=True)
    state.iface_connected_at = 5.0
    state.last_inactivity_reconnect = None

    monkeypatch.setattr(daemon.time, "monotonic", lambda: 10.0)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)

    # 10.0 - 5.0 = 5.0 < 60.0 → not triggered
    assert daemon._check_inactivity_reconnect(state) is False


def test_check_inactivity_reconnect_uses_now_when_no_baseline(monkeypatch):
    """Falls back to current time when neither packets nor connected_at is set."""
    state = _make_state(inactivity_reconnect_secs=60.0)
    state.iface = DummyInterface(is_connected=True)
    state.iface_connected_at = None
    state.last_inactivity_reconnect = None

    monkeypatch.setattr(daemon.time, "monotonic", lambda: 10.0)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)

    # latest_activity = now (10.0); inactivity_elapsed = 0.0 < 60.0 → not triggered
    assert daemon._check_inactivity_reconnect(state) is False


# ---------------------------------------------------------------------------
# _loop_iteration
# ---------------------------------------------------------------------------


def test_loop_iteration_connect_fails_returns_true(monkeypatch):
    """Returns True (continue) when iface is absent and connect fails."""
    state = _make_state()
    state.iface = None
    monkeypatch.setattr(daemon, "_try_connect", lambda s: False)
    assert daemon._loop_iteration(state) is True


def test_loop_iteration_energy_saving_triggers_returns_true(monkeypatch):
    """Returns True (continue) when energy saving disconnects the interface."""
    state = _make_state()
    state.iface = object()
    monkeypatch.setattr(daemon, "_check_energy_saving", lambda s: True)
    assert daemon._loop_iteration(state) is True


def test_loop_iteration_snapshot_fails_returns_true(monkeypatch):
    """Returns True (continue) when the initial snapshot fails."""
    state = _make_state()
    state.iface = object()
    state.initial_snapshot_sent = False
    monkeypatch.setattr(daemon, "_check_energy_saving", lambda s: False)
    monkeypatch.setattr(daemon, "_try_send_snapshot", lambda s: False)
    assert daemon._loop_iteration(state) is True


def test_loop_iteration_inactivity_triggers_returns_true(monkeypatch):
    """Returns True (continue) when inactivity reconnect fires."""
    state = _make_state()
    state.iface = object()
    state.initial_snapshot_sent = True
    monkeypatch.setattr(daemon, "_check_energy_saving", lambda s: False)
    monkeypatch.setattr(daemon, "_check_inactivity_reconnect", lambda s: True)
    assert daemon._loop_iteration(state) is True


def test_loop_iteration_full_pass_returns_false(monkeypatch):
    """Returns False (sleep) after a complete iteration with no early exits."""
    state = _make_state()
    state.iface = object()
    state.initial_snapshot_sent = True
    monkeypatch.setattr(daemon, "_check_energy_saving", lambda s: False)
    monkeypatch.setattr(daemon, "_check_inactivity_reconnect", lambda s: False)
    monkeypatch.setattr(
        daemon, "_process_ingestor_heartbeat", lambda iface, **_kw: False
    )
    monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
    assert daemon._loop_iteration(state) is False


# ---------------------------------------------------------------------------
# PROVIDER env-var selection
# ---------------------------------------------------------------------------


def _make_minimal_fake_provider(name: str):
    """Return a minimal provider-like object that causes main() to exit quickly."""

    class FakeIface:
        def close(self):
            return None

    class FakeProvider:
        def subscribe(self):
            return []

        def connect(self, *, active_candidate):
            return FakeIface(), "fake", active_candidate

        def extract_host_node_id(self, iface):
            return None

        def node_snapshot_items(self, iface):
            return []

    fp = FakeProvider()
    fp.name = name
    return fp


def _patch_daemon_for_fast_exit(monkeypatch):
    """Apply monkeypatches that make daemon.main() return after one iteration."""
    _configure_common_defaults(monkeypatch)
    monkeypatch.setattr(daemon.config, "CONNECTION", "fake")
    monkeypatch.setattr(
        daemon,
        "threading",
        types.SimpleNamespace(
            Event=AutoSetEvent,
            current_thread=daemon.threading.current_thread,
            main_thread=daemon.threading.main_thread,
        ),
    )
    monkeypatch.setattr(
        daemon.handlers, "register_host_node_id", lambda *_a, **_k: None
    )
    monkeypatch.setattr(daemon.handlers, "host_node_id", lambda: None)
    monkeypatch.setattr(daemon.handlers, "upsert_node", lambda *_a, **_k: None)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
    monkeypatch.setattr(
        daemon.ingestors, "set_ingestor_node_id", lambda *_a, **_k: None
    )
    monkeypatch.setattr(
        daemon.ingestors, "queue_ingestor_heartbeat", lambda *_a, **_k: True
    )


def _reload_config() -> types.ModuleType:
    """Reload and return the config module, picking up any env-var changes."""
    importlib.reload(_cfg_module)
    return _cfg_module


@pytest.fixture()
def reset_provider_config():
    """Reload config after the test so PROVIDER changes don't leak across tests."""
    yield
    import os

    os.environ.pop("PROVIDER", None)
    _reload_config()


@pytest.mark.parametrize(
    "env_value, expected",
    [
        (None, "meshtastic"),
        ("meshcore", "meshcore"),
    ],
)
def test_config_provider_env(monkeypatch, reset_provider_config, env_value, expected):
    """PROVIDER env var selects the provider; absent defaults to 'meshtastic'."""
    if env_value is None:
        monkeypatch.delenv("PROVIDER", raising=False)
    else:
        monkeypatch.setenv("PROVIDER", env_value)
    assert _reload_config().PROVIDER == expected


def test_config_provider_unknown_raises(monkeypatch, reset_provider_config):
    """An unrecognised PROVIDER value must raise ValueError at import time."""
    monkeypatch.setenv("PROVIDER", "reticulum")
    with pytest.raises(ValueError, match="PROVIDER"):
        _reload_config()


@pytest.mark.parametrize(
    "provider_name, module_path, class_name",
    [
        ("meshtastic", "data.mesh_ingestor.providers.meshtastic", "MeshtasticProvider"),
        ("meshcore", "data.mesh_ingestor.providers.meshcore", "MeshcoreProvider"),
    ],
)
def test_daemon_main_selects_provider(
    monkeypatch, provider_name, module_path, class_name
):
    """main() must instantiate the correct provider class based on PROVIDER."""
    mod = importlib.import_module(module_path)
    instantiated = []

    def make_provider():
        p = _make_minimal_fake_provider(provider_name)
        instantiated.append(p)
        return p

    _patch_daemon_for_fast_exit(monkeypatch)
    monkeypatch.setattr(daemon.config, "PROVIDER", provider_name)
    monkeypatch.setattr(mod, class_name, make_provider)

    daemon.main()
    assert len(instantiated) == 1
    assert instantiated[0].name == provider_name


@@ -1,185 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations

import base64
import io
import json
import sys

import pytest

mesh_pb2 = pytest.importorskip("meshtastic.protobuf.mesh_pb2")
telemetry_pb2 = pytest.importorskip("meshtastic.protobuf.telemetry_pb2")

from data.mesh_ingestor import decode_payload


def run_main_with_input(payload: dict) -> tuple[int, dict]:
    stdin = io.StringIO(json.dumps(payload))
    stdout = io.StringIO()
    original_stdin = sys.stdin
    original_stdout = sys.stdout
    try:
        sys.stdin = stdin
        sys.stdout = stdout
        status = decode_payload.main()
    finally:
        sys.stdin = original_stdin
        sys.stdout = original_stdout

    output = json.loads(stdout.getvalue() or "{}")
    return status, output


def test_decode_payload_position_success():
    position = mesh_pb2.Position()
    position.latitude_i = 525598720
    position.longitude_i = 136577024
    position.altitude = 11
    position.precision_bits = 13
    payload_b64 = base64.b64encode(position.SerializeToString()).decode("ascii")

    result = decode_payload._decode_payload(3, payload_b64)

    assert result["type"] == "POSITION_APP"
    assert result["payload"]["latitude_i"] == 525598720
    assert result["payload"]["longitude_i"] == 136577024
    assert result["payload"]["altitude"] == 11


def test_decode_payload_rejects_invalid_payload():
    result = decode_payload._decode_payload(3, "not-base64")

    assert result["error"].startswith("invalid-payload")
    assert "invalid-payload" in result["error"]


def test_decode_payload_rejects_unsupported_port():
    result = decode_payload._decode_payload(
        999, base64.b64encode(b"ok").decode("ascii")
    )

    assert result["error"] == "unsupported-port"
    assert result["portnum"] == 999


def test_main_handles_invalid_json():
    stdin = io.StringIO("nope")
    stdout = io.StringIO()
    original_stdin = sys.stdin
    original_stdout = sys.stdout
    try:
        sys.stdin = stdin
        sys.stdout = stdout
        status = decode_payload.main()
    finally:
        sys.stdin = original_stdin
        sys.stdout = original_stdout

    result = json.loads(stdout.getvalue())
    assert status == 1
    assert result["error"].startswith("invalid-json")


def test_main_requires_portnum():
    status, result = run_main_with_input(
        {"payload_b64": base64.b64encode(b"ok").decode("ascii")}
    )

    assert status == 1
    assert result["error"] == "missing-portnum"


def test_main_requires_integer_portnum():
    status, result = run_main_with_input(
        {"portnum": "3", "payload_b64": base64.b64encode(b"ok").decode("ascii")}
    )

    assert status == 1
    assert result["error"] == "missing-portnum"


def test_main_requires_payload():
    status, result = run_main_with_input({"portnum": 3})

    assert status == 1
    assert result["error"] == "missing-payload"


def test_main_requires_string_payload():
    status, result = run_main_with_input({"portnum": 3, "payload_b64": 123})

    assert status == 1
    assert result["error"] == "missing-payload"


def test_main_success_position_payload():
    position = mesh_pb2.Position()
    position.latitude_i = 525598720
    position.longitude_i = 136577024
    payload_b64 = base64.b64encode(position.SerializeToString()).decode("ascii")

    status, result = run_main_with_input({"portnum": 3, "payload_b64": payload_b64})

    assert status == 0
    assert result["type"] == "POSITION_APP"
    assert result["payload"]["latitude_i"] == 525598720


def test_decode_payload_handles_parse_failure():
    class BrokenMessage:
        def ParseFromString(self, _payload):
            raise ValueError("boom")

    decode_payload.PORTNUM_MAP[99] = ("BROKEN", BrokenMessage)
    payload_b64 = base64.b64encode(b"\x00").decode("ascii")

    result = decode_payload._decode_payload(99, payload_b64)

    assert result["error"].startswith("decode-failed")
    assert result["type"] == "BROKEN"
    decode_payload.PORTNUM_MAP.pop(99, None)


def test_main_entrypoint_executes():
    import runpy

    payload = {"portnum": 3, "payload_b64": base64.b64encode(b"").decode("ascii")}
    stdin = io.StringIO(json.dumps(payload))
    stdout = io.StringIO()
    original_stdin = sys.stdin
    original_stdout = sys.stdout
    try:
        sys.stdin = stdin
        sys.stdout = stdout
        try:
            runpy.run_module("data.mesh_ingestor.decode_payload", run_name="__main__")
        except SystemExit as exc:
            assert exc.code == 0
    finally:
        sys.stdin = original_stdin
        sys.stdout = original_stdout


def test_decode_payload_telemetry_success():
    telemetry = telemetry_pb2.Telemetry()
    telemetry.time = 123
    payload_b64 = base64.b64encode(telemetry.SerializeToString()).decode("ascii")

    result = decode_payload._decode_payload(67, payload_b64)

    assert result["type"] == "TELEMETRY_APP"
    assert result["payload"]["time"] == 123
@@ -1,232 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.events`."""

from __future__ import annotations

import sys
from pathlib import Path

import pytest

REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(REPO_ROOT))

from data.mesh_ingestor.events import (  # noqa: E402 - path setup
    IngestorHeartbeat,
    MessageEvent,
    NeighborEntry,
    NeighborsSnapshot,
    PositionEvent,
    TelemetryEvent,
    TraceEvent,
)


def test_message_event_schema():
    assert MessageEvent.__required_keys__ == frozenset({"id", "rx_time", "rx_iso"})
    assert "text" in MessageEvent.__optional_keys__
    assert "from_id" in MessageEvent.__optional_keys__
    assert "snr" in MessageEvent.__optional_keys__
    assert "rssi" in MessageEvent.__optional_keys__


def test_message_event_requires_id_rx_time_rx_iso():
    event: MessageEvent = {
        "id": 1,
        "rx_time": 1700000000,
        "rx_iso": "2023-11-14T00:00:00Z",
    }
    assert event["id"] == 1
    assert event["rx_time"] == 1700000000
    assert event["rx_iso"] == "2023-11-14T00:00:00Z"


def test_message_event_accepts_optional_fields():
    event: MessageEvent = {
        "id": 2,
        "rx_time": 1700000001,
        "rx_iso": "2023-11-14T00:00:01Z",
        "text": "hello",
        "from_id": "!aabbccdd",
        "snr": 4.5,
        "rssi": -90,
    }
    assert event["text"] == "hello"
    assert event["snr"] == pytest.approx(4.5)


def test_position_event_schema():
    assert PositionEvent.__required_keys__ == frozenset({"id", "rx_time", "rx_iso"})
    assert "latitude" in PositionEvent.__optional_keys__
    assert "longitude" in PositionEvent.__optional_keys__
    assert "node_id" in PositionEvent.__optional_keys__


def test_position_event_required_fields():
    event: PositionEvent = {
        "id": 10,
        "rx_time": 1700000002,
        "rx_iso": "2023-11-14T00:00:02Z",
    }
    assert event["id"] == 10


def test_position_event_optional_fields():
    event: PositionEvent = {
        "id": 11,
        "rx_time": 1700000003,
        "rx_iso": "2023-11-14T00:00:03Z",
        "latitude": 37.7749,
        "longitude": -122.4194,
        "altitude": 10.0,
        "node_id": "!aabbccdd",
    }
    assert event["latitude"] == pytest.approx(37.7749)


def test_telemetry_event_schema():
    assert TelemetryEvent.__required_keys__ == frozenset({"id", "rx_time", "rx_iso"})
    assert "payload_b64" in TelemetryEvent.__optional_keys__
    assert "snr" in TelemetryEvent.__optional_keys__


def test_telemetry_event_required_fields():
    event: TelemetryEvent = {
        "id": 20,
        "rx_time": 1700000004,
        "rx_iso": "2023-11-14T00:00:04Z",
    }
    assert event["id"] == 20


def test_telemetry_event_optional_fields():
    event: TelemetryEvent = {
        "id": 21,
        "rx_time": 1700000005,
        "rx_iso": "2023-11-14T00:00:05Z",
        "channel": 0,
        "payload_b64": "AAEC",
        "snr": 3.0,
    }
    assert event["payload_b64"] == "AAEC"


def test_neighbor_entry_schema():
    assert NeighborEntry.__required_keys__ == frozenset({"rx_time", "rx_iso"})
    assert "neighbor_id" in NeighborEntry.__optional_keys__
    assert "snr" in NeighborEntry.__optional_keys__


def test_neighbor_entry_required_fields():
    entry: NeighborEntry = {"rx_time": 1700000006, "rx_iso": "2023-11-14T00:00:06Z"}
    assert entry["rx_time"] == 1700000006


def test_neighbor_entry_optional_fields():
    entry: NeighborEntry = {
        "rx_time": 1700000007,
        "rx_iso": "2023-11-14T00:00:07Z",
        "neighbor_id": "!11223344",
        "snr": 6.0,
    }
    assert entry["neighbor_id"] == "!11223344"


def test_neighbors_snapshot_schema():
    assert NeighborsSnapshot.__required_keys__ == frozenset(
        {"node_id", "rx_time", "rx_iso"}
    )
    assert "neighbors" in NeighborsSnapshot.__optional_keys__
    assert "node_broadcast_interval_secs" in NeighborsSnapshot.__optional_keys__


def test_neighbors_snapshot_required_fields():
    snap: NeighborsSnapshot = {
        "node_id": "!aabbccdd",
        "rx_time": 1700000008,
        "rx_iso": "2023-11-14T00:00:08Z",
    }
    assert snap["node_id"] == "!aabbccdd"


def test_neighbors_snapshot_optional_fields():
    snap: NeighborsSnapshot = {
        "node_id": "!aabbccdd",
        "rx_time": 1700000009,
        "rx_iso": "2023-11-14T00:00:09Z",
        "neighbors": [],
        "node_broadcast_interval_secs": 900,
    }
    assert snap["node_broadcast_interval_secs"] == 900


def test_trace_event_schema():
    assert TraceEvent.__required_keys__ == frozenset({"hops", "rx_time", "rx_iso"})
    assert "elapsed_ms" in TraceEvent.__optional_keys__
    assert "snr" in TraceEvent.__optional_keys__


def test_trace_event_required_fields():
    event: TraceEvent = {
        "hops": [1, 2, 3],
        "rx_time": 1700000010,
        "rx_iso": "2023-11-14T00:00:10Z",
    }
    assert event["hops"] == [1, 2, 3]


def test_trace_event_optional_fields():
    event: TraceEvent = {
        "hops": [4, 5],
        "rx_time": 1700000011,
        "rx_iso": "2023-11-14T00:00:11Z",
        "elapsed_ms": 42,
        "snr": 2.5,
    }
    assert event["elapsed_ms"] == 42


def test_ingestor_heartbeat_schema():
    # IngestorHeartbeat uses total=True with NotRequired fields. Under
    # `from __future__ import annotations` the TypedDict metaclass cannot
    # evaluate the annotation strings at class creation time, so
    # NotRequired keys appear in __required_keys__ rather than
    # __optional_keys__. Verify the four always-present keys are included.
    always_required = {"node_id", "start_time", "last_seen_time", "version"}
    assert always_required <= IngestorHeartbeat.__required_keys__


def test_ingestor_heartbeat_all_fields():
    hb: IngestorHeartbeat = {
        "node_id": "!aabbccdd",
        "start_time": 1700000000,
        "last_seen_time": 1700000012,
        "version": "0.5.12",
        "lora_freq": 906875,
        "modem_preset": "LONG_FAST",
    }
    assert hb["version"] == "0.5.12"
    assert hb["lora_freq"] == 906875


def test_ingestor_heartbeat_without_optional_fields():
    hb: IngestorHeartbeat = {
        "node_id": "!aabbccdd",
        "start_time": 1700000000,
        "last_seen_time": 1700000013,
        "version": "0.5.12",
    }
    assert "lora_freq" not in hb
+1
-288
@@ -788,7 +788,6 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 868
    mesh.config.MODEM_PRESET = "MediumFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 123,
@@ -824,7 +823,6 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
    assert payload["rssi"] == -70
    assert payload["reply_id"] is None
    assert payload["emoji"] is None
    assert payload["ingestor"] == "!f00dbabe"
    assert payload["lora_freq"] == 868
    assert payload["modem_preset"] == "MediumFast"
    assert priority == mesh._MESSAGE_POST_PRIORITY
@@ -881,7 +879,6 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 868
    mesh.config.MODEM_PRESET = "MediumFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 200498337,
@@ -949,7 +946,6 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
    )
    assert payload["lora_freq"] == 868
    assert payload["modem_preset"] == "MediumFast"
    assert payload["ingestor"] == "!f00dbabe"
    assert payload["raw"]["time"] == 1_758_624_189


@@ -964,7 +960,6 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 868
    mesh.config.MODEM_PRESET = "MediumFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 2049886869,
@@ -1009,7 +1004,6 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
    assert neighbors[2]["neighbor_num"] == 0x0BAD_C0DE
    assert payload["lora_freq"] == 868
    assert payload["modem_preset"] == "MediumFast"
    assert payload["ingestor"] == "!f00dbabe"


def test_store_packet_dict_handles_nodeinfo_packet(mesh_module, monkeypatch):
@@ -2288,7 +2282,6 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 868
    mesh.config.MODEM_PRESET = "MediumFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 1_256_091_342,
@@ -2341,8 +2334,6 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
    assert payload["current"] == pytest.approx(0.0715)
    assert payload["lora_freq"] == 868
    assert payload["modem_preset"] == "MediumFast"
    assert payload["ingestor"] == "!f00dbabe"
    assert payload["telemetry_type"] == "device"


def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatch):
@@ -2422,144 +2413,6 @@ def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatc
    assert payload["soil_temperature"] == pytest.approx(18.9)
    assert payload["lora_freq"] == 868
    assert payload["modem_preset"] == "MediumFast"
    assert payload["telemetry_type"] == "environment"


def test_store_packet_dict_handles_power_telemetry(mesh_module, monkeypatch):
    """Power-metrics packets are tagged telemetry_type='power'."""
    mesh = mesh_module
    captured = []
    monkeypatch.setattr(
        mesh,
        "_queue_post_json",
        lambda path, payload, *, priority: captured.append((path, payload, priority)),
    )

    packet = {
        "id": 3_000_000_001,
        "rxTime": 1_758_030_000,
        "fromId": "!aabbccdd",
        "toId": "^all",
        "decoded": {
            "portnum": "TELEMETRY_APP",
            "telemetry": {
                "time": 1_758_030_000,
                "powerMetrics": {
                    "ch1Voltage": 5.02,
                    "ch1Current": 0.48,
                },
            },
        },
    }

    mesh.store_packet_dict(packet)

    assert captured
    _, payload, _ = captured[0]
    assert payload["telemetry_type"] == "power"


def test_store_packet_dict_handles_air_quality_telemetry(mesh_module, monkeypatch):
    """Air-quality-metrics packets are tagged telemetry_type='air_quality'."""
    mesh = mesh_module
    captured = []
    monkeypatch.setattr(
        mesh,
        "_queue_post_json",
        lambda path, payload, *, priority: captured.append((path, payload, priority)),
    )

    packet = {
        "id": 3_000_000_003,
        "rxTime": 1_758_032_000,
        "fromId": "!aabbccdd",
        "toId": "^all",
        "decoded": {
            "portnum": "TELEMETRY_APP",
            "telemetry": {
                "time": 1_758_032_000,
                "airQualityMetrics": {
                    "pm10Standard": 4,
                    "pm25Standard": 8,
                    "iaq": 65,
                },
            },
        },
    }

    mesh.store_packet_dict(packet)

    assert captured
    _, payload, _ = captured[0]
    assert payload["telemetry_type"] == "air_quality"


def test_store_packet_dict_telemetry_type_absent_for_unknown_subtype(
    mesh_module, monkeypatch
):
    """Packets with no recognised sub-object do not include telemetry_type in the payload."""
    mesh = mesh_module
    captured = []
    monkeypatch.setattr(
        mesh,
        "_queue_post_json",
        lambda path, payload, *, priority: captured.append((path, payload, priority)),
    )

    packet = {
        "id": 3_000_000_002,
        "rxTime": 1_758_031_000,
        "fromId": "!aabbccdd",
        "toId": "^all",
        "decoded": {
            "portnum": "TELEMETRY_APP",
            "telemetry": {
                "time": 1_758_031_000,
                "someUnknownMetrics": {"foo": 1},
            },
        },
    }

    mesh.store_packet_dict(packet)

    assert captured
    _, payload, _ = captured[0]
    assert "telemetry_type" not in payload


def test_store_packet_dict_invalid_telemetry_type_is_dropped(mesh_module, monkeypatch):
    """A telemetry_type value that isn't in _VALID_TELEMETRY_TYPES is omitted from the payload."""
    mesh = mesh_module
    captured = []
    monkeypatch.setattr(
        mesh,
        "_queue_post_json",
        lambda path, payload, *, priority: captured.append((path, payload, priority)),
    )

    # Inject a bad type by monkey-patching the validator constant so we can
    # verify the drop path without needing a real packet with an impossible type.
    monkeypatch.setattr(mesh.handlers, "_VALID_TELEMETRY_TYPES", frozenset())

    packet = {
        "id": 3_000_000_010,
        "rxTime": 1_758_040_000,
        "fromId": "!aabbccdd",
        "toId": "^all",
        "decoded": {
            "portnum": "TELEMETRY_APP",
            "telemetry": {
                "time": 1_758_040_000,
                "deviceMetrics": {"batteryLevel": 80},
            },
        },
    }

    mesh.store_packet_dict(packet)

    assert captured
    _, payload, _ = captured[0]
    assert "telemetry_type" not in payload


def test_store_packet_dict_throttles_host_telemetry(mesh_module, monkeypatch):
@@ -2624,7 +2477,6 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 915
    mesh.config.MODEM_PRESET = "LongFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 2_934_054_466,
@@ -2666,7 +2518,6 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):
    assert "elapsed_ms" in payload
    assert payload["lora_freq"] == 915
    assert payload["modem_preset"] == "LongFast"
    assert payload["ingestor"] == "!f00dbabe"


def test_traceroute_hop_normalization_supports_mappings(mesh_module, monkeypatch):
@@ -3023,7 +2874,7 @@ def test_default_serial_targets_deduplicates(mesh_module, monkeypatch):
            return ["/dev/ttyACM1"]
        return []

    monkeypatch.setattr(mesh.connection.glob, "glob", fake_glob)
    monkeypatch.setattr(mesh.interfaces.glob, "glob", fake_glob)

    targets = mesh._default_serial_targets()

@@ -3198,32 +3049,9 @@ def test_queue_ingestor_heartbeat_enqueues_and_throttles(mesh_module, monkeypatc
    assert payload["version"] == mesh.VERSION
    assert payload["lora_freq"] == 915
    assert payload["modem_preset"] == "LongFast"
    assert payload["protocol"] == "meshtastic"
    assert priority == mesh.queue._INGESTOR_POST_PRIORITY


def test_queue_ingestor_heartbeat_protocol_meshcore(mesh_module, monkeypatch):
    """Heartbeat payload must carry the configured PROVIDER as its protocol."""
    mesh = mesh_module
    captured = []

    monkeypatch.setattr(
        mesh.queue,
        "_queue_post_json",
        lambda path, payload, *, priority, send=None: captured.append(payload),
    )

    mesh.ingestors.STATE.last_heartbeat = None
    mesh.ingestors.STATE.node_id = None
    mesh.config.PROVIDER = "meshcore"

    mesh.ingestors.set_ingestor_node_id("!aabbccdd")
    mesh.ingestors.queue_ingestor_heartbeat(force=True)

    assert len(captured) == 1, "expected exactly one heartbeat payload"
    assert captured[0]["protocol"] == "meshcore"


def test_mesh_version_export_matches_package(mesh_module):
|
||||
import data
|
||||
|
||||
@@ -3631,118 +3459,3 @@ def test_on_receive_skips_seen_packets(mesh_module):
|
||||
mesh.on_receive(packet, interface=None)
|
||||
|
||||
assert packet["_potatomesh_seen"] is True
|
||||
|
||||
|
||||
def test_upsert_node_includes_ingestor_key(mesh_module, monkeypatch):
|
||||
"""upsert_node must attach the host node ID so /api/nodes can resolve protocol."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
mesh.register_host_node_id("!aabbccdd")
|
||||
|
||||
mesh.upsert_node("!deadbeef", {"user": {"shortName": "X"}})
|
||||
|
||||
assert captured
|
||||
_, payload, _ = captured[0]
|
||||
assert payload.get("ingestor") == "!aabbccdd"
|
||||
|
||||
|
||||
def test_store_packet_dict_nodeinfo_includes_ingestor_key(mesh_module, monkeypatch):
|
||||
"""store_nodeinfo_packet must include the ingestor key in the /api/nodes payload."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
mesh.register_host_node_id("!11223344")
|
||||
|
||||
packet = {
|
||||
"id": 1,
|
||||
"rxTime": 1_700_000_000,
|
||||
"fromId": "!aabbccdd",
|
||||
"decoded": {
|
||||
"portnum": "NODEINFO_APP",
|
||||
"user": {"id": "!aabbccdd", "shortName": "N"},
|
||||
},
|
||||
}
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
node_calls = [(p, pl) for p, pl, _ in captured if p == "/api/nodes"]
|
||||
assert node_calls, "Expected a /api/nodes POST"
|
||||
_, payload = node_calls[0]
|
||||
assert payload.get("ingestor") == "!11223344"
|
||||
|
||||
|
||||
def test_store_packet_dict_router_heartbeat(mesh_module, monkeypatch):
|
||||
"""STORE_FORWARD_APP ROUTER_HEARTBEAT upserts the node at low priority."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda path, payload, *, priority: captured.append((path, payload, priority)),
|
||||
)
|
||||
mesh.register_host_node_id("!f00dbabe")
|
||||
|
||||
packet = {
|
||||
"id": 2377284085,
|
||||
"rxTime": 1_774_868_197,
|
||||
"fromId": "!435a7fbc",
|
||||
"toId": "^all",
|
||||
"hopLimit": "2",
|
||||
"rxSnr": "-12.25",
|
||||
"rxRssi": "-110",
|
||||
"decoded": {
|
||||
"portnum": "STORE_FORWARD_APP",
|
||||
"storeforward": {
|
||||
"heartbeat": {"period": "900"},
|
||||
"rr": "ROUTER_HEARTBEAT",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
assert captured, "Expected a POST for router heartbeat"
|
||||
path, payload, priority = captured[0]
|
||||
assert path == "/api/nodes"
|
||||
assert priority == mesh._DEFAULT_POST_PRIORITY
|
||||
assert "!435a7fbc" in payload
|
||||
node_entry = payload["!435a7fbc"]
|
||||
assert node_entry["lastHeard"] == 1_774_868_197
|
||||
assert payload.get("ingestor") == "!f00dbabe"
|
||||
assert set(node_entry.keys()) == {
|
||||
"lastHeard"
|
||||
}, "Heartbeat must only set lastHeard, nothing else"
|
||||
|
||||
|
||||
def test_store_packet_dict_store_forward_non_heartbeat_ignored(
|
||||
mesh_module, monkeypatch
|
||||
):
|
||||
"""STORE_FORWARD_APP packets that are not ROUTER_HEARTBEAT are dropped."""
|
||||
mesh = mesh_module
|
||||
captured = []
|
||||
monkeypatch.setattr(
|
||||
mesh,
|
||||
"_queue_post_json",
|
||||
lambda *a, **kw: captured.append(a),
|
||||
)
|
||||
|
||||
packet = {
|
||||
"id": 1,
|
||||
"rxTime": 1_700_000_000,
|
||||
"fromId": "!aabbccdd",
|
||||
"decoded": {
|
||||
"portnum": "STORE_FORWARD_APP",
|
||||
"storeforward": {"rr": "ROUTER_CLIENT_RESPONSE"},
|
||||
},
|
||||
}
|
||||
mesh.store_packet_dict(packet)
|
||||
|
||||
assert not captured, "Non-heartbeat STORE_FORWARD_APP must not be queued"
|
||||
|
||||
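The telemetry tests above pin down one rule: a metrics family that is not on the allow-list must never produce a `telemetry_type` key in the posted payload. A minimal sketch of that filtering rule — the names `_VALID_TELEMETRY_TYPES` and `build_telemetry_payload` mirror the tests but are illustrative, not the project's actual implementation:

```python
# Sketch of the allow-list drop behaviour exercised by the tests above.
_VALID_TELEMETRY_TYPES = frozenset({"deviceMetrics", "environmentMetrics", "powerMetrics"})


def build_telemetry_payload(telemetry: dict) -> dict:
    """Copy telemetry fields, omitting telemetry_type when the family is unknown."""
    payload = {"time": telemetry.get("time")}
    for key, value in telemetry.items():
        if key == "time":
            continue
        if key in _VALID_TELEMETRY_TYPES:
            # Known metric family: record it as the telemetry_type.
            payload["telemetry_type"] = key
            payload[key] = value
        # Unknown families (e.g. someUnknownMetrics) are silently dropped,
        # so "telemetry_type" never appears for them.
    return payload
```

With an empty allow-list (as the monkeypatched test forces), even `deviceMetrics` falls through the drop path, which is exactly what the second test asserts.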
@@ -1,74 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for :mod:`data.mesh_ingestor.node_identity`."""

from __future__ import annotations

import sys
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(REPO_ROOT))

from data.mesh_ingestor.node_identity import (  # noqa: E402 - path setup
    canonical_node_id,
    node_num_from_id,
)


def test_canonical_node_id_accepts_numeric():
    assert canonical_node_id(1) == "!00000001"
    assert canonical_node_id(0xABCDEF01) == "!abcdef01"
    assert canonical_node_id(1.0) == "!00000001"


def test_canonical_node_id_accepts_string_forms():
    assert canonical_node_id("!ABCDEF01") == "!abcdef01"
    assert canonical_node_id("0xABCDEF01") == "!abcdef01"
    assert canonical_node_id("abcdef01") == "!abcdef01"
    assert canonical_node_id("123") == "!0000007b"


def test_canonical_node_id_passthrough_caret_destinations():
    assert canonical_node_id("^all") == "^all"


def test_node_num_from_id_parses_canonical_and_hex():
    assert node_num_from_id("!abcdef01") == 0xABCDEF01
    assert node_num_from_id("abcdef01") == 0xABCDEF01
    assert node_num_from_id("0xabcdef01") == 0xABCDEF01
    assert node_num_from_id(123) == 123


def test_canonical_node_id_rejects_none_and_empty():
    assert canonical_node_id(None) is None
    assert canonical_node_id("") is None
    assert canonical_node_id(" ") is None


def test_canonical_node_id_rejects_negative():
    assert canonical_node_id(-1) is None
    assert canonical_node_id(-0xABCDEF01) is None


def test_canonical_node_id_truncates_overflow():
    # Values wider than 32 bits are masked, not rejected.
    assert canonical_node_id(0x1_ABCDEF01) == "!abcdef01"


def test_node_num_from_id_rejects_none_and_empty():
    assert node_num_from_id(None) is None
    assert node_num_from_id("") is None
    assert node_num_from_id("not-hex") is None
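The removed tests above fully specify the canonicalisation contract: numeric inputs are zero-padded 8-digit hex, strings accept `!`, `0x`, bare-hex, and decimal forms, `^`-prefixed broadcast destinations pass through, negatives are rejected, and values wider than 32 bits are masked. A behaviourally equivalent sketch, inferred purely from those assertions (the real `data.mesh_ingestor.node_identity` module may be implemented differently):

```python
def canonical_node_id(value):
    """Normalize a node ID to '!xxxxxxxx'; sketch inferred from the tests above."""
    if value is None:
        return None
    if isinstance(value, (int, float)):
        num = int(value)
        return None if num < 0 else f"!{num & 0xFFFFFFFF:08x}"
    text = str(value).strip()
    if not text:
        return None
    if text.startswith("^"):
        return text  # broadcast destinations like '^all' pass through unchanged
    if text.startswith("!"):
        text = text[1:]
    if text.lower().startswith("0x"):
        bases, text = [16], text[2:]  # explicit hex prefix
    else:
        bases = [10, 16]  # decimal first ("123" -> 0x7b), hex fallback ("abcdef01")
    for base in bases:
        try:
            num = int(text, base)
        except ValueError:
            continue
        return None if num < 0 else f"!{num & 0xFFFFFFFF:08x}"
    return None


def node_num_from_id(value):
    """Parse '!hex', bare hex, or '0xhex' into an integer node number."""
    if value is None:
        return None
    if isinstance(value, int):
        return value
    text = str(value).strip().lstrip("!")
    if text.lower().startswith("0x"):
        text = text[2:]
    if not text:
        return None
    try:
        return int(text, 16)
    except ValueError:
        return None
```

Note the asymmetry the tests imply: `canonical_node_id("123")` is treated as decimal, while `node_num_from_id` always parses hex.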
File diff suppressed because it is too large
@@ -55,38 +55,8 @@ def _javascript_package_version() -> str:
    raise AssertionError("package.json does not expose a string version")


def _flutter_package_version() -> str:
    pubspec_path = REPO_ROOT / "app" / "pubspec.yaml"
    for line in pubspec_path.read_text(encoding="utf-8").splitlines():
        if line.startswith("version:"):
            version = line.split(":", 1)[1].strip()
            if version:
                return version
            break
    raise AssertionError("pubspec.yaml does not expose a version")


def _rust_package_version() -> str:
    cargo_path = REPO_ROOT / "matrix" / "Cargo.toml"
    inside_package = False
    for line in cargo_path.read_text(encoding="utf-8").splitlines():
        stripped = line.strip()
        if stripped == "[package]":
            inside_package = True
            continue
        if inside_package and stripped.startswith("[") and stripped.endswith("]"):
            break
        if inside_package:
            literal = re.match(
                r'version\s*=\s*["\'](?P<version>[^"\']+)["\']', stripped
            )
            if literal:
                return literal.group("version")
    raise AssertionError("Cargo.toml does not expose a package version")


def test_version_identifiers_match_across_languages() -> None:
    """Guard against version drift between Python, Ruby, JavaScript, Flutter, and Rust."""
    """Guard against version drift between Python, Ruby, and JavaScript."""

    python_version = getattr(data, "__version__", None)
    assert (
@@ -95,13 +65,5 @@ def test_version_identifiers_match_across_languages() -> None:

    ruby_version = _ruby_fallback_version()
    javascript_version = _javascript_package_version()
    flutter_version = _flutter_package_version()
    rust_version = _rust_package_version()

    assert (
        python_version
        == ruby_version
        == javascript_version
        == flutter_version
        == rust_version
    )
    assert python_version == ruby_version == javascript_version
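The removed `_rust_package_version` helper above pulls the version out of Cargo.toml's `[package]` table with a quoted-literal regex. That matching step can be exercised in isolation — the wrapper name below is hypothetical, but the pattern is the one from the diff:

```python
import re

# The same pattern the removed helper used: version = "x.y.z" or version = 'x.y.z'.
_VERSION_RE = re.compile(r'version\s*=\s*["\'](?P<version>[^"\']+)["\']')


def match_version_literal(stripped_line):
    """Return the quoted version from a stripped TOML line, or None."""
    literal = _VERSION_RE.match(stripped_line)
    return literal.group("version") if literal else None
```

Because `re.match` anchors at the start of the line, unrelated keys such as `edition = "2021"` never match, which is why the helper could scan the `[package]` table line by line.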
@@ -23,9 +23,6 @@ ENV BUNDLE_FORCE_RUBY_PLATFORM=true
# Install build dependencies and SQLite3
RUN apk add --no-cache \
    build-base \
    python3 \
    py3-pip \
    py3-virtualenv \
    sqlite-dev \
    linux-headers \
    pkgconfig
@@ -41,16 +38,11 @@ RUN bundle config set --local force_ruby_platform true && \
    bundle config set --local without 'development test' && \
    bundle install --jobs=4 --retry=3

# Install Meshtastic decoder dependencies in a dedicated venv
RUN python3 -m venv /opt/meshtastic-venv && \
    /opt/meshtastic-venv/bin/pip install --no-cache-dir meshtastic protobuf

# Production stage
FROM ruby:3.3-alpine AS production

# Install runtime dependencies
RUN apk add --no-cache \
    python3 \
    sqlite \
    tzdata \
    curl
@@ -64,7 +56,6 @@ WORKDIR /app

# Copy installed gems from builder stage
COPY --from=builder /usr/local/bundle /usr/local/bundle
COPY --from=builder /opt/meshtastic-venv /opt/meshtastic-venv

# Copy application code (excluding the Dockerfile which is not required at runtime)
COPY --chown=potatomesh:potatomesh web/app.rb ./
@@ -79,7 +70,6 @@ COPY --chown=potatomesh:potatomesh web/scripts ./scripts

# Copy SQL schema files from data directory
COPY --chown=potatomesh:potatomesh data/*.sql /data/
COPY --chown=potatomesh:potatomesh data/mesh_ingestor/decode_payload.py /app/data/mesh_ingestor/decode_payload.py

# Create data and configuration directories with correct ownership
RUN mkdir -p /app/.local/share/potato-mesh \
@@ -95,7 +85,6 @@ EXPOSE 41447
# Default environment variables (can be overridden by host)
ENV RACK_ENV=production \
    APP_ENV=production \
    MESHTASTIC_PYTHON=/opt/meshtastic-venv/bin/python \
    XDG_DATA_HOME=/app/.local/share \
    XDG_CONFIG_HOME=/app/.config \
    SITE_NAME="PotatoMesh Demo" \
@@ -49,12 +49,6 @@ require_relative "application/worker_pool"
require_relative "application/federation"
require_relative "application/prometheus"
require_relative "application/queries"
require_relative "application/meshtastic/channel_names"
require_relative "application/meshtastic/channel_hash"
require_relative "application/meshtastic/protobuf"
require_relative "application/meshtastic/rainbow_table"
require_relative "application/meshtastic/cipher"
require_relative "application/meshtastic/payload_decoder"
require_relative "application/data_processing"
require_relative "application/filesystem"
require_relative "application/instances"
@@ -139,10 +133,7 @@ module PotatoMesh
      set :public_folder, File.expand_path("../../public", __dir__)
      set :views, File.expand_path("../../views", __dir__)
      set :federation_thread, nil
      set :initial_federation_thread, nil
      set :federation_worker_pool, nil
      set :federation_shutdown_requested, false
      set :federation_shutdown_hook_installed, false
      set :port, resolve_port
      set :bind, DEFAULT_BIND_ADDRESS

@@ -157,8 +148,8 @@ module PotatoMesh

      perform_initial_filesystem_setup!
      cleanup_legacy_well_known_artifacts
      ensure_schema_upgrades
      init_db unless db_schema_present?
      ensure_schema_upgrades

      log_instance_domain_resolution
      log_instance_public_key
File diff suppressed because it is too large
@@ -111,77 +111,51 @@ module PotatoMesh
      #
      # @return [void]
      def ensure_schema_upgrades
        FileUtils.mkdir_p(File.dirname(PotatoMesh::Config.db_path))
        db = open_database

        node_table_exists = db.get_first_value(
          "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='nodes'",
        ).to_i > 0
        if node_table_exists
          node_columns = db.execute("PRAGMA table_info(nodes)").map { |row| row[1] }
          unless node_columns.include?("precision_bits")
            db.execute("ALTER TABLE nodes ADD COLUMN precision_bits INTEGER")
            node_columns << "precision_bits"
          end

          unless node_columns.include?("lora_freq")
            db.execute("ALTER TABLE nodes ADD COLUMN lora_freq INTEGER")
          end

          unless node_columns.include?("modem_preset")
            db.execute("ALTER TABLE nodes ADD COLUMN modem_preset TEXT")
          end

          unless node_columns.include?("protocol")
            db.execute("ALTER TABLE nodes ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
            db.execute("UPDATE nodes SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
          end
        node_columns = db.execute("PRAGMA table_info(nodes)").map { |row| row[1] }
        unless node_columns.include?("precision_bits")
          db.execute("ALTER TABLE nodes ADD COLUMN precision_bits INTEGER")
          node_columns << "precision_bits"
        end

        message_table_exists = db.get_first_value(
          "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='messages'",
        ).to_i > 0
        message_columns = message_table_exists ? db.execute("PRAGMA table_info(messages)").map { |row| row[1] } : []
        unless node_columns.include?("lora_freq")
          db.execute("ALTER TABLE nodes ADD COLUMN lora_freq INTEGER")
        end

        if message_table_exists
          unless message_columns.include?("lora_freq")
            db.execute("ALTER TABLE messages ADD COLUMN lora_freq INTEGER")
          end
        unless node_columns.include?("modem_preset")
          db.execute("ALTER TABLE nodes ADD COLUMN modem_preset TEXT")
        end

          unless message_columns.include?("modem_preset")
            db.execute("ALTER TABLE messages ADD COLUMN modem_preset TEXT")
          end
        message_columns = db.execute("PRAGMA table_info(messages)").map { |row| row[1] }

          unless message_columns.include?("channel_name")
            db.execute("ALTER TABLE messages ADD COLUMN channel_name TEXT")
          end
        unless message_columns.include?("lora_freq")
          db.execute("ALTER TABLE messages ADD COLUMN lora_freq INTEGER")
        end

          unless message_columns.include?("reply_id")
            db.execute("ALTER TABLE messages ADD COLUMN reply_id INTEGER")
            message_columns << "reply_id"
          end
        unless message_columns.include?("modem_preset")
          db.execute("ALTER TABLE messages ADD COLUMN modem_preset TEXT")
        end

          unless message_columns.include?("emoji")
            db.execute("ALTER TABLE messages ADD COLUMN emoji TEXT")
            message_columns << "emoji"
          end
        unless message_columns.include?("channel_name")
          db.execute("ALTER TABLE messages ADD COLUMN channel_name TEXT")
        end

          unless message_columns.include?("ingestor")
            db.execute("ALTER TABLE messages ADD COLUMN ingestor TEXT")
          end
        unless message_columns.include?("reply_id")
          db.execute("ALTER TABLE messages ADD COLUMN reply_id INTEGER")
          message_columns << "reply_id"
        end

          unless message_columns.include?("protocol")
            db.execute("ALTER TABLE messages ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
            db.execute("UPDATE messages SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
          end
        unless message_columns.include?("emoji")
          db.execute("ALTER TABLE messages ADD COLUMN emoji TEXT")
          message_columns << "emoji"
        end

          reply_index_exists =
            db.get_first_value(
              "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_messages_reply_id'",
            ).to_i > 0
          unless reply_index_exists
            db.execute("CREATE INDEX IF NOT EXISTS idx_messages_reply_id ON messages(reply_id)")
          end
        reply_index_exists =
          db.get_first_value(
            "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_messages_reply_id'",
          ).to_i > 0
        unless reply_index_exists
          db.execute("CREATE INDEX IF NOT EXISTS idx_messages_reply_id ON messages(reply_id)")
        end

        tables = db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='instances'").flatten
@@ -214,49 +188,6 @@ module PotatoMesh
          db.execute("ALTER TABLE telemetry ADD COLUMN #{name} #{type}")
          telemetry_columns << name
        end
        unless telemetry_columns.include?("ingestor")
          db.execute("ALTER TABLE telemetry ADD COLUMN ingestor TEXT")
        end
        unless telemetry_columns.include?("telemetry_type")
          db.execute("ALTER TABLE telemetry ADD COLUMN telemetry_type TEXT")
        end

        unless telemetry_columns.include?("protocol")
          db.execute("ALTER TABLE telemetry ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
          db.execute("UPDATE telemetry SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
        end

        position_tables =
          db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='positions'").flatten
        if position_tables.empty?
          positions_schema = File.expand_path("../../../../data/positions.sql", __dir__)
          db.execute_batch(File.read(positions_schema))
        end
        position_columns = db.execute("PRAGMA table_info(positions)").map { |row| row[1] }
        unless position_columns.include?("ingestor")
          db.execute("ALTER TABLE positions ADD COLUMN ingestor TEXT")
        end

        unless position_columns.include?("protocol")
          db.execute("ALTER TABLE positions ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
          db.execute("UPDATE positions SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
        end

        neighbor_tables =
          db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='neighbors'").flatten
        if neighbor_tables.empty?
          neighbors_schema = File.expand_path("../../../../data/neighbors.sql", __dir__)
          db.execute_batch(File.read(neighbors_schema))
        end
        neighbor_columns = db.execute("PRAGMA table_info(neighbors)").map { |row| row[1] }
        unless neighbor_columns.include?("ingestor")
          db.execute("ALTER TABLE neighbors ADD COLUMN ingestor TEXT")
        end

        unless neighbor_columns.include?("protocol")
          db.execute("ALTER TABLE neighbors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
          db.execute("UPDATE neighbors SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
        end

        trace_tables =
          db.execute(
@@ -266,15 +197,6 @@ module PotatoMesh
          traces_schema = File.expand_path("../../../../data/traces.sql", __dir__)
          db.execute_batch(File.read(traces_schema))
        end
        trace_columns = db.execute("PRAGMA table_info(traces)").map { |row| row[1] }
        unless trace_columns.include?("ingestor")
          db.execute("ALTER TABLE traces ADD COLUMN ingestor TEXT")
        end

        unless trace_columns.include?("protocol")
          db.execute("ALTER TABLE traces ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
          db.execute("UPDATE traces SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
        end

        ingestor_tables =
          db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='ingestors'").flatten
@@ -292,11 +214,6 @@ module PotatoMesh
        unless ingestor_columns.include?("modem_preset")
          db.execute("ALTER TABLE ingestors ADD COLUMN modem_preset TEXT")
        end

        unless ingestor_columns.include?("protocol")
          db.execute("ALTER TABLE ingestors ADD COLUMN protocol TEXT NOT NULL DEFAULT 'meshtastic'")
          db.execute("UPDATE ingestors SET protocol = 'meshtastic' WHERE protocol IS NULL OR TRIM(protocol) = ''")
        end
      end
      rescue SQLite3::SQLException, Errno::ENOENT => e
        warn_log(
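`ensure_schema_upgrades` repeats one idiom throughout: list the existing columns with `PRAGMA table_info`, then `ALTER TABLE … ADD COLUMN` only when the column is missing, so re-running the upgrade is always a no-op. A Python/sqlite3 sketch of the same pattern (table and column names are illustrative, not the app's schema):

```python
import sqlite3


def ensure_column(db, table, column, ddl_type, default_sql=None):
    """Add `column` to `table` unless PRAGMA table_info already lists it."""
    existing = [row[1] for row in db.execute(f"PRAGMA table_info({table})")]
    if column in existing:
        return False  # already migrated; safe to call again
    clause = f" NOT NULL DEFAULT {default_sql}" if default_sql else ""
    db.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl_type}{clause}")
    return True


db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes (id TEXT PRIMARY KEY)")
ensure_column(db, "nodes", "protocol", "TEXT", "'meshtastic'")  # adds the column
ensure_column(db, "nodes", "protocol", "TEXT", "'meshtastic'")  # no-op on rerun
```

Checking `PRAGMA table_info` first matters because SQLite has no `ADD COLUMN IF NOT EXISTS`; a second unguarded `ALTER TABLE` would raise a duplicate-column error.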
@@ -17,8 +17,6 @@
module PotatoMesh
  module App
    module Federation
      FEDERATION_SLEEP_SLICE_SECONDS = 0.2

      # Resolve the canonical domain for the running instance.
      #
      # @return [String, nil] sanitized instance domain or nil outside production.
@@ -172,9 +170,6 @@ module PotatoMesh
      # @return [PotatoMesh::App::WorkerPool, nil] active worker pool if created.
      def ensure_federation_worker_pool!
        return nil unless federation_enabled?
        return nil if federation_shutdown_requested?

        ensure_federation_shutdown_hook!

        existing = settings.respond_to?(:federation_worker_pool) ? settings.federation_worker_pool : nil
        return existing if existing&.alive?
@@ -182,81 +177,19 @@ module PotatoMesh
        pool = PotatoMesh::App::WorkerPool.new(
          size: PotatoMesh::Config.federation_worker_pool_size,
          max_queue: PotatoMesh::Config.federation_worker_queue_capacity,
          task_timeout: PotatoMesh::Config.federation_task_timeout_seconds,
          name: "potato-mesh-fed",
        )

        set(:federation_worker_pool, pool) if respond_to?(:set)
        pool
      end

      # Ensure federation background workers are torn down during process exit.
      #
      # @return [void]
      def ensure_federation_shutdown_hook!
        application = is_a?(Class) ? self : self.class
        return application.ensure_federation_shutdown_hook! unless application.equal?(self)

        installed = if respond_to?(:settings) && settings.respond_to?(:federation_shutdown_hook_installed)
          settings.federation_shutdown_hook_installed
        else
          instance_variable_defined?(:@federation_shutdown_hook_installed) && @federation_shutdown_hook_installed
        end
        return if installed

        if respond_to?(:set) && settings.respond_to?(:federation_shutdown_hook_installed=)
          set(:federation_shutdown_hook_installed, true)
        else
          @federation_shutdown_hook_installed = true
        end

        at_exit do
          begin
            application.shutdown_federation_background_work!(timeout: PotatoMesh::Config.federation_shutdown_timeout_seconds)
            pool.shutdown(timeout: PotatoMesh::Config.federation_task_timeout_seconds)
          rescue StandardError
            # Suppress shutdown errors during interpreter teardown.
          end
        end
      end

      # Check whether federation workers have received a shutdown request.
      #
      # @return [Boolean] true when stop has been requested.
      def federation_shutdown_requested?
        return false unless respond_to?(:settings)
        return false unless settings.respond_to?(:federation_shutdown_requested)

        settings.federation_shutdown_requested == true
      end

      # Mark federation background work as shutting down.
      #
      # @return [void]
      def request_federation_shutdown!
        set(:federation_shutdown_requested, true) if respond_to?(:set)
      end

      # Clear any previously requested federation shutdown marker.
      #
      # @return [void]
      def clear_federation_shutdown_request!
        set(:federation_shutdown_requested, false) if respond_to?(:set)
      end

      # Sleep in short intervals so federation loops can react to shutdown.
      #
      # @param seconds [Numeric] target sleep duration.
      # @return [Boolean] true when the full delay elapsed without shutdown.
      def federation_sleep_with_shutdown(seconds)
        remaining = seconds.to_f
        while remaining.positive?
          return false if federation_shutdown_requested?

          slice = [remaining, FEDERATION_SLEEP_SLICE_SECONDS].min
          Kernel.sleep(slice)
          remaining -= slice
        end
        !federation_shutdown_requested?
        set(:federation_worker_pool, pool) if respond_to?(:set)
        pool
      end

      # Shutdown and clear the federation worker pool if present.
@@ -280,44 +213,6 @@ module PotatoMesh
        end
      end

      # Gracefully terminate federation background loops and worker pool tasks.
      #
      # @param timeout [Numeric, nil] maximum join time applied per thread.
      # @return [void]
      def shutdown_federation_background_work!(timeout: nil)
        request_federation_shutdown!
        timeout_value = timeout || PotatoMesh::Config.federation_shutdown_timeout_seconds
        stop_federation_thread!(:initial_federation_thread, timeout: timeout_value)
        stop_federation_thread!(:federation_thread, timeout: timeout_value)
        shutdown_federation_worker_pool!
        clear_federation_crawl_state!
      end

      # Stop a specific federation thread setting and clear its reference.
      #
      # @param setting_name [Symbol] settings key storing the thread object.
      # @param timeout [Numeric] seconds to wait for clean thread exit.
      # @return [void]
      def stop_federation_thread!(setting_name, timeout:)
        return unless respond_to?(:settings)
        return unless settings.respond_to?(setting_name)

        thread = settings.public_send(setting_name)
        if thread&.alive?
          begin
            thread.wakeup if thread.respond_to?(:wakeup)
          rescue ThreadError
            # The thread may not currently be sleeping; continue shutdown.
          end
          thread.join(timeout)
          if thread.alive?
            thread.kill
            thread.join(0.1)
          end
        end
        set(setting_name, nil) if respond_to?(:set)
      end

      def federation_target_domains(self_domain)
        normalized_self = sanitize_instance_domain(self_domain)&.downcase
        ordered = []
@@ -369,21 +264,16 @@ module PotatoMesh

      def announce_instance_to_domain(domain, payload_json)
        return false unless domain && !domain.empty?
        return false if federation_shutdown_requested?

        https_failures = []

        published = instance_uri_candidates(domain, "/api/instances").any? do |uri|
          break false if federation_shutdown_requested?

        instance_uri_candidates(domain, "/api/instances").each do |uri|
          begin
            http = build_remote_http_client(uri)
            response = Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
              http.start do |connection|
                request = build_federation_http_request(Net::HTTP::Post, uri)
                request.body = payload_json
                connection.request(request)
              end
            response = http.start do |connection|
              request = build_federation_http_request(Net::HTTP::Post, uri)
              request.body = payload_json
              connection.request(request)
            end
            if response.is_a?(Net::HTTPSuccess)
              debug_log(
@@ -392,16 +282,14 @@ module PotatoMesh
                target: uri.to_s,
                status: response.code,
              )
              true
            else
              debug_log(
                "Federation announcement failed",
                context: "federation.announce",
                target: uri.to_s,
                status: response.code,
              )
              false
              return true
            end
            debug_log(
              "Federation announcement failed",
              context: "federation.announce",
              target: uri.to_s,
              status: response.code,
            )
          rescue StandardError => e
            metadata = {
              context: "federation.announce",
@@ -416,18 +304,9 @@ module PotatoMesh
              **metadata,
            )
            https_failures << metadata
          else
            warn_log(
              "Federation announcement raised exception",
              **metadata,
            )
            next
          end
          false
        end
      end

        unless published
          https_failures.each do |metadata|
            warn_log(
              "Federation announcement raised exception",
              **metadata,
@@ -435,7 +314,14 @@ module PotatoMesh
          end
        end

        published
        https_failures.each do |metadata|
          warn_log(
            "Federation announcement raised exception",
            **metadata,
          )
        end

        false
      end

      # Determine whether an HTTPS announcement failure should fall back to HTTP.
@@ -455,7 +341,6 @@ module PotatoMesh

      def announce_instance_to_all_domains
        return unless federation_enabled?
        return if federation_shutdown_requested?

        attributes, signature = ensure_self_instance_record!
        payload_json = JSON.generate(instance_announcement_payload(attributes, signature))
@@ -463,15 +348,13 @@ module PotatoMesh
        pool = federation_worker_pool
        scheduled = []

        domains.each_with_object(scheduled) do |domain, scheduled_tasks|
          break if federation_shutdown_requested?

        domains.each do |domain|
          if pool
            begin
              task = pool.schedule do
                announce_instance_to_domain(domain, payload_json)
              end
              scheduled_tasks << [domain, task]
              scheduled << [domain, task]
              next
            rescue PotatoMesh::App::WorkerPool::QueueFullError
              warn_log(
@@ -512,9 +395,7 @@ module PotatoMesh
        return if scheduled.empty?

        timeout = PotatoMesh::Config.federation_task_timeout_seconds
        scheduled.all? do |domain, task|
          break false if federation_shutdown_requested?

        scheduled.each do |domain, task|
          begin
            task.wait(timeout: timeout)
          rescue PotatoMesh::App::WorkerPool::TaskTimeoutError => e
@@ -535,23 +416,19 @@ module PotatoMesh
            error_message: e.message,
          )
        end
        true
      end
      end

      def start_federation_announcer!
        # Federation broadcasts must not execute when federation support is disabled.
        return nil unless federation_enabled?
        clear_federation_shutdown_request!
        ensure_federation_shutdown_hook!

        existing = settings.federation_thread
        return existing if existing&.alive?

        thread = Thread.new do
          loop do
            break unless federation_sleep_with_shutdown(PotatoMesh::Config.federation_announcement_interval)

            sleep PotatoMesh::Config.federation_announcement_interval
            begin
              announce_instance_to_all_domains
            rescue StandardError => e
@@ -565,8 +442,6 @@ module PotatoMesh
            end
          end
        thread.name = "potato-mesh-federation" if thread.respond_to?(:name=)
        # Allow shutdown even if the announcement loop is still sleeping.
        thread.daemon = true if thread.respond_to?(:daemon=)
        set(:federation_thread, thread)
        thread
      end
@@ -577,8 +452,6 @@ module PotatoMesh
      def start_initial_federation_announcement!
        # Skip the initial broadcast entirely when federation is disabled.
        return nil unless federation_enabled?
clear_federation_shutdown_request!
|
||||
ensure_federation_shutdown_hook!
|
||||
|
||||
existing = settings.respond_to?(:initial_federation_thread) ? settings.initial_federation_thread : nil
|
||||
return existing if existing&.alive?
|
||||
@@ -586,12 +459,7 @@ module PotatoMesh
|
||||
thread = Thread.new do
|
||||
begin
|
||||
delay = PotatoMesh::Config.initial_federation_delay_seconds
|
||||
if delay.positive?
|
||||
completed = federation_sleep_with_shutdown(delay)
|
||||
next unless completed
|
||||
end
|
||||
next if federation_shutdown_requested?
|
||||
|
||||
Kernel.sleep(delay) if delay.positive?
|
||||
announce_instance_to_all_domains
|
||||
rescue StandardError => e
|
||||
warn_log(
|
||||
@@ -606,8 +474,6 @@ module PotatoMesh
|
||||
end
|
||||
thread.name = "potato-mesh-federation-initial" if thread.respond_to?(:name=)
|
||||
thread.report_on_exception = false if thread.respond_to?(:report_on_exception=)
|
||||
# Avoid blocking process shutdown during delayed startup announcements.
|
||||
thread.daemon = true if thread.respond_to?(:daemon=)
|
||||
set(:initial_federation_thread, thread)
|
||||
thread
|
||||
end
|
||||
@@ -652,19 +518,15 @@ module PotatoMesh
|
||||
end
|
||||
|
||||
def perform_instance_http_request(uri)
|
||||
raise InstanceFetchError, "federation shutdown requested" if federation_shutdown_requested?
|
||||
|
||||
http = build_remote_http_client(uri)
|
||||
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
|
||||
http.start do |connection|
|
||||
request = build_federation_http_request(Net::HTTP::Get, uri)
|
||||
response = connection.request(request)
|
||||
case response
|
||||
when Net::HTTPSuccess
|
||||
response.body
|
||||
else
|
||||
raise InstanceFetchError, "unexpected response #{response.code}"
|
||||
end
|
||||
http.start do |connection|
|
||||
request = build_federation_http_request(Net::HTTP::Get, uri)
|
||||
response = connection.request(request)
|
||||
case response
|
||||
when Net::HTTPSuccess
|
||||
response.body
|
||||
else
|
||||
raise InstanceFetchError, "unexpected response #{response.code}"
|
||||
end
|
||||
end
|
||||
rescue StandardError => e
|
||||
@@ -721,12 +583,8 @@ module PotatoMesh
|
||||
end
|
||||
|
||||
def fetch_instance_json(domain, path)
|
||||
return [nil, ["federation shutdown requested"]] if federation_shutdown_requested?
|
||||
|
||||
errors = []
|
||||
instance_uri_candidates(domain, path).each do |uri|
|
||||
break if federation_shutdown_requested?
|
||||
|
||||
begin
|
||||
body = perform_instance_http_request(uri)
|
||||
return [JSON.parse(body), uri] if body
|
||||
@@ -739,34 +597,6 @@ module PotatoMesh
|
||||
[nil, errors]
|
||||
end
|
||||
|
||||
# Resolve the best matching active-node count from a remote /api/stats payload.
|
||||
#
|
||||
# @param payload [Hash, nil] decoded JSON payload from /api/stats.
|
||||
# @param max_age_seconds [Integer] activity window currently expected for federation freshness.
|
||||
# @return [Integer, nil] selected active-node count when available.
|
||||
def remote_active_node_count_from_stats(payload, max_age_seconds:)
|
||||
return nil unless payload.is_a?(Hash)
|
||||
|
||||
active_nodes = payload["active_nodes"]
|
||||
return nil unless active_nodes.is_a?(Hash)
|
||||
|
||||
age = coerce_integer(max_age_seconds) || 0
|
||||
key = if age <= 3600
|
||||
"hour"
|
||||
elsif age <= 86_400
|
||||
"day"
|
||||
elsif age <= PotatoMesh::Config.week_seconds
|
||||
"week"
|
||||
else
|
||||
"month"
|
||||
end
|
||||
|
||||
value = coerce_integer(active_nodes[key])
|
||||
return nil unless value
|
||||
|
||||
[value, 0].max
|
||||
end
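The age-window mapping above can be exercised on its own. This is a minimal sketch; `WEEK_SECONDS` is an assumed stand-in for `PotatoMesh::Config.week_seconds` (seven days), and the helper name is illustrative:

```ruby
# Pick the /api/stats bucket whose activity window covers max_age_seconds,
# mirroring the key selection in remote_active_node_count_from_stats.
WEEK_SECONDS = 7 * 86_400 # assumption: value of PotatoMesh::Config.week_seconds

def stats_key(age_seconds)
  if age_seconds <= 3600
    "hour"
  elsif age_seconds <= 86_400
    "day"
  elsif age_seconds <= WEEK_SECONDS
    "week"
  else
    "month"
  end
end
```

Boundary values fall into the smaller bucket, so exactly one hour maps to `"hour"` and exactly one day to `"day"`.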

# Parse a remote federation instance payload into canonical attributes.
#
# @param payload [Hash] JSON object describing a remote instance.
@@ -827,147 +657,49 @@ module PotatoMesh
# @param overall_limit [Integer, nil] maximum unique domains visited.
# @return [Boolean] true when the crawl was scheduled successfully.
def enqueue_federation_crawl(domain, per_response_limit:, overall_limit:)
sanitized_domain = sanitize_instance_domain(domain)
unless sanitized_domain
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: "invalid domain",
)
return false
end
return false if federation_shutdown_requested?

application = is_a?(Class) ? self : self.class
pool = application.federation_worker_pool
pool = federation_worker_pool
unless pool
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
domain: domain,
reason: "federation disabled",
)
return false
end

claim_result = application.claim_federation_crawl_slot(sanitized_domain)
unless claim_result == :claimed
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
reason: claim_result == :in_flight ? "crawl already in flight" : "recent crawl completed",
)
return false
end

application = is_a?(Class) ? self : self.class
pool.schedule do
db = nil
db = application.open_database
begin
db = application.open_database
application.ingest_known_instances_from!(
db,
sanitized_domain,
domain,
per_response_limit: per_response_limit,
overall_limit: overall_limit,
)
ensure
db&.close
application.release_federation_crawl_slot(sanitized_domain)
end
end

true
rescue PotatoMesh::App::WorkerPool::QueueFullError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker queue saturated")
rescue PotatoMesh::App::WorkerPool::ShutdownError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker pool shut down")
end

# Handle a failed crawl schedule attempt without applying cooldown.
#
# @param domain [String] canonical domain that failed to schedule.
# @param reason [String] human-readable failure reason.
# @return [Boolean] always false because scheduling did not succeed.
def handle_failed_federation_crawl_schedule(domain, reason)
release_federation_crawl_slot(domain, record_completion: false)
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: reason,
reason: "worker queue saturated",
)
false
rescue PotatoMesh::App::WorkerPool::ShutdownError
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: "worker pool shut down",
)
false
end

# Initialize shared in-memory state used to deduplicate crawl scheduling.
#
# @return [void]
def initialize_federation_crawl_state!
@federation_crawl_init_mutex ||= Mutex.new
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex

@federation_crawl_init_mutex.synchronize do
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex

@federation_crawl_mutex = Mutex.new
@federation_crawl_in_flight = Set.new
@federation_crawl_last_completed_at = {}
end
end

# Retrieve the cooldown period used for duplicate crawl suppression.
#
# @return [Integer] seconds a domain remains in cooldown after completion.
def federation_crawl_cooldown_seconds
PotatoMesh::Config.federation_crawl_cooldown_seconds
end

# Mark a domain crawl as claimed if no active or recent crawl exists.
#
# @param domain [String] canonical domain name.
# @return [Symbol] +:claimed+, +:in_flight+, or +:cooldown+.
def claim_federation_crawl_slot(domain)
initialize_federation_crawl_state!
now = Time.now.to_i
@federation_crawl_mutex.synchronize do
return :in_flight if @federation_crawl_in_flight.include?(domain)

last_completed = @federation_crawl_last_completed_at[domain]
if last_completed && now - last_completed < federation_crawl_cooldown_seconds
return :cooldown
end

@federation_crawl_in_flight << domain
:claimed
end
end
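The claim/release life cycle above can be sketched as a self-contained object. This is a minimal illustration under assumptions: the class name, injected timestamps, and constructor are hypothetical, not the application's API, but the mutex-guarded in-flight set and cooldown map follow the logic shown:

```ruby
require "set"

# Mutex-guarded crawl deduplication: a claim fails while the domain is
# in flight or still inside the cooldown window after its last release.
class CrawlSlots
  def initialize(cooldown_seconds)
    @mutex = Mutex.new
    @in_flight = Set.new
    @last_completed = {}
    @cooldown = cooldown_seconds
  end

  def claim(domain, now: Time.now.to_i)
    @mutex.synchronize do
      return :in_flight if @in_flight.include?(domain)

      last = @last_completed[domain]
      return :cooldown if last && now - last < @cooldown

      @in_flight << domain
      :claimed
    end
  end

  def release(domain, now: Time.now.to_i)
    @mutex.synchronize do
      @in_flight.delete(domain)
      @last_completed[domain] = now
    end
  end
end
```

Injecting `now:` keeps the cooldown logic deterministic in tests, which is why the sketch threads the timestamp through instead of calling `Time.now` inside the lock.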

# Release an in-flight crawl claim and record completion timestamp.
#
# @param domain [String] canonical domain name.
# @param record_completion [Boolean] true to apply cooldown tracking.
# @return [void]
def release_federation_crawl_slot(domain, record_completion: true)
return unless domain

initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.delete(domain)
@federation_crawl_last_completed_at[domain] = Time.now.to_i if record_completion
end
end

# Clear all in-memory crawl scheduling state.
#
# @return [void]
def clear_federation_crawl_state!
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.clear
@federation_crawl_last_completed_at.clear
end
end

# Recursively ingest federation records exposed by the supplied domain.
@@ -987,7 +719,6 @@ module PotatoMesh
)
sanitized = sanitize_instance_domain(domain)
return visited || Set.new unless sanitized
return visited || Set.new if federation_shutdown_requested?

visited ||= Set.new

@@ -1022,8 +753,6 @@ module PotatoMesh
processed_entries = 0
recent_cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
payload.each do |entry|
break if federation_shutdown_requested?

if per_response_limit && per_response_limit.positive? && processed_entries >= per_response_limit
debug_log(
"Skipped remote instance entry due to response limit",
@@ -1077,33 +806,21 @@ module PotatoMesh

attributes[:is_private] = false if attributes[:is_private].nil?

stats_payload, stats_metadata = fetch_instance_json(attributes[:domain], "/api/stats")
stats_count = remote_active_node_count_from_stats(
stats_payload,
max_age_seconds: PotatoMesh::Config.remote_instance_max_node_age,
)
attributes[:nodes_count] = stats_count if stats_count

nodes_since_path = "/api/nodes?since=#{recent_cutoff}&limit=1000"
nodes_since_window, nodes_since_metadata = fetch_instance_json(attributes[:domain], nodes_since_path)
if stats_count.nil? && attributes[:nodes_count].nil? && nodes_since_window.is_a?(Array)
if nodes_since_window.is_a?(Array)
attributes[:nodes_count] = nodes_since_window.length
elsif nodes_since_metadata
warn_log(
"Failed to load remote node window",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(nodes_since_metadata).map(&:to_s).join("; "),
)
end

remote_nodes, node_metadata = fetch_instance_json(attributes[:domain], "/api/nodes")
remote_nodes = nodes_since_window if remote_nodes.nil? && nodes_since_window.is_a?(Array)
if attributes[:nodes_count].nil? && remote_nodes.is_a?(Array)
attributes[:nodes_count] = remote_nodes.length
end

if stats_count.nil? && Array(stats_metadata).any?
debug_log(
"Remote instance /api/stats unavailable; using node list fallback",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(stats_metadata).map(&:to_s).join("; "),
)
end
remote_nodes ||= nodes_since_window if nodes_since_window.is_a?(Array)
unless remote_nodes
warn_log(
"Failed to load remote node data",

@@ -20,8 +20,6 @@ module PotatoMesh
# its intended consumers to ensure consistent behaviour across the Sinatra
# application.
module Helpers
ANNOUNCEMENT_URL_PATTERN = %r{\bhttps?://[^\s<]+}i.freeze

# Fetch an application level constant exposed by {PotatoMesh::Application}.
#
# @param name [Symbol] constant identifier to retrieve.
@@ -94,47 +92,6 @@ module PotatoMesh
PotatoMesh::Sanitizer.sanitized_site_name
end

# Retrieve the configured announcement banner copy.
#
# @return [String, nil] sanitised announcement or nil when unset.
def sanitized_announcement
PotatoMesh::Sanitizer.sanitized_announcement
end

# Render the announcement copy with safe outbound links.
#
# @return [String, nil] escaped HTML snippet or nil when unset.
def announcement_html
announcement = sanitized_announcement
return nil unless announcement

fragments = []
last_index = 0

announcement.to_enum(:scan, ANNOUNCEMENT_URL_PATTERN).each do
match = Regexp.last_match
next unless match

start_index = match.begin(0)
end_index = match.end(0)

if start_index > last_index
fragments << Rack::Utils.escape_html(announcement[last_index...start_index])
end

url = match[0]
escaped_url = Rack::Utils.escape_html(url)
fragments << %(<a href="#{escaped_url}" target="_blank" rel="noopener noreferrer">#{escaped_url}</a>)
last_index = end_index
end

if last_index < announcement.length
fragments << Rack::Utils.escape_html(announcement[last_index..])
end

fragments.join
end

# Retrieve the configured channel.
#
# @return [String] sanitised channel identifier.

@@ -1,102 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "base64"

module PotatoMesh
module App
module Meshtastic
# Compute Meshtastic channel hashes from a name and pre-shared key.
module ChannelHash
module_function

DEFAULT_PSK_ALIAS_KEYS = {
1 => [
0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01,
].pack("C*"),
2 => [
0x38, 0x4B, 0xBC, 0xC0, 0x1D, 0xC0, 0x22, 0xD1,
0x81, 0xBF, 0x36, 0xB8, 0x61, 0x21, 0xE1, 0xFB,
0x96, 0xB7, 0x2E, 0x55, 0xBF, 0x74, 0x22, 0x7E,
0x9D, 0x6A, 0xFB, 0x48, 0xD6, 0x4C, 0xB1, 0xA1,
].pack("C*"),
}.freeze

# Calculate the Meshtastic channel hash for the given name and PSK.
#
# @param name [String] channel name candidate.
# @param psk_b64 [String, nil] base64-encoded PSK or PSK alias.
# @return [Integer, nil] channel hash byte or nil when inputs are invalid.
def channel_hash(name, psk_b64)
return nil unless name

key = expanded_key(psk_b64)
return nil unless key

h_name = xor_bytes(name.b)
h_key = xor_bytes(key)

(h_name ^ h_key) & 0xFF
end

# Expand the provided PSK into a valid AES key length.
#
# @param psk_b64 [String, nil] base64 PSK value.
# @return [String, nil] expanded key bytes or nil when invalid.
def expanded_key(psk_b64)
raw = Base64.decode64(psk_b64.to_s)

case raw.bytesize
when 0
"".b
when 1
default_key_for_alias(raw.bytes.first)
when 2..15
(raw.bytes + [0] * (16 - raw.bytesize)).pack("C*")
when 16
raw
when 17..31
(raw.bytes + [0] * (32 - raw.bytesize)).pack("C*")
when 32
raw
else
nil
end
end
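The zero-padding rules in `expanded_key` can be demonstrated in isolation. The helper name below is illustrative, and the one-byte alias branch is deliberately omitted:

```ruby
# Zero-pad raw PSK bytes to the nearest valid AES key length, following
# the 2..15 -> 16-byte and 17..31 -> 32-byte cases above; exact 16- or
# 32-byte keys pass through, anything longer yields nil.
def pad_psk(raw)
  case raw.bytesize
  when 16, 32 then raw
  when 2..15 then (raw.bytes + [0] * (16 - raw.bytesize)).pack("C*")
  when 17..31 then (raw.bytes + [0] * (32 - raw.bytesize)).pack("C*")
  end
end
```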

# Map PSK alias bytes to their default key material.
#
# @param alias_index [Integer, nil] alias identifier for the PSK.
# @return [String, nil] key bytes or nil when unknown.
def default_key_for_alias(alias_index)
return nil unless alias_index

DEFAULT_PSK_ALIAS_KEYS[alias_index]&.dup
end

# XOR all bytes in the given string or byte array.
#
# @param value [String, Array<Integer>] input byte sequence.
# @return [Integer] XOR of all bytes.
def xor_bytes(value)
bytes = value.is_a?(String) ? value.bytes : value
bytes.reduce(0) { |acc, byte| (acc ^ byte) & 0xFF }
end
end
end
end
end
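For reference, the hash rule implemented by `channel_hash` and `xor_bytes` reduces to a byte-wise XOR fold of the name combined with a fold of the key. A standalone sketch, where the base64 key is an illustrative 16-byte sequence (bytes 0..15), not a real channel PSK:

```ruby
require "base64"

# Fold a byte sequence with XOR, then combine the name and key folds,
# masked to a single byte, as channel_hash does above.
def xor_fold(bytes)
  bytes.reduce(0) { |acc, b| (acc ^ b) & 0xFF }
end

name = "LongFast"
key  = Base64.decode64("AAECAwQFBgcICQoLDA0ODw==") # illustrative key: bytes 0..15
hash = (xor_fold(name.bytes) ^ xor_fold(key.bytes)) & 0xFF
```

Because XOR is associative and self-inverse, the fold of bytes 0..15 is 0, so here the hash equals the fold of the name alone.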

@@ -1,28 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

module PotatoMesh
module App
module Meshtastic
# Canonical list of candidate channel names used to build rainbow tables.
module ChannelNames
CHANNEL_NAME_CANDIDATES = %w[
911 Admin ADMIN admin Alert Alpha AlphaNet Alpine Amateur Amazon Anaconda Aquila Arctic Ash Asteroid Astro Aurora Avalanche Backup Basalt Base Base1 Base2 BaseAlpha BaseBravo BaseCharlie Bavaria Beacon Bear BearNet Beat Berg Berlin BerlinMesh BerlinNet Beta BetaBerlin Bison Blackout Blizzard Bolt Bonfire Border Borealis Bravo BravoNet Breeze Bridge Bronze Burner Burrow Callisto Callsign Camp Campfire CampNet Caravan Carbon Carpet Central Chameleon Charlie Chat Checkpoint Checkpoint1 Checkpoint2 Cheetah City Clinic Cloud Cobra Collective Cologne Colony Comet Command Command1 Command2 CommandRoom Comms Comms1 Comms2 CommsNet Commune Control Control1 Control2 ControlRoom Convoy Copper Core Corvus Cosmos Courier Courier1 Courier2 CourierMesh CourierNet CQ CQ1 CQ2 Crow CrowNet DarkNet Dawn Daybreak Daylight Delta DeltaNet Demo DEMO DemoBerlin Den Desert Diamond Distress District Doctor Dortmund Downlink Downlink1 Draco Dragon DragonNet Dune Dusk Eagle EagleNet East EastStar Echo EchoMesh EchoNet Emergency emergency EMERGENCY EmergencyBerlin Epsilon Equinox Europa Falcon Field FieldNet Fire Fire1 Fire2 Firebird Firefly Fireline Fireteam Firewatch Flash Flock Fluss Fog Forest Fox FoxNet Foxtrot FoxtrotMesh FoxtrotNet Frankfurt Freedom Freq Freq1 Freq2 Friedrichshain Frontier Frost Galaxy Gale Gamma Ganymede Gecko General Ghost GhostNet Glacier Gold Granite Grassland Grid Grid1 Grid2 GridNet GridNorth GridSouth Griffin Group Ham HAM Hamburg HAMNet Harbor Harmony HarmonyNet Hawk HawkNet Haze Help Hessen Highway Hilltop Hinterland Hive Hospital HQ HQ1 HQ2 Hub Hub1 Hub2 Hydra Ice Io Iron Jaguar Jungle Jupiter Kiez Kilo KiloMesh KiloNet Kraken Kreuzberg Lava Layer Layer1 Layer2 Layer3 Leipzig Leopard Liberty LightNet Lightning Lima Link Lion Lizard LongFast LongSlow LoRa LoRaBerlin LoRaHessen LoRaMesh LoRaNet LoRaTest Main Mars Med Med1 Med2 Medic MediumFast MediumSlow Mercury Mesh Mesh1 Mesh2 Mesh3 Mesh4 Mesh5 MeshBerlin MeshCollective MeshCologne MeshFrankfurt MeshGrid 
MeshHamburg MeshHessen MeshLeipzig MeshMunich MeshNet MeshNetwork MeshRuhr Meshtastic MeshTest Meteor Metro Midnight Mirage Mist MoonNet Munich Müggelberg Nebula Nest Network Neukölln Nexus Nightfall NightMesh NightNet Nightshift NightshiftNet Nightwatch Node1 Node2 Node3 Node4 Node5 Nomad NomadMesh NomadNet Nomads Nord North NorthStar Oasis Obsidian Omega Operations OPERATIONS Ops Ops1 Ops2 OpsCenter OpsRoom Orbit Ost Outpost Outsider Owl Pack Packet PacketNet PacketRadio Panther Paramedic Path Peak Phantom Phoenix PhoenixNet Platinum Pluto Polar Prairie Prenzlauer PRIVATE Private Public PUBLIC Pulse PulseNet Python Quasar Radio Radio1 Radio2 RadioNet Rain Ranger Raven RavenNet Relay Relay1 Relay2 Repeater Repeater1 Repeater2 RepeaterHub Rescue Rescue1 Rescue2 RescueTeam Rhythm Ridge River Road Rock Router Router1 Router2 Rover Ruhr Runner Runners Safari Safe Safety Sahara Saturn Savanna Saxony Scout Sector Secure Sensor SENSOR Sensors SENSORS Shade Shadow ShadowNet Shelter Shelter1 Shelter2 ShortFast Sideband Sideband1 Sierra Signal Signal1 Signal2 SignalFire Signals Silver Smoke Snake Snow Solstice SOS Sos SOSBerlin South SouthStar Spectrum Squad StarNet Steel Stone Storm Storm1 Storm2 Stratum Stuttgart Summit SunNet Sunrise Sunset Sync SyncNet Syndicate Süd Tal Tango TangoMesh TangoNet Team Tempo Test TEST test TestBerlin Teufelsberg Thunder Tiger Titan Town Trail Tundra Tunnel Union Unit Universe Uplink Uplink1 Valley Venus Victor Village Viper Volcano Wald Wander Wanderer Wanderers Watch Watch1 Watch2 WaWi West WestStar Whisper Wind Wolf WolfDen WolfMesh WolfNet Wolfpack Wolves Woods Wyvern Zeta Zone Zone1 Zone2 Zone3 Zulu ZuluMesh ZuluNet
].freeze
end
end
end
end

@@ -1,183 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "base64"
require "openssl"

require_relative "channel_hash"
require_relative "protobuf"

module PotatoMesh
module App
module Meshtastic
# Decrypt Meshtastic payloads with AES-CTR using Meshtastic nonce rules.
module Cipher
module_function

DEFAULT_PSK_B64 = "AQ=="
TEXT_MESSAGE_PORTNUM = 1

# Decrypt an encrypted Meshtastic payload into UTF-8 text.
#
# @param cipher_b64 [String] base64-encoded encrypted payload.
# @param packet_id [Integer] packet identifier used for the nonce.
# @param from_id [String, nil] Meshtastic node identifier (e.g. "!9e95cf60").
# @param from_num [Integer, nil] numeric node identifier override.
# @param psk_b64 [String, nil] base64 PSK or alias.
# @return [String, nil] decrypted text or nil when decryption fails.
def decrypt_text(cipher_b64:, packet_id:, from_id: nil, from_num: nil, psk_b64: DEFAULT_PSK_B64)
data = decrypt_data(
cipher_b64: cipher_b64,
packet_id: packet_id,
from_id: from_id,
from_num: from_num,
psk_b64: psk_b64,
)

data && data[:text]
end

# Decrypt the Meshtastic data protobuf payload.
#
# @param cipher_b64 [String] base64-encoded encrypted payload.
# @param packet_id [Integer] packet identifier used for the nonce.
# @param from_id [String, nil] Meshtastic node identifier.
# @param from_num [Integer, nil] numeric node identifier override.
# @param psk_b64 [String, nil] base64 PSK or alias.
# @return [Hash, nil] decrypted data payload details or nil when decryption fails.
def decrypt_data(cipher_b64:, packet_id:, from_id: nil, from_num: nil, psk_b64: DEFAULT_PSK_B64)
ciphertext = Base64.strict_decode64(cipher_b64)
key = ChannelHash.expanded_key(psk_b64)
return nil unless key
return nil unless [16, 32].include?(key.bytesize)

packet_value = normalize_packet_id(packet_id)
return nil unless packet_value

from_value = normalize_node_num(from_id, from_num)
return nil unless from_value

nonce = build_nonce(packet_value, from_value)
plaintext = decrypt_aes_ctr(ciphertext, key, nonce)
return nil unless plaintext

data = Protobuf.parse_data(plaintext)
return nil unless data

text = nil
if data[:portnum] == TEXT_MESSAGE_PORTNUM
candidate = data[:payload].dup.force_encoding("UTF-8")
text = candidate if candidate.valid_encoding? && !candidate.empty?
end

{ portnum: data[:portnum], payload: data[:payload], text: text }
rescue ArgumentError, OpenSSL::Cipher::CipherError
nil
end

# Decrypt the Meshtastic data protobuf payload bytes.
#
# @param cipher_b64 [String] base64-encoded encrypted payload.
# @param packet_id [Integer] packet identifier used for the nonce.
# @param from_id [String, nil] Meshtastic node identifier.
# @param from_num [Integer, nil] numeric node identifier override.
# @param psk_b64 [String, nil] base64 PSK or alias.
# @return [String, nil] payload bytes or nil when decryption fails.
def decrypt_payload_bytes(cipher_b64:, packet_id:, from_id: nil, from_num: nil, psk_b64: DEFAULT_PSK_B64)
data = decrypt_data(
cipher_b64: cipher_b64,
packet_id: packet_id,
from_id: from_id,
from_num: from_num,
psk_b64: psk_b64,
)

data && data[:payload]
end

# Build the Meshtastic AES nonce from packet and node identifiers.
#
# @param packet_id [Integer] packet identifier.
# @param from_num [Integer] numeric node identifier.
# @return [String] 16-byte nonce.
def build_nonce(packet_id, from_num)
[packet_id].pack("Q<") + [from_num].pack("L<") + ("\x00" * 4)
end
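The nonce layout used by `build_nonce` can be checked byte by byte. A standalone sketch with an illustrative helper name and made-up identifiers:

```ruby
# 16-byte nonce as above: 8-byte little-endian packet id, 4-byte
# little-endian node number, then 4 zero bytes.
def mesh_nonce(packet_id, from_num)
  [packet_id].pack("Q<") + [from_num].pack("L<") + ("\x00" * 4)
end

nonce = mesh_nonce(0x0102030405060708, 0x9E95CF60)
```

The `<` modifier in the pack directives forces little-endian order, so the least significant byte of each identifier comes first.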

# Decrypt data using AES-CTR with the derived nonce.
#
# @param ciphertext [String] encrypted payload bytes.
# @param key [String] expanded AES key bytes.
# @param nonce [String] 16-byte nonce.
# @return [String] decrypted plaintext bytes.
def decrypt_aes_ctr(ciphertext, key, nonce)
cipher_name = key.bytesize == 16 ? "aes-128-ctr" : "aes-256-ctr"
cipher = OpenSSL::Cipher.new(cipher_name)
cipher.decrypt
cipher.key = key
cipher.iv = nonce
cipher.update(ciphertext) + cipher.final
end
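CTR mode is symmetric: running the keystream again with the same key and nonce recovers the plaintext, which is why a single `decrypt_aes_ctr`-style routine suffices. A round-trip sketch with an illustrative key and nonce (not real channel material):

```ruby
require "openssl"

key   = "\x01" * 16                                   # illustrative 16-byte key
nonce = [99].pack("Q<") + [1234].pack("L<") + ("\x00" * 4)

enc = OpenSSL::Cipher.new("aes-128-ctr")
enc.encrypt
enc.key = key
enc.iv  = nonce
ciphertext = enc.update("hello mesh") + enc.final      # CTR adds no padding

dec = OpenSSL::Cipher.new("aes-128-ctr")
dec.decrypt
dec.key = key
dec.iv  = nonce
plaintext = dec.update(ciphertext) + dec.final
```

Note that `key` and `iv` must be assigned after selecting the cipher direction, matching the order used in the module above.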
|
||||
|
||||
# Normalise the packet identifier into an integer.
|
||||
#
|
||||
# @param packet_id [Integer, nil] packet identifier.
|
||||
# @return [Integer, nil] validated packet id or nil when invalid.
|
||||
def normalize_packet_id(packet_id)
|
||||
return packet_id if packet_id.is_a?(Integer) && packet_id >= 0
|
||||
return nil if packet_id.nil?
|
||||
|
||||
if packet_id.is_a?(Numeric)
|
||||
return nil if packet_id.negative?
|
||||
return packet_id.to_i
|
||||
end
|
||||
|
||||
return nil unless packet_id.respond_to?(:to_s)
|
||||
|
||||
trimmed = packet_id.to_s.strip
|
||||
return nil if trimmed.empty?
|
||||
return trimmed.to_i(10) if trimmed.match?(/\A\d+\z/)
|
||||
|
||||
nil
|
||||
end
|
||||
|
||||
# Resolve the node number from any of the supported identifiers.
|
||||
#
|
||||
# @param from_id [String, nil] Meshtastic node identifier.
|
||||
# @param from_num [Integer, nil] numeric node identifier override.
|
||||
# @return [Integer, nil] node number or nil when invalid.
|
||||
def normalize_node_num(from_id, from_num)
|
||||
if from_num.is_a?(Integer)
|
||||
return from_num & 0xFFFFFFFF
|
||||
elsif from_num.is_a?(Numeric)
|
||||
return from_num.to_i & 0xFFFFFFFF
|
||||
end
|
||||
|
||||
return nil unless from_id
|
||||
|
||||
trimmed = from_id.to_s.strip
|
||||
return nil if trimmed.empty?
|
||||
|
||||
hex = trimmed.delete_prefix("!")
|
||||
hex = hex[2..] if hex.start_with?("0x", "0X")
|
||||
return nil unless hex.match?(/\A[0-9A-Fa-f]+\z/)
|
||||
|
||||
hex.to_i(16) & 0xFFFFFFFF
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
@@ -1,120 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "json"
require "open3"

module PotatoMesh
  module App
    module Meshtastic
      # Decode Meshtastic protobuf payloads via the Python helper script.
      module PayloadDecoder
        module_function

        PYTHON_ENV_KEY = "MESHTASTIC_PYTHON"
        DEFAULT_PYTHON_RELATIVE = File.join("data", ".venv", "bin", "python")
        DEFAULT_DECODER_RELATIVE = File.join("data", "mesh_ingestor", "decode_payload.py")
        FALLBACK_PYTHON_NAMES = ["python3", "python"].freeze

        # Decode a protobuf payload using the Meshtastic helper.
        #
        # @param portnum [Integer] Meshtastic port number.
        # @param payload_b64 [String] base64-encoded payload bytes.
        # @return [Hash, nil] decoded payload hash or nil when decoding fails.
        def decode(portnum:, payload_b64:)
          return nil unless portnum && payload_b64

          decoder_path = decoder_script_path
          python_path = python_executable_path
          return nil unless decoder_path && python_path

          input = JSON.generate({ portnum: portnum, payload_b64: payload_b64 })
          stdout, _stderr, status = Open3.capture3(python_path, decoder_path, stdin_data: input)
          return nil unless status.success?

          parsed = JSON.parse(stdout)
          return nil unless parsed.is_a?(Hash)
          return nil if parsed["error"]

          parsed
        rescue JSON::ParserError
          nil
        rescue Errno::ENOENT
          nil
        rescue ArgumentError
          nil
        end

        # Resolve the configured Python executable for Meshtastic decoding.
        #
        # @return [String, nil] python path or nil when missing.
        def python_executable_path
          configured = ENV[PYTHON_ENV_KEY]
          return configured if configured && !configured.strip.empty?

          candidate = File.expand_path(DEFAULT_PYTHON_RELATIVE, repo_root)
          return candidate if File.exist?(candidate)

          FALLBACK_PYTHON_NAMES.each do |name|
            found = find_executable(name)
            return found if found
          end

          nil
        end

        # Resolve the Meshtastic payload decoder script path.
        #
        # @return [String, nil] script path or nil when missing.
        def decoder_script_path
          repo_candidate = File.expand_path(DEFAULT_DECODER_RELATIVE, repo_root)
          return repo_candidate if File.exist?(repo_candidate)

          web_candidate = File.expand_path(DEFAULT_DECODER_RELATIVE, web_root)
          return web_candidate if File.exist?(web_candidate)

          nil
        end

        # Resolve the repository root directory from the application config.
        #
        # @return [String] absolute path to the repository root.
        def repo_root
          PotatoMesh::Config.repo_root
        end

        # Resolve the web application root directory from the application config.
        #
        # @return [String] absolute path to the web root.
        def web_root
          PotatoMesh::Config.web_root
        end

        # Locate an executable in PATH without invoking a subshell.
        #
        # @param name [String] executable name to resolve.
        # @return [String, nil] full path when found.
        def find_executable(name)
          ENV.fetch("PATH", "").split(File::PATH_SEPARATOR).each do |path|
            candidate = File.join(path, name)
            return candidate if File.file?(candidate) && File.executable?(candidate)
          end

          nil
        end

        private_class_method :find_executable
      end
    end
  end
end
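`PayloadDecoder.decode` shells out to a Python helper, passing a JSON request over stdin via `Open3.capture3` and parsing the JSON reply. The same round-trip pattern can be demonstrated self-contained by substituting a trivial Ruby one-liner for `decode_payload.py` (the child command here is purely illustrative, not the real helper):

```ruby
require "json"
require "open3"

# Send a JSON request to a child process on stdin and read its JSON
# reply from stdout, as PayloadDecoder does with its Python helper.
input = JSON.generate({ portnum: 1, payload_b64: "aGk=" })

# Stand-in child: echoes the parsed request back as JSON.
stdout, _stderr, status = Open3.capture3(
  RbConfig.ruby, "-rjson", "-e", "puts JSON.parse($stdin.read).to_json",
  stdin_data: input,
)

parsed = status.success? ? JSON.parse(stdout) : nil
# parsed => { "portnum" => 1, "payload_b64" => "aGk=" }
```

Checking `status.success?` before parsing mirrors the guard in `decode`; a crashed helper then yields `nil` rather than raising.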
@@ -1,140 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

module PotatoMesh
  module App
    module Meshtastic
      # Minimal protobuf helpers for extracting payload bytes from Meshtastic data.
      module Protobuf
        module_function

        WIRE_TYPE_VARINT = 0
        WIRE_TYPE_64BIT = 1
        WIRE_TYPE_LENGTH_DELIMITED = 2
        WIRE_TYPE_32BIT = 5
        DATA_PORTNUM_FIELD = 1
        DATA_PAYLOAD_FIELD = 2

        # Extract a length-delimited field from a protobuf message.
        #
        # @param payload [String] raw protobuf-encoded bytes.
        # @param field_number [Integer] field to extract.
        # @return [String, nil] field bytes or nil when absent/invalid.
        def extract_field_bytes(payload, field_number)
          return nil unless payload && field_number

          bytes = payload.bytes
          index = 0

          while index < bytes.length
            tag, index = read_varint(bytes, index)
            return nil unless tag

            field = tag >> 3
            wire = tag & 0x7

            case wire
            when WIRE_TYPE_VARINT
              _, index = read_varint(bytes, index)
              return nil unless index
            when WIRE_TYPE_64BIT
              index += 8
            when WIRE_TYPE_LENGTH_DELIMITED
              length, index = read_varint(bytes, index)
              return nil unless length
              return nil if index + length > bytes.length
              value = bytes[index, length].pack("C*")
              index += length
              return value if field == field_number
            when WIRE_TYPE_32BIT
              index += 4
            else
              return nil
            end
          end

          nil
        end

        # Parse a Meshtastic Data message for the port number and payload.
        #
        # @param payload [String] raw protobuf-encoded bytes.
        # @return [Hash, nil] parsed port number and payload bytes.
        def parse_data(payload)
          return nil unless payload

          bytes = payload.bytes
          index = 0
          portnum = nil
          data_payload = nil

          while index < bytes.length
            tag, index = read_varint(bytes, index)
            return nil unless tag

            field = tag >> 3
            wire = tag & 0x7

            case wire
            when WIRE_TYPE_VARINT
              value, index = read_varint(bytes, index)
              return nil unless value
              portnum = value if field == DATA_PORTNUM_FIELD
            when WIRE_TYPE_64BIT
              index += 8
            when WIRE_TYPE_LENGTH_DELIMITED
              length, index = read_varint(bytes, index)
              return nil unless length
              return nil if index + length > bytes.length
              value = bytes[index, length].pack("C*")
              index += length
              data_payload = value if field == DATA_PAYLOAD_FIELD
            when WIRE_TYPE_32BIT
              index += 4
            else
              return nil
            end
          end

          return nil unless portnum && data_payload

          { portnum: portnum, payload: data_payload }
        end

        # Read a protobuf varint from a byte array.
        #
        # @param bytes [Array<Integer>] byte stream.
        # @param index [Integer] read offset.
        # @return [Array(Integer, Integer), nil] value and new index or nil when invalid.
        def read_varint(bytes, index)
          shift = 0
          value = 0

          while index < bytes.length
            byte = bytes[index]
            index += 1
            value |= (byte & 0x7F) << shift
            return [value, index] if (byte & 0x80).zero?
            shift += 7
            return nil if shift > 63
          end

          nil
        end
      end
    end
  end
end
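The `read_varint` loop above implements standard protobuf base-128 varints: seven payload bits per byte, least-significant group first, with the high bit as a continuation flag. A minimal round-trip sketch; the encoder is not part of the module above and is included only to exercise the decoder:

```ruby
# Encode a non-negative integer as a protobuf varint (illustration only).
def encode_varint(value)
  bytes = []
  loop do
    chunk = value & 0x7F        # low 7 bits
    value >>= 7
    bytes << (value.zero? ? chunk : (chunk | 0x80))  # set continuation bit
    break if value.zero?
  end
  bytes.pack("C*")
end

# Decode a varint from a byte array, matching Protobuf.read_varint above.
def read_varint(bytes, index)
  shift = 0
  value = 0
  while index < bytes.length
    byte = bytes[index]
    index += 1
    value |= (byte & 0x7F) << shift
    return [value, index] if (byte & 0x80).zero?
    shift += 7
    return nil if shift > 63  # reject over-long input
  end
  nil  # truncated: ran out of bytes with the continuation bit still set
end

encoded = encode_varint(300)            # bytes 0xAC 0x02
value, next_index = read_varint(encoded.bytes, 0)
# value => 300, next_index => 2
```

The `shift > 63` guard is why malformed or adversarial input returns `nil` instead of looping or overflowing; truncated input (a final byte with the continuation bit set) also yields `nil`.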
@@ -1,68 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require_relative "channel_hash"
require_relative "channel_names"

module PotatoMesh
  module App
    module Meshtastic
      # Resolve candidate channel names for a hashed channel index.
      module RainbowTable
        module_function

        @tables = {}

        # Lookup candidate channel names for a hashed channel index.
        #
        # @param index [Integer, nil] channel hash byte.
        # @param psk_b64 [String, nil] base64 PSK or alias.
        # @return [Array<String>] list of candidate names.
        def channel_names_for(index, psk_b64:)
          return [] unless index.is_a?(Integer)

          table_for(psk_b64)[index] || []
        end

        # Build or retrieve the cached rainbow table for the given PSK.
        #
        # @param psk_b64 [String, nil] base64 PSK or alias.
        # @return [Hash{Integer=>Array<String>}] mapping of hash bytes to names.
        def table_for(psk_b64)
          key = psk_b64.to_s
          @tables[key] ||= build_table(psk_b64)
        end

        # Build a hash-to-name mapping for the provided PSK.
        #
        # @param psk_b64 [String, nil] base64 PSK or alias.
        # @return [Hash{Integer=>Array<String>}] mapping of hash bytes to names.
        def build_table(psk_b64)
          mapping = Hash.new { |hash, key| hash[key] = [] }

          ChannelNames::CHANNEL_NAME_CANDIDATES.each do |name|
            hash = ChannelHash.channel_hash(name, psk_b64)
            next unless hash

            mapping[hash] << name
          end

          mapping
        end
      end
    end
  end
end
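`RainbowTable.build_table` inverts the channel hash by precomputing hash byte to candidate-name lists, since several names can collide on one byte. The shape of that lookup can be sketched with a hypothetical stand-in hash; `ChannelHash.channel_hash` itself is not shown in this diff, so the toy function below is an assumption, not the real algorithm:

```ruby
# Hypothetical stand-in for ChannelHash.channel_hash: a one-byte XOR
# over the candidate name. The real helper also mixes in the PSK.
def toy_channel_hash(name)
  name.bytes.reduce(0) { |acc, b| acc ^ b }
end

CANDIDATES = %w[LongFast MediumFast ShortFast].freeze

# Precompute hash byte -> candidate names, as build_table does above.
# The Hash default block means misses come back as an empty list.
def build_table(candidates)
  table = Hash.new { |h, k| h[k] = [] }
  candidates.each { |name| table[toy_channel_hash(name)] << name }
  table
end

table = build_table(CANDIDATES)
table[toy_channel_hash("LongFast")]  # includes "LongFast"
```

Memoizing one table per PSK, as `table_for` does, keeps repeated lookups O(1) after the first build.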
@@ -20,7 +20,6 @@ module PotatoMesh
|
||||
MAX_QUERY_LIMIT = 1000
|
||||
DEFAULT_TELEMETRY_WINDOW_SECONDS = 86_400
|
||||
DEFAULT_TELEMETRY_BUCKET_SECONDS = 300
|
||||
PROTOCOL_CLAUSE = "protocol = ?".freeze
|
||||
TELEMETRY_ZERO_INVALID_COLUMNS = %w[battery_level voltage].freeze
|
||||
TELEMETRY_AGGREGATE_COLUMNS =
|
||||
%w[
|
||||
@@ -96,22 +95,6 @@ module PotatoMesh
|
||||
value
|
||||
end
|
||||
|
||||
# Append a protocol equality clause to an existing WHERE clause list when a
|
||||
# protocol filter is specified. Mutates +where_clauses+ and +params+ in place.
|
||||
#
|
||||
# @param where_clauses [Array<String>] accumulating WHERE conditions.
|
||||
# @param params [Array] accumulating bind parameters.
|
||||
# @param protocol [String, nil] optional protocol value to filter by.
|
||||
# @param table_alias [String, nil] optional table alias prefix (e.g. "m" → "m.protocol = ?").
|
||||
# @return [void]
|
||||
def append_protocol_filter(where_clauses, params, protocol, table_alias: nil)
|
||||
return unless protocol
|
||||
|
||||
clause = table_alias ? "#{table_alias}.#{PROTOCOL_CLAUSE}" : PROTOCOL_CLAUSE
|
||||
where_clauses << clause
|
||||
params << protocol
|
||||
end
|
||||
|
||||
# Normalise a caller-provided limit to a sane, positive integer.
|
||||
#
|
||||
# @param limit [Object] value coerced to an integer.
|
||||
@@ -144,43 +127,6 @@ module PotatoMesh
|
||||
[threshold, floor].max
|
||||
end
|
||||
|
||||
# Return exact active-node counts across common activity windows.
|
||||
#
|
||||
# Counts are resolved directly in SQL with COUNT(*) thresholds against
|
||||
# +nodes.last_heard+ to avoid sampling bias from list endpoint limits.
|
||||
#
|
||||
# @param now [Integer] reference unix timestamp in seconds.
|
||||
# @param db [SQLite3::Database, nil] optional open database handle to reuse.
|
||||
# @return [Hash{String => Integer}] counts keyed by hour/day/week/month.
|
||||
def query_active_node_stats(now: Time.now.to_i, db: nil)
|
||||
handle = db || open_database(readonly: true)
|
||||
handle.results_as_hash = true
|
||||
reference_now = coerce_integer(now) || Time.now.to_i
|
||||
hour_cutoff = reference_now - 3600
|
||||
day_cutoff = reference_now - 86_400
|
||||
week_cutoff = reference_now - PotatoMesh::Config.week_seconds
|
||||
month_cutoff = reference_now - (30 * 24 * 60 * 60)
|
||||
private_filter = private_mode? ? " AND (role IS NULL OR role <> 'CLIENT_HIDDEN')" : ""
|
||||
sql = <<~SQL
|
||||
SELECT
|
||||
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS hour_count,
|
||||
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS day_count,
|
||||
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS week_count,
|
||||
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS month_count
|
||||
SQL
|
||||
row = with_busy_retry do
|
||||
handle.get_first_row(sql, [hour_cutoff, day_cutoff, week_cutoff, month_cutoff])
|
||||
end || {}
|
||||
{
|
||||
"hour" => row["hour_count"].to_i,
|
||||
"day" => row["day_count"].to_i,
|
||||
"week" => row["week_count"].to_i,
|
||||
"month" => row["month_count"].to_i,
|
||||
}
|
||||
ensure
|
||||
handle&.close unless db
|
||||
end
|
||||
|
||||
def node_reference_tokens(node_ref)
|
||||
parts = canonical_node_parts(node_ref)
|
||||
canonical_id, numeric_id = parts ? parts[0, 2] : [nil, nil]
|
||||
@@ -267,16 +213,15 @@ module PotatoMesh
|
||||
#
|
||||
# @param limit [Integer] maximum number of rows to return.
|
||||
# @param node_ref [String, Integer, nil] optional node reference to narrow results.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
|
||||
# @return [Array<Hash>] compacted node rows suitable for API responses.
|
||||
def query_nodes(limit, node_ref: nil, since: 0, protocol: nil)
|
||||
def query_nodes(limit, node_ref: nil, since: 0)
|
||||
limit = coerce_query_limit(limit)
|
||||
db = open_database(readonly: true)
|
||||
db.results_as_hash = true
|
||||
now = Time.now.to_i
|
||||
min_last_heard = now - PotatoMesh::Config.week_seconds
|
||||
since_floor = node_ref ? 0 : min_last_heard
|
||||
since_threshold = normalize_since_threshold(since, floor: since_floor)
|
||||
since_threshold = normalize_since_threshold(since, floor: min_last_heard)
|
||||
params = []
|
||||
where_clauses = []
|
||||
|
||||
@@ -294,14 +239,12 @@ module PotatoMesh
|
||||
where_clauses << "(role IS NULL OR role <> 'CLIENT_HIDDEN')"
|
||||
end
|
||||
|
||||
append_protocol_filter(where_clauses, params, protocol)
|
||||
|
||||
sql = <<~SQL
|
||||
SELECT node_id, short_name, long_name, hw_model, role, snr,
|
||||
battery_level, voltage, last_heard, first_heard,
|
||||
uptime_seconds, channel_utilization, air_util_tx,
|
||||
position_time, location_source, precision_bits,
|
||||
latitude, longitude, altitude, lora_freq, modem_preset, protocol
|
||||
latitude, longitude, altitude, lora_freq, modem_preset
|
||||
FROM nodes
|
||||
SQL
|
||||
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
|
||||
@@ -340,28 +283,24 @@ module PotatoMesh
|
||||
# Fetch ingestor heartbeats with optional freshness filtering.
|
||||
#
|
||||
# @param limit [Integer] maximum number of ingestors to return.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
|
||||
# @return [Array<Hash>] compacted ingestor rows suitable for API responses.
|
||||
def query_ingestors(limit, since: 0, protocol: nil)
|
||||
def query_ingestors(limit, since: 0)
|
||||
limit = coerce_query_limit(limit)
|
||||
db = open_database(readonly: true)
|
||||
db.results_as_hash = true
|
||||
now = Time.now.to_i
|
||||
cutoff = now - PotatoMesh::Config.week_seconds
|
||||
since_threshold = normalize_since_threshold(since, floor: cutoff)
|
||||
where_clauses = ["last_seen_time >= ?"]
|
||||
params = [since_threshold]
|
||||
append_protocol_filter(where_clauses, params, protocol)
|
||||
sql = <<~SQL
|
||||
SELECT node_id, start_time, last_seen_time, version, lora_freq, modem_preset, protocol
|
||||
SELECT node_id, start_time, last_seen_time, version, lora_freq, modem_preset
|
||||
FROM ingestors
|
||||
WHERE #{where_clauses.join(" AND ")}
|
||||
WHERE last_seen_time >= ?
|
||||
ORDER BY last_seen_time DESC
|
||||
LIMIT ?
|
||||
SQL
|
||||
params << limit
|
||||
|
||||
rows = db.execute(sql, params)
|
||||
rows = db.execute(sql, [since_threshold, limit])
|
||||
rows.each do |row|
|
||||
row.delete_if { |key, _| key.is_a?(Integer) }
|
||||
start_time = coerce_integer(row["start_time"])
|
||||
@@ -389,7 +328,7 @@ module PotatoMesh
|
||||
# @param include_encrypted [Boolean] when true, include encrypted payloads in the response.
|
||||
# @param since [Integer] unix timestamp threshold; messages with rx_time older than this are excluded.
|
||||
# @return [Array<Hash>] compacted message rows safe for API responses.
|
||||
def query_messages(limit, node_ref: nil, include_encrypted: false, since: 0, protocol: nil)
|
||||
def query_messages(limit, node_ref: nil, include_encrypted: false, since: 0)
|
||||
limit = coerce_query_limit(limit)
|
||||
since_threshold = normalize_since_threshold(since, floor: 0)
|
||||
db = open_database(readonly: true)
|
||||
@@ -413,13 +352,11 @@ module PotatoMesh
|
||||
params.concat(clause.last)
|
||||
end
|
||||
|
||||
append_protocol_filter(where_clauses, params, protocol, table_alias: "m")
|
||||
|
||||
sql = <<~SQL
|
||||
SELECT m.id, m.rx_time, m.rx_iso, m.from_id, m.to_id, m.channel,
|
||||
m.portnum, m.text, m.encrypted, m.rssi, m.hop_limit,
|
||||
m.lora_freq, m.modem_preset, m.channel_name, m.snr,
|
||||
m.reply_id, m.emoji, m.ingestor, m.protocol
|
||||
m.reply_id, m.emoji
|
||||
FROM messages m
|
||||
SQL
|
||||
sql += " WHERE #{where_clauses.join(" AND ")}\n"
|
||||
@@ -433,9 +370,6 @@ module PotatoMesh
|
||||
r.delete_if { |key, _| key.is_a?(Integer) }
|
||||
r["reply_id"] = coerce_integer(r["reply_id"]) if r.key?("reply_id")
|
||||
r["emoji"] = string_or_nil(r["emoji"]) if r.key?("emoji")
|
||||
if string_or_nil(r["encrypted"])
|
||||
r.delete("portnum")
|
||||
end
|
||||
if PotatoMesh::Config.debug? && (r["from_id"].nil? || r["from_id"].to_s.strip.empty?)
|
||||
raw = db.execute("SELECT * FROM messages WHERE id = ?", [r["id"]]).first
|
||||
debug_log(
|
||||
@@ -480,7 +414,7 @@ module PotatoMesh
|
||||
# @param node_ref [String, Integer, nil] optional node reference to scope results.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
|
||||
# @return [Array<Hash>] compacted position rows suitable for API responses.
|
||||
def query_positions(limit, node_ref: nil, since: 0, protocol: nil)
|
||||
def query_positions(limit, node_ref: nil, since: 0)
|
||||
limit = coerce_query_limit(limit)
|
||||
db = open_database(readonly: true)
|
||||
db.results_as_hash = true
|
||||
@@ -488,8 +422,7 @@ module PotatoMesh
|
||||
where_clauses = []
|
||||
now = Time.now.to_i
|
||||
min_rx_time = now - PotatoMesh::Config.week_seconds
|
||||
since_floor = node_ref ? 0 : min_rx_time
|
||||
since_threshold = normalize_since_threshold(since, floor: since_floor)
|
||||
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
|
||||
where_clauses << "COALESCE(rx_time, position_time, 0) >= ?"
|
||||
params << since_threshold
|
||||
|
||||
@@ -500,8 +433,6 @@ module PotatoMesh
|
||||
params.concat(clause.last)
|
||||
end
|
||||
|
||||
append_protocol_filter(where_clauses, params, protocol)
|
||||
|
||||
sql = <<~SQL
|
||||
SELECT * FROM positions
|
||||
SQL
|
||||
@@ -539,9 +470,9 @@ module PotatoMesh
|
||||
#
|
||||
# @param limit [Integer] maximum number of rows to return.
|
||||
# @param node_ref [String, Integer, nil] optional node reference to scope results.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
|
||||
# @return [Array<Hash>] compacted neighbor rows suitable for API responses.
|
||||
def query_neighbors(limit, node_ref: nil, since: 0, protocol: nil)
|
||||
def query_neighbors(limit, node_ref: nil, since: 0)
|
||||
limit = coerce_query_limit(limit)
|
||||
db = open_database(readonly: true)
|
||||
db.results_as_hash = true
|
||||
@@ -549,8 +480,7 @@ module PotatoMesh
|
||||
where_clauses = []
|
||||
now = Time.now.to_i
|
||||
min_rx_time = now - PotatoMesh::Config.week_seconds
|
||||
since_floor = node_ref ? 0 : min_rx_time
|
||||
since_threshold = normalize_since_threshold(since, floor: since_floor)
|
||||
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
|
||||
where_clauses << "COALESCE(rx_time, 0) >= ?"
|
||||
params << since_threshold
|
||||
|
||||
@@ -561,8 +491,6 @@ module PotatoMesh
|
||||
params.concat(clause.last)
|
||||
end
|
||||
|
||||
append_protocol_filter(where_clauses, params, protocol)
|
||||
|
||||
sql = <<~SQL
|
||||
SELECT * FROM neighbors
|
||||
SQL
|
||||
@@ -589,9 +517,9 @@ module PotatoMesh
|
||||
#
|
||||
# @param limit [Integer] maximum number of rows to return.
|
||||
# @param node_ref [String, Integer, nil] optional node reference to scope results.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
|
||||
# @return [Array<Hash>] compacted telemetry rows suitable for API responses.
|
||||
def query_telemetry(limit, node_ref: nil, since: 0, protocol: nil)
|
||||
def query_telemetry(limit, node_ref: nil, since: 0)
|
||||
limit = coerce_query_limit(limit)
|
||||
db = open_database(readonly: true)
|
||||
db.results_as_hash = true
|
||||
@@ -599,8 +527,7 @@ module PotatoMesh
|
||||
where_clauses = []
|
||||
now = Time.now.to_i
|
||||
min_rx_time = now - PotatoMesh::Config.week_seconds
|
||||
since_floor = node_ref ? 0 : min_rx_time
|
||||
since_threshold = normalize_since_threshold(since, floor: since_floor)
|
||||
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
|
||||
where_clauses << "COALESCE(rx_time, telemetry_time, 0) >= ?"
|
||||
params << since_threshold
|
||||
|
||||
@@ -611,8 +538,6 @@ module PotatoMesh
|
||||
params.concat(clause.last)
|
||||
end
|
||||
|
||||
append_protocol_filter(where_clauses, params, protocol)
|
||||
|
||||
sql = <<~SQL
|
||||
SELECT * FROM telemetry
|
||||
SQL
|
||||
@@ -668,7 +593,6 @@ module PotatoMesh
|
||||
r["rainfall_24h"] = coerce_float(r["rainfall_24h"])
|
||||
r["soil_moisture"] = coerce_integer(r["soil_moisture"])
|
||||
r["soil_temperature"] = coerce_float(r["soil_temperature"])
|
||||
r["telemetry_type"] = string_or_nil(r["telemetry_type"])
|
||||
end
|
||||
rows.map { |row| compact_api_row(row) }
|
||||
ensure
|
||||
@@ -803,14 +727,14 @@ module PotatoMesh
|
||||
# @param node_ref [String, Integer, nil] optional node reference to scope results.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
|
||||
# @return [Array<Hash>] compacted trace rows suitable for API responses.
|
||||
def query_traces(limit, node_ref: nil, since: 0, protocol: nil)
|
||||
def query_traces(limit, node_ref: nil, since: 0)
|
||||
limit = coerce_query_limit(limit)
|
||||
db = open_database(readonly: true)
|
||||
db.results_as_hash = true
|
||||
params = []
|
||||
where_clauses = []
|
||||
now = Time.now.to_i
|
||||
min_rx_time = now - PotatoMesh::Config.trace_neighbor_window_seconds
|
||||
min_rx_time = now - PotatoMesh::Config.week_seconds
|
||||
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
|
||||
where_clauses << "COALESCE(rx_time, 0) >= ?"
|
||||
params << since_threshold
|
||||
@@ -830,10 +754,8 @@ module PotatoMesh
|
||||
3.times { params.concat(numeric_values) }
|
||||
end
|
||||
|
||||
append_protocol_filter(where_clauses, params, protocol)
|
||||
|
||||
sql = <<~SQL
|
||||
SELECT id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, protocol
|
||||
SELECT id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms
|
||||
FROM traces
|
||||
SQL
|
||||
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
|
||||
|
||||
@@ -64,15 +64,7 @@ module PotatoMesh
|
||||
app.get "/api/nodes" do
|
||||
content_type :json
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_nodes(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
end
|
||||
|
||||
app.get "/api/stats" do
|
||||
content_type :json
|
||||
{
|
||||
active_nodes: query_active_node_stats,
|
||||
sampled: false,
|
||||
}.to_json
|
||||
query_nodes(limit, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/nodes/:id" do
|
||||
@@ -88,7 +80,7 @@ module PotatoMesh
|
||||
app.get "/api/ingestors" do
|
||||
content_type :json
|
||||
limit = coerce_query_limit(params["limit"])
|
||||
query_ingestors(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_ingestors(limit, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/messages" do
|
||||
@@ -97,7 +89,7 @@ module PotatoMesh
|
||||
include_encrypted = coerce_boolean(params["encrypted"]) || false
|
||||
since = coerce_integer(params["since"])
|
||||
since = 0 if since.nil? || since.negative?
|
||||
query_messages(limit, include_encrypted: include_encrypted, since: since, protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_messages(limit, include_encrypted: include_encrypted, since: since).to_json
|
||||
end
|
||||
|
||||
app.get "/api/messages/:id" do
|
||||
@@ -113,14 +105,13 @@ module PotatoMesh
|
||||
node_ref: node_ref,
|
||||
include_encrypted: include_encrypted,
|
||||
since: since,
|
||||
protocol: string_or_nil(params["protocol"]),
|
||||
).to_json
|
||||
end
|
||||
|
||||
app.get "/api/positions" do
|
||||
content_type :json
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_positions(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_positions(limit, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/positions/:id" do
|
||||
@@ -128,13 +119,13 @@ module PotatoMesh
|
||||
node_ref = string_or_nil(params["id"])
|
||||
halt 400, { error: "missing node id" }.to_json unless node_ref
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_positions(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_positions(limit, node_ref: node_ref, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/neighbors" do
|
||||
content_type :json
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_neighbors(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_neighbors(limit, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/neighbors/:id" do
|
||||
@@ -142,13 +133,13 @@ module PotatoMesh
|
||||
node_ref = string_or_nil(params["id"])
|
||||
halt 400, { error: "missing node id" }.to_json unless node_ref
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_neighbors(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_neighbors(limit, node_ref: node_ref, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/telemetry" do
|
||||
content_type :json
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_telemetry(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_telemetry(limit, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/telemetry/aggregated" do
|
||||
@@ -191,13 +182,13 @@ module PotatoMesh
|
||||
node_ref = string_or_nil(params["id"])
|
||||
halt 400, { error: "missing node id" }.to_json unless node_ref
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_telemetry(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_telemetry(limit, node_ref: node_ref, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/traces" do
|
||||
content_type :json
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_traces(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_traces(limit, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/traces/:id" do
|
||||
@@ -205,7 +196,7 @@ module PotatoMesh
|
||||
node_ref = string_or_nil(params["id"])
|
||||
halt 400, { error: "missing node id" }.to_json unless node_ref
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_traces(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
query_traces(limit, node_ref: node_ref, since: params["since"]).to_json
|
||||
end
|
||||
|
||||
app.get "/api/instances" do
|
||||
|
||||
@@ -35,14 +35,10 @@ module PotatoMesh
|
||||
unless data.is_a?(Hash)
|
||||
halt 400, { error: "invalid payload" }.to_json
|
||||
end
|
||||
node_count = data.count { |k, _| k != "ingestor" }
|
||||
halt 400, { error: "too many nodes" }.to_json if node_count > 1000
|
||||
halt 400, { error: "too many nodes" }.to_json if data.size > 1000
|
||||
db = open_database
|
||||
ingestor_node_id = string_or_nil(data["ingestor"])
|
||||
protocol = resolve_protocol(db, ingestor_node_id)
|
||||
data.each do |node_id, node|
|
||||
next if node_id == "ingestor"
|
||||
upsert_node(db, node_id, node, protocol: protocol)
|
      upsert_node(db, node_id, node)
    end
    PotatoMesh::App::Prometheus::NODES_GAUGE.set(query_nodes(1000).length)
    { status: "ok" }.to_json
@@ -61,9 +57,8 @@ module PotatoMesh
    messages = data.is_a?(Array) ? data : [data]
    halt 400, { error: "too many messages" }.to_json if messages.size > 1000
    db = open_database
-    protocol_cache = {}
    messages.each do |msg|
-      insert_message(db, msg, protocol_cache: protocol_cache)
+      insert_message(db, msg)
    end
    { status: "ok" }.to_json
  ensure
@@ -310,9 +305,8 @@ module PotatoMesh
    positions = data.is_a?(Array) ? data : [data]
    halt 400, { error: "too many positions" }.to_json if positions.size > 1000
    db = open_database
-    protocol_cache = {}
    positions.each do |pos|
-      insert_position(db, pos, protocol_cache: protocol_cache)
+      insert_position(db, pos)
    end
    { status: "ok" }.to_json
  ensure
@@ -330,9 +324,8 @@ module PotatoMesh
    neighbor_payloads = data.is_a?(Array) ? data : [data]
    halt 400, { error: "too many neighbor packets" }.to_json if neighbor_payloads.size > 1000
    db = open_database
-    protocol_cache = {}
    neighbor_payloads.each do |packet|
-      insert_neighbors(db, packet, protocol_cache: protocol_cache)
+      insert_neighbors(db, packet)
    end
    { status: "ok" }.to_json
  ensure
@@ -350,9 +343,8 @@ module PotatoMesh
    telemetry_packets = data.is_a?(Array) ? data : [data]
    halt 400, { error: "too many telemetry packets" }.to_json if telemetry_packets.size > 1000
    db = open_database
-    protocol_cache = {}
    telemetry_packets.each do |packet|
-      insert_telemetry(db, packet, protocol_cache: protocol_cache)
+      insert_telemetry(db, packet)
    end
    { status: "ok" }.to_json
  ensure
@@ -370,9 +362,8 @@ module PotatoMesh
    trace_packets = data.is_a?(Array) ? data : [data]
    halt 400, { error: "too many traces" }.to_json if trace_packets.size > 1000
    db = open_database
-    protocol_cache = {}
    trace_packets.each do |packet|
-      insert_trace(db, packet, protocol_cache: protocol_cache)
+      insert_trace(db, packet)
    end
    { status: "ok" }.to_json
  ensure
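The `protocol_cache:` keyword removed across these five batch endpoints implemented per-request memoization: one hash shared across a whole batch, so each protocol value is resolved at most once per request. A minimal sketch of that pattern, using hypothetical `resolve_protocol`/`insert_message` stand-ins rather than the project's real helpers:

```ruby
# Hypothetical sketch of batch-level memoization via a shared cache hash.
# `resolve_protocol` stands in for an expensive lookup (e.g. a DB query);
# LOOKUPS just records how many times it actually runs.
LOOKUPS = []

def resolve_protocol(port)
  LOOKUPS << port
  "proto-#{port}"
end

def insert_message(msg, protocol_cache: {})
  # `||=` resolves each port once per cache, then reuses the memoized value.
  proto = protocol_cache[msg[:port]] ||= resolve_protocol(msg[:port])
  { text: msg[:text], protocol: proto }
end

messages = [{ port: 1, text: "a" }, { port: 1, text: "b" }, { port: 2, text: "c" }]
cache = {}
rows = messages.map { |m| insert_message(m, protocol_cache: cache) }
```

With the shared cache, port 1 is resolved once even though two messages reference it; dropping the keyword, as this commit does, simply makes every insert resolve on its own.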
@@ -14,8 +14,6 @@
 
 # frozen_string_literal: true
 
-require "timeout"
-
 module PotatoMesh
   module App
     # WorkerPool executes submitted blocks using a bounded set of Ruby threads.
@@ -126,9 +124,8 @@ module PotatoMesh
      #
      # @param size [Integer] number of worker threads to spawn.
      # @param max_queue [Integer, nil] optional upper bound on queued jobs.
-      # @param task_timeout [Numeric, nil] optional per-task execution timeout.
      # @param name [String] prefix assigned to worker thread names.
-      def initialize(size:, max_queue: nil, task_timeout: nil, name: "worker-pool")
+      def initialize(size:, max_queue: nil, name: "worker-pool")
        raise ArgumentError, "size must be positive" unless size.is_a?(Integer) && size.positive?
 
        @name = name
@@ -136,7 +133,6 @@ module PotatoMesh
        @threads = []
        @stopped = false
        @mutex = Mutex.new
-        @task_timeout = normalize_task_timeout(task_timeout)
        spawn_workers(size)
      end
 
@@ -196,45 +192,23 @@ module PotatoMesh
        worker = Thread.new do
          Thread.current.name = "#{@name}-#{index}" if Thread.current.respond_to?(:name=)
          Thread.current.report_on_exception = false if Thread.current.respond_to?(:report_on_exception=)
-          # Daemon threads allow the process to exit even if a job is stuck.
-          Thread.current.daemon = true if Thread.current.respond_to?(:daemon=)
 
          loop do
            task, block = @queue.pop
            break if task.equal?(STOP_SIGNAL)
 
            begin
-              result = if @task_timeout
-                  Timeout.timeout(@task_timeout, TaskTimeoutError, "task exceeded timeout") do
-                    block.call
-                  end
-                else
-                  block.call
-                end
+              result = block.call
              task.fulfill(result)
            rescue StandardError => e
              task.reject(e)
            end
          end
        end
 
        @threads << worker
      end
    end
 
-    # Normalize the per-task timeout into a positive float value.
-    #
-    # @param task_timeout [Numeric, nil] candidate timeout value.
-    # @return [Float, nil] positive timeout in seconds or nil when disabled.
-    def normalize_task_timeout(task_timeout)
-      return nil if task_timeout.nil?
-
-      value = Float(task_timeout)
-      return nil unless value.positive?
-
-      value
-    rescue ArgumentError, TypeError
-      nil
-    end
  end
end
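The timeout branch removed above wrapped each queued job in `Timeout.timeout` with a custom error class. A standalone sketch of that wrapper — `TaskTimeoutError` and `run_with_timeout` are illustrative names here, not the pool's actual API:

```ruby
require "timeout"

# Illustrative stand-in for the pool's timeout error class.
class TaskTimeoutError < StandardError; end

# Run a block, raising TaskTimeoutError if it exceeds `task_timeout` seconds.
# A nil timeout disables the limit, mirroring the removed `if @task_timeout`.
def run_with_timeout(task_timeout, &block)
  return block.call unless task_timeout

  Timeout.timeout(task_timeout, TaskTimeoutError, "task exceeded timeout") do
    block.call
  end
end

fast = run_with_timeout(1) { 21 * 2 }
slow = begin
  run_with_timeout(0.05) { sleep 0.5 }
rescue TaskTimeoutError
  :timed_out
end
```

`Timeout` works by raising into the running thread, which is why pools pairing it with worker threads often also mark those threads daemonized, as the removed comment about stuck jobs notes.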
@@ -32,19 +32,15 @@ module PotatoMesh
    DEFAULT_MAP_CENTER = "#{DEFAULT_MAP_CENTER_LAT},#{DEFAULT_MAP_CENTER_LON}"
    DEFAULT_CHANNEL = "#LongFast"
    DEFAULT_FREQUENCY = "915MHz"
    DEFAULT_MESHTASTIC_PSK_B64 = "AQ=="
    DEFAULT_CONTACT_LINK = "#potatomesh:dod.ngo"
    DEFAULT_MAX_DISTANCE_KM = 42.0
    DEFAULT_REMOTE_INSTANCE_CONNECT_TIMEOUT = 15
    DEFAULT_REMOTE_INSTANCE_READ_TIMEOUT = 60
-    DEFAULT_REMOTE_INSTANCE_REQUEST_TIMEOUT = 30
    DEFAULT_FEDERATION_MAX_INSTANCES_PER_RESPONSE = 64
    DEFAULT_FEDERATION_MAX_DOMAINS_PER_CRAWL = 256
    DEFAULT_FEDERATION_WORKER_POOL_SIZE = 4
    DEFAULT_FEDERATION_WORKER_QUEUE_CAPACITY = 128
-    DEFAULT_FEDERATION_TASK_TIMEOUT_SECONDS = 120
-    DEFAULT_FEDERATION_SHUTDOWN_TIMEOUT_SECONDS = 3
-    DEFAULT_FEDERATION_CRAWL_COOLDOWN_SECONDS = 300
    DEFAULT_INITIAL_FEDERATION_DELAY_SECONDS = 2
    DEFAULT_FEDERATION_SEED_DOMAINS = %w[potatomesh.net potatomesh.jmrp.io mesh.qrp.ro].freeze
 
@@ -162,13 +158,6 @@ module PotatoMesh
      7 * 24 * 60 * 60
    end
 
-    # Rolling retention window in seconds for trace and neighbor API queries.
-    #
-    # @return [Integer] seconds in twenty-eight days.
-    def trace_neighbor_window_seconds
-      28 * 24 * 60 * 60
-    end
-
    # Default upper bound for accepted JSON payload sizes.
    #
    # @return [Integer] byte ceiling for HTTP request bodies.
@@ -187,7 +176,7 @@ module PotatoMesh
    #
    # @return [String] semantic version identifier.
    def version_fallback
-      "0.5.12"
+      "0.5.9"
    end
 
    # Default refresh interval for frontend polling routines.
@@ -353,16 +342,6 @@ module PotatoMesh
      )
    end
 
-    # End-to-end timeout applied to each outbound federation HTTP request.
-    #
-    # @return [Integer] maximum request duration in seconds.
-    def remote_instance_request_timeout
-      fetch_positive_integer(
-        "REMOTE_INSTANCE_REQUEST_TIMEOUT",
-        DEFAULT_REMOTE_INSTANCE_REQUEST_TIMEOUT,
-      )
-    end
-
    # Limit the number of remote instances processed from a single response.
    #
    # @return [Integer] maximum entries processed per /api/instances payload.
@@ -413,26 +392,6 @@ module PotatoMesh
      )
    end
 
-    # Determine how long shutdown waits before forcing federation thread exit.
-    #
-    # @return [Integer] per-thread shutdown timeout in seconds.
-    def federation_shutdown_timeout_seconds
-      fetch_positive_integer(
-        "FEDERATION_SHUTDOWN_TIMEOUT",
-        DEFAULT_FEDERATION_SHUTDOWN_TIMEOUT_SECONDS,
-      )
-    end
-
-    # Define how long finished crawl domains remain on cooldown.
-    #
-    # @return [Integer] cooldown window in seconds.
-    def federation_crawl_cooldown_seconds
-      fetch_positive_integer(
-        "FEDERATION_CRAWL_COOLDOWN",
-        DEFAULT_FEDERATION_CRAWL_COOLDOWN_SECONDS,
-      )
-    end
-
    # Maximum acceptable age for remote node data.
    #
    # @return [Integer] seconds before remote nodes are considered stale.
@@ -478,13 +437,6 @@ module PotatoMesh
      fetch_string("SITE_NAME", "PotatoMesh Demo")
    end
 
-    # Retrieve the configured announcement banner copy.
-    #
-    # @return [String, nil] announcement string when configured.
-    def announcement
-      fetch_string("ANNOUNCEMENT", nil)
-    end
-
    # Retrieve the default radio channel label.
    #
    # @return [String] channel name from configuration.
@@ -499,13 +451,6 @@ module PotatoMesh
      fetch_string("FREQUENCY", DEFAULT_FREQUENCY)
    end
 
-    # Retrieve the Meshtastic PSK used for decrypting channel messages.
-    #
-    # @return [String] base64-encoded PSK or alias.
-    def meshtastic_psk_b64
-      fetch_string("MESHTASTIC_PSK_B64", DEFAULT_MESHTASTIC_PSK_B64)
-    end
-
    # Parse the configured map centre coordinates.
    #
    # @return [Hash{Symbol=>Float}] latitude and longitude in decimal degrees.
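The removed timeout and cooldown accessors all funnel through `fetch_positive_integer`, whose body is outside this diff. A plausible sketch of such a helper — it takes the environment as an explicit hash so it can be exercised without touching `ENV`, whereas the real method presumably reads `ENV` directly:

```ruby
# Plausible sketch (assumed behaviour): read a positive integer from an
# env-style hash, falling back to the default for missing, malformed,
# or non-positive values.
def fetch_positive_integer(env, key, default)
  raw = env[key].to_s.strip
  return default if raw.empty?

  value = Integer(raw, 10)
  value.positive? ? value : default
rescue ArgumentError
  default
end

set      = fetch_positive_integer({ "FEDERATION_CRAWL_COOLDOWN" => "600" }, "FEDERATION_CRAWL_COOLDOWN", 300)
missing  = fetch_positive_integer({}, "FEDERATION_CRAWL_COOLDOWN", 300)
negative = fetch_positive_integer({ "FEDERATION_CRAWL_COOLDOWN" => "-5" }, "FEDERATION_CRAWL_COOLDOWN", 300)
garbage  = fetch_positive_integer({ "FEDERATION_CRAWL_COOLDOWN" => "soon" }, "FEDERATION_CRAWL_COOLDOWN", 300)
```

Falling back to the default on bad input, rather than raising at boot, keeps a misconfigured instance running on safe values.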
@@ -199,14 +199,6 @@ module PotatoMesh
      sanitized_string(Config.site_name)
    end
 
-    # Retrieve the configured announcement banner copy and normalise blank values to nil.
-    #
-    # @return [String, nil] announcement copy or +nil+ when blank.
-    def sanitized_announcement
-      value = sanitized_string(Config.announcement)
-      value.empty? ? nil : value
-    end
-
    # Retrieve the configured channel as a cleaned string.
    #
    # @return [String] trimmed configuration value.
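The removed `sanitized_announcement` normalises blank configuration to `nil` so templates can use a simple presence check. A self-contained sketch — the trim-only `sanitized_string` here is an assumption about what the real helper does:

```ruby
# Assumed behaviour of the helper: coerce to String and trim whitespace.
def sanitized_string(value)
  value.to_s.strip
end

# Blank announcements become nil so views can simply test `if announcement`.
def sanitized_announcement(raw)
  value = sanitized_string(raw)
  value.empty? ? nil : value
end

kept  = sanitized_announcement("  Maintenance at 20:00  ")
blank = sanitized_announcement("   ")
unset = sanitized_announcement(nil)
```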
Generated  +12 -2
@@ -1,12 +1,16 @@
 {
   "name": "potato-mesh",
-  "version": "0.5.12",
+  "version": "0.5.9",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "potato-mesh",
-      "version": "0.5.12",
+      "version": "0.5.9",
+      "hasInstallScript": true,
+      "dependencies": {
+        "uplot": "^1.6.30"
+      },
       "devDependencies": {
         "istanbul-lib-coverage": "^3.2.2",
         "istanbul-lib-report": "^3.0.1",
@@ -154,6 +158,12 @@
         "node": ">=8"
       }
     },
+    "node_modules/uplot": {
+      "version": "1.6.32",
+      "resolved": "https://registry.npmjs.org/uplot/-/uplot-1.6.32.tgz",
+      "integrity": "sha512-KIMVnG68zvu5XXUbC4LQEPnhwOxBuLyW1AHtpm6IKTXImkbLgkMy+jabjLgSLMasNuGGzQm/ep3tOkyTxpiQIw==",
+      "license": "MIT"
+    },
     "node_modules/v8-to-istanbul": {
       "version": "9.3.0",
       "resolved": "https://registry.npmjs.org/v8-to-istanbul/-/v8-to-istanbul-9.3.0.tgz",
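The lockfile pinning uplot 1.6.32 against the manifest's `^1.6.30` range is expected: a caret range accepts any version with the same major that is at least the stated one. Sketched with Ruby's `Gem::Requirement` as a stand-in for npm's semver matcher:

```ruby
require "rubygems"

# npm's ^1.6.30 is roughly ">= 1.6.30, < 2.0.0"; Gem::Requirement
# accepts multiple constraint strings and ANDs them together.
caret = Gem::Requirement.new(">= 1.6.30", "< 2.0.0")

resolved = Gem::Version.new("1.6.32") # the version recorded in the lockfile
in_range = caret.satisfied_by?(resolved)
```

So a fresh install may legitimately record a newer patch release than the manifest's floor, which is exactly what this lockfile shows.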
+5 -1
@@ -1,11 +1,15 @@
 {
   "name": "potato-mesh",
-  "version": "0.5.12",
+  "version": "0.5.9",
   "type": "module",
   "private": true,
   "scripts": {
+    "postinstall": "node ./scripts/copy-uplot.js",
     "test": "mkdir -p reports coverage && NODE_V8_COVERAGE=coverage node --test --experimental-test-coverage --test-reporter=spec --test-reporter-destination=stdout --test-reporter=junit --test-reporter-destination=reports/javascript-junit.xml && node ./scripts/export-coverage.js"
   },
+  "dependencies": {
+    "uplot": "^1.6.30"
+  },
   "devDependencies": {
     "istanbul-lib-coverage": "^3.2.2",
     "istanbul-lib-report": "^3.0.1",
@@ -1,5 +0,0 @@
[deleted file: 150×150 SVG image, 4.9 KiB — two `<path>` elements, a dark #010101 background square and a light #FBFBFB glyph; the diff viewer presents this file as an image]
@@ -1,16 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
-<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
-<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="512" height="512" viewBox="0 0 512 512" xml:space="preserve">
-<desc>Created with Fabric.js 4.6.0</desc>
-<defs>
-</defs>
-<g transform="matrix(1 0 0 1 256 256)" id="xYQ9Gk9Jwpgj_HMOXB3F_" >
-<path style="stroke: rgb(213,130,139); stroke-width: 0; stroke-dasharray: none; stroke-linecap: butt; stroke-dashoffset: 0; stroke-linejoin: miter; stroke-miterlimit: 4; fill: rgb(103,234,148); fill-rule: nonzero; opacity: 1;" vector-effect="non-scaling-stroke" transform=" translate(-256, -256)" d="M 0 0 L 512 0 L 512 512 L 0 512 z" stroke-linecap="round" />
-</g>
-<g transform="matrix(1.79 0 0 1.79 313.74 258.36)" id="1xBsk2n9FZp60Rz1O-ceJ" >
-<path style="stroke: none; stroke-width: 1; stroke-dasharray: none; stroke-linecap: butt; stroke-dashoffset: 0; stroke-linejoin: round; stroke-miterlimit: 2; fill: rgb(44,45,60); fill-rule: evenodd; opacity: 1;" vector-effect="non-scaling-stroke" transform=" translate(-250.97, -362.41)" d="M 250.908 330.267 L 193.126 415.005 L 180.938 406.694 L 244.802 313.037 C 246.174 311.024 248.453 309.819 250.889 309.816 C 253.326 309.814 255.606 311.015 256.982 313.026 L 320.994 406.536 L 308.821 414.869 L 250.908 330.267 Z" stroke-linecap="round" />
-</g>
-<g transform="matrix(1.81 0 0 1.81 145 256.15)" id="KxN7E9YpbyPgz0S4z4Cl6" >
-<path style="stroke: none; stroke-width: 1; stroke-dasharray: none; stroke-linecap: butt; stroke-dashoffset: 0; stroke-linejoin: round; stroke-miterlimit: 2; fill: rgb(44,45,60); fill-rule: evenodd; opacity: 1;" vector-effect="non-scaling-stroke" transform=" translate(-115.14, -528.06)" d="M 87.642 581.398 L 154.757 482.977 L 142.638 474.713 L 75.523 573.134 L 87.642 581.398 Z" stroke-linecap="round" />
-</g>
-</svg>
Before: 1.9 KiB
Some files were not shown because too many files have changed in this diff.