mirror of
https://github.com/l5yth/potato-mesh.git
synced 2026-05-13 21:05:51 +02:00
Compare commits

1 commit: `858e9fa189`
```diff
@@ -20,7 +20,6 @@ on:
   pull_request:
     branches: [ "main" ]
     paths:
       - '.github/**'
       - 'web/**'
       - 'tests/**'
```

```diff
@@ -20,7 +20,6 @@ on:
   pull_request:
     branches: [ "main" ]
     paths:
       - '.github/**'
       - 'app/**'
       - 'tests/**'
```

```diff
@@ -20,7 +20,6 @@ on:
   pull_request:
     branches: [ "main" ]
     paths:
       - '.github/**'
       - 'data/**'
       - 'tests/**'
```

```diff
@@ -20,7 +20,6 @@ on:
   pull_request:
     branches: [ "main" ]
     paths:
       - '.github/**'
       - 'web/**'
       - 'tests/**'
@@ -35,7 +34,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        ruby-version: ['3.4', '4.0']
+        ruby-version: ['3.3', '3.4']

     steps:
       - uses: actions/checkout@v5
```
```diff
@@ -1,44 +1,5 @@
 # CHANGELOG

 ## v0.5.9

 * Matrix: listen for synapse on port 41448 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/607>
 * Web: collapse federation map legend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/604>
 * Web: fix stale node queries by @l5yth in <https://github.com/l5yth/potato-mesh/pull/603>
 * Matrix: move short name to display name by @l5yth in <https://github.com/l5yth/potato-mesh/pull/602>
 * Ci: update ruby to 4 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/601>
 * Web: display traces of last 28 days if available by @l5yth in <https://github.com/l5yth/potato-mesh/pull/599>
 * Web: establish menu structure by @l5yth in <https://github.com/l5yth/potato-mesh/pull/597>
 * Matrix: fix the text-message checkpoint regression by @l5yth in <https://github.com/l5yth/potato-mesh/pull/595>
 * Matrix: cache seen messages by rx_time not id by @l5yth in <https://github.com/l5yth/potato-mesh/pull/594>
 * Web: hide the default '0' tab when not active by @l5yth in <https://github.com/l5yth/potato-mesh/pull/593>
 * Matrix: fix empty bridge state json by @l5yth in <https://github.com/l5yth/potato-mesh/pull/592>
 * Web: allow certain charts to overflow upper bounds by @l5yth in <https://github.com/l5yth/potato-mesh/pull/585>
 * Ingestor: support ROUTING_APP messages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/584>
 * Ci: run nix flake check on ci by @l5yth in <https://github.com/l5yth/potato-mesh/pull/583>
 * Web: hide legend by default by @l5yth in <https://github.com/l5yth/potato-mesh/pull/582>
 * Nix flake by @benjajaja in <https://github.com/l5yth/potato-mesh/pull/577>
 * Support BLE UUID format for macOS Bluetooth devices by @apo-mak in <https://github.com/l5yth/potato-mesh/pull/575>
 * Web: add mesh.qrp.ro as seed node by @l5yth in <https://github.com/l5yth/potato-mesh/pull/573>
 * Web: ensure unknown nodes for messages and traces by @l5yth in <https://github.com/l5yth/potato-mesh/pull/572>
 * Chore: bump version to 0.5.9 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/569>

 ## v0.5.8

 * Web: add secondary seed node jmrp.io by @l5yth in <https://github.com/l5yth/potato-mesh/pull/568>
 * Data: implement whitelist for ingestor by @l5yth in <https://github.com/l5yth/potato-mesh/pull/567>
 * Web: add ?since= parameter to all apis by @l5yth in <https://github.com/l5yth/potato-mesh/pull/566>
 * Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/565>
 * Matrix: fix docker build by @l5yth in <https://github.com/l5yth/potato-mesh/pull/564>
 * Web: fix federation signature validation and create fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/563>
 * Chore: update readme by @l5yth in <https://github.com/l5yth/potato-mesh/pull/561>
 * Matrix: add docker file for bridge by @l5yth in <https://github.com/l5yth/potato-mesh/pull/556>
 * Matrix: add health checks to startup by @l5yth in <https://github.com/l5yth/potato-mesh/pull/555>
 * Matrix: omit the api part in base url by @l5yth in <https://github.com/l5yth/potato-mesh/pull/554>
 * App: add utility coverage tests for main.dart by @l5yth in <https://github.com/l5yth/potato-mesh/pull/552>
 * Data: add thorough daemon unit tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/553>
 * Chore: bump version to 0.5.8 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/551>

 ## v0.5.7

 * Data: track ingestors heartbeat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/549>
```
@@ -88,7 +88,6 @@ The web app can be configured with environment variables (defaults shown):

| Variable | Default | Description |
|---|---|---|
| `CHANNEL` | `"#LongFast"` | Default channel name displayed in the UI. |
| `FREQUENCY` | `"915MHz"` | Default frequency description displayed in the UI. |
| `CONTACT_LINK` | `"#potatomesh:dod.ngo"` | Chat link or Matrix alias rendered in the footer and overlays. |
| `ANNOUNCEMENT` | _unset_ | Optional announcement banner text rendered above the header on every page. |
| `MAP_CENTER` | `38.761944,-27.090833` | Latitude and longitude that centre the map on load. |
| `MAP_ZOOM` | _unset_ | Fixed Leaflet zoom applied on first load; disables auto-fit when provided. |
| `MAX_DISTANCE` | `42` | Maximum distance (km) before node relationships are hidden on the map. |
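`MAP_CENTER` packs two coordinates into one string; a minimal parsing sketch (the helper name and error behaviour are assumptions, not the web app's actual code):

```python
def parse_map_center(value: str) -> tuple[float, float]:
    """Split a 'lat,lon' string such as MAP_CENTER into two floats.

    Hypothetical helper; the real web app may parse this differently.
    """
    lat_str, lon_str = value.split(",", 1)
    return float(lat_str), float(lon_str)

# Default value from the table above
lat, lon = parse_map_center("38.761944,-27.090833")
```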
````diff
@@ -252,36 +251,15 @@ services.potato-mesh = {

 ## Docker

-Docker images are published on GitHub Container Registry for each release.
-Image names and tags follow the workflow format:
-`${IMAGE_PREFIX}-${service}-${architecture}:${tag}` (see `.github/workflows/docker.yml`).
+Docker images are published on Github for each release:

 ```bash
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:latest
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-arm64:latest
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-armv7:latest
-
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:latest
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-arm64:latest
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-armv7:latest
-
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:latest
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-arm64:latest
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-armv7:latest
-
-# version-pinned examples
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.5.5
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.5.5
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.5.5
+docker pull ghcr.io/l5yth/potato-mesh/web:latest # newest release
+docker pull ghcr.io/l5yth/potato-mesh/web:v0.5.5 # pinned historical release
+docker pull ghcr.io/l5yth/potato-mesh/ingestor:latest
+docker pull ghcr.io/l5yth/potato-mesh/matrix-bridge:latest
 ```

-Note: `latest` is only published for non-prerelease versions. Pre-release tags
-such as `-rc`, `-beta`, `-alpha`, or `-dev` are version-tagged only.
-
-When using Compose, set `POTATOMESH_IMAGE_ARCH` in `docker-compose.yml` (or via
-environment) so service images resolve to the correct architecture variant and
-you avoid manual tag mistakes.
-
 Feel free to run the [configure.sh](./configure.sh) script to set up your
 environment. See the [Docker guide](DOCKER.md) for more details and custom
 deployment instructions.
````
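The `${IMAGE_PREFIX}-${service}-${architecture}:${tag}` naming scheme above can be sanity-checked with a few lines of Python (the helper is illustrative; the real composition happens in `.github/workflows/docker.yml`):

```python
def image_ref(prefix: str, service: str, architecture: str, tag: str) -> str:
    """Compose a GHCR image reference following the documented
    `${IMAGE_PREFIX}-${service}-${architecture}:${tag}` pattern.
    Illustrative helper only, not part of the repository."""
    return f"{prefix}-{service}-{architecture}:{tag}"

# Reproduces the first `docker pull` example in the README
ref = image_ref("ghcr.io/l5yth/potato-mesh", "web", "linux-amd64", "latest")
```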
```diff
@@ -292,8 +270,6 @@ A Matrix bridge is currently being worked on. It requests messages from a configured
 potato-mesh instance and forwards them to a specified Matrix channel; see
 [matrix/README.md](./matrix/README.md).

 ![potato-web-matrix](https://github.com/user-attachments/assets/5b9f1562-d83e-4e70-b11d-4c4a1f96b5bd)

 ## Mobile App

 A mobile _reader_ app is currently being worked on. Stay tuned for releases and updates.
```
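The bridge's poll-and-forward behaviour described above (together with the `?since=` parameter mentioned in the changelog) can be sketched roughly like this; every name here is a stand-in, not the bridge's actual API:

```python
def bridge_once(fetch_messages, post_to_matrix, since: int) -> int:
    """One poll cycle of a hypothetical bridge loop: fetch messages newer
    than `since` from a PotatoMesh instance and forward them to Matrix.
    `fetch_messages` and `post_to_matrix` stand in for the real HTTP calls.
    Returns the new checkpoint (largest rx_time seen)."""
    for msg in fetch_messages(since=since):
        post_to_matrix(msg["text"])
        since = max(since, msg["rx_time"])
    return since

# Tiny in-memory stand-ins for the HTTP layer:
messages = [{"rx_time": 10, "text": "hello"}, {"rx_time": 20, "text": "world"}]
sent = []
checkpoint = bridge_once(
    lambda since: [m for m in messages if m["rx_time"] > since],
    sent.append,
    since=0,
)
```

Tracking the checkpoint by `rx_time` rather than message id matches the fix described in PR 594.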
```diff
@@ -1,18 +1,3 @@
-/*
- * Copyright © 2025-26 l5yth & contributors
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */

 plugins {
     id("com.android.application")
     id("kotlin-android")
```

```diff
@@ -1,16 +1,3 @@
-// (Apache License 2.0 header, as above)

 package net.potatomesh.reader

 import io.flutter.embedding.android.FlutterActivity
```

```diff
@@ -1,18 +1,3 @@
-/* (Apache License 2.0 header, as above) */

 allprojects {
     repositories {
         google()
```

```diff
@@ -1,18 +1,3 @@
-/* (Apache License 2.0 header, as above) */

 pluginManagement {
     val flutterSdkPath =
         run {
```
+1 −13

```diff
@@ -1,18 +1,5 @@
 #!/usr/bin/env bash

-# (Apache License 2.0 header, as above)

 export GIT_TAG="$(git describe --tags --abbrev=0)"
 export GIT_COMMITS="$(git rev-list --count ${GIT_TAG}..HEAD)"
 export GIT_SHA="$(git rev-parse --short=9 HEAD)"
@@ -25,3 +12,4 @@ flutter run \
   --dart-define=GIT_SHA="${GIT_SHA}" \
   --dart-define=GIT_DIRTY="${GIT_DIRTY}" \
   --device-id 38151FDJH00D4C
```
```diff
@@ -15,11 +15,11 @@
   <key>CFBundlePackageType</key>
   <string>FMWK</string>
   <key>CFBundleShortVersionString</key>
-  <string>0.5.10</string>
+  <string>0.5.9</string>
   <key>CFBundleSignature</key>
   <string>????</string>
   <key>CFBundleVersion</key>
-  <string>0.5.10</string>
+  <string>0.5.9</string>
   <key>MinimumOSVersion</key>
   <string>14.0</string>
 </dict>
```
```diff
@@ -1,16 +1,3 @@
-// (Apache License 2.0 header, as above)

 import Flutter
 import UIKit
```

```diff
@@ -1,14 +1 @@
-// (Apache License 2.0 header, as above)

 #import "GeneratedPluginRegistrant.h"
```

```diff
@@ -1,16 +1,3 @@
-// (Apache License 2.0 header, as above)

 import Flutter
 import UIKit
 import XCTest
```
+1 −1

```diff
@@ -1,7 +1,7 @@
 name: potato_mesh_reader
 description: Meshtastic Reader — read-only view for PotatoMesh messages.
 publish_to: "none"
-version: 0.5.10
+version: 0.5.9

 environment:
   sdk: ">=3.4.0 <4.0.0"
```
+1 −13

```diff
@@ -1,18 +1,5 @@
 #!/usr/bin/env bash

-# (Apache License 2.0 header, as above)

 set -euo pipefail

 export GIT_TAG="$(git describe --tags --abbrev=0)"
@@ -40,3 +27,4 @@ fi
 export APK_DIR="build/app/outputs/flutter-apk"
 mv -v "${APK_DIR}/app-release.apk" "${APK_DIR}/potatomesh-reader-android-${TAG_NAME}.apk"
 (cd "${APK_DIR}" && sha256sum "potatomesh-reader-android-${TAG_NAME}.apk" > "potatomesh-reader-android-${TAG_NAME}.apk.sha256sum")
```
```diff
@@ -1,16 +1,3 @@
-// (Apache License 2.0 header, as above)

 // This is a basic Flutter widget test.
 //
 // To perform an interaction with a widget in your test, use the WidgetTester
```
+1 −1

```diff
@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
 message information before forwarding it to the accompanying web application.
 """

-VERSION = "0.5.10"
+VERSION = "0.5.9"
 """Semantic version identifier shared with the dashboard and front-end."""

 __version__ = VERSION
```
@@ -1,85 +0,0 @@ (file deleted)

```python
# Copyright © 2025-26 l5yth & contributors
# (Apache License 2.0 header, as above)

"""Decode Meshtastic protobuf payloads from stdin JSON."""

from __future__ import annotations

import base64
import json
import os
import sys
from typing import Any, Dict, Tuple

SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
if SCRIPT_DIR in sys.path:
    sys.path.remove(SCRIPT_DIR)

from google.protobuf.json_format import MessageToDict
from meshtastic.protobuf import mesh_pb2, telemetry_pb2

PORTNUM_MAP: Dict[int, Tuple[str, Any]] = {
    3: ("POSITION_APP", mesh_pb2.Position),
    4: ("NODEINFO_APP", mesh_pb2.NodeInfo),
    5: ("ROUTING_APP", mesh_pb2.Routing),
    67: ("TELEMETRY_APP", telemetry_pb2.Telemetry),
    70: ("TRACEROUTE_APP", mesh_pb2.RouteDiscovery),
    71: ("NEIGHBORINFO_APP", mesh_pb2.NeighborInfo),
}


def _decode_payload(portnum: int, payload_b64: str) -> dict[str, Any]:
    if portnum not in PORTNUM_MAP:
        return {"error": "unsupported-port", "portnum": portnum}
    try:
        payload_bytes = base64.b64decode(payload_b64, validate=True)
    except Exception as exc:
        return {"error": f"invalid-payload: {exc}"}

    name, message_cls = PORTNUM_MAP[portnum]
    msg = message_cls()
    try:
        msg.ParseFromString(payload_bytes)
    except Exception as exc:
        return {"error": f"decode-failed: {exc}", "portnum": portnum, "type": name}

    decoded = MessageToDict(msg, preserving_proto_field_name=True)
    return {"portnum": portnum, "type": name, "payload": decoded}


def main() -> int:
    raw = sys.stdin.read()
    try:
        request = json.loads(raw)
    except json.JSONDecodeError as exc:
        sys.stdout.write(json.dumps({"error": f"invalid-json: {exc}"}))
        return 1

    portnum = request.get("portnum")
    payload_b64 = request.get("payload_b64")

    if not isinstance(portnum, int):
        sys.stdout.write(json.dumps({"error": "missing-portnum"}))
        return 1
    if not isinstance(payload_b64, str):
        sys.stdout.write(json.dumps({"error": "missing-payload"}))
        return 1

    result = _decode_payload(portnum, payload_b64)
    sys.stdout.write(json.dumps(result))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```
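For reference, the stdin contract of the deleted script can be exercised without the `meshtastic` package; this sketch mirrors only `main()`'s validation step and returns the same error payloads (the helper name is ours, not the script's):

```python
import json

def validate_request(raw: str) -> dict:
    """Mirror of the decoder's input checks (validation only; the
    protobuf decode step is omitted since it needs the meshtastic
    package). Returns the same error payloads main() would emit."""
    try:
        request = json.loads(raw)
    except json.JSONDecodeError as exc:
        return {"error": f"invalid-json: {exc}"}
    if not isinstance(request.get("portnum"), int):
        return {"error": "missing-portnum"}
    if not isinstance(request.get("payload_b64"), str):
        return {"error": "missing-payload"}
    return request

ok = validate_request('{"portnum": 3, "payload_b64": "AA=="}')
bad = validate_request('{"payload_b64": "AA=="}')
```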
```diff
@@ -424,7 +424,6 @@ def store_position_packet(packet: Mapping, decoded: Mapping) -> None:
         "hop_limit": hop_limit,
         "bitfield": bitfield,
         "payload_b64": payload_b64,
-        "ingestor": host_node_id(),
     }
     if raw_payload:
         position_payload["raw"] = raw_payload
@@ -569,7 +568,6 @@ def store_traceroute_packet(packet: Mapping, decoded: Mapping) -> None:
         "rssi": rssi,
         "snr": snr,
         "elapsed_ms": elapsed_ms,
-        "ingestor": host_node_id(),
     }

     _queue_post_json(
@@ -937,7 +935,6 @@ def store_telemetry_packet(packet: Mapping, decoded: Mapping) -> None:
         "rssi": rssi,
         "hop_limit": hop_limit,
         "payload_b64": payload_b64,
-        "ingestor": host_node_id(),
     }

     if battery_level is not None:
@@ -1266,7 +1263,6 @@ def store_neighborinfo_packet(packet: Mapping, decoded: Mapping) -> None:
         "neighbors": neighbor_entries,
         "rx_time": rx_time,
         "rx_iso": _iso(rx_time),
-        "ingestor": host_node_id(),
     }

     if node_broadcast_interval is not None:
@@ -1524,7 +1520,6 @@ def store_packet_dict(packet: Mapping) -> None:
         "hop_limit": int(hop) if hop is not None else None,
         "reply_id": reply_id,
         "emoji": emoji,
-        "ingestor": host_node_id(),
     }

     if not encrypted_flag and channel_name_value:
```
+1 −2

```diff
@@ -29,8 +29,7 @@ CREATE TABLE IF NOT EXISTS messages (
     modem_preset TEXT,
     channel_name TEXT,
     reply_id INTEGER,
-    emoji TEXT,
-    ingestor TEXT
+    emoji TEXT
 );

 CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time);
```
```diff
@@ -17,7 +17,6 @@ CREATE TABLE IF NOT EXISTS neighbors (
     neighbor_id TEXT NOT NULL,
     snr REAL,
     rx_time INTEGER NOT NULL,
-    ingestor TEXT,
     PRIMARY KEY (node_id, neighbor_id),
     FOREIGN KEY (node_id) REFERENCES nodes(node_id) ON DELETE CASCADE,
     FOREIGN KEY (neighbor_id) REFERENCES nodes(node_id) ON DELETE CASCADE
```
+1 −2

```diff
@@ -33,8 +33,7 @@ CREATE TABLE IF NOT EXISTS positions (
     rssi INTEGER,
     hop_limit INTEGER,
     bitfield INTEGER,
-    payload_b64 TEXT,
-    ingestor TEXT
+    payload_b64 TEXT
 );

 CREATE INDEX IF NOT EXISTS idx_positions_rx_time ON positions(rx_time);
```
+1 −2

```diff
@@ -53,8 +53,7 @@ CREATE TABLE IF NOT EXISTS telemetry (
     rainfall_1h REAL,
     rainfall_24h REAL,
     soil_moisture INTEGER,
-    soil_temperature REAL,
-    ingestor TEXT
+    soil_temperature REAL
 );

 CREATE INDEX IF NOT EXISTS idx_telemetry_rx_time ON telemetry(rx_time);
```
+1 −2

```diff
@@ -21,8 +21,7 @@ CREATE TABLE IF NOT EXISTS traces (
     rx_iso TEXT NOT NULL,
     rssi INTEGER,
     snr REAL,
-    elapsed_ms INTEGER,
-    ingestor TEXT
+    elapsed_ms INTEGER
 );

 CREATE TABLE IF NOT EXISTS trace_hops (
```
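The trimmed `traces` columns can be exercised with SQLite's standard library bindings; this sketch uses only the columns visible in the hunk above, so it is illustrative rather than the repository's full schema:

```python
import sqlite3

# Illustrative only: just the traces columns shown in the hunk above;
# the real table defines additional columns not visible here.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS traces (
        rx_iso TEXT NOT NULL,
        rssi INTEGER,
        snr REAL,
        elapsed_ms INTEGER
    )"""
)
conn.execute(
    "INSERT INTO traces (rx_iso, rssi, snr, elapsed_ms) VALUES (?, ?, ?, ?)",
    ("2026-05-13T21:05:51+02:00", -90, 4.5, 1200),
)
row = conn.execute("SELECT rssi, snr, elapsed_ms FROM traces").fetchone()
```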
+1 −10

```diff
@@ -81,12 +81,7 @@ x-matrix-bridge-base: &matrix-bridge-base
   image: ghcr.io/l5yth/potato-mesh-matrix-bridge-${POTATOMESH_IMAGE_ARCH:-linux-amd64}:${POTATOMESH_IMAGE_TAG:-latest}
   volumes:
     - potatomesh_matrix_bridge_state:/app
-    - type: bind
-      source: ./matrix/Config.toml
-      target: /app/Config.toml
-      read_only: true
-      bind:
-        create_host_path: false
+    - ./matrix/Config.toml:/app/Config.toml:ro
   restart: unless-stopped
   deploy:
     resources:
@@ -133,8 +128,6 @@ services:
   matrix-bridge:
     <<: *matrix-bridge-base
     network_mode: host
     profiles:
       - matrix
     depends_on:
       - web
     extra_hosts:
@@ -147,8 +140,6 @@ services:
       - potatomesh-network
     depends_on:
       - web-bridge
     ports:
      - "41448:41448"
     profiles:
       - bridge
```
Generated
+145
-344
@@ -11,56 +11,6 @@ dependencies = [
|
||||
"memchr",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "anstream"
|
||||
version = "0.6.21"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a"
|
||||
dependencies = [
|
||||
"anstyle",
|
||||
"anstyle-parse",
|
||||
"anstyle-query",
|
||||
"anstyle-wincon",
|
||||
"colorchoice",
|
||||
"is_terminal_polyfill",
|
||||
"utf8parse",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "anstyle"
|
||||
version = "1.0.13"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78"
|
||||
|
||||
[[package]]
|
||||
name = "anstyle-parse"
|
||||
version = "0.2.7"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2"
|
||||
dependencies = [
|
||||
"utf8parse",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "anstyle-query"
|
||||
version = "1.1.5"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "40c48f72fd53cd289104fc64099abca73db4166ad86ea0b4341abe65af83dadc"
|
||||
dependencies = [
|
||||
"windows-sys 0.61.2",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "anstyle-wincon"
|
||||
version = "3.0.11"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "291e6a250ff86cd4a820112fb8898808a366d8f9f58ce16d1f538353ad55747d"
|
||||
dependencies = [
|
||||
"anstyle",
|
||||
"once_cell_polyfill",
|
||||
"windows-sys 0.61.2",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "anyhow"
|
||||
version = "1.0.100"
|
||||
@@ -77,84 +27,24 @@ dependencies = [
|
||||
"serde_json",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "async-trait"
|
||||
version = "0.1.89"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "9035ad2d096bed7955a320ee7e2230574d28fd3c3a0f186cbea1ff3c7eed5dbb"
|
||||
dependencies = [
|
||||
"proc-macro2",
|
||||
"quote",
|
||||
"syn",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "atomic-waker"
|
||||
version = "1.1.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"
|
||||
|
||||
[[package]]
|
||||
name = "axum"
|
||||
version = "0.7.9"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "edca88bc138befd0323b20752846e6587272d3b03b0343c8ea28a6f819e6e71f"
|
||||
dependencies = [
|
||||
"async-trait",
|
||||
"axum-core",
|
||||
"bytes",
|
||||
"futures-util",
|
||||
"http",
|
||||
"http-body",
|
||||
"http-body-util",
|
||||
"hyper",
|
||||
"hyper-util",
|
||||
"itoa",
|
||||
"matchit",
|
||||
"memchr",
|
||||
"mime",
|
||||
"percent-encoding",
|
||||
"pin-project-lite",
|
||||
"rustversion",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"serde_path_to_error",
|
||||
"serde_urlencoded",
|
||||
"sync_wrapper",
|
||||
"tokio",
|
||||
"tower",
|
||||
"tower-layer",
|
||||
"tower-service",
|
||||
"tracing",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "axum-core"
|
||||
version = "0.4.5"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "09f2bd6146b97ae3359fa0cc6d6b376d9539582c7b4220f041a33ec24c226199"
|
||||
dependencies = [
|
||||
"async-trait",
|
||||
"bytes",
|
||||
"futures-util",
|
||||
"http",
|
||||
"http-body",
|
||||
"http-body-util",
|
||||
"mime",
|
||||
"pin-project-lite",
|
||||
"rustversion",
|
||||
"sync_wrapper",
|
||||
"tower-layer",
|
||||
"tower-service",
|
||||
"tracing",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "base64"
|
||||
version = "0.22.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
|
||||
|
||||
[[package]]
|
||||
name = "bitflags"
|
||||
version = "1.3.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"
|
||||
|
||||
[[package]]
|
||||
name = "bitflags"
|
||||
version = "2.10.0"
|
||||
@@ -163,21 +53,21 @@ checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
|
||||
|
||||
[[package]]
|
||||
name = "bumpalo"
|
||||
version = "3.19.1"
|
||||
version = "3.19.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "5dd9dc738b7a8311c7ade152424974d8115f2cdad61e8dab8dac9f2362298510"
|
||||
checksum = "46c5e41b57b8bba42a04676d81cb89e9ee8e859a1a66f80a5a72e1cb76b34d43"
|
||||
|
||||
[[package]]
|
||||
name = "bytes"
|
||||
version = "1.11.1"
|
||||
version = "1.11.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33"
checksum = "b35204fbdc0b3f4446b89fc1ac2cf84a8a68971995d0bf2e925ec7cd960f9cb3"

[[package]]
name = "cc"
version = "1.2.52"
version = "1.2.47"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd4932aefd12402b36c60956a4fe0035421f544799057659ff86f923657aada3"
checksum = "cd405d82c84ff7f35739f175f67d8b9fb7687a0e84ccdc78bd3568839827cf07"
dependencies = [
 "find-msvc-tools",
 "shlex",
@@ -195,59 +85,13 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"

[[package]]
name = "clap"
version = "4.5.54"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c6e6ff9dcd79cff5cd969a17a545d79e84ab086e444102a591e288a8aa3ce394"
dependencies = [
 "clap_builder",
 "clap_derive",
]

[[package]]
name = "clap_builder"
version = "4.5.54"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa42cf4d2b7a41bc8f663a7cab4031ebafa1bf3875705bfaf8466dc60ab52c00"
dependencies = [
 "anstream",
 "anstyle",
 "clap_lex",
 "strsim",
]

[[package]]
name = "clap_derive"
version = "4.5.49"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a0b5487afeab2deb2ff4e03a807ad1a03ac532ff5a2cee5d86884440c7f7671"
dependencies = [
 "heck",
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "clap_lex"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"

[[package]]
name = "colorchoice"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75"

[[package]]
name = "colored"
version = "3.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fde0e0ec90c9dfb3b4b1a0891a7dcd0e2bffde2f7efed5fe7c9bb00e5bfb915e"
dependencies = [
 "windows-sys 0.59.0",
 "windows-sys 0.52.0",
]

[[package]]
@@ -310,9 +154,9 @@ checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"

[[package]]
name = "find-msvc-tools"
version = "0.1.7"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f449e6c6c08c865631d4890cfacf252b3d396c9bcc83adb6623cdb02a8336c41"
checksum = "3a3076410a55c90011c298b04d0cfa770b00fa04e1e3c97d3f6c9de105a03844"

[[package]]
name = "fnv"
@@ -344,6 +188,21 @@ dependencies = [
 "percent-encoding",
]

[[package]]
name = "futures"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "65bc07b1a8bc7c85c5f2e110c476c7389b4554ba72af57d8445ea63a576b0876"
dependencies = [
 "futures-channel",
 "futures-core",
 "futures-executor",
 "futures-io",
 "futures-sink",
 "futures-task",
 "futures-util",
]

[[package]]
name = "futures-channel"
version = "0.3.31"
@@ -351,6 +210,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2dff15bf788c671c1934e366d07e30c1814a8ef514e1af724a602e8a2fbe1b10"
dependencies = [
 "futures-core",
 "futures-sink",
]

[[package]]
@@ -370,6 +230,12 @@ dependencies = [
 "futures-util",
]

[[package]]
name = "futures-io"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9e5c1b78ca4aae1ac06c48a526a655760685149f0d465d21f37abfe57ce075c6"

[[package]]
name = "futures-sink"
version = "0.3.31"
@@ -388,8 +254,12 @@ version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81"
dependencies = [
 "futures-channel",
 "futures-core",
 "futures-io",
 "futures-sink",
 "futures-task",
 "memchr",
 "pin-project-lite",
 "pin-utils",
 "slab",
@@ -424,9 +294,9 @@ dependencies = [

[[package]]
name = "h2"
version = "0.4.13"
version = "0.4.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f44da3a8150a6703ed5d34e164b875fd14c2cdab9af1252a9a1020bde2bdc54"
checksum = "f3c0b69cfcb4e1b9f1bf2f53f95f766e4661169728ec61cd3fe5a0166f2d1386"
dependencies = [
 "atomic-waker",
 "bytes",
@@ -447,12 +317,6 @@ version = "0.16.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"

[[package]]
name = "heck"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"

[[package]]
name = "http"
version = "1.4.0"
@@ -556,9 +420,9 @@ dependencies = [

[[package]]
name = "hyper-util"
version = "0.1.19"
version = "0.1.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "727805d60e7938b76b826a6ef209eb70eaa1812794f9424d4a4e2d740662df5f"
checksum = "52e9a2a24dc5c6821e71a7030e1e14b7b632acac55c40e9d2e082c621261bb56"
dependencies = [
 "base64",
 "bytes",
@@ -628,9 +492,9 @@ checksum = "7aedcccd01fc5fe81e6b489c15b247b8b0690feb23304303a9e560f37efc560a"

[[package]]
name = "icu_properties"
version = "2.1.2"
version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "020bfc02fe870ec3a66d93e677ccca0562506e5872c650f893269e08615d74ec"
checksum = "e93fcd3157766c0c8da2f8cff6ce651a31f0810eaa1c51ec363ef790bbb5fb99"
dependencies = [
 "icu_collections",
 "icu_locale_core",
@@ -642,9 +506,9 @@ dependencies = [

[[package]]
name = "icu_properties_data"
version = "2.1.2"
version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "616c294cf8d725c6afcd8f55abc17c56464ef6211f9ed59cccffe534129c77af"
checksum = "02845b3647bb045f1100ecd6480ff52f34c35f82d9880e029d329c21d1054899"

[[package]]
name = "icu_provider"
@@ -684,9 +548,9 @@ dependencies = [

[[package]]
name = "indexmap"
version = "2.13.0"
version = "2.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7714e70437a7dc3ac8eb7e6f8df75fd8eb422675fc7678aff7364301092b1017"
checksum = "0ad4bb2b565bca0645f4d68c5c9af97fba094e9791da685bf83cb5f3ce74acf2"
dependencies = [
 "equivalent",
 "hashbrown",
@@ -700,31 +564,25 @@ checksum = "469fb0b9cefa57e3ef31275ee7cacb78f2fdca44e4765491884a2b119d4eb130"

[[package]]
name = "iri-string"
version = "0.7.10"
version = "0.7.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c91338f0783edbd6195decb37bae672fd3b165faffb89bf7b9e6942f8b1a731a"
checksum = "4f867b9d1d896b67beb18518eda36fdb77a32ea590de864f1325b294a6d14397"
dependencies = [
 "memchr",
 "serde",
]

[[package]]
name = "is_terminal_polyfill"
version = "1.70.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695"

[[package]]
name = "itoa"
version = "1.0.17"
version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2"
checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c"

[[package]]
name = "js-sys"
version = "0.3.83"
version = "0.3.82"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "464a3709c7f55f1f721e5389aa6ea4e3bc6aba669353300af094b29ffbdde1d8"
checksum = "b011eec8cc36da2aab2d5cff675ec18454fad408585853910a202391cf9f8e65"
dependencies = [
 "once_cell",
 "wasm-bindgen",
@@ -738,9 +596,9 @@ checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"

[[package]]
name = "libc"
version = "0.2.180"
version = "0.2.177"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bcc35a38544a891a5f7c865aca548a982ccb3b8650a5b06d0fd33a10283c56fc"
checksum = "2874a2af47a2325c2001a6e6fad9b16a53b802102b528163885171cf92b15976"

[[package]]
name = "linux-raw-sys"
@@ -765,9 +623,9 @@ dependencies = [

[[package]]
name = "log"
version = "0.4.29"
version = "0.4.28"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897"
checksum = "34080505efa8e45a4b816c349525ebe327ceaa8559756f0356cba97ef3bf7432"

[[package]]
name = "lru-slab"
@@ -784,12 +642,6 @@ dependencies = [
 "regex-automata",
]

[[package]]
name = "matchit"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0e7465ac9959cc2b1404e8e2367b43684a6d13790fe23056cc8c6c5a6b7bcb94"

[[package]]
name = "memchr"
version = "2.7.6"
@@ -804,9 +656,9 @@ checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a"

[[package]]
name = "mio"
version = "1.1.1"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a69bcab0ad47271a0234d9422b131806bf3968021e5dc9328caf2d4cd58557fc"
checksum = "69d83b0086dc8ecf3ce9ae2874b2d1290252e2a30720bea58a5c6639b0092873"
dependencies = [
 "libc",
 "wasi",
@@ -815,21 +667,20 @@ dependencies = [

[[package]]
name = "mockito"
version = "1.7.1"
version = "1.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e0603425789b4a70fcc4ac4f5a46a566c116ee3e2a6b768dc623f7719c611de"
checksum = "7760e0e418d9b7e5777c0374009ca4c93861b9066f18cb334a20ce50ab63aa48"
dependencies = [
 "assert-json-diff",
 "bytes",
 "colored",
 "futures-core",
 "futures-util",
 "http",
 "http-body",
 "http-body-util",
 "hyper",
 "hyper-util",
 "log",
 "pin-project-lite",
 "rand",
 "regex",
 "serde_json",
@@ -870,19 +721,13 @@ version = "1.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"

[[package]]
name = "once_cell_polyfill"
version = "1.70.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe"

[[package]]
name = "openssl"
version = "0.10.75"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08838db121398ad17ab8531ce9de97b244589089e290a384c900cb9ff7434328"
dependencies = [
 "bitflags",
 "bitflags 2.10.0",
 "cfg-if",
 "foreign-types",
 "libc",
@@ -969,11 +814,9 @@ checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"

[[package]]
name = "potatomesh-matrix-bridge"
version = "0.5.10"
version = "0.5.9"
dependencies = [
 "anyhow",
 "axum",
 "clap",
 "mockito",
 "reqwest",
 "serde",
@@ -982,7 +825,6 @@ dependencies = [
 "tempfile",
 "tokio",
 "toml",
 "tower",
 "tracing",
 "tracing-subscriber",
 "urlencoding",
@@ -1008,9 +850,9 @@ dependencies = [

[[package]]
name = "proc-macro2"
version = "1.0.105"
version = "1.0.103"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "535d180e0ecab6268a3e718bb9fd44db66bbbc256257165fc699dadf70d16fe7"
checksum = "5ee95bc4ef87b8d5ba32e8b7714ccc834865276eab0aed5c9958d00ec45f49e8"
dependencies = [
 "unicode-ident",
]
@@ -1072,9 +914,9 @@ dependencies = [

[[package]]
name = "quote"
version = "1.0.43"
version = "1.0.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc74d9a594b72ae6656596548f56f667211f8a97b3d4c3d467150794690dc40a"
checksum = "a338cc41d27e6cc6dce6cefc13a0729dfbb81c262b1f519331575dd80ef3067f"
dependencies = [
 "proc-macro2",
]
@@ -1120,7 +962,7 @@ version = "0.5.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ed2bf2547551a7053d6fdfafda3f938979645c44812fbfcda098faae3f1a362d"
dependencies = [
 "bitflags",
 "bitflags 2.10.0",
]

[[package]]
@@ -1154,9 +996,9 @@ checksum = "7a2d987857b319362043e95f5353c0535c1f58eec5336fdfcf626430af7def58"

[[package]]
name = "reqwest"
version = "0.12.28"
version = "0.12.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eddd3ca559203180a307f12d114c268abf583f59b03cb906fd0b3ff8646c1147"
checksum = "9d0946410b9f7b082a427e4ef5c8ff541a88b357bc6c637c40db3a68ac70a36f"
dependencies = [
 "base64",
 "bytes",
@@ -1218,11 +1060,11 @@ checksum = "357703d41365b4b27c590e3ed91eabb1b663f07c4c084095e60cbed4362dff0d"

[[package]]
name = "rustix"
version = "1.1.3"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "146c9e247ccc180c1f61615433868c99f3de3ae256a30a43b49f67c2d9171f34"
checksum = "cd15f8a2c5551a84d56efdc1cd049089e409ac19a3072d5037a17fd70719ff3e"
dependencies = [
 "bitflags",
 "bitflags 2.10.0",
 "errno",
 "libc",
 "linux-raw-sys",
@@ -1231,9 +1073,9 @@ dependencies = [

[[package]]
name = "rustls"
version = "0.23.36"
version = "0.23.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c665f33d38cea657d9614f766881e4d510e0eda4239891eea56b4cadcf01801b"
checksum = "533f54bc6a7d4f647e46ad909549eda97bf5afc1585190ef692b4286b198bd8f"
dependencies = [
 "once_cell",
 "ring",
@@ -1245,9 +1087,9 @@ dependencies = [

[[package]]
name = "rustls-pki-types"
version = "1.13.2"
version = "1.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "21e6f2ab2928ca4291b86736a8bd920a277a399bba1589409d72154ff87c1282"
checksum = "94182ad936a0c91c324cd46c6511b9510ed16af436d7b5bab34beab0afd55f7a"
dependencies = [
 "web-time",
 "zeroize",
@@ -1272,9 +1114,9 @@ checksum = "b39cdef0fa800fc44525c84ccb54a029961a8215f9619753635a9c0d2538d46d"

[[package]]
name = "ryu"
version = "1.0.22"
version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a50f4cf475b65d88e057964e0e9bb1f0aa9bbb2036dc65c64596b42932536984"
checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"

[[package]]
name = "scc"
@@ -1312,7 +1154,7 @@ version = "2.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "897b2245f0b511c87893af39b033e5ca9cce68824c4d7e7630b5a1d339658d02"
dependencies = [
 "bitflags",
 "bitflags 2.10.0",
 "core-foundation",
 "core-foundation-sys",
 "libc",
@@ -1361,33 +1203,22 @@ dependencies = [

[[package]]
name = "serde_json"
version = "1.0.149"
version = "1.0.145"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83fc039473c5595ace860d8c4fafa220ff474b3fc6bfdb4293327f1a37e94d86"
checksum = "402a6f66d8c709116cf22f558eab210f5a50187f702eb4d7e5ef38d9a7f1c79c"
dependencies = [
 "itoa",
 "memchr",
 "serde",
 "serde_core",
 "zmij",
]

[[package]]
name = "serde_path_to_error"
version = "0.1.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "10a9ff822e371bb5403e391ecd83e182e0e77ba7f6fe0160b795797109d1b457"
dependencies = [
 "itoa",
 "ryu",
 "serde",
 "serde_core",
]

[[package]]
name = "serde_spanned"
version = "1.0.4"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8bbf91e5a4d6315eee45e704372590b30e260ee83af6639d64557f51b067776"
checksum = "e24345aa0fe688594e73770a5f6d1b216508b4f93484c0026d521acd30134392"
dependencies = [
 "serde_core",
]
@@ -1406,12 +1237,11 @@ dependencies = [

[[package]]
name = "serial_test"
version = "3.3.1"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0d0b343e184fc3b7bb44dff0705fffcf4b3756ba6aff420dddd8b24ca145e555"
checksum = "1b258109f244e1d6891bf1053a55d63a5cd4f8f4c30cf9a1280989f80e7a1fa9"
dependencies = [
 "futures-executor",
 "futures-util",
 "futures",
 "log",
 "once_cell",
 "parking_lot",
@@ -1421,9 +1251,9 @@ dependencies = [

[[package]]
name = "serial_test_derive"
version = "3.3.1"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f50427f258fb77356e4cd4aa0e87e2bd2c66dbcee41dc405282cae2bfc26c83"
checksum = "5d69265a08751de7844521fd15003ae0a888e035773ba05695c5c759a6f89eef"
dependencies = [
 "proc-macro2",
 "quote",
@@ -1479,12 +1309,6 @@ version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6ce2be8dc25455e1f91df71bfa12ad37d7af1092ae736f3a6cd0e37bc7810596"

[[package]]
name = "strsim"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f"

[[package]]
name = "subtle"
version = "2.6.1"
@@ -1493,9 +1317,9 @@ checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292"

[[package]]
name = "syn"
version = "2.0.114"
version = "2.0.111"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4d107df263a3013ef9b1879b0df87d706ff80f65a86ea879bd9c31f9b307c2a"
checksum = "390cc9a294ab71bdb1aa2e99d13be9c753cd2d7bd6560c77118597410c4d2e87"
dependencies = [
 "proc-macro2",
 "quote",
@@ -1524,20 +1348,20 @@ dependencies = [

[[package]]
name = "system-configuration"
version = "0.6.1"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c879d448e9d986b661742763247d3693ed13609438cf3d006f51f5368a5ba6b"
checksum = "ba3a3adc5c275d719af8cb4272ea1c4a6d668a777f37e115f6d11ddbc1c8e0e7"
dependencies = [
 "bitflags",
 "bitflags 1.3.2",
 "core-foundation",
 "system-configuration-sys",
]

[[package]]
name = "system-configuration-sys"
version = "0.6.0"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e1d1b10ced5ca923a1fcb8d03e96b8d3268065d724548c0211415ff6ac6bac4"
checksum = "a75fb188eb626b924683e3b95e3a48e63551fcfb51949de2f06a9d91dbee93c9"
dependencies = [
 "core-foundation-sys",
 "libc",
@@ -1545,9 +1369,9 @@ dependencies = [

[[package]]
name = "tempfile"
version = "3.24.0"
version = "3.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "655da9c7eb6305c55742045d5a8d2037996d61d8de95806335c7c86ce0f82e9c"
checksum = "2d31c77bdf42a745371d260a26ca7163f1e0924b64afa0b688e61b5a9fa02f16"
dependencies = [
 "fastrand",
 "getrandom 0.3.4",
@@ -1612,9 +1436,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"

[[package]]
name = "tokio"
version = "1.49.0"
version = "1.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72a2903cd7736441aac9df9d7688bd0ce48edccaadf181c3b90be801e81d3d86"
checksum = "ff360e02eab121e0bc37a2d3b4d4dc622e6eda3a8e5253d5435ecf5bd4c68408"
dependencies = [
 "bytes",
 "libc",
@@ -1659,9 +1483,9 @@ dependencies = [

[[package]]
name = "tokio-util"
version = "0.7.18"
version = "0.7.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ae9cec805b01e8fc3fd2fe289f89149a9b66dd16786abd8b19cfa7b48cb0098"
checksum = "2efa149fe76073d6e8fd97ef4f4eca7b67f599660115591483572e406e165594"
dependencies = [
 "bytes",
 "futures-core",
@@ -1672,9 +1496,9 @@ dependencies = [

[[package]]
name = "toml"
version = "0.9.11+spec-1.1.0"
version = "0.9.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3afc9a848309fe1aaffaed6e1546a7a14de1f935dc9d89d32afd9a44bab7c46"
checksum = "f0dc8b1fb61449e27716ec0e1bdf0f6b8f3e8f6b05391e8497b8b6d7804ea6d8"
dependencies = [
 "indexmap",
 "serde_core",
@@ -1687,27 +1511,27 @@ dependencies = [

[[package]]
name = "toml_datetime"
version = "0.7.5+spec-1.1.0"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "92e1cfed4a3038bc5a127e35a2d360f145e1f4b971b551a2ba5fd7aedf7e1347"
checksum = "f2cdb639ebbc97961c51720f858597f7f24c4fc295327923af55b74c3c724533"
dependencies = [
 "serde_core",
]

[[package]]
name = "toml_parser"
version = "1.0.6+spec-1.1.0"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3198b4b0a8e11f09dd03e133c0280504d0801269e9afa46362ffde1cbeebf44"
checksum = "c0cbe268d35bdb4bb5a56a2de88d0ad0eb70af5384a99d648cd4b3d04039800e"
dependencies = [
 "winnow",
]

[[package]]
name = "toml_writer"
version = "1.0.6+spec-1.1.0"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ab16f14aed21ee8bfd8ec22513f7287cd4a91aa92e44edfe2c17ddd004e92607"
checksum = "df8b2b54733674ad286d16267dcfc7a71ed5c776e4ac7aa3c3e2561f7c637bf2"

[[package]]
name = "tower"
@@ -1722,16 +1546,15 @@ dependencies = [
 "tokio",
 "tower-layer",
 "tower-service",
 "tracing",
]

[[package]]
name = "tower-http"
version = "0.6.8"
version = "0.6.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4e6559d53cc268e5031cd8429d05415bc4cb4aefc4aa5d6cc35fbf5b924a1f8"
checksum = "9cf146f99d442e8e68e585f5d798ccd3cad9a7835b917e09728880a862706456"
dependencies = [
 "bitflags",
 "bitflags 2.10.0",
 "bytes",
 "futures-util",
 "http",
@@ -1757,11 +1580,10 @@ checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3"

[[package]]
name = "tracing"
version = "0.1.44"
version = "0.1.41"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "63e71662fa4b2a2c3a26f570f037eb95bb1f85397f3cd8076caed2f026a6d100"
checksum = "784e0ac535deb450455cbfa28a6f0df145ea1bb7ae51b821cf5e7927fdcfbdd0"
dependencies = [
 "log",
 "pin-project-lite",
 "tracing-attributes",
 "tracing-core",
@@ -1780,9 +1602,9 @@ dependencies = [

[[package]]
name = "tracing-core"
version = "0.1.36"
version = "0.1.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db97caf9d906fbde555dd62fa95ddba9eecfd14cb388e4f491a66d74cd5fb79a"
checksum = "7a04e24fab5c89c6a36eb8558c9656f30d81de51dfa4d3b45f26b21d61fa0a6c"
dependencies = [
 "once_cell",
 "valuable",
@@ -1801,9 +1623,9 @@ dependencies = [

[[package]]
name = "tracing-subscriber"
version = "0.3.22"
version = "0.3.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f30143827ddab0d256fd843b7a66d164e9f271cfa0dde49142c5ca0ca291f1e"
checksum = "2054a14f5307d601f88daf0553e1cbf472acc4f2c51afab632431cdcd72124d5"
dependencies = [
 "matchers",
 "nu-ansi-term",
@@ -1837,9 +1659,9 @@ checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1"

[[package]]
name = "url"
version = "2.5.8"
version = "2.5.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff67a8a4397373c3ef660812acab3268222035010ab8680ec4215f38ba3d0eed"
checksum = "08bc136a29a3d1758e07a9cca267be308aeebf5cfd5a10f3f67ab2097683ef5b"
dependencies = [
 "form_urlencoded",
 "idna",
@@ -1859,12 +1681,6 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"

[[package]]
name = "utf8parse"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"

[[package]]
name = "valuable"
version = "0.1.1"
@@ -1903,9 +1719,9 @@ dependencies = [

[[package]]
name = "wasm-bindgen"
version = "0.2.106"
version = "0.2.105"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0d759f433fa64a2d763d1340820e46e111a7a5ab75f993d1852d70b03dbb80fd"
checksum = "da95793dfc411fbbd93f5be7715b0578ec61fe87cb1a42b12eb625caa5c5ea60"
dependencies = [
 "cfg-if",
 "once_cell",
@@ -1916,9 +1732,9 @@ dependencies = [

[[package]]
name = "wasm-bindgen-futures"
version = "0.4.56"
version = "0.4.55"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "836d9622d604feee9e5de25ac10e3ea5f2d65b41eac0d9ce72eb5deae707ce7c"
checksum = "551f88106c6d5e7ccc7cd9a16f312dd3b5d36ea8b4954304657d5dfba115d4a0"
dependencies = [
 "cfg-if",
 "js-sys",
@@ -1929,9 +1745,9 @@ dependencies = [

[[package]]
name = "wasm-bindgen-macro"
version = "0.2.106"
version = "0.2.105"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "48cb0d2638f8baedbc542ed444afc0644a29166f1595371af4fecf8ce1e7eeb3"
checksum = "04264334509e04a7bf8690f2384ef5265f05143a4bff3889ab7a3269adab59c2"
dependencies = [
 "quote",
 "wasm-bindgen-macro-support",
@@ -1939,9 +1755,9 @@ dependencies = [

[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.106"
version = "0.2.105"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cefb59d5cd5f92d9dcf80e4683949f15ca4b511f4ac0a6e14d4e1ac60c6ecd40"
checksum = "420bc339d9f322e562942d52e115d57e950d12d88983a14c79b86859ee6c7ebc"
dependencies = [
 "bumpalo",
 "proc-macro2",
@@ -1952,18 +1768,18 @@ dependencies = [

[[package]]
name = "wasm-bindgen-shared"
version = "0.2.106"
version = "0.2.105"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cbc538057e648b67f72a982e708d485b2efa771e1ac05fec311f9f63e5800db4"
checksum = "76f218a38c84bcb33c25ec7059b07847d465ce0e0a76b995e134a45adcb6af76"
dependencies = [
 "unicode-ident",
]

[[package]]
name = "web-sys"
version = "0.3.83"
version = "0.3.82"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9b32828d774c412041098d182a8b38b16ea816958e07cf40eec2bc080ae137ac"
checksum = "3a1f95c0d03a47f4ae1f7a64643a6bb97465d9b740f0fa8f90ea33915c99a9a1"
dependencies = [
 "js-sys",
 "wasm-bindgen",
@@ -1981,9 +1797,9 @@ dependencies = [

[[package]]
name = "webpki-roots"
version = "1.0.5"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "12bed680863276c63889429bfd6cab3b99943659923822de1c8a39c49e4d722c"
checksum = "b2878ef029c47c6e8cf779119f20fcf52bde7ad42a731b2a304bc221df17571e"
dependencies = [
 "rustls-pki-types",
]
@@ -2032,15 +1848,6 @@ dependencies = [
 "windows-targets 0.52.6",
]

[[package]]
name = "windows-sys"
version = "0.59.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b"
dependencies = [
 "windows-targets 0.52.6",
]

[[package]]
name = "windows-sys"
version = "0.60.2"
@@ -2231,18 +2038,18 @@ dependencies = [

[[package]]
name = "zerocopy"
version = "0.8.33"
version = "0.8.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "668f5168d10b9ee831de31933dc111a459c97ec93225beb307aed970d1372dfd"
checksum = "4ea879c944afe8a2b25fef16bb4ba234f47c694565e97383b36f3a878219065c"
dependencies = [
 "zerocopy-derive",
]

[[package]]
name = "zerocopy-derive"
version = "0.8.33"
version = "0.8.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c7962b26b0a8685668b671ee4b54d007a67d4eaf05fda79ac0ecf41e32270f1"
checksum = "cf955aa904d6040f70dc8e9384444cb1030aed272ba3cb09bbc4ab9e7c1f34f5"
dependencies = [
 "proc-macro2",
 "quote",
@@ -2308,9 +2115,3 @@ dependencies = [
 "quote",
 "syn",
]

[[package]]
name = "zmij"
version = "1.0.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2fc5a66a20078bf1251bde995aa2fdcc4b800c70b5d92dd2c62abc5c60f679f8"

+1 -4

@@ -14,7 +14,7 @@

[package]
name = "potatomesh-matrix-bridge"
version = "0.5.10"
version = "0.5.9"
edition = "2021"

[dependencies]
@@ -27,11 +27,8 @@ anyhow = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["fmt", "env-filter"] }
urlencoding = "2"
axum = { version = "0.7", features = ["json"] }
clap = { version = "4", features = ["derive"] }

[dev-dependencies]
tempfile = "3"
mockito = "1"
serial_test = "3"
tower = "0.5"

+1 -2

@@ -9,8 +9,6 @@ poll_interval_secs = 60
homeserver = "https://matrix.dod.ngo"
# Appservice access token (from your registration.yaml)
as_token = "INVALID_TOKEN_NOT_WORKING"
# Homeserver token used to authenticate Synapse callbacks
hs_token = "INVALID_TOKEN_NOT_WORKING"
# Server name (domain) part of Matrix user IDs
server_name = "dod.ngo"
# Room ID to send into (must be joined by the appservice / puppets)
@@ -19,3 +17,4 @@ room_id = "!sXabOBXbVObAlZQEUs:c-base.org" # "#potato-bridge:c-base.org"
[state]
# Where to persist last seen message id (optional but recommended)
state_file = "bridge_state.json"

+1 -3

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

FROM rust:1.92-bookworm AS builder
FROM rust:1.91-bookworm AS builder

WORKDIR /app

@@ -37,8 +37,6 @@ COPY --from=builder /app/target/release/potatomesh-matrix-bridge /usr/local/bin/
COPY matrix/Config.toml /app/Config.example.toml
COPY matrix/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh

EXPOSE 41448

RUN chmod +x /usr/local/bin/docker-entrypoint.sh

ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
+7
-109
@@ -2,8 +2,6 @@

A small Rust daemon that bridges **PotatoMesh** LoRa messages into a **Matrix** room.

![potato-mesh-bridge](https://github.com/user-attachments/assets/0b2ef1cb-7367-4557-a1f4-b11071d5b2f1)

For each PotatoMesh node, the bridge creates (or uses) a **Matrix puppet user**:

- Matrix localpart: `potato_` + the hex node id (without `!`), e.g. `!67fc83cb` → `@potato_67fc83cb:example.org`
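The node-id-to-localpart mapping described above can be sketched in a few lines; `puppet_user_id` is a hypothetical helper name chosen for illustration, not the bridge's actual function:

```rust
/// Map a PotatoMesh node id like "!67fc83cb" to a Matrix puppet user id.
/// Hypothetical helper; the bridge's internal naming may differ.
fn puppet_user_id(node_id: &str, server_name: &str) -> String {
    // Strip the leading "!" from the node id, then prefix with "potato_".
    let hex = node_id.trim_start_matches('!');
    format!("@potato_{hex}:{server_name}")
}

fn main() {
    println!("{}", puppet_user_id("!67fc83cb", "example.org"));
}
```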
@@ -56,17 +54,9 @@ This is **not** a full appservice framework; it just speaks the minimal HTTP nee

## Configuration

Configuration can come from a TOML file, CLI flags, environment variables, or secret files. The bridge merges inputs in this order (highest to lowest):
All configuration is in `Config.toml` in the project root.

1. CLI flags
2. Environment variables
3. Secret files (`*_FILE` paths or container defaults)
4. TOML config file
5. Container defaults (paths + poll interval)

If no TOML file is provided, required values must be supplied via CLI/env/secret inputs.
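The per-field precedence above reduces to the `Option::or` pattern used by the bridge's override merge (`higher.or(lower)`); this is a sketch of the idea, not the bridge's exact code:

```rust
/// One field's merge: the value from the higher-priority source wins,
/// and the lower-priority value is kept only when the higher one is absent.
fn pick<T>(higher: Option<T>, lower: Option<T>) -> Option<T> {
    higher.or(lower)
}

fn main() {
    // CLI beats env, env beats the TOML file, and so on down the list.
    let from_cli = Some("https://potatomesh.net/");
    let from_env: Option<&str> = None;
    let from_toml = Some("https://example.org/");
    let effective = pick(from_cli, pick(from_env, from_toml));
    println!("{effective:?}");
}
```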
Example TOML:
Example:

```toml
[potatomesh]
@@ -80,8 +70,6 @@ poll_interval_secs = 10
homeserver = "https://matrix.example.org"
# Appservice access token (from your registration.yaml)
as_token = "YOUR_APPSERVICE_AS_TOKEN"
# Appservice homeserver token (must match registration hs_token)
hs_token = "SECRET_HS_TOKEN"
# Server name (domain) part of Matrix user IDs
server_name = "example.org"
# Room ID to send into (must be joined by the appservice / puppets)
@@ -92,92 +80,6 @@ room_id = "!yourroomid:example.org"
state_file = "bridge_state.json"
```

The `hs_token` is used to validate inbound appservice transactions. Keep it identical in `Config.toml` and your Matrix appservice registration file.

### CLI Flags

Run `potatomesh-matrix-bridge --help` for the full list. Common flags:

* `--config PATH`
* `--state-file PATH`
* `--potatomesh-base-url URL`
* `--potatomesh-poll-interval-secs SECS`
* `--matrix-homeserver URL`
* `--matrix-as-token TOKEN`
* `--matrix-as-token-file PATH`
* `--matrix-hs-token TOKEN`
* `--matrix-hs-token-file PATH`
* `--matrix-server-name NAME`
* `--matrix-room-id ROOM`
* `--container` / `--no-container`
* `--secrets-dir PATH`

### Environment Variables

* `POTATOMESH_CONFIG`
* `POTATOMESH_BASE_URL`
* `POTATOMESH_POLL_INTERVAL_SECS`
* `MATRIX_HOMESERVER`
* `MATRIX_AS_TOKEN`
* `MATRIX_AS_TOKEN_FILE`
* `MATRIX_HS_TOKEN`
* `MATRIX_HS_TOKEN_FILE`
* `MATRIX_SERVER_NAME`
* `MATRIX_ROOM_ID`
* `STATE_FILE`
* `POTATOMESH_CONTAINER`
* `POTATOMESH_SECRETS_DIR`
### Secret Files

If you supply `*_FILE` values, the bridge reads the secret contents and trims whitespace. When running inside a container, the bridge also checks the default secrets directory (default: `/run/secrets`) for:

* `matrix_as_token`
* `matrix_hs_token`
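The read-and-trim behavior described above can be sketched as follows; `read_secret` is an illustrative stand-in for the bridge's secret-file reader, which also rejects empty files:

```rust
use std::fs;

/// Read a secret file and trim surrounding whitespace, failing on empty files.
/// A sketch of the behavior described above, not the bridge's exact code.
fn read_secret(path: &str) -> Result<String, String> {
    let contents = fs::read_to_string(path).map_err(|e| e.to_string())?;
    let trimmed = contents.trim();
    if trimmed.is_empty() {
        return Err(format!("Secret file {path} is empty"));
    }
    Ok(trimmed.to_string())
}

fn main() {
    // Demo with a throwaway file; a real deployment would read /run/secrets/matrix_as_token.
    let dir = std::env::temp_dir().join("potatomesh-secret-demo");
    fs::create_dir_all(&dir).unwrap();
    let path = dir.join("matrix_as_token");
    fs::write(&path, "  TOKEN123\n").unwrap();
    println!("{:?}", read_secret(path.to_str().unwrap()));
}
```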
### Container Defaults

Container detection checks `POTATOMESH_CONTAINER`, `CONTAINER`, and `/proc/1/cgroup`. When detected (or forced with `--container`), defaults shift to:

* Config path: `/app/Config.toml`
* State file: `/app/bridge_state.json`
* Secrets dir: `/run/secrets`
* Poll interval: 15 seconds (if not otherwise configured)

Set `POTATOMESH_CONTAINER=0` or `--no-container` to opt out of container defaults.
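The detection order above (explicit override first, then a non-empty `CONTAINER` hint, then known runtime names in `/proc/1/cgroup`) can be sketched like this; a simplified version of the bridge's `detect_container`:

```rust
/// Detect container mode: explicit override wins, then a non-empty env hint,
/// then known container runtimes mentioned in the cgroup file contents.
fn is_container(override_value: Option<bool>, env_hint: Option<&str>, cgroup: Option<&str>) -> bool {
    if let Some(v) = override_value {
        return v;
    }
    if env_hint.map_or(false, |h| !h.trim().is_empty()) {
        return true;
    }
    cgroup.map_or(false, |c| {
        let c = c.to_ascii_lowercase();
        ["docker", "kubepods", "containerd", "podman"]
            .iter()
            .any(|name| c.contains(name))
    })
}

fn main() {
    println!("{}", is_container(None, None, Some("0::/system.slice/docker-abc.scope")));
}
```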
### Docker Compose First Run

Before starting Compose, complete this preflight checklist:

1. Ensure `matrix/Config.toml` exists as a regular file on the host (not a directory).
2. Fill required Matrix values in `matrix/Config.toml`:
   - `matrix.as_token`
   - `matrix.hs_token`
   - `matrix.server_name`
   - `matrix.room_id`
   - `matrix.homeserver`

This is required because the shared Compose anchor `x-matrix-bridge-base` mounts `./matrix/Config.toml` to `/app/Config.toml`.
Then follow the token and namespace requirements in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example).

#### Troubleshooting

| Symptom | Likely cause | What to check |
| --- | --- | --- |
| `Is a directory (os error 21)` | Host mount source became a directory | `matrix/Config.toml` was missing at mount time and got created as a directory on host. |
| `M_UNKNOWN_TOKEN` / `401 Unauthorized` | Matrix appservice token mismatch | Verify `matrix.as_token` matches your appservice registration and setup in [Matrix Appservice Setup (Synapse example)](#matrix-appservice-setup-synapse-example). |

#### Recovery from accidental `Config.toml` directory creation

```bash
# from repo root
rm -rf matrix/Config.toml
touch matrix/Config.toml
# then edit matrix/Config.toml and set valid matrix.as_token, matrix.hs_token,
# matrix.server_name, matrix.room_id, and matrix.homeserver before starting compose
```

### PotatoMesh API

The bridge assumes:
@@ -232,7 +134,7 @@ A minimal example sketch (you **must** adjust URLs, secrets, namespaces):

```yaml
id: potatomesh-bridge
url: "http://your-bridge-host:41448"
url: "http://your-bridge-host:8080" # not used by this bridge if it only calls out
as_token: "YOUR_APPSERVICE_AS_TOKEN"
hs_token: "SECRET_HS_TOKEN"
sender_localpart: "potatomesh-bridge"
@@ -243,12 +145,10 @@ namespaces:
      regex: "@potato_[0-9a-f]{8}:example.org"
```

This bridge listens for Synapse appservice callbacks on port `41448` so it can log inbound transaction payloads. It still only forwards messages one way (PotatoMesh → Matrix), so inbound Matrix events are acknowledged but not bridged. The `as_token` and `namespaces.users` entries remain required for outbound calls, and the `url` should point at the listener.
For this bridge, only the `as_token` and `namespaces.users` actually matter. The bridge does not accept inbound events; it only uses the `as_token` to call the homeserver.

In Synapse’s `homeserver.yaml`, add the registration file under `app_service_config_files`, restart, and invite a puppet user to your target room (or use room ID directly).

The bridge validates inbound appservice callbacks by comparing the `access_token` query param to `hs_token` in `Config.toml`, so keep those values in sync.

---

## Build
@@ -278,11 +178,10 @@ Build the container from the repo root with the included `matrix/Dockerfile`:
docker build -f matrix/Dockerfile -t potatomesh-matrix-bridge .
```

Provide your config at `/app/Config.toml` (or use CLI/env/secret overrides) and persist the bridge state file by mounting volumes. Minimal example:
Provide your config at `/app/Config.toml` and persist the bridge state file by mounting volumes. Minimal example:

```bash
docker run --rm \
  -p 41448:41448 \
  -v bridge_state:/app \
  -v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
  potatomesh-matrix-bridge
@@ -292,13 +191,12 @@ If you prefer to isolate the state file from the config, mount it directly inste

```bash
docker run --rm \
  -p 41448:41448 \
  -v bridge_state:/app \
  -v "$(pwd)/matrix/Config.toml:/app/Config.toml:ro" \
  potatomesh-matrix-bridge
```

The image ships `Config.example.toml` for reference. If `/app/Config.toml` is absent, set the required values via environment variables, CLI flags, or secrets instead.
The image ships `Config.example.toml` for reference, but the bridge will exit if `/app/Config.toml` is not provided.

---

@@ -336,7 +234,7 @@ Delete `bridge_state.json` if you want it to replay all currently available mess

## Development

Run tests:
Run tests (currently mostly compile checks, no real tests yet):

```bash
cargo test
@@ -15,13 +15,6 @@

set -e

# Default to container-aware configuration paths unless explicitly overridden.
: "${POTATOMESH_CONTAINER:=1}"
: "${POTATOMESH_SECRETS_DIR:=/run/secrets}"

export POTATOMESH_CONTAINER
export POTATOMESH_SECRETS_DIR

# Default state file path from Config.toml unless overridden.
STATE_FILE="${STATE_FILE:-/app/bridge_state.json}"
STATE_DIR="$(dirname "$STATE_FILE")"
@@ -1,105 +0,0 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use clap::{ArgAction, Parser};

#[cfg(not(test))]
use crate::config::{ConfigInputs, ConfigOverrides};

/// CLI arguments for the Matrix bridge.
#[derive(Debug, Parser)]
#[command(
    name = "potatomesh-matrix-bridge",
    version,
    about = "PotatoMesh Matrix bridge"
)]
pub struct Cli {
    /// Path to the configuration TOML file.
    #[arg(long, value_name = "PATH")]
    pub config: Option<String>,
    /// Path to the bridge state file.
    #[arg(long, value_name = "PATH")]
    pub state_file: Option<String>,
    /// PotatoMesh base URL.
    #[arg(long, value_name = "URL")]
    pub potatomesh_base_url: Option<String>,
    /// Poll interval in seconds.
    #[arg(long, value_name = "SECS")]
    pub potatomesh_poll_interval_secs: Option<u64>,
    /// Matrix homeserver base URL.
    #[arg(long, value_name = "URL")]
    pub matrix_homeserver: Option<String>,
    /// Matrix appservice access token.
    #[arg(long, value_name = "TOKEN")]
    pub matrix_as_token: Option<String>,
    /// Path to a secret file containing the Matrix appservice access token.
    #[arg(long, value_name = "PATH")]
    pub matrix_as_token_file: Option<String>,
    /// Matrix homeserver token for inbound appservice requests.
    #[arg(long, value_name = "TOKEN")]
    pub matrix_hs_token: Option<String>,
    /// Path to a secret file containing the Matrix homeserver token.
    #[arg(long, value_name = "PATH")]
    pub matrix_hs_token_file: Option<String>,
    /// Matrix server name (domain).
    #[arg(long, value_name = "NAME")]
    pub matrix_server_name: Option<String>,
    /// Matrix room id to forward into.
    #[arg(long, value_name = "ROOM")]
    pub matrix_room_id: Option<String>,
    /// Force container defaults (overrides detection).
    #[arg(long, action = ArgAction::SetTrue)]
    pub container: bool,
    /// Disable container defaults (overrides detection).
    #[arg(long, action = ArgAction::SetTrue)]
    pub no_container: bool,
    /// Directory to search for default secret files.
    #[arg(long, value_name = "PATH")]
    pub secrets_dir: Option<String>,
}

impl Cli {
    /// Convert CLI args into configuration inputs.
    #[cfg(not(test))]
    pub fn to_inputs(&self) -> ConfigInputs {
        ConfigInputs {
            config_path: self.config.clone(),
            secrets_dir: self.secrets_dir.clone(),
            container_override: resolve_container_override(self.container, self.no_container),
            container_hint: None,
            overrides: ConfigOverrides {
                potatomesh_base_url: self.potatomesh_base_url.clone(),
                potatomesh_poll_interval_secs: self.potatomesh_poll_interval_secs,
                matrix_homeserver: self.matrix_homeserver.clone(),
                matrix_as_token: self.matrix_as_token.clone(),
                matrix_as_token_file: self.matrix_as_token_file.clone(),
                matrix_hs_token: self.matrix_hs_token.clone(),
                matrix_hs_token_file: self.matrix_hs_token_file.clone(),
                matrix_server_name: self.matrix_server_name.clone(),
                matrix_room_id: self.matrix_room_id.clone(),
                state_file: self.state_file.clone(),
            },
        }
    }
}

/// Resolve container override flags into an optional boolean.
#[cfg(not(test))]
fn resolve_container_override(container: bool, no_container: bool) -> Option<bool> {
    match (container, no_container) {
        (true, false) => Some(true),
        (false, true) => Some(false),
        _ => None,
    }
}

+20 -841
@@ -15,37 +15,25 @@
|
||||
use serde::Deserialize;
|
||||
use std::{fs, path::Path};
|
||||
|
||||
const DEFAULT_CONFIG_PATH: &str = "Config.toml";
|
||||
const CONTAINER_CONFIG_PATH: &str = "/app/Config.toml";
|
||||
const DEFAULT_STATE_FILE: &str = "bridge_state.json";
|
||||
const CONTAINER_STATE_FILE: &str = "/app/bridge_state.json";
|
||||
const DEFAULT_SECRETS_DIR: &str = "/run/secrets";
|
||||
const CONTAINER_POLL_INTERVAL_SECS: u64 = 15;
|
||||
|
||||
/// PotatoMesh API settings.
|
||||
#[derive(Debug, Deserialize, Clone)]
|
||||
pub struct PotatomeshConfig {
|
||||
pub base_url: String,
|
||||
pub poll_interval_secs: u64,
|
||||
}
|
||||
|
||||
/// Matrix appservice settings for the bridge.
|
||||
#[derive(Debug, Deserialize, Clone)]
|
||||
pub struct MatrixConfig {
|
||||
pub homeserver: String,
|
||||
pub as_token: String,
|
||||
pub hs_token: String,
|
||||
pub server_name: String,
|
||||
pub room_id: String,
|
||||
}
|
||||
|
||||
/// State file configuration for the bridge.
|
||||
#[derive(Debug, Deserialize, Clone)]
|
||||
pub struct StateConfig {
|
||||
pub state_file: String,
|
||||
}
|
||||
|
||||
/// Full configuration loaded for the bridge runtime.
|
||||
#[derive(Debug, Deserialize, Clone)]
|
||||
pub struct Config {
|
||||
pub potatomesh: PotatomeshConfig,
|
||||
@@ -53,447 +41,19 @@ pub struct Config {
|
||||
pub state: StateConfig,
|
||||
}
|
||||
|
||||
#[derive(Debug, Deserialize, Clone, Default)]
|
||||
struct PartialPotatomeshConfig {
|
||||
#[serde(default)]
|
||||
base_url: Option<String>,
|
||||
#[serde(default)]
|
||||
poll_interval_secs: Option<u64>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Deserialize, Clone, Default)]
|
||||
struct PartialMatrixConfig {
|
||||
#[serde(default)]
|
||||
homeserver: Option<String>,
|
||||
#[serde(default)]
|
||||
as_token: Option<String>,
|
||||
#[serde(default)]
|
||||
hs_token: Option<String>,
|
||||
#[serde(default)]
|
||||
server_name: Option<String>,
|
||||
#[serde(default)]
|
||||
room_id: Option<String>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Deserialize, Clone, Default)]
|
||||
struct PartialStateConfig {
|
||||
#[serde(default)]
|
||||
state_file: Option<String>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Deserialize, Clone, Default)]
|
||||
struct PartialConfig {
|
||||
#[serde(default)]
|
||||
potatomesh: PartialPotatomeshConfig,
|
||||
#[serde(default)]
|
||||
matrix: PartialMatrixConfig,
|
||||
#[serde(default)]
|
||||
state: PartialStateConfig,
|
||||
}
|
||||
|
||||
/// Overwrite an optional value when the incoming value is present.
|
||||
fn merge_option<T>(target: &mut Option<T>, incoming: Option<T>) {
|
||||
if incoming.is_some() {
|
||||
*target = incoming;
|
||||
}
|
||||
}
|
||||
|
||||
/// CLI or environment overrides for configuration fields.
|
||||
#[derive(Debug, Clone, Default)]
|
||||
pub struct ConfigOverrides {
|
||||
pub potatomesh_base_url: Option<String>,
|
||||
pub potatomesh_poll_interval_secs: Option<u64>,
|
||||
pub matrix_homeserver: Option<String>,
|
||||
pub matrix_as_token: Option<String>,
|
||||
pub matrix_as_token_file: Option<String>,
|
||||
pub matrix_hs_token: Option<String>,
|
||||
pub matrix_hs_token_file: Option<String>,
|
||||
pub matrix_server_name: Option<String>,
|
||||
pub matrix_room_id: Option<String>,
|
||||
pub state_file: Option<String>,
|
||||
}
|
||||
|
||||
impl ConfigOverrides {
|
||||
fn apply_non_token_overrides(&self, cfg: &mut PartialConfig) {
|
||||
merge_option(
|
||||
&mut cfg.potatomesh.base_url,
|
||||
self.potatomesh_base_url.clone(),
|
||||
);
|
||||
merge_option(
|
||||
&mut cfg.potatomesh.poll_interval_secs,
|
||||
self.potatomesh_poll_interval_secs,
|
||||
);
|
||||
merge_option(&mut cfg.matrix.homeserver, self.matrix_homeserver.clone());
|
||||
merge_option(&mut cfg.matrix.server_name, self.matrix_server_name.clone());
|
||||
merge_option(&mut cfg.matrix.room_id, self.matrix_room_id.clone());
|
||||
merge_option(&mut cfg.state.state_file, self.state_file.clone());
|
||||
}
|
||||
|
||||
fn merge(self, higher: ConfigOverrides) -> ConfigOverrides {
|
||||
let matrix_as_token = if higher.matrix_as_token_file.is_some() {
|
||||
higher.matrix_as_token
|
||||
} else {
|
||||
higher.matrix_as_token.or(self.matrix_as_token)
|
||||
};
|
||||
let matrix_hs_token = if higher.matrix_hs_token_file.is_some() {
|
||||
higher.matrix_hs_token
|
||||
} else {
|
||||
higher.matrix_hs_token.or(self.matrix_hs_token)
|
||||
};
|
||||
ConfigOverrides {
|
||||
potatomesh_base_url: higher.potatomesh_base_url.or(self.potatomesh_base_url),
|
||||
potatomesh_poll_interval_secs: higher
|
||||
.potatomesh_poll_interval_secs
|
||||
.or(self.potatomesh_poll_interval_secs),
|
||||
matrix_homeserver: higher.matrix_homeserver.or(self.matrix_homeserver),
|
||||
matrix_as_token,
|
||||
matrix_as_token_file: higher.matrix_as_token_file.or(self.matrix_as_token_file),
|
||||
matrix_hs_token,
|
||||
matrix_hs_token_file: higher.matrix_hs_token_file.or(self.matrix_hs_token_file),
|
||||
matrix_server_name: higher.matrix_server_name.or(self.matrix_server_name),
|
||||
matrix_room_id: higher.matrix_room_id.or(self.matrix_room_id),
|
||||
state_file: higher.state_file.or(self.state_file),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Inputs gathered from CLI flags or environment variables.
|
||||
#[derive(Debug, Clone, Default)]
|
||||
pub struct ConfigInputs {
|
||||
pub config_path: Option<String>,
|
||||
pub secrets_dir: Option<String>,
|
||||
pub container_override: Option<bool>,
|
||||
pub container_hint: Option<String>,
|
||||
pub overrides: ConfigOverrides,
|
||||
}
|
||||
|
||||
impl ConfigInputs {
|
||||
/// Merge two input sets, preferring values from `higher`.
|
||||
pub fn merge(self, higher: ConfigInputs) -> ConfigInputs {
|
||||
ConfigInputs {
|
||||
config_path: higher.config_path.or(self.config_path),
|
||||
secrets_dir: higher.secrets_dir.or(self.secrets_dir),
|
||||
container_override: higher.container_override.or(self.container_override),
|
||||
container_hint: higher.container_hint.or(self.container_hint),
|
||||
overrides: self.overrides.merge(higher.overrides),
|
||||
}
|
||||
}
|
||||
|
||||
/// Load configuration inputs from the process environment.
|
||||
#[cfg(not(test))]
|
||||
pub fn from_env() -> anyhow::Result<Self> {
|
||||
let overrides = ConfigOverrides {
|
||||
potatomesh_base_url: env_var("POTATOMESH_BASE_URL"),
|
||||
potatomesh_poll_interval_secs: parse_u64_env("POTATOMESH_POLL_INTERVAL_SECS")?,
|
||||
matrix_homeserver: env_var("MATRIX_HOMESERVER"),
|
||||
matrix_as_token: env_var("MATRIX_AS_TOKEN"),
|
||||
matrix_as_token_file: env_var("MATRIX_AS_TOKEN_FILE"),
|
||||
matrix_hs_token: env_var("MATRIX_HS_TOKEN"),
|
||||
matrix_hs_token_file: env_var("MATRIX_HS_TOKEN_FILE"),
|
||||
matrix_server_name: env_var("MATRIX_SERVER_NAME"),
|
||||
matrix_room_id: env_var("MATRIX_ROOM_ID"),
|
||||
state_file: env_var("STATE_FILE"),
|
||||
};
|
||||
Ok(ConfigInputs {
|
||||
config_path: env_var("POTATOMESH_CONFIG"),
|
||||
secrets_dir: env_var("POTATOMESH_SECRETS_DIR"),
|
||||
container_override: parse_bool_env("POTATOMESH_CONTAINER")?,
|
||||
container_hint: env_var("CONTAINER"),
|
||||
overrides,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
impl Config {
|
||||
/// Load a full Config from a TOML file.
|
||||
#[cfg(test)]
|
||||
pub fn load_from_file(path: &str) -> anyhow::Result<Self> {
|
||||
let contents = fs::read_to_string(path)?;
|
||||
let cfg = toml::from_str(&contents)?;
|
||||
Ok(cfg)
|
||||
}
|
||||
}
|
||||
|
||||
/// Load a Config by merging CLI/env overrides with an optional TOML file.
|
||||
#[cfg(not(test))]
|
||||
pub fn load(cli_inputs: ConfigInputs) -> anyhow::Result<Config> {
|
||||
let env_inputs = ConfigInputs::from_env()?;
|
||||
let cgroup_hint = read_cgroup();
|
||||
load_from_sources(cli_inputs, env_inputs, cgroup_hint.as_deref())
|
||||
}
|
||||
|
||||
/// Load configuration by merging CLI/env inputs and an optional config file.
|
||||
fn load_from_sources(
|
||||
cli_inputs: ConfigInputs,
|
||||
env_inputs: ConfigInputs,
|
||||
cgroup_hint: Option<&str>,
|
||||
) -> anyhow::Result<Config> {
|
||||
let merged_inputs = env_inputs.merge(cli_inputs);
|
||||
let container = detect_container(
|
||||
merged_inputs.container_override,
|
||||
merged_inputs.container_hint.as_deref(),
|
||||
cgroup_hint,
|
||||
);
|
||||
let defaults = default_paths(container);
|
||||
|
||||
let base_cfg = resolve_base_config(&merged_inputs, &defaults)?;
|
||||
let mut cfg = base_cfg.unwrap_or_default();
|
||||
merged_inputs.overrides.apply_non_token_overrides(&mut cfg);
|
||||
|
||||
let secrets_dir = resolve_secrets_dir(&merged_inputs, container, &defaults);
|
||||
let as_token = resolve_token(
|
||||
cfg.matrix.as_token.clone(),
|
||||
merged_inputs.overrides.matrix_as_token.clone(),
|
||||
merged_inputs.overrides.matrix_as_token_file.as_deref(),
|
||||
secrets_dir.as_deref(),
|
||||
"matrix_as_token",
|
||||
)?;
|
||||
let hs_token = resolve_token(
|
||||
cfg.matrix.hs_token.clone(),
|
||||
merged_inputs.overrides.matrix_hs_token.clone(),
|
||||
merged_inputs.overrides.matrix_hs_token_file.as_deref(),
|
||||
secrets_dir.as_deref(),
|
||||
"matrix_hs_token",
|
||||
)?;
|
||||
|
||||
if cfg.potatomesh.poll_interval_secs.is_none() && container {
|
||||
cfg.potatomesh.poll_interval_secs = Some(defaults.poll_interval_secs);
|
||||
}
|
||||
|
||||
if cfg.state.state_file.is_none() {
|
||||
cfg.state.state_file = Some(defaults.state_file);
|
||||
}
|
||||
|
||||
let missing = collect_missing_fields(&cfg, &as_token, &hs_token);
|
||||
if !missing.is_empty() {
|
||||
anyhow::bail!(
|
||||
"Missing required configuration values: {}",
|
||||
missing.join(", ")
|
||||
);
|
||||
}
|
||||
|
||||
Ok(Config {
|
||||
potatomesh: PotatomeshConfig {
|
||||
base_url: cfg.potatomesh.base_url.unwrap(),
|
||||
poll_interval_secs: cfg.potatomesh.poll_interval_secs.unwrap(),
|
||||
},
|
||||
matrix: MatrixConfig {
|
||||
homeserver: cfg.matrix.homeserver.unwrap(),
|
||||
as_token: as_token.unwrap(),
|
||||
hs_token: hs_token.unwrap(),
|
||||
server_name: cfg.matrix.server_name.unwrap(),
|
||||
room_id: cfg.matrix.room_id.unwrap(),
|
||||
},
|
||||
state: StateConfig {
|
||||
state_file: cfg.state.state_file.unwrap(),
|
||||
},
|
||||
})
|
||||
}
|
||||
|
||||
/// Collect the missing required field identifiers for error reporting.
|
||||
fn collect_missing_fields(
|
||||
cfg: &PartialConfig,
|
||||
as_token: &Option<String>,
|
||||
hs_token: &Option<String>,
|
||||
) -> Vec<&'static str> {
|
||||
let mut missing = Vec::new();
|
||||
if cfg.potatomesh.base_url.is_none() {
|
||||
missing.push("potatomesh.base_url");
|
||||
}
|
||||
if cfg.potatomesh.poll_interval_secs.is_none() {
|
||||
missing.push("potatomesh.poll_interval_secs");
|
||||
}
|
||||
if cfg.matrix.homeserver.is_none() {
|
||||
missing.push("matrix.homeserver");
|
||||
}
|
||||
if as_token.is_none() {
|
||||
missing.push("matrix.as_token");
|
||||
}
|
||||
if hs_token.is_none() {
|
||||
missing.push("matrix.hs_token");
|
||||
}
|
||||
if cfg.matrix.server_name.is_none() {
|
||||
missing.push("matrix.server_name");
|
||||
}
|
||||
if cfg.matrix.room_id.is_none() {
|
||||
missing.push("matrix.room_id");
|
||||
}
|
||||
if cfg.state.state_file.is_none() {
|
||||
missing.push("state.state_file");
|
||||
}
|
||||
missing
|
||||
}
|
||||
|
||||
/// Resolve the base TOML config file, honoring explicit config paths.
|
||||
fn resolve_base_config(
|
||||
inputs: &ConfigInputs,
|
||||
defaults: &DefaultPaths,
|
||||
) -> anyhow::Result<Option<PartialConfig>> {
|
||||
if let Some(path) = &inputs.config_path {
|
||||
return Ok(Some(load_partial_from_file(path)?));
|
||||
}
|
||||
let container_path = Path::new(&defaults.config_path);
|
||||
if container_path.exists() {
|
||||
return Ok(Some(load_partial_from_file(&defaults.config_path)?));
|
||||
}
|
||||
let host_path = Path::new(DEFAULT_CONFIG_PATH);
|
||||
if host_path.exists() {
|
||||
return Ok(Some(load_partial_from_file(DEFAULT_CONFIG_PATH)?));
|
||||
}
|
||||
Ok(None)
|
||||
}
|
||||
|
||||
/// Decide which secrets directory to use based on inputs and defaults.
|
||||
fn resolve_secrets_dir(
|
||||
inputs: &ConfigInputs,
|
||||
container: bool,
|
||||
defaults: &DefaultPaths,
|
||||
) -> Option<String> {
|
||||
if let Some(explicit) = inputs.secrets_dir.clone() {
|
||||
return Some(explicit);
|
||||
}
|
||||
if container {
|
||||
return Some(defaults.secrets_dir.clone());
|
||||
}
|
||||
None
|
||||
}
|
||||
|
||||
/// Resolve a token value from explicit values, secret files, or config file values.
|
||||
fn resolve_token(
|
||||
base_value: Option<String>,
|
||||
explicit_value: Option<String>,
|
||||
explicit_file: Option<&str>,
|
||||
secrets_dir: Option<&str>,
|
||||
default_secret_name: &str,
|
||||
) -> anyhow::Result<Option<String>> {
|
||||
if let Some(value) = explicit_value {
|
||||
return Ok(Some(value));
|
||||
}
|
||||
if let Some(path) = explicit_file {
|
||||
return Ok(Some(read_secret_file(path)?));
|
||||
}
|
||||
if let Some(dir) = secrets_dir {
|
||||
let default_path = Path::new(dir).join(default_secret_name);
|
||||
if default_path.exists() {
|
||||
return Ok(Some(read_secret_file(
|
||||
default_path
|
||||
.to_str()
|
||||
.ok_or_else(|| anyhow::anyhow!("Invalid secret file path"))?,
|
||||
)?));
|
||||
pub fn from_default_path() -> anyhow::Result<Self> {
|
||||
let path = "Config.toml";
|
||||
if !Path::new(path).exists() {
|
||||
anyhow::bail!("Config file {path} not found");
|
||||
}
|
||||
}
|
||||
Ok(base_value)
|
||||
}
|
||||
|
||||
/// Read and trim a secret file from disk.
|
||||
fn read_secret_file(path: &str) -> anyhow::Result<String> {
|
||||
let contents = fs::read_to_string(path)?;
|
||||
let trimmed = contents.trim();
|
||||
if trimmed.is_empty() {
|
||||
anyhow::bail!("Secret file {path} is empty");
|
||||
}
|
||||
Ok(trimmed.to_string())
|
||||
}
|
||||
|
||||
/// Load a partial config from a TOML file.
|
||||
fn load_partial_from_file(path: &str) -> anyhow::Result<PartialConfig> {
|
||||
let contents = fs::read_to_string(path)?;
|
||||
let cfg = toml::from_str(&contents)?;
|
||||
Ok(cfg)
|
||||
}
|
||||
|
||||
/// Compute default paths and intervals based on container mode.
|
||||
fn default_paths(container: bool) -> DefaultPaths {
|
||||
if container {
|
||||
DefaultPaths {
|
||||
config_path: CONTAINER_CONFIG_PATH.to_string(),
|
||||
state_file: CONTAINER_STATE_FILE.to_string(),
|
||||
secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
|
||||
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
|
||||
}
|
||||
} else {
|
||||
DefaultPaths {
|
||||
config_path: DEFAULT_CONFIG_PATH.to_string(),
|
||||
state_file: DEFAULT_STATE_FILE.to_string(),
|
||||
secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
|
||||
poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
struct DefaultPaths {
|
||||
config_path: String,
|
||||
state_file: String,
|
||||
secrets_dir: String,
|
||||
poll_interval_secs: u64,
|
||||
}
|
||||
|
||||
/// Detect whether the bridge is running inside a container.
|
||||
fn detect_container(
|
||||
override_value: Option<bool>,
|
||||
env_hint: Option<&str>,
|
||||
cgroup_hint: Option<&str>,
|
||||
) -> bool {
|
||||
if let Some(value) = override_value {
|
||||
return value;
|
||||
}
|
||||
if let Some(hint) = env_hint {
|
||||
if !hint.trim().is_empty() {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
if let Some(cgroup) = cgroup_hint {
|
||||
let haystack = cgroup.to_ascii_lowercase();
|
||||
return haystack.contains("docker")
|
||||
|| haystack.contains("kubepods")
|
||||
|| haystack.contains("containerd")
|
||||
|| haystack.contains("podman");
|
||||
}
|
||||
false
|
||||
}
|
||||
|
||||
/// Read the primary cgroup file for container detection.
|
||||
#[cfg(not(test))]
|
||||
fn read_cgroup() -> Option<String> {
|
||||
fs::read_to_string("/proc/1/cgroup").ok()
|
||||
}
|
||||
|
||||
/// Read and trim an environment variable value.
|
||||
#[cfg(not(test))]
|
||||
fn env_var(key: &str) -> Option<String> {
|
||||
std::env::var(key).ok().filter(|v| !v.trim().is_empty())
|
||||
}
|
||||
|
||||
/// Parse a u64 environment variable value.
|
||||
#[cfg(not(test))]
|
||||
fn parse_u64_env(key: &str) -> anyhow::Result<Option<u64>> {
|
||||
match env_var(key) {
|
||||
None => Ok(None),
|
||||
Some(value) => value
|
||||
.parse::<u64>()
|
||||
.map(Some)
|
||||
.map_err(|e| anyhow::anyhow!("Invalid {key} value: {e}")),
|
||||
}
|
||||
}
|
||||
|
||||
/// Parse a boolean environment variable value.
|
||||
#[cfg(not(test))]
|
||||
fn parse_bool_env(key: &str) -> anyhow::Result<Option<bool>> {
|
||||
match env_var(key) {
|
||||
None => Ok(None),
|
||||
Some(value) => parse_bool_value(key, &value).map(Some),
|
||||
}
|
||||
}
|
||||
|
||||
/// Parse a boolean string with standard truthy/falsy values.
|
||||
#[cfg(not(test))]
|
||||
fn parse_bool_value(key: &str, value: &str) -> anyhow::Result<bool> {
|
||||
let normalized = value.trim().to_ascii_lowercase();
|
||||
match normalized.as_str() {
|
||||
"1" | "true" | "yes" | "on" => Ok(true),
|
||||
"0" | "false" | "no" | "off" => Ok(false),
|
||||
_ => anyhow::bail!("Invalid {key} value: {value}"),
|
||||
Self::load_from_file(path)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -502,43 +62,6 @@ mod tests {
    use super::*;
    use serial_test::serial;
    use std::io::Write;
    use std::path::{Path, PathBuf};

    struct CwdGuard {
        original: PathBuf,
    }

    impl CwdGuard {
        /// Switch to the provided path and restore the original cwd on drop.
        fn enter(path: &Path) -> Self {
            let original = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("/"));
            std::env::set_current_dir(path).unwrap();
            Self { original }
        }
    }

    impl Drop for CwdGuard {
        fn drop(&mut self) {
            if std::env::set_current_dir(&self.original).is_err() {
                let _ = std::env::set_current_dir("/");
            }
        }
    }

    fn minimal_overrides() -> ConfigOverrides {
        ConfigOverrides {
            potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
            potatomesh_poll_interval_secs: Some(10),
            matrix_homeserver: Some("https://matrix.example.org".to_string()),
            matrix_as_token: Some("AS_TOKEN".to_string()),
            matrix_hs_token: Some("HS_TOKEN".to_string()),
            matrix_server_name: Some("example.org".to_string()),
            matrix_room_id: Some("!roomid:example.org".to_string()),
            state_file: Some("bridge_state.json".to_string()),
            matrix_as_token_file: None,
            matrix_hs_token_file: None,
        }
    }

    #[test]
    fn parse_minimal_config_from_toml_str() {
@@ -550,7 +73,6 @@ mod tests {
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"

@@ -564,7 +86,6 @@ mod tests {

        assert_eq!(cfg.matrix.homeserver, "https://matrix.example.org");
        assert_eq!(cfg.matrix.as_token, "AS_TOKEN");
        assert_eq!(cfg.matrix.hs_token, "HS_TOKEN");
        assert_eq!(cfg.matrix.server_name, "example.org");
        assert_eq!(cfg.matrix.room_id, "!roomid:example.org");

@@ -587,7 +108,6 @@ mod tests {
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
@@ -601,378 +121,37 @@ mod tests {
    }

    #[test]
    fn detect_container_prefers_override() {
        assert!(detect_container(Some(true), None, None));
        assert!(!detect_container(
            Some(false),
            Some("docker"),
            Some("docker")
        ));
    #[serial]
    fn from_default_path_not_found() {
        let tmp_dir = tempfile::tempdir().unwrap();
        std::env::set_current_dir(tmp_dir.path()).unwrap();
        let result = Config::from_default_path();
        assert!(result.is_err());
    }

    #[test]
    fn detect_container_from_hint_or_cgroup() {
        assert!(detect_container(None, Some("docker"), None));
        assert!(detect_container(None, None, Some("kubepods")));
        assert!(!detect_container(None, None, Some("")));
    }

    #[test]
    fn load_uses_cli_overrides_over_env() {
    #[serial]
    fn from_default_path_found() {
        let toml_str = r#"
[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 5
poll_interval_secs = 10

[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"

[state]
state_file = "bridge_state.json"
"#;
        let mut file = tempfile::NamedTempFile::new().unwrap();
        let tmp_dir = tempfile::tempdir().unwrap();
        let file_path = tmp_dir.path().join("Config.toml");
        let mut file = std::fs::File::create(file_path).unwrap();
        write!(file, "{}", toml_str).unwrap();

        let env_inputs = ConfigInputs {
            config_path: Some(file.path().to_str().unwrap().to_string()),
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://env.example/".to_string()),
                ..minimal_overrides()
            },
            ..ConfigInputs::default()
        };
        let cli_inputs = ConfigInputs {
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://cli.example/".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, env_inputs, None).unwrap();
        assert_eq!(cfg.potatomesh.base_url, "https://cli.example/");
    }
    #[test]
    #[serial]
    fn load_uses_container_secret_defaults() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        let secrets_dir = tmp_dir.path();
        fs::write(secrets_dir.join("matrix_as_token"), "FROM_SECRET").unwrap();

        let cli_inputs = ConfigInputs {
            secrets_dir: Some(secrets_dir.to_string_lossy().to_string()),
            container_override: Some(true),
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
                potatomesh_poll_interval_secs: Some(10),
                matrix_homeserver: Some("https://matrix.example.org".to_string()),
                matrix_hs_token: Some("HS_TOKEN".to_string()),
                matrix_server_name: Some("example.org".to_string()),
                matrix_room_id: Some("!roomid:example.org".to_string()),
                state_file: Some("bridge_state.json".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
        assert_eq!(cfg.matrix.as_token, "FROM_SECRET");
    }

    #[test]
    fn resolve_token_prefers_explicit_value() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let token_file = tmp_dir.path().join("token");
        fs::write(&token_file, "FROM_FILE").unwrap();

        let resolved = resolve_token(
            Some("FROM_BASE".to_string()),
            Some("FROM_EXPLICIT".to_string()),
            Some(token_file.to_str().unwrap()),
            Some(tmp_dir.path().to_str().unwrap()),
            "matrix_as_token",
        )
        .unwrap();

        assert_eq!(resolved, Some("FROM_EXPLICIT".to_string()));
    }

    #[test]
    fn resolve_token_reads_explicit_file() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let token_file = tmp_dir.path().join("token");
        fs::write(&token_file, "FROM_FILE").unwrap();

        let resolved = resolve_token(
            None,
            None,
            Some(token_file.to_str().unwrap()),
            None,
            "matrix_as_token",
        )
        .unwrap();

        assert_eq!(resolved, Some("FROM_FILE".to_string()));
    }

    #[test]
    fn resolve_token_reads_default_secret_file() {
        let tmp_dir = tempfile::tempdir().unwrap();
        fs::write(tmp_dir.path().join("matrix_hs_token"), "FROM_SECRET").unwrap();

        let resolved = resolve_token(
            None,
            None,
            None,
            Some(tmp_dir.path().to_str().unwrap()),
            "matrix_hs_token",
        )
        .unwrap();

        assert_eq!(resolved, Some("FROM_SECRET".to_string()));
    }

    #[test]
    fn resolve_token_errors_on_empty_secret_file() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let token_file = tmp_dir.path().join("token");
        fs::write(&token_file, " ").unwrap();

        let result = resolve_token(
            None,
            None,
            Some(token_file.to_str().unwrap()),
            None,
            "matrix_as_token",
        );

        assert!(result.is_err());
    }

    #[test]
    fn resolve_secrets_dir_prefers_explicit() {
        let defaults = DefaultPaths {
            config_path: "Config.toml".to_string(),
            state_file: DEFAULT_STATE_FILE.to_string(),
            secrets_dir: "default".to_string(),
            poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
        };
        let inputs = ConfigInputs {
            secrets_dir: Some("explicit".to_string()),
            ..ConfigInputs::default()
        };

        let resolved = resolve_secrets_dir(&inputs, true, &defaults);
        assert_eq!(resolved, Some("explicit".to_string()));
    }
    #[test]
    fn resolve_secrets_dir_container_default() {
        let defaults = DefaultPaths {
            config_path: "Config.toml".to_string(),
            state_file: DEFAULT_STATE_FILE.to_string(),
            secrets_dir: "default".to_string(),
            poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
        };
        let inputs = ConfigInputs::default();

        let resolved = resolve_secrets_dir(&inputs, true, &defaults);
        assert_eq!(resolved, Some("default".to_string()));
        assert_eq!(resolve_secrets_dir(&inputs, false, &defaults), None);
    }

    #[test]
    #[serial]
    fn resolve_base_config_prefers_explicit_path() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        let config_path = tmp_dir.path().join("explicit.toml");
        fs::write(
            &config_path,
            r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
        )
        .unwrap();

        let defaults = default_paths(false);
        let inputs = ConfigInputs {
            config_path: Some(config_path.to_string_lossy().to_string()),
            ..ConfigInputs::default()
        };

        let resolved = resolve_base_config(&inputs, &defaults).unwrap();
        assert!(resolved.is_some());
    }

    #[test]
    #[serial]
    fn resolve_base_config_uses_container_path_when_present() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        let config_path = tmp_dir.path().join("container.toml");
        fs::write(
            &config_path,
            r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
        )
        .unwrap();

        let defaults = DefaultPaths {
            config_path: config_path.to_string_lossy().to_string(),
            state_file: DEFAULT_STATE_FILE.to_string(),
            secrets_dir: DEFAULT_SECRETS_DIR.to_string(),
            poll_interval_secs: CONTAINER_POLL_INTERVAL_SECS,
        };

        let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
        assert!(resolved.is_some());
    }

    #[test]
    #[serial]
    fn resolve_base_config_uses_host_path_when_present() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        fs::write(
            "Config.toml",
            r#"[potatomesh]
base_url = "https://potatomesh.net/"
poll_interval_secs = 10
[matrix]
homeserver = "https://matrix.example.org"
as_token = "AS_TOKEN"
hs_token = "HS_TOKEN"
server_name = "example.org"
room_id = "!roomid:example.org"
[state]
state_file = "bridge_state.json"
"#,
        )
        .unwrap();

        let defaults = default_paths(false);
        let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
        assert!(resolved.is_some());
    }

    #[test]
    #[serial]
    fn resolve_base_config_returns_none_when_missing() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());
        let defaults = default_paths(false);
        let resolved = resolve_base_config(&ConfigInputs::default(), &defaults).unwrap();
        assert!(resolved.is_none());
    }
    #[test]
    #[serial]
    fn load_prefers_cli_token_file_over_env_value() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());

        let token_file = tmp_dir.path().join("as_token");
        fs::write(&token_file, "CLI_SECRET").unwrap();

        let env_inputs = ConfigInputs {
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
                potatomesh_poll_interval_secs: Some(10),
                matrix_homeserver: Some("https://matrix.example.org".to_string()),
                matrix_as_token: Some("ENV_TOKEN".to_string()),
                matrix_hs_token: Some("HS_TOKEN".to_string()),
                matrix_server_name: Some("example.org".to_string()),
                matrix_room_id: Some("!roomid:example.org".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };
        let cli_inputs = ConfigInputs {
            overrides: ConfigOverrides {
                matrix_as_token_file: Some(token_file.to_string_lossy().to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, env_inputs, None).unwrap();
        assert_eq!(cfg.matrix.as_token, "CLI_SECRET");
    }

    #[test]
    #[serial]
    fn load_uses_container_default_poll_interval() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());

        let cli_inputs = ConfigInputs {
            container_override: Some(true),
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
                matrix_homeserver: Some("https://matrix.example.org".to_string()),
                matrix_as_token: Some("AS_TOKEN".to_string()),
                matrix_hs_token: Some("HS_TOKEN".to_string()),
                matrix_server_name: Some("example.org".to_string()),
                matrix_room_id: Some("!roomid:example.org".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
        assert_eq!(
            cfg.potatomesh.poll_interval_secs,
            CONTAINER_POLL_INTERVAL_SECS
        );
    }

    #[test]
    #[serial]
    fn load_uses_default_state_path_when_missing() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let _guard = CwdGuard::enter(tmp_dir.path());

        let cli_inputs = ConfigInputs {
            overrides: ConfigOverrides {
                potatomesh_base_url: Some("https://potatomesh.net/".to_string()),
                potatomesh_poll_interval_secs: Some(10),
                matrix_homeserver: Some("https://matrix.example.org".to_string()),
                matrix_as_token: Some("AS_TOKEN".to_string()),
                matrix_hs_token: Some("HS_TOKEN".to_string()),
                matrix_server_name: Some("example.org".to_string()),
                matrix_room_id: Some("!roomid:example.org".to_string()),
                ..ConfigOverrides::default()
            },
            ..ConfigInputs::default()
        };

        let cfg = load_from_sources(cli_inputs, ConfigInputs::default(), None).unwrap();
        assert_eq!(cfg.state.state_file, DEFAULT_STATE_FILE);
        std::env::set_current_dir(tmp_dir.path()).unwrap();
        let result = Config::from_default_path();
        assert!(result.is_ok());
    }
}
+124 -427
@@ -12,42 +12,23 @@
// See the License for the specific language governing permissions and
// limitations under the License.

mod cli;
mod config;
mod matrix;
mod matrix_server;
mod potatomesh;

use std::{fs, net::SocketAddr, path::Path};
use std::{fs, path::Path};

use anyhow::Result;
#[cfg(not(test))]
use clap::Parser;
use tokio::time::Duration;
use tokio::time::{sleep, Duration};
use tracing::{error, info};

#[cfg(not(test))]
use crate::cli::Cli;
#[cfg(not(test))]
use crate::config::Config;
use crate::matrix::MatrixAppserviceClient;
use crate::matrix_server::run_synapse_listener;
use crate::potatomesh::{FetchParams, PotatoClient, PotatoMessage, PotatoNode};
#[cfg(not(test))]
use tokio::time::sleep;
use crate::potatomesh::{FetchParams, PotatoClient, PotatoMessage};

#[derive(Debug, serde::Serialize, serde::Deserialize, Default)]
pub struct BridgeState {
    /// Highest message id processed by the bridge.
    last_message_id: Option<u64>,
    /// Highest rx_time observed; used to build incremental fetch queries.
    #[serde(default)]
    last_rx_time: Option<u64>,
    /// Message ids seen at the current last_rx_time for de-duplication.
    #[serde(default)]
    last_rx_time_ids: Vec<u64>,
    /// Legacy checkpoint timestamp used before last_rx_time was added.
    #[serde(default, skip_serializing)]
    last_checked_at: Option<u64>,
}
@@ -57,15 +38,7 @@ impl BridgeState {
            return Ok(Self::default());
        }
        let data = fs::read_to_string(path)?;
        // Treat empty/whitespace-only files as a fresh state.
        if data.trim().is_empty() {
            return Ok(Self::default());
        }
        let mut s: Self = serde_json::from_str(&data)?;
        if s.last_rx_time.is_none() {
            s.last_rx_time = s.last_checked_at;
        }
        s.last_checked_at = None;
        let s: Self = serde_json::from_str(&data)?;
        Ok(s)
    }

@@ -76,32 +49,17 @@ impl BridgeState {
    }

    fn should_forward(&self, msg: &PotatoMessage) -> bool {
        match self.last_rx_time {
            None => match self.last_message_id {
                None => true,
                Some(last_id) => msg.id > last_id,
            },
            Some(last_ts) => {
                if msg.rx_time > last_ts {
                    true
                } else if msg.rx_time < last_ts {
                    false
                } else {
                    !self.last_rx_time_ids.contains(&msg.id)
                }
            }
        match self.last_message_id {
            None => true,
            Some(last) => msg.id > last,
        }
    }

    fn update_with(&mut self, msg: &PotatoMessage) {
        self.last_message_id = Some(msg.id);
        if self.last_rx_time.is_none() || Some(msg.rx_time) > self.last_rx_time {
            self.last_rx_time = Some(msg.rx_time);
            self.last_rx_time_ids = vec![msg.id];
        } else if Some(msg.rx_time) == self.last_rx_time && !self.last_rx_time_ids.contains(&msg.id)
        {
            self.last_rx_time_ids.push(msg.id);
        }
        self.last_message_id = Some(match self.last_message_id {
            None => msg.id,
            Some(last) => last.max(msg.id),
        });
    }
}
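The rx_time checkpoint logic in this hunk can be modeled in isolation. This is a sketch, not the bridge's actual types: `Checkpoint` and `Msg` are stand-ins that keep only the `last_rx_time`/`last_rx_time_ids` fields and the `id`/`rx_time` fields the dedup needs; the branch structure mirrors `should_forward` and `update_with` above.

```rust
// Stand-in for the rx_time portion of BridgeState (names assumed here).
#[derive(Default)]
struct Checkpoint {
    last_rx_time: Option<u64>,
    last_rx_time_ids: Vec<u64>,
}

// Stand-in for PotatoMessage, reduced to the two fields the filter uses.
struct Msg {
    id: u64,
    rx_time: u64,
}

impl Checkpoint {
    fn should_forward(&self, msg: &Msg) -> bool {
        match self.last_rx_time {
            None => true,
            Some(last_ts) if msg.rx_time > last_ts => true,
            Some(last_ts) if msg.rx_time < last_ts => false,
            // Same timestamp: forward only ids not seen at this rx_time yet.
            Some(_) => !self.last_rx_time_ids.contains(&msg.id),
        }
    }

    fn update_with(&mut self, msg: &Msg) {
        // A strictly newer rx_time resets the seen-id set.
        if self.last_rx_time.is_none() || Some(msg.rx_time) > self.last_rx_time {
            self.last_rx_time = Some(msg.rx_time);
            self.last_rx_time_ids = vec![msg.id];
        } else if Some(msg.rx_time) == self.last_rx_time
            && !self.last_rx_time_ids.contains(&msg.id)
        {
            self.last_rx_time_ids.push(msg.id);
        }
    }
}

fn main() {
    let mut cp = Checkpoint::default();
    cp.update_with(&Msg { id: 10, rx_time: 100 });
    // Same rx_time with a new id is still forwarded.
    assert!(cp.should_forward(&Msg { id: 9, rx_time: 100 }));
    // A duplicate id at the same rx_time is dropped.
    assert!(!cp.should_forward(&Msg { id: 10, rx_time: 100 }));
    // Anything older than the checkpoint is dropped regardless of id.
    assert!(!cp.should_forward(&Msg { id: 11, rx_time: 99 }));
}
```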
@@ -111,7 +69,7 @@ fn build_fetch_params(state: &BridgeState) -> FetchParams {
            limit: None,
            since: None,
        }
    } else if let Some(ts) = state.last_rx_time {
    } else if let Some(ts) = state.last_checked_at {
        FetchParams {
            limit: None,
            since: Some(ts),
@@ -124,29 +82,17 @@ fn build_fetch_params(state: &BridgeState) -> FetchParams {
    }
}

/// Persist the bridge state and log any write errors.
fn persist_state(state: &BridgeState, state_path: &str) {
    if let Err(e) = state.save(state_path) {
        error!("Error saving state: {:?}", e);
fn update_checkpoint(state: &mut BridgeState, delivered_all: bool, now_secs: u64) -> bool {
    if !delivered_all {
        return false;
    }
}

/// Emit an info log for the latest bridge state snapshot.
fn log_state_update(state: &BridgeState) {
    info!("Updated state: {:?}", state);
}

/// Emit a sanitized config log without sensitive tokens.
#[cfg(not(test))]
fn log_config(cfg: &Config) {
    info!(
        potatomesh_base_url = cfg.potatomesh.base_url.as_str(),
        matrix_homeserver = cfg.matrix.homeserver.as_str(),
        matrix_server_name = cfg.matrix.server_name.as_str(),
        matrix_room_id = cfg.matrix.room_id.as_str(),
        state_file = cfg.state.state_file.as_str(),
        "Loaded config"
    );
    if state.last_message_id.is_some() {
        state.last_checked_at = Some(now_secs);
        true
    } else {
        false
    }
}
async fn poll_once(
@@ -154,13 +100,16 @@ async fn poll_once(
    matrix: &MatrixAppserviceClient,
    state: &mut BridgeState,
    state_path: &str,
    now_secs: u64,
) {
    let params = build_fetch_params(state);

    match potato.fetch_messages(params).await {
        Ok(mut msgs) => {
            // sort by rx_time so we process by actual receipt time
            msgs.sort_by_key(|m| m.rx_time);
            // sort by id ascending so we process in order
            msgs.sort_by_key(|m| m.id);

            let mut delivered_all = true;

            for msg in &msgs {
                if !state.should_forward(msg) {
@@ -171,19 +120,27 @@ async fn poll_once(
                if let Some(port) = &msg.portnum {
                    if port != "TEXT_MESSAGE_APP" {
                        state.update_with(msg);
                        log_state_update(state);
                        persist_state(state, state_path);
                        continue;
                    }
                }

                if let Err(e) = handle_message(potato, matrix, state, msg).await {
                    error!("Error handling message {}: {:?}", msg.id, e);
                    delivered_all = false;
                    continue;
                }

                // persist after each processed message
                persist_state(state, state_path);
                if let Err(e) = state.save(state_path) {
                    error!("Error saving state: {:?}", e);
                }
            }

            // Only advance checkpoint after successful delivery and a known last_message_id.
            if update_checkpoint(state, delivered_all, now_secs) {
                if let Err(e) = state.save(state_path) {
                    error!("Error saving state: {:?}", e);
                }
            }
        }
        Err(e) => {
@@ -192,15 +149,6 @@ async fn poll_once(
        }
    }
}

fn spawn_synapse_listener(addr: SocketAddr, token: String) -> tokio::task::JoinHandle<()> {
    tokio::spawn(async move {
        if let Err(e) = run_synapse_listener(addr, token).await {
            error!("Synapse listener failed: {:?}", e);
        }
    })
}

#[cfg(not(test))]
#[tokio::main]
async fn main() -> Result<()> {
    // Logging: RUST_LOG=info,bridge=debug,reqwest=warn ...
@@ -212,9 +160,8 @@ async fn main() -> Result<()> {
        )
        .init();

    let cli = Cli::parse();
    let cfg = config::load(cli.to_inputs())?;
    log_config(&cfg);
    let cfg = Config::from_default_path()?;
    info!("Loaded config: {:?}", cfg);

    let http = reqwest::Client::builder().build()?;
    let potato = PotatoClient::new(http.clone(), cfg.potatomesh.clone());
@@ -222,10 +169,6 @@ async fn main() -> Result<()> {
    let matrix = MatrixAppserviceClient::new(http.clone(), cfg.matrix.clone());
    matrix.health_check().await?;

    let synapse_addr = SocketAddr::from(([0, 0, 0, 0], 41448));
    let synapse_token = cfg.matrix.hs_token.clone();
    let _synapse_handle = spawn_synapse_listener(synapse_addr, synapse_token);

    let state_path = &cfg.state.state_file;
    let mut state = BridgeState::load(state_path)?;
    info!("Loaded state: {:?}", state);
@@ -233,7 +176,12 @@ async fn main() -> Result<()> {
    let poll_interval = Duration::from_secs(cfg.potatomesh.poll_interval_secs);

    loop {
        poll_once(&potato, &matrix, &mut state, state_path).await;
        let now_secs = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap_or_default()
            .as_secs();

        poll_once(&potato, &matrix, &mut state, state_path, now_secs).await;

        sleep(poll_interval).await;
    }
@@ -251,77 +199,36 @@ async fn handle_message(

    // Ensure puppet exists & has display name
    matrix.ensure_user_registered(&localpart).await?;
    matrix.ensure_user_joined_room(&user_id).await?;
    let display_name = display_name_for_node(&node);
    matrix.set_display_name(&user_id, &display_name).await?;
    matrix.set_display_name(&user_id, &node.long_name).await?;

    // Format the bridged message
    let preset_short = modem_preset_short(&msg.modem_preset);
    let prefix = format!(
        "[{freq}][{preset_short}][{channel}]",
        freq = msg.lora_freq,
        preset_short = preset_short,
        channel = msg.channel_name,
    );
    let (body, formatted_body) = format_message_bodies(&prefix, &msg.text);

    matrix
        .send_formatted_message_as(&user_id, &body, &formatted_body)
        .await?;

    info!("Bridged message: {:?}", msg);
    state.update_with(msg);
    log_state_update(state);
    Ok(())
}

/// Build a compact modem preset label like "LF" for "LongFast".
fn modem_preset_short(preset: &str) -> String {
    let letters: String = preset
        .chars()
        .filter(|ch| ch.is_ascii_uppercase())
        .collect();
    if letters.is_empty() {
        preset.chars().take(2).collect()
    } else {
        letters
    }
}

/// Build plain text + HTML message bodies with inline-code metadata.
fn format_message_bodies(prefix: &str, text: &str) -> (String, String) {
    let body = format!("`{}` {}", prefix, text);
    let formatted_body = format!("<code>{}</code> {}", escape_html(prefix), escape_html(text));
    (body, formatted_body)
}

/// Build the Matrix display name from a node's long/short names.
fn display_name_for_node(node: &PotatoNode) -> String {
    match node
    let short = node
        .short_name
        .as_deref()
        .map(str::trim)
        .filter(|s| !s.is_empty())
    {
        Some(short) if short != node.long_name => format!("{} ({})", node.long_name, short),
        _ => node.long_name.clone(),
    }
}
        .clone()
        .unwrap_or_else(|| node.long_name.clone());

/// Minimal HTML escaping for Matrix formatted_body payloads.
fn escape_html(input: &str) -> String {
    let mut escaped = String::with_capacity(input.len());
    for ch in input.chars() {
        match ch {
            '&' => escaped.push_str("&amp;"),
            '<' => escaped.push_str("&lt;"),
            '>' => escaped.push_str("&gt;"),
            '"' => escaped.push_str("&quot;"),
            '\'' => escaped.push_str("&#39;"),
            _ => escaped.push(ch),
        }
    }
    escaped
    let body = format!(
        "[{short}] {text}\n({from_id} → {to_id}, {rssi}, {snr}, {chan}/{preset})",
        short = short,
        text = msg.text,
        from_id = msg.from_id,
        to_id = msg.to_id,
        rssi = msg
            .rssi
            .map(|v| format!("RSSI {v} dB"))
            .unwrap_or_else(|| "RSSI n/a".to_string()),
        snr = msg
            .snr
            .map(|v| format!("SNR {v} dB"))
            .unwrap_or_else(|| "SNR n/a".to_string()),
        chan = msg.channel_name,
        preset = msg.modem_preset,
    );

    matrix.send_text_message_as(&user_id, &body).await?;

    state.update_with(msg);
    Ok(())
}
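The two formatting helpers in this hunk are pure functions and easy to demonstrate standalone. A sketch mirroring their bodies as shown in the diff (the entity mapping in `escape_html` matches the hunk above):

```rust
/// Compact preset label: keep the ASCII uppercase letters ("LongFast" -> "LF"),
/// falling back to the first two characters when none are uppercase.
fn modem_preset_short(preset: &str) -> String {
    let letters: String = preset
        .chars()
        .filter(|ch| ch.is_ascii_uppercase())
        .collect();
    if letters.is_empty() {
        preset.chars().take(2).collect()
    } else {
        letters
    }
}

/// Minimal HTML escaping for the five characters the diff handles.
fn escape_html(input: &str) -> String {
    let mut escaped = String::with_capacity(input.len());
    for ch in input.chars() {
        match ch {
            '&' => escaped.push_str("&amp;"),
            '<' => escaped.push_str("&lt;"),
            '>' => escaped.push_str("&gt;"),
            '"' => escaped.push_str("&quot;"),
            '\'' => escaped.push_str("&#39;"),
            _ => escaped.push(ch),
        }
    }
    escaped
}

fn main() {
    assert_eq!(modem_preset_short("LongFast"), "LF");
    // No uppercase letters: fall back to the first two characters.
    assert_eq!(modem_preset_short("shortturbo"), "sh");
    assert_eq!(escape_html("a<b & 'c'"), "a&lt;b &amp; &#39;c&#39;");
}
```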
#[cfg(test)]
@@ -352,54 +259,6 @@ mod tests {
    }
}

    fn sample_node(short_name: Option<&str>, long_name: &str) -> PotatoNode {
        PotatoNode {
            node_id: "!abcd1234".to_string(),
            short_name: short_name.map(str::to_string),
            long_name: long_name.to_string(),
            role: None,
            hw_model: None,
            last_heard: None,
            first_heard: None,
            latitude: None,
            longitude: None,
            altitude: None,
        }
    }

    #[test]
    fn modem_preset_short_handles_camelcase() {
        assert_eq!(modem_preset_short("LongFast"), "LF");
        assert_eq!(modem_preset_short("MediumFast"), "MF");
    }

    #[test]
    fn format_message_bodies_escape_html() {
        let (body, formatted) = format_message_bodies("[868][LF]", "Hello <&>");
        assert_eq!(body, "`[868][LF]` Hello <&>");
        assert_eq!(formatted, "<code>[868][LF]</code> Hello &lt;&amp;&gt;");
    }

    #[test]
    fn escape_html_escapes_quotes() {
        assert_eq!(escape_html("a\"b'c"), "a&quot;b&#39;c");
    }

    #[test]
    fn display_name_for_node_includes_short_when_present() {
        let node = sample_node(Some("TN"), "Test Node");
        assert_eq!(display_name_for_node(&node), "Test Node (TN)");
    }

    #[test]
    fn display_name_for_node_ignores_empty_or_duplicate_short() {
        let empty_short = sample_node(Some(""), "Test Node");
        assert_eq!(display_name_for_node(&empty_short), "Test Node");

        let duplicate_short = sample_node(Some("Test Node"), "Test Node");
        assert_eq!(display_name_for_node(&duplicate_short), "Test Node");
    }

    #[test]
    fn bridge_state_initially_forwards_all() {
        let state = BridgeState::default();
@@ -409,72 +268,39 @@ mod tests {
    }

    #[test]
    fn bridge_state_tracks_latest_rx_time_and_skips_older() {
    fn bridge_state_tracks_highest_id_and_skips_older() {
        let mut state = BridgeState::default();
        let m1 = sample_msg(10);
        let m2 = sample_msg(20);
        let m3 = sample_msg(15);
        let m1 = PotatoMessage { rx_time: 10, ..m1 };
        let m2 = PotatoMessage { rx_time: 20, ..m2 };
        let m3 = PotatoMessage { rx_time: 15, ..m3 };

        // First message, should forward
        assert!(state.should_forward(&m1));
        state.update_with(&m1);
        assert_eq!(state.last_message_id, Some(10));
        assert_eq!(state.last_rx_time, Some(10));

        // Second message, higher id, should forward
        assert!(state.should_forward(&m2));
        state.update_with(&m2);
        assert_eq!(state.last_message_id, Some(20));
        assert_eq!(state.last_rx_time, Some(20));

        // Third message, lower than last, should NOT forward
        assert!(!state.should_forward(&m3));
        // state remains unchanged
        assert_eq!(state.last_message_id, Some(20));
        assert_eq!(state.last_rx_time, Some(20));
    }

    #[test]
    fn bridge_state_uses_legacy_id_filter_when_rx_time_missing() {
        let state = BridgeState {
            last_message_id: Some(10),
            last_rx_time: None,
            last_rx_time_ids: vec![],
    fn bridge_state_update_is_monotonic() {
        let mut state = BridgeState {
            last_message_id: Some(50),
            last_checked_at: None,
        };
        let older = sample_msg(9);
        let newer = sample_msg(11);
        let m = sample_msg(40);

        assert!(!state.should_forward(&older));
        assert!(state.should_forward(&newer));
    }

    #[test]
    fn bridge_state_dedupes_same_timestamp() {
        let mut state = BridgeState::default();
        let m1 = PotatoMessage {
            rx_time: 100,
            ..sample_msg(10)
        };
        let m2 = PotatoMessage {
            rx_time: 100,
            ..sample_msg(9)
        };
        let dup = PotatoMessage {
            rx_time: 100,
            ..sample_msg(10)
        };

        assert!(state.should_forward(&m1));
        state.update_with(&m1);
        assert!(state.should_forward(&m2));
        state.update_with(&m2);
        assert!(!state.should_forward(&dup));
        assert_eq!(state.last_rx_time, Some(100));
        assert_eq!(state.last_rx_time_ids, vec![10, 9]);
        state.update_with(&m); // id is lower than current
        // last_message_id must stay at 50
        assert_eq!(state.last_message_id, Some(50));
    }

    #[test]
@@ -485,17 +311,13 @@ mod tests {

        let state = BridgeState {
            last_message_id: Some(12345),
            last_rx_time: Some(99),
            last_rx_time_ids: vec![123],
            last_checked_at: Some(77),
            last_checked_at: Some(99),
        };
        state.save(path_str).unwrap();

        let loaded_state = BridgeState::load(path_str).unwrap();
        assert_eq!(loaded_state.last_message_id, Some(12345));
        assert_eq!(loaded_state.last_rx_time, Some(99));
        assert_eq!(loaded_state.last_rx_time_ids, vec![123]);
        assert_eq!(loaded_state.last_checked_at, None);
        assert_eq!(loaded_state.last_checked_at, Some(99));
    }

    #[test]
@@ -506,50 +328,50 @@ mod tests {

        let state = BridgeState::load(path_str).unwrap();
        assert_eq!(state.last_message_id, None);
        assert_eq!(state.last_rx_time, None);
        assert!(state.last_rx_time_ids.is_empty());
    }

    #[test]
    fn bridge_state_load_empty_file() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let file_path = tmp_dir.path().join("empty.json");
        let path_str = file_path.to_str().unwrap();

        fs::write(path_str, "").unwrap();

        let state = BridgeState::load(path_str).unwrap();
        assert_eq!(state.last_message_id, None);
        assert_eq!(state.last_rx_time, None);
        assert!(state.last_rx_time_ids.is_empty());
        assert_eq!(state.last_checked_at, None);
    }

    #[test]
    fn bridge_state_migrates_legacy_checkpoint() {
        let tmp_dir = tempfile::tempdir().unwrap();
        let file_path = tmp_dir.path().join("legacy_state.json");
        let path_str = file_path.to_str().unwrap();
    fn update_checkpoint_requires_last_message_id() {
        let mut state = BridgeState {
            last_message_id: None,
            last_checked_at: Some(10),
        };

        fs::write(
            path_str,
            r#"{"last_message_id":42,"last_checked_at":1710000000}"#,
        )
        .unwrap();
        let saved = update_checkpoint(&mut state, true, 123);
        assert!(!saved);
        assert_eq!(state.last_checked_at, Some(10));
    }

        let state = BridgeState::load(path_str).unwrap();
        assert_eq!(state.last_message_id, Some(42));
        assert_eq!(state.last_rx_time, Some(1_710_000_000));
        assert!(state.last_rx_time_ids.is_empty());
    #[test]
    fn update_checkpoint_skips_when_not_delivered() {
        let mut state = BridgeState {
            last_message_id: Some(5),
            last_checked_at: Some(10),
        };

        let saved = update_checkpoint(&mut state, false, 123);
        assert!(!saved);
        assert_eq!(state.last_checked_at, Some(10));
    }

    #[test]
    fn update_checkpoint_sets_when_safe() {
        let mut state = BridgeState {
            last_message_id: Some(5),
            last_checked_at: None,
        };

        let saved = update_checkpoint(&mut state, true, 123);
|
||||
assert!(saved);
|
||||
assert_eq!(state.last_checked_at, Some(123));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn fetch_params_respects_missing_last_message_id() {
|
||||
let state = BridgeState {
|
||||
last_message_id: None,
|
||||
last_rx_time: Some(123),
|
||||
last_rx_time_ids: vec![],
|
||||
last_checked_at: None,
|
||||
last_checked_at: Some(123),
|
||||
};
|
||||
|
||||
let params = build_fetch_params(&state);
|
||||
@@ -561,9 +383,7 @@ mod tests {
|
||||
fn fetch_params_uses_since_when_safe() {
|
||||
let state = BridgeState {
|
||||
last_message_id: Some(1),
|
||||
last_rx_time: Some(123),
|
||||
last_rx_time_ids: vec![],
|
||||
last_checked_at: None,
|
||||
last_checked_at: Some(123),
|
||||
};
|
||||
|
||||
let params = build_fetch_params(&state);
|
||||
@@ -575,8 +395,6 @@ mod tests {
|
||||
fn fetch_params_defaults_to_small_window() {
|
||||
let state = BridgeState {
|
||||
last_message_id: Some(1),
|
||||
last_rx_time: None,
|
||||
last_rx_time_ids: vec![],
|
||||
last_checked_at: None,
|
||||
};
|
||||
|
||||
@@ -585,59 +403,8 @@ mod tests {
|
||||
assert_eq!(params.since, None);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn log_state_update_emits_info() {
|
||||
let state = BridgeState::default();
|
||||
log_state_update(&state);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn persist_state_writes_file() {
|
||||
let tmp_dir = tempfile::tempdir().unwrap();
|
||||
let file_path = tmp_dir.path().join("state.json");
|
||||
let path_str = file_path.to_str().unwrap();
|
||||
|
||||
let state = BridgeState {
|
||||
last_message_id: Some(42),
|
||||
last_rx_time: Some(123),
|
||||
last_rx_time_ids: vec![42],
|
||||
last_checked_at: None,
|
||||
};
|
||||
|
||||
persist_state(&state, path_str);
|
||||
|
||||
let loaded = BridgeState::load(path_str).unwrap();
|
||||
assert_eq!(loaded.last_message_id, Some(42));
|
||||
}
|

#[test]
fn persist_state_logs_on_error() {
let tmp_dir = tempfile::tempdir().unwrap();
let dir_path = tmp_dir.path().to_str().unwrap();
let state = BridgeState::default();

// Writing to a directory path should trigger the error branch.
persist_state(&state, dir_path);
}

#[tokio::test]
async fn spawn_synapse_listener_starts_task() {
let addr = SocketAddr::from(([127, 0, 0, 1], 0));
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
tokio::time::sleep(Duration::from_millis(10)).await;
handle.abort();
}

#[tokio::test]
async fn spawn_synapse_listener_logs_error_on_bind_failure() {
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let handle = spawn_synapse_listener(addr, "HS_TOKEN".to_string());
let _ = handle.await;
}

#[tokio::test]
async fn poll_once_leaves_state_unchanged_without_messages() {
async fn poll_once_persists_checkpoint_without_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();
@@ -659,7 +426,6 @@ mod tests {
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
@@ -669,63 +435,18 @@ mod tests {

let mut state = BridgeState {
last_message_id: Some(1),
last_rx_time: Some(100),
last_rx_time_ids: vec![1],
last_checked_at: None,
};

poll_once(&potato, &matrix, &mut state, state_str).await;
poll_once(&potato, &matrix, &mut state, state_str, 123).await;

mock_msgs.assert();

// No new data means state remains unchanged and is not persisted.
assert_eq!(state.last_rx_time, Some(100));
assert_eq!(state.last_rx_time_ids, vec![1]);
assert!(!state_path.exists());
}

#[tokio::test]
async fn poll_once_persists_state_for_non_text_messages() {
let tmp_dir = tempfile::tempdir().unwrap();
let state_path = tmp_dir.path().join("state.json");
let state_str = state_path.to_str().unwrap();

let mut server = mockito::Server::new_async().await;
let mock_msgs = server
.mock("GET", "/api/messages")
.match_query(mockito::Matcher::Any)
.with_status(200)
.with_header("content-type", "application/json")
.with_body(
r#"[{"id":1,"rx_time":100,"rx_iso":"2025-11-27T00:00:00Z","from_id":"!abcd1234","to_id":"^all","channel":1,"portnum":"POSITION_APP","text":"","rssi":-100,"hop_limit":1,"lora_freq":868,"modem_preset":"MediumFast","channel_name":"TEST","snr":0.0,"node_id":"!abcd1234"}]"#,
)
.create();

let http_client = reqwest::Client::new();
let potatomesh_cfg = PotatomeshConfig {
base_url: server.url(),
poll_interval_secs: 1,
};
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};

let potato = PotatoClient::new(http_client.clone(), potatomesh_cfg);
let matrix = MatrixAppserviceClient::new(http_client, matrix_cfg);
let mut state = BridgeState::default();

poll_once(&potato, &matrix, &mut state, state_str).await;

mock_msgs.assert();
assert!(state_path.exists());
// Should have advanced checkpoint and saved it.
assert_eq!(state.last_checked_at, Some(123));
let loaded = BridgeState::load(state_str).unwrap();
assert_eq!(loaded.last_checked_at, Some(123));
assert_eq!(loaded.last_message_id, Some(1));
assert_eq!(loaded.last_rx_time, Some(100));
assert_eq!(loaded.last_rx_time_ids, vec![1]);
}

#[tokio::test]
@@ -739,7 +460,6 @@ mod tests {
let matrix_cfg = MatrixConfig {
homeserver: server.url(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
};
@@ -747,8 +467,6 @@ mod tests {
let node_id = "abcd1234";
let user_id = format!("@potato_{}:{}", node_id, matrix_cfg.server_name);
let encoded_user = urlencoding::encode(&user_id);
let room_id = matrix_cfg.room_id.clone();
let encoded_room = urlencoding::encode(&room_id);

let mock_get_node = server
.mock("GET", "/api/nodes/abcd1234")
@@ -759,18 +477,7 @@ mod tests {

let mock_register = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();

let mock_join = server
.mock(
"POST",
format!("/_matrix/client/v3/rooms/{}/join", encoded_room).as_str(),
)
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_query("kind=user&access_token=AS_TOKEN")
.with_status(200)
.create();

@@ -779,16 +486,14 @@ mod tests {
"PUT",
format!("/_matrix/client/v3/profile/{}/displayname", encoded_user).as_str(),
)
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"displayname": "Test Node (TN)"
})))
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.with_status(200)
.create();

let http_client = reqwest::Client::new();
let matrix_client = MatrixAppserviceClient::new(http_client.clone(), matrix_cfg);
let room_id = &matrix_client.cfg.room_id;
let encoded_room = urlencoding::encode(room_id);
let txn_id = matrix_client
.txn_counter
.load(std::sync::atomic::Ordering::SeqCst);
@@ -802,14 +507,7 @@ mod tests {
)
.as_str(),
)
.match_query(format!("user_id={}", encoded_user).as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"body": "`[868][MF][TEST]` Ping",
"format": "org.matrix.custom.html",
"formatted_body": "<code>[868][MF][TEST]</code> Ping",
})))
.match_query(format!("user_id={}&access_token=AS_TOKEN", encoded_user).as_str())
.with_status(200)
.create();

@@ -822,7 +520,6 @@ mod tests {
assert!(result.is_ok());
mock_get_node.assert();
mock_register.assert();
mock_join.assert();
mock_display_name.assert();
mock_send.assert();

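The checkpoint tests above pin down a small contract: `update_checkpoint` records the poll timestamp only when the send was delivered and a `last_message_id` already exists, and reports whether it advanced anything. The real function lives in the bridge crate; this is a hedged, inferred stand-in (struct and signature reduced to just the fields the tests exercise) that satisfies those assertions:

```rust
// Inferred stand-in for the bridge's checkpoint state; the real
// BridgeState carries more fields (last_rx_time, last_rx_time_ids).
#[derive(Default)]
struct BridgeState {
    last_message_id: Option<u64>,
    last_checked_at: Option<u64>,
}

// Returns true only when the checkpoint was actually advanced.
fn update_checkpoint(state: &mut BridgeState, delivered: bool, now: u64) -> bool {
    // Skip when nothing was delivered, or no message id has been seen yet.
    if !delivered || state.last_message_id.is_none() {
        return false;
    }
    state.last_checked_at = Some(now);
    true
}

fn main() {
    let mut state = BridgeState {
        last_message_id: Some(5),
        last_checked_at: None,
    };
    // Mirrors update_checkpoint_sets_when_safe.
    assert!(update_checkpoint(&mut state, true, 123));
    assert_eq!(state.last_checked_at, Some(123));

    // Mirrors update_checkpoint_requires_last_message_id.
    let mut missing = BridgeState::default();
    assert!(!update_checkpoint(&mut missing, true, 123));

    // Mirrors update_checkpoint_skips_when_not_delivered.
    assert!(!update_checkpoint(&mut state, false, 999));
    assert_eq!(state.last_checked_at, Some(123));
}
```

Keeping the "did we advance?" boolean separate from the state mutation is what lets `poll_once` decide whether to call `persist_state` at all.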
+77 -148
@@ -66,6 +66,10 @@ impl MatrixAppserviceClient {
format!("@{}:{}", localpart, self.cfg.server_name)
}

fn auth_query(&self) -> String {
format!("access_token={}", urlencoding::encode(&self.cfg.as_token))
}

/// Ensure the puppet user exists (register via appservice registration).
pub async fn ensure_user_registered(&self, localpart: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
@@ -76,8 +80,9 @@ impl MatrixAppserviceClient {
}

let url = format!(
"{}/_matrix/client/v3/register?kind=user",
self.cfg.homeserver
"{}/_matrix/client/v3/register?kind=user&{}",
self.cfg.homeserver,
self.auth_query()
);

let body = RegisterReq {
@@ -85,13 +90,7 @@ impl MatrixAppserviceClient {
username: localpart,
};

let resp = self
.http
.post(&url)
.bearer_auth(&self.cfg.as_token)
.json(&body)
.send()
.await?;
let resp = self.http.post(&url).json(&body).send().await?;
if resp.status().is_success() {
Ok(())
} else {
@@ -110,21 +109,18 @@ impl MatrixAppserviceClient {

let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}",
self.cfg.homeserver, encoded_user, encoded_user
"{}/_matrix/client/v3/profile/{}/displayname?user_id={}&{}",
self.cfg.homeserver,
encoded_user,
encoded_user,
self.auth_query()
);

let body = DisplayNameReq {
displayname: display_name,
};

let resp = self
.http
.put(&url)
.bearer_auth(&self.cfg.as_token)
.json(&body)
.send()
.await?;
let resp = self.http.put(&url).json(&body).send().await?;
if resp.status().is_success() {
Ok(())
} else {
@@ -138,53 +134,12 @@ impl MatrixAppserviceClient {
}
}

/// Ensure the puppet user is joined to the configured room.
pub async fn ensure_user_joined_room(&self, user_id: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
struct JoinReq {}

let encoded_room = urlencoding::encode(&self.cfg.room_id);
let encoded_user = urlencoding::encode(user_id);
let url = format!(
"{}/_matrix/client/v3/rooms/{}/join?user_id={}",
self.cfg.homeserver, encoded_room, encoded_user
);

let resp = self
.http
.post(&url)
.bearer_auth(&self.cfg.as_token)
.json(&JoinReq {})
.send()
.await?;
if resp.status().is_success() {
Ok(())
} else {
let status = resp.status();
let body_snip = resp.text().await.unwrap_or_default();
Err(anyhow::anyhow!(
"Matrix join failed for {} in {} with status {} ({})",
user_id,
self.cfg.room_id,
status,
body_snip
))
}
}

/// Send a text message with HTML formatting into the configured room as puppet user_id.
pub async fn send_formatted_message_as(
&self,
user_id: &str,
body_text: &str,
formatted_body: &str,
) -> anyhow::Result<()> {
/// Send a plain text message into the configured room as puppet user_id.
pub async fn send_text_message_as(&self, user_id: &str, body_text: &str) -> anyhow::Result<()> {
#[derive(Serialize)]
struct MsgContent<'a> {
msgtype: &'a str,
body: &'a str,
format: &'a str,
formatted_body: &'a str,
}

let txn_id = self.txn_counter.fetch_add(1, Ordering::SeqCst);
@@ -192,36 +147,35 @@ impl MatrixAppserviceClient {
let encoded_user = urlencoding::encode(user_id);

let url = format!(
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}",
self.cfg.homeserver, encoded_room, txn_id, encoded_user
"{}/_matrix/client/v3/rooms/{}/send/m.room.message/{}?user_id={}&{}",
self.cfg.homeserver,
encoded_room,
txn_id,
encoded_user,
self.auth_query()
);

let content = MsgContent {
msgtype: "m.text",
body: body_text,
format: "org.matrix.custom.html",
formatted_body,
};

let resp = self
.http
.put(&url)
.bearer_auth(&self.cfg.as_token)
.json(&content)
.send()
.await?;
let resp = self.http.put(&url).json(&content).send().await?;

if !resp.status().is_success() {
let status = resp.status();
// optional: pull a short body snippet for debugging
let body_snip = resp.text().await.unwrap_or_default();

// Log for observability
tracing::warn!(
"Failed to send formatted message as {}: status {}, body: {}",
"Failed to send message as {}: status {}, body: {}",
user_id,
status,
body_snip
);

// Propagate an error so callers know this message was NOT delivered
return Err(anyhow::anyhow!(
"Matrix send failed for {} with status {}",
user_id,
@@ -241,7 +195,6 @@ mod tests {
MatrixConfig {
homeserver: "https://matrix.example.org".to_string(),
as_token: "AS_TOKEN".to_string(),
hs_token: "HS_TOKEN".to_string(),
server_name: "example.org".to_string(),
room_id: "!roomid:example.org".to_string(),
}
@@ -302,6 +255,16 @@ mod tests {
assert!(result.is_err());
}

#[test]
fn auth_query_contains_access_token() {
let http = reqwest::Client::builder().build().unwrap();
let client = MatrixAppserviceClient::new(http, dummy_cfg());

let q = client.auth_query();
assert!(q.starts_with("access_token="));
assert!(q.contains("AS_TOKEN"));
}

#[test]
fn test_new_matrix_client() {
let http_client = reqwest::Client::new();
@@ -317,8 +280,7 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.match_query("kind=user&access_token=AS_TOKEN")
.with_status(200)
.create();

@@ -336,8 +298,7 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("POST", "/_matrix/client/v3/register")
.match_query("kind=user")
.match_header("authorization", "Bearer AS_TOKEN")
.match_query("kind=user&access_token=AS_TOKEN")
.with_status(400) // M_USER_IN_USE
.create();

@@ -355,13 +316,12 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}", encoded_user);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);

let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();

@@ -379,13 +339,12 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let encoded_user = urlencoding::encode(user_id);
let query = format!("user_id={}", encoded_user);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!("/_matrix/client/v3/profile/{}/displayname", encoded_user);

let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(500)
.create();

@@ -399,61 +358,7 @@ mod tests {
}

#[tokio::test]
async fn test_ensure_user_joined_room_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);

let mock = server
.mock("POST", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(200)
.create();

let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;

mock.assert();
assert!(result.is_ok());
}

#[tokio::test]
async fn test_ensure_user_joined_room_fail() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);
let query = format!("user_id={}", encoded_user);
let path = format!("/_matrix/client/v3/rooms/{}/join", encoded_room);

let mock = server
.mock("POST", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.with_status(403)
.create();

let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
let client = MatrixAppserviceClient::new(reqwest::Client::new(), cfg);
let result = client.ensure_user_joined_room(user_id).await;

mock.assert();
assert!(result.is_err());
}

#[tokio::test]
async fn test_send_formatted_message_as_success() {
async fn test_send_text_message_as_success() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
@@ -467,7 +372,7 @@ mod tests {
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
};
let txn_id = client.txn_counter.load(Ordering::SeqCst);
let query = format!("user_id={}", encoded_user);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
@@ -476,21 +381,45 @@ mod tests {
let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.match_header("authorization", "Bearer AS_TOKEN")
.match_body(mockito::Matcher::PartialJson(serde_json::json!({
"msgtype": "m.text",
"body": "`[meta]` hello",
"format": "org.matrix.custom.html",
"formatted_body": "<code>[meta]</code> hello",
})))
.with_status(200)
.create();

let result = client
.send_formatted_message_as(user_id, "`[meta]` hello", "<code>[meta]</code> hello")
.await;
let result = client.send_text_message_as(user_id, "hello").await;

mock.assert();
assert!(result.is_ok());
}

#[tokio::test]
async fn test_send_text_message_as_fail() {
let mut server = mockito::Server::new_async().await;
let user_id = "@test:example.org";
let room_id = "!roomid:example.org";
let encoded_user = urlencoding::encode(user_id);
let encoded_room = urlencoding::encode(room_id);

let client = {
let mut cfg = dummy_cfg();
cfg.homeserver = server.url();
cfg.room_id = room_id.to_string();
MatrixAppserviceClient::new(reqwest::Client::new(), cfg)
};
let txn_id = client.txn_counter.load(Ordering::SeqCst);
let query = format!("user_id={}&access_token=AS_TOKEN", encoded_user);
let path = format!(
"/_matrix/client/v3/rooms/{}/send/m.room.message/{}",
encoded_room, txn_id
);

let mock = server
.mock("PUT", path.as_str())
.match_query(query.as_str())
.with_status(500)
.create();

let result = client.send_text_message_as(user_id, "hello").await;

mock.assert();
assert!(result.is_err());
}
}

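The recurring change in the matrix.rs hunks above swaps per-request `bearer_auth` headers for an `access_token` query parameter built once by the new `auth_query` helper. A minimal standalone sketch of that URL construction follows; `percent_encode` is a hand-rolled stand-in for the `urlencoding` crate the real code uses, so the snippet has no dependencies:

```rust
// Percent-encode everything except RFC 3986 unreserved characters.
// Stand-in for urlencoding::encode, kept dependency-free for this sketch.
fn percent_encode(s: &str) -> String {
    s.bytes()
        .map(|b| match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                (b as char).to_string()
            }
            _ => format!("%{:02X}", b),
        })
        .collect()
}

// Mirrors MatrixAppserviceClient::auth_query from the diff above.
fn auth_query(as_token: &str) -> String {
    format!("access_token={}", percent_encode(as_token))
}

fn main() {
    // Same shape as the rewritten register URL in ensure_user_registered.
    let url = format!(
        "{}/_matrix/client/v3/register?kind=user&{}",
        "https://matrix.example.org",
        auth_query("AS_TOKEN")
    );
    assert_eq!(
        url,
        "https://matrix.example.org/_matrix/client/v3/register?kind=user&access_token=AS_TOKEN"
    );
}
```

Putting the token in the query string is why every mockito expectation in the tests changes from `match_header("authorization", …)` to `match_query(…&access_token=AS_TOKEN)`.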
@@ -1,289 +0,0 @@
|
||||
// Copyright © 2025-26 l5yth & contributors
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
use axum::{
|
||||
extract::{Path, Query, State},
|
||||
http::{header::AUTHORIZATION, HeaderMap, StatusCode},
|
||||
response::IntoResponse,
|
||||
routing::put,
|
||||
Json, Router,
|
||||
};
|
||||
use serde_json::Value;
|
||||
use std::net::SocketAddr;
|
||||
use tracing::info;
|
||||
|
||||
#[derive(Clone)]
|
||||
struct SynapseState {
|
||||
hs_token: String,
|
||||
}
|
||||
|
||||
#[derive(serde::Deserialize)]
|
||||
struct AuthQuery {
|
||||
access_token: Option<String>,
|
||||
}
|
||||
|
||||
/// Pull access tokens from supported auth headers.
|
||||
fn extract_access_token(headers: &HeaderMap) -> Option<String> {
|
||||
if let Some(value) = headers.get(AUTHORIZATION) {
|
||||
if let Ok(raw) = value.to_str() {
|
||||
if let Some(token) = raw.strip_prefix("Bearer ") {
|
||||
return Some(token.trim().to_string());
|
||||
}
|
||||
if let Some(token) = raw.strip_prefix("bearer ") {
|
||||
return Some(token.trim().to_string());
|
||||
}
|
||||
}
|
||||
}
|
||||
if let Some(value) = headers.get("x-access-token") {
|
||||
if let Ok(raw) = value.to_str() {
|
||||
return Some(raw.trim().to_string());
|
||||
}
|
||||
}
|
||||
None
|
||||
}
|
||||
|
||||
/// Compare tokens in constant time to avoid timing leakage.
|
||||
fn constant_time_eq(a: &str, b: &str) -> bool {
|
||||
let a_bytes = a.as_bytes();
|
||||
let b_bytes = b.as_bytes();
|
||||
let max_len = std::cmp::max(a_bytes.len(), b_bytes.len());
|
||||
let mut diff = (a_bytes.len() ^ b_bytes.len()) as u8;
|
||||
|
||||
for idx in 0..max_len {
|
||||
let left = *a_bytes.get(idx).unwrap_or(&0);
|
||||
let right = *b_bytes.get(idx).unwrap_or(&0);
|
||||
diff |= left ^ right;
|
||||
}
|
||||
|
||||
diff == 0
|
||||
}
|
||||
|
||||
/// Captures inbound Synapse transaction payloads for logging.
|
||||
#[derive(Debug)]
|
||||
struct SynapseResponse {
|
||||
txn_id: String,
|
||||
payload: Value,
|
||||
}
|
||||
|
||||
/// Build the router that handles Synapse appservice transactions.
|
||||
fn build_router(state: SynapseState) -> Router {
|
||||
Router::new()
|
||||
.route(
|
||||
"/_matrix/appservice/v1/transactions/:txn_id",
|
||||
put(handle_transaction),
|
||||
)
|
||||
.with_state(state)
|
||||
}
|
||||
|
||||
/// Handle inbound transaction callbacks from Synapse.
|
||||
async fn handle_transaction(
|
||||
Path(txn_id): Path<String>,
|
||||
State(state): State<SynapseState>,
|
||||
Query(auth): Query<AuthQuery>,
|
||||
headers: HeaderMap,
|
||||
Json(payload): Json<Value>,
|
||||
) -> impl IntoResponse {
|
||||
let header_token = extract_access_token(&headers);
|
||||
let token_matches = if let Some(token) = header_token.as_deref() {
|
||||
constant_time_eq(token, &state.hs_token)
|
||||
} else {
|
||||
auth.access_token
|
||||
.as_deref()
|
||||
.is_some_and(|token| constant_time_eq(token, &state.hs_token))
|
||||
};
|
||||
if !token_matches {
|
||||
return (StatusCode::UNAUTHORIZED, Json(serde_json::json!({})));
|
||||
}
|
||||
let response = SynapseResponse { txn_id, payload };
|
||||
info!(
|
||||
"Status response: SynapseResponse {{ txn_id: {}, payload: {:?} }}",
|
||||
response.txn_id, response.payload
|
||||
);
|
||||
(StatusCode::OK, Json(serde_json::json!({})))
|
||||
}
|
||||
|
||||
/// Listen for Synapse callbacks on the configured address.
|
||||
pub async fn run_synapse_listener(addr: SocketAddr, hs_token: String) -> anyhow::Result<()> {
|
||||
let app = build_router(SynapseState { hs_token });
|
||||
let listener = tokio::net::TcpListener::bind(addr).await?;
|
||||
info!("Synapse listener bound on {}", addr);
|
||||
axum::serve(listener, app).await?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use axum::body::Body;
|
||||
use axum::http::Request;
|
||||
use tokio::time::{sleep, Duration};
|
||||
use tower::ServiceExt;
|
||||
|
||||
#[tokio::test]
|
||||
async fn transactions_endpoint_accepts_payloads() {
|
||||
let app = build_router(SynapseState {
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
});
|
||||
let payload = serde_json::json!({
|
||||
"events": [],
|
||||
"txn_id": "123"
|
||||
});
|
||||
|
||||
let response = app
|
||||
.oneshot(
|
||||
Request::builder()
|
||||
.method("PUT")
|
||||
.uri("/_matrix/appservice/v1/transactions/123")
|
||||
.header("authorization", "Bearer HS_TOKEN")
|
||||
.header("content-type", "application/json")
|
||||
.body(Body::from(payload.to_string()))
|
||||
.unwrap(),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(response.status(), StatusCode::OK);
|
||||
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
|
||||
.await
|
||||
.unwrap();
|
||||
assert_eq!(body.as_ref(), b"{}");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn transactions_endpoint_rejects_missing_token() {
|
||||
let app = build_router(SynapseState {
|
||||
hs_token: "HS_TOKEN".to_string(),
|
||||
});
|
||||
let payload = serde_json::json!({
|
||||
"events": [],
|
||||
"txn_id": "123"
|
||||
});
|
||||
|
||||
let response = app
|
||||
.oneshot(
|
||||
Request::builder()
|
||||
.method("PUT")
|
||||
.uri("/_matrix/appservice/v1/transactions/123")
|
||||
.header("content-type", "application/json")
|
||||
.body(Body::from(payload.to_string()))
|
||||
.unwrap(),
|
||||
)
|
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
        let body = axum::body::to_bytes(response.into_body(), usize::MAX)
            .await
            .unwrap();
        assert_eq!(body.as_ref(), b"{}");
    }

    #[tokio::test]
    async fn transactions_endpoint_rejects_wrong_token() {
        let app = build_router(SynapseState {
            hs_token: "HS_TOKEN".to_string(),
        });
        let payload = serde_json::json!({
            "events": [],
            "txn_id": "123"
        });

        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/_matrix/appservice/v1/transactions/123")
                    .header("authorization", "Bearer NOPE")
                    .header("content-type", "application/json")
                    .body(Body::from(payload.to_string()))
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
        let body = axum::body::to_bytes(response.into_body(), usize::MAX)
            .await
            .unwrap();
        assert_eq!(body.as_ref(), b"{}");
    }

    #[tokio::test]
    async fn transactions_endpoint_accepts_legacy_query_token() {
        let app = build_router(SynapseState {
            hs_token: "HS_TOKEN".to_string(),
        });
        let payload = serde_json::json!({
            "events": [],
            "txn_id": "125"
        });

        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/_matrix/appservice/v1/transactions/125?access_token=HS_TOKEN")
                    .header("content-type", "application/json")
                    .body(Body::from(payload.to_string()))
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::OK);
    }

    #[tokio::test]
    async fn transactions_endpoint_accepts_x_access_token_header() {
        let app = build_router(SynapseState {
            hs_token: "HS_TOKEN".to_string(),
        });
        let payload = serde_json::json!({
            "events": [],
            "txn_id": "126"
        });

        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/_matrix/appservice/v1/transactions/126")
                    .header("x-access-token", "HS_TOKEN")
                    .header("content-type", "application/json")
                    .body(Body::from(payload.to_string()))
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::OK);
    }

    #[tokio::test]
    async fn run_synapse_listener_starts_and_can_abort() {
        let addr = SocketAddr::from(([127, 0, 0, 1], 0));
        let handle =
            tokio::spawn(async move { run_synapse_listener(addr, "HS_TOKEN".to_string()).await });
        sleep(Duration::from_millis(10)).await;
        handle.abort();
    }

    #[tokio::test]
    async fn run_synapse_listener_returns_error_on_bind_failure() {
        let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
        let addr = listener.local_addr().unwrap();
        let result = run_synapse_listener(addr, "HS_TOKEN".to_string()).await;
        assert!(result.is_err());
    }
}
Binary file not shown.
Before Width: | Height: | Size: 62 KiB
@@ -1,71 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require "base64"
require "meshtastic"
require "openssl"

channel_name = "BerlinMesh"

# === Inputs from your packet ===
cipher_b64 = "Q1R7tgI5yXzMXu/3"
psk_b64 = "Nmh7EooP2Tsc+7pvPwXLcEDDuYhk+fBo2GLnbA1Y1sg="
packet_id = 3_915_687_257
from_id = "!9e95cf60"
channel = 35

# === Decode key and ciphertext ===
key = Base64.decode64(psk_b64) # 32 bytes -> AES-256
ciphertext = Base64.decode64(cipher_b64)

# === Derive numeric node id from Meshtastic-style string ===
hex_str = from_id.sub(/^!/, "") # "9e95cf60"
from_node = hex_str.to_i(16)    # 0x9e95cf60

# === Build nonce exactly like Meshtastic CryptoEngine ===
# Little-endian 64-bit packet ID + little-endian 32-bit node ID + 4 zero bytes
nonce = [packet_id].pack("Q<")  # uint64, little-endian
nonce += [from_node].pack("L<") # uint32, little-endian
nonce += "\x00" * 4             # extraNonce == 0 for PSK channel msgs

raise "Nonce must be 16 bytes" unless nonce.bytesize == 16
raise "Key must be 32 bytes" unless key.bytesize == 32

# === AES-256-CTR decrypt ===
cipher = OpenSSL::Cipher.new("aes-256-ctr")
cipher.decrypt
cipher.key = key
cipher.iv = nonce

plaintext = cipher.update(ciphertext) + cipher.final

# At this point `plaintext` is the raw Meshtastic protobuf payload
data = Meshtastic::Data.decode(plaintext)
msg = data.payload.dup.force_encoding("UTF-8")
puts msg

# Derives the channel number from the channel name and PSK.
def channel_hash(name, psk_b64)
  name_bytes = name.b # UTF-8 bytes
  psk_bytes = Base64.decode64(psk_b64)

  hn = name_bytes.bytes.reduce(0) { |acc, b| acc ^ b } # XOR over name
  hp = psk_bytes.bytes.reduce(0) { |acc, b| acc ^ b }  # XOR over PSK

  (hn ^ hp) & 0xFF
end

channel_h = channel_hash(channel_name, psk_b64)
puts channel_h
puts channel == channel_h
@@ -1,491 +0,0 @@
hash,name
0,Mesh1
1,DEMO
1,Downlink1
1,NightNet
1,Sideband1
2,CommsNet
2,Mesh3
2,PulseNet
3,LightNet
3,Mesh2
3,WestStar
3,WolfMesh
4,Mesh5
4,OPERATIONS
4,Rescue1
4,SignalFire
5,Base2
5,DeltaNet
5,Mesh4
5,MeshMunich
6,Base1
7,MeshTest
7,Rescue2
7,ZuluMesh
8,CourierNet
8,Fire2
8,Grid2
8,LongFast
8,RescueTeam
9,AlphaNet
9,MeshGrid
10,TestBerlin
10,WaWi
11,Fire1
11,Grid1
12,FoxNet
12,MeshRuhr
12,RadioNet
13,Signal1
13,Zone1
14,BetaBerlin
14,Signal2
14,TangoNet
14,Zone2
15,BerlinMesh
15,LongSlow
15,MeshBerlin
15,Zone3
16,CQ
16,EchoMesh
16,Freq2
16,KiloMesh
16,Node2
16,PhoenixNet
16,Repeater2
17,FoxtrotNet
17,Node3
18,LoRa
19,Freq1
19,HarmonyNet
19,Node1
19,RavenNet
19,Repeater1
20,NomadNet
20,SENSOR
20,TEST
20,test
21,BravoNet
21,EastStar
21,MeshCollective
21,SunNet
22,Node4
22,Uplink1
23,EagleNet
23,MeshHessen
23,Node5
24,MediumSlow
24,Router1
25,Checkpoint1
25,HAMNet
26,Checkpoint2
26,GhostNet
27,HQ
27,Router2
31,DemoBerlin
31,FieldNet
31,MediumFast
32,Clinic
32,Convoy
32,Daylight
32,Town
33,Callisto
33,CQ1
33,Daybreak
33,Demo
33,East
33,LoRaMesh
33,Mist
34,CQ2
34,Freq
34,Gold
34,Link
34,Repeater
35,Aquila
35,Doctor
35,Echo
35,Kilo
35,Public
35,Wyvern
36,District
36,Hessen
36,Io
36,LoRaTest
36,Operations
36,Shadow
36,Unit
37,Campfire
37,City
37,Outsider
37,Sync
38,Beacon
38,Collective
38,Harbor
38,Lion
38,Meteor
39,Firebird
39,Fireteam
39,Quasar
39,Snow
39,Universe
39,Uplink
40,Checkpoint
40,Galaxy
40,Jaguar
40,Sunset
40,Zeta
41,Hinterland
41,HQ2
41,Main
41,Meshtastic
41,Router
41,Valley
41,Wander
41,Wolfpack
42,HQ1
42,Lizard
42,Packet
42,Sahara
42,Tunnel
43,Anaconda
43,Basalt
43,Blackout
43,Crow
43,Dusk
43,Falcon
43,Lima
43,Müggelberg
44,Arctic
44,Backup
44,Bronze
44,Corvus
44,Cosmos
44,LoRaBerlin
44,Neukölln
44,Safari
45,Breeze
45,Burrow
45,Gale
45,Saturn
46,Border
46,Nest
47,Borealis
47,Mars
47,Path
47,Ranger
48,Beat
48,Berg
48,Beta
48,Downlink
48,Hive
48,Rhythm
48,Saxony
48,Sideband
48,Wolf
49,Asteroid
49,Carbon
49,Mesh
50,Blizzard
50,Runner
51,Callsign
51,Carpet
51,Desert
51,Dragon
51,Friedrichshain
51,Help
51,Nebula
51,Safe
52,Amazon
52,Fireline
52,Haze
52,LoRaHessen
52,Platinum
52,Sensor
52,Test
52,Zulu
53,Nord
53,Rescue
53,Secure
53,Silver
54,Bear
54,Hospital
54,Munich
54,Python
54,Rain
54,Wind
54,Wolves
55,Base
55,Bolt
55,Hawk
55,Mirage
55,Nightwatch
55,Obsidian
55,Rock
55,Victor
55,West
56,Aurora
56,Dune
56,Iron
56,Lava
56,Nomads
57,Copper
57,Core
57,Spectrum
57,Summit
58,Colony
58,Fire
58,Ganymede
58,Grid
58,Kraken
58,Road
58,Solstice
58,Tundra
59,911
59,Forest
59,Pack
60,Berlin
60,Chat
60,Sierra
60,Signal
60,Wald
60,Zone
61,Alpine
61,Bridge
61,Camp
61,Dortmund
61,Frontier
61,Jungle
61,Peak
62,Burner
62,Dawn
62,Europa
62,Midnight
62,Nightshift
62,Prenzlauer
62,Safety
62,Sector
62,Wanderer
63,Distress
63,Kiez
63,Ruhr
63,Team
64,Epsilon
64,Field
64,Granite
64,Orbit
64,Trail
64,Whisper
65,Central
65,Cologne
65,Layer
65,Relay
65,Runners
65,Stone
65,Tempo
66,Polar
66,Woods
67,Highway
67,Kreuzberg
67,Leopard
67,Metro
67,Omega
67,Phantom
68,Hamburg
68,Hydra
68,Medic
68,Titan
69,Command
69,Control
69,Gamma
69,Ghost
69,Mercury
69,Oasis
70,Diamond
70,Ham
70,HAM
70,Leipzig
70,Paramedic
70,Savanna
71,Frankfurt
71,Gecko
71,Jupiter
71,Sensors
71,SENSORS
71,Sunrise
72,Chameleon
72,Eagle
72,Hilltop
72,Teufelsberg
73,Firefly
73,Steel
74,Bravo
74,Caravan
74,Ost
74,Süd
75,Emergency
75,EMERGENCY
75,Nomad
75,Watch
76,Alert
76,Bavaria
76,Fog
76,Harmony
76,Raven
77,Admin
77,ADMIN
77,Den
77,Ice
77,LoRaNet
77,North
77,SOS
77,Sos
77,Wanderers
78,Foxtrot
78,Med
78,Ops
79,Flock
79,Phoenix
79,PRIVATE
79,Private
79,Signals
79,Tiger
80,Commune
80,Freedom
80,Pluto
80,Snake
80,Squad
80,Stuttgart
81,Grassland
81,Tango
81,Union
82,Comet
82,Flash
82,Lightning
83,Cloud
83,Equinox
83,Firewatch
83,Fox
83,Radio
83,Shelter
84,Cheetah
84,General
84,Outpost
84,Volcano
85,Glacier
85,Storm
86,Alpha
86,Owl
86,Panther
86,Prairie
86,Thunder
87,Courier
87,Nexus
87,South
88,Ash
88,River
88,Syndicate
89,Amateur
89,Astro
89,Avalanche
89,Bonfire
89,Draco
89,Griffin
89,Nightfall
89,Shade
89,Venus
90,Charlie
90,Delta
90,Stratum
90,Viper
91,Bison
91,Tal
92,Network
92,Scout
93,Comms
93,Fluss
93,Group
93,Hub
93,Pulse
93,Smoke
94,Frost
94,Rover
94,Village
95,Cobra
95,Liberty
95,Ridge
97,DarkNet
97,NightshiftNet
97,Radio2
97,Shelter2
98,CampNet
98,Radio1
98,Shelter1
98,TangoMesh
99,BaseAlpha
99,BerlinNet
99,SouthStar
100,CourierMesh
100,Storm1
101,Courier2
101,GridNet
101,OpsCenter
102,Courier1
103,Storm2
104,HawkNet
105,BearNet
105,StarNet
107,emergency
107,ZuluNet
108,Comms1
108,DragonNet
108,Hub1
109,admin
109,NightMesh
110,MeshNet
111,BaseCharlie
111,Comms2
111,GridSouth
111,Hub2
111,MeshNetwork
111,WolfNet
112,Layer1
112,Relay1
112,ShortFast
113,OpsRoom
114,Layer3
114,MeshCologne
115,Layer2
115,Relay2
115,SOSBerlin
116,Command1
116,Control1
116,CrowNet
116,MeshFrankfurt
117,EmergencyBerlin
117,GridNorth
117,MeshLeipzig
117,PacketNet
119,Command2
119,Control2
119,MeshHamburg
120,NomadMesh
121,NorthStar
121,Watch2
122,CommandRoom
122,ControlRoom
122,SyncNet
122,Watch1
123,PacketRadio
123,ShadowNet
124,EchoNet
124,KiloNet
124,Med2
124,Ops2
125,FoxtrotMesh
125,RepeaterHub
126,MoonNet
127,BaseBravo
127,Med1
127,Ops1
127,WolfDen
@@ -1,736 +0,0 @@
{
  "59": [
    "911",
    "Forest",
    "Pack"
  ],
  "77": [
    "Admin",
    "ADMIN",
    "Den",
    "Ice",
    "LoRaNet",
    "North",
    "SOS",
    "Sos",
    "Wanderers"
  ],
  "109": [
    "admin",
    "NightMesh"
  ],
  "76": [
    "Alert",
    "Bavaria",
    "Fog",
    "Harmony",
    "Raven"
  ],
  "86": [
    "Alpha",
    "Owl",
    "Panther",
    "Prairie",
    "Thunder"
  ],
  "9": [
    "AlphaNet",
    "MeshGrid"
  ],
  "61": [
    "Alpine",
    "Bridge",
    "Camp",
    "Dortmund",
    "Frontier",
    "Jungle",
    "Peak"
  ],
  "89": [
    "Amateur",
    "Astro",
    "Avalanche",
    "Bonfire",
    "Draco",
    "Griffin",
    "Nightfall",
    "Shade",
    "Venus"
  ],
  "52": [
    "Amazon",
    "Fireline",
    "Haze",
    "LoRaHessen",
    "Platinum",
    "Sensor",
    "Test",
    "Zulu"
  ],
  "43": [
    "Anaconda",
    "Basalt",
    "Blackout",
    "Crow",
    "Dusk",
    "Falcon",
    "Lima",
    "Müggelberg"
  ],
  "35": [
    "Aquila",
    "Doctor",
    "Echo",
    "Kilo",
    "Public",
    "Wyvern"
  ],
  "44": [
    "Arctic",
    "Backup",
    "Bronze",
    "Corvus",
    "Cosmos",
    "LoRaBerlin",
    "Neukölln",
    "Safari"
  ],
  "88": [
    "Ash",
    "River",
    "Syndicate"
  ],
  "49": [
    "Asteroid",
    "Carbon",
    "Mesh"
  ],
  "56": [
    "Aurora",
    "Dune",
    "Iron",
    "Lava",
    "Nomads"
  ],
  "55": [
    "Base",
    "Bolt",
    "Hawk",
    "Mirage",
    "Nightwatch",
    "Obsidian",
    "Rock",
    "Victor",
    "West"
  ],
  "6": [
    "Base1"
  ],
  "5": [
    "Base2",
    "DeltaNet",
    "Mesh4",
    "MeshMunich"
  ],
  "99": [
    "BaseAlpha",
    "BerlinNet",
    "SouthStar"
  ],
  "127": [
    "BaseBravo",
    "Med1",
    "Ops1",
    "WolfDen"
  ],
  "111": [
    "BaseCharlie",
    "Comms2",
    "GridSouth",
    "Hub2",
    "MeshNetwork",
    "WolfNet"
  ],
  "38": [
    "Beacon",
    "Collective",
    "Harbor",
    "Lion",
    "Meteor"
  ],
  "54": [
    "Bear",
    "Hospital",
    "Munich",
    "Python",
    "Rain",
    "Wind",
    "Wolves"
  ],
  "105": [
    "BearNet",
    "StarNet"
  ],
  "48": [
    "Beat",
    "Berg",
    "Beta",
    "Downlink",
    "Hive",
    "Rhythm",
    "Saxony",
    "Sideband",
    "Wolf"
  ],
  "60": [
    "Berlin",
    "Chat",
    "Sierra",
    "Signal",
    "Wald",
    "Zone"
  ],
  "15": [
    "BerlinMesh",
    "LongSlow",
    "MeshBerlin",
    "Zone3"
  ],
  "14": [
    "BetaBerlin",
    "Signal2",
    "TangoNet",
    "Zone2"
  ],
  "91": [
    "Bison",
    "Tal"
  ],
  "50": [
    "Blizzard",
    "Runner"
  ],
  "46": [
    "Border",
    "Nest"
  ],
  "47": [
    "Borealis",
    "Mars",
    "Path",
    "Ranger"
  ],
  "74": [
    "Bravo",
    "Caravan",
    "Ost",
    "Süd"
  ],
  "21": [
    "BravoNet",
    "EastStar",
    "MeshCollective",
    "SunNet"
  ],
  "45": [
    "Breeze",
    "Burrow",
    "Gale",
    "Saturn"
  ],
  "62": [
    "Burner",
    "Dawn",
    "Europa",
    "Midnight",
    "Nightshift",
    "Prenzlauer",
    "Safety",
    "Sector",
    "Wanderer"
  ],
  "33": [
    "Callisto",
    "CQ1",
    "Daybreak",
    "Demo",
    "East",
    "LoRaMesh",
    "Mist"
  ],
  "51": [
    "Callsign",
    "Carpet",
    "Desert",
    "Dragon",
    "Friedrichshain",
    "Help",
    "Nebula",
    "Safe"
  ],
  "37": [
    "Campfire",
    "City",
    "Outsider",
    "Sync"
  ],
  "98": [
    "CampNet",
    "Radio1",
    "Shelter1",
    "TangoMesh"
  ],
  "65": [
    "Central",
    "Cologne",
    "Layer",
    "Relay",
    "Runners",
    "Stone",
    "Tempo"
  ],
  "72": [
    "Chameleon",
    "Eagle",
    "Hilltop",
    "Teufelsberg"
  ],
  "90": [
    "Charlie",
    "Delta",
    "Stratum",
    "Viper"
  ],
  "40": [
    "Checkpoint",
    "Galaxy",
    "Jaguar",
    "Sunset",
    "Zeta"
  ],
  "25": [
    "Checkpoint1",
    "HAMNet"
  ],
  "26": [
    "Checkpoint2",
    "GhostNet"
  ],
  "84": [
    "Cheetah",
    "General",
    "Outpost",
    "Volcano"
  ],
  "32": [
    "Clinic",
    "Convoy",
    "Daylight",
    "Town"
  ],
  "83": [
    "Cloud",
    "Equinox",
    "Firewatch",
    "Fox",
    "Radio",
    "Shelter"
  ],
  "95": [
    "Cobra",
    "Liberty",
    "Ridge"
  ],
  "58": [
    "Colony",
    "Fire",
    "Ganymede",
    "Grid",
    "Kraken",
    "Road",
    "Solstice",
    "Tundra"
  ],
  "82": [
    "Comet",
    "Flash",
    "Lightning"
  ],
  "69": [
    "Command",
    "Control",
    "Gamma",
    "Ghost",
    "Mercury",
    "Oasis"
  ],
  "116": [
    "Command1",
    "Control1",
    "CrowNet",
    "MeshFrankfurt"
  ],
  "119": [
    "Command2",
    "Control2",
    "MeshHamburg"
  ],
  "122": [
    "CommandRoom",
    "ControlRoom",
    "SyncNet",
    "Watch1"
  ],
  "93": [
    "Comms",
    "Fluss",
    "Group",
    "Hub",
    "Pulse",
    "Smoke"
  ],
  "108": [
    "Comms1",
    "DragonNet",
    "Hub1"
  ],
  "2": [
    "CommsNet",
    "Mesh3",
    "PulseNet"
  ],
  "80": [
    "Commune",
    "Freedom",
    "Pluto",
    "Snake",
    "Squad",
    "Stuttgart"
  ],
  "57": [
    "Copper",
    "Core",
    "Spectrum",
    "Summit"
  ],
  "87": [
    "Courier",
    "Nexus",
    "South"
  ],
  "102": [
    "Courier1"
  ],
  "101": [
    "Courier2",
    "GridNet",
    "OpsCenter"
  ],
  "100": [
    "CourierMesh",
    "Storm1"
  ],
  "8": [
    "CourierNet",
    "Fire2",
    "Grid2",
    "LongFast",
    "RescueTeam"
  ],
  "16": [
    "CQ",
    "EchoMesh",
    "Freq2",
    "KiloMesh",
    "Node2",
    "PhoenixNet",
    "Repeater2"
  ],
  "34": [
    "CQ2",
    "Freq",
    "Gold",
    "Link",
    "Repeater"
  ],
  "97": [
    "DarkNet",
    "NightshiftNet",
    "Radio2",
    "Shelter2"
  ],
  "1": [
    "DEMO",
    "Downlink1",
    "NightNet",
    "Sideband1"
  ],
  "31": [
    "DemoBerlin",
    "FieldNet",
    "MediumFast"
  ],
  "70": [
    "Diamond",
    "Ham",
    "HAM",
    "Leipzig",
    "Paramedic",
    "Savanna"
  ],
  "63": [
    "Distress",
    "Kiez",
    "Ruhr",
    "Team"
  ],
  "36": [
    "District",
    "Hessen",
    "Io",
    "LoRaTest",
    "Operations",
    "Shadow",
    "Unit"
  ],
  "23": [
    "EagleNet",
    "MeshHessen",
    "Node5"
  ],
  "124": [
    "EchoNet",
    "KiloNet",
    "Med2",
    "Ops2"
  ],
  "75": [
    "Emergency",
    "EMERGENCY",
    "Nomad",
    "Watch"
  ],
  "107": [
    "emergency",
    "ZuluNet"
  ],
  "117": [
    "EmergencyBerlin",
    "GridNorth",
    "MeshLeipzig",
    "PacketNet"
  ],
  "64": [
    "Epsilon",
    "Field",
    "Granite",
    "Orbit",
    "Trail",
    "Whisper"
  ],
  "11": [
    "Fire1",
    "Grid1"
  ],
  "39": [
    "Firebird",
    "Fireteam",
    "Quasar",
    "Snow",
    "Universe",
    "Uplink"
  ],
  "73": [
    "Firefly",
    "Steel"
  ],
  "79": [
    "Flock",
    "Phoenix",
    "PRIVATE",
    "Private",
    "Signals",
    "Tiger"
  ],
  "12": [
    "FoxNet",
    "MeshRuhr",
    "RadioNet"
  ],
  "78": [
    "Foxtrot",
    "Med",
    "Ops"
  ],
  "125": [
    "FoxtrotMesh",
    "RepeaterHub"
  ],
  "17": [
    "FoxtrotNet",
    "Node3"
  ],
  "71": [
    "Frankfurt",
    "Gecko",
    "Jupiter",
    "Sensors",
    "SENSORS",
    "Sunrise"
  ],
  "19": [
    "Freq1",
    "HarmonyNet",
    "Node1",
    "RavenNet",
    "Repeater1"
  ],
  "94": [
    "Frost",
    "Rover",
    "Village"
  ],
  "85": [
    "Glacier",
    "Storm"
  ],
  "81": [
    "Grassland",
    "Tango",
    "Union"
  ],
  "68": [
    "Hamburg",
    "Hydra",
    "Medic",
    "Titan"
  ],
  "104": [
    "HawkNet"
  ],
  "67": [
    "Highway",
    "Kreuzberg",
    "Leopard",
    "Metro",
    "Omega",
    "Phantom"
  ],
  "41": [
    "Hinterland",
    "HQ2",
    "Main",
    "Meshtastic",
    "Router",
    "Valley",
    "Wander",
    "Wolfpack"
  ],
  "27": [
    "HQ",
    "Router2"
  ],
  "42": [
    "HQ1",
    "Lizard",
    "Packet",
    "Sahara",
    "Tunnel"
  ],
  "112": [
    "Layer1",
    "Relay1",
    "ShortFast"
  ],
  "115": [
    "Layer2",
    "Relay2",
    "SOSBerlin"
  ],
  "114": [
    "Layer3",
    "MeshCologne"
  ],
  "3": [
    "LightNet",
    "Mesh2",
    "WestStar",
    "WolfMesh"
  ],
  "18": [
    "LoRa"
  ],
  "24": [
    "MediumSlow",
    "Router1"
  ],
  "0": [
    "Mesh1"
  ],
  "4": [
    "Mesh5",
    "OPERATIONS",
    "Rescue1",
    "SignalFire"
  ],
  "110": [
    "MeshNet"
  ],
  "7": [
    "MeshTest",
    "Rescue2",
    "ZuluMesh"
  ],
  "126": [
    "MoonNet"
  ],
  "92": [
    "Network",
    "Scout"
  ],
  "22": [
    "Node4",
    "Uplink1"
  ],
  "120": [
    "NomadMesh"
  ],
  "20": [
    "NomadNet",
    "SENSOR",
    "TEST",
    "test"
  ],
  "53": [
    "Nord",
    "Rescue",
    "Secure",
    "Silver"
  ],
  "121": [
    "NorthStar",
    "Watch2"
  ],
  "113": [
    "OpsRoom"
  ],
  "123": [
    "PacketRadio",
    "ShadowNet"
  ],
  "66": [
    "Polar",
    "Woods"
  ],
  "13": [
    "Signal1",
    "Zone1"
  ],
  "103": [
    "Storm2"
  ],
  "10": [
    "TestBerlin",
    "WaWi"
  ]
}
@@ -1,134 +0,0 @@
|
||||
#!/usr/bin/env ruby
|
||||
# frozen_string_literal: true
|
||||
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
require "base64"
|
||||
require "json"
|
||||
require "csv"
|
||||
|
||||
# --- CONFIG --------------------------------------------------------
|
||||
|
||||
# The PSK you want. Here: public mesh, "AQ==" (0x01).
|
||||
PSK_B64 = ENV.fetch("PSK_B64", "AQ==")
|
||||
|
||||
# 1000 potential channel candidate names for rainbow indices.
|
||||
CANDIDATE_NAMES = %w[
|
||||
911 Admin ADMIN admin Alert Alpha AlphaNet Alpine Amateur Amazon Anaconda Aquila Arctic Ash Asteroid Astro Aurora Avalanche Backup Basalt Base Base1 Base2 BaseAlpha BaseBravo BaseCharlie Bavaria Beacon Bear BearNet Beat Berg Berlin BerlinMesh BerlinNet Beta BetaBerlin Bison Blackout Blizzard Bolt Bonfire Border Borealis Bravo BravoNet Breeze Bridge Bronze Burner Burrow Callisto Callsign Camp Campfire CampNet Caravan Carbon Carpet Central Chameleon Charlie Chat Checkpoint Checkpoint1 Checkpoint2 Cheetah City Clinic Cloud Cobra Collective Cologne Colony Comet Command Command1 Command2 CommandRoom Comms Comms1 Comms2 CommsNet Commune Control Control1 Control2 ControlRoom Convoy Copper Core Corvus Cosmos Courier Courier1 Courier2 CourierMesh CourierNet CQ CQ1 CQ2 Crow CrowNet DarkNet Dawn Daybreak Daylight Delta DeltaNet Demo DEMO DemoBerlin Den Desert Diamond Distress District Doctor Dortmund Downlink Downlink1 Draco Dragon DragonNet Dune Dusk Eagle EagleNet East EastStar Echo EchoMesh EchoNet Emergency emergency EMERGENCY EmergencyBerlin Epsilon Equinox Europa Falcon Field FieldNet Fire Fire1 Fire2 Firebird Firefly Fireline Fireteam Firewatch Flash Flock Fluss Fog Forest Fox FoxNet Foxtrot FoxtrotMesh FoxtrotNet Frankfurt Freedom Freq Freq1 Freq2 Friedrichshain Frontier Frost Galaxy Gale Gamma Ganymede Gecko General Ghost GhostNet Glacier Gold Granite Grassland Grid Grid1 Grid2 GridNet GridNorth GridSouth Griffin Group Ham HAM Hamburg HAMNet Harbor Harmony HarmonyNet Hawk HawkNet Haze Help Hessen Highway Hilltop Hinterland Hive Hospital HQ HQ1 HQ2 Hub Hub1 Hub2 Hydra Ice Io Iron Jaguar Jungle Jupiter Kiez Kilo KiloMesh KiloNet Kraken Kreuzberg Lava Layer Layer1 Layer2 Layer3 Leipzig Leopard Liberty LightNet Lightning Lima Link Lion Lizard LongFast LongSlow LoRa LoRaBerlin LoRaHessen LoRaMesh LoRaNet LoRaTest Main Mars Med Med1 Med2 Medic MediumFast MediumSlow Mercury Mesh Mesh1 Mesh2 Mesh3 Mesh4 Mesh5 MeshBerlin MeshCollective MeshCologne MeshFrankfurt MeshGrid 
MeshHamburg MeshHessen MeshLeipzig MeshMunich MeshNet MeshNetwork MeshRuhr Meshtastic MeshTest Meteor Metro Midnight Mirage Mist MoonNet Munich Müggelberg Nebula Nest Network Neukölln Nexus Nightfall NightMesh NightNet Nightshift NightshiftNet Nightwatch Node1 Node2 Node3 Node4 Node5 Nomad NomadMesh NomadNet Nomads Nord North NorthStar Oasis Obsidian Omega Operations OPERATIONS Ops Ops1 Ops2 OpsCenter OpsRoom Orbit Ost Outpost Outsider Owl Pack Packet PacketNet PacketRadio Panther Paramedic Path Peak Phantom Phoenix PhoenixNet Platinum Pluto Polar Prairie Prenzlauer PRIVATE Private Public Pulse PulseNet Python Quasar Radio Radio1 Radio2 RadioNet Rain Ranger Raven RavenNet Relay Relay1 Relay2 Repeater Repeater1 Repeater2 RepeaterHub Rescue Rescue1 Rescue2 RescueTeam Rhythm Ridge River Road Rock Router Router1 Router2 Rover Ruhr Runner Runners Safari Safe Safety Sahara Saturn Savanna Saxony Scout Sector Secure Sensor SENSOR Sensors SENSORS Shade Shadow ShadowNet Shelter Shelter1 Shelter2 ShortFast Sideband Sideband1 Sierra Signal Signal1 Signal2 SignalFire Signals Silver Smoke Snake Snow Solstice SOS Sos SOSBerlin South SouthStar Spectrum Squad StarNet Steel Stone Storm Storm1 Storm2 Stratum Stuttgart Summit SunNet Sunrise Sunset Sync SyncNet Syndicate Süd Tal Tango TangoMesh TangoNet Team Tempo Test TEST test TestBerlin Teufelsberg Thunder Tiger Titan Town Trail Tundra Tunnel Union Unit Universe Uplink Uplink1 Valley Venus Victor Village Viper Volcano Wald Wander Wanderer Wanderers Watch Watch1 Watch2 WaWi West WestStar Whisper Wind Wolf WolfDen WolfMesh WolfNet Wolfpack Wolves Woods Wyvern Zeta Zone Zone1 Zone2 Zone3 Zulu ZuluMesh ZuluNet
|
||||
]
|
||||
|
||||
# Output filenames
|
||||
CSV_OUT = ENV.fetch("CSV_OUT", "rainbow.csv")
|
||||
JSON_OUT = ENV.fetch("JSON_OUT", "rainbow.json")
|
||||
|
||||
# --- HASH FUNCTION -------------------------------------------------
|
||||
|
||||
def xor_bytes(str_or_bytes)
|
||||
bytes = str_or_bytes.is_a?(String) ? str_or_bytes.bytes : str_or_bytes
|
||||
bytes.reduce(0) { |acc, b| (acc ^ b) & 0xFF }
|
||||
end
|
||||
|
||||
def expanded_key(psk_b64)
|
||||
raw = Base64.decode64(psk_b64 || "")
|
||||
|
||||
case raw.bytesize
|
||||
when 0
|
||||
# no encryption: length 0, xor = 0
|
||||
"".b
|
||||
when 1
|
||||
alias_index = raw.bytes.first
|
||||
alias_keys = {
|
||||
1 => [
|
||||
0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
|
||||
0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01,
|
||||
].pack("C*"),
|
||||
2 => [
|
||||
0x38, 0x4B, 0xBC, 0xC0, 0x1D, 0xC0, 0x22, 0xD1,
|
||||
        0x81, 0xBF, 0x36, 0xB8, 0x61, 0x21, 0xE1, 0xFB,
        0x96, 0xB7, 0x2E, 0x55, 0xBF, 0x74, 0x22, 0x7E,
        0x9D, 0x6A, 0xFB, 0x48, 0xD6, 0x4C, 0xB1, 0xA1,
      ].pack("C*"),
    }
    alias_keys.fetch(alias_index) { raise "Unknown PSK alias #{alias_index}" }
  when 2..15
    # pad to 16 (AES128)
    (raw.bytes + [0] * (16 - raw.bytesize)).pack("C*")
  when 16
    raw
  when 17..31
    # pad to 32 (AES256)
    (raw.bytes + [0] * (32 - raw.bytesize)).pack("C*")
  when 32
    raw
  else
    raise "PSK too long (#{raw.bytesize} bytes)"
  end
end

def channel_hash(name, psk_b64)
  effective_name = name.b
  key = expanded_key(psk_b64)

  h_name = xor_bytes(effective_name)
  h_key = xor_bytes(key)

  (h_name ^ h_key) & 0xFF
end

# --- BUILD RAINBOW TABLE -------------------------------------------

psk_b64 = PSK_B64
puts "Using PSK_B64=#{psk_b64.inspect}"

hash_to_names = Hash.new { |h, k| h[k] = [] }

CANDIDATE_NAMES.each do |name|
  h = channel_hash(name, psk_b64)
  hash_to_names[h] << name
end

# --- WRITE CSV (hash,name) -----------------------------------------

CSV.open(CSV_OUT, "w") do |csv|
  csv << %w[hash name]
  hash_to_names.keys.sort.each do |h|
    hash_to_names[h].each do |name|
      csv << [h, name]
    end
  end
end

puts "Wrote CSV rainbow table to #{CSV_OUT}"

# --- WRITE JSON ({hash: [names...]}) -------------------------------

json_hash = hash_to_names.transform_keys(&:to_s)
File.write(JSON_OUT, JSON.pretty_generate(json_hash))

puts "Wrote JSON rainbow table to #{JSON_OUT}"

# --- OPTIONAL: interactive query -----------------------------------

if ARGV.first == "query"
  target = Integer(ARGV[1] || raise("Usage: #{File.basename($0)} query <hash>"))
  names = hash_to_names[target]
  if names.empty?
    puts "No names for hash #{target}"
  else
    puts "Names for hash #{target}:"
    names.each { |n| puts " - #{n}" }
  end
else
  puts "Run again with: #{File.basename($0)} query <hash> # to inspect a specific hash"
end
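The `channel_hash` above boils down to two single-byte XOR folds combined and masked to 8 bits. A minimal standalone sketch of that folding (Python here; the 1-byte PSK is a hypothetical pre-expansion value, not the script's `PSK_B64`):

```python
from functools import reduce


def xor_fold(data: bytes) -> int:
    """XOR every byte of data into one byte, mirroring xor_bytes above."""
    return reduce(lambda acc, b: acc ^ b, data, 0)


name_fold = xor_fold(b"LongFast")   # folds to 0x0A
key_fold = xor_fold(bytes([0x01]))  # hypothetical 1-byte PSK, before expansion
print((name_fold ^ key_fold) & 0xFF)  # prints 11
```

Because the fold is a plain XOR, the resulting hash space is only 256 values, which is why a rainbow table over candidate names is feasible at all.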
@@ -1,183 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations

import base64
import io
import json
import sys

from meshtastic.protobuf import mesh_pb2
from meshtastic.protobuf import telemetry_pb2

from data.mesh_ingestor import decode_payload


def run_main_with_input(payload: dict) -> tuple[int, dict]:
    stdin = io.StringIO(json.dumps(payload))
    stdout = io.StringIO()
    original_stdin = sys.stdin
    original_stdout = sys.stdout
    try:
        sys.stdin = stdin
        sys.stdout = stdout
        status = decode_payload.main()
    finally:
        sys.stdin = original_stdin
        sys.stdout = original_stdout

    output = json.loads(stdout.getvalue() or "{}")
    return status, output


def test_decode_payload_position_success():
    position = mesh_pb2.Position()
    position.latitude_i = 525598720
    position.longitude_i = 136577024
    position.altitude = 11
    position.precision_bits = 13
    payload_b64 = base64.b64encode(position.SerializeToString()).decode("ascii")

    result = decode_payload._decode_payload(3, payload_b64)

    assert result["type"] == "POSITION_APP"
    assert result["payload"]["latitude_i"] == 525598720
    assert result["payload"]["longitude_i"] == 136577024
    assert result["payload"]["altitude"] == 11


def test_decode_payload_rejects_invalid_payload():
    result = decode_payload._decode_payload(3, "not-base64")

    assert result["error"].startswith("invalid-payload")
    assert "invalid-payload" in result["error"]


def test_decode_payload_rejects_unsupported_port():
    result = decode_payload._decode_payload(
        999, base64.b64encode(b"ok").decode("ascii")
    )

    assert result["error"] == "unsupported-port"
    assert result["portnum"] == 999


def test_main_handles_invalid_json():
    stdin = io.StringIO("nope")
    stdout = io.StringIO()
    original_stdin = sys.stdin
    original_stdout = sys.stdout
    try:
        sys.stdin = stdin
        sys.stdout = stdout
        status = decode_payload.main()
    finally:
        sys.stdin = original_stdin
        sys.stdout = original_stdout

    result = json.loads(stdout.getvalue())
    assert status == 1
    assert result["error"].startswith("invalid-json")


def test_main_requires_portnum():
    status, result = run_main_with_input(
        {"payload_b64": base64.b64encode(b"ok").decode("ascii")}
    )

    assert status == 1
    assert result["error"] == "missing-portnum"


def test_main_requires_integer_portnum():
    status, result = run_main_with_input(
        {"portnum": "3", "payload_b64": base64.b64encode(b"ok").decode("ascii")}
    )

    assert status == 1
    assert result["error"] == "missing-portnum"


def test_main_requires_payload():
    status, result = run_main_with_input({"portnum": 3})

    assert status == 1
    assert result["error"] == "missing-payload"


def test_main_requires_string_payload():
    status, result = run_main_with_input({"portnum": 3, "payload_b64": 123})

    assert status == 1
    assert result["error"] == "missing-payload"


def test_main_success_position_payload():
    position = mesh_pb2.Position()
    position.latitude_i = 525598720
    position.longitude_i = 136577024
    payload_b64 = base64.b64encode(position.SerializeToString()).decode("ascii")

    status, result = run_main_with_input({"portnum": 3, "payload_b64": payload_b64})

    assert status == 0
    assert result["type"] == "POSITION_APP"
    assert result["payload"]["latitude_i"] == 525598720


def test_decode_payload_handles_parse_failure():
    class BrokenMessage:
        def ParseFromString(self, _payload):
            raise ValueError("boom")

    decode_payload.PORTNUM_MAP[99] = ("BROKEN", BrokenMessage)
    payload_b64 = base64.b64encode(b"\x00").decode("ascii")

    result = decode_payload._decode_payload(99, payload_b64)

    assert result["error"].startswith("decode-failed")
    assert result["type"] == "BROKEN"
    decode_payload.PORTNUM_MAP.pop(99, None)


def test_main_entrypoint_executes():
    import runpy

    payload = {"portnum": 3, "payload_b64": base64.b64encode(b"").decode("ascii")}
    stdin = io.StringIO(json.dumps(payload))
    stdout = io.StringIO()
    original_stdin = sys.stdin
    original_stdout = sys.stdout
    try:
        sys.stdin = stdin
        sys.stdout = stdout
        try:
            runpy.run_module("data.mesh_ingestor.decode_payload", run_name="__main__")
        except SystemExit as exc:
            assert exc.code == 0
    finally:
        sys.stdin = original_stdin
        sys.stdout = original_stdout


def test_decode_payload_telemetry_success():
    telemetry = telemetry_pb2.Telemetry()
    telemetry.time = 123
    payload_b64 = base64.b64encode(telemetry.SerializeToString()).decode("ascii")

    result = decode_payload._decode_payload(67, payload_b64)

    assert result["type"] == "TELEMETRY_APP"
    assert result["payload"]["time"] == 123
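The deleted `run_main_with_input` helper above swaps `sys.stdin`/`sys.stdout` by hand. The same capture can be sketched with `contextlib` (`run_with_input` and `echo_main` are illustrative names, not part of the project):

```python
import contextlib
import io
import json
import sys


def run_with_input(main, payload: dict) -> tuple[int, dict]:
    # Feed the payload as JSON on stdin and capture whatever main() prints.
    stdout = io.StringIO()
    original_stdin = sys.stdin
    try:
        sys.stdin = io.StringIO(json.dumps(payload))
        with contextlib.redirect_stdout(stdout):
            status = main()
    finally:
        sys.stdin = original_stdin
    return status, json.loads(stdout.getvalue() or "{}")


def echo_main() -> int:
    # Stand-in for decode_payload.main(): echo the parsed request back.
    print(json.dumps(json.load(sys.stdin)))
    return 0


print(run_with_input(echo_main, {"portnum": 3}))  # (0, {'portnum': 3})
```

`contextlib.redirect_stdout` restores `sys.stdout` even when `main()` raises, which removes half of the manual try/finally bookkeeping.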
@@ -788,7 +788,6 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 868
    mesh.config.MODEM_PRESET = "MediumFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 123,
@@ -824,7 +823,6 @@ def test_store_packet_dict_posts_text_message(mesh_module, monkeypatch):
    assert payload["rssi"] == -70
    assert payload["reply_id"] is None
    assert payload["emoji"] is None
    assert payload["ingestor"] == "!f00dbabe"
    assert payload["lora_freq"] == 868
    assert payload["modem_preset"] == "MediumFast"
    assert priority == mesh._MESSAGE_POST_PRIORITY
@@ -881,7 +879,6 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 868
    mesh.config.MODEM_PRESET = "MediumFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 200498337,
@@ -949,7 +946,6 @@ def test_store_packet_dict_posts_position(mesh_module, monkeypatch):
    )
    assert payload["lora_freq"] == 868
    assert payload["modem_preset"] == "MediumFast"
    assert payload["ingestor"] == "!f00dbabe"
    assert payload["raw"]["time"] == 1_758_624_189


@@ -964,7 +960,6 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 868
    mesh.config.MODEM_PRESET = "MediumFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 2049886869,
@@ -1009,7 +1004,6 @@ def test_store_packet_dict_posts_neighborinfo(mesh_module, monkeypatch):
    assert neighbors[2]["neighbor_num"] == 0x0BAD_C0DE
    assert payload["lora_freq"] == 868
    assert payload["modem_preset"] == "MediumFast"
    assert payload["ingestor"] == "!f00dbabe"


def test_store_packet_dict_handles_nodeinfo_packet(mesh_module, monkeypatch):
@@ -2288,7 +2282,6 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 868
    mesh.config.MODEM_PRESET = "MediumFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 1_256_091_342,
@@ -2341,7 +2334,6 @@ def test_store_packet_dict_handles_telemetry_packet(mesh_module, monkeypatch):
    assert payload["current"] == pytest.approx(0.0715)
    assert payload["lora_freq"] == 868
    assert payload["modem_preset"] == "MediumFast"
    assert payload["ingestor"] == "!f00dbabe"


def test_store_packet_dict_handles_environment_telemetry(mesh_module, monkeypatch):
@@ -2485,7 +2477,6 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):

    mesh.config.LORA_FREQ = 915
    mesh.config.MODEM_PRESET = "LongFast"
    mesh.register_host_node_id("!f00dbabe")

    packet = {
        "id": 2_934_054_466,
@@ -2527,7 +2518,6 @@ def test_store_packet_dict_handles_traceroute_packet(mesh_module, monkeypatch):
    assert "elapsed_ms" in payload
    assert payload["lora_freq"] == 915
    assert payload["modem_preset"] == "LongFast"
    assert payload["ingestor"] == "!f00dbabe"


def test_traceroute_hop_normalization_supports_mappings(mesh_module, monkeypatch):

@@ -23,9 +23,6 @@ ENV BUNDLE_FORCE_RUBY_PLATFORM=true
# Install build dependencies and SQLite3
RUN apk add --no-cache \
    build-base \
    python3 \
    py3-pip \
    py3-virtualenv \
    sqlite-dev \
    linux-headers \
    pkgconfig
@@ -41,16 +38,11 @@ RUN bundle config set --local force_ruby_platform true && \
    bundle config set --local without 'development test' && \
    bundle install --jobs=4 --retry=3

# Install Meshtastic decoder dependencies in a dedicated venv
RUN python3 -m venv /opt/meshtastic-venv && \
    /opt/meshtastic-venv/bin/pip install --no-cache-dir meshtastic protobuf

# Production stage
FROM ruby:3.3-alpine AS production

# Install runtime dependencies
RUN apk add --no-cache \
    python3 \
    sqlite \
    tzdata \
    curl
@@ -64,7 +56,6 @@ WORKDIR /app

# Copy installed gems from builder stage
COPY --from=builder /usr/local/bundle /usr/local/bundle
COPY --from=builder /opt/meshtastic-venv /opt/meshtastic-venv

# Copy application code (excluding the Dockerfile which is not required at runtime)
COPY --chown=potatomesh:potatomesh web/app.rb ./
@@ -79,7 +70,6 @@ COPY --chown=potatomesh:potatomesh web/scripts ./scripts

# Copy SQL schema files from data directory
COPY --chown=potatomesh:potatomesh data/*.sql /data/
COPY --chown=potatomesh:potatomesh data/mesh_ingestor/decode_payload.py /app/data/mesh_ingestor/decode_payload.py

# Create data and configuration directories with correct ownership
RUN mkdir -p /app/.local/share/potato-mesh \
@@ -95,7 +85,6 @@ EXPOSE 41447
# Default environment variables (can be overridden by host)
ENV RACK_ENV=production \
    APP_ENV=production \
    MESHTASTIC_PYTHON=/opt/meshtastic-venv/bin/python \
    XDG_DATA_HOME=/app/.local/share \
    XDG_CONFIG_HOME=/app/.config \
    SITE_NAME="PotatoMesh Demo" \

@@ -49,12 +49,6 @@ require_relative "application/worker_pool"
require_relative "application/federation"
require_relative "application/prometheus"
require_relative "application/queries"
require_relative "application/meshtastic/channel_names"
require_relative "application/meshtastic/channel_hash"
require_relative "application/meshtastic/protobuf"
require_relative "application/meshtastic/rainbow_table"
require_relative "application/meshtastic/cipher"
require_relative "application/meshtastic/payload_decoder"
require_relative "application/data_processing"
require_relative "application/filesystem"
require_relative "application/instances"
@@ -139,10 +133,7 @@ module PotatoMesh
    set :public_folder, File.expand_path("../../public", __dir__)
    set :views, File.expand_path("../../views", __dir__)
    set :federation_thread, nil
    set :initial_federation_thread, nil
    set :federation_worker_pool, nil
    set :federation_shutdown_requested, false
    set :federation_shutdown_hook_installed, false
    set :port, resolve_port
    set :bind, DEFAULT_BIND_ADDRESS


@@ -160,15 +160,7 @@ module PotatoMesh
      inserted
    end

    def touch_node_last_seen(
      db,
      node_ref,
      fallback_num = nil,
      rx_time: nil,
      source: nil,
      lora_freq: nil,
      modem_preset: nil
    )
    def touch_node_last_seen(db, node_ref, fallback_num = nil, rx_time: nil, source: nil)
      timestamp = coerce_integer(rx_time)
      return unless timestamp

@@ -193,19 +185,15 @@ module PotatoMesh
      return if broadcast_node_ref?(node_id, fallback_num)
      return unless node_id

      lora_freq = coerce_integer(lora_freq)
      modem_preset = string_or_nil(modem_preset)
      updated = false
      with_busy_retry do
        db.execute <<~SQL, [timestamp, timestamp, timestamp, lora_freq, modem_preset, node_id]
        db.execute <<~SQL, [timestamp, timestamp, timestamp, node_id]
          UPDATE nodes
          SET last_heard = CASE
                WHEN COALESCE(last_heard, 0) >= ? THEN last_heard
                ELSE ?
              END,
              first_heard = COALESCE(first_heard, ?),
              lora_freq = COALESCE(?, lora_freq),
              modem_preset = COALESCE(?, modem_preset)
              first_heard = COALESCE(first_heard, ?)
          WHERE node_id = ?
        SQL
        updated ||= db.changes.positive?
@@ -218,8 +206,6 @@ module PotatoMesh
        node_id: node_id,
        timestamp: timestamp,
        source: source || :unknown,
        lora_freq: lora_freq,
        modem_preset: modem_preset,
      )
    end

@@ -504,37 +490,20 @@ module PotatoMesh
      rx_iso ||= Time.at(rx_time).utc.iso8601

      raw_node_id = payload["node_id"] || payload["from_id"] || payload["from"]
      node_id = string_or_nil(raw_node_id)
      node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
      raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])
      node_id ||= format("!%08x", raw_node_num & 0xFFFFFFFF) if node_id.nil? && raw_node_num

      canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
      if canonical_parts
        node_id, node_num, = canonical_parts
      else
        node_id = string_or_nil(raw_node_id)
        node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
        node_id ||= format("!%08x", raw_node_num & 0xFFFFFFFF) if node_id.nil? && raw_node_num

        payload_for_num = payload.is_a?(Hash) ? payload.dup : {}
        payload_for_num["num"] ||= raw_node_num if raw_node_num
        node_num = resolve_node_num(node_id, payload_for_num)
        node_num ||= raw_node_num
        canonical = normalize_node_id(db, node_id || node_num)
        node_id = canonical if canonical
      end

      lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
      modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
      payload_for_num = payload.is_a?(Hash) ? payload.dup : {}
      payload_for_num["num"] ||= raw_node_num if raw_node_num
      node_num = resolve_node_num(node_id, payload_for_num)
      node_num ||= raw_node_num
      canonical = normalize_node_id(db, node_id || node_num)
      node_id = canonical if canonical

      ensure_unknown_node(db, node_id || node_num, node_num, heard_time: rx_time)
      touch_node_last_seen(
        db,
        node_id || node_num,
        node_num,
        rx_time: rx_time,
        source: :position,
        lora_freq: lora_freq,
        modem_preset: modem_preset,
      )
      touch_node_last_seen(db, node_id || node_num, node_num, rx_time: rx_time, source: :position)

      to_id = string_or_nil(payload["to_id"] || payload["to"])

@@ -616,7 +585,6 @@ module PotatoMesh

      payload_b64 = string_or_nil(payload["payload_b64"] || payload["payload"])
      payload_b64 ||= string_or_nil(position_section.dig("payload", "__bytes_b64__"))
      ingestor = string_or_nil(payload["ingestor"])

      row = [
        pos_id,
@@ -640,14 +608,13 @@ module PotatoMesh
        hop_limit,
        bitfield,
        payload_b64,
        ingestor,
      ]

      with_busy_retry do
        db.execute <<~SQL, row
          INSERT INTO positions(id,node_id,node_num,rx_time,rx_iso,position_time,to_id,latitude,longitude,altitude,location_source,
                                precision_bits,sats_in_view,pdop,ground_speed,ground_track,snr,rssi,hop_limit,bitfield,payload_b64,ingestor)
          VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
                                precision_bits,sats_in_view,pdop,ground_speed,ground_track,snr,rssi,hop_limit,bitfield,payload_b64)
          VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
          ON CONFLICT(id) DO UPDATE SET
            node_id=COALESCE(excluded.node_id,positions.node_id),
            node_num=COALESCE(excluded.node_num,positions.node_num),
@@ -668,8 +635,7 @@ module PotatoMesh
            rssi=COALESCE(excluded.rssi,positions.rssi),
            hop_limit=COALESCE(excluded.hop_limit,positions.hop_limit),
            bitfield=COALESCE(excluded.bitfield,positions.bitfield),
            payload_b64=COALESCE(excluded.payload_b64,positions.payload_b64),
            ingestor=COALESCE(NULLIF(positions.ingestor,''), excluded.ingestor)
            payload_b64=COALESCE(excluded.payload_b64,positions.payload_b64)
        SQL
      end

@@ -724,7 +690,6 @@ module PotatoMesh
      touch_node_last_seen(db, node_id || node_num, node_num, rx_time: rx_time, source: :neighborinfo)

      neighbor_entries = []
      ingestor = string_or_nil(payload["ingestor"])
      neighbors_payload = payload["neighbors"]
      neighbors_list = neighbors_payload.is_a?(Array) ? neighbors_payload : []

@@ -761,56 +726,28 @@ module PotatoMesh
        snr = coerce_float(neighbor["snr"])

        ensure_unknown_node(db, neighbor_id || neighbor_num, neighbor_num, heard_time: entry_rx_time)
        touch_node_last_seen(db, neighbor_id || neighbor_num, neighbor_num, rx_time: entry_rx_time, source: :neighborinfo)

        neighbor_entries << [neighbor_id, snr, entry_rx_time, ingestor]
        neighbor_entries << [neighbor_id, snr, entry_rx_time]
      end

      with_busy_retry do
        db.transaction do
          if neighbor_entries.empty?
            db.execute("DELETE FROM neighbors WHERE node_id = ?", [node_id])
          else
            expected_neighbors = neighbor_entries.map(&:first).uniq
            existing_neighbors = db.execute(
              "SELECT neighbor_id FROM neighbors WHERE node_id = ?",
              [node_id],
            ).flatten
            stale_neighbors = existing_neighbors - expected_neighbors
            stale_neighbors.each_slice(500) do |slice|
              placeholders = slice.map { "?" }.join(",")
              db.execute(
                "DELETE FROM neighbors WHERE node_id = ? AND neighbor_id IN (#{placeholders})",
                [node_id] + slice,
              )
            end
          end

          neighbor_entries.each do |neighbor_id, snr_value, heard_time, reporter_id|
          db.execute("DELETE FROM neighbors WHERE node_id = ?", [node_id])
          neighbor_entries.each do |neighbor_id, snr_value, heard_time|
            db.execute(
              <<~SQL,
                INSERT INTO neighbors(node_id, neighbor_id, snr, rx_time, ingestor)
                VALUES (?, ?, ?, ?, ?)
                ON CONFLICT(node_id, neighbor_id) DO UPDATE SET
                  snr = excluded.snr,
                  rx_time = excluded.rx_time,
                  ingestor = COALESCE(NULLIF(neighbors.ingestor,''), excluded.ingestor)
                INSERT OR REPLACE INTO neighbors(node_id, neighbor_id, snr, rx_time)
                VALUES (?, ?, ?, ?)
              SQL
              [node_id, neighbor_id, snr_value, heard_time, reporter_id],
              [node_id, neighbor_id, snr_value, heard_time],
            )
          end
        end
      end
    end

    def update_node_from_telemetry(
      db,
      node_id,
      node_num,
      rx_time,
      metrics = {},
      lora_freq: nil,
      modem_preset: nil
    )
    def update_node_from_telemetry(db, node_id, node_num, rx_time, metrics = {})
      num = coerce_integer(node_num)
      id = string_or_nil(node_id)
      if id&.start_with?("!")
@@ -820,15 +757,7 @@ module PotatoMesh
      return unless id

      ensure_unknown_node(db, id, num, heard_time: rx_time)
      touch_node_last_seen(
        db,
        id,
        num,
        rx_time: rx_time,
        source: :telemetry,
        lora_freq: lora_freq,
        modem_preset: modem_preset,
      )
      touch_node_last_seen(db, id, num, rx_time: rx_time, source: :telemetry)

      battery = coerce_float(metrics[:battery_level] || metrics["battery_level"])
      voltage = coerce_float(metrics[:voltage] || metrics["voltage"])
@@ -972,23 +901,17 @@ module PotatoMesh
      rx_iso ||= Time.at(rx_time).utc.iso8601

      raw_node_id = payload["node_id"] || payload["from_id"] || payload["from"]
      node_id = string_or_nil(raw_node_id)
      node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
      raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])

      canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
      if canonical_parts
        node_id, node_num, = canonical_parts
      else
        node_id = string_or_nil(raw_node_id)
        node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
        payload_for_num = payload.dup
        payload_for_num["num"] ||= raw_node_num if raw_node_num
        node_num = resolve_node_num(node_id, payload_for_num)
        node_num ||= raw_node_num

      payload_for_num = payload.dup
      payload_for_num["num"] ||= raw_node_num if raw_node_num
      node_num = resolve_node_num(node_id, payload_for_num)
      node_num ||= raw_node_num

        canonical = normalize_node_id(db, node_id || node_num)
        node_id = canonical if canonical
      end
      canonical = normalize_node_id(db, node_id || node_num)
      node_id = canonical if canonical

      from_id = string_or_nil(payload["from_id"]) || node_id
      to_id = string_or_nil(payload["to_id"] || payload["to"])
@@ -1003,9 +926,6 @@ module PotatoMesh
      rssi = coerce_integer(payload["rssi"])
      bitfield = coerce_integer(payload["bitfield"])
      payload_b64 = string_or_nil(payload["payload_b64"] || payload["payload"])
      lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
      modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
      ingestor = string_or_nil(payload["ingestor"])

      telemetry_section = normalize_json_object(payload["telemetry"])
      device_metrics = normalize_json_object(payload["device_metrics"] || payload["deviceMetrics"])
@@ -1335,7 +1255,6 @@ module PotatoMesh
        rainfall_24h,
        soil_moisture,
        soil_temperature,
        ingestor,
      ]

      placeholders = Array.new(row.length, "?").join(",")
@@ -1343,7 +1262,7 @@ module PotatoMesh
      with_busy_retry do
        db.execute <<~SQL, row
          INSERT INTO telemetry(id,node_id,node_num,from_id,to_id,rx_time,rx_iso,telemetry_time,channel,portnum,hop_limit,snr,rssi,bitfield,payload_b64,
                                battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,temperature,relative_humidity,barometric_pressure,gas_resistance,current,iaq,distance,lux,white_lux,ir_lux,uv_lux,wind_direction,wind_speed,weight,wind_gust,wind_lull,radiation,rainfall_1h,rainfall_24h,soil_moisture,soil_temperature,ingestor)
                                battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,temperature,relative_humidity,barometric_pressure,gas_resistance,current,iaq,distance,lux,white_lux,ir_lux,uv_lux,wind_direction,wind_speed,weight,wind_gust,wind_lull,radiation,rainfall_1h,rainfall_24h,soil_moisture,soil_temperature)
          VALUES (#{placeholders})
          ON CONFLICT(id) DO UPDATE SET
            node_id=COALESCE(excluded.node_id,telemetry.node_id),
@@ -1385,26 +1304,17 @@ module PotatoMesh
            rainfall_1h=COALESCE(excluded.rainfall_1h,telemetry.rainfall_1h),
            rainfall_24h=COALESCE(excluded.rainfall_24h,telemetry.rainfall_24h),
            soil_moisture=COALESCE(excluded.soil_moisture,telemetry.soil_moisture),
            soil_temperature=COALESCE(excluded.soil_temperature,telemetry.soil_temperature),
            ingestor=COALESCE(NULLIF(telemetry.ingestor,''), excluded.ingestor)
            soil_temperature=COALESCE(excluded.soil_temperature,telemetry.soil_temperature)
        SQL
      end

      update_node_from_telemetry(
        db,
        node_id,
        node_num,
        rx_time,
        {
          battery_level: battery_level,
          voltage: voltage,
          channel_utilization: channel_utilization,
          air_util_tx: air_util_tx,
          uptime_seconds: uptime_seconds,
        },
        lora_freq: lora_freq,
        modem_preset: modem_preset,
      )
      update_node_from_telemetry(db, node_id, node_num, rx_time, {
        battery_level: battery_level,
        voltage: voltage,
        channel_utilization: channel_utilization,
        air_util_tx: air_util_tx,
        uptime_seconds: uptime_seconds,
      })
    end

    # Persist a traceroute observation and its hop path.
@@ -1437,7 +1347,6 @@ module PotatoMesh
        metrics&.[]("latency_ms") ||
        metrics&.[]("latencyMs"),
      )
      ingestor = string_or_nil(payload["ingestor"])

      hops_value = payload.key?("hops") ? payload["hops"] : payload["path"]
      hops = normalize_trace_hops(hops_value)
@@ -1449,9 +1358,9 @@ module PotatoMesh
      end

      with_busy_retry do
        db.execute <<~SQL, [trace_identifier, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor]
          INSERT INTO traces(id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor)
          VALUES(?,?,?,?,?,?,?,?,?,?)
        db.execute <<~SQL, [trace_identifier, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms]
          INSERT INTO traces(id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms)
          VALUES(?,?,?,?,?,?,?,?,?)
          ON CONFLICT(id) DO UPDATE SET
            request_id=COALESCE(excluded.request_id,traces.request_id),
            src=COALESCE(excluded.src,traces.src),
@@ -1460,8 +1369,7 @@ module PotatoMesh
            rx_iso=excluded.rx_iso,
            rssi=COALESCE(excluded.rssi,traces.rssi),
            snr=COALESCE(excluded.snr,traces.snr),
            elapsed_ms=COALESCE(excluded.elapsed_ms,traces.elapsed_ms),
            ingestor=COALESCE(NULLIF(traces.ingestor,''), excluded.ingestor)
            elapsed_ms=COALESCE(excluded.elapsed_ms,traces.elapsed_ms)
        SQL

        trace_id = trace_identifier || db.last_insert_row_id
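The upsert pattern these hunks add and remove, `ON CONFLICT ... col = COALESCE(excluded.col, table.col)`, only overwrites a column when the incoming row actually supplies a value. A minimal sketch against an in-memory SQLite table (the schema here is illustrative, not the project's):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE traces(id INTEGER PRIMARY KEY, rssi INTEGER, snr REAL)")
db.execute("INSERT INTO traces VALUES (1, -90, 5.5)")

# Second observation carries snr but no rssi: COALESCE keeps the stored -90.
db.execute(
    """
    INSERT INTO traces(id, rssi, snr) VALUES (1, NULL, 7.25)
    ON CONFLICT(id) DO UPDATE SET
      rssi = COALESCE(excluded.rssi, traces.rssi),
      snr = COALESCE(excluded.snr, traces.snr)
    """
)
print(db.execute("SELECT rssi, snr FROM traces WHERE id = 1").fetchone())  # (-90, 7.25)
```

Unlike `INSERT OR REPLACE`, which rewrites the whole row and nulls any column the new row omits, this form merges partial updates into the existing row.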
@@ -1477,58 +1385,6 @@ module PotatoMesh
        end
      end

    # Attempt to decrypt an encrypted Meshtastic message payload.
    #
    # @param message [Hash] message payload supplied by the ingestor.
    # @param packet_id [Integer] message packet identifier.
    # @param from_id [String, nil] canonical node identifier when available.
    # @param from_num [Integer, nil] numeric node identifier when available.
    # @param channel_index [Integer, nil] channel hash index.
    # @return [Hash, nil] decrypted payload metadata when parsing succeeds.
    def decrypt_meshtastic_message(message, packet_id, from_id, from_num, channel_index)
      return nil unless message.is_a?(Hash)

      cipher_b64 = string_or_nil(message["encrypted"])
      return nil unless cipher_b64
      if (ENV["RACK_ENV"] == "test" || ENV["APP_ENV"] == "test" || defined?(RSpec)) &&
          ENV["MESHTASTIC_PSK_B64"].nil?
        return nil
      end

      node_num = coerce_integer(from_num)
      if node_num.nil?
        parts = canonical_node_parts(from_id)
        node_num = parts[1] if parts
      end
      return nil unless node_num

      psk_b64 = PotatoMesh::Config.meshtastic_psk_b64
      data = PotatoMesh::App::Meshtastic::Cipher.decrypt_data(
        cipher_b64: cipher_b64,
        packet_id: packet_id,
        from_id: from_id,
        from_num: node_num,
        psk_b64: psk_b64,
      )
      return nil unless data

      channel_name = nil
      if channel_index.is_a?(Integer)
        candidates = PotatoMesh::App::Meshtastic::RainbowTable.channel_names_for(
          channel_index,
          psk_b64: psk_b64,
        )
        channel_name = candidates.first if candidates.any?
      end

      {
        text: data[:text],
        portnum: data[:portnum],
        payload: data[:payload],
        channel_name: channel_name,
      }
    end

    def insert_message(db, message)
      return unless message.is_a?(Hash)

@@ -1559,14 +1415,6 @@ module PotatoMesh
          from_id = canonical_from_id
        end
      end
      if from_id && !from_id.start_with?("^")
        canonical_parts = canonical_node_parts(from_id, message["from_num"])
        if canonical_parts && !from_id.start_with?("!")
          from_id = canonical_parts[0]
          message["from_num"] ||= canonical_parts[1]
        end
      end
      sender_present = !from_id.nil? || !coerce_integer(message["from_num"]).nil? || !trimmed_from_id.nil?

      raw_to_id = message["to_id"]
      raw_to_id = message["to"] if raw_to_id.nil? || raw_to_id.to_s.strip.empty?
@@ -1580,41 +1428,27 @@ module PotatoMesh
          to_id = canonical_to_id
        end
      end
      if to_id && !to_id.start_with?("^")
        canonical_parts = canonical_node_parts(to_id, message["to_num"])
        if canonical_parts && !to_id.start_with?("!")
          to_id = canonical_parts[0]
          message["to_num"] ||= canonical_parts[1]
        end
      end

      encrypted = string_or_nil(message["encrypted"])
      text = message["text"]
      portnum = message["portnum"]
      clear_encrypted = false
      channel_index = coerce_integer(message["channel"] || message["channel_index"] || message["channelIndex"])

      decrypted_payload = nil
      decrypted_portnum = nil
      ensure_unknown_node(db, from_id || raw_from_id, message["from_num"], heard_time: rx_time)
      touch_node_last_seen(
        db,
        from_id || raw_from_id || message["from_num"],
        message["from_num"],
        rx_time: rx_time,
        source: :message,
      )

      if encrypted && (text.nil? || text.to_s.strip.empty?)
        decrypted = decrypt_meshtastic_message(
          message,
          msg_id,
          from_id,
          message["from_num"],
          channel_index,
      ensure_unknown_node(db, to_id || raw_to_id, message["to_num"], heard_time: rx_time) if to_id || raw_to_id
      if to_id || raw_to_id || message.key?("to_num")
        touch_node_last_seen(
          db,
          to_id || raw_to_id || message["to_num"],
          message["to_num"],
          rx_time: rx_time,
          source: :message,
        )

        if decrypted
          decrypted_payload = decrypted
          decrypted_portnum = decrypted[:portnum]
        end
      end

      if encrypted && (text.nil? || text.to_s.strip.empty?)
        portnum = nil
        message.delete("portnum")
      end

      lora_freq = coerce_integer(message["lora_freq"] || message["loraFrequency"])
@@ -1622,7 +1456,6 @@ module PotatoMesh
      channel_name = string_or_nil(message["channel_name"] || message["channelName"])
      reply_id = coerce_integer(message["reply_id"] || message["replyId"])
      emoji = string_or_nil(message["emoji"])
      ingestor = string_or_nil(message["ingestor"])

      row = [
        msg_id,
@@ -1631,8 +1464,8 @@ module PotatoMesh
        from_id,
        to_id,
        message["channel"],
        portnum,
        text,
        message["portnum"],
        message["text"],
        encrypted,
        message["snr"],
        message["rssi"],
@@ -1642,27 +1475,19 @@ module PotatoMesh
        channel_name,
        reply_id,
        emoji,
        ingestor,
      ]

      with_busy_retry do
        existing = db.get_first_row(
          "SELECT from_id, to_id, text, encrypted, lora_freq, modem_preset, channel_name, reply_id, emoji, portnum, ingestor FROM messages WHERE id = ?",
          "SELECT from_id, to_id, encrypted, lora_freq, modem_preset, channel_name, reply_id, emoji FROM messages WHERE id = ?",
          [msg_id],
        )
        if existing
          updates = {}
          existing_text = existing.is_a?(Hash) ? existing["text"] : existing[2]
          existing_text_str = existing_text&.to_s
          existing_has_text = existing_text_str && !existing_text_str.strip.empty?
          existing_from = existing.is_a?(Hash) ? existing["from_id"] : existing[0]
          existing_from_str = existing_from&.to_s
          return if !sender_present && (existing_from_str.nil? || existing_from_str.strip.empty?)
|
||||
existing_encrypted = existing.is_a?(Hash) ? existing["encrypted"] : existing[3]
|
||||
existing_encrypted_str = existing_encrypted&.to_s
|
||||
decrypted_precedence = text && (clear_encrypted || (existing_encrypted_str && !existing_encrypted_str.strip.empty?))
|
||||
|
||||
if from_id
|
||||
existing_from = existing.is_a?(Hash) ? existing["from_id"] : existing[0]
|
||||
existing_from_str = existing_from&.to_s
|
||||
should_update = existing_from_str.nil? || existing_from_str.strip.empty?
|
||||
should_update ||= existing_from != from_id
|
||||
updates["from_id"] = from_id if should_update
|
||||
@@ -1676,48 +1501,21 @@ module PotatoMesh
|
||||
updates["to_id"] = to_id if should_update
|
||||
end
|
||||
|
||||
if clear_encrypted || (decrypted_precedence && existing_encrypted_str && !existing_encrypted_str.strip.empty?)
|
||||
updates["encrypted"] = nil if existing_encrypted
|
||||
elsif encrypted && !existing_has_text
|
||||
if encrypted
|
||||
existing_encrypted = existing.is_a?(Hash) ? existing["encrypted"] : existing[2]
|
||||
existing_encrypted_str = existing_encrypted&.to_s
|
||||
should_update = existing_encrypted_str.nil? || existing_encrypted_str.strip.empty?
|
||||
should_update ||= existing_encrypted != encrypted
|
||||
updates["encrypted"] = encrypted if should_update
|
||||
end
|
||||
|
||||
if text
|
||||
should_update = existing_text_str.nil? || existing_text_str.strip.empty?
|
||||
should_update ||= existing_text != text
|
||||
updates["text"] = text if should_update
|
||||
end
|
||||
|
||||
if decrypted_precedence
|
||||
updates["channel"] = message["channel"] if message.key?("channel")
|
||||
updates["snr"] = message["snr"] if message.key?("snr")
|
||||
updates["rssi"] = message["rssi"] if message.key?("rssi")
|
||||
updates["hop_limit"] = message["hop_limit"] if message.key?("hop_limit")
|
||||
updates["lora_freq"] = lora_freq unless lora_freq.nil?
|
||||
updates["modem_preset"] = modem_preset if modem_preset
|
||||
updates["channel_name"] = channel_name if channel_name
|
||||
updates["rx_time"] = rx_time if rx_time
|
||||
updates["rx_iso"] = rx_iso if rx_iso
|
||||
end
|
||||
|
||||
if portnum
|
||||
existing_portnum = existing.is_a?(Hash) ? existing["portnum"] : existing[9]
|
||||
existing_portnum_str = existing_portnum&.to_s
|
||||
should_update = existing_portnum_str.nil? || existing_portnum_str.strip.empty?
|
||||
should_update ||= existing_portnum != portnum
|
||||
should_update ||= decrypted_precedence
|
||||
updates["portnum"] = portnum if should_update
|
||||
end
|
||||
|
||||
unless lora_freq.nil?
|
||||
existing_lora = existing.is_a?(Hash) ? existing["lora_freq"] : existing[4]
|
||||
existing_lora = existing.is_a?(Hash) ? existing["lora_freq"] : existing[3]
|
||||
updates["lora_freq"] = lora_freq if existing_lora != lora_freq
|
||||
end
|
||||
|
||||
if modem_preset
|
||||
existing_preset = existing.is_a?(Hash) ? existing["modem_preset"] : existing[5]
|
||||
existing_preset = existing.is_a?(Hash) ? existing["modem_preset"] : existing[4]
|
||||
existing_preset_str = existing_preset&.to_s
|
||||
should_update = existing_preset_str.nil? || existing_preset_str.strip.empty?
|
||||
should_update ||= existing_preset != modem_preset
|
||||
@@ -1725,7 +1523,7 @@ module PotatoMesh
|
||||
end
|
||||
|
||||
if channel_name
|
||||
existing_channel = existing.is_a?(Hash) ? existing["channel_name"] : existing[6]
|
||||
existing_channel = existing.is_a?(Hash) ? existing["channel_name"] : existing[5]
|
||||
existing_channel_str = existing_channel&.to_s
|
||||
should_update = existing_channel_str.nil? || existing_channel_str.strip.empty?
|
||||
should_update ||= existing_channel != channel_name
|
||||
@@ -1733,24 +1531,18 @@ module PotatoMesh
|
||||
end
|
||||
|
||||
unless reply_id.nil?
|
||||
existing_reply = existing.is_a?(Hash) ? existing["reply_id"] : existing[7]
|
||||
existing_reply = existing.is_a?(Hash) ? existing["reply_id"] : existing[6]
|
||||
updates["reply_id"] = reply_id if existing_reply != reply_id
|
||||
end
|
||||
|
||||
if emoji
|
||||
existing_emoji = existing.is_a?(Hash) ? existing["emoji"] : existing[8]
|
||||
existing_emoji = existing.is_a?(Hash) ? existing["emoji"] : existing[7]
|
||||
existing_emoji_str = existing_emoji&.to_s
|
||||
should_update = existing_emoji_str.nil? || existing_emoji_str.strip.empty?
|
||||
should_update ||= existing_emoji != emoji
|
||||
updates["emoji"] = emoji if should_update
|
||||
end
|
||||
|
||||
if ingestor
|
||||
existing_ingestor = existing.is_a?(Hash) ? existing["ingestor"] : existing[10]
|
||||
existing_ingestor = string_or_nil(existing_ingestor)
|
||||
updates["ingestor"] = ingestor if existing_ingestor.nil?
|
||||
end
|
||||
|
||||
unless updates.empty?
|
||||
assignments = updates.keys.map { |column| "#{column} = ?" }.join(", ")
|
||||
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", updates.values + [msg_id])
|
||||
@@ -1760,49 +1552,19 @@ module PotatoMesh
|
||||
|
||||
begin
|
||||
db.execute <<~SQL, row
|
||||
INSERT INTO messages(id,rx_time,rx_iso,from_id,to_id,channel,portnum,text,encrypted,snr,rssi,hop_limit,lora_freq,modem_preset,channel_name,reply_id,emoji,ingestor)
|
||||
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
|
||||
INSERT INTO messages(id,rx_time,rx_iso,from_id,to_id,channel,portnum,text,encrypted,snr,rssi,hop_limit,lora_freq,modem_preset,channel_name,reply_id,emoji)
|
||||
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
|
||||
SQL
|
||||
rescue SQLite3::ConstraintException
|
||||
existing_row = db.get_first_row(
|
||||
"SELECT text, encrypted, ingestor FROM messages WHERE id = ?",
|
||||
[msg_id],
|
||||
)
|
||||
existing_text = existing_row.is_a?(Hash) ? existing_row["text"] : existing_row&.[](0)
|
||||
existing_text_str = existing_text&.to_s
|
||||
allow_encrypted_update = existing_text_str.nil? || existing_text_str.strip.empty?
|
||||
existing_encrypted = existing_row.is_a?(Hash) ? existing_row["encrypted"] : existing_row&.[](1)
|
||||
existing_encrypted_str = existing_encrypted&.to_s
|
||||
existing_ingestor = existing_row.is_a?(Hash) ? existing_row["ingestor"] : existing_row&.[](2)
|
||||
existing_ingestor = string_or_nil(existing_ingestor)
|
||||
decrypted_precedence = text && (clear_encrypted || (existing_encrypted_str && !existing_encrypted_str.strip.empty?))
|
||||
|
||||
fallback_updates = {}
|
||||
fallback_updates["from_id"] = from_id if from_id
|
||||
fallback_updates["to_id"] = to_id if to_id
|
||||
fallback_updates["text"] = text if text
|
||||
fallback_updates["encrypted"] = encrypted if encrypted && allow_encrypted_update
|
||||
fallback_updates["encrypted"] = nil if clear_encrypted
|
||||
fallback_updates["portnum"] = portnum if portnum
|
||||
if decrypted_precedence
|
||||
fallback_updates["channel"] = message["channel"] if message.key?("channel")
|
||||
fallback_updates["snr"] = message["snr"] if message.key?("snr")
|
||||
fallback_updates["rssi"] = message["rssi"] if message.key?("rssi")
|
||||
fallback_updates["hop_limit"] = message["hop_limit"] if message.key?("hop_limit")
|
||||
fallback_updates["portnum"] = portnum if portnum
|
||||
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
|
||||
fallback_updates["modem_preset"] = modem_preset if modem_preset
|
||||
fallback_updates["channel_name"] = channel_name if channel_name
|
||||
fallback_updates["rx_time"] = rx_time if rx_time
|
||||
fallback_updates["rx_iso"] = rx_iso if rx_iso
|
||||
else
|
||||
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
|
||||
fallback_updates["modem_preset"] = modem_preset if modem_preset
|
||||
fallback_updates["channel_name"] = channel_name if channel_name
|
||||
end
|
||||
fallback_updates["encrypted"] = encrypted if encrypted
|
||||
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
|
||||
fallback_updates["modem_preset"] = modem_preset if modem_preset
|
||||
fallback_updates["channel_name"] = channel_name if channel_name
|
||||
fallback_updates["reply_id"] = reply_id unless reply_id.nil?
|
||||
fallback_updates["emoji"] = emoji if emoji
|
||||
fallback_updates["ingestor"] = ingestor if ingestor && existing_ingestor.nil?
|
||||
unless fallback_updates.empty?
|
||||
assignments = fallback_updates.keys.map { |column| "#{column} = ?" }.join(", ")
|
||||
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", fallback_updates.values + [msg_id])
|
||||
@@ -1810,327 +1572,6 @@ module PotatoMesh
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
if clear_encrypted && text
|
||||
debug_log(
|
||||
"Stored decrypted text message",
|
||||
context: "data_processing.insert_message",
|
||||
message_id: msg_id,
|
||||
channel: message["channel"],
|
||||
channel_name: message["channel_name"],
|
||||
portnum: portnum,
|
||||
)
|
||||
end
|
||||
|
||||
stored_decrypted = nil
|
||||
if decrypted_payload
|
||||
stored_decrypted = store_decrypted_payload(
|
||||
db,
|
||||
message,
|
||||
msg_id,
|
||||
decrypted_payload,
|
||||
rx_time: rx_time,
|
||||
rx_iso: rx_iso,
|
||||
from_id: from_id,
|
||||
to_id: to_id,
|
||||
channel: message["channel"],
|
||||
portnum: portnum || decrypted_portnum,
|
||||
hop_limit: message["hop_limit"],
|
||||
snr: message["snr"],
|
||||
rssi: message["rssi"],
|
||||
)
|
||||
end
|
||||
|
||||
if stored_decrypted && encrypted
|
||||
with_busy_retry do
|
||||
db.execute("UPDATE messages SET encrypted = NULL WHERE id = ?", [msg_id])
|
||||
end
|
||||
debug_log(
|
||||
"Cleared encrypted payload after decoding",
|
||||
context: "data_processing.insert_message",
|
||||
message_id: msg_id,
|
||||
portnum: portnum || decrypted_portnum,
|
||||
)
|
||||
end
|
||||
|
||||
should_touch_message = !stored_decrypted
|
||||
if should_touch_message
|
||||
ensure_unknown_node(db, from_id || raw_from_id, message["from_num"], heard_time: rx_time)
|
||||
touch_node_last_seen(
|
||||
db,
|
||||
from_id || raw_from_id || message["from_num"],
|
||||
message["from_num"],
|
||||
rx_time: rx_time,
|
||||
source: :message,
|
||||
lora_freq: lora_freq,
|
||||
modem_preset: modem_preset,
|
||||
)
|
||||
|
||||
ensure_unknown_node(db, to_id || raw_to_id, message["to_num"], heard_time: rx_time) if to_id || raw_to_id
|
||||
if to_id || raw_to_id || message.key?("to_num")
|
||||
touch_node_last_seen(
|
||||
db,
|
||||
to_id || raw_to_id || message["to_num"],
|
||||
message["to_num"],
|
||||
rx_time: rx_time,
|
||||
source: :message,
|
||||
lora_freq: lora_freq,
|
||||
modem_preset: modem_preset,
|
||||
)
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
# Decode and store decrypted payloads in domain-specific tables.
|
||||
#
|
||||
# @param db [SQLite3::Database] open database handle.
|
||||
# @param message [Hash] original message payload.
|
||||
# @param packet_id [Integer] packet identifier for the message.
|
||||
# @param decrypted [Hash] decrypted payload metadata.
|
||||
# @param rx_time [Integer] receive time.
|
||||
# @param rx_iso [String] ISO 8601 receive timestamp.
|
||||
# @param from_id [String, nil] canonical sender identifier.
|
||||
# @param to_id [String, nil] destination identifier.
|
||||
# @param channel [Integer, nil] channel index.
|
||||
# @param portnum [Object, nil] port number identifier.
|
||||
# @param hop_limit [Integer, nil] hop limit value.
|
||||
# @param snr [Numeric, nil] signal-to-noise ratio.
|
||||
# @param rssi [Integer, nil] RSSI value.
|
||||
# @return [void]
|
||||
def store_decrypted_payload(
|
||||
db,
|
||||
message,
|
||||
packet_id,
|
||||
decrypted,
|
||||
rx_time:,
|
||||
rx_iso:,
|
||||
from_id:,
|
||||
to_id:,
|
||||
channel:,
|
||||
portnum:,
|
||||
hop_limit:,
|
||||
snr:,
|
||||
rssi:
|
||||
)
|
||||
payload_bytes = decrypted[:payload]
|
||||
return false unless payload_bytes
|
||||
|
||||
portnum_value = coerce_integer(portnum || decrypted[:portnum])
|
||||
return false unless portnum_value
|
||||
|
||||
payload_b64 = Base64.strict_encode64(payload_bytes)
|
||||
supported_ports = [3, 4, 67, 70, 71]
|
||||
return false unless supported_ports.include?(portnum_value)
|
||||
|
||||
decoded = PotatoMesh::App::Meshtastic::PayloadDecoder.decode(
|
||||
portnum: portnum_value,
|
||||
payload_b64: payload_b64,
|
||||
)
|
||||
return false unless decoded.is_a?(Hash)
|
||||
return false unless decoded["payload"].is_a?(Hash)
|
||||
|
||||
common_payload = {
|
||||
"id" => packet_id,
|
||||
"packet_id" => packet_id,
|
||||
"rx_time" => rx_time,
|
||||
"rx_iso" => rx_iso,
|
||||
"from_id" => from_id,
|
||||
"to_id" => to_id,
|
||||
"channel" => channel,
|
||||
"portnum" => portnum_value.to_s,
|
||||
"hop_limit" => hop_limit,
|
||||
"snr" => snr,
|
||||
"rssi" => rssi,
|
||||
"lora_freq" => coerce_integer(message["lora_freq"] || message["loraFrequency"]),
|
||||
"modem_preset" => string_or_nil(message["modem_preset"] || message["modemPreset"]),
|
||||
"payload_b64" => payload_b64,
|
||||
"ingestor" => string_or_nil(message["ingestor"]),
|
||||
}
|
||||
|
||||
case decoded["type"]
|
||||
when "POSITION_APP"
|
||||
payload = common_payload.merge("position" => decoded["payload"])
|
||||
insert_position(db, payload)
|
||||
debug_log(
|
||||
"Stored decrypted position payload",
|
||||
context: "data_processing.store_decrypted_payload",
|
||||
message_id: packet_id,
|
||||
portnum: portnum_value,
|
||||
)
|
||||
true
|
||||
when "NODEINFO_APP"
|
||||
node_payload = normalize_decrypted_nodeinfo_payload(decoded["payload"])
|
||||
return false unless valid_decrypted_nodeinfo_payload?(node_payload)
|
||||
|
||||
node_id = string_or_nil(node_payload["id"]) || from_id
|
||||
node_num = coerce_integer(node_payload["num"]) ||
|
||||
coerce_integer(message["from_num"]) ||
|
||||
resolve_node_num(from_id, message)
|
||||
node_id ||= format("!%08x", node_num & 0xFFFFFFFF) if node_num
|
||||
return false unless node_id
|
||||
|
||||
payload = node_payload.merge(
|
||||
"num" => node_num,
|
||||
"lastHeard" => coerce_integer(node_payload["lastHeard"] || node_payload["last_heard"]) || rx_time,
|
||||
"snr" => node_payload.key?("snr") ? node_payload["snr"] : snr,
|
||||
"lora_freq" => common_payload["lora_freq"],
|
||||
"modem_preset" => common_payload["modem_preset"],
|
||||
)
|
||||
upsert_node(db, node_id, payload)
|
||||
debug_log(
|
||||
"Stored decrypted node payload",
|
||||
context: "data_processing.store_decrypted_payload",
|
||||
message_id: packet_id,
|
||||
portnum: portnum_value,
|
||||
node_id: node_id,
|
||||
)
|
||||
true
|
||||
when "TELEMETRY_APP"
|
||||
payload = common_payload.merge("telemetry" => decoded["payload"])
|
||||
insert_telemetry(db, payload)
|
||||
debug_log(
|
||||
"Stored decrypted telemetry payload",
|
||||
context: "data_processing.store_decrypted_payload",
|
||||
message_id: packet_id,
|
||||
portnum: portnum_value,
|
||||
)
|
||||
true
|
||||
when "NEIGHBORINFO_APP"
|
||||
neighbor_payload = decoded["payload"]
|
||||
neighbors = neighbor_payload["neighbors"]
|
||||
neighbors = [] unless neighbors.is_a?(Array)
|
||||
normalized_neighbors = neighbors.map do |neighbor|
|
||||
next unless neighbor.is_a?(Hash)
|
||||
{
|
||||
"neighbor_id" => neighbor["node_id"] || neighbor["nodeId"] || neighbor["id"],
|
||||
"snr" => neighbor["snr"],
|
||||
"rx_time" => neighbor["last_rx_time"],
|
||||
}.compact
|
||||
end.compact
|
||||
return false if normalized_neighbors.empty?
|
||||
|
||||
payload = common_payload.merge(
|
||||
"node_id" => neighbor_payload["node_id"] || from_id,
|
||||
"neighbors" => normalized_neighbors,
|
||||
"node_broadcast_interval_secs" => neighbor_payload["node_broadcast_interval_secs"],
|
||||
"last_sent_by_id" => neighbor_payload["last_sent_by_id"],
|
||||
)
|
||||
insert_neighbors(db, payload)
|
||||
debug_log(
|
||||
"Stored decrypted neighbor payload",
|
||||
context: "data_processing.store_decrypted_payload",
|
||||
message_id: packet_id,
|
||||
portnum: portnum_value,
|
||||
)
|
||||
true
|
||||
when "TRACEROUTE_APP"
|
||||
route = decoded["payload"]["route"]
|
||||
route_back = decoded["payload"]["route_back"]
|
||||
hops = route.is_a?(Array) ? route : route_back.is_a?(Array) ? route_back : []
|
||||
dest = hops.last if hops.is_a?(Array) && !hops.empty?
|
||||
src_num = coerce_integer(message["from_num"]) || resolve_node_num(from_id, message)
|
||||
payload = common_payload.merge(
|
||||
"src" => src_num,
|
||||
"dest" => dest,
|
||||
"hops" => hops,
|
||||
)
|
||||
insert_trace(db, payload)
|
||||
debug_log(
|
||||
"Stored decrypted traceroute payload",
|
||||
context: "data_processing.store_decrypted_payload",
|
||||
message_id: packet_id,
|
||||
portnum: portnum_value,
|
||||
)
|
||||
true
|
||||
else
|
||||
false
|
||||
end
|
||||
end
|
||||
|
||||
# Validate decoded NodeInfo payloads before upserting node records.
|
||||
#
|
||||
# @param payload [Object] decoded payload candidate.
|
||||
# @return [Boolean] true when the payload resembles a Meshtastic NodeInfo.
|
||||
def valid_decrypted_nodeinfo_payload?(payload)
|
||||
return false unless payload.is_a?(Hash)
|
||||
return false if payload.empty?
|
||||
return false unless payload["user"].is_a?(Hash)
|
||||
|
||||
return false if payload.key?("position") && !payload["position"].is_a?(Hash)
|
||||
return false if payload.key?("deviceMetrics") && !payload["deviceMetrics"].is_a?(Hash)
|
||||
return false unless nodeinfo_user_has_identifying_fields?(payload["user"])
|
||||
|
||||
true
|
||||
end
|
||||
|
||||
# Normalize decoded NodeInfo payload keys for +upsert_node+ compatibility.
|
||||
#
|
||||
# The Python decoder preserves protobuf field names, so nested hashes may
|
||||
# use +snake_case+ keys that +upsert_node+ does not read.
|
||||
#
|
||||
# @param payload [Object] decoded NodeInfo payload.
|
||||
# @return [Hash] normalized payload hash.
|
||||
def normalize_decrypted_nodeinfo_payload(payload)
|
||||
return {} unless payload.is_a?(Hash)
|
||||
|
||||
user = payload["user"]
|
||||
normalized_user = user.is_a?(Hash) ? user.dup : nil
|
||||
if normalized_user
|
||||
normalized_user["shortName"] ||= normalized_user["short_name"]
|
||||
normalized_user["longName"] ||= normalized_user["long_name"]
|
||||
normalized_user["hwModel"] ||= normalized_user["hw_model"]
|
||||
normalized_user["publicKey"] ||= normalized_user["public_key"]
|
||||
normalized_user["isUnmessagable"] = normalized_user["is_unmessagable"] if normalized_user.key?("is_unmessagable")
|
||||
end
|
||||
|
||||
metrics = payload["deviceMetrics"] || payload["device_metrics"]
|
||||
normalized_metrics = metrics.is_a?(Hash) ? metrics.dup : nil
|
||||
if normalized_metrics
|
||||
normalized_metrics["batteryLevel"] ||= normalized_metrics["battery_level"]
|
||||
normalized_metrics["channelUtilization"] ||= normalized_metrics["channel_utilization"]
|
||||
normalized_metrics["airUtilTx"] ||= normalized_metrics["air_util_tx"]
|
||||
normalized_metrics["uptimeSeconds"] ||= normalized_metrics["uptime_seconds"]
|
||||
end
|
||||
|
||||
position = payload["position"]
|
||||
normalized_position = position.is_a?(Hash) ? position.dup : nil
|
||||
if normalized_position
|
||||
normalized_position["precisionBits"] ||= normalized_position["precision_bits"]
|
||||
normalized_position["locationSource"] ||= normalized_position["location_source"]
|
||||
end
|
||||
|
||||
normalized = payload.dup
|
||||
normalized["user"] = normalized_user if normalized_user
|
||||
normalized["deviceMetrics"] = normalized_metrics if normalized_metrics
|
||||
normalized["position"] = normalized_position if normalized_position
|
||||
normalized["lastHeard"] ||= normalized["last_heard"]
|
||||
normalized["hopsAway"] ||= normalized["hops_away"]
|
||||
normalized["isFavorite"] = normalized["is_favorite"] if normalized.key?("is_favorite")
|
||||
normalized["hwModel"] ||= normalized["hw_model"]
|
||||
normalized
|
||||
end
|
||||
|
||||
# Validate that a decoded NodeInfo user section contains identifying data.
|
||||
#
|
||||
# @param user [Hash] decoded NodeInfo user payload.
|
||||
# @return [Boolean] true when at least one identifying field is present.
|
||||
def nodeinfo_user_has_identifying_fields?(user)
|
||||
identifying_fields = [
|
||||
user["id"],
|
||||
user["shortName"],
|
||||
user["short_name"],
|
||||
user["longName"],
|
||||
user["long_name"],
|
||||
user["macaddr"],
|
||||
user["hwModel"],
|
||||
user["hw_model"],
|
||||
user["publicKey"],
|
||||
user["public_key"],
|
||||
]
|
||||
|
||||
identifying_fields.any? do |value|
|
||||
value.is_a?(String) ? !value.strip.empty? : !value.nil?
|
||||
end
|
||||
end
|
||||
|
||||
def normalize_node_id(db, node_ref)
|
||||
|
||||
@@ -149,9 +149,6 @@ module PotatoMesh
|
||||
db.execute("ALTER TABLE messages ADD COLUMN emoji TEXT")
|
||||
message_columns << "emoji"
|
||||
end
|
||||
unless message_columns.include?("ingestor")
|
||||
db.execute("ALTER TABLE messages ADD COLUMN ingestor TEXT")
|
||||
end
|
||||
|
||||
reply_index_exists =
|
||||
db.get_first_value(
|
||||
@@ -191,31 +188,6 @@ module PotatoMesh
|
||||
db.execute("ALTER TABLE telemetry ADD COLUMN #{name} #{type}")
|
||||
telemetry_columns << name
|
||||
end
|
||||
unless telemetry_columns.include?("ingestor")
|
||||
db.execute("ALTER TABLE telemetry ADD COLUMN ingestor TEXT")
|
||||
end
|
||||
|
||||
position_tables =
|
||||
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='positions'").flatten
|
||||
if position_tables.empty?
|
||||
positions_schema = File.expand_path("../../../../data/positions.sql", __dir__)
|
||||
db.execute_batch(File.read(positions_schema))
|
||||
end
|
||||
position_columns = db.execute("PRAGMA table_info(positions)").map { |row| row[1] }
|
||||
unless position_columns.include?("ingestor")
|
||||
db.execute("ALTER TABLE positions ADD COLUMN ingestor TEXT")
|
||||
end
|
||||
|
||||
neighbor_tables =
|
||||
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='neighbors'").flatten
|
||||
if neighbor_tables.empty?
|
||||
neighbors_schema = File.expand_path("../../../../data/neighbors.sql", __dir__)
|
||||
db.execute_batch(File.read(neighbors_schema))
|
||||
end
|
||||
neighbor_columns = db.execute("PRAGMA table_info(neighbors)").map { |row| row[1] }
|
||||
unless neighbor_columns.include?("ingestor")
|
||||
db.execute("ALTER TABLE neighbors ADD COLUMN ingestor TEXT")
|
||||
end
|
||||
|
||||
trace_tables =
|
||||
db.execute(
|
||||
@@ -225,10 +197,6 @@ module PotatoMesh
|
||||
traces_schema = File.expand_path("../../../../data/traces.sql", __dir__)
|
||||
db.execute_batch(File.read(traces_schema))
|
||||
end
|
||||
trace_columns = db.execute("PRAGMA table_info(traces)").map { |row| row[1] }
|
||||
unless trace_columns.include?("ingestor")
|
||||
db.execute("ALTER TABLE traces ADD COLUMN ingestor TEXT")
|
||||
end
|
||||
|
||||
ingestor_tables =
|
||||
db.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='ingestors'").flatten
|
||||
|
||||
@@ -17,8 +17,6 @@
|
||||
module PotatoMesh
|
||||
module App
|
||||
module Federation
|
||||
FEDERATION_SLEEP_SLICE_SECONDS = 0.2
|
||||
|
||||
# Resolve the canonical domain for the running instance.
|
||||
#
|
||||
# @return [String, nil] sanitized instance domain or nil outside production.
|
||||
@@ -172,9 +170,6 @@ module PotatoMesh
|
||||
# @return [PotatoMesh::App::WorkerPool, nil] active worker pool if created.
|
||||
def ensure_federation_worker_pool!
|
||||
return nil unless federation_enabled?
|
||||
return nil if federation_shutdown_requested?
|
||||
|
||||
ensure_federation_shutdown_hook!
|
||||
|
||||
existing = settings.respond_to?(:federation_worker_pool) ? settings.federation_worker_pool : nil
|
||||
return existing if existing&.alive?
|
||||
@@ -182,81 +177,19 @@ module PotatoMesh
|
||||
pool = PotatoMesh::App::WorkerPool.new(
|
||||
size: PotatoMesh::Config.federation_worker_pool_size,
|
||||
max_queue: PotatoMesh::Config.federation_worker_queue_capacity,
|
||||
task_timeout: PotatoMesh::Config.federation_task_timeout_seconds,
|
||||
name: "potato-mesh-fed",
|
||||
)
|
||||
|
||||
set(:federation_worker_pool, pool) if respond_to?(:set)
|
||||
pool
|
||||
end
|
||||
|
||||
# Ensure federation background workers are torn down during process exit.
|
||||
#
|
||||
# @return [void]
|
||||
def ensure_federation_shutdown_hook!
|
||||
application = is_a?(Class) ? self : self.class
|
||||
return application.ensure_federation_shutdown_hook! unless application.equal?(self)
|
||||
|
||||
installed = if respond_to?(:settings) && settings.respond_to?(:federation_shutdown_hook_installed)
|
||||
settings.federation_shutdown_hook_installed
|
||||
else
|
||||
instance_variable_defined?(:@federation_shutdown_hook_installed) && @federation_shutdown_hook_installed
|
||||
end
|
||||
return if installed
|
||||
|
||||
if respond_to?(:set) && settings.respond_to?(:federation_shutdown_hook_installed=)
|
||||
set(:federation_shutdown_hook_installed, true)
|
||||
else
|
||||
@federation_shutdown_hook_installed = true
|
||||
end
|
||||
|
||||
at_exit do
|
||||
begin
|
||||
application.shutdown_federation_background_work!(timeout: PotatoMesh::Config.federation_shutdown_timeout_seconds)
|
||||
pool.shutdown(timeout: PotatoMesh::Config.federation_task_timeout_seconds)
|
||||
rescue StandardError
|
||||
# Suppress shutdown errors during interpreter teardown.
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
# Check whether federation workers have received a shutdown request.
|
||||
#
|
||||
# @return [Boolean] true when stop has been requested.
|
||||
def federation_shutdown_requested?
|
||||
return false unless respond_to?(:settings)
|
||||
return false unless settings.respond_to?(:federation_shutdown_requested)
|
||||
|
||||
settings.federation_shutdown_requested == true
|
||||
end
|
||||
|
||||
# Mark federation background work as shutting down.
|
||||
#
|
||||
# @return [void]
|
||||
def request_federation_shutdown!
|
||||
set(:federation_shutdown_requested, true) if respond_to?(:set)
|
||||
end
|
||||
|
||||
# Clear any previously requested federation shutdown marker.
|
||||
#
|
||||
# @return [void]
|
||||
def clear_federation_shutdown_request!
|
||||
set(:federation_shutdown_requested, false) if respond_to?(:set)
|
||||
end
|
||||
|
||||
# Sleep in short intervals so federation loops can react to shutdown.
|
||||
#
|
||||
# @param seconds [Numeric] target sleep duration.
|
||||
# @return [Boolean] true when the full delay elapsed without shutdown.
|
||||
def federation_sleep_with_shutdown(seconds)
|
||||
remaining = seconds.to_f
|
||||
while remaining.positive?
|
||||
return false if federation_shutdown_requested?
|
||||
|
||||
slice = [remaining, FEDERATION_SLEEP_SLICE_SECONDS].min
|
||||
Kernel.sleep(slice)
|
||||
remaining -= slice
|
||||
end
|
||||
!federation_shutdown_requested?
|
||||
set(:federation_worker_pool, pool) if respond_to?(:set)
|
||||
pool
|
||||
end
|
||||
|
||||
# Shutdown and clear the federation worker pool if present.
|
||||
@@ -280,44 +213,6 @@ module PotatoMesh
|
||||
end
|
||||
end
|
||||
|
||||
# Gracefully terminate federation background loops and worker pool tasks.
|
||||
#
|
||||
# @param timeout [Numeric, nil] maximum join time applied per thread.
|
||||
# @return [void]
|
||||
def shutdown_federation_background_work!(timeout: nil)
|
||||
request_federation_shutdown!
|
||||
timeout_value = timeout || PotatoMesh::Config.federation_shutdown_timeout_seconds
|
||||
stop_federation_thread!(:initial_federation_thread, timeout: timeout_value)
|
||||
stop_federation_thread!(:federation_thread, timeout: timeout_value)
|
||||
shutdown_federation_worker_pool!
|
||||
clear_federation_crawl_state!
|
||||
end
|
||||
|
||||
# Stop a specific federation thread setting and clear its reference.
|
||||
#
|
||||
# @param setting_name [Symbol] settings key storing the thread object.
|
||||
# @param timeout [Numeric] seconds to wait for clean thread exit.
|
||||
# @return [void]
|
||||
def stop_federation_thread!(setting_name, timeout:)
|
||||
return unless respond_to?(:settings)
|
||||
return unless settings.respond_to?(setting_name)
|
||||
|
||||
thread = settings.public_send(setting_name)
|
||||
if thread&.alive?
|
||||
begin
|
||||
thread.wakeup if thread.respond_to?(:wakeup)
rescue ThreadError
# The thread may not currently be sleeping; continue shutdown.
end
thread.join(timeout)
if thread.alive?
thread.kill
thread.join(0.1)
end
end
set(setting_name, nil) if respond_to?(:set)
end

def federation_target_domains(self_domain)
normalized_self = sanitize_instance_domain(self_domain)&.downcase
ordered = []
@@ -369,21 +264,16 @@ module PotatoMesh

def announce_instance_to_domain(domain, payload_json)
return false unless domain && !domain.empty?
return false if federation_shutdown_requested?

https_failures = []

published = instance_uri_candidates(domain, "/api/instances").any? do |uri|
break false if federation_shutdown_requested?

instance_uri_candidates(domain, "/api/instances").each do |uri|
begin
http = build_remote_http_client(uri)
response = Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Post, uri)
request.body = payload_json
connection.request(request)
end
response = http.start do |connection|
request = build_federation_http_request(Net::HTTP::Post, uri)
request.body = payload_json
connection.request(request)
end
if response.is_a?(Net::HTTPSuccess)
debug_log(
@@ -392,16 +282,14 @@ module PotatoMesh
target: uri.to_s,
status: response.code,
)
true
else
debug_log(
"Federation announcement failed",
context: "federation.announce",
target: uri.to_s,
status: response.code,
)
false
return true
end
debug_log(
"Federation announcement failed",
context: "federation.announce",
target: uri.to_s,
status: response.code,
)
rescue StandardError => e
metadata = {
context: "federation.announce",
@@ -416,18 +304,9 @@ module PotatoMesh
**metadata,
)
https_failures << metadata
else
warn_log(
"Federation announcement raised exception",
**metadata,
)
next
end
false
end
end

unless published
https_failures.each do |metadata|
warn_log(
"Federation announcement raised exception",
**metadata,
@@ -435,7 +314,14 @@ module PotatoMesh
end
end

published
https_failures.each do |metadata|
warn_log(
"Federation announcement raised exception",
**metadata,
)
end

false
end

# Determine whether an HTTPS announcement failure should fall back to HTTP.
@@ -455,7 +341,6 @@ module PotatoMesh

def announce_instance_to_all_domains
return unless federation_enabled?
return if federation_shutdown_requested?

attributes, signature = ensure_self_instance_record!
payload_json = JSON.generate(instance_announcement_payload(attributes, signature))
@@ -463,15 +348,13 @@ module PotatoMesh
pool = federation_worker_pool
scheduled = []

domains.each_with_object(scheduled) do |domain, scheduled_tasks|
break if federation_shutdown_requested?

domains.each do |domain|
if pool
begin
task = pool.schedule do
announce_instance_to_domain(domain, payload_json)
end
scheduled_tasks << [domain, task]
scheduled << [domain, task]
next
rescue PotatoMesh::App::WorkerPool::QueueFullError
warn_log(
@@ -512,9 +395,7 @@ module PotatoMesh
return if scheduled.empty?

timeout = PotatoMesh::Config.federation_task_timeout_seconds
scheduled.all? do |domain, task|
break false if federation_shutdown_requested?

scheduled.each do |domain, task|
begin
task.wait(timeout: timeout)
rescue PotatoMesh::App::WorkerPool::TaskTimeoutError => e
@@ -535,23 +416,19 @@ module PotatoMesh
error_message: e.message,
)
end
true
end
end

def start_federation_announcer!
# Federation broadcasts must not execute when federation support is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!

existing = settings.federation_thread
return existing if existing&.alive?

thread = Thread.new do
loop do
break unless federation_sleep_with_shutdown(PotatoMesh::Config.federation_announcement_interval)

sleep PotatoMesh::Config.federation_announcement_interval
begin
announce_instance_to_all_domains
rescue StandardError => e
@@ -565,8 +442,6 @@ module PotatoMesh
end
end
thread.name = "potato-mesh-federation" if thread.respond_to?(:name=)
# Allow shutdown even if the announcement loop is still sleeping.
thread.daemon = true if thread.respond_to?(:daemon=)
set(:federation_thread, thread)
thread
end
@@ -577,8 +452,6 @@ module PotatoMesh
def start_initial_federation_announcement!
# Skip the initial broadcast entirely when federation is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!

existing = settings.respond_to?(:initial_federation_thread) ? settings.initial_federation_thread : nil
return existing if existing&.alive?
@@ -586,12 +459,7 @@ module PotatoMesh
thread = Thread.new do
begin
delay = PotatoMesh::Config.initial_federation_delay_seconds
if delay.positive?
completed = federation_sleep_with_shutdown(delay)
next unless completed
end
next if federation_shutdown_requested?

Kernel.sleep(delay) if delay.positive?
announce_instance_to_all_domains
rescue StandardError => e
warn_log(
@@ -606,8 +474,6 @@ module PotatoMesh
end
thread.name = "potato-mesh-federation-initial" if thread.respond_to?(:name=)
thread.report_on_exception = false if thread.respond_to?(:report_on_exception=)
# Avoid blocking process shutdown during delayed startup announcements.
thread.daemon = true if thread.respond_to?(:daemon=)
set(:initial_federation_thread, thread)
thread
end
@@ -652,19 +518,15 @@ module PotatoMesh
end

def perform_instance_http_request(uri)
raise InstanceFetchError, "federation shutdown requested" if federation_shutdown_requested?

http = build_remote_http_client(uri)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Get, uri)
response = connection.request(request)
case response
when Net::HTTPSuccess
response.body
else
raise InstanceFetchError, "unexpected response #{response.code}"
end
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Get, uri)
response = connection.request(request)
case response
when Net::HTTPSuccess
response.body
else
raise InstanceFetchError, "unexpected response #{response.code}"
end
end
rescue StandardError => e
@@ -721,12 +583,8 @@ module PotatoMesh
end

def fetch_instance_json(domain, path)
return [nil, ["federation shutdown requested"]] if federation_shutdown_requested?

errors = []
instance_uri_candidates(domain, path).each do |uri|
break if federation_shutdown_requested?

begin
body = perform_instance_http_request(uri)
return [JSON.parse(body), uri] if body
@@ -739,34 +597,6 @@ module PotatoMesh
[nil, errors]
end

# Resolve the best matching active-node count from a remote /api/stats payload.
#
# @param payload [Hash, nil] decoded JSON payload from /api/stats.
# @param max_age_seconds [Integer] activity window currently expected for federation freshness.
# @return [Integer, nil] selected active-node count when available.
def remote_active_node_count_from_stats(payload, max_age_seconds:)
return nil unless payload.is_a?(Hash)

active_nodes = payload["active_nodes"]
return nil unless active_nodes.is_a?(Hash)

age = coerce_integer(max_age_seconds) || 0
key = if age <= 3600
"hour"
elsif age <= 86_400
"day"
elsif age <= PotatoMesh::Config.week_seconds
"week"
else
"month"
end

value = coerce_integer(active_nodes[key])
return nil unless value

[value, 0].max
end

# Parse a remote federation instance payload into canonical attributes.
#
# @param payload [Hash] JSON object describing a remote instance.
@@ -827,147 +657,49 @@ module PotatoMesh
# @param overall_limit [Integer, nil] maximum unique domains visited.
# @return [Boolean] true when the crawl was scheduled successfully.
def enqueue_federation_crawl(domain, per_response_limit:, overall_limit:)
sanitized_domain = sanitize_instance_domain(domain)
unless sanitized_domain
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: "invalid domain",
)
return false
end
return false if federation_shutdown_requested?

application = is_a?(Class) ? self : self.class
pool = application.federation_worker_pool
pool = federation_worker_pool
unless pool
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
domain: domain,
reason: "federation disabled",
)
return false
end

claim_result = application.claim_federation_crawl_slot(sanitized_domain)
unless claim_result == :claimed
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
reason: claim_result == :in_flight ? "crawl already in flight" : "recent crawl completed",
)
return false
end

application = is_a?(Class) ? self : self.class
pool.schedule do
db = nil
db = application.open_database
begin
db = application.open_database
application.ingest_known_instances_from!(
db,
sanitized_domain,
domain,
per_response_limit: per_response_limit,
overall_limit: overall_limit,
)
ensure
db&.close
application.release_federation_crawl_slot(sanitized_domain)
end
end

true
rescue PotatoMesh::App::WorkerPool::QueueFullError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker queue saturated")
rescue PotatoMesh::App::WorkerPool::ShutdownError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker pool shut down")
end

# Handle a failed crawl schedule attempt without applying cooldown.
#
# @param domain [String] canonical domain that failed to schedule.
# @param reason [String] human-readable failure reason.
# @return [Boolean] always false because scheduling did not succeed.
def handle_failed_federation_crawl_schedule(domain, reason)
release_federation_crawl_slot(domain, record_completion: false)
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: reason,
reason: "worker queue saturated",
)
false
rescue PotatoMesh::App::WorkerPool::ShutdownError
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: "worker pool shut down",
)
false
end

# Initialize shared in-memory state used to deduplicate crawl scheduling.
#
# @return [void]
def initialize_federation_crawl_state!
@federation_crawl_init_mutex ||= Mutex.new
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex

@federation_crawl_init_mutex.synchronize do
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex

@federation_crawl_mutex = Mutex.new
@federation_crawl_in_flight = Set.new
@federation_crawl_last_completed_at = {}
end
end

# Retrieve the cooldown period used for duplicate crawl suppression.
#
# @return [Integer] seconds a domain remains in cooldown after completion.
def federation_crawl_cooldown_seconds
PotatoMesh::Config.federation_crawl_cooldown_seconds
end

# Mark a domain crawl as claimed if no active or recent crawl exists.
#
# @param domain [String] canonical domain name.
# @return [Symbol] +:claimed+, +:in_flight+, or +:cooldown+.
def claim_federation_crawl_slot(domain)
initialize_federation_crawl_state!
now = Time.now.to_i
@federation_crawl_mutex.synchronize do
return :in_flight if @federation_crawl_in_flight.include?(domain)

last_completed = @federation_crawl_last_completed_at[domain]
if last_completed && now - last_completed < federation_crawl_cooldown_seconds
return :cooldown
end

@federation_crawl_in_flight << domain
:claimed
end
end

# Release an in-flight crawl claim and record completion timestamp.
#
# @param domain [String] canonical domain name.
# @param record_completion [Boolean] true to apply cooldown tracking.
# @return [void]
def release_federation_crawl_slot(domain, record_completion: true)
return unless domain

initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.delete(domain)
@federation_crawl_last_completed_at[domain] = Time.now.to_i if record_completion
end
end

# Clear all in-memory crawl scheduling state.
#
# @return [void]
def clear_federation_crawl_state!
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.clear
@federation_crawl_last_completed_at.clear
end
end

# Recursively ingest federation records exposed by the supplied domain.
@@ -987,7 +719,6 @@ module PotatoMesh
)
sanitized = sanitize_instance_domain(domain)
return visited || Set.new unless sanitized
return visited || Set.new if federation_shutdown_requested?

visited ||= Set.new

@@ -1022,8 +753,6 @@ module PotatoMesh
processed_entries = 0
recent_cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
payload.each do |entry|
break if federation_shutdown_requested?

if per_response_limit && per_response_limit.positive? && processed_entries >= per_response_limit
debug_log(
"Skipped remote instance entry due to response limit",
@@ -1077,33 +806,21 @@ module PotatoMesh

attributes[:is_private] = false if attributes[:is_private].nil?

stats_payload, stats_metadata = fetch_instance_json(attributes[:domain], "/api/stats")
stats_count = remote_active_node_count_from_stats(
stats_payload,
max_age_seconds: PotatoMesh::Config.remote_instance_max_node_age,
)
attributes[:nodes_count] = stats_count if stats_count

nodes_since_path = "/api/nodes?since=#{recent_cutoff}&limit=1000"
nodes_since_window, nodes_since_metadata = fetch_instance_json(attributes[:domain], nodes_since_path)
if stats_count.nil? && attributes[:nodes_count].nil? && nodes_since_window.is_a?(Array)
if nodes_since_window.is_a?(Array)
attributes[:nodes_count] = nodes_since_window.length
elsif nodes_since_metadata
warn_log(
"Failed to load remote node window",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(nodes_since_metadata).map(&:to_s).join("; "),
)
end

remote_nodes, node_metadata = fetch_instance_json(attributes[:domain], "/api/nodes")
remote_nodes = nodes_since_window if remote_nodes.nil? && nodes_since_window.is_a?(Array)
if attributes[:nodes_count].nil? && remote_nodes.is_a?(Array)
attributes[:nodes_count] = remote_nodes.length
end

if stats_count.nil? && Array(stats_metadata).any?
debug_log(
"Remote instance /api/stats unavailable; using node list fallback",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(stats_metadata).map(&:to_s).join("; "),
)
end
remote_nodes ||= nodes_since_window if nodes_since_window.is_a?(Array)
unless remote_nodes
warn_log(
"Failed to load remote node data",

@@ -20,8 +20,6 @@ module PotatoMesh
# its intended consumers to ensure consistent behaviour across the Sinatra
# application.
module Helpers
ANNOUNCEMENT_URL_PATTERN = %r{\bhttps?://[^\s<]+}i.freeze

# Fetch an application level constant exposed by {PotatoMesh::Application}.
#
# @param name [Symbol] constant identifier to retrieve.
@@ -94,47 +92,6 @@ module PotatoMesh
PotatoMesh::Sanitizer.sanitized_site_name
end

# Retrieve the configured announcement banner copy.
#
# @return [String, nil] sanitised announcement or nil when unset.
def sanitized_announcement
PotatoMesh::Sanitizer.sanitized_announcement
end

# Render the announcement copy with safe outbound links.
#
# @return [String, nil] escaped HTML snippet or nil when unset.
def announcement_html
announcement = sanitized_announcement
return nil unless announcement

fragments = []
last_index = 0

announcement.to_enum(:scan, ANNOUNCEMENT_URL_PATTERN).each do
match = Regexp.last_match
next unless match

start_index = match.begin(0)
end_index = match.end(0)

if start_index > last_index
fragments << Rack::Utils.escape_html(announcement[last_index...start_index])
end

url = match[0]
escaped_url = Rack::Utils.escape_html(url)
fragments << %(<a href="#{escaped_url}" target="_blank" rel="noopener noreferrer">#{escaped_url}</a>)
last_index = end_index
end

if last_index < announcement.length
fragments << Rack::Utils.escape_html(announcement[last_index..])
end

fragments.join
end

# Retrieve the configured channel.
#
# @return [String] sanitised channel identifier.
@@ -165,96 +165,37 @@ module PotatoMesh
# malformed rows gracefully. The dataset is restricted to records updated
# within the rolling window defined by PotatoMesh::Config.week_seconds.
#
# @param limit [Integer, nil] optional page size used when pagination is enabled.
# @param cursor [String, nil] optional keyset cursor for pagination.
# @param with_pagination [Boolean] when true, return items and next cursor metadata.
# @return [Array<Hash>, Hash] list of cleaned instance payloads or pagination metadata hash.
def load_instances_for_api(limit: nil, cursor: nil, with_pagination: false)
# @return [Array<Hash>] list of cleaned instance payloads.
def load_instances_for_api
clean_duplicate_instances!

db = open_database(readonly: true)
db.results_as_hash = true
now = Time.now.to_i
min_last_update_time = now - PotatoMesh::Config.week_seconds
safe_limit = coerce_query_limit(limit) if with_pagination
fetch_limit = with_pagination ? safe_limit + 1 : nil
where_clauses = [
"id IS NOT NULL",
"TRIM(id) != ''",
"domain IS NOT NULL",
"TRIM(domain) != ''",
"pubkey IS NOT NULL",
"TRIM(pubkey) != ''",
"last_update_time IS NOT NULL",
"last_update_time >= ?",
]
items = []
cursor_payload = with_pagination ? decode_query_cursor(cursor) : nil
cursor_domain = cursor_payload ? sanitize_instance_domain(cursor_payload["domain"])&.downcase : nil
cursor_id = cursor_payload ? string_or_nil(cursor_payload["id"]) : nil
sql = <<~SQL
SELECT id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, nodes_count, contact_link, signature
FROM instances
WHERE domain IS NOT NULL AND TRIM(domain) != ''
AND pubkey IS NOT NULL AND TRIM(pubkey) != ''
AND last_update_time IS NOT NULL AND last_update_time >= ?
ORDER BY LOWER(domain)
SQL

loop do
page_where_clauses = where_clauses.dup
page_params = [min_last_update_time]
if with_pagination && cursor_domain && cursor_id
page_where_clauses << "(LOWER(domain) > ? OR (LOWER(domain) = ? AND id > ?))"
page_params.concat([cursor_domain, cursor_domain, cursor_id])
end

sql = <<~SQL
SELECT id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, nodes_count, contact_link, signature
FROM instances
WHERE #{page_where_clauses.join("\n AND ")}
ORDER BY LOWER(domain), id
SQL
sql += " LIMIT ?" if with_pagination
page_params << fetch_limit if with_pagination

rows = with_busy_retry do
db.execute(sql, page_params)
end

rows.each do |row|
normalized = normalize_instance_row(row)
next unless normalized

last_update_time = normalized["lastUpdateTime"]
next unless last_update_time.is_a?(Integer) && last_update_time >= min_last_update_time

items << normalized
end

return items unless with_pagination

break if items.length > safe_limit
break if rows.length < fetch_limit

marker_row = rows.reverse.find do |row|
string_or_nil(row["domain"]) && string_or_nil(row["id"])
end
break unless marker_row

marker_domain = string_or_nil(marker_row["domain"])&.downcase
marker_id = string_or_nil(marker_row["id"])
break unless marker_domain && marker_id

cursor_domain = marker_domain
cursor_id = marker_id
rows = with_busy_retry do
db.execute(sql, min_last_update_time)
end

has_more = items.length > safe_limit
paged_items = has_more ? items.first(safe_limit) : items
next_cursor = nil
if has_more && !paged_items.empty?
marker = paged_items.last
next_cursor = encode_query_cursor({
"domain" => string_or_nil(marker["domain"]),
"id" => string_or_nil(marker["id"]),
})
end
rows.each_with_object([]) do |row, memo|
normalized = normalize_instance_row(row)
next unless normalized

{ items: paged_items, next_cursor: next_cursor }
last_update_time = normalized["lastUpdateTime"]
next unless last_update_time.is_a?(Integer) && last_update_time >= min_last_update_time

memo << normalized
end
rescue SQLite3::Exception => e
warn_log(
"Failed to load instance records",
@@ -262,7 +203,7 @@ module PotatoMesh
error_class: e.class.name,
error_message: e.message,
)
with_pagination ? { items: [], next_cursor: nil } : []
[]
ensure
db&.close
end
@@ -1,102 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "base64"

module PotatoMesh
module App
module Meshtastic
# Compute Meshtastic channel hashes from a name and pre-shared key.
module ChannelHash
module_function

DEFAULT_PSK_ALIAS_KEYS = {
1 => [
0xD4, 0xF1, 0xBB, 0x3A, 0x20, 0x29, 0x07, 0x59,
0xF0, 0xBC, 0xFF, 0xAB, 0xCF, 0x4E, 0x69, 0x01,
].pack("C*"),
2 => [
0x38, 0x4B, 0xBC, 0xC0, 0x1D, 0xC0, 0x22, 0xD1,
0x81, 0xBF, 0x36, 0xB8, 0x61, 0x21, 0xE1, 0xFB,
0x96, 0xB7, 0x2E, 0x55, 0xBF, 0x74, 0x22, 0x7E,
0x9D, 0x6A, 0xFB, 0x48, 0xD6, 0x4C, 0xB1, 0xA1,
].pack("C*"),
}.freeze

# Calculate the Meshtastic channel hash for the given name and PSK.
#
# @param name [String] channel name candidate.
# @param psk_b64 [String, nil] base64-encoded PSK or PSK alias.
# @return [Integer, nil] channel hash byte or nil when inputs are invalid.
def channel_hash(name, psk_b64)
return nil unless name

key = expanded_key(psk_b64)
return nil unless key

h_name = xor_bytes(name.b)
h_key = xor_bytes(key)

(h_name ^ h_key) & 0xFF
end

# Expand the provided PSK into a valid AES key length.
#
# @param psk_b64 [String, nil] base64 PSK value.
# @return [String, nil] expanded key bytes or nil when invalid.
def expanded_key(psk_b64)
raw = Base64.decode64(psk_b64.to_s)

case raw.bytesize
when 0
"".b
when 1
default_key_for_alias(raw.bytes.first)
when 2..15
(raw.bytes + [0] * (16 - raw.bytesize)).pack("C*")
when 16
raw
when 17..31
(raw.bytes + [0] * (32 - raw.bytesize)).pack("C*")
when 32
raw
else
nil
end
end

# Map PSK alias bytes to their default key material.
#
# @param alias_index [Integer, nil] alias identifier for the PSK.
# @return [String, nil] key bytes or nil when unknown.
def default_key_for_alias(alias_index)
return nil unless alias_index

DEFAULT_PSK_ALIAS_KEYS[alias_index]&.dup
end

# XOR all bytes in the given string or byte array.
#
# @param value [String, Array<Integer>] input byte sequence.
# @return [Integer] XOR of all bytes.
def xor_bytes(value)
bytes = value.is_a?(String) ? value.bytes : value
bytes.reduce(0) { |acc, byte| (acc ^ byte) & 0xFF }
end
end
end
end
end
@@ -1,28 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

module PotatoMesh
module App
module Meshtastic
# Canonical list of candidate channel names used to build rainbow tables.
module ChannelNames
CHANNEL_NAME_CANDIDATES = %w[
911 Admin ADMIN admin Alert Alpha AlphaNet Alpine Amateur Amazon Anaconda Aquila Arctic Ash Asteroid Astro Aurora Avalanche Backup Basalt Base Base1 Base2 BaseAlpha BaseBravo BaseCharlie Bavaria Beacon Bear BearNet Beat Berg Berlin BerlinMesh BerlinNet Beta BetaBerlin Bison Blackout Blizzard Bolt Bonfire Border Borealis Bravo BravoNet Breeze Bridge Bronze Burner Burrow Callisto Callsign Camp Campfire CampNet Caravan Carbon Carpet Central Chameleon Charlie Chat Checkpoint Checkpoint1 Checkpoint2 Cheetah City Clinic Cloud Cobra Collective Cologne Colony Comet Command Command1 Command2 CommandRoom Comms Comms1 Comms2 CommsNet Commune Control Control1 Control2 ControlRoom Convoy Copper Core Corvus Cosmos Courier Courier1 Courier2 CourierMesh CourierNet CQ CQ1 CQ2 Crow CrowNet DarkNet Dawn Daybreak Daylight Delta DeltaNet Demo DEMO DemoBerlin Den Desert Diamond Distress District Doctor Dortmund Downlink Downlink1 Draco Dragon DragonNet Dune Dusk Eagle EagleNet East EastStar Echo EchoMesh EchoNet Emergency emergency EMERGENCY EmergencyBerlin Epsilon Equinox Europa Falcon Field FieldNet Fire Fire1 Fire2 Firebird Firefly Fireline Fireteam Firewatch Flash Flock Fluss Fog Forest Fox FoxNet Foxtrot FoxtrotMesh FoxtrotNet Frankfurt Freedom Freq Freq1 Freq2 Friedrichshain Frontier Frost Galaxy Gale Gamma Ganymede Gecko General Ghost GhostNet Glacier Gold Granite Grassland Grid Grid1 Grid2 GridNet GridNorth GridSouth Griffin Group Ham HAM Hamburg HAMNet Harbor Harmony HarmonyNet Hawk HawkNet Haze Help Hessen Highway Hilltop Hinterland Hive Hospital HQ HQ1 HQ2 Hub Hub1 Hub2 Hydra Ice Io Iron Jaguar Jungle Jupiter Kiez Kilo KiloMesh KiloNet Kraken Kreuzberg Lava Layer Layer1 Layer2 Layer3 Leipzig Leopard Liberty LightNet Lightning Lima Link Lion Lizard LongFast LongSlow LoRa LoRaBerlin LoRaHessen LoRaMesh LoRaNet LoRaTest Main Mars Med Med1 Med2 Medic MediumFast MediumSlow Mercury Mesh Mesh1 Mesh2 Mesh3 Mesh4 Mesh5 MeshBerlin MeshCollective MeshCologne MeshFrankfurt MeshGrid 
MeshHamburg MeshHessen MeshLeipzig MeshMunich MeshNet MeshNetwork MeshRuhr Meshtastic MeshTest Meteor Metro Midnight Mirage Mist MoonNet Munich Müggelberg Nebula Nest Network Neukölln Nexus Nightfall NightMesh NightNet Nightshift NightshiftNet Nightwatch Node1 Node2 Node3 Node4 Node5 Nomad NomadMesh NomadNet Nomads Nord North NorthStar Oasis Obsidian Omega Operations OPERATIONS Ops Ops1 Ops2 OpsCenter OpsRoom Orbit Ost Outpost Outsider Owl Pack Packet PacketNet PacketRadio Panther Paramedic Path Peak Phantom Phoenix PhoenixNet Platinum Pluto Polar Prairie Prenzlauer PRIVATE Private Public PUBLIC Pulse PulseNet Python Quasar Radio Radio1 Radio2 RadioNet Rain Ranger Raven RavenNet Relay Relay1 Relay2 Repeater Repeater1 Repeater2 RepeaterHub Rescue Rescue1 Rescue2 RescueTeam Rhythm Ridge River Road Rock Router Router1 Router2 Rover Ruhr Runner Runners Safari Safe Safety Sahara Saturn Savanna Saxony Scout Sector Secure Sensor SENSOR Sensors SENSORS Shade Shadow ShadowNet Shelter Shelter1 Shelter2 ShortFast Sideband Sideband1 Sierra Signal Signal1 Signal2 SignalFire Signals Silver Smoke Snake Snow Solstice SOS Sos SOSBerlin South SouthStar Spectrum Squad StarNet Steel Stone Storm Storm1 Storm2 Stratum Stuttgart Summit SunNet Sunrise Sunset Sync SyncNet Syndicate Süd Tal Tango TangoMesh TangoNet Team Tempo Test TEST test TestBerlin Teufelsberg Thunder Tiger Titan Town Trail Tundra Tunnel Union Unit Universe Uplink Uplink1 Valley Venus Victor Village Viper Volcano Wald Wander Wanderer Wanderers Watch Watch1 Watch2 WaWi West WestStar Whisper Wind Wolf WolfDen WolfMesh WolfNet Wolfpack Wolves Woods Wyvern Zeta Zone Zone1 Zone2 Zone3 Zulu ZuluMesh ZuluNet
].freeze
end
end
end
end
@@ -1,183 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "base64"
require "openssl"

require_relative "channel_hash"
require_relative "protobuf"

module PotatoMesh
  module App
    module Meshtastic
      # Decrypt Meshtastic payloads with AES-CTR using Meshtastic nonce rules.
      module Cipher
        module_function

        DEFAULT_PSK_B64 = "AQ=="
        TEXT_MESSAGE_PORTNUM = 1

        # Decrypt an encrypted Meshtastic payload into UTF-8 text.
        #
        # @param cipher_b64 [String] base64-encoded encrypted payload.
        # @param packet_id [Integer] packet identifier used for the nonce.
        # @param from_id [String, nil] Meshtastic node identifier (e.g. "!9e95cf60").
        # @param from_num [Integer, nil] numeric node identifier override.
        # @param psk_b64 [String, nil] base64 PSK or alias.
        # @return [String, nil] decrypted text or nil when decryption fails.
        def decrypt_text(cipher_b64:, packet_id:, from_id: nil, from_num: nil, psk_b64: DEFAULT_PSK_B64)
          data = decrypt_data(
            cipher_b64: cipher_b64,
            packet_id: packet_id,
            from_id: from_id,
            from_num: from_num,
            psk_b64: psk_b64,
          )

          data && data[:text]
        end

        # Decrypt the Meshtastic data protobuf payload.
        #
        # @param cipher_b64 [String] base64-encoded encrypted payload.
        # @param packet_id [Integer] packet identifier used for the nonce.
        # @param from_id [String, nil] Meshtastic node identifier.
        # @param from_num [Integer, nil] numeric node identifier override.
        # @param psk_b64 [String, nil] base64 PSK or alias.
        # @return [Hash, nil] decrypted data payload details or nil when decryption fails.
        def decrypt_data(cipher_b64:, packet_id:, from_id: nil, from_num: nil, psk_b64: DEFAULT_PSK_B64)
          ciphertext = Base64.strict_decode64(cipher_b64)
          key = ChannelHash.expanded_key(psk_b64)
          return nil unless key
          return nil unless [16, 32].include?(key.bytesize)

          packet_value = normalize_packet_id(packet_id)
          return nil unless packet_value

          from_value = normalize_node_num(from_id, from_num)
          return nil unless from_value

          nonce = build_nonce(packet_value, from_value)
          plaintext = decrypt_aes_ctr(ciphertext, key, nonce)
          return nil unless plaintext

          data = Protobuf.parse_data(plaintext)
          return nil unless data

          text = nil
          if data[:portnum] == TEXT_MESSAGE_PORTNUM
            candidate = data[:payload].dup.force_encoding("UTF-8")
            text = candidate if candidate.valid_encoding? && !candidate.empty?
          end

          { portnum: data[:portnum], payload: data[:payload], text: text }
        rescue ArgumentError, OpenSSL::Cipher::CipherError
          nil
        end

        # Decrypt the Meshtastic data protobuf payload bytes.
        #
        # @param cipher_b64 [String] base64-encoded encrypted payload.
        # @param packet_id [Integer] packet identifier used for the nonce.
        # @param from_id [String, nil] Meshtastic node identifier.
        # @param from_num [Integer, nil] numeric node identifier override.
        # @param psk_b64 [String, nil] base64 PSK or alias.
        # @return [String, nil] payload bytes or nil when decryption fails.
        def decrypt_payload_bytes(cipher_b64:, packet_id:, from_id: nil, from_num: nil, psk_b64: DEFAULT_PSK_B64)
          data = decrypt_data(
            cipher_b64: cipher_b64,
            packet_id: packet_id,
            from_id: from_id,
            from_num: from_num,
            psk_b64: psk_b64,
          )

          data && data[:payload]
        end

        # Build the Meshtastic AES nonce from packet and node identifiers.
        #
        # @param packet_id [Integer] packet identifier.
        # @param from_num [Integer] numeric node identifier.
        # @return [String] 16-byte nonce.
        def build_nonce(packet_id, from_num)
          [packet_id].pack("Q<") + [from_num].pack("L<") + ("\x00" * 4)
        end

        # Decrypt data using AES-CTR with the derived nonce.
        #
        # @param ciphertext [String] encrypted payload bytes.
        # @param key [String] expanded AES key bytes.
        # @param nonce [String] 16-byte nonce.
        # @return [String] decrypted plaintext bytes.
        def decrypt_aes_ctr(ciphertext, key, nonce)
          cipher_name = key.bytesize == 16 ? "aes-128-ctr" : "aes-256-ctr"
          cipher = OpenSSL::Cipher.new(cipher_name)
          cipher.decrypt
          cipher.key = key
          cipher.iv = nonce
          cipher.update(ciphertext) + cipher.final
        end

        # Normalise the packet identifier into an integer.
        #
        # @param packet_id [Integer, nil] packet identifier.
        # @return [Integer, nil] validated packet id or nil when invalid.
        def normalize_packet_id(packet_id)
          return packet_id if packet_id.is_a?(Integer) && packet_id >= 0
          return nil if packet_id.nil?

          if packet_id.is_a?(Numeric)
            return nil if packet_id.negative?
            return packet_id.to_i
          end

          return nil unless packet_id.respond_to?(:to_s)

          trimmed = packet_id.to_s.strip
          return nil if trimmed.empty?
          return trimmed.to_i(10) if trimmed.match?(/\A\d+\z/)

          nil
        end

        # Resolve the node number from any of the supported identifiers.
        #
        # @param from_id [String, nil] Meshtastic node identifier.
        # @param from_num [Integer, nil] numeric node identifier override.
        # @return [Integer, nil] node number or nil when invalid.
        def normalize_node_num(from_id, from_num)
          if from_num.is_a?(Integer)
            return from_num & 0xFFFFFFFF
          elsif from_num.is_a?(Numeric)
            return from_num.to_i & 0xFFFFFFFF
          end

          return nil unless from_id

          trimmed = from_id.to_s.strip
          return nil if trimmed.empty?

          hex = trimmed.delete_prefix("!")
          hex = hex[2..] if hex.start_with?("0x", "0X")
          return nil unless hex.match?(/\A[0-9A-Fa-f]+\z/)

          hex.to_i(16) & 0xFFFFFFFF
        end
      end
    end
  end
end
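As a standalone illustration of the nonce rule the deleted `Cipher#build_nonce` applied — packet id packed little-endian into 8 bytes, node number into 4 bytes, padded with 4 zero bytes — here is a minimal sketch; the packet and node values are made up:

```ruby
# Sketch of the 16-byte AES-CTR nonce layout used by build_nonce above.
# packet_id and from_num are illustrative values, not real traffic.
packet_id = 0x12345678
from_num  = 0x9e95cf60

# "Q<" = 64-bit little-endian, "L<" = 32-bit little-endian, then 4 zero bytes.
nonce = [packet_id].pack("Q<") + [from_num].pack("L<") + ("\x00" * 4)

puts nonce.bytesize       # 16
puts nonce.unpack1("Q<")  # 305419896 (0x12345678 round-trips)
```

The zero padding keeps the IV at the 16-byte block size AES-CTR expects while leaving the counter portion free to increment.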
@@ -1,120 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "json"
require "open3"

module PotatoMesh
  module App
    module Meshtastic
      # Decode Meshtastic protobuf payloads via the Python helper script.
      module PayloadDecoder
        module_function

        PYTHON_ENV_KEY = "MESHTASTIC_PYTHON"
        DEFAULT_PYTHON_RELATIVE = File.join("data", ".venv", "bin", "python")
        DEFAULT_DECODER_RELATIVE = File.join("data", "mesh_ingestor", "decode_payload.py")
        FALLBACK_PYTHON_NAMES = ["python3", "python"].freeze

        # Decode a protobuf payload using the Meshtastic helper.
        #
        # @param portnum [Integer] Meshtastic port number.
        # @param payload_b64 [String] base64-encoded payload bytes.
        # @return [Hash, nil] decoded payload hash or nil when decoding fails.
        def decode(portnum:, payload_b64:)
          return nil unless portnum && payload_b64

          decoder_path = decoder_script_path
          python_path = python_executable_path
          return nil unless decoder_path && python_path

          input = JSON.generate({ portnum: portnum, payload_b64: payload_b64 })
          stdout, stderr, status = Open3.capture3(python_path, decoder_path, stdin_data: input)
          return nil unless status.success?

          parsed = JSON.parse(stdout)
          return nil unless parsed.is_a?(Hash)
          return nil if parsed["error"]

          parsed
        rescue JSON::ParserError
          nil
        rescue Errno::ENOENT
          nil
        rescue ArgumentError
          nil
        end

        # Resolve the configured Python executable for Meshtastic decoding.
        #
        # @return [String, nil] python path or nil when missing.
        def python_executable_path
          configured = ENV[PYTHON_ENV_KEY]
          return configured if configured && !configured.strip.empty?

          candidate = File.expand_path(DEFAULT_PYTHON_RELATIVE, repo_root)
          return candidate if File.exist?(candidate)

          FALLBACK_PYTHON_NAMES.each do |name|
            found = find_executable(name)
            return found if found
          end

          nil
        end

        # Resolve the Meshtastic payload decoder script path.
        #
        # @return [String, nil] script path or nil when missing.
        def decoder_script_path
          repo_candidate = File.expand_path(DEFAULT_DECODER_RELATIVE, repo_root)
          return repo_candidate if File.exist?(repo_candidate)

          web_candidate = File.expand_path(DEFAULT_DECODER_RELATIVE, web_root)
          return web_candidate if File.exist?(web_candidate)

          nil
        end

        # Resolve the repository root directory from the application config.
        #
        # @return [String] absolute path to the repository root.
        def repo_root
          PotatoMesh::Config.repo_root
        end

        def web_root
          PotatoMesh::Config.web_root
        end

        # Locate an executable in PATH without invoking a subshell.
        #
        # @param name [String] executable name to resolve.
        # @return [String, nil] full path when found.
        def find_executable(name)
          ENV.fetch("PATH", "").split(File::PATH_SEPARATOR).each do |path|
            candidate = File.join(path, name)
            return candidate if File.file?(candidate) && File.executable?(candidate)
          end

          nil
        end

        private_class_method :find_executable
      end
    end
  end
end
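The deleted `PayloadDecoder.decode` talks to the Python helper over a one-shot JSON-on-stdin, JSON-on-stdout contract, with an `"error"` key signalling failure. A sketch of just that envelope, without spawning the subprocess — the portnum, payload, and response fields here are illustrative assumptions, not the helper's actual schema:

```ruby
require "json"

# Request side: one JSON object per invocation, as built in decode above.
# portnum 3 and the base64 payload are made-up example values.
request = JSON.generate({ portnum: 3, payload_b64: "CgIIAQ==" })

# Response side: the Ruby caller parses whatever the helper prints and
# rejects non-Hash results or anything carrying an "error" key.
response = JSON.parse('{"latitude_i": 52520008, "longitude_i": 13404954}')

puts request
puts response["latitude_i"]  # 52520008
```

Keeping the contract to a single request/response pair per process avoids any long-lived IPC state, at the cost of one interpreter start per decode.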
@@ -1,140 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

module PotatoMesh
  module App
    module Meshtastic
      # Minimal protobuf helpers for extracting payload bytes from Meshtastic data.
      module Protobuf
        module_function

        WIRE_TYPE_VARINT = 0
        WIRE_TYPE_64BIT = 1
        WIRE_TYPE_LENGTH_DELIMITED = 2
        WIRE_TYPE_32BIT = 5
        DATA_PORTNUM_FIELD = 1
        DATA_PAYLOAD_FIELD = 2

        # Extract a length-delimited field from a protobuf message.
        #
        # @param payload [String] raw protobuf-encoded bytes.
        # @param field_number [Integer] field to extract.
        # @return [String, nil] field bytes or nil when absent/invalid.
        def extract_field_bytes(payload, field_number)
          return nil unless payload && field_number

          bytes = payload.bytes
          index = 0

          while index < bytes.length
            tag, index = read_varint(bytes, index)
            return nil unless tag

            field = tag >> 3
            wire = tag & 0x7

            case wire
            when WIRE_TYPE_VARINT
              _, index = read_varint(bytes, index)
              return nil unless index
            when WIRE_TYPE_64BIT
              index += 8
            when WIRE_TYPE_LENGTH_DELIMITED
              length, index = read_varint(bytes, index)
              return nil unless length
              return nil if index + length > bytes.length
              value = bytes[index, length].pack("C*")
              index += length
              return value if field == field_number
            when WIRE_TYPE_32BIT
              index += 4
            else
              return nil
            end
          end

          nil
        end

        # Parse a Meshtastic Data message for the port number and payload.
        #
        # @param payload [String] raw protobuf-encoded bytes.
        # @return [Hash, nil] parsed port number and payload bytes.
        def parse_data(payload)
          return nil unless payload

          bytes = payload.bytes
          index = 0
          portnum = nil
          data_payload = nil

          while index < bytes.length
            tag, index = read_varint(bytes, index)
            return nil unless tag

            field = tag >> 3
            wire = tag & 0x7

            case wire
            when WIRE_TYPE_VARINT
              value, index = read_varint(bytes, index)
              return nil unless value
              portnum = value if field == DATA_PORTNUM_FIELD
            when WIRE_TYPE_64BIT
              index += 8
            when WIRE_TYPE_LENGTH_DELIMITED
              length, index = read_varint(bytes, index)
              return nil unless length
              return nil if index + length > bytes.length
              value = bytes[index, length].pack("C*")
              index += length
              data_payload = value if field == DATA_PAYLOAD_FIELD
            when WIRE_TYPE_32BIT
              index += 4
            else
              return nil
            end
          end

          return nil unless portnum && data_payload

          { portnum: portnum, payload: data_payload }
        end

        # Read a protobuf varint from a byte array.
        #
        # @param bytes [Array<Integer>] byte stream.
        # @param index [Integer] read offset.
        # @return [Array(Integer, Integer), nil] value and new index or nil when invalid.
        def read_varint(bytes, index)
          shift = 0
          value = 0

          while index < bytes.length
            byte = bytes[index]
            index += 1
            value |= (byte & 0x7F) << shift
            return [value, index] if (byte & 0x80).zero?
            shift += 7
            return nil if shift > 63
          end

          nil
        end
      end
    end
  end
end
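The varint logic above can be exercised on its own. This sketch repeats `read_varint` outside the module and decodes the standard protobuf example bytes `0x96 0x01`: 7 data bits per byte, least-significant group first, high bit set while more bytes follow.

```ruby
# Mirror of the read_varint helper above, standalone for illustration.
def read_varint(bytes, index)
  shift = 0
  value = 0
  while index < bytes.length
    byte = bytes[index]
    index += 1
    value |= (byte & 0x7F) << shift          # low 7 bits carry data
    return [value, index] if (byte & 0x80).zero?  # high bit clear ends the varint
    shift += 7
    return nil if shift > 63                 # reject over-long encodings
  end
  nil  # truncated input
end

# 0x96 0x01 is the classic protobuf encoding of 150: 0x16 + (0x01 << 7).
value, next_index = read_varint([0x96, 0x01], 0)
puts value       # 150
puts next_index  # 2
```

A truncated stream such as a lone continuation byte `[0x80]` returns `nil` rather than a partial value, which is what lets the callers above bail out cleanly.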
@@ -1,68 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require_relative "channel_hash"
require_relative "channel_names"

module PotatoMesh
  module App
    module Meshtastic
      # Resolve candidate channel names for a hashed channel index.
      module RainbowTable
        module_function

        @tables = {}

        # Lookup candidate channel names for a hashed channel index.
        #
        # @param index [Integer, nil] channel hash byte.
        # @param psk_b64 [String, nil] base64 PSK or alias.
        # @return [Array<String>] list of candidate names.
        def channel_names_for(index, psk_b64:)
          return [] unless index.is_a?(Integer)

          table_for(psk_b64)[index] || []
        end

        # Build or retrieve the cached rainbow table for the given PSK.
        #
        # @param psk_b64 [String, nil] base64 PSK or alias.
        # @return [Hash{Integer=>Array<String>}] mapping of hash bytes to names.
        def table_for(psk_b64)
          key = psk_b64.to_s
          @tables[key] ||= build_table(psk_b64)
        end

        # Build a hash-to-name mapping for the provided PSK.
        #
        # @param psk_b64 [String, nil] base64 PSK or alias.
        # @return [Hash{Integer=>Array<String>}] mapping of hash bytes to names.
        def build_table(psk_b64)
          mapping = Hash.new { |hash, key| hash[key] = [] }

          ChannelNames::CHANNEL_NAME_CANDIDATES.each do |name|
            hash = ChannelHash.channel_hash(name, psk_b64)
            next unless hash

            mapping[hash] << name
          end

          mapping
        end
      end
    end
  end
end
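The rainbow-table idea above — hash every candidate name once, then answer hash-to-name lookups from the cached inverse mapping — can be sketched with a stand-in hash function. `toy_hash` below is illustrative only and is not Meshtastic's real channel hash:

```ruby
# Stand-in for ChannelHash.channel_hash: XOR of the name's bytes.
def toy_hash(name)
  name.bytes.reduce(0) { |acc, b| acc ^ b }
end

# Precompute name -> hash once; collisions simply yield multiple candidates.
names = ["LongFast", "Berlin", "MediumSlow"]
table = Hash.new { |h, k| h[k] = [] }
names.each { |n| table[toy_hash(n)] << n }

# Reverse lookup: given only the hash byte, recover candidate names.
puts table[toy_hash("Berlin")].inspect  # ["Berlin"]
```

Because the hash is a single byte in the real scheme, collisions are expected; returning an array of candidates rather than a single name is what makes the table usable.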
@@ -127,162 +127,6 @@ module PotatoMesh
|
||||
[threshold, floor].max
|
||||
end
|
||||
|
||||
# Normalise an optional upper-bound timestamp for keyset pagination.
|
||||
#
|
||||
# @param before [Object] requested upper bound expressed as unix seconds.
|
||||
# @param ceiling [Integer] maximum allowable timestamp.
|
||||
# @return [Integer, nil] normalized upper bound or nil when absent.
|
||||
def normalize_before_threshold(before, ceiling: Time.now.to_i)
|
||||
value = coerce_integer(before)
|
||||
return nil if value.nil?
|
||||
|
||||
value = 0 if value.negative?
|
||||
[value, ceiling].min
|
||||
end
|
||||
|
||||
# Decode a keyset cursor token previously emitted by {encode_query_cursor}.
|
||||
#
|
||||
# @param token [String, nil] base64 cursor token.
|
||||
# @return [Hash, nil] decoded cursor payload.
|
||||
def decode_query_cursor(token)
|
||||
value = string_or_nil(token)
|
||||
return nil unless value
|
||||
|
||||
decoded = Base64.urlsafe_decode64(value)
|
||||
parsed = JSON.parse(decoded)
|
||||
parsed.is_a?(Hash) ? parsed : nil
|
||||
rescue ArgumentError, JSON::ParserError
|
||||
nil
|
||||
end
|
||||
|
||||
# Encode a cursor payload for keyset pagination transport.
|
||||
#
|
||||
# @param payload [Hash] cursor components.
|
||||
# @return [String] URL-safe base64 cursor token.
|
||||
def encode_query_cursor(payload)
|
||||
Base64.urlsafe_encode64(JSON.generate(payload))
|
||||
end
|
||||
|
||||
# Parse a rowid-based time cursor payload from pagination token.
|
||||
#
|
||||
# @param cursor [String, nil] cursor token.
|
||||
# @param time_key [String] payload key containing the timestamp component.
|
||||
# @return [Array<(Integer, Integer)>, Array<(nil, nil)>] decoded time/rowid pair.
|
||||
def decode_rowid_time_cursor(cursor, time_key:)
|
||||
cursor_payload = decode_query_cursor(cursor)
|
||||
return [nil, nil] unless cursor_payload
|
||||
|
||||
[coerce_integer(cursor_payload[time_key]), coerce_integer(cursor_payload["rowid"])]
|
||||
end
|
||||
|
||||
# Build pagination metadata for rowid-keyset collections.
|
||||
#
|
||||
# @param items [Array<Hash>] compacted rows.
|
||||
# @param limit [Integer] requested limit.
|
||||
# @param time_key [String] cursor payload timestamp key.
|
||||
# @param marker_time [Proc] extractor receiving marker row hash.
|
||||
# @return [Hash{Symbol => Object}] items and optional next_cursor.
|
||||
def build_rowid_pagination_response(items, limit, time_key:, marker_time:)
|
||||
has_more = items.length > limit
|
||||
paged_items = has_more ? items.first(limit) : items
|
||||
next_cursor = nil
|
||||
if has_more && !paged_items.empty?
|
||||
marker = paged_items.last
|
||||
next_cursor = encode_query_cursor({
|
||||
time_key => coerce_integer(marker_time.call(marker)),
|
||||
"rowid" => coerce_integer(marker["_cursor_rowid"]),
|
||||
})
|
||||
end
|
||||
paged_items.each { |item| item.delete("_cursor_time") }
|
||||
paged_items.each { |item| item.delete("_cursor_rowid") }
|
||||
{ items: paged_items, next_cursor: next_cursor }
|
||||
end
|
||||
|
||||
# Append normalized since/before predicates for time-windowed collections.
|
||||
#
|
||||
# @param where_clauses [Array<String>] mutable SQL predicate fragments.
|
||||
# @param params [Array<Object>] mutable SQL bind parameters.
|
||||
# @param since [Object] lower-bound timestamp candidate.
|
||||
# @param before [Object] upper-bound timestamp candidate.
|
||||
# @param since_floor [Integer] minimum accepted since threshold.
|
||||
# @param ceiling [Integer] maximum accepted before threshold.
|
||||
# @param time_expression [String] SQL expression used for temporal filtering.
|
||||
# @return [Integer] normalized since threshold.
|
||||
def append_time_window_filters!(
|
||||
where_clauses:,
|
||||
params:,
|
||||
since:,
|
||||
before:,
|
||||
since_floor:,
|
||||
ceiling:,
|
||||
time_expression:
|
||||
)
|
||||
since_threshold = normalize_since_threshold(since, floor: since_floor)
|
||||
before_threshold = normalize_before_threshold(before, ceiling: ceiling)
|
||||
|
||||
where_clauses << "#{time_expression} >= ?"
|
||||
params << since_threshold
|
||||
if before_threshold
|
||||
where_clauses << "#{time_expression} <= ?"
|
||||
params << before_threshold
|
||||
end
|
||||
|
||||
since_threshold
|
||||
end
|
||||
|
||||
# Append rowid/timestamp keyset predicates for descending time-ordered tables.
|
||||
#
|
||||
# @param where_clauses [Array<String>] mutable SQL predicate fragments.
|
||||
# @param params [Array<Object>] mutable SQL bind parameters.
|
||||
# @param cursor [String, nil] encoded cursor token.
|
||||
# @param time_key [String] cursor payload timestamp key.
|
||||
# @param time_expression [String] SQL timestamp expression used for ordering.
|
||||
# @return [void]
|
||||
def append_rowid_time_cursor_filter!(where_clauses:, params:, cursor:, time_key:, time_expression:)
|
||||
cursor_time, cursor_rowid = decode_rowid_time_cursor(cursor, time_key: time_key)
|
||||
return unless cursor_time && cursor_rowid
|
||||
|
||||
where_clauses << "(#{time_expression} < ? OR (#{time_expression} = ? AND rowid < ?))"
|
||||
params.concat([cursor_time, cursor_time, cursor_rowid])
|
||||
end
|
||||
|
||||
# Return exact active-node counts across common activity windows.
|
||||
#
|
||||
# Counts are resolved directly in SQL with COUNT(*) thresholds against
|
||||
# +nodes.last_heard+ to avoid sampling bias from list endpoint limits.
|
||||
#
|
||||
# @param now [Integer] reference unix timestamp in seconds.
|
||||
# @param db [SQLite3::Database, nil] optional open database handle to reuse.
|
||||
# @return [Hash{String => Integer}] counts keyed by hour/day/week/month.
|
||||
def query_active_node_stats(now: Time.now.to_i, db: nil)
|
||||
handle = db || open_database(readonly: true)
|
||||
handle.results_as_hash = true
|
||||
reference_now = coerce_integer(now) || Time.now.to_i
|
||||
hour_cutoff = reference_now - 3600
|
||||
day_cutoff = reference_now - 86_400
|
||||
week_cutoff = reference_now - PotatoMesh::Config.week_seconds
|
||||
month_cutoff = reference_now - (30 * 24 * 60 * 60)
|
||||
private_filter = private_mode? ? " AND (role IS NULL OR role <> 'CLIENT_HIDDEN')" : ""
|
||||
sql = <<~SQL
|
||||
SELECT
|
||||
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS hour_count,
|
||||
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS day_count,
|
||||
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS week_count,
|
||||
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS month_count
|
||||
SQL
|
||||
row = with_busy_retry do
|
||||
handle.get_first_row(sql, [hour_cutoff, day_cutoff, week_cutoff, month_cutoff])
|
||||
end || {}
|
||||
{
|
||||
"hour" => row["hour_count"].to_i,
|
||||
"day" => row["day_count"].to_i,
|
||||
"week" => row["week_count"].to_i,
|
||||
"month" => row["month_count"].to_i,
|
||||
}
|
||||
ensure
|
||||
handle&.close unless db
|
||||
end
|
||||
|
||||
def node_reference_tokens(node_ref)
|
||||
parts = canonical_node_parts(node_ref)
|
||||
canonical_id, numeric_id = parts ? parts[0, 2] : [nil, nil]
|
||||
@@ -369,45 +213,27 @@ module PotatoMesh
|
||||
#
|
||||
# @param limit [Integer] maximum number of rows to return.
|
||||
# @param node_ref [String, Integer, nil] optional node reference to narrow results.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
|
||||
# @return [Array<Hash>] compacted node rows suitable for API responses.
|
||||
def query_nodes(limit, node_ref: nil, since: 0, before: nil, cursor: nil, with_pagination: false)
|
||||
def query_nodes(limit, node_ref: nil, since: 0)
|
||||
limit = coerce_query_limit(limit)
|
||||
fetch_limit = with_pagination ? limit + 1 : limit
|
||||
db = open_database(readonly: true)
|
||||
db.results_as_hash = true
|
||||
now = Time.now.to_i
|
||||
min_last_heard = now - PotatoMesh::Config.week_seconds
|
||||
since_floor = node_ref ? 0 : min_last_heard
|
||||
since_threshold = normalize_since_threshold(since, floor: since_floor)
|
||||
before_threshold = normalize_before_threshold(before, ceiling: now)
|
||||
since_threshold = normalize_since_threshold(since, floor: min_last_heard)
|
||||
params = []
|
||||
where_clauses = []
|
||||
|
||||
if node_ref
|
||||
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["num"])
|
||||
return with_pagination ? { items: [], next_cursor: nil } : [] unless clause
|
||||
return [] unless clause
|
||||
where_clauses << clause.first
|
||||
params.concat(clause.last)
|
||||
else
|
||||
where_clauses << "last_heard >= ?"
|
||||
params << since_threshold
|
||||
end
|
||||
if before_threshold
|
||||
where_clauses << "last_heard <= ?"
|
||||
params << before_threshold
|
||||
end
|
||||
if with_pagination
|
||||
cursor_payload = decode_query_cursor(cursor)
|
||||
if cursor_payload
|
||||
cursor_last_heard = coerce_integer(cursor_payload["last_heard"])
|
||||
cursor_node_id = string_or_nil(cursor_payload["node_id"])
|
||||
if cursor_last_heard && cursor_node_id
|
||||
where_clauses << "(last_heard < ? OR (last_heard = ? AND node_id < ?))"
|
||||
params.concat([cursor_last_heard, cursor_last_heard, cursor_node_id])
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
if private_mode?
|
||||
where_clauses << "(role IS NULL OR role <> 'CLIENT_HIDDEN')"
|
||||
@@ -423,10 +249,10 @@ module PotatoMesh
|
||||
SQL
|
||||
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
|
||||
sql += <<~SQL
|
||||
ORDER BY last_heard DESC, node_id DESC
|
||||
ORDER BY last_heard DESC
|
||||
LIMIT ?
|
||||
SQL
|
||||
params << fetch_limit
|
||||
params << limit
|
||||
|
||||
rows = db.execute(sql, params)
|
||||
rows = rows.select do |r|
|
||||
@@ -436,15 +262,7 @@ module PotatoMesh
|
||||
.max
|
||||
last_candidate && last_candidate >= since_threshold
|
||||
end
|
||||
|
||||
has_more = with_pagination && rows.length > limit
|
||||
paged_rows = has_more ? rows.first(limit) : rows
|
||||
marker_row = has_more ? paged_rows.last : nil
|
||||
marker_last_heard = marker_row ? coerce_integer(marker_row["last_heard"]) : nil
|
||||
marker_node_id = marker_row ? string_or_nil(marker_row["node_id"]) : nil
|
||||
output_rows = with_pagination ? paged_rows : rows
|
||||
|
||||
output_rows.each do |r|
|
||||
rows.each do |r|
|
||||
r["role"] ||= "CLIENT"
|
||||
lh = r["last_heard"]&.to_i
|
||||
pt = r["position_time"]&.to_i
|
||||
@@ -457,18 +275,7 @@ module PotatoMesh
|
||||
pb = r["precision_bits"]
|
||||
r["precision_bits"] = pb.to_i if pb
|
||||
end
|
||||
items = output_rows.map { |row| compact_api_row(row) }
|
||||
items.each { |item| item.delete("_cursor_rowid") }
|
||||
return items unless with_pagination
|
||||
|
||||
next_cursor = nil
|
||||
if has_more && marker_last_heard && marker_node_id
|
||||
next_cursor = encode_query_cursor({
|
||||
"last_heard" => marker_last_heard,
|
||||
"node_id" => marker_node_id,
|
||||
})
|
||||
end
|
||||
{ items: items, next_cursor: next_cursor }
|
||||
rows.map { |row| compact_api_row(row) }
|
||||
ensure
|
||||
db&.close
|
||||
end
|
||||
@@ -476,43 +283,24 @@ module PotatoMesh
|
||||
# Fetch ingestor heartbeats with optional freshness filtering.
|
||||
#
|
||||
# @param limit [Integer] maximum number of ingestors to return.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
|
||||
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
|
||||
# @return [Array<Hash>] compacted ingestor rows suitable for API responses.
|
||||
def query_ingestors(limit, since: 0, before: nil, cursor: nil, with_pagination: false)
|
||||
def query_ingestors(limit, since: 0)
|
||||
limit = coerce_query_limit(limit)
|
||||
fetch_limit = with_pagination ? limit + 1 : limit
|
||||
db = open_database(readonly: true)
|
||||
db.results_as_hash = true
|
||||
now = Time.now.to_i
|
||||
cutoff = now - PotatoMesh::Config.week_seconds
|
||||
since_threshold = normalize_since_threshold(since, floor: cutoff)
|
||||
before_threshold = normalize_before_threshold(before, ceiling: now)
|
||||
where_clauses = ["last_seen_time >= ?"]
|
||||
params = [since_threshold]
|
||||
if before_threshold
|
||||
where_clauses << "last_seen_time <= ?"
|
||||
params << before_threshold
|
||||
end
|
||||
if with_pagination
|
||||
cursor_payload = decode_query_cursor(cursor)
|
||||
if cursor_payload
|
||||
cursor_last_seen = coerce_integer(cursor_payload["last_seen_time"])
|
||||
cursor_node_id = string_or_nil(cursor_payload["node_id"])
|
||||
if cursor_last_seen && cursor_node_id
|
||||
where_clauses << "(last_seen_time < ? OR (last_seen_time = ? AND node_id < ?))"
|
||||
params.concat([cursor_last_seen, cursor_last_seen, cursor_node_id])
|
||||
end
|
||||
end
|
||||
end
|
||||
sql = <<~SQL
|
||||
SELECT node_id, start_time, last_seen_time, version, lora_freq, modem_preset
|
||||
FROM ingestors
|
||||
WHERE #{where_clauses.join(" AND ")}
|
||||
ORDER BY last_seen_time DESC, node_id DESC
|
||||
WHERE last_seen_time >= ?
|
||||
ORDER BY last_seen_time DESC
|
||||
LIMIT ?
|
||||
SQL
|
||||
|
||||
rows = db.execute(sql, params + [fetch_limit])
|
||||
rows = db.execute(sql, [since_threshold, limit])
|
||||
rows.each do |row|
|
||||
row.delete_if { |key, _| key.is_a?(Integer) }
|
||||
start_time = coerce_integer(row["start_time"])
|
||||
@@ -528,21 +316,7 @@ module PotatoMesh
|
||||
row["last_seen_iso"] = Time.at(last_seen_time).utc.iso8601 if last_seen_time
|
||||
end
|
||||
|
||||
items = rows.map { |row| compact_api_row(row) }
|
||||
items.each { |item| item.delete("_cursor_rowid") }
|
||||
return items unless with_pagination
|
||||
|
||||
has_more = items.length > limit
|
||||
paged_items = has_more ? items.first(limit) : items
|
||||
next_cursor = nil
|
||||
if has_more && !paged_items.empty?
|
||||
marker = paged_items.last
|
||||
next_cursor = encode_query_cursor({
|
||||
"last_seen_time" => coerce_integer(marker["last_seen_time"]),
|
||||
"node_id" => string_or_nil(marker["node_id"]),
|
||||
})
|
||||
end
|
||||
{ items: paged_items, next_cursor: next_cursor }
|
||||
rows.map { |row| compact_api_row(row) }
|
||||
ensure
|
||||
db&.close
|
||||
end
|
||||
@@ -554,11 +328,9 @@ module PotatoMesh
# @param include_encrypted [Boolean] when true, include encrypted payloads in the response.
# @param since [Integer] unix timestamp threshold; messages with rx_time older than this are excluded.
# @return [Array<Hash>] compacted message rows safe for API responses.
def query_messages(limit, node_ref: nil, include_encrypted: false, since: 0, before: nil, cursor: nil, with_pagination: false)
def query_messages(limit, node_ref: nil, include_encrypted: false, since: 0)
limit = coerce_query_limit(limit)
fetch_limit = with_pagination ? limit + 1 : limit
since_threshold = normalize_since_threshold(since, floor: 0)
before_threshold = normalize_before_threshold(before)
db = open_database(readonly: true)
db.results_as_hash = true
params = []
@@ -568,25 +340,10 @@ module PotatoMesh
include_encrypted = !!include_encrypted
where_clauses << "m.rx_time >= ?"
params << since_threshold
if before_threshold
where_clauses << "m.rx_time <= ?"
params << before_threshold
end

unless include_encrypted
where_clauses << "COALESCE(TRIM(m.encrypted), '') = ''"
end
if with_pagination
cursor_payload = decode_query_cursor(cursor)
if cursor_payload
cursor_rx_time = coerce_integer(cursor_payload["rx_time"])
cursor_id = string_or_nil(cursor_payload["id"])
if cursor_rx_time && cursor_id
where_clauses << "(m.rx_time < ? OR (m.rx_time = ? AND m.id < ?))"
params.concat([cursor_rx_time, cursor_rx_time, cursor_id])
end
end
end

if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["m.from_id", "m.to_id"])
@@ -599,23 +356,20 @@ module PotatoMesh
SELECT m.id, m.rx_time, m.rx_iso, m.from_id, m.to_id, m.channel,
m.portnum, m.text, m.encrypted, m.rssi, m.hop_limit,
m.lora_freq, m.modem_preset, m.channel_name, m.snr,
m.reply_id, m.emoji, m.ingestor
m.reply_id, m.emoji
FROM messages m
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n"
sql += <<~SQL
ORDER BY m.rx_time DESC, m.id DESC
ORDER BY m.rx_time DESC
LIMIT ?
SQL
params << fetch_limit
params << limit
rows = db.execute(sql, params)
rows.each do |r|
r.delete_if { |key, _| key.is_a?(Integer) }
r["reply_id"] = coerce_integer(r["reply_id"]) if r.key?("reply_id")
r["emoji"] = string_or_nil(r["emoji"]) if r.key?("emoji")
if string_or_nil(r["encrypted"])
r.delete("portnum")
end
if PotatoMesh::Config.debug? && (r["from_id"].nil? || r["from_id"].to_s.strip.empty?)
raw = db.execute("SELECT * FROM messages WHERE id = ?", [r["id"]]).first
debug_log(
@@ -649,21 +403,7 @@ module PotatoMesh
)
end
end
items = rows.map { |row| compact_api_row(row) }
items.each { |item| item.delete("_cursor_rowid") }
return items unless with_pagination

has_more = items.length > limit
paged_items = has_more ? items.first(limit) : items
next_cursor = nil
if has_more && !paged_items.empty?
marker = paged_items.last
next_cursor = encode_query_cursor({
"rx_time" => coerce_integer(marker["rx_time"]),
"id" => string_or_nil(marker["id"]),
})
end
{ items: paged_items, next_cursor: next_cursor }
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end
@@ -674,26 +414,17 @@ module PotatoMesh
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted position rows suitable for API responses.
def query_positions(limit, node_ref: nil, since: 0, before: nil, cursor: nil, with_pagination: false)
def query_positions(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
fetch_limit = with_pagination ? limit + 1 : limit
db = open_database(readonly: true)
db.results_as_hash = true
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
time_expression = "COALESCE(rx_time, position_time, 0)"
since_floor = node_ref ? 0 : min_rx_time
append_time_window_filters!(
where_clauses: where_clauses,
params: params,
since: since,
before: before,
since_floor: since_floor,
ceiling: now,
time_expression: time_expression,
)
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, position_time, 0) >= ?"
params << since_threshold

if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
@@ -702,22 +433,15 @@ module PotatoMesh
params.concat(clause.last)
end

append_rowid_time_cursor_filter!(
where_clauses: where_clauses,
params: params,
cursor: cursor,
time_key: "cursor_time",
time_expression: time_expression,
) if with_pagination

select_sql = with_pagination ? "SELECT *, rowid AS _cursor_rowid, COALESCE(rx_time, position_time, 0) AS _cursor_time FROM positions" : "SELECT * FROM positions"
sql = "#{select_sql}\n"
sql = <<~SQL
SELECT * FROM positions
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
sql += <<~SQL
ORDER BY COALESCE(rx_time, position_time, 0) DESC, rowid DESC
ORDER BY rx_time DESC
LIMIT ?
SQL
params << fetch_limit
params << limit
rows = db.execute(sql, params)
rows.each do |r|
rx_time = coerce_integer(r["rx_time"])
@@ -737,16 +461,7 @@ module PotatoMesh
r["pdop"] = coerce_float(r["pdop"])
r["snr"] = coerce_float(r["snr"])
end
items = rows.map { |row| compact_api_row(row) }
items.each { |item| item.delete("_cursor_rowid") } unless with_pagination
return items unless with_pagination

build_rowid_pagination_response(
items,
limit,
time_key: "cursor_time",
marker_time: ->(marker) { marker["_cursor_time"] },
)
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end
@@ -755,28 +470,19 @@ module PotatoMesh
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted neighbor rows suitable for API responses.
def query_neighbors(limit, node_ref: nil, since: 0, before: nil, cursor: nil, with_pagination: false)
def query_neighbors(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
fetch_limit = with_pagination ? limit + 1 : limit
db = open_database(readonly: true)
db.results_as_hash = true
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
time_expression = "COALESCE(rx_time, 0)"
since_floor = node_ref ? 0 : min_rx_time
append_time_window_filters!(
where_clauses: where_clauses,
params: params,
since: since,
before: before,
since_floor: since_floor,
ceiling: now,
time_expression: time_expression,
)
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << since_threshold

if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id", "neighbor_id"])
@@ -785,22 +491,15 @@ module PotatoMesh
params.concat(clause.last)
end

append_rowid_time_cursor_filter!(
where_clauses: where_clauses,
params: params,
cursor: cursor,
time_key: "rx_time",
time_expression: "rx_time",
) if with_pagination

select_sql = with_pagination ? "SELECT *, rowid AS _cursor_rowid FROM neighbors" : "SELECT * FROM neighbors"
sql = "#{select_sql}\n"
sql = <<~SQL
SELECT * FROM neighbors
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
sql += <<~SQL
ORDER BY rx_time DESC, rowid DESC
ORDER BY rx_time DESC
LIMIT ?
SQL
params << fetch_limit
params << limit
rows = db.execute(sql, params)
rows.each do |r|
rx_time = coerce_integer(r["rx_time"])
@@ -809,16 +508,7 @@ module PotatoMesh
r["rx_iso"] = Time.at(rx_time).utc.iso8601 if rx_time
r["snr"] = coerce_float(r["snr"])
end
items = rows.map { |row| compact_api_row(row) }
items.each { |item| item.delete("_cursor_rowid") } unless with_pagination
return items unless with_pagination

build_rowid_pagination_response(
items,
limit,
time_key: "rx_time",
marker_time: ->(marker) { marker["rx_time"] },
)
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end
@@ -827,28 +517,19 @@ module PotatoMesh
#
# @param limit [Integer] maximum number of rows to return.
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window for collections.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted telemetry rows suitable for API responses.
def query_telemetry(limit, node_ref: nil, since: 0, before: nil, cursor: nil, with_pagination: false)
def query_telemetry(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
fetch_limit = with_pagination ? limit + 1 : limit
db = open_database(readonly: true)
db.results_as_hash = true
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
time_expression = "COALESCE(rx_time, telemetry_time, 0)"
since_floor = node_ref ? 0 : min_rx_time
append_time_window_filters!(
where_clauses: where_clauses,
params: params,
since: since,
before: before,
since_floor: since_floor,
ceiling: now,
time_expression: time_expression,
)
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, telemetry_time, 0) >= ?"
params << since_threshold

if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
@@ -857,22 +538,15 @@ module PotatoMesh
params.concat(clause.last)
end

append_rowid_time_cursor_filter!(
where_clauses: where_clauses,
params: params,
cursor: cursor,
time_key: "cursor_time",
time_expression: time_expression,
) if with_pagination

select_sql = with_pagination ? "SELECT *, rowid AS _cursor_rowid, COALESCE(rx_time, telemetry_time, 0) AS _cursor_time FROM telemetry" : "SELECT * FROM telemetry"
sql = "#{select_sql}\n"
sql = <<~SQL
SELECT * FROM telemetry
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
sql += <<~SQL
ORDER BY COALESCE(rx_time, telemetry_time, 0) DESC, rowid DESC
ORDER BY rx_time DESC
LIMIT ?
SQL
params << fetch_limit
params << limit
rows = db.execute(sql, params)
rows.each do |r|
rx_time = coerce_integer(r["rx_time"])
@@ -920,16 +594,7 @@ module PotatoMesh
r["soil_moisture"] = coerce_integer(r["soil_moisture"])
r["soil_temperature"] = coerce_float(r["soil_temperature"])
end
items = rows.map { |row| compact_api_row(row) }
items.each { |item| item.delete("_cursor_rowid") } unless with_pagination
return items unless with_pagination

build_rowid_pagination_response(
items,
limit,
time_key: "cursor_time",
marker_time: ->(marker) { marker["_cursor_time"] },
)
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end
@@ -1062,29 +727,23 @@ module PotatoMesh
# @param node_ref [String, Integer, nil] optional node reference to scope results.
# @param since [Integer] unix timestamp threshold applied in addition to the rolling window.
# @return [Array<Hash>] compacted trace rows suitable for API responses.
def query_traces(limit, node_ref: nil, since: 0, before: nil, cursor: nil, with_pagination: false)
def query_traces(limit, node_ref: nil, since: 0)
limit = coerce_query_limit(limit)
fetch_limit = with_pagination ? limit + 1 : limit
db = open_database(readonly: true)
db.results_as_hash = true
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.trace_neighbor_window_seconds
min_rx_time = now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
before_threshold = normalize_before_threshold(before, ceiling: now)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << since_threshold
if before_threshold
where_clauses << "COALESCE(rx_time, 0) <= ?"
params << before_threshold
end

if node_ref
tokens = node_reference_tokens(node_ref)
numeric_values = tokens[:numeric_values]
if numeric_values.empty?
return with_pagination ? { items: [], next_cursor: nil } : []
return []
end
placeholders = Array.new(numeric_values.length, "?").join(", ")
candidate_clauses = []
@@ -1095,28 +754,16 @@ module PotatoMesh
3.times { params.concat(numeric_values) }
end

if with_pagination
cursor_payload = decode_query_cursor(cursor)
if cursor_payload
cursor_rx_time = coerce_integer(cursor_payload["rx_time"])
cursor_id = coerce_integer(cursor_payload["id"])
if cursor_rx_time && cursor_id
where_clauses << "(rx_time < ? OR (rx_time = ? AND id < ?))"
params.concat([cursor_rx_time, cursor_rx_time, cursor_id])
end
end
end

sql = <<~SQL
SELECT id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms
FROM traces
SQL
sql += " WHERE #{where_clauses.join(" AND ")}\n" if where_clauses.any?
sql += <<~SQL
ORDER BY rx_time DESC, id DESC
ORDER BY rx_time DESC
LIMIT ?
SQL
params << fetch_limit
params << limit
rows = db.execute(sql, params)

trace_ids = rows.map { |row| coerce_integer(row["id"]) }.compact
@@ -1153,20 +800,7 @@ module PotatoMesh
r["hops"] = hops_by_trace[trace_id]
end
end
items = rows.map { |row| compact_api_row(row) }
return items unless with_pagination

has_more = items.length > limit
paged_items = has_more ? items.first(limit) : items
next_cursor = nil
if has_more && !paged_items.empty?
marker = paged_items.last
next_cursor = encode_query_cursor({
"rx_time" => coerce_integer(marker["rx_time"]),
"id" => coerce_integer(marker["id"]),
})
end
{ items: paged_items, next_cursor: next_cursor }
rows.map { |row| compact_api_row(row) }
ensure
db&.close
end

@@ -64,23 +64,7 @@ module PotatoMesh
app.get "/api/nodes" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
result = query_nodes(
limit,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
end

app.get "/api/stats" do
content_type :json
{
active_nodes: query_active_node_stats,
sampled: false,
}.to_json
query_nodes(limit, since: params["since"]).to_json
end

app.get "/api/nodes/:id" do
@@ -96,15 +80,7 @@ module PotatoMesh
app.get "/api/ingestors" do
content_type :json
limit = coerce_query_limit(params["limit"])
result = query_ingestors(
limit,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_ingestors(limit, since: params["since"]).to_json
end

app.get "/api/messages" do
@@ -113,16 +89,7 @@ module PotatoMesh
include_encrypted = coerce_boolean(params["encrypted"]) || false
since = coerce_integer(params["since"])
since = 0 if since.nil? || since.negative?
result = query_messages(
limit,
include_encrypted: include_encrypted,
since: since,
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_messages(limit, include_encrypted: include_encrypted, since: since).to_json
end

app.get "/api/messages/:id" do
@@ -133,31 +100,18 @@ module PotatoMesh
include_encrypted = coerce_boolean(params["encrypted"]) || false
since = coerce_integer(params["since"])
since = 0 if since.nil? || since.negative?
result = query_messages(
query_messages(
limit,
node_ref: node_ref,
include_encrypted: include_encrypted,
since: since,
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
).to_json
end

app.get "/api/positions" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
result = query_positions(
limit,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_positions(limit, since: params["since"]).to_json
end

app.get "/api/positions/:id" do
@@ -165,30 +119,13 @@ module PotatoMesh
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
result = query_positions(
limit,
node_ref: node_ref,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_positions(limit, node_ref: node_ref, since: params["since"]).to_json
end

app.get "/api/neighbors" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
result = query_neighbors(
limit,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_neighbors(limit, since: params["since"]).to_json
end

app.get "/api/neighbors/:id" do
@@ -196,30 +133,13 @@ module PotatoMesh
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
result = query_neighbors(
limit,
node_ref: node_ref,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_neighbors(limit, node_ref: node_ref, since: params["since"]).to_json
end

app.get "/api/telemetry" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
result = query_telemetry(
limit,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_telemetry(limit, since: params["since"]).to_json
end

app.get "/api/telemetry/aggregated" do
@@ -262,30 +182,13 @@ module PotatoMesh
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
result = query_telemetry(
limit,
node_ref: node_ref,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_telemetry(limit, node_ref: node_ref, since: params["since"]).to_json
end

app.get "/api/traces" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
result = query_traces(
limit,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_traces(limit, since: params["since"]).to_json
end

app.get "/api/traces/:id" do
@@ -293,16 +196,7 @@ module PotatoMesh
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
result = query_traces(
limit,
node_ref: node_ref,
since: params["since"],
before: params["before"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
result[:items].to_json
query_traces(limit, node_ref: node_ref, since: params["since"]).to_json
end

app.get "/api/instances" do
@@ -311,13 +205,8 @@ module PotatoMesh

content_type :json
ensure_self_instance_record!
result = load_instances_for_api(
limit: params["limit"],
cursor: params["cursor"],
with_pagination: true,
)
response["X-Next-Cursor"] = result[:next_cursor] if result[:next_cursor]
JSON.generate(result[:items])
payload = load_instances_for_api
JSON.generate(payload)
end
end
end

@@ -14,8 +14,6 @@

# frozen_string_literal: true

require "timeout"

module PotatoMesh
module App
# WorkerPool executes submitted blocks using a bounded set of Ruby threads.
@@ -126,9 +124,8 @@ module PotatoMesh
#
# @param size [Integer] number of worker threads to spawn.
# @param max_queue [Integer, nil] optional upper bound on queued jobs.
# @param task_timeout [Numeric, nil] optional per-task execution timeout.
# @param name [String] prefix assigned to worker thread names.
def initialize(size:, max_queue: nil, task_timeout: nil, name: "worker-pool")
def initialize(size:, max_queue: nil, name: "worker-pool")
raise ArgumentError, "size must be positive" unless size.is_a?(Integer) && size.positive?

@name = name
@@ -136,7 +133,6 @@ module PotatoMesh
@threads = []
@stopped = false
@mutex = Mutex.new
@task_timeout = normalize_task_timeout(task_timeout)
spawn_workers(size)
end

@@ -196,45 +192,23 @@ module PotatoMesh
worker = Thread.new do
Thread.current.name = "#{@name}-#{index}" if Thread.current.respond_to?(:name=)
Thread.current.report_on_exception = false if Thread.current.respond_to?(:report_on_exception=)
# Daemon threads allow the process to exit even if a job is stuck.
Thread.current.daemon = true if Thread.current.respond_to?(:daemon=)

loop do
task, block = @queue.pop
break if task.equal?(STOP_SIGNAL)

begin
result = if @task_timeout
Timeout.timeout(@task_timeout, TaskTimeoutError, "task exceeded timeout") do
block.call
end
else
block.call
end
result = block.call
task.fulfill(result)
rescue StandardError => e
task.reject(e)
end
end
end

@threads << worker
end
end

# Normalize the per-task timeout into a positive float value.
#
# @param task_timeout [Numeric, nil] candidate timeout value.
# @return [Float, nil] positive timeout in seconds or nil when disabled.
def normalize_task_timeout(task_timeout)
return nil if task_timeout.nil?

value = Float(task_timeout)
return nil unless value.positive?

value
rescue ArgumentError, TypeError
nil
end
end
end
end

@@ -32,19 +32,15 @@ module PotatoMesh
DEFAULT_MAP_CENTER = "#{DEFAULT_MAP_CENTER_LAT},#{DEFAULT_MAP_CENTER_LON}"
DEFAULT_CHANNEL = "#LongFast"
DEFAULT_FREQUENCY = "915MHz"
DEFAULT_MESHTASTIC_PSK_B64 = "AQ=="
DEFAULT_CONTACT_LINK = "#potatomesh:dod.ngo"
DEFAULT_MAX_DISTANCE_KM = 42.0
DEFAULT_REMOTE_INSTANCE_CONNECT_TIMEOUT = 15
DEFAULT_REMOTE_INSTANCE_READ_TIMEOUT = 60
DEFAULT_REMOTE_INSTANCE_REQUEST_TIMEOUT = 30
DEFAULT_FEDERATION_MAX_INSTANCES_PER_RESPONSE = 64
DEFAULT_FEDERATION_MAX_DOMAINS_PER_CRAWL = 256
DEFAULT_FEDERATION_WORKER_POOL_SIZE = 4
DEFAULT_FEDERATION_WORKER_QUEUE_CAPACITY = 128
DEFAULT_FEDERATION_TASK_TIMEOUT_SECONDS = 120
DEFAULT_FEDERATION_SHUTDOWN_TIMEOUT_SECONDS = 3
DEFAULT_FEDERATION_CRAWL_COOLDOWN_SECONDS = 300
DEFAULT_INITIAL_FEDERATION_DELAY_SECONDS = 2
DEFAULT_FEDERATION_SEED_DOMAINS = %w[potatomesh.net potatomesh.jmrp.io mesh.qrp.ro].freeze

@@ -162,13 +158,6 @@ module PotatoMesh
7 * 24 * 60 * 60
end

# Rolling retention window in seconds for trace and neighbor API queries.
#
# @return [Integer] seconds in twenty-eight days.
def trace_neighbor_window_seconds
28 * 24 * 60 * 60
end

# Default upper bound for accepted JSON payload sizes.
#
# @return [Integer] byte ceiling for HTTP request bodies.
@@ -187,7 +176,7 @@ module PotatoMesh
#
# @return [String] semantic version identifier.
def version_fallback
"0.5.10"
"0.5.9"
end

# Default refresh interval for frontend polling routines.
@@ -353,16 +342,6 @@ module PotatoMesh
)
end

# End-to-end timeout applied to each outbound federation HTTP request.
#
# @return [Integer] maximum request duration in seconds.
def remote_instance_request_timeout
fetch_positive_integer(
"REMOTE_INSTANCE_REQUEST_TIMEOUT",
DEFAULT_REMOTE_INSTANCE_REQUEST_TIMEOUT,
)
end

# Limit the number of remote instances processed from a single response.
#
# @return [Integer] maximum entries processed per /api/instances payload.
@@ -413,26 +392,6 @@ module PotatoMesh
)
end

# Determine how long shutdown waits before forcing federation thread exit.
#
# @return [Integer] per-thread shutdown timeout in seconds.
def federation_shutdown_timeout_seconds
fetch_positive_integer(
"FEDERATION_SHUTDOWN_TIMEOUT",
DEFAULT_FEDERATION_SHUTDOWN_TIMEOUT_SECONDS,
)
end

# Define how long finished crawl domains remain on cooldown.
#
# @return [Integer] cooldown window in seconds.
def federation_crawl_cooldown_seconds
fetch_positive_integer(
"FEDERATION_CRAWL_COOLDOWN",
DEFAULT_FEDERATION_CRAWL_COOLDOWN_SECONDS,
)
end

# Maximum acceptable age for remote node data.
#
# @return [Integer] seconds before remote nodes are considered stale.
@@ -478,13 +437,6 @@ module PotatoMesh
fetch_string("SITE_NAME", "PotatoMesh Demo")
end

# Retrieve the configured announcement banner copy.
#
# @return [String, nil] announcement string when configured.
def announcement
fetch_string("ANNOUNCEMENT", nil)
end

# Retrieve the default radio channel label.
#
# @return [String] channel name from configuration.
@@ -499,13 +451,6 @@ module PotatoMesh
fetch_string("FREQUENCY", DEFAULT_FREQUENCY)
end

# Retrieve the Meshtastic PSK used for decrypting channel messages.
#
# @return [String] base64-encoded PSK or alias.
def meshtastic_psk_b64
fetch_string("MESHTASTIC_PSK_B64", DEFAULT_MESHTASTIC_PSK_B64)
end

# Parse the configured map centre coordinates.
#
# @return [Hash{Symbol=>Float}] latitude and longitude in decimal degrees.

@@ -199,14 +199,6 @@ module PotatoMesh
sanitized_string(Config.site_name)
end

# Retrieve the configured announcement banner copy and normalise blank values to nil.
#
# @return [String, nil] announcement copy or +nil+ when blank.
def sanitized_announcement
value = sanitized_string(Config.announcement)
value.empty? ? nil : value
end

# Retrieve the configured channel as a cleaned string.
#
# @return [String] trimmed configuration value.

Generated
+12
-2
@@ -1,12 +1,16 @@
|
||||
{
|
||||
"name": "potato-mesh",
|
||||
"version": "0.5.10",
|
||||
"version": "0.5.9",
|
||||
"lockfileVersion": 3,
|
||||
"requires": true,
|
||||
"packages": {
|
||||
"": {
|
||||
"name": "potato-mesh",
|
||||
"version": "0.5.10",
|
||||
"version": "0.5.9",
|
||||
"hasInstallScript": true,
|
||||
"dependencies": {
|
||||
"uplot": "^1.6.30"
|
||||
},
|
||||
"devDependencies": {
|
||||
"istanbul-lib-coverage": "^3.2.2",
|
||||
"istanbul-lib-report": "^3.0.1",
|
||||
@@ -154,6 +158,12 @@
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/uplot": {
|
||||
"version": "1.6.32",
|
||||
"resolved": "https://registry.npmjs.org/uplot/-/uplot-1.6.32.tgz",
|
||||
"integrity": "sha512-KIMVnG68zvu5XXUbC4LQEPnhwOxBuLyW1AHtpm6IKTXImkbLgkMy+jabjLgSLMasNuGGzQm/ep3tOkyTxpiQIw==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/v8-to-istanbul": {
|
||||
"version": "9.3.0",
|
||||
"resolved": "https://registry.npmjs.org/v8-to-istanbul/-/v8-to-istanbul-9.3.0.tgz",
|
||||
|
||||
+5
-1
@@ -1,11 +1,15 @@
|
||||
{
|
||||
"name": "potato-mesh",
|
||||
"version": "0.5.10",
|
||||
"version": "0.5.9",
|
||||
"type": "module",
|
||||
"private": true,
|
||||
"scripts": {
|
||||
"postinstall": "node ./scripts/copy-uplot.js",
|
||||
"test": "mkdir -p reports coverage && NODE_V8_COVERAGE=coverage node --test --experimental-test-coverage --test-reporter=spec --test-reporter-destination=stdout --test-reporter=junit --test-reporter-destination=reports/javascript-junit.xml && node ./scripts/export-coverage.js"
|
||||
},
|
||||
"dependencies": {
|
||||
"uplot": "^1.6.30"
|
||||
},
|
||||
"devDependencies": {
|
||||
"istanbul-lib-coverage": "^3.2.2",
|
||||
"istanbul-lib-report": "^3.0.1",
|
||||
|
||||
@@ -80,13 +80,19 @@ test('initializeChartsPage renders the telemetry charts when snapshots are avail
},
]);
let receivedOptions = null;
const renderCharts = (node, options) => {
let mountedModels = null;
const createCharts = (node, options) => {
receivedOptions = options;
return '<section class="node-detail__charts">Charts</section>';
return { chartsHtml: '<section class="node-detail__charts">Charts</section>', chartModels: [{ id: 'power' }] };
};
const result = await initializeChartsPage({ document: documentStub, fetchImpl, renderCharts });
const mountCharts = (chartModels, options) => {
mountedModels = { chartModels, options };
return [];
};
const result = await initializeChartsPage({ document: documentStub, fetchImpl, createCharts, mountCharts });
assert.equal(result, true);
assert.equal(container.innerHTML.includes('node-detail__charts'), true);
assert.equal(mountedModels.chartModels.length, 1);
assert.ok(receivedOptions);
assert.equal(receivedOptions.chartOptions.windowMs, 604_800_000);
assert.equal(typeof receivedOptions.chartOptions.lineReducer, 'function');
@@ -118,8 +124,8 @@ test('initializeChartsPage shows an error message when fetching fails', async ()
const fetchImpl = async () => {
throw new Error('network');
};
const renderCharts = () => '<section>unused</section>';
const result = await initializeChartsPage({ document: documentStub, fetchImpl, renderCharts });
const createCharts = () => ({ chartsHtml: '<section>unused</section>', chartModels: [] });
const result = await initializeChartsPage({ document: documentStub, fetchImpl, createCharts });
assert.equal(result, false);
assert.equal(container.innerHTML.includes('Failed to load telemetry charts.'), true);
});
@@ -136,8 +142,8 @@ test('initializeChartsPage handles missing containers and empty telemetry snapsh
},
};
const fetchImpl = async () => createResponse(200, []);
const renderCharts = () => '';
const result = await initializeChartsPage({ document: documentStub, fetchImpl, renderCharts });
const createCharts = () => ({ chartsHtml: '', chartModels: [] });
const result = await initializeChartsPage({ document: documentStub, fetchImpl, createCharts });
assert.equal(result, true);
assert.equal(container.innerHTML.includes('Telemetry snapshots are unavailable.'), true);
});
@@ -155,8 +161,8 @@ test('initializeChartsPage shows a status when rendering produces no markup', as
aggregates: { voltage: { avg: 3.9 } },
},
]);
const renderCharts = () => '';
const result = await initializeChartsPage({ document: documentStub, fetchImpl, renderCharts });
const createCharts = () => ({ chartsHtml: '', chartModels: [] });
const result = await initializeChartsPage({ document: documentStub, fetchImpl, createCharts });
assert.equal(result, true);
assert.equal(container.innerHTML.includes('Telemetry snapshots are unavailable.'), true);
});

@@ -62,22 +62,6 @@ function buildModel(overrides = {}) {
});
}

function findChannelByLabel(model, label) {
return model.channels.find(channel => channel.label === label);
}

function assertChannelMessages(model, { label, id, index, messageIds }) {
const channel = findChannelByLabel(model, label);
assert.ok(channel);
if (id instanceof RegExp) {
assert.match(channel.id, id);
} else {
assert.equal(channel.id, id);
}
assert.equal(channel.index, index);
assert.deepEqual(channel.entries.map(entry => entry.message.id), messageIds);
}

test('buildChatTabModel returns sorted nodes and channel buckets', () => {
const model = buildModel();
assert.equal(model.logEntries.length, 3);
@@ -91,13 +75,12 @@ test('buildChatTabModel returns sorted nodes and channel buckets', () => {
['recent-node', 'iso-node', 'encrypted']
);

assert.equal(model.channels.length, 6);
assert.equal(model.channels.length, 5);
assert.deepEqual(model.channels.map(channel => channel.label), [
'EnvDefault',
'Fallback',
'MediumFast',
'ShortFast',
'1',
'BerlinMesh'
]);

@@ -123,21 +106,18 @@ test('buildChatTabModel returns sorted nodes and channel buckets', () => {
assert.equal(presetChannel.id, 'channel-0-shortfast');
assert.deepEqual(presetChannel.entries.map(entry => entry.message.id), ['primary-preset']);

const unnamedSecondaryChannel = channelByLabel['1'];
assert.equal(unnamedSecondaryChannel.index, 1);
assert.equal(unnamedSecondaryChannel.id, 'channel-1');
assert.deepEqual(unnamedSecondaryChannel.entries.map(entry => entry.message.id), ['iso-ts']);

const secondaryChannel = channelByLabel.BerlinMesh;
assert.equal(secondaryChannel.index, 1);
assert.match(secondaryChannel.id, /^channel-secondary-name-berlinmesh-[a-z0-9]+$/);
assert.equal(secondaryChannel.entries.length, 1);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), ['recent-alt']);
assert.equal(secondaryChannel.id, 'channel-secondary-berlinmesh');
assert.equal(secondaryChannel.entries.length, 2);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), ['iso-ts', 'recent-alt']);
});

test('buildChatTabModel skips channel buckets when there are no messages', () => {
test('buildChatTabModel always includes channel zero bucket', () => {
const model = buildChatTabModel({ nodes: [], messages: [], nowSeconds: NOW, windowSeconds: WINDOW });
assert.equal(model.channels.length, 0);
assert.equal(model.channels.length, 1);
assert.equal(model.channels[0].index, 0);
assert.equal(model.channels[0].entries.length, 0);
});

test('buildChatTabModel falls back to numeric label when no metadata provided', () => {
@@ -194,13 +174,14 @@ test('buildChatTabModel includes telemetry, position, and neighbor events', () =
windowSeconds: WINDOW
});

const types = model.logEntries.map(entry => entry.type);
assert.equal(types[0], CHAT_LOG_ENTRY_TYPES.NODE_NEW);
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.NODE_INFO));
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.TELEMETRY));
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.POSITION));
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.NEIGHBOR));
assert.ok(types.includes(CHAT_LOG_ENTRY_TYPES.TRACE));
assert.deepEqual(model.logEntries.map(entry => entry.type), [
CHAT_LOG_ENTRY_TYPES.NODE_NEW,
CHAT_LOG_ENTRY_TYPES.NODE_INFO,
CHAT_LOG_ENTRY_TYPES.TELEMETRY,
CHAT_LOG_ENTRY_TYPES.POSITION,
CHAT_LOG_ENTRY_TYPES.NEIGHBOR,
CHAT_LOG_ENTRY_TYPES.TRACE
]);
assert.equal(model.logEntries[0].nodeId, nodeId);
const neighborEntry = model.logEntries.find(entry => entry.type === CHAT_LOG_ENTRY_TYPES.NEIGHBOR);
assert.ok(neighborEntry);
@@ -294,7 +275,7 @@ test('buildChatTabModel ignores plaintext log-only entries', () => {
assert.equal(encryptedEntries[0]?.message?.id, 'enc');
});

test('buildChatTabModel merges secondary channels with matching labels across indexes', () => {
test('buildChatTabModel merges secondary channels with matching labels regardless of index', () => {
const primaryId = 'primary';
const secondaryFirstId = 'secondary-one';
const secondarySecondId = 'secondary-two';
@@ -318,139 +299,55 @@ test('buildChatTabModel merges secondary channels with matching labels across in
assert.equal(primaryChannel.entries.length, 1);
assert.equal(primaryChannel.entries[0]?.message?.id, primaryId);

const mergedSecondaryChannel = meshChannels.find(channel => channel.index === 3);
assert.ok(mergedSecondaryChannel);
assert.match(mergedSecondaryChannel.id, /^channel-secondary-name-meshtown-[a-z0-9]+$/);
assert.deepEqual(
mergedSecondaryChannel.entries.map(entry => entry.message.id),
[secondaryFirstId, secondarySecondId]
);
const secondaryChannel = meshChannels.find(channel => channel.index > 0);
assert.ok(secondaryChannel);
assert.equal(secondaryChannel.id, 'channel-secondary-meshtown');
assert.equal(secondaryChannel.index, 3);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), [secondaryFirstId, secondarySecondId]);
});

test('buildChatTabModel keeps unnamed secondary buckets separate when a label later arrives', () => {
const scenarios = [
{
index: 4,
label: 'SideMesh',
messages: [
{ id: 'unnamed', rx_time: NOW - 15, channel: 4 },
{ id: 'named', rx_time: NOW - 10, channel: 4, channel_name: 'SideMesh' }
],
namedId: /^channel-secondary-name-sidemesh-[a-z0-9]+$/,
namedMessages: ['named'],
unnamedMessages: ['unnamed']
},
{
index: 5,
label: 'MeshNorth',
messages: [
{ id: 'named', rx_time: NOW - 12, channel: 5, channel_name: 'MeshNorth' },
{ id: 'unlabeled', rx_time: NOW - 8, channel: 5 }
],
namedId: /^channel-secondary-name-meshnorth-[a-z0-9]+$/,
namedMessages: ['named'],
unnamedMessages: ['unlabeled']
}
];

for (const scenario of scenarios) {
const model = buildChatTabModel({
nodes: [],
messages: scenario.messages,
nowSeconds: NOW,
windowSeconds: WINDOW
});
const secondaryChannels = model.channels.filter(channel => channel.index === scenario.index);
assert.equal(secondaryChannels.length, 2);
assertChannelMessages(model, {
label: scenario.label,
id: scenario.namedId,
index: scenario.index,
messageIds: scenario.namedMessages
});
assertChannelMessages(model, {
label: String(scenario.index),
id: `channel-${scenario.index}`,
index: scenario.index,
messageIds: scenario.unnamedMessages
});
}
});

test('buildChatTabModel keeps same-index channels with different names in separate tabs', () => {
test('buildChatTabModel rekeys unnamed secondary buckets when a label later arrives', () => {
const unnamedId = 'unnamed';
const namedId = 'named';
const label = 'SideMesh';
const index = 4;
const model = buildChatTabModel({
nodes: [],
messages: [
{ id: 'public-msg', rx_time: NOW - 12, channel: 1, channel_name: 'PUBLIC' },
{ id: 'berlin-msg', rx_time: NOW - 8, channel: 1, channel_name: 'BerlinMesh' }
{ id: unnamedId, rx_time: NOW - 15, channel: index },
{ id: namedId, rx_time: NOW - 10, channel: index, channel_name: label }
],
nowSeconds: NOW,
windowSeconds: WINDOW
});

assertChannelMessages(model, {
label: 'PUBLIC',
id: /^channel-secondary-name-public-[a-z0-9]+$/,
index: 1,
messageIds: ['public-msg']
});
assertChannelMessages(model, {
label: 'BerlinMesh',
id: /^channel-secondary-name-berlinmesh-[a-z0-9]+$/,
index: 1,
messageIds: ['berlin-msg']
});
const secondaryChannels = model.channels.filter(channel => channel.index === index);
assert.equal(secondaryChannels.length, 1);
const [secondaryChannel] = secondaryChannels;
assert.equal(secondaryChannel.id, 'channel-secondary-sidemesh');
assert.equal(secondaryChannel.label, label);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), [unnamedId, namedId]);
});

test('buildChatTabModel merges same-name channels even when indexes differ', () => {
test('buildChatTabModel merges unlabeled secondary messages into existing named buckets by index', () => {
const namedId = 'named';
const unlabeledId = 'unlabeled';
const label = 'MeshNorth';
const index = 5;
const model = buildChatTabModel({
nodes: [],
messages: [
{ id: 'test-1', rx_time: NOW - 12, channel: 1, channel_name: 'TEST' },
{ id: 'test-2', rx_time: NOW - 8, channel: 2, channel_name: 'TEST' }
{ id: namedId, rx_time: NOW - 12, channel: index, channel_name: label },
{ id: unlabeledId, rx_time: NOW - 8, channel: index }
],
nowSeconds: NOW,
windowSeconds: WINDOW
});

assertChannelMessages(model, {
label: 'TEST',
id: /^channel-secondary-name-test-[a-z0-9]+$/,
index: 1,
messageIds: ['test-1', 'test-2']
});
});

test('buildChatTabModel keeps same-index slug-colliding labels on distinct tab ids', () => {
const model = buildChatTabModel({
nodes: [],
messages: [
{ id: 'foo-space', rx_time: NOW - 10, channel: 1, channel_name: 'Foo Bar' },
{ id: 'foo-dash', rx_time: NOW - 8, channel: 1, channel_name: 'Foo-Bar' }
],
nowSeconds: NOW,
windowSeconds: WINDOW
});

const fooSpaceChannel = findChannelByLabel(model, 'Foo Bar');
const fooDashChannel = findChannelByLabel(model, 'Foo-Bar');
assert.ok(fooSpaceChannel);
assert.ok(fooDashChannel);
assert.match(fooSpaceChannel.id, /^channel-secondary-name-foo-bar-[a-z0-9]+$/);
assert.match(fooDashChannel.id, /^channel-secondary-name-foo-bar-[a-z0-9]+$/);
assert.notEqual(fooSpaceChannel.id, fooDashChannel.id);
});

test('buildChatTabModel falls back to hashed id for unsluggable secondary labels', () => {
const model = buildChatTabModel({
nodes: [],
messages: [{ id: 'hash-fallback', rx_time: NOW - 5, channel: 2, channel_name: '###' }],
nowSeconds: NOW,
windowSeconds: WINDOW
});
const channel = findChannelByLabel(model, '###');
assert.ok(channel);
assert.equal(channel.index, 2);
assert.ok(channel.id.startsWith('channel-secondary-name-'));
assert.ok(channel.id.length > 'channel-secondary-name-'.length);
const secondaryChannels = model.channels.filter(channel => channel.index === index);
assert.equal(secondaryChannels.length, 1);
const [secondaryChannel] = secondaryChannels;
assert.equal(secondaryChannel.id, 'channel-secondary-meshnorth');
assert.equal(secondaryChannel.label, label);
assert.deepEqual(secondaryChannel.entries.map(entry => entry.message.id), [namedId, unlabeledId]);
});

@@ -21,64 +21,12 @@ import { createDomEnvironment } from './dom-environment.js';
import { initializeFederationPage } from '../federation-page.js';
import { roleColors } from '../role-helpers.js';

function createFailureScenarioPage(env) {
const { document, createElement, registerElement } = env;
registerElement('map', createElement('div', 'map'));
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);
const tableEl = createElement('table', 'instances');
const tbodyEl = createElement('tbody');
registerElement('instances', tableEl);
const configEl = createElement('div');
configEl.setAttribute('data-app-config', JSON.stringify({}));
document.querySelector = selector => {
if (selector === '[data-app-config]') return configEl;
if (selector === '#instances tbody') return tbodyEl;
return null;
};
return { statusEl };
}

function createMinimalLeafletStub() {
return {
map() {
return {
setView() {},
on() {},
getPane() {
return null;
}
};
},
tileLayer() {
return {
addTo() {
return this;
},
getContainer() {
return null;
},
on() {}
};
},
layerGroup() {
return { addLayer() {}, addTo() { return this; } };
},
circleMarker() {
return { bindPopup() { return this; } };
}
};
}

test('federation map centers on configured coordinates and follows theme filters', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: true });
const { document, window, createElement, registerElement, cleanup } = env;

const mapEl = createElement('div', 'map');
registerElement('map', mapEl);
const mapPanel = createElement('div', 'mapPanel');
mapPanel.dataset.legendCollapsed = 'true';
registerElement('mapPanel', mapPanel);
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);
const tableEl = createElement('table', 'instances');
@@ -460,141 +408,47 @@ test('federation table sorting, contact rendering, and legend creation', async (
assert.deepEqual(mapSetViewCalls[0], [[0, 0], 3]);
assert.equal(mapFitBoundsCalls[0][0].length, 3);

assert.equal(legendContainers.length, 2);
const legend = legendContainers.find(container => container.className.includes('legend--instances'));
assert.ok(legend);
assert.ok(legend.className.includes('legend-hidden'));
assert.equal(legendContainers.length, 1);
const legend = legendContainers[0];
assert.ok(legend.className.includes('legend'));
const legendHeader = legend.children.find(child => child.className === 'legend-header');
const legendTitle = legendHeader && Array.isArray(legendHeader.children)
? legendHeader.children.find(child => child.className === 'legend-title')
: null;
assert.ok(legendTitle);
assert.equal(legendTitle.textContent, 'Active nodes');
const legendToggle = legendContainers.find(container => container.className.includes('legend-toggle'));
assert.ok(legendToggle);
} finally {
cleanup();
}
});

test('federation legend toggle respects media query changes', async () => {
test('federation page tolerates fetch failures', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: false });
const { document, createElement, registerElement, cleanup } = env;

const mapEl = createElement('div', 'map');
registerElement('map', mapEl);
const mapPanel = createElement('div', 'mapPanel');
mapPanel.setAttribute('data-legend-collapsed', 'false');
registerElement('mapPanel', mapPanel);
const statusEl = createElement('div', 'status');
registerElement('status', statusEl);

const tableEl = createElement('table', 'instances');
const tbodyEl = createElement('tbody');
registerElement('instances', tableEl);
tableEl.appendChild(tbodyEl);

const configPayload = {
mapCenter: { lat: 0, lon: 0 },
mapZoom: 3,
tileFilters: { light: 'none', dark: 'invert(1)' }
};
const configEl = createElement('div');
configEl.setAttribute('data-app-config', JSON.stringify(configPayload));

configEl.setAttribute('data-app-config', JSON.stringify({}));
document.querySelector = selector => {
if (selector === '[data-app-config]') return configEl;
if (selector === '#instances tbody') return tbodyEl;
return null;
};

let mediaQueryHandler = null;
window.matchMedia = () => ({
matches: false,
addListener(handler) {
mediaQueryHandler = handler;
}
});

const legendContainers = [];
const legendButtons = [];

const DomUtil = {
create(tag, className, parent) {
const classSet = new Set(className ? className.split(/\s+/).filter(Boolean) : []);
const el = {
tagName: tag,
className,
classList: {
toggle(name, force) {
const shouldAdd = typeof force === 'boolean' ? force : !classSet.has(name);
if (shouldAdd) {
classSet.add(name);
} else {
classSet.delete(name);
}
el.className = Array.from(classSet).join(' ');
}
},
children: [],
style: {},
textContent: '',
attributes: new Map(),
setAttribute(name, value) {
this.attributes.set(name, String(value));
},
appendChild(child) {
this.children.push(child);
return child;
},
addEventListener(event, handler) {
if (event === 'click') {
this._clickHandler = handler;
}
},
querySelector() {
return null;
}
};
if (parent && parent.appendChild) parent.appendChild(el);
if (className && className.includes('legend-toggle-button')) {
legendButtons.push(el);
}
return el;
}
};

const controlStub = () => {
const ctrl = {
onAdd: null,
container: null,
addTo(map) {
this.container = this.onAdd ? this.onAdd(map) : null;
legendContainers.push(this.container);
return this;
},
getContainer() {
return this.container;
}
};
return ctrl;
};

const markersLayer = {
addLayer() {
return null;
},
addTo() {
return this;
}
};

const leafletStub = {
map() {
return {
setView() {},
on() {},
fitBounds() {}
getPane() {
return null;
}
};
},
tileLayer() {
@@ -609,54 +463,13 @@ test('federation legend toggle respects media query changes', async () => {
};
},
layerGroup() {
return markersLayer;
return { addLayer() {}, addTo() { return this; } };
},
circleMarker() {
return {
bindPopup() {
return this;
}
};
},
control: controlStub,
DomUtil,
DomEvent: {
disableClickPropagation() {},
disableScrollPropagation() {}
return { bindPopup() { return this; } };
}
};

const fetchImpl = async () => ({
ok: true,
json: async () => []
});

try {
await initializeFederationPage({ config: configPayload, fetchImpl, leaflet: leafletStub });

const legend = legendContainers.find(container => container.className.includes('legend--instances'));
assert.ok(legend);
assert.ok(!legend.className.includes('legend-hidden'));

assert.equal(legendButtons.length, 1);
legendButtons[0]._clickHandler?.({ preventDefault() {}, stopPropagation() {} });
assert.ok(legend.className.includes('legend-hidden'));

if (mediaQueryHandler) {
mediaQueryHandler({ matches: false });
assert.ok(!legend.className.includes('legend-hidden'));
}
} finally {
cleanup();
}
});

test('federation page tolerates fetch failures', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: false });
const { cleanup } = env;
createFailureScenarioPage(env);
const leafletStub = createMinimalLeafletStub();

const fetchImpl = async () => {
throw new Error('boom');
};
@@ -664,16 +477,3 @@ test('federation page tolerates fetch failures', async () => {
await initializeFederationPage({ config: {}, fetchImpl, leaflet: leafletStub });
cleanup();
});

test('federation page tolerates non-ok paginated instance responses', async () => {
const env = createDomEnvironment({ includeBody: true, bodyHasDarkClass: false });
const { statusEl } = createFailureScenarioPage(env);
const { cleanup } = env;
const leafletStub = createMinimalLeafletStub();

const fetchImpl = async () => ({ ok: false, json: async () => [] });

await initializeFederationPage({ config: {}, fetchImpl, leaflet: leafletStub });
assert.match(statusEl.textContent, /0 instances/);
cleanup();
});

@@ -20,7 +20,7 @@ import { createDomEnvironment } from './dom-environment.js';

import { buildInstanceUrl, initializeInstanceSelector, __test__ } from '../instance-selector.js';

const { resolveInstanceLabel, updateFederationNavCount } = __test__;
const { resolveInstanceLabel } = __test__;

function setupSelectElement(document) {
const select = document.createElement('select');
@@ -191,133 +191,3 @@ test('initializeInstanceSelector navigates to the chosen instance domain', async
env.cleanup();
}
});

test('initializeInstanceSelector updates federation navigation labels with instance count', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);
const navLink = env.document.createElement('a');
navLink.classList.add('js-federation-nav');
navLink.textContent = 'Federation';
env.document.body.appendChild(navLink);

const fetchImpl = async () => ({
ok: true,
async json() {
return [{ domain: 'alpha.mesh' }, { domain: 'beta.mesh' }];
}
});

try {
await initializeInstanceSelector({
selectElement: select,
fetchImpl,
windowObject: env.window,
documentObject: env.document
});

assert.equal(navLink.textContent, 'Federation (2)');
} finally {
env.cleanup();
}
});

test('initializeInstanceSelector follows paginated instance responses', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);
const calls = [];

const fetchImpl = async url => {
calls.push(url);
if (url === '/api/instances?limit=500') {
return {
ok: true,
headers: { get: name => (name === 'X-Next-Cursor' ? 'cursor-1' : null) },
async json() {
return [{ domain: 'alpha.mesh' }];
}
};
}
if (url === '/api/instances?limit=500&cursor=cursor-1') {
return {
ok: true,
headers: { get: () => null },
async json() {
return [{ domain: 'bravo.mesh' }];
}
};
}
throw new Error(`unexpected url ${url}`);
};

try {
await initializeInstanceSelector({
selectElement: select,
fetchImpl,
windowObject: env.window,
documentObject: env.document
});

assert.deepEqual(calls, ['/api/instances?limit=500', '/api/instances?limit=500&cursor=cursor-1']);
assert.equal(select.options.length, 3);
} finally {
env.cleanup();
}
});

test('initializeInstanceSelector handles non-ok instance responses without adding options', async () => {
const env = createDomEnvironment();
const select = setupSelectElement(env.document);

const fetchImpl = async () => ({
ok: false,
async json() {
return [{ domain: 'ignored.mesh' }];
}
});

try {
await initializeInstanceSelector({
selectElement: select,
fetchImpl,
windowObject: env.window,
documentObject: env.document
});

assert.equal(select.options.length, 1);
} finally {
env.cleanup();
}
});

test('updateFederationNavCount prefers stored labels and normalizes counts', () => {
const env = createDomEnvironment();
const navLink = env.document.createElement('a');
navLink.classList.add('js-federation-nav');
navLink.textContent = 'Federation';
navLink.dataset.federationLabel = 'Community';
env.document.body.appendChild(navLink);

try {
updateFederationNavCount({ documentObject: env.document, count: -3 });

assert.equal(navLink.textContent, 'Community (0)');
} finally {
env.cleanup();
}
});

test('updateFederationNavCount falls back to existing link text when no dataset label', () => {
const env = createDomEnvironment();
const navLink = env.document.createElement('a');
navLink.classList.add('js-federation-nav');
navLink.textContent = 'Federation (9)';
env.document.body.appendChild(navLink);

try {
updateFederationNavCount({ documentObject: env.document, count: 4 });

assert.equal(navLink.textContent, 'Federation (4)');
} finally {
env.cleanup();
}
});

@@ -1,399 +0,0 @@
|
||||
/*
|
||||
* Copyright © 2025-26 l5yth & contributors
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
import test from 'node:test';
|
||||
import assert from 'node:assert/strict';
|
||||
|
||||
import {
|
||||
computeLocalActiveNodeStats,
|
||||
fetchPaginatedCollection,
|
||||
fetchActiveNodeStats,
|
||||
formatActiveNodeStatsText,
|
||||
normaliseActiveNodeStatsPayload,
|
||||
readNextCursorHeader,
|
||||
} from '../main.js';
|
||||
|
||||
const NOW = 1_700_000_000;
|
||||
|
||||
test('computeLocalActiveNodeStats calculates local hour/day/week/month counts', () => {
|
||||
const nodes = [
|
||||
{ last_heard: NOW - 60 },
|
||||
{ last_heard: NOW - 4_000 },
|
||||
{ last_heard: NOW - 90_000 },
|
||||
{ last_heard: NOW - (8 * 86_400) },
|
||||
{ last_heard: NOW - (20 * 86_400) },
|
||||
];
|
||||
|
||||
const stats = computeLocalActiveNodeStats(nodes, NOW);
|
||||
|
||||
assert.deepEqual(stats, {
|
||||
hour: 1,
|
||||
day: 2,
|
||||
week: 3,
|
||||
month: 5,
|
||||
sampled: true,
|
||||
});
|
||||
});
|
||||
|
||||
test('normaliseActiveNodeStatsPayload validates and normalizes API payload', () => {
  const payload = {
    active_nodes: {
      hour: '11',
      day: 22,
      week: 33,
      month: 44,
    },
    sampled: false,
  };

  assert.deepEqual(normaliseActiveNodeStatsPayload(payload), {
    hour: 11,
    day: 22,
    week: 33,
    month: 44,
    sampled: false,
  });

  assert.equal(normaliseActiveNodeStatsPayload({}), null);
});

test('normaliseActiveNodeStatsPayload rejects malformed stat values', () => {
  assert.equal(
    normaliseActiveNodeStatsPayload({ active_nodes: { hour: 'x', day: 1, week: 1, month: 1 } }),
    null
  );
  assert.equal(
    normaliseActiveNodeStatsPayload({ active_nodes: null }),
    null
  );
});

test('normaliseActiveNodeStatsPayload clamps negatives and truncates floats', () => {
  assert.deepEqual(
    normaliseActiveNodeStatsPayload({
      active_nodes: { hour: -1.9, day: 2.8, week: 3.1, month: 4.9 },
      sampled: 1
    }),
    { hour: 0, day: 2, week: 3, month: 4, sampled: true }
  );
});

test('fetchActiveNodeStats uses /api/stats when available', async () => {
  const calls = [];
  const fetchImpl = async (url) => {
    calls.push(url);
    return {
      ok: true,
      async json() {
        return {
          active_nodes: { hour: 5, day: 15, week: 25, month: 35 },
          sampled: false,
        };
      },
    };
  };

  const stats = await fetchActiveNodeStats({ nodes: [], nowSeconds: NOW, fetchImpl });

  assert.equal(calls[0], '/api/stats');
  assert.deepEqual(stats, {
    hour: 5,
    day: 15,
    week: 25,
    month: 35,
    sampled: false,
  });
});

test('fetchActiveNodeStats reuses cached /api/stats response for repeated calls', async () => {
  const calls = [];
  const fetchImpl = async (url) => {
    calls.push(url);
    return {
      ok: true,
      async json() {
        return {
          active_nodes: { hour: 2, day: 4, week: 6, month: 8 },
          sampled: false,
        };
      },
    };
  };

  const first = await fetchActiveNodeStats({ nodes: [], nowSeconds: NOW, fetchImpl });
  const second = await fetchActiveNodeStats({ nodes: [], nowSeconds: NOW, fetchImpl });

  assert.equal(calls.length, 1);
  assert.deepEqual(first, second);
});

test('fetchActiveNodeStats does not reuse cache across different fetch implementations', async () => {
  const callsA = [];
  const callsB = [];
  const fetchImplA = async (url) => {
    callsA.push(url);
    return {
      ok: true,
      async json() {
        return { active_nodes: { hour: 1, day: 1, week: 1, month: 1 }, sampled: false };
      },
    };
  };
  const fetchImplB = async (url) => {
    callsB.push(url);
    return {
      ok: true,
      async json() {
        return { active_nodes: { hour: 2, day: 2, week: 2, month: 2 }, sampled: false };
      },
    };
  };

  await fetchActiveNodeStats({ nodes: [], nowSeconds: NOW, fetchImpl: fetchImplA });
  await fetchActiveNodeStats({ nodes: [], nowSeconds: NOW, fetchImpl: fetchImplB });

  assert.equal(callsA.length, 1);
  assert.equal(callsB.length, 1);
});

test('fetchActiveNodeStats falls back to local counts when stats fetch fails', async () => {
  const nodes = [
    { last_heard: NOW - 120 },
    { last_heard: NOW - (10 * 86_400) },
  ];
  const fetchImpl = async () => {
    throw new Error('network down');
  };

  const stats = await fetchActiveNodeStats({ nodes, nowSeconds: NOW, fetchImpl });

  assert.deepEqual(stats, {
    hour: 1,
    day: 1,
    week: 1,
    month: 2,
    sampled: true,
  });
});

test('fetchActiveNodeStats falls back to local counts on non-OK HTTP responses', async () => {
  const stats = await fetchActiveNodeStats({
    nodes: [{ last_heard: NOW - 10 }],
    nowSeconds: NOW,
    fetchImpl: async () => ({ ok: false, status: 503 })
  });
  assert.equal(stats.sampled, true);
  assert.equal(stats.hour, 1);
});

test('fetchActiveNodeStats falls back to local counts on invalid payloads', async () => {
  const stats = await fetchActiveNodeStats({
    nodes: [{ last_heard: NOW - (31 * 86_400) }],
    nowSeconds: NOW,
    fetchImpl: async () => ({
      ok: true,
      async json() {
        return { active_nodes: { hour: 'bad' } };
      }
    })
  });
  assert.equal(stats.sampled, true);
  assert.equal(stats.month, 0);
});

test('formatActiveNodeStatsText emits expected dashboard string', () => {
  const text = formatActiveNodeStatsText({
    channel: 'LongFast',
    frequency: '868MHz',
    stats: { hour: 1, day: 2, week: 3, month: 4, sampled: false },
  });

  assert.equal(
    text,
    'LongFast (868MHz) — active nodes: 1/hour, 2/day, 3/week, 4/month.'
  );
});

test('formatActiveNodeStatsText appends sampled marker when local fallback is used', () => {
  const text = formatActiveNodeStatsText({
    channel: 'LongFast',
    frequency: '868MHz',
    stats: { hour: 9, day: 8, week: 7, month: 6, sampled: true },
  });

  assert.equal(
    text,
    'LongFast (868MHz) — active nodes: 9/hour, 8/day, 7/week, 6/month (sampled).'
  );
});

test('readNextCursorHeader reads cursor token from response headers', () => {
  const response = {
    headers: {
      get(name) {
        return name === 'X-Next-Cursor' ? 'cursor-token' : null;
      },
    },
  };
  assert.equal(readNextCursorHeader(response), 'cursor-token');
});

test('readNextCursorHeader returns null when response headers are missing', () => {
  assert.equal(readNextCursorHeader(null), null);
  assert.equal(readNextCursorHeader({ headers: {} }), null);
});

test('fetchPaginatedCollection follows cursor headers and merges pages', async () => {
  const calls = [];
  const pages = new Map([
    ['/api/nodes?limit=2', { items: [{ id: 'a' }, { id: 'b' }], next: 'cursor-1' }],
    ['/api/nodes?limit=2&cursor=cursor-1', { items: [{ id: 'c' }], next: null }],
  ]);
  const fetchImpl = async (url) => {
    calls.push(url);
    const page = pages.get(url);
    if (!page) {
      throw new Error(`unexpected url ${url}`);
    }
    return {
      ok: true,
      headers: { get: (name) => (name === 'X-Next-Cursor' ? page.next : null) },
      async json() {
        return page.items;
      },
    };
  };

  const items = await fetchPaginatedCollection({
    path: '/api/nodes',
    limit: 2,
    maxRows: 10,
    fetchImpl,
  });

  assert.deepEqual(calls, ['/api/nodes?limit=2', '/api/nodes?limit=2&cursor=cursor-1']);
  assert.deepEqual(items.map((item) => item.id), ['a', 'b', 'c']);
});

test('fetchPaginatedCollection returns empty list when path is missing', async () => {
  const items = await fetchPaginatedCollection({
    path: '',
    limit: 10,
    fetchImpl: async () => ({ ok: true, headers: { get: () => null }, json: async () => [] }),
  });
  assert.deepEqual(items, []);
});

test('fetchPaginatedCollection enforces maxRows and propagates params', async () => {
  const calls = [];
  const fetchImpl = async (url) => {
    calls.push(url);
    return {
      ok: true,
      headers: { get: () => (url.includes('cursor=') ? null : 'next-1') },
      async json() {
        return [{ id: 1 }, { id: 2 }, { id: 3 }];
      },
    };
  };
  const items = await fetchPaginatedCollection({
    path: '/api/messages',
    limit: 3,
    maxRows: 4,
    params: { since: '123', encrypted: 'true' },
    fetchImpl,
  });

  assert.equal(calls[0], '/api/messages?limit=3&since=123&encrypted=true');
  assert.equal(items.length, 4);
});

test('fetchPaginatedCollection throws on non-ok responses', async () => {
  await assert.rejects(
    fetchPaginatedCollection({
      path: '/api/messages',
      limit: 2,
      fetchImpl: async () => ({ ok: false, status: 503, json: async () => [] }),
    }),
    /HTTP 503/
  );
});

test('fetchPaginatedCollection throws on invalid payload shapes', async () => {
  await assert.rejects(
    fetchPaginatedCollection({
      path: '/api/messages',
      limit: 2,
      fetchImpl: async () => ({ ok: true, headers: { get: () => null }, json: async () => ({}) }),
    }),
    /invalid paginated payload/
  );
});

test('fetchPaginatedCollection ignores blank params and defaults invalid limits', async () => {
  const calls = [];
  const items = await fetchPaginatedCollection({
    path: '/api/messages',
    limit: 0,
    maxRows: 0,
    params: { since: ' ', encrypted: null, scope: 'recent' },
    fetchImpl: async (url) => {
      calls.push(url);
      return {
        ok: true,
        headers: { get: () => null },
        async json() {
          return [{ id: 1 }];
        },
      };
    },
  });

  assert.deepEqual(items, [{ id: 1 }]);
  assert.equal(calls[0], '/api/messages?limit=200&scope=recent');
});

test('fetchPaginatedCollection stops when a page is empty even if cursor was present', async () => {
  const calls = [];
  const fetchImpl = async (url) => {
    calls.push(url);
    if (url === '/api/nodes?limit=2') {
      return {
        ok: true,
        headers: { get: (name) => (name === 'X-Next-Cursor' ? 'cursor-1' : null) },
        async json() {
          return [{ id: 'a' }];
        },
      };
    }
    return {
      ok: true,
      headers: { get: () => null },
      async json() {
        return [];
      },
    };
  };

  const items = await fetchPaginatedCollection({
    path: '/api/nodes',
    limit: 2,
    fetchImpl,
  });

  assert.deepEqual(items, [{ id: 'a' }]);
  assert.deepEqual(calls, ['/api/nodes?limit=2', '/api/nodes?limit=2&cursor=cursor-1']);
});
@@ -1,455 +0,0 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';

import { __test__, initializeMobileMenu } from '../mobile-menu.js';

const { createMobileMenuController, resolveFocusableElements } = __test__;

function createClassList() {
  const values = new Set();
  return {
    add(...names) {
      names.forEach(name => values.add(name));
    },
    remove(...names) {
      names.forEach(name => values.delete(name));
    },
    contains(name) {
      return values.has(name);
    }
  };
}

function createElement(tagName = 'div', initialId = '') {
  const listeners = new Map();
  const attributes = new Map();
  if (initialId) {
    attributes.set('id', String(initialId));
  }
  return {
    tagName: tagName.toUpperCase(),
    attributes,
    classList: createClassList(),
    dataset: {},
    hidden: false,
    parentNode: null,
    nextSibling: null,
    setAttribute(name, value) {
      attributes.set(name, String(value));
    },
    getAttribute(name) {
      return attributes.has(name) ? attributes.get(name) : null;
    },
    addEventListener(event, handler) {
      listeners.set(event, handler);
    },
    dispatchEvent(event) {
      const key = typeof event === 'string' ? event : event?.type;
      const handler = listeners.get(key);
      if (handler) {
        handler(event);
      }
    },
    appendChild(node) {
      this.lastAppended = node;
      return node;
    },
    insertBefore(node, nextSibling) {
      this.lastInserted = { node, nextSibling };
      return node;
    },
    focus() {
      globalThis.document.activeElement = this;
    },
    querySelector() {
      return null;
    },
    querySelectorAll() {
      return [];
    }
  };
}

function createDomStub() {
  const originalDocument = globalThis.document;
  const registry = new Map();
  const documentStub = {
    body: createElement('body'),
    activeElement: null,
    querySelectorAll() {
      return [];
    },
    getElementById(id) {
      return registry.get(id) || null;
    }
  };
  globalThis.document = documentStub;
  return {
    documentStub,
    registry,
    cleanup() {
      globalThis.document = originalDocument;
    }
  };
}

function createWindowStub(matches = true) {
  const listeners = new Map();
  const mediaListeners = new Map();
  return {
    matchMedia() {
      return {
        matches,
        addEventListener(event, handler) {
          mediaListeners.set(event, handler);
        }
      };
    },
    addEventListener(event, handler) {
      listeners.set(event, handler);
    },
    dispatchEvent(event) {
      const key = typeof event === 'string' ? event : event?.type;
      const handler = listeners.get(key);
      if (handler) {
        handler(event);
      }
    },
    dispatchMediaChange() {
      const handler = mediaListeners.get('change');
      if (handler) {
        handler();
      }
    }
  };
}

function createWindowStubWithListener(matches = true) {
  const listeners = new Map();
  let mediaHandler = null;
  return {
    matchMedia() {
      return {
        matches,
        addListener(handler) {
          mediaHandler = handler;
        }
      };
    },
    addEventListener(event, handler) {
      listeners.set(event, handler);
    },
    dispatchMediaChange() {
      if (mediaHandler) {
        mediaHandler();
      }
    }
  };
}

test('mobile menu toggles open state and aria-expanded', () => {
  const { documentStub, registry, cleanup } = createDomStub();
  const windowStub = createWindowStub(true);

  const menuToggle = createElement('button');
  const menu = createElement('div');
  const menuPanel = createElement('div');
  const closeButton = createElement('button');
  const navLink = createElement('a');

  menu.hidden = true;
  menuPanel.classList.add('mobile-menu__panel');

  menu.querySelector = selector => {
    if (selector === '.mobile-menu__panel') return menuPanel;
    return null;
  };
  menu.querySelectorAll = selector => {
    if (selector === '[data-mobile-menu-close]') return [closeButton];
    if (selector === 'a') return [navLink];
    return [];
  };
  menuPanel.querySelectorAll = () => [closeButton, navLink];

  registry.set('mobileMenuToggle', menuToggle);
  registry.set('mobileMenu', menu);

  try {
    const controller = createMobileMenuController({
      documentObject: documentStub,
      windowObject: windowStub
    });

    controller.initialize();
    windowStub.dispatchMediaChange();

    menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
    assert.equal(menu.hidden, false);
    assert.equal(menuToggle.getAttribute('aria-expanded'), 'true');
    assert.equal(documentStub.body.classList.contains('menu-open'), true);

    navLink.dispatchEvent({ type: 'click' });
    assert.equal(menu.hidden, true);

    closeButton.dispatchEvent({ type: 'click' });
    assert.equal(menu.hidden, true);
    assert.equal(menuToggle.getAttribute('aria-expanded'), 'false');
  } finally {
    cleanup();
  }
});

test('mobile menu closes on escape and route changes', () => {
  const { documentStub, registry, cleanup } = createDomStub();
  const windowStub = createWindowStub(true);

  const menuToggle = createElement('button');
  const menu = createElement('div');
  const menuPanel = createElement('div');
  const closeButton = createElement('button');

  menu.hidden = true;
  menuPanel.classList.add('mobile-menu__panel');

  menu.querySelector = selector => {
    if (selector === '.mobile-menu__panel') return menuPanel;
    return null;
  };
  menu.querySelectorAll = selector => {
    if (selector === '[data-mobile-menu-close]') return [closeButton];
    return [];
  };
  menuPanel.querySelectorAll = () => [closeButton];

  registry.set('mobileMenuToggle', menuToggle);
  registry.set('mobileMenu', menu);

  try {
    const controller = createMobileMenuController({
      documentObject: documentStub,
      windowObject: windowStub
    });

    controller.initialize();

    menuPanel.dispatchEvent({ type: 'keydown', key: 'Escape', preventDefault() {} });
    assert.equal(menu.hidden, true);

    menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
    assert.equal(menu.hidden, false);

    menuPanel.dispatchEvent({ type: 'keydown', key: 'ArrowDown' });
    assert.equal(menu.hidden, false);

    menuPanel.dispatchEvent({ type: 'keydown', key: 'Escape', preventDefault() {} });
    assert.equal(menu.hidden, true);

    menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
    windowStub.dispatchEvent({ type: 'hashchange' });
    assert.equal(menu.hidden, true);

    menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
    windowStub.dispatchEvent({ type: 'popstate' });
    assert.equal(menu.hidden, true);
  } finally {
    cleanup();
  }
});

test('mobile menu traps focus within the panel', () => {
  const { documentStub, registry, cleanup } = createDomStub();
  const windowStub = createWindowStub(true);

  const menuToggle = createElement('button');
  const menu = createElement('div');
  const menuPanel = createElement('div');
  const firstLink = createElement('a');
  const lastButton = createElement('button');

  menuPanel.classList.add('mobile-menu__panel');
  menuPanel.querySelectorAll = () => [firstLink, lastButton];
  menu.querySelector = selector => {
    if (selector === '.mobile-menu__panel') return menuPanel;
    return null;
  };
  menu.querySelectorAll = () => [];

  registry.set('mobileMenuToggle', menuToggle);
  registry.set('mobileMenu', menu);

  try {
    const controller = createMobileMenuController({
      documentObject: documentStub,
      windowObject: windowStub
    });

    controller.initialize();
    menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });

    documentStub.activeElement = lastButton;
    menuPanel.dispatchEvent({ type: 'keydown', key: 'Tab', preventDefault() {}, shiftKey: false });
    assert.equal(documentStub.activeElement, firstLink);

    documentStub.activeElement = firstLink;
    menuPanel.dispatchEvent({ type: 'keydown', key: 'Tab', preventDefault() {}, shiftKey: true });
    assert.equal(documentStub.activeElement, lastButton);
  } finally {
    cleanup();
  }
});

test('resolveFocusableElements filters out aria-hidden nodes', () => {
  const hiddenButton = createElement('button');
  hiddenButton.getAttribute = name => (name === 'aria-hidden' ? 'true' : null);
  const openLink = createElement('a');
  const bareNode = { tagName: 'DIV' };
  const container = {
    querySelectorAll() {
      return [hiddenButton, bareNode, openLink];
    }
  };

  const focusables = resolveFocusableElements(container);
  assert.equal(focusables.length, 1);
  assert.equal(focusables[0], openLink);
});

test('resolveFocusableElements handles empty containers', () => {
  assert.deepEqual(resolveFocusableElements(null), []);
  assert.deepEqual(resolveFocusableElements({}), []);
});

test('mobile menu focuses the panel when no focusables exist', () => {
  const { documentStub, registry, cleanup } = createDomStub();
  const windowStub = createWindowStub(true);

  const menuToggle = createElement('button');
  const menu = createElement('div');
  const menuPanel = createElement('div');
  const lastActive = createElement('button');

  menuPanel.classList.add('mobile-menu__panel');
  menuPanel.querySelectorAll = () => [];
  menu.querySelector = selector => {
    if (selector === '.mobile-menu__panel') return menuPanel;
    return null;
  };
  menu.querySelectorAll = () => [];

  registry.set('mobileMenuToggle', menuToggle);
  registry.set('mobileMenu', menu);
  documentStub.activeElement = lastActive;

  try {
    const controller = createMobileMenuController({
      documentObject: documentStub,
      windowObject: windowStub
    });

    controller.initialize();
    menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
    assert.equal(documentStub.activeElement, menuPanel);

    menuToggle.dispatchEvent({ type: 'click', preventDefault() {} });
    assert.equal(documentStub.activeElement, lastActive);
  } finally {
    cleanup();
  }
});

test('mobile menu registers legacy media query listeners', () => {
  const { documentStub, registry, cleanup } = createDomStub();
  const windowStub = createWindowStubWithListener(true);

  const menuToggle = createElement('button');
  const menu = createElement('div');
  const menuPanel = createElement('div');

  menuPanel.classList.add('mobile-menu__panel');
  menu.querySelector = selector => {
    if (selector === '.mobile-menu__panel') return menuPanel;
    return null;
  };
  menu.querySelectorAll = () => [];

  registry.set('mobileMenuToggle', menuToggle);
  registry.set('mobileMenu', menu);

  try {
    const controller = createMobileMenuController({
      documentObject: documentStub,
      windowObject: windowStub
    });

    controller.initialize();
    windowStub.dispatchMediaChange();
    assert.equal(menuToggle.getAttribute('aria-expanded'), 'false');
  } finally {
    cleanup();
  }
});

test('mobile menu safely no-ops without required nodes', () => {
  const { documentStub, cleanup } = createDomStub();
  const windowStub = createWindowStub(true);

  try {
    const controller = createMobileMenuController({
      documentObject: documentStub,
      windowObject: windowStub
    });

    controller.initialize();
    controller.openMenu();
    controller.closeMenu();
    controller.syncLayout();
    assert.equal(documentStub.body.classList.contains('menu-open'), false);
  } finally {
    cleanup();
  }
});

test('initializeMobileMenu returns a controller', () => {
  const { documentStub, registry, cleanup } = createDomStub();
  const windowStub = createWindowStub(true);

  const menuToggle = createElement('button');
  const menu = createElement('div');
  const menuPanel = createElement('div');

  menuPanel.classList.add('mobile-menu__panel');
  menu.querySelector = selector => {
    if (selector === '.mobile-menu__panel') return menuPanel;
    return null;
  };
  menu.querySelectorAll = () => [];

  registry.set('mobileMenuToggle', menuToggle);
  registry.set('mobileMenu', menu);

  try {
    const controller = initializeMobileMenu({
      documentObject: documentStub,
      windowObject: windowStub
    });
    assert.equal(typeof controller.openMenu, 'function');
  } finally {
    cleanup();
  }
});
@@ -111,6 +111,26 @@ test('createNodeDetailOverlayManager renders fetched markup and restores focus',
  assert.equal(focusTarget.focusCalled, true);
});

test('createNodeDetailOverlayManager mounts telemetry charts for overlay content', async () => {
  const { document, content } = createOverlayHarness();
  const chartModels = [{ id: 'power' }];
  let mountCall = null;
  const manager = createNodeDetailOverlayManager({
    document,
    fetchNodeDetail: async () => ({ html: '<section class="node-detail">Charts</section>', chartModels }),
    mountCharts: (models, options) => {
      mountCall = { models, options };
      return [];
    },
  });
  assert.ok(manager);
  await manager.open({ nodeId: '!alpha' });
  assert.equal(content.innerHTML.includes('Charts'), true);
  assert.ok(mountCall);
  assert.equal(mountCall.models, chartModels);
  assert.equal(mountCall.options.root, content);
});

test('createNodeDetailOverlayManager surfaces errors and supports escape closing', async () => {
  const { document, overlay, content } = createOverlayHarness();
  const errors = [];

@@ -47,7 +47,9 @@ const {
  categoriseNeighbors,
  renderNeighborGroups,
  renderSingleNodeTable,
  createTelemetryCharts,
  renderTelemetryCharts,
  buildUPlotChartConfig,
  renderMessages,
  renderTraceroutes,
  renderTracePath,
@@ -154,14 +156,12 @@ test('additional format helpers provide table friendly output', () => {
        channel_name: 'Primary',
        node: { short_name: 'SRCE', role: 'ROUTER', node_id: '!src' },
      },
      { text: ' GAA= ', encrypted: true, rx_time: 1_700_000_405 },
      { emoji: '😊', rx_time: 1_700_000_401 },
    ],
    renderShortHtml,
    nodeContext,
  );
  assert.equal(messagesHtml.includes('hello'), true);
  assert.equal(messagesHtml.includes('GAA='), false);
  assert.equal(messagesHtml.includes('😊'), true);
  assert.match(messagesHtml, /\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}\]\[868\]/);
  assert.equal(messagesHtml.includes('[868]'), true);
@@ -388,23 +388,10 @@ test('renderTelemetryCharts renders condensed scatter charts when telemetry exis
    },
  };
  const html = renderTelemetryCharts(node, { nowMs });
  const fmt = new Date(nowMs);
  const expectedDate = String(fmt.getDate()).padStart(2, '0');
  assert.equal(html.includes('node-detail__charts'), true);
  assert.equal(html.includes('Power metrics'), true);
  assert.equal(html.includes('Environmental telemetry'), true);
  assert.equal(html.includes('Battery (%)'), true);
  assert.equal(html.includes('Voltage (V)'), true);
  assert.equal(html.includes('Current (A)'), true);
  assert.equal(html.includes('Channel utilization (%)'), true);
  assert.equal(html.includes('Air util TX (%)'), true);
  assert.equal(html.includes('Utilization (%)'), true);
  assert.equal(html.includes('Gas resistance (\u03a9)'), true);
  assert.equal(html.includes('Air quality'), true);
  assert.equal(html.includes('IAQ index'), true);
  assert.equal(html.includes('Temperature (\u00b0C)'), true);
  assert.equal(html.includes(expectedDate), true);
  assert.equal(html.includes('node-detail__chart-point'), true);
  assert.equal(html.includes('node-detail__chart-plot'), true);
});

test('renderTelemetryCharts expands upper bounds when overflow metrics exceed defaults', () => {
@@ -435,12 +422,18 @@ test('renderTelemetryCharts expands upper bounds when overflow metrics exceed de
      },
    },
  };
  const html = renderTelemetryCharts(node, { nowMs });
  assert.match(html, />7\.2<\/text>/);
  assert.match(html, />3\.6<\/text>/);
  assert.match(html, />45<\/text>/);
  assert.match(html, />650<\/text>/);
  assert.match(html, />1100<\/text>/);
  const { chartModels } = createTelemetryCharts(node, { nowMs });
  const powerChart = chartModels.find(model => model.id === 'power');
  const environmentChart = chartModels.find(model => model.id === 'environment');
  const airChart = chartModels.find(model => model.id === 'airQuality');
  const powerConfig = buildUPlotChartConfig(powerChart);
  const envConfig = buildUPlotChartConfig(environmentChart);
  const airConfig = buildUPlotChartConfig(airChart);
  assert.equal(powerConfig.options.scales.voltage.range()[1], 7.2);
  assert.equal(powerConfig.options.scales.current.range()[1], 3.6);
  assert.equal(envConfig.options.scales.temperature.range()[1], 45);
  assert.equal(airConfig.options.scales.iaq.range()[1], 650);
  assert.equal(airConfig.options.scales.pressure.range()[1], 1100);
});

test('renderTelemetryCharts keeps default bounds when metrics stay within limits', () => {
@@ -471,11 +464,17 @@ test('renderTelemetryCharts keeps default bounds when metrics stay within limits
      },
    },
  };
  const html = renderTelemetryCharts(node, { nowMs });
  assert.match(html, />6\.0<\/text>/);
  assert.match(html, />3\.0<\/text>/);
  assert.match(html, />40<\/text>/);
  assert.match(html, />500<\/text>/);
  const { chartModels } = createTelemetryCharts(node, { nowMs });
  const powerChart = chartModels.find(model => model.id === 'power');
  const environmentChart = chartModels.find(model => model.id === 'environment');
  const airChart = chartModels.find(model => model.id === 'airQuality');
  const powerConfig = buildUPlotChartConfig(powerChart);
  const envConfig = buildUPlotChartConfig(environmentChart);
  const airConfig = buildUPlotChartConfig(airChart);
  assert.equal(powerConfig.options.scales.voltage.range()[1], 6);
  assert.equal(powerConfig.options.scales.current.range()[1], 3);
  assert.equal(envConfig.options.scales.temperature.range()[1], 40);
  assert.equal(airConfig.options.scales.iaq.range()[1], 500);
});

test('renderNodeDetailHtml composes the table, neighbors, and messages', () => {
@@ -591,17 +590,18 @@ test('fetchNodeDetailHtml renders the node layout for overlays', async () => {
    neighbors: [],
    rawSources: { node: { node_id: '!alpha', role: 'CLIENT', short_name: 'ALPH' } },
  });
  const html = await fetchNodeDetailHtml(reference, {
  const result = await fetchNodeDetailHtml(reference, {
    refreshImpl,
    fetchImpl,
    renderShortHtml: short => `<span class="short-name">${short}</span>`,
    returnState: true,
  });
  assert.equal(calledUrls.some(url => url.includes('/api/messages/!alpha')), true);
  assert.equal(calledUrls.some(url => url.includes('/api/traces/!alpha')), true);
  assert.equal(html.includes('Example Alpha'), true);
  assert.equal(html.includes('Overlay hello'), true);
  assert.equal(html.includes('Traceroutes'), true);
  assert.equal(html.includes('node-detail__table'), true);
  assert.equal(result.html.includes('Example Alpha'), true);
  assert.equal(result.html.includes('Overlay hello'), true);
  assert.equal(result.html.includes('Traceroutes'), true);
  assert.equal(result.html.includes('node-detail__table'), true);
});

test('fetchNodeDetailHtml hydrates traceroute nodes with API metadata', async () => {
@@ -639,16 +639,17 @@ test('fetchNodeDetailHtml hydrates traceroute nodes with API metadata', async ()
    rawSources: { node: { node_id: '!origin', role: 'CLIENT', short_name: 'ORIG' } },
  });

  const html = await fetchNodeDetailHtml(reference, {
  const result = await fetchNodeDetailHtml(reference, {
    refreshImpl,
    fetchImpl,
    renderShortHtml: short => `<span class="short-name">${short}</span>`,
    returnState: true,
  });

  assert.equal(calledUrls.some(url => url.includes('/api/nodes/!relay')), true);
  assert.equal(calledUrls.some(url => url.includes('/api/nodes/!target')), true);
  assert.equal(html.includes('RLY1'), true);
  assert.equal(html.includes('TGT1'), true);
  assert.equal(result.html.includes('RLY1'), true);
  assert.equal(result.html.includes('TGT1'), true);
});

test('fetchNodeDetailHtml requires a node identifier reference', async () => {
@@ -948,19 +949,13 @@ test('initializeNodeDetailPage reports an error when refresh fails', async () =>
|
||||
throw new Error('boom');
|
||||
};
|
||||
const renderShortHtml = short => `<span>${short}</span>`;
|
||||
const originalError = console.error;
|
||||
console.error = () => {};
|
||||
try {
|
||||
const result = await initializeNodeDetailPage({
|
||||
document: documentStub,
|
||||
refreshImpl,
|
||||
renderShortHtml,
|
||||
});
|
||||
assert.equal(result, false);
|
||||
assert.equal(element.innerHTML.includes('Failed to load'), true);
|
||||
} finally {
|
||||
console.error = originalError;
|
||||
}
|
||||
const result = await initializeNodeDetailPage({
|
||||
document: documentStub,
|
||||
refreshImpl,
|
||||
renderShortHtml,
|
||||
});
|
||||
assert.equal(result, false);
|
||||
assert.equal(element.innerHTML.includes('Failed to load'), true);
|
||||
});
|
||||
|
||||
test('initializeNodeDetailPage handles missing reference payloads', async () => {
|
||||
|
||||
@@ -0,0 +1,360 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';

import { __testUtils } from '../node-page.js';
import { buildMovingAverageSeries } from '../charts-page.js';

const {
  createTelemetryCharts,
  buildUPlotChartConfig,
  mountTelemetryCharts,
  mountTelemetryChartsWithRetry,
} = __testUtils;

test('uPlot chart config preserves axes, colors, and tick labels for node telemetry', () => {
  const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
  const nowSeconds = Math.floor(nowMs / 1000);
  const node = {
    rawSources: {
      telemetry: {
        snapshots: [
          {
            rx_time: nowSeconds - 60,
            device_metrics: {
              battery_level: 80,
              voltage: 4.1,
              current: 0.75,
            },
          },
          {
            rx_time: nowSeconds - 3_600,
            device_metrics: {
              battery_level: 78,
              voltage: 4.05,
              current: 0.65,
            },
          },
        ],
      },
    },
  };
  const { chartModels } = createTelemetryCharts(node, {
    nowMs,
    chartOptions: {
      xAxisTickBuilder: () => [nowMs],
      xAxisTickFormatter: () => '08',
    },
  });
  const powerChart = chartModels.find(model => model.id === 'power');
  const { options, data } = buildUPlotChartConfig(powerChart);

  assert.deepEqual(options.scales.battery.range(), [0, 100]);
  assert.deepEqual(options.scales.voltage.range(), [0, 6]);
  assert.deepEqual(options.scales.current.range(), [0, 3]);
  assert.equal(options.series[1].stroke, '#8856a7');
  assert.equal(options.series[2].stroke, '#9ebcda');
  assert.equal(options.series[3].stroke, '#3182bd');
  assert.deepEqual(options.axes[0].values(null, [nowMs]), ['08']);
  assert.equal(options.axes[0].stroke, '#5c6773');

  assert.deepEqual(data[0].slice(0, 2), [nowMs - 3_600_000, nowMs - 60_000]);
  assert.deepEqual(data[1].slice(0, 2), [78, 80]);
});

test('uPlot chart config maps moving averages and raw points for aggregated telemetry', () => {
  const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
  const nowSeconds = Math.floor(nowMs / 1000);
  const snapshots = [
    {
      rx_time: nowSeconds - 3_600,
      device_metrics: { battery_level: 10 },
    },
    {
      rx_time: nowSeconds - 1_800,
      device_metrics: { battery_level: 20 },
    },
  ];
  const node = { rawSources: { telemetry: { snapshots } } };
  const { chartModels } = createTelemetryCharts(node, {
    nowMs,
    chartOptions: {
      lineReducer: points => buildMovingAverageSeries(points, 3_600_000),
    },
  });
  const powerChart = chartModels.find(model => model.id === 'power');
  const { options, data } = buildUPlotChartConfig(powerChart);

  assert.equal(options.series.length, 3);
  assert.equal(options.series[1].stroke.startsWith('rgba('), true);
  assert.equal(options.series[2].stroke, '#8856a7');
  assert.deepEqual(data[1].slice(0, 2), [10, 15]);
  assert.deepEqual(data[2].slice(0, 2), [10, 20]);
});

test('buildUPlotChartConfig applies axis color overrides', () => {
  const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
  const nowSeconds = Math.floor(nowMs / 1000);
  const node = {
    rawSources: {
      telemetry: {
        snapshots: [
          {
            rx_time: nowSeconds - 60,
            device_metrics: { battery_level: 80 },
          },
        ],
      },
    },
  };
  const { chartModels } = createTelemetryCharts(node, { nowMs });
  const powerChart = chartModels.find(model => model.id === 'power');
  const { options } = buildUPlotChartConfig(powerChart, {
    axisColor: '#ffffff',
    gridColor: '#222222',
  });
  assert.equal(options.axes[0].stroke, '#ffffff');
  assert.equal(options.axes[0].grid.stroke, '#222222');
});

test('environment chart renders humidity axis on the right side', () => {
  const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
  const nowSeconds = Math.floor(nowMs / 1000);
  const node = {
    rawSources: {
      telemetry: {
        snapshots: [
          {
            rx_time: nowSeconds - 60,
            environment_metrics: {
              temperature: 19.5,
              relative_humidity: 55,
            },
          },
        ],
      },
    },
  };
  const { chartModels } = createTelemetryCharts(node, { nowMs });
  const envChart = chartModels.find(model => model.id === 'environment');
  const { options } = buildUPlotChartConfig(envChart);
  const humidityAxis = options.axes.find(axis => axis.scale === 'humidity');
  assert.ok(humidityAxis);
  assert.equal(humidityAxis.side, 1);
  assert.equal(humidityAxis.show, true);
});

test('channel utilization chart includes a right-side utilization axis', () => {
  const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
  const nowSeconds = Math.floor(nowMs / 1000);
  const node = {
    rawSources: {
      telemetry: {
        snapshots: [
          {
            rx_time: nowSeconds - 60,
            device_metrics: {
              channel_utilization: 40,
              air_util_tx: 22,
            },
          },
        ],
      },
    },
  };
  const { chartModels } = createTelemetryCharts(node, { nowMs });
  const channelChart = chartModels.find(model => model.id === 'channel');
  const { options } = buildUPlotChartConfig(channelChart);
  const rightAxis = options.axes.find(axis => axis.scale === 'channelSecondary');
  assert.ok(rightAxis);
  assert.equal(rightAxis.side, 1);
  assert.equal(rightAxis.show, true);
});

test('createTelemetryCharts returns empty markup when snapshots are missing', () => {
  const { chartsHtml, chartModels } = createTelemetryCharts({ rawSources: { telemetry: { snapshots: [] } } });
  assert.equal(chartsHtml, '');
  assert.equal(chartModels.length, 0);
});

test('mountTelemetryCharts instantiates uPlot for chart containers', () => {
  const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
  const nowSeconds = Math.floor(nowMs / 1000);
  const node = {
    rawSources: {
      telemetry: {
        snapshots: [
          {
            rx_time: nowSeconds - 60,
            device_metrics: { battery_level: 80 },
          },
        ],
      },
    },
  };
  const { chartModels } = createTelemetryCharts(node, { nowMs });
  const [model] = chartModels;
  const plotRoot = { innerHTML: 'placeholder' };
  const chartContainer = {
    querySelector(selector) {
      return selector === '[data-telemetry-plot]' ? plotRoot : null;
    },
  };
  const root = {
    querySelector(selector) {
      return selector === `[data-telemetry-chart-id="${model.id}"]` ? chartContainer : null;
    },
  };
  class UPlotStub {
    constructor(options, data, container) {
      this.options = options;
      this.data = data;
      this.container = container;
    }
  }
  const instances = mountTelemetryCharts(chartModels, { root, uPlotImpl: UPlotStub });
  assert.equal(plotRoot.innerHTML, '');
  assert.equal(instances.length, 1);
  assert.equal(instances[0].container, plotRoot);
});

test('mountTelemetryCharts responds to window resize events', async () => {
  const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
  const nowSeconds = Math.floor(nowMs / 1000);
  const node = {
    rawSources: {
      telemetry: {
        snapshots: [
          {
            rx_time: nowSeconds - 60,
            device_metrics: { battery_level: 80 },
          },
        ],
      },
    },
  };
  const { chartModels } = createTelemetryCharts(node, { nowMs });
  const [model] = chartModels;
  const plotRoot = {
    innerHTML: '',
    clientWidth: 320,
    clientHeight: 180,
    getBoundingClientRect() {
      return { width: this.clientWidth, height: this.clientHeight };
    },
  };
  const chartContainer = {
    querySelector(selector) {
      return selector === '[data-telemetry-plot]' ? plotRoot : null;
    },
  };
  const root = {
    querySelector(selector) {
      return selector === `[data-telemetry-chart-id="${model.id}"]` ? chartContainer : null;
    },
  };
  const previousResizeObserver = globalThis.ResizeObserver;
  const previousAddEventListener = globalThis.addEventListener;
  let resizeHandler = null;
  globalThis.ResizeObserver = undefined;
  globalThis.addEventListener = (event, handler) => {
    if (event === 'resize') {
      resizeHandler = handler;
    }
  };
  const sizeCalls = [];
  class UPlotStub {
    constructor(options, data, container) {
      this.options = options;
      this.data = data;
      this.container = container;
      this.root = container;
    }
    setSize(size) {
      sizeCalls.push(size);
    }
  }
  mountTelemetryCharts(chartModels, { root, uPlotImpl: UPlotStub });
  assert.ok(resizeHandler);
  plotRoot.clientWidth = 480;
  plotRoot.clientHeight = 240;
  resizeHandler();
  await new Promise(resolve => setTimeout(resolve, 150));
  assert.equal(sizeCalls.length >= 1, true);
  assert.deepEqual(sizeCalls[sizeCalls.length - 1], { width: 480, height: 240 });
  globalThis.ResizeObserver = previousResizeObserver;
  globalThis.addEventListener = previousAddEventListener;
});

test('mountTelemetryChartsWithRetry loads uPlot when missing', async () => {
  const nowMs = Date.UTC(2025, 0, 8, 12, 0, 0);
  const nowSeconds = Math.floor(nowMs / 1000);
  const node = {
    rawSources: {
      telemetry: {
        snapshots: [
          {
            rx_time: nowSeconds - 60,
            device_metrics: { battery_level: 80 },
          },
        ],
      },
    },
  };
  const { chartModels } = createTelemetryCharts(node, { nowMs });
  const [model] = chartModels;
  const plotRoot = { innerHTML: '', clientWidth: 400, clientHeight: 200 };
  const chartContainer = {
    querySelector(selector) {
      return selector === '[data-telemetry-plot]' ? plotRoot : null;
    },
  };
  const root = {
    ownerDocument: {
      body: {},
      querySelector: () => null,
    },
    querySelector(selector) {
      return selector === `[data-telemetry-chart-id="${model.id}"]` ? chartContainer : null;
    },
  };
  const previousUPlot = globalThis.uPlot;
  const instances = [];
  class UPlotStub {
    constructor(options, data, container) {
      this.options = options;
      this.data = data;
      this.container = container;
      instances.push(this);
    }
  }
  let loadCalled = false;
  const loadUPlot = ({ onLoad }) => {
    loadCalled = true;
    globalThis.uPlot = UPlotStub;
    if (typeof onLoad === 'function') {
      onLoad();
    }
    return true;
  };
  mountTelemetryChartsWithRetry(chartModels, { root, loadUPlot });
  await new Promise(resolve => setTimeout(resolve, 0));
  assert.equal(loadCalled, true);
  assert.equal(instances.length, 1);
  globalThis.uPlot = previousUPlot;
});
@@ -14,7 +14,7 @@
 * limitations under the License.
 */

import { renderTelemetryCharts } from './node-page.js';
import { createTelemetryCharts, mountTelemetryChartsWithRetry } from './node-page.js';

const TELEMETRY_BUCKET_SECONDS = 60 * 60;
const HOUR_MS = 60 * 60 * 1000;

@@ -193,6 +193,21 @@ export async function fetchAggregatedTelemetry({
    .filter(snapshot => snapshot != null);
}

/**
 * Fetch and render aggregated telemetry charts.
 *
 * @param {{
 *   document?: Document,
 *   rootId?: string,
 *   fetchImpl?: Function,
 *   bucketSeconds?: number,
 *   windowMs?: number,
 *   createCharts?: Function,
 *   mountCharts?: Function,
 *   uPlotImpl?: Function,
 * }} options Optional overrides for testing.
 * @returns {Promise<boolean>} ``true`` when charts were rendered successfully.
 */
export async function initializeChartsPage(options = {}) {
  const documentRef = options.document ?? globalThis.document;
  if (!documentRef || typeof documentRef.getElementById !== 'function') {

@@ -204,7 +219,8 @@ export async function initializeChartsPage(options = {}) {
    return false;
  }

  const renderCharts = typeof options.renderCharts === 'function' ? options.renderCharts : renderTelemetryCharts;
  const createCharts = typeof options.createCharts === 'function' ? options.createCharts : createTelemetryCharts;
  const mountCharts = typeof options.mountCharts === 'function' ? options.mountCharts : mountTelemetryChartsWithRetry;
  const fetchImpl = options.fetchImpl ?? globalThis.fetch;
  const bucketSeconds = options.bucketSeconds ?? TELEMETRY_BUCKET_SECONDS;
  const windowMs = options.windowMs ?? CHART_WINDOW_MS;

@@ -218,7 +234,7 @@ export async function initializeChartsPage(options = {}) {
      return true;
    }
    const node = { rawSources: { telemetry: { snapshots } } };
    const chartsHtml = renderCharts(node, {
    const chartState = createCharts(node, {
      nowMs: Date.now(),
      chartOptions: {
        windowMs,

@@ -228,11 +244,12 @@ export async function initializeChartsPage(options = {}) {
        lineReducer: points => buildMovingAverageSeries(points, HOUR_MS),
      },
    });
    if (!chartsHtml) {
    if (!chartState.chartsHtml) {
      container.innerHTML = renderStatus('Telemetry snapshots are unavailable.');
      return true;
    }
    container.innerHTML = chartsHtml;
    container.innerHTML = chartState.chartsHtml;
    mountCharts(chartState.chartModels, { root: container, uPlotImpl: options.uPlotImpl });
    return true;
  } catch (error) {
    console.error('Failed to render aggregated telemetry charts', error);

@@ -20,7 +20,7 @@ import { extractModemMetadata } from './node-modem-metadata.js';
 * Highest channel index that should be represented within the tab view.
 * @type {number}
 */
export const MAX_CHANNEL_INDEX = 255;
export const MAX_CHANNEL_INDEX = 9;

/**
 * Discrete event types that can appear in the chat activity log.

@@ -65,8 +65,7 @@ function resolveSnapshotList(entry) {
 * Build a data model describing the content for chat tabs.
 *
 * Entries outside the recent activity window, encrypted messages, and
 * channels above {@link MAX_CHANNEL_INDEX} are filtered out. Channel
 * buckets are only created when messages are present for that channel.
 * channels above {@link MAX_CHANNEL_INDEX} are filtered out.
 *
 * @param {{
 *   nodes?: Array<Object>,

@@ -103,29 +102,11 @@ export function buildChatTabModel({
  const logEntries = [];
  const channelBuckets = new Map();
  const primaryChannelEnvLabel = normalisePrimaryChannelEnvLabel(primaryChannelFallbackLabel);
  const nodeById = new Map();
  const nodeByNum = new Map();
  const nodeInfoKeys = new Set();

  const buildNodeInfoKey = (nodeId, nodeNum, ts) => `${nodeId ?? ''}:${nodeNum ?? ''}:${ts ?? ''}`;
  const recordNodeInfoEntry = (ts, nodeId, nodeNum) => {
    if (ts == null) return;
    const key = buildNodeInfoKey(nodeId, nodeNum, ts);
    if (nodeInfoKeys.has(key)) return;
    const node = nodeId && nodeById.has(nodeId)
      ? nodeById.get(nodeId)
      : (nodeNum != null && nodeByNum.has(nodeNum) ? nodeByNum.get(nodeNum) : null);
    if (!node) return;
    nodeInfoKeys.add(key);
    logEntries.push({ ts, type: CHAT_LOG_ENTRY_TYPES.NODE_INFO, node, nodeId, nodeNum });
  };

  for (const node of nodes || []) {
    if (!node) continue;
    const nodeId = normaliseNodeId(node);
    const nodeNum = normaliseNodeNum(node);
    if (nodeId) nodeById.set(nodeId, node);
    if (nodeNum != null) nodeByNum.set(nodeNum, node);
    const firstTs = resolveTimestampSeconds(node.first_heard ?? node.firstHeard, node.first_heard_iso ?? node.firstHeardIso);
    if (firstTs != null && firstTs >= cutoff) {
      logEntries.push({ ts: firstTs, type: CHAT_LOG_ENTRY_TYPES.NODE_NEW, node, nodeId, nodeNum });

@@ -133,7 +114,6 @@ export function buildChatTabModel({
    const lastTs = resolveTimestampSeconds(node.last_heard ?? node.lastHeard, node.last_seen_iso ?? node.lastSeenIso);
    if (lastTs != null && lastTs >= cutoff) {
      logEntries.push({ ts: lastTs, type: CHAT_LOG_ENTRY_TYPES.NODE_INFO, node, nodeId, nodeNum });
      nodeInfoKeys.add(buildNodeInfoKey(nodeId, nodeNum, lastTs));
    }
  }

@@ -149,7 +129,6 @@ export function buildChatTabModel({
      const nodeId = normaliseNodeId(snapshot);
      const nodeNum = normaliseNodeNum(snapshot);
      logEntries.push({ ts, type: CHAT_LOG_ENTRY_TYPES.TELEMETRY, telemetry: snapshot, nodeId, nodeNum });
      recordNodeInfoEntry(ts, nodeId, nodeNum);
    }
  }

@@ -165,7 +144,6 @@ export function buildChatTabModel({
      const nodeId = normaliseNodeId(snapshot);
      const nodeNum = normaliseNodeNum(snapshot);
      logEntries.push({ ts, type: CHAT_LOG_ENTRY_TYPES.POSITION, position: snapshot, nodeId, nodeNum });
      recordNodeInfoEntry(ts, nodeId, nodeNum);
    }
  }

@@ -179,7 +157,6 @@ export function buildChatTabModel({
      const nodeNum = normaliseNodeNum(snapshot);
      const neighborId = normaliseNeighborId(snapshot);
      logEntries.push({ ts, type: CHAT_LOG_ENTRY_TYPES.NEIGHBOR, neighbor: snapshot, nodeId, nodeNum, neighborId });
      recordNodeInfoEntry(ts, nodeId, nodeNum);
    }
  }

@@ -209,7 +186,6 @@ export function buildChatTabModel({
        nodeId: firstHop.id ?? null,
        nodeNum: firstHop.num ?? null
      });
      recordNodeInfoEntry(ts, firstHop.id ?? null, firstHop.num ?? null);
    }

  const encryptedLogEntries = [];

@@ -245,12 +221,28 @@ export function buildChatTabModel({
      modemPreset,
      envFallbackLabel: primaryChannelEnvLabel
    });
    const nameBucketKey = safeIndex > 0 ? buildSecondaryNameBucketKey(safeIndex, labelInfo) : null;
    const nameBucketKey = safeIndex > 0 ? buildSecondaryNameBucketKey(labelInfo) : null;
    const primaryBucketKey = safeIndex === 0 && labelInfo.label !== '0' ? buildPrimaryBucketKey(labelInfo.label) : '0';

    const bucketKey = safeIndex === 0 ? primaryBucketKey : nameBucketKey ?? String(safeIndex);
    let bucketKey = safeIndex === 0 ? primaryBucketKey : nameBucketKey ?? String(safeIndex);
    let bucket = channelBuckets.get(bucketKey);

    if (!bucket && safeIndex > 0) {
      const existingBucketKey = findExistingBucketKeyByIndex(channelBuckets, safeIndex);
      if (existingBucketKey) {
        bucketKey = existingBucketKey;
        bucket = channelBuckets.get(existingBucketKey);
      }
    }

    if (bucket && nameBucketKey && bucket.key !== nameBucketKey) {
      channelBuckets.delete(bucket.key);
      bucket.key = nameBucketKey;
      bucket.id = buildChannelTabId(nameBucketKey);
      channelBuckets.set(nameBucketKey, bucket);
      bucketKey = nameBucketKey;
    }

    if (!bucket) {
      bucket = {
        key: bucketKey,

@@ -295,6 +287,26 @@ export function buildChatTabModel({

  logEntries.sort((a, b) => a.ts - b.ts);

  let hasPrimaryBucket = false;
  for (const bucket of channelBuckets.values()) {
    if (bucket.index === 0) {
      hasPrimaryBucket = true;
      break;
    }
  }
  if (!hasPrimaryBucket) {
    const bucketKey = '0';
    channelBuckets.set(bucketKey, {
      key: bucketKey,
      id: buildChannelTabId(bucketKey),
      index: 0,
      label: '0',
      entries: [],
      labelPriority: CHANNEL_LABEL_PRIORITY.INDEX,
      isPrimaryFallback: true
    });
  }

  const channels = Array.from(channelBuckets.values()).sort((a, b) => {
    if (a.index !== b.index) {
      return a.index - b.index;

@@ -553,42 +565,43 @@ function buildPrimaryBucketKey(primaryChannelLabel) {
  return '0';
}

function buildSecondaryNameBucketKey(index, labelInfo) {
function buildSecondaryNameBucketKey(labelInfo) {
  const label = labelInfo?.label ?? null;
  const priority = labelInfo?.priority ?? CHANNEL_LABEL_PRIORITY.INDEX;
  if (!Number.isFinite(index) || index <= 0 || priority !== CHANNEL_LABEL_PRIORITY.NAME || !label) {
  if (priority !== CHANNEL_LABEL_PRIORITY.NAME || !label) {
    return null;
  }
  const trimmedLabel = label.trim().toLowerCase();
  if (!trimmedLabel.length) {
    return null;
  }
  return `secondary-name::${trimmedLabel}`;
  return `secondary::${trimmedLabel}`;
}

function findExistingBucketKeyByIndex(channelBuckets, targetIndex) {
  if (!channelBuckets || !Number.isFinite(targetIndex) || targetIndex <= 0) {
    return null;
  }
  const normalizedTarget = Math.trunc(targetIndex);
  for (const [key, bucket] of channelBuckets.entries()) {
    if (!bucket || !Number.isFinite(bucket.index)) {
      continue;
    }
    if (Math.trunc(bucket.index) !== normalizedTarget) {
      continue;
    }
    if (bucket.index === 0) {
      continue;
    }
    return key;
  }
  return null;
}

function buildChannelTabId(bucketKey) {
  if (bucketKey === '0') {
    return 'channel-0';
  }
  const secondaryNameParts = /^secondary-name::(.+)$/.exec(String(bucketKey));
  if (secondaryNameParts) {
    const secondaryLabelSlug = slugify(secondaryNameParts[1]);
    const secondaryHash = hashChannelKey(bucketKey);
    if (secondaryLabelSlug) {
      return `channel-secondary-name-${secondaryLabelSlug}-${secondaryHash}`;
    }
    return `channel-secondary-name-${secondaryHash}`;
  }
  const secondaryParts = /^secondary::(\d+)::(.+)$/.exec(String(bucketKey));
  if (secondaryParts) {
    const secondaryIndex = secondaryParts[1];
    const secondaryLabelSlug = slugify(secondaryParts[2]);
    const secondaryHash = hashChannelKey(bucketKey);
    if (secondaryLabelSlug) {
      return `channel-secondary-${secondaryIndex}-${secondaryLabelSlug}-${secondaryHash}`;
    }
    return `channel-secondary-${secondaryIndex}-${secondaryHash}`;
  }
  const slug = slugify(bucketKey);
  if (slug) {
    if (slug !== '0') {

@@ -15,7 +15,6 @@
 */

import { readAppConfig } from './config.js';
import { resolveLegendVisibility } from './map-legend-visibility.js';
import { mergeConfig } from './settings.js';
import { roleColors } from './role-helpers.js';

@@ -80,59 +79,6 @@ function buildInstanceUrl(domain) {
  return `https://${trimmed}`;
}

/**
 * Read the next-page cursor token from response headers.
 *
 * @param {*} response Fetch response candidate.
 * @returns {string|null} Cursor token when present.
 */
function readNextCursorHeader(response) {
  const headers = response && response.headers;
  if (!headers || typeof headers.get !== 'function') return null;
  const cursor = headers.get('X-Next-Cursor');
  return cursor && String(cursor).trim() ? String(cursor).trim() : null;
}

/**
 * Fetch all federation instances using keyset cursor pagination.
 *
 * @param {Function} fetchImpl Fetch-compatible function.
 * @returns {Promise<Array<Object>>} Combined instance rows.
 */
async function fetchAllInstances(fetchImpl) {
  const results = [];
  let cursor = null;
  let pageCount = 0;
  const limit = 500;

  while (pageCount < 100) {
    const query = new URLSearchParams({ limit: String(limit) });
    if (cursor) {
      query.set('cursor', cursor);
    }

    const response = await fetchImpl(`/api/instances?${query.toString()}`, {
      headers: { Accept: 'application/json' },
      credentials: 'omit'
    });
    if (!response || !response.ok || typeof response.json !== 'function') {
      return results;
    }
    const payload = await response.json();
    if (!Array.isArray(payload) || payload.length === 0) {
      return results;
    }
    results.push(...payload);
    cursor = readNextCursorHeader(response);
    pageCount += 1;
    if (!cursor) {
      return results;
    }
  }

  return results;
}

const NODE_COUNT_COLOR_STOPS = [
  { limit: 100, color: roleColors.CLIENT_HIDDEN },
  { limit: 200, color: roleColors.SENSOR },

@@ -258,31 +204,6 @@ function hasNumberValue(value) {
  return toFiniteNumber(value) != null;
}

/**
 * Toggle the legend hidden class on a container element.
 *
 * @param {HTMLElement|{ classList?: { toggle?: Function }, className?: string }} container Legend container.
 * @param {boolean} hidden Whether the legend should be hidden.
 * @returns {void}
 */
function toggleLegendHiddenClass(container, hidden) {
  if (!container) return;
  if (container.classList && typeof container.classList.toggle === 'function') {
    container.classList.toggle('legend-hidden', hidden);
    return;
  }
  if (typeof container.className === 'string') {
    const classes = container.className.split(/\s+/).filter(Boolean);
    const hasHidden = classes.includes('legend-hidden');
    if (hidden && !hasHidden) {
      classes.push('legend-hidden');
    } else if (!hidden && hasHidden) {
      classes.splice(classes.indexOf('legend-hidden'), 1);
    }
    container.className = classes.join(' ');
  }
}

const TILE_LAYER_URL = 'https://{s}.tile.openstreetmap.fr/hot/{z}/{x}/{y}.png';

/**

@@ -302,7 +223,6 @@ export async function initializeFederationPage(options = {}) {
  const fetchImpl = options.fetchImpl || fetch;
  const leaflet = options.leaflet || (typeof window !== 'undefined' ? window.L : null);
  const mapContainer = document.getElementById('map');
  const mapPanel = document.getElementById('mapPanel');
  const tableEl = document.getElementById('instances');
  const tableBody = document.querySelector('#instances tbody');
  const statusEl = document.getElementById('status');

@@ -319,13 +239,6 @@ export async function initializeFederationPage(options = {}) {
  let map = null;
  let markersLayer = null;
  let tileLayer = null;
  let legendContainer = null;
  let legendToggleButton = null;
  let legendVisible = true;
  const legendCollapsedValue = mapPanel ? mapPanel.getAttribute('data-legend-collapsed') : null;
  const legendDefaultCollapsed = legendCollapsedValue == null
    ? true
    : legendCollapsedValue.trim() !== 'false';
  const tableSorters = {
    name: { getValue: inst => inst.name ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },
    domain: { getValue: inst => inst.domain ?? '', compare: compareString, hasValue: hasStringValue, defaultDirection: 'asc' },

@@ -444,37 +357,6 @@ export async function initializeFederationPage(options = {}) {
    syncSortIndicators();
  };

  /**
   * Update the pressed state of the legend visibility toggle button.
   *
   * @returns {void}
   */
  const updateLegendToggleState = () => {
    if (!legendToggleButton) return;
    const baseLabel = legendVisible ? 'Hide map legend' : 'Show map legend';
    const baseText = legendVisible ? 'Hide legend' : 'Show legend';
    legendToggleButton.setAttribute('aria-pressed', legendVisible ? 'true' : 'false');
    legendToggleButton.setAttribute('aria-label', baseLabel);
    legendToggleButton.textContent = baseText;
  };

  /**
   * Show or hide the map legend component.
   *
   * @param {boolean} visible Whether the legend should be displayed.
   * @returns {void}
   */
  const setLegendVisibility = visible => {
    legendVisible = Boolean(visible);
    if (legendContainer) {
      toggleLegendHiddenClass(legendContainer, !legendVisible);
      if (typeof legendContainer.setAttribute === 'function') {
        legendContainer.setAttribute('aria-hidden', legendVisible ? 'false' : 'true');
      }
    }
    updateLegendToggleState();
  };

  /**
   * Wire up click and keyboard handlers for sortable headers.
   *

@@ -577,7 +459,13 @@ export async function initializeFederationPage(options = {}) {
  // Fetch instances data
  let instances = [];
  try {
    instances = await fetchAllInstances(fetchImpl);
    const response = await fetchImpl('/api/instances', {
      headers: { Accept: 'application/json' },
      credentials: 'omit'
    });
    if (response.ok) {
      instances = await response.json();
    }
  } catch (err) {
    console.warn('Failed to fetch federation instances', err);
  }

@@ -595,15 +483,6 @@ export async function initializeFederationPage(options = {}) {
  const canRenderLegend =
    typeof leaflet.control === 'function' && leaflet.DomUtil && typeof leaflet.DomUtil.create === 'function';
  if (canRenderLegend) {
    const legendMediaQuery = typeof window !== 'undefined' && window.matchMedia
      ? window.matchMedia('(max-width: 1024px)')
      : null;
    const initialLegendVisible = resolveLegendVisibility({
      defaultCollapsed: legendDefaultCollapsed,
      mediaQueryMatches: legendMediaQuery ? legendMediaQuery.matches : false
    });
    legendVisible = initialLegendVisible;

    const legendStops = NODE_COUNT_COLOR_STOPS.map((stop, index) => {
      const lower = index === 0 ? 0 : NODE_COUNT_COLOR_STOPS[index - 1].limit;
      const upper = stop.limit - 1;

@@ -616,11 +495,7 @@ export async function initializeFederationPage(options = {}) {
    const legend = leaflet.control({ position: 'bottomright' });
    legend.onAdd = function onAdd() {
      const container = leaflet.DomUtil.create('div', 'legend legend--instances');
      container.id = 'federationLegend';
      container.setAttribute('aria-label', 'Active nodes legend');
      container.setAttribute('role', 'region');
      container.setAttribute('aria-hidden', initialLegendVisible ? 'false' : 'true');
      toggleLegendHiddenClass(container, !initialLegendVisible);
||||
const header = leaflet.DomUtil.create('div', 'legend-header', container);
|
||||
const title = leaflet.DomUtil.create('span', 'legend-title', header);
|
||||
title.textContent = 'Active nodes';
|
||||
@@ -633,46 +508,9 @@ export async function initializeFederationPage(options = {}) {
|
||||
const label = leaflet.DomUtil.create('span', 'legend-label', item);
|
||||
label.textContent = stop.label;
|
||||
});
|
||||
legendContainer = container;
|
||||
return container;
|
||||
};
|
||||
legend.addTo(map);
|
||||
|
||||
const legendToggleControl = leaflet.control({ position: 'bottomright' });
|
||||
legendToggleControl.onAdd = function onAdd() {
|
||||
const container = leaflet.DomUtil.create('div', 'leaflet-control legend-toggle');
|
||||
const button = leaflet.DomUtil.create('button', 'legend-toggle-button', container);
|
||||
button.type = 'button';
|
||||
button.setAttribute('aria-controls', 'federationLegend');
|
||||
button.addEventListener?.('click', event => {
|
||||
event.preventDefault();
|
||||
event.stopPropagation();
|
||||
setLegendVisibility(!legendVisible);
|
||||
});
|
||||
legendToggleButton = button;
|
||||
updateLegendToggleState();
|
||||
if (leaflet.DomEvent && typeof leaflet.DomEvent.disableClickPropagation === 'function') {
|
||||
leaflet.DomEvent.disableClickPropagation(container);
|
||||
}
|
||||
if (leaflet.DomEvent && typeof leaflet.DomEvent.disableScrollPropagation === 'function') {
|
||||
leaflet.DomEvent.disableScrollPropagation(container);
|
||||
}
|
||||
return container;
|
||||
};
|
||||
legendToggleControl.addTo(map);
|
||||
|
||||
setLegendVisibility(initialLegendVisible);
|
||||
if (legendMediaQuery) {
|
||||
const changeHandler = event => {
|
||||
if (legendDefaultCollapsed) return;
|
||||
setLegendVisibility(!event.matches);
|
||||
};
|
||||
if (typeof legendMediaQuery.addEventListener === 'function') {
|
||||
legendMediaQuery.addEventListener('change', changeHandler);
|
||||
} else if (typeof legendMediaQuery.addListener === 'function') {
|
||||
legendMediaQuery.addListener(changeHandler);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for (const instance of instances) {
|
||||
|
||||
@@ -34,108 +34,6 @@ function resolveInstanceLabel(entry) {
  return domain;
}

/**
 * Update federation navigation labels with the instance count.
 *
 * @param {{
 *   documentObject?: Document | null,
 *   count: number
 * }} options Configuration for updating the navigation labels.
 * @returns {void}
 */
function updateFederationNavCount(options) {
  const { documentObject, count } = options;

  if (!documentObject || typeof count !== 'number' || !Number.isFinite(count)) {
    return;
  }

  const normalizedCount = Math.max(0, Math.floor(count));
  const root = typeof documentObject.querySelectorAll === 'function'
    ? documentObject
    : documentObject.body;

  if (!root || typeof root.querySelectorAll !== 'function') {
    return;
  }

  const links = Array.from(root.querySelectorAll('.js-federation-nav'));

  links.forEach(link => {
    if (!link || typeof link !== 'object') {
      return;
    }

    const dataset = link.dataset || {};
    const storedLabel = typeof dataset.federationLabel === 'string' ? dataset.federationLabel.trim() : '';
    const fallbackLabel = typeof link.textContent === 'string'
      ? link.textContent.split('(')[0].trim()
      : 'Federation';
    const label = storedLabel || fallbackLabel || 'Federation';

    dataset.federationLabel = label;
    link.textContent = `${label} (${normalizedCount})`;
  });
}

/**
 * Read the next-page cursor header from an HTTP response.
 *
 * @param {*} response Fetch response candidate.
 * @returns {string|null} Cursor token when available.
 */
function readNextCursorHeader(response) {
  const headers = response && response.headers;
  if (!headers || typeof headers.get !== 'function') {
    return null;
  }
  const cursor = headers.get('X-Next-Cursor');
  return cursor && String(cursor).trim().length > 0 ? String(cursor).trim() : null;
}

/**
 * Load federation instances across paginated API responses.
 *
 * @param {Function} fetchImpl Fetch-compatible function.
 * @returns {Promise<Array<Object>>} Combined instance payload rows.
 */
async function fetchAllInstances(fetchImpl) {
  const results = [];
  let cursor = null;
  let pageCount = 0;
  const limit = 500;

  while (pageCount < 100) {
    const query = new URLSearchParams({ limit: String(limit) });
    if (cursor) {
      query.set('cursor', cursor);
    }

    const response = await fetchImpl(`/api/instances?${query.toString()}`, {
      headers: { Accept: 'application/json' },
      credentials: 'omit',
    });

    if (!response || typeof response.json !== 'function' || !response.ok) {
      return results;
    }

    const payload = await response.json();
    if (!Array.isArray(payload) || payload.length === 0) {
      return results;
    }

    results.push(...payload);
    cursor = readNextCursorHeader(response);
    pageCount += 1;
    if (!cursor) {
      return results;
    }
  }

  return results;
}
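Read in isolation, the `fetchAllInstances` loop above is easiest to verify against a stub fetch. The sketch below is a standalone copy of the function from this diff; the stub pages, the example domains, and the `abc` cursor token are invented test fixtures, not real API data.

```javascript
// Standalone copy of the keyset-cursor loop from the diff, run against a
// stub fetch. The two pages and the 'abc' cursor token are invented fixtures.
function readNextCursorHeader(response) {
  const headers = response && response.headers;
  if (!headers || typeof headers.get !== 'function') return null;
  const cursor = headers.get('X-Next-Cursor');
  return cursor && String(cursor).trim().length > 0 ? String(cursor).trim() : null;
}

async function fetchAllInstances(fetchImpl) {
  const results = [];
  let cursor = null;
  let pageCount = 0;
  const limit = 500;
  while (pageCount < 100) {
    const query = new URLSearchParams({ limit: String(limit) });
    if (cursor) query.set('cursor', cursor);
    const response = await fetchImpl(`/api/instances?${query.toString()}`, {
      headers: { Accept: 'application/json' },
      credentials: 'omit',
    });
    if (!response || typeof response.json !== 'function' || !response.ok) return results;
    const payload = await response.json();
    if (!Array.isArray(payload) || payload.length === 0) return results;
    results.push(...payload);
    cursor = readNextCursorHeader(response);
    pageCount += 1;
    if (!cursor) return results;
  }
  return results;
}

// Stub fetch serving two pages; the second page carries no next cursor.
const pages = new Map([
  ['', { rows: [{ domain: 'a.example' }, { domain: 'b.example' }], next: 'abc' }],
  ['abc', { rows: [{ domain: 'c.example' }], next: null }],
]);
const stubFetch = async url => {
  const cursor = new URL(url, 'http://localhost').searchParams.get('cursor') ?? '';
  const page = pages.get(cursor);
  return {
    ok: true,
    json: async () => page.rows,
    headers: { get: name => (name === 'X-Next-Cursor' ? page.next : null) },
  };
};

const allInstances = fetchAllInstances(stubFetch);
allInstances.then(rows => console.log(rows.length)); // → 3
```

Each page's `X-Next-Cursor` header feeds the next request; the loop stops when the header is absent or a page comes back empty.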

/**
 * Construct a navigable URL for the provided instance domain.
 *
@@ -237,20 +135,37 @@ export async function initializeInstanceSelector(options) {
    return;
  }

  let payload;
  let response;
  try {
    payload = await fetchAllInstances(fetchImpl);
    response = await fetchImpl('/api/instances', {
      headers: { Accept: 'application/json' },
      credentials: 'omit',
    });
  } catch (error) {
    console.warn('Failed to load federation instances', error);
    return;
  }

  if (!response || typeof response.json !== 'function') {
    return;
  }

  if (!response.ok) {
    return;
  }

  let payload;
  try {
    payload = await response.json();
  } catch (error) {
    console.warn('Invalid federation instances payload', error);
    return;
  }

  if (!Array.isArray(payload)) {
    return;
  }

  updateFederationNavCount({ documentObject: doc, count: payload.length });

  const sanitizedDomain = typeof instanceDomain === 'string' ? instanceDomain.trim().toLowerCase() : null;

  const sortedEntries = payload
@@ -323,4 +238,4 @@ export async function initializeInstanceSelector(options) {
  });
}

export const __test__ = { resolveInstanceLabel, updateFederationNavCount };
export const __test__ = { resolveInstanceLabel };

@@ -44,7 +44,6 @@ import {
  formatChatPresetTag
} from './chat-format.js';
import { initializeInstanceSelector } from './instance-selector.js';
import { initializeMobileMenu } from './mobile-menu.js';
import { MESSAGE_LIMIT, normaliseMessageLimit } from './message-limit.js';
import { CHAT_LOG_ENTRY_TYPES, buildChatTabModel, MAX_CHANNEL_INDEX } from './chat-log-tabs.js';
import { renderChatTabs } from './chat-tabs.js';
@@ -69,221 +68,6 @@ import {
  roleRenderOrder,
} from './role-helpers.js';

/**
 * Compute active-node counts from a local node array.
 *
 * @param {Array<Object>} nodes Node payloads.
 * @param {number} nowSeconds Reference timestamp.
 * @returns {{hour: number, day: number, week: number, month: number, sampled: boolean}} Local count snapshot.
 */
export function computeLocalActiveNodeStats(nodes, nowSeconds) {
  const safeNodes = Array.isArray(nodes) ? nodes : [];
  const referenceNow = Number.isFinite(nowSeconds) ? nowSeconds : Date.now() / 1000;
  const windows = [
    { key: 'hour', secs: 3600 },
    { key: 'day', secs: 86_400 },
    { key: 'week', secs: 7 * 86_400 },
    { key: 'month', secs: 30 * 86_400 }
  ];
  const counts = { sampled: true };
  for (const window of windows) {
    counts[window.key] = safeNodes.filter(node => {
      const lastHeard = Number(node?.last_heard);
      return Number.isFinite(lastHeard) && referenceNow - lastHeard <= window.secs;
    }).length;
  }
  return counts;
}
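The windowed count above can be checked with a standalone copy of the function; the two sample nodes and the fixed `now` timestamp below are invented fixtures.

```javascript
// Standalone copy of the windowed active-node count from the diff; node
// objects only need a numeric `last_heard` field in epoch seconds.
function computeLocalActiveNodeStats(nodes, nowSeconds) {
  const safeNodes = Array.isArray(nodes) ? nodes : [];
  const referenceNow = Number.isFinite(nowSeconds) ? nowSeconds : Date.now() / 1000;
  const windows = [
    { key: 'hour', secs: 3600 },
    { key: 'day', secs: 86_400 },
    { key: 'week', secs: 7 * 86_400 },
    { key: 'month', secs: 30 * 86_400 }
  ];
  const counts = { sampled: true };
  for (const w of windows) {
    counts[w.key] = safeNodes.filter(node => {
      const lastHeard = Number(node?.last_heard);
      return Number.isFinite(lastHeard) && referenceNow - lastHeard <= w.secs;
    }).length;
  }
  return counts;
}

// Two nodes: one heard ten minutes ago, one three days ago.
const now = 1_000_000;
const stats = computeLocalActiveNodeStats(
  [{ last_heard: now - 600 }, { last_heard: now - 3 * 86_400 }],
  now
);
console.log(stats); // → { sampled: true, hour: 1, day: 1, week: 2, month: 2 }
```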

/**
 * Parse and validate the `/api/stats` payload.
 *
 * @param {*} payload Candidate JSON object from the stats endpoint.
 * @returns {{hour: number, day: number, week: number, month: number, sampled: boolean}|null} Normalized stats or null.
 */
export function normaliseActiveNodeStatsPayload(payload) {
  const activeNodes = payload && typeof payload === 'object' ? payload.active_nodes : null;
  if (!activeNodes || typeof activeNodes !== 'object') {
    return null;
  }
  const hour = Number(activeNodes.hour);
  const day = Number(activeNodes.day);
  const week = Number(activeNodes.week);
  const month = Number(activeNodes.month);
  if (![hour, day, week, month].every(Number.isFinite)) {
    return null;
  }
  return {
    hour: Math.max(0, Math.trunc(hour)),
    day: Math.max(0, Math.trunc(day)),
    week: Math.max(0, Math.trunc(week)),
    month: Math.max(0, Math.trunc(month)),
    sampled: Boolean(payload.sampled)
  };
}
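A standalone copy of the validator shows the coercion and rejection behaviour; both payloads below are invented fixtures.

```javascript
// Standalone copy of the /api/stats payload validator from the diff.
function normaliseActiveNodeStatsPayload(payload) {
  const activeNodes = payload && typeof payload === 'object' ? payload.active_nodes : null;
  if (!activeNodes || typeof activeNodes !== 'object') return null;
  const hour = Number(activeNodes.hour);
  const day = Number(activeNodes.day);
  const week = Number(activeNodes.week);
  const month = Number(activeNodes.month);
  if (![hour, day, week, month].every(Number.isFinite)) return null;
  return {
    hour: Math.max(0, Math.trunc(hour)),
    day: Math.max(0, Math.trunc(day)),
    week: Math.max(0, Math.trunc(week)),
    month: Math.max(0, Math.trunc(month)),
    sampled: Boolean(payload.sampled)
  };
}

// Numeric strings are coerced and fractions truncated.
const ok = normaliseActiveNodeStatsPayload({
  active_nodes: { hour: '3', day: 10.9, week: 25, month: 40 },
  sampled: true
});
// ok → { hour: 3, day: 10, week: 25, month: 40, sampled: true }

// A non-numeric field makes the whole payload invalid.
const bad = normaliseActiveNodeStatsPayload({ active_nodes: { hour: 'x' } });
// bad → null
```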

const ACTIVE_NODE_STATS_CACHE_TTL_MS = 30_000;
let activeNodeStatsCache = null;
let activeNodeStatsFetchPromise = null;
let activeNodeStatsFetchImpl = null;

/**
 * Fetch active-node stats from the dedicated API endpoint with short-lived caching.
 *
 * @param {Function} fetchImpl Fetch implementation.
 * @returns {Promise<{hour: number, day: number, week: number, month: number, sampled: boolean} | null>} Normalized stats or null.
 */
async function fetchRemoteActiveNodeStats(fetchImpl) {
  const nowMs = Date.now();
  if (activeNodeStatsCache?.fetchImpl === fetchImpl && activeNodeStatsCache.expiresAt > nowMs) {
    return activeNodeStatsCache.stats;
  }
  if (activeNodeStatsFetchPromise && activeNodeStatsFetchImpl === fetchImpl) {
    return activeNodeStatsFetchPromise;
  }

  activeNodeStatsFetchImpl = fetchImpl;
  activeNodeStatsFetchPromise = (async () => {
    const response = await fetchImpl('/api/stats', { cache: 'no-store' });
    if (!response?.ok) {
      throw new Error(`stats HTTP ${response?.status ?? 'unknown'}`);
    }
    const payload = await response.json();
    const normalized = normaliseActiveNodeStatsPayload(payload);
    if (!normalized) {
      throw new Error('invalid stats payload');
    }
    activeNodeStatsCache = {
      fetchImpl,
      expiresAt: Date.now() + ACTIVE_NODE_STATS_CACHE_TTL_MS,
      stats: normalized
    };
    return normalized;
  })();

  try {
    return await activeNodeStatsFetchPromise;
  } finally {
    activeNodeStatsFetchPromise = null;
    activeNodeStatsFetchImpl = null;
  }
}

/**
 * Fetch active-node stats from the dedicated API endpoint with local fallback.
 *
 * @param {{
 *   nodes: Array<Object>,
 *   nowSeconds: number,
 *   fetchImpl?: Function
 * }} params Fetch parameters.
 * @returns {Promise<{hour: number, day: number, week: number, month: number, sampled: boolean}>} Stats snapshot.
 */
export async function fetchActiveNodeStats({ nodes, nowSeconds, fetchImpl = fetch }) {
  try {
    const normalized = await fetchRemoteActiveNodeStats(fetchImpl);
    if (normalized) return normalized;
    throw new Error('invalid stats payload');
  } catch (error) {
    console.debug('Failed to fetch /api/stats; using local active-node counts.', error);
    return computeLocalActiveNodeStats(nodes, nowSeconds);
  }
}

/**
 * Format the dashboard refresh-info sentence for active-node counts.
 *
 * @param {{channel: string, frequency: string, stats: {hour:number,day:number,week:number,month:number,sampled:boolean}}} params Formatting data.
 * @returns {string} User-visible sentence for the dashboard header.
 */
export function formatActiveNodeStatsText({ channel, frequency, stats }) {
  const parts = [
    `${Number(stats?.hour) || 0}/hour`,
    `${Number(stats?.day) || 0}/day`,
    `${Number(stats?.week) || 0}/week`,
    `${Number(stats?.month) || 0}/month`
  ];
  const suffix = stats?.sampled ? ' (sampled)' : '';
  return `${channel} (${frequency}) — active nodes: ${parts.join(', ')}${suffix}.`;
}
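A standalone copy of the formatter shows the resulting header sentence; the channel and frequency values below are invented placeholders, not real configuration.

```javascript
// Standalone copy of the refresh-info formatter from the diff.
function formatActiveNodeStatsText({ channel, frequency, stats }) {
  const parts = [
    `${Number(stats?.hour) || 0}/hour`,
    `${Number(stats?.day) || 0}/day`,
    `${Number(stats?.week) || 0}/week`,
    `${Number(stats?.month) || 0}/month`
  ];
  const suffix = stats?.sampled ? ' (sampled)' : '';
  return `${channel} (${frequency}) — active nodes: ${parts.join(', ')}${suffix}.`;
}

// Hypothetical channel/frequency values for illustration only.
const text = formatActiveNodeStatsText({
  channel: '#MediumFast',
  frequency: '868MHz',
  stats: { hour: 4, day: 12, week: 30, month: 55, sampled: false }
});
console.log(text);
// → "#MediumFast (868MHz) — active nodes: 4/hour, 12/day, 30/week, 55/month."
```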

/**
 * Parse the next-page cursor header from an API response.
 *
 * @param {*} response Fetch response candidate.
 * @returns {string|null} Cursor token for the next page.
 */
export function readNextCursorHeader(response) {
  const headers = response && response.headers;
  if (!headers || typeof headers.get !== 'function') {
    return null;
  }
  const cursor = headers.get('X-Next-Cursor');
  return cursor && String(cursor).trim().length > 0 ? String(cursor).trim() : null;
}

/**
 * Fetch an API collection endpoint using keyset cursor pagination.
 *
 * @param {{
 *   path: string,
 *   limit: number,
 *   maxRows?: number,
 *   params?: Record<string, string>,
 *   fetchImpl?: Function
 * }} options Request options.
 * @returns {Promise<Array<Object>>} Aggregated array of collection rows.
 */
export async function fetchPaginatedCollection({
  path,
  limit,
  maxRows = 5000,
  params = {},
  fetchImpl = fetch
}) {
  const safePath = typeof path === 'string' ? path : '';
  if (!safePath) {
    return [];
  }
  const safeLimit = Number.isFinite(limit) && limit > 0 ? Math.floor(limit) : 200;
  const safeMaxRows = Number.isFinite(maxRows) && maxRows > 0 ? Math.floor(maxRows) : safeLimit;
  const results = [];
  let cursor = null;
  let pageCount = 0;

  while (results.length < safeMaxRows && pageCount < 100) {
    const query = new URLSearchParams({ limit: String(safeLimit) });
    Object.entries(params || {}).forEach(([key, value]) => {
      if (value == null) return;
      const text = String(value).trim();
      if (!text) return;
      query.set(key, text);
    });
    if (cursor) {
      query.set('cursor', cursor);
    }
    const response = await fetchImpl(`${safePath}?${query.toString()}`, { cache: 'no-store' });
    if (!response || !response.ok) {
      throw new Error('HTTP ' + (response ? response.status : 'unknown'));
    }
    const payload = await response.json();
    if (!Array.isArray(payload)) {
      throw new Error('invalid paginated payload');
    }
    if (payload.length === 0) {
      break;
    }
    results.push(...payload);
    cursor = readNextCursorHeader(response);
    pageCount += 1;
    if (!cursor) {
      break;
    }
  }

  return results.slice(0, safeMaxRows);
}
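The `params` merging and `maxRows` cap in `fetchPaginatedCollection` above can be demonstrated with a stub fetch that always offers another page; the endpoint, the `since` value, and the row numbers below are invented fixtures.

```javascript
// Standalone copy of fetchPaginatedCollection from the diff, exercised to
// show how extra params join the query and how maxRows caps the result.
function readNextCursorHeader(response) {
  const headers = response && response.headers;
  if (!headers || typeof headers.get !== 'function') return null;
  const cursor = headers.get('X-Next-Cursor');
  return cursor && String(cursor).trim().length > 0 ? String(cursor).trim() : null;
}

async function fetchPaginatedCollection({ path, limit, maxRows = 5000, params = {}, fetchImpl }) {
  const safePath = typeof path === 'string' ? path : '';
  if (!safePath) return [];
  const safeLimit = Number.isFinite(limit) && limit > 0 ? Math.floor(limit) : 200;
  const safeMaxRows = Number.isFinite(maxRows) && maxRows > 0 ? Math.floor(maxRows) : safeLimit;
  const results = [];
  let cursor = null;
  let pageCount = 0;
  while (results.length < safeMaxRows && pageCount < 100) {
    const query = new URLSearchParams({ limit: String(safeLimit) });
    Object.entries(params || {}).forEach(([key, value]) => {
      if (value == null) return;
      const text = String(value).trim();
      if (!text) return;
      query.set(key, text);
    });
    if (cursor) query.set('cursor', cursor);
    const response = await fetchImpl(`${safePath}?${query.toString()}`, { cache: 'no-store' });
    if (!response || !response.ok) throw new Error('HTTP ' + (response ? response.status : 'unknown'));
    const payload = await response.json();
    if (!Array.isArray(payload)) throw new Error('invalid paginated payload');
    if (payload.length === 0) break;
    results.push(...payload);
    cursor = readNextCursorHeader(response);
    pageCount += 1;
    if (!cursor) break;
  }
  return results.slice(0, safeMaxRows);
}

// Stub endpoint: every page returns three rows and offers a next cursor.
const requestedUrls = [];
const stubFetch = async url => {
  requestedUrls.push(url);
  return {
    ok: true,
    json: async () => [1, 2, 3],
    headers: { get: () => 'next-token' },
  };
};

const capped = fetchPaginatedCollection({
  path: '/api/nodes',
  limit: 3,
  maxRows: 5, // stop paging once five rows are collected
  params: { since: '12345' },
  fetchImpl: stubFetch,
});
capped.then(rows => console.log(rows)); // → [ 1, 2, 3, 1, 2 ]
```

Note that the `since` parameter lands in every page request, and the final `slice` trims the over-fetched second page down to `maxRows`.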

/**
 * Entry point for the interactive dashboard. Wires up event listeners,
 * initializes the map, and triggers the first data refresh cycle.
@@ -335,8 +119,6 @@ export function initializeApp(config) {
  const isChatView = bodyClassList ? bodyClassList.contains('view-chat') : false;
  const isMapView = bodyClassList ? bodyClassList.contains('view-map') : false;
  const mapZoomOverride = Number.isFinite(config.mapZoom) ? Number(config.mapZoom) : null;

  initializeMobileMenu({ documentObject: document, windowObject: window });
  /**
   * Column sorter configuration for the node table.
   *
@@ -410,10 +192,9 @@ export function initializeApp(config) {
  });
  const NODE_LIMIT = 1000;
  const TRACE_LIMIT = 200;
  const TRACE_MAX_AGE_SECONDS = 28 * 24 * 60 * 60;
  const TRACE_MAX_AGE_SECONDS = 7 * 24 * 60 * 60;
  const SNAPSHOT_LIMIT = SNAPSHOT_WINDOW;
  const CHAT_LIMIT = MESSAGE_LIMIT;
  const RECENT_COLLECTION_WINDOW_SECONDS = 7 * 24 * 60 * 60;
  const CHAT_RECENT_WINDOW_SECONDS = 7 * 24 * 60 * 60;
  const REFRESH_MS = config.refreshMs;
  const CHAT_ENABLED = Boolean(config.chatEnabled);
@@ -438,7 +219,6 @@ export function initializeApp(config) {

  /** @type {ReturnType<typeof setTimeout>|null} */
  let refreshTimer = null;
  let refreshInfoRequestId = 0;

  /**
   * Close any open short-info overlays that do not contain the provided anchor.
@@ -3127,14 +2907,6 @@ export function initializeApp(config) {
   * @returns {HTMLElement} Chat log element.
   */
  function createMessageChatEntry(m) {
    let plainText = '';
    if (m?.text != null) {
      plainText = String(m.text).trim();
    }
    if (m?.encrypted && plainText === 'GAA=') {
      return null;
    }

    const div = document.createElement('div');
    const tsSeconds = resolveTimestampSeconds(
      m.rx_time ?? m.rxTime,
@@ -3366,28 +3138,18 @@ export function initializeApp(config) {
    }
    const getDivider = createDateDividerFactory();
    const limitedEntries = entries.slice(Math.max(entries.length - CHAT_LIMIT, 0));
    let renderedEntries = 0;
    for (const entry of limitedEntries) {
      if (!entry || typeof entry.ts !== 'number') {
        continue;
      }
      if (typeof renderEntry !== 'function') {
        continue;
      }
      const node = renderEntry(entry);
      if (!node) {
        continue;
      }
      const divider = getDivider(entry.ts);
      if (divider) fragment.appendChild(divider);
      fragment.appendChild(node);
      renderedEntries += 1;
    }
    if (renderedEntries === 0 && emptyLabel) {
      const empty = document.createElement('p');
      empty.className = 'chat-empty';
      empty.textContent = emptyLabel;
      fragment.appendChild(empty);
      if (typeof renderEntry === 'function') {
        const node = renderEntry(entry);
        if (node) {
          fragment.appendChild(node);
        }
      }
    }
    return fragment;
  }
@@ -3683,14 +3445,9 @@ export function initializeApp(config) {
   */
  async function fetchNodes(limit = NODE_LIMIT) {
    const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
    const maxRows = Math.max(effectiveLimit, effectiveLimit * SNAPSHOT_LIMIT);
    const nowSec = Math.floor(Date.now() / 1000);
    return fetchPaginatedCollection({
      path: '/api/nodes',
      limit: effectiveLimit,
      maxRows,
      params: { since: String(nowSec - RECENT_COLLECTION_WINDOW_SECONDS) }
    });
    const r = await fetch(`/api/nodes?limit=${effectiveLimit}`, { cache: 'no-store' });
    if (!r.ok) throw new Error('HTTP ' + r.status);
    return r.json();
  }

  /**
@@ -3719,19 +3476,14 @@ export function initializeApp(config) {
  async function fetchMessages(limit = MESSAGE_LIMIT, options = {}) {
    if (!CHAT_ENABLED) return [];
    const safeLimit = normaliseMessageLimit(limit);
    const nowSec = Math.floor(Date.now() / 1000);
    const params = {};
    const params = new URLSearchParams({ limit: String(safeLimit) });
    if (options && options.encrypted) {
      params.encrypted = 'true';
      params.set('encrypted', 'true');
    }
    params.since = String(nowSec - CHAT_RECENT_WINDOW_SECONDS);
    const maxRows = Math.max(safeLimit, safeLimit * SNAPSHOT_LIMIT);
    return fetchPaginatedCollection({
      path: '/api/messages',
      limit: safeLimit,
      maxRows,
      params
    });
    const query = params.toString();
    const r = await fetch(`/api/messages?${query}`, { cache: 'no-store' });
    if (!r.ok) throw new Error('HTTP ' + r.status);
    return r.json();
  }

  /**
@@ -3742,14 +3494,9 @@ export function initializeApp(config) {
   */
  async function fetchNeighbors(limit = NODE_LIMIT) {
    const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
    const maxRows = Math.max(effectiveLimit, effectiveLimit * SNAPSHOT_LIMIT);
    const nowSec = Math.floor(Date.now() / 1000);
    return fetchPaginatedCollection({
      path: '/api/neighbors',
      limit: effectiveLimit,
      maxRows,
      params: { since: String(nowSec - RECENT_COLLECTION_WINDOW_SECONDS) }
    });
    const r = await fetch(`/api/neighbors?limit=${effectiveLimit}`, { cache: 'no-store' });
    if (!r.ok) throw new Error('HTTP ' + r.status);
    return r.json();
  }

  /**
@@ -3761,14 +3508,9 @@ export function initializeApp(config) {
  async function fetchTraces(limit = TRACE_LIMIT) {
    const safeLimit = Number.isFinite(limit) && limit > 0 ? Math.floor(limit) : TRACE_LIMIT;
    const effectiveLimit = Math.min(safeLimit, NODE_LIMIT);
    const maxRows = Math.max(effectiveLimit, effectiveLimit * SNAPSHOT_LIMIT);
    const nowSec = Math.floor(Date.now() / 1000);
    const traces = await fetchPaginatedCollection({
      path: '/api/traces',
      limit: effectiveLimit,
      maxRows,
      params: { since: String(nowSec - TRACE_MAX_AGE_SECONDS) }
    });
    const r = await fetch(`/api/traces?limit=${effectiveLimit}`, { cache: 'no-store' });
    if (!r.ok) throw new Error('HTTP ' + r.status);
    const traces = await r.json();
    return filterRecentTraces(traces, TRACE_MAX_AGE_SECONDS);
  }

@@ -3780,14 +3522,9 @@ export function initializeApp(config) {
   */
  async function fetchTelemetry(limit = NODE_LIMIT) {
    const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
    const maxRows = Math.max(effectiveLimit, effectiveLimit * SNAPSHOT_LIMIT);
    const nowSec = Math.floor(Date.now() / 1000);
    return fetchPaginatedCollection({
      path: '/api/telemetry',
      limit: effectiveLimit,
      maxRows,
      params: { since: String(nowSec - RECENT_COLLECTION_WINDOW_SECONDS) }
    });
    const r = await fetch(`/api/telemetry?limit=${effectiveLimit}`, { cache: 'no-store' });
    if (!r.ok) throw new Error('HTTP ' + r.status);
    return r.json();
  }

  /**
@@ -3798,14 +3535,9 @@ export function initializeApp(config) {
   */
  async function fetchPositions(limit = NODE_LIMIT) {
    const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
    const maxRows = Math.max(effectiveLimit, effectiveLimit * SNAPSHOT_LIMIT);
    const nowSec = Math.floor(Date.now() / 1000);
    return fetchPaginatedCollection({
      path: '/api/positions',
      limit: effectiveLimit,
      maxRows,
      params: { since: String(nowSec - RECENT_COLLECTION_WINDOW_SECONDS) }
    });
    const r = await fetch(`/api/positions?limit=${effectiveLimit}`, { cache: 'no-store' });
    if (!r.ok) throw new Error('HTTP ' + r.status);
    return r.json();
  }

  /**
@@ -4642,16 +4374,15 @@ export function initializeApp(config) {
    if (!refreshInfo || !isDashboardView) {
      return;
    }
    const requestId = ++refreshInfoRequestId;
    void fetchActiveNodeStats({ nodes, nowSeconds: nowSec }).then(stats => {
      if (requestId !== refreshInfoRequestId) {
        return;
      }
      refreshInfo.textContent = formatActiveNodeStatsText({
        channel: config.channel,
        frequency: config.frequency,
        stats
      });
    });
    const windows = [
      { label: 'hour', secs: 3600 },
      { label: 'day', secs: 86400 },
      { label: 'week', secs: 7 * 86400 },
    ];
    const counts = windows.map(w => {
      const c = nodes.filter(n => n.last_heard && nowSec - Number(n.last_heard) <= w.secs).length;
      return `${c}/${w.label}`;
    }).join(', ');
    refreshInfo.textContent = `${config.channel} (${config.frequency}) — active nodes: ${counts}.`;
  }
}

@@ -1,271 +0,0 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

const MOBILE_MENU_MEDIA_QUERY = '(max-width: 900px)';
const FOCUSABLE_SELECTOR = [
  'a[href]',
  'button:not([disabled])',
  'input:not([disabled])',
  'select:not([disabled])',
  'textarea:not([disabled])',
  '[tabindex]:not([tabindex="-1"])'
].join(', ');

/**
 * Collect the elements that can receive focus within a container.
 *
 * @param {?Element} container DOM node hosting focusable descendants.
 * @returns {Array<Element>} Ordered list of focusable elements.
 */
function resolveFocusableElements(container) {
  if (!container || typeof container.querySelectorAll !== 'function') {
    return [];
  }
  const candidates = Array.from(container.querySelectorAll(FOCUSABLE_SELECTOR));
  return candidates.filter(candidate => {
    if (!candidate || typeof candidate.getAttribute !== 'function') {
      return false;
    }
    return candidate.getAttribute('aria-hidden') !== 'true';
  });
}
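The `resolveFocusableElements` helper removed above needs no real DOM to exercise: any object exposing a `querySelectorAll` method will do. The stub container and stub elements below are invented stand-ins for real DOM nodes.

```javascript
// Standalone copy of the deleted resolveFocusableElements helper, tested
// against stub objects instead of real DOM elements.
const FOCUSABLE_SELECTOR = [
  'a[href]',
  'button:not([disabled])',
  'input:not([disabled])',
  'select:not([disabled])',
  'textarea:not([disabled])',
  '[tabindex]:not([tabindex="-1"])'
].join(', ');

function resolveFocusableElements(container) {
  if (!container || typeof container.querySelectorAll !== 'function') {
    return [];
  }
  const candidates = Array.from(container.querySelectorAll(FOCUSABLE_SELECTOR));
  return candidates.filter(candidate => {
    if (!candidate || typeof candidate.getAttribute !== 'function') {
      return false;
    }
    return candidate.getAttribute('aria-hidden') !== 'true';
  });
}

// One visible link and one aria-hidden button: only the link survives.
const visibleLink = { getAttribute: () => null };
const hiddenButton = { getAttribute: name => (name === 'aria-hidden' ? 'true' : null) };
const stubContainer = { querySelectorAll: () => [visibleLink, hiddenButton] };

const focusables = resolveFocusableElements(stubContainer);
console.log(focusables.length); // → 1
```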
|
||||
|
||||
/**
|
||||
* Build a menu controller for handling toggle state, focus trapping, and
|
||||
* responsive layout swapping.
|
||||
*
|
||||
* @param {{
|
||||
* documentObject?: Document,
|
||||
* windowObject?: Window
|
||||
* }} [options]
|
||||
* @returns {{
|
||||
* initialize: () => void,
|
||||
* openMenu: () => void,
|
||||
* closeMenu: () => void,
|
||||
* syncLayout: () => void
|
||||
* }}
|
||||
*/
|
||||
function createMobileMenuController(options = {}) {
|
||||
const documentObject = options.documentObject || document;
|
||||
const windowObject = options.windowObject || window;
|
||||
const menuToggle = documentObject.getElementById('mobileMenuToggle');
|
||||
const menu = documentObject.getElementById('mobileMenu');
|
||||
const menuPanel = menu ? menu.querySelector('.mobile-menu__panel') : null;
|
||||
const closeTriggers = menu ? Array.from(menu.querySelectorAll('[data-mobile-menu-close]')) : [];
|
||||
const menuLinks = menu ? Array.from(menu.querySelectorAll('a')) : [];
|
||||
const body = documentObject.body;
|
||||
const mediaQuery = windowObject.matchMedia
|
||||
? windowObject.matchMedia(MOBILE_MENU_MEDIA_QUERY)
|
||||
: null;
|
||||
let isOpen = false;
|
||||
let lastActive = null;
|
||||
|
||||
/**
|
||||
* Toggle the ``aria-expanded`` state on the menu trigger.
|
||||
*
|
||||
* @param {boolean} expanded Whether the menu is open.
|
||||
* @returns {void}
|
||||
*/
|
||||
  function setExpandedState(expanded) {
    if (!menuToggle || typeof menuToggle.setAttribute !== 'function') {
      return;
    }
    menuToggle.setAttribute('aria-expanded', expanded ? 'true' : 'false');
  }

  /**
   * Synchronize the meta row placement based on the active media query.
   *
   * @returns {void}
   */
  function syncLayout() {
    return;
  }

  /**
   * Open the slide-in menu and trap focus within the panel.
   *
   * @returns {void}
   */
  function openMenu() {
    if (!menu || !menuToggle || !menuPanel) {
      return;
    }
    syncLayout();
    menu.hidden = false;
    menu.classList.add('is-open');
    if (body && body.classList) {
      body.classList.add('menu-open');
    }
    setExpandedState(true);
    isOpen = true;
    lastActive = documentObject.activeElement || null;
    const focusables = resolveFocusableElements(menuPanel);
    const focusTarget = focusables[0] || menuPanel;
    if (focusTarget && typeof focusTarget.focus === 'function') {
      focusTarget.focus();
    }
  }

  /**
   * Close the menu and restore focus to the trigger.
   *
   * @returns {void}
   */
  function closeMenu() {
    if (!menu || !menuToggle) {
      return;
    }
    menu.classList.remove('is-open');
    menu.hidden = true;
    if (body && body.classList) {
      body.classList.remove('menu-open');
    }
    setExpandedState(false);
    isOpen = false;
    if (lastActive && typeof lastActive.focus === 'function') {
      lastActive.focus();
    }
  }

  /**
   * Toggle open or closed based on the trigger interaction.
   *
   * @param {Event} event Click event originating from the trigger.
   * @returns {void}
   */
  function handleToggleClick(event) {
    if (event && typeof event.preventDefault === 'function') {
      event.preventDefault();
    }
    if (isOpen) {
      closeMenu();
    } else {
      openMenu();
    }
  }

  /**
   * Trap tab focus within the menu panel while open.
   *
   * @param {KeyboardEvent} event Keydown event from the panel.
   * @returns {void}
   */
  function handleKeydown(event) {
    if (!isOpen || !event) {
      return;
    }
    if (event.key === 'Escape') {
      event.preventDefault();
      closeMenu();
      return;
    }
    if (event.key !== 'Tab') {
      return;
    }
    const focusables = resolveFocusableElements(menuPanel);
    if (!focusables.length) {
      return;
    }
    const first = focusables[0];
    const last = focusables[focusables.length - 1];
    const active = documentObject.activeElement;
    if (event.shiftKey && active === first) {
      event.preventDefault();
      last.focus();
    } else if (!event.shiftKey && active === last) {
      event.preventDefault();
      first.focus();
    }
  }

  /**
   * Close the menu when navigation state changes.
   *
   * @returns {void}
   */
  function handleRouteChange() {
    if (isOpen) {
      closeMenu();
    }
  }

  /**
   * Attach event listeners and sync initial layout.
   *
   * @returns {void}
   */
  function initialize() {
    if (!menuToggle || !menu) {
      return;
    }
    menuToggle.addEventListener('click', handleToggleClick);
    closeTriggers.forEach(trigger => {
      trigger.addEventListener('click', closeMenu);
    });
    menuLinks.forEach(link => {
      link.addEventListener('click', closeMenu);
    });
    if (menuPanel && typeof menuPanel.addEventListener === 'function') {
      menuPanel.addEventListener('keydown', handleKeydown);
    }
    if (mediaQuery) {
      if (typeof mediaQuery.addEventListener === 'function') {
        mediaQuery.addEventListener('change', syncLayout);
      } else if (typeof mediaQuery.addListener === 'function') {
        mediaQuery.addListener(syncLayout);
      }
    }
    if (windowObject && typeof windowObject.addEventListener === 'function') {
      windowObject.addEventListener('hashchange', handleRouteChange);
      windowObject.addEventListener('popstate', handleRouteChange);
    }
    syncLayout();
    setExpandedState(false);
  }

  return {
    initialize,
    openMenu,
    closeMenu,
    syncLayout,
  };
}

/**
 * Initialize the mobile menu using the live DOM environment.
 *
 * @param {{
 *   documentObject?: Document,
 *   windowObject?: Window
 * }} [options]
 * @returns {{
 *   initialize: () => void,
 *   openMenu: () => void,
 *   closeMenu: () => void,
 *   syncLayout: () => void
 * }}
 */
export function initializeMobileMenu(options = {}) {
  const controller = createMobileMenuController(options);
  controller.initialize();
  return controller;
}

export const __test__ = {
  createMobileMenuController,
  resolveFocusableElements,
};

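// Illustrative sketch (not part of the module above): the wrap-around rule that
// handleKeydown implements. Shift+Tab on the first focusable element wraps
// focus to the last one, Tab on the last wraps back to the first, and any
// other position is left to the browser's native tab order.

```javascript
function nextFocusIndex(activeIndex, shiftKey, count) {
  // Shift+Tab on the first focusable wraps to the last.
  if (shiftKey && activeIndex === 0) return count - 1;
  // Tab on the last focusable wraps to the first.
  if (!shiftKey && activeIndex === count - 1) return 0;
  // null means: do not intercept, keep native tab order.
  return null;
}
```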
@@ -14,7 +14,7 @@
 * limitations under the License.
 */

import { fetchNodeDetailHtml } from './node-page.js';
import { fetchNodeDetailHtml, mountTelemetryChartsWithRetry } from './node-page.js';

/**
 * Escape a string for safe HTML injection.
@@ -68,6 +68,9 @@ function hasValidReference(reference) {
 *   fetchImpl?: Function,
 *   refreshImpl?: Function,
 *   renderShortHtml?: Function,
 *   mountCharts?: Function,
 *   uPlotImpl?: Function,
 *   loadUPlot?: Function,
 *   privateMode?: boolean,
 *   logger?: Console
 * }} [options] Behaviour overrides.
@@ -101,6 +104,9 @@ export function createNodeDetailOverlayManager(options = {}) {
  const fetchImpl = options.fetchImpl;
  const refreshImpl = options.refreshImpl;
  const renderShortHtml = options.renderShortHtml;
  const mountCharts = typeof options.mountCharts === 'function' ? options.mountCharts : mountTelemetryChartsWithRetry;
  const uPlotImpl = options.uPlotImpl;
  const loadUPlot = options.loadUPlot;

  let requestToken = 0;
  let lastTrigger = null;
@@ -198,16 +204,21 @@ export function createNodeDetailOverlayManager(options = {}) {
    }
    const currentToken = ++requestToken;
    try {
      const html = await fetchDetail(reference, {
      const result = await fetchDetail(reference, {
        fetchImpl,
        refreshImpl,
        renderShortHtml,
        privateMode,
        returnState: true,
      });
      if (currentToken !== requestToken) {
        return;
      }
      content.innerHTML = html;
      const resolvedHtml = typeof result === 'string' ? result : result?.html;
      content.innerHTML = resolvedHtml ?? '';
      if (result && typeof result === 'object' && Array.isArray(result.chartModels)) {
        mountCharts(result.chartModels, { root: content, uPlotImpl, loadUPlot });
      }
      if (typeof closeButton.focus === 'function') {
        closeButton.focus();
      }

@@ -124,6 +124,15 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
        ticks: 4,
        color: '#2ca25f',
      },
      {
        id: 'channelSecondary',
        position: 'right',
        label: 'Utilization (%)',
        min: 0,
        max: 100,
        ticks: 4,
        color: '#2ca25f',
      },
    ],
    series: [
      {
@@ -137,7 +146,7 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
      },
      {
        id: 'air',
        axis: 'channel',
        axis: 'channelSecondary',
        color: '#99d8c9',
        label: 'Air util tx',
        legend: 'Air util TX (%)',
@@ -162,13 +171,13 @@ const TELEMETRY_CHART_SPECS = Object.freeze([
      },
      {
        id: 'humidity',
        position: 'left',
        position: 'right',
        label: 'Humidity (%)',
        min: 0,
        max: 100,
        ticks: 4,
        color: '#91bfdb',
        visible: false,
        visible: true,
      },
    ],
    series: [
@@ -857,67 +866,6 @@ function createChartDimensions(spec) {
  };
}

/**
 * Compute the horizontal drawing position for an axis descriptor.
 *
 * @param {string} position Axis position keyword.
 * @param {Object} dims Chart dimensions.
 * @returns {number} X coordinate for the axis baseline.
 */
function resolveAxisX(position, dims) {
  switch (position) {
    case 'leftSecondary':
      return dims.margin.left - 32;
    case 'right':
      return dims.width - dims.margin.right;
    case 'rightSecondary':
      return dims.width - dims.margin.right + 32;
    case 'left':
    default:
      return dims.margin.left;
  }
}

/**
 * Compute the X coordinate for a timestamp constrained to the rolling window.
 *
 * @param {number} timestamp Timestamp in milliseconds.
 * @param {number} domainStart Start of the window in milliseconds.
 * @param {number} domainEnd End of the window in milliseconds.
 * @param {Object} dims Chart dimensions.
 * @returns {number} X coordinate inside the SVG viewport.
 */
function scaleTimestamp(timestamp, domainStart, domainEnd, dims) {
  const safeStart = Math.min(domainStart, domainEnd);
  const safeEnd = Math.max(domainStart, domainEnd);
  const span = Math.max(1, safeEnd - safeStart);
  const clamped = clamp(timestamp, safeStart, safeEnd);
  const ratio = (clamped - safeStart) / span;
  return dims.margin.left + ratio * dims.innerWidth;
}

/**
 * Convert a value bound to a specific axis into a Y coordinate.
 *
 * @param {number} value Series value.
 * @param {Object} axis Axis descriptor.
 * @param {Object} dims Chart dimensions.
 * @returns {number} Y coordinate.
 */
function scaleValueToAxis(value, axis, dims) {
  if (!axis) return dims.chartBottom;
  if (axis.scale === 'log') {
    const minLog = Math.log10(axis.min);
    const maxLog = Math.log10(axis.max);
    const safe = clamp(value, axis.min, axis.max);
    const ratio = (Math.log10(safe) - minLog) / (maxLog - minLog);
    return dims.chartBottom - ratio * dims.innerHeight;
  }
  const safe = clamp(value, axis.min, axis.max);
  const ratio = (safe - axis.min) / (axis.max - axis.min || 1);
  return dims.chartBottom - ratio * dims.innerHeight;
}

/**
 * Collect candidate containers that may hold telemetry values for a snapshot.
 *
@@ -1034,129 +982,15 @@ function resolveAxisMax(axis, seriesEntries) {
}

/**
 * Render a telemetry series as circles plus an optional translucent guide line.
 *
 * @param {Object} seriesConfig Series metadata.
 * @param {Array<{timestamp: number, value: number}>} points Series points.
 * @param {Object} axis Axis descriptor.
 * @param {Object} dims Chart dimensions.
 * @param {number} domainStart Window start timestamp.
 * @param {number} domainEnd Window end timestamp.
 * @returns {string} SVG markup for the series.
 */
function renderTelemetrySeries(seriesConfig, points, axis, dims, domainStart, domainEnd, { lineReducer } = {}) {
  if (!Array.isArray(points) || points.length === 0) {
    return '';
  }
  const convertPoint = point => {
    const cx = scaleTimestamp(point.timestamp, domainStart, domainEnd, dims);
    const cy = scaleValueToAxis(point.value, axis, dims);
    return { cx, cy, value: point.value };
  };
  const circleEntries = points.map(point => {
    const coords = convertPoint(point);
    const tooltip = formatSeriesPointValue(seriesConfig, point.value);
    const titleMarkup = tooltip ? `<title>${escapeHtml(tooltip)}</title>` : '';
    return `<circle class="node-detail__chart-point" cx="${coords.cx.toFixed(2)}" cy="${coords.cy.toFixed(2)}" r="3.2" fill="${seriesConfig.color}" aria-hidden="true">${titleMarkup}</circle>`;
  });
  const lineSource = typeof lineReducer === 'function' ? lineReducer(points) : points;
  const linePoints = Array.isArray(lineSource) && lineSource.length > 0 ? lineSource : points;
  const coordinates = linePoints.map(convertPoint);
  let line = '';
  if (coordinates.length > 1) {
    const path = coordinates
      .map((coord, idx) => `${idx === 0 ? 'M' : 'L'}${coord.cx.toFixed(2)} ${coord.cy.toFixed(2)}`)
      .join(' ');
    line = `<path class="node-detail__chart-trend" d="${path}" fill="none" stroke="${hexToRgba(seriesConfig.color, 0.5)}" stroke-width="1.5" aria-hidden="true"></path>`;
  }
  return `${line}${circleEntries.join('')}`;
}

/**
 * Render a vertical axis when visible.
 *
 * @param {Object} axis Axis descriptor.
 * @param {Object} dims Chart dimensions.
 * @returns {string} SVG markup for the axis or an empty string.
 */
function renderYAxis(axis, dims) {
  if (!axis || axis.visible === false) {
    return '';
  }
  const x = resolveAxisX(axis.position, dims);
  const ticks = axis.scale === 'log'
    ? buildLogTicks(axis.min, axis.max)
    : buildLinearTicks(axis.min, axis.max, axis.ticks);
  const tickElements = ticks
    .map(value => {
      const y = scaleValueToAxis(value, axis, dims);
      const tickLength = axis.position === 'left' || axis.position === 'leftSecondary' ? -4 : 4;
      const textAnchor = axis.position === 'left' || axis.position === 'leftSecondary' ? 'end' : 'start';
      const textOffset = axis.position === 'left' || axis.position === 'leftSecondary' ? -6 : 6;
      return `
        <g class="node-detail__chart-tick" aria-hidden="true">
          <line x1="${x}" y1="${y.toFixed(2)}" x2="${(x + tickLength).toFixed(2)}" y2="${y.toFixed(2)}"></line>
          <text x="${(x + textOffset).toFixed(2)}" y="${(y + 3).toFixed(2)}" text-anchor="${textAnchor}" dominant-baseline="middle">${escapeHtml(formatAxisTick(value, axis))}</text>
        </g>
      `;
    })
    .join('');
  const labelPadding = axis.position === 'left' || axis.position === 'leftSecondary' ? -56 : 56;
  const labelX = x + labelPadding;
  const labelY = (dims.chartTop + dims.chartBottom) / 2;
  const labelTransform = `rotate(-90 ${labelX.toFixed(2)} ${labelY.toFixed(2)})`;
  return `
    <g class="node-detail__chart-axis node-detail__chart-axis--y" aria-hidden="true">
      <line x1="${x}" y1="${dims.chartTop}" x2="${x}" y2="${dims.chartBottom}"></line>
      ${tickElements}
      <text class="node-detail__chart-axis-label" x="${labelX.toFixed(2)}" y="${labelY.toFixed(2)}" text-anchor="middle" dominant-baseline="middle" transform="${labelTransform}">${escapeHtml(axis.label)}</text>
    </g>
  `;
}

/**
 * Render the horizontal floating seven-day axis with midnight ticks.
 *
 * @param {Object} dims Chart dimensions.
 * @param {number} domainStart Window start timestamp.
 * @param {number} domainEnd Window end timestamp.
 * @param {Array<number>} tickTimestamps Midnight tick timestamps.
 * @returns {string} SVG markup for the X axis.
 */
function renderXAxis(dims, domainStart, domainEnd, tickTimestamps, { labelFormatter = formatCompactDate } = {}) {
  const y = dims.chartBottom;
  const ticks = tickTimestamps
    .map(ts => {
      const x = scaleTimestamp(ts, domainStart, domainEnd, dims);
      const labelY = y + 18;
      const xStr = x.toFixed(2);
      const yStr = labelY.toFixed(2);
      const label = labelFormatter(ts);
      return `
        <g class="node-detail__chart-tick" aria-hidden="true">
          <line class="node-detail__chart-grid-line" x1="${xStr}" y1="${dims.chartTop}" x2="${xStr}" y2="${dims.chartBottom}"></line>
          <text x="${xStr}" y="${yStr}" text-anchor="end" dominant-baseline="central" transform="rotate(-90 ${xStr} ${yStr})">${escapeHtml(label)}</text>
        </g>
      `;
    })
    .join('');
  return `
    <g class="node-detail__chart-axis node-detail__chart-axis--x" aria-hidden="true">
      <line x1="${dims.margin.left}" y1="${y}" x2="${dims.width - dims.margin.right}" y2="${y}"></line>
      ${ticks}
    </g>
  `;
}

/**
 * Render a single telemetry chart defined by ``spec``.
 * Build a telemetry chart model from a specification and series entries.
 *
 * @param {Object} spec Chart specification.
 * @param {Array<{timestamp: number, snapshot: Object}>} entries Telemetry entries.
 * @param {number} nowMs Reference timestamp.
 * @returns {string} Rendered chart markup or an empty string.
 * @param {Object} chartOptions Rendering overrides.
 * @returns {Object|null} Chart model or ``null`` when empty.
 */
function renderTelemetryChart(spec, entries, nowMs, chartOptions = {}) {
function buildTelemetryChartModel(spec, entries, nowMs, chartOptions = {}) {
  const windowMs = Number.isFinite(chartOptions.windowMs) && chartOptions.windowMs > 0 ? chartOptions.windowMs : TELEMETRY_WINDOW_MS;
  const timeRangeLabel = stringOrNull(chartOptions.timeRangeLabel) ?? 'Last 7 days';
  const domainEnd = nowMs;
@@ -1170,7 +1004,7 @@ function renderTelemetryChart(spec, entries, nowMs, chartOptions = {}) {
    })
    .filter(entry => entry != null);
  if (seriesEntries.length === 0) {
    return '';
    return null;
  }
  const adjustedAxes = spec.axes.map(axis => {
    const resolvedMax = resolveAxisMax(axis, seriesEntries);
@@ -1188,22 +1022,33 @@ function renderTelemetryChart(spec, entries, nowMs, chartOptions = {}) {
    })
    .filter(entry => entry != null);
  if (plottedSeries.length === 0) {
    return '';
    return null;
  }
  const axesMarkup = adjustedAxes.map(axis => renderYAxis(axis, dims)).join('');
  const tickBuilder = typeof chartOptions.xAxisTickBuilder === 'function' ? chartOptions.xAxisTickBuilder : buildMidnightTicks;
  const tickFormatter = typeof chartOptions.xAxisTickFormatter === 'function' ? chartOptions.xAxisTickFormatter : formatCompactDate;
  const ticks = tickBuilder(nowMs, windowMs);
  const xAxisMarkup = renderXAxis(dims, domainStart, domainEnd, ticks, { labelFormatter: tickFormatter });
  return {
    id: spec.id,
    title: spec.title,
    timeRangeLabel,
    domainStart,
    domainEnd,
    dims,
    axes: adjustedAxes,
    seriesEntries: plottedSeries,
    ticks: tickBuilder(nowMs, windowMs),
    tickFormatter,
    lineReducer: typeof chartOptions.lineReducer === 'function' ? chartOptions.lineReducer : null,
  };
}

  const seriesMarkup = plottedSeries
    .map(series =>
      renderTelemetrySeries(series.config, series.points, series.axis, dims, domainStart, domainEnd, {
        lineReducer: chartOptions.lineReducer,
      }),
    )
    .join('');
  const legendItems = plottedSeries
/**
 * Render a telemetry chart container for a chart model.
 *
 * @param {Object} model Chart model.
 * @returns {string} Chart markup.
 */
function renderTelemetryChartMarkup(model) {
  const legendItems = model.seriesEntries
    .map(series => {
      const legendLabel = stringOrNull(series.config.legend) ?? series.config.label;
      return `
@@ -1217,22 +1062,428 @@ function renderTelemetryChart(spec, entries, nowMs, chartOptions = {}) {
  const legendMarkup = legendItems
    ? `<div class="node-detail__chart-legend" aria-hidden="true">${legendItems}</div>`
    : '';
  const ariaLabel = `${model.title} over last seven days`;
  return `
    <figure class="node-detail__chart">
    <figure class="node-detail__chart" data-telemetry-chart-id="${escapeHtml(model.id)}">
      <figcaption class="node-detail__chart-header">
        <h4>${escapeHtml(spec.title)}</h4>
        <span>${escapeHtml(timeRangeLabel)}</span>
        <h4>${escapeHtml(model.title)}</h4>
        <span>${escapeHtml(model.timeRangeLabel)}</span>
      </figcaption>
      <svg viewBox="0 0 ${dims.width} ${dims.height}" preserveAspectRatio="xMidYMid meet" role="img" aria-label="${escapeHtml(`${spec.title} over last seven days`)}">
        ${axesMarkup}
        ${xAxisMarkup}
        ${seriesMarkup}
      </svg>
      <div class="node-detail__chart-plot" data-telemetry-plot role="img" aria-label="${escapeHtml(ariaLabel)}"></div>
      ${legendMarkup}
    </figure>
  `;
}
/**
 * Build a sorted timestamp index shared across series entries.
 *
 * @param {Array<Object>} seriesEntries Plotted series entries.
 * @param {Function|null} lineReducer Optional line reducer.
 * @returns {{timestamps: Array<number>, indexByTimestamp: Map<number, number>}} Timestamp index.
 */
function buildChartTimestampIndex(seriesEntries, lineReducer) {
  const timestampSet = new Set();
  for (const entry of seriesEntries) {
    if (!entry || !Array.isArray(entry.points)) continue;
    entry.points.forEach(point => {
      if (point && Number.isFinite(point.timestamp)) {
        timestampSet.add(point.timestamp);
      }
    });
    if (typeof lineReducer === 'function') {
      const reduced = lineReducer(entry.points);
      if (Array.isArray(reduced)) {
        reduced.forEach(point => {
          if (point && Number.isFinite(point.timestamp)) {
            timestampSet.add(point.timestamp);
          }
        });
      }
    }
  }
  const timestamps = Array.from(timestampSet).sort((a, b) => a - b);
  const indexByTimestamp = new Map(timestamps.map((ts, idx) => [ts, idx]));
  return { timestamps, indexByTimestamp };
}

/**
 * Convert a list of points into an aligned values array.
 *
 * @param {Array<{timestamp: number, value: number}>} points Series points.
 * @param {Map<number, number>} indexByTimestamp Timestamp index.
 * @param {number} length Length of the output array.
 * @returns {Array<number|null>} Values aligned to timestamps.
 */
function mapSeriesValues(points, indexByTimestamp, length) {
  const values = Array.from({ length }, () => null);
  if (!Array.isArray(points)) {
    return values;
  }
  for (const point of points) {
    if (!point || !Number.isFinite(point.timestamp)) continue;
    const idx = indexByTimestamp.get(point.timestamp);
    if (idx == null) continue;
    values[idx] = Number.isFinite(point.value) ? point.value : null;
  }
  return values;
}
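// Illustrative sketch (not part of the module): the alignment that
// buildChartTimestampIndex and mapSeriesValues perform together. Every series
// is re-sampled onto one shared, sorted timestamp axis, with null where a
// series has no reading at that instant, matching uPlot's
// [timestamps, ...valueColumns] data layout.

```javascript
function alignSeriesSketch(seriesList) {
  // One sorted axis of every timestamp seen in any series.
  const timestamps = Array.from(
    new Set(seriesList.flatMap(points => points.map(p => p.timestamp))),
  ).sort((a, b) => a - b);
  const indexByTimestamp = new Map(timestamps.map((ts, i) => [ts, i]));
  // One column per series, null-padded where a series has no sample.
  const columns = seriesList.map(points => {
    const values = Array.from({ length: timestamps.length }, () => null);
    for (const p of points) values[indexByTimestamp.get(p.timestamp)] = p.value;
    return values;
  });
  return [timestamps, ...columns];
}
```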

/**
 * Build uPlot series and data arrays for a chart model.
 *
 * @param {Object} model Chart model.
 * @returns {{data: Array<Array<number|null>>, series: Array<Object>}} uPlot data and series config.
 */
function buildTelemetryChartData(model) {
  const { timestamps, indexByTimestamp } = buildChartTimestampIndex(model.seriesEntries, model.lineReducer);
  const data = [timestamps];
  const series = [{ label: 'Time' }];

  model.seriesEntries.forEach(entry => {
    const baseConfig = {
      label: entry.config.label,
      scale: entry.axis.id,
    };
    if (model.lineReducer) {
      const reducedPoints = model.lineReducer(entry.points);
      const linePoints = Array.isArray(reducedPoints) && reducedPoints.length > 0 ? reducedPoints : entry.points;
      const lineValues = mapSeriesValues(linePoints, indexByTimestamp, timestamps.length);
      series.push({
        ...baseConfig,
        stroke: hexToRgba(entry.config.color, 0.5),
        width: 1.5,
        points: { show: false },
      });
      data.push(lineValues);

      const pointValues = mapSeriesValues(entry.points, indexByTimestamp, timestamps.length);
      series.push({
        ...baseConfig,
        stroke: entry.config.color,
        width: 0,
        points: { show: true, size: 6, width: 1 },
      });
      data.push(pointValues);
    } else {
      const values = mapSeriesValues(entry.points, indexByTimestamp, timestamps.length);
      series.push({
        ...baseConfig,
        stroke: entry.config.color,
        width: 1.5,
        points: { show: true, size: 6, width: 1 },
      });
      data.push(values);
    }
  });

  return { data, series };
}

/**
 * Build uPlot chart configuration and data for a telemetry chart.
 *
 * @param {Object} model Chart model.
 * @returns {{options: Object, data: Array<Array<number|null>>}} uPlot config and data.
 */
function buildUPlotChartConfig(model, { width, height, axisColor, gridColor } = {}) {
  const { data, series } = buildTelemetryChartData(model);
  const fallbackWidth = Math.round(model.dims.width * 1.8);
  const resolvedWidth = Number.isFinite(width) && width > 0 ? width : fallbackWidth;
  const resolvedHeight = Number.isFinite(height) && height > 0 ? height : model.dims.height;
  const axisStroke = stringOrNull(axisColor) ?? '#5c6773';
  const gridStroke = stringOrNull(gridColor) ?? 'rgba(12, 15, 18, 0.08)';
  const axes = [
    {
      scale: 'x',
      side: 2,
      stroke: axisStroke,
      grid: { show: true, stroke: gridStroke },
      splits: () => model.ticks,
      values: (u, splits) => splits.map(value => model.tickFormatter(value)),
    },
  ];
  const scales = {
    x: {
      time: true,
      range: () => [model.domainStart, model.domainEnd],
    },
  };

  model.axes.forEach(axis => {
    const ticks = axis.scale === 'log'
      ? buildLogTicks(axis.min, axis.max)
      : buildLinearTicks(axis.min, axis.max, axis.ticks);
    const side = axis.position === 'right' || axis.position === 'rightSecondary' ? 1 : 3;
    axes.push({
      scale: axis.id,
      side,
      show: axis.visible !== false,
      stroke: axisStroke,
      grid: { show: false },
      label: axis.label,
      splits: () => ticks,
      values: (u, splits) => splits.map(value => formatAxisTick(value, axis)),
    });
    scales[axis.id] = {
      distr: axis.scale === 'log' ? 3 : 1,
      log: axis.scale === 'log' ? 10 : undefined,
      range: () => [axis.min, axis.max],
    };
  });

  return {
    options: {
      width: resolvedWidth,
      height: resolvedHeight,
      padding: [
        model.dims.margin.top,
        model.dims.margin.right,
        model.dims.margin.bottom,
        model.dims.margin.left,
      ],
      legend: { show: false },
      series,
      axes,
      scales,
    },
    data,
  };
}

/**
 * Instantiate uPlot charts for the provided chart models.
 *
 * @param {Array<Object>} chartModels Chart models to render.
 * @param {{root?: ParentNode, uPlotImpl?: Function}} [options] Rendering options.
 * @returns {Array<Object>} Instantiated uPlot charts.
 */
export function mountTelemetryCharts(chartModels, { root, uPlotImpl } = {}) {
  if (!Array.isArray(chartModels) || chartModels.length === 0) {
    return [];
  }
  const host = root ?? globalThis.document;
  if (!host || typeof host.querySelector !== 'function') {
    return [];
  }
  const uPlotCtor = typeof uPlotImpl === 'function' ? uPlotImpl : globalThis.uPlot;
  if (typeof uPlotCtor !== 'function') {
    console.warn('uPlot is unavailable; telemetry charts will not render.');
    return [];
  }

  const instances = [];
  const colorRoot = host?.ownerDocument?.body ?? host?.body ?? globalThis.document?.body ?? null;
  const axisColor = colorRoot && typeof globalThis.getComputedStyle === 'function'
    ? globalThis.getComputedStyle(colorRoot).getPropertyValue('--muted').trim()
    : null;
  const gridColor = colorRoot && typeof globalThis.getComputedStyle === 'function'
    ? globalThis.getComputedStyle(colorRoot).getPropertyValue('--line').trim()
    : null;
  chartModels.forEach(model => {
    const container = host.querySelector(`[data-telemetry-chart-id="${model.id}"]`);
    if (!container) return;
    const plotRoot = container.querySelector('[data-telemetry-plot]');
    if (!plotRoot) return;
    plotRoot.innerHTML = '';
    const plotWidth = plotRoot.clientWidth || plotRoot.getBoundingClientRect?.().width;
    const plotHeight = plotRoot.clientHeight || plotRoot.getBoundingClientRect?.().height;
    const { options, data } = buildUPlotChartConfig(model, {
      width: plotWidth ? Math.round(plotWidth) : undefined,
      height: plotHeight ? Math.round(plotHeight) : undefined,
      axisColor: axisColor || undefined,
      gridColor: gridColor || undefined,
    });
    const instance = new uPlotCtor(options, data, plotRoot);
    instance.__potatoMeshRoot = plotRoot;
    instances.push(instance);
  });
  registerTelemetryChartResize(instances);
  return instances;
}

const telemetryResizeRegistry = new Set();
const telemetryResizeObservers = new WeakMap();
let telemetryResizeListenerAttached = false;
let telemetryResizeDebounceId = null;
const TELEMETRY_RESIZE_DEBOUNCE_MS = 120;

function resizeUPlotInstance(instance) {
  if (!instance || typeof instance.setSize !== 'function') {
    return;
  }
  const root = instance.__potatoMeshRoot ?? instance.root ?? null;
  if (!root) return;
  const rect = typeof root.getBoundingClientRect === 'function' ? root.getBoundingClientRect() : null;
  const width = Number.isFinite(root.clientWidth) ? root.clientWidth : rect?.width;
  const height = Number.isFinite(root.clientHeight) ? root.clientHeight : rect?.height;
  if (!width || !height) return;
  instance.setSize({ width: Math.round(width), height: Math.round(height) });
}

function registerTelemetryChartResize(instances) {
  if (!Array.isArray(instances) || instances.length === 0) {
    return;
  }
  const scheduleResize = () => {
    if (telemetryResizeDebounceId != null) {
      clearTimeout(telemetryResizeDebounceId);
    }
    telemetryResizeDebounceId = setTimeout(() => {
      telemetryResizeDebounceId = null;
      telemetryResizeRegistry.forEach(instance => resizeUPlotInstance(instance));
    }, TELEMETRY_RESIZE_DEBOUNCE_MS);
  };
  instances.forEach(instance => {
    telemetryResizeRegistry.add(instance);
    resizeUPlotInstance(instance);
    if (typeof globalThis.ResizeObserver === 'function') {
      if (telemetryResizeObservers.has(instance)) return;
      const observer = new globalThis.ResizeObserver(scheduleResize);
      telemetryResizeObservers.set(instance, observer);
      const root = instance.__potatoMeshRoot ?? instance.root ?? null;
      if (root && typeof observer.observe === 'function') {
        observer.observe(root);
      }
    }
  });
  if (!telemetryResizeListenerAttached && typeof globalThis.addEventListener === 'function') {
    globalThis.addEventListener('resize', () => {
      scheduleResize();
    });
    telemetryResizeListenerAttached = true;
  }
}

function defaultLoadUPlot({ documentRef, onLoad }) {
  if (!documentRef || typeof documentRef.querySelector !== 'function') {
    return false;
  }
  const existing = documentRef.querySelector('script[data-uplot-loader="true"]');
  if (existing) {
    if (existing.dataset.loaded === 'true' && typeof onLoad === 'function') {
      onLoad();
    } else if (typeof existing.addEventListener === 'function' && typeof onLoad === 'function') {
      existing.addEventListener('load', onLoad, { once: true });
    }
    return true;
  }
  if (typeof documentRef.createElement !== 'function') {
    return false;
  }
  const script = documentRef.createElement('script');
  script.src = '/assets/vendor/uplot/uPlot.iife.min.js';
  script.defer = true;
  script.dataset.uplotLoader = 'true';
  if (typeof script.addEventListener === 'function') {
    script.addEventListener('load', () => {
      script.dataset.loaded = 'true';
      if (typeof onLoad === 'function') {
        onLoad();
      }
    });
  }
  const head = documentRef.head ?? documentRef.body;
  if (head && typeof head.appendChild === 'function') {
    head.appendChild(script);
    return true;
  }
  return false;
}

/**
 * Mount telemetry charts, retrying briefly if uPlot has not loaded yet.
 *
 * @param {Array<Object>} chartModels Chart models to render.
 * @param {{root?: ParentNode, uPlotImpl?: Function, loadUPlot?: Function}} [options] Rendering options.
 * @returns {Array<Object>} Instantiated uPlot charts.
 */
export function mountTelemetryChartsWithRetry(chartModels, { root, uPlotImpl, loadUPlot } = {}) {
  const instances = mountTelemetryCharts(chartModels, { root, uPlotImpl });
  if (instances.length > 0 || typeof uPlotImpl === 'function') {
    return instances;
  }
  const host = root ?? globalThis.document;
  if (!host || typeof host.querySelector !== 'function') {
    return instances;
  }
  let mounted = false;
  let attempts = 0;
  const maxAttempts = 10;
  const retryDelayMs = 50;
  const retry = () => {
    if (mounted) return;
    attempts += 1;
    const next = mountTelemetryCharts(chartModels, { root, uPlotImpl });
    if (next.length > 0) {
      mounted = true;
      return;
    }
    if (attempts >= maxAttempts) {
      return;
    }
    setTimeout(retry, retryDelayMs);
  };
  const loadFn = typeof loadUPlot === 'function' ? loadUPlot : defaultLoadUPlot;
  loadFn({
    documentRef: host.ownerDocument ?? globalThis.document,
    onLoad: () => {
      const next = mountTelemetryCharts(chartModels, { root, uPlotImpl });
      if (next.length > 0) {
        mounted = true;
      }
    },
  });
  setTimeout(retry, 0);
  return instances;
}
|
||||
|
||||
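The function above caps retries at 10 attempts spaced 50 ms apart. A minimal self-contained sketch of that bounded-retry pattern, with hypothetical names (`retryUntilMounted`, the injectable `schedule`) that are not part of the project's API:

```javascript
// Bounded retry: invoke `tryMount` until it reports success or the attempt
// budget is exhausted, then report the outcome through `onDone`.
// The scheduler is injectable so tests can run the loop synchronously.
function retryUntilMounted(tryMount, onDone, { maxAttempts = 10, delayMs = 50, schedule = setTimeout } = {}) {
  let attempts = 0;
  const attempt = () => {
    attempts += 1;
    if (tryMount()) {
      onDone(true, attempts);
      return;
    }
    if (attempts >= maxAttempts) {
      onDone(false, attempts);
      return;
    }
    schedule(attempt, delayMs);
  };
  attempt();
}
```

Injecting the scheduler mirrors how the diff lets tests stub `loadUPlot` and `uPlotImpl` instead of waiting on real timers.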
/**
 * Create chart markup and models for telemetry charts.
 *
 * @param {Object} node Normalised node payload.
 * @param {{ nowMs?: number, chartOptions?: Object }} [options] Rendering options.
 * @returns {{chartsHtml: string, chartModels: Array<Object>}} Chart markup and models.
 */
export function createTelemetryCharts(node, { nowMs = Date.now(), chartOptions = {} } = {}) {
  const telemetrySource = node?.rawSources?.telemetry;
  const snapshotHistory = Array.isArray(node?.rawSources?.telemetrySnapshots) && node.rawSources.telemetrySnapshots.length > 0
    ? node.rawSources.telemetrySnapshots
    : null;
  const aggregatedSnapshots = Array.isArray(telemetrySource?.snapshots)
    ? telemetrySource.snapshots
    : null;
  const rawSnapshots = snapshotHistory ?? aggregatedSnapshots;
  if (!Array.isArray(rawSnapshots) || rawSnapshots.length === 0) {
    return { chartsHtml: '', chartModels: [] };
  }
  const entries = rawSnapshots
    .map(snapshot => {
      const timestamp = resolveSnapshotTimestamp(snapshot);
      if (timestamp == null) return null;
      return { timestamp, snapshot };
    })
    .filter(entry => entry != null && entry.timestamp >= nowMs - TELEMETRY_WINDOW_MS && entry.timestamp <= nowMs)
    .sort((a, b) => a.timestamp - b.timestamp);
  if (entries.length === 0) {
    return { chartsHtml: '', chartModels: [] };
  }
  const chartModels = TELEMETRY_CHART_SPECS
    .map(spec => buildTelemetryChartModel(spec, entries, nowMs, chartOptions))
    .filter(model => model != null);
  if (chartModels.length === 0) {
    return { chartsHtml: '', chartModels: [] };
  }
  const chartsHtml = `
    <section class="node-detail__charts">
      <div class="node-detail__charts-grid">
        ${chartModels.map(model => renderTelemetryChartMarkup(model)).join('')}
      </div>
    </section>
  `;
  return { chartsHtml, chartModels };
}

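The snapshot selection above maps each snapshot to a timestamp, drops anything outside a trailing time window, and sorts ascending. A standalone sketch of just that windowing step, under assumed names (`selectWindowedEntries`, `resolveTimestamp`) that do not appear in the project:

```javascript
// Keep only snapshots whose resolved timestamp falls within
// [nowMs - windowMs, nowMs], returned oldest first — the same shape as the
// entry-filtering step in createTelemetryCharts.
function selectWindowedEntries(snapshots, resolveTimestamp, { nowMs = Date.now(), windowMs = 28 * 24 * 60 * 60 * 1000 } = {}) {
  return snapshots
    .map(snapshot => {
      const timestamp = resolveTimestamp(snapshot);
      return timestamp == null ? null : { timestamp, snapshot };
    })
    .filter(entry => entry != null && entry.timestamp >= nowMs - windowMs && entry.timestamp <= nowMs)
    .sort((a, b) => a.timestamp - b.timestamp);
}
```

Filtering out future timestamps (`> nowMs`) as well as stale ones keeps a clock-skewed snapshot from stretching the x-axis past "now".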
/**
 * Render the telemetry charts for the supplied node when telemetry snapshots
 * exist.
@@ -1242,41 +1493,7 @@ function renderTelemetryChart(spec, entries, nowMs, chartOptions = {}) {
 * @returns {string} Chart grid markup or an empty string.
 */
export function renderTelemetryCharts(node, { nowMs = Date.now(), chartOptions = {} } = {}) {
  const telemetrySource = node?.rawSources?.telemetry;
  const snapshotHistory = Array.isArray(node?.rawSources?.telemetrySnapshots) && node.rawSources.telemetrySnapshots.length > 0
    ? node.rawSources.telemetrySnapshots
    : null;
  const aggregatedSnapshots = Array.isArray(telemetrySource?.snapshots)
    ? telemetrySource.snapshots
    : null;
  const rawSnapshots = snapshotHistory ?? aggregatedSnapshots;
  if (!Array.isArray(rawSnapshots) || rawSnapshots.length === 0) {
    return '';
  }
  const entries = rawSnapshots
    .map(snapshot => {
      const timestamp = resolveSnapshotTimestamp(snapshot);
      if (timestamp == null) return null;
      return { timestamp, snapshot };
    })
    .filter(entry => entry != null && entry.timestamp >= nowMs - TELEMETRY_WINDOW_MS && entry.timestamp <= nowMs)
    .sort((a, b) => a.timestamp - b.timestamp);
  if (entries.length === 0) {
    return '';
  }
  const charts = TELEMETRY_CHART_SPECS
    .map(spec => renderTelemetryChart(spec, entries, nowMs, chartOptions))
    .filter(chart => stringOrNull(chart));
  if (charts.length === 0) {
    return '';
  }
  return `
    <section class="node-detail__charts">
      <div class="node-detail__charts-grid">
        ${charts.join('')}
      </div>
    </section>
  `;
  return createTelemetryCharts(node, { nowMs, chartOptions }).chartsHtml;
}

/**
@@ -2056,7 +2273,6 @@ function renderMessages(messages, renderShortHtml, node) {
    if (!message || typeof message !== 'object') return null;
    const text = stringOrNull(message.text) || stringOrNull(message.emoji);
    if (!text) return null;
    if (message.encrypted && String(text).trim() === 'GAA=') return null;

    const timestamp = formatMessageTimestamp(message.rx_time, message.rx_iso);
    const metadata = extractChatMessageMetadata(message);
@@ -2299,6 +2515,7 @@ function renderTraceroutes(traces, renderShortHtml, { roleIndex = null, node = n
 *   messages?: Array<Object>,
 *   traces?: Array<Object>,
 *   renderShortHtml: Function,
 *   chartsHtml?: string,
 * }} options Rendering options.
 * @returns {string} HTML fragment representing the detail view.
 */
@@ -2308,6 +2525,7 @@ function renderNodeDetailHtml(node, {
  traces = [],
  renderShortHtml,
  roleIndex = null,
  chartsHtml = null,
  chartNowMs = Date.now(),
} = {}) {
  const roleAwareBadge = renderRoleAwareBadge(renderShortHtml, {
@@ -2321,7 +2539,7 @@ function renderNodeDetailHtml(node, {
  const longName = stringOrNull(node.longName ?? node.long_name);
  const identifier = stringOrNull(node.nodeId ?? node.node_id);
  const tableHtml = renderSingleNodeTable(node, renderShortHtml);
  const chartsHtml = renderTelemetryCharts(node, { nowMs: chartNowMs });
  const telemetryChartsHtml = stringOrNull(chartsHtml) ?? renderTelemetryCharts(node, { nowMs: chartNowMs });
  const neighborsHtml = renderNeighborGroups(node, neighbors, renderShortHtml, { roleIndex });
  const tracesHtml = renderTraceroutes(traces, renderShortHtml, { roleIndex, node });
  const messagesHtml = renderMessages(messages, renderShortHtml, node);
@@ -2347,7 +2565,7 @@ function renderNodeDetailHtml(node, {
    <header class="node-detail__header">
      <h2 class="node-detail__title">${badgeHtml}${nameHtml}${identifierHtml}</h2>
    </header>
    ${chartsHtml ?? ''}
    ${telemetryChartsHtml ?? ''}
    ${tableSection}
    ${contentHtml}
  `;
@@ -2461,15 +2679,17 @@ async function fetchTracesForNode(identifier, { fetchImpl } = {}) {
}

/**
 * Initialise the node detail page by hydrating the DOM with fetched data.
 * Fetch node detail data and render the HTML fragment.
 *
 * @param {{
 *   document?: Document,
 *   fetchImpl?: Function,
 *   refreshImpl?: Function,
 *   renderShortHtml?: Function,
 *   chartNowMs?: number,
 *   chartOptions?: Object,
 * }} options Optional overrides for testing.
 * @returns {Promise<boolean>} ``true`` when the node was rendered successfully.
 * @returns {Promise<string|{html: string, chartModels: Array<Object>}>} Rendered markup or chart models when requested.
 */
export async function fetchNodeDetailHtml(referenceData, options = {}) {
  if (!referenceData || typeof referenceData !== 'object') {
@@ -2499,15 +2719,38 @@ export async function fetchNodeDetailHtml(referenceData, options = {}) {
    fetchTracesForNode(messageIdentifier, { fetchImpl: options.fetchImpl }),
  ]);
  const roleIndex = await buildTraceRoleIndex(traces, neighborRoleIndex, { fetchImpl: options.fetchImpl });
  return renderNodeDetailHtml(node, {
  const chartNowMs = Number.isFinite(options.chartNowMs) ? options.chartNowMs : Date.now();
  const chartState = createTelemetryCharts(node, {
    nowMs: chartNowMs,
    chartOptions: options.chartOptions ?? {},
  });
  const html = renderNodeDetailHtml(node, {
    neighbors: node.neighbors,
    messages,
    traces,
    renderShortHtml,
    roleIndex,
    chartsHtml: chartState.chartsHtml,
    chartNowMs,
  });
  if (options.returnState === true) {
    return { html, chartModels: chartState.chartModels };
  }
  return html;
}

/**
 * Initialise the standalone node detail page and mount telemetry charts.
 *
 * @param {{
 *   document?: Document,
 *   fetchImpl?: Function,
 *   refreshImpl?: Function,
 *   renderShortHtml?: Function,
 *   uPlotImpl?: Function,
 * }} options Optional overrides for testing.
 * @returns {Promise<boolean>} ``true`` when the node was rendered successfully.
 */
export async function initializeNodeDetailPage(options = {}) {
  const documentRef = options.document ?? globalThis.document;
  if (!documentRef || typeof documentRef.querySelector !== 'function') {
@@ -2544,13 +2787,15 @@ export async function initializeNodeDetailPage(options = {}) {
  const privateMode = (root.dataset?.privateMode ?? '').toLowerCase() === 'true';

  try {
    const html = await fetchNodeDetailHtml(referenceData, {
    const result = await fetchNodeDetailHtml(referenceData, {
      fetchImpl: options.fetchImpl,
      refreshImpl,
      renderShortHtml: options.renderShortHtml,
      privateMode,
      returnState: true,
    });
    root.innerHTML = html;
    root.innerHTML = result.html;
    mountTelemetryChartsWithRetry(result.chartModels, { root, uPlotImpl: options.uPlotImpl });
    return true;
  } catch (error) {
    console.error('Failed to render node detail page', error);
@@ -2587,7 +2832,11 @@ export const __testUtils = {
  categoriseNeighbors,
  renderNeighborGroups,
  renderSingleNodeTable,
  createTelemetryCharts,
  renderTelemetryCharts,
  mountTelemetryCharts,
  mountTelemetryChartsWithRetry,
  buildUPlotChartConfig,
  renderMessages,
  renderTraceroutes,
  renderTracePath,

@@ -30,9 +30,6 @@
  --input-border: rgba(12, 15, 18, 0.18);
  --input-placeholder: rgba(12, 15, 18, 0.45);
  --control-accent: var(--accent);
  --announcement-bg: #fff4d6;
  --announcement-fg: #7a3f00;
  --announcement-border: #f0c05b;
  --pad: 16px;
  --map-tile-filter-light: grayscale(1) saturate(0) brightness(0.92) contrast(1.05);
  --map-tile-filter-dark: grayscale(1) invert(1) brightness(0.9) contrast(1.08);
@@ -62,9 +59,6 @@ body.dark {
  --input-border: rgba(230, 235, 240, 0.24);
  --input-placeholder: rgba(230, 235, 240, 0.55);
  --control-accent: var(--accent);
  --announcement-bg: #3b2500;
  --announcement-fg: #ffd184;
  --announcement-border: #a56a00;
}

html,
@@ -221,237 +215,25 @@ h1 {

.site-header {
  display: flex;
  align-items: center;
  justify-content: space-between;
  gap: 16px;
  min-height: 56px;
  padding: 4px 0;
  margin-bottom: 8px;
}

.site-header__left,
.site-header__right {
  display: flex;
  flex-wrap: wrap;
  align-items: center;
  gap: 12px;
}

.site-header__left {
  flex: 1 1 auto;
  min-width: 0;
}

.site-header__right {
  flex: 0 0 auto;
  margin-left: auto;
}

.announcement-banner {
  display: flex;
  align-items: center;
  justify-content: center;
  height: 1.6em;
  padding: 0 var(--pad);
  border-radius: 999px;
  background: var(--announcement-bg);
  color: var(--announcement-fg);
  border: 1px solid var(--announcement-border);
  box-sizing: border-box;
  overflow: hidden;
}

.announcement-banner__content {
  margin: 0;
  line-height: 1.6;
  text-align: center;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
  margin-bottom: 8px;
}

.site-title {
  display: inline-flex;
  align-items: center;
  gap: 12px;
  min-width: 0;
}

.site-title-text {
  min-width: 0;
  max-width: 100%;
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}

.site-title img {
  width: 36px;
  height: 36px;
  width: 52px;
  height: 52px;
  display: block;
  border-radius: 12px;
}

.site-nav {
  display: flex;
  align-items: center;
  gap: 8px;
  flex-wrap: wrap;
}

.site-nav__link {
  display: inline-flex;
  align-items: center;
  gap: 6px;
  padding: 6px 12px;
  border-radius: 999px;
  color: var(--fg);
  text-decoration: none;
  border: 1px solid transparent;
  font-size: 14px;
}

.site-nav__link:hover {
  background: var(--card);
}

.site-nav__link.is-active {
  border-color: var(--accent);
  color: var(--accent);
  background: transparent;
  font-weight: 600;
}

.site-nav__link:focus-visible {
  outline: 2px solid var(--accent);
  outline-offset: 2px;
}

.menu-toggle {
  display: none;
}

.menu-toggle:focus-visible {
  outline: 2px solid var(--accent);
  outline-offset: 2px;
}

.mobile-menu {
  position: fixed;
  inset: 0;
  z-index: 1200;
  display: flex;
  justify-content: flex-end;
  pointer-events: none;
}

.mobile-menu[hidden] {
  display: none;
}

.mobile-menu__backdrop {
  flex: 1 1 auto;
  background: rgba(0, 0, 0, 0.4);
  opacity: 0;
  transition: opacity 200ms ease;
}

.mobile-menu__panel {
  width: min(320px, 86vw);
  background: var(--bg2);
  color: var(--fg);
  padding: 16px;
  display: flex;
  flex-direction: column;
  gap: 16px;
  height: 100%;
  overflow-y: auto;
  transform: translateX(100%);
  transition: transform 220ms ease;
  box-shadow: -12px 0 32px rgba(0, 0, 0, 0.3);
}

.mobile-menu__header {
  display: flex;
  align-items: center;
  justify-content: space-between;
  gap: 12px;
}

.mobile-menu__title {
  margin: 0;
  font-size: 16px;
}

.mobile-menu__close:focus-visible {
  outline: 2px solid var(--accent);
  outline-offset: 2px;
}

.mobile-nav {
  display: flex;
  flex-direction: column;
  gap: 8px;
}

.mobile-nav__link {
  display: inline-flex;
  align-items: center;
  padding: 8px 10px;
  border-radius: 10px;
  color: var(--fg);
  text-decoration: none;
  border: 1px solid transparent;
}

.mobile-nav__link.is-active {
  border-color: var(--accent);
  color: var(--accent);
  font-weight: 600;
}

.mobile-nav__link:focus-visible {
  outline: 2px solid var(--accent);
  outline-offset: 2px;
}

.mobile-menu.is-open {
  pointer-events: auto;
}

.mobile-menu.is-open .mobile-menu__backdrop {
  opacity: 1;
}

.mobile-menu.is-open .mobile-menu__panel {
  transform: translateX(0);
}

.menu-open {
  overflow: hidden;
}

.section-link {
  display: inline-flex;
  align-items: center;
  gap: 6px;
  padding: 6px 10px;
  border-radius: 999px;
  border: 1px solid var(--line);
  color: var(--fg);
  text-decoration: none;
  font-size: 14px;
}

.section-link:hover {
  border-color: var(--accent);
  color: var(--accent);
}

.section-link:focus-visible {
  outline: 2px solid var(--accent);
  outline-offset: 2px;
}

.meta {
  color: #555;
  margin-bottom: 12px;
@@ -500,29 +282,11 @@ h1 {

@media (max-width: 900px) {
  .site-header {
    margin-bottom: 4px;
  }

  .site-header__left {
    flex-wrap: nowrap;
  }

  .site-header__left--federation {
    flex-wrap: wrap;
  }

  .site-nav {
    display: none;
  }

  .menu-toggle {
    display: inline-flex;
  }

  .instance-selector {
    flex: 0 1 auto;
    flex-direction: column;
    align-items: flex-start;
  }

  .instance-selector,
  .instance-select {
    width: 100%;
  }
@@ -532,7 +296,6 @@ h1 {
  }
}

.pill {
  display: inline-block;
  padding: 2px 8px;
@@ -1231,7 +994,7 @@ body.dark .node-detail-overlay__close:hover {
.node-detail__charts-grid {
  display: grid;
  gap: 24px;
  grid-template-columns: repeat(auto-fit, minmax(min(100%, 640px), 1fr));
  grid-template-columns: repeat(auto-fit, minmax(min(100%, 1152px), 1fr));
}

.node-detail__chart {
@@ -1263,10 +1026,45 @@ body.dark .node-detail-overlay__close:hover {
  font-size: 1rem;
}

.node-detail__chart svg {
.node-detail__chart-plot {
  width: 100%;
  height: auto;
  height: clamp(240px, 50vw, 360px);
  max-height: 420px;
  overflow: hidden;
}

.node-detail__chart-plot .uplot {
  width: 100%;
  height: 100%;
  margin: 0;
  line-height: 0;
  position: relative;
}

.node-detail__chart-plot .uplot .u-wrap,
.node-detail__chart-plot .uplot .u-under,
.node-detail__chart-plot .uplot .u-over {
  top: 0;
  left: 0;
}

.node-detail__chart-plot .u-axis,
.node-detail__chart-plot .u-axis .u-label,
.node-detail__chart-plot .u-axis .u-value,
.node-detail__chart-plot .u-axis text,
.node-detail__chart-plot .u-axis-label {
  color: var(--muted) !important;
  fill: var(--muted) !important;
  font-size: 0.95rem;
}

.node-detail__chart-plot .u-grid {
  stroke: rgba(12, 15, 18, 0.08);
  stroke-width: 1;
}

body.dark .node-detail__chart-plot .u-grid {
  stroke: rgba(255, 255, 255, 0.15);
}

.node-detail__chart-axis line {
@@ -1931,6 +1729,10 @@ input[type="radio"] {
  gap: 12px;
}

.controls--full-screen {
  grid-template-columns: minmax(0, 1fr) auto;
}

.controls .filter-input {
  width: 100%;
}

File diff suppressed because one or more lines are too long
@@ -0,0 +1 @@
.uplot, .uplot *, .uplot *::before, .uplot *::after {box-sizing: border-box;}.uplot {font-family: system-ui, -apple-system, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";line-height: 1.5;width: min-content;}.u-title {text-align: center;font-size: 18px;font-weight: bold;}.u-wrap {position: relative;user-select: none;}.u-over, .u-under {position: absolute;}.u-under {overflow: hidden;}.uplot canvas {display: block;position: relative;width: 100%;height: 100%;}.u-axis {position: absolute;}.u-legend {font-size: 14px;margin: auto;text-align: center;}.u-inline {display: block;}.u-inline * {display: inline-block;}.u-inline tr {margin-right: 16px;}.u-legend th {font-weight: 600;}.u-legend th > * {vertical-align: middle;display: inline-block;}.u-legend .u-marker {width: 1em;height: 1em;margin-right: 4px;background-clip: padding-box !important;}.u-inline.u-live th::after {content: ":";vertical-align: middle;}.u-inline:not(.u-live) .u-value {display: none;}.u-series > * {padding: 4px;}.u-series th {cursor: pointer;}.u-legend .u-off > * {opacity: 0.3;}.u-select {background: rgba(0,0,0,0.07);position: absolute;pointer-events: none;}.u-cursor-x, .u-cursor-y {position: absolute;left: 0;top: 0;pointer-events: none;will-change: transform;}.u-hz .u-cursor-x, .u-vt .u-cursor-y {height: 100%;border-right: 1px dashed #607D8B;}.u-hz .u-cursor-y, .u-vt .u-cursor-x {width: 100%;border-bottom: 1px dashed #607D8B;}.u-cursor-pt {position: absolute;top: 0;left: 0;border-radius: 50%;border: 0 solid;pointer-events: none;will-change: transform;/*this has to be !important since we set inline "background" shorthand */background-clip: padding-box !important;}.u-axis.u-off, .u-select.u-off, .u-cursor-x.u-off, .u-cursor-y.u-off, .u-cursor-pt.u-off {display: none;}
@@ -0,0 +1,55 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import { mkdir, copyFile, access } from 'node:fs/promises';
import { constants as fsConstants } from 'node:fs';
import path from 'node:path';
import { fileURLToPath } from 'node:url';

/**
 * Resolve an absolute path relative to this script location.
 *
 * @param {string[]} segments Path segments to append.
 * @returns {string} Absolute path resolved from this script.
 */
function resolvePath(...segments) {
  const scriptDir = path.dirname(fileURLToPath(import.meta.url));
  return path.resolve(scriptDir, ...segments);
}

/**
 * Ensure the uPlot assets are available within the public asset tree.
 *
 * @returns {Promise<void>} Resolves once files have been copied.
 */
async function copyUPlotAssets() {
  const sourceDir = resolvePath('..', 'node_modules', 'uplot', 'dist');
  const targetDir = resolvePath('..', 'public', 'assets', 'vendor', 'uplot');
  const assets = ['uPlot.iife.min.js', 'uPlot.min.css'];

  await access(sourceDir, fsConstants.R_OK);
  await mkdir(targetDir, { recursive: true });

  await Promise.all(
    assets.map(async asset => {
      const source = path.join(sourceDir, asset);
      const target = path.join(targetDir, asset);
      await copyFile(source, target);
    }),
  );
}

await copyUPlotAssets();
+26 -1889
File diff suppressed because it is too large
@@ -239,30 +239,6 @@ RSpec.describe PotatoMesh::Config do
    end
  end

  describe ".remote_instance_request_timeout" do
    it "returns the baked-in request timeout when unset" do
      within_env("REMOTE_INSTANCE_REQUEST_TIMEOUT" => nil) do
        expect(described_class.remote_instance_request_timeout).to eq(
          PotatoMesh::Config::DEFAULT_REMOTE_INSTANCE_REQUEST_TIMEOUT,
        )
      end
    end

    it "accepts positive overrides" do
      within_env("REMOTE_INSTANCE_REQUEST_TIMEOUT" => "19") do
        expect(described_class.remote_instance_request_timeout).to eq(19)
      end
    end

    it "rejects invalid overrides" do
      within_env("REMOTE_INSTANCE_REQUEST_TIMEOUT" => "0") do
        expect(described_class.remote_instance_request_timeout).to eq(
          PotatoMesh::Config::DEFAULT_REMOTE_INSTANCE_REQUEST_TIMEOUT,
        )
      end
    end
  end

  describe ".federation_max_instances_per_response" do
    it "returns the baked-in response limit when unset" do
      within_env("FEDERATION_MAX_INSTANCES_PER_RESPONSE" => nil) do
@@ -383,54 +359,6 @@ RSpec.describe PotatoMesh::Config do
    end
  end

  describe ".federation_shutdown_timeout_seconds" do
    it "returns the default shutdown timeout when unset" do
      within_env("FEDERATION_SHUTDOWN_TIMEOUT" => nil) do
        expect(described_class.federation_shutdown_timeout_seconds).to eq(
          PotatoMesh::Config::DEFAULT_FEDERATION_SHUTDOWN_TIMEOUT_SECONDS,
        )
      end
    end

    it "accepts positive overrides" do
      within_env("FEDERATION_SHUTDOWN_TIMEOUT" => "9") do
        expect(described_class.federation_shutdown_timeout_seconds).to eq(9)
      end
    end

    it "rejects invalid overrides" do
      within_env("FEDERATION_SHUTDOWN_TIMEOUT" => "-1") do
        expect(described_class.federation_shutdown_timeout_seconds).to eq(
          PotatoMesh::Config::DEFAULT_FEDERATION_SHUTDOWN_TIMEOUT_SECONDS,
        )
      end
    end
  end

  describe ".federation_crawl_cooldown_seconds" do
    it "returns the default crawl cooldown when unset" do
      within_env("FEDERATION_CRAWL_COOLDOWN" => nil) do
        expect(described_class.federation_crawl_cooldown_seconds).to eq(
          PotatoMesh::Config::DEFAULT_FEDERATION_CRAWL_COOLDOWN_SECONDS,
        )
      end
    end

    it "accepts positive overrides" do
      within_env("FEDERATION_CRAWL_COOLDOWN" => "17") do
        expect(described_class.federation_crawl_cooldown_seconds).to eq(17)
      end
    end

    it "rejects invalid overrides" do
      within_env("FEDERATION_CRAWL_COOLDOWN" => "0") do
        expect(described_class.federation_crawl_cooldown_seconds).to eq(
          PotatoMesh::Config::DEFAULT_FEDERATION_CRAWL_COOLDOWN_SECONDS,
        )
      end
    end
  end

  describe ".db_path" do
    it "returns the default path inside the data directory" do
      expect(described_class.db_path).to eq(described_class.default_db_path)
@@ -588,24 +516,6 @@ RSpec.describe PotatoMesh::Config do
    end
  end

  describe ".announcement" do
    it "returns nil when unset or blank" do
      within_env("ANNOUNCEMENT" => nil) do
        expect(described_class.announcement).to be_nil
      end

      within_env("ANNOUNCEMENT" => " \t ") do
        expect(described_class.announcement).to be_nil
      end
    end

    it "returns the trimmed announcement text" do
      within_env("ANNOUNCEMENT" => " Next Meetup ") do
        expect(described_class.announcement).to eq("Next Meetup")
      end
    end
  end

  describe ".debug?" do
    it "reflects the DEBUG environment variable" do
      within_env("DEBUG" => "1") do

@@ -184,65 +184,6 @@ RSpec.describe PotatoMesh::App::Database do
      expect(hop_columns).to include("trace_id", "hop_index", "node_id")
    end

    it "creates positions and neighbors tables when absent" do
      SQLite3::Database.new(PotatoMesh::Config.db_path) do |db|
        db.execute("CREATE TABLE nodes(node_id TEXT)")
        db.execute("CREATE TABLE messages(id INTEGER PRIMARY KEY)")
        db.execute("CREATE TABLE telemetry(id INTEGER PRIMARY KEY, rx_time INTEGER, rx_iso TEXT)")
      end

      expect(column_names_for("positions")).to be_empty
      expect(column_names_for("neighbors")).to be_empty

      harness_class.ensure_schema_upgrades

      positions_columns = column_names_for("positions")
      expect(positions_columns).to include("id", "node_id", "rx_time", "ingestor")

      neighbors_columns = column_names_for("neighbors")
      expect(neighbors_columns).to include("node_id", "neighbor_id", "rx_time", "ingestor")
    end

    it "adds ingestor columns to legacy positions neighbors and traces tables" do
      SQLite3::Database.new(PotatoMesh::Config.db_path) do |db|
        db.execute("CREATE TABLE nodes(node_id TEXT)")
        db.execute("CREATE TABLE messages(id INTEGER PRIMARY KEY)")
        db.execute("CREATE TABLE telemetry(id INTEGER PRIMARY KEY, rx_time INTEGER, rx_iso TEXT)")
        db.execute <<~SQL
          CREATE TABLE positions (
            id INTEGER PRIMARY KEY,
            rx_time INTEGER,
            rx_iso TEXT,
            node_id TEXT
          )
        SQL
        db.execute <<~SQL
          CREATE TABLE neighbors (
            node_id TEXT,
            neighbor_id TEXT,
            rx_time INTEGER
          )
        SQL
        db.execute <<~SQL
          CREATE TABLE traces (
            id INTEGER PRIMARY KEY,
            request_id INTEGER,
            src TEXT,
            dest TEXT,
            rx_time INTEGER,
            rx_iso TEXT
          )
        SQL
        db.execute("CREATE TABLE trace_hops(trace_id INTEGER, hop_index INTEGER, node_id TEXT)")
      end

      harness_class.ensure_schema_upgrades

      expect(column_names_for("positions")).to include("ingestor")
      expect(column_names_for("neighbors")).to include("ingestor")
      expect(column_names_for("traces")).to include("ingestor")
    end

    it "adds the contact_link column to existing instances tables" do
      SQLite3::Database.new(PotatoMesh::Config.db_path) do |db|
        db.execute("CREATE TABLE nodes(node_id TEXT)")

+26
-589
@@ -23,11 +23,6 @@ require "uri"
|
||||
require "socket"
|
||||
|
||||
RSpec.describe PotatoMesh::App::Federation do
|
||||
NODES_API_PATH = "/api/nodes".freeze
|
||||
STATS_API_PATH = "/api/stats".freeze
|
||||
FULL_DATA_UNAVAILABLE_REASON = "full data unavailable".freeze
|
||||
HTTP_CONNECTION_DOUBLE = "Net::HTTPConnection".freeze
|
||||
|
||||
subject(:federation_helpers) do
|
||||
Class.new do
|
||||
extend PotatoMesh::App::Federation
|
||||
@@ -62,8 +57,6 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
:federation_thread,
|
||||
:initial_federation_thread,
|
||||
:federation_worker_pool,
|
||||
:federation_shutdown_requested,
|
||||
:federation_shutdown_hook_installed,
|
||||
).new
|
||||
end
|
||||
|
||||
@@ -84,12 +77,10 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
federation_helpers.instance_variable_set(:@remote_instance_verify_callback, nil)
|
||||
federation_helpers.reset_debug_messages
|
||||
federation_helpers.reset_warn_messages
|
||||
federation_helpers.clear_federation_crawl_state!
|
||||
federation_helpers.shutdown_federation_worker_pool!
|
||||
end
|
||||
|
||||
after do
|
||||
federation_helpers.clear_federation_crawl_state!
|
||||
federation_helpers.shutdown_federation_worker_pool!
|
||||
end
|
||||
|
||||
@@ -279,7 +270,7 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
let(:response_map) do
|
||||
mapping = { [seed_domain, "/api/instances"] => [payload_entries, :instances] }
|
||||
attributes_list.each do |attributes|
|
||||
mapping[[attributes[:domain], NODES_API_PATH]] = [node_payload, :nodes]
|
||||
mapping[[attributes[:domain], "/api/nodes"]] = [node_payload, :nodes]
|
||||
mapping[[attributes[:domain], "/api/instances"]] = [[], :instances]
|
||||
end
|
||||
mapping
|
||||
@@ -296,37 +287,6 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
end
|
||||
end
|
||||
|
||||
def configure_remote_node_window(now)
allow(Time).to receive(:now).and_return(now)
allow(PotatoMesh::Config).to receive(:remote_instance_max_node_age).and_return(900)
end

def stats_mapping(now:, stats_response:, full_nodes_response:, window_nodes_response: nil)
recent_cutoff = now.to_i - 900
mapping = { [seed_domain, "/api/instances"] => [payload_entries, :instances] }
attributes_list.each do |attributes|
mapping[[attributes[:domain], STATS_API_PATH]] = stats_response
mapping[[attributes[:domain], NODES_API_PATH]] = full_nodes_response
mapping[[attributes[:domain], "/api/instances"]] = [[], :instances]
next unless window_nodes_response

mapping[[attributes[:domain], "/api/nodes?since=#{recent_cutoff}&limit=1000"]] = window_nodes_response
end
mapping
end

def stub_ingest_fetches(mapping, capture_paths: false)
captured_paths = []
allow(federation_helpers).to receive(:fetch_instance_json) do |host, path|
captured_paths << [host, path] if capture_paths
mapping.fetch([host, path]) { [nil, []] }
end
allow(federation_helpers).to receive(:verify_instance_signature).and_return(true)
allow(federation_helpers).to receive(:validate_remote_nodes).and_return([true, nil])
allow(federation_helpers).to receive(:upsert_instance_record)
captured_paths
end

it "stops processing once the per-response limit is exceeded" do
processed_domains = []
allow(federation_helpers).to receive(:upsert_instance_record) do |_db, attrs, _signature|
@@ -362,162 +322,37 @@ RSpec.describe PotatoMesh::App::Federation do
expect(federation_helpers.debug_messages).to include(a_string_including("crawl limit"))
end

it "prefers /api/stats when counting remote activity" do
it "requests an expanded recent node window when counting remote activity" do
now = Time.at(1_700_000_000)
configure_remote_node_window(now)
allow(Time).to receive(:now).and_return(now)
allow(PotatoMesh::Config).to receive(:remote_instance_max_node_age).and_return(900)
recent_cutoff = now.to_i - 900

mapping = stats_mapping(
now:,
stats_response: [{ "active_nodes" => { "hour" => 5, "day" => 7, "week" => 9, "month" => 11 }, "sampled" => false }, :stats],
full_nodes_response: [node_payload, :nodes],
)
captured_paths = stub_ingest_fetches(mapping, capture_paths: true)

federation_helpers.ingest_known_instances_from!(db, seed_domain)

expect(captured_paths).to include(
[attributes_list[0][:domain], STATS_API_PATH],
[attributes_list[1][:domain], STATS_API_PATH],
[attributes_list[2][:domain], STATS_API_PATH],
)
expect(captured_paths).to include(
[attributes_list[0][:domain], NODES_API_PATH],
[attributes_list[1][:domain], NODES_API_PATH],
[attributes_list[2][:domain], NODES_API_PATH],
)
expect(attributes_list.map { |attrs| attrs[:nodes_count] }).to all(eq(5))
end

it "prefers recent node window counts when /api/stats is unavailable" do
now = Time.at(1_700_000_000)
configure_remote_node_window(now)
full_nodes_payload = node_payload.take(2)
recent_window_payload = node_payload
recent_path = "/api/nodes?since=#{now.to_i - 900}&limit=1000"

mapping = stats_mapping(
now:,
stats_response: [nil, ["stats unavailable"]],
full_nodes_response: [full_nodes_payload, :nodes],
window_nodes_response: [recent_window_payload, :nodes],
)
captured_paths = stub_ingest_fetches(mapping, capture_paths: true)

federation_helpers.ingest_known_instances_from!(db, seed_domain)

expect(captured_paths).to include(
[attributes_list[0][:domain], STATS_API_PATH],
[attributes_list[1][:domain], STATS_API_PATH],
[attributes_list[2][:domain], STATS_API_PATH],
)
expect(captured_paths).to include(
[attributes_list[0][:domain], NODES_API_PATH],
[attributes_list[1][:domain], NODES_API_PATH],
[attributes_list[2][:domain], NODES_API_PATH],
)
expect(captured_paths).to include(
[attributes_list[0][:domain], recent_path],
[attributes_list[1][:domain], recent_path],
[attributes_list[2][:domain], recent_path],
)
expect(attributes_list.map { |attrs| attrs[:nodes_count] }).to all(eq(recent_window_payload.length))
end

it "falls back to recent node window when full node data is unavailable" do
now = Time.at(1_700_000_000)
configure_remote_node_window(now)

mapping = stats_mapping(
now:,
stats_response: [nil, ["stats unavailable"]],
full_nodes_response: [nil, [FULL_DATA_UNAVAILABLE_REASON]],
window_nodes_response: [node_payload, :nodes],
)
stub_ingest_fetches(mapping)

federation_helpers.ingest_known_instances_from!(db, seed_domain)

expect(attributes_list.map { |attrs| attrs[:nodes_count] }).to all(eq(node_payload.length))
end

it "uses recent node window fallback when stats succeed but full node data is unavailable" do
now = Time.at(1_700_000_000)
configure_remote_node_window(now)
recent_path = "/api/nodes?since=#{now.to_i - 900}&limit=1000"

mapping = stats_mapping(
now:,
stats_response: [{ "active_nodes" => { "hour" => 9, "day" => 10, "week" => 11, "month" => 12 }, "sampled" => false }, :stats],
full_nodes_response: [nil, [FULL_DATA_UNAVAILABLE_REASON]],
window_nodes_response: [node_payload, :nodes],
)
captured_paths = stub_ingest_fetches(mapping, capture_paths: true)

federation_helpers.ingest_known_instances_from!(db, seed_domain)

expect(captured_paths).to include(
[attributes_list[0][:domain], STATS_API_PATH],
[attributes_list[1][:domain], STATS_API_PATH],
[attributes_list[2][:domain], STATS_API_PATH],
)
expect(captured_paths).to include(
[attributes_list[0][:domain], recent_path],
[attributes_list[1][:domain], recent_path],
[attributes_list[2][:domain], recent_path],
)
expect(attributes_list.map { |attrs| attrs[:nodes_count] }).to all(eq(9))
end

it "handles URI metadata from malformed /api/stats payloads without crashing" do
now = Time.at(1_700_000_000)
configure_remote_node_window(now)

mapping = stats_mapping(
now:,
stats_response: [{ "unexpected" => "shape" }, URI.parse("https://ally-0.mesh/api/stats")],
full_nodes_response: [node_payload.take(2), :nodes],
window_nodes_response: [node_payload, :nodes],
)
stub_ingest_fetches(mapping)

expect do
federation_helpers.ingest_known_instances_from!(db, seed_domain)
end.not_to raise_error
expect(attributes_list.map { |attrs| attrs[:nodes_count] }).to all(eq(node_payload.length))
end

it "skips remote entries when both full and window node feeds are unavailable" do
now = Time.at(1_700_000_000)
configure_remote_node_window(now)
recent_path = "/api/nodes?since=#{now.to_i - 900}&limit=1000"

mapping = stats_mapping(
now:,
stats_response: [{ "active_nodes" => { "hour" => 3, "day" => 3, "week" => 3, "month" => 3 }, "sampled" => false }, :stats],
full_nodes_response: [nil, [FULL_DATA_UNAVAILABLE_REASON]],
window_nodes_response: [nil, ["window unavailable"]],
)
captured_paths = stub_ingest_fetches(mapping, capture_paths: true)
upserted = []
allow(federation_helpers).to receive(:upsert_instance_record) do |_db, attrs, _signature|
upserted << attrs
mapping = { [seed_domain, "/api/instances"] => [payload_entries, :instances] }
attributes_list.each_with_index do |attributes, index|
mapping[[attributes[:domain], "/api/nodes?since=#{recent_cutoff}&limit=1000"]] = [node_payload, :nodes]
mapping[[attributes[:domain], "/api/nodes"]] = [node_payload, :nodes]
mapping[[attributes[:domain], "/api/instances"]] = [[], :instances]
allow(federation_helpers).to receive(:remote_instance_attributes_from_payload).with(payload_entries[index]).and_return([attributes, "signature-#{index}", nil])
end

captured_paths = []
allow(federation_helpers).to receive(:fetch_instance_json) do |host, path|
captured_paths << [host, path]
mapping.fetch([host, path]) { [nil, []] }
end
allow(federation_helpers).to receive(:verify_instance_signature).and_return(true)
allow(federation_helpers).to receive(:validate_remote_nodes).and_return([true, nil])
allow(federation_helpers).to receive(:upsert_instance_record)

federation_helpers.ingest_known_instances_from!(db, seed_domain)

expect(captured_paths).to include(
[attributes_list[0][:domain], NODES_API_PATH],
[attributes_list[1][:domain], NODES_API_PATH],
[attributes_list[2][:domain], NODES_API_PATH],
[attributes_list[0][:domain], "/api/nodes?since=#{recent_cutoff}&limit=1000"],
[attributes_list[1][:domain], "/api/nodes?since=#{recent_cutoff}&limit=1000"],
[attributes_list[2][:domain], "/api/nodes?since=#{recent_cutoff}&limit=1000"],
)
expect(captured_paths).to include(
[attributes_list[0][:domain], recent_path],
[attributes_list[1][:domain], recent_path],
[attributes_list[2][:domain], recent_path],
)
expect(upserted).to be_empty
expect(federation_helpers.warn_messages).to include("Failed to load remote node data")
expect(attributes_list.map { |attrs| attrs[:nodes_count] }).to all(eq(3))
expect(attributes_list.map { |attrs| attrs[:nodes_count] }).to all(eq(node_payload.length))
end
end

@@ -714,7 +549,7 @@ RSpec.describe PotatoMesh::App::Federation do
end

it "applies federation headers to instance fetch requests" do
connection = instance_double(HTTP_CONNECTION_DOUBLE)
connection = instance_double("Net::HTTPConnection")
success_response = Net::HTTPOK.new("1.1", "200", "OK")
allow(success_response).to receive(:body).and_return("{}")
allow(success_response).to receive(:code).and_return("200")
@@ -736,56 +571,13 @@ RSpec.describe PotatoMesh::App::Federation do
expect(captured_request["User-Agent"]).to eq(federation_helpers.send(:federation_user_agent_header))
expect(captured_request["Content-Type"]).to be_nil
end

it "wraps non-success HTTP responses" do
connection = instance_double(HTTP_CONNECTION_DOUBLE)
failure_response = Net::HTTPBadGateway.new("1.1", "502", "Bad Gateway")
allow(failure_response).to receive(:code).and_return("502")

allow(http_client).to receive(:start) do |&block|
block.call(connection)
end
allow(connection).to receive(:request).and_return(failure_response)

expect do
federation_helpers.send(:perform_instance_http_request, uri)
end.to raise_error(
PotatoMesh::App::InstanceFetchError,
a_string_including("unexpected response 502"),
)
end
end

describe ".federation_sleep_with_shutdown" do
it "returns false when shutdown is requested during sleep" do
allow(Kernel).to receive(:sleep)
call_count = 0
allow(federation_helpers).to receive(:federation_shutdown_requested?) do
call_count += 1
call_count > 1
end

result = federation_helpers.federation_sleep_with_shutdown(1.0)

expect(result).to be(false)
expect(Kernel).to have_received(:sleep).at_least(:once)
end

it "returns true when the full delay elapses without shutdown" do
allow(Kernel).to receive(:sleep)
allow(federation_helpers).to receive(:federation_shutdown_requested?).and_return(false)

result = federation_helpers.federation_sleep_with_shutdown(0.01)

expect(result).to be(true)
end
end

describe ".announce_instance_to_domain" do
let(:payload) { "{}" }
let(:https_uri) { URI.parse("https://remote.mesh/api/instances") }
let(:http_uri) { URI.parse("http://remote.mesh/api/instances") }
let(:http_connection) { instance_double(HTTP_CONNECTION_DOUBLE) }
let(:http_connection) { instance_double("Net::HTTPConnection") }
let(:success_response) { Net::HTTPOK.new("1.1", "200", "OK") }

before do
@@ -863,14 +655,6 @@ RSpec.describe PotatoMesh::App::Federation do
expect(federation_helpers.ensure_federation_worker_pool!).to be_nil
end

it "returns nil when federation shutdown has been requested" do
allow(federation_helpers).to receive(:federation_enabled?).and_return(true)
federation_helpers.request_federation_shutdown!

expect(federation_helpers.ensure_federation_worker_pool!).to be_nil
expect(federation_helpers.send(:settings).federation_worker_pool).to be_nil
end

it "creates and memoizes the worker pool" do
allow(federation_helpers).to receive(:federation_enabled?).and_return(true)

@@ -883,69 +667,6 @@ RSpec.describe PotatoMesh::App::Federation do
end
end

describe ".ensure_federation_shutdown_hook!" do
it "registers a single at_exit hook when called repeatedly" do
allow(federation_helpers).to receive(:at_exit)

federation_helpers.ensure_federation_shutdown_hook!
federation_helpers.ensure_federation_shutdown_hook!

expect(federation_helpers).to have_received(:at_exit).once
expect(federation_helpers.send(:settings).federation_shutdown_hook_installed).to be(true)
end

it "delegates hook installation from instances to the application class" do
class_with_instance = Class.new do
include PotatoMesh::App::Federation
end

expect(class_with_instance).to receive(:ensure_federation_shutdown_hook!).once
class_with_instance.new.ensure_federation_shutdown_hook!
end

it "uses ivar guard when hook-installed setting is unavailable" do
helper_without_hook_setting = Class.new do
extend PotatoMesh::App::Federation

class << self
def settings
@settings ||= Struct.new(:federation_thread, :initial_federation_thread, :federation_worker_pool, :federation_shutdown_requested).new
end

# No-op in this helper because tests only assert hook registration behavior.
def shutdown_federation_background_work!(timeout: nil); end
end
end

allow(helper_without_hook_setting).to receive(:at_exit)
helper_without_hook_setting.ensure_federation_shutdown_hook!
helper_without_hook_setting.ensure_federation_shutdown_hook!

expect(helper_without_hook_setting).to have_received(:at_exit).once
expect(
helper_without_hook_setting.instance_variable_get(:@federation_shutdown_hook_installed),
).to be(true)
end
end

describe ".stop_federation_thread!" do
it "wakes, joins, and kills a stubborn live thread" do
thread = instance_double(Thread)
allow(thread).to receive(:alive?).and_return(true, true, false)
allow(thread).to receive(:respond_to?).with(:wakeup).and_return(true)
allow(thread).to receive(:wakeup).and_raise(ThreadError, "not asleep")
allow(thread).to receive(:join)
allow(thread).to receive(:kill)

federation_helpers.set(:federation_thread, thread)
federation_helpers.stop_federation_thread!(:federation_thread, timeout: 0.01)

expect(thread).to have_received(:join).with(0.01)
expect(thread).to have_received(:kill)
expect(federation_helpers.send(:settings).federation_thread).to be_nil
end
end

describe ".shutdown_federation_worker_pool!" do
it "logs an error when shutdown fails" do
pool = instance_double(PotatoMesh::App::WorkerPool)
@@ -962,10 +683,6 @@ RSpec.describe PotatoMesh::App::Federation do
describe ".enqueue_federation_crawl" do
let(:pool) { instance_double(PotatoMesh::App::WorkerPool) }

before do
allow(PotatoMesh::Config).to receive(:federation_crawl_cooldown_seconds).and_return(300)
end

it "returns false and logs when the pool is unavailable" do
allow(federation_helpers).to receive(:federation_worker_pool).and_return(nil)

@@ -979,17 +696,6 @@ RSpec.describe PotatoMesh::App::Federation do
expect(federation_helpers.debug_messages.last).to include("Skipped remote instance crawl")
end

it "returns false and logs when the domain is invalid" do
result = federation_helpers.enqueue_federation_crawl(
"https://bad domain",
per_response_limit: 5,
overall_limit: 9,
)

expect(result).to be(false)
expect(federation_helpers.warn_messages.last).to include("Skipped remote instance crawl")
end

it "schedules ingestion work on the pool" do
allow(federation_helpers).to receive(:federation_worker_pool).and_return(pool)
db = instance_double(SQLite3::Database)
@@ -1040,29 +746,6 @@ RSpec.describe PotatoMesh::App::Federation do
expect(result).to be(false)
end

it "does not apply cooldown when scheduling fails due to queue saturation" do
allow(PotatoMesh::Config).to receive(:federation_crawl_cooldown_seconds).and_return(300)
allow(federation_helpers).to receive(:federation_worker_pool).and_return(pool)
allow(pool).to receive(:schedule).and_raise(PotatoMesh::App::WorkerPool::QueueFullError, "full")

first = federation_helpers.enqueue_federation_crawl(
"remote.mesh",
per_response_limit: 1,
overall_limit: 2,
)
second = federation_helpers.enqueue_federation_crawl(
"remote.mesh",
per_response_limit: 1,
overall_limit: 2,
)

expect(first).to be(false)
expect(second).to be(false)
expect(federation_helpers.debug_messages).not_to include(
a_string_including("recent crawl completed"),
)
end

it "logs when the worker pool is shutting down" do
allow(federation_helpers).to receive(:federation_worker_pool).and_return(pool)
allow(pool).to receive(:schedule).and_raise(PotatoMesh::App::WorkerPool::ShutdownError, "closed")
@@ -1083,224 +766,6 @@ RSpec.describe PotatoMesh::App::Federation do

expect(result).to be(false)
end

it "deduplicates crawls while a domain crawl is already in flight" do
db = instance_double(SQLite3::Database)
allow(db).to receive(:close)
captured_job = nil

allow(federation_helpers).to receive(:federation_worker_pool).and_return(pool)
allow(pool).to receive(:schedule) do |&block|
captured_job = block
instance_double(PotatoMesh::App::WorkerPool::Task)
end
allow(federation_helpers).to receive(:open_database).and_return(db)
allow(federation_helpers).to receive(:ingest_known_instances_from!)

first = federation_helpers.enqueue_federation_crawl(
"remote.mesh",
per_response_limit: 5,
overall_limit: 9,
)
second = federation_helpers.enqueue_federation_crawl(
"remote.mesh",
per_response_limit: 5,
overall_limit: 9,
)

expect(first).to be(true)
expect(second).to be(false)
expect(captured_job).not_to be_nil
captured_job.call
expect(db).to have_received(:close)
end

it "releases the crawl slot when opening the database fails" do
allow(federation_helpers).to receive(:federation_crawl_cooldown_seconds).and_return(0)
captured_job = nil
allow(federation_helpers).to receive(:federation_worker_pool).and_return(pool)
allow(pool).to receive(:schedule) do |&block|
captured_job = block
instance_double(PotatoMesh::App::WorkerPool::Task)
end
allow(federation_helpers).to receive(:open_database).and_raise(SQLite3::Exception, "db unavailable")
allow(federation_helpers).to receive(:ingest_known_instances_from!)

first = federation_helpers.enqueue_federation_crawl(
"remote.mesh",
per_response_limit: 5,
overall_limit: 9,
)
expect(first).to be(true)
expect(captured_job).not_to be_nil

expect { captured_job.call }.to raise_error(SQLite3::Exception, "db unavailable")

second = federation_helpers.enqueue_federation_crawl(
"remote.mesh",
per_response_limit: 5,
overall_limit: 9,
)
expect(second).to be(true)
end

it "deduplicates crawls across instance receivers using shared class state" do
helper_class = Class.new do
include PotatoMesh::App::Federation

class << self
attr_accessor :pool

def settings
@settings ||= Struct.new(:federation_shutdown_requested).new(false)
end

def set(key, value)
settings.public_send("#{key}=", value)
end

def federation_worker_pool
pool
end

# No-op to keep the test helper minimal while satisfying federation logging calls.
def debug_log(*); end

# No-op to keep the test helper minimal while satisfying federation logging calls.
def warn_log(*); end
end

def settings
self.class.settings
end

def set(key, value)
self.class.set(key, value)
end

def debug_log(...)
self.class.debug_log(...)
end

def warn_log(...)
self.class.warn_log(...)
end
end

pool_double = instance_double(PotatoMesh::App::WorkerPool)
allow(pool_double).to receive(:schedule).and_return(instance_double(PotatoMesh::App::WorkerPool::Task))
helper_class.pool = pool_double

first_receiver = helper_class.new
second_receiver = helper_class.new

first = first_receiver.enqueue_federation_crawl(
"remote.mesh",
per_response_limit: 1,
overall_limit: 2,
)
second = second_receiver.enqueue_federation_crawl(
"remote.mesh",
per_response_limit: 1,
overall_limit: 2,
)

expect(first).to be(true)
expect(second).to be(false)
expect(pool_double).to have_received(:schedule).once
end
end

describe ".fetch_instance_json" do
it "short-circuits when shutdown has been requested" do
federation_helpers.request_federation_shutdown!

payload, metadata = federation_helpers.fetch_instance_json("remote.mesh", NODES_API_PATH)

expect(payload).to be_nil
expect(metadata).to eq(["federation shutdown requested"])
end

it "stops iterating URI candidates after shutdown is requested mid-loop" do
calls = 0
allow(federation_helpers).to receive(:instance_uri_candidates).and_return([
URI.parse("https://remote.mesh/api/nodes"),
URI.parse("http://remote.mesh/api/nodes"),
])
allow(federation_helpers).to receive(:perform_instance_http_request) do |_uri|
calls += 1
federation_helpers.request_federation_shutdown!
raise PotatoMesh::App::InstanceFetchError, "boom"
end

payload, metadata = federation_helpers.fetch_instance_json("remote.mesh", NODES_API_PATH)

expect(payload).to be_nil
expect(calls).to eq(1)
expect(metadata.first).to include("boom")
end
end

describe ".claim_federation_crawl_slot" do
it "initializes crawl dedupe state safely under concurrent access" do
federation_helpers.instance_variable_set(:@federation_crawl_mutex, nil)
federation_helpers.instance_variable_set(:@federation_crawl_in_flight, nil)
federation_helpers.instance_variable_set(:@federation_crawl_last_completed_at, nil)
federation_helpers.instance_variable_set(:@federation_crawl_init_mutex, nil)

threads = Array.new(12) do
Thread.new do
federation_helpers.initialize_federation_crawl_state!
end
end
threads.each(&:join)

mutex = federation_helpers.instance_variable_get(:@federation_crawl_mutex)
in_flight = federation_helpers.instance_variable_get(:@federation_crawl_in_flight)
last_completed = federation_helpers.instance_variable_get(:@federation_crawl_last_completed_at)

expect(mutex).to be_a(Mutex)
expect(in_flight).to be_a(Set)
expect(last_completed).to be_a(Hash)
expect(in_flight).to be_empty
expect(last_completed).to be_empty
end

it "returns cooldown when the domain completed recently" do
allow(PotatoMesh::Config).to receive(:federation_crawl_cooldown_seconds).and_return(300)
federation_helpers.clear_federation_crawl_state!
federation_helpers.release_federation_crawl_slot("remote.mesh")

result = federation_helpers.claim_federation_crawl_slot("remote.mesh")

expect(result).to eq(:cooldown)
end
end

describe ".shutdown_federation_background_work!" do
it "marks shutdown and clears announcer references" do
initial_thread = instance_double(Thread)
recurring_thread = instance_double(Thread)
pool = instance_double(PotatoMesh::App::WorkerPool)
allow(PotatoMesh::Config).to receive(:federation_shutdown_timeout_seconds).and_return(0.05)
allow(PotatoMesh::Config).to receive(:federation_task_timeout_seconds).and_return(0.05)

[initial_thread, recurring_thread].each do |thread|
allow(thread).to receive(:alive?).and_return(false)
end
allow(pool).to receive(:shutdown)

federation_helpers.set(:initial_federation_thread, initial_thread)
federation_helpers.set(:federation_thread, recurring_thread)
federation_helpers.set(:federation_worker_pool, pool)

federation_helpers.shutdown_federation_background_work!

expect(federation_helpers.federation_shutdown_requested?).to be(true)
expect(federation_helpers.send(:settings).initial_federation_thread).to be_nil
expect(federation_helpers.send(:settings).federation_thread).to be_nil
expect(federation_helpers.send(:settings).federation_worker_pool).to be_nil
end
end

describe ".wait_for_federation_tasks" do
@@ -1371,32 +836,4 @@ RSpec.describe PotatoMesh::App::Federation do
federation_helpers.announce_instance_to_all_domains
end
end

describe ".start_federation_announcer!" do
it "clears shutdown, installs hook, and exits loop when sleep aborts" do
thread_double = instance_double(Thread)
captured = nil

allow(federation_helpers).to receive(:federation_enabled?).and_return(true)
allow(federation_helpers).to receive(:clear_federation_shutdown_request!)
allow(federation_helpers).to receive(:ensure_federation_shutdown_hook!)
allow(federation_helpers).to receive(:federation_sleep_with_shutdown).and_return(false)
allow(Thread).to receive(:new) do |&block|
captured = block
thread_double
end
allow(thread_double).to receive(:respond_to?).with(:name=).and_return(false)
allow(thread_double).to receive(:respond_to?).with(:daemon=).and_return(false)
allow(federation_helpers).to receive(:set)

result = federation_helpers.start_federation_announcer!
expect(result).to eq(thread_double)
expect(captured).to be_a(Proc)
captured.call

expect(federation_helpers).to have_received(:clear_federation_shutdown_request!)
expect(federation_helpers).to have_received(:ensure_federation_shutdown_hook!)
expect(federation_helpers).to have_received(:federation_sleep_with_shutdown)
end
end
end

@@ -61,7 +61,7 @@ RSpec.describe "Ingestor endpoints" do
node_id: "!abc12345",
start_time: now - 120,
last_seen_time: now - 60,
version: "0.5.10",
version: "0.5.9",
lora_freq: 915,
modem_preset: "LongFast",
}.merge(overrides)
@@ -133,7 +133,7 @@ RSpec.describe "Ingestor endpoints" do
with_db do |db|
db.execute(
"INSERT INTO ingestors(node_id, start_time, last_seen_time, version) VALUES(?,?,?,?)",
["!fresh000", now - 100, now - 10, "0.5.10"],
["!fresh000", now - 100, now - 10, "0.5.9"],
)
db.execute(
"INSERT INTO ingestors(node_id, start_time, last_seen_time, version) VALUES(?,?,?,?)",
@@ -141,7 +141,7 @@ RSpec.describe "Ingestor endpoints" do
)
db.execute(
"INSERT INTO ingestors(node_id, start_time, last_seen_time, version, lora_freq, modem_preset) VALUES(?,?,?,?,?,?)",
["!rich000", now - 200, now - 100, "0.5.10", 915, "MediumFast"],
["!rich000", now - 200, now - 100, "0.5.9", 915, "MediumFast"],
)
end

@@ -173,7 +173,7 @@ RSpec.describe "Ingestor endpoints" do
)
db.execute(
"INSERT INTO ingestors(node_id, start_time, last_seen_time, version) VALUES(?,?,?,?)",
["!new-ingestor", now - 60, now - 30, "0.5.10"],
["!new-ingestor", now - 60, now - 30, "0.5.9"],
)
end


@@ -1,280 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "spec_helper"

RSpec.describe PotatoMesh::App::Meshtastic::Cipher do
  let(:psk_b64) { "Nmh7EooP2Tsc+7pvPwXLcEDDuYhk+fBo2GLnbA1Y1sg=" }
  let(:cipher_b64) { "Q1R7tgI5yXzMXu/3" }
  let(:packet_id) { 3_915_687_257 }
  let(:from_id) { "!9e95cf60" }

  def encode_varint(value)
    bytes = []
    remaining = value

    loop do
      byte = remaining & 0x7f
      remaining >>= 7
      if remaining.zero?
        bytes << byte
        break
      end
      bytes << (byte | 0x80)
    end

    bytes.pack("C*")
  end

  def build_data_message(portnum, payload)
    tag_portnum = (1 << 3) | 0
    tag_payload = (2 << 3) | 2

    [
      tag_portnum,
    ].pack("C") + encode_varint(portnum) +
      [tag_payload].pack("C") + encode_varint(payload.bytesize) + payload
  end

  def encrypt_message(plaintext, psk_b64:, packet_id:, from_id:)
    key = PotatoMesh::App::Meshtastic::ChannelHash.expanded_key(psk_b64)
    from_num = described_class.normalize_node_num(from_id, nil)
    nonce = described_class.build_nonce(packet_id, from_num)

    cipher_name = key.bytesize == 16 ? "aes-128-ctr" : "aes-256-ctr"
    cipher = OpenSSL::Cipher.new(cipher_name)
    cipher.encrypt
    cipher.key = key
    cipher.iv = nonce

    Base64.strict_encode64(cipher.update(plaintext) + cipher.final)
  end

  describe PotatoMesh::App::Meshtastic::ChannelHash do
    it "hashes channel names with the provided PSK" do
      hash = described_class.channel_hash("BerlinMesh", psk_b64)

      expect(hash).to eq(35)
    end

    it "resolves the default PSK alias when hashing channel names" do
      hash = described_class.channel_hash("PUBLIC", "AQ==")

      expect(hash).to eq(3)
    end

    it "expands short PSKs to AES-128 length" do
      key = described_class.expanded_key(Base64.strict_encode64("abc"))

      expect(key.bytesize).to eq(16)
      expect(key.bytes.first(3).pack("C*")).to eq("abc")
    end

    it "returns nil for unsupported PSK sizes" do
      key = described_class.expanded_key(Base64.strict_encode64("x" * 33))

      expect(key).to be_nil
    end

    it "resolves the event PSK alias" do
      key = described_class.expanded_key(Base64.strict_encode64([2].pack("C")))

      expect(key.bytesize).to eq(32)
    end

    it "returns nil for unknown aliases" do
      expect(described_class.default_key_for_alias(99)).to be_nil
    end

    it "xors byte arrays deterministically" do
      value = described_class.xor_bytes([0x01, 0x02, 0x03])

      expect(value).to eq(0x00)
    end

    it "xors byte strings deterministically" do
      value = described_class.xor_bytes("ABC")

      expect(value).to eq(0x40)
    end

    it "returns empty key material for empty PSK" do
      key = described_class.expanded_key("")

      expect(key).to eq("")
    end

    it "pads 17 byte PSKs up to 32 bytes" do
      key = described_class.expanded_key(Base64.strict_encode64("x" * 17))

      expect(key.bytesize).to eq(32)
    end
  end

  describe PotatoMesh::App::Meshtastic::RainbowTable do
    it "returns candidate names for a channel hash" do
      candidates = described_class.channel_names_for(35, psk_b64: psk_b64)

      expect(candidates).to include("BerlinMesh")
    end
  end

  it "decrypts the BerlinMesh example payload" do
    text = described_class.decrypt_text(
      cipher_b64: cipher_b64,
      packet_id: packet_id,
      from_id: from_id,
      psk_b64: psk_b64,
    )

    expect(text).to eq("Nabend")
  end

  it "decrypts the public PSK alias sample payload" do
    text = described_class.decrypt_text(
      cipher_b64: "otu3OyMrTIUlcaisLVDyAnLW",
      packet_id: 3_189_171_433,
      from_id: "!7c5b0920",
      psk_b64: "AQ==",
    )

    expect(text).to eq("FF-TB Beacon")
  end

  it "decrypts another public PSK alias payload sample" do
    text = described_class.decrypt_text(
      cipher_b64: "Xso0VQhndJ5RJ3pfHRVRLKSA",
      packet_id: 4_126_217_817,
      from_id: "!1d60dd3c",
      psk_b64: "AQ==",
    )

    expect(text).to eq("FF-ZW Beacon")
  end

  it "returns nil when the cipher text is invalid" do
    text = described_class.decrypt_text(
      cipher_b64: "not-base64",
      packet_id: packet_id,
      from_id: from_id,
      psk_b64: psk_b64,
    )

    expect(text).to be_nil
  end

  it "ignores non-text portnums even when payload is UTF-8" do
    payload = "OK".b
    plaintext = build_data_message(3, payload)
    encrypted = encrypt_message(plaintext, psk_b64: psk_b64, packet_id: packet_id, from_id: from_id)

    text = described_class.decrypt_text(
      cipher_b64: encrypted,
      packet_id: packet_id,
      from_id: from_id,
      psk_b64: psk_b64,
    )

    data = described_class.decrypt_data(
      cipher_b64: encrypted,
      packet_id: packet_id,
      from_id: from_id,
      psk_b64: psk_b64,
    )

    expect(text).to be_nil
    expect(data).to eq({ portnum: 3, payload: payload, text: nil })
  end

  it "normalizes packet ids from numeric strings" do
    value = described_class.normalize_packet_id("12345")

    expect(value).to eq(12_345)
  end

  it "returns nil for negative packet ids" do
    value = described_class.normalize_packet_id(-1)

    expect(value).to be_nil
  end

  it "normalizes node numbers from hex identifiers" do
    value = described_class.normalize_node_num("0x433da83c", nil)

    expect(value).to eq(0x433da83c)
  end

  it "uses the provided numeric node number when present" do
    value = described_class.normalize_node_num("!deadbeef", 123)

    expect(value).to eq(123)
  end

  it "decrypts payload bytes when requested" do
    payload = "OK".b
    plaintext = build_data_message(1, payload)
    encrypted = encrypt_message(plaintext, psk_b64: psk_b64, packet_id: packet_id, from_id: from_id)

    bytes = described_class.decrypt_payload_bytes(
      cipher_b64: encrypted,
      packet_id: packet_id,
      from_id: from_id,
      psk_b64: psk_b64,
    )

    expect(bytes).to eq(payload)
  end

  it "returns nil for non-numeric packet ids" do
    value = described_class.normalize_packet_id("abc")

    expect(value).to be_nil
  end

  it "returns nil for invalid node identifiers" do
    value = described_class.normalize_node_num("not-hex", nil)

    expect(value).to be_nil
  end

  it "normalizes floating node numbers" do
    value = described_class.normalize_node_num(nil, 12.5)

    expect(value).to eq(12)
  end

  it "returns nil when the PSK is an unsupported size" do
    data = described_class.decrypt_data(
      cipher_b64: "AA==",
      packet_id: 1,
      from_id: "!9e95cf60",
      psk_b64: Base64.strict_encode64("x" * 33),
    )

    expect(data).to be_nil
  end

  it "returns nil when the PSK expands to an empty key" do
    data = described_class.decrypt_data(
      cipher_b64: "AA==",
      packet_id: 1,
      from_id: "!9e95cf60",
      psk_b64: "",
    )

    expect(data).to be_nil
  end
end

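The xor-fold that the ChannelHash expectations above pin down (`[0x01, 0x02, 0x03]` → `0x00`, `"ABC"` → `0x40`) is easy to verify independently. A minimal sketch for illustration only — `xor_fold` is a made-up helper name here, not part of the deleted PotatoMesh code:

```ruby
# Fold every byte of a value into a single byte via XOR, matching the
# xor_bytes expectations in the deleted spec above.
def xor_fold(value)
  bytes = value.is_a?(String) ? value.bytes : value
  bytes.reduce(0) { |acc, b| acc ^ b }
end

puts format("0x%02x", xor_fold([0x01, 0x02, 0x03])) # 0x00
puts format("0x%02x", xor_fold("ABC"))              # 0x40
```

The two expected values fall out directly: `1 ^ 2 ^ 3 = 0` and `0x41 ^ 0x42 ^ 0x43 = 0x40`.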
@@ -1,189 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "spec_helper"
require "fileutils"
require "tmpdir"

RSpec.describe PotatoMesh::App::Meshtastic::PayloadDecoder do
  def with_env(key, value)
    previous = ENV[key]
    ENV[key] = value
    yield
  ensure
    ENV[key] = previous
  end

  def with_repo_root(path)
    allow(PotatoMesh::Config).to receive(:repo_root).and_return(path)
  end

  it "prefers a configured python path" do
    Dir.mktmpdir do |dir|
      with_env("MESHTASTIC_PYTHON", "/custom/python") do
        with_repo_root(dir) do
          expect(described_class.python_executable_path).to eq("/custom/python")
        end
      end
    end
  end

  it "uses the project venv when present" do
    Dir.mktmpdir do |dir|
      python_path = File.join(dir, "data", ".venv", "bin", "python")
      FileUtils.mkdir_p(File.dirname(python_path))
      File.write(python_path, "")
      FileUtils.chmod(0o755, python_path)

      with_env("MESHTASTIC_PYTHON", nil) do
        with_repo_root(dir) do
          expect(described_class.python_executable_path).to eq(python_path)
        end
      end
    end
  end

  it "falls back to python on PATH when no venv is available" do
    Dir.mktmpdir do |dir|
      fake_bin = File.join(dir, "bin")
      FileUtils.mkdir_p(fake_bin)
      python_path = File.join(fake_bin, "python3")
      File.write(python_path, "#!/bin/sh\n")
      FileUtils.chmod(0o755, python_path)

      with_env("MESHTASTIC_PYTHON", nil) do
        with_env("PATH", fake_bin) do
          with_repo_root(dir) do
            expect(described_class.python_executable_path).to eq(python_path)
          end
        end
      end
    end
  end

  it "resolves the decoder script path from the repo root" do
    Dir.mktmpdir do |dir|
      script_path = File.join(dir, "data", "mesh_ingestor", "decode_payload.py")
      FileUtils.mkdir_p(File.dirname(script_path))
      File.write(script_path, "")

      with_repo_root(dir) do
        expect(described_class.decoder_script_path).to eq(script_path)
      end
    end
  end

  it "falls back to the web root when the repo root is unavailable" do
    Dir.mktmpdir do |dir|
      script_path = File.join(dir, "data", "mesh_ingestor", "decode_payload.py")
      FileUtils.mkdir_p(File.dirname(script_path))
      File.write(script_path, "")

      with_repo_root(Dir.mktmpdir) do
        allow(PotatoMesh::Config).to receive(:web_root).and_return(dir)
        expect(described_class.decoder_script_path).to eq(script_path)
      end
    end
  end

  it "returns nil when the decoder script is missing" do
    Dir.mktmpdir do |dir|
      with_repo_root(dir) do
        expect(described_class.decoder_script_path).to be_nil
      end
    end
  end

  it "returns nil when the decoder process fails" do
    allow(described_class).to receive(:decoder_script_path).and_return("/tmp/decoder.py")
    allow(described_class).to receive(:python_executable_path).and_return("/usr/bin/python3")
    allow(Open3).to receive(:capture3).and_return(["{}", "boom", instance_double(Process::Status, success?: false)])

    expect(described_class.decode(portnum: 3, payload_b64: "AA==")).to be_nil
  end

  it "returns nil when decoder output is invalid JSON" do
    allow(described_class).to receive(:decoder_script_path).and_return("/tmp/decoder.py")
    allow(described_class).to receive(:python_executable_path).and_return("/usr/bin/python3")
    allow(Open3).to receive(:capture3).and_return(["not-json", "", instance_double(Process::Status, success?: true)])

    expect(described_class.decode(portnum: 3, payload_b64: "AA==")).to be_nil
  end

  it "returns nil when decoder output includes an error" do
    allow(described_class).to receive(:decoder_script_path).and_return("/tmp/decoder.py")
    allow(described_class).to receive(:python_executable_path).and_return("/usr/bin/python3")
    allow(Open3).to receive(:capture3).and_return([JSON.generate("error" => "boom"), "", instance_double(Process::Status, success?: true)])

    expect(described_class.decode(portnum: 3, payload_b64: "AA==")).to be_nil
  end

  it "returns nil when decoder output is not a hash" do
    allow(described_class).to receive(:decoder_script_path).and_return("/tmp/decoder.py")
    allow(described_class).to receive(:python_executable_path).and_return("/usr/bin/python3")
    allow(Open3).to receive(:capture3).and_return([JSON.generate([1, 2, 3]), "", instance_double(Process::Status, success?: true)])

    expect(described_class.decode(portnum: 3, payload_b64: "AA==")).to be_nil
  end

  it "returns nil when the decoder executable is missing" do
    allow(described_class).to receive(:decoder_script_path).and_return("/tmp/decoder.py")
    allow(described_class).to receive(:python_executable_path).and_return("/missing/python")
    allow(Open3).to receive(:capture3).and_raise(Errno::ENOENT)

    expect(described_class.decode(portnum: 3, payload_b64: "AA==")).to be_nil
  end

  it "returns nil when decoder paths are unavailable" do
    allow(described_class).to receive(:decoder_script_path).and_return(nil)
    allow(described_class).to receive(:python_executable_path).and_return(nil)

    expect(described_class.decode(portnum: 3, payload_b64: "AA==")).to be_nil
  end

  it "returns nil when no python executable can be found" do
    with_env("MESHTASTIC_PYTHON", nil) do
      with_env("PATH", "") do
        with_repo_root(Dir.mktmpdir) do
          expect(described_class.python_executable_path).to be_nil
        end
      end
    end
  end

  it "returns nil when inputs are missing" do
    expect(described_class.decode(portnum: nil, payload_b64: "AA==")).to be_nil
    expect(described_class.decode(portnum: 3, payload_b64: nil)).to be_nil
  end

  it "falls back to PATH when configured python is blank" do
    Dir.mktmpdir do |dir|
      fake_bin = File.join(dir, "bin")
      FileUtils.mkdir_p(fake_bin)
      python_path = File.join(fake_bin, "python")
      File.write(python_path, "#!/bin/sh\n")
      FileUtils.chmod(0o755, python_path)

      with_env("MESHTASTIC_PYTHON", " ") do
        with_env("PATH", fake_bin) do
          with_repo_root(dir) do
            expect(described_class.python_executable_path).to eq(python_path)
          end
        end
      end
    end
  end
end

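The fallback order these PayloadDecoder specs exercise — an explicit `MESHTASTIC_PYTHON` override, then a project venv binary, then whatever `python3`/`python` sits on `PATH` — can be sketched as a standalone helper. This is an illustrative reimplementation under those assumptions, not the deleted code itself; `resolve_python` is a hypothetical name:

```ruby
# Resolve a Python interpreter: explicit override first, then a venv
# binary, then a lookup on PATH. Returns nil when nothing is found.
def resolve_python(env, venv_path = nil)
  configured = env["MESHTASTIC_PYTHON"]
  return configured if configured && !configured.strip.empty?

  return venv_path if venv_path && File.executable?(venv_path)

  %w[python3 python].each do |name|
    env.fetch("PATH", "").split(File::PATH_SEPARATOR).each do |dir|
      candidate = File.join(dir, name)
      return candidate if File.executable?(candidate)
    end
  end
  nil
end

puts resolve_python({ "MESHTASTIC_PYTHON" => "/custom/python" }).inspect # "/custom/python"
puts resolve_python({ "MESHTASTIC_PYTHON" => " ", "PATH" => "" }).inspect # nil
```

A blank override falling through to the PATH scan mirrors the "falls back to PATH when configured python is blank" example above.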
@@ -1,135 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "spec_helper"

RSpec.describe PotatoMesh::App::Meshtastic::Protobuf do
  def encode_varint(value)
    bytes = []
    remaining = value
    loop do
      byte = remaining & 0x7f
      remaining >>= 7
      if remaining.zero?
        bytes << byte
        break
      end
      bytes << (byte | 0x80)
    end
    bytes.pack("C*")
  end

  it "extracts a length-delimited field by number" do
    field_number = 3
    payload = "blob".b
    tag = (field_number << 3) | described_class::WIRE_TYPE_LENGTH_DELIMITED
    message = [tag].pack("C") + encode_varint(payload.bytesize) + payload

    extracted = described_class.extract_field_bytes(message, field_number)

    expect(extracted).to eq(payload)
  end

  it "returns nil when a varint is truncated" do
    field_number = 1
    tag = (field_number << 3) | described_class::WIRE_TYPE_VARINT
    message = [tag].pack("C") + [0x80].pack("C")

    extracted = described_class.extract_field_bytes(message, field_number)

    expect(extracted).to be_nil
  end

  it "parses portnum and payload from a data message" do
    portnum_tag = (1 << 3) | described_class::WIRE_TYPE_VARINT
    payload_tag = (2 << 3) | described_class::WIRE_TYPE_LENGTH_DELIMITED
    payload = "OK".b
    message = [
      portnum_tag,
    ].pack("C") + encode_varint(3) +
      [payload_tag].pack("C") + encode_varint(payload.bytesize) + payload

    data = described_class.parse_data(message)

    expect(data).to eq(portnum: 3, payload: payload)
  end

  it "returns nil when portnum is missing" do
    payload_tag = (2 << 3) | described_class::WIRE_TYPE_LENGTH_DELIMITED
    payload = "OK".b
    message = [payload_tag].pack("C") + encode_varint(payload.bytesize) + payload

    expect(described_class.parse_data(message)).to be_nil
  end

  it "returns nil when payload is missing" do
    portnum_tag = (1 << 3) | described_class::WIRE_TYPE_VARINT
    message = [portnum_tag].pack("C") + encode_varint(1)

    expect(described_class.parse_data(message)).to be_nil
  end

  it "rejects invalid varints that overflow" do
    invalid = ([0x80] * 10).pack("C*")

    expect(described_class.read_varint(invalid.bytes, 0)).to be_nil
  end

  it "skips 64-bit fields while searching for length-delimited bytes" do
    target_field = 3
    junk_tag = (1 << 3) | described_class::WIRE_TYPE_64BIT
    target_tag = (target_field << 3) | described_class::WIRE_TYPE_LENGTH_DELIMITED
    message = [junk_tag].pack("C") + ("\x00" * 8) +
      [target_tag].pack("C") + encode_varint(4) + "test"

    extracted = described_class.extract_field_bytes(message, target_field)

    expect(extracted).to eq("test")
  end

  it "skips 32-bit fields while searching for length-delimited bytes" do
    target_field = 4
    junk_tag = (2 << 3) | described_class::WIRE_TYPE_32BIT
    target_tag = (target_field << 3) | described_class::WIRE_TYPE_LENGTH_DELIMITED
    message = [junk_tag].pack("C") + ("\x00" * 4) +
      [target_tag].pack("C") + encode_varint(3) + "abc"

    extracted = described_class.extract_field_bytes(message, target_field)

    expect(extracted).to eq("abc")
  end

  it "returns nil on unsupported wire types" do
    bad_tag = (1 << 3) | 7
    message = [bad_tag].pack("C")

    expect(described_class.extract_field_bytes(message, 1)).to be_nil
  end

  it "returns nil when length-delimited field overruns payload" do
    tag = (1 << 3) | described_class::WIRE_TYPE_LENGTH_DELIMITED
    message = [tag].pack("C") + encode_varint(10) + "short"

    expect(described_class.extract_field_bytes(message, 1)).to be_nil
  end

  it "returns nil when length varint is missing" do
    tag = (1 << 3) | described_class::WIRE_TYPE_LENGTH_DELIMITED
    message = [tag].pack("C")

    expect(described_class.extract_field_bytes(message, 1)).to be_nil
  end
end

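The `encode_varint` helper repeated in these deleted specs is standard protobuf varint encoding (7 payload bits per byte, continuation bit 0x80). A minimal round-trip sketch, with a `decode_varint` counterpart written for illustration (not the project's `read_varint`):

```ruby
# Encode an unsigned integer as a protobuf varint: low 7 bits per byte,
# continuation bit set on every byte except the last.
def encode_varint(value)
  bytes = []
  loop do
    byte = value & 0x7f
    value >>= 7
    if value.zero?
      bytes << byte
      break
    end
    bytes << (byte | 0x80)
  end
  bytes.pack("C*")
end

# Hypothetical inverse: accumulate 7-bit groups little-endian-first,
# returning nil if the final byte still has the continuation bit set.
def decode_varint(bytes)
  result = 0
  shift = 0
  bytes.each_byte do |b|
    result |= (b & 0x7f) << shift
    shift += 7
    return result if (b & 0x80).zero?
  end
  nil # truncated varint
end

encoded = encode_varint(300)
puts encoded.bytes.map { |b| format("0x%02x", b) }.join(" ") # 0xac 0x02
puts decode_varint(encoded) # 300
```

The truncated case (`nil` for a lone `0x80` byte) is exactly what the "returns nil when a varint is truncated" example above asserts against the real implementation.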
@@ -75,7 +75,6 @@ RSpec.describe PotatoMesh::Sanitizer do
  before do
    allow(PotatoMesh::Config).to receive_messages(
      site_name: " Spec Mesh ",
      announcement: " Next Meetup ",
      channel: " #Spec ",
      frequency: " 915MHz ",
      contact_link: " #room:example.org ",
@@ -85,7 +84,6 @@

  it "provides trimmed strings" do
    expect(described_class.sanitized_site_name).to eq("Spec Mesh")
    expect(described_class.sanitized_announcement).to eq("Next Meetup")
    expect(described_class.sanitized_channel).to eq("#Spec")
    expect(described_class.sanitized_frequency).to eq("915MHz")
    expect(described_class.sanitized_contact_link).to eq("#room:example.org")
@@ -100,12 +98,6 @@
    expect(described_class.sanitized_contact_link_url).to be_nil
  end

  it "returns nil when the announcement is blank" do
    allow(PotatoMesh::Config).to receive(:announcement).and_return(" ")

    expect(described_class.sanitized_announcement).to be_nil
  end

  it "returns nil when the distance is not positive" do
    allow(PotatoMesh::Config).to receive(:max_distance_km).and_return(0)

@@ -18,13 +18,8 @@ require "spec_helper"
require "timeout"

RSpec.describe PotatoMesh::App::WorkerPool do
  def with_pool(size: 2, queue: 2, task_timeout: nil)
    pool = PotatoMesh::App::WorkerPool.new(
      size: size,
      max_queue: queue,
      task_timeout: task_timeout,
      name: "spec-pool",
    )
  def with_pool(size: 2, queue: 2)
    pool = PotatoMesh::App::WorkerPool.new(size: size, max_queue: queue, name: "spec-pool")
    yield pool
  ensure
    pool&.shutdown(timeout: 0.5)
@@ -38,20 +33,6 @@ RSpec.describe PotatoMesh::App::WorkerPool do
    end
  end

  it "fails tasks that exceed the configured timeout" do
    with_pool(task_timeout: 0.01) do |pool|
      task = pool.schedule { sleep 0.05; :late }
      expect { task.wait(timeout: 1) }.to raise_error(described_class::TaskTimeoutError)
    end
  end

  it "ignores invalid timeout values" do
    with_pool(task_timeout: "nope") do |pool|
      task = pool.schedule { sleep 0.01; :ok }
      expect(task.wait(timeout: 1)).to eq(:ok)
    end
  end

  it "propagates exceptions raised by the job block" do
    with_pool do |pool|
      task = pool.schedule { raise ArgumentError, "boom" }

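The task-timeout behaviour these removed specs exercised can be approximated with Ruby's stdlib `Timeout`. A minimal sketch, assuming nothing about the WorkerPool internals — `run_with_timeout` is a made-up helper, and the real pool raises `TaskTimeoutError` rather than returning a sentinel:

```ruby
require "timeout"

# Run a block under a deadline, converting a stdlib timeout into a
# sentinel value instead of an exception.
def run_with_timeout(seconds)
  Timeout.timeout(seconds) { yield }
rescue Timeout::Error
  :timed_out
end

puts run_with_timeout(1) { :ok }.inspect          # :ok
puts run_with_timeout(0.01) { sleep 0.1 }.inspect # :timed_out
```

Note that `Timeout.timeout` interrupts the block from a watchdog thread, which is why thread-pool implementations often prefer deadline checks over wrapping each job this way.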
@@ -13,12 +13,14 @@
See the License for the specific language governing permissions and
limitations under the License.
-->
<link rel="stylesheet" href="/assets/vendor/uplot/uPlot.min.css" />
<script src="/assets/vendor/uplot/uPlot.iife.min.js" defer></script>
<section class="charts-page">
  <header class="charts-page__intro">
    <h2>Network telemetry trends</h2>
    <p>Aggregated telemetry snapshots from every node in the past week.</p>
  </header>
  <div id="chartsPage" class="charts-page__content">
  <div id="chartsPage" class="charts-page__content" data-telemetry-root="true">
    <p class="charts-page__status">Loading aggregated telemetry charts…</p>
  </div>
</section>

@@ -16,7 +16,7 @@
<section class="federation-page federation-page--full-width">
  <div class="federation-page__content">
    <div class="federation-page__map-row">
      <%= erb :"shared/_map_panel", locals: { full_screen: true, legend_collapsed: true } %>
      <%= erb :"shared/_map_panel", locals: { full_screen: true } %>
    </div>
    <%= erb :"shared/_instances_table" %>
  </div>

+17 -71
@@ -75,27 +75,23 @@
main_classes = ["page-main"]
main_classes << "page-main--dashboard" if view_mode == :dashboard
main_classes << "page-main--full-screen" if full_screen_view
show_header = true
show_header = !full_screen_view
show_meta_info = true
show_auto_refresh_controls = view_mode != :federation
show_auto_fit_toggle = %i[dashboard map].include?(view_mode)
map_zoom_override = defined?(map_zoom) ? map_zoom : nil
show_info_button = true
show_info_button = !full_screen_view
show_footer = !full_screen_view
show_filter_input = !%i[node_detail charts federation].include?(view_mode)
show_auto_refresh_toggle = show_auto_refresh_controls
show_refresh_actions = show_auto_refresh_controls || view_mode == :federation
nodes_nav_href = "/nodes"
nodes_nav_active = %i[nodes node_detail].include?(view_mode)
federation_nav_enabled = !private_mode && federation_enabled
controls_classes = ["controls"]
controls_classes << "controls--full-screen" if full_screen_view
refresh_row_classes = ["refresh-row"]
refresh_info_text = full_screen_view ? nil : "#{channel} (#{frequency}) — active nodes: …"
refresh_row_classes << "refresh-row--no-info" if refresh_info_text.nil?
refresh_info_classes = ["refresh-info"]
refresh_info_classes << "refresh-info--hidden" if refresh_info_text.nil?
announcement_markup = announcement_html %>
refresh_info_classes << "refresh-info--hidden" if refresh_info_text.nil? %>
<body
  class="<%= body_classes.join(" ") %>"
  data-app-config="<%= Rack::Utils.escape_html(app_config_json) %>"
@@ -104,75 +100,25 @@
>
  <div class="<%= shell_classes.join(" ") %>">
    <% if show_header %>
      <% if announcement_markup && !announcement_markup.empty? %>
        <div class="announcement-banner" role="status" aria-live="polite">
          <p class="announcement-banner__content"><%= announcement_markup %></p>
        </div>
      <% end %>
      <header class="site-header">
        <div class="site-header__left<%= federation_nav_enabled ? " site-header__left--federation" : "" %>">
          <h1 class="site-title">
            <img src="/potatomesh-logo.svg" alt="" aria-hidden="true" />
            <span class="site-title-text"><%= site_name %></span>
          </h1>
          <% if federation_nav_enabled %>
            <div class="header-federation">
              <div class="instance-selector">
                <label class="visually-hidden" for="instanceSelect">Select a region</label>
                <select id="instanceSelect" class="instance-select" aria-label="Select instance region">
                  <option value=""><%= Rack::Utils.escape_html("Select region ...") %></option>
                </select>
              </div>
          <h1 class="site-title">
            <img src="/potatomesh-logo.svg" alt="" aria-hidden="true" />
            <span class="site-title-text"><%= site_name %></span>
          </h1>
          <% if !private_mode && federation_enabled %>
            <div class="header-federation">
              <div class="instance-selector">
                <label class="visually-hidden" for="instanceSelect">Select a region</label>
                <select id="instanceSelect" class="instance-select" aria-label="Select instance region">
                  <option value=""><%= Rack::Utils.escape_html("Select region ...") %></option>
                </select>
              </div>
          <% end %>
        </div>
        <div class="site-header__right">
          <nav class="site-nav" aria-label="Primary">
            <a href="/" class="site-nav__link<%= view_mode == :dashboard ? " is-active" : "" %>"<%= view_mode == :dashboard ? ' aria-current="page"' : "" %>>Dashboard</a>
            <a href="/map" class="site-nav__link<%= view_mode == :map ? " is-active" : "" %>"<%= view_mode == :map ? ' aria-current="page"' : "" %>>Map</a>
            <a href="/chat" class="site-nav__link<%= view_mode == :chat ? " is-active" : "" %>"<%= view_mode == :chat ? ' aria-current="page"' : "" %>>Chat</a>
            <a href="<%= nodes_nav_href %>" class="site-nav__link<%= nodes_nav_active ? " is-active" : "" %>"<%= nodes_nav_active ? ' aria-current="page"' : "" %>>Nodes</a>
            <a href="/charts" class="site-nav__link<%= view_mode == :charts ? " is-active" : "" %>"<%= view_mode == :charts ? ' aria-current="page"' : "" %>>Charts</a>
            <% if federation_nav_enabled %>
              <a href="/federation" class="site-nav__link js-federation-nav<%= view_mode == :federation ? " is-active" : "" %>" data-federation-label="Federation"<%= view_mode == :federation ? ' aria-current="page"' : "" %>>Federation</a>
            <% end %>
          </nav>
          <button
            id="mobileMenuToggle"
            class="icon-button menu-toggle"
            type="button"
            aria-label="Open navigation menu"
            aria-expanded="false"
            aria-controls="mobileMenu"
          >
            <span aria-hidden="true">☰</span>
          </button>
        </div>
      </header>
      <div id="mobileMenu" class="mobile-menu" hidden>
        <div class="mobile-menu__backdrop" data-mobile-menu-close></div>
        <div class="mobile-menu__panel" role="dialog" aria-modal="true" aria-labelledby="mobileMenuTitle" tabindex="-1">
          <div class="mobile-menu__header">
            <h2 id="mobileMenuTitle" class="mobile-menu__title">Menu</h2>
            <button class="icon-button mobile-menu__close" type="button" data-mobile-menu-close aria-label="Close navigation menu">
              <span aria-hidden="true">×</span>
            </button>
          </div>
          <nav class="mobile-nav" aria-label="Mobile">
            <a href="/" class="mobile-nav__link<%= view_mode == :dashboard ? " is-active" : "" %>"<%= view_mode == :dashboard ? ' aria-current="page"' : "" %>>Dashboard</a>
            <a href="/map" class="mobile-nav__link<%= view_mode == :map ? " is-active" : "" %>"<%= view_mode == :map ? ' aria-current="page"' : "" %>>Map</a>
            <a href="/chat" class="mobile-nav__link<%= view_mode == :chat ? " is-active" : "" %>"<%= view_mode == :chat ? ' aria-current="page"' : "" %>>Chat</a>
            <a href="<%= nodes_nav_href %>" class="mobile-nav__link<%= nodes_nav_active ? " is-active" : "" %>"<%= nodes_nav_active ? ' aria-current="page"' : "" %>>Nodes</a>
            <a href="/charts" class="mobile-nav__link<%= view_mode == :charts ? " is-active" : "" %>"<%= view_mode == :charts ? ' aria-current="page"' : "" %>>Charts</a>
            <% if federation_nav_enabled %>
              <a href="/federation" class="mobile-nav__link js-federation-nav<%= view_mode == :federation ? " is-active" : "" %>" data-federation-label="Federation"<%= view_mode == :federation ? ' aria-current="page"' : "" %>>Federation</a>
            <% end %>
          </nav>
        </div>
      </div>
    <% end %>
  </header>
<% end %>

<div id="metaRow" class="row meta">
<div class="row meta">
  <% if show_meta_info %>
    <div class="meta-info">
      <div class="<%= refresh_row_classes.join(" ") %>">

@@ -17,11 +17,14 @@
short_display = node_page_short_name || "Loading"
long_display = node_page_long_name
identifier_display = node_page_identifier || "" %>
<link rel="stylesheet" href="/assets/vendor/uplot/uPlot.min.css" />
<script src="/assets/vendor/uplot/uPlot.iife.min.js" defer></script>
<section
  id="nodeDetail"
  class="node-detail"
  data-node-reference="<%= Rack::Utils.escape_html(reference_json) %>"
  data-private-mode="<%= private_mode ? "true" : "false" %>"
  data-telemetry-root="true"
>
  <header class="node-detail__header">
    <h2 class="node-detail__title">

Some files were not shown because too many files have changed in this diff.