Compare commits


14 Commits

Author SHA1 Message Date
l5y 73e161f432 web: fix liveliness of api data hydration bug (#783)
* web: fix liveliness of api data hydration bug

* web: address review comments
2026-05-03 13:05:37 +02:00
l5y 7b38f92b2d web: refactor 6/7 node page (#777)
* web: refactor 6/7 node page

* web: address node-page refactor review and close coverage gaps

Fix the concurrency cap in fetchNodeDetailsIntoIndex so it actually
limits in-flight requests.  The previous implementation built each
fetch as an async IIFE (immediately-invoked function expression), so all N fetches started
the moment the loop ran; the slicing-then-Promise.all step only
changed when settlement was observed, not when work began.  Replace
the IIFE-then-batch pattern with a worker pool: a fixed-size set of
worker promises iterates a shared queue and only pulls the next
identifier once the previous fetch settles.

Reduce cross-module coupling around the role-aware short-name badge
by extracting renderRoleAwareBadge into a new badge.js module that
single-node-table, messages, detail-html, and traces import directly,
so the neighbour module is no longer pulled in by four non-neighbour
callers.  Tighten applyDetails in role-index.js by hoisting the
ternary into a single key binding and dropping the redundant
instanceof Map guard.

Close the patch-coverage gap reported by Codecov: add tests for
parameter-validation paths in bootstrap (parseReferencePayload,
normalizeNodeReference, fetchNodeDetailHtml, initializeNodeDetailPage),
the worker-pool branches in role-index (no-fetch, empty queue, 404,
non-success responses, and an explicit concurrency-cap assertion),
the badge fallback path, the nested-neighbor seedNeighborRoleIndex
branches, the renderNeighborBadge metadata-merge and short-name
fallback paths, the empty-trace and empty-chart short-circuits, and
single-node-table validation.  All ten node-page submodules now
report 100% line coverage.
2026-05-02 23:05:36 +02:00
l5y 1041e06644 data: refactor 4/7 interfaces (#775)
* data: refactor 4/7 interfaces

* data: address PR #775 review feedback

Fix the two CI test regressions caused by the package split:
- ``factory._load_ble_interface`` no longer keeps a stale module-level
  ``BLEInterface`` cache that survived ``monkeypatch`` teardown across
  tests. The package-level attribute is now the single cache; the
  ``factory.py`` global was removed.  This unblocks
  ``test_load_ble_interface_sets_global``.
- ``interfaces/__init__.py`` re-resolves ``SerialInterface`` and
  ``TCPInterface`` from ``meshtastic.*`` at package-load time so that a
  test that pops ``data.mesh_ingestor.interfaces`` from ``sys.modules``
  and re-imports picks up the freshly registered classes rather than
  whatever a cached ``factory.py`` first resolved.  This unblocks
  ``test_interfaces_patch_handles_preimported_serial``.
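The single-cache fix can be illustrated with a minimal sketch (hypothetical names, not the real ``factory.py``): when the package-level attribute is the only cache, patching and later restoring that attribute (as ``monkeypatch`` does) fully controls what subsequent loads return, whereas a second module-level copy would keep serving a stale class after teardown.

```python
import types

# Stand-in for the interfaces package; this attribute is the ONE cache.
interfaces_pkg = types.SimpleNamespace(BLEInterface=None)

def load_ble_interface(resolver):
    """Resolve and cache BLEInterface on the package attribute only.

    With no second module-level global, whatever a test sets on
    interfaces_pkg.BLEInterface is exactly what later calls observe.
    """
    if interfaces_pkg.BLEInterface is None:
        interfaces_pkg.BLEInterface = resolver()
    return interfaces_pkg.BLEInterface
```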

Restore 100% patch coverage on the interfaces subpackage by:
- Adding tests for previously uncovered, testable paths:
  ``_extract_host_node_id(None)``, ``_ensure_channel_metadata``,
  ``_normalise_nodeinfo_packet`` (None input + dict-conversion fallback),
  ``_resolve_lora_message`` (radio_section paths), ``_modem_preset``
  (preset attr fallback + unparseable value), ``_camelcase_enum_name``
  separator-only input, ``_region_frequency`` no-digit enum name,
  ``_ensure_radio_metadata`` unresolvable-message path, plus the
  unknown-section recursive branch of ``_candidate_node_id``.
- Marking genuinely unreachable defensive branches with
  ``pragma: no cover`` (BLE receive loop body, upstream API regression
  guards, patch re-entry guard, unreachable ``NoAvailableMeshInterface``
  fallback).
2026-05-02 22:59:52 +02:00
l5y e04fab5b19 web: refactor 7/7 main js (#778)
* web: refactor 7/7 main js

* web: refactor 7/7 main js

* web: address review feedback on 7/7 main.js refactor

* Consolidate the duplicate ./main/format-utils.js import block in
  main.js so all symbols come from a single, alphabetised import
  statement (review item: "Important — Duplicate format-utils.js
  import block").
* Replace the leftover stale JSDoc atop +createOfflineTileLayer+ with
  one clear "do not inline" DI block, and likewise expand the
  +fetchMessages+ wrapper docstring so future readers see the shim's
  purpose without hunting for the implementation (review nit:
  "thin wrappers ... worth a one-line JSDoc").
* Add per-module unit tests under
  public/assets/js/app/main/__tests__/ covering every previously-
  uncovered branch in the 9 modules codecov flagged: tile-coords,
  sort-comparators, fullscreen-helpers, format-utils, data-fetchers,
  data-merge, tooltip-html, long-link-router, and offline-tile-layer.
  This drives the codecov patch percentage on PR #778 from 78.99%
  to ~100% on the new modules and unblocks the codecov/patch gate.

JS suite: 1,114 tests, 0 failures.
2026-05-02 22:35:34 +02:00
l5y d1d0225197 web: refactor 5/7 node page charts (#776)
* web: refactor 5/7 node page charts

* web: address review feedback on node-page-charts split

* Drop the local stringOrNull/numberOrNull copies from node-page.js
  and import them from ./value-helpers.js so the shared module's
  stated dedup actually happens (review issue #1).  The two locals
  were byte-identical to the new shared module.
* Split the display-only formatters out of
  node-page-charts/format-utils.js into a sibling
  node-page-charts/display-formatters.js so format-utils.js carries
  only chart concerns (review issue #2).  The barrel
  node-page-charts.js re-exports both files so existing callers and
  tests keep working unchanged.
* Inline +fmtCurrent+ in node-page-charts/specs.js and drop the
  sideways import from short-info-telemetry.js so node-page-charts/
  no longer depends on an unrelated module (review issue #3).
* Add a dedicated value-helpers.test.js pinning the contract of
  +numberOrNull+ and +stringOrNull+ so they stop relying on
  transitive coverage from the chart suite (review issue #5).
2026-05-02 22:16:18 +02:00
l5y 0fbff32535 web: refactor 2/7 federation (#773)
* web: refactor 2/7 federation

* web: close federation coverage gaps and apply review nits

Address Codecov patch coverage feedback by adding rspec examples for
the 51 lines flagged across the new federation shards (announce,
crawl, validation, http_client, self_instance, instance_metrics,
announcer_threads, lifecycle, signature). Per-shard line coverage in
the federation directory is now 100%.

Apply two review-comment changes: rename the awkwardly-named
http_client_get.rb to instance_fetcher.rb (matching its semantic
role rather than the HTTP verb), and declare PotatoMesh::App::Federation
explicitly in the federation.rb manifest so the namespace is owned by
this file rather than implicitly created by whichever shard happens to
load first.
2026-05-02 22:12:20 +02:00
l5y 03caf391e7 web: refactor 1/7 data processing (#772)
* web: refactor 1/7 data processing

* web: close coverage gaps in data_processing submodules

Bring every file under lib/potato_mesh/application/data_processing/ to
100% line coverage so codecov/patch passes on the 1/7 refactor PR. The
gap came from relocating pre-existing untested branches; closing them
here keeps the subsequent refactor PRs in the series unblocked.

* Add unit tests covering canonical sender/recipient overrides,
  reply_id/emoji updates on existing rows, and the rare INSERT
  ConstraintException recovery path inside +insert_message+.
* Cover the non-canonical reporter and per-neighbour resolution
  branches in +insert_neighbors+.
* Cover the SQLException rescue in +upsert_ingestor+, the
  fallback_num branch in +touch_node_last_seen+, the limit fallback
  in +read_json_body+, the unrecognised-type branch in
  +store_decrypted_payload+, the +power+ telemetry_type fallback,
  the default-coercion path in +resolve_numeric_metric+, and the
  numeric/bare-hex paths in +canonical_node_parts+ and
  +coerce_trace_node_id+.

Drop dead code surfaced while pinning behaviour:

* +clear_encrypted+ in +insert_message+ has been initialised to
  +false+ and never reassigned since #633 dropped the
  decrypted-text override; remove it and the four dependent
  branches.
* The +rescue ArgumentError; nil+ tails in
  +identity.resolve_node_num+ and +traces.coerce_trace_node_id+ are
  unreachable because every +Integer(...)+ call inside is guarded by
  a regex pre-check.

Add a comment to the +data_processing.rb+ shim explaining that the
+require_relative+ list is ordered by dependency rather than
alphabetically, addressing review nit #5.
2026-05-02 22:08:21 +02:00
l5y f6aff3bdb8 data: refactor 3/7 protocols (#774)
* data: refactor 3/7 protocols

* data: address PR #774 review feedback

- Rewrite the parents[4] path comment in protocols/meshcore/debug_log.py
  to clearly explain why the index changed from parents[3] (the original
  pre-split index) without contradicting the code.
- Add tests covering the six lines flagged uncovered by codecov:
  * _process_self_info host-position branch (handlers.py:78)
  * on_contact_msg early-return for missing text/sender_ts (handlers.py:278)
  * close() RuntimeError swallow when loop closes mid-call (interface.py:155-156)
  * _run_meshcore wrapper around _ensure_channel_names failure (runner.py:131-132)

Restores 100% patch coverage on the meshcore package.
2026-05-02 22:05:43 +02:00
l5y 09d9e7be13 chore: bump version to 0.6.3 (#779)
* chore: bump version to 0.6.3

* chore: bump version to 0.6.3
2026-04-29 22:06:29 +02:00
l5y 43a5724b7f web: add seo improvements (#771)
* web: add seo improvements

* web: address review comments

* web: address review comments
2026-04-29 10:33:33 +02:00
l5y c4dd825d72 matrix: fix matrix preset labels (#770)
* matrix: fix matrix preset labels

* matrix: address review comments

* matrix: address review comments
2026-04-29 07:49:31 +02:00
l5y ee98efc120 web: rework map spider-net feature (#769)
* web: rework map spider-net feature

* web: address review comments

* web: address review comments
2026-04-29 07:06:18 +02:00
dependabot[bot] 521c2f2972 build(deps): bump openssl from 0.10.75 to 0.10.78 in /matrix (#766)
Bumps [openssl](https://github.com/rust-openssl/rust-openssl) from 0.10.75 to 0.10.78.
- [Release notes](https://github.com/rust-openssl/rust-openssl/releases)
- [Commits](https://github.com/rust-openssl/rust-openssl/compare/openssl-v0.10.75...openssl-v0.10.78)

---
updated-dependencies:
- dependency-name: openssl
  dependency-version: 0.10.78
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-23 19:15:29 +02:00
dependabot[bot] f8b02b9f24 build(deps): bump rustls-webpki from 0.103.10 to 0.103.13 in /matrix (#764)
Bumps [rustls-webpki](https://github.com/rustls/webpki) from 0.103.10 to 0.103.13.
- [Release notes](https://github.com/rustls/webpki/releases)
- [Commits](https://github.com/rustls/webpki/compare/v/0.103.10...v/0.103.13)

---
updated-dependencies:
- dependency-name: rustls-webpki
  dependency-version: 0.103.13
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-21 19:57:35 +02:00
149 changed files with 22153 additions and 10532 deletions
+27 -5
@@ -105,10 +105,32 @@ The web app can be configured with environment variables (defaults shown):
| `HIDDEN_CHANNELS` | _unset_ | Comma-separated channel names the ingestor will ignore when forwarding packets. |
| `FEDERATION` | `1` | Set to `1` to announce your instance and crawl peers, or `0` to disable federation. Private mode overrides this. |
| `PRIVATE` | `0` | Set to `1` to hide the chat UI, disable message APIs, and exclude hidden clients from public listings. |
| `OG_IMAGE_URL` | _unset_ | Optional absolute URL for the social preview image. Must use an `http://` or `https://` scheme; values with other schemes are ignored. Most social platforms (Facebook, LinkedIn, Slack, iMessage) require **HTTPS** to render the card. When set, replaces the runtime-generated `/og-image.png` so deployments without Chromium (or with size-conscious images) can point at a CDN. |
| `OG_IMAGE_TTL_SECONDS` | `3600` | Cache lifetime for the runtime-generated dashboard screenshot served at `/og-image.png`. |
| `FERRUM_BROWSER_PATH` | `/usr/bin/chromium` (Docker) | Path to the headless Chromium binary used by the Open Graph preview generator. |
The application derives SEO-friendly document titles, descriptions, and social
-preview tags from these existing configuration values and reuses the bundled
-logo for Open Graph and Twitter cards.
+preview tags from these existing configuration values. `/robots.txt` and
+`/sitemap.xml` are generated automatically and respect `PRIVATE`/`FEDERATION`
+toggles; markdown files in `pages/` may declare optional YAML frontmatter
+(`title`, `description`, `image`, `noindex`) for per-page overrides. The
+`image:` frontmatter must be an absolute `http(s)://` URL; other schemes are
+silently dropped to keep operators from accidentally leaking `data:` or
+`javascript:` URIs into Open Graph tags.
If `INSTANCE_DOMAIN` is unset in production the app emits a one-time `WARN`
at startup; canonical URLs and sitemap entries fall back to the inbound
`Host` header, which can be cache-poisoned by a misconfigured proxy. Set
`INSTANCE_DOMAIN` to your public hostname to silence the warning.
#### Open Graph preview image
The web container ships with Chromium so `/og-image.png` returns a fresh
screenshot of the live dashboard, cached on disk for `OG_IMAGE_TTL_SECONDS`.
Operators on size-constrained hosts can build a slim image by passing
`--build-arg WITH_OG_IMAGE=0` to `docker build`; the route then falls back to
the bundled `public/og-image-default.png`. Set `OG_IMAGE_URL` to an external
PNG/JPG (e.g. on a CDN) to avoid runtime capture entirely.
Example:
@@ -300,9 +322,9 @@ docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-arm64:latest
docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-armv7:latest
# version-pinned examples
-docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.6.2
-docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.6.2
-docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.6.2
+docker pull ghcr.io/l5yth/potato-mesh-web-linux-amd64:v0.6.3
+docker pull ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:v0.6.3
+docker pull ghcr.io/l5yth/potato-mesh-matrix-bridge-linux-amd64:v0.6.3
```
Note: `latest` is only published for non-prerelease versions. Pre-release tags
+2 -2
@@ -15,11 +15,11 @@
<key>CFBundlePackageType</key>
<string>FMWK</string>
<key>CFBundleShortVersionString</key>
-<string>0.6.2</string>
+<string>0.6.3</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
-<string>0.6.2</string>
+<string>0.6.3</string>
<key>MinimumOSVersion</key>
<string>14.0</string>
</dict>
+1 -1
@@ -1,7 +1,7 @@
name: potato_mesh_reader
description: Meshtastic Reader — read-only view for PotatoMesh messages.
publish_to: "none"
-version: 0.6.2
+version: 0.6.3
environment:
sdk: ">=3.4.0 <4.0.0"
+1 -1
@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
message information before forwarding it to the accompanying web application.
"""
-VERSION = "0.6.2"
+VERSION = "0.6.3"
"""Semantic version identifier shared with the dashboard and front-end."""
__version__ = VERSION
+22
@@ -145,3 +145,25 @@ Heartbeat payload:
All collection GET endpoints (`/api/nodes`, `/api/messages`, `/api/positions`, `/api/telemetry`, `/api/traces`, `/api/neighbors`, `/api/ingestors`) accept an optional `?protocol=<value>` query parameter. When present, only records whose `protocol` column matches the given value are returned. The `protocol` field is included in all GET responses.
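As a minimal sketch of the `?protocol=` behaviour just described (illustrative only; in the real application the filtering happens in SQL, not in Python):

```python
def filter_by_protocol(records, protocol=None):
    """Return records matching the optional protocol filter.

    When `protocol` is None (parameter absent), all records pass;
    otherwise only rows whose `protocol` field matches are kept.
    """
    if protocol is None:
        return list(records)
    return [r for r in records if r.get("protocol") == protocol]
```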
### GET endpoint time windows
Every read endpoint enforces a server-side rolling-window floor on the data it returns. The window is fixed per route and **cannot be widened by the caller** — explicit `?since=<unix_seconds>` is treated as `MAX(since, floor)`, so a `since` older than the floor is silently clamped to the floor. Pass a `since` newer than the floor when you want to be more restrictive (incremental refresh).
| Route | Floor (default) | Notes |
| --- | --- | --- |
| `GET /api/nodes` | 7 days | filtered by `nodes.last_heard` |
| `GET /api/messages` | 7 days | filtered by `messages.rx_time` |
| `GET /api/positions` | 7 days | filtered by `COALESCE(rx_time, position_time)` |
| `GET /api/telemetry` | 7 days | filtered by `COALESCE(rx_time, telemetry_time)` |
| `GET /api/instances` | 7 days | filtered by `instances.last_update_time` |
| `GET /api/neighbors` | **28 days** | sparse data; widened to keep slow scrapes visible |
| `GET /api/traces` | **28 days** | sparse data; same rationale |
| `GET /api/ingestors` | **28 days** | sparse heartbeats; same rationale |
| `GET /api/.../:id` (per-id lookup) | **28 days** | every per-id route uses the extended window so callers can backfill historical context for a specific node/conversation that has dropped out of the bulk view. The `since` clamp still applies. |
| `GET /api/telemetry/aggregated` | caller-controlled | `?windowSeconds=<N>` selects the bucket width and defaults to 86 400 (1 day). Bounded by `MAX_QUERY_LIMIT` on bucket count, not by a hard floor. |
| `GET /api/stats` | n/a | reports counts at fixed `hour`/`day`/`week`/`month` activity buckets. |
Federation peers should not assume an unbounded historical window: a peer that requests `/api/messages?since=0` from a partner expecting "everything" will only ever receive the last seven days. To pull older state, request the per-id endpoint (28 days) for the relevant nodes.
The constants live in `web/lib/potato_mesh/config.rb` (`week_seconds`, `four_weeks_seconds`).
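The `MAX(since, floor)` clamp described above can be sketched as follows (names are illustrative, not the server's actual Ruby code): callers can narrow the window with a newer `since`, but can never widen it past the route's floor.

```python
import time

WEEK_SECONDS = 7 * 24 * 3600  # mirrors the config's week_seconds constant

def effective_since(requested_since, floor_seconds=WEEK_SECONDS, now=None):
    """Clamp a caller-supplied ?since= to the route's rolling-window floor.

    A `since` older than the floor is silently raised to the floor;
    a `since` newer than the floor is honoured as-is.
    """
    now = time.time() if now is None else now
    floor = now - floor_seconds
    return max(requested_since, floor)
```

For the 28-day routes, the same function applies with `floor_seconds=28 * 24 * 3600`.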
-980
@@ -1,980 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mesh interface discovery helpers for interacting with Meshtastic hardware."""

from __future__ import annotations

import contextlib
import importlib
import ipaddress
import math
import re
import sys
import urllib.parse
from collections.abc import Mapping
from typing import TYPE_CHECKING, Any

try:  # pragma: no cover - dependency optional in tests
    import meshtastic  # type: ignore
except Exception:  # pragma: no cover - dependency optional in tests
    meshtastic = None  # type: ignore[assignment]

from . import channels, config, serialization
from .connection import (
    BLE_ADDRESS_RE,
    DEFAULT_TCP_PORT,
    DEFAULT_SERIAL_PATTERNS,
    default_serial_targets,
    parse_ble_target,
)
def _ensure_mapping(value) -> Mapping | None:
    """Return ``value`` as a mapping when conversion is possible."""
    if isinstance(value, Mapping):
        return value
    if hasattr(value, "__dict__") and isinstance(value.__dict__, Mapping):
        return value.__dict__
    with contextlib.suppress(Exception):
        converted = serialization._node_to_dict(value)
        if isinstance(converted, Mapping):
            return converted
    return None
def _is_nodeish_identifier(value: Any) -> bool:
    """Return ``True`` when ``value`` resembles a Meshtastic node identifier."""
    if isinstance(value, (int, float)):
        return False
    if not isinstance(value, str):
        return False
    trimmed = value.strip()
    if not trimmed:
        return False
    if trimmed.startswith("^"):
        return True
    if trimmed.startswith("!"):
        trimmed = trimmed[1:]
    elif trimmed.lower().startswith("0x"):
        trimmed = trimmed[2:]
    elif not re.search(r"[a-fA-F]", trimmed):
        # Bare decimal strings should not be treated as node ids when labelled "id".
        return False
    return bool(re.fullmatch(r"[0-9a-fA-F]{1,8}", trimmed))
def _candidate_node_id(mapping: Mapping | None) -> str | None:
    """Extract a canonical node identifier from ``mapping`` when present."""
    if mapping is None:
        return None
    node_keys = (
        "fromId",
        "from_id",
        "from",
        "nodeId",
        "node_id",
        "nodeNum",
        "node_num",
        "num",
        "userId",
        "user_id",
    )
    for key in node_keys:
        with contextlib.suppress(Exception):
            node_id = serialization._canonical_node_id(mapping.get(key))
            if node_id:
                return node_id
    with contextlib.suppress(Exception):
        value = mapping.get("id")
        if _is_nodeish_identifier(value):
            node_id = serialization._canonical_node_id(value)
            if node_id:
                return node_id
    user_section = _ensure_mapping(mapping.get("user"))
    if user_section is not None:
        for key in ("userId", "user_id", "num", "nodeNum", "node_num"):
            with contextlib.suppress(Exception):
                node_id = serialization._canonical_node_id(user_section.get(key))
                if node_id:
                    return node_id
        with contextlib.suppress(Exception):
            user_id_value = user_section.get("id")
            if _is_nodeish_identifier(user_id_value):
                node_id = serialization._canonical_node_id(user_id_value)
                if node_id:
                    return node_id
    decoded_section = _ensure_mapping(mapping.get("decoded"))
    if decoded_section is not None:
        node_id = _candidate_node_id(decoded_section)
        if node_id:
            return node_id
    payload_section = _ensure_mapping(mapping.get("payload"))
    if payload_section is not None:
        node_id = _candidate_node_id(payload_section)
        if node_id:
            return node_id
    for key in ("packet", "meta", "info"):
        node_id = _candidate_node_id(_ensure_mapping(mapping.get(key)))
        if node_id:
            return node_id
    for value in mapping.values():
        if isinstance(value, (list, tuple)):
            for item in value:
                node_id = _candidate_node_id(_ensure_mapping(item))
                if node_id:
                    return node_id
        else:
            node_id = _candidate_node_id(_ensure_mapping(value))
            if node_id:
                return node_id
    return None
def _extract_host_node_id(iface) -> str | None:
    """Return the canonical node identifier for the connected host device.

    Searches a sequence of well-known attribute names (``myInfo``,
    ``my_node_info``, etc.) on ``iface`` for a mapping that contains a
    recognisable node identifier, then falls back to the raw ``myNodeNum``
    integer attribute.

    Parameters:
        iface: Live Meshtastic interface object, or any object that exposes
            node-identity attributes in one of the expected forms.

    Returns:
        A canonical ``!xxxxxxxx`` node identifier, or ``None`` when no
        identifiable host node information is available.
    """
    if iface is None:
        return None

    def _as_mapping(candidate) -> Mapping | None:
        mapping = _ensure_mapping(candidate)
        if mapping is not None:
            return mapping
        if callable(candidate):
            with contextlib.suppress(Exception):
                return _ensure_mapping(candidate())
        return None

    candidates: list[Mapping] = []
    for attr in ("myInfo", "my_node_info", "myNodeInfo", "my_node", "localNode"):
        mapping = _as_mapping(getattr(iface, attr, None))
        if mapping is None:
            continue
        candidates.append(mapping)
        nested_info = _ensure_mapping(mapping.get("info"))
        if nested_info:
            candidates.append(nested_info)
    for mapping in candidates:
        node_id = _candidate_node_id(mapping)
        if node_id:
            return node_id
        for key in ("myNodeNum", "my_node_num", "myNodeId", "my_node_id"):
            node_id = serialization._canonical_node_id(mapping.get(key))
            if node_id:
                return node_id
    node_id = serialization._canonical_node_id(getattr(iface, "myNodeNum", None))
    if node_id:
        return node_id
    return None
def _normalise_nodeinfo_packet(packet) -> dict | None:
    """Return a dictionary view of ``packet`` with a guaranteed ``id`` when known."""
    mapping = _ensure_mapping(packet)
    if mapping is None:
        return None
    try:
        normalised: dict = dict(mapping)
    except Exception:
        try:
            normalised = {key: mapping[key] for key in mapping}
        except Exception:
            return None
    node_id = _candidate_node_id(normalised)
    if node_id and normalised.get("id") != node_id:
        normalised["id"] = node_id
    return normalised


if TYPE_CHECKING:  # pragma: no cover - import only used for type checking
    from meshtastic.ble_interface import BLEInterface as _BLEInterface

BLEInterface = None
def _patch_meshtastic_nodeinfo_handler() -> None:
    """Ensure Meshtastic nodeinfo packets always include an ``id`` field."""
    module = sys.modules.get("meshtastic", meshtastic)
    if module is None:
        with contextlib.suppress(Exception):
            module = importlib.import_module("meshtastic")
    if module is None:
        return
    globals()["meshtastic"] = module
    original = getattr(module, "_onNodeInfoReceive", None)
    if not callable(original):
        return
    mesh_interface_module = getattr(module, "mesh_interface", None)
    if mesh_interface_module is None:
        with contextlib.suppress(Exception):
            mesh_interface_module = importlib.import_module("meshtastic.mesh_interface")
    # Replace the module-level handler only once; the sentinel attribute prevents
    # re-wrapping if _patch_meshtastic_nodeinfo_handler() is called again after
    # the interface module is reloaded or re-imported.
    if not getattr(original, "_potato_mesh_safe_wrapper", False):
        module._onNodeInfoReceive = _build_safe_nodeinfo_callback(original)
    _patch_nodeinfo_handler_class(mesh_interface_module, module)
def _build_safe_nodeinfo_callback(original):
    """Return a wrapper that injects a missing ``id`` before dispatching."""

    def _safe_on_node_info_receive(iface, packet):  # type: ignore[override]
        normalised = _normalise_nodeinfo_packet(packet)
        if normalised is not None:
            packet = normalised
        try:
            return original(iface, packet)
        except KeyError as exc:  # pragma: no cover - defensive only
            if exc.args and exc.args[0] == "id":
                return None
            raise

    _safe_on_node_info_receive._potato_mesh_safe_wrapper = True  # type: ignore[attr-defined]
    return _safe_on_node_info_receive
def _update_nodeinfo_handler_aliases(original, replacement) -> None:
    """Ensure Meshtastic modules reference the patched ``NodeInfoHandler``."""
    for module_name, module in list(sys.modules.items()):
        if not module_name.startswith("meshtastic"):
            continue
        existing = getattr(module, "NodeInfoHandler", None)
        if existing is original:
            setattr(module, "NodeInfoHandler", replacement)
def _patch_nodeinfo_handler_class(
    mesh_interface_module, meshtastic_module=None
) -> None:
    """Wrap ``NodeInfoHandler.onReceive`` to normalise packets before callbacks."""
    if mesh_interface_module is None:
        return
    handler_class = getattr(mesh_interface_module, "NodeInfoHandler", None)
    if handler_class is None:
        return
    if getattr(handler_class, "_potato_mesh_safe_wrapper", False):
        return
    original_on_receive = getattr(handler_class, "onReceive", None)
    if not callable(original_on_receive):
        return

    class _SafeNodeInfoHandler(handler_class):  # type: ignore[misc]
        """Subclass that guards against missing node identifiers."""

        def onReceive(self, iface, packet):  # type: ignore[override]
            """Normalise ``packet`` before dispatching to the parent handler.

            Injects a canonical ``id`` field when one can be inferred from the
            packet's other fields, then delegates to the original
            ``NodeInfoHandler.onReceive``. A ``KeyError`` on ``"id"`` is
            suppressed because some firmware versions omit the field entirely.

            Parameters:
                iface: The Meshtastic interface that received the packet.
                packet: Raw nodeinfo packet dict, possibly lacking an ``id``
                    key.

            Returns:
                The return value of the parent handler, or ``None`` when a
                missing ``"id"`` key would otherwise raise.
            """
            normalised = _normalise_nodeinfo_packet(packet)
            if normalised is not None:
                packet = normalised
            try:
                return super().onReceive(iface, packet)
            except KeyError as exc:  # pragma: no cover - defensive only
                if exc.args and exc.args[0] == "id":
                    return None
                raise

    _SafeNodeInfoHandler.__name__ = handler_class.__name__
    _SafeNodeInfoHandler.__qualname__ = getattr(
        handler_class, "__qualname__", handler_class.__name__
    )
    _SafeNodeInfoHandler.__module__ = getattr(
        handler_class, "__module__", mesh_interface_module.__name__
    )
    _SafeNodeInfoHandler.__doc__ = getattr(
        handler_class, "__doc__", _SafeNodeInfoHandler.__doc__
    )
    _SafeNodeInfoHandler._potato_mesh_safe_wrapper = True  # type: ignore[attr-defined]
    setattr(mesh_interface_module, "NodeInfoHandler", _SafeNodeInfoHandler)
    if meshtastic_module is None:
        meshtastic_module = globals().get("meshtastic")
    if meshtastic_module is not None:
        existing_top = getattr(meshtastic_module, "NodeInfoHandler", None)
        if existing_top is handler_class:
            setattr(meshtastic_module, "NodeInfoHandler", _SafeNodeInfoHandler)
    _update_nodeinfo_handler_aliases(handler_class, _SafeNodeInfoHandler)


_patch_meshtastic_nodeinfo_handler()
try:  # pragma: no cover - optional dependency may be unavailable
    from meshtastic.serial_interface import SerialInterface  # type: ignore
except Exception:  # pragma: no cover - optional dependency may be unavailable
    SerialInterface = None  # type: ignore[assignment]

try:  # pragma: no cover - optional dependency may be unavailable
    from meshtastic.tcp_interface import TCPInterface  # type: ignore
except Exception:  # pragma: no cover - optional dependency may be unavailable
    TCPInterface = None  # type: ignore[assignment]
def _patch_meshtastic_ble_receive_loop() -> None:
    """Prevent ``UnboundLocalError`` crashes in Meshtastic's BLE reader."""
    try:
        from meshtastic import ble_interface as _ble_interface_module  # type: ignore
    except Exception:  # pragma: no cover - dependency optional in tests
        return
    ble_class = getattr(_ble_interface_module, "BLEInterface", None)
    if ble_class is None:
        return
    original = getattr(ble_class, "_receiveFromRadioImpl", None)
    if not callable(original):
        return
    if getattr(original, "_potato_mesh_safe_wrapper", False):
        return
    FROMRADIO_UUID = getattr(_ble_interface_module, "FROMRADIO_UUID", None)
    BleakDBusError = getattr(_ble_interface_module, "BleakDBusError", ())
    BleakError = getattr(_ble_interface_module, "BleakError", ())
    logger = getattr(_ble_interface_module, "logger", None)
    time = getattr(_ble_interface_module, "time", None)
    if not FROMRADIO_UUID or logger is None or time is None:
        return

    def _safe_receive_from_radio(self):  # type: ignore[override]
        while self._want_receive:
            if self.should_read:
                self.should_read = False
                retries: int = 0
                while self._want_receive:
                    if self.client is None:
                        logger.debug("BLE client is None, shutting down")
                        self._want_receive = False
                        continue
                    payload: bytes = b""
                    try:
                        payload = bytes(self.client.read_gatt_char(FROMRADIO_UUID))
                    except BleakDBusError as exc:
                        logger.debug("Device disconnected, shutting down %s", exc)
                        self._want_receive = False
                        payload = b""
                    except BleakError as exc:
                        if "Not connected" in str(exc):
                            logger.debug("Device disconnected, shutting down %s", exc)
                            self._want_receive = False
                            payload = b""
                        else:
                            raise ble_class.BLEError("Error reading BLE") from exc
                    if not payload:
                        if not self._want_receive:
                            break
                        if retries < 5:
                            time.sleep(0.1)
                            retries += 1
                            continue
                        break
                    logger.debug("FROMRADIO read: %s", payload.hex())
                    self._handleFromRadio(payload)
            else:
                time.sleep(0.01)

    _safe_receive_from_radio._potato_mesh_safe_wrapper = True  # type: ignore[attr-defined]
    ble_class._receiveFromRadioImpl = _safe_receive_from_radio


_patch_meshtastic_ble_receive_loop()
def _has_field(message: Any, field_name: str) -> bool:
    """Return ``True`` when ``message`` advertises ``field_name`` via ``HasField``."""
    if message is None:
        return False
    has_field = getattr(message, "HasField", None)
    if callable(has_field):
        try:
            return bool(has_field(field_name))
        except Exception:  # pragma: no cover - defensive guard
            return False
    return hasattr(message, field_name)
def _enum_name_from_field(message: Any, field_name: str, value: Any) -> str | None:
    """Return the enum name for ``value`` using ``message`` descriptors."""
    descriptor = getattr(message, "DESCRIPTOR", None)
    if descriptor is None:
        return None
    fields_by_name = getattr(descriptor, "fields_by_name", {})
    field_desc = fields_by_name.get(field_name)
    if field_desc is None:
        return None
    enum_type = getattr(field_desc, "enum_type", None)
    if enum_type is None:
        return None
    enum_values = getattr(enum_type, "values_by_number", {})
    enum_value = enum_values.get(value)
    if enum_value is None:
        return None
    return getattr(enum_value, "name", None)
def _resolve_lora_message(local_config: Any) -> Any | None:
"""Return the LoRa configuration sub-message from ``local_config``."""
if local_config is None:
return None
if _has_field(local_config, "lora"):
candidate = getattr(local_config, "lora", None)
if candidate is not None:
return candidate
radio_section = getattr(local_config, "radio", None)
if radio_section is not None:
if _has_field(radio_section, "lora"):
return getattr(radio_section, "lora", None)
if hasattr(radio_section, "lora"):
return getattr(radio_section, "lora")
if hasattr(local_config, "lora"):
return getattr(local_config, "lora")
return None
# Maps Meshtastic region enum name to (base_freq_MHz, channel_spacing_MHz).
# Values are derived from the Meshtastic firmware RegionInfo tables.
# Used by _computed_channel_frequency to derive the actual radio frequency
# from the region and channel index.
_REGION_CHANNEL_PARAMS: dict[str, tuple[float, float]] = {
"US": (902.0, 0.25), # 902928 MHz; e.g. ch 52 ≈ 915 MHz at 250 kHz spacing
"EU_433": (433.175, 0.2),
"EU_868": (869.525, 0.5), # actual primary ≈ 869.525 MHz, not 868
"CN": (470.0, 0.2),
"JP": (920.875, 0.5),
"ANZ": (916.0, 0.5),
"KR": (921.9, 0.5),
"TW": (923.0, 0.5),
"RU": (868.9, 0.5),
"IN": (865.0, 0.5),
"NZ_865": (864.0, 0.5),
"TH": (920.0, 0.5),
"LORA_24": (2400.0, 0.5),
"UA_433": (433.175, 0.2),
"UA_868": (868.0, 0.5),
"MY_433": (433.0, 0.2),
"MY_919": (919.0, 0.5),
"SG_923": (923.0, 0.5),
"PH_433": (433.0, 0.2),
"PH_868": (868.0, 0.5),
"PH_915": (915.0, 0.5),
"ANZ_433": (433.0, 0.2),
"KZ_433": (433.0, 0.2),
"KZ_863": (863.125, 0.5),
"NP_865": (865.0, 0.5),
"BR_902": (902.0, 0.25),
# IL (Israel) is absent from meshtastic Python lib 2.7.8 protobufs; the
# enum value is unresolvable at runtime. Operators on IL firmware should
# set the FREQUENCY environment variable to override.
}
def _computed_channel_frequency(
enum_name: str | None,
channel_num: int | None,
) -> int | None:
"""Compute the floor MHz frequency for a known region and channel index.
Looks up *enum_name* in :data:`_REGION_CHANNEL_PARAMS` and returns
``floor(base_freq + channel_num * spacing)``. Returns ``None`` when the
region is not in the table. A missing or negative *channel_num* is
treated as 0 so the base frequency is always usable.
Args:
enum_name: Region enum name as returned by
:func:`_enum_name_from_field`, e.g. ``"EU_868"`` or ``"US"``.
channel_num: Zero-based channel index from the device LoRa config.
Returns:
Floored MHz as :class:`int`, or ``None`` if the region is unknown.
"""
if enum_name is None:
return None
params = _REGION_CHANNEL_PARAMS.get(enum_name)
if params is None:
return None
base, spacing = params
idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
return math.floor(base + idx * spacing)
def _region_frequency(lora_message: Any) -> int | float | str | None:
"""Derive the LoRa region frequency in MHz or the region label from ``lora_message``.
Frequency sources are tried in priority order:
1. ``override_frequency > 0`` — explicit radio override, floored to MHz.
2. :data:`_REGION_CHANNEL_PARAMS` lookup + ``channel_num`` — actual
band-plan frequency derived from the device's region and channel index,
floored to MHz.
    3. First digit token ≥ 100 parsed from the region enum name string.
    4. Last digit token in the enum name otherwise (reversed scan).
5. Full enum name string, raw integer ≥ 100, or raw string as a label.
Args:
lora_message: A LoRa config protobuf message or compatible object.
Returns:
An integer MHz frequency, a fallback string label, or ``None``.
"""
if lora_message is None:
return None
# Step 1 — explicit radio override
override_frequency = getattr(lora_message, "override_frequency", None)
if override_frequency is not None:
if isinstance(override_frequency, (int, float)):
if override_frequency > 0:
return math.floor(override_frequency)
elif override_frequency:
return override_frequency
region_value = getattr(lora_message, "region", None)
if region_value is None:
return None
enum_name = _enum_name_from_field(lora_message, "region", region_value)
# Step 2 — lookup table + channel offset (actual band-plan frequency)
if enum_name:
channel_num = getattr(lora_message, "channel_num", None)
computed = _computed_channel_frequency(enum_name, channel_num)
if computed is not None:
return computed
    # Steps 3–5 — parse digits from enum name (fallback for unknown regions)
if enum_name:
digits = re.findall(r"\d+", enum_name)
for token in digits:
try:
freq = int(token)
except ValueError: # pragma: no cover - regex guarantees digits
continue
if freq >= 100:
return freq
for token in reversed(digits):
try:
return int(token)
except ValueError: # pragma: no cover - defensive only
continue
return enum_name
if isinstance(region_value, int) and region_value >= 100:
return region_value
if isinstance(region_value, str) and region_value:
return region_value
return None
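The digit-parsing fallback for regions missing from the table can be sketched on its own: the first token of at least 100 wins as a MHz value, otherwise the last token, otherwise the raw name is returned as a label.

```python
import re

# Condensed sketch of the digit-token fallback in _region_frequency; the real
# code only reaches this after the override and band-plan lookups fail.
def fallback_frequency(enum_name: str):
    digits = re.findall(r"\d+", enum_name)
    for token in digits:
        if int(token) >= 100:
            return int(token)  # first plausible MHz value
    if digits:
        return int(digits[-1])  # last token, even if < 100
    return enum_name  # no digits at all: return the name as a label


print(fallback_frequency("EU_868"))   # 868
print(fallback_frequency("LORA_24"))  # 24
print(fallback_frequency("UNSET"))    # UNSET
```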
def _camelcase_enum_name(name: str | None) -> str | None:
"""Convert ``name`` from ``SCREAMING_SNAKE`` to ``CamelCase``."""
if not name:
return None
parts = re.split(r"[^0-9A-Za-z]+", name.strip())
camel_parts = [part.capitalize() for part in parts if part]
if not camel_parts:
return None
return "".join(camel_parts)
def _modem_preset(lora_message: Any) -> str | None:
"""Return the CamelCase modem preset configured on ``lora_message``."""
if lora_message is None:
return None
descriptor = getattr(lora_message, "DESCRIPTOR", None)
fields_by_name = getattr(descriptor, "fields_by_name", {}) if descriptor else {}
if "modem_preset" in fields_by_name:
preset_field = "modem_preset"
elif "preset" in fields_by_name:
preset_field = "preset"
elif hasattr(lora_message, "modem_preset"):
preset_field = "modem_preset"
elif hasattr(lora_message, "preset"):
preset_field = "preset"
else:
return None
preset_value = getattr(lora_message, preset_field, None)
if preset_value is None:
return None
enum_name = _enum_name_from_field(lora_message, preset_field, preset_value)
if isinstance(enum_name, str) and enum_name:
return _camelcase_enum_name(enum_name)
if isinstance(preset_value, str) and preset_value:
return _camelcase_enum_name(preset_value)
return None
def _ensure_radio_metadata(iface: Any) -> None:
"""Populate cached LoRa metadata by inspecting ``iface`` when available."""
if iface is None:
return
try:
wait_for_config = getattr(iface, "waitForConfig", None)
if callable(wait_for_config):
wait_for_config()
except Exception: # pragma: no cover - hardware dependent guard
pass
local_node = getattr(iface, "localNode", None)
local_config = getattr(local_node, "localConfig", None) if local_node else None
lora_message = _resolve_lora_message(local_config)
if lora_message is None:
return
frequency = _region_frequency(lora_message)
preset = _modem_preset(lora_message)
updated = False
if frequency is not None and getattr(config, "LORA_FREQ", None) is None:
config.LORA_FREQ = frequency
updated = True
if preset is not None and getattr(config, "MODEM_PRESET", None) is None:
config.MODEM_PRESET = preset
updated = True
if updated:
config._debug_log(
"Captured LoRa radio metadata",
context="interfaces.ensure_radio_metadata",
severity="info",
always=True,
lora_freq=frequency,
modem_preset=preset,
)
def _ensure_channel_metadata(iface: Any) -> None:
"""Capture channel metadata by inspecting ``iface`` once per runtime."""
if iface is None:
return
try:
channels.capture_from_interface(iface)
except Exception as exc: # pragma: no cover - defensive instrumentation
config._debug_log(
"Failed to capture channel metadata",
context="interfaces.ensure_channel_metadata",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
_DEFAULT_TCP_TARGET = "http://127.0.0.1"
# Private aliases so that existing internal callers and monkeypatching in
# tests keep working without modification.
_DEFAULT_TCP_PORT = DEFAULT_TCP_PORT # backward-compat alias
_DEFAULT_SERIAL_PATTERNS = DEFAULT_SERIAL_PATTERNS # backward-compat alias
_BLE_ADDRESS_RE = BLE_ADDRESS_RE # backward-compat alias
class _DummySerialInterface:
"""In-memory replacement for ``meshtastic.serial_interface.SerialInterface``."""
def __init__(self) -> None:
self.nodes: dict = {}
def close(self) -> None: # pragma: no cover - nothing to close
"""No-op: the dummy interface holds no resources to release."""
pass
_parse_ble_target = parse_ble_target # backward-compat alias
def _parse_network_target(value: str) -> tuple[str, int] | None:
"""Return ``(host, port)`` when ``value`` is a numeric IP address string.
Only literal IPv4 or IPv6 addresses are accepted, optionally paired with a
port or scheme. Callers that start from hostnames should resolve them to an
address before invoking this helper.
Parameters:
value: Numeric IP literal or URL describing the TCP interface.
Returns:
A ``(host, port)`` tuple or ``None`` when parsing fails.
"""
if not value:
return None
value = value.strip()
if not value:
return None
def _validated_result(host: str | None, port: int | None) -> tuple[str, int] | None:
if not host:
return None
try:
ipaddress.ip_address(host)
except ValueError:
return None
return host, port or _DEFAULT_TCP_PORT
parsed_values = []
if "://" in value:
parsed_values.append(urllib.parse.urlparse(value, scheme="tcp"))
parsed_values.append(urllib.parse.urlparse(f"//{value}", scheme="tcp"))
for parsed in parsed_values:
try:
port = parsed.port
except ValueError:
port = None
result = _validated_result(parsed.hostname, port)
if result:
return result
# For bare "host:port" strings that urlparse may misparse, try a manual
# partition. The `startswith("[")` guard excludes IPv6 bracket notation
# (e.g. "[::1]:8080") because those already succeed via urlparse above.
if value.count(":") == 1 and not value.startswith("["):
host, _, port_text = value.partition(":")
try:
port = int(port_text) if port_text else None
except ValueError:
port = None
result = _validated_result(host, port)
if result:
return result
return _validated_result(value, None)
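The urlparse-based path of this parser can be sketched in a condensed, self-contained form. The default port of 4403 below is an assumption (the real value comes from `DEFAULT_TCP_PORT` in `..connection`), and the bare `host:port` fallback branch is omitted for brevity:

```python
import ipaddress
import urllib.parse

# Condensed sketch of _parse_network_target's urlparse path; 4403 is an
# assumed default port, not read from the real connection module.
def parse_network_target(value: str, default_port: int = 4403):
    value = (value or "").strip()
    if not value:
        return None
    candidates = []
    if "://" in value:
        candidates.append(urllib.parse.urlparse(value, scheme="tcp"))
    candidates.append(urllib.parse.urlparse(f"//{value}", scheme="tcp"))
    for parsed in candidates:
        try:
            port = parsed.port
        except ValueError:  # e.g. non-numeric port text
            port = None
        host = parsed.hostname
        if not host:
            continue
        try:
            ipaddress.ip_address(host)  # reject hostnames: literals only
        except ValueError:
            continue
        return host, port or default_port
    return None


print(parse_network_target("192.168.0.5"))       # ('192.168.0.5', 4403)
print(parse_network_target("tcp://[::1]:8080"))  # ('::1', 8080)
print(parse_network_target("meshtastic.local"))  # None: hostnames rejected
```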
def _load_ble_interface():
"""Return :class:`meshtastic.ble_interface.BLEInterface` when available.
Returns:
The resolved BLE interface class.
Raises:
RuntimeError: If the BLE dependencies are not installed.
"""
global BLEInterface
if BLEInterface is not None:
return BLEInterface
try:
from meshtastic.ble_interface import BLEInterface as _resolved_interface
except ImportError as exc: # pragma: no cover - exercised in non-BLE envs
raise RuntimeError(
"BLE interface requested but the Meshtastic BLE dependencies are not installed. "
"Install the 'meshtastic[ble]' extra to enable BLE support."
) from exc
BLEInterface = _resolved_interface
try:
import sys
for module_name in ("data.mesh_ingestor", "data.mesh"):
mesh_module = sys.modules.get(module_name)
if mesh_module is not None:
setattr(mesh_module, "BLEInterface", BLEInterface)
except Exception: # pragma: no cover - defensive only
pass
return _resolved_interface
def _create_serial_interface(port: str) -> tuple[object, str]:
"""Return an appropriate mesh interface for ``port``.
Parameters:
port: User-supplied port string which may represent serial, BLE or TCP.
Returns:
``(interface, resolved_target)`` describing the created interface.
"""
port_value = (port or "").strip()
if port_value.lower() in {"", "mock", "none", "null", "disabled"}:
config._debug_log(
"Using dummy serial interface",
context="interfaces.serial",
port=port_value,
)
return _DummySerialInterface(), "mock"
ble_target = _parse_ble_target(port_value)
if ble_target:
# Determine if it's a MAC address or UUID
address_type = "MAC" if ":" in ble_target else "UUID"
config._debug_log(
"Using BLE interface",
context="interfaces.ble",
address=ble_target,
address_type=address_type,
)
return _load_ble_interface()(address=ble_target), ble_target
network_target = _parse_network_target(port_value)
if network_target:
host, tcp_port = network_target
config._debug_log(
"Using TCP interface",
context="interfaces.tcp",
host=host,
port=tcp_port,
)
return (
TCPInterface(hostname=host, portNumber=tcp_port),
f"tcp://{host}:{tcp_port}",
)
config._debug_log(
"Using serial interface",
context="interfaces.serial",
port=port_value,
)
return SerialInterface(devPath=port_value), port_value
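The dispatch precedence above (mock sentinel, then BLE, then TCP, then serial) can be illustrated with a toy classifier. The regexes below are deliberately crude stand-ins: the real code delegates to `_parse_ble_target` (which also accepts UUIDs) and `_parse_network_target`.

```python
import re

# Hedged sketch of the target-string dispatch order in _create_serial_interface.
# MAC_RE/IP_RE are simplified hypothetical predicates, not the real parsers.
MAC_RE = re.compile(r"^(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")
IP_RE = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}(?::\d+)?$")


def classify_target(port: str) -> str:
    value = (port or "").strip()
    if value.lower() in {"", "mock", "none", "null", "disabled"}:
        return "mock"
    if MAC_RE.match(value):
        return "ble"
    if IP_RE.match(value):
        return "tcp"
    return "serial"  # anything else is treated as a serial device path


print(classify_target("mock"))               # mock
print(classify_target("AA:BB:CC:DD:EE:FF"))  # ble
print(classify_target("192.168.0.5:4403"))   # tcp
print(classify_target("/dev/ttyUSB0"))       # serial
```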
class NoAvailableMeshInterface(RuntimeError):
"""Raised when no default mesh interface can be created."""
_default_serial_targets = default_serial_targets # backward-compat alias
def _create_default_interface() -> tuple[object, str]:
"""Attempt to create the default mesh interface, raising on failure.
Returns:
``(interface, resolved_target)`` for the discovered connection.
Raises:
NoAvailableMeshInterface: When no usable connection can be created.
"""
errors: list[tuple[str, Exception]] = []
for candidate in _default_serial_targets():
try:
return _create_serial_interface(candidate)
except Exception as exc: # pragma: no cover - hardware dependent
errors.append((candidate, exc))
config._debug_log(
"Failed to open serial candidate",
context="interfaces.auto_discovery",
target=candidate,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
try:
return _create_serial_interface(_DEFAULT_TCP_TARGET)
except Exception as exc: # pragma: no cover - network dependent
errors.append((_DEFAULT_TCP_TARGET, exc))
config._debug_log(
"Failed to open TCP fallback",
context="interfaces.auto_discovery",
target=_DEFAULT_TCP_TARGET,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if errors:
summary = "; ".join(f"{target}: {error}" for target, error in errors)
raise NoAvailableMeshInterface(
f"no mesh interface available ({summary})"
) from errors[-1][1]
raise NoAvailableMeshInterface("no mesh interface available")
__all__ = [
"BLEInterface",
"NoAvailableMeshInterface",
"_ensure_channel_metadata",
"_ensure_radio_metadata",
"_extract_host_node_id",
"_DummySerialInterface",
"_DEFAULT_TCP_PORT",
"_DEFAULT_TCP_TARGET",
"_create_default_interface",
"_create_serial_interface",
"_default_serial_targets",
"_load_ble_interface",
"_parse_ble_target",
"_parse_network_target",
"SerialInterface",
"TCPInterface",
]
@@ -0,0 +1,108 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mesh interface discovery helpers for interacting with Meshtastic hardware."""
from __future__ import annotations
# The patches subpackage applies meshtastic monkey-patches at import time so
# subsequent calls (and any direct ``import meshtastic`` from elsewhere)
# inherit the safe wrappers. Apply BEFORE pulling in factory.py because
# factory.py imports ``meshtastic.serial_interface`` / ``meshtastic.tcp_interface``
# and those modules transitively load NodeInfoHandler.
from .patches import (
_build_safe_nodeinfo_callback,
_patch_meshtastic_ble_receive_loop,
_patch_meshtastic_nodeinfo_handler,
_patch_nodeinfo_handler_class,
_update_nodeinfo_handler_aliases,
apply_all as _apply_all_patches,
)
_apply_all_patches()
from ._aliases import ( # noqa: E402 - keep grouped with sibling re-exports.
_BLE_ADDRESS_RE,
_DEFAULT_SERIAL_PATTERNS,
_DEFAULT_TCP_PORT,
_default_serial_targets,
_parse_ble_target,
)
from .channels_meta import _ensure_channel_metadata # noqa: E402
from .factory import ( # noqa: E402
NoAvailableMeshInterface,
_DummySerialInterface,
_create_default_interface,
_create_serial_interface,
_load_ble_interface,
)
# Resolve the meshtastic interface classes at package-load time so that
# repeated imports (e.g. tests that pop ``data.mesh_ingestor.interfaces`` from
# ``sys.modules`` and re-import after swapping ``meshtastic.*`` submodules)
# pick up the freshly registered classes rather than whatever a cached
# ``factory.py`` first resolved. ``factory.py`` no longer keeps duplicate
# module-level globals; lookups go through the package surface only.
BLEInterface = None
"""Resolved on demand by :func:`_load_ble_interface` to keep BLE optional."""
try: # pragma: no cover - optional dependency may be unavailable
from meshtastic.serial_interface import (
SerialInterface,
) # noqa: E402 # type: ignore
except Exception: # pragma: no cover - optional dependency may be unavailable
SerialInterface = None # type: ignore[assignment]
try: # pragma: no cover - optional dependency may be unavailable
from meshtastic.tcp_interface import TCPInterface # noqa: E402 # type: ignore
except Exception: # pragma: no cover - optional dependency may be unavailable
TCPInterface = None # type: ignore[assignment]
from .identity import ( # noqa: E402
_candidate_node_id,
_ensure_mapping,
_extract_host_node_id,
_is_nodeish_identifier,
)
from .nodeinfo_normalize import _normalise_nodeinfo_packet # noqa: E402
from .radio import ( # noqa: E402
_REGION_CHANNEL_PARAMS,
_camelcase_enum_name,
_computed_channel_frequency,
_ensure_radio_metadata,
_enum_name_from_field,
_has_field,
_modem_preset,
_region_frequency,
_resolve_lora_message,
)
from .targets import _DEFAULT_TCP_TARGET, _parse_network_target # noqa: E402
__all__ = [
"BLEInterface",
"NoAvailableMeshInterface",
"_ensure_channel_metadata",
"_ensure_radio_metadata",
"_extract_host_node_id",
"_DummySerialInterface",
"_DEFAULT_TCP_PORT",
"_DEFAULT_TCP_TARGET",
"_create_default_interface",
"_create_serial_interface",
"_default_serial_targets",
"_load_ble_interface",
"_parse_ble_target",
"_parse_network_target",
"SerialInterface",
"TCPInterface",
]
@@ -0,0 +1,33 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Backward-compat aliases for renames hidden behind the package barrel."""
from __future__ import annotations
from ..connection import (
BLE_ADDRESS_RE,
DEFAULT_SERIAL_PATTERNS,
DEFAULT_TCP_PORT,
default_serial_targets,
parse_ble_target,
)
# Private aliases so that existing internal callers and monkeypatching in
# tests keep working without modification.
_BLE_ADDRESS_RE = BLE_ADDRESS_RE
_DEFAULT_TCP_PORT = DEFAULT_TCP_PORT
_DEFAULT_SERIAL_PATTERNS = DEFAULT_SERIAL_PATTERNS
_parse_ble_target = parse_ble_target
_default_serial_targets = default_serial_targets
@@ -0,0 +1,39 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""One-shot channel metadata capture from a live Meshtastic interface."""
from __future__ import annotations
from typing import Any
from .. import channels, config
def _ensure_channel_metadata(iface: Any) -> None:
"""Capture channel metadata by inspecting ``iface`` once per runtime."""
if iface is None:
return
try:
channels.capture_from_interface(iface)
except Exception as exc: # pragma: no cover - defensive instrumentation
config._debug_log(
"Failed to capture channel metadata",
context="interfaces.ensure_channel_metadata",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
@@ -0,0 +1,191 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Build Meshtastic interface objects from caller-supplied target strings."""
from __future__ import annotations
import sys
from typing import TYPE_CHECKING
from .. import config
from ..connection import parse_ble_target
from .targets import _DEFAULT_TCP_TARGET, _parse_network_target
if TYPE_CHECKING: # pragma: no cover - import only used for type checking
from meshtastic.ble_interface import BLEInterface as _BLEInterface
# All cached interface classes live on the parent package
# (``data.mesh_ingestor.interfaces``). Tests set them via
# ``monkeypatch.setattr(mesh, "BLEInterface", ...)`` and the package proxy
# routes those writes through to ``interfaces``; keeping a duplicate global on
# this submodule would cache the wrong value across tests because
# ``monkeypatch`` only restores attributes it set. The ``__init__.py``
# re-resolves ``SerialInterface``/``TCPInterface`` from ``meshtastic.*`` at
# package-load time and assigns them to package-level attributes.
class _DummySerialInterface:
"""In-memory replacement for ``meshtastic.serial_interface.SerialInterface``."""
def __init__(self) -> None:
self.nodes: dict = {}
def close(self) -> None: # pragma: no cover - nothing to close
"""No-op: the dummy interface holds no resources to release."""
pass
class NoAvailableMeshInterface(RuntimeError):
"""Raised when no default mesh interface can be created."""
def _load_ble_interface():
"""Return :class:`meshtastic.ble_interface.BLEInterface` when available.
Returns:
The resolved BLE interface class.
Raises:
RuntimeError: If the BLE dependencies are not installed.
"""
pkg = sys.modules.get("data.mesh_ingestor.interfaces")
pkg_ble = getattr(pkg, "BLEInterface", None) if pkg is not None else None
if pkg_ble is not None:
return pkg_ble
try:
from meshtastic.ble_interface import BLEInterface as _resolved_interface
except ImportError as exc: # pragma: no cover - exercised in non-BLE envs
raise RuntimeError(
"BLE interface requested but the Meshtastic BLE dependencies are not installed. "
"Install the 'meshtastic[ble]' extra to enable BLE support."
) from exc
if pkg is not None:
setattr(pkg, "BLEInterface", _resolved_interface)
for module_name in ("data.mesh_ingestor", "data.mesh"):
mesh_module = sys.modules.get(module_name)
if mesh_module is not None:
setattr(mesh_module, "BLEInterface", _resolved_interface)
return _resolved_interface
def _create_serial_interface(port: str) -> tuple[object, str]:
"""Return an appropriate mesh interface for ``port``.
Parameters:
port: User-supplied port string which may represent serial, BLE or TCP.
Returns:
``(interface, resolved_target)`` describing the created interface.
"""
pkg = sys.modules["data.mesh_ingestor.interfaces"]
port_value = (port or "").strip()
if port_value.lower() in {"", "mock", "none", "null", "disabled"}:
config._debug_log(
"Using dummy serial interface",
context="interfaces.serial",
port=port_value,
)
return _DummySerialInterface(), "mock"
ble_target = parse_ble_target(port_value)
if ble_target:
# Determine if it's a MAC address or UUID
address_type = "MAC" if ":" in ble_target else "UUID"
config._debug_log(
"Using BLE interface",
context="interfaces.ble",
address=ble_target,
address_type=address_type,
)
return _load_ble_interface()(address=ble_target), ble_target
network_target = _parse_network_target(port_value)
if network_target:
host, tcp_port = network_target
config._debug_log(
"Using TCP interface",
context="interfaces.tcp",
host=host,
port=tcp_port,
)
# Resolve via the package so test fakes installed via ``sys.modules``
# patches at ``meshtastic.tcp_interface`` propagate when interfaces
# was imported earlier.
tcp_cls = getattr(pkg, "TCPInterface", None)
return (
tcp_cls(hostname=host, portNumber=tcp_port),
f"tcp://{host}:{tcp_port}",
)
config._debug_log(
"Using serial interface",
context="interfaces.serial",
port=port_value,
)
serial_cls = getattr(pkg, "SerialInterface", None)
return serial_cls(devPath=port_value), port_value
def _create_default_interface() -> tuple[object, str]:
"""Attempt to create the default mesh interface, raising on failure.
Returns:
``(interface, resolved_target)`` for the discovered connection.
Raises:
NoAvailableMeshInterface: When no usable connection can be created.
"""
# Resolve via the package surface so that monkeypatches against the
# backward-compat aliases (``mesh._default_serial_targets``,
# ``mesh._create_serial_interface``) propagate at call time.
pkg = sys.modules["data.mesh_ingestor.interfaces"]
default_serial_targets = pkg._default_serial_targets
create_serial = pkg._create_serial_interface
errors: list[tuple[str, Exception]] = []
for candidate in default_serial_targets():
try:
return create_serial(candidate)
except Exception as exc: # pragma: no cover - hardware dependent
errors.append((candidate, exc))
config._debug_log(
"Failed to open serial candidate",
context="interfaces.auto_discovery",
target=candidate,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
try:
return create_serial(_DEFAULT_TCP_TARGET)
except Exception as exc: # pragma: no cover - network dependent
errors.append((_DEFAULT_TCP_TARGET, exc))
config._debug_log(
"Failed to open TCP fallback",
context="interfaces.auto_discovery",
target=_DEFAULT_TCP_TARGET,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if errors:
summary = "; ".join(f"{target}: {error}" for target, error in errors)
raise NoAvailableMeshInterface(
f"no mesh interface available ({summary})"
) from errors[-1][1]
raise NoAvailableMeshInterface( # pragma: no cover - defensive only
"no mesh interface available"
)
@@ -0,0 +1,194 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mapping/identifier helpers for Meshtastic interface objects."""
from __future__ import annotations
import contextlib
import re
from collections.abc import Mapping
from typing import Any
from .. import serialization
def _ensure_mapping(value) -> Mapping | None:
"""Return ``value`` as a mapping when conversion is possible."""
if isinstance(value, Mapping):
return value
if hasattr(value, "__dict__") and isinstance(value.__dict__, Mapping):
return value.__dict__
with contextlib.suppress(Exception):
converted = serialization._node_to_dict(value)
if isinstance(converted, Mapping):
return converted
return None
def _is_nodeish_identifier(value: Any) -> bool:
"""Return ``True`` when ``value`` resembles a Meshtastic node identifier."""
if isinstance(value, (int, float)):
return False
if not isinstance(value, str):
return False
trimmed = value.strip()
if not trimmed:
return False
if trimmed.startswith("^"):
return True
if trimmed.startswith("!"):
trimmed = trimmed[1:]
elif trimmed.lower().startswith("0x"):
trimmed = trimmed[2:]
elif not re.search(r"[a-fA-F]", trimmed):
# Bare decimal strings should not be treated as node ids when labelled "id".
return False
return bool(re.fullmatch(r"[0-9a-fA-F]{1,8}", trimmed))
def _candidate_node_id(mapping: Mapping | None) -> str | None:
"""Extract a canonical node identifier from ``mapping`` when present."""
if mapping is None:
return None
node_keys = (
"fromId",
"from_id",
"from",
"nodeId",
"node_id",
"nodeNum",
"node_num",
"num",
"userId",
"user_id",
)
for key in node_keys:
with contextlib.suppress(Exception):
node_id = serialization._canonical_node_id(mapping.get(key))
if node_id:
return node_id
with contextlib.suppress(Exception):
value = mapping.get("id")
if _is_nodeish_identifier(value):
node_id = serialization._canonical_node_id(value)
if node_id:
return node_id
user_section = _ensure_mapping(mapping.get("user"))
if user_section is not None:
for key in ("userId", "user_id", "num", "nodeNum", "node_num"):
with contextlib.suppress(Exception):
node_id = serialization._canonical_node_id(user_section.get(key))
if node_id:
return node_id
with contextlib.suppress(Exception):
user_id_value = user_section.get("id")
if _is_nodeish_identifier(user_id_value):
node_id = serialization._canonical_node_id(user_id_value)
if node_id:
return node_id
decoded_section = _ensure_mapping(mapping.get("decoded"))
if decoded_section is not None:
node_id = _candidate_node_id(decoded_section)
if node_id:
return node_id
payload_section = _ensure_mapping(mapping.get("payload"))
if payload_section is not None:
node_id = _candidate_node_id(payload_section)
if node_id:
return node_id
for key in ("packet", "meta", "info"):
node_id = _candidate_node_id(_ensure_mapping(mapping.get(key)))
if node_id:
return node_id
for value in mapping.values():
if isinstance(value, (list, tuple)):
for item in value:
node_id = _candidate_node_id(_ensure_mapping(item))
if node_id:
return node_id
else:
node_id = _candidate_node_id(_ensure_mapping(value))
if node_id:
return node_id
return None
def _extract_host_node_id(iface) -> str | None:
"""Return the canonical node identifier for the connected host device.
Searches a sequence of well-known attribute names (``myInfo``,
``my_node_info``, etc.) on ``iface`` for a mapping that contains a
recognisable node identifier, then falls back to the raw ``myNodeNum``
integer attribute.
Parameters:
iface: Live Meshtastic interface object, or any object that exposes
node-identity attributes in one of the expected forms.
Returns:
A canonical ``!xxxxxxxx`` node identifier, or ``None`` when no
identifiable host node information is available.
"""
if iface is None:
return None
def _as_mapping(candidate) -> Mapping | None:
mapping = _ensure_mapping(candidate)
if mapping is not None:
return mapping
if callable(candidate):
with contextlib.suppress(Exception):
return _ensure_mapping(candidate())
return None
candidates: list[Mapping] = []
for attr in ("myInfo", "my_node_info", "myNodeInfo", "my_node", "localNode"):
mapping = _as_mapping(getattr(iface, attr, None))
if mapping is None:
continue
candidates.append(mapping)
nested_info = _ensure_mapping(mapping.get("info"))
if nested_info:
candidates.append(nested_info)
for mapping in candidates:
node_id = _candidate_node_id(mapping)
if node_id:
return node_id
for key in ("myNodeNum", "my_node_num", "myNodeId", "my_node_id"):
node_id = serialization._canonical_node_id(mapping.get(key))
if node_id:
return node_id
node_id = serialization._canonical_node_id(getattr(iface, "myNodeNum", None))
if node_id:
return node_id
return None
@@ -0,0 +1,41 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Inject a canonical ``id`` into Meshtastic nodeinfo packets when missing."""
from __future__ import annotations
from .identity import _candidate_node_id, _ensure_mapping
def _normalise_nodeinfo_packet(packet) -> dict | None:
"""Return a dictionary view of ``packet`` with a guaranteed ``id`` when known."""
mapping = _ensure_mapping(packet)
if mapping is None:
return None
try:
normalised: dict = dict(mapping)
except Exception:
try:
normalised = {key: mapping[key] for key in mapping}
except Exception: # pragma: no cover - both copy strategies failed
return None
node_id = _candidate_node_id(normalised)
if node_id and normalised.get("id") != node_id:
normalised["id"] = node_id
return normalised
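The two copy strategies matter because some mapping-like objects are iterable over their keys but reject `dict(...)` directly. A hedged illustration, with a hypothetical `derive_id` standing in for `_candidate_node_id`:

```python
class KeysOnly:
    """Mapping-ish object: iterable over keys, indexable, but no .keys()."""

    def __init__(self, data):
        self._data = data

    def __iter__(self):
        return iter(self._data)

    def __getitem__(self, key):
        return self._data[key]


def normalise(packet, derive_id):
    """Copy `packet` to a dict and inject a derived 'id' when missing."""
    try:
        out = dict(packet)  # fails for KeysOnly: no .keys(), not key/value pairs
    except Exception:
        out = {key: packet[key] for key in packet}  # key-iteration fallback
    node_id = derive_id(out)
    if node_id and out.get("id") != node_id:
        out["id"] = node_id
    return out


pkt = KeysOnly({"from": 0x1A2B3C4D})
result = normalise(pkt, lambda d: f"!{d['from']:08x}")
assert result == {"from": 0x1A2B3C4D, "id": "!1a2b3c4d"}
```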
@@ -0,0 +1,41 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Runtime monkey-patches applied to the upstream ``meshtastic`` library."""
from __future__ import annotations
from .ble_receive import _patch_meshtastic_ble_receive_loop
from .nodeinfo import (
_build_safe_nodeinfo_callback,
_patch_meshtastic_nodeinfo_handler,
_patch_nodeinfo_handler_class,
_update_nodeinfo_handler_aliases,
)
def apply_all() -> None:
"""Apply every meshtastic monkey-patch in the order required for safety."""
_patch_meshtastic_nodeinfo_handler()
_patch_meshtastic_ble_receive_loop()
__all__ = [
"apply_all",
"_build_safe_nodeinfo_callback",
"_patch_meshtastic_ble_receive_loop",
"_patch_meshtastic_nodeinfo_handler",
"_patch_nodeinfo_handler_class",
"_update_nodeinfo_handler_aliases",
]
@@ -0,0 +1,93 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Patch the upstream Meshtastic BLE receive loop to avoid ``UnboundLocalError``."""
from __future__ import annotations
def _patch_meshtastic_ble_receive_loop() -> None:
"""Prevent ``UnboundLocalError`` crashes in Meshtastic's BLE reader."""
try:
from meshtastic import ble_interface as _ble_interface_module # type: ignore
except Exception: # pragma: no cover - dependency optional in tests
return
ble_class = getattr(_ble_interface_module, "BLEInterface", None)
if ble_class is None: # pragma: no cover - exercised only without BLE class
return
original = getattr(ble_class, "_receiveFromRadioImpl", None)
if not callable(original): # pragma: no cover - upstream API regression guard
return
if getattr(original, "_potato_mesh_safe_wrapper", False):
return
FROMRADIO_UUID = getattr(_ble_interface_module, "FROMRADIO_UUID", None)
BleakDBusError = getattr(_ble_interface_module, "BleakDBusError", ())
BleakError = getattr(_ble_interface_module, "BleakError", ())
logger = getattr(_ble_interface_module, "logger", None)
time = getattr(_ble_interface_module, "time", None)
if ( # pragma: no cover - upstream API regression guard
not FROMRADIO_UUID or logger is None or time is None
):
return
# The receive loop runs on a dedicated thread and only completes against a
# live BLE adapter; the body is hardware-dependent and not unit-testable.
def _safe_receive_from_radio(self): # pragma: no cover - hardware dependent
# type: ignore[override]
while self._want_receive:
if self.should_read:
self.should_read = False
retries: int = 0
while self._want_receive:
if self.client is None:
logger.debug("BLE client is None, shutting down")
self._want_receive = False
continue
payload: bytes = b""
try:
payload = bytes(self.client.read_gatt_char(FROMRADIO_UUID))
except BleakDBusError as exc:
logger.debug("Device disconnected, shutting down %s", exc)
self._want_receive = False
payload = b""
except BleakError as exc:
if "Not connected" in str(exc):
logger.debug("Device disconnected, shutting down %s", exc)
self._want_receive = False
payload = b""
else:
raise ble_class.BLEError("Error reading BLE") from exc
if not payload:
if not self._want_receive:
break
if retries < 5:
time.sleep(0.1)
retries += 1
continue
break
logger.debug("FROMRADIO read: %s", payload.hex())
self._handleFromRadio(payload)
else:
time.sleep(0.01)
_safe_receive_from_radio._potato_mesh_safe_wrapper = True # type: ignore[attr-defined]
ble_class._receiveFromRadioImpl = _safe_receive_from_radio
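The sentinel-attribute guard used above generalises into a small reusable pattern: verify the target is callable, refuse to double-wrap, mark the wrapper. A sketch under illustrative names (this helper is not part of the module):

```python
def patch_once(cls, method_name, make_wrapper, sentinel="_potato_mesh_safe_wrapper"):
    """Replace cls.method_name with make_wrapper(original), at most once."""
    original = getattr(cls, method_name, None)
    if not callable(original):              # upstream API changed; do nothing
        return False
    if getattr(original, sentinel, False):  # already wrapped; stay idempotent
        return False
    wrapper = make_wrapper(original)
    setattr(wrapper, sentinel, True)
    setattr(cls, method_name, wrapper)
    return True


class Radio:
    def read(self):
        return b"payload"


applied_first = patch_once(Radio, "read", lambda orig: lambda self: orig(self) or b"")
applied_again = patch_once(Radio, "read", lambda orig: lambda self: orig(self) or b"")
assert (applied_first, applied_again) == (True, False)
```

Calling it a second time is a no-op, which is exactly what makes the real patch safe to re-run after module reloads.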
@@ -0,0 +1,164 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Runtime patches that harden Meshtastic's nodeinfo handler against missing ``id`` fields."""
from __future__ import annotations
import contextlib
import importlib
import sys
try: # pragma: no cover - dependency optional in tests
import meshtastic # type: ignore
except Exception: # pragma: no cover - dependency optional in tests
meshtastic = None # type: ignore[assignment]
from ..nodeinfo_normalize import _normalise_nodeinfo_packet
def _patch_meshtastic_nodeinfo_handler() -> None:
"""Ensure Meshtastic nodeinfo packets always include an ``id`` field."""
module = sys.modules.get("meshtastic", meshtastic)
if module is None: # pragma: no cover - re-import fallback for cold caches
with contextlib.suppress(Exception):
module = importlib.import_module("meshtastic")
if module is None: # pragma: no cover - exercised only without meshtastic
return
globals()["meshtastic"] = module
original = getattr(module, "_onNodeInfoReceive", None)
if not callable(original): # pragma: no cover - upstream API regression guard
return
mesh_interface_module = getattr(module, "mesh_interface", None)
if mesh_interface_module is None:
with contextlib.suppress(Exception):
mesh_interface_module = importlib.import_module("meshtastic.mesh_interface")
# Replace the module-level handler only once; the sentinel attribute prevents
# re-wrapping if _patch_meshtastic_nodeinfo_handler() is called again after
# the interface module is reloaded or re-imported.
if not getattr(original, "_potato_mesh_safe_wrapper", False):
module._onNodeInfoReceive = _build_safe_nodeinfo_callback(original)
_patch_nodeinfo_handler_class(mesh_interface_module, module)
def _build_safe_nodeinfo_callback(original):
"""Return a wrapper that injects a missing ``id`` before dispatching."""
def _safe_on_node_info_receive(iface, packet): # type: ignore[override]
normalised = _normalise_nodeinfo_packet(packet)
if normalised is not None:
packet = normalised
try:
return original(iface, packet)
except KeyError as exc: # pragma: no cover - defensive only
if exc.args and exc.args[0] == "id":
return None
raise
_safe_on_node_info_receive._potato_mesh_safe_wrapper = True # type: ignore[attr-defined]
return _safe_on_node_info_receive
def _update_nodeinfo_handler_aliases(original, replacement) -> None:
"""Ensure Meshtastic modules reference the patched ``NodeInfoHandler``."""
for module_name, module in list(sys.modules.items()):
if not module_name.startswith("meshtastic"):
continue
existing = getattr(module, "NodeInfoHandler", None)
if existing is original:
setattr(module, "NodeInfoHandler", replacement)
def _patch_nodeinfo_handler_class(
mesh_interface_module, meshtastic_module=None
) -> None:
"""Wrap ``NodeInfoHandler.onReceive`` to normalise packets before callbacks."""
if (
mesh_interface_module is None
): # pragma: no cover - exercised only without meshtastic
return
handler_class = getattr(mesh_interface_module, "NodeInfoHandler", None)
if handler_class is None: # pragma: no cover - upstream API regression guard
return
if getattr(
handler_class, "_potato_mesh_safe_wrapper", False
): # pragma: no cover - re-entry guard
return
original_on_receive = getattr(handler_class, "onReceive", None)
if not callable(
original_on_receive
): # pragma: no cover - upstream API regression guard
return
class _SafeNodeInfoHandler(handler_class): # type: ignore[misc]
"""Subclass that guards against missing node identifiers."""
def onReceive(self, iface, packet): # type: ignore[override]
"""Normalise ``packet`` before dispatching to the parent handler.
Injects a canonical ``id`` field when one can be inferred from the
packet's other fields, then delegates to the original
``NodeInfoHandler.onReceive``. A ``KeyError`` on ``"id"`` is
suppressed because some firmware versions omit the field entirely.
Parameters:
iface: The Meshtastic interface that received the packet.
packet: Raw nodeinfo packet dict, possibly lacking an ``id``
key.
Returns:
The return value of the parent handler, or ``None`` when a
missing ``"id"`` key would otherwise raise.
"""
normalised = _normalise_nodeinfo_packet(packet)
if normalised is not None:
packet = normalised
try:
return super().onReceive(iface, packet)
except KeyError as exc: # pragma: no cover - defensive only
if exc.args and exc.args[0] == "id":
return None
raise
_SafeNodeInfoHandler.__name__ = handler_class.__name__
_SafeNodeInfoHandler.__qualname__ = getattr(
handler_class, "__qualname__", handler_class.__name__
)
_SafeNodeInfoHandler.__module__ = getattr(
handler_class, "__module__", mesh_interface_module.__name__
)
_SafeNodeInfoHandler.__doc__ = getattr(
handler_class, "__doc__", _SafeNodeInfoHandler.__doc__
)
_SafeNodeInfoHandler._potato_mesh_safe_wrapper = True # type: ignore[attr-defined]
setattr(mesh_interface_module, "NodeInfoHandler", _SafeNodeInfoHandler)
if meshtastic_module is None:
meshtastic_module = globals().get("meshtastic")
if meshtastic_module is not None:
existing_top = getattr(meshtastic_module, "NodeInfoHandler", None)
if existing_top is handler_class: # pragma: no cover - top-level re-export
setattr(meshtastic_module, "NodeInfoHandler", _SafeNodeInfoHandler)
_update_nodeinfo_handler_aliases(handler_class, _SafeNodeInfoHandler)
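The alias sweep has a subtle job: modules that did `from meshtastic.mesh_interface import NodeInfoHandler` hold their own reference, so replacing one attribute is not enough. A self-contained sketch of the sweep over `sys.modules` (the module names here are throwaway fakes):

```python
import sys
import types


class Old:  # stands in for the upstream NodeInfoHandler
    pass


class New(Old):  # stands in for the patched subclass
    pass


# Build a throwaway module that aliases the original class, the way a
# `from ... import NodeInfoHandler` would.
mod = types.ModuleType("meshtastic_fake.sub")
mod.NodeInfoHandler = Old
sys.modules["meshtastic_fake.sub"] = mod


def update_aliases(original, replacement, prefix="meshtastic_fake"):
    """Point every loaded module under `prefix` at the replacement class."""
    for name, module in list(sys.modules.items()):
        if not name.startswith(prefix):
            continue
        if getattr(module, "NodeInfoHandler", None) is original:
            module.NodeInfoHandler = replacement


update_aliases(Old, New)
patched = sys.modules["meshtastic_fake.sub"].NodeInfoHandler
del sys.modules["meshtastic_fake.sub"]  # clean up the fake module
assert patched is New
```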
@@ -0,0 +1,292 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""LoRa region/frequency/preset derivation from a Meshtastic config protobuf."""
from __future__ import annotations
import math
import re
from typing import Any
from .. import config
def _has_field(message: Any, field_name: str) -> bool:
"""Return ``True`` when ``message`` advertises ``field_name`` via ``HasField``."""
if message is None:
return False
has_field = getattr(message, "HasField", None)
if callable(has_field):
try:
return bool(has_field(field_name))
except Exception: # pragma: no cover - defensive guard
return False
return hasattr(message, field_name)
def _enum_name_from_field(message: Any, field_name: str, value: Any) -> str | None:
"""Return the enum name for ``value`` using ``message`` descriptors."""
descriptor = getattr(message, "DESCRIPTOR", None)
if descriptor is None:
return None
fields_by_name = getattr(descriptor, "fields_by_name", {})
field_desc = fields_by_name.get(field_name)
if field_desc is None:
return None
enum_type = getattr(field_desc, "enum_type", None)
if enum_type is None:
return None
enum_values = getattr(enum_type, "values_by_number", {})
enum_value = enum_values.get(value)
if enum_value is None:
return None
return getattr(enum_value, "name", None)
def _resolve_lora_message(local_config: Any) -> Any | None:
"""Return the LoRa configuration sub-message from ``local_config``."""
if local_config is None:
return None
if _has_field(local_config, "lora"):
candidate = getattr(local_config, "lora", None)
if candidate is not None:
return candidate
radio_section = getattr(local_config, "radio", None)
if radio_section is not None:
if _has_field(radio_section, "lora"):
return getattr(radio_section, "lora", None)
if hasattr(radio_section, "lora"):
return getattr(radio_section, "lora")
if hasattr(local_config, "lora"):
return getattr(local_config, "lora")
return None
# Maps Meshtastic region enum name to (base_freq_MHz, channel_spacing_MHz).
# Values are derived from the Meshtastic firmware RegionInfo tables.
# Used by _computed_channel_frequency to derive the actual radio frequency
# from the region and channel index.
_REGION_CHANNEL_PARAMS: dict[str, tuple[float, float]] = {
"US": (902.0, 0.25), # 902928 MHz; e.g. ch 52 ≈ 915 MHz at 250 kHz spacing
"EU_433": (433.175, 0.2),
"EU_868": (869.525, 0.5), # actual primary ≈ 869.525 MHz, not 868
"CN": (470.0, 0.2),
"JP": (920.875, 0.5),
"ANZ": (916.0, 0.5),
"KR": (921.9, 0.5),
"TW": (923.0, 0.5),
"RU": (868.9, 0.5),
"IN": (865.0, 0.5),
"NZ_865": (864.0, 0.5),
"TH": (920.0, 0.5),
"LORA_24": (2400.0, 0.5),
"UA_433": (433.175, 0.2),
"UA_868": (868.0, 0.5),
"MY_433": (433.0, 0.2),
"MY_919": (919.0, 0.5),
"SG_923": (923.0, 0.5),
"PH_433": (433.0, 0.2),
"PH_868": (868.0, 0.5),
"PH_915": (915.0, 0.5),
"ANZ_433": (433.0, 0.2),
"KZ_433": (433.0, 0.2),
"KZ_863": (863.125, 0.5),
"NP_865": (865.0, 0.5),
"BR_902": (902.0, 0.25),
# IL (Israel) is absent from meshtastic Python lib 2.7.8 protobufs; the
# enum value is unresolvable at runtime. Operators on IL firmware should
# set the FREQUENCY environment variable to override.
}
def _computed_channel_frequency(
enum_name: str | None,
channel_num: int | None,
) -> int | None:
"""Compute the floor MHz frequency for a known region and channel index.
Looks up *enum_name* in :data:`_REGION_CHANNEL_PARAMS` and returns
``floor(base_freq + channel_num * spacing)``. Returns ``None`` when the
region is not in the table. A missing or negative *channel_num* is
treated as 0 so the base frequency is always usable.
Args:
enum_name: Region enum name as returned by
:func:`_enum_name_from_field`, e.g. ``"EU_868"`` or ``"US"``.
channel_num: Zero-based channel index from the device LoRa config.
Returns:
Floored MHz as :class:`int`, or ``None`` if the region is unknown.
"""
if enum_name is None:
return None
params = _REGION_CHANNEL_PARAMS.get(enum_name)
if params is None:
return None
base, spacing = params
idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
return math.floor(base + idx * spacing)
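As a quick sanity check of the table, the US comment above can be reproduced directly: channel 52 at 250 kHz spacing above a 902 MHz base lands on 915 MHz. A trimmed, self-contained copy of the computation (only two table rows reproduced):

```python
import math

REGION_CHANNEL_PARAMS = {"US": (902.0, 0.25), "EU_433": (433.175, 0.2)}


def computed_channel_frequency(enum_name, channel_num):
    """Floor-MHz frequency for a known region and channel index, else None."""
    params = REGION_CHANNEL_PARAMS.get(enum_name)
    if params is None:
        return None  # e.g. "IL", absent from the table
    base, spacing = params
    # Missing or negative channel index is treated as 0.
    idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
    return math.floor(base + idx * spacing)


assert computed_channel_frequency("US", 52) == 915
assert computed_channel_frequency("EU_433", None) == 433
assert computed_channel_frequency("IL", 3) is None
```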
def _region_frequency(lora_message: Any) -> int | float | str | None:
"""Derive the LoRa region frequency in MHz or the region label from ``lora_message``.
Frequency sources are tried in priority order:
1. ``override_frequency > 0`` — explicit radio override, floored to MHz.
2. :data:`_REGION_CHANNEL_PARAMS` lookup + ``channel_num`` — actual
band-plan frequency derived from the device's region and channel index,
floored to MHz.
3. First digit token ≥ 100 parsed from the region enum name string.
4. Last digit token in the enum name when no token reaches 100.
5. Full enum name string, raw integer ≥ 100, or raw string as a label.
Args:
lora_message: A LoRa config protobuf message or compatible object.
Returns:
An integer MHz frequency, a fallback string label, or ``None``.
"""
if lora_message is None:
return None
# Step 1 — explicit radio override
override_frequency = getattr(lora_message, "override_frequency", None)
if override_frequency is not None:
if isinstance(override_frequency, (int, float)):
if override_frequency > 0:
return math.floor(override_frequency)
elif override_frequency:
return override_frequency
region_value = getattr(lora_message, "region", None)
if region_value is None:
return None
enum_name = _enum_name_from_field(lora_message, "region", region_value)
# Step 2 — lookup table + channel offset (actual band-plan frequency)
if enum_name:
channel_num = getattr(lora_message, "channel_num", None)
computed = _computed_channel_frequency(enum_name, channel_num)
if computed is not None:
return computed
# Steps 3–5 — parse digits from enum name (fallback for unknown regions)
if enum_name:
digits = re.findall(r"\d+", enum_name)
for token in digits:
try:
freq = int(token)
except ValueError: # pragma: no cover - regex guarantees digits
continue
if freq >= 100:
return freq
for token in reversed(digits):
try:
return int(token)
except ValueError: # pragma: no cover - defensive only
continue
return enum_name
if isinstance(region_value, int) and region_value >= 100:
return region_value
if isinstance(region_value, str) and region_value:
return region_value
return None
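For regions missing from the lookup table, steps 3–4 recover a plausible frequency from the enum name itself. The token logic, isolated:

```python
import re


def freq_from_enum_name(enum_name):
    """First digit token >= 100, else the last token, else the name itself."""
    digits = re.findall(r"\d+", enum_name)
    for token in digits:                 # first token >= 100 wins
        if int(token) >= 100:
            return int(token)
    for token in reversed(digits):       # otherwise last token, however small
        return int(token)
    return enum_name                     # no digits at all: return the label


assert freq_from_enum_name("EU_868") == 868
assert freq_from_enum_name("LORA_24") == 24
assert freq_from_enum_name("US") == "US"
```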
def _camelcase_enum_name(name: str | None) -> str | None:
"""Convert ``name`` from ``SCREAMING_SNAKE`` to ``CamelCase``."""
if not name:
return None
parts = re.split(r"[^0-9A-Za-z]+", name.strip())
camel_parts = [part.capitalize() for part in parts if part]
if not camel_parts:
return None
return "".join(camel_parts)
def _modem_preset(lora_message: Any) -> str | None:
"""Return the CamelCase modem preset configured on ``lora_message``."""
if lora_message is None:
return None
descriptor = getattr(lora_message, "DESCRIPTOR", None)
fields_by_name = getattr(descriptor, "fields_by_name", {}) if descriptor else {}
if "modem_preset" in fields_by_name:
preset_field = "modem_preset"
elif "preset" in fields_by_name:
preset_field = "preset"
elif hasattr(lora_message, "modem_preset"):
preset_field = "modem_preset"
elif hasattr(lora_message, "preset"):
preset_field = "preset"
else:
return None
preset_value = getattr(lora_message, preset_field, None)
if preset_value is None:
return None
enum_name = _enum_name_from_field(lora_message, preset_field, preset_value)
if isinstance(enum_name, str) and enum_name:
return _camelcase_enum_name(enum_name)
if isinstance(preset_value, str) and preset_value:
return _camelcase_enum_name(preset_value)
return None
def _ensure_radio_metadata(iface: Any) -> None:
"""Populate cached LoRa metadata by inspecting ``iface`` when available."""
if iface is None:
return
try:
wait_for_config = getattr(iface, "waitForConfig", None)
if callable(wait_for_config):
wait_for_config()
except Exception: # pragma: no cover - hardware dependent guard
pass
local_node = getattr(iface, "localNode", None)
local_config = getattr(local_node, "localConfig", None) if local_node else None
lora_message = _resolve_lora_message(local_config)
if lora_message is None:
return
frequency = _region_frequency(lora_message)
preset = _modem_preset(lora_message)
updated = False
if frequency is not None and getattr(config, "LORA_FREQ", None) is None:
config.LORA_FREQ = frequency
updated = True
if preset is not None and getattr(config, "MODEM_PRESET", None) is None:
config.MODEM_PRESET = preset
updated = True
if updated:
config._debug_log(
"Captured LoRa radio metadata",
context="interfaces.ensure_radio_metadata",
severity="info",
always=True,
lora_freq=frequency,
modem_preset=preset,
)
@@ -0,0 +1,84 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Network target parsing helpers for Meshtastic interfaces."""
from __future__ import annotations
import ipaddress
import urllib.parse
from ..connection import DEFAULT_TCP_PORT
_DEFAULT_TCP_TARGET = "http://127.0.0.1"
def _parse_network_target(value: str) -> tuple[str, int] | None:
"""Return ``(host, port)`` when ``value`` is a numeric IP address string.
Only literal IPv4 or IPv6 addresses are accepted, optionally paired with a
port or scheme. Callers that start from hostnames should resolve them to an
address before invoking this helper.
Parameters:
value: Numeric IP literal or URL describing the TCP interface.
Returns:
A ``(host, port)`` tuple or ``None`` when parsing fails.
"""
if not value:
return None
value = value.strip()
if not value:
return None
def _validated_result(host: str | None, port: int | None) -> tuple[str, int] | None:
if not host:
return None
try:
ipaddress.ip_address(host)
except ValueError:
return None
return host, port or DEFAULT_TCP_PORT
parsed_values = []
if "://" in value:
parsed_values.append(urllib.parse.urlparse(value, scheme="tcp"))
parsed_values.append(urllib.parse.urlparse(f"//{value}", scheme="tcp"))
for parsed in parsed_values:
try:
port = parsed.port
except ValueError:
port = None
result = _validated_result(parsed.hostname, port)
if result:
return result
# For bare "host:port" strings that urlparse may misparse, try a manual
# partition. The `startswith("[")` guard excludes IPv6 bracket notation
# (e.g. "[::1]:8080") because those already succeed via urlparse above.
if value.count(":") == 1 and not value.startswith("["):
host, _, port_text = value.partition(":")
try:
port = int(port_text) if port_text else None
except ValueError:
port = None
result = _validated_result(host, port)
if result: # pragma: no cover - urlparse handles all currently-known forms
return result
return _validated_result(value, None)
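The `//` prefix in the second `urlparse` call above is what makes bare `host:port` strings parse as a network location instead of a path, so `.hostname` and `.port` populate. A quick demonstration:

```python
import ipaddress
import urllib.parse

bare = "192.168.1.5:4403"

# Without the // prefix there is no netloc, so hostname comes back None.
direct = urllib.parse.urlparse(bare, scheme="tcp")

# Prefixing // forces the string to be read as a network location.
fixed = urllib.parse.urlparse(f"//{bare}", scheme="tcp")
host, port = fixed.hostname, fixed.port

ipaddress.ip_address(host)  # raises ValueError for non-literal hostnames
assert direct.hostname is None
assert (host, port) == ("192.168.1.5", 4403)
```

The same path also handles IPv6 bracket notation: `urlparse("//[::1]:4403")` yields hostname `::1` and port `4403`.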
@@ -0,0 +1,170 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MeshCore protocol implementation.
This package defines :class:`MeshcoreProvider`, which satisfies the
:class:`~data.mesh_ingestor.mesh_protocol.MeshProtocol` interface for MeshCore
nodes connected via serial port, BLE, or TCP/IP.
The protocol backend runs MeshCore's ``asyncio`` event loop in a background
daemon thread so that incoming events are dispatched without blocking the
synchronous daemon loop. Received contacts, channel messages, and direct
messages are forwarded to the shared HTTP ingest queue via the same
:mod:`~data.mesh_ingestor.handlers` helpers used by the Meshtastic protocol.
Connection type is detected automatically from the target string:
* **BLE** — MAC address (``AA:BB:CC:DD:EE:FF``) or UUID (macOS format).
* **TCP** — ``host:port`` or ``[ipv6]:port`` (accepts hostnames).
* **Serial** — any other non-empty string (e.g. ``/dev/ttyUSB0``).
* **Auto** — ``None`` or empty: tries serial candidates from
:func:`~data.mesh_ingestor.connection.default_serial_targets`.
Node identities are derived from the first four bytes (eight hex characters)
of each contact's 32-byte public key, formatted as ``!xxxxxxxx`` to match
the canonical node-ID schema used across the system. Ingested
``user.shortName`` is the first two bytes (four hex characters) of the
node ID, not the advertised name.
"""
from __future__ import annotations
# Apply upstream-library patches before any ``MeshCore`` instance is built,
# otherwise the first malformed advertisement dies inside a detached asyncio
# task before our handler can observe it. See
# :mod:`data.mesh_ingestor.protocols._meshcore_patches` for the specific
# upstream bugs covered.
#
# This mutates the upstream class at import time. The blast radius is
# narrow because ``protocols/__init__.py`` exposes this package only through
# a lazy ``__getattr__`` and the daemon resolves it only when
# ``PROTOCOL=meshcore`` is active. Any future diagnostic CLI that imports
# this package will inherit the shim.
from .. import _meshcore_patches as _meshcore_patches
_meshcore_patches.apply()
# Re-expose meshcore-library symbols so existing test imports (and callers
# that prefer a single import surface) keep working unchanged. Submodules
# resolve these names at call time via ``sys.modules`` so monkey-patches
# applied to the package surface during tests propagate.
from meshcore import ( # noqa: E402 - patches must run before this import.
BLEConnection,
EventType,
MeshCore,
SerialConnection,
TCPConnection,
)
# Re-expose the ``data.mesh_ingestor`` modules that tests monkeypatch through
# the meshcore namespace (``_mod.config._debug_log``, ``_mod._ingestors``,
# ``_mod._queue``). Keeping these attributes preserves the call surface of
# the pre-split ``meshcore.py`` module.
from ... import config as config # noqa: E402
from ... import ingestors as _ingestors # noqa: E402
from ... import queue as _queue # noqa: E402
from ...connection import default_serial_targets # noqa: E402
from ._constants import ( # noqa: E402 - keep grouped with sibling re-exports.
_CHANNEL_PROBE_FALLBACK_MAX,
_CONNECT_TIMEOUT_SECS,
_DEFAULT_BAUDRATE,
_MENTION_RE,
_MESHCORE_ADV_TYPE_ROLE,
_MESHCORE_ID_BITS,
_MESHCORE_ID_MASK,
)
from .channels import _ensure_channel_names # noqa: E402
from .connection import ( # noqa: E402
_log_unhandled_loop_exception,
_make_connection,
)
from .debug_log import ( # noqa: E402
_IGNORED_MESSAGE_LOCK,
_IGNORED_MESSAGE_LOG_PATH,
_record_meshcore_message,
_to_json_safe,
)
from .decode import ( # noqa: E402
_contact_to_node_dict,
_derive_modem_preset,
_self_info_to_node_dict,
)
from .handlers import ( # noqa: E402
_make_event_handlers,
_process_contact_update,
_process_contacts,
_process_self_info,
)
from .identity import ( # noqa: E402
_derive_synthetic_node_id,
_meshcore_adv_type_to_role,
_meshcore_node_id,
_meshcore_short_name,
_pubkey_prefix_to_node_id,
)
from .interface import ClosedBeforeConnectedError, _MeshcoreInterface # noqa: E402
from .messages import ( # noqa: E402
_derive_message_id,
_extract_mention_names,
_parse_sender_name,
_synthetic_node_dict,
)
from .position import _store_meshcore_position # noqa: E402
from .provider import MeshcoreProvider # noqa: E402
from .runner import _run_meshcore # noqa: E402
__all__ = [
"BLEConnection",
"ClosedBeforeConnectedError",
"EventType",
"MeshCore",
"MeshcoreProvider",
"SerialConnection",
"TCPConnection",
"_CHANNEL_PROBE_FALLBACK_MAX",
"_CONNECT_TIMEOUT_SECS",
"_DEFAULT_BAUDRATE",
"_IGNORED_MESSAGE_LOCK",
"_IGNORED_MESSAGE_LOG_PATH",
"_MENTION_RE",
"_MESHCORE_ADV_TYPE_ROLE",
"_MESHCORE_ID_BITS",
"_MESHCORE_ID_MASK",
"_MeshcoreInterface",
"_contact_to_node_dict",
"_derive_message_id",
"_derive_modem_preset",
"_derive_synthetic_node_id",
"_ensure_channel_names",
"_extract_mention_names",
"_log_unhandled_loop_exception",
"_make_connection",
"_make_event_handlers",
"_meshcore_adv_type_to_role",
"_meshcore_node_id",
"_meshcore_short_name",
"_parse_sender_name",
"_process_contact_update",
"_process_contacts",
"_process_self_info",
"_pubkey_prefix_to_node_id",
"_record_meshcore_message",
"_run_meshcore",
"_self_info_to_node_dict",
"_store_meshcore_position",
"_synthetic_node_dict",
"_to_json_safe",
]
@@ -0,0 +1,56 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Constants shared across MeshCore submodules.
Hoisted out of the original monolithic ``meshcore.py`` so that submodules can
import only what they need without picking up unrelated side-effects.
"""
from __future__ import annotations
import re
_CONNECT_TIMEOUT_SECS: float = 30.0
"""Seconds to wait for the MeshCore node to respond to the appstart handshake."""
_DEFAULT_BAUDRATE: int = 115200
"""Default baud rate for MeshCore serial connections."""
# MeshCore ``ADV_TYPE_*`` (``AdvertDataHelpers.h``) → ``user.role`` for POST /api/nodes.
_MESHCORE_ADV_TYPE_ROLE: dict[int, str] = {
1: "COMPANION", # ADV_TYPE_CHAT
2: "REPEATER", # ADV_TYPE_REPEATER
3: "ROOM_SERVER", # ADV_TYPE_ROOM_SERVER
4: "SENSOR", # ADV_TYPE_SENSOR
}
_MESHCORE_ID_BITS = 53
"""Width of the synthetic MeshCore message ID, in bits.
53 bits keeps the value within :js:data:`Number.MAX_SAFE_INTEGER`
(``2**53 - 1``) so the JSON ID round-trips through the JavaScript frontend
without precision loss, while giving roughly :math:`2^{26.5}` (~95 million)
distinct messages of birthday-collision headroom.
"""
_MESHCORE_ID_MASK = (1 << _MESHCORE_ID_BITS) - 1
"""Bitmask applied to the SHA-256 prefix to clamp the id to 53 bits."""
# Fallback upper bound for channel index probing when the device query fails
# or returns an older firmware version that omits ``max_channels``.
_CHANNEL_PROBE_FALLBACK_MAX = 32
# Matches @[Name] mention patterns in MeshCore message bodies.
_MENTION_RE = re.compile(r"@\[([^\]]+)\]")
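The mention pattern captures bracketed names only, so bare `@name` tokens are ignored:

```python
import re

MENTION_RE = re.compile(r"@\[([^\]]+)\]")

body = "ping @[Alice] and @[Bob K], but not a bare @name"
mentions = MENTION_RE.findall(body)
assert mentions == ["Alice", "Bob K"]
```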
@@ -0,0 +1,86 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Channel-name probing for MeshCore devices."""
from __future__ import annotations
import sys
from ... import config
from ._constants import _CHANNEL_PROBE_FALLBACK_MAX
async def _ensure_channel_names(mc: object) -> None:
"""Probe channel names from the device and populate the channel cache.
Queries the device for its authoritative channel count via
:meth:`~meshcore.MeshCore.commands.send_device_query` (``max_channels``
field of the ``DEVICE_INFO`` response), then iterates every index from 0
through ``max_channels - 1``, requesting each via
:meth:`~meshcore.MeshCore.commands.get_channel`. The responses arrive as
:attr:`~meshcore.EventType.CHANNEL_INFO` events and are registered into
the shared channel cache via :func:`~data.mesh_ingestor.channels.register_channel`.
Falls back to a probe bound of :data:`_CHANNEL_PROBE_FALLBACK_MAX` when the
device query fails or returns an older firmware that omits ``max_channels``.
Probes every index without early-stopping on ``ERROR`` responses, so sparse
configurations (e.g. slots 0 and 5 configured, slots 1–4 empty) are handled
correctly. Only a hard exception (connection loss, timeout) aborts the loop.
Parameters:
mc: Connected :class:`~meshcore.MeshCore` instance.
"""
# Deferred — see _make_event_handlers for the circular-dependency note.
from ... import channels as _channels
# Look up ``EventType`` via the parent package so that test fakes installed
# via ``monkeypatch.setattr(mod, "EventType", ...)`` apply at call time.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
EventType = pkg.EventType
max_idx = _CHANNEL_PROBE_FALLBACK_MAX
try:
dev_evt = await mc.commands.send_device_query()
if dev_evt.type == EventType.DEVICE_INFO:
reported = (dev_evt.payload or {}).get("max_channels")
if isinstance(reported, int) and reported > 0:
max_idx = reported
except Exception as exc:
config._debug_log(
"Device query failed; using fallback channel probe bound",
context="meshcore.channels",
severity="warning",
fallback_max=max_idx,
error=str(exc),
)
for idx in range(max_idx):
try:
evt = await mc.commands.get_channel(idx)
if evt.type == EventType.CHANNEL_INFO:
name = (evt.payload or {}).get("channel_name", "")
if name:
_channels.register_channel(idx, name)
# ERROR response — unconfigured slot; continue to next index
except Exception as exc:
config._debug_log(
"Channel probe failed",
context="meshcore.channels",
severity="warning",
channel_idx=idx,
error=str(exc),
)
break
@@ -0,0 +1,95 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Connection routing and asyncio exception logging for MeshCore."""
from __future__ import annotations
import asyncio
import sys
from ... import config
from ...connection import parse_ble_target, parse_tcp_target
def _make_connection(target: str, baudrate: int) -> object:
"""Create the appropriate MeshCore connection object for *target*.
Routes to the correct ``meshcore`` connection class based on the target
string format:
* BLE MAC / UUID → :class:`meshcore.BLEConnection`
* ``host:port`` / ``[ipv6]:port`` → :class:`meshcore.TCPConnection`
* anything else → :class:`meshcore.SerialConnection`
Parameters:
target: Resolved, non-empty connection target.
baudrate: Baud rate for serial connections (ignored for BLE/TCP).
Returns:
An unconnected ``meshcore`` connection object.
"""
# Look up connection classes via the parent package so that test fakes
# installed via ``monkeypatch.setattr(mod, "BLEConnection", ...)`` apply.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
ble_addr = parse_ble_target(target)
if ble_addr:
return pkg.BLEConnection(address=ble_addr)
tcp_target = parse_tcp_target(target)
if tcp_target:
host, port = tcp_target
return pkg.TCPConnection(host, port)
return pkg.SerialConnection(target, baudrate)
def _log_unhandled_loop_exception(
loop: asyncio.AbstractEventLoop, context: dict
) -> None:
"""Route asyncio's "unhandled task exception" warnings through our logger.
The upstream ``meshcore`` library spawns detached
``asyncio.create_task`` tasks for every inbound radio frame. When one
of those tasks raises and nobody awaits the future, asyncio's default
handler writes ``Task exception was never retrieved`` to stderr. That
bypasses our structured log pipeline and clutters container logs.
This handler preserves the same information under
``context=asyncio.unhandled`` so operators have a single place to grep.
Parameters:
loop: Event loop that surfaced the exception (unused but required
by the asyncio handler signature).
context: Asyncio exception-context dictionary. Fields we care
about: ``message`` (human summary) and ``exception`` (the raw
exception object, when available).
"""
del loop
exception = context.get("exception")
task = context.get("task")
task_name = None
if task is not None:
# Prefer the friendly ``get_name()``; fall back to ``repr`` for any
# future Task-like object that does not implement it.
get_name = getattr(task, "get_name", None)
task_name = get_name() if callable(get_name) else repr(task)
config._debug_log(
context.get("message") or "Unhandled asyncio task exception",
context="asyncio.unhandled",
severity="error",
always=True,
error_class=type(exception).__name__ if exception else None,
error_message=str(exception) if exception else None,
task=task_name,
)
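The routing above can be demonstrated end to end with a throwaway handler: install it via ``loop.set_exception_handler``, spawn a detached task that raises, and drop the last reference without awaiting it. The `records` list stands in for the structured logger; note that the unretrieved-exception report fires from ``Task.__del__``, so ``del`` plus ``gc.collect()`` is used to make the timing deterministic on CPython:

```python
import asyncio
import gc

records: list[dict] = []

def handler(loop, context) -> None:
    # Same shape as _log_unhandled_loop_exception: pull the summary message
    # and the raw exception out of asyncio's exception-context dict.
    exc = context.get("exception")
    records.append(
        {
            "message": context.get("message") or "Unhandled asyncio task exception",
            "error_class": type(exc).__name__ if exc is not None else None,
        }
    )

async def boom() -> None:
    raise RuntimeError("radio frame parse failed")

async def main() -> None:
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(handler)
    task = asyncio.create_task(boom())  # detached, like the upstream library
    await asyncio.sleep(0)              # let boom() run and raise
    del task                            # drop the last reference unawaited
    gc.collect()                        # Task.__del__ reports the exception

asyncio.run(main())
```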
@@ -0,0 +1,90 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""``DEBUG=1`` capture of unhandled MeshCore frames to ``ignored-meshcore.txt``."""
from __future__ import annotations
import base64
import json
import sys
import threading
from datetime import datetime, timezone
from pathlib import Path
from ... import config
# This file lives one level deeper than the pre-split ``meshcore.py``
# (``data/mesh_ingestor/protocols/meshcore/debug_log.py`` vs.
# ``data/mesh_ingestor/protocols/meshcore.py``), so ``parents[4]`` here
# (meshcore/ → protocols/ → mesh_ingestor/ → data/ → repo root) lands at
# the same repo-root destination as ``parents[3]`` did in the original
# module. The on-disk log path is therefore unchanged after the split.
_IGNORED_MESSAGE_LOG_PATH = Path(__file__).resolve().parents[4] / "ignored-meshcore.txt"
"""Filesystem path that stores raw MeshCore messages when ``DEBUG=1``."""
_IGNORED_MESSAGE_LOCK = threading.Lock()
"""Lock guarding writes to :data:`_IGNORED_MESSAGE_LOG_PATH`."""
def _to_json_safe(value: object) -> object:
"""Recursively convert *value* to a JSON-serialisable form.
Handles the common types present in mesh protocol messages: dicts, lists,
bytes (base64-encoded), and primitives. Anything else is coerced via
``str()``.
"""
if isinstance(value, dict):
return {str(k): _to_json_safe(v) for k, v in value.items()}
if isinstance(value, (list, tuple, set)):
return [_to_json_safe(v) for v in value]
if isinstance(value, bytes):
return base64.b64encode(value).decode("ascii")
if isinstance(value, (str, int, float, bool)) or value is None:
return value
return str(value)
def _record_meshcore_message(message: object, *, source: str) -> None:
"""Persist a MeshCore message to :data:`ignored-meshcore.txt` when ``DEBUG=1``.
When ``DEBUG`` is not set the function returns immediately without any
I/O so that production deployments are not burdened by file writes.
Parameters:
message: The raw message object received from the MeshCore node.
source: A short label describing where the message originated (e.g.
a serial port path or BLE address).
"""
if not config.DEBUG:
return
# Resolve path/lock via the parent package so test monkey-patches at
# ``meshcore._IGNORED_MESSAGE_LOG_PATH`` (and ``_IGNORED_MESSAGE_LOCK``)
# take effect at call time.
pkg = sys.modules.get("data.mesh_ingestor.protocols.meshcore")
log_path = getattr(pkg, "_IGNORED_MESSAGE_LOG_PATH", _IGNORED_MESSAGE_LOG_PATH)
log_lock = getattr(pkg, "_IGNORED_MESSAGE_LOCK", _IGNORED_MESSAGE_LOCK)
timestamp = datetime.now(timezone.utc).isoformat()
entry = {
"message": _to_json_safe(message),
"source": source,
"timestamp": timestamp,
}
payload = json.dumps(entry, ensure_ascii=False, sort_keys=True)
with log_lock:
log_path.parent.mkdir(parents=True, exist_ok=True)
with log_path.open("a", encoding="utf-8") as fh:
fh.write(f"{payload}\n")
@@ -0,0 +1,110 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Convert MeshCore contact / self-info payloads into ``POST /api/nodes`` dicts."""
from __future__ import annotations
import time
from .identity import (
_meshcore_adv_type_to_role,
_meshcore_node_id,
_meshcore_short_name,
)
def _contact_to_node_dict(contact: dict) -> dict:
"""Convert a MeshCore contact dict to a Meshtastic-ish node dict.
Parameters:
contact: Contact dict from the MeshCore library. Expected keys
include ``public_key``, ``type`` (``ADV_TYPE_*``), ``adv_name``,
``last_advert``, ``adv_lat``, and ``adv_lon``.
Returns:
Node dict compatible with the ``POST /api/nodes`` payload format.
"""
pub_key = contact.get("public_key", "")
node_id = _meshcore_node_id(pub_key)
name = (contact.get("adv_name") or "").strip()
role = _meshcore_adv_type_to_role(contact.get("type"))
node: dict = {
"lastHeard": contact.get("last_advert"),
"protocol": "meshcore",
"user": {
"longName": name,
"shortName": _meshcore_short_name(node_id),
"publicKey": pub_key,
**({"role": role} if role is not None else {}),
},
}
lat = contact.get("adv_lat")
lon = contact.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
pos: dict = {"latitude": lat, "longitude": lon}
last_advert = contact.get("last_advert")
if last_advert is not None:
pos["time"] = last_advert
node["position"] = pos
return node
def _derive_modem_preset(sf: object, bw: object, cr: object) -> str | None:
"""Return a compact radio-parameter string from spreading factor, bandwidth, and coding rate.
Parameters:
sf: Spreading factor (int, e.g. ``12``).
bw: Bandwidth in kHz (int or float, e.g. ``125.0``).
cr: Coding rate denominator (int, e.g. ``5`` meaning 4/5).
Returns:
A string such as ``"SF12/BW125/CR5"``, or ``None`` when any parameter
is absent or zero (meaning the radio config was not reported).
"""
if not sf or not bw or not cr:
return None
return f"SF{int(sf)}/BW{int(bw)}/CR{int(cr)}"
def _self_info_to_node_dict(self_info: dict) -> dict:
"""Convert a MeshCore ``SELF_INFO`` payload to a Meshtastic-ish node dict.
Parameters:
self_info: Payload dict from the ``SELF_INFO`` event. Expected keys
include ``name``, ``public_key``, ``adv_type`` (``ADV_TYPE_*``),
``adv_lat``, and ``adv_lon``.
Returns:
Node dict compatible with the ``POST /api/nodes`` payload format.
"""
name = (self_info.get("name") or "").strip()
pub_key = self_info.get("public_key", "")
node_id = _meshcore_node_id(pub_key)
role = _meshcore_adv_type_to_role(self_info.get("adv_type"))
node: dict = {
"lastHeard": int(time.time()),
"protocol": "meshcore",
"user": {
"longName": name,
"shortName": _meshcore_short_name(node_id),
"publicKey": pub_key,
**({"role": role} if role is not None else {}),
},
}
lat = self_info.get("adv_lat")
lon = self_info.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
node["position"] = {"latitude": lat, "longitude": lon, "time": int(time.time())}
return node
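Two small pieces of the converters above are easy to verify standalone: the radio-preset string and the position guard. The helper names are illustrative; the `(lat or lon)` guard presumably exists to skip the `(0, 0)` "no fix" advert while still accepting a single zero coordinate (equator or prime meridian):

```python
def derive_modem_preset(sf, bw, cr):
    """Mirror of _derive_modem_preset: None when any parameter is absent/zero."""
    if not sf or not bw or not cr:
        return None
    return f"SF{int(sf)}/BW{int(bw)}/CR{int(cr)}"

def wants_position(lat, lon):
    """Mirror of the position guard used by both converters above."""
    return lat is not None and lon is not None and bool(lat or lon)

preset = derive_modem_preset(12, 125.0, 5)
```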
@@ -0,0 +1,324 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Event-handler closures for MeshCore protocol messages."""
from __future__ import annotations
import time
from ... import config, ingestors as _ingestors
from .decode import _contact_to_node_dict, _derive_modem_preset, _self_info_to_node_dict
from .identity import _derive_synthetic_node_id, _meshcore_node_id
from .interface import _MeshcoreInterface
from .messages import (
_derive_message_id,
_extract_mention_names,
_parse_sender_name,
_synthetic_node_dict,
)
from .position import _store_meshcore_position
def _process_self_info(
payload: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
"""Apply a ``SELF_INFO`` payload: set host_node_id, upsert the host node,
and capture LoRa radio metadata into the shared config cache.
Parameters:
payload: Event payload dict containing at minimum ``public_key`` and
optionally ``name``, ``adv_lat``, ``adv_lon``, ``radio_freq``,
``radio_bw``, ``radio_sf``, ``radio_cr``.
iface: Active interface whose :attr:`host_node_id` will be updated.
handlers: Module reference for :func:`~data.mesh_ingestor.handlers`
functions (passed to avoid circular-import issues).
"""
# Cache the payload so node_snapshot_items / self_node_item can use it later.
iface._self_info_payload = payload
pub_key = payload.get("public_key", "")
node_id = _meshcore_node_id(pub_key)
# Capture radio metadata BEFORE upserting the node so that
# _apply_radio_metadata_to_nodes finds populated values on the very first
# SELF_INFO. Never overwrite a previously cached value.
radio_freq = payload.get("radio_freq")
if radio_freq is not None and getattr(config, "LORA_FREQ", None) is None:
config.LORA_FREQ = radio_freq
modem_preset = _derive_modem_preset(
payload.get("radio_sf"), payload.get("radio_bw"), payload.get("radio_cr")
)
if modem_preset is not None and getattr(config, "MODEM_PRESET", None) is None:
config.MODEM_PRESET = modem_preset
if node_id:
iface.host_node_id = node_id
handlers.register_host_node_id(node_id)
# Queue the ingestor registration BEFORE any node upserts so the web
# backend assigns the correct protocol to all subsequent records.
# Radio metadata (LORA_FREQ, MODEM_PRESET) is captured just above and
# will be included in the heartbeat payload by queue_ingestor_heartbeat.
_ingestors.queue_ingestor_heartbeat(force=True, node_id=node_id)
handlers.upsert_node(node_id, _self_info_to_node_dict(payload))
lat = payload.get("adv_lat")
lon = payload.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
_store_meshcore_position(
node_id, lat, lon, int(time.time()), handlers.host_node_id()
)
config._debug_log(
"MeshCore radio metadata captured",
context="meshcore.self_info.radio",
severity="info",
lora_freq=radio_freq,
modem_preset=modem_preset,
)
handlers._mark_packet_seen()
config._debug_log(
"MeshCore self-info received",
context="meshcore.self_info",
node_id=node_id,
name=payload.get("name"),
)
def _process_contacts(
contacts: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
"""Apply a bulk ``CONTACTS`` payload: update the local snapshot and upsert nodes.
Parameters:
contacts: Mapping of full ``public_key`` hex strings to contact dicts.
iface: Active interface whose contact snapshot will be updated.
handlers: Module reference for :func:`~data.mesh_ingestor.handlers`.
"""
for pub_key, contact in contacts.items():
node_id = _meshcore_node_id(pub_key)
if node_id is None:
continue
iface._update_contact(contact)
handlers.upsert_node(node_id, _contact_to_node_dict(contact))
lat = contact.get("adv_lat")
lon = contact.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
_store_meshcore_position(
node_id,
lat,
lon,
contact.get("last_advert"),
handlers.host_node_id(),
)
handlers._mark_packet_seen()
def _process_contact_update(
contact: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
"""Apply a single ``NEW_CONTACT`` or ``NEXT_CONTACT`` event.
Parameters:
contact: Contact dict containing at minimum ``public_key``.
iface: Active interface whose contact snapshot will be updated.
handlers: Module reference for :func:`~data.mesh_ingestor.handlers`.
"""
pub_key = contact.get("public_key", "")
node_id = _meshcore_node_id(pub_key)
if node_id is None:
return
iface._update_contact(contact)
handlers.upsert_node(node_id, _contact_to_node_dict(contact))
lat = contact.get("adv_lat")
lon = contact.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
_store_meshcore_position(
node_id,
lat,
lon,
contact.get("last_advert"),
handlers.host_node_id(),
)
handlers._mark_packet_seen()
config._debug_log(
"MeshCore contact updated",
context="meshcore.contact",
node_id=node_id,
name=contact.get("adv_name"),
)
def _make_event_handlers(iface: _MeshcoreInterface, target: str | None) -> dict:
"""Build async callbacks for each relevant MeshCore event type.
All callbacks are closures over *iface* and *target* so they can update
connection state and forward data to the ingest queue without global state.
Parameters:
iface: The active :class:`_MeshcoreInterface` instance.
target: Human-readable connection target for log messages.
Returns:
Mapping of ``EventType`` member name → async callback coroutine.
"""
# Deferred imports to avoid a circular dependency: meshcore is imported by
# protocols/__init__.py which is imported by the top-level mesh_ingestor
# package, while handlers.py and channels.py import from that same package.
from ... import channels as _channels
from ... import handlers as _handlers
async def on_channel_info(evt) -> None:
payload = evt.payload or {}
idx = payload.get("channel_idx")
name = payload.get("channel_name", "")
if idx is not None and name:
_channels.register_channel(idx, name)
async def on_self_info(evt) -> None:
_process_self_info(evt.payload or {}, iface, _handlers)
async def on_contacts(evt) -> None:
_process_contacts(evt.payload or {}, iface, _handlers)
async def on_contact_update(evt) -> None:
_process_contact_update(evt.payload or {}, iface, _handlers)
async def on_channel_msg(evt) -> None:
payload = evt.payload or {}
sender_ts = payload.get("sender_timestamp")
text = payload.get("text")
if sender_ts is None or not text:
return
rx_time = int(time.time())
channel_idx = payload.get("channel_idx", 0)
# MeshCore channel messages carry no sender identifier in the event
# payload. Try to resolve the sender from the "SenderName: body"
# convention embedded in the message text, matched against the known
# contacts roster. When the contacts roster does not yet contain the
# sender, create a synthetic placeholder node so that the message
# receives a stable from_id and the UI can render a badge immediately.
# The web app will migrate messages to the real node ID once the sender
# is seen via a contact advertisement.
sender_name = _parse_sender_name(text)
from_id = iface.lookup_node_id_by_name(sender_name) if sender_name else None
if from_id is None and sender_name:
synthetic_id = _derive_synthetic_node_id(sender_name)
if synthetic_id not in iface._synthetic_node_ids:
_handlers.upsert_node(synthetic_id, _synthetic_node_dict(sender_name))
iface._synthetic_node_ids.add(synthetic_id)
from_id = synthetic_id
# Upsert synthetic placeholder nodes for any @[Name] mentions in the
# message body whose names are not yet in the contacts roster. This
# ensures mention badges resolve even before the mentioned node is seen.
for mention_name in _extract_mention_names(text):
if not iface.lookup_node_id_by_name(mention_name):
mention_id = _derive_synthetic_node_id(mention_name)
if mention_id not in iface._synthetic_node_ids:
_handlers.upsert_node(
mention_id, _synthetic_node_dict(mention_name)
)
iface._synthetic_node_ids.add(mention_id)
# The dedup fingerprint uses the parsed sender name (lowercased and
# stripped) rather than ``from_id``: each ingestor independently
# resolves Alice to either her real ``!aabbccdd`` (when she is in its
# contact roster) or to a synthetic id derived from her name; the
# parsed name lives in the message text itself, so it is identical
# across all receivers regardless of roster state.
sender_identity = (sender_name or "").strip().lower()
packet = {
"id": _derive_message_id(
sender_identity, sender_ts, f"c{channel_idx}", text
),
"rxTime": rx_time,
"rx_time": rx_time,
"from_id": from_id,
"to_id": "^all",
"channel": channel_idx,
"snr": payload.get("SNR"),
"rssi": payload.get("RSSI"),
"protocol": "meshcore",
"decoded": {
"portnum": "TEXT_MESSAGE_APP",
"text": text,
"channel": channel_idx,
},
}
_handlers._mark_packet_seen()
_handlers.store_packet_dict(packet)
config._debug_log(
"MeshCore channel message",
context="meshcore.channel_msg",
channel=channel_idx,
sender=sender_name,
from_id=from_id,
)
async def on_contact_msg(evt) -> None:
payload = evt.payload or {}
sender_ts = payload.get("sender_timestamp")
text = payload.get("text")
if sender_ts is None or not text:
return
rx_time = int(time.time())
pubkey_prefix = payload.get("pubkey_prefix", "")
from_id = iface.lookup_node_id(pubkey_prefix)
# ``pubkey_prefix`` is already a sender-side stable identifier (the
# first six bytes of the sender's public key); ``"dm"`` namespaces
# direct messages so they cannot collide with channel messages that
# happen to share the other components.
packet = {
"id": _derive_message_id(pubkey_prefix or "", sender_ts, "dm", text),
"rxTime": rx_time,
"rx_time": rx_time,
"from_id": from_id,
"to_id": iface.host_node_id,
"channel": 0,
"snr": payload.get("SNR"),
"protocol": "meshcore",
"decoded": {
"portnum": "TEXT_MESSAGE_APP",
"text": text,
"channel": 0,
},
}
_handlers._mark_packet_seen()
_handlers.store_packet_dict(packet)
async def on_disconnected(evt) -> None:
iface.isConnected = False
config._debug_log(
"MeshCore node disconnected",
context="meshcore.disconnect",
target=target or "unknown",
severity="warning",
always=True,
)
return {
"CHANNEL_INFO": on_channel_info,
"SELF_INFO": on_self_info,
"CONTACTS": on_contacts,
"NEW_CONTACT": on_contact_update,
"NEXT_CONTACT": on_contact_update,
"CHANNEL_MSG_RECV": on_channel_msg,
"CONTACT_MSG_RECV": on_contact_msg,
"DISCONNECTED": on_disconnected,
}
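The dedup reasoning in `on_channel_msg` — every fingerprint component is sender-side, so receivers with skewed clocks or different rosters agree — can be illustrated with a hypothetical fingerprint. The real `_derive_message_id` lives in `messages.py` and may differ in detail; this sketch only demonstrates the invariant:

```python
import hashlib

_ID_MASK = (1 << 53) - 1  # assumed 53-bit clamp, mirroring _MESHCORE_ID_MASK

def fingerprint(sender_identity: str, sender_ts: int, discriminator: str, text: str) -> int:
    """Hypothetical sender-side message fingerprint (illustrative only)."""
    blob = f"{sender_identity}|{sender_ts}|{discriminator}|{text}".encode("utf-8")
    return int.from_bytes(hashlib.sha256(blob).digest()[:8], "big") & _ID_MASK

# Receiver A knows Alice's real node ID and receiver B only has a synthetic
# one, but both parse the same "Alice: hi" text, so the IDs collapse.
a = fingerprint("alice", 1_700_000_000, "c0", "Alice: hi")
b = fingerprint("alice", 1_700_000_000, "c0", "Alice: hi")
dm = fingerprint("a1b2c3d4e5f6", 1_700_000_000, "dm", "Alice: hi")
```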
@@ -0,0 +1,125 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Pure helpers that derive canonical MeshCore node identifiers.
These helpers are deterministic and side-effect-free so they can be imported
from anywhere in the MeshCore package without circular concerns.
"""
from __future__ import annotations
import hashlib
from ._constants import _MESHCORE_ADV_TYPE_ROLE
def _meshcore_node_id(public_key_hex: str | None) -> str | None:
"""Derive a canonical ``!xxxxxxxx`` node ID from a MeshCore public key.
Uses the first four bytes (eight hex characters) of the 32-byte public
key, formatted as ``!xxxxxxxx``.
Parameters:
public_key_hex: 64-character lowercase hex string for the node's
public key as returned by the MeshCore library.
Returns:
Canonical ``!xxxxxxxx`` node ID string, or ``None`` when the key is
absent or too short.
"""
if not public_key_hex or len(public_key_hex) < 8:
return None
return "!" + public_key_hex[:8].lower()
def _meshcore_short_name(node_id: str | None) -> str:
"""Derive a four-character short name from a canonical node ID.
Uses the first two bytes (four hex characters) of the ``!xxxxxxxx`` node
ID. This keeps the short name consistent with the node ID itself — if the
node ID is later replaced when the real public key is heard, the short name
will update alongside it.
Parameters:
node_id: Canonical ``!xxxxxxxx`` node ID string (as returned by
:func:`_meshcore_node_id`).
Returns:
Four lowercase hex characters (e.g. ``"cafe"``), or an empty string
when the node ID is missing or too short.
"""
if not node_id:
return ""
raw = node_id.lstrip("!")
if len(raw) < 4:
return ""
return raw[:4].lower()
def _meshcore_adv_type_to_role(adv_type: object) -> str | None:
"""Map MeshCore ``ADV_TYPE_*`` (contact ``type`` / self ``adv_type``) to ingest role.
Values match MeshCore firmware ``AdvertDataHelpers.h`` (``ADV_TYPE_CHAT``,
``ADV_TYPE_REPEATER``, …). Role strings match the MeshCore palette keys
used by the web dashboard (``COMPANION``, ``REPEATER``, …).
Parameters:
adv_type: Raw type byte from meshcore_py (typically ``int`` 0–4).
Non-integer values (e.g. ``float``, ``None``) are rejected and
return ``None``. Future firmware type codes not yet in the mapping
also return ``None`` until the table is updated.
Returns:
Uppercase role string, or ``None`` when the value is unknown or should
not override the web default (``ADV_TYPE_NONE`` / unrecognised).
"""
if not isinstance(adv_type, int):
return None
return _MESHCORE_ADV_TYPE_ROLE.get(adv_type)
def _derive_synthetic_node_id(long_name: str) -> str:
"""Derive a deterministic synthetic ``!xxxxxxxx`` node ID from a long name.
Uses the first four bytes of SHA-256(UTF-8 encoded name), formatted as
``!xxxxxxxx``. The same long name always produces the same ID across
restarts. The probability of collision with a real public-key-derived ID
is ~1 in 4 billion per pair, which is negligible in practice.
Parameters:
long_name: Node long name used as the hash input.
Returns:
Canonical ``!xxxxxxxx`` node ID string.
"""
return "!" + hashlib.sha256(long_name.encode("utf-8")).hexdigest()[:8]
def _pubkey_prefix_to_node_id(contacts: dict, pubkey_prefix: str) -> str | None:
"""Look up a canonical node ID by six-byte public-key prefix.
Parameters:
contacts: Mapping of full ``public_key`` hex strings to contact dicts.
pubkey_prefix: Twelve-character hex string (six bytes) as used in
MeshCore direct-message events.
Returns:
Canonical ``!xxxxxxxx`` node ID for the first matching contact, or
``None`` when no contact's public key starts with *pubkey_prefix*.
"""
for pub_key in contacts:
if pub_key.startswith(pubkey_prefix):
return _meshcore_node_id(pub_key)
return None
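The identifier pipeline above (public key → node ID → short name) is deterministic and easy to spot-check. This standalone sketch mirrors the two derivations under hypothetical names; the example key is made up:

```python
def node_id(public_key_hex):
    """Mirror of _meshcore_node_id: first 8 hex chars, '!'-prefixed."""
    if not public_key_hex or len(public_key_hex) < 8:
        return None
    return "!" + public_key_hex[:8].lower()

def short_name(nid):
    """Mirror of _meshcore_short_name: first 4 hex chars of the node ID."""
    if not nid:
        return ""
    raw = nid.lstrip("!")
    return raw[:4].lower() if len(raw) >= 4 else ""

key = "CAFEBABE" + "00" * 28  # 64 hex chars, as returned by the library
nid = node_id(key)
```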
@@ -0,0 +1,159 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Live MeshCore interface and the connection-stage shutdown sentinel."""
from __future__ import annotations
import asyncio
import threading
from .decode import _contact_to_node_dict
from .identity import _meshcore_node_id, _pubkey_prefix_to_node_id
class ClosedBeforeConnectedError(ConnectionError):
"""Raised when :meth:`_MeshcoreInterface.close` is called while the
connection coroutine is still waiting for the device handshake to complete.
This is a :exc:`ConnectionError` subclass so callers that only handle the
base class continue to work, while callers that need to distinguish a
user-initiated shutdown from a hardware failure can catch this type
specifically.
"""
class _MeshcoreInterface:
"""Live MeshCore interface managing an asyncio event loop in a background thread.
Holds connection state, a thread-safe snapshot of known contacts, and the
handles needed to shut down cleanly when the daemon requests a disconnect.
"""
host_node_id: str | None = None
"""Canonical ``!xxxxxxxx`` identifier for the connected host device."""
def __init__(self, *, target: str | None) -> None:
"""Initialise the interface with the connection *target*."""
self._target = target
self._mc: object | None = None
self._loop: asyncio.AbstractEventLoop | None = None
self._thread: threading.Thread | None = None
self._stop_event: asyncio.Event | None = None
self._contacts_lock = threading.Lock()
self._contacts: dict = {}
self.isConnected: bool = False
# Tracks synthetic node IDs already upserted this session to avoid
# repeating the HTTP POST for every message from the same unknown sender.
# This set is reset on reconnect (because _MeshcoreInterface is recreated),
# which may cause extra upserts after a disconnect — the ON CONFLICT guard
# in the Ruby web app ensures those are idempotent and safe.
self._synthetic_node_ids: set[str] = set()
self._self_info_payload: dict | None = None
"""Most recent SELF_INFO payload received from the device, or ``None``."""
# ------------------------------------------------------------------
# Contact management (called from the asyncio thread)
# ------------------------------------------------------------------
def _update_contact(self, contact: dict) -> None:
"""Thread-safely add or update a contact in the local snapshot.
Parameters:
contact: Contact dict from a ``CONTACTS``, ``NEW_CONTACT``, or
``NEXT_CONTACT`` event.
"""
pub_key = contact.get("public_key")
if pub_key:
with self._contacts_lock:
self._contacts[pub_key] = contact
def contacts_snapshot(self) -> list[tuple[str, dict]]:
"""Return a thread-safe snapshot of all known contacts as node entries.
Returns:
List of ``(canonical_node_id, node_dict)`` pairs, skipping any
contact whose public key cannot be mapped to a valid node ID.
"""
with self._contacts_lock:
items = list(self._contacts.items())
result = []
for pub_key, contact in items:
node_id = _meshcore_node_id(pub_key)
if node_id is not None:
result.append((node_id, _contact_to_node_dict(contact)))
return result
def lookup_node_id(self, pubkey_prefix: str) -> str | None:
"""Return the canonical node ID for the contact matching *pubkey_prefix*.
Parameters:
pubkey_prefix: Twelve-character hex string (six bytes) from a
``CONTACT_MSG_RECV`` event.
Returns:
Canonical ``!xxxxxxxx`` node ID, or ``None`` when no match.
"""
with self._contacts_lock:
return _pubkey_prefix_to_node_id(self._contacts, pubkey_prefix)
def lookup_node_id_by_name(self, adv_name: str) -> str | None:
"""Return the canonical node ID for the contact whose ``adv_name`` matches.
Used to resolve the sender of a MeshCore channel message from the
``"SenderName: body"`` text prefix when no ``pubkey_prefix`` is
available in the event payload. The comparison is case-sensitive
because ``adv_name`` values come verbatim from the MeshCore firmware.
Parameters:
adv_name: Advertised name to look up. Leading and trailing
whitespace is stripped before comparison.
Returns:
Canonical ``!xxxxxxxx`` node ID, or ``None`` when no contact with
that name is known.
"""
name = adv_name.strip() if adv_name else ""
if not name:
return None
with self._contacts_lock:
for pub_key, contact in self._contacts.items():
contact_name = (contact.get("adv_name") or "").strip()
if contact_name == name:
return _meshcore_node_id(pub_key)
return None
# ------------------------------------------------------------------
# Lifecycle
# ------------------------------------------------------------------
def close(self) -> None:
"""Signal the background event loop to stop and wait for the thread.
Safe to call multiple times and from any thread.
"""
self.isConnected = False
loop = self._loop
stop_event = self._stop_event
if loop is not None and not loop.is_closed():
try:
if stop_event is not None:
loop.call_soon_threadsafe(stop_event.set)
else:
loop.call_soon_threadsafe(loop.stop)
except RuntimeError:
pass
thread = self._thread
if thread is not None and thread.is_alive():
thread.join(timeout=5.0)
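The name-lookup semantics in `lookup_node_id_by_name` — whitespace-stripped but case-sensitive, because `adv_name` comes verbatim from firmware — can be sketched without the interface or its lock. The roster and helper name here are illustrative:

```python
def lookup_by_name(contacts: dict, adv_name: str):
    """Strip whitespace, then compare adv_name case-sensitively."""
    name = adv_name.strip() if adv_name else ""
    if not name:
        return None
    for pub_key, contact in contacts.items():
        if (contact.get("adv_name") or "").strip() == name:
            return "!" + pub_key[:8].lower()  # canonical node ID
    return None

roster = {"a1b2c3d4" + "0" * 56: {"adv_name": " Alice "}}
hit = lookup_by_name(roster, "Alice")
miss = lookup_by_name(roster, "alice")  # case-sensitive: no match
```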
@@ -0,0 +1,130 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Sender-side fingerprinting and parsing helpers for MeshCore messages."""
from __future__ import annotations
import hashlib
import time
from ._constants import _MENTION_RE, _MESHCORE_ID_MASK
def _derive_message_id(
sender_identity: str,
sender_ts: int,
discriminator: str,
text: str,
) -> int:
"""Derive a stable 53-bit message ID from sender-side MeshCore fields.
MeshCore does not assign firmware-side packet IDs. This function produces
a deterministic 53-bit integer fingerprint of a physical transmission so
that the same packet heard by multiple ingestors collapses to a single
``messages`` row via the ``messages.id`` PRIMARY KEY upsert path. Every
component of the fingerprint is sender-side, ensuring two receivers with
different clocks or roster state still compute the same value.
Parameters:
sender_identity: Stable sender identifier shared across receivers.
For channel messages this is the lowercased+stripped sender name
parsed from the message text via :func:`_parse_sender_name`; for
direct messages it is the sender's MeshCore ``pubkey_prefix``.
Must be a string (use ``""`` when unavailable).
sender_ts: Unix timestamp from the sender's clock (identical across
receivers regardless of receiver-side clock skew).
discriminator: Namespace tag separating message classes that could
otherwise collide. ``"c<N>"`` is reserved for channel messages
on channel ``N``; ``"dm"`` is reserved for direct messages.
text: Message text exactly as transmitted by the sender.
Returns:
A non-negative 53-bit integer suitable for the ``id`` column. The
value is bounded by ``0 <= id <= (1 << 53) - 1`` so it survives the
JSON → JavaScript number round-trip without precision loss.
"""
# The ``v1:`` prefix lets us evolve the fingerprint format (e.g. add a
# channel-secret hash) by bumping to ``v2:`` without colliding with
# existing ids written under the v1 scheme.
fingerprint = f"v1:{sender_identity}:{sender_ts}:{discriminator}:{text}"
digest = hashlib.sha256(fingerprint.encode("utf-8", errors="replace")).digest()
return int.from_bytes(digest[:7], "big") & _MESHCORE_ID_MASK
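The scheme above can be exercised in isolation. A minimal standalone sketch, assuming `_MESHCORE_ID_MASK` is `(1 << 53) - 1` (consistent with the documented 53-bit bound):

```python
import hashlib

_MESHCORE_ID_MASK = (1 << 53) - 1  # assumption: matches the documented 53-bit bound

def derive_message_id(sender_identity: str, sender_ts: int,
                      discriminator: str, text: str) -> int:
    """Stable 53-bit fingerprint of sender-side fields (v1 scheme)."""
    fingerprint = f"v1:{sender_identity}:{sender_ts}:{discriminator}:{text}"
    digest = hashlib.sha256(fingerprint.encode("utf-8", errors="replace")).digest()
    return int.from_bytes(digest[:7], "big") & _MESHCORE_ID_MASK

# Two ingestors hearing the same packet compute the same ID; a different
# discriminator (channel vs. DM) yields a different ID.
same = derive_message_id("alice", 1700000000, "c0", "hi")
assert same == derive_message_id("alice", 1700000000, "c0", "hi")
assert same != derive_message_id("alice", 1700000000, "dm", "hi")
```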
def _parse_sender_name(text: str) -> str | None:
"""Extract the sender name from a MeshCore channel message text.
MeshCore channel messages use the convention ``"SenderName: body"``.
Only the first colon is treated as the separator; colons that appear in the
body are preserved. The sender name is stripped of leading and trailing
whitespace.
Parameters:
text: Raw message text as stored in the database.
Returns:
Stripped sender name string, or ``None`` when the text does not
contain a colon or the portion before the colon is blank.
"""
colon_idx = text.find(":")
if colon_idx < 0:
return None
name = text[:colon_idx].strip()
return name if name else None
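Because only the first colon is treated as the separator, colons in the body survive intact. A standalone illustration of the parsing rule:

```python
def parse_sender_name(text: str):
    # Split on the first colon only; colons in the body are preserved.
    colon_idx = text.find(":")
    if colon_idx < 0:
        return None
    name = text[:colon_idx].strip()
    return name if name else None

assert parse_sender_name("Alice: note: meet at 10:30") == "Alice"
assert parse_sender_name("no separator") is None
assert parse_sender_name("  : blank name") is None
```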
def _extract_mention_names(text: str) -> list[str]:
"""Extract all ``@[Name]`` mention names from a MeshCore message body.
Parameters:
text: Raw message text that may contain ``@[Name]`` mention patterns.
Returns:
List of extracted name strings (may be empty).
"""
return _MENTION_RE.findall(text)
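`_MENTION_RE` itself lives in `_constants.py` and is not shown in this diff; a plausible stand-in consistent with the documented `@[Name]` convention (the real pattern may differ in detail):

```python
import re

# Hypothetical pattern standing in for _MENTION_RE; illustrative only.
MENTION_RE = re.compile(r"@\[([^\]]+)\]")

assert MENTION_RE.findall("cc @[Alice] and @[Bob K]") == ["Alice", "Bob K"]
assert MENTION_RE.findall("no mentions here") == []
```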
def _synthetic_node_dict(long_name: str) -> dict:
"""Build a synthetic node dict for an unknown MeshCore channel sender.
Synthetic nodes are placeholder entries created when a channel message
arrives from a sender who is not yet in the connected device's contacts
roster. They carry ``role=COMPANION`` (the only role capable of sending
channel messages). The short name is intentionally omitted here — the
Ruby web app derives it at query time via
``meshcore_companion_display_short_name`` for all COMPANION nodes.
When the real contact advertisement is later received, the Ruby web app
detects the matching long name, migrates all messages from the synthetic
node ID to the real one, and removes the placeholder row.
Parameters:
long_name: Sender name parsed from the ``"SenderName: body"`` prefix.
Returns:
Node dict compatible with the ``POST /api/nodes`` payload format,
with ``user.synthetic`` set to ``True``.
"""
return {
"lastHeard": int(time.time()),
"protocol": "meshcore",
"user": {
"longName": long_name,
"shortName": "",
"role": "COMPANION",
"synthetic": True,
},
}
@@ -0,0 +1,69 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Forward MeshCore advertised positions to ``POST /api/positions``."""
from __future__ import annotations
import hashlib
import time
from ... import queue as _queue
from ...serialization import _iso, _node_num_from_id
def _store_meshcore_position(
node_id: str,
lat: float,
lon: float,
position_time: int | None,
ingestor: str | None,
) -> None:
"""Enqueue a ``POST /api/positions`` for a MeshCore contact's advertised position.
MeshCore does not issue dedicated position packets; position data is embedded
in contact advertisements. A stable pseudo-ID is derived from the node
identity and the position timestamp so repeated advertisements of the same
position are idempotently de-duplicated by the web app's ``ON CONFLICT``
clause.
Parameters:
node_id: Canonical ``!xxxxxxxx`` node identifier.
lat: Latitude in decimal degrees.
lon: Longitude in decimal degrees.
position_time: Unix timestamp from the contact's ``last_advert`` field,
or ``None`` to fall back to the current wall-clock time.
ingestor: Canonical node ID of the host ingestor, or ``None``.
"""
rx_time = int(time.time())
pt = position_time or rx_time
# Stable 63-bit pseudo-ID unique to (node, position_time) so that the web
# app ON CONFLICT clause de-duplicates repeated advertisements of the same
# position without collisions between different nodes.
digest = hashlib.sha256(f"{node_id}:{pt}".encode()).digest()
pos_id = int.from_bytes(digest[:8], "big") & 0x7FFFFFFFFFFFFFFF
node_num = _node_num_from_id(node_id)
payload = {
"id": pos_id,
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"node_id": node_id,
"node_num": node_num,
"from_id": node_id,
"latitude": lat,
"longitude": lon,
"position_time": pt,
"ingestor": ingestor,
}
_queue._queue_post_json("/api/positions", payload)
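The pseudo-ID derivation shown above is self-contained and can be checked directly:

```python
import hashlib

def meshcore_position_id(node_id: str, position_time: int) -> int:
    # 63-bit mask keeps the value positive in a signed 64-bit column.
    digest = hashlib.sha256(f"{node_id}:{position_time}".encode()).digest()
    return int.from_bytes(digest[:8], "big") & 0x7FFFFFFFFFFFFFFF

# Idempotent for repeats of the same (node, position_time) pair:
assert meshcore_position_id("!12ab34cd", 1700000000) == meshcore_position_id("!12ab34cd", 1700000000)
# Different nodes at the same timestamp get distinct IDs:
assert meshcore_position_id("!12ab34cd", 1700000000) != meshcore_position_id("!99ff00aa", 1700000000)
```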
@@ -0,0 +1,196 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Public ``MeshcoreProvider`` satisfying the :class:`MeshProtocol` interface."""
from __future__ import annotations
import asyncio
import sys
import threading
from ... import config
from ._constants import _CONNECT_TIMEOUT_SECS
from .decode import _self_info_to_node_dict
from .identity import _meshcore_node_id
from .interface import _MeshcoreInterface
class MeshcoreProvider:
"""MeshCore ingestion provider.
Connects to a MeshCore node via serial port, BLE, or TCP/IP. The
connection type is inferred from the target string; see :meth:`connect`
for routing rules.
The provider runs MeshCore's ``asyncio`` event loop in a background daemon
thread. Incoming ``SELF_INFO``, ``CONTACTS``, ``NEW_CONTACT``,
``CHANNEL_MSG_RECV``, and ``CONTACT_MSG_RECV`` events are forwarded to the
HTTP ingest queue via the shared handler functions.
"""
name = "meshcore"
def subscribe(self) -> list[str]:
"""Return subscribed topic names.
MeshCore uses an ``asyncio`` event system rather than a pubsub bus,
so there are no topics to register at startup.
"""
return []
def connect(
self, *, active_candidate: str | None
) -> tuple[object, str | None, str | None]:
"""Connect to a MeshCore node via serial, BLE, or TCP.
Starts an asyncio event loop in a background daemon thread, performs
the MeshCore companion-protocol handshake, and blocks until the node's
self-info is received or the timeout expires.
Connection type is inferred from *active_candidate* (or
:data:`~data.mesh_ingestor.config.CONNECTION`):
* BLE MAC / UUID → :class:`meshcore.BLEConnection`
* ``host:port`` → :class:`meshcore.TCPConnection`
* serial path → :class:`meshcore.SerialConnection`
* ``None`` / empty → first candidate from
:func:`~data.mesh_ingestor.connection.default_serial_targets`
Parameters:
active_candidate: Previously resolved connection target, or
``None`` to fall back to
:data:`~data.mesh_ingestor.config.CONNECTION`.
Returns:
``(iface, resolved_target, next_active_candidate)`` matching the
:class:`~data.mesh_ingestor.provider.Provider` contract.
Raises:
ConnectionError: When the node does not complete the handshake
within :data:`_CONNECT_TIMEOUT_SECS` seconds.
"""
target: str | None = active_candidate or config.CONNECTION
if not target:
# Look up via the package so test fakes installed via
# ``monkeypatch.setattr(mod, "default_serial_targets", ...)`` apply.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
candidates = pkg.default_serial_targets()
target = candidates[0] if candidates else "/dev/ttyACM0"
config._debug_log(
"Connecting to MeshCore node",
context="meshcore.connect",
target=target,
)
iface = _MeshcoreInterface(target=target)
connected_event = threading.Event()
error_holder: list = [None]
# Resolve the runner + asyncio handler via the parent package so test
# fakes installed via ``monkeypatch.setattr(mod, "_run_meshcore", ...)``
# apply at call time.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
def _run_loop() -> None:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# Second line of defence around issue #754: if a detached task
# inside the upstream ``meshcore`` library ever raises an
# exception we do not anticipate in ``_meshcore_patches``, funnel
# it through our logger instead of the default handler (which
# only writes ``Task exception was never retrieved`` to stderr).
loop.set_exception_handler(pkg._log_unhandled_loop_exception)
iface._loop = loop
try:
loop.run_until_complete(
pkg._run_meshcore(iface, target, connected_event, error_holder)
)
finally:
loop.close()
thread = threading.Thread(target=_run_loop, name="meshcore-loop", daemon=True)
iface._thread = thread
thread.start()
if not connected_event.wait(timeout=_CONNECT_TIMEOUT_SECS):
iface.close()
raise ConnectionError(
f"Timed out waiting for MeshCore node at {target!r} "
f"after {_CONNECT_TIMEOUT_SECS:g}s."
)
if error_holder[0] is not None:
iface.close()
raise error_holder[0]
return iface, target, target
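The routing rules in the docstring can be sketched as a small classifier. This is illustrative only: the real inference lives in the meshcore connection helpers (`_make_connection`) and may use different heuristics:

```python
import re

_MAC_RE = re.compile(r"^(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")
_UUID_RE = re.compile(r"^[0-9A-Fa-f]{8}(?:-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12}$")
_HOST_PORT_RE = re.compile(r"^[\w.\-]+:\d+$")

def classify_target(target: str) -> str:
    if _MAC_RE.match(target) or _UUID_RE.match(target):
        return "ble"      # BLE MAC address or UUID
    if _HOST_PORT_RE.match(target):
        return "tcp"      # host:port
    return "serial"       # anything else: a serial device path

assert classify_target("AA:BB:CC:DD:EE:FF") == "ble"
assert classify_target("192.168.1.10:4403") == "tcp"
assert classify_target("/dev/ttyACM0") == "serial"
```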
def extract_host_node_id(self, iface: object) -> str | None:
"""Return the canonical ``!xxxxxxxx`` host node ID from the interface.
Parameters:
iface: Active :class:`_MeshcoreInterface` returned by
:meth:`connect`.
"""
return getattr(iface, "host_node_id", None)
def self_node_item(self, iface: object) -> tuple[str, dict] | None:
"""Return the ``(node_id, node_dict)`` pair for the host self-node.
Uses the most recently cached ``SELF_INFO`` payload stored on the
interface. Returns ``None`` when no SELF_INFO has been received yet
or when the public key cannot be mapped to a valid node ID.
Parameters:
iface: Active :class:`_MeshcoreInterface` instance.
Returns:
``(canonical_node_id, node_dict)`` tuple or ``None``.
"""
if not isinstance(iface, _MeshcoreInterface):
return None
payload = getattr(iface, "_self_info_payload", None)
if not payload:
return None
node_id = _meshcore_node_id(payload.get("public_key", ""))
if not node_id:
return None
return node_id, _self_info_to_node_dict(payload)
def node_snapshot_items(self, iface: object) -> list[tuple[str, dict]]:
"""Return a snapshot of all known MeshCore contacts as node entries.
Includes the host self-node when a ``SELF_INFO`` payload has already
been received, so that the initial snapshot sent by the daemon covers
the local device whenever the background event loop delivered
``SELF_INFO`` before the snapshot was taken.
Parameters:
iface: Active :class:`_MeshcoreInterface` instance. Any other
object type causes an empty list to be returned.
Returns:
List of ``(canonical_node_id, node_dict)`` pairs suitable for
passing to :func:`~data.mesh_ingestor.handlers.upsert_node`.
"""
if not isinstance(iface, _MeshcoreInterface):
return []
items: list[tuple[str, dict]] = list(iface.contacts_snapshot())
self_item = self.self_node_item(iface)
if self_item is not None:
items.append(self_item)
return items
@@ -0,0 +1,152 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Asyncio entry point that drives a MeshCore connection from a worker thread."""
from __future__ import annotations
import asyncio
import sys
import threading
from ... import config
from ._constants import _DEFAULT_BAUDRATE
from .channels import _ensure_channel_names
from .connection import _make_connection
from .handlers import _make_event_handlers
from .interface import ClosedBeforeConnectedError, _MeshcoreInterface
async def _run_meshcore(
iface: _MeshcoreInterface,
target: str,
connected_event: threading.Event,
error_holder: list,
) -> None:
"""Connect to a MeshCore node and keep the event loop running until closed.
This coroutine is the single entry point for the background asyncio thread.
It connects the MeshCore library, registers event handlers, fetches the
initial contact list, starts auto-message polling, and then waits for the
:attr:`_MeshcoreInterface._stop_event` to be set.
Parameters:
iface: Shared interface object for state and contact tracking.
target: Resolved, non-empty connection target (serial, BLE, or TCP).
connected_event: Threading event signalled when the connection
succeeds or fails, to unblock the calling ``connect()`` method.
error_holder: Single-element list; set to the raised exception when
the connection attempt fails so the caller can re-raise it.
"""
# Install early so :meth:`_MeshcoreInterface.close` can signal shutdown with
# ``stop_event.set()`` instead of ``loop.stop()`` while ``connect()`` or the
# ``finally`` disconnect is still running (avoids RuntimeError from
# :meth:`asyncio.loop.run_until_complete`).
stop_event = asyncio.Event()
iface._stop_event = stop_event
# Resolve meshcore-library symbols via the parent package so test fakes
# installed via ``monkeypatch.setattr(mod, "MeshCore", ...)`` apply.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
MeshCore = pkg.MeshCore
EventType = pkg.EventType
mc = None
try:
cx = _make_connection(target, _DEFAULT_BAUDRATE)
mc = MeshCore(cx)
iface._mc = mc
handlers_map = _make_event_handlers(iface, target)
for event_name, callback in handlers_map.items():
mc.subscribe(EventType[event_name], callback)
_handled_types = frozenset(EventType[n] for n in handlers_map)
# Bookkeeping events that require no action and should not be logged.
_silent_types = frozenset(
{
EventType.CONNECTED,
EventType.ACK,
EventType.OK,
EventType.ERROR,
EventType.NO_MORE_MSGS,
EventType.MESSAGES_WAITING,
EventType.MSG_SENT,
EventType.CURRENT_TIME,
}
)
async def _on_unhandled(evt) -> None:
if evt.type in _handled_types or evt.type in _silent_types:
return
# Look up via the parent package so test fakes installed via
# ``monkeypatch.setattr(mod, "_record_meshcore_message", ...)`` apply.
pkg._record_meshcore_message(
evt.payload,
source=f"{target or 'auto'}:{evt.type.name}",
)
mc.subscribe(None, _on_unhandled)
result = await mc.connect()
if result is None:
raise ConnectionError(
f"MeshCore node at {target!r} did not respond to the appstart "
"handshake. Ensure the device is running MeshCore companion-mode "
"firmware."
)
if stop_event.is_set():
raise ClosedBeforeConnectedError(
"Mesh interface close was requested before the connection could be completed."
)
iface.isConnected = True
connected_event.set()
try:
await mc.ensure_contacts()
except Exception as exc:
config._debug_log(
"Failed to fetch initial contacts",
context="meshcore.contacts",
severity="warning",
always=True,
error=str(exc),
)
try:
await _ensure_channel_names(mc)
except Exception as exc:
config._debug_log(
"Failed to fetch channel names",
context="meshcore.channels",
severity="warning",
error=str(exc),
)
await mc.start_auto_message_fetching()
await stop_event.wait()
except Exception as exc:
if not connected_event.is_set():
error_holder[0] = exc
connected_event.set()
finally:
if mc is not None:
try:
await mc.disconnect()
except Exception:
pass
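The `close()` / `stop_event` handshake used here follows a standard pattern for stopping an event loop owned by another thread: the loop blocks on an `asyncio.Event`, and the caller sets it via `call_soon_threadsafe`. A self-contained sketch of that pattern:

```python
import asyncio
import threading

def run_loop_until_stopped() -> bool:
    ready = threading.Event()
    holder = {}

    def _run():
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        stop = asyncio.Event()
        holder["loop"], holder["stop"] = loop, stop
        ready.set()
        # Block until stop is set, then shut down cleanly.
        loop.run_until_complete(stop.wait())
        loop.close()

    t = threading.Thread(target=_run, daemon=True)
    t.start()
    ready.wait()
    # Signal shutdown from the calling thread, as close() does:
    holder["loop"].call_soon_threadsafe(holder["stop"].set)
    t.join(timeout=5.0)
    return not t.is_alive()

assert run_loop_until_stopped()
```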
@@ -878,9 +878,9 @@ checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe"
[[package]]
name = "openssl"
-version = "0.10.75"
+version = "0.10.78"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "08838db121398ad17ab8531ce9de97b244589089e290a384c900cb9ff7434328"
+checksum = "f38c4372413cdaaf3cc79dd92d29d7d9f5ab09b51b10dded508fb90bb70b9222"
dependencies = [
"bitflags",
"cfg-if",
@@ -910,9 +910,9 @@ checksum = "d05e27ee213611ffe7d6348b942e8f942b37114c00cc03cec254295a4a17852e"
[[package]]
name = "openssl-sys"
-version = "0.9.111"
+version = "0.9.114"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "82cab2d520aa75e3c58898289429321eb788c3106963d0dc886ec7a5f4adc321"
+checksum = "13ce1245cd07fcc4cfdb438f7507b0c7e4f3849a69fd84d52374c66d83741bb6"
dependencies = [
"cc",
"libc",
@@ -969,7 +969,7 @@ checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"
[[package]]
name = "potatomesh-matrix-bridge"
-version = "0.6.2"
+version = "0.6.3"
dependencies = [
"anyhow",
"axum",
@@ -1255,9 +1255,9 @@ dependencies = [
[[package]]
name = "rustls-webpki"
-version = "0.103.10"
+version = "0.103.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "df33b2b81ac578cabaf06b89b0631153a3f416b0a886e8a7a1707fb51abbd1ef"
+checksum = "61c429a8649f110dddef65e2a5ad240f747e85f7758a6bccc7e5777bd33f756e"
dependencies = [
"ring",
"rustls-pki-types",
@@ -14,7 +14,7 @@
[package]
name = "potatomesh-matrix-bridge"
-version = "0.6.2"
+version = "0.6.3"
edition = "2021"
[dependencies]
@@ -17,6 +17,7 @@ mod config;
mod matrix;
mod matrix_server;
mod potatomesh;
+mod preset;
use std::{fs, net::SocketAddr, path::Path};
@@ -255,8 +256,17 @@ async fn handle_message(
let display_name = display_name_for_node(&node);
matrix.set_display_name(&user_id, &display_name).await?;
-// Format the bridged message
-let preset_short = modem_preset_short(&msg.modem_preset);
+// Format the bridged message. `lora_freq` is `u32`, so 0 stands in for
+// "unknown" — collapse that to `None` to match the JS pipeline (which
+// runs `normalizeFrequency` and discards 0/non-finite before reaching
+// the preset lookup).
+let freq_mhz = if msg.lora_freq > 0 {
+Some(msg.lora_freq as f64)
+} else {
+None
+};
+let abbr = preset::abbreviate_preset(&msg.modem_preset, freq_mhz);
+let preset_short = preset::normalize_preset_slot(abbr.as_deref());
let tag = protocol_tag(msg.protocol.as_deref());
let prefix = format!(
"{tag}[{freq}][{preset_short}][{channel}]",
@@ -290,19 +300,6 @@ fn protocol_tag(protocol: Option<&str>) -> &'static str {
}
}
-/// Build a compact modem preset label like "LF" for "LongFast".
-fn modem_preset_short(preset: &str) -> String {
-let letters: String = preset
-.chars()
-.filter(|ch| ch.is_ascii_uppercase())
-.collect();
-if letters.is_empty() {
-preset.chars().take(2).collect()
-} else {
-letters
-}
-}
/// Build plain text + HTML message bodies with inline-code metadata.
fn format_message_bodies(prefix: &str, text: &str) -> (String, String) {
let body = format!("`{}` {}", prefix, text);
@@ -383,12 +380,6 @@ mod tests {
}
}
-#[test]
-fn modem_preset_short_handles_camelcase() {
-assert_eq!(modem_preset_short("LongFast"), "LF");
-assert_eq!(modem_preset_short("MediumFast"), "MF");
-}
#[test]
fn format_message_bodies_escape_html() {
let (body, formatted) = format_message_bodies("[868][LF]", "Hello <&>");
@@ -757,8 +748,17 @@ mod tests {
/// Drive `handle_message` end-to-end against a mocked Matrix homeserver
/// and PotatoMesh API, asserting that the bridged message body carries
-/// the expected protocol tag. Shared by the per-protocol test cases below.
-async fn assert_handle_message_emits_tag(protocol: Option<&str>, expected_tag: &str) {
+/// the expected protocol tag and preset abbreviation. Shared by the
+/// per-protocol test cases below. `lora_freq` is plumbed through both
+/// the input message and the expected body so the missing-freq path
+/// (`lora_freq = 0`) can be exercised alongside the populated cases.
+async fn assert_handle_message_emits_tag(
+protocol: Option<&str>,
+expected_tag: &str,
+modem_preset: &str,
+lora_freq: u32,
+expected_preset_slot: &str,
+) {
let mut server = mockito::Server::new_async().await;
let potatomesh_cfg = PotatomeshConfig {
@@ -822,8 +822,10 @@ mod tests {
.txn_counter
.load(std::sync::atomic::Ordering::SeqCst);
-let expected_body = format!("`{expected_tag}[868][MF][TEST]` Ping");
-let expected_formatted = format!("<code>{expected_tag}[868][MF][TEST]</code> Ping");
+let expected_body =
+format!("`{expected_tag}[{lora_freq}][{expected_preset_slot}][TEST]` Ping");
+let expected_formatted =
+format!("<code>{expected_tag}[{lora_freq}][{expected_preset_slot}][TEST]</code> Ping");
let mock_send = server
.mock(
@@ -849,6 +851,8 @@ mod tests {
let mut state = BridgeState::default();
let msg = PotatoMessage {
protocol: protocol.map(str::to_string),
+modem_preset: modem_preset.to_string(),
+lora_freq,
..sample_msg(100)
};
@@ -866,21 +870,33 @@ mod tests {
#[tokio::test]
async fn handle_message_tags_meshtastic_in_body() {
-assert_handle_message_emits_tag(Some("meshtastic"), "[MT]").await;
+assert_handle_message_emits_tag(Some("meshtastic"), "[MT]", "MediumFast", 868, "MF").await;
}
#[tokio::test]
async fn handle_message_defaults_missing_protocol_to_meshtastic_tag() {
-assert_handle_message_emits_tag(None, "[MT]").await;
+assert_handle_message_emits_tag(None, "[MT]", "MediumFast", 868, "MF").await;
}
#[tokio::test]
async fn handle_message_tags_meshcore_in_body() {
-assert_handle_message_emits_tag(Some("meshcore"), "[MC]").await;
+// SF8/BW62/CR8 is EU/UK Narrow → bandwidth-driven short code "Na"
+// → uppercased "NA" in the bracket slot. Exercises the bug fix.
+assert_handle_message_emits_tag(Some("meshcore"), "[MC]", "SF8/BW62/CR8", 868, "NA").await;
}
#[tokio::test]
async fn handle_message_tags_unknown_protocol_as_placeholder() {
-assert_handle_message_emits_tag(Some("reticulum"), "[??]").await;
+assert_handle_message_emits_tag(Some("reticulum"), "[??]", "MediumFast", 868, "MF").await;
}
+#[tokio::test]
+async fn handle_message_treats_zero_lora_freq_as_unknown_freq() {
+// `lora_freq = 0` stands in for "unknown frequency" — the call
+// site collapses it to `None` so frequency-gated named-preset
+// lookups are skipped (matching JS `normalizeFrequency`). The
+// BW-derived short code still resolves, so SF7/BW62/CR5 renders
+// as `[NA]` even without a frequency to disambiguate the region.
+assert_handle_message_emits_tag(Some("meshcore"), "[MC]", "SF7/BW62/CR5", 0, "NA").await;
+}
}
@@ -0,0 +1,776 @@
// Copyright © 2025-26 l5yth & contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Modem preset abbreviation logic, mirroring
//! `web/public/assets/js/app/node-modem-metadata.js` and
//! `web/public/assets/js/app/chat-format.js`.
//!
//! The PotatoMesh ingestor encodes MeshCore radio config as
//! `SF{sf}/BW{bw}/CR{cr}` and Meshtastic radio config as a CamelCase preset
//! name like `MediumFast`. The web dashboard collapses both to a 2-character
//! bracket label (e.g. `[NA]` for EU/UK Narrow, `[MF]` for MediumFast); this
//! module reproduces that mapping in Rust so Matrix-bridged messages render
//! the same label as the dashboard.
/// Named MeshCore SF/BW/CR preset entry.
///
/// Frequency-gated entries (currently only the SF7/BW62/CR5 row) are skipped
/// when `freq_mhz` is `None`, matching the JS `resolveMeshcorePresetDisplay`
/// behavior.
struct NamedPreset {
sf: u8,
bw: u16,
cr: u8,
long_name: &'static str,
/// Inclusive lower bound for `freq_mhz`. `None` means no lower gate.
min_freq_mhz: Option<u16>,
/// Exclusive upper bound for `freq_mhz`. `None` means no upper gate.
max_freq_mhz: Option<u16>,
}
/// Canonical MeshCore preset table, ported from
/// `MESHCORE_NAMED_PRESETS` in `node-modem-metadata.js:84-92`.
const MESHCORE_NAMED_PRESETS: &[NamedPreset] = &[
NamedPreset {
sf: 10,
bw: 250,
cr: 5,
long_name: "AU/NZ Wide",
min_freq_mhz: None,
max_freq_mhz: None,
},
NamedPreset {
sf: 10,
bw: 62,
cr: 5,
long_name: "AU/NZ Narrow",
min_freq_mhz: None,
max_freq_mhz: None,
},
NamedPreset {
sf: 11,
bw: 250,
cr: 5,
long_name: "EU/UK Wide",
min_freq_mhz: None,
max_freq_mhz: None,
},
NamedPreset {
sf: 8,
bw: 62,
cr: 8,
long_name: "EU/UK Narrow",
min_freq_mhz: None,
max_freq_mhz: None,
},
// SF7/BW62/CR5 is region-disambiguated by the 900 MHz threshold.
NamedPreset {
sf: 7,
bw: 62,
cr: 5,
long_name: "CZ/SK Narrow",
min_freq_mhz: None,
max_freq_mhz: Some(900),
},
NamedPreset {
sf: 7,
bw: 62,
cr: 5,
long_name: "US/CA Narrow",
min_freq_mhz: Some(900),
max_freq_mhz: None,
},
];
/// Canonical Meshtastic preset abbreviation table, ported from
/// `PRESET_ABBREVIATIONS` in `chat-format.js:160-170`.
///
/// Keys are already lowercased and stripped of non-alphabetic characters so
/// the lookup is insensitive to delimiters and casing.
const MESHTASTIC_PRESET_ABBREVIATIONS: &[(&str, &str)] = &[
("verylongslow", "VL"),
("longslow", "LS"),
("longmoderate", "LM"),
("longfast", "LF"),
("mediumslow", "MS"),
("mediumfast", "MF"),
("shortslow", "SS"),
("shortfast", "SF"),
("shortturbo", "ST"),
];
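The lookup path is: strip non-letter characters, lowercase, then consult the table. Sketched here in Python for quick reference, using the same entries as the Rust constant above:

```python
PRESET_ABBREVIATIONS = {
    "verylongslow": "VL", "longslow": "LS", "longmoderate": "LM",
    "longfast": "LF", "mediumslow": "MS", "mediumfast": "MF",
    "shortslow": "SS", "shortfast": "SF", "shortturbo": "ST",
}

def abbreviate_meshtastic(preset: str):
    # Insensitive to delimiters and casing, like the Rust/JS lookups.
    token = "".join(c for c in preset if c.isascii() and c.isalpha()).lower()
    return PRESET_ABBREVIATIONS.get(token)

assert abbreviate_meshtastic("MediumFast") == "MF"
assert abbreviate_meshtastic("LONG_FAST") == "LF"
assert abbreviate_meshtastic("Unknown") is None
```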
/// Identity of one parsed token in an SF/BW/CR preset string.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum PresetKey {
Sf,
Bw,
Cr,
}
/// Parsed numeric values extracted from an SF/BW/CR preset string.
///
/// JS uses double-precision floats throughout; mirroring with `f64` avoids
/// surprises for any future fractional MHz value even though the values
/// in play today (62, 62.5, 125, 250, ~868915) are exact in `f32`.
#[derive(Clone, Copy, PartialEq, Debug)]
struct MeshcoreTokens {
sf: f64,
bw: f64,
cr: f64,
}
/// Validate that `s` matches the JS regex `\d+(?:\.\d+)?`.
///
/// Accepts integer or decimal positive numbers — no sign, no exponent, no
/// leading or trailing dot.
fn is_valid_number_token(s: &str) -> bool {
let mut has_digits_before_dot = false;
let mut found_dot = false;
let mut has_digits_after_dot = false;
for c in s.chars() {
if c == '.' {
if found_dot || !has_digits_before_dot {
return false;
}
found_dot = true;
} else if c.is_ascii_digit() {
if found_dot {
has_digits_after_dot = true;
} else {
has_digits_before_dot = true;
}
} else {
return false;
}
}
has_digits_before_dot && (!found_dot || has_digits_after_dot)
}
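The hand-rolled validator is equivalent to anchoring the JS pattern; checked here in Python against the accept/reject cases the doc comment names:

```python
import re

# The JS-side pattern the validator mirrors, anchored to the full token.
NUMBER_TOKEN = re.compile(r"^\d+(?:\.\d+)?$")

assert NUMBER_TOKEN.match("62")
assert NUMBER_TOKEN.match("62.5")
assert not NUMBER_TOKEN.match(".5")    # leading dot rejected
assert not NUMBER_TOKEN.match("5.")    # trailing dot rejected
assert not NUMBER_TOKEN.match("-5")    # no sign
assert not NUMBER_TOKEN.match("1e3")   # no exponent
```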
/// Parse a single `SF{n}`, `BW{n}`, or `CR{n}` token (case-insensitive).
fn parse_token(part: &str) -> Option<(PresetKey, f64)> {
// Both length and char-boundary checks are needed: `len() >= 3` rules
// out short tokens, but a multi-byte first codepoint (e.g. `é12`) has
// `len() == 3` while byte index 2 lands mid-codepoint — so
// `is_char_boundary(2)` is what actually keeps `split_at(2)` from
// panicking on non-ASCII input.
if part.len() < 3 || !part.is_char_boundary(2) {
return None;
}
let (prefix, rest) = part.split_at(2);
let key = if prefix.eq_ignore_ascii_case("SF") {
PresetKey::Sf
} else if prefix.eq_ignore_ascii_case("BW") {
PresetKey::Bw
} else if prefix.eq_ignore_ascii_case("CR") {
PresetKey::Cr
} else {
return None;
};
if !is_valid_number_token(rest) {
return None;
}
let value: f64 = rest.parse().ok()?;
Some((key, value))
}
/// Parse an SF/BW/CR preset string into its three components.
///
/// Tokens may appear in any order; the prefix matching is case-insensitive.
/// Returns `None` for any string that is not a 3-segment SF/BW/CR pattern,
/// matching JS `parseMeshcorePresetTokens`.
fn parse_meshcore_preset_tokens(preset: &str) -> Option<MeshcoreTokens> {
let trimmed = preset.trim();
if trimmed.is_empty() {
return None;
}
let parts: Vec<&str> = trimmed.split('/').collect();
if parts.len() != 3 {
return None;
}
let mut sf: Option<f64> = None;
let mut bw: Option<f64> = None;
let mut cr: Option<f64> = None;
for part in parts {
let (key, value) = parse_token(part)?;
match key {
PresetKey::Sf => {
if sf.is_some() {
return None;
}
sf = Some(value);
}
PresetKey::Bw => {
if bw.is_some() {
return None;
}
bw = Some(value);
}
PresetKey::Cr => {
if cr.is_some() {
return None;
}
cr = Some(value);
}
}
}
Some(MeshcoreTokens {
sf: sf?,
bw: bw?,
cr: cr?,
})
}
/// Map a LoRa bandwidth to the canonical 2-character short code.
///
/// Mirrors `bwToShortCode` in `node-modem-metadata.js:132-138` — `62` and
/// `62.5` collapse to `Na`, `125` to `St`, `250` to `Wi`. Any other value
/// returns `None`.
///
/// The `f64 ==` comparisons rely on each literal having an exact double
/// representation; `62`, `62.5`, `125`, and `250` all do, and tokens
/// reach this function via `f64::from_str` of plain decimal strings, so
/// no rounding is introduced upstream.
fn bw_to_short_code(bw: f64) -> Option<&'static str> {
if bw == 62.0 || bw == 62.5 {
Some("Na")
} else if bw == 125.0 {
Some("St")
} else if bw == 250.0 {
Some("Wi")
} else {
None
}
}
/// Format a numeric token for display string construction.
///
/// Mirrors JS coercion: `62` renders as `"62"`, `62.5` as `"62.5"`.
fn format_number(n: f64) -> String {
if n.fract() == 0.0 {
format!("{}", n as i64)
} else {
format!("{}", n)
}
}
/// Display metadata returned by [`resolve_meshcore_preset_display`].
///
/// `long_name` and `display_string` are not consumed by the Matrix bridge
/// today — only `short_code` feeds the bracket render. They are retained
/// (with `#[allow(dead_code)]`) so the port stays line-for-line auditable
/// against the JS source and so a future caller (e.g. a tooltip surface)
/// can read them without touching the parsing path again.
#[derive(Clone, PartialEq, Debug)]
struct MeshcoreDisplay {
/// Long human-readable name (e.g. "EU/UK Wide") when the SF/BW/CR
/// triple matches a named preset, else `None`.
#[allow(dead_code)]
long_name: Option<&'static str>,
/// 2-character short code derived from BW alone (e.g. "Na", "St",
/// "Wi"), or `None` when the BW is unrecognized.
short_code: Option<&'static str>,
/// Human-readable display string — the long name when matched, else
/// `BW{bw}/SF{sf}/CR{cr}`.
#[allow(dead_code)]
display_string: String,
}
/// Resolve a MeshCore SF/BW/CR preset into display metadata, or `None`
/// when the input is not an SF/BW/CR string.
///
/// Mirrors `resolveMeshcorePresetDisplay` in `node-modem-metadata.js:161-190`.
fn resolve_meshcore_preset_display(preset: &str, freq_mhz: Option<f64>) -> Option<MeshcoreDisplay> {
let tokens = parse_meshcore_preset_tokens(preset)?;
let short_code = bw_to_short_code(tokens.bw);
let matched = MESHCORE_NAMED_PRESETS.iter().find(|entry| {
if (entry.sf as f64) != tokens.sf {
return false;
}
if (entry.bw as f64) != tokens.bw {
return false;
}
if (entry.cr as f64) != tokens.cr {
return false;
}
if let Some(max) = entry.max_freq_mhz {
match freq_mhz {
Some(f) if f < max as f64 => {}
_ => return false,
}
}
if let Some(min) = entry.min_freq_mhz {
match freq_mhz {
Some(f) if f >= min as f64 => {}
_ => return false,
}
}
true
});
if let Some(entry) = matched {
return Some(MeshcoreDisplay {
long_name: Some(entry.long_name),
short_code,
display_string: entry.long_name.to_string(),
});
}
Some(MeshcoreDisplay {
long_name: None,
short_code,
display_string: format!(
"BW{}/SF{}/CR{}",
format_number(tokens.bw),
format_number(tokens.sf),
format_number(tokens.cr),
),
})
}
/// Lowercase a Meshtastic preset string for table lookup.
///
/// Mirrors `preset.replace(/[^A-Za-z]/g, '').toLowerCase()` in
/// `chat-format.js:296`.
fn normalize_meshtastic_token(preset: &str) -> String {
preset
.chars()
.filter(|c| c.is_ascii_alphabetic())
.flat_map(|c| c.to_lowercase())
.collect()
}
/// Generate the fallback initials for a preset that did not hit either
/// lookup table.
///
/// Mirrors `derivePresetInitials` in `chat-format.js:309-336`.
fn derive_preset_initials(preset: &str) -> Option<String> {
if preset.is_empty() {
return None;
}
// Insert a space between (lowercase | digit) and uppercase to split
// CamelCase boundaries — mirrors `/([a-z0-9])([A-Z])/g`.
let mut spaced = String::with_capacity(preset.len() + 4);
let mut prev: Option<char> = None;
for c in preset.chars() {
if let Some(p) = prev {
if (p.is_ascii_lowercase() || p.is_ascii_digit()) && c.is_ascii_uppercase() {
spaced.push(' ');
}
}
spaced.push(c);
prev = Some(c);
}
let tokens: Vec<String> = spaced
.split(|c: char| c.is_whitespace() || c == '_' || c == '-')
.map(|part| {
part.chars()
.filter(|c| c.is_ascii_alphabetic())
.collect::<String>()
})
.filter(|s| !s.is_empty())
.collect();
if tokens.is_empty() {
return None;
}
if tokens.len() == 1 {
// Tokens are non-empty after the alphabetic-only filter, so
// `upper` always has ≥ 1 character. The branch reduces to "≥ 2
// → first two chars" vs. "exactly 1 → `X?`" — no zero-length arm.
let upper = tokens[0].to_ascii_uppercase();
if upper.chars().count() >= 2 {
return Some(upper.chars().take(2).collect());
}
return Some(format!("{}?", upper));
}
let first = tokens[0].chars().next()?.to_ascii_uppercase();
let second = tokens[1].chars().next()?.to_ascii_uppercase();
Some(format!("{}{}", first, second))
}
/// Produce a 2-character abbreviation for any modem preset string.
///
/// MeshCore SF/BW/CR presets resolve via [`resolve_meshcore_preset_display`]
/// (taking precedence over the Meshtastic table). Meshtastic named presets
/// hit [`MESHTASTIC_PRESET_ABBREVIATIONS`] after delimiter / casing
/// normalization. Anything else falls through to [`derive_preset_initials`].
///
/// Returns `None` when the preset is empty, when a MeshCore SF/BW/CR
/// preset carries an unrecognized bandwidth (so no short code exists),
/// or when no abbreviation of at least one character can be derived.
///
/// Mirrors `abbreviatePreset` in `chat-format.js:287-301`.
pub fn abbreviate_preset(preset: &str, freq_mhz: Option<f64>) -> Option<String> {
let trimmed = preset.trim();
if trimmed.is_empty() {
return None;
}
if let Some(display) = resolve_meshcore_preset_display(trimmed, freq_mhz) {
return display.short_code.map(str::to_string);
}
let token = normalize_meshtastic_token(trimmed);
if !token.is_empty() {
for (key, value) in MESHTASTIC_PRESET_ABBREVIATIONS {
if token == *key {
return Some((*value).to_string());
}
}
}
derive_preset_initials(trimmed)
}
/// Format an abbreviation into the 2-character bracket slot used by both
/// the dashboard and the Matrix bridge.
///
/// Trims the value, uppercases it, and truncates to 2 characters. Returns
/// `"??"` when the value is missing or empty so the column width remains
/// consistent.
///
/// Mirrors `normalizePresetSlot` in `chat-format.js:344-350`. Where the JS
/// version emits `&nbsp;&nbsp;` for the empty case (HTML context), this Rust
/// port emits the literal placeholder `"??"` because Matrix message bodies
/// are plain text plus a `<code>…</code>` HTML wrapper, not raw HTML. `"??"`
/// also matches the existing `protocol_tag` placeholder convention.
pub fn normalize_preset_slot(value: Option<&str>) -> String {
let raw = value.unwrap_or("").trim();
if raw.is_empty() {
return "??".to_string();
}
let upper: String = raw.chars().flat_map(|c| c.to_uppercase()).collect();
if upper.is_empty() {
return "??".to_string();
}
upper.chars().take(2).collect()
}
#[cfg(test)]
mod tests {
use super::*;
// ----- is_valid_number_token --------------------------------------------
#[test]
fn number_token_accepts_integers_and_decimals() {
assert!(is_valid_number_token("0"));
assert!(is_valid_number_token("125"));
assert!(is_valid_number_token("62.5"));
}
#[test]
fn number_token_rejects_signs_exponents_and_dotted_edges() {
assert!(!is_valid_number_token(""));
assert!(!is_valid_number_token("."));
assert!(!is_valid_number_token(".5"));
assert!(!is_valid_number_token("5."));
assert!(!is_valid_number_token("+5"));
assert!(!is_valid_number_token("-5"));
assert!(!is_valid_number_token("1e3"));
assert!(!is_valid_number_token("1.2.3"));
}
// ----- parse_token -------------------------------------------------------
#[test]
fn parse_token_handles_each_prefix_case_insensitively() {
assert_eq!(parse_token("SF12"), Some((PresetKey::Sf, 12.0)));
assert_eq!(parse_token("sf12"), Some((PresetKey::Sf, 12.0)));
assert_eq!(parse_token("BW125"), Some((PresetKey::Bw, 125.0)));
assert_eq!(parse_token("bw62.5"), Some((PresetKey::Bw, 62.5)));
assert_eq!(parse_token("CR5"), Some((PresetKey::Cr, 5.0)));
}
#[test]
fn parse_token_rejects_invalid_inputs() {
assert_eq!(parse_token(""), None);
assert_eq!(parse_token("XX12"), None);
assert_eq!(parse_token("SF"), None);
assert_eq!(parse_token("SFabc"), None);
// Multi-byte UTF-8 input must be rejected cleanly — a byte-indexed
// prefix check must not panic or mis-slice on the two-byte 'é'.
assert_eq!(parse_token("é12"), None);
}
// ----- parse_meshcore_preset_tokens --------------------------------------
#[test]
fn parse_preset_tokens_accepts_any_order_and_case() {
let parsed = parse_meshcore_preset_tokens("SF12/BW125/CR5").unwrap();
assert_eq!(parsed.sf, 12.0);
assert_eq!(parsed.bw, 125.0);
assert_eq!(parsed.cr, 5.0);
let reordered = parse_meshcore_preset_tokens("cr5/sf7/bw62.5").unwrap();
assert_eq!(reordered.sf, 7.0);
assert_eq!(reordered.bw, 62.5);
assert_eq!(reordered.cr, 5.0);
}
#[test]
fn parse_preset_tokens_rejects_non_sf_bw_cr_inputs() {
assert!(parse_meshcore_preset_tokens("MediumFast").is_none());
assert!(parse_meshcore_preset_tokens("").is_none());
assert!(parse_meshcore_preset_tokens("SF12/BW125").is_none());
assert!(parse_meshcore_preset_tokens("SF12/BW125/CR5/extra").is_none());
// Duplicate token rejected to avoid silently dropping ambiguous input.
assert!(parse_meshcore_preset_tokens("SF12/SF12/CR5").is_none());
}
// ----- bw_to_short_code --------------------------------------------------
#[test]
fn bw_short_code_matches_canonical_table() {
assert_eq!(bw_to_short_code(62.0), Some("Na"));
assert_eq!(bw_to_short_code(62.5), Some("Na"));
assert_eq!(bw_to_short_code(125.0), Some("St"));
assert_eq!(bw_to_short_code(250.0), Some("Wi"));
assert_eq!(bw_to_short_code(500.0), None);
assert_eq!(bw_to_short_code(31.0), None);
}
// ----- format_number -----------------------------------------------------
#[test]
fn format_number_drops_decimal_for_integers() {
assert_eq!(format_number(0.0), "0");
assert_eq!(format_number(62.0), "62");
assert_eq!(format_number(125.0), "125");
}
#[test]
fn format_number_keeps_decimal_for_fractions() {
assert_eq!(format_number(0.5), "0.5");
assert_eq!(format_number(62.5), "62.5");
}
// ----- resolve_meshcore_preset_display -----------------------------------
#[test]
fn resolve_returns_none_for_non_sf_bw_cr_input() {
assert!(resolve_meshcore_preset_display("MediumFast", None).is_none());
assert!(resolve_meshcore_preset_display("", None).is_none());
}
#[test]
fn resolve_au_nz_wide_at_915mhz() {
let got = resolve_meshcore_preset_display("SF10/BW250/CR5", Some(915.0)).unwrap();
assert_eq!(got.long_name, Some("AU/NZ Wide"));
assert_eq!(got.short_code, Some("Wi"));
assert_eq!(got.display_string, "AU/NZ Wide");
}
#[test]
fn resolve_au_nz_narrow_at_915mhz() {
let got = resolve_meshcore_preset_display("SF10/BW62/CR5", Some(915.0)).unwrap();
assert_eq!(got.long_name, Some("AU/NZ Narrow"));
assert_eq!(got.short_code, Some("Na"));
assert_eq!(got.display_string, "AU/NZ Narrow");
}
#[test]
fn resolve_eu_uk_wide_at_868mhz() {
let got = resolve_meshcore_preset_display("SF11/BW250/CR5", Some(868.0)).unwrap();
assert_eq!(got.long_name, Some("EU/UK Wide"));
assert_eq!(got.short_code, Some("Wi"));
assert_eq!(got.display_string, "EU/UK Wide");
}
#[test]
fn resolve_eu_uk_narrow_at_868mhz() {
let got = resolve_meshcore_preset_display("SF8/BW62/CR8", Some(868.0)).unwrap();
assert_eq!(got.long_name, Some("EU/UK Narrow"));
assert_eq!(got.short_code, Some("Na"));
assert_eq!(got.display_string, "EU/UK Narrow");
}
#[test]
fn resolve_cz_sk_narrow_below_900mhz() {
let got = resolve_meshcore_preset_display("SF7/BW62/CR5", Some(868.0)).unwrap();
assert_eq!(got.long_name, Some("CZ/SK Narrow"));
assert_eq!(got.short_code, Some("Na"));
assert_eq!(got.display_string, "CZ/SK Narrow");
}
#[test]
fn resolve_us_ca_narrow_at_or_above_900mhz() {
let got915 = resolve_meshcore_preset_display("SF7/BW62/CR5", Some(915.0)).unwrap();
assert_eq!(got915.long_name, Some("US/CA Narrow"));
let got_boundary = resolve_meshcore_preset_display("SF7/BW62/CR5", Some(900.0)).unwrap();
assert_eq!(got_boundary.long_name, Some("US/CA Narrow"));
}
#[test]
fn resolve_unknown_freq_skips_gated_named_match() {
let got = resolve_meshcore_preset_display("SF7/BW62/CR5", None).unwrap();
assert_eq!(got.long_name, None);
assert_eq!(got.short_code, Some("Na"));
assert_eq!(got.display_string, "BW62/SF7/CR5");
}
#[test]
fn resolve_unknown_bw_has_no_short_code() {
let got = resolve_meshcore_preset_display("SF12/BW500/CR7", None).unwrap();
assert_eq!(got.long_name, None);
assert_eq!(got.short_code, None);
assert_eq!(got.display_string, "BW500/SF12/CR7");
}
#[test]
fn resolve_125khz_falls_back_to_st() {
let got = resolve_meshcore_preset_display("SF9/BW125/CR6", None).unwrap();
assert_eq!(got.long_name, None);
assert_eq!(got.short_code, Some("St"));
assert_eq!(got.display_string, "BW125/SF9/CR6");
}
// ----- abbreviate_preset (MeshCore path) ---------------------------------
#[test]
fn abbreviate_returns_meshcore_short_code_for_named_presets() {
assert_eq!(
abbreviate_preset("SF11/BW250/CR5", Some(868.0)).as_deref(),
Some("Wi")
);
assert_eq!(
abbreviate_preset("SF8/BW62/CR8", Some(868.0)).as_deref(),
Some("Na")
);
assert_eq!(
abbreviate_preset("SF10/BW250/CR5", Some(915.0)).as_deref(),
Some("Wi")
);
}
#[test]
fn abbreviate_returns_meshcore_short_code_via_bw_fallback() {
assert_eq!(
abbreviate_preset("SF9/BW125/CR6", None).as_deref(),
Some("St")
);
assert_eq!(
abbreviate_preset("SF7/BW62/CR5", None).as_deref(),
Some("Na")
);
}
#[test]
fn abbreviate_returns_none_when_meshcore_bw_unknown() {
assert_eq!(abbreviate_preset("SF12/BW500/CR7", None), None);
}
// ----- abbreviate_preset (Meshtastic path) -------------------------------
#[test]
fn abbreviate_resolves_every_named_meshtastic_preset() {
let cases = [
("VeryLongSlow", "VL"),
("LongSlow", "LS"),
("LongModerate", "LM"),
("LongFast", "LF"),
("MediumSlow", "MS"),
("MediumFast", "MF"),
("ShortSlow", "SS"),
("ShortFast", "SF"),
("ShortTurbo", "ST"),
];
for (input, expected) in cases {
assert_eq!(
abbreviate_preset(input, None).as_deref(),
Some(expected),
"input={input}"
);
}
}
#[test]
fn abbreviate_is_insensitive_to_delimiters_and_case() {
assert_eq!(abbreviate_preset("LONG_FAST", None).as_deref(), Some("LF"));
assert_eq!(abbreviate_preset("long-fast", None).as_deref(), Some("LF"));
assert_eq!(
abbreviate_preset("Medium_Fast", None).as_deref(),
Some("MF")
);
// Whitespace is stripped along with other non-alphabetic chars,
// so a human-typed `"Medium Fast"` resolves the same as the
// CamelCase form.
assert_eq!(
abbreviate_preset("Medium Fast", None).as_deref(),
Some("MF")
);
}
// ----- abbreviate_preset (initials fallback) -----------------------------
#[test]
fn abbreviate_falls_back_to_initials_for_unmapped_camelcase() {
assert_eq!(
abbreviate_preset("CustomPreset", None).as_deref(),
Some("CP")
);
}
#[test]
fn abbreviate_handles_single_word_and_letter_inputs() {
assert_eq!(abbreviate_preset("Foo", None).as_deref(), Some("FO"));
assert_eq!(abbreviate_preset("X", None).as_deref(), Some("X?"));
}
#[test]
fn abbreviate_returns_none_for_blank_or_punctuation_only() {
assert_eq!(abbreviate_preset("", None), None);
assert_eq!(abbreviate_preset(" ", None), None);
assert_eq!(abbreviate_preset("___", None), None);
}
// ----- normalize_preset_slot --------------------------------------------
#[test]
fn normalize_slot_uppercases_and_truncates() {
assert_eq!(normalize_preset_slot(Some("Na")), "NA");
assert_eq!(normalize_preset_slot(Some("MF")), "MF");
assert_eq!(normalize_preset_slot(Some("verylong")), "VE");
assert_eq!(normalize_preset_slot(Some(" st ")), "ST");
}
#[test]
fn normalize_slot_emits_placeholder_for_missing_or_empty() {
assert_eq!(normalize_preset_slot(None), "??");
assert_eq!(normalize_preset_slot(Some("")), "??");
assert_eq!(normalize_preset_slot(Some(" ")), "??");
}
// ----- derive_preset_initials -------------------------------------------
#[test]
fn derive_initials_handles_token_count_branches() {
assert_eq!(derive_preset_initials(""), None);
assert_eq!(derive_preset_initials("___"), None);
assert_eq!(derive_preset_initials("Foo"), Some("FO".to_string()));
assert_eq!(derive_preset_initials("X"), Some("X?".to_string()));
assert_eq!(
derive_preset_initials("CustomPreset"),
Some("CP".to_string())
);
assert_eq!(
derive_preset_initials("Three Word Name"),
Some("TW".to_string())
);
}
}
+211
@@ -191,6 +191,15 @@ class TestCandidateNodeId:
result = ifaces._candidate_node_id({"items": [{"fromId": "!aabbccdd"}]})
assert result == "!aabbccdd"
def test_unknown_section_value_scanned(self):
"""Mapping values under arbitrary keys are recursively scanned.
Exercises the ``else`` branch of the values-loop (non-list/tuple value)
when the parent key is not one of the recognised section names.
"""
result = ifaces._candidate_node_id({"misc_section": {"fromId": "!aabbccdd"}})
assert result == "!aabbccdd"
# ---------------------------------------------------------------------------
# _has_field
@@ -448,6 +457,61 @@ class TestRegionFrequency:
)
assert ifaces._region_frequency(msg) == 999
def test_enum_name_without_any_digits_returns_name(self):
"""Enum name with no extractable digits is returned as-is."""
enum_val = SimpleNamespace(name="UNSET")
enum_type = SimpleNamespace(values_by_number={0: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc, override_frequency=None, region=0)
assert ifaces._region_frequency(msg) == "UNSET"
# ---------------------------------------------------------------------------
# _resolve_lora_message
# ---------------------------------------------------------------------------
class TestResolveLoraMessage:
"""Tests for :func:`interfaces._resolve_lora_message`."""
def test_none_returns_none(self):
"""A ``None`` ``local_config`` short-circuits."""
assert ifaces._resolve_lora_message(None) is None
def test_radio_section_lora_via_has_field(self):
"""Resolves ``radio.lora`` when exposed via ``HasField``."""
radio_section = SimpleNamespace(
HasField=lambda name: name == "lora", lora="radio_lora"
)
local_config = SimpleNamespace(HasField=lambda name: False, radio=radio_section)
assert ifaces._resolve_lora_message(local_config) == "radio_lora"
def test_radio_section_lora_via_hasattr(self):
"""Resolves ``radio.lora`` via ``hasattr`` when ``HasField`` is silent.
The ``radio_section`` exposes ``HasField`` returning ``False`` so
``_has_field`` produces ``False`` for ``"lora"``, forcing the
``hasattr`` fallback path to be taken before returning the value.
"""
radio_section = SimpleNamespace(
HasField=lambda name: False, lora="radio_lora_attr"
)
local_config = SimpleNamespace(HasField=lambda name: False, radio=radio_section)
assert ifaces._resolve_lora_message(local_config) == "radio_lora_attr"
def test_local_config_lora_via_hasattr_only(self):
"""Resolves ``local_config.lora`` via ``hasattr`` when no ``HasField`` match."""
local_config = SimpleNamespace(
HasField=lambda name: False, lora="bare_lora", radio=None
)
assert ifaces._resolve_lora_message(local_config) == "bare_lora"
def test_no_lora_anywhere_returns_none(self):
"""No ``lora`` attribute on either section returns ``None``."""
local_config = SimpleNamespace(HasField=lambda name: False, radio=None)
assert ifaces._resolve_lora_message(local_config) is None
# ---------------------------------------------------------------------------
# _camelcase_enum_name
@@ -477,6 +541,10 @@ class TestCamelcaseEnumName:
"""Digits in the name are preserved."""
assert ifaces._camelcase_enum_name("BAND_915") == "Band915"
def test_only_separators_returns_none(self):
"""A string consisting only of separators yields no usable parts."""
assert ifaces._camelcase_enum_name("___") is None
# ---------------------------------------------------------------------------
# _modem_preset
@@ -523,6 +591,30 @@ class TestModemPreset:
msg = SimpleNamespace(DESCRIPTOR=desc, preset=1)
assert ifaces._modem_preset(msg) == "ShortFast"
def test_attr_preset_fallback_when_no_modem_preset(self):
"""Falls back to ``preset`` attribute when ``modem_preset`` is absent.
Exercises the ``hasattr(lora_message, 'preset')`` branch when the
descriptor lacks both fields and the object only exposes ``preset``.
"""
class _PresetOnly:
DESCRIPTOR = None
preset = "LONG_FAST"
assert ifaces._modem_preset(_PresetOnly()) == "LongFast"
def test_unparseable_preset_value_returns_none(self):
"""A non-string, non-enum-resolvable preset value returns None."""
# Field present in descriptor but enum_type lookup yields a non-string
# (e.g., a numeric mapping with no name). ``preset_value`` is also a
# plain int (not a string), so neither name nor string fallback applies.
enum_type = SimpleNamespace(values_by_number={})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"modem_preset": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc, modem_preset=99)
assert ifaces._modem_preset(msg) is None
# ---------------------------------------------------------------------------
# _ensure_radio_metadata caching
@@ -540,6 +632,18 @@ class TestEnsureRadioMetadata:
assert config.LORA_FREQ == original_freq
assert config.MODEM_PRESET == original_preset
def test_unresolvable_lora_message_returns_without_writing(self, monkeypatch):
"""When ``_resolve_lora_message`` returns ``None``, config is left alone."""
monkeypatch.setattr(config, "LORA_FREQ", None)
monkeypatch.setattr(config, "MODEM_PRESET", None)
# ``localConfig`` exists but has no lora/radio, so resolve returns None.
local_config = SimpleNamespace(HasField=lambda name: False, radio=None)
local_node = SimpleNamespace(localConfig=local_config)
iface = SimpleNamespace(localNode=local_node, waitForConfig=lambda: None)
ifaces._ensure_radio_metadata(iface)
assert config.LORA_FREQ is None
assert config.MODEM_PRESET is None
def test_sets_lora_freq_when_not_cached(self, monkeypatch):
"""Populates LORA_FREQ from interface when not yet configured."""
monkeypatch.setattr(config, "LORA_FREQ", None)
@@ -577,3 +681,110 @@ class TestEnsureRadioMetadata:
ifaces._ensure_radio_metadata(iface)
assert config.LORA_FREQ == 433
# ---------------------------------------------------------------------------
# _extract_host_node_id
# ---------------------------------------------------------------------------
class TestExtractHostNodeId:
"""Tests for :func:`interfaces._extract_host_node_id`."""
def test_none_iface_returns_none(self):
"""A ``None`` interface short-circuits without any attribute access."""
assert ifaces._extract_host_node_id(None) is None
# ---------------------------------------------------------------------------
# _ensure_channel_metadata
# ---------------------------------------------------------------------------
class TestEnsureChannelMetadata:
"""Tests for :func:`interfaces._ensure_channel_metadata`."""
def test_none_iface_is_noop(self, monkeypatch):
"""A ``None`` interface short-circuits without invoking ``capture_from_interface``."""
import data.mesh_ingestor.channels as _channels
called: list = []
monkeypatch.setattr(
_channels, "capture_from_interface", lambda iface: called.append(iface)
)
ifaces._ensure_channel_metadata(None)
assert called == []
def test_calls_capture_from_interface(self, monkeypatch):
"""A non-None interface delegates to ``channels.capture_from_interface``."""
import data.mesh_ingestor.channels as _channels
seen: list = []
monkeypatch.setattr(
_channels, "capture_from_interface", lambda iface: seen.append(iface)
)
sentinel = SimpleNamespace(myInfo={})
ifaces._ensure_channel_metadata(sentinel)
assert seen == [sentinel]
# ---------------------------------------------------------------------------
# _normalise_nodeinfo_packet
# ---------------------------------------------------------------------------
class TestNormaliseNodeinfoPacket:
"""Tests for :func:`interfaces._normalise_nodeinfo_packet`."""
def test_non_mapping_returns_none(self):
"""Inputs that ``_ensure_mapping`` cannot coerce return ``None``."""
# int/float values are explicitly rejected by ``_ensure_mapping``.
assert ifaces._normalise_nodeinfo_packet(42) is None
def test_mapping_with_node_id_injects_id_field(self):
"""A valid mapping has the canonical id injected when inferable."""
result = ifaces._normalise_nodeinfo_packet({"fromId": "!aabbccdd"})
assert result is not None
assert result["id"] == "!aabbccdd"
def test_mapping_keeps_existing_id_when_consistent(self):
"""A pre-existing matching ``id`` is left untouched."""
result = ifaces._normalise_nodeinfo_packet(
{"id": "!aabbccdd", "fromId": "!aabbccdd"}
)
assert result == {"id": "!aabbccdd", "fromId": "!aabbccdd"}
def test_dict_conversion_fallback(self):
"""Mapping whose ``dict(...)`` raises falls back to comprehension copy.
Exercises the inner ``except`` branch that copies via
``{key: mapping[key] for key in mapping}`` when ``dict(mapping)`` fails.
Uses a Mapping subclass whose first ``__iter__`` call raises so the
``dict()`` constructor errors but the subsequent comprehension reads
via the same iterator and succeeds.
"""
from collections.abc import Mapping as _Mapping
class _RaisingDictMapping(_Mapping):
def __init__(self, payload: dict) -> None:
self._payload = payload
self._first_iter_done = False
def __iter__(self):
if not self._first_iter_done:
self._first_iter_done = True
raise RuntimeError("simulated iteration failure")
yield from self._payload
def __getitem__(self, key):
return self._payload[key]
def __len__(self):
return len(self._payload)
result = ifaces._normalise_nodeinfo_packet(
_RaisingDictMapping({"fromId": "!aabbccdd"})
)
assert result is not None
assert result["fromId"] == "!aabbccdd"
assert result["id"] == "!aabbccdd"
+164
@@ -1195,6 +1195,40 @@ def test_interface_close_is_idempotent():
iface.close() # must not raise
def test_interface_close_swallows_runtime_error_from_loop():
"""close() must swallow RuntimeError from loop.call_soon_threadsafe.
A race between the ``loop.is_closed()`` guard and the ``call_soon_threadsafe``
invocation can leave the loop closed by the time we schedule the stop, in
which case asyncio raises ``RuntimeError("Event loop is closed")``. ``close()``
must absorb that error so callers can treat shutdown as best-effort.
"""
iface = _MeshcoreInterface(target=None)
class _RacingLoop:
def is_closed(self):
return False
def call_soon_threadsafe(self, *_a, **_k):
raise RuntimeError("Event loop is closed")
def stop(self): # accessed as ``loop.stop`` arg in the no-stop_event branch
return None
iface._loop = _RacingLoop()
iface._stop_event = types.SimpleNamespace(set=lambda: None)
iface.close() # must not raise
assert iface.isConnected is False
# Same code path with stop_event=None exercises the loop.stop() branch.
iface2 = _MeshcoreInterface(target=None)
iface2._loop = _RacingLoop()
iface2._stop_event = None
iface2.close() # must not raise
assert iface2.isConnected is False
# ---------------------------------------------------------------------------
# _derive_message_id
# ---------------------------------------------------------------------------
@@ -1707,6 +1741,33 @@ def test_on_channel_msg_skips_empty_text(monkeypatch):
assert captured == []
def test_on_contact_msg_skips_when_text_or_sender_ts_missing(monkeypatch):
"""on_contact_msg must early-return when text is empty or sender_ts is None.
Mirrors :func:`test_on_channel_msg_skips_empty_text` for direct messages so
that a malformed CONTACT_MSG_RECV event cannot enqueue an empty packet.
"""
import asyncio
import data.mesh_ingestor as _mesh_pkg
import data.mesh_ingestor.protocols.meshcore as _mod
captured: list = []
stub = _make_stub_handlers_module()
stub.store_packet_dict = lambda pkt: captured.append(pkt)
monkeypatch.setattr(_mod.config, "_debug_log", lambda *_a, **_k: None)
monkeypatch.setattr(_mesh_pkg, "handlers", stub)
iface = _MeshcoreInterface(target=None)
iface.host_node_id = "!deadbeef"
hmap = _make_event_handlers(iface, "/dev/ttyUSB0")
asyncio.run(hmap["CONTACT_MSG_RECV"](_FakeEvt({"sender_timestamp": 1, "text": ""})))
asyncio.run(hmap["CONTACT_MSG_RECV"](_FakeEvt({"text": "hi"}))) # missing ts
asyncio.run(hmap["CONTACT_MSG_RECV"](_FakeEvt({}))) # both missing
assert captured == []
@pytest.mark.filterwarnings("ignore::pytest.PytestUnhandledThreadExceptionWarning")
def test_connect_raises_on_timeout(monkeypatch):
"""connect() raises ConnectionError when connected_event is never signalled.
@@ -2049,6 +2110,68 @@ def test_process_self_info_queues_ingestor_heartbeat_before_upsert(monkeypatch):
], "Ingestor heartbeat must be queued before node upsert"
def test_process_self_info_queues_position_when_advertised(monkeypatch):
"""_process_self_info must POST to /api/positions when adv_lat/adv_lon are set.
Covers the host-node position branch: when the connected radio reports a
GPS-fixed advertisement in its SELF_INFO, the host's own position must be
forwarded to the web backend exactly once per heartbeat.
"""
import data.mesh_ingestor.protocols.meshcore as _mod
monkeypatch.setattr(_mod.config, "_debug_log", lambda *_a, **_k: None)
monkeypatch.setattr(
_mod._ingestors, "queue_ingestor_heartbeat", lambda *_a, **_k: True
)
posted: list = []
monkeypatch.setattr(
_mod._queue,
"_queue_post_json",
lambda route, payload, **_k: posted.append((route, payload)),
)
stub = _make_stub_handlers_module()
stub.host_node_id = lambda: "!ingestor1"
payload = {
"public_key": "aabbccdd" + "00" * 28,
"name": "Host",
"adv_lat": 51.5,
"adv_lon": -0.1,
}
_process_self_info(payload, _MeshcoreInterface(target=None), stub)
position_posts = [p for r, p in posted if r == "/api/positions"]
assert len(position_posts) == 1
assert position_posts[0]["node_id"] == "!aabbccdd"
assert position_posts[0]["latitude"] == pytest.approx(51.5)
assert position_posts[0]["longitude"] == pytest.approx(-0.1)
assert position_posts[0]["ingestor"] == "!ingestor1"
def test_process_self_info_skips_position_when_latlon_absent(monkeypatch):
"""_process_self_info must not POST to /api/positions when lat/lon are absent."""
import data.mesh_ingestor.protocols.meshcore as _mod
monkeypatch.setattr(_mod.config, "_debug_log", lambda *_a, **_k: None)
monkeypatch.setattr(
_mod._ingestors, "queue_ingestor_heartbeat", lambda *_a, **_k: True
)
posted: list = []
monkeypatch.setattr(
_mod._queue,
"_queue_post_json",
lambda route, payload, **_k: posted.append(route),
)
payload = {"public_key": "aabbccdd" + "00" * 28, "name": "Host"}
_process_self_info(
payload, _MeshcoreInterface(target=None), _make_stub_handlers_module()
)
assert "/api/positions" not in posted
# ---------------------------------------------------------------------------
# _derive_modem_preset
# ---------------------------------------------------------------------------
@@ -2987,6 +3110,47 @@ def test_run_meshcore_ensure_contacts_failure_continues(monkeypatch):
assert "warning" in logged
def test_run_meshcore_ensure_channel_names_failure_continues(monkeypatch):
"""_ensure_channel_names raising must log a warning but not abort the connection.
The channel-name probe is best-effort: even when its internal try/except is
bypassed (e.g. a programming error inside ``_ensure_channel_names`` itself
or an exception from a deferred import), the outer ``_run_meshcore`` loop
must catch it so the connection stays alive.
"""
import asyncio
import data.mesh_ingestor.protocols.meshcore as _mod
import data.mesh_ingestor.protocols.meshcore.runner as _runner_mod
logged: list = []
def _capture(*_a, severity=None, **_k):
logged.append(severity)
monkeypatch.setattr(_mod.config, "_debug_log", _capture)
async def _boom(_mc):
raise RuntimeError("synthetic channel probe failure")
# Patch the binding inside runner.py — the module-level ``from .channels
# import _ensure_channel_names`` resolves the name at import time, so
# patching the package attribute alone would not reach the runner.
monkeypatch.setattr(_runner_mod, "_ensure_channel_names", _boom)
fake_mod = _make_fake_meshcore_mod()
_patch_meshcore_mod(monkeypatch, _mod, fake_mod)
iface = _MeshcoreInterface(target=None)
connected_event, error_holder = asyncio.run(
_run_until_connected(iface, "/dev/ttyUSB0", fake_mod, _mod)
)
assert connected_event.is_set()
assert error_holder[0] is None
assert "warning" in logged
def test_run_meshcore_disconnect_exception_suppressed(monkeypatch):
"""disconnect() raising in the finally block must be silently swallowed."""
import asyncio
+17 -2
@@ -48,12 +48,26 @@ RUN python3 -m venv /opt/meshtastic-venv && \
# Production stage
FROM ruby:3.3-alpine AS production
# Install runtime dependencies
# Build-time toggle controlling whether Chromium is bundled into the image
# for runtime Open Graph preview rendering. Operators on size-constrained
# hosts can build with `--build-arg WITH_OG_IMAGE=0` to skip Chromium and
# its font/library payload (~150 MB). The web app falls back to the
# packaged default PNG when Chromium is missing, and operators can point
# `OG_IMAGE_URL` at a CDN-hosted preview instead.
ARG WITH_OG_IMAGE=1
ENV WITH_OG_IMAGE=${WITH_OG_IMAGE}
# Install runtime dependencies. Chromium powers the runtime Open Graph
# preview generator; the accompanying font and library packages are the
# minimum set required to render the dashboard headlessly on Alpine.
RUN apk add --no-cache \
python3 \
sqlite \
tzdata \
curl
curl \
&& if [ "$WITH_OG_IMAGE" = "1" ]; then \
apk add --no-cache chromium nss freetype harfbuzz ttf-freefont; \
fi
# Create non-root user
RUN addgroup -g 1000 -S potatomesh && \
@@ -107,6 +121,7 @@ ENV RACK_ENV=production \
MAP_ZOOM="" \
MAX_DISTANCE=42 \
CONTACT_LINK="#potatomesh:dod.ngo" \
FERRUM_BROWSER_PATH=/usr/bin/chromium \
DEBUG=0
# Start the application
+1
@@ -22,6 +22,7 @@ gem "puma", "~> 7.0"
gem "prometheus-client"
gem "kramdown", "~> 2.4"
gem "kramdown-parser-gfm", "~> 1.1"
gem "ferrum", "~> 0.17"
group :test do
gem "rspec", "~> 3.12"
@@ -40,6 +40,7 @@ require_relative "config"
require_relative "sanitizer"
require_relative "meta"
require_relative "logging"
require_relative "og_image"
require_relative "application/helpers"
require_relative "application/errors"
require_relative "application/database"
File diff suppressed because it is too large
@@ -0,0 +1,51 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Allowed values for the +telemetry_type+ discriminator column.
VALID_TELEMETRY_TYPES = %w[device environment power air_quality].freeze
# Half-window (seconds) for the meshcore content-level message dedup
# in +insert_message+ and the matching one-shot backfill. Set to
# roughly 3× the observed relay-retransmit delta (~10 s) so genuine
# clock skew across co-operating ingestors still collapses, while
# legitimate rapid re-sends ("ack", "ok", "test") stay distinct as long
# as they arrive ≥30 s apart. See issue #756 and ``CONTRACTS.md`` for
# the rationale.
#
# IMPORTANT: widening this value only takes effect at runtime — the
# one-shot backfill in +PotatoMesh::App::Database+ is frozen at
# +MESHCORE_CONTENT_DEDUP_BACKFILL_VERSION+. To re-sweep pre-existing
# rows that newly fall within an expanded window, bump the backfill
# version so the migration re-runs on the next deploy.
MESHCORE_CONTENT_DEDUP_WINDOW_SECONDS = 30
# Coerce a Ruby boolean into a SQLite integer (1/0) while passing through
# any other value unchanged. Used when writing boolean node fields.
#
# @param value [Boolean, Object] value to coerce.
# @return [Integer, Object] 1, 0, or the original value.
def coerce_bool(value)
case value
when true then 1
when false then 0
else value
end
end
end
end
end
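The +coerce_bool+ contract above is easy to exercise in isolation. This sketch simply restates the method outside the module so its pass-through behaviour is visible; nothing here is new logic.

```ruby
# Booleans become SQLite integers; everything else (nil, integers,
# strings) passes through untouched, so partial node updates do not
# clobber existing columns with a forced 0/1.
def coerce_bool(value)
  case value
  when true then 1
  when false then 0
  else value
  end
end
```

The `else value` arm is the important design choice: a nil field means "no information", not "false".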
@@ -0,0 +1,273 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Decode and store decrypted payloads in domain-specific tables.
#
# @param db [SQLite3::Database] open database handle.
# @param message [Hash] original message payload.
# @param packet_id [Integer] packet identifier for the message.
# @param decrypted [Hash] decrypted payload metadata.
# @param rx_time [Integer] receive time.
# @param rx_iso [String] ISO 8601 receive timestamp.
# @param from_id [String, nil] canonical sender identifier.
# @param to_id [String, nil] destination identifier.
# @param channel [Integer, nil] channel index.
# @param portnum [Object, nil] port number identifier.
# @param hop_limit [Integer, nil] hop limit value.
# @param snr [Numeric, nil] signal-to-noise ratio.
# @param rssi [Integer, nil] RSSI value.
# @return [Boolean] true when a decoded payload was stored.
def store_decrypted_payload(
db,
message,
packet_id,
decrypted,
rx_time:,
rx_iso:,
from_id:,
to_id:,
channel:,
portnum:,
hop_limit:,
snr:,
rssi:
)
payload_bytes = decrypted[:payload]
return false unless payload_bytes
portnum_value = coerce_integer(portnum || decrypted[:portnum])
return false unless portnum_value
payload_b64 = Base64.strict_encode64(payload_bytes)
supported_ports = [3, 4, 67, 70, 71]
return false unless supported_ports.include?(portnum_value)
decoded = PotatoMesh::App::Meshtastic::PayloadDecoder.decode(
portnum: portnum_value,
payload_b64: payload_b64,
)
return false unless decoded.is_a?(Hash)
return false unless decoded["payload"].is_a?(Hash)
common_payload = {
"id" => packet_id,
"packet_id" => packet_id,
"rx_time" => rx_time,
"rx_iso" => rx_iso,
"from_id" => from_id,
"to_id" => to_id,
"channel" => channel,
"portnum" => portnum_value.to_s,
"hop_limit" => hop_limit,
"snr" => snr,
"rssi" => rssi,
"lora_freq" => coerce_integer(message["lora_freq"] || message["loraFrequency"]),
"modem_preset" => string_or_nil(message["modem_preset"] || message["modemPreset"]),
"payload_b64" => payload_b64,
"ingestor" => string_or_nil(message["ingestor"]),
}
case decoded["type"]
when "POSITION_APP"
payload = common_payload.merge("position" => decoded["payload"])
insert_position(db, payload)
debug_log(
"Stored decrypted position payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
when "NODEINFO_APP"
node_payload = normalize_decrypted_nodeinfo_payload(decoded["payload"])
return false unless valid_decrypted_nodeinfo_payload?(node_payload)
node_id = string_or_nil(node_payload["id"]) || from_id
node_num = coerce_integer(node_payload["num"]) ||
coerce_integer(message["from_num"]) ||
resolve_node_num(from_id, message)
node_id ||= format("!%08x", node_num & 0xFFFFFFFF) if node_num
return false unless node_id
payload = node_payload.merge(
"num" => node_num,
"lastHeard" => coerce_integer(node_payload["lastHeard"] || node_payload["last_heard"]) || rx_time,
"snr" => node_payload.key?("snr") ? node_payload["snr"] : snr,
"lora_freq" => common_payload["lora_freq"],
"modem_preset" => common_payload["modem_preset"],
)
upsert_node(db, node_id, payload)
debug_log(
"Stored decrypted node payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
node_id: node_id,
)
true
when "TELEMETRY_APP"
payload = common_payload.merge("telemetry" => decoded["payload"])
insert_telemetry(db, payload)
debug_log(
"Stored decrypted telemetry payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
when "NEIGHBORINFO_APP"
neighbor_payload = decoded["payload"]
neighbors = neighbor_payload["neighbors"]
neighbors = [] unless neighbors.is_a?(Array)
normalized_neighbors = neighbors.map do |neighbor|
next unless neighbor.is_a?(Hash)
{
"neighbor_id" => neighbor["node_id"] || neighbor["nodeId"] || neighbor["id"],
"snr" => neighbor["snr"],
"rx_time" => neighbor["last_rx_time"],
}.compact
end.compact
return false if normalized_neighbors.empty?
payload = common_payload.merge(
"node_id" => neighbor_payload["node_id"] || from_id,
"neighbors" => normalized_neighbors,
"node_broadcast_interval_secs" => neighbor_payload["node_broadcast_interval_secs"],
"last_sent_by_id" => neighbor_payload["last_sent_by_id"],
)
insert_neighbors(db, payload)
debug_log(
"Stored decrypted neighbor payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
when "TRACEROUTE_APP"
route = decoded["payload"]["route"]
route_back = decoded["payload"]["route_back"]
hops = route.is_a?(Array) ? route : route_back.is_a?(Array) ? route_back : []
dest = hops.last if hops.is_a?(Array) && !hops.empty?
src_num = coerce_integer(message["from_num"]) || resolve_node_num(from_id, message)
payload = common_payload.merge(
"src" => src_num,
"dest" => dest,
"hops" => hops,
)
insert_trace(db, payload)
debug_log(
"Stored decrypted traceroute payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
else
false
end
end
# Validate decoded NodeInfo payloads before upserting node records.
#
# @param payload [Object] decoded payload candidate.
# @return [Boolean] true when the payload resembles a Meshtastic NodeInfo.
def valid_decrypted_nodeinfo_payload?(payload)
return false unless payload.is_a?(Hash)
return false if payload.empty?
return false unless payload["user"].is_a?(Hash)
return false if payload.key?("position") && !payload["position"].is_a?(Hash)
return false if payload.key?("deviceMetrics") && !payload["deviceMetrics"].is_a?(Hash)
return false unless nodeinfo_user_has_identifying_fields?(payload["user"])
true
end
# Normalize decoded NodeInfo payload keys for +upsert_node+ compatibility.
#
# The Python decoder preserves protobuf field names, so nested hashes may
# use +snake_case+ keys that +upsert_node+ does not read.
#
# @param payload [Object] decoded NodeInfo payload.
# @return [Hash] normalized payload hash.
def normalize_decrypted_nodeinfo_payload(payload)
return {} unless payload.is_a?(Hash)
user = payload["user"]
normalized_user = user.is_a?(Hash) ? user.dup : nil
if normalized_user
normalized_user["shortName"] ||= normalized_user["short_name"]
normalized_user["longName"] ||= normalized_user["long_name"]
normalized_user["hwModel"] ||= normalized_user["hw_model"]
normalized_user["publicKey"] ||= normalized_user["public_key"]
normalized_user["isUnmessagable"] = normalized_user["is_unmessagable"] if normalized_user.key?("is_unmessagable")
end
metrics = payload["deviceMetrics"] || payload["device_metrics"]
normalized_metrics = metrics.is_a?(Hash) ? metrics.dup : nil
if normalized_metrics
normalized_metrics["batteryLevel"] ||= normalized_metrics["battery_level"]
normalized_metrics["channelUtilization"] ||= normalized_metrics["channel_utilization"]
normalized_metrics["airUtilTx"] ||= normalized_metrics["air_util_tx"]
normalized_metrics["uptimeSeconds"] ||= normalized_metrics["uptime_seconds"]
end
position = payload["position"]
normalized_position = position.is_a?(Hash) ? position.dup : nil
if normalized_position
normalized_position["precisionBits"] ||= normalized_position["precision_bits"]
normalized_position["locationSource"] ||= normalized_position["location_source"]
end
normalized = payload.dup
normalized["user"] = normalized_user if normalized_user
normalized["deviceMetrics"] = normalized_metrics if normalized_metrics
normalized["position"] = normalized_position if normalized_position
normalized["lastHeard"] ||= normalized["last_heard"]
normalized["hopsAway"] ||= normalized["hops_away"]
normalized["isFavorite"] = normalized["is_favorite"] if normalized.key?("is_favorite")
normalized["hwModel"] ||= normalized["hw_model"]
normalized
end
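The key-backfill pattern that +normalize_decrypted_nodeinfo_payload+ applies to +user+, +deviceMetrics+, and +position+ reduces to one small helper. +backfill_camel+ is not part of the codebase, only an illustration of the pattern:

```ruby
# Minimal sketch of the snake_case -> camelCase backfill: an existing
# camelCase key wins, the snake_case twin only fills a gap, and all
# other keys pass through untouched.
def backfill_camel(hash, camel, snake)
  out = hash.dup
  out[camel] ||= out[snake]
  out
end
```

As in the module itself, `||=` means a camelCase key holding `nil` or `false` would also be backfilled; for these string/numeric protobuf fields that is the intended behaviour.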
# Validate that a decoded NodeInfo user section contains identifying data.
#
# @param user [Hash] decoded NodeInfo user payload.
# @return [Boolean] true when at least one identifying field is present.
def nodeinfo_user_has_identifying_fields?(user)
identifying_fields = [
user["id"],
user["shortName"],
user["short_name"],
user["longName"],
user["long_name"],
user["macaddr"],
user["hwModel"],
user["hw_model"],
user["publicKey"],
user["public_key"],
]
identifying_fields.any? do |value|
value.is_a?(String) ? !value.strip.empty? : !value.nil?
end
end
end
end
end
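The acceptance rule in +nodeinfo_user_has_identifying_fields?+ can be restated as a standalone sketch (the +identifying_user?+ name is hypothetical): a user section counts as identifying when at least one known field is a non-blank string, or any non-nil non-string value.

```ruby
# Both camelCase and snake_case spellings are checked, mirroring the
# field list in the module above.
IDENTIFYING_KEYS = %w[
  id shortName short_name longName long_name
  macaddr hwModel hw_model publicKey public_key
].freeze

def identifying_user?(user)
  IDENTIFYING_KEYS.any? do |key|
    value = user[key]
    value.is_a?(String) ? !value.strip.empty? : !value.nil?
  end
end
```

Whitespace-only strings are rejected, so a NodeInfo carrying only `"id" => "   "` still falls through to the invalid path.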
@@ -0,0 +1,199 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Resolve the numeric representation of a node identifier from a packet payload.
#
# The +payload["num"]+ field may arrive as an Integer, a decimal string, or
# a hexadecimal string (with or without an +0x+ prefix). When the field is
# absent or ambiguous the method falls back to decoding the hex portion of
# +node_id+.
#
# @param node_id [String, nil] canonical node identifier in +!xxxxxxxx+ form.
# @param payload [Hash] inbound message payload that may carry a +num+ field.
# @return [Integer, nil] resolved 32-bit node number or +nil+ when undecidable.
def resolve_node_num(node_id, payload)
raw = payload["num"]
case raw
when Integer
return raw
when Numeric
return raw.to_i
when String
trimmed = raw.strip
return nil if trimmed.empty?
return Integer(trimmed, 10) if trimmed.match?(/\A[0-9]+\z/)
return Integer(trimmed.delete_prefix("0x").delete_prefix("0X"), 16) if trimmed.match?(/\A0[xX][0-9A-Fa-f]+\z/)
if trimmed.match?(/\A[0-9A-Fa-f]+\z/)
canonical = node_id.is_a?(String) ? node_id.strip : ""
return Integer(trimmed, 16) if canonical.match?(/\A!?[0-9A-Fa-f]+\z/)
end
end
return nil unless node_id.is_a?(String)
hex = node_id.strip
return nil if hex.empty?
hex = hex.delete_prefix("!")
return nil unless hex.match?(/\A[0-9A-Fa-f]+\z/)
Integer(hex, 16)
end
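The precedence in +resolve_node_num+ is worth seeing in miniature: decimal strings win over hex readings, an explicit +0x+ prefix forces hex, and the hex digits of +node_id+ are the last resort. This condensed sketch (+resolve_num+ is hypothetical) deliberately omits the bare-hex-string branch that consults +node_id+ before trusting an ambiguous payload value:

```ruby
# Condensed precedence sketch: Integer as-is, decimal string in base 10,
# "0x"-prefixed string in base 16, then fall back to the node_id hex.
def resolve_num(node_id, raw)
  case raw
  when Integer then return raw
  when /\A[0-9]+\z/ then return Integer(raw, 10)
  when /\A0[xX][0-9A-Fa-f]+\z/
    return Integer(raw.delete_prefix("0x").delete_prefix("0X"), 16)
  end
  hex = node_id.to_s.strip.delete_prefix("!")
  hex.match?(/\A[0-9A-Fa-f]+\z/) && !hex.empty? ? Integer(hex, 16) : nil
end
```

Note that `"42"` resolves to decimal 42, never 0x42; that is the ambiguity the decimal-first rule exists to settle.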
# Derive the canonical triplet for a node reference.
#
# Accepts an Integer node number, a hex string with or without the +!+
# sigil, a decimal numeric string, or a +0x+-prefixed hex string. A
# +fallback_num+ may be provided when +node_ref+ is nil.
#
# @param node_ref [Integer, String, nil] raw node identifier from a packet.
# @param fallback_num [Integer, nil] numeric fallback when +node_ref+ is nil.
# @return [Array(String, Integer, String), nil] tuple of
# +[canonical_id, node_num, short_id]+ or +nil+ when the reference cannot
# be resolved. +canonical_id+ is prefixed with +!+ and zero-padded to
# eight lowercase hex digits. +short_id+ is the upper-case last four
# hex digits used for display.
def canonical_node_parts(node_ref, fallback_num = nil)
fallback = coerce_integer(fallback_num)
hex = nil
num = nil
case node_ref
when Integer
num = node_ref
when Numeric
num = node_ref.to_i
when String
trimmed = node_ref.strip
return nil if trimmed.empty?
if trimmed.start_with?("!")
hex = trimmed.delete_prefix("!")
elsif trimmed.match?(/\A0[xX][0-9A-Fa-f]+\z/)
hex = trimmed[2..].to_s
elsif trimmed.match?(/\A-?\d+\z/)
num = trimmed.to_i
elsif trimmed.match?(/\A[0-9A-Fa-f]+\z/)
hex = trimmed
else
return nil
end
when nil
num = fallback if fallback
else
return nil
end
num ||= fallback if fallback
if hex
begin
num ||= Integer(hex, 16)
rescue ArgumentError
return nil
end
elsif num
return nil if num.negative?
hex = format("%08x", num & 0xFFFFFFFF)
else
return nil
end
return nil if hex.nil? || hex.empty?
begin
parsed = Integer(hex, 16)
rescue ArgumentError
return nil
end
parsed &= 0xFFFFFFFF
canonical_hex = format("%08x", parsed)
short_id = canonical_hex[-4, 4].upcase
["!#{canonical_hex}", parsed, short_id]
end
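For the common Integer input, +canonical_node_parts+ reduces to masking and formatting. The hypothetical +canonical_parts+ below mirrors only that path, skipping the string-parsing branches:

```ruby
# Mask to 32 bits, zero-pad to eight lowercase hex digits, and take the
# upper-cased last four digits as the display short id.
def canonical_parts(num)
  return nil if num.negative?
  masked = num & 0xFFFFFFFF
  hex = format("%08x", masked)
  ["!#{hex}", masked, hex[-4, 4].upcase]
end
```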
# Detect whether a node reference resolves to the broadcast address.
#
# @param node_ref [Integer, String, nil] raw node reference.
# @param fallback_num [Integer, nil] optional numeric fallback.
# @return [Boolean] true when the reference matches the broadcast address.
def broadcast_node_ref?(node_ref, fallback_num = nil)
return true if fallback_num == 0xFFFFFFFF
trimmed = string_or_nil(node_ref)
return false unless trimmed
normalized = trimmed.delete_prefix("!").strip.downcase
normalized == "ffffffff"
end
# Converts a protocol identifier such as +meshtastic+ or +mesh-core+ into
# the display label used in generated node names: capitalised parts joined
# without a separator (e.g. +Meshtastic+, +MeshCore+).
#
# @param protocol [String] protocol identifier.
# @return [String] formatted display label.
def protocol_display_label(protocol)
protocol.split(/[-_]/).map(&:capitalize).join
end
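That one-liner can be checked directly; the method body below is copied from the hunk above, and both hyphenated and underscored identifiers collapse to the same label:

```ruby
# Split on "-" or "_", capitalise each part, join without a separator.
def protocol_display_label(protocol)
  protocol.split(/[-_]/).map(&:capitalize).join
end
```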
# Returns true if +long_name+ is the synthetic placeholder generated by
# +ensure_unknown_node+ for the given +node_id+ and +protocol+. Such
# names carry no real information and must not overwrite a known name
# already on record.
#
# @param long_name [String, nil] candidate long name.
# @param node_id [String, nil] canonical node identifier.
# @param protocol [String] protocol identifier the placeholder was generated for.
# @return [Boolean] true when the long name is a generic placeholder.
def generic_fallback_name?(long_name, node_id, protocol)
return false unless long_name && !long_name.empty?
parts = canonical_node_parts(node_id)
return false unless parts
short_id = parts[2]
long_name == "#{protocol_display_label(protocol)} #{short_id}"
end
# Resolve a raw node reference to its canonical row in the +nodes+ table.
#
# @param db [SQLite3::Database] open database handle.
# @param node_ref [Object] raw reference (string, integer, or hex string).
# @return [String, nil] canonical +node_id+ or nil when no match exists.
def normalize_node_id(db, node_ref)
return nil if node_ref.nil?
ref_str = node_ref.to_s.strip
return nil if ref_str.empty?
node_id = db.get_first_value("SELECT node_id FROM nodes WHERE node_id = ?", [ref_str])
return node_id if node_id
begin
ref_num = Integer(ref_str, 10)
rescue ArgumentError
return nil
end
db.get_first_value("SELECT node_id FROM nodes WHERE num = ?", [ref_num])
end
end
end
end
@@ -0,0 +1,83 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Insert or update an ingestor heartbeat payload.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] ingestor payload from the collector.
# @return [Boolean] true when persistence succeeded.
def upsert_ingestor(db, payload)
return false unless payload.is_a?(Hash)
parts = canonical_node_parts(payload["node_id"] || payload["id"])
return false unless parts
node_id, = parts
now = Time.now.to_i
start_time = coerce_integer(payload["start_time"] || payload["startTime"]) || now
last_seen_time =
coerce_integer(payload["last_seen_time"] || payload["lastSeenTime"]) || start_time
start_time = 0 if start_time.negative?
last_seen_time = 0 if last_seen_time.negative?
start_time = now if start_time > now
last_seen_time = now if last_seen_time > now
last_seen_time = start_time if last_seen_time < start_time
version = string_or_nil(payload["version"] || payload["ingestorVersion"])
return false unless version
lora_freq = coerce_integer(payload["lora_freq"])
modem_preset = string_or_nil(payload["modem_preset"])
protocol = string_or_nil(payload["protocol"]) || "meshtastic"
with_busy_retry do
db.execute <<~SQL, [node_id, start_time, last_seen_time, version, lora_freq, modem_preset, protocol]
INSERT INTO ingestors(node_id, start_time, last_seen_time, version, lora_freq, modem_preset, protocol)
VALUES(?,?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
start_time = CASE
WHEN excluded.start_time > ingestors.start_time THEN excluded.start_time
ELSE ingestors.start_time
END,
last_seen_time = CASE
WHEN excluded.last_seen_time > ingestors.last_seen_time THEN excluded.last_seen_time
ELSE ingestors.last_seen_time
END,
version = COALESCE(excluded.version, ingestors.version),
lora_freq = COALESCE(excluded.lora_freq, ingestors.lora_freq),
modem_preset = COALESCE(excluded.modem_preset, ingestors.modem_preset),
protocol = excluded.protocol
SQL
end
true
rescue SQLite3::SQLException => e
warn_log(
"Failed to upsert ingestor record",
context: "data_processing.ingestors",
node_id: node_id,
error_class: e.class.name,
error_message: e.message,
)
false
end
end
end
end
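The timestamp sanitisation in +upsert_ingestor+ is the subtle part of that method. This standalone sketch (+clamp_heartbeat+ is a hypothetical name) isolates just the clamping order: negative values floor at zero, future values clamp to "now", and +last_seen_time+ can never precede +start_time+:

```ruby
# Returns the sanitised [start_time, last_seen_time] pair. The
# last-seen-follows-start rule is applied last so it also repairs pairs
# broken by the earlier clamps.
def clamp_heartbeat(start_time, last_seen_time, now)
  start_time = 0 if start_time.negative?
  last_seen_time = 0 if last_seen_time.negative?
  start_time = now if start_time > now
  last_seen_time = now if last_seen_time > now
  last_seen_time = start_time if last_seen_time < start_time
  [start_time, last_seen_time]
end
```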
@@ -0,0 +1,494 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Determine whether the canonical sender identifier should override the
# sender supplied by the ingestor. MeshCore packets that include a
# +packet_id+ but no +id+ predate the canonical-id assignment, so we
# prefer the canonical lookup when both are available.
#
# @param message [Hash] inbound message payload.
# @return [Boolean] true when the canonical lookup wins.
def prefer_canonical_sender?(message)
message.is_a?(Hash) && message.key?("packet_id") && !message.key?("id")
end
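The predicate is small enough to verify by eye; the body below is copied from the hunk above, showing that only the presence of +packet_id+ without +id+ triggers the canonical-sender preference:

```ruby
# True only for Hash payloads that carry "packet_id" but no "id".
def prefer_canonical_sender?(message)
  message.is_a?(Hash) && message.key?("packet_id") && !message.key?("id")
end
```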
# Attempt to decrypt an encrypted Meshtastic message payload.
#
# @param message [Hash] message payload supplied by the ingestor.
# @param packet_id [Integer] message packet identifier.
# @param from_id [String, nil] canonical node identifier when available.
# @param from_num [Integer, nil] numeric node identifier when available.
# @param channel_index [Integer, nil] channel hash index.
# @return [Hash, nil] decrypted payload metadata when parsing succeeds.
def decrypt_meshtastic_message(message, packet_id, from_id, from_num, channel_index)
return nil unless message.is_a?(Hash)
cipher_b64 = string_or_nil(message["encrypted"])
return nil unless cipher_b64
if (ENV["RACK_ENV"] == "test" || ENV["APP_ENV"] == "test" || defined?(RSpec)) &&
ENV["MESHTASTIC_PSK_B64"].nil?
return nil
end
node_num = coerce_integer(from_num)
if node_num.nil?
parts = canonical_node_parts(from_id)
node_num = parts[1] if parts
end
return nil unless node_num
psk_b64 = PotatoMesh::Config.meshtastic_psk_b64
data = PotatoMesh::App::Meshtastic::Cipher.decrypt_data(
cipher_b64: cipher_b64,
packet_id: packet_id,
from_id: from_id,
from_num: node_num,
psk_b64: psk_b64,
)
return nil unless data
channel_name = nil
if channel_index.is_a?(Integer)
candidates = PotatoMesh::App::Meshtastic::RainbowTable.channel_names_for(
channel_index,
psk_b64: psk_b64,
)
channel_name = candidates.first if candidates.any?
end
{
text: data[:text],
portnum: data[:portnum],
payload: data[:payload],
channel_name: channel_name,
}
end
# Persist a chat-layer message payload, performing meshcore content
# dedup, decryption, and per-protocol bookkeeping.
#
# @param db [SQLite3::Database] open database handle.
# @param message [Hash] inbound message payload.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
def insert_message(db, message, protocol_cache: nil)
return unless message.is_a?(Hash)
msg_id = coerce_integer(message["id"] || message["packet_id"])
return unless msg_id
now = Time.now.to_i
rx_time = coerce_integer(message["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
rx_iso = string_or_nil(message["rx_iso"])
rx_iso ||= Time.at(rx_time).utc.iso8601
raw_from_id = message["from_id"]
if raw_from_id.nil? || raw_from_id.to_s.strip.empty?
alt_from = message["from"]
raw_from_id = alt_from unless alt_from.nil? || alt_from.to_s.strip.empty?
end
trimmed_from_id = string_or_nil(raw_from_id)
canonical_from_id = string_or_nil(normalize_node_id(db, raw_from_id))
from_id = trimmed_from_id
if canonical_from_id
if from_id.nil?
from_id = canonical_from_id
elsif prefer_canonical_sender?(message)
from_id = canonical_from_id
elsif from_id.start_with?("!") && from_id.casecmp(canonical_from_id) != 0
from_id = canonical_from_id
end
end
if from_id && !from_id.start_with?("^")
canonical_parts = canonical_node_parts(from_id, message["from_num"])
if canonical_parts && !from_id.start_with?("!")
from_id = canonical_parts[0]
message["from_num"] ||= canonical_parts[1]
end
end
sender_present = !from_id.nil? || !coerce_integer(message["from_num"]).nil? || !trimmed_from_id.nil?
raw_to_id = message["to_id"]
raw_to_id = message["to"] if raw_to_id.nil? || raw_to_id.to_s.strip.empty?
trimmed_to_id = string_or_nil(raw_to_id)
canonical_to_id = string_or_nil(normalize_node_id(db, raw_to_id))
to_id = trimmed_to_id
if canonical_to_id
if to_id.nil?
to_id = canonical_to_id
elsif to_id.start_with?("!") && to_id.casecmp(canonical_to_id) != 0
to_id = canonical_to_id
end
end
if to_id && !to_id.start_with?("^")
canonical_parts = canonical_node_parts(to_id, message["to_num"])
if canonical_parts && !to_id.start_with?("!")
to_id = canonical_parts[0]
message["to_num"] ||= canonical_parts[1]
end
end
encrypted = string_or_nil(message["encrypted"])
text = message["text"]
portnum = message["portnum"]
channel_index = coerce_integer(message["channel"] || message["channel_index"] || message["channelIndex"])
decrypted_payload = nil
decrypted_portnum = nil
if encrypted && (text.nil? || text.to_s.strip.empty?)
decrypted = decrypt_meshtastic_message(
message,
msg_id,
from_id,
message["from_num"],
channel_index,
)
if decrypted
decrypted_payload = decrypted
decrypted_portnum = decrypted[:portnum]
end
end
if encrypted && (text.nil? || text.to_s.strip.empty?)
portnum = nil
message.delete("portnum")
end
lora_freq = coerce_integer(message["lora_freq"] || message["loraFrequency"])
modem_preset = string_or_nil(message["modem_preset"] || message["modemPreset"])
channel_name = string_or_nil(message["channel_name"] || message["channelName"])
reply_id = coerce_integer(message["reply_id"] || message["replyId"])
emoji = string_or_nil(message["emoji"])
ingestor = string_or_nil(message["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
row = [
msg_id,
rx_time,
rx_iso,
from_id,
to_id,
message["channel"],
portnum,
text,
encrypted,
message["snr"],
message["rssi"],
message["hop_limit"],
lora_freq,
modem_preset,
channel_name,
reply_id,
emoji,
ingestor,
protocol,
]
with_busy_retry do
# Meshcore-only content-level dedup (issue #756). The deterministic
# message id (``_derive_message_id`` in the Python ingestor) hashes
# ``sender_timestamp`` among other fields, but the MeshCore library
# has been observed delivering the same physical packet twice with
# a rewritten ``sender_timestamp`` (relay/retransmit behaviour).
# The PK path below cannot catch that — two copies compute two
# different ids — so we add a narrow content+window pre-check here.
#
# Ruby integer ``0`` is truthy, so the ``channel_index`` guard
# passes for the broadcast channel intentionally; we only skip when
# the channel is absent/nil. ``from_id`` + non-empty ``text`` keep
# encrypted or anonymous traffic on the id-PK path.
#
# Known race: the SELECT and the downstream INSERT do not share a
# transaction, so two Puma threads carrying the same content with
# different ids can both pass the pre-check and both insert. The
# deploy-time backfill sweeps the survivors; wrapping the pair in
# ``db.transaction(:immediate)`` is a future tightening if the race
# is ever observed in production.
if protocol == "meshcore" && from_id && channel_index && text && !text.to_s.empty?
# ``channel = ?`` matches the ``channel_index`` bind cleanly
# because the guard above rejects nil; ``to_id`` may legitimately
# be nil (rare meshcore fallback), so it keeps ``IS ?`` for a
# NULL-safe compare.
duplicate_id = db.get_first_value(
<<~SQL,
SELECT id FROM messages
WHERE protocol = 'meshcore'
AND from_id = ?
AND to_id IS ?
AND channel = ?
AND text = ?
AND rx_time BETWEEN ? AND ?
AND id != ?
LIMIT 1
SQL
[from_id, to_id, channel_index, text,
rx_time - MESHCORE_CONTENT_DEDUP_WINDOW_SECONDS,
rx_time + MESHCORE_CONTENT_DEDUP_WINDOW_SECONDS, msg_id],
)
if duplicate_id
debug_log(
"Skipped meshcore message duplicate",
context: "data_processing.insert_message",
new_id: msg_id,
existing_id: duplicate_id,
from_id: from_id,
channel: channel_index,
)
return
end
end
existing = db.get_first_row(
"SELECT from_id, to_id, text, encrypted, lora_freq, modem_preset, channel_name, reply_id, emoji, portnum, ingestor, protocol FROM messages WHERE id = ?",
[msg_id],
)
if existing
updates = {}
existing_text = existing.is_a?(Hash) ? existing["text"] : existing[2]
existing_text_str = existing_text&.to_s
existing_has_text = existing_text_str && !existing_text_str.strip.empty?
existing_from = existing.is_a?(Hash) ? existing["from_id"] : existing[0]
existing_from_str = existing_from&.to_s
return if !sender_present && (existing_from_str.nil? || existing_from_str.strip.empty?)
existing_encrypted = existing.is_a?(Hash) ? existing["encrypted"] : existing[3]
existing_encrypted_str = existing_encrypted&.to_s
decrypted_precedence = text && existing_encrypted_str && !existing_encrypted_str.strip.empty?
if from_id
should_update = existing_from_str.nil? || existing_from_str.strip.empty?
should_update ||= existing_from != from_id
updates["from_id"] = from_id if should_update
end
if to_id
existing_to = existing.is_a?(Hash) ? existing["to_id"] : existing[1]
existing_to_str = existing_to&.to_s
should_update = existing_to_str.nil? || existing_to_str.strip.empty?
should_update ||= existing_to != to_id
updates["to_id"] = to_id if should_update
end
if decrypted_precedence && existing_encrypted_str && !existing_encrypted_str.strip.empty?
updates["encrypted"] = nil if existing_encrypted
elsif encrypted && !existing_has_text
should_update = existing_encrypted_str.nil? || existing_encrypted_str.strip.empty?
should_update ||= existing_encrypted != encrypted
updates["encrypted"] = encrypted if should_update
end
if text
should_update = existing_text_str.nil? || existing_text_str.strip.empty?
should_update ||= existing_text != text
updates["text"] = text if should_update
end
if decrypted_precedence
updates["channel"] = message["channel"] if message.key?("channel")
updates["snr"] = message["snr"] if message.key?("snr")
updates["rssi"] = message["rssi"] if message.key?("rssi")
updates["hop_limit"] = message["hop_limit"] if message.key?("hop_limit")
updates["lora_freq"] = lora_freq unless lora_freq.nil?
updates["modem_preset"] = modem_preset if modem_preset
updates["channel_name"] = channel_name if channel_name
updates["rx_time"] = rx_time if rx_time
updates["rx_iso"] = rx_iso if rx_iso
end
if portnum
existing_portnum = existing.is_a?(Hash) ? existing["portnum"] : existing[9]
existing_portnum_str = existing_portnum&.to_s
should_update = existing_portnum_str.nil? || existing_portnum_str.strip.empty?
should_update ||= existing_portnum != portnum
should_update ||= decrypted_precedence
updates["portnum"] = portnum if should_update
end
unless lora_freq.nil?
existing_lora = existing.is_a?(Hash) ? existing["lora_freq"] : existing[4]
updates["lora_freq"] = lora_freq if existing_lora != lora_freq
end
if modem_preset
existing_preset = existing.is_a?(Hash) ? existing["modem_preset"] : existing[5]
existing_preset_str = existing_preset&.to_s
should_update = existing_preset_str.nil? || existing_preset_str.strip.empty?
should_update ||= existing_preset != modem_preset
updates["modem_preset"] = modem_preset if should_update
end
if channel_name
existing_channel = existing.is_a?(Hash) ? existing["channel_name"] : existing[6]
existing_channel_str = existing_channel&.to_s
should_update = existing_channel_str.nil? || existing_channel_str.strip.empty?
should_update ||= existing_channel != channel_name
updates["channel_name"] = channel_name if should_update
end
unless reply_id.nil?
existing_reply = existing.is_a?(Hash) ? existing["reply_id"] : existing[7]
updates["reply_id"] = reply_id if existing_reply != reply_id
end
if emoji
existing_emoji = existing.is_a?(Hash) ? existing["emoji"] : existing[8]
existing_emoji_str = existing_emoji&.to_s
should_update = existing_emoji_str.nil? || existing_emoji_str.strip.empty?
should_update ||= existing_emoji != emoji
updates["emoji"] = emoji if should_update
end
if ingestor
existing_ingestor = existing.is_a?(Hash) ? existing["ingestor"] : existing[10]
existing_ingestor = string_or_nil(existing_ingestor)
updates["ingestor"] = ingestor if existing_ingestor.nil?
end
existing_protocol = existing.is_a?(Hash) ? existing["protocol"] : existing[11]
return if existing_protocol && existing_protocol != "meshtastic" && existing_protocol != protocol
updates["protocol"] = protocol if (existing_protocol.nil? || existing_protocol == "meshtastic") && protocol != "meshtastic"
unless updates.empty?
assignments = updates.keys.map { |column| "#{column} = ?" }.join(", ")
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", updates.values + [msg_id])
end
else
PotatoMesh::App::Prometheus::MESSAGES_TOTAL.increment
begin
db.execute <<~SQL, row
INSERT INTO messages(id,rx_time,rx_iso,from_id,to_id,channel,portnum,text,encrypted,snr,rssi,hop_limit,lora_freq,modem_preset,channel_name,reply_id,emoji,ingestor,protocol)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
SQL
rescue SQLite3::ConstraintException
existing_row = db.get_first_row(
"SELECT text, encrypted, ingestor, protocol FROM messages WHERE id = ?",
[msg_id],
)
existing_text = existing_row.is_a?(Hash) ? existing_row["text"] : existing_row&.[](0)
existing_text_str = existing_text&.to_s
allow_encrypted_update = existing_text_str.nil? || existing_text_str.strip.empty?
existing_encrypted = existing_row.is_a?(Hash) ? existing_row["encrypted"] : existing_row&.[](1)
existing_encrypted_str = existing_encrypted&.to_s
existing_ingestor = existing_row.is_a?(Hash) ? existing_row["ingestor"] : existing_row&.[](2)
existing_ingestor = string_or_nil(existing_ingestor)
existing_fallback_protocol = existing_row.is_a?(Hash) ? existing_row["protocol"] : existing_row&.[](3)
# Guard against cross-protocol contamination in the constraint fallback path,
# mirroring the same guard applied in the primary update path above.
return if existing_fallback_protocol && existing_fallback_protocol != "meshtastic" && existing_fallback_protocol != protocol
decrypted_precedence = text && existing_encrypted_str && !existing_encrypted_str.strip.empty?
fallback_updates = {}
fallback_updates["from_id"] = from_id if from_id
fallback_updates["to_id"] = to_id if to_id
fallback_updates["text"] = text if text
fallback_updates["encrypted"] = encrypted if encrypted && allow_encrypted_update
fallback_updates["portnum"] = portnum if portnum
if decrypted_precedence
fallback_updates["channel"] = message["channel"] if message.key?("channel")
fallback_updates["snr"] = message["snr"] if message.key?("snr")
fallback_updates["rssi"] = message["rssi"] if message.key?("rssi")
fallback_updates["hop_limit"] = message["hop_limit"] if message.key?("hop_limit")
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
fallback_updates["modem_preset"] = modem_preset if modem_preset
fallback_updates["channel_name"] = channel_name if channel_name
fallback_updates["rx_time"] = rx_time if rx_time
fallback_updates["rx_iso"] = rx_iso if rx_iso
else
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
fallback_updates["modem_preset"] = modem_preset if modem_preset
fallback_updates["channel_name"] = channel_name if channel_name
end
fallback_updates["reply_id"] = reply_id unless reply_id.nil?
fallback_updates["emoji"] = emoji if emoji
fallback_updates["ingestor"] = ingestor if ingestor && existing_ingestor.nil?
fallback_updates["protocol"] = protocol if (existing_fallback_protocol.nil? || existing_fallback_protocol == "meshtastic") && protocol != "meshtastic"
unless fallback_updates.empty?
assignments = fallback_updates.keys.map { |column| "#{column} = ?" }.join(", ")
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", fallback_updates.values + [msg_id])
end
end
end
end
stored_decrypted = nil
if decrypted_payload
stored_decrypted = store_decrypted_payload(
db,
message,
msg_id,
decrypted_payload,
rx_time: rx_time,
rx_iso: rx_iso,
from_id: from_id,
to_id: to_id,
channel: message["channel"],
portnum: portnum || decrypted_portnum,
hop_limit: message["hop_limit"],
snr: message["snr"],
rssi: message["rssi"],
)
end
if stored_decrypted && encrypted
with_busy_retry do
db.execute("UPDATE messages SET encrypted = NULL WHERE id = ?", [msg_id])
end
debug_log(
"Cleared encrypted payload after decoding",
context: "data_processing.insert_message",
message_id: msg_id,
portnum: portnum || decrypted_portnum,
)
end
should_touch_message = !stored_decrypted
if should_touch_message
ensure_unknown_node(db, from_id || raw_from_id, message["from_num"], heard_time: rx_time, protocol: protocol)
touch_node_last_seen(
db,
from_id || raw_from_id || message["from_num"],
message["from_num"],
rx_time: rx_time,
source: :message,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
ensure_unknown_node(db, to_id || raw_to_id, message["to_num"], heard_time: rx_time, protocol: protocol) if to_id || raw_to_id
if to_id || raw_to_id || message.key?("to_num")
touch_node_last_seen(
db,
to_id || raw_to_id || message["to_num"],
message["to_num"],
rx_time: rx_time,
source: :message,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
end
end
end
end
end
end
@@ -0,0 +1,144 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Persist a neighbors snapshot for a single reporting node.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] inbound NeighborInfo payload.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
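# @example Hypothetical NeighborInfo payload (shape assumed from the field reads below)
#   insert_neighbors(db, {
#     "node_id" => "!a1b2c3d4",
#     "rx_time" => Time.now.to_i,
#     "neighbors" => [
#       { "neighbor_id" => "!deadbeef", "snr" => 7.25 },
#     ],
#   })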
def insert_neighbors(db, payload, protocol_cache: nil)
return unless payload.is_a?(Hash)
now = Time.now.to_i
rx_time = coerce_integer(payload["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
raw_node_id = payload["node_id"] || payload["node"] || payload["from_id"]
raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])
canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
if canonical_parts
node_id, node_num, = canonical_parts
else
node_id = string_or_nil(raw_node_id)
canonical = normalize_node_id(db, node_id || raw_node_num)
node_id = canonical if canonical
if node_id&.start_with?("!") && raw_node_num.nil?
begin
node_num = Integer(node_id.delete_prefix("!"), 16)
rescue ArgumentError
node_num = nil
end
else
node_num = raw_node_num
end
end
return unless node_id
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id.start_with?("!")
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
ensure_unknown_node(db, node_id || node_num, node_num, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(db, node_id || node_num, node_num, rx_time: rx_time, source: :neighborinfo)
neighbor_entries = []
neighbors_payload = payload["neighbors"]
neighbors_list = neighbors_payload.is_a?(Array) ? neighbors_payload : []
neighbors_list.each do |neighbor|
next unless neighbor.is_a?(Hash)
neighbor_ref = neighbor["neighbor_id"] || neighbor["node_id"] || neighbor["nodeId"] || neighbor["id"]
neighbor_num = coerce_integer(
neighbor["neighbor_num"] || neighbor["node_num"] || neighbor["nodeId"] || neighbor["id"],
)
canonical_neighbor = canonical_node_parts(neighbor_ref, neighbor_num)
if canonical_neighbor
neighbor_id, neighbor_num, = canonical_neighbor
else
neighbor_id = string_or_nil(neighbor_ref)
canonical_neighbor_id = normalize_node_id(db, neighbor_id || neighbor_num)
neighbor_id = canonical_neighbor_id if canonical_neighbor_id
if neighbor_id&.start_with?("!") && neighbor_num.nil?
begin
neighbor_num = Integer(neighbor_id.delete_prefix("!"), 16)
rescue ArgumentError
neighbor_num = nil
end
end
end
next unless neighbor_id
neighbor_id = "!#{neighbor_id.delete_prefix("!").downcase}" if neighbor_id.start_with?("!")
entry_rx_time = coerce_integer(neighbor["rx_time"]) || rx_time
entry_rx_time = now if entry_rx_time && entry_rx_time > now
snr = coerce_float(neighbor["snr"])
ensure_unknown_node(db, neighbor_id || neighbor_num, neighbor_num, heard_time: entry_rx_time, protocol: protocol)
neighbor_entries << [neighbor_id, snr, entry_rx_time, ingestor, protocol]
end
with_busy_retry do
db.transaction do
if neighbor_entries.empty?
db.execute("DELETE FROM neighbors WHERE node_id = ?", [node_id])
else
expected_neighbors = neighbor_entries.map(&:first).uniq
existing_neighbors = db.execute(
"SELECT neighbor_id FROM neighbors WHERE node_id = ?",
[node_id],
).map { |row| row.is_a?(Hash) ? row["neighbor_id"] : row[0] }
stale_neighbors = existing_neighbors - expected_neighbors
stale_neighbors.each_slice(500) do |slice|
placeholders = slice.map { "?" }.join(",")
db.execute(
"DELETE FROM neighbors WHERE node_id = ? AND neighbor_id IN (#{placeholders})",
[node_id] + slice,
)
end
end
neighbor_entries.each do |neighbor_id, snr_value, heard_time, reporter_id, proto|
db.execute(
<<~SQL,
INSERT INTO neighbors(node_id, neighbor_id, snr, rx_time, ingestor, protocol)
VALUES (?, ?, ?, ?, ?, ?)
ON CONFLICT(node_id, neighbor_id) DO UPDATE SET
snr = excluded.snr,
rx_time = excluded.rx_time,
ingestor = COALESCE(NULLIF(neighbors.ingestor,''), excluded.ingestor),
protocol = COALESCE(NULLIF(neighbors.protocol,'meshtastic'), excluded.protocol)
SQL
[node_id, neighbor_id, snr_value, heard_time, reporter_id, proto],
)
end
end
end
end
end
end
end
@@ -0,0 +1,570 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Insert a hidden placeholder node when an unknown reference is encountered.
#
# @param db [SQLite3::Database] open database handle.
# @param node_ref [Object] raw node reference from the inbound payload.
# @param fallback_num [Integer, nil] numeric fallback when +node_ref+ is nil.
# @param heard_time [Integer, nil] timestamp to record as +last_heard+/+first_heard+.
# @param protocol [String] protocol identifier for placeholder generation.
# @return [Boolean, nil] true when a row was inserted, false/nil otherwise.
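# @example Seeding a placeholder for a never-before-seen sender (illustrative values)
#   # Returns true only when the INSERT OR IGNORE actually created a row.
#   ensure_unknown_node(db, "!a1b2c3d4", nil, heard_time: Time.now.to_i)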
def ensure_unknown_node(db, node_ref, fallback_num = nil, heard_time: nil, protocol: "meshtastic")
parts = canonical_node_parts(node_ref, fallback_num)
return unless parts
node_id, node_num, short_id = parts
return if broadcast_node_ref?(node_id, node_num)
existing = db.get_first_value(
"SELECT 1 FROM nodes WHERE node_id = ? LIMIT 1",
[node_id],
)
return if existing
long_name = "#{protocol_display_label(protocol)} #{short_id}"
default_role = case protocol
when "meshcore" then "COMPANION"
else "CLIENT_HIDDEN"
end
heard_time = coerce_integer(heard_time)
inserted = false
with_busy_retry do
db.execute(
<<~SQL,
INSERT OR IGNORE INTO nodes(node_id,num,short_name,long_name,role,last_heard,first_heard,protocol)
VALUES (?,?,?,?,?,?,?,?)
SQL
[node_id, node_num, short_id, long_name, default_role, heard_time, heard_time, protocol],
)
inserted = db.changes.positive?
end
if inserted
debug_log(
"Created hidden placeholder node",
context: "data_processing.ensure_unknown_node",
node_id: node_id,
reference: node_ref,
fallback: fallback_num,
heard_time: heard_time,
)
end
inserted
end
# Refresh a node's +last_heard+, +first_heard+, +lora_freq+, and
# +modem_preset+ columns from a freshly received packet.
#
# @param db [SQLite3::Database] open database handle.
# @param node_ref [Object] raw node reference.
# @param fallback_num [Integer, nil] numeric fallback when +node_ref+ is nil.
# @param rx_time [Integer, nil] receive timestamp; the method exits early when nil.
# @param source [Symbol, nil] originating subsystem (used for debug logs).
# @param lora_freq [Integer, nil] LoRa frequency; only updated when non-nil.
# @param modem_preset [String, nil] modem preset name; only updated when non-nil.
# @return [Boolean] true when at least one row was updated.
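# @example Refreshing last_heard from a freshly received packet (illustrative values)
#   touch_node_last_seen(db, "!a1b2c3d4", nil,
#                        rx_time: Time.now.to_i, source: :message)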
def touch_node_last_seen(
db,
node_ref,
fallback_num = nil,
rx_time: nil,
source: nil,
lora_freq: nil,
modem_preset: nil
)
timestamp = coerce_integer(rx_time)
return unless timestamp
node_id = nil
parts = canonical_node_parts(node_ref, fallback_num)
if parts
node_id, node_num = parts
return if broadcast_node_ref?(node_id, node_num)
end
unless node_id
trimmed = string_or_nil(node_ref)
if trimmed
node_id = normalize_node_id(db, trimmed) || trimmed
elsif fallback_num
fallback_parts = canonical_node_parts(fallback_num, nil)
node_id, = fallback_parts if fallback_parts
end
end
return if broadcast_node_ref?(node_id, fallback_num)
return unless node_id
lora_freq = coerce_integer(lora_freq)
modem_preset = string_or_nil(modem_preset)
updated = false
with_busy_retry do
db.execute <<~SQL, [timestamp, timestamp, timestamp, lora_freq, modem_preset, node_id]
UPDATE nodes
SET last_heard = CASE
WHEN COALESCE(last_heard, 0) >= ? THEN last_heard
ELSE ?
END,
first_heard = COALESCE(first_heard, ?),
lora_freq = COALESCE(?, lora_freq),
modem_preset = COALESCE(?, modem_preset)
WHERE node_id = ?
SQL
updated ||= db.changes.positive?
end
if updated
debug_log(
"Updated node last seen timestamp",
context: "data_processing.touch_node_last_seen",
node_id: node_id,
timestamp: timestamp,
source: source || :unknown,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
end
updated
end
# Insert or update a node row from an inbound NodeInfo-style payload.
#
# @param db [SQLite3::Database] open database handle.
# @param node_id [String] canonical node identifier.
# @param n [Hash] node payload extracted from the ingestor.
# @param protocol [String] protocol identifier (default +meshtastic+).
# @return [void]
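# @example Upserting from a NodeInfo-style payload (illustrative field values)
#   upsert_node(db, "!a1b2c3d4", {
#     "user" => { "shortName" => "N1", "longName" => "Node One", "role" => "CLIENT" },
#     "lastHeard" => Time.now.to_i,
#   })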
def upsert_node(db, node_id, n, protocol: "meshtastic")
user = n["user"] || {}
met = n["deviceMetrics"] || {}
pos = n["position"] || {}
# nil when user info absent; COALESCE in the conflict clause preserves
# the stored role rather than overwriting with a default.
role = user["role"]
lh = coerce_integer(n["lastHeard"])
pt = coerce_integer(pos["time"])
now = Time.now.to_i
pt = nil if pt && pt > now
lh = now if lh && lh > now
# 0 is truthy in Ruby — `lh ||= now` won't replace it, leaving the
# 7-day list filter to evaluate `0 >= now-7days` → false (node hidden).
lh = nil if lh && lh <= 0
# position.time = 0 means no GPS fix; skip it as a last_heard anchor
# (would re-introduce the same zero-timestamp exclusion bug for lh).
lh = pt if pt && pt > 0 && (!lh || lh < pt)
lh ||= now
node_num = resolve_node_num(node_id, n)
update_prometheus_metrics(node_id, user, role, met, pos)
lora_freq = coerce_integer(n["lora_freq"] || n["loraFrequency"])
modem_preset = string_or_nil(n["modem_preset"] || n["modemPreset"])
# Synthetic flag: true for placeholder nodes created from channel message
# sender names before the real contact advertisement is received.
synthetic = user["synthetic"] ? 1 : 0
long_name = user["longName"]
# If the incoming long name is a generic placeholder, prefer any real
# name already on record so we never stomp known data with fallback
# text. For new nodes there is nothing to preserve, so the generic
# name is still written via the INSERT VALUES path.
long_name_conflict_sql = if generic_fallback_name?(long_name, node_id, protocol)
# Generic placeholder: keep any real name already on record.
# COALESCE returns nodes.long_name when non-null, otherwise falls
# back to the incoming generic — so brand-new nodes still get it.
"COALESCE(nodes.long_name, excluded.long_name)"
else
# Real name (or nil): use the incoming value, preserving the
# existing name only when the incoming value is nil. A nil
# long_name in the packet carries no information, so falling back
# to what we already have is better than overwriting with NULL.
"COALESCE(excluded.long_name, nodes.long_name)"
end
row = [
node_id,
node_num,
user["shortName"],
long_name,
user["macaddr"],
user["hwModel"] || n["hwModel"],
role,
user["publicKey"],
coerce_bool(user["isUnmessagable"]),
coerce_bool(n["isFavorite"]),
n["hopsAway"],
n["snr"],
lh,
lh,
met["batteryLevel"],
met["voltage"],
met["channelUtilization"],
met["airUtilTx"],
met["uptimeSeconds"],
pt,
pos["locationSource"],
coerce_integer(
pos["precisionBits"] ||
pos["precision_bits"] ||
pos.dig("raw", "precision_bits"),
),
pos["latitude"],
pos["longitude"],
pos["altitude"],
lora_freq,
modem_preset,
protocol,
synthetic,
]
with_busy_retry do
db.transaction do
db.execute(<<~SQL, row)
INSERT INTO nodes(node_id,num,short_name,long_name,macaddr,hw_model,role,public_key,is_unmessagable,is_favorite,
hops_away,snr,last_heard,first_heard,battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,
position_time,location_source,precision_bits,latitude,longitude,altitude,lora_freq,modem_preset,protocol,synthetic)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
num=COALESCE(excluded.num, nodes.num),
short_name=COALESCE(excluded.short_name, nodes.short_name),
long_name=#{long_name_conflict_sql},
macaddr=COALESCE(excluded.macaddr, nodes.macaddr),
hw_model=COALESCE(excluded.hw_model, nodes.hw_model),
role=COALESCE(excluded.role, nodes.role),
public_key=COALESCE(excluded.public_key, nodes.public_key),
is_unmessagable=COALESCE(excluded.is_unmessagable, nodes.is_unmessagable),
is_favorite=excluded.is_favorite, hops_away=excluded.hops_away, snr=excluded.snr, last_heard=excluded.last_heard,
first_heard=COALESCE(nodes.first_heard, excluded.first_heard, excluded.last_heard),
battery_level=excluded.battery_level, voltage=excluded.voltage, channel_utilization=excluded.channel_utilization,
air_util_tx=excluded.air_util_tx, uptime_seconds=excluded.uptime_seconds,
position_time=COALESCE(excluded.position_time, nodes.position_time),
location_source=COALESCE(excluded.location_source, nodes.location_source),
precision_bits=COALESCE(excluded.precision_bits, nodes.precision_bits),
latitude=COALESCE(excluded.latitude, nodes.latitude),
longitude=COALESCE(excluded.longitude, nodes.longitude),
altitude=COALESCE(excluded.altitude, nodes.altitude),
lora_freq=excluded.lora_freq, modem_preset=excluded.modem_preset,
protocol=COALESCE(NULLIF(nodes.protocol,'meshtastic'), excluded.protocol),
synthetic=MIN(COALESCE(excluded.synthetic,1), COALESCE(nodes.synthetic,1))
WHERE COALESCE(excluded.last_heard,0) >= COALESCE(nodes.last_heard,0)
AND NOT (COALESCE(nodes.synthetic,0) = 0 AND excluded.synthetic = 1)
SQL
# Reconcile synthetic placeholder rows with their real counterparts
# whenever a MeshCore node is upserted. Both directions must fire —
# the arrival order of chat messages vs contact advertisements is
# not guaranteed and may differ across co-operating ingestors that
# share this database. See issue #755.
if protocol == "meshcore" && long_name && !long_name.empty?
if synthetic == 0
merge_synthetic_nodes(db, node_id, long_name)
else
merge_into_real_node(db, node_id, long_name)
end
end
end
end
end
# Migrate messages from synthetic placeholder nodes to a newly confirmed
# real node, then remove the placeholders.
#
# Called inside a transaction from +upsert_node+ when a real (non-synthetic)
# MeshCore node with the same +long_name+ is upserted.
#
# Only +messages.from_id+ is migrated. Synthetic nodes are placeholders
# created solely from parsed channel message sender names, so they cannot
# have associated positions, telemetry, neighbors, or traces — those tables
# are intentionally left untouched.
#
# @param db [SQLite3::Database] open database connection.
# @param real_node_id [String] canonical node ID for the real contact.
# @param long_name [String] long name to match against synthetic rows.
# @return [void]
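# @example Folding chat-derived placeholders into a confirmed contact (illustrative IDs)
#   # Normally invoked from upsert_node inside its transaction, not directly.
#   merge_synthetic_nodes(db, "!a1b2c3d4", "Node One")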
def merge_synthetic_nodes(db, real_node_id, long_name)
# long_name is user-editable and not unique across pubkeys — two real
# meshcore devices can legitimately share the same display name. When
# that happens we cannot tell which real node a given chat-derived
# synthetic was acting as placeholder for, so any merge would risk
# mis-attributing messages. Bail out and leave the synthetic intact.
other_real = db.execute(
"SELECT 1 FROM nodes WHERE long_name = ? AND synthetic = 0 AND protocol = 'meshcore' AND node_id != ? LIMIT 1",
[long_name, real_node_id],
).first
return if other_real
synthetic_ids = db.execute(
"SELECT node_id FROM nodes WHERE long_name = ? AND synthetic = 1 AND protocol = 'meshcore' AND node_id != ?",
[long_name, real_node_id],
).map { |row| row.is_a?(Hash) ? row["node_id"] : row[0] }
synthetic_ids.each do |synthetic_id|
db.execute(
"UPDATE messages SET from_id = ? WHERE from_id = ?",
[real_node_id, synthetic_id],
)
db.execute(
"DELETE FROM nodes WHERE node_id = ? AND synthetic = 1",
[synthetic_id],
)
end
end
# Reverse of +merge_synthetic_nodes+: when a synthetic placeholder is
# upserted for a MeshCore sender whose real contact advertisement has
# already been stored (e.g. by a co-operating ingestor that saw the
# advertisement first), migrate any messages from the synthetic id to the
# real id and drop the synthetic row.
#
# Fixes duplication bug #755 where a chat-derived synthetic node and a
# pubkey-derived real node coexisted because the forward merge only fired
# on real-node upserts and never back-filled late-arriving synthetics.
#
# @param db [SQLite3::Database] open database connection.
# @param synthetic_node_id [String] canonical node ID of the synthetic placeholder being upserted.
# @param long_name [String] long name to match against existing real rows.
# @return [void]
def merge_into_real_node(db, synthetic_node_id, long_name)
# Rows may be positional arrays or hashes depending on whether the db
# handle was opened with results_as_hash = true, so support both shapes
# below, matching the pattern used elsewhere in this module.
real_rows = db.execute(
"SELECT node_id FROM nodes WHERE long_name = ? AND synthetic = 0 AND protocol = 'meshcore' AND node_id != ? LIMIT 2",
[long_name, synthetic_node_id],
)
# Ambiguous name: two distinct real meshcore devices share this
# long_name. The synthetic placeholder could legitimately represent
# either, so we cannot pick one without risking mis-attribution. Leave
# the synthetic in place; an operator can resolve the duplicate
# manually.
return if real_rows.length > 1
row = real_rows.first
return unless row
real_node_id = row.is_a?(Hash) ? row["node_id"] : row[0]
return unless real_node_id
db.execute(
"UPDATE messages SET from_id = ? WHERE from_id = ?",
[real_node_id, synthetic_node_id],
)
db.execute(
"DELETE FROM nodes WHERE node_id = ? AND synthetic = 1",
[synthetic_node_id],
)
end
# Update node row columns from a freshly observed position record.
#
# @param db [SQLite3::Database] open database handle.
# @param node_id [String, nil] canonical node identifier.
# @param node_num [Integer, nil] numeric node identifier.
# @param rx_time [Integer, nil] receive time.
# @param position_time [Integer, nil] timestamp from the position payload.
# @param location_source [String, nil] +location_source+ enum value.
# @param precision_bits [Integer, nil] horizontal precision bits.
# @param latitude [Float, nil] decoded latitude.
# @param longitude [Float, nil] decoded longitude.
# @param altitude [Float, nil] decoded altitude.
# @param snr [Float, nil] signal-to-noise ratio.
# @return [void]
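# @example Recording a decoded position fix (illustrative coordinates and values)
#   update_node_from_position(db, "!a1b2c3d4", nil, Time.now.to_i,
#                             Time.now.to_i, "LOC_INTERNAL", 32,
#                             52.5200, 13.4050, 34.0, 6.5)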
def update_node_from_position(db, node_id, node_num, rx_time, position_time, location_source, precision_bits, latitude, longitude, altitude, snr)
num = coerce_integer(node_num)
id = string_or_nil(node_id)
if id&.start_with?("!")
id = "!#{id.delete_prefix("!").downcase}"
end
id ||= format("!%08x", num & 0xFFFFFFFF) if num
return unless id
now = Time.now.to_i
rx = coerce_integer(rx_time) || now
rx = now if rx && rx > now
pos_time = coerce_integer(position_time)
pos_time = nil if pos_time && pos_time > now
last_heard = [rx, pos_time].compact.max || rx
last_heard = now if last_heard && last_heard > now
loc = string_or_nil(location_source)
lat = coerce_float(latitude)
lon = coerce_float(longitude)
alt = coerce_float(altitude)
precision = coerce_integer(precision_bits)
snr_val = coerce_float(snr)
update_prometheus_metrics(node_id, nil, nil, nil, {
"latitude" => lat,
"longitude" => lon,
"altitude" => alt,
})
row = [
id,
num,
last_heard,
last_heard,
pos_time,
loc,
precision,
lat,
lon,
alt,
snr_val,
]
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO nodes(node_id,num,last_heard,first_heard,position_time,location_source,precision_bits,latitude,longitude,altitude,snr)
VALUES (?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
num=COALESCE(excluded.num,nodes.num),
snr=COALESCE(excluded.snr,nodes.snr),
last_heard=MAX(COALESCE(nodes.last_heard,0),COALESCE(excluded.last_heard,0)),
first_heard=COALESCE(nodes.first_heard, excluded.first_heard, excluded.last_heard),
position_time=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
THEN excluded.position_time
ELSE nodes.position_time
END,
location_source=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.location_source IS NOT NULL
THEN excluded.location_source
ELSE nodes.location_source
END,
precision_bits=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.precision_bits IS NOT NULL
THEN excluded.precision_bits
ELSE nodes.precision_bits
END,
latitude=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.latitude IS NOT NULL
THEN excluded.latitude
ELSE nodes.latitude
END,
longitude=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.longitude IS NOT NULL
THEN excluded.longitude
ELSE nodes.longitude
END,
altitude=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.altitude IS NOT NULL
THEN excluded.altitude
ELSE nodes.altitude
END
SQL
end
end
# Update node columns based on metrics included in a telemetry packet.
#
# @param db [SQLite3::Database] open database handle.
# @param node_id [String, nil] canonical node identifier.
# @param node_num [Integer, nil] numeric node identifier.
# @param rx_time [Integer, nil] receive time used as +last_heard+.
# @param metrics [Hash] decoded telemetry metric map.
# @param lora_freq [Integer, nil] optional LoRa frequency.
# @param modem_preset [String, nil] optional modem preset.
# @param protocol [String] protocol identifier (default +meshtastic+).
# @return [void]
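# @example Applying device metrics from a telemetry packet (illustrative values)
#   # Metric keys may be symbols or strings; both are read below.
#   update_node_from_telemetry(db, "!a1b2c3d4", nil, Time.now.to_i,
#                              { battery_level: 87, voltage: 4.01 })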
def update_node_from_telemetry(
db,
node_id,
node_num,
rx_time,
metrics = {},
lora_freq: nil,
modem_preset: nil,
protocol: "meshtastic"
)
num = coerce_integer(node_num)
id = string_or_nil(node_id)
if id&.start_with?("!")
id = "!#{id.delete_prefix("!").downcase}"
end
id ||= format("!%08x", num & 0xFFFFFFFF) if num
return unless id
ensure_unknown_node(db, id, num, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(
db,
id,
num,
rx_time: rx_time,
source: :telemetry,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
battery = coerce_float(metrics[:battery_level] || metrics["battery_level"])
voltage = coerce_float(metrics[:voltage] || metrics["voltage"])
channel_util = coerce_float(metrics[:channel_utilization] || metrics["channel_utilization"])
air_util_tx = coerce_float(metrics[:air_util_tx] || metrics["air_util_tx"])
uptime = coerce_integer(metrics[:uptime_seconds] || metrics["uptime_seconds"])
update_prometheus_metrics(node_id, nil, nil, {
"batteryLevel" => battery,
"voltage" => voltage,
"uptimeSeconds" => uptime,
"channelUtilization" => channel_util,
"airUtilTx" => air_util_tx,
}, nil)
assignments = []
params = []
if num
assignments << "num = ?"
params << num
end
metric_updates = {
"battery_level" => battery,
"voltage" => voltage,
"channel_utilization" => channel_util,
"air_util_tx" => air_util_tx,
"uptime_seconds" => uptime,
}
metric_updates.each do |column, value|
next if value.nil?
assignments << "#{column} = ?"
params << value
end
return if assignments.empty?
assignments_sql = assignments.join(", ")
params << id
with_busy_retry do
db.execute("UPDATE nodes SET #{assignments_sql} WHERE node_id = ?", params)
end
end
end
end
end
@@ -0,0 +1,226 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Persist a position payload, populate the +nodes+ table for newly seen
# senders, and update node rows with the freshest GPS fields.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] inbound position payload.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
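# @example Hypothetical position payload (shape assumed from the field reads below)
#   insert_position(db, {
#     "id" => 123456789,
#     "from_id" => "!a1b2c3d4",
#     "rx_time" => Time.now.to_i,
#     "position" => { "latitudeI" => 525200000, "longitudeI" => 134050000 },
#   })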
def insert_position(db, payload, protocol_cache: nil)
pos_id = coerce_integer(payload["id"] || payload["packet_id"])
return unless pos_id
now = Time.now.to_i
rx_time = coerce_integer(payload["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
rx_iso = string_or_nil(payload["rx_iso"])
rx_iso ||= Time.at(rx_time).utc.iso8601
raw_node_id = payload["node_id"] || payload["from_id"] || payload["from"]
raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])
canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
if canonical_parts
node_id, node_num, = canonical_parts
else
node_id = string_or_nil(raw_node_id)
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
node_id ||= format("!%08x", raw_node_num & 0xFFFFFFFF) if raw_node_num
payload_for_num = payload.is_a?(Hash) ? payload.dup : {}
payload_for_num["num"] ||= raw_node_num if raw_node_num
node_num = resolve_node_num(node_id, payload_for_num)
node_num ||= raw_node_num
canonical = normalize_node_id(db, node_id || node_num)
node_id = canonical if canonical
end
lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
ensure_unknown_node(db, node_id || node_num, node_num, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(
db,
node_id || node_num,
node_num,
rx_time: rx_time,
source: :position,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
to_id = string_or_nil(payload["to_id"] || payload["to"])
position_section = payload["position"].is_a?(Hash) ? payload["position"] : {}
lat = coerce_float(payload["latitude"]) || coerce_float(position_section["latitude"])
lon = coerce_float(payload["longitude"]) || coerce_float(position_section["longitude"])
alt = coerce_float(payload["altitude"]) || coerce_float(position_section["altitude"])
lat ||= begin
lat_i = coerce_integer(position_section["latitudeI"] || position_section["latitude_i"] || position_section.dig("raw", "latitude_i"))
lat_i ? lat_i / 1e7 : nil
end
lon ||= begin
lon_i = coerce_integer(position_section["longitudeI"] || position_section["longitude_i"] || position_section.dig("raw", "longitude_i"))
lon_i ? lon_i / 1e7 : nil
end
alt ||= coerce_float(position_section.dig("raw", "altitude"))
position_time = coerce_integer(
payload["position_time"] ||
position_section["time"] ||
position_section.dig("raw", "time"),
)
location_source = string_or_nil(
payload["location_source"] ||
payload["locationSource"] ||
position_section["location_source"] ||
position_section["locationSource"] ||
position_section.dig("raw", "location_source"),
)
precision_bits = coerce_integer(
payload["precision_bits"] ||
payload["precisionBits"] ||
position_section["precision_bits"] ||
position_section["precisionBits"] ||
position_section.dig("raw", "precision_bits"),
)
sats_in_view = coerce_integer(
payload["sats_in_view"] ||
payload["satsInView"] ||
position_section["sats_in_view"] ||
position_section["satsInView"] ||
position_section.dig("raw", "sats_in_view"),
)
pdop = coerce_float(
payload["pdop"] ||
payload["PDOP"] ||
position_section["pdop"] ||
position_section["PDOP"] ||
position_section.dig("raw", "PDOP") ||
position_section.dig("raw", "pdop"),
)
ground_speed = coerce_float(
payload["ground_speed"] ||
payload["groundSpeed"] ||
position_section["ground_speed"] ||
position_section["groundSpeed"] ||
position_section.dig("raw", "ground_speed"),
)
ground_track = coerce_float(
payload["ground_track"] ||
payload["groundTrack"] ||
position_section["ground_track"] ||
position_section["groundTrack"] ||
position_section.dig("raw", "ground_track"),
)
snr = coerce_float(payload["snr"] || payload["rx_snr"] || payload["rxSnr"])
rssi = coerce_integer(payload["rssi"] || payload["rx_rssi"] || payload["rxRssi"])
hop_limit = coerce_integer(payload["hop_limit"] || payload["hopLimit"])
bitfield = coerce_integer(payload["bitfield"])
payload_b64 = string_or_nil(payload["payload_b64"] || payload["payload"])
payload_b64 ||= string_or_nil(position_section.dig("payload", "__bytes_b64__"))
row = [
pos_id,
node_id,
node_num,
rx_time,
rx_iso,
position_time,
to_id,
lat,
lon,
alt,
location_source,
precision_bits,
sats_in_view,
pdop,
ground_speed,
ground_track,
snr,
rssi,
hop_limit,
bitfield,
payload_b64,
ingestor,
protocol,
]
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO positions(id,node_id,node_num,rx_time,rx_iso,position_time,to_id,latitude,longitude,altitude,location_source,
precision_bits,sats_in_view,pdop,ground_speed,ground_track,snr,rssi,hop_limit,bitfield,payload_b64,ingestor,protocol)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(id) DO UPDATE SET
node_id=COALESCE(excluded.node_id,positions.node_id),
node_num=COALESCE(excluded.node_num,positions.node_num),
rx_time=excluded.rx_time,
rx_iso=excluded.rx_iso,
position_time=COALESCE(excluded.position_time,positions.position_time),
to_id=COALESCE(excluded.to_id,positions.to_id),
latitude=COALESCE(excluded.latitude,positions.latitude),
longitude=COALESCE(excluded.longitude,positions.longitude),
altitude=COALESCE(excluded.altitude,positions.altitude),
location_source=COALESCE(excluded.location_source,positions.location_source),
precision_bits=COALESCE(excluded.precision_bits,positions.precision_bits),
sats_in_view=COALESCE(excluded.sats_in_view,positions.sats_in_view),
pdop=COALESCE(excluded.pdop,positions.pdop),
ground_speed=COALESCE(excluded.ground_speed,positions.ground_speed),
ground_track=COALESCE(excluded.ground_track,positions.ground_track),
snr=COALESCE(excluded.snr,positions.snr),
rssi=COALESCE(excluded.rssi,positions.rssi),
hop_limit=COALESCE(excluded.hop_limit,positions.hop_limit),
bitfield=COALESCE(excluded.bitfield,positions.bitfield),
payload_b64=COALESCE(excluded.payload_b64,positions.payload_b64),
ingestor=COALESCE(NULLIF(positions.ingestor,''), excluded.ingestor),
protocol=COALESCE(NULLIF(positions.protocol,'meshtastic'), excluded.protocol)
SQL
end
update_node_from_position(
db,
node_id,
node_num,
rx_time,
position_time,
location_source,
precision_bits,
lat,
lon,
alt,
snr,
)
end
end
end
end
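The latitude/longitude fallback above divides Meshtastic's fixed-point `latitudeI`/`longitudeI` integers by 1e7 when no float field is present. A minimal standalone sketch of that decision (the helper name is hypothetical):

```ruby
# Hypothetical sketch of the coordinate fallback: the float field is
# preferred, and the scaled integer form is divided down only when the
# float is absent. Meshtastic encodes coordinates as signed ints * 1e7.
def decode_coordinate(float_value, scaled_int)
  return Float(float_value) if float_value
  return scaled_int / 1e7 if scaled_int
  nil
end
```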
@@ -0,0 +1,50 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Look up the protocol registered by a given ingestor node.
#
# @param db [SQLite3::Database] open database handle.
# @param ingestor_node_id [String, nil] the node_id of the reporting ingestor.
# @param cache [Hash, nil] optional per-request memoization hash; pass a shared
# Hash instance across a batch to avoid redundant DB lookups per record.
# @return [String] protocol string; defaults to "meshtastic" when absent or unknown.
def resolve_protocol(db, ingestor_node_id, cache: nil)
return "meshtastic" if ingestor_node_id.nil? || ingestor_node_id.to_s.strip.empty?
if cache
return cache[ingestor_node_id] if cache.key?(ingestor_node_id)
result = db.get_first_value(
"SELECT protocol FROM ingestors WHERE node_id = ? LIMIT 1",
[ingestor_node_id],
) || "meshtastic"
cache[ingestor_node_id] = result
return result
end
db.get_first_value(
"SELECT protocol FROM ingestors WHERE node_id = ? LIMIT 1",
[ingestor_node_id],
) || "meshtastic"
end
private :resolve_protocol
end
end
end
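The `cache:` parameter above memoizes one lookup per ingestor across a batch. A standalone sketch of that memoization shape with a stubbed database handle (`FakeDb` is a hypothetical stand-in for `SQLite3::Database`):

```ruby
# Hypothetical stand-in for SQLite3::Database exposing get_first_value.
class FakeDb
  attr_reader :lookups

  def initialize(rows)
    @rows = rows
    @lookups = 0
  end

  def get_first_value(_sql, params)
    @lookups += 1
    @rows[params.first]
  end
end

# Mirror of the memoization pattern in resolve_protocol: a shared Hash
# caches the per-ingestor answer so a batch performs at most one lookup
# per distinct ingestor node_id.
def resolve_protocol(db, ingestor_node_id, cache: nil)
  return "meshtastic" if ingestor_node_id.nil? || ingestor_node_id.to_s.strip.empty?
  lookup = -> { db.get_first_value("SELECT protocol FROM ingestors WHERE node_id = ? LIMIT 1", [ingestor_node_id]) || "meshtastic" }
  return lookup.call unless cache
  cache.key?(ingestor_node_id) ? cache[ingestor_node_id] : (cache[ingestor_node_id] = lookup.call)
end
```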
@@ -0,0 +1,69 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Halt the current request with HTTP 403 unless the request carries a
# bearer token that securely matches +API_TOKEN+.
#
# @return [void]
def require_token!
token = ENV["API_TOKEN"]
provided = request.env["HTTP_AUTHORIZATION"].to_s.sub(/^Bearer\s+/i, "")
halt 403, { error: "Forbidden" }.to_json unless token && !token.empty? && secure_token_match?(token, provided)
end
# Constant-time comparison of two API tokens to mitigate timing attacks.
#
# @param expected [String] expected token from configuration.
# @param provided [String] token supplied by the client.
# @return [Boolean] true when the tokens match in constant time.
def secure_token_match?(expected, provided)
return false unless expected.is_a?(String) && provided.is_a?(String)
expected_bytes = expected.b
provided_bytes = provided.b
return false unless expected_bytes.bytesize == provided_bytes.bytesize
Rack::Utils.secure_compare(expected_bytes, provided_bytes)
rescue Rack::Utils::SecurityError
false
end
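`Rack::Utils.secure_compare` works by XOR-accumulating every byte pair so the comparison time does not depend on where the first mismatch occurs. A dependency-free sketch of the same technique:

```ruby
# Hypothetical XOR-accumulator comparison, the technique behind
# Rack::Utils.secure_compare: every byte is inspected regardless of where
# the first difference occurs, so timing reveals only the length check.
def constant_time_eql?(a, b)
  return false unless a.bytesize == b.bytesize
  diff = 0
  a.bytes.zip(b.bytes) { |x, y| diff |= x ^ y }
  diff.zero?
end
```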
# Read the request body up to a configured byte ceiling and halt with HTTP
# 413 when the payload exceeds the limit.
#
# @param limit [Integer, nil] optional override; falls back to
# +PotatoMesh::Config.max_json_body_bytes+ when nil or non-positive.
# @return [String] raw request body.
def read_json_body(limit: nil)
max_bytes = limit || PotatoMesh::Config.max_json_body_bytes
max_bytes = max_bytes.to_i
if max_bytes <= 0
max_bytes = PotatoMesh::Config.max_json_body_bytes
end
body = request.body.read(max_bytes + 1)
body = "" if body.nil?
halt 413, { error: "payload too large" }.to_json if body.bytesize > max_bytes
body
ensure
request.body.rewind if request.body.respond_to?(:rewind)
end
end
end
end
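`read_json_body` reads one byte past the ceiling so an oversize payload is detectable from `bytesize` alone, without buffering the remainder of the stream. The pattern in isolation, exercised against a StringIO (helper name hypothetical):

```ruby
require "stringio"

# Hypothetical sketch of the ceiling check in read_json_body: asking the
# stream for limit + 1 bytes means any body longer than the limit shows
# up as bytesize > max_bytes, and the rest is never read.
def read_capped(io, max_bytes)
  body = io.read(max_bytes + 1) || ""
  raise ArgumentError, "payload too large" if body.bytesize > max_bytes
  body
ensure
  io.rewind if io.respond_to?(:rewind)
end
```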
@@ -0,0 +1,547 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Ordered list of telemetry metric definitions consulted by
# +insert_telemetry+. Each entry is a tuple of
# +[column_name, coercion_type, key_map]+, where +key_map+ specifies the
# candidate field names for each source layer. Hoisted out of the method
# body to keep +insert_telemetry+ scannable; the data is otherwise
# identical to the inline definitions used previously.
TELEMETRY_METRIC_DEFINITIONS = [
[
"battery_level",
:float,
{
payload: %w[battery_level batteryLevel],
telemetry: %w[batteryLevel],
device: %w[battery_level batteryLevel],
environment: %w[battery_level batteryLevel],
},
],
[
"voltage",
:float,
{
payload: %w[voltage],
telemetry: %w[voltage],
device: %w[voltage],
environment: %w[voltage],
},
],
[
"channel_utilization",
:float,
{
payload: %w[channel_utilization channelUtilization],
telemetry: %w[channelUtilization],
device: %w[channel_utilization channelUtilization],
},
],
[
"air_util_tx",
:float,
{
payload: %w[air_util_tx airUtilTx],
telemetry: %w[airUtilTx],
device: %w[air_util_tx airUtilTx],
},
],
[
"uptime_seconds",
:integer,
{
payload: %w[uptime_seconds uptimeSeconds],
telemetry: %w[uptimeSeconds],
device: %w[uptime_seconds uptimeSeconds],
},
],
[
"temperature",
:float,
{
payload: %w[temperature temperatureC tempC],
telemetry: %w[temperature temperatureC tempC],
environment: %w[temperature temperatureC temperature_c tempC],
},
],
[
"relative_humidity",
:float,
{
payload: %w[relative_humidity relativeHumidity humidity],
telemetry: %w[relative_humidity relativeHumidity humidity],
environment: %w[relative_humidity relativeHumidity humidity],
},
],
[
"barometric_pressure",
:float,
{
payload: %w[barometric_pressure barometricPressure pressure],
telemetry: %w[barometric_pressure barometricPressure pressure],
environment: %w[barometric_pressure barometricPressure pressure],
},
],
[
"gas_resistance",
:float,
{
payload: %w[gas_resistance gasResistance],
telemetry: %w[gas_resistance gasResistance],
environment: %w[gas_resistance gasResistance],
},
],
[
"current",
:float,
{
payload: %w[current current_ma currentMa],
telemetry: %w[current current_ma currentMa],
device: %w[current current_ma currentMa],
environment: %w[current],
},
],
[
"iaq",
:integer,
{
payload: %w[iaq iaqIndex iaq_index],
telemetry: %w[iaq iaqIndex iaq_index],
environment: %w[iaq iaqIndex iaq_index],
},
],
[
"distance",
:float,
{
payload: %w[distance range rangeMeters],
telemetry: %w[distance range rangeMeters],
environment: %w[distance range rangeMeters],
},
],
[
"lux",
:float,
{
payload: %w[lux illuminance lightLux],
telemetry: %w[lux illuminance lightLux],
environment: %w[lux illuminance lightLux],
},
],
[
"white_lux",
:float,
{
payload: %w[white_lux whiteLux],
telemetry: %w[white_lux whiteLux],
environment: %w[white_lux whiteLux],
},
],
[
"ir_lux",
:float,
{
payload: %w[ir_lux irLux],
telemetry: %w[ir_lux irLux],
environment: %w[ir_lux irLux],
},
],
[
"uv_lux",
:float,
{
payload: %w[uv_lux uvLux uvIndex],
telemetry: %w[uv_lux uvLux uvIndex],
environment: %w[uv_lux uvLux uvIndex],
},
],
[
"wind_direction",
:integer,
{
payload: %w[wind_direction windDirection],
telemetry: %w[wind_direction windDirection],
environment: %w[wind_direction windDirection],
},
],
[
"wind_speed",
:float,
{
payload: %w[wind_speed windSpeed windSpeedMps],
telemetry: %w[wind_speed windSpeed windSpeedMps],
environment: %w[wind_speed windSpeed windSpeedMps],
},
],
[
"weight",
:float,
{
payload: %w[weight mass],
telemetry: %w[weight mass],
environment: %w[weight mass],
},
],
[
"wind_gust",
:float,
{
payload: %w[wind_gust windGust],
telemetry: %w[wind_gust windGust],
environment: %w[wind_gust windGust],
},
],
[
"wind_lull",
:float,
{
payload: %w[wind_lull windLull],
telemetry: %w[wind_lull windLull],
environment: %w[wind_lull windLull],
},
],
[
"radiation",
:float,
{
payload: %w[radiation radiationLevel],
telemetry: %w[radiation radiationLevel],
environment: %w[radiation radiationLevel],
},
],
[
"rainfall_1h",
:float,
{
payload: %w[rainfall_1h rainfall1h rainfallOneHour],
telemetry: %w[rainfall_1h rainfall1h rainfallOneHour],
environment: %w[rainfall_1h rainfall1h rainfallOneHour],
},
],
[
"rainfall_24h",
:float,
{
payload: %w[rainfall_24h rainfall24h rainfallTwentyFourHour],
telemetry: %w[rainfall_24h rainfall24h rainfallTwentyFourHour],
environment: %w[rainfall_24h rainfall24h rainfallTwentyFourHour],
},
],
[
"soil_moisture",
:integer,
{
payload: %w[soil_moisture soilMoisture],
telemetry: %w[soil_moisture soilMoisture],
environment: %w[soil_moisture soilMoisture],
},
],
[
"soil_temperature",
:float,
{
payload: %w[soil_temperature soilTemperature],
telemetry: %w[soil_temperature soilTemperature],
environment: %w[soil_temperature soilTemperature],
},
],
].freeze
# Resolve a telemetry metric from the provided data sources.
#
# @param key_map [Hash{Symbol=>Array<String>}] ordered mapping of source names to candidate keys.
# @param sources [Hash{Symbol=>Hash}] data structures to search for metric values.
# @param type [Symbol] coercion strategy, +:float+ or +:integer+.
# @return [Numeric, nil] coerced metric value or nil when no candidates exist.
def resolve_numeric_metric(key_map, sources, type)
key_map.each do |source, keys|
next if keys.nil? || keys.empty?
data = sources[source]
next unless data.is_a?(Hash)
keys.each do |name|
next if name.nil?
key = name.to_s
value = if data.key?(key)
data[key]
else
sym_key = key.to_sym
data.key?(sym_key) ? data[sym_key] : nil
end
next if value.nil?
coerced = case type
when :float
coerce_float(value)
when :integer
coerce_integer(value)
else
value
end
return coerced unless coerced.nil?
end
end
nil
end
private :resolve_numeric_metric
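`resolve_numeric_metric` walks the sources in `key_map` order, accepts both string and symbol keys, and returns the first candidate that coerces cleanly. A simplified float-only sketch, using `Kernel#Float` with `exception: false` in place of the project's `coerce_float` (names hypothetical):

```ruby
# Hypothetical float-only reduction of resolve_numeric_metric: sources
# are consulted in key_map order, string and symbol keys are both
# accepted, and the first cleanly coercible candidate wins.
def resolve_metric(key_map, sources)
  key_map.each do |source, keys|
    data = sources[source]
    next unless data.is_a?(Hash)
    keys.each do |key|
      value = data.key?(key) ? data[key] : data[key.to_sym]
      next if value.nil?
      coerced = Float(value, exception: false)
      return coerced unless coerced.nil?
    end
  end
  nil
end
```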
# Persist a telemetry packet and refresh the related node row.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] inbound telemetry payload.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
def insert_telemetry(db, payload, protocol_cache: nil)
return unless payload.is_a?(Hash)
telemetry_id = coerce_integer(payload["id"] || payload["packet_id"])
return unless telemetry_id
now = Time.now.to_i
rx_time = coerce_integer(payload["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
rx_iso = string_or_nil(payload["rx_iso"])
rx_iso ||= Time.at(rx_time).utc.iso8601
raw_node_id = payload["node_id"] || payload["from_id"] || payload["from"]
raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])
canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
if canonical_parts
node_id, node_num, = canonical_parts
else
node_id = string_or_nil(raw_node_id)
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
payload_for_num = payload.dup
payload_for_num["num"] ||= raw_node_num if raw_node_num
node_num = resolve_node_num(node_id, payload_for_num)
node_num ||= raw_node_num
canonical = normalize_node_id(db, node_id || node_num)
node_id = canonical if canonical
end
from_id = string_or_nil(payload["from_id"]) || node_id
to_id = string_or_nil(payload["to_id"] || payload["to"])
telemetry_time = coerce_integer(payload["telemetry_time"] || payload["time"] || payload.dig("telemetry", "time"))
telemetry_time = nil if telemetry_time && telemetry_time > now
channel = coerce_integer(payload["channel"])
portnum = string_or_nil(payload["portnum"])
hop_limit = coerce_integer(payload["hop_limit"] || payload["hopLimit"])
snr = coerce_float(payload["snr"])
rssi = coerce_integer(payload["rssi"])
bitfield = coerce_integer(payload["bitfield"])
payload_b64 = string_or_nil(payload["payload_b64"] || payload["payload"])
lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
telemetry_section = normalize_json_object(payload["telemetry"])
device_metrics = normalize_json_object(payload["device_metrics"] || payload["deviceMetrics"])
device_metrics ||= normalize_json_object(telemetry_section["deviceMetrics"]) if telemetry_section&.key?("deviceMetrics")
environment_metrics = normalize_json_object(payload["environment_metrics"] || payload["environmentMetrics"])
environment_metrics ||= normalize_json_object(telemetry_section["environmentMetrics"]) if telemetry_section&.key?("environmentMetrics")
power_metrics = normalize_json_object(payload["power_metrics"] || payload["powerMetrics"])
power_metrics ||= normalize_json_object(telemetry_section["powerMetrics"]) if telemetry_section&.key?("powerMetrics")
air_quality_metrics = normalize_json_object(payload["air_quality_metrics"] || payload["airQualityMetrics"])
air_quality_metrics ||= normalize_json_object(telemetry_section["airQualityMetrics"]) if telemetry_section&.key?("airQualityMetrics")
telemetry_type = string_or_nil(payload["telemetry_type"])
telemetry_type = nil unless VALID_TELEMETRY_TYPES.include?(telemetry_type)
telemetry_type ||= if device_metrics&.any?
"device"
elsif environment_metrics&.any?
"environment"
elsif power_metrics&.any?
"power"
elsif air_quality_metrics&.any?
"air_quality"
end
sources = {
payload: payload,
telemetry: telemetry_section,
device: device_metrics,
environment: environment_metrics,
}
metric_values = {}
TELEMETRY_METRIC_DEFINITIONS.each do |column, type, key_map|
value = resolve_numeric_metric(key_map, sources, type)
metric_values[column] = value unless value.nil?
end
battery_level = metric_values["battery_level"]
voltage = metric_values["voltage"]
channel_utilization = metric_values["channel_utilization"]
air_util_tx = metric_values["air_util_tx"]
uptime_seconds = metric_values["uptime_seconds"]
temperature = metric_values["temperature"]
relative_humidity = metric_values["relative_humidity"]
barometric_pressure = metric_values["barometric_pressure"]
gas_resistance = metric_values["gas_resistance"]
current = metric_values["current"]
iaq = metric_values["iaq"]
distance = metric_values["distance"]
lux = metric_values["lux"]
white_lux = metric_values["white_lux"]
ir_lux = metric_values["ir_lux"]
uv_lux = metric_values["uv_lux"]
wind_direction = metric_values["wind_direction"]
wind_speed = metric_values["wind_speed"]
weight = metric_values["weight"]
wind_gust = metric_values["wind_gust"]
wind_lull = metric_values["wind_lull"]
radiation = metric_values["radiation"]
rainfall_1h = metric_values["rainfall_1h"]
rainfall_24h = metric_values["rainfall_24h"]
soil_moisture = metric_values["soil_moisture"]
soil_temperature = metric_values["soil_temperature"]
row = [
telemetry_id,
node_id,
node_num,
from_id,
to_id,
rx_time,
rx_iso,
telemetry_time,
channel,
portnum,
hop_limit,
snr,
rssi,
bitfield,
payload_b64,
battery_level,
voltage,
channel_utilization,
air_util_tx,
uptime_seconds,
temperature,
relative_humidity,
barometric_pressure,
gas_resistance,
current,
iaq,
distance,
lux,
white_lux,
ir_lux,
uv_lux,
wind_direction,
wind_speed,
weight,
wind_gust,
wind_lull,
radiation,
rainfall_1h,
rainfall_24h,
soil_moisture,
soil_temperature,
ingestor,
protocol,
telemetry_type,
]
placeholders = Array.new(row.length, "?").join(",")
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO telemetry(id,node_id,node_num,from_id,to_id,rx_time,rx_iso,telemetry_time,channel,portnum,hop_limit,snr,rssi,bitfield,payload_b64,
battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,temperature,relative_humidity,barometric_pressure,gas_resistance,current,iaq,distance,lux,white_lux,ir_lux,uv_lux,wind_direction,wind_speed,weight,wind_gust,wind_lull,radiation,rainfall_1h,rainfall_24h,soil_moisture,soil_temperature,ingestor,protocol,telemetry_type)
VALUES (#{placeholders})
ON CONFLICT(id) DO UPDATE SET
node_id=COALESCE(excluded.node_id,telemetry.node_id),
node_num=COALESCE(excluded.node_num,telemetry.node_num),
from_id=COALESCE(excluded.from_id,telemetry.from_id),
to_id=COALESCE(excluded.to_id,telemetry.to_id),
rx_time=excluded.rx_time,
rx_iso=excluded.rx_iso,
telemetry_time=COALESCE(excluded.telemetry_time,telemetry.telemetry_time),
channel=COALESCE(excluded.channel,telemetry.channel),
portnum=COALESCE(excluded.portnum,telemetry.portnum),
hop_limit=COALESCE(excluded.hop_limit,telemetry.hop_limit),
snr=COALESCE(excluded.snr,telemetry.snr),
rssi=COALESCE(excluded.rssi,telemetry.rssi),
bitfield=COALESCE(excluded.bitfield,telemetry.bitfield),
payload_b64=COALESCE(excluded.payload_b64,telemetry.payload_b64),
battery_level=COALESCE(excluded.battery_level,telemetry.battery_level),
voltage=COALESCE(excluded.voltage,telemetry.voltage),
channel_utilization=COALESCE(excluded.channel_utilization,telemetry.channel_utilization),
air_util_tx=COALESCE(excluded.air_util_tx,telemetry.air_util_tx),
uptime_seconds=COALESCE(excluded.uptime_seconds,telemetry.uptime_seconds),
temperature=COALESCE(excluded.temperature,telemetry.temperature),
relative_humidity=COALESCE(excluded.relative_humidity,telemetry.relative_humidity),
barometric_pressure=COALESCE(excluded.barometric_pressure,telemetry.barometric_pressure),
gas_resistance=COALESCE(excluded.gas_resistance,telemetry.gas_resistance),
current=COALESCE(excluded.current,telemetry.current),
iaq=COALESCE(excluded.iaq,telemetry.iaq),
distance=COALESCE(excluded.distance,telemetry.distance),
lux=COALESCE(excluded.lux,telemetry.lux),
white_lux=COALESCE(excluded.white_lux,telemetry.white_lux),
ir_lux=COALESCE(excluded.ir_lux,telemetry.ir_lux),
uv_lux=COALESCE(excluded.uv_lux,telemetry.uv_lux),
wind_direction=COALESCE(excluded.wind_direction,telemetry.wind_direction),
wind_speed=COALESCE(excluded.wind_speed,telemetry.wind_speed),
weight=COALESCE(excluded.weight,telemetry.weight),
wind_gust=COALESCE(excluded.wind_gust,telemetry.wind_gust),
wind_lull=COALESCE(excluded.wind_lull,telemetry.wind_lull),
radiation=COALESCE(excluded.radiation,telemetry.radiation),
rainfall_1h=COALESCE(excluded.rainfall_1h,telemetry.rainfall_1h),
rainfall_24h=COALESCE(excluded.rainfall_24h,telemetry.rainfall_24h),
soil_moisture=COALESCE(excluded.soil_moisture,telemetry.soil_moisture),
soil_temperature=COALESCE(excluded.soil_temperature,telemetry.soil_temperature),
ingestor=COALESCE(NULLIF(telemetry.ingestor,''), excluded.ingestor),
protocol=COALESCE(NULLIF(telemetry.protocol,'meshtastic'), excluded.protocol),
telemetry_type=COALESCE(excluded.telemetry_type,telemetry.telemetry_type)
SQL
end
update_node_from_telemetry(
db,
node_id,
node_num,
rx_time,
{
battery_level: battery_level,
voltage: voltage,
channel_utilization: channel_utilization,
air_util_tx: air_util_tx,
uptime_seconds: uptime_seconds,
},
lora_freq: lora_freq,
modem_preset: modem_preset,
protocol: protocol,
)
end
end
end
end
@@ -0,0 +1,130 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Normalise a traceroute hop entry to a numeric node identifier.
#
# @param hop [Object] raw hop entry from the payload.
# @return [Integer, nil] coerced node ID or nil when the value is unusable.
def coerce_trace_node_id(hop)
case hop
when Integer
return hop
when Numeric
return hop.to_i
when String
trimmed = hop.strip
return nil if trimmed.empty?
return Integer(trimmed, 10) if trimmed.match?(/\A-?\d+\z/)
parts = canonical_node_parts(trimmed)
return parts[1] if parts
when Hash
candidate = hop["node_id"] || hop[:node_id] || hop["id"] || hop[:id] || hop["num"] || hop[:num]
return coerce_trace_node_id(candidate)
end
nil
end
# Extract hop identifiers from a traceroute payload preserving order.
#
# @param hops_value [Object] raw hops array or path collection.
# @return [Array<Integer>] ordered list of coerced hop identifiers.
def normalize_trace_hops(hops_value)
return [] if hops_value.nil?
hop_entries = hops_value.is_a?(Array) ? hops_value : [hops_value]
hop_entries.filter_map { |entry| coerce_trace_node_id(entry) }
end
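The two helpers above flatten heterogeneous hop entries (integers, numeric strings, nested hashes) into an ordered integer list, dropping anything unusable. A trimmed standalone sketch without the `canonical_node_parts` fallback (names hypothetical):

```ruby
# Hypothetical trimmed version of coerce_trace_node_id /
# normalize_trace_hops: integers pass through, numeric strings are
# parsed, hashes are unwrapped recursively, and unusable entries drop out.
def coerce_hop(hop)
  case hop
  when Integer then hop
  when Numeric then hop.to_i
  when String
    t = hop.strip
    t.match?(/\A-?\d+\z/) ? Integer(t, 10) : nil
  when Hash
    coerce_hop(hop["node_id"] || hop[:node_id] || hop["num"] || hop[:num])
  end
end

def normalize_hops(value)
  return [] if value.nil?
  entries = value.is_a?(Array) ? value : [value]
  entries.filter_map { |e| coerce_hop(e) }
end
```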
# Persist a traceroute observation and its hop path.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] traceroute payload as produced by the ingestor.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
def insert_trace(db, payload, protocol_cache: nil)
return unless payload.is_a?(Hash)
trace_identifier = coerce_integer(payload["id"] || payload["packet_id"] || payload["packetId"])
trace_identifier ||= coerce_integer(payload["trace_id"])
request_id = coerce_integer(payload["request_id"] || payload["req"])
trace_identifier ||= request_id
now = Time.now.to_i
rx_time = coerce_integer(payload["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
rx_iso = string_or_nil(payload["rx_iso"]) || Time.at(rx_time).utc.iso8601
metrics = normalize_json_object(payload["metrics"]) || {}
src = coerce_integer(payload["src"] || payload["source"] || payload["from"])
dest = coerce_integer(payload["dest"] || payload["destination"] || payload["to"])
rssi = coerce_integer(payload["rssi"]) || coerce_integer(metrics["rssi"])
snr = coerce_float(payload["snr"]) || coerce_float(metrics["snr"])
elapsed_ms = coerce_integer(
payload["elapsed_ms"] ||
payload["latency_ms"] ||
metrics["elapsed_ms"] ||
metrics["latency_ms"] ||
metrics["latencyMs"],
)
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
hops_value = payload.key?("hops") ? payload["hops"] : payload["path"]
hops = normalize_trace_hops(hops_value)
all_nodes = [src, dest, *hops].compact.uniq
all_nodes.each do |node|
ensure_unknown_node(db, node, node, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(db, node, node, rx_time: rx_time, source: :trace)
end
with_busy_retry do
db.execute <<~SQL, [trace_identifier, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor, protocol]
INSERT INTO traces(id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor, protocol)
VALUES(?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(id) DO UPDATE SET
request_id=COALESCE(excluded.request_id,traces.request_id),
src=COALESCE(excluded.src,traces.src),
dest=COALESCE(excluded.dest,traces.dest),
rx_time=excluded.rx_time,
rx_iso=excluded.rx_iso,
rssi=COALESCE(excluded.rssi,traces.rssi),
snr=COALESCE(excluded.snr,traces.snr),
elapsed_ms=COALESCE(excluded.elapsed_ms,traces.elapsed_ms),
ingestor=COALESCE(NULLIF(traces.ingestor,''), excluded.ingestor),
protocol=COALESCE(NULLIF(traces.protocol,'meshtastic'), excluded.protocol)
SQL
trace_id = trace_identifier || db.last_insert_row_id
return unless trace_id
db.execute("DELETE FROM trace_hops WHERE trace_id = ?", [trace_id])
hops.each_with_index do |hop_id, index|
db.execute(
"INSERT INTO trace_hops(trace_id, hop_index, node_id) VALUES(?,?,?)",
[trace_id, index, hop_id],
)
end
end
end
end
end
end
File diff suppressed because it is too large
@@ -0,0 +1,231 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Announce the local instance record to a remote federation peer,
# cycling through resolved IP addresses when transport-level failures
# occur.
#
# @param domain [String] remote peer hostname.
# @param payload_json [String] JSON-encoded announcement body.
# @return [Boolean] true when the announcement was accepted.
def announce_instance_to_domain(domain, payload_json)
return false unless domain && !domain.empty?
return false if federation_shutdown_requested?
https_failures = []
published = instance_uri_candidates(domain, "/api/instances").any? do |uri|
break false if federation_shutdown_requested?
begin
response = perform_announce_request(uri, payload_json)
if response.is_a?(Net::HTTPSuccess)
debug_log(
"Published federation announcement",
context: "federation.announce",
target: uri.to_s,
status: response.code,
)
true
else
debug_log(
"Federation announcement failed",
context: "federation.announce",
target: uri.to_s,
status: response.code,
)
false
end
rescue StandardError => e
metadata = {
context: "federation.announce",
target: uri.to_s,
error_class: e.class.name,
error_message: e.message,
}
if uri.scheme == "https" && https_connection_refused?(e)
debug_log(
"HTTPS federation announcement failed, retrying with HTTP",
**metadata,
)
https_failures << metadata
else
warn_log(
"Federation announcement raised exception",
**metadata,
)
end
false
end
end
unless published
https_failures.each do |metadata|
warn_log(
"Federation announcement raised exception",
**metadata,
)
end
end
published
end
# Execute a POST announcement request against the supplied URI, cycling
# through resolved IP addresses on connection-level failures.
#
# @param uri [URI::Generic] target endpoint.
# @param payload_json [String] JSON-encoded announcement body.
# @return [Net::HTTPResponse] the HTTP response from the first reachable address.
# @raise [StandardError] when all addresses fail or a non-retryable error occurs.
def perform_announce_request(uri, payload_json)
remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
addresses = remote_addresses.empty? ? [nil] : remote_addresses
last_error = nil
addresses.each do |address|
break if federation_shutdown_requested?
begin
return perform_single_announce_request(uri, payload_json, ip_address: address&.to_s)
rescue StandardError => e
if connection_refused_or_unreachable?(e)
last_error = e
else
raise
end
end
end
raise(last_error || StandardError.new("all resolved addresses failed"))
end
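`perform_announce_request` tries each resolved address in turn, remembers the last retryable transport error, and re-raises it only once every address has failed; any other exception aborts immediately. The pattern in isolation (names hypothetical):

```ruby
# Hypothetical reduction of the address-cycling loop in
# perform_announce_request: connection-level failures rotate to the next
# address, the last such error is re-raised once all candidates fail,
# and any non-retryable exception propagates immediately.
RETRYABLE_ERRORS = [Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Errno::ENETUNREACH].freeze

def try_each_address(addresses)
  last_error = nil
  addresses.each do |address|
    return yield(address)
  rescue *RETRYABLE_ERRORS => e
    last_error = e
  end
  raise(last_error || StandardError.new("all resolved addresses failed"))
end
```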
# Execute a single POST announcement request, optionally pinning the
# connection to a specific IP address.
#
# @param uri [URI::Generic] target endpoint.
# @param payload_json [String] JSON-encoded announcement body.
# @param ip_address [String, nil] resolved IP address to pin the
# connection to, or +nil+ to let {build_remote_http_client} resolve.
# @return [Net::HTTPResponse] the HTTP response.
# @raise [StandardError] when the request fails.
def perform_single_announce_request(uri, payload_json, ip_address: nil)
http = build_remote_http_client(uri, ip_address: ip_address)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Post, uri)
request.body = payload_json
connection.request(request)
end
end
end
# Run the periodic announcement cycle by signing the local payload and
# dispatching it (preferably via the worker pool) to every peer domain.
#
# @return [void]
def announce_instance_to_all_domains
return unless federation_enabled?
return if federation_shutdown_requested?
attributes, signature = ensure_self_instance_record!
payload_json = JSON.generate(instance_announcement_payload(attributes, signature))
domains = federation_target_domains(attributes[:domain])
pool = federation_worker_pool
scheduled = []
domains.each do |domain|
break if federation_shutdown_requested?
if pool
begin
task = pool.schedule do
announce_instance_to_domain(domain, payload_json)
end
scheduled << [domain, task]
next
rescue PotatoMesh::App::WorkerPool::QueueFullError
warn_log(
"Skipped asynchronous federation announcement",
context: "federation.announce",
domain: domain,
reason: "worker queue saturated",
)
rescue PotatoMesh::App::WorkerPool::ShutdownError
warn_log(
"Worker pool unavailable, falling back to synchronous announcement",
context: "federation.announce",
domain: domain,
)
pool = nil
end
end
announce_instance_to_domain(domain, payload_json)
end
wait_for_federation_tasks(scheduled)
unless domains.empty?
debug_log(
"Federation announcement cycle complete",
context: "federation.announce",
targets: domains,
)
end
end
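The schedule-or-fall-back dispatch above reduces to a small control-flow pattern. In this standalone sketch, `QueueFullError`/`ShutdownError` mirror the real worker-pool exceptions, and the pool object is assumed to raise them from `#schedule`; everything here is illustrative, not the actual `WorkerPool` API:

```ruby
# Sketch of the dispatch loop: a saturated queue degrades one domain to
# a synchronous announcement, while a shut-down pool disables async
# dispatch for the remainder of the cycle.
class QueueFullError < StandardError; end
class ShutdownError < StandardError; end

def dispatch_all(domains, pool)
  log = []
  domains.each do |domain|
    if pool
      begin
        pool.schedule { log << [:async, domain] }
        next
      rescue QueueFullError
        # Fall through: announce this one domain synchronously.
      rescue ShutdownError
        pool = nil # pool gone: stay synchronous for the rest of the cycle
      end
    end
    log << [:sync, domain]
  end
  log
end
```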
# Wait for scheduled federation tasks to complete while logging failures.
#
# @param scheduled [Array<(String, PotatoMesh::App::WorkerPool::Task)>] pairs of domains and tasks.
# @return [void]
def wait_for_federation_tasks(scheduled)
return if scheduled.empty?
timeout = PotatoMesh::Config.federation_task_timeout_seconds
scheduled.all? do |domain, task|
break false if federation_shutdown_requested?
begin
task.wait(timeout: timeout)
rescue PotatoMesh::App::WorkerPool::TaskTimeoutError => e
warn_log(
"Federation announcement task timed out",
context: "federation.announce",
domain: domain,
timeout: timeout,
error_class: e.class.name,
error_message: e.message,
)
rescue StandardError => e
warn_log(
"Federation announcement task failed",
context: "federation.announce",
domain: domain,
error_class: e.class.name,
error_message: e.message,
)
end
true
end
end
end
end
end
@@ -0,0 +1,98 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Spawn the long-running announcer thread that drives periodic federation
# broadcasts.
#
# @return [Thread, nil] the announcer thread, or nil when federation is disabled.
def start_federation_announcer!
# Federation broadcasts must not execute when federation support is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!
existing = settings.federation_thread
return existing if existing&.alive?
thread = Thread.new do
loop do
break unless federation_sleep_with_shutdown(PotatoMesh::Config.federation_announcement_interval)
begin
announce_instance_to_all_domains
rescue StandardError => e
warn_log(
"Federation announcement loop error",
context: "federation.announce",
error_class: e.class.name,
error_message: e.message,
)
end
end
end
thread.name = "potato-mesh-federation" if thread.respond_to?(:name=)
# Allow shutdown even if the announcement loop is still sleeping.
thread.daemon = true if thread.respond_to?(:daemon=)
set(:federation_thread, thread)
thread
end
# Launch a background thread responsible for the first federation broadcast.
#
# @return [Thread, nil] the thread handling the initial announcement.
def start_initial_federation_announcement!
# Skip the initial broadcast entirely when federation is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!
existing = settings.respond_to?(:initial_federation_thread) ? settings.initial_federation_thread : nil
return existing if existing&.alive?
thread = Thread.new do
begin
delay = PotatoMesh::Config.initial_federation_delay_seconds
if delay.positive?
completed = federation_sleep_with_shutdown(delay)
next unless completed
end
next if federation_shutdown_requested?
announce_instance_to_all_domains
rescue StandardError => e
warn_log(
"Initial federation announcement failed",
context: "federation.announce",
error_class: e.class.name,
error_message: e.message,
)
ensure
set(:initial_federation_thread, nil)
end
end
thread.name = "potato-mesh-federation-initial" if thread.respond_to?(:name=)
thread.report_on_exception = false if thread.respond_to?(:report_on_exception=)
# Avoid blocking process shutdown during delayed startup announcements.
thread.daemon = true if thread.respond_to?(:daemon=)
set(:initial_federation_thread, thread)
thread
end
end
end
end
@@ -0,0 +1,369 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Resolve the best matching active-node count from a remote /api/stats payload.
#
# @param payload [Hash, nil] decoded JSON payload from /api/stats.
# @param max_age_seconds [Integer] activity window, in seconds, used to gauge federation freshness.
# @return [Integer, nil] selected active-node count when available.
def remote_active_node_count_from_stats(payload, max_age_seconds:)
return nil unless payload.is_a?(Hash)
active_nodes = payload["active_nodes"]
return nil unless active_nodes.is_a?(Hash)
age = coerce_integer(max_age_seconds) || 0
key = if age <= 3600
"hour"
elsif age <= 86_400
"day"
elsif age <= PotatoMesh::Config.week_seconds
"week"
else
"month"
end
value = coerce_integer(active_nodes[key])
return nil unless value
[value, 0].max
end
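The bucketing above maps the freshness window onto the `/api/stats` key with the smallest window that still covers it. A hedged sketch, where `WEEK_SECONDS` is an assumed constant standing in for `PotatoMesh::Config.week_seconds`:

```ruby
# Map an activity window (seconds) to the matching active_nodes key.
WEEK_SECONDS = 7 * 86_400

def stats_window_key(age_seconds)
  age = age_seconds.to_i
  if age <= 3600
    "hour"
  elsif age <= 86_400
    "day"
  elsif age <= WEEK_SECONDS
    "week"
  else
    "month"
  end
end
```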
# Parse a remote federation instance payload into canonical attributes.
#
# @param payload [Hash] JSON object describing a remote instance.
# @return [Array(Hash, String, nil)] attribute hash, signature, and +nil+
#   reason when the payload is valid, or +[nil, nil, String]+ carrying the
#   failure reason when it is not.
def remote_instance_attributes_from_payload(payload)
unless payload.is_a?(Hash)
return [nil, nil, "instance payload is not an object"]
end
id = string_or_nil(payload["id"])
return [nil, nil, "missing instance id"] unless id
domain = sanitize_instance_domain(payload["domain"])
return [nil, nil, "missing instance domain"] unless domain
pubkey = sanitize_public_key_pem(payload["pubkey"])
return [nil, nil, "missing instance public key"] unless pubkey
signature = string_or_nil(payload["signature"])
return [nil, nil, "missing instance signature"] unless signature
private_value = if payload.key?("isPrivate")
payload["isPrivate"]
else
payload["is_private"]
end
private_flag = coerce_boolean(private_value)
if private_flag.nil?
numeric_flag = coerce_integer(private_value)
private_flag = !numeric_flag.to_i.zero? if numeric_flag
end
attributes = {
id: id,
domain: domain,
pubkey: pubkey,
name: string_or_nil(payload["name"]),
version: string_or_nil(payload["version"]),
channel: string_or_nil(payload["channel"]),
frequency: string_or_nil(payload["frequency"]),
latitude: coerce_float(payload["latitude"]),
longitude: coerce_float(payload["longitude"]),
last_update_time: coerce_integer(payload["lastUpdateTime"]),
is_private: private_flag,
contact_link: string_or_nil(payload["contactLink"]),
}
[attributes, signature, nil]
rescue StandardError => e
[nil, nil, e.message]
end
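The `isPrivate`/`is_private` handling above accepts booleans directly and treats non-zero integers as true. A hypothetical helper collapsing the `coerce_boolean`-then-`coerce_integer` cascade (the method name and the numeric-string branch are assumptions for illustration, not the real coercion helpers):

```ruby
# Coerce the private flag from either JSON key: booleans pass through,
# integers and numeric strings are truthy when non-zero, anything else
# yields nil (unknown).
def coerce_private_flag(payload)
  value = payload.key?("isPrivate") ? payload["isPrivate"] : payload["is_private"]
  case value
  when true, false then value
  when Integer then !value.zero?
  when /\A-?\d+\z/ then !value.to_i.zero?
  end
end
```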
# Enqueue a federation crawl for the supplied domain using the worker pool.
#
# @param domain [String] sanitized remote domain to crawl.
# @param per_response_limit [Integer, nil] maximum entries processed per response.
# @param overall_limit [Integer, nil] maximum unique domains visited.
# @return [Boolean] true when the crawl was scheduled successfully.
def enqueue_federation_crawl(domain, per_response_limit:, overall_limit:)
sanitized_domain = sanitize_instance_domain(domain)
unless sanitized_domain
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: "invalid domain",
)
return false
end
return false if federation_shutdown_requested?
application = is_a?(Class) ? self : self.class
pool = application.federation_worker_pool
unless pool
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
reason: "federation disabled",
)
return false
end
claim_result = application.claim_federation_crawl_slot(sanitized_domain)
unless claim_result == :claimed
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
reason: claim_result == :in_flight ? "crawl already in flight" : "recent crawl completed",
)
return false
end
pool.schedule do
db = nil
begin
db = application.open_database
application.ingest_known_instances_from!(
db,
sanitized_domain,
per_response_limit: per_response_limit,
overall_limit: overall_limit,
)
ensure
db&.close
application.release_federation_crawl_slot(sanitized_domain)
end
end
true
rescue PotatoMesh::App::WorkerPool::QueueFullError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker queue saturated")
rescue PotatoMesh::App::WorkerPool::ShutdownError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker pool shut down")
end
# Handle a failed crawl schedule attempt without applying cooldown.
#
# @param domain [String] canonical domain that failed to schedule.
# @param reason [String] human-readable failure reason.
# @return [Boolean] always false because scheduling did not succeed.
def handle_failed_federation_crawl_schedule(domain, reason)
release_federation_crawl_slot(domain, record_completion: false)
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: reason,
)
false
end
# Recursively ingest federation records exposed by the supplied domain.
#
# @param db [SQLite3::Database] open database connection used for writes.
# @param domain [String] remote domain to crawl for federation records.
# @param visited [Set<String>] domains processed during this crawl.
# @param per_response_limit [Integer, nil] maximum entries processed per response.
# @param overall_limit [Integer, nil] maximum unique domains visited.
# @return [Set<String>] updated set of visited domains.
def ingest_known_instances_from!(
db,
domain,
visited: nil,
per_response_limit: nil,
overall_limit: nil
)
sanitized = sanitize_instance_domain(domain)
return visited || Set.new unless sanitized
return visited || Set.new if federation_shutdown_requested?
visited ||= Set.new
overall_limit ||= PotatoMesh::Config.federation_max_domains_per_crawl
per_response_limit ||= PotatoMesh::Config.federation_max_instances_per_response
if overall_limit && overall_limit.positive? && visited.size >= overall_limit
debug_log(
"Skipped remote instance crawl due to crawl limit",
context: "federation.instances",
domain: sanitized,
limit: overall_limit,
)
return visited
end
return visited if visited.include?(sanitized)
visited << sanitized
payload, metadata = fetch_instance_json(sanitized, "/api/instances")
unless payload.is_a?(Array)
warn_log(
"Failed to load remote federation instances",
context: "federation.instances",
domain: sanitized,
reason: Array(metadata).map(&:to_s).join("; "),
)
return visited
end
processed_entries = 0
recent_cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
payload.each do |entry|
break if federation_shutdown_requested?
if per_response_limit && per_response_limit.positive? && processed_entries >= per_response_limit
debug_log(
"Skipped remote instance entry due to response limit",
context: "federation.instances",
domain: sanitized,
limit: per_response_limit,
)
break
end
if overall_limit && overall_limit.positive? && visited.size >= overall_limit
debug_log(
"Skipped remote instance entry due to crawl limit",
context: "federation.instances",
domain: sanitized,
limit: overall_limit,
)
break
end
processed_entries += 1
attributes, signature, reason = remote_instance_attributes_from_payload(entry)
unless attributes && signature
warn_log(
"Discarded remote instance entry",
context: "federation.instances",
domain: sanitized,
reason: reason || "invalid payload",
)
next
end
if attributes[:is_private]
debug_log(
"Skipped private remote instance",
context: "federation.instances",
domain: attributes[:domain],
)
next
end
unless verify_instance_signature(attributes, signature, attributes[:pubkey])
warn_log(
"Discarded remote instance entry",
context: "federation.instances",
domain: attributes[:domain],
reason: "invalid signature",
)
next
end
attributes[:is_private] = false if attributes[:is_private].nil?
stats_payload, stats_metadata = fetch_instance_json(attributes[:domain], "/api/stats")
stats_count = remote_active_node_count_from_stats(
stats_payload,
max_age_seconds: PotatoMesh::Config.remote_instance_max_node_age,
)
attributes[:nodes_count] = stats_count if stats_count
# Extract per-protocol 24h counts (informational, not signed).
if stats_payload.is_a?(Hash)
mc_day = stats_payload.dig("meshcore", "day")
mt_day = stats_payload.dig("meshtastic", "day")
attributes[:meshcore_nodes_count] = coerce_integer(mc_day) if mc_day
attributes[:meshtastic_nodes_count] = coerce_integer(mt_day) if mt_day
end
nodes_since_path = "/api/nodes?since=#{recent_cutoff}&limit=1000"
nodes_since_window, nodes_since_metadata = fetch_instance_json(attributes[:domain], nodes_since_path)
if stats_count.nil? && attributes[:nodes_count].nil? && nodes_since_window.is_a?(Array)
attributes[:nodes_count] = nodes_since_window.length
end
remote_nodes, node_metadata = fetch_instance_json(attributes[:domain], "/api/nodes")
remote_nodes = nodes_since_window if remote_nodes.nil? && nodes_since_window.is_a?(Array)
if attributes[:nodes_count].nil? && remote_nodes.is_a?(Array)
attributes[:nodes_count] = remote_nodes.length
end
if stats_count.nil? && Array(stats_metadata).any?
debug_log(
"Remote instance /api/stats unavailable; using node list fallback",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(stats_metadata).map(&:to_s).join("; "),
)
end
unless remote_nodes
warn_log(
"Failed to load remote node data",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(node_metadata || nodes_since_metadata).map(&:to_s).join("; "),
)
next
end
fresh, freshness_reason = validate_remote_nodes(remote_nodes)
unless fresh
warn_log(
"Discarded remote instance entry",
context: "federation.instances",
domain: attributes[:domain],
reason: freshness_reason || "stale node data",
)
next
end
begin
upsert_instance_record(db, attributes, signature)
ingest_known_instances_from!(
db,
attributes[:domain],
visited: visited,
per_response_limit: per_response_limit,
overall_limit: overall_limit,
)
rescue ArgumentError => e
warn_log(
"Failed to persist remote instance",
context: "federation.instances",
domain: attributes[:domain],
error_class: e.class.name,
error_message: e.message,
)
end
end
visited
end
end
end
end
@@ -0,0 +1,90 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Initialize shared in-memory state used to deduplicate crawl scheduling.
#
# @return [void]
def initialize_federation_crawl_state!
@federation_crawl_init_mutex ||= Mutex.new
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex
@federation_crawl_init_mutex.synchronize do
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex
@federation_crawl_mutex = Mutex.new
@federation_crawl_in_flight = Set.new
@federation_crawl_last_completed_at = {}
end
end
# Retrieve the cooldown period used for duplicate crawl suppression.
#
# @return [Integer] seconds a domain remains in cooldown after completion.
def federation_crawl_cooldown_seconds
PotatoMesh::Config.federation_crawl_cooldown_seconds
end
# Mark a domain crawl as claimed if no active or recent crawl exists.
#
# @param domain [String] canonical domain name.
# @return [Symbol] +:claimed+, +:in_flight+, or +:cooldown+.
def claim_federation_crawl_slot(domain)
initialize_federation_crawl_state!
now = Time.now.to_i
@federation_crawl_mutex.synchronize do
return :in_flight if @federation_crawl_in_flight.include?(domain)
last_completed = @federation_crawl_last_completed_at[domain]
if last_completed && now - last_completed < federation_crawl_cooldown_seconds
return :cooldown
end
@federation_crawl_in_flight << domain
:claimed
end
end
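The claim/cooldown bookkeeping above can be exercised standalone. This sketch wraps the same mutex-guarded state in a small class with an injectable clock and a fixed cooldown instead of the configured value; it is a behavioural model, not the production code path:

```ruby
require "set"

# Deduplicate crawl scheduling: one claim per domain at a time, plus a
# cooldown window after a recorded completion.
class CrawlSlots
  def initialize(cooldown:)
    @cooldown = cooldown
    @mutex = Mutex.new
    @in_flight = Set.new
    @last_completed_at = {}
  end

  def claim(domain, now: Time.now.to_i)
    @mutex.synchronize do
      return :in_flight if @in_flight.include?(domain)
      last = @last_completed_at[domain]
      return :cooldown if last && now - last < @cooldown
      @in_flight << domain
      :claimed
    end
  end

  def release(domain, now: Time.now.to_i, record_completion: true)
    @mutex.synchronize do
      @in_flight.delete(domain)
      @last_completed_at[domain] = now if record_completion
    end
  end
end
```

Releasing with `record_completion: false` (the failed-schedule path) skips the cooldown, so the domain becomes claimable again immediately.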
# Release an in-flight crawl claim and record completion timestamp.
#
# @param domain [String] canonical domain name.
# @param record_completion [Boolean] true to apply cooldown tracking.
# @return [void]
def release_federation_crawl_slot(domain, record_completion: true)
return unless domain
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.delete(domain)
@federation_crawl_last_completed_at[domain] = Time.now.to_i if record_completion
end
end
# Clear all in-memory crawl scheduling state.
#
# @return [void]
def clear_federation_crawl_state!
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.clear
@federation_crawl_last_completed_at.clear
end
end
end
end
end
@@ -0,0 +1,263 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Determine whether an HTTPS announcement failure should fall back to HTTP.
#
# @param error [StandardError] failure raised while attempting HTTPS.
# @return [Boolean] true when the error corresponds to a refused TCP connection.
def https_connection_refused?(error)
current = error
while current
return true if current.is_a?(Errno::ECONNREFUSED)
current = current.respond_to?(:cause) ? current.cause : nil
end
false
end
# Determine whether an error indicates a transport-level connection
# failure that may succeed on an alternative resolved address.
#
# Connection refusals, host/network unreachable errors, and TCP open
# timeouts signal that the selected IP address cannot be reached but
# do not rule out alternative addresses for the same hostname.
#
# @param error [StandardError] failure raised during the connection attempt.
# @return [Boolean] true when a retry with a different address is warranted.
def connection_refused_or_unreachable?(error)
retryable_classes = [
Errno::ECONNREFUSED,
Errno::EHOSTUNREACH,
Errno::ENETUNREACH,
Errno::ECONNRESET,
Errno::ETIMEDOUT,
Net::OpenTimeout,
]
current = error
while current
return true if retryable_classes.any? { |klass| current.is_a?(klass) }
current = current.respond_to?(:cause) ? current.cause : nil
end
false
end
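The cause-chain walk used by both predicates above, extracted: an error counts as connection-level if it, or any exception in its `#cause` chain, is an instance of one of the retryable classes. Ruby links `#cause` automatically when an exception is raised from inside a `rescue` block, which is what makes the chain walk worthwhile:

```ruby
# Walk error.cause links looking for any retryable exception class.
def connection_level?(error, retryable)
  current = error
  while current
    return true if retryable.any? { |klass| current.is_a?(klass) }
    current = current.respond_to?(:cause) ? current.cause : nil
  end
  false
end
```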
# Build the HTTPS-then-HTTP URI candidates used to reach a remote peer.
#
# @param domain [String] peer hostname.
# @param path [String] request path (must include leading slash).
# @return [Array<URI::Generic>] ordered list of URI candidates.
def instance_uri_candidates(domain, path)
base = domain
[
URI.parse("https://#{base}#{path}"),
URI.parse("http://#{base}#{path}"),
]
rescue URI::InvalidURIError
[]
end
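A minimal equivalent of the HTTPS-then-HTTP candidate list above, including the empty-list fallback for unparsable hosts:

```ruby
require "uri"

# Build ordered URI candidates, preferring HTTPS; an invalid host yields
# no candidates rather than raising.
def uri_candidates(domain, path)
  ["https", "http"].map { |scheme| URI.parse("#{scheme}://#{domain}#{path}") }
rescue URI::InvalidURIError
  []
end
```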
# Build an HTTP request decorated with the headers required for federation peers.
#
# @param request_class [Class<Net::HTTPRequest>] HTTP request class such as {Net::HTTP::Get}.
# @param uri [URI::Generic] target URI describing the remote endpoint.
# @return [Net::HTTPRequest] configured HTTP request including standard headers.
def build_federation_http_request(request_class, uri)
request = request_class.new(uri)
request["User-Agent"] = federation_user_agent_header
request["Accept"] = "application/json"
request["Content-Type"] = "application/json" if request.request_body_permitted?
request
end
# Compose the User-Agent string used when communicating with federation peers.
#
# @return [String] descriptive identifier for PotatoMesh federation requests.
def federation_user_agent_header
version = app_constant(:APP_VERSION).to_s
version = "unknown" if version.empty?
sanitized_domain = sanitize_instance_domain(app_constant(:INSTANCE_DOMAIN), downcase: true)
base = "PotatoMesh/#{version}"
return base unless sanitized_domain && !sanitized_domain.empty?
"#{base} (+https://#{sanitized_domain})"
end
# Resolve the host component of a remote URI and ensure the destination is
# safe for federation HTTP requests.
#
# The method performs a DNS lookup using Addrinfo to capture every
# available address for the supplied URI host. The resulting addresses are
# converted to {IPAddr} objects for consistent inspection via
# {restricted_ip_address?}. When all resolved addresses fall within
# restricted ranges, the method raises an ArgumentError so callers can
# abort the federation request before contacting the remote endpoint.
#
# @param uri [URI::Generic] remote endpoint candidate.
# @return [Array<IPAddr>] list of resolved, unrestricted IP addresses.
# @raise [ArgumentError] when +uri.host+ is blank or resolves solely to
# restricted addresses.
def resolve_remote_ip_addresses(uri)
host = uri&.host
raise ArgumentError, "URI missing host" unless host
addrinfo_records = Addrinfo.getaddrinfo(host, nil, Socket::AF_UNSPEC, Socket::SOCK_STREAM)
addresses = addrinfo_records.filter_map do |addr|
begin
IPAddr.new(addr.ip_address)
rescue IPAddr::InvalidAddressError
nil
end
end
unique_addresses = addresses.uniq { |ip| [ip.family, ip.to_s] }
unrestricted_addresses = unique_addresses.reject { |ip| restricted_ip_address?(ip) }
if unique_addresses.any? && unrestricted_addresses.empty?
raise ArgumentError, "restricted domain"
end
unrestricted_addresses
end
# Sort resolved addresses so that IPv4 precedes IPv6.
#
# Federation peers with dual-stack DNS may publish addresses where one
# family is unreachable. Placing IPv4 entries first mirrors the
# preference used by {discover_local_ip_address} and improves the
# likelihood that the first connection attempt succeeds.
#
# @param addresses [Array<IPAddr>] resolved IP address list.
# @return [Array<IPAddr>] addresses sorted with IPv4 entries before IPv6.
def sort_addresses_for_connection(addresses)
return addresses if addresses.nil? || addresses.length <= 1
v4, v6 = addresses.partition { |ip| !ip.ipv6? }
v4 + v6
end
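The IPv4-first ordering above is a stable partition, so relative order within each family is preserved. Extracted as a standalone function:

```ruby
require "ipaddr"

# Sort resolved addresses so IPv4 entries precede IPv6, keeping the
# original order within each family.
def ipv4_first(addresses)
  return addresses if addresses.nil? || addresses.length <= 1
  v4, v6 = addresses.partition { |ip| !ip.ipv6? }
  v4 + v6
end
```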
# Build an HTTP client configured for communication with a remote instance.
#
# When +ip_address+ is supplied the client is pinned to that specific
# address, bypassing DNS resolution. Callers that iterate over
# multiple resolved addresses should pass each candidate in turn.
#
# @param uri [URI::Generic] target URI describing the remote endpoint.
# @param ip_address [String, nil] explicit IP address to connect to,
# or +nil+ to resolve via DNS and use the first result.
# @return [Net::HTTP] HTTP client ready to execute the request.
def build_remote_http_client(uri, ip_address: nil)
http = Net::HTTP.new(uri.host, uri.port)
if ip_address
http.ipaddr = ip_address if http.respond_to?(:ipaddr=)
else
remote_addresses = resolve_remote_ip_addresses(uri)
if http.respond_to?(:ipaddr=) && remote_addresses.any?
http.ipaddr = remote_addresses.first.to_s
end
end
http.open_timeout = PotatoMesh::Config.remote_instance_http_timeout
http.read_timeout = PotatoMesh::Config.remote_instance_read_timeout
http.use_ssl = uri.scheme == "https"
return http unless http.use_ssl?
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.min_version = :TLS1_2 if http.respond_to?(:min_version=)
store = remote_instance_cert_store
http.cert_store = store if store
callback = remote_instance_verify_callback
http.verify_callback = callback if callback
http
end
# Construct a certificate store that disables strict CRL enforcement.
#
# OpenSSL may fail remote requests when certificate revocation lists are
# unavailable from the issuing authority. The returned store mirrors the
# default system trust store while clearing CRL-related flags so that
# federation announcements gracefully succeed when CRLs cannot be fetched.
#
# @return [OpenSSL::X509::Store, nil] configured store or nil when setup fails.
def remote_instance_cert_store
return @remote_instance_cert_store if defined?(@remote_instance_cert_store) && @remote_instance_cert_store
store = OpenSSL::X509::Store.new
store.set_default_paths
store.flags = 0 if store.respond_to?(:flags=)
@remote_instance_cert_store = store
rescue OpenSSL::X509::StoreError => e
debug_log(
"Failed to initialize certificate store for federation HTTP: #{e.message}",
)
@remote_instance_cert_store = nil
end
# Build a TLS verification callback that tolerates CRL availability failures.
#
# Some certificate authorities publish CRL endpoints that may occasionally be
# unreachable. When OpenSSL cannot download the CRL it raises the
# V_ERR_UNABLE_TO_GET_CRL error which would otherwise cause HTTPS federation
# announcements to abort. The generated callback accepts those specific
# failures while preserving strict verification for all other errors.
#
# @return [Proc, nil] verification callback or nil when creation fails.
def remote_instance_verify_callback
if defined?(@remote_instance_verify_callback) && @remote_instance_verify_callback
return @remote_instance_verify_callback
end
callback = lambda do |preverify_ok, store_context|
return true if preverify_ok
if store_context && crl_unavailable_error?(store_context.error)
debug_log(
"Ignoring TLS CRL retrieval failure during federation request",
context: "federation.announce",
)
true
else
false
end
end
@remote_instance_verify_callback = callback
rescue StandardError => e
debug_log(
"Failed to initialize federation TLS verify callback: #{e.message}",
context: "federation.announce",
)
@remote_instance_verify_callback = nil
end
# Determine whether the supplied OpenSSL verification error corresponds to a
# missing certificate revocation list.
#
# @param error_code [Integer, nil] OpenSSL verification error value.
# @return [Boolean] true when the error should be ignored.
def crl_unavailable_error?(error_code)
allowed_errors = [OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL]
if defined?(OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL_ISSUER)
allowed_errors << OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL_ISSUER
end
allowed_errors.include?(error_code)
end
end
end
end
@@ -0,0 +1,136 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Execute a GET request against the supplied federation URI, cycling
# through resolved IP addresses when a transport-level connection
# failure occurs.
#
# DNS resolution is performed once and the resulting addresses are
# sorted with IPv4 first via {sort_addresses_for_connection}. Each
# address is attempted sequentially; when a connection-level error
# (refused, unreachable, timeout) is raised the next address is tried.
# Non-connection errors (SSL failures, HTTP-level errors) are raised
# immediately without trying further addresses.
#
# @param uri [URI::Generic] target endpoint to request.
# @return [String] raw HTTP response body on success.
# @raise [InstanceFetchError] when all addresses are exhausted or a
# non-retryable error occurs.
def perform_instance_http_request(uri)
raise InstanceFetchError, "federation shutdown requested" if federation_shutdown_requested?
remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
addresses = remote_addresses.empty? ? [nil] : remote_addresses
last_error = nil
addresses.each do |address|
break if federation_shutdown_requested?
begin
return perform_single_http_request(uri, ip_address: address&.to_s)
rescue InstanceFetchError => e
if connection_refused_or_unreachable?(e)
last_error = e
else
raise
end
end
end
raise last_error || InstanceFetchError.new("all resolved addresses failed")
rescue ArgumentError => e
raise_instance_fetch_error(e)
end
# Execute a single HTTP GET request against the supplied URI, optionally
# pinning the connection to a specific IP address.
#
# @param uri [URI::Generic] target endpoint.
# @param ip_address [String, nil] resolved IP address to pin the
# connection to, or +nil+ to let {build_remote_http_client} resolve.
# @return [String] raw HTTP response body.
# @raise [InstanceFetchError] when the request fails.
def perform_single_http_request(uri, ip_address: nil)
http = build_remote_http_client(uri, ip_address: ip_address)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Get, uri)
response = connection.request(request)
case response
when Net::HTTPSuccess
response.body
else
raise InstanceFetchError, "unexpected response #{response.code}"
end
end
end
rescue StandardError => e
raise_instance_fetch_error(e)
end
# Build a human readable error message for a failed instance request.
#
# @param error [StandardError] failure raised while performing the request.
# @return [String] description including the error class when necessary.
def instance_fetch_error_message(error)
message = error.message.to_s.strip
class_name = error.class.name || error.class.to_s
return class_name if message.empty?
message.include?(class_name) ? message : "#{class_name}: #{message}"
end
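The message-composition rule above in isolation: prefix the class name unless the message already contains it, and fall back to the class name alone for blank messages.

```ruby
# Compose "ClassName: message" without duplicating the class name.
def fetch_error_message(error)
  message = error.message.to_s.strip
  class_name = error.class.name || error.class.to_s
  return class_name if message.empty?
  message.include?(class_name) ? message : "#{class_name}: #{message}"
end
```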
# Raise an InstanceFetchError that preserves the original context.
#
# @param error [StandardError] failure raised while performing the request.
# @return [void]
def raise_instance_fetch_error(error)
message = instance_fetch_error_message(error)
wrapped = InstanceFetchError.new(message)
wrapped.set_backtrace(error.backtrace)
raise wrapped
end
# Fetch and JSON-decode a federation document from a peer.
#
# @param domain [String] peer hostname.
# @param path [String] request path.
# @return [Array(Object, URI::Generic), Array(nil, Array<String>)] decoded
#   payload plus the successful URI, or +[nil, errors]+ when every candidate fails.
def fetch_instance_json(domain, path)
return [nil, ["federation shutdown requested"]] if federation_shutdown_requested?
errors = []
instance_uri_candidates(domain, path).each do |uri|
break if federation_shutdown_requested?
begin
body = perform_instance_http_request(uri)
return [JSON.parse(body), uri] if body
rescue JSON::ParserError => e
errors << "#{uri}: invalid JSON (#{e.message})"
rescue InstanceFetchError => e
errors << "#{uri}: #{e.message}"
end
end
[nil, errors]
end
end
end
end
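The error-wrapping helpers above only prefix the error class name when the original message does not already contain it. A minimal standalone sketch of that composition rule (the helper name here is hypothetical; the real method is `instance_fetch_error_message`):

```ruby
# Sketch of the message-composition rule: prefix the error class name
# only when the stripped message is non-empty and lacks it already.
def fetch_error_message(error)
  message = error.message.to_s.strip
  class_name = error.class.name || error.class.to_s
  return class_name if message.empty?
  message.include?(class_name) ? message : "#{class_name}: #{message}"
end
```

A `RuntimeError` with message `"boom"` becomes `"RuntimeError: boom"`, while one whose message already names the class passes through unchanged.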
@@ -0,0 +1,80 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Count the number of nodes active since the supplied timestamp.
#
# @param cutoff [Integer] unix timestamp in seconds.
# @param db [SQLite3::Database, nil] optional open handle to reuse.
# @return [Integer, nil] node count or nil when unavailable.
def active_node_count_since(cutoff, db: nil)
return nil unless cutoff
handle = db || open_database(readonly: true)
count =
with_busy_retry do
handle.get_first_value("SELECT COUNT(*) FROM nodes WHERE last_heard >= ?", cutoff.to_i)
end
Integer(count)
rescue SQLite3::Exception, ArgumentError => e
warn_log(
"Failed to count active nodes",
context: "instances.nodes_count",
error_class: e.class.name,
error_message: e.message,
)
nil
ensure
handle&.close unless db
end
# Count the number of nodes for a specific protocol active since the
# supplied timestamp.
#
# @param cutoff [Integer] unix timestamp in seconds.
# @param protocol [String] protocol name (e.g. "meshcore", "meshtastic").
# @param db [SQLite3::Database, nil] optional open handle to reuse.
# @return [Integer, nil] node count or nil when unavailable.
def active_node_count_since_for_protocol(cutoff, protocol, db: nil)
return nil unless cutoff && protocol
handle = db || open_database(readonly: true)
count =
with_busy_retry do
handle.get_first_value(
"SELECT COUNT(*) FROM nodes WHERE last_heard >= ? AND protocol = ?",
cutoff.to_i,
protocol,
)
end
Integer(count)
rescue SQLite3::Exception, ArgumentError => e
warn_log(
"Failed to count active nodes for protocol",
context: "instances.protocol_nodes_count",
protocol: protocol,
error_class: e.class.name,
error_message: e.message,
)
nil
ensure
handle&.close unless db
end
end
end
end
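Both counters coerce the raw `COUNT(*)` result with `Integer()` and collapse any failure to `nil`, so callers can treat "count unavailable" uniformly. A sketch of that pattern in isolation (the real methods additionally rescue `SQLite3::Exception` around the query itself; here `TypeError` is also rescued so a `nil` input degrades gracefully):

```ruby
# Sketch: coerce a raw COUNT(*) result, treating any coercion failure
# as "count unavailable" (nil) rather than raising.
def coerce_count(raw)
  Integer(raw)
rescue ArgumentError, TypeError
  nil
end
```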
@@ -0,0 +1,107 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Persist or refresh a remote instance row, evicting any conflicting
# entry that already claimed the same domain.
#
# @param db [SQLite3::Database] open database handle.
# @param attributes [Hash] sanitized instance attributes.
# @param signature [String] base64-encoded signature.
# @return [void]
# @raise [ArgumentError] when the domain is invalid or restricted.
def upsert_instance_record(db, attributes, signature)
sanitized_domain = sanitize_instance_domain(attributes[:domain])
raise ArgumentError, "invalid domain" unless sanitized_domain
ip = ip_from_domain(sanitized_domain)
if ip && restricted_ip_address?(ip)
raise ArgumentError, "restricted domain"
end
normalized_domain = sanitized_domain
existing_id = with_busy_retry do
db.get_first_value(
"SELECT id FROM instances WHERE domain = ?",
normalized_domain,
)
end
if existing_id && existing_id != attributes[:id]
with_busy_retry do
db.execute("DELETE FROM instances WHERE id = ?", existing_id)
end
debug_log(
"Removed conflicting instance by domain",
context: "federation.instances",
domain: normalized_domain,
replaced_id: existing_id,
incoming_id: attributes[:id],
)
end
sql = <<~SQL
INSERT INTO instances (
id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, nodes_count,
meshcore_nodes_count, meshtastic_nodes_count, contact_link, signature
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
domain=excluded.domain,
pubkey=excluded.pubkey,
name=excluded.name,
version=excluded.version,
channel=excluded.channel,
frequency=excluded.frequency,
latitude=excluded.latitude,
longitude=excluded.longitude,
last_update_time=excluded.last_update_time,
is_private=excluded.is_private,
nodes_count=COALESCE(excluded.nodes_count, instances.nodes_count),
meshcore_nodes_count=COALESCE(excluded.meshcore_nodes_count, instances.meshcore_nodes_count),
meshtastic_nodes_count=COALESCE(excluded.meshtastic_nodes_count, instances.meshtastic_nodes_count),
contact_link=excluded.contact_link,
signature=excluded.signature
SQL
nodes_count = coerce_integer(attributes[:nodes_count])
params = [
attributes[:id],
normalized_domain,
attributes[:pubkey],
attributes[:name],
attributes[:version],
attributes[:channel],
attributes[:frequency],
attributes[:latitude],
attributes[:longitude],
attributes[:last_update_time],
attributes[:is_private] ? 1 : 0,
nodes_count,
coerce_integer(attributes[:meshcore_nodes_count]),
coerce_integer(attributes[:meshtastic_nodes_count]),
attributes[:contact_link],
signature,
]
with_busy_retry do
db.execute(sql, params)
end
end
end
end
end
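The `COALESCE(excluded.x, instances.x)` clauses in the upsert ensure that an announcement arriving without counts does not erase counts the peer reported earlier, while an explicit value (including zero) still wins. The merge rule, sketched in plain Ruby (helper name hypothetical):

```ruby
# Sketch of COALESCE(excluded.count, instances.count): keep the stored
# value only when the incoming one is absent; an explicit 0 still wins.
def merge_count(incoming, stored)
  incoming.nil? ? stored : incoming
end
```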
@@ -0,0 +1,196 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Maximum slice (seconds) used by +federation_sleep_with_shutdown+ when
# decomposing a target sleep into shutdown-aware increments.
FEDERATION_SLEEP_SLICE_SECONDS = 0.2
# Retrieve or initialize the worker pool servicing federation jobs.
#
# @return [PotatoMesh::App::WorkerPool, nil] active worker pool or nil when disabled.
def federation_worker_pool
ensure_federation_worker_pool!
end
# Ensure the federation worker pool exists when federation remains enabled.
#
# Threading model: the pool is a fixed-size thread pool backed by a bounded
# queue. A single long-lived announcer thread (started by
# {#start_federation_announcer!}) drives periodic crawl and announcement
# cycles by submitting tasks onto the pool; individual crawl and announce
# jobs then run concurrently on pool threads. The pool is lazily
# instantiated on first use and is memoized on the Sinatra settings object so
# that all requests share the same instance. An +at_exit+ hook
# ({#ensure_federation_shutdown_hook!}) guarantees the pool drains cleanly on
# process termination even when the announcer thread is still alive.
#
# @return [PotatoMesh::App::WorkerPool, nil] active worker pool if created.
def ensure_federation_worker_pool!
return nil unless federation_enabled?
return nil if federation_shutdown_requested?
ensure_federation_shutdown_hook!
existing = settings.respond_to?(:federation_worker_pool) ? settings.federation_worker_pool : nil
return existing if existing&.alive?
pool = PotatoMesh::App::WorkerPool.new(
size: PotatoMesh::Config.federation_worker_pool_size,
max_queue: PotatoMesh::Config.federation_worker_queue_capacity,
task_timeout: PotatoMesh::Config.federation_task_timeout_seconds,
name: "potato-mesh-fed",
)
set(:federation_worker_pool, pool) if respond_to?(:set)
pool
end
# Ensure federation background workers are torn down during process exit.
#
# @return [void]
def ensure_federation_shutdown_hook!
application = is_a?(Class) ? self : self.class
return application.ensure_federation_shutdown_hook! unless application.equal?(self)
installed = if respond_to?(:settings) && settings.respond_to?(:federation_shutdown_hook_installed)
settings.federation_shutdown_hook_installed
else
instance_variable_defined?(:@federation_shutdown_hook_installed) && @federation_shutdown_hook_installed
end
return if installed
if respond_to?(:set) && settings.respond_to?(:federation_shutdown_hook_installed=)
set(:federation_shutdown_hook_installed, true)
else
@federation_shutdown_hook_installed = true
end
at_exit do
begin
application.shutdown_federation_background_work!(timeout: PotatoMesh::Config.federation_shutdown_timeout_seconds)
rescue StandardError
# Suppress shutdown errors during interpreter teardown.
end
end
end
# Check whether federation workers have received a shutdown request.
#
# @return [Boolean] true when stop has been requested.
def federation_shutdown_requested?
return false unless respond_to?(:settings)
return false unless settings.respond_to?(:federation_shutdown_requested)
settings.federation_shutdown_requested == true
end
# Mark federation background work as shutting down.
#
# @return [void]
def request_federation_shutdown!
set(:federation_shutdown_requested, true) if respond_to?(:set)
end
# Clear any previously requested federation shutdown marker.
#
# @return [void]
def clear_federation_shutdown_request!
set(:federation_shutdown_requested, false) if respond_to?(:set)
end
# Sleep in short intervals so federation loops can react to shutdown.
#
# @param seconds [Numeric] target sleep duration.
# @return [Boolean] true when the full delay elapsed without shutdown.
def federation_sleep_with_shutdown(seconds)
remaining = seconds.to_f
while remaining.positive?
return false if federation_shutdown_requested?
slice = [remaining, FEDERATION_SLEEP_SLICE_SECONDS].min
Kernel.sleep(slice)
remaining -= slice
end
!federation_shutdown_requested?
end
# Shutdown and clear the federation worker pool if present.
#
# @return [void]
def shutdown_federation_worker_pool!
existing = settings.respond_to?(:federation_worker_pool) ? settings.federation_worker_pool : nil
return unless existing
begin
existing.shutdown(timeout: PotatoMesh::Config.federation_task_timeout_seconds)
rescue StandardError => e
warn_log(
"Failed to shut down federation worker pool",
context: "federation",
error_class: e.class.name,
error_message: e.message,
)
ensure
set(:federation_worker_pool, nil) if respond_to?(:set)
end
end
# Gracefully terminate federation background loops and worker pool tasks.
#
# @param timeout [Numeric, nil] maximum join time applied per thread.
# @return [void]
def shutdown_federation_background_work!(timeout: nil)
request_federation_shutdown!
timeout_value = timeout || PotatoMesh::Config.federation_shutdown_timeout_seconds
# Drain the worker pool first so federation threads blocked in
# wait_for_federation_tasks unblock promptly instead of waiting
# for each task's individual timeout to expire.
shutdown_federation_worker_pool!
stop_federation_thread!(:initial_federation_thread, timeout: timeout_value)
stop_federation_thread!(:federation_thread, timeout: timeout_value)
clear_federation_crawl_state!
end
# Stop a specific federation thread setting and clear its reference.
#
# @param setting_name [Symbol] settings key storing the thread object.
# @param timeout [Numeric] seconds to wait for clean thread exit.
# @return [void]
def stop_federation_thread!(setting_name, timeout:)
return unless respond_to?(:settings)
return unless settings.respond_to?(setting_name)
thread = settings.public_send(setting_name)
if thread&.alive?
begin
thread.wakeup if thread.respond_to?(:wakeup)
rescue ThreadError
# The thread may not currently be sleeping; continue shutdown.
end
thread.join(timeout)
if thread.alive?
thread.kill
thread.join(0.1)
end
end
set(setting_name, nil) if respond_to?(:set)
end
end
end
end
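`federation_sleep_with_shutdown` decomposes a long delay into `FEDERATION_SLEEP_SLICE_SECONDS` increments so loops notice a shutdown request within roughly 200 ms instead of at the end of a multi-second sleep. A self-contained sketch of the slicing, with the shutdown flag and sleeper injected so it runs instantly (both parameters are artifacts of this sketch, not the real signature):

```ruby
# Slice width mirroring FEDERATION_SLEEP_SLICE_SECONDS.
SLICE = 0.2

# Sleep `seconds` in <= SLICE increments, checking a shutdown flag
# between slices. Returns true only when the full delay elapsed.
def sliced_sleep(seconds, shutdown:, sleeper:)
  remaining = seconds.to_f
  while remaining.positive?
    return false if shutdown.call
    slice = [remaining, SLICE].min
    sleeper.call(slice)
    remaining -= slice
  end
  !shutdown.call
end
```

With a recording sleeper, a 0.5 s target is consumed as slices of roughly 0.2, 0.2, and 0.1 seconds, and a shutdown flag that is already set short-circuits before the first sleep.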
@@ -0,0 +1,76 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Build the ordered list of peer domains the local instance should
# announce itself to. Seed domains take precedence and are followed by
# peers seen in the local +instances+ table within the freshness window.
#
# @param self_domain [String, nil] sanitized local instance domain.
# @return [Array<String>] sanitized, deduplicated peer domains.
def federation_target_domains(self_domain)
normalized_self = sanitize_instance_domain(self_domain)&.downcase
ordered = []
seen = Set.new
PotatoMesh::Config.federation_seed_domains.each do |seed|
sanitized = sanitize_instance_domain(seed)&.downcase
next unless sanitized
next if normalized_self && sanitized == normalized_self
next if seen.include?(sanitized)
ordered << sanitized
seen << sanitized
end
db = open_database(readonly: true)
db.results_as_hash = false
cutoff = Time.now.to_i - PotatoMesh::Config.week_seconds
rows = with_busy_retry do
db.execute(
"SELECT domain, last_update_time FROM instances WHERE domain IS NOT NULL AND TRIM(domain) != ''",
)
end
rows.each do |row|
raw_domain = row[0]
last_update_time = coerce_integer(row[1])
next unless last_update_time && last_update_time >= cutoff
sanitized = sanitize_instance_domain(raw_domain)&.downcase
next unless sanitized
next if normalized_self && sanitized == normalized_self
next if seen.include?(sanitized)
ordered << sanitized
seen << sanitized
end
ordered
rescue SQLite3::Exception
fallback = PotatoMesh::Config.federation_seed_domains.filter_map do |seed|
candidate = sanitize_instance_domain(seed)&.downcase
next if normalized_self && candidate == normalized_self
candidate
end
fallback.uniq
ensure
db&.close
end
end
end
end
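The ordering contract above — seed domains first, then recently seen database peers, case-insensitively deduplicated with the local domain excluded — can be sketched over plain arrays (the real method reads config and SQLite; this helper and its inputs are illustrative):

```ruby
require "set"

# Sketch of federation_target_domains' ordering: seeds first, then peers,
# lowercased, deduplicated, with the local domain filtered out.
def target_domains(seeds, db_peers, self_domain)
  normalized_self = self_domain&.downcase
  ordered = []
  seen = Set.new
  (seeds + db_peers).each do |raw|
    candidate = raw&.strip&.downcase
    next if candidate.nil? || candidate.empty?
    next if candidate == normalized_self
    next if seen.include?(candidate)
    ordered << candidate
    seen << candidate
  end
  ordered
end
```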
@@ -0,0 +1,200 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Process-wide memo for the most recently emitted self-registration
# decision. Sinatra spins up a fresh app instance per request so a
# plain instance variable would not survive across calls; storing the
# state on the module itself keeps the dedupe stable for the lifetime
# of the worker process.
@self_registration_log_state = { mutex: Mutex.new, last: nil }
# Accessor for the dedupe state used by {#ensure_self_instance_record!}.
#
# @return [Hash{Symbol => Object}] mutable state hash holding +:mutex+ and +:last+.
def self.self_registration_log_state
@self_registration_log_state
end
# Reset the dedupe memo. Intended for tests; production code never
# needs to clear the state because each process starts fresh.
#
# @return [void]
def self.reset_self_registration_log_state!
state = @self_registration_log_state
state[:mutex].synchronize { state[:last] = nil }
end
# Resolve the canonical domain for the running instance.
#
# @return [String, nil] sanitized instance domain or nil outside production.
# @raise [RuntimeError] when the domain cannot be determined in production.
def self_instance_domain
sanitized = sanitize_instance_domain(app_constant(:INSTANCE_DOMAIN))
return sanitized if sanitized
unless production_environment?
debug_log(
"INSTANCE_DOMAIN unavailable; skipping self instance domain",
context: "federation.instances",
app_env: string_or_nil(ENV["APP_ENV"]),
rack_env: string_or_nil(ENV["RACK_ENV"]),
source: app_constant(:INSTANCE_DOMAIN_SOURCE),
)
return nil
end
raise "INSTANCE_DOMAIN could not be determined"
end
# Determine whether the local instance should persist its own record.
#
# @param domain [String, nil] candidate domain for the running instance.
# @return [Array(Boolean, String, nil)] tuple containing a decision flag and an optional reason.
def self_instance_registration_decision(domain)
source = app_constant(:INSTANCE_DOMAIN_SOURCE)
return [false, "INSTANCE_DOMAIN source is #{source}"] unless source == :environment
sanitized = sanitize_instance_domain(domain)
return [false, "INSTANCE_DOMAIN missing or invalid"] unless sanitized
ip = ip_from_domain(sanitized)
if ip && restricted_ip_address?(ip)
return [false, "INSTANCE_DOMAIN resolves to restricted IP"]
end
[true, nil]
end
# Build the canonical attribute hash describing the local instance.
#
# @return [Hash] populated instance attribute hash.
def self_instance_attributes
domain = self_instance_domain
last_update = latest_node_update_timestamp || Time.now.to_i
cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
db = open_database(readonly: true)
nodes_count = active_node_count_since(cutoff, db: db)
mc_count = active_node_count_since_for_protocol(cutoff, "meshcore", db: db)
mt_count = active_node_count_since_for_protocol(cutoff, "meshtastic", db: db)
{
id: app_constant(:SELF_INSTANCE_ID),
domain: domain,
pubkey: app_constant(:INSTANCE_PUBLIC_KEY_PEM),
name: sanitized_site_name,
version: app_constant(:APP_VERSION),
channel: sanitized_channel,
frequency: sanitized_frequency,
latitude: PotatoMesh::Config.map_center_lat,
longitude: PotatoMesh::Config.map_center_lon,
last_update_time: last_update,
is_private: private_mode?,
contact_link: sanitized_contact_link,
nodes_count: nodes_count,
meshcore_nodes_count: mc_count,
meshtastic_nodes_count: mt_count,
}
ensure
db&.close
end
# Sign a canonical instance attribute set with the local private key.
#
# @param attributes [Hash] canonical instance attributes.
# @return [String] base64-encoded RSA-SHA256 signature.
def sign_instance_attributes(attributes)
payload = canonical_instance_payload(attributes)
Base64.strict_encode64(
app_constant(:INSTANCE_PRIVATE_KEY).sign(OpenSSL::Digest::SHA256.new, payload),
)
end
# Compose the JSON-friendly announcement payload sent to peers.
#
# @param attributes [Hash] canonical instance attributes.
# @param signature [String] base64-encoded signature.
# @return [Hash] payload with nil entries removed.
def instance_announcement_payload(attributes, signature)
payload = {
"id" => attributes[:id],
"domain" => attributes[:domain],
"pubkey" => attributes[:pubkey],
"name" => attributes[:name],
"version" => attributes[:version],
"channel" => attributes[:channel],
"frequency" => attributes[:frequency],
"latitude" => attributes[:latitude],
"longitude" => attributes[:longitude],
"lastUpdateTime" => attributes[:last_update_time],
"isPrivate" => attributes[:is_private],
"contactLink" => attributes[:contact_link],
"nodesCount" => attributes[:nodes_count],
"meshcoreNodesCount" => attributes[:meshcore_nodes_count],
"meshtasticNodesCount" => attributes[:meshtastic_nodes_count],
"signature" => signature,
}
payload.reject { |_, value| value.nil? }
end
# Persist the local instance record when registration is allowed.
#
# @return [Array(Hash, String)] tuple of (attributes, signature) suitable
# for direct reuse by the announcer thread.
def ensure_self_instance_record!
attributes = self_instance_attributes
signature = sign_instance_attributes(attributes)
db = nil
allowed, reason = self_instance_registration_decision(attributes[:domain])
# Decisions are stable per process while INSTANCE_DOMAIN_SOURCE stays
# the same; without dedupe, every page navigation that rendered the
# federation banner produced its own log line. Only emit when the
# tuple changes so operators still see the first decision (and any
# later flip) without the spam.
sentinel = [allowed, reason, attributes[:domain]]
state = PotatoMesh::App::Federation.self_registration_log_state
should_log = state[:mutex].synchronize do
changed = state[:last] != sentinel
state[:last] = sentinel if changed
changed
end
if allowed
db = open_database
upsert_instance_record(db, attributes, signature)
if should_log
debug_log(
"Registered self instance record",
context: "federation.instances",
domain: attributes[:domain],
instance_id: attributes[:id],
)
end
elsif should_log
debug_log(
"Skipped self instance registration",
context: "federation.instances",
domain: attributes[:domain],
reason: reason,
)
end
[attributes, signature]
ensure
db&.close
end
end
end
end
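The module-level memo that keeps `ensure_self_instance_record!` from logging the same decision on every request is a mutex-guarded compare-and-swap on the last-seen tuple. A sketch of just that mechanism (names hypothetical):

```ruby
# Module-level dedupe state, mirroring self_registration_log_state.
STATE = { mutex: Mutex.new, last: nil }

# Return true (and remember the sentinel) only when the decision tuple
# differs from the last one emitted.
def should_log?(sentinel, state = STATE)
  state[:mutex].synchronize do
    changed = state[:last] != sentinel
    state[:last] = sentinel if changed
    changed
  end
end
```

The first call for a given tuple returns true; repeats return false until the tuple flips, at which point the new decision is logged once.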
@@ -0,0 +1,62 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Build the canonical JSON payload that gets signed for instance
# announcements. Keys are emitted in deterministic order and only
# populated when the corresponding attribute is non-nil.
#
# @param attributes [Hash] instance attributes hash.
# @return [String] canonical JSON string suitable for signing.
def canonical_instance_payload(attributes)
data = {}
data["contactLink"] = attributes[:contact_link] if attributes[:contact_link]
data["id"] = attributes[:id] if attributes[:id]
data["domain"] = attributes[:domain] if attributes[:domain]
data["pubkey"] = attributes[:pubkey] if attributes[:pubkey]
data["name"] = attributes[:name] if attributes[:name]
data["version"] = attributes[:version] if attributes[:version]
data["channel"] = attributes[:channel] if attributes[:channel]
data["frequency"] = attributes[:frequency] if attributes[:frequency]
data["latitude"] = attributes[:latitude] unless attributes[:latitude].nil?
data["longitude"] = attributes[:longitude] unless attributes[:longitude].nil?
data["lastUpdateTime"] = attributes[:last_update_time] unless attributes[:last_update_time].nil?
data["isPrivate"] = attributes[:is_private] unless attributes[:is_private].nil?
JSON.generate(data, sort_keys: true)
end
# Verify a base64 RSA-SHA256 signature for an instance attribute set.
#
# @param attributes [Hash] canonical instance attributes.
# @param signature [String, nil] base64-encoded signature bytes.
# @param public_key_pem [String, nil] PEM-encoded RSA public key.
# @return [Boolean] true when the signature validates against the public key.
def verify_instance_signature(attributes, signature, public_key_pem)
return false unless signature && public_key_pem
canonical = canonical_instance_payload(attributes)
signature_bytes = Base64.strict_decode64(signature)
key = OpenSSL::PKey::RSA.new(public_key_pem)
key.verify(OpenSSL::Digest::SHA256.new, signature_bytes, canonical)
rescue ArgumentError, OpenSSL::PKey::PKeyError
false
end
end
end
end
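The sign/verify pair above is plain RSA-SHA256 over the canonical JSON string, base64-encoded for transport. A roundtrip against a throwaway key (the payload literal here is illustrative, not the real canonical format, and the real code uses the instance's persistent private key):

```ruby
require "openssl"
require "base64"

# Throwaway key standing in for the instance keypair.
key = OpenSSL::PKey::RSA.new(2048)
payload = '{"domain":"mesh.example","id":"abc"}'

# Sign and base64-encode, as sign_instance_attributes does.
signature = Base64.strict_encode64(key.sign(OpenSSL::Digest::SHA256.new, payload))

# Decode and verify, as verify_instance_signature does.
decoded = Base64.strict_decode64(signature)
verified = key.public_key.verify(OpenSSL::Digest::SHA256.new, decoded, payload)
```

Any change to the payload or signature bytes makes `verify` return false rather than raise, which is why the production method only rescues decode and key-construction errors.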
@@ -0,0 +1,107 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Validate a remote +/.well-known+ document, including signature checks
# against the supplied public key.
#
# @param document [Hash] decoded well-known document.
# @param domain [String] expected sanitized domain.
# @param pubkey [String] expected canonical PEM public key.
# @return [Array(Boolean, String, nil)] tuple containing the validation
# result and an optional human-readable failure reason.
def validate_well_known_document(document, domain, pubkey)
unless document.is_a?(Hash)
return [false, "document is not an object"]
end
remote_pubkey = sanitize_public_key_pem(document["publicKey"])
return [false, "public key missing"] unless remote_pubkey
return [false, "public key mismatch"] unless remote_pubkey == pubkey
remote_domain = string_or_nil(document["domain"])
return [false, "domain missing"] unless remote_domain
return [false, "domain mismatch"] unless remote_domain.casecmp?(domain)
algorithm = string_or_nil(document["signatureAlgorithm"])
unless algorithm&.casecmp?(PotatoMesh::Config.instance_signature_algorithm)
return [false, "unsupported signature algorithm"]
end
signed_payload_b64 = string_or_nil(document["signedPayload"])
signature_b64 = string_or_nil(document["signature"])
return [false, "missing signed payload"] unless signed_payload_b64
return [false, "missing signature"] unless signature_b64
signed_payload = Base64.strict_decode64(signed_payload_b64)
signature = Base64.strict_decode64(signature_b64)
key = OpenSSL::PKey::RSA.new(remote_pubkey)
unless key.verify(OpenSSL::Digest::SHA256.new, signature, signed_payload)
return [false, "invalid well-known signature"]
end
payload = JSON.parse(signed_payload)
unless payload.is_a?(Hash)
return [false, "signed payload is not an object"]
end
payload_domain = string_or_nil(payload["domain"])
payload_pubkey = sanitize_public_key_pem(payload["publicKey"])
return [false, "signed payload domain mismatch"] unless payload_domain&.casecmp?(domain)
return [false, "signed payload public key mismatch"] unless payload_pubkey == pubkey
[true, nil]
rescue ArgumentError, OpenSSL::PKey::PKeyError => e
[false, e.message]
rescue JSON::ParserError => e
[false, "signed payload JSON error: #{e.message}"]
end
# Confirm a remote +/api/nodes+ payload contains a sufficient set of
# recently active nodes.
#
# @param nodes [Object] decoded array of remote node entries.
# @return [Array(Boolean, String, nil)] tuple of (is_fresh, optional reason).
def validate_remote_nodes(nodes)
unless nodes.is_a?(Array)
return [false, "node response is not an array"]
end
if nodes.length < PotatoMesh::Config.remote_instance_min_node_count
return [false, "insufficient nodes"]
end
latest = nodes.filter_map do |node|
next unless node.is_a?(Hash)
[coerce_integer(node["last_heard"]), coerce_integer(node["lastHeard"])].compact.max
end.max
return [false, "missing last_heard data"] unless latest
cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
return [false, "node data is stale"] if latest < cutoff
[true, nil]
end
end
end
end
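The freshness rule in `validate_remote_nodes` — the newest `last_heard`/`lastHeard` value across all node hashes must fall within the max-age window — can be sketched with the configuration values replaced by constants (both constants and the helper name are stand-ins):

```ruby
# Stand-ins for remote_instance_min_node_count and remote_instance_max_node_age.
MIN_NODES = 2
MAX_AGE = 3600

# Return [true, nil] when the newest node timestamp is recent enough,
# or [false, reason] mirroring validate_remote_nodes' failure cases.
def nodes_fresh?(nodes, now)
  return [false, "node response is not an array"] unless nodes.is_a?(Array)
  return [false, "insufficient nodes"] if nodes.length < MIN_NODES
  latest = nodes.filter_map do |node|
    next unless node.is_a?(Hash)
    [node["last_heard"], node["lastHeard"]].compact.max
  end.max
  return [false, "missing last_heard data"] unless latest
  return [false, "node data is stale"] if latest < now - MAX_AGE
  [true, nil]
end
```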
@@ -74,9 +74,18 @@ module PotatoMesh
# Generate the structured meta configuration for the UI.
#
# @param view [Symbol, String, nil] logical view identifier used to
# tailor the title and description for non-dashboard pages.
# @param overrides [Hash, nil] explicit replacements for individual
# meta fields. See {PotatoMesh::Meta.configuration} for accepted
# keys.
# @return [Hash] frozen configuration metadata.
def meta_configuration(view: nil, overrides: nil)
PotatoMesh::Meta.configuration(
private_mode: private_mode?,
view: view,
overrides: overrides,
)
end
# Indicate whether private mode has been requested.
@@ -274,16 +274,29 @@ module PotatoMesh
end
# Emit a debug entry describing how the instance domain was derived.
# When +INSTANCE_DOMAIN+ is unset in production, also surface a
# warning because canonical URLs, sitemap entries, and JSON-LD
# metadata fall back to whatever +Host+ header the request arrived
# with — which can be cache-poisoned by a misconfigured proxy.
#
# @return [void]
def log_instance_domain_resolution
source = app_constant(:INSTANCE_DOMAIN_SOURCE) || :unknown
domain = app_constant(:INSTANCE_DOMAIN)
debug_log(
"Resolved instance domain",
context: "identity.domain",
source: source,
domain: domain,
)
if production_environment? && (domain.nil? || domain.to_s.strip.empty?)
warn_log(
"INSTANCE_DOMAIN is unset; canonical URLs and sitemap entries " \
"will be derived from the inbound Host header",
context: "identity.domain",
source: source,
)
end
end
end
end
@@ -17,6 +17,7 @@
require "kramdown"
require "kramdown-parser-gfm"
require "sanitize"
require "yaml"
module PotatoMesh
module App
@@ -36,10 +37,29 @@ module PotatoMesh
# @!attribute [r] slug
# @return [String] URL-safe identifier derived from the filename.
# @!attribute [r] title
# @return [String] human-readable nav label, optionally overridden
# via YAML frontmatter.
# @!attribute [r] path
# @return [String] absolute filesystem path to the Markdown source.
# @!attribute [r] description
# @return [String, nil] meta-description override sourced from
# frontmatter, or +nil+ when the global default should be used.
# @!attribute [r] image
# @return [String, nil] absolute URL for the per-page social preview
# image, or +nil+ when the default OG image should be used.
# @!attribute [r] noindex
# @return [Boolean] +true+ when the operator marked the page with
# +noindex: true+ in frontmatter; instructs crawlers to skip it.
PageEntry = Struct.new(
:sort_key,
:slug,
:title,
:path,
:description,
:image,
:noindex,
keyword_init: true,
)
# Pattern matching a safe slug segment: lowercase alphanumeric words
# separated by single hyphens. Used to validate both parsed slugs and
@@ -54,6 +74,20 @@ module PotatoMesh
# directory-bomb scenarios from consuming unbounded memory.
MAX_PAGES = 50
# Maximum number of bytes inspected when extracting frontmatter from a
# candidate file during directory scans. Keeps {load_static_pages}
# cheap for large markdown files.
FRONTMATTER_PROBE_BYTES = 4096
# Allow-list of frontmatter keys that operators may use to influence how a
# page is presented to crawlers and social platforms. Any other key in
# the document is silently ignored to keep the surface area small and
# the parser predictable.
ALLOWED_FRONTMATTER_KEYS = %w[title description image noindex].freeze
# Pattern used to recognise a leading YAML frontmatter block.
FRONTMATTER_PATTERN = /\A---\s*\n(.*?)\n---\s*(?:\n|\z)/m
# Kramdown options shared across all page renders.
KRAMDOWN_OPTIONS = {
input: "GFM",
@@ -100,6 +134,151 @@ module PotatoMesh
PageEntry.new(sort_key: sort_key, slug: slug, title: title, path: nil)
end
# Extract the frontmatter block (if any) from raw markdown source.
#
# The first +---+ delimited block is parsed via {YAML.safe_load}; only
# keys listed in {ALLOWED_FRONTMATTER_KEYS} are kept and string values
# are stripped. Malformed YAML, unsupported types, and missing
# delimiters all result in an empty hash so the caller can fall back to
# filename-derived metadata without raising.
#
# @param content [String] raw file contents (UTF-8).
# @return [Hash{String=>Object}] permitted, normalised frontmatter
# values.
def parse_frontmatter(content)
return {} unless content.is_a?(String)
match = content.match(FRONTMATTER_PATTERN)
return {} unless match
begin
parsed = YAML.safe_load(match[1], permitted_classes: [], aliases: false) || {}
rescue Psych::Exception
return {}
end
return {} unless parsed.is_a?(Hash)
parsed.each_with_object({}) do |(key, value), result|
string_key = key.to_s
next unless ALLOWED_FRONTMATTER_KEYS.include?(string_key)
result[string_key] = normalise_frontmatter_value(string_key, value)
end
end
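A runnable sketch of the allow-list behaviour described above. It mirrors the pattern match and key filtering but skips the per-key normalisation step, so treat it as illustrative rather than a drop-in copy:

```ruby
require "yaml"

FRONTMATTER_PATTERN = /\A---\s*\n(.*?)\n---\s*(?:\n|\z)/m
ALLOWED_FRONTMATTER_KEYS = %w[title description image noindex].freeze

def parse_frontmatter(content)
  return {} unless content.is_a?(String)
  match = content.match(FRONTMATTER_PATTERN)
  return {} unless match
  parsed = YAML.safe_load(match[1], permitted_classes: [], aliases: false) || {}
  return {} unless parsed.is_a?(Hash)
  # Unknown keys are dropped silently; the real method additionally
  # strips and type-coerces each kept value.
  parsed.select { |key, _| ALLOWED_FRONTMATTER_KEYS.include?(key.to_s) }
rescue Psych::Exception
  {}
end

parse_frontmatter("---\ntitle: About\nlayout: wide\n---\nBody\n")
# => {"title"=>"About"}  ("layout" is not on the allow-list)
parse_frontmatter("plain markdown, no frontmatter")
# => {}
```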
# Strip a leading frontmatter block from the raw markdown body.
#
# @param content [String] file contents.
# @return [String] markdown body without frontmatter.
def strip_frontmatter(content)
return content unless content.is_a?(String)
content.sub(FRONTMATTER_PATTERN, "")
end
# Coerce frontmatter values into the canonical type expected for each
# supported key. String fields are trimmed; +noindex+ is forced into a
# strict boolean; +image+ additionally enforces an +http(s)+ scheme
# so an operator who pastes a +data:+, +javascript:+, or relative
# URI does not silently leak it into the +og:image+ tag. Unrecognised
# values fall through to +nil+/+false+ so the rest of the pipeline
# can rely on simple checks.
#
# @param key [String] supported frontmatter key.
# @param value [Object] raw parsed value from {YAML.safe_load}.
# @return [String, Boolean, nil] normalised value.
def normalise_frontmatter_value(key, value)
case key
when "noindex"
truthy_frontmatter?(value)
when "image"
normalise_image_url(value)
else
string = value.is_a?(String) ? value : value.to_s
stripped = string.strip
stripped.empty? ? nil : stripped
end
end
# Validate an operator-supplied image URL. Only +http(s)+ schemes are
# accepted — +data:+, +javascript:+, relative paths, and other
# exotic forms are dropped silently because they would either fail
# to render in social-media link previews or open a content-security
# foot-gun.
#
# @param value [Object] raw frontmatter value.
# @return [String, nil] absolute URL or +nil+ when invalid/blank.
def normalise_image_url(value)
string = value.is_a?(String) ? value : value.to_s
stripped = string.strip
return nil if stripped.empty?
return nil unless stripped.match?(%r{\Ahttps?://}i)
stripped
end
# Decide whether a frontmatter scalar should be treated as truthy.
#
# Accepts native booleans as well as the common string aliases
# +"true"+, +"yes"+, +"1"+, +"on"+ (case-insensitive) so operators do
# not have to remember YAML's exact boolean coercion rules.
#
# @param value [Object] candidate value.
# @return [Boolean] +true+ when the value should map to truth.
def truthy_frontmatter?(value)
return value if value == true || value == false
normalised = value.to_s.strip.downcase
%w[true yes 1 on].include?(normalised)
end
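The boolean coercion above can be exercised in isolation; this copy is behaviour-identical:

```ruby
# Accept native booleans plus the common string aliases.
def truthy_frontmatter?(value)
  return value if value == true || value == false
  %w[true yes 1 on].include?(value.to_s.strip.downcase)
end

truthy_frontmatter?(true)   # => true
truthy_frontmatter?("Yes")  # => true  (case-insensitive)
truthy_frontmatter?(1)      # => true  (Integer#to_s gives "1")
truthy_frontmatter?("off")  # => false
truthy_frontmatter?(nil)    # => false (nil.to_s is "")
```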
# Read up to {FRONTMATTER_PROBE_BYTES} of the file at +path+ for
# frontmatter inspection during directory scans. Returns an empty
# string for missing or unreadable files so the caller can treat
# them as having no frontmatter.
#
# The result is force-encoded to UTF-8 because YAML parsers refuse
# input declared as binary; for files that are already UTF-8 this
# is a no-op, and for files in another encoding it surfaces a
# decoding error to the YAML parser instead of silently producing
# gibberish that happens to match the frontmatter delimiters.
#
# @param path [String] absolute path to the markdown source.
# @return [String] candidate frontmatter prefix.
def read_frontmatter_probe(path)
return "" unless File.file?(path) && File.readable?(path)
raw = File.open(path, "r:UTF-8") { |file| file.read(FRONTMATTER_PROBE_BYTES) || "" }
raw.force_encoding(Encoding::UTF_8)
rescue SystemCallError
""
end
# Apply parsed frontmatter values to a {PageEntry}, returning a new
# struct that preserves filename-derived defaults whenever a key is
# absent or blank.
#
# {parse_frontmatter} has already dropped blank string values for
# +title+/+description+/+image+, so this method can rely on truthy
# checks rather than re-validating each key.
#
# @param entry [PageEntry] base entry parsed from the filename.
# @param frontmatter [Hash] permitted frontmatter values.
# @return [PageEntry] enriched entry.
def apply_frontmatter(entry, frontmatter)
return entry unless entry
PageEntry.new(
sort_key: entry.sort_key,
slug: entry.slug,
title: frontmatter["title"] || entry.title,
path: entry.path,
description: frontmatter["description"],
image: frontmatter["image"],
noindex: frontmatter["noindex"] == true,
)
end
# Scan the pages directory and return a sorted list of page entries.
#
# The directory is read once per call; results are not cached here (see
@@ -116,12 +295,14 @@ module PotatoMesh
entry = parse_page_filename(basename)
next unless entry
base_entry = PageEntry.new(
sort_key: entry.sort_key,
slug: entry.slug,
title: entry.title,
path: path,
)
frontmatter = parse_frontmatter(read_frontmatter_probe(path))
apply_frontmatter(base_entry, frontmatter)
end
entries.sort_by!(&:sort_key)
@@ -174,7 +355,8 @@ module PotatoMesh
return nil if size > PotatoMesh::Config.max_page_file_bytes
content = File.read(page_entry.path, encoding: "utf-8")
body = strip_frontmatter(content)
raw_html = Kramdown::Document.new(body, **KRAMDOWN_OPTIONS).to_html
strip_unsafe_html(raw_html)
rescue SystemCallError
nil
@@ -26,7 +26,12 @@ module PotatoMesh
# @return [Array<Hash>] compacted message rows safe for API responses.
def query_messages(limit, node_ref: nil, include_encrypted: false, since: 0, protocol: nil)
limit = coerce_query_limit(limit)
now = Time.now.to_i
# Default the chat feed to the same seven-day window the dashboard uses
# for the node table; per-id lookups widen to twenty-eight days so
# historical conversation context remains reachable on demand.
since_floor = node_ref ? now - PotatoMesh::Config.four_weeks_seconds : now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: since_floor)
db = open_database(readonly: true)
db.results_as_hash = true
params = []
@@ -30,8 +30,9 @@ module PotatoMesh
params = []
where_clauses = []
now = Time.now.to_i
# Bulk positions follow the seven-day default; per-id lookups widen
# to twenty-eight days for backfill of historical track data.
since_floor = node_ref ? now - PotatoMesh::Config.four_weeks_seconds : now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: since_floor)
where_clauses << "COALESCE(rx_time, position_time, 0) >= ?"
params << since_threshold
@@ -91,9 +92,11 @@ module PotatoMesh
params = []
where_clauses = []
now = Time.now.to_i
# Neighbor relationships are reported sporadically and are easy to
# lose between scrapes, so use the twenty-eight-day extended window
# for both bulk and per-id queries.
min_rx_time = now - PotatoMesh::Config.four_weeks_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << since_threshold
@@ -141,7 +144,7 @@ module PotatoMesh
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.four_weeks_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << since_threshold
@@ -141,8 +141,10 @@ module PotatoMesh
db = open_database(readonly: true)
db.results_as_hash = true
now = Time.now.to_i
# Bulk listings stay on the seven-day window so the dashboard does not
# render stale nodes; per-id lookups widen to twenty-eight days so
# callers can backfill older records that fall outside the bulk floor.
since_floor = node_ref ? now - PotatoMesh::Config.four_weeks_seconds : now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: since_floor)
params = []
where_clauses = []
@@ -227,7 +229,10 @@ module PotatoMesh
db = open_database(readonly: true)
db.results_as_hash = true
now = Time.now.to_i
# Ingestor heartbeats are sparse (one per ingestor per cycle) so widen
# the rolling window to twenty-eight days to keep slow-tick ingestors
# visible in the federation overview.
cutoff = now - PotatoMesh::Config.four_weeks_seconds
since_threshold = normalize_since_threshold(since, floor: cutoff)
where_clauses = ["last_seen_time >= ?"]
params = [since_threshold]
@@ -30,8 +30,9 @@ module PotatoMesh
params = []
where_clauses = []
now = Time.now.to_i
# Bulk telemetry follows the seven-day default; per-id lookups widen
# to twenty-eight days so historical chart data remains reachable.
since_floor = node_ref ? now - PotatoMesh::Config.four_weeks_seconds : now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: since_floor)
where_clauses << "COALESCE(rx_time, telemetry_time, 0) >= ?"
params << since_threshold
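All of the floors above follow one selection rule. Here is a sketch, assuming `normalize_since_threshold` simply clamps the caller-supplied `since` up to the floor (the helper itself is not shown in this diff and may validate more):

```ruby
WEEK_SECONDS = 7 * 24 * 60 * 60        # 604_800
FOUR_WEEKS_SECONDS = 28 * 24 * 60 * 60 # 2_419_200

# Assumed clamp behaviour: never read further back than the floor.
def normalize_since_threshold(since, floor:)
  [since.to_i, floor].max
end

def since_floor_for(now, node_ref)
  # Bulk listings use the seven-day default; per-id lookups widen to
  # twenty-eight days so historical records stay reachable.
  node_ref ? now - FOUR_WEEKS_SECONDS : now - WEEK_SECONDS
end

now = Time.now.to_i
normalize_since_threshold(0, floor: since_floor_for(now, nil)) == now - WEEK_SECONDS
# => true: a "give me everything" request is clamped to seven days
```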
+15 -3
@@ -390,9 +390,21 @@ module PotatoMesh
halt 404 unless federation_enabled?
content_type :json
# The federation banner is rendered on every page navigation, which
# caused this endpoint to fire ~7 times in a few seconds while the
# user clicked through the site. Cache the response (including the
# self-record refresh) for a short window so navigation feels free
# without delaying signature/peer updates by more than a few
# seconds. The dedicated announcer thread keeps the underlying
# record fresh on its own cadence regardless of cache hits.
priv = private_mode? ? 1 : 0
cached = PotatoMesh::App::ApiCache.fetch("api:instances:#{priv}", ttl_seconds: 30) do
ensure_self_instance_record!
JSON.generate(load_instances_for_api)
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end
end
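The `ApiCache` internals are not part of this diff, so the shape below is an assumption reconstructed from its call sites: `fetch` yields on a miss, returns a `{ value:, etag: }` hash, and `invalidate_prefix` drops matching keys. A hypothetical stand-in:

```ruby
require "digest"
require "monitor"

# Hypothetical stand-in for PotatoMesh::App::ApiCache (assumed API).
module ApiCache
  @store = {}
  @lock = Monitor.new

  def self.fetch(key, ttl_seconds:)
    @lock.synchronize do
      entry = @store[key]
      return entry[:payload] if entry && Time.now.to_i < entry[:expires_at]
      value = yield
      payload = { value: value, etag: Digest::SHA256.hexdigest(value) }
      @store[key] = { payload: payload, expires_at: Time.now.to_i + ttl_seconds }
      payload
    end
  end

  def self.invalidate_prefix(prefix)
    @lock.synchronize { @store.delete_if { |key, _| key.start_with?(prefix) } }
  end
end

first  = ApiCache.fetch("api:instances:0", ttl_seconds: 30) { "[]" }
second = ApiCache.fetch("api:instances:0", ttl_seconds: 30) { raise "not recomputed" }
first.equal?(second) # => true: served from cache within the TTL
```

Invalidating on peer registration, as the second hunk does, keeps the worst-case staleness at one dashboard refresh rather than a full TTL.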
@@ -315,6 +315,10 @@ module PotatoMesh
db = open_database
upsert_instance_record(db, attributes, signature)
# Drop the cached /api/instances payload so the new peer becomes
# visible on the next dashboard refresh instead of after the TTL
# naturally expires.
PotatoMesh::App::ApiCache.invalidate_prefix("api:instances:")
enqueued = enqueue_federation_crawl(
attributes[:domain],
per_response_limit: PotatoMesh::Config.federation_max_instances_per_response,
+257 -3
@@ -19,6 +19,18 @@ module PotatoMesh
module Routes
module Root
module Helpers
# Map of XML predefined entities used by {#xml_escape}.
XML_ESCAPE_REPLACEMENTS = {
"&" => "&amp;",
"<" => "&lt;",
">" => "&gt;",
'"' => "&quot;",
"'" => "&apos;",
}.freeze
# Pattern matching any XML metacharacter that requires escaping.
XML_ESCAPE_PATTERN = Regexp.union(XML_ESCAPE_REPLACEMENTS.keys).freeze
# Return the fixed dark theme identifier. Light mode is no longer
# supported; theme selection and cookie persistence have been removed.
#
@@ -31,19 +43,28 @@ module PotatoMesh
#
# @param template [Symbol] identifier for the ERB template.
# @param view_mode [Symbol, String] logical view identifier for CSS hooks.
# @param view_meta [Symbol, String, nil] meta-tag selector. Defaults to
# +view_mode+ so most callers can omit it; pass an explicit value
# when the layout view differs from the meta archetype (e.g. the
# dynamic +/pages/:slug+ routes whose view_mode is per-slug).
# @param meta_overrides [Hash, nil] explicit replacements for
# individual meta values (title, description, image, noindex).
# @param extra_locals [Hash] additional locals merged into the rendering context.
# @return [String] rendered ERB output.
def render_root_view(template, view_mode: :dashboard, view_meta: nil, meta_overrides: nil, extra_locals: {})
view_mode_sym = view_mode.respond_to?(:to_sym) ? view_mode.to_sym : view_mode
view_meta_sym = view_meta.nil? ? view_mode_sym : (view_meta.respond_to?(:to_sym) ? view_meta.to_sym : view_meta)
meta = meta_configuration(view: view_meta_sym, overrides: meta_overrides)
config = frontend_app_config
theme = resolve_initial_theme
base_locals = {
site_name: meta[:name],
meta_title: meta[:title],
meta_name: meta[:name],
meta_description: meta[:description],
meta_image_url: meta[:image],
meta_noindex: meta[:noindex] == true,
channel: sanitized_channel,
frequency: sanitized_frequency,
map_center_lat: PotatoMesh::Config.map_center_lat,
@@ -135,6 +156,203 @@ module PotatoMesh
"position" => position,
}
end
# Resolve the canonical absolute base URL for the running request.
# Prefers an operator-supplied override (+INSTANCE_DOMAIN+) so
# generated absolute URLs match the public-facing hostname, falling
# back to the request's own +base_url+ for development.
#
# @return [String] base URL (scheme + authority) without trailing slash.
def public_base_url
domain = string_or_nil(app_constant(:INSTANCE_DOMAIN))
return request.base_url unless domain
scheme = request.scheme || "https"
"#{scheme}://#{domain}"
end
# Construct the OG image URL referenced from the layout. Operators
# may provide an explicit override via +OG_IMAGE_URL+; otherwise
# the runtime-generated +/og-image.png+ URL is returned.
#
# The override is rejected unless it carries an +http(s)+ scheme.
# +data:+ and +javascript:+ URIs do not render in any social
# platform's link preview and would only serve as a content
# security foot-gun, so they are silently dropped in favour of
# the runtime URL.
#
# @return [String] absolute URL to the social preview image.
def og_image_url
override = string_or_nil(PotatoMesh::Config.og_image_url)
return override if override && override.match?(%r{\Ahttps?://}i)
"#{public_base_url}/og-image.png"
end
# Build the title segment for the node detail view from the data
# already resolved by {build_node_detail_reference}.
#
# @param short_name [String, nil] sanitized short identifier.
# @param long_name [String, nil] sanitized long name.
# @param canonical_id [String, nil] canonical "!hex" identifier.
# @return [String] human-friendly node label.
def node_detail_title_label(short_name:, long_name:, canonical_id:)
short = string_or_nil(short_name)
long = string_or_nil(long_name)
return "#{short} (#{long})" if short && long
return short if short
return long if long
return "Node #{canonical_id}" if canonical_id
"Node detail"
end
# Compose meta overrides for the +/nodes/:id+ view.
#
# @param short_name [String, nil] sanitized short identifier.
# @param long_name [String, nil] sanitized long name.
# @param canonical_id [String, nil] canonical "!hex" identifier.
# @return [Hash] override hash for {meta_configuration}.
def node_detail_meta_overrides(short_name:, long_name:, canonical_id:)
site = sanitized_site_name
label = node_detail_title_label(
short_name: short_name,
long_name: long_name,
canonical_id: canonical_id,
)
description_subject = string_or_nil(short_name) || string_or_nil(long_name) ||
canonical_id || "this node"
{
title: site && !site.empty? ? "#{label} · #{site}" : label,
description: "Telemetry, position history, and live status for node #{description_subject} on #{site}.",
}
end
# Compose meta overrides for the +/pages/:slug+ view from a static
# page entry plus any frontmatter the operator defined.
#
# Only keys that carry meaningful values are included so the
# downstream {meta_configuration} call is not asked to filter
# +nil+ values out of an otherwise sparse hash.
#
# @param page [PotatoMesh::App::Pages::PageEntry] resolved page entry.
# @return [Hash] override hash for {meta_configuration}.
def static_page_meta_overrides(page)
site = sanitized_site_name
title_segment = string_or_nil(page&.title) || ""
composed_title = if !title_segment.empty? && site && !site.empty?
"#{title_segment} · #{site}"
elsif !title_segment.empty?
title_segment
else
site
end
overrides = { title: composed_title }
description = string_or_nil(page&.description)
overrides[:description] = description if description
image = string_or_nil(page&.image)
overrides[:image] = image if image
overrides[:noindex] = true if page&.noindex == true
overrides
end
# Render the +robots.txt+ body honoring private-mode preferences.
# Private deployments emit a blanket disallow; public deployments
# whitelist the dashboard while disallowing instrumentation paths.
#
# @param sitemap_url [String] absolute URL of the public sitemap.
# @return [String] +robots.txt+ payload, terminated with a newline.
def build_robots_txt(sitemap_url)
if private_mode?
"User-agent: *\nDisallow: /\n"
else
<<~TXT
User-agent: *
Disallow: /metrics
Disallow: /api/
Sitemap: #{sitemap_url}
TXT
end
end
# Build the URL list emitted by +/sitemap.xml+ for a public
# deployment. Each entry is a hash with +:loc+ and an optional
# +:lastmod+ / +:changefreq+ pair.
#
# +lastmod+ is intentionally omitted for top-level dashboard
# routes. The data behind those views changes continuously, so
# advertising +Time.now+ on every crawl trains crawlers to ignore
# the field (Google explicitly discourages noisy +lastmod+
# values). Static pages keep a meaningful +lastmod+ derived from
# +File.mtime+.
#
# The handler at +/sitemap.xml+ already 404s in private mode, so
# this method does not need to filter out the +/chat+ entry; the
# whole sitemap is unreachable whenever chat is hidden.
#
# @param base_url [String] absolute base URL prefix.
# @return [Array<Hash>] ordered list of sitemap entries.
def build_sitemap_entries(base_url)
entries = []
entries << { loc: "#{base_url}/", changefreq: "daily" }
entries << { loc: "#{base_url}/map", changefreq: "daily" }
entries << { loc: "#{base_url}/chat", changefreq: "daily" }
entries << { loc: "#{base_url}/charts", changefreq: "daily" }
entries << { loc: "#{base_url}/nodes", changefreq: "daily" }
entries << { loc: "#{base_url}/federation", changefreq: "weekly" } if federation_enabled?
PotatoMesh::App::Pages.static_pages.each do |page|
next if page.noindex
next unless page.path
lastmod = begin
File.mtime(page.path).utc.strftime("%Y-%m-%d")
rescue SystemCallError
nil
end
entry = { loc: "#{base_url}/pages/#{page.slug}", changefreq: "weekly" }
entry[:lastmod] = lastmod if lastmod
entries << entry
end
entries
end
# Escape a string for inclusion as XML character data.
#
# Replaces the five XML predefined entities in a single pass.
# Used by the sitemap renderer instead of
# {Rack::Utils.escape_html} so apostrophes become the canonical
# +&apos;+ entity rather than an HTML-style numeric character
# reference.
#
# @param value [Object] input fragment; coerced to a string.
# @return [String] XML-safe representation.
def xml_escape(value)
value.to_s.gsub(XML_ESCAPE_PATTERN, XML_ESCAPE_REPLACEMENTS)
end
# Render a sitemap entry list as +urlset+ XML.
#
# @param entries [Array<Hash>] entries produced by
# {build_sitemap_entries}.
# @return [String] XML document body.
def render_sitemap_xml(entries)
lines = [%(<?xml version="1.0" encoding="UTF-8"?>),
%(<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">)]
entries.each do |entry|
lines << " <url>"
lines << " <loc>#{xml_escape(entry[:loc])}</loc>"
lines << " <lastmod>#{xml_escape(entry[:lastmod])}</lastmod>" if entry[:lastmod]
lines << " <changefreq>#{xml_escape(entry[:changefreq])}</changefreq>" if entry[:changefreq]
lines << " </url>"
end
lines << "</urlset>"
lines.join("\n") + "\n"
end
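Putting the two helpers together, a stand-alone copy of the escape logic shows why the hash-based `gsub` is safe against double-escaping:

```ruby
XML_ESCAPE_REPLACEMENTS = {
  "&" => "&amp;", "<" => "&lt;", ">" => "&gt;",
  '"' => "&quot;", "'" => "&apos;",
}.freeze
XML_ESCAPE_PATTERN = Regexp.union(XML_ESCAPE_REPLACEMENTS.keys).freeze

# String#gsub with a Hash replaces every match in one pass, so the
# "&" inside an inserted "&lt;" is never itself re-escaped.
def xml_escape(value)
  value.to_s.gsub(XML_ESCAPE_PATTERN, XML_ESCAPE_REPLACEMENTS)
end

xml_escape(%(Tom & "Jerry's" <mesh>))
# => "Tom &amp; &quot;Jerry&apos;s&quot; &lt;mesh&gt;"
```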
end
def self.registered(app)
@@ -160,6 +378,36 @@ module PotatoMesh
send_file path
end
app.get "/robots.txt" do
content_type "text/plain"
cache_control :public, max_age: 3600
build_robots_txt("#{public_base_url}/sitemap.xml")
end
app.get "/sitemap.xml" do
halt 404, "Not Found" if private_mode?
content_type "application/xml"
cache_control :public, max_age: 3600
render_sitemap_xml(build_sitemap_entries(public_base_url))
end
app.get "/og-image.png" do
override = string_or_nil(PotatoMesh::Config.og_image_url)
redirect override, 302 if override && override.match?(%r{\Ahttps?://}i)
begin
payload = PotatoMesh::OgImage.serve(base_url: public_base_url)
rescue PotatoMesh::OgImage::CaptureError
halt 503, "Preview unavailable"
end
content_type "image/png"
cache_control :public, max_age: payload[:max_age]
last_modified payload[:last_modified] if payload[:last_modified]
payload[:bytes]
end
app.get "/" do
render_root_view(:index, view_mode: :dashboard)
end
@@ -194,6 +442,7 @@ module PotatoMesh
render_root_view(
:page,
view_mode: :"page_#{slug}",
meta_overrides: static_page_meta_overrides(page),
extra_locals: {
page_title: page.title,
page_content_html: page_html,
@@ -215,6 +464,11 @@ module PotatoMesh
render_root_view(
:node_detail,
view_mode: :node_detail,
meta_overrides: node_detail_meta_overrides(
short_name: short_name,
long_name: long_name,
canonical_id: canonical_id,
),
extra_locals: {
node_reference_json: JSON.generate(reject_nil_values(reference_payload)),
node_page_short_name: short_name,
+102 -4
@@ -47,6 +47,12 @@ module PotatoMesh
DEFAULT_FEDERATION_CRAWL_COOLDOWN_SECONDS = 300
DEFAULT_INITIAL_FEDERATION_DELAY_SECONDS = 2
DEFAULT_FEDERATION_SEED_DOMAINS = %w[potatomesh.net potatomesh.jmrp.io mesh.qrp.ro].freeze
DEFAULT_OG_IMAGE_TTL_SECONDS = 3_600
DEFAULT_OG_IMAGE_VIEWPORT_WIDTH = 1_200
DEFAULT_OG_IMAGE_VIEWPORT_HEIGHT = 630
DEFAULT_OG_IMAGE_NAVIGATION_TIMEOUT = 15
DEFAULT_OG_IMAGE_NETWORK_IDLE_DURATION = 1.5
DEFAULT_OG_IMAGE_NETWORK_IDLE_TIMEOUT = 8
# Retrieve the configured API token used for authenticated requests.
#
@@ -175,17 +181,30 @@ module PotatoMesh
DEFAULT_DB_BUSY_RETRY_DELAY
end
# Default rolling retention window in seconds.
#
# Used as the freshness floor for every "general" bulk read endpoint —
# nodes, messages, positions, telemetry, and the federation instance
# catalog — and as the freshness floor for federation/dashboard activity
# counters. Exceptions (sparse data) live on
# {#four_weeks_seconds}; per-id lookups also widen to the extended
# window so callers can backfill historical context for a single node.
#
# @return [Integer] seconds in seven days.
def week_seconds
7 * 24 * 60 * 60
end
# Extended rolling retention window in seconds.
#
# Used as the default freshness floor for endpoints whose data is more
# fragile (traces, neighbors, ingestors) and as the floor for every
# +/api/.../:id+ lookup so callers can backfill historical records that
# would otherwise fall outside the seven-day default applied to bulk
# endpoints.
#
# @return [Integer] seconds in twenty-eight days.
def four_weeks_seconds
28 * 24 * 60 * 60
end
@@ -207,7 +226,7 @@ module PotatoMesh
#
# @return [String] semantic version identifier.
def version_fallback
"0.6.2"
"0.6.3"
end
# Default refresh interval for frontend polling routines.
@@ -593,6 +612,85 @@ module PotatoMesh
fetch_string("CONNECTION", "/dev/ttyACM0")
end
# Optional absolute URL to use for the social share preview image.
#
# When set, the layout uses this URL verbatim for +og:image+ and
# +twitter:image+ and the runtime capture pipeline is skipped. Operators
# who do not want to ship Chromium in their container, or who prefer to
# host their own preview image on a CDN, can point at any reachable
# +https://+ URL.
#
# @return [String, nil] override URL or +nil+ when unset.
def og_image_url
fetch_string("OG_IMAGE_URL", nil)
end
# Cache lifetime for runtime-generated +/og-image.png+ responses, in
# seconds. Successful captures are stored on disk and reused until the
# TTL elapses; the next request after expiry refreshes the cache
# synchronously while holding a process-wide mutex so concurrent
# requesters serialise rather than spawning multiple browsers.
#
# @return [Integer] positive cache duration in seconds.
def og_image_ttl_seconds
fetch_positive_integer("OG_IMAGE_TTL_SECONDS", DEFAULT_OG_IMAGE_TTL_SECONDS)
end
# Viewport width used for the headless browser preview capture.
#
# @return [Integer] viewport width in CSS pixels.
def og_image_viewport_width
DEFAULT_OG_IMAGE_VIEWPORT_WIDTH
end
# Viewport height used for the headless browser preview capture.
#
# @return [Integer] viewport height in CSS pixels.
def og_image_viewport_height
DEFAULT_OG_IMAGE_VIEWPORT_HEIGHT
end
# Maximum time the headless browser may spend navigating to the
# capture target before the request is abandoned.
#
# @return [Integer] navigation timeout in seconds.
def og_image_navigation_timeout
DEFAULT_OG_IMAGE_NAVIGATION_TIMEOUT
end
# Continuous duration of network silence required before the screenshot
# is taken. Acts as a heuristic for "page settled".
#
# @return [Float] idle window duration in seconds.
def og_image_network_idle_duration
DEFAULT_OG_IMAGE_NETWORK_IDLE_DURATION
end
# Maximum time spent waiting for {og_image_network_idle_duration} of
# silence before the capture proceeds anyway.
#
# @return [Integer] idle wait ceiling in seconds.
def og_image_network_idle_timeout
DEFAULT_OG_IMAGE_NETWORK_IDLE_TIMEOUT
end
# Filesystem path used to cache the most recent runtime-generated
# preview image. The directory is created lazily on first capture.
#
# @return [String] absolute cache file path.
def og_image_cache_path
File.join(data_directory, "og-image.png")
end
# Filesystem path of the bundled fallback preview image served when no
# cached capture is available and the runtime generator is unable to
# produce one (e.g. Chromium missing, transient navigation failure).
#
# @return [String] absolute path to the packaged default PNG.
def og_image_default_path
File.join(web_root, "public", "og-image-default.png")
end
# Determine the best URL to represent the configured contact link.
#
# @return [String, nil] absolute URL when derivable, otherwise nil.
+141 -3
@@ -66,17 +66,155 @@ module PotatoMesh
sentences.join(" ")
end
# Return the human-readable label associated with a logical view name.
#
# The label appears as the first segment of {.view_title} (e.g. the
# +"Map"+ portion of +"Map · PotatoMesh"+) and is omitted for views that
# should reuse the bare site name (such as the dashboard or detail pages
# whose title is built from per-record data).
#
# @param view [Symbol, String, nil] logical view identifier.
# @return [String, nil] navigation label or +nil+ when no label applies.
def view_label(view)
return nil if view.nil?
symbol = view.respond_to?(:to_sym) ? view.to_sym : view
{
map: "Map",
chat: "Chat",
charts: "Charts",
nodes: "Nodes",
federation: "Federation",
}[symbol]
end
# Compose the per-view document title using the +"Label · Site"+ pattern.
#
# @param view [Symbol, String, nil] logical view identifier.
# @param site [String] sanitized site name suffix.
# @return [String, nil] composed title or +nil+ when no view-specific
# label exists for the supplied identifier.
def view_title(view, site)
label = view_label(view)
return nil unless label
return label if site.nil? || site.empty?
"#{label} · #{site}"
end
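A stand-alone copy of the label/title pair above, with the fallback behaviour spelled out:

```ruby
VIEW_LABELS = {
  map: "Map", chat: "Chat", charts: "Charts",
  nodes: "Nodes", federation: "Federation",
}.freeze

# Views without a label (e.g. :dashboard) return nil so the caller
# falls back to the bare site name.
def view_title(view, site)
  label = VIEW_LABELS[view.respond_to?(:to_sym) ? view.to_sym : view]
  return nil unless label
  return label if site.nil? || site.empty?
  "#{label} · #{site}"
end

view_title(:map, "PotatoMesh")       # => "Map · PotatoMesh"
view_title(:dashboard, "PotatoMesh") # => nil
view_title("chat", "")               # => "Chat"
```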
# Build the per-view description string used for the +<meta name="description">+
# and Open Graph descriptions.
#
# @param view [Symbol, String, nil] logical view identifier.
# @param private_mode [Boolean] whether private mode is enabled. Drives
# suppression of chat-specific copy and other federation-aware text.
# @return [String, nil] description text or +nil+ when the view should
# inherit the global description.
def view_description(view, private_mode:)
return nil if view.nil?
symbol = view.respond_to?(:to_sym) ? view.to_sym : view
site = Sanitizer.sanitized_site_name
channel = Sanitizer.sanitized_channel
frequency = Sanitizer.sanitized_frequency
case symbol
when :map
map_view_description(site, channel, frequency)
when :chat
chat_view_description(site, channel, private_mode: private_mode)
when :charts
"Network activity charts for #{site}: nodes online, traffic, and signal quality."
when :nodes
"All Meshtastic and MeshCore nodes seen on #{site}, with last-heard time and metadata."
when :federation
"Federated PotatoMesh instances sharing node and message data with #{site}."
end
end
# Compose the description sentence used by the +/map+ view.
#
# @param site [String] sanitized site name.
# @param channel [String] sanitized channel label.
# @param frequency [String] sanitized frequency identifier.
# @return [String] descriptive sentence with the available channel and
# frequency suffixes.
def map_view_description(site, channel, frequency)
lead = "Live coverage map of #{site}"
lead += if !channel.empty? && !frequency.empty?
" on #{channel} (#{frequency})"
elsif !channel.empty?
" on #{channel}"
elsif !frequency.empty?
" tuned to #{frequency}"
else
""
end
"#{lead} — see node positions in real time."
end
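The channel/frequency branches produce four possible shapes; a stand-alone copy makes them easy to exercise:

```ruby
# Stand-alone copy of the map description builder above.
def map_view_description(site, channel, frequency)
  lead = "Live coverage map of #{site}"
  lead += if !channel.empty? && !frequency.empty?
            " on #{channel} (#{frequency})"
          elsif !channel.empty?
            " on #{channel}"
          elsif !frequency.empty?
            " tuned to #{frequency}"
          else
            ""
          end
  "#{lead} — see node positions in real time."
end

map_view_description("PotatoMesh", "#MediumFast", "868MHz")
# => "Live coverage map of PotatoMesh on #MediumFast (868MHz) — see node positions in real time."
map_view_description("PotatoMesh", "", "868MHz")
# => "Live coverage map of PotatoMesh tuned to 868MHz — see node positions in real time."
```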
# Compose the description sentence used by the +/chat+ view.
#
# @param site [String] sanitized site name.
# @param channel [String] sanitized channel label.
# @param private_mode [Boolean] whether the instance is running in
# private mode; chat is hidden for private deployments.
# @return [String, nil] description copy or +nil+ when chat is disabled.
def chat_view_description(site, channel, private_mode:)
return nil if private_mode
if channel.empty?
"Recent mesh chat traffic on #{site}."
else
"Recent mesh chat traffic on #{channel} for #{site}."
end
end
# Build a hash of meta configuration values used by templating layers.
#
# @param private_mode [Boolean] whether private mode is enabled.
# @param view [Symbol, String, nil] logical view identifier used to derive
# per-page title and description copy. When +nil+, the dashboard
# defaults are returned.
# @param overrides [Hash, nil] explicit values that take precedence over
# both view-specific and global defaults. Recognised keys: +:title+,
# +:description+, +:image+, +:noindex+.
# @return [Hash] structured metadata for templates.
def configuration(private_mode:, view: nil, overrides: nil)
site = Sanitizer.sanitized_site_name
base_description = description(private_mode: private_mode)
override_hash = overrides.is_a?(Hash) ? overrides : {}
override_title = string_or_nil(override_hash[:title])
override_description = string_or_nil(override_hash[:description])
override_image = string_or_nil(override_hash[:image])
override_noindex = override_hash[:noindex] == true
resolved_title = override_title || view_title(view, site) || site
resolved_description = override_description ||
view_description(view, private_mode: private_mode) ||
base_description
{
title: resolved_title,
name: site,
description: resolved_description,
image: override_image,
noindex: override_noindex,
}.freeze
end
# Coerce arbitrary input into a trimmed non-empty string or +nil+.
#
# @param value [Object, nil] candidate value.
# @return [String, nil] non-empty string or +nil+ when the input is
# blank, missing, or coerces to an empty value.
def string_or_nil(value)
return nil if value.nil?
str = value.is_a?(String) ? value : value.to_s
trimmed = str.strip
trimmed.empty? ? nil : trimmed
end
end
end
+364
@@ -0,0 +1,364 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "fileutils"
require_relative "config"
require_relative "logging"
module PotatoMesh
# Runtime generator and cache layer for the Open Graph / Twitter Card
# preview image served at +/og-image.png+.
#
# The module is responsible for:
#
# * Producing a 1200×630 PNG screenshot of the dashboard via
# {Ferrum} (Chrome DevTools Protocol).
# * Caching successful captures on disk so that subsequent crawler hits
# are cheap.
# * Falling back to the previous cache (or the bundled default PNG) when
# a capture cannot be performed — for example, because Chromium is
# unavailable in the runtime image.
#
# The capture step is encapsulated in {.invoke_capture} so test suites
# can substitute it with {.capture_strategy=} and exercise the cache and
# response paths without launching a real browser.
module OgImage
module_function
# Raised when the capture pipeline could not produce a screenshot for
# any reason (Chromium missing, navigation timeout, transient network
# failure, etc.). Callers translate it into a fallback response.
class CaptureError < StandardError; end
# Minimum interval between capture attempts after a failure. Prevents a
# tight loop of relaunching Chromium when a persistent error is in play
# (e.g. disk-full breaks {.write_cache} or the browser binary is
# missing). Subsequent crawler hits inside the window are answered from
# the cache or the bundled default PNG without re-attempting.
CAPTURE_FAILURE_BACKOFF_SECONDS = 60
# Module-level mutex guarding capture invocations to prevent a
# thundering-herd of concurrent crawler requests from spawning multiple
# browsers. Created once when the module loads.
@capture_mutex = Mutex.new
# Optional override for the capture function. When set, {.invoke_capture}
# delegates to this callable instead of {.default_capture}; tests use
# this hook to inject deterministic byte payloads.
@capture_strategy = nil
# Timestamp of the last failed capture attempt. Used by
# {.in_failure_backoff?} to throttle retries when capture or cache
# writes are persistently failing.
@last_failure_at = nil
# Produce a response payload for the +/og-image.png+ route.
#
# @param base_url [String] absolute URL of the running application, used
# as the navigation target for the headless browser.
# @return [Hash] hash with +:bytes+ (binary PNG payload),
# +:last_modified+ ({Time}), and +:max_age+ (Integer seconds for the
# Cache-Control header).
def serve(base_url:)
bytes, last_modified = resolve_image_bytes(base_url: base_url)
{
bytes: bytes,
last_modified: last_modified,
max_age: PotatoMesh::Config.og_image_ttl_seconds,
}
end
# Resolve the freshest image bytes available, capturing a new
# screenshot when the cache is empty or stale.
#
# @param base_url [String] dashboard URL captured by Ferrum.
# @return [Array(String, Time)] PNG payload and its last-modified
# timestamp.
def resolve_image_bytes(base_url:)
cache = read_cache
return [cache[:bytes], cache[:mtime]] if cache && cache_fresh?(cache[:mtime])
refreshed = attempt_refresh(base_url)
return refreshed if refreshed
return [cache[:bytes], cache[:mtime]] if cache
default = read_default
return default if default
raise CaptureError, "no preview image available"
end
# Try to capture a fresh screenshot, returning the new payload on
# success and +nil+ when the capture failed, another thread is already
# running one, or the backoff window from a recent failure is still
# active.
#
# @param base_url [String] dashboard URL captured by Ferrum.
# @return [Array(String, Time), nil] new bytes and timestamp, or +nil+.
def attempt_refresh(base_url)
return nil if in_failure_backoff?
acquired = @capture_mutex.try_lock
return nil unless acquired
begin
bytes = invoke_capture(base_url)
write_succeeded = write_cache(bytes)
@last_failure_at = write_succeeded ? nil : Time.now
[bytes, Time.now]
rescue StandardError => e
log_capture_error(e)
@last_failure_at = Time.now
nil
ensure
@capture_mutex.unlock if acquired
end
end
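The `try_lock` guard above implements a single-flight pattern: one caller runs the expensive capture while concurrent callers return `nil` immediately and fall back to the cache. A minimal standalone sketch of that shape (the `:refreshed` symbol stands in for the capture-and-cache work):

```ruby
mutex = Mutex.new

refresh = lambda do
  acquired = mutex.try_lock
  return nil unless acquired # another caller is already capturing
  begin
    :refreshed               # stand-in for invoke_capture + write_cache
  ensure
    mutex.unlock
  end
end

first = refresh.call                            # => :refreshed (lock was free)
mutex.lock                                      # simulate a capture in flight
second = Thread.new { refresh.call }.join.value # => nil (try_lock fails)
mutex.unlock
```

Using `try_lock` instead of `lock` is what keeps crawler bursts from queuing up behind a single slow Chromium launch.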
# Determine whether a recent failure should suppress another capture
# attempt. The backoff is reset by the first successful capture and
# cache write.
#
# @return [Boolean] +true+ when capture attempts should be skipped.
def in_failure_backoff?
return false unless @last_failure_at
(Time.now - @last_failure_at) < CAPTURE_FAILURE_BACKOFF_SECONDS
end
# Invoke either the configured {.capture_strategy} or
# {.default_capture} to produce PNG bytes.
#
# @param base_url [String] navigation target.
# @return [String] binary PNG payload.
def invoke_capture(base_url)
strategy = @capture_strategy || method(:default_capture)
strategy.call(base_url)
end
# Default capture implementation backed by the +ferrum+ gem.
#
# The browser is launched with the configured viewport, navigated to
# +base_url+, and given a brief idle window before the screenshot is
# taken. Errors raised by Ferrum are wrapped in {CaptureError} so the
# serve path can fall back gracefully.
#
# @param base_url [String] navigation target.
# @return [String] binary PNG payload.
# @raise [CaptureError] when the capture cannot be performed.
def default_capture(base_url)
browser = build_browser
begin
browser.goto(base_url.to_s)
wait_for_settled(browser)
bytes = browser.screenshot(format: "png", encoding: :binary, full: false)
bytes.is_a?(String) ? bytes : bytes.to_s
ensure
safely_quit_browser(browser)
end
rescue LoadError => e
raise CaptureError, "ferrum not installed: #{e.message}"
rescue StandardError => e
raise CaptureError, "capture failed: #{e.message}"
end
# Construct a fresh Ferrum browser instance using configuration values.
# Loads the gem lazily so importing this module does not pull Chromium
# into environments that never need it.
#
# @return [Object] Ferrum::Browser instance.
def build_browser
require "ferrum"
Ferrum::Browser.new(browser_options)
end
# Build the option hash passed to +Ferrum::Browser.new+. Extracted as
# a separate method so tests can verify the dimensions without
# launching the browser.
#
# The +--no-sandbox+ flag is required to launch Chromium as a non-root
# user inside an Alpine container without the kernel SETUID helper.
# This is only safe because the capture target is always the
# operator's own dashboard ({.serve} fetches +base_url+ from the
# +/og-image.png+ route, which derives it from the running app's
# public URL). DO NOT extend this code path to capture untrusted URLs
# — the disabled sandbox would turn a renderer-process exploit into a
# container escape.
#
# +--disable-dev-shm-usage+ avoids /dev/shm OOMs in small containers
# and +--disable-gpu+ prevents WebGL probing on machines without a
# GPU. Both are routine for headless Chromium captures.
#
# @return [Hash] keyword options for Ferrum::Browser.
def browser_options
options = {
headless: true,
window_size: [
PotatoMesh::Config.og_image_viewport_width,
PotatoMesh::Config.og_image_viewport_height,
],
timeout: PotatoMesh::Config.og_image_navigation_timeout,
process_timeout: PotatoMesh::Config.og_image_navigation_timeout,
browser_options: {
"no-sandbox": nil,
"disable-dev-shm-usage": nil,
"disable-gpu": nil,
},
}
browser_path = ENV["FERRUM_BROWSER_PATH"]
options[:browser_path] = browser_path if browser_path && !browser_path.empty?
options
end
# Wait for the dashboard to reach a stable state before capturing.
# Network-idle timeouts are tolerated because some dashboard widgets
# may continue polling indefinitely.
#
# @param browser [Object] Ferrum::Browser instance.
# @return [void]
def wait_for_settled(browser)
return unless browser.respond_to?(:network)
browser.network.wait_for_idle(
duration: PotatoMesh::Config.og_image_network_idle_duration,
timeout: PotatoMesh::Config.og_image_network_idle_timeout,
)
rescue StandardError
# Idle timeout — proceed with a best-effort capture.
end
# Quit the browser, ignoring shutdown errors so a slow or already-dead
# browser does not mask the original exception.
#
# @param browser [Object, nil] Ferrum::Browser instance.
# @return [void]
def safely_quit_browser(browser)
return if browser.nil?
browser.quit
rescue StandardError
# Best-effort cleanup — never let teardown raise.
end
# Read the cached preview from disk when present and readable.
#
# @return [Hash{Symbol=>Object}, nil] hash with +:bytes+ and +:mtime+
# keys, or +nil+ when no cache file exists.
def read_cache
path = PotatoMesh::Config.og_image_cache_path
return nil unless File.file?(path) && File.readable?(path)
bytes = File.binread(path)
return nil if bytes.empty?
{ bytes: bytes, mtime: File.mtime(path) }
rescue SystemCallError
nil
end
# Persist the freshly-captured PNG payload to the cache location.
#
# Returns +true+ on success so callers can clear the failure backoff
# only when the cache is actually durable. Empty/nil payloads count as
# a write failure so the backoff path triggers and we do not loop
# capturing without persisting.
#
# @param bytes [String] binary PNG payload.
# @return [Boolean] +true+ on success, +false+ otherwise.
def write_cache(bytes)
return false unless bytes.is_a?(String) && !bytes.empty?
path = PotatoMesh::Config.og_image_cache_path
FileUtils.mkdir_p(File.dirname(path))
File.binwrite(path, bytes)
true
rescue SystemCallError => e
log_capture_error(e)
false
end
# Determine whether the cache mtime falls inside the configured TTL.
#
# @param mtime [Time] cache file modification time.
# @return [Boolean] +true+ when the cache is still fresh.
def cache_fresh?(mtime)
return false unless mtime.is_a?(Time)
(Time.now - mtime) < PotatoMesh::Config.og_image_ttl_seconds
end
# Read the bundled default PNG as a last-resort fallback.
#
# @return [Array(String, Time), nil] payload and modification time, or
# +nil+ when the default file is missing.
def read_default
path = PotatoMesh::Config.og_image_default_path
return nil unless File.file?(path) && File.readable?(path)
[File.binread(path), File.mtime(path)]
rescue SystemCallError
nil
end
# Override the capture strategy. Intended for test suites that need to
# exercise the serve/cache logic without spawning Chromium.
#
# @param callable [#call, nil] callable that accepts +base_url+ and
# returns PNG bytes, or +nil+ to restore {.default_capture}.
# @return [void]
def capture_strategy=(callable)
@capture_strategy = callable
end
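The strategy-override hook follows a generic pattern: a module-level callable that, when assigned, replaces the default implementation so tests can exercise the surrounding logic without the real dependency. A self-contained sketch (the `Capture` module here is invented for illustration, not part of PotatoMesh):

```ruby
module Capture
  @strategy = nil

  # Assign a callable to override the default, or nil to restore it.
  def self.strategy=(callable)
    @strategy = callable
  end

  # Delegate to the override when present, else the real implementation.
  def self.invoke(url)
    (@strategy || method(:default)).call(url)
  end

  def self.default(url)
    "captured:#{url}"
  end
end

Capture.invoke("https://example.test") # => "captured:https://example.test"
Capture.strategy = ->(_url) { "stub-bytes" }
Capture.invoke("https://example.test") # => "stub-bytes"
Capture.strategy = nil                 # restore the default
```

Resetting the callable to `nil` is the same reason `reset_for_tests!` clears `@capture_strategy`: specs must not leak stubs into each other.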
# Reset module state for use in tests. Releases the capture mutex if
# it is held, clears the configured strategy, removes the cache file,
# and clears the failure backoff timestamp so individual specs are
# isolated from each other.
#
# @return [void]
def reset_for_tests!
@capture_strategy = nil
@last_failure_at = nil
@capture_mutex.unlock if @capture_mutex.owned?
path = PotatoMesh::Config.og_image_cache_path
File.unlink(path) if File.exist?(path)
rescue SystemCallError
# Cache cleanup is best-effort; ignore filesystem errors.
end
# Emit a structured warning when capture or cache I/O fails. Logging is
# best-effort: errors are swallowed when no logger is available so the
# serve path can continue to fall back without raising.
#
# @param error [Exception] caught error instance.
# @return [void]
def log_capture_error(error)
logger = PotatoMesh::Logging.logger_for
return unless logger
PotatoMesh::Logging.log(
logger,
:warn,
"preview capture fell back to cache/default",
context: "og_image",
error: error.class.name,
message: error.message,
)
end
end
end
+2 -2
@@ -1,12 +1,12 @@
{
"name": "potato-mesh",
"version": "0.6.1",
"version": "0.6.3",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "potato-mesh",
"version": "0.6.1",
"version": "0.6.3",
"devDependencies": {
"istanbul-lib-coverage": "^3.2.2",
"istanbul-lib-report": "^3.0.1",
+1 -1
@@ -1,6 +1,6 @@
{
"name": "potato-mesh",
"version": "0.6.2",
"version": "0.6.3",
"type": "module",
"private": true,
"scripts": {
+26
@@ -1,3 +1,10 @@
---
title: About
description: Community dashboard for the local mesh — what it is, how to join, and where to read more.
# image: https://example.com/your-page-preview.png
# noindex: true
---
# About This Mesh
Welcome to this [PotatoMesh](https://github.com/l5yth/potato-mesh) instance - a community dashboard for off-grid mesh networks. This is an example page, please modify it before deploying.
@@ -39,6 +46,25 @@ Instance operators can add, edit, or remove pages by placing Markdown files in
the `pages/` directory (mounted as a Docker volume at `/app/pages`). Each file
becomes a new entry in the navigation bar.
### Optional Frontmatter
Each page may begin with a YAML frontmatter block to override the default
nav label and SEO meta tags. All keys are optional:
```
---
title: About
description: Short summary shown to search engines and link previews.
image: https://example.com/about-preview.png
noindex: true
---
```
- `title` — overrides the slug-derived nav label and the document title.
- `description` — replaces the global meta description for this page only.
- `image` — absolute URL to a per-page social preview image (1200×630 recommended).
- `noindex` — when truthy, emits `<meta name="robots" content="noindex,nofollow">` and removes the page from `/sitemap.xml`. Useful for legal pages such as Impressum that should remain reachable but not indexed.
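The frontmatter contract above can be sketched with a small parser. This is a hedged illustration only — the actual PotatoMesh loader may differ — but it shows the shape: a leading `--- ... ---` block parsed as YAML, with the remainder treated as the page body.

```ruby
require "yaml"

# Split an optional leading YAML frontmatter block from a Markdown page.
# Returns [metadata_hash, body]; pages without frontmatter yield [{}, page].
def split_frontmatter(markdown)
  match = markdown.match(/\A---\s*\n(.*?)\n---\s*\n(.*)\z/m)
  return [{}, markdown] unless match
  [YAML.safe_load(match[1]) || {}, match[2]]
end

meta, body = split_frontmatter(<<~MD)
  ---
  title: About
  noindex: true
  ---
  # About This Mesh
MD

meta["title"]   # => "About"
meta["noindex"] # => true
body            # => "# About This Mesh\n"
```

`YAML.safe_load` keeps untrusted page files from instantiating arbitrary Ruby objects, which matters since `pages/` is an operator-mounted volume.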
### Filename Convention
```
@@ -0,0 +1,352 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { createDomEnvironment } from './dom-environment.js';
import { initializeApp } from '../main.js';
import { MINIMAL_CONFIG } from './main-app-test-helpers.js';
/**
* Build a minimal stub of the Leaflet ``L`` global that supports the surface
* area exercised during {@link initializeApp} setup and the subsequent
 * {@link renderMap} render path. The stub is deliberately data-only: every
* Leaflet object is a plain ``{}`` shape with the methods the production code
* calls so tests can introspect counts (e.g. how many markers were added
* to a particular layer) without depending on a real Leaflet runtime.
*
* The returned object also exposes the ``_recorded`` reference which holds
* arrays of created markers / lines / hubs so individual tests can assert on
* what was drawn into each layer. Layers themselves expose their internal
* ``_layers`` array, allowing direct assertions like
* ``stub.markersLayer._layers.length`` after a render. The stub is kept
 * intentionally minimal: every method here corresponds to a call site in
* production main.js, so adding a new Leaflet call generally requires a
* matching entry here.
*
* @returns {Object} Stub Leaflet root with helper accessors.
*/
export function makeLeafletStub() {
const recorded = {
circleMarkers: [],
polylines: [],
markers: [],
divIcons: [],
layerGroups: [],
domEventStopPropagation: 0
};
/**
* Build a layer-group stub that records additions into an internal
* ``_layers`` array so tests can introspect what was drawn there.
*
* @returns {Object} Layer group stub.
*/
function makeLayerGroup() {
const group = {
_layers: [],
addTo() {
return group;
},
clearLayers() {
group._layers.length = 0;
return group;
}
};
recorded.layerGroups.push(group);
return group;
}
/**
* Construct a marker-shaped stub with the subset of Leaflet's marker API
* that production code interacts with. Used as the base for both
* ``L.circleMarker`` and ``L.marker`` results so the two share the
* ``addTo`` / ``on`` / ``getElement`` surface.
*
* @param {[number, number]} latLng Initial coordinate pair.
* @param {Object} [options] Marker options forwarded by the caller.
* @returns {Object} Marker stub.
*/
function makeMarker(latLng, options) {
const eventHandlers = new Map();
const marker = {
_latLng: latLng,
_addedTo: null,
options: options || {},
addTo(layer) {
marker._addedTo = layer;
if (layer && Array.isArray(layer._layers)) layer._layers.push(marker);
return marker;
},
on(event, handler) {
if (!eventHandlers.has(event)) eventHandlers.set(event, []);
eventHandlers.get(event).push(handler);
return marker;
},
_eventHandlers: eventHandlers
};
return marker;
}
/**
* Construct a polyline-shaped stub for spider leader / neighbour /
* trace lines. Production code reads ``setLatLngs`` (used by the spider
* refresh helper) but never the getters, so we keep the shape minimal.
*
* @param {Array<[number, number]>} latLngs Initial coordinate list.
* @param {Object} [options] Polyline options.
* @returns {Object} Polyline stub.
*/
function makePolyline(latLngs, options) {
const line = {
_latLngs: latLngs,
_addedTo: null,
options: options || {},
addTo(layer) {
line._addedTo = layer;
if (layer && Array.isArray(layer._layers)) layer._layers.push(line);
return line;
}
};
return line;
}
/**
* Construct a tile-layer stub. ``initializeApp`` registers
* ``tileloadstart`` / ``tileload`` / ``load`` / ``tileerror`` handlers but
* never fires them in the test environment, so the stub just stores the
* registration for completeness.
*
* @param {string} url Tile URL template (ignored).
* @param {Object} [options] Tile options.
* @returns {Object} Tile-layer stub.
*/
function makeTileLayer(url, options) {
const tile = {
_url: url,
_events: new Map(),
options: options || {},
addTo() {
return tile;
},
on(event, handler) {
if (!tile._events.has(event)) tile._events.set(event, []);
tile._events.get(event).push(handler);
return tile;
}
};
return tile;
}
/**
* Construct the map stub returned by ``L.map()``. ``getZoom`` is
* mutable via ``_setZoom`` so individual tests can drive the dispatch
* branches without re-instantiating the entire harness.
*
* @returns {Object} Map stub.
*/
function makeMap() {
let zoom = 14;
const eventHandlers = new Map();
const map = {
_setZoom(value) {
zoom = value;
},
fitBounds() {
return map;
},
getZoom() {
return zoom;
},
latLngToLayerPoint(latLng) {
// Identity-ish: [lat, lon] → {x: lon, y: lat}. Keeps offsets simple
// to reason about in test assertions.
const lat = Array.isArray(latLng) ? latLng[0] : latLng.lat;
const lon = Array.isArray(latLng) ? latLng[1] : latLng.lng;
return { x: lon, y: lat };
},
layerPointToLatLng(point) {
return { lat: point.y, lng: point.x };
},
on(event, handler) {
if (!eventHandlers.has(event)) eventHandlers.set(event, []);
eventHandlers.get(event).push(handler);
return map;
},
whenReady(cb) {
// Fire synchronously so the harness does not have to drive an event
// loop just to thread the ready-callback side effects.
if (typeof cb === 'function') cb();
return map;
},
invalidateSize() {
return map;
}
};
return map;
}
const stub = {
map(_container, _options) {
stub._map = makeMap();
return stub._map;
},
tileLayer: makeTileLayer,
layerGroup: makeLayerGroup,
circleMarker(latLng, options) {
const marker = makeMarker(latLng, options);
recorded.circleMarkers.push(marker);
return marker;
},
polyline(latLngs, options) {
const line = makePolyline(latLngs, options);
recorded.polylines.push(line);
return line;
},
marker(latLng, options) {
const marker = makeMarker(latLng, options);
recorded.markers.push(marker);
return marker;
},
divIcon(options) {
const icon = { options: options || {} };
recorded.divIcons.push(icon);
return icon;
},
point(x, y) {
return { x, y };
},
latLng(lat, lng) {
// ``L.latLng`` is invoked once during ``initializeApp`` to seed the
// initial map centre. The stub returns a plain object since the rest
// of the production code only reads ``.lat`` / ``.lng`` from it.
return { lat, lng };
},
DomEvent: {
stopPropagation() {
recorded.domEventStopPropagation += 1;
}
},
control(_options) {
// ``initializeApp`` calls ``L.control(...)`` to construct the legend
// toggle widget. The stub returns a chainable shape with ``addTo`` so
// the registration path completes without producing a real Leaflet
// control instance.
return {
addTo() {
return this;
}
};
},
_recorded: recorded
};
return stub;
}
/**
* Spin up the application with a Leaflet stub on ``window.L`` and a
* pre-registered ``#map`` element so the map-init branch of
* {@link initializeApp} runs to completion. Network ``fetch`` is replaced
* with a never-resolving promise so the trailing ``refresh()`` cycle does
* not race against the test's cleanup (the same pattern documented in the
* narrower ``stubFetchForApplyFilter`` helper).
*
* @param {Object} [opts]
* @param {Object} [opts.configOverrides] Per-test overrides merged into
* {@link MINIMAL_CONFIG}.
* @returns {{ testUtils: Object, env: Object, leaflet: Object, cleanup: Function }}
*/
export function setupAppWithLeaflet(opts = {}) {
const env = createDomEnvironment({ includeBody: true });
const mapContainer = env.createElement('div', 'map');
env.registerElement('map', mapContainer);
// ``applyFiltersToAllTiles`` writes to ``document.body.style`` via
// ``setProperty``; the bare ``MockElement`` only exposes an empty object,
// so extend it with the method. The ``style.cssText`` accumulator is
// diagnostic-only — production code never reads it back, but having it
// lets tests inspect what filters were applied if needed.
const bodyStyle = (env.window && env.window.document && env.window.document.body)
? env.window.document.body.style
: null;
if (bodyStyle && typeof bodyStyle.setProperty !== 'function') {
bodyStyle._properties = bodyStyle._properties || {};
bodyStyle.setProperty = (name, value) => {
bodyStyle._properties[name] = value;
};
}
// ``initializeApp`` calls ``window.matchMedia`` to set up a responsive
// legend listener. The base DOM mock does not provide it, so we install
// a no-op shim that returns a never-firing ``MediaQueryList`` shape.
if (env.window && typeof env.window.matchMedia !== 'function') {
env.window.matchMedia = () => ({
matches: false,
media: '',
addEventListener() {},
removeEventListener() {}
});
}
const previousWindowL = globalThis.window.L;
const previousGlobalL = globalThis.L;
const previousFetch = globalThis.fetch;
const leaflet = makeLeafletStub();
// Both ``window.L`` and the bare ``L`` global must be set: the
// ``hasLeaflet`` capture reads ``window.L``, while the runtime references
// ``L`` directly via the module's global scope. Mirror the way the
// browser's ``leaflet.js`` exposes the namespace.
globalThis.window.L = leaflet;
globalThis.L = leaflet;
// Pinning fetch to a never-resolving promise keeps any
// ``fetchActiveNodeStats`` / ``refresh`` chains from racing against the
// test cleanup. The promise never settles, so any future ``.then`` /
// ``.catch`` attached downstream simply hangs harmlessly until the next
// microtask cycle is abandoned by the test runner.
globalThis.fetch = () => new Promise(() => {});
const config = { ...MINIMAL_CONFIG, ...(opts.configOverrides || {}) };
const { _testUtils } = initializeApp(config);
return {
testUtils: _testUtils,
env,
leaflet,
cleanup() {
globalThis.fetch = previousFetch;
globalThis.window.L = previousWindowL;
globalThis.L = previousGlobalL;
env.cleanup();
}
};
}
/**
* Mirror of {@link withApp} that uses the Leaflet-aware setup. Ensures the
* cleanup runs regardless of test outcome.
*
* @param {function({ testUtils: Object, leaflet: Object, env: Object }): void} fn
* Test body.
* @param {Object} [opts] Forwarded to {@link setupAppWithLeaflet}.
*/
export function withAppAndLeaflet(fn, opts = {}) {
const harness = setupAppWithLeaflet(opts);
try {
fn({ testUtils: harness.testUtils, leaflet: harness.leaflet, env: harness.env });
} finally {
harness.cleanup();
}
}
@@ -18,6 +18,7 @@ import test from 'node:test';
import assert from 'node:assert/strict';
import { withApp } from './main-app-test-helpers.js';
import { withAppAndLeaflet } from './main-app-leaflet-stub.js';
/**
* Build a stub Leaflet ``L`` that implements ``point({x, y})``. The renderer
@@ -233,3 +234,600 @@ test('_setColocatedSpiderStateForTests returns the previous state and rejects no
assert.deepEqual(t._getColocatedSpiderStateForTests(), []);
});
});
/**
* Build a stub Leaflet map that reports a configurable zoom level. The
* other Leaflet-projection methods are kept identical to ``makeStubMap`` so
* the helper composes with existing harness shapes that exercise both
* ``getZoom`` and the projection.
*
* @param {number} zoom Zoom level to report from ``getZoom()``.
* @returns {Object} Stub map.
*/
function makeStubMapAtZoom(zoom) {
const base = makeStubMap();
base.getZoom = () => zoom;
return base;
}
test('currentZoomBucket returns "low" below the threshold and "high" at/above', () => {
withApp((t) => {
// No map injected → defensive default keeps the feature visible so the
// test harness behaves identically to today's no-Leaflet path.
assert.equal(t._currentZoomBucketForTests(), 'high');
t._setMapForTests(makeStubMapAtZoom(12));
assert.equal(t._currentZoomBucketForTests(), 'low');
t._setMapForTests(makeStubMapAtZoom(13));
assert.equal(t._currentZoomBucketForTests(), 'high');
t._setMapForTests(makeStubMapAtZoom(18));
assert.equal(t._currentZoomBucketForTests(), 'high');
// Non-finite zoom (e.g. before the projection is ready) must not flip
// the user into the low-zoom branch — fall back to 'high' so the
// current rendering remains usable.
t._setMapForTests(makeStubMapAtZoom(Number.NaN));
assert.equal(t._currentZoomBucketForTests(), 'high');
// Map without a getZoom method (e.g. a stub used purely for projection
// round-trips) is also treated as 'high' rather than throwing.
t._setMapForTests({});
assert.equal(t._currentZoomBucketForTests(), 'high');
t._setMapForTests(null);
});
});
test('handleZoomEndForColocatedHubs clears expanded keys when crossing the threshold', () => {
withApp((t) => {
// Pre-stage state as if the previous render was at high zoom with one
// group expanded; a zoomend that drops us below the threshold should
// erase that state. No fetch wrapper is needed because the new
// ``rerenderMapForFiltering`` helper called by the threshold-cross
// handler does not run the stats-fetch pipeline.
t._setLastRenderedZoomBucketForTests('high');
const seeded = new Set(['10.00000,20.00000']);
t._setExpandedColocatedKeysForTests(seeded);
t._setMapForTests(makeStubMapAtZoom(12));
t.handleZoomEndForColocatedHubs();
assert.equal(t._getExpandedColocatedKeysForTests().size, 0);
t._setMapForTests(null);
});
});
test('handleZoomEndForColocatedHubs leaves expanded keys alone when bucket is unchanged', () => {
withApp((t) => {
// Same bucket as the last render → no clear, no applyFilter side effect.
t._setLastRenderedZoomBucketForTests('high');
const seeded = new Set(['1.00000,2.00000']);
t._setExpandedColocatedKeysForTests(seeded);
t._setMapForTests(makeStubMapAtZoom(15));
t.handleZoomEndForColocatedHubs();
assert.equal(t._getExpandedColocatedKeysForTests(), seeded);
assert.ok(seeded.has('1.00000,2.00000'));
t._setMapForTests(null);
});
});
test('handleZoomEndForColocatedHubs handles zooming back up through the threshold', () => {
withApp((t) => {
// Previous render was low; zoom back up to high → expanded keys are
// (already) empty per the prior crossing, but the bucket flip must
// still register so subsequent clicks behave correctly.
t._setLastRenderedZoomBucketForTests('low');
t._setExpandedColocatedKeysForTests(new Set());
t._setMapForTests(makeStubMapAtZoom(14));
assert.doesNotThrow(() => t.handleZoomEndForColocatedHubs());
t._setMapForTests(null);
});
});
test('createColocatedHubMarker emits "*<count>" html and toggles expansion on click', () => {
withApp((t) => {
const previousL = globalThis.L;
const created = [];
let domEventStopCalls = 0;
let lastClickHandler = null;
globalThis.L = {
divIcon(opts) {
return { _kind: 'divIcon', options: opts };
},
marker(latLng, opts) {
const marker = {
latLng,
options: opts,
_addedTo: null,
on(event, handler) {
if (event === 'click') lastClickHandler = handler;
return marker;
},
addTo(layer) {
marker._addedTo = layer;
layer._children.push(marker);
return marker;
}
};
created.push(marker);
return marker;
},
DomEvent: {
stopPropagation() {
domEventStopCalls += 1;
}
}
};
const stubLayer = { _children: [] };
t._setColocatedHubsLayerForTests(stubLayer);
try {
// Reset the icon cache so this test's stub L is the source of every
// divIcon rather than a previous run's plain-object icon.
t._getColocatedHubIconCacheForTests().clear();
const result = t.createColocatedHubMarker('5.12345,6.54321', 4, 5.12345, 6.54321);
assert.equal(created.length, 1);
assert.equal(result, created[0]);
assert.deepEqual(result.latLng, [5.12345, 6.54321]);
// The divIcon receives the asterisk + count html and the spider hub
// class so the CSS rules in base.css can style it as a clickable badge.
const iconOptions = result.options.icon.options;
assert.equal(iconOptions.className, 'colocated-spider-hub');
assert.ok(/\*4</.test(iconOptions.html), `html ${iconOptions.html} should contain *4`);
assert.deepEqual(iconOptions.iconSize, [16, 16]);
assert.deepEqual(iconOptions.iconAnchor, [8, 8]);
// ``bubblingMouseEvents: false`` keeps Leaflet's internal event
// routing from forwarding the click to map-level handlers. The
// ``riseOnHover`` option is intentionally absent because divIcon
// markers handle z-index inconsistently across Leaflet versions.
assert.equal(result.options.bubblingMouseEvents, false);
assert.equal(result.options.riseOnHover, undefined);
// Marker was added to the injected hub layer rather than the global
// markers layer; this keeps hub badges in their own clearable group.
assert.equal(result._addedTo, stubLayer);
assert.equal(stubLayer._children.length, 1);
// Click → expandedColocatedKeys flips, both Leaflet's DomEvent
// helper and the raw DOM stopPropagation are invoked so the click
// is contained at every layer of the event pipeline.
let stopPropagationCalls = 0;
assert.ok(lastClickHandler);
lastClickHandler({
originalEvent: { stopPropagation() { stopPropagationCalls += 1; } }
});
assert.equal(stopPropagationCalls, 1);
assert.equal(domEventStopCalls, 1);
assert.ok(t._getExpandedColocatedKeysForTests().has('5.12345,6.54321'));
// Second click toggles back off.
lastClickHandler({
originalEvent: { stopPropagation() { stopPropagationCalls += 1; } }
});
assert.equal(stopPropagationCalls, 2);
assert.equal(domEventStopCalls, 2);
assert.equal(t._getExpandedColocatedKeysForTests().has('5.12345,6.54321'), false);
// A click without an originalEvent (or without stopPropagation) must
// still toggle without throwing — covers the defensive guard branch.
assert.doesNotThrow(() => lastClickHandler(undefined));
assert.ok(t._getExpandedColocatedKeysForTests().has('5.12345,6.54321'));
assert.doesNotThrow(() => lastClickHandler({ originalEvent: {} }));
} finally {
t._setColocatedHubsLayerForTests(null);
t._setExpandedColocatedKeysForTests(new Set());
t._getColocatedHubIconCacheForTests().clear();
globalThis.L = previousL;
}
});
});
test('_setExpandedColocatedKeysForTests round-trips and rejects non-Set input', () => {
withApp((t) => {
// Initial state from init: empty Set.
const initial = t._setExpandedColocatedKeysForTests(new Set(['a']));
assert.ok(initial instanceof Set);
assert.equal(initial.size, 0);
const live = t._getExpandedColocatedKeysForTests();
assert.ok(live.has('a'));
// Non-Set input replaces the live set with a fresh empty Set, returning
// the previous (now-stale) reference for the test to inspect.
const previous = t._setExpandedColocatedKeysForTests('not-a-set');
assert.equal(previous.size, 1);
assert.equal(t._getExpandedColocatedKeysForTests().size, 0);
});
});
test('_setColocatedHubsLayerForTests round-trips the hub layer reference', () => {
withApp((t) => {
const initial = t._setColocatedHubsLayerForTests('layer-a');
// Initial value is null because the harness never instantiates Leaflet.
assert.equal(initial, null);
assert.equal(t._getColocatedHubsLayerForTests(), 'layer-a');
const previous = t._setColocatedHubsLayerForTests(null);
assert.equal(previous, 'layer-a');
assert.equal(t._getColocatedHubsLayerForTests(), null);
});
});
test('_setLastRenderedZoomBucketForTests round-trips the bucket marker', () => {
withApp((t) => {
const initial = t._setLastRenderedZoomBucketForTests('high');
// Initial value is null because no render has yet captured a bucket.
assert.equal(initial, null);
assert.equal(t._getLastRenderedZoomBucketForTests(), 'high');
const previous = t._setLastRenderedZoomBucketForTests('low');
assert.equal(previous, 'high');
assert.equal(t._getLastRenderedZoomBucketForTests(), 'low');
});
});
/**
* Build a list of nodes that share an identical coordinate so the renderer
* can group them. Each node carries a unique ``node_id`` to satisfy the
* deterministic-slot ordering inside ``computeColocatedOffsets``.
*
* @param {number} count Number of nodes to generate.
* @param {number} [lat=50] Shared latitude.
* @param {number} [lon=10] Shared longitude.
* @param {Object} [extra] Optional extra fields merged into each node.
* @returns {Array<Object>} Nodes ready to feed into ``renderMap``.
*/
function makeColocatedNodes(count, lat = 50, lon = 10, extra = {}) {
const nodes = [];
for (let i = 0; i < count; i += 1) {
nodes.push({
node_id: `node-${i}`,
latitude: lat,
longitude: lon,
role: 'CLIENT',
protocol: 'meshtastic',
...extra
});
}
return nodes;
}
/**
* Count how many drawn objects in ``recorded`` ended up inside a particular
* layer group. ``recorded`` is the running history of every Leaflet object
* the stub created during the test, while ``layer._layers`` reflects only
* the ones still mounted (after ``clearLayers``). Filtering by both keeps
* the assertions stable across re-renders.
*
* @param {Array<Object>} recorded Array such as ``leaflet._recorded.circleMarkers``.
* @param {Object} layer Layer group whose ``_layers`` array tracks current mounts.
* @returns {number} Count of recorded items currently mounted on the layer.
*/
function countLayerMembers(recorded, layer) {
if (!layer || !Array.isArray(layer._layers)) return 0;
return recorded.filter(item => layer._layers.includes(item)).length;
}
test('renderMap renders flat overlap at zoom < COLOCATED_HUB_MIN_ZOOM', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(12);
leaflet._recorded.circleMarkers.length = 0;
leaflet._recorded.markers.length = 0;
leaflet._recorded.polylines.length = 0;
const nodes = makeColocatedNodes(3);
testUtils.renderMap(nodes, 0);
const hubLayer = testUtils._getColocatedHubsLayerForTests();
// Below the threshold every node renders as a normal circleMarker at
// its original coordinate; no hub badge is created and no leader lines
// are drawn. This is the "spider disabled" mode that the user asked
// for when the map is fully zoomed out.
assert.equal(hubLayer._layers.length, 0);
assert.equal(leaflet._recorded.markers.length, 0);
assert.equal(leaflet._recorded.circleMarkers.length, 3);
assert.equal(leaflet._recorded.polylines.length, 0);
// Markers stack at exactly the original coords (no projection round-trip).
for (const marker of leaflet._recorded.circleMarkers) {
assert.deepEqual(marker._latLng, [50, 10]);
}
// The cached zoom-bucket reflects what the render targeted, so the
// zoomend handler can detect a future bucket flip.
assert.equal(testUtils._getLastRenderedZoomBucketForTests(), 'low');
});
});
test('renderMap renders a collapsed hub at zoom ≥ COLOCATED_HUB_MIN_ZOOM', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
leaflet._recorded.circleMarkers.length = 0;
leaflet._recorded.markers.length = 0;
leaflet._recorded.polylines.length = 0;
const nodes = makeColocatedNodes(3);
testUtils.renderMap(nodes, 0);
const hubLayer = testUtils._getColocatedHubsLayerForTests();
// Default state at high zoom is collapsed: a single hub badge replaces
// the three member markers, no leader lines are drawn, and the badge
// html carries the asterisk + count so the user can read the group
// size at a glance.
assert.equal(hubLayer._layers.length, 1);
assert.equal(leaflet._recorded.markers.length, 1);
assert.equal(leaflet._recorded.circleMarkers.length, 0);
assert.equal(leaflet._recorded.polylines.length, 0);
const hub = leaflet._recorded.markers[0];
assert.deepEqual(hub._latLng, [50, 10]);
assert.ok(/\*3</.test(hub.options.icon.options.html));
assert.equal(testUtils._getLastRenderedZoomBucketForTests(), 'high');
});
});
test('renderMap dedups the hub badge across the slots in a single group', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
leaflet._recorded.markers.length = 0;
// Five colocated nodes would yield five offset slots; the renderer must
// still create exactly one hub for the group rather than emitting one
// per slot. This exercises the ``renderedHubKeys`` dedup guard.
const nodes = makeColocatedNodes(5);
testUtils.renderMap(nodes, 0);
const hubLayer = testUtils._getColocatedHubsLayerForTests();
assert.equal(hubLayer._layers.length, 1);
assert.equal(leaflet._recorded.markers.length, 1);
assert.ok(/\*5</.test(leaflet._recorded.markers[0].options.icon.options.html));
});
});
test('renderMap renders a singleton as a normal marker (no hub) at any zoom', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
leaflet._recorded.circleMarkers.length = 0;
leaflet._recorded.markers.length = 0;
const nodes = makeColocatedNodes(1, 1, 2);
testUtils.renderMap(nodes, 0);
const hubLayer = testUtils._getColocatedHubsLayerForTests();
assert.equal(hubLayer._layers.length, 0);
assert.equal(leaflet._recorded.markers.length, 0);
assert.equal(leaflet._recorded.circleMarkers.length, 1);
assert.deepEqual(leaflet._recorded.circleMarkers[0]._latLng, [1, 2]);
});
});
test('renderMap fans out members and draws leader lines when a group is expanded', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
leaflet._recorded.circleMarkers.length = 0;
leaflet._recorded.markers.length = 0;
leaflet._recorded.polylines.length = 0;
// Pre-stage the group as expanded so the renderer takes the (c) branch
// — the user already clicked the hub. The key matches the format
// ``computeColocatedOffsets`` produces at the default precision.
testUtils._setExpandedColocatedKeysForTests(new Set(['50.00000,10.00000']));
const nodes = makeColocatedNodes(3);
testUtils.renderMap(nodes, 0);
const hubLayer = testUtils._getColocatedHubsLayerForTests();
// Expanded mode: 1 hub still visible (the click affordance) + 3 member
// markers fanned out + 3 leader polylines.
assert.equal(hubLayer._layers.length, 1);
assert.equal(leaflet._recorded.markers.length, 1);
assert.equal(leaflet._recorded.circleMarkers.length, 3);
assert.equal(leaflet._recorded.polylines.length, 3);
// The spider state has one entry per fanned member so the zoomend hook
// can re-project them when the user keeps zooming.
assert.equal(testUtils._getColocatedSpiderStateForTests().length, 3);
});
});
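Taken together, the low-zoom, collapsed, singleton, and expanded tests above pin down a three-way per-group dispatch. A minimal sketch of that decision, with hypothetical names (the real renderer lives in the production module, not here):

```javascript
// Hypothetical sketch of the per-group dispatch the renderMap tests pin down.
// 'flat'      → plain circleMarkers at the original coordinate (low zoom, or singleton)
// 'collapsed' → a single hub badge replaces the member markers
// 'expanded'  → hub badge + fanned member markers + leader polylines
function classifyColocatedGroup(zoomBucket, groupSize, groupKey, expandedKeys) {
  if (zoomBucket === 'low' || groupSize < 2) return 'flat';
  return expandedKeys.has(groupKey) ? 'expanded' : 'collapsed';
}
```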
test('renderMap prunes expandedColocatedKeys whose group has shrunk below 2', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
// Pre-stage a stale expansion key whose group will not exist in this
// render. After the render the key must be evicted so subsequent
// clicks at the same coordinate start collapsed.
testUtils._setExpandedColocatedKeysForTests(new Set(['99.00000,99.00000', '50.00000,10.00000']));
const nodes = makeColocatedNodes(1);
testUtils.renderMap(nodes, 0);
const live = testUtils._getExpandedColocatedKeysForTests();
assert.equal(live.has('99.00000,99.00000'), false, 'vanished group key was not pruned');
assert.equal(live.has('50.00000,10.00000'), false, 'shrunken group key was not pruned');
});
});
test('renderMap distance-filter regression: hub html reflects visible count', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
leaflet._recorded.markers.length = 0;
const nodes = makeColocatedNodes(4);
nodes[0].distance_km = 9999;
testUtils.renderMap(nodes, 0);
assert.equal(leaflet._recorded.markers.length, 1);
assert.ok(/\*3</.test(leaflet._recorded.markers[0].options.icon.options.html));
}, { configOverrides: { maxDistanceKm: 100 } });
});
test('renderMap re-renders preserve expansion across data refreshes', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
testUtils._setExpandedColocatedKeysForTests(new Set(['50.00000,10.00000']));
const nodes = makeColocatedNodes(3);
testUtils.renderMap(nodes, 0);
// First render produced 3 fanned markers; a second render with the
// same data must keep the expansion (i.e. re-emit 3 fanned markers
// rather than collapsing back to a hub-only state).
leaflet._recorded.circleMarkers.length = 0;
leaflet._recorded.markers.length = 0;
leaflet._recorded.polylines.length = 0;
testUtils.renderMap(nodes, 0);
assert.equal(leaflet._recorded.circleMarkers.length, 3);
assert.equal(leaflet._recorded.markers.length, 1);
assert.equal(leaflet._recorded.polylines.length, 3);
assert.ok(testUtils._getExpandedColocatedKeysForTests().has('50.00000,10.00000'));
});
});
test('hub click invokes Leaflet stopPropagation through the live harness', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
const nodes = makeColocatedNodes(2);
testUtils.renderMap(nodes, 0);
// The hub badge created during renderMap is a regular Leaflet marker;
// its click handler should stop the event at both the Leaflet and DOM
// layers. Firing the registered click handler directly emulates a
// user click without needing a real DOM event.
const hub = leaflet._recorded.markers[0];
const handlers = hub._eventHandlers.get('click') || [];
assert.equal(handlers.length, 1);
const baselineDomEventCount = leaflet._recorded.domEventStopPropagation;
let stopPropagationCalls = 0;
handlers[0]({
originalEvent: { stopPropagation() { stopPropagationCalls += 1; } }
});
// The click handler must contain the event at both pipeline layers so
// the underlying overlayStack / map ``click`` handlers are not also
// notified. ``rerenderMapForFiltering`` then triggers a second
// renderMap cycle that re-evaluates the dispatch — but with the
// harness's empty ``allNodes`` the new render produces zero offsets,
// so the pruning step sees no surviving multi-node groups. We assert
// on the stopPropagation side effects rather than the post-render
// expansion state because the latter is correctly cleaned up by the
// pruning logic.
assert.equal(stopPropagationCalls, 1);
assert.equal(leaflet._recorded.domEventStopPropagation, baselineDomEventCount + 1);
});
});
test('hub click does not trigger an /api/stats fetch (surgical re-render)', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
const nodes = makeColocatedNodes(2);
testUtils.renderMap(nodes, 0);
// Replace the harness's never-resolving fetch with a counter so we can
// observe whether the click handler accidentally invokes it via the
// old ``applyFilter`` path. Capture the previous reference so the
// ``cleanup`` from withAppAndLeaflet can still restore it.
let fetchCalls = 0;
const previousFetch = globalThis.fetch;
globalThis.fetch = () => {
fetchCalls += 1;
return new Promise(() => {});
};
try {
const hub = leaflet._recorded.markers[0];
const handler = (hub._eventHandlers.get('click') || [])[0];
assert.ok(handler);
handler({ originalEvent: { stopPropagation() {} } });
// ``rerenderMapForFiltering`` only calls renderMap; the stats fetch
// that ``applyFilter`` used to issue should not have been triggered.
assert.equal(fetchCalls, 0);
} finally {
globalThis.fetch = previousFetch;
}
});
});
test('renderMap reuses a single divIcon instance across same-size groups', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
leaflet._recorded.markers.length = 0;
leaflet._recorded.divIcons.length = 0;
testUtils._getColocatedHubIconCacheForTests().clear();
// Two distinct groups of size 3 at different coordinates. The dispatch
// emits one hub per group, so this exercises the icon cache *within*
// a single render: the second hub should pick up the cached icon
// rather than allocating a new ``L.divIcon``.
const nodes = [
...makeColocatedNodes(3, 50, 10),
...makeColocatedNodes(3, 51, 11)
].map((n, i) => ({ ...n, node_id: `dup-${i}` }));
testUtils.renderMap(nodes, 0);
assert.equal(leaflet._recorded.markers.length, 2, 'expected one hub per group');
assert.equal(leaflet._recorded.divIcons.length, 1, 'expected exactly one divIcon allocation across both hubs');
assert.equal(
leaflet._recorded.markers[0].options.icon,
leaflet._recorded.markers[1].options.icon,
'both hubs should share the cached icon instance'
);
const cache = testUtils._getColocatedHubIconCacheForTests();
assert.equal(cache.size, 1);
assert.ok(cache.has(3));
});
});
test('renderMap reuses divIcons across re-renders', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
testUtils._getColocatedHubIconCacheForTests().clear();
const nodes = makeColocatedNodes(4);
testUtils.renderMap(nodes, 0);
const firstIcon = leaflet._recorded.markers[0].options.icon;
leaflet._recorded.markers.length = 0;
leaflet._recorded.divIcons.length = 0;
testUtils.renderMap(nodes, 0);
// Second render reuses the cached size-4 icon — no new divIcon
// allocation, and the new hub points at the same instance as before.
assert.equal(leaflet._recorded.divIcons.length, 0);
assert.equal(leaflet._recorded.markers[0].options.icon, firstIcon);
});
});
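The two cache tests above imply a straightforward per-size memoisation. A sketch of that shape under assumed names (the production helper presumably calls `L.divIcon` where this sketch takes a factory callback):

```javascript
// Hypothetical memoisation sketch: one icon allocation per group size,
// reused across hubs within a render and across re-renders.
const hubIconCache = new Map();
function getHubIconSketch(size, makeIcon) {
  let icon = hubIconCache.get(size);
  if (icon === undefined) {
    icon = makeIcon(size); // e.g. L.divIcon({ html: `...*${size}<...` })
    hubIconCache.set(size, icon);
  }
  return icon;
}
```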
test('rerenderMapForFiltering refreshes the map without the applyFilter side effects', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
leaflet._recorded.markers.length = 0;
leaflet._recorded.divIcons.length = 0;
let fetchCalls = 0;
const previousFetch = globalThis.fetch;
globalThis.fetch = () => {
fetchCalls += 1;
return new Promise(() => {});
};
try {
// ``rerenderMapForFiltering`` reads ``allNodes`` directly; the test
// harness leaves it empty (no /api/nodes resolution), so the call
// exercises the early-return branches inside renderMap rather than
// a full render. The point of this test is the *absence* of side
// effects: no stats fetch, no thrown errors.
assert.doesNotThrow(() => testUtils.rerenderMapForFiltering());
assert.equal(fetchCalls, 0);
} finally {
globalThis.fetch = previousFetch;
}
});
});
test('_getColocatedHubIconCacheForTests exposes the live cache', () => {
// Use the Leaflet-aware harness so ``L.divIcon`` exists when the helper
// is invoked; the bare ``withApp`` harness leaves L undefined.
withAppAndLeaflet(({ testUtils }) => {
const cache = testUtils._getColocatedHubIconCacheForTests();
assert.ok(cache instanceof Map);
cache.clear();
assert.equal(cache.size, 0);
// Populating via ``getColocatedHubIcon`` proves the seam returns the
// same Map instance the production helper writes to.
const icon = testUtils.getColocatedHubIcon(7);
assert.equal(cache.get(7), icon);
assert.equal(testUtils.getColocatedHubIcon(7), icon, 'second lookup must hit the cache');
cache.clear();
});
});
test('renderMap places fanned markers around the shared centre when expanded', () => {
withAppAndLeaflet(({ testUtils, leaflet }) => {
leaflet._map._setZoom(14);
testUtils._setExpandedColocatedKeysForTests(new Set(['50.00000,10.00000']));
leaflet._recorded.circleMarkers.length = 0;
const nodes = makeColocatedNodes(2);
testUtils.renderMap(nodes, 0);
// The two fanned slots sit on opposite sides of the original centre at
// the configured base radius. The stub uses an identity projection
// ([lat, lon] → {x: lon, y: lat}), so each offset marker sits exactly
// ``baseRadiusPx`` (after the recent halving: 7px) from the shared
// centre, which is what the distance assertions below verify.
assert.equal(leaflet._recorded.circleMarkers.length, 2);
// ``projectColocatedOffsetLatLng`` returns a ``[lat, lng]`` array, so
// each ``_latLng`` here is a tuple rather than a Leaflet LatLng object.
const offsets = leaflet._recorded.circleMarkers.map(m =>
Math.hypot(m._latLng[1] - 10, m._latLng[0] - 50)
);
for (const distance of offsets) {
assert.ok(distance > 0, `offset distance ${distance} should be > 0`);
assert.ok(Math.abs(distance - 7) < 1e-9, `offset distance ${distance} should match the halved base radius`);
}
});
});
@@ -235,6 +235,35 @@ test('entries without node_id still receive deterministic slots', () => {
}
});
test('grouped slots expose groupKey and groupSize', () => {
// Three colocated entries → every slot reports the same coordinate-derived
// bucket key (matching coordinateKey at the default precision) and the full
// membership count. The renderer keys its expand/collapse state off
// these values, so a regression here would silently desync the hub
// interaction.
const entries = [
makeEntry('a', 10, 20),
makeEntry('b', 10, 20),
makeEntry('c', 10, 20)
];
const result = computeColocatedOffsets(entries);
const expectedKey = coordinateKey(10, 20, DEFAULT_PRECISION);
for (const slot of result) {
assert.equal(slot.groupKey, expectedKey);
assert.equal(slot.groupSize, 3);
}
});
test('singleton slots still report a stable groupKey and groupSize of 1', () => {
// Singletons need a groupKey too: the renderer treats every result row
// uniformly when pruning expand/collapse state, so even non-grouped points
// must carry a non-empty key matching their rounded coordinate.
const entries = [makeEntry('solo', 12.34567, 89.01234)];
const result = computeColocatedOffsets(entries);
assert.equal(result.length, 1);
assert.equal(result[0].groupSize, 1);
assert.equal(result[0].groupKey, coordinateKey(12.34567, 89.01234, DEFAULT_PRECISION));
});
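The coordinateKey assertions nearby are consistent with a plain fixed-point format; a minimal sketch under that assumption (the real helper may differ in rounding or edge-case handling):

```javascript
// Assumed shape of coordinateKey: fixed-point "lat,lon" at the given precision.
const coordinateKeySketch = (lat, lon, precision) =>
  `${lat.toFixed(precision)},${lon.toFixed(precision)}`;

coordinateKeySketch(1.234567, 7.654321, 3); // → '1.235,7.654'
```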
test('coordinateKey formats lat/lon at requested precision', () => {
assert.equal(coordinateKey(1.234567, 7.654321, 3), '1.235,7.654');
assert.equal(coordinateKey(0, 0, DEFAULT_PRECISION), '0.00000,0.00000');
@@ -17,7 +17,53 @@
import test from 'node:test';
import assert from 'node:assert/strict';
import { createMessageNodeHydrator } from '../message-node-hydrator.js';
import { createMessageNodeHydrator, MESSAGE_HYDRATION_CONCURRENCY } from '../message-node-hydrator.js';
/**
* Build a fetch double that records the maximum number of simultaneously
* pending lookups so tests can assert the worker-pool cap is honoured.
*
* @param {number} settleDelayMs Milliseconds to keep each lookup pending
* before resolving, giving sibling workers a chance to start.
* @returns {{
* fetchNodeById: (id: string) => Promise<object|null>,
* maxInFlight: () => number,
* totalCalls: () => number,
* }} Helper API exposing the recorded peak concurrency.
*/
function makeConcurrencyProbe(settleDelayMs = 10) {
let inFlight = 0;
let peak = 0;
let total = 0;
return {
async fetchNodeById(id) {
inFlight += 1;
total += 1;
peak = Math.max(peak, inFlight);
try {
await new Promise(resolve => setTimeout(resolve, settleDelayMs));
return { node_id: id, short_name: id.slice(1, 5) };
} finally {
inFlight -= 1;
}
},
maxInFlight: () => peak,
totalCalls: () => total,
};
}
/**
* Build N messages with unique sender identifiers for concurrency tests.
*
* @param {number} count Number of messages to produce.
* @returns {Array<object>} Synthetic message payloads.
*/
function makeUniqueSenderMessages(count) {
return Array.from({ length: count }, (_, index) => ({
from_id: `!sender${index.toString().padStart(4, '0')}`,
text: `m${index}`,
}));
}
/**
* Capture warning invocations produced during a test run.
@@ -78,6 +124,66 @@ test('hydrate fetches missing nodes once and caches the result', async () => {
assert.strictEqual(result[1].node, nodesById.get('!fetch'));
});
test('hydrate caches 404 results so subsequent calls do not refetch dead ids', async () => {
let fetchCalls = 0;
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
fetchCalls += 1;
return null;
},
applyNodeFallback: () => {},
});
const messages = [{ from_id: '!gone', text: 'first' }];
const nodesById = new Map();
await hydrator.hydrate(messages, nodesById);
await hydrator.hydrate([{ from_id: '!gone', text: 'second' }], nodesById);
await hydrator.hydrate([{ from_id: '!gone', text: 'third' }], nodesById);
assert.equal(fetchCalls, 1);
});
test('cached missing entry is overridden when nodesById later resolves the id', async () => {
let fetchCalls = 0;
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
fetchCalls += 1;
return null;
},
applyNodeFallback: () => {},
});
const nodesById = new Map();
await hydrator.hydrate([{ from_id: '!late', text: 'first' }], nodesById);
assert.equal(fetchCalls, 1);
// Bulk /api/nodes refresh resolves the id afterwards.
const lateNode = { node_id: '!late', short_name: 'Late' };
nodesById.set('!late', lateNode);
const result = await hydrator.hydrate([{ from_id: '!late', text: 'second' }], nodesById);
assert.equal(fetchCalls, 1);
assert.strictEqual(result[0].node, lateNode);
});
test('hydrate caches lookup failures alongside 404s', async () => {
let fetchCalls = 0;
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
fetchCalls += 1;
throw new Error('network down');
},
applyNodeFallback: () => {},
logger: { warn() {} },
});
const nodesById = new Map();
await hydrator.hydrate([{ from_id: '!flaky', text: 'a' }], nodesById);
await hydrator.hydrate([{ from_id: '!flaky', text: 'b' }], nodesById);
assert.equal(fetchCalls, 1);
});
test('hydrate falls back to placeholders when lookups fail', async () => {
const logger = new LoggerStub();
let fallbackCalls = 0;
@@ -121,3 +227,125 @@ test('hydrate records warning when fetch rejects', async () => {
assert.ok(logger.messages.length >= 1);
assert.equal(nodesById.has('!warn'), false);
});
test('hydrate caps in-flight lookups at the default concurrency', async () => {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
});
const messages = makeUniqueSenderMessages(MESSAGE_HYDRATION_CONCURRENCY * 3);
await hydrator.hydrate(messages, new Map());
assert.equal(probe.totalCalls(), messages.length);
assert.ok(
probe.maxInFlight() <= MESSAGE_HYDRATION_CONCURRENCY,
`expected <= ${MESSAGE_HYDRATION_CONCURRENCY} concurrent fetches, observed ${probe.maxInFlight()}`,
);
});
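The cap asserted above comes from a worker-pool pattern: a fixed-size set of workers drains a shared queue, each pulling the next id only after its previous fetch settles. A minimal sketch of the idea (hypothetical helper, not the hydrator's actual code):

```javascript
// Hypothetical worker-pool sketch: at most `limit` lookups in flight.
async function runPoolSketch(ids, limit, fetchOne) {
  const queue = [...ids];
  const worker = async () => {
    while (queue.length > 0) {
      await fetchOne(queue.shift()); // next id only after this one settles
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, queue.length) }, worker),
  );
}
```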
test('hydrate honours a custom concurrency override', async () => {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
concurrency: 2,
});
const messages = makeUniqueSenderMessages(8);
await hydrator.hydrate(messages, new Map());
assert.equal(probe.totalCalls(), 8);
assert.equal(probe.maxInFlight(), 2);
});
test('hydrate serialises lookups when concurrency is one', async () => {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
concurrency: 1,
});
const messages = makeUniqueSenderMessages(4);
await hydrator.hydrate(messages, new Map());
assert.equal(probe.maxInFlight(), 1);
});
test('hydrate falls back to the default cap for invalid concurrency values', async () => {
for (const invalid of [0, -3, Number.NaN, Number.POSITIVE_INFINITY, 'four']) {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
concurrency: invalid,
});
const messages = makeUniqueSenderMessages(MESSAGE_HYDRATION_CONCURRENCY * 2);
await hydrator.hydrate(messages, new Map());
assert.ok(
probe.maxInFlight() <= MESSAGE_HYDRATION_CONCURRENCY,
`concurrency=${String(invalid)} should fall back to default; observed peak ${probe.maxInFlight()}`,
);
}
});
test('factory rejects missing fetch and fallback dependencies', () => {
assert.throws(
() => createMessageNodeHydrator({ applyNodeFallback: () => {} }),
TypeError,
);
assert.throws(
() => createMessageNodeHydrator({ fetchNodeById: async () => null }),
TypeError,
);
});
test('hydrate skips non-object entries and senderless messages', async () => {
let fetchCalls = 0;
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
fetchCalls += 1;
return null;
},
applyNodeFallback: () => {},
});
const senderless = { text: 'no sender' };
const messages = [null, 'not-an-object', senderless];
const result = await hydrator.hydrate(messages, new Map());
assert.equal(fetchCalls, 0);
assert.equal(result.length, 3);
assert.strictEqual(senderless.node, null);
});
test('hydrate dedupes duplicate senders without exceeding the cap', async () => {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
concurrency: 2,
});
// Twenty messages but only four unique senders. After the first lookup
// for a given sender resolves, ``resolveNode`` writes the result into the
// shared ``nodesById`` cache; every later message with the same id is
// bound synchronously from that cache before it ever reaches the worker
// pool, so the total fetch count collapses to the four unique senders.
// (The inflight-promise map only matters when two workers happen to race
// on the same id, which rarely happens at concurrency=2 — the
// ``nodesById`` short-circuit is the dominant mechanism here.)
const senders = ['!aaa', '!bbb', '!ccc', '!ddd'];
const messages = Array.from({ length: 20 }, (_, index) => ({
from_id: senders[index % senders.length],
text: `dup${index}`,
}));
await hydrator.hydrate(messages, new Map());
assert.equal(probe.totalCalls(), senders.length);
assert.ok(probe.maxInFlight() <= 2);
});
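The dedup behaviour above relies on a lookup order where the shared cache wins before any fetch is scheduled. A sketch of that presumed order, with hypothetical names:

```javascript
// Hypothetical resolve order: shared cache → in-flight promise → new fetch.
function makeResolverSketch(fetchOne) {
  const inflight = new Map();
  return async function resolve(id, nodesById) {
    if (nodesById.has(id)) return nodesById.get(id); // synchronous reuse
    if (inflight.has(id)) return inflight.get(id);   // join the racing fetch
    const pending = fetchOne(id).then((node) => {
      nodesById.set(id, node);
      inflight.delete(id);
      return node;
    });
    inflight.set(id, pending);
    return pending;
  };
}
```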
@@ -42,9 +42,13 @@ const {
lookupNeighborDetails,
seedNeighborRoleIndex,
buildNeighborRoleIndex,
fetchNodeDetailsIntoIndex,
renderRoleAwareBadge,
collectTraceNodeFetchMap,
buildTraceRoleIndex,
categoriseNeighbors,
renderNeighborBadge,
renderNeighborGroup,
renderNeighborGroups,
renderSingleNodeTable,
classifySnapshot,
@@ -1257,3 +1261,245 @@ test('initializeNodeDetailPage handles missing reference payloads', async () =>
assert.equal(result, false);
assert.equal(element.innerHTML.includes('Node reference unavailable'), true);
});
test('parseReferencePayload returns null for blank or unparseable input', () => {
assert.equal(parseReferencePayload(null), null);
assert.equal(parseReferencePayload(' '), null);
assert.equal(parseReferencePayload('not-json'), null);
assert.equal(parseReferencePayload(JSON.stringify(42)), null);
assert.deepEqual(parseReferencePayload(JSON.stringify({ nodeId: '!a' })), { nodeId: '!a' });
});
test('initializeNodeDetailPage rejects invalid documents and missing identifiers', async () => {
await assert.rejects(
() => initializeNodeDetailPage({ document: null, fetchImpl: async () => ({}) }),
/document with querySelector/,
);
const root = { dataset: { nodeReference: JSON.stringify({}) }, innerHTML: '' };
const documentStub = {
querySelector: selector => (selector === '#nodeDetail' ? root : null),
};
const result = await initializeNodeDetailPage({
document: documentStub,
fetchImpl: async () => ({ ok: true, json: async () => ({}) }),
renderShortHtml: short => `<span>${short}</span>`,
});
assert.equal(result, false);
assert.equal(root.innerHTML.includes('Node identifier missing'), true);
});
test('renderRoleAwareBadge falls back when both shortName and identifier are absent', () => {
const html = renderRoleAwareBadge((short, role) => `<b data-role="${role}">${short}</b>`, {});
assert.equal(html, '<b data-role="CLIENT">?</b>');
});
test('renderRoleAwareBadge invokes default span renderer when renderShortHtml is missing', () => {
const html = renderRoleAwareBadge(null, { shortName: 'AB&CD' });
assert.equal(html.includes('class="short-name"'), true);
assert.equal(html.includes('AB&amp;CD'), true);
});
test('seedNeighborRoleIndex tolerates non-array and non-object entries', () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
assert.equal(seedNeighborRoleIndex(index, null).size, 0);
assert.equal(seedNeighborRoleIndex(index, 'not-an-array').size, 0);
assert.equal(seedNeighborRoleIndex(index, [null, 7, 'string']).size, 0);
});
test('seedNeighborRoleIndex hydrates roles from nested neighbor and node objects', () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
seedNeighborRoleIndex(index, [
{
neighbor: { node_id: '!ally', node_num: 11, role: 'ROUTER', short_name: 'ALLY', long_name: 'Ally Long' },
node: { node_id: '!self', node_num: 22, role: 'CLIENT', short_name: 'SELF', long_name: 'Self Long' },
},
]);
assert.equal(index.byId.get('!ally'), 'ROUTER');
assert.equal(index.byId.get('!self'), 'CLIENT');
assert.equal(index.byNum.get(11), 'ROUTER');
assert.equal(index.byNum.get(22), 'CLIENT');
const allyDetails = lookupNeighborDetails(index, { identifier: '!ally' });
assert.equal(allyDetails.shortName, 'ALLY');
assert.equal(allyDetails.longName, 'Ally Long');
});
test('fetchNodeDetailsIntoIndex skips work when no fetch is reachable', async () => {
const originalFetch = globalThis.fetch;
globalThis.fetch = undefined;
try {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
await fetchNodeDetailsIntoIndex(index, new Map([['x', 'x']]), undefined);
assert.equal(index.byId.size, 0);
} finally {
globalThis.fetch = originalFetch;
}
});
test('fetchNodeDetailsIntoIndex returns immediately for empty or non-Map inputs', async () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
let calls = 0;
const fetchImpl = async () => { calls += 1; return { ok: true, json: async () => ({}) }; };
await fetchNodeDetailsIntoIndex(index, null, fetchImpl);
await fetchNodeDetailsIntoIndex(index, new Map(), fetchImpl);
assert.equal(calls, 0);
});
test('fetchNodeDetailsIntoIndex silently ignores 404 responses without registering', async () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
const fetchImpl = async () => ({ ok: false, status: 404, json: async () => ({}) });
await fetchNodeDetailsIntoIndex(index, new Map([['gone', 'gone']]), fetchImpl);
assert.equal(index.byId.size, 0);
});
test('renderSingleNodeTable returns empty string for invalid inputs', () => {
assert.equal(renderSingleNodeTable(null, () => ''), '');
assert.equal(renderSingleNodeTable({ nodeId: '!a' }, null), '');
assert.equal(renderSingleNodeTable('string-not-object', () => ''), '');
});
test('renderTelemetryCharts returns empty string when no entries fall in the window', () => {
const out = renderTelemetryCharts(makeAggregatedNode([
{ timestamp: '1970-01-01T00:00:00Z' },
]), { now: () => Date.UTC(2026, 0, 1) });
assert.equal(out, '');
});
test('renderTelemetryCharts returns empty string when chart specs produce no markup', () => {
// Aggregated snapshot with valid timestamp but no telemetry fields any chart
// can plot — every chart spec filters its empty series and returns ''.
const node = makeAggregatedNode([
{ rx_time: CHART_NOW_SECONDS - 60, telemetry_type: 'device' },
]);
const out = renderTelemetryCharts(node, { nowMs: CHART_NOW_MS });
assert.equal(out, '');
});
test('renderTracePath returns empty string when fewer than two badges render', () => {
const renderShortHtml = short => `<span>${short}</span>`;
// Single-element path → items.length < 2 → empty result.
assert.equal(renderTracePath(['!only'], renderShortHtml), '');
// Two refs but the renderer yields blanks → filter strips them → items.length < 2.
assert.equal(renderTracePath([{ identifier: '!a' }, { identifier: '!b' }], () => ''), '');
});
test('renderNeighborBadge returns empty string for invalid inputs', () => {
assert.equal(renderNeighborBadge(null, 'heardBy', () => ''), '');
assert.equal(renderNeighborBadge({ neighbor_id: '!a' }, 'weHear', null), '');
// Entry without any identifier in keys → returns ''
assert.equal(renderNeighborBadge({ snr: 5 }, 'weHear', () => ''), '');
});
test('renderNeighborBadge merges role-index metadata into the source object', () => {
const source = {};
const entry = { neighbor_id: '!ally', neighbor: source };
const roleIndex = {
byId: new Map([['!ally', 'ROUTER']]),
byNum: new Map(),
detailsById: new Map([
['!ally', { shortName: 'ALLY', longName: 'Ally Long', role: 'ROUTER' }],
]),
detailsByNum: new Map(),
};
const renderShortHtml = (short, role, long, badgeSource) =>
`<b data-role="${role}" data-long="${long}" data-source-role="${badgeSource.role}">${short}</b>`;
const html = renderNeighborBadge(entry, 'weHear', renderShortHtml, roleIndex);
assert.match(html, /ALLY/);
assert.equal(source.short_name, 'ALLY');
assert.equal(source.long_name, 'Ally Long');
assert.equal(source.role, 'ROUTER');
});
test('renderNeighborBadge derives short name from identifier when no metadata is available', () => {
const html = renderNeighborBadge(
{ neighbor_id: '!abcdef12' },
'weHear',
short => `<span>${short}</span>`,
);
// Last four hex chars of identifier, uppercased.
assert.equal(html, '<span>EF12</span>');
});
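The fallback path this test exercises is simple enough to state directly. A minimal sketch, assuming the badge helper derives its label from the identifier's last four characters (`deriveShortName` is an illustrative name, not the module's actual export):

```javascript
// Illustrative sketch of the short-name fallback asserted above: when no
// metadata is available, derive a badge label from the last four
// characters of the identifier, uppercased.
function deriveShortName(identifier) {
  if (typeof identifier !== 'string' || identifier.length < 4) return '';
  return identifier.slice(-4).toUpperCase();
}

// deriveShortName('!abcdef12') → 'EF12'
```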
test('renderNeighborGroup skips entries that fail to render and returns empty when none survive', () => {
const renderShortHtml = (short, role) => `<span data-role="${role}">${short}</span>`;
// Two entries; only one yields a valid badge.
const html = renderNeighborGroup(
'Heard by',
[
{ node_id: '!peer', node_short_name: 'PEER' },
{ snr: 5 }, // no identifier → renderNeighborBadge returns '' → filtered out.
],
'heardBy',
renderShortHtml,
);
assert.equal(html.includes('PEER'), true);
assert.equal(html.match(/<li>/g).length, 1);
// All entries fail → returns ''.
const empty = renderNeighborGroup('Heard by', [{ snr: 1 }, { snr: 2 }], 'heardBy', renderShortHtml);
assert.equal(empty, '');
});
test('renderTraceroutes returns empty string when no trace path renders content', () => {
const renderShortHtml = short => `<span>${short}</span>`;
// Each trace yields a single-hop path which renderTracePath rejects → no items remain.
assert.equal(renderTraceroutes([{ src: '!a', hops: [], dest: null }], renderShortHtml), '');
});
test('fetchNodeDetailHtml rejects non-object references', async () => {
await assert.rejects(() => fetchNodeDetailHtml(null), TypeError);
await assert.rejects(() => fetchNodeDetailHtml('not-an-object'), TypeError);
});
test('normalizeNodeReference returns null for non-object inputs and references missing both ids', () => {
const { normalizeNodeReference } = __testUtils;
assert.equal(normalizeNodeReference(null), null);
assert.equal(normalizeNodeReference('not-an-object'), null);
assert.equal(normalizeNodeReference({}), null);
assert.deepEqual(normalizeNodeReference({ nodeId: '!a' }), { nodeId: '!a', nodeNum: null });
});
test('fetchNodeDetailsIntoIndex warns and continues when a non-404 response fails', async () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
const fetchImpl = async () => ({ ok: false, status: 503, json: async () => ({}) });
const originalWarn = console.warn;
const messages = [];
console.warn = (...args) => messages.push(args[0]);
try {
await fetchNodeDetailsIntoIndex(index, new Map([['ouch', 'ouch']]), fetchImpl, 'unit-test');
} finally {
console.warn = originalWarn;
}
assert.equal(index.byId.size, 0);
assert.equal(messages.some(msg => typeof msg === 'string' && msg.includes('unit-test')), true);
});
test('fetchNodeDetailsIntoIndex caps in-flight requests at NEIGHBOR_ROLE_FETCH_CONCURRENCY', async () => {
// Eight identifiers, four-wide pool: at most four fetches should be in flight.
const ids = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'];
const fetchIdMap = new Map(ids.map(id => [id, id]));
let inFlight = 0;
let peak = 0;
const release = [];
const fetchImpl = () => {
inFlight += 1;
if (inFlight > peak) peak = inFlight;
return new Promise(resolve => {
release.push(() => {
inFlight -= 1;
resolve({ ok: true, status: 200, json: async () => ({ node_id: '!stub', role: 'CLIENT' }) });
});
});
};
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
const work = fetchNodeDetailsIntoIndex(index, fetchIdMap, fetchImpl);
// Yield so the four workers reach their first await.
await new Promise(resolve => setImmediate(resolve));
assert.equal(peak, 4, `expected concurrency cap of 4, observed peak ${peak}`);
// Drain in two waves; the second wave only starts once the first releases.
release.splice(0, 4).forEach(fn => fn());
await new Promise(resolve => setImmediate(resolve));
release.splice(0).forEach(fn => fn());
await work;
assert.equal(peak, 4);
});
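The contract this test pins down — a fixed-size pool of workers draining a shared queue, each pulling the next identifier only after its previous fetch settles — can be sketched as follows. `runWithConcurrency` and its signature are illustrative assumptions, not the module's actual API:

```javascript
// Sketch of a worker-pool that caps in-flight work at `concurrency`.
// Each worker loops over a shared queue; a new item is only pulled once
// the previous `fetchOne` call has settled, so at most `concurrency`
// calls are ever in flight.
async function runWithConcurrency(ids, concurrency, fetchOne) {
  const queue = [...ids];
  const results = [];
  const worker = async () => {
    while (queue.length > 0) {
      const id = queue.shift(); // Pull the next id only after the prior await.
      results.push(await fetchOne(id));
    }
  };
  const poolSize = Math.min(concurrency, queue.length);
  await Promise.all(Array.from({ length: poolSize }, worker));
  return results;
}
```

Contrast this with the buggy IIFE-then-batch pattern the refactor replaced, where every fetch started the moment the loop ran and only settlement was batched.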
@@ -0,0 +1,92 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { numberOrNull, stringOrNull } from '../value-helpers.js';
// ---------------------------------------------------------------------------
// numberOrNull
// ---------------------------------------------------------------------------
test('numberOrNull passes through finite numbers unchanged', () => {
assert.equal(numberOrNull(42), 42);
assert.equal(numberOrNull(-3.14), -3.14);
assert.equal(numberOrNull(0), 0);
});
test('numberOrNull returns null for non-finite numbers', () => {
assert.equal(numberOrNull(Number.NaN), null);
assert.equal(numberOrNull(Number.POSITIVE_INFINITY), null);
assert.equal(numberOrNull(Number.NEGATIVE_INFINITY), null);
});
test('numberOrNull returns null for null, undefined, and empty string', () => {
assert.equal(numberOrNull(null), null);
assert.equal(numberOrNull(undefined), null);
assert.equal(numberOrNull(''), null);
});
test('numberOrNull coerces numeric strings into numbers', () => {
assert.equal(numberOrNull('42'), 42);
assert.equal(numberOrNull(' -1.5 '), -1.5);
assert.equal(numberOrNull('0'), 0);
});
test('numberOrNull rejects non-numeric strings', () => {
assert.equal(numberOrNull('not a number'), null);
assert.equal(numberOrNull('1.2.3'), null);
});
test('numberOrNull rejects objects but coerces arrays via their string form', () => {
assert.equal(numberOrNull({}), null);
assert.equal(numberOrNull([]), 0); // Array#toString of [] is '', and Number('') is 0.
assert.equal(numberOrNull([1, 2]), null); // '1,2' is not numeric.
});
test('numberOrNull treats booleans as their numeric coercion', () => {
// Number(true) === 1, Number(false) === 0; the documented contract is that
// any value Number() coerces to a finite number passes through.
assert.equal(numberOrNull(true), 1);
assert.equal(numberOrNull(false), 0);
});
// ---------------------------------------------------------------------------
// stringOrNull
// ---------------------------------------------------------------------------
test('stringOrNull returns trimmed strings for non-empty input', () => {
assert.equal(stringOrNull('hello'), 'hello');
assert.equal(stringOrNull(' spaced '), 'spaced');
});
test('stringOrNull returns null for null and undefined', () => {
assert.equal(stringOrNull(null), null);
assert.equal(stringOrNull(undefined), null);
});
test('stringOrNull returns null for the empty string and whitespace-only input', () => {
assert.equal(stringOrNull(''), null);
assert.equal(stringOrNull(' '), null);
assert.equal(stringOrNull('\t\n'), null);
});
test('stringOrNull stringifies non-string inputs', () => {
assert.equal(stringOrNull(42), '42');
assert.equal(stringOrNull(0), '0');
assert.equal(stringOrNull(true), 'true');
});
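Taken together, the assertions above fully specify the two helpers. A minimal reimplementation consistent with them — an illustrative sketch, not the shipped value-helpers.js:

```javascript
// Coerce a value to a finite number, or null. Strings are trimmed first
// and the empty string is rejected; everything else goes through Number()
// and passes only if the result is finite (so true → 1, [] → 0, {} → null).
function numberOrNull(value) {
  if (value === null || value === undefined) return null;
  if (typeof value === 'string') {
    const trimmed = value.trim();
    if (trimmed === '') return null;
    const parsed = Number(trimmed);
    return Number.isFinite(parsed) ? parsed : null;
  }
  const coerced = Number(value);
  return Number.isFinite(coerced) ? coerced : null;
}

// Stringify and trim a value; null, undefined, and whitespace-only
// inputs collapse to null.
function stringOrNull(value) {
  if (value === null || value === undefined) return null;
  const text = String(value).trim();
  return text === '' ? null : text;
}
```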
File diff suppressed because it is too large
@@ -0,0 +1,346 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
fetchMessages,
fetchNeighbors,
fetchNodeById,
fetchNodes,
fetchPositions,
fetchTelemetry,
fetchTraces,
filterRecentTraces,
resolveSnapshotLimit,
} from '../data-fetchers.js';
import { NODE_LIMIT, SNAPSHOT_LIMIT, TRACE_LIMIT } from '../constants.js';
/**
* Install a temporary global ``fetch`` stub that records every call and
* returns the supplied response. Returns a teardown handle that restores
* the previous binding and exposes the captured call list.
*
* @param {{ ok?: boolean, status?: number, body?: any }|Function} responseOrFn
* Response descriptor or an async function returning one.
* @returns {{ calls: Array<{url: string, init: any}>, restore: Function }}
* Stub control surface.
*/
function withFetchStub(responseOrFn) {
const previous = globalThis.fetch;
const calls = [];
const handler = typeof responseOrFn === 'function'
? responseOrFn
: () => responseOrFn;
globalThis.fetch = async (url, init) => {
calls.push({ url, init });
const descriptor = await handler(url, init);
return {
ok: descriptor.ok ?? true,
status: descriptor.status ?? 200,
json: async () => descriptor.body ?? [],
};
};
return {
calls,
restore() {
if (previous === undefined) {
delete globalThis.fetch;
} else {
globalThis.fetch = previous;
}
},
};
}
// ---------------------------------------------------------------------------
// resolveSnapshotLimit
// ---------------------------------------------------------------------------
test('resolveSnapshotLimit multiplies the requested limit by SNAPSHOT_LIMIT', () => {
assert.equal(resolveSnapshotLimit(10), Math.min(10 * SNAPSHOT_LIMIT, NODE_LIMIT));
});
test('resolveSnapshotLimit caps to maxLimit', () => {
assert.equal(resolveSnapshotLimit(NODE_LIMIT), NODE_LIMIT);
assert.equal(resolveSnapshotLimit(NODE_LIMIT * 2), NODE_LIMIT);
});
test('resolveSnapshotLimit defaults to NODE_LIMIT for invalid input', () => {
assert.equal(resolveSnapshotLimit(null), NODE_LIMIT);
assert.equal(resolveSnapshotLimit(0), NODE_LIMIT);
assert.equal(resolveSnapshotLimit(-5), NODE_LIMIT);
assert.equal(resolveSnapshotLimit(Number.NaN), NODE_LIMIT);
});
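A sketch consistent with these assertions, using placeholder constant values — the real SNAPSHOT_LIMIT and NODE_LIMIT live in constants.js:

```javascript
// Hypothetical constants; actual values come from constants.js.
const SNAPSHOT_LIMIT = 3;
const NODE_LIMIT = 100;

// Scale the requested limit by SNAPSHOT_LIMIT, capped at maxLimit.
// Invalid input (null, 0, negative, NaN) falls back to the cap itself.
function resolveSnapshotLimit(limit, maxLimit = NODE_LIMIT) {
  if (!Number.isFinite(limit) || limit <= 0) return maxLimit;
  return Math.min(limit * SNAPSHOT_LIMIT, maxLimit);
}
```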
// ---------------------------------------------------------------------------
// filterRecentTraces
// ---------------------------------------------------------------------------
test('filterRecentTraces returns empty array for non-array input', () => {
assert.deepEqual(filterRecentTraces(null), []);
assert.deepEqual(filterRecentTraces(undefined), []);
assert.deepEqual(filterRecentTraces({}), []);
});
test('filterRecentTraces returns a copy of the input when maxAgeSeconds is non-positive', () => {
const input = [{ rx_time: 1 }, { rx_time: 2 }];
const result = filterRecentTraces(input, 0);
assert.deepEqual(result, input);
assert.notEqual(result, input); // Returns a copy, not the same reference.
const negativeResult = filterRecentTraces(input, -10);
assert.deepEqual(negativeResult, input);
});
test('filterRecentTraces drops traces older than the cutoff', () => {
const nowSeconds = Math.floor(Date.now() / 1000);
const traces = [
{ rx_time: nowSeconds }, // recent
{ rx_time: nowSeconds - 7200 }, // older than 1h
{ rxIso: new Date((nowSeconds - 30) * 1000).toISOString() }, // recent via ISO
];
const filtered = filterRecentTraces(traces, 3600);
assert.equal(filtered.length, 2);
});
test('filterRecentTraces drops traces with no usable timestamp', () => {
const filtered = filterRecentTraces([{ noTime: true }, { rx_time: null }], 3600);
assert.deepEqual(filtered, []);
});
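A minimal implementation consistent with the filterRecentTraces assertions above. The explicit `nowSeconds` parameter is an assumption added here for testability; the real module may read the clock internally:

```javascript
// Keep only traces with a usable timestamp (numeric rx_time, or rxIso as a
// fallback) no older than maxAgeSeconds. Non-array input yields []; a
// non-positive max age returns a shallow copy of the whole list.
function filterRecentTraces(traces, maxAgeSeconds = 3600,
                            nowSeconds = Math.floor(Date.now() / 1000)) {
  if (!Array.isArray(traces)) return [];
  if (!(maxAgeSeconds > 0)) return [...traces];
  const cutoff = nowSeconds - maxAgeSeconds;
  return traces.filter(trace => {
    if (!trace || typeof trace !== 'object') return false;
    let ts = Number.isFinite(trace.rx_time) ? trace.rx_time : null;
    if (ts === null && typeof trace.rxIso === 'string') {
      const parsed = Date.parse(trace.rxIso);
      ts = Number.isNaN(parsed) ? null : parsed / 1000;
    }
    return ts !== null && ts >= cutoff;
  });
}
```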
// ---------------------------------------------------------------------------
// fetchNodeById
// ---------------------------------------------------------------------------
test('fetchNodeById returns null for non-string inputs', async () => {
assert.equal(await fetchNodeById(null), null);
assert.equal(await fetchNodeById(42), null);
});
test('fetchNodeById returns null for blank string inputs', async () => {
assert.equal(await fetchNodeById(''), null);
assert.equal(await fetchNodeById(' '), null);
});
test('fetchNodeById returns null on HTTP 404', async () => {
const stub = withFetchStub({ ok: false, status: 404 });
try {
assert.equal(await fetchNodeById('!aabbccdd'), null);
assert.equal(stub.calls.length, 1);
assert.ok(stub.calls[0].url.includes('!aabbccdd'));
} finally {
stub.restore();
}
});
test('fetchNodeById throws on non-OK non-404 responses', async () => {
const stub = withFetchStub({ ok: false, status: 500 });
try {
await assert.rejects(() => fetchNodeById('!aabbccdd'), /HTTP 500/);
} finally {
stub.restore();
}
});
test('fetchNodeById returns parsed payload on success', async () => {
const stub = withFetchStub({ ok: true, status: 200, body: { node_id: '!aabbccdd' } });
try {
const result = await fetchNodeById('!aabbccdd');
assert.deepEqual(result, { node_id: '!aabbccdd' });
} finally {
stub.restore();
}
});
// ---------------------------------------------------------------------------
// fetchNodes / fetchNeighbors / fetchTelemetry / fetchPositions / fetchTraces
// ---------------------------------------------------------------------------
test('fetchNodes appends since when greater than zero', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchNodes(10, 1234);
assert.ok(stub.calls[0].url.includes('since=1234'));
} finally {
stub.restore();
}
});
test('fetchNodes throws on non-OK', async () => {
const stub = withFetchStub({ ok: false, status: 503 });
try {
await assert.rejects(() => fetchNodes(), /HTTP 503/);
} finally {
stub.restore();
}
});
test('fetchNeighbors hits the neighbours endpoint', async () => {
const stub = withFetchStub({ ok: true, body: [{ node_id: '!a' }] });
try {
const result = await fetchNeighbors(50);
assert.ok(stub.calls[0].url.startsWith('/api/neighbors?'));
assert.deepEqual(result, [{ node_id: '!a' }]);
} finally {
stub.restore();
}
});
test('fetchNeighbors propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 502 });
try {
await assert.rejects(() => fetchNeighbors(), /HTTP 502/);
} finally {
stub.restore();
}
});
test('fetchTelemetry hits the telemetry endpoint', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchTelemetry(50, 100);
assert.ok(stub.calls[0].url.startsWith('/api/telemetry?'));
assert.ok(stub.calls[0].url.includes('since=100'));
} finally {
stub.restore();
}
});
test('fetchTelemetry propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 504 });
try {
await assert.rejects(() => fetchTelemetry(), /HTTP 504/);
} finally {
stub.restore();
}
});
test('fetchPositions hits the positions endpoint', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchPositions();
assert.ok(stub.calls[0].url.startsWith('/api/positions?'));
} finally {
stub.restore();
}
});
test('fetchPositions propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 500 });
try {
await assert.rejects(() => fetchPositions(), /HTTP 500/);
} finally {
stub.restore();
}
});
test('fetchTraces filters expired entries', async () => {
const nowSeconds = Math.floor(Date.now() / 1000);
const stub = withFetchStub({
ok: true,
body: [
{ rx_time: nowSeconds },
{ rx_time: nowSeconds - 365 * 24 * 3600 },
],
});
try {
const result = await fetchTraces();
// Only the recent trace should survive.
assert.equal(result.length, 1);
assert.ok(stub.calls[0].url.startsWith('/api/traces?'));
} finally {
stub.restore();
}
});
test('fetchTraces falls back to TRACE_LIMIT on bogus input', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchTraces(Number.NaN);
assert.ok(stub.calls[0].url.includes(`limit=${TRACE_LIMIT}`));
} finally {
stub.restore();
}
});
test('fetchTraces propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 500 });
try {
await assert.rejects(() => fetchTraces(), /HTTP 500/);
} finally {
stub.restore();
}
});
// ---------------------------------------------------------------------------
// fetchMessages
// ---------------------------------------------------------------------------
test('fetchMessages returns [] when chatEnabled is false', async () => {
const stub = withFetchStub({ ok: true, body: [{ id: 1 }] });
try {
const result = await fetchMessages(10, { chatEnabled: false });
assert.deepEqual(result, []);
assert.equal(stub.calls.length, 0);
} finally {
stub.restore();
}
});
test('fetchMessages applies normaliseMessageLimit when provided', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchMessages(999, {
normaliseMessageLimit: () => 25,
chatEnabled: true,
});
assert.ok(stub.calls[0].url.includes('limit=25'));
} finally {
stub.restore();
}
});
test('fetchMessages forwards encrypted=true and since when set', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchMessages(10, { encrypted: true, since: 555 });
assert.ok(stub.calls[0].url.includes('encrypted=true'));
assert.ok(stub.calls[0].url.includes('since=555'));
} finally {
stub.restore();
}
});
test('fetchMessages omits limit normalisation when normaliser is absent', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchMessages(50);
assert.ok(stub.calls[0].url.includes('limit=50'));
} finally {
stub.restore();
}
});
test('fetchMessages propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 500 });
try {
await assert.rejects(() => fetchMessages(10), /HTTP 500/);
} finally {
stub.restore();
}
});
@@ -0,0 +1,242 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
buildTelemetryIndex,
mergePositionsIntoNodes,
mergeTelemetryIntoNodes,
} from '../data-merge.js';
// ---------------------------------------------------------------------------
// mergePositionsIntoNodes — early returns
// ---------------------------------------------------------------------------
test('mergePositionsIntoNodes is a no-op when nodes is not an array', () => {
const positions = [{ node_id: '!a', latitude: 1, longitude: 2 }];
// Just assert no throw.
mergePositionsIntoNodes(null, positions);
mergePositionsIntoNodes(undefined, positions);
});
test('mergePositionsIntoNodes is a no-op when positions is not an array', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, null);
mergePositionsIntoNodes(nodes, undefined);
assert.deepEqual(nodes, [{ node_id: '!a' }]);
});
test('mergePositionsIntoNodes is a no-op for empty node arrays', () => {
mergePositionsIntoNodes([], [{ node_id: '!a', latitude: 1, longitude: 2 }]);
});
test('mergePositionsIntoNodes is a no-op when no nodes carry a string node_id', () => {
// Hits the `if (nodesById.size === 0) return;` early exit.
const nodes = [{ node_num: 5 }];
mergePositionsIntoNodes(nodes, [{ node_id: '!a', latitude: 1, longitude: 2 }]);
assert.deepEqual(nodes, [{ node_num: 5 }]);
});
// ---------------------------------------------------------------------------
// mergePositionsIntoNodes — merge logic
// ---------------------------------------------------------------------------
test('mergePositionsIntoNodes copies coordinates when none exist', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 52.5,
longitude: 13.4,
altitude: 100,
position_time: 1700000000,
position_time_iso: '2023-11-14T22:13:20.000Z',
location_source: 'gps',
precision_bits: 24,
}]);
assert.equal(nodes[0].latitude, 52.5);
assert.equal(nodes[0].longitude, 13.4);
assert.equal(nodes[0].altitude, 100);
assert.equal(nodes[0].position_time, 1700000000);
assert.equal(nodes[0].pos_time_iso, '2023-11-14T22:13:20.000Z');
assert.equal(nodes[0].location_source, 'gps');
assert.equal(nodes[0].precision_bits, 24);
});
test('mergePositionsIntoNodes generates an ISO when only numeric position_time is supplied', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 1,
longitude: 2,
position_time: 1700000000,
}]);
assert.equal(nodes[0].pos_time_iso, new Date(1700000000 * 1000).toISOString());
});
test('mergePositionsIntoNodes preserves ISO when numeric position_time is missing', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 1,
longitude: 2,
position_time_iso: '2024-01-01T00:00:00.000Z',
}]);
assert.equal(nodes[0].pos_time_iso, '2024-01-01T00:00:00.000Z');
});
test('mergePositionsIntoNodes ignores incoming positions with non-finite coordinates', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{ node_id: '!a', latitude: 'NaN', longitude: 1 }]);
assert.equal(nodes[0].latitude, undefined);
});
test('mergePositionsIntoNodes only applies the first matching position per node', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [
{ node_id: '!a', latitude: 1, longitude: 2 },
{ node_id: '!a', latitude: 99, longitude: 99 },
]);
assert.equal(nodes[0].latitude, 1);
});
test('mergePositionsIntoNodes skips packets older than the existing snapshot', () => {
const nodes = [{
node_id: '!a',
latitude: 5,
longitude: 5,
position_time: 2000,
}];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 9,
longitude: 9,
position_time: 1000,
}]);
assert.equal(nodes[0].latitude, 5); // unchanged
});
test('mergePositionsIntoNodes accepts strictly newer packets', () => {
const nodes = [{
node_id: '!a',
latitude: 5,
longitude: 5,
position_time: 1000,
}];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 9,
longitude: 9,
position_time: 2000,
}]);
assert.equal(nodes[0].latitude, 9);
});
test('mergePositionsIntoNodes skips entries lacking a node_id', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{ latitude: 1, longitude: 2 }, null]);
assert.equal(nodes[0].latitude, undefined);
});
// ---------------------------------------------------------------------------
// buildTelemetryIndex
// ---------------------------------------------------------------------------
test('buildTelemetryIndex returns empty maps for non-array input', () => {
const { byNodeId, byNodeNum } = buildTelemetryIndex(null);
assert.equal(byNodeId.size, 0);
assert.equal(byNodeNum.size, 0);
});
test('buildTelemetryIndex keeps the freshest entry per node_id', () => {
const { byNodeId } = buildTelemetryIndex([
{ node_id: '!a', rx_time: 100, payload: 'old' },
{ node_id: '!a', rx_time: 200, payload: 'new' },
]);
assert.equal(byNodeId.get('!a').entry.payload, 'new');
});
test('buildTelemetryIndex falls back to telemetry_time when rx_time is absent', () => {
const { byNodeId } = buildTelemetryIndex([
{ node_id: '!a', telemetry_time: 50, payload: 'fallback' },
]);
assert.equal(byNodeId.get('!a').timestamp, 50);
});
test('buildTelemetryIndex indexes by numeric node_num', () => {
const { byNodeNum } = buildTelemetryIndex([
{ node_num: 42, rx_time: 100, payload: 'first' },
]);
assert.ok(byNodeNum.has(42));
});
test('buildTelemetryIndex skips non-object entries', () => {
const { byNodeId, byNodeNum } = buildTelemetryIndex([null, 'string', 5]);
assert.equal(byNodeId.size, 0);
assert.equal(byNodeNum.size, 0);
});
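The index shape these tests describe can be sketched as follows — an illustrative reimplementation, not the shipped data-merge.js:

```javascript
// Build { byNodeId, byNodeNum } maps of { entry, timestamp } records,
// keeping only the freshest entry per key. rx_time wins over
// telemetry_time; entries with neither get timestamp 0. Non-object
// entries are skipped.
function buildTelemetryIndex(entries) {
  const byNodeId = new Map();
  const byNodeNum = new Map();
  if (!Array.isArray(entries)) return { byNodeId, byNodeNum };
  const keepFreshest = (map, key, record) => {
    const existing = map.get(key);
    if (!existing || record.timestamp > existing.timestamp) map.set(key, record);
  };
  for (const entry of entries) {
    if (!entry || typeof entry !== 'object') continue;
    const timestamp = Number.isFinite(entry.rx_time)
      ? entry.rx_time
      : Number.isFinite(entry.telemetry_time) ? entry.telemetry_time : 0;
    const record = { entry, timestamp };
    if (typeof entry.node_id === 'string') keepFreshest(byNodeId, entry.node_id, record);
    if (Number.isFinite(entry.node_num)) keepFreshest(byNodeNum, entry.node_num, record);
  }
  return { byNodeId, byNodeNum };
}
```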
// ---------------------------------------------------------------------------
// mergeTelemetryIntoNodes
// ---------------------------------------------------------------------------
test('mergeTelemetryIntoNodes is a no-op when nodes is empty or not an array', () => {
mergeTelemetryIntoNodes([], []);
mergeTelemetryIntoNodes(null, []);
});
test('mergeTelemetryIntoNodes copies metrics when matched by node_id', () => {
const nodes = [{ node_id: '!a' }];
mergeTelemetryIntoNodes(nodes, [{
node_id: '!a',
battery_level: 85,
voltage: 4.1,
rx_time: 100,
telemetry_time: 95,
}]);
assert.equal(nodes[0].battery_level, 85);
assert.equal(nodes[0].voltage, 4.1);
assert.equal(nodes[0].telemetry_time, 95);
assert.equal(nodes[0].telemetry_rx_time, 100);
});
test('mergeTelemetryIntoNodes falls back to node_num lookup', () => {
const nodes = [{ num: 42 }];
mergeTelemetryIntoNodes(nodes, [{
node_num: 42,
temperature: 21.5,
}]);
assert.equal(nodes[0].temperature, 21.5);
});
test('mergeTelemetryIntoNodes ignores nodes that do not match by id or num', () => {
const nodes = [{ node_id: '!a', num: 1 }];
mergeTelemetryIntoNodes(nodes, [{ node_id: '!b', battery_level: 50 }]);
assert.equal(nodes[0].battery_level, undefined);
});
test('mergeTelemetryIntoNodes skips null metric values', () => {
const nodes = [{ node_id: '!a', battery_level: 99 }];
mergeTelemetryIntoNodes(nodes, [{ node_id: '!a', battery_level: null }]);
assert.equal(nodes[0].battery_level, 99);
});
test('mergeTelemetryIntoNodes tolerates non-object entries in the list', () => {
const nodes = [null, undefined, { node_id: '!a' }];
mergeTelemetryIntoNodes(nodes, [{ node_id: '!a', voltage: 3.9 }]);
assert.equal(nodes[2].voltage, 3.9);
});
@@ -0,0 +1,405 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
cssEscape,
fmtCoords,
fmtHw,
formatDate,
formatShortInfoUptime,
formatSnrDisplay,
formatTime,
pad,
parseNodeNumericRef,
pickFirstProperty,
pickNumericProperty,
resolveTimestampSeconds,
shortInfoValueOrDash,
timeAgo,
timeHum,
toFiniteNumber,
} from '../format-utils.js';
// ---------------------------------------------------------------------------
// pad / formatTime / formatDate
// ---------------------------------------------------------------------------
test('pad pads small numbers to two digits', () => {
assert.equal(pad(0), '00');
assert.equal(pad(7), '07');
assert.equal(pad(42), '42');
});
test('formatTime renders HH:MM:SS', () => {
const d = new Date(2026, 0, 1, 9, 5, 7); // Local time.
assert.equal(formatTime(d), '09:05:07');
});
test('formatDate renders YYYY-MM-DD', () => {
const d = new Date(2026, 0, 9); // Jan 9, 2026 local.
assert.equal(formatDate(d), '2026-01-09');
});
// ---------------------------------------------------------------------------
// fmtHw
// ---------------------------------------------------------------------------
test('fmtHw passes through normal values', () => {
assert.equal(fmtHw('TBEAM'), 'TBEAM');
});
test('fmtHw hides the UNSET sentinel', () => {
assert.equal(fmtHw('UNSET'), '');
});
test('fmtHw returns empty string for falsy input', () => {
assert.equal(fmtHw(null), '');
assert.equal(fmtHw(''), '');
assert.equal(fmtHw(undefined), '');
});
// ---------------------------------------------------------------------------
// fmtCoords
// ---------------------------------------------------------------------------
test('fmtCoords formats numbers with default precision 5', () => {
assert.equal(fmtCoords(52.520008), '52.52001');
});
test('fmtCoords accepts a custom precision', () => {
assert.equal(fmtCoords(52.520008, 2), '52.52');
});
test('fmtCoords returns empty string for null, undefined, and empty', () => {
assert.equal(fmtCoords(null), '');
assert.equal(fmtCoords(undefined), '');
assert.equal(fmtCoords(''), '');
});
test('fmtCoords returns empty string for non-numeric input', () => {
assert.equal(fmtCoords('not a number'), '');
});
// ---------------------------------------------------------------------------
// formatSnrDisplay
// ---------------------------------------------------------------------------
test('formatSnrDisplay appends dB suffix with one decimal', () => {
assert.equal(formatSnrDisplay(7.49), '7.5 dB');
assert.equal(formatSnrDisplay(-3), '-3.0 dB');
});
test('formatSnrDisplay returns empty string for null and empty input', () => {
assert.equal(formatSnrDisplay(null), '');
assert.equal(formatSnrDisplay(''), '');
});
test('formatSnrDisplay returns empty string for non-finite input', () => {
assert.equal(formatSnrDisplay('abc'), '');
});
// ---------------------------------------------------------------------------
// timeHum
// ---------------------------------------------------------------------------
test('timeHum returns empty string for falsy input', () => {
assert.equal(timeHum(0), '');
assert.equal(timeHum(null), '');
});
test('timeHum returns 0s for negative durations', () => {
assert.equal(timeHum(-5), '0s');
});
test('timeHum formats sub-minute durations as seconds', () => {
assert.equal(timeHum(45), '45s');
});
test('timeHum formats sub-hour durations as minutes and seconds', () => {
assert.equal(timeHum(125), '2m 5s');
});
test('timeHum formats sub-day durations as hours and minutes', () => {
assert.equal(timeHum(3700), '1h 1m');
});
test('timeHum formats day-scale durations as days and hours', () => {
assert.equal(timeHum(90061), '1d 1h');
});
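The duration buckets asserted above imply an implementation along these lines — a sketch, not the shipped format-utils.js:

```javascript
// Humanise a duration in seconds: falsy input → '', negatives clamp to
// '0s', then two-unit buckets: seconds, minutes+seconds, hours+minutes,
// days+hours.
function timeHum(totalSeconds) {
  if (!totalSeconds) return ''; // 0, null, undefined → ''
  const s = Math.max(0, Math.floor(totalSeconds));
  if (s < 60) return `${s}s`;
  if (s < 3600) return `${Math.floor(s / 60)}m ${s % 60}s`;
  if (s < 86400) return `${Math.floor(s / 3600)}h ${Math.floor((s % 3600) / 60)}m`;
  return `${Math.floor(s / 86400)}d ${Math.floor((s % 86400) / 3600)}h`;
}
```

Under this reading, timeAgo is plausibly the same formatter applied to `now - timestamp` with future timestamps clamped to zero, which matches the timeAgo assertions below.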
// ---------------------------------------------------------------------------
// timeAgo
// ---------------------------------------------------------------------------
test('timeAgo returns empty string when the input is missing', () => {
assert.equal(timeAgo(0), '');
assert.equal(timeAgo(null), '');
});
test('timeAgo clamps future timestamps to 0s', () => {
assert.equal(timeAgo(5000, 1000), '0s');
});
test('timeAgo formats sub-minute deltas as seconds', () => {
assert.equal(timeAgo(950, 1000), '50s');
});
test('timeAgo formats sub-hour deltas as minutes and seconds', () => {
assert.equal(timeAgo(875, 1000), '2m 5s');
});
test('timeAgo formats sub-day deltas as hours and minutes', () => {
// Use a non-zero past timestamp; timeAgo treats 0 as "missing" and returns "".
assert.equal(timeAgo(1000, 4700), '1h 1m');
});
test('timeAgo formats day-scale deltas as days and hours', () => {
assert.equal(timeAgo(1000, 91061), '1d 1h');
});
// ---------------------------------------------------------------------------
// toFiniteNumber
// ---------------------------------------------------------------------------
test('toFiniteNumber converts numeric strings', () => {
assert.equal(toFiniteNumber('42'), 42);
});
test('toFiniteNumber returns null for null, undefined, and empty', () => {
assert.equal(toFiniteNumber(null), null);
assert.equal(toFiniteNumber(undefined), null);
assert.equal(toFiniteNumber(''), null);
});
test('toFiniteNumber rejects non-finite values', () => {
assert.equal(toFiniteNumber('abc'), null);
assert.equal(toFiniteNumber(Number.NaN), null);
assert.equal(toFiniteNumber(Number.POSITIVE_INFINITY), null);
});
// ---------------------------------------------------------------------------
// resolveTimestampSeconds
// ---------------------------------------------------------------------------
test('resolveTimestampSeconds prefers a numeric timestamp', () => {
assert.equal(resolveTimestampSeconds(1700000000, '2024-01-01T00:00:00Z'), 1700000000);
});
test('resolveTimestampSeconds falls back to ISO when numeric is missing', () => {
// 2024-01-01T00:00:00Z = 1704067200 seconds.
assert.equal(resolveTimestampSeconds(null, '2024-01-01T00:00:00Z'), 1704067200);
});
test('resolveTimestampSeconds returns null when both inputs are unusable', () => {
assert.equal(resolveTimestampSeconds(null, null), null);
assert.equal(resolveTimestampSeconds(null, ''), null);
assert.equal(resolveTimestampSeconds(null, 'not a date'), null);
});
// ---------------------------------------------------------------------------
// cssEscape
// ---------------------------------------------------------------------------
test('cssEscape returns empty string for non-strings and empty input', () => {
assert.equal(cssEscape(''), '');
assert.equal(cssEscape(null), '');
assert.equal(cssEscape(undefined), '');
assert.equal(cssEscape(42), '');
});
test('cssEscape uses window.CSS.escape when available', () => {
const previous = globalThis.window;
globalThis.window = {
CSS: {
escape: value => `escaped(${value})`,
},
};
try {
assert.equal(cssEscape('foo'), 'escaped(foo)');
} finally {
if (previous === undefined) {
delete globalThis.window;
} else {
globalThis.window = previous;
}
}
});
test('cssEscape falls back to manual escaping when window.CSS is unavailable', () => {
const previous = globalThis.window;
delete globalThis.window;
try {
// Underscores and hyphens pass through; everything else is backslash-escaped.
assert.equal(cssEscape('a-b_c'), 'a-b_c');
assert.equal(cssEscape('a:b'), 'a\\:b');
} finally {
if (previous !== undefined) {
globalThis.window = previous;
}
}
});
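// ---------------------------------------------------------------------------
// A sketch of cssEscape consistent with the two tests above (inferred, not
// the shipped implementation): prefer the native window.CSS.escape, else
// backslash-escape everything outside [a-zA-Z0-9_-].
// ---------------------------------------------------------------------------

```javascript
// Hypothetical reconstruction inferred from the assertions above.
function cssEscape(value) {
  if (typeof value !== 'string' || value === '') return '';
  // Prefer the native escape when a browser window is present.
  if (typeof window !== 'undefined' && window.CSS && typeof window.CSS.escape === 'function') {
    return window.CSS.escape(value);
  }
  // Manual fallback: word characters and hyphens pass through untouched,
  // everything else gets a backslash prefix.
  return value.replace(/[^a-zA-Z0-9_-]/g, ch => `\\${ch}`);
}
```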
// ---------------------------------------------------------------------------
// formatShortInfoUptime
// ---------------------------------------------------------------------------
test('formatShortInfoUptime returns empty string for null and empty', () => {
assert.equal(formatShortInfoUptime(null), '');
assert.equal(formatShortInfoUptime(''), '');
});
test('formatShortInfoUptime returns empty string for non-finite input', () => {
assert.equal(formatShortInfoUptime('abc'), '');
});
test('formatShortInfoUptime renders 0s for zero', () => {
assert.equal(formatShortInfoUptime(0), '0s');
});
test('formatShortInfoUptime delegates to timeHum for positive values', () => {
assert.equal(formatShortInfoUptime(125), '2m 5s');
});
// ---------------------------------------------------------------------------
// shortInfoValueOrDash
// ---------------------------------------------------------------------------
test('shortInfoValueOrDash returns the string form of present values', () => {
assert.equal(shortInfoValueOrDash('text'), 'text');
assert.equal(shortInfoValueOrDash(0), '0');
});
test('shortInfoValueOrDash returns em dash for null, undefined, and empty', () => {
assert.equal(shortInfoValueOrDash(null), '—');
assert.equal(shortInfoValueOrDash(undefined), '—');
assert.equal(shortInfoValueOrDash(''), '—');
});
// ---------------------------------------------------------------------------
// pickFirstProperty
// ---------------------------------------------------------------------------
test('pickFirstProperty returns null when sources or keys are not arrays', () => {
assert.equal(pickFirstProperty(null, ['a']), null);
assert.equal(pickFirstProperty([{}], null), null);
});
test('pickFirstProperty returns the first present trimmed string', () => {
const sources = [
{},
{ id: ' ' },
{ id: ' hello ' },
];
assert.equal(pickFirstProperty(sources, ['id']), 'hello');
});
test('pickFirstProperty returns the first non-string value verbatim', () => {
assert.equal(pickFirstProperty([{ count: 5 }], ['count']), 5);
assert.equal(pickFirstProperty([{ flag: false }], ['flag']), false);
});
test('pickFirstProperty skips non-object entries and absent properties', () => {
const sources = [null, 42, { other: 'value' }, { name: 'final' }];
assert.equal(pickFirstProperty(sources, ['name']), 'final');
});
test('pickFirstProperty returns null when no source provides a value', () => {
assert.equal(pickFirstProperty([{ a: null }, { a: '' }], ['a']), null);
});
// ---------------------------------------------------------------------------
// pickNumericProperty
// ---------------------------------------------------------------------------
test('pickNumericProperty returns null when sources or keys are not arrays', () => {
assert.equal(pickNumericProperty(null, ['a']), null);
assert.equal(pickNumericProperty([{}], null), null);
});
test('pickNumericProperty returns the first finite numeric value', () => {
const sources = [
{ value: '' },
{ value: 'abc' },
{ value: '42' },
];
assert.equal(pickNumericProperty(sources, ['value']), 42);
});
test('pickNumericProperty skips non-object entries and missing keys', () => {
const sources = [null, undefined, { other: 1 }, { count: 7 }];
assert.equal(pickNumericProperty(sources, ['count']), 7);
});
test('pickNumericProperty returns null when no candidate is finite', () => {
assert.equal(pickNumericProperty([{ a: 'abc' }, { a: null }], ['a']), null);
});
// ---------------------------------------------------------------------------
// parseNodeNumericRef
// ---------------------------------------------------------------------------
test('parseNodeNumericRef returns null for null and undefined', () => {
assert.equal(parseNodeNumericRef(null), null);
assert.equal(parseNodeNumericRef(undefined), null);
});
test('parseNodeNumericRef passes through finite numbers', () => {
assert.equal(parseNodeNumericRef(42), 42);
});
test('parseNodeNumericRef returns null for non-finite numbers', () => {
assert.equal(parseNodeNumericRef(Number.NaN), null);
assert.equal(parseNodeNumericRef(Number.POSITIVE_INFINITY), null);
});
test('parseNodeNumericRef parses !-prefixed hex strings', () => {
assert.equal(parseNodeNumericRef('!aabbccdd'), 0xaabbccdd);
});
test('parseNodeNumericRef rejects !-prefixed strings with invalid characters', () => {
assert.equal(parseNodeNumericRef('!ZZZ'), null);
});
test('parseNodeNumericRef parses 0x-prefixed hex strings', () => {
assert.equal(parseNodeNumericRef('0x1A'), 0x1a);
});
test('parseNodeNumericRef parses decimal strings', () => {
assert.equal(parseNodeNumericRef('123'), 123);
});
test('parseNodeNumericRef returns null for blank strings', () => {
assert.equal(parseNodeNumericRef(''), null);
assert.equal(parseNodeNumericRef(' '), null);
});
test('parseNodeNumericRef returns null for unparseable strings', () => {
assert.equal(parseNodeNumericRef('not a number'), null);
});
test('parseNodeNumericRef coerces other inputs via Number()', () => {
// Booleans, Date, etc. — anything the global Number() constructor can
// map to a finite number passes through.
assert.equal(parseNodeNumericRef(true), 1);
assert.equal(parseNodeNumericRef(false), 0);
});
test('parseNodeNumericRef returns null for unparseable non-string inputs', () => {
assert.equal(parseNodeNumericRef({}), null);
});
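// ---------------------------------------------------------------------------
// A sketch of parseNodeNumericRef that satisfies the suite above (inferred
// from the assertions; the real implementation may differ in detail):
// ---------------------------------------------------------------------------

```javascript
// Hypothetical reconstruction inferred from the assertions above.
function parseNodeNumericRef(value) {
  if (value === null || value === undefined) return null;
  if (typeof value === 'number') return Number.isFinite(value) ? value : null;
  if (typeof value === 'string') {
    const text = value.trim();
    if (!text) return null;
    if (text.startsWith('!')) {
      // !-prefixed Meshtastic ids must be pure hex after the bang.
      return /^![0-9a-fA-F]+$/.test(text) ? Number.parseInt(text.slice(1), 16) : null;
    }
    const parsed = Number(text); // handles both "0x1A" and "123"
    return Number.isFinite(parsed) ? parsed : null;
  }
  // Everything else is coerced via Number(): true -> 1, {} -> NaN -> null.
  const coerced = Number(value);
  return Number.isFinite(coerced) ? coerced : null;
}
```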
@@ -0,0 +1,152 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { getActiveFullscreenElement, legendClickHandler } from '../fullscreen-helpers.js';
// ---------------------------------------------------------------------------
// getActiveFullscreenElement
// ---------------------------------------------------------------------------
test('getActiveFullscreenElement returns null when document is undefined', () => {
const previousDoc = globalThis.document;
// Node has no document by default, but other tests in the suite may have
// assigned one — clear it explicitly for this case.
delete globalThis.document;
try {
assert.equal(getActiveFullscreenElement(), null);
} finally {
if (previousDoc !== undefined) {
globalThis.document = previousDoc;
}
}
});
test('getActiveFullscreenElement prefers fullscreenElement', () => {
const dummy = { tag: 'std' };
const previousDoc = globalThis.document;
globalThis.document = {
fullscreenElement: dummy,
webkitFullscreenElement: { tag: 'webkit' },
msFullscreenElement: { tag: 'ms' },
};
try {
assert.equal(getActiveFullscreenElement(), dummy);
} finally {
if (previousDoc === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDoc;
}
}
});
test('getActiveFullscreenElement falls back to webkit prefix', () => {
const dummy = { tag: 'webkit' };
const previousDoc = globalThis.document;
globalThis.document = {
fullscreenElement: null,
webkitFullscreenElement: dummy,
msFullscreenElement: null,
};
try {
assert.equal(getActiveFullscreenElement(), dummy);
} finally {
if (previousDoc === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDoc;
}
}
});
test('getActiveFullscreenElement falls back to ms prefix', () => {
const dummy = { tag: 'ms' };
const previousDoc = globalThis.document;
globalThis.document = {
fullscreenElement: null,
webkitFullscreenElement: null,
msFullscreenElement: dummy,
};
try {
assert.equal(getActiveFullscreenElement(), dummy);
} finally {
if (previousDoc === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDoc;
}
}
});
test('getActiveFullscreenElement returns null when no fullscreen owner is set', () => {
const previousDoc = globalThis.document;
globalThis.document = {
fullscreenElement: null,
webkitFullscreenElement: null,
msFullscreenElement: null,
};
try {
assert.equal(getActiveFullscreenElement(), null);
} finally {
if (previousDoc === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDoc;
}
}
});
// ---------------------------------------------------------------------------
// legendClickHandler
// ---------------------------------------------------------------------------
test('legendClickHandler always calls preventDefault and stopPropagation', () => {
let preventCalls = 0;
let stopCalls = 0;
let bodyCalls = 0;
const handler = legendClickHandler(() => {
bodyCalls += 1;
});
const fakeEvent = {
preventDefault: () => {
preventCalls += 1;
},
stopPropagation: () => {
stopCalls += 1;
},
};
handler(fakeEvent);
assert.equal(preventCalls, 1);
assert.equal(stopCalls, 1);
assert.equal(bodyCalls, 1);
});
test('legendClickHandler forwards the event object to the body', () => {
let received = null;
const handler = legendClickHandler(event => {
received = event;
});
const fakeEvent = {
preventDefault() {},
stopPropagation() {},
payload: 'forwarded',
};
handler(fakeEvent);
assert.equal(received, fakeEvent);
});
@@ -0,0 +1,188 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
applyNodeNameFallback,
extractIdentifierFromHref,
getNodeDisplayNameForOverlay,
getNodeIdentifierFromLink,
shouldHandleNodeLongLink,
} from '../long-link-router.js';
// ---------------------------------------------------------------------------
// shouldHandleNodeLongLink
// ---------------------------------------------------------------------------
test('shouldHandleNodeLongLink rejects null and undefined', () => {
assert.equal(shouldHandleNodeLongLink(null), false);
assert.equal(shouldHandleNodeLongLink(undefined), false);
});
test('shouldHandleNodeLongLink rejects elements without a dataset', () => {
assert.equal(shouldHandleNodeLongLink({}), false);
});
test('shouldHandleNodeLongLink honours an explicit nodeDetailLink=false opt-out', () => {
const link = { dataset: { nodeDetailLink: 'false' } };
assert.equal(shouldHandleNodeLongLink(link), false);
});
test('shouldHandleNodeLongLink accepts elements with a permissive dataset', () => {
assert.equal(shouldHandleNodeLongLink({ dataset: {} }), true);
assert.equal(shouldHandleNodeLongLink({ dataset: { nodeDetailLink: 'true' } }), true);
});
// ---------------------------------------------------------------------------
// extractIdentifierFromHref
// ---------------------------------------------------------------------------
test('extractIdentifierFromHref returns empty string for non-string and empty input', () => {
assert.equal(extractIdentifierFromHref(null), '');
assert.equal(extractIdentifierFromHref(undefined), '');
assert.equal(extractIdentifierFromHref(''), '');
assert.equal(extractIdentifierFromHref(42), '');
});
test('extractIdentifierFromHref returns empty string when no /nodes/!… segment is present', () => {
assert.equal(extractIdentifierFromHref('/about'), '');
assert.equal(extractIdentifierFromHref('https://example.com/'), '');
});
test('extractIdentifierFromHref returns the canonical node id for /nodes/!… URIs', () => {
assert.equal(extractIdentifierFromHref('/nodes/!aabbccdd'), '!aabbccdd');
// canonicalNodeIdentifier preserves case; it only ensures the leading "!".
assert.equal(
extractIdentifierFromHref('https://example.com/nodes/!AABBCCDD?ref=1'),
'!AABBCCDD',
);
});
test('extractIdentifierFromHref does not decode URI-encoded ! prefixes', () => {
// %21 is the URL-encoded form of !, but the identifier matcher requires a
// literal "!" after /nodes/, so the encoded form yields no node id.
assert.equal(extractIdentifierFromHref('/nodes/%21aabbccdd'), '');
// A literal ! still matches even when a fragment trails the identifier.
assert.equal(extractIdentifierFromHref('/nodes/!aabbccdd#anchor'), '!aabbccdd');
});
test('extractIdentifierFromHref falls back to the raw match when decoding throws', () => {
// A bare "%" tail is malformed UTF-8 percent encoding and makes
// decodeURIComponent raise URIError. The catch branch should still
// canonicalise the un-decoded match.
assert.equal(
extractIdentifierFromHref('/nodes/!aabbccdd%E0'),
'!aabbccdd%E0',
);
});
// ---------------------------------------------------------------------------
// getNodeIdentifierFromLink
// ---------------------------------------------------------------------------
test('getNodeIdentifierFromLink returns empty string for falsy input', () => {
assert.equal(getNodeIdentifierFromLink(null), '');
assert.equal(getNodeIdentifierFromLink(undefined), '');
});
test('getNodeIdentifierFromLink prefers dataset.nodeId when canonical', () => {
const link = { dataset: { nodeId: '!aabbccdd' } };
assert.equal(getNodeIdentifierFromLink(link), '!aabbccdd');
});
test('getNodeIdentifierFromLink falls back to getAttribute("href") when dataset is absent', () => {
const link = {
getAttribute(name) {
return name === 'href' ? '/nodes/!aabbccdd' : null;
},
};
assert.equal(getNodeIdentifierFromLink(link), '!aabbccdd');
});
test('getNodeIdentifierFromLink falls back to the .href property when getAttribute is absent', () => {
const link = { href: '/nodes/!aabbccdd' };
assert.equal(getNodeIdentifierFromLink(link), '!aabbccdd');
});
test('getNodeIdentifierFromLink returns empty string when nothing parses', () => {
assert.equal(getNodeIdentifierFromLink({}), '');
});
// ---------------------------------------------------------------------------
// getNodeDisplayNameForOverlay
// ---------------------------------------------------------------------------
test('getNodeDisplayNameForOverlay returns empty string for non-objects', () => {
assert.equal(getNodeDisplayNameForOverlay(null), '');
assert.equal(getNodeDisplayNameForOverlay(42), '');
});
test('getNodeDisplayNameForOverlay prefers long_name', () => {
const node = { long_name: 'Alpha Long', short_name: 'A', node_id: '!a' };
assert.equal(getNodeDisplayNameForOverlay(node), 'Alpha Long');
});
test('getNodeDisplayNameForOverlay falls back to short_name', () => {
const node = { short_name: 'A', node_id: '!a' };
assert.equal(getNodeDisplayNameForOverlay(node), 'A');
});
test('getNodeDisplayNameForOverlay falls back to node_id when names are absent', () => {
assert.equal(getNodeDisplayNameForOverlay({ node_id: '!a' }), '!a');
});
test('getNodeDisplayNameForOverlay reads camelCase keys too', () => {
assert.equal(getNodeDisplayNameForOverlay({ longName: 'L' }), 'L');
assert.equal(getNodeDisplayNameForOverlay({ shortName: 'S' }), 'S');
});
// ---------------------------------------------------------------------------
// applyNodeNameFallback
// ---------------------------------------------------------------------------
test('applyNodeNameFallback is a no-op for non-objects', () => {
// Just ensure no throw.
applyNodeNameFallback(null);
applyNodeNameFallback(undefined);
});
test('applyNodeNameFallback fills missing names from node_id', () => {
const node = { node_id: '!aabbccdd' };
applyNodeNameFallback(node);
assert.equal(node.short_name, 'ccdd');
assert.equal(node.long_name, 'Meshtastic !aabbccdd');
});
test('applyNodeNameFallback updates camelCase aliases when present', () => {
const node = { node_id: '!aabbccdd', shortName: '', longName: '' };
applyNodeNameFallback(node);
assert.equal(node.shortName, 'ccdd');
assert.equal(node.longName, 'Meshtastic !aabbccdd');
});
test('applyNodeNameFallback leaves existing names untouched', () => {
const node = { node_id: '!aabbccdd', short_name: 'AAA', long_name: 'Alpha' };
applyNodeNameFallback(node);
assert.equal(node.short_name, 'AAA');
assert.equal(node.long_name, 'Alpha');
});
test('applyNodeNameFallback is a no-op when no node_id is available', () => {
const node = {};
applyNodeNameFallback(node);
assert.deepEqual(node, {});
});
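// ---------------------------------------------------------------------------
// A sketch of applyNodeNameFallback consistent with the assertions above
// (an inference from the tests, not the shipped code): derive the short name
// from the last four hex digits of node_id and the long name from a
// "Meshtastic <id>" template, filling only blank slots.
// ---------------------------------------------------------------------------

```javascript
// Hypothetical reconstruction inferred from the assertions above.
function applyNodeNameFallback(node) {
  if (!node || typeof node !== 'object') return;
  const id = node.node_id || node.nodeId;
  if (typeof id !== 'string' || !id.startsWith('!')) return;
  const short = id.slice(-4);            // "!aabbccdd" -> "ccdd"
  const long = `Meshtastic ${id}`;
  if (!node.short_name) node.short_name = short;
  if (!node.long_name) node.long_name = long;
  // Keep camelCase aliases in sync when the object already carries them.
  if ('shortName' in node && !node.shortName) node.shortName = short;
  if ('longName' in node && !node.longName) node.longName = long;
}
```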
@@ -0,0 +1,228 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { createOfflineTileLayer } from '../offline-tile-layer.js';
/**
* Build a minimal Leaflet stub exposing the methods the offline tile layer
* needs (``L.gridLayer``). The returned grid-layer object is otherwise a
* plain bag whose ``createTile`` slot is reassigned by the production code.
*
* @returns {Object} Leaflet-compatible stub.
*/
function makeLeafletStub() {
return {
gridLayer(options) {
return { options, createTile: null };
},
};
}
/**
* Install a minimal ``document`` stub whose ``createElement`` returns objects
* that satisfy the offline tile layer's small DOM contract: canvas elements
* expose a configurable ``getContext`` slot, while plain ``div`` elements
* expose ``style``, ``className`` and ``cloneNode``.
*
* @param {{ canvasContext?: any }} [options] Override the canvas 2D context.
* @returns {{ restore: Function }} Teardown handle.
*/
function withDocumentStub({ canvasContext } = {}) {
const previousDocument = globalThis.document;
globalThis.document = {
createElement(tag) {
if (tag === 'canvas') {
return {
width: 0,
height: 0,
getContext: () => (canvasContext === undefined ? makeRecordingContext() : canvasContext),
};
}
const element = {
tag,
className: '',
style: {},
textContent: '',
cloneNode() {
// Return a shallow copy that retains the recorded properties so
// assertions can inspect what the production code rendered.
return JSON.parse(JSON.stringify({
tag: element.tag,
className: element.className,
style: element.style,
textContent: element.textContent,
}));
},
};
return element;
},
};
return {
restore() {
if (previousDocument === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDocument;
}
},
};
}
/**
* Build a Canvas 2D context stub that records the calls it receives. The
* tests inspect the call list to ensure the production code follows the
* expected drawing path.
*
* @returns {Object} Recording 2D context.
*/
function makeRecordingContext() {
const calls = [];
const ctx = {
calls,
fillStyle: null,
strokeStyle: null,
lineWidth: 0,
font: '',
textBaseline: '',
textAlign: '',
createLinearGradient(...args) {
calls.push(['createLinearGradient', args]);
return { addColorStop(...stop) { calls.push(['addColorStop', stop]); } };
},
fillRect(...args) {
calls.push(['fillRect', args]);
},
beginPath() {
calls.push(['beginPath']);
},
moveTo(...args) {
calls.push(['moveTo', args]);
},
lineTo(...args) {
calls.push(['lineTo', args]);
},
stroke() {
calls.push(['stroke']);
},
fillText(...args) {
calls.push(['fillText', args]);
},
};
return ctx;
}
// ---------------------------------------------------------------------------
// createOfflineTileLayer — early returns
// ---------------------------------------------------------------------------
test('createOfflineTileLayer returns null when Leaflet is missing', () => {
assert.equal(createOfflineTileLayer(null), null);
assert.equal(createOfflineTileLayer(undefined), null);
});
test('createOfflineTileLayer returns null when Leaflet has no gridLayer factory', () => {
assert.equal(createOfflineTileLayer({}), null);
});
// ---------------------------------------------------------------------------
// createOfflineTileLayer — happy path
// ---------------------------------------------------------------------------
test('createOfflineTileLayer attaches a createTile method on success', () => {
const stub = withDocumentStub();
try {
const layer = createOfflineTileLayer(makeLeafletStub());
assert.ok(layer);
assert.equal(typeof layer.createTile, 'function');
} finally {
stub.restore();
}
});
test('createOfflineTileLayer renders a canvas tile when getContext succeeds', () => {
const stub = withDocumentStub();
try {
const layer = createOfflineTileLayer(makeLeafletStub());
const tile = layer.createTile({ x: 1, y: 1, z: 1 });
// The returned element should be the canvas itself (has getContext).
assert.equal(typeof tile.getContext, 'function');
assert.equal(tile.width, 256);
assert.equal(tile.height, 256);
} finally {
stub.restore();
}
});
test('createOfflineTileLayer falls back to placeholder when canvas getContext returns null', () => {
const stub = withDocumentStub({ canvasContext: null });
// Silence the warn from the fallback branch so test output stays clean.
const previousWarn = console.warn;
console.warn = () => {};
try {
const layer = createOfflineTileLayer(makeLeafletStub());
const tile = layer.createTile({ x: 0, y: 0, z: 0 });
// Fallback is the cloned <div> — no getContext method.
assert.equal(tile.getContext, undefined);
assert.equal(tile.tag, 'div');
assert.equal(tile.className, 'offline-tile-fallback');
} finally {
console.warn = previousWarn;
stub.restore();
}
});
test('createOfflineTileLayer reuses the cached fallback tile across invocations', () => {
const stub = withDocumentStub({ canvasContext: null });
const previousWarn = console.warn;
console.warn = () => {};
try {
const layer = createOfflineTileLayer(makeLeafletStub());
const first = layer.createTile({ x: 0, y: 0, z: 0 });
const second = layer.createTile({ x: 1, y: 0, z: 0 });
// The cached fallback template is cloned per tile, so successive calls
// yield fresh but structurally identical nodes.
assert.deepEqual(first, second);
} finally {
console.warn = previousWarn;
stub.restore();
}
});
test('createOfflineTileLayer falls back when the canvas drawing path throws', () => {
// Build a context whose `createLinearGradient` throws to force the
// catch-and-fall-back branch.
const ctx = makeRecordingContext();
ctx.createLinearGradient = () => {
throw new Error('boom');
};
const stub = withDocumentStub({ canvasContext: ctx });
const previousError = console.error;
console.error = () => {};
try {
const layer = createOfflineTileLayer(makeLeafletStub());
const tile = layer.createTile({ x: 0, y: 0, z: 0 });
// Production code logs and returns the fallback element.
assert.equal(tile.getContext, undefined);
assert.equal(tile.tag, 'div');
} finally {
console.error = previousError;
stub.restore();
}
});
@@ -0,0 +1,128 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
compareNumber,
compareString,
hasNumberValue,
hasStringValue,
} from '../sort-comparators.js';
// ---------------------------------------------------------------------------
// hasStringValue
// ---------------------------------------------------------------------------
test('hasStringValue returns true for non-empty strings', () => {
assert.equal(hasStringValue('hi'), true);
assert.equal(hasStringValue(' text '), true);
});
test('hasStringValue returns false for null, undefined, and blank input', () => {
assert.equal(hasStringValue(null), false);
assert.equal(hasStringValue(undefined), false);
assert.equal(hasStringValue(''), false);
assert.equal(hasStringValue(' '), false);
});
test('hasStringValue treats numbers as their string form', () => {
assert.equal(hasStringValue(0), true);
assert.equal(hasStringValue(42), true);
});
// ---------------------------------------------------------------------------
// hasNumberValue
// ---------------------------------------------------------------------------
test('hasNumberValue accepts finite numbers', () => {
assert.equal(hasNumberValue(42), true);
assert.equal(hasNumberValue(-1.5), true);
assert.equal(hasNumberValue(0), true);
});
test('hasNumberValue rejects null, undefined, and empty string', () => {
assert.equal(hasNumberValue(null), false);
assert.equal(hasNumberValue(undefined), false);
assert.equal(hasNumberValue(''), false);
});
test('hasNumberValue rejects non-finite numbers and unparseable strings', () => {
assert.equal(hasNumberValue(Number.NaN), false);
assert.equal(hasNumberValue(Number.POSITIVE_INFINITY), false);
assert.equal(hasNumberValue('abc'), false);
});
test('hasNumberValue accepts numeric strings', () => {
assert.equal(hasNumberValue('42'), true);
assert.equal(hasNumberValue(' -1.5 '), true);
});
// ---------------------------------------------------------------------------
// compareString
// ---------------------------------------------------------------------------
test('compareString sorts non-empty values lexicographically', () => {
assert.ok(compareString('alpha', 'beta') < 0);
assert.ok(compareString('beta', 'alpha') > 0);
assert.equal(compareString('alpha', 'alpha'), 0);
});
test('compareString trims surrounding whitespace before comparing', () => {
assert.equal(compareString(' alpha ', 'alpha'), 0);
});
test('compareString sorts blank values to the end', () => {
assert.ok(compareString('alpha', '') < 0);
assert.ok(compareString('', 'alpha') > 0);
});
test('compareString returns 0 when both values are blank', () => {
assert.equal(compareString(null, ''), 0);
assert.equal(compareString('', ' '), 0);
});
test('compareString uses numeric collation for digit-bearing strings', () => {
// localeCompare with { numeric: true } orders "node-2" before "node-10".
assert.ok(compareString('node-2', 'node-10') < 0);
});
// ---------------------------------------------------------------------------
// compareNumber
// ---------------------------------------------------------------------------
test('compareNumber sorts ascending for finite values', () => {
assert.ok(compareNumber(1, 2) < 0);
assert.ok(compareNumber(2, 1) > 0);
assert.equal(compareNumber(1, 1), 0);
});
test('compareNumber accepts numeric strings', () => {
assert.ok(compareNumber('1', '2') < 0);
assert.ok(compareNumber('2', '1') > 0);
});
test('compareNumber pushes invalid values after valid ones', () => {
assert.ok(compareNumber(5, 'not-a-number') < 0);
assert.ok(compareNumber('not-a-number', 5) > 0);
});
test('compareNumber returns 0 when both inputs are unparseable', () => {
assert.equal(compareNumber('abc', 'def'), 0);
// Note: undefined coerces to NaN and is unparseable; Number(null) is 0,
// so null would count as finite and sort as a valid value here.
assert.equal(compareNumber(undefined, 'abc'), 0);
});
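// ---------------------------------------------------------------------------
// A sketch of the comparators consistent with the assertions above (inferred
// from the tests; the real module may differ): invalid or blank values sort
// after valid ones, and strings use numeric collation.
// ---------------------------------------------------------------------------

```javascript
// Hypothetical reconstruction inferred from the assertions above.
function hasNumberValue(value) {
  if (value === null || value === undefined || value === '') return false;
  return Number.isFinite(Number(value));
}

// Invalid values sort after valid ones; two invalid values tie at 0.
function compareNumber(a, b) {
  const aOk = hasNumberValue(a);
  const bOk = hasNumberValue(b);
  if (!aOk && !bOk) return 0;
  if (!aOk) return 1;
  if (!bOk) return -1;
  return Number(a) - Number(b);
}

// Blank strings sort last; { numeric: true } orders "node-2" before "node-10".
function compareString(a, b) {
  const aStr = a === null || a === undefined ? '' : String(a).trim();
  const bStr = b === null || b === undefined ? '' : String(b).trim();
  if (!aStr && !bStr) return 0;
  if (!aStr) return 1;
  if (!bStr) return -1;
  return aStr.localeCompare(bStr, undefined, { numeric: true });
}
```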
@@ -0,0 +1,49 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { tileToLat, tileToLon } from '../tile-coords.js';
test('tileToLon zero tile at zoom 0 is -180', () => {
assert.equal(tileToLon(0, 0), -180);
});
test('tileToLon centre tile at zoom 1 is 0', () => {
assert.equal(tileToLon(1, 1), 0);
});
test('tileToLon last tile at zoom 2 is 90', () => {
assert.equal(tileToLon(3, 2), 90);
});
test('tileToLat zero tile at zoom 0 is roughly 85.0511', () => {
// Mercator clamp: northernmost projectable latitude.
assert.ok(Math.abs(tileToLat(0, 0) - 85.0511287798066) < 1e-9);
});
test('tileToLat centre tile at zoom 1 is 0', () => {
assert.equal(tileToLat(1, 1), 0);
});
test('tileToLat is symmetric around the equator at zoom 1', () => {
// Tile y=0 (northern edge) and y=2 (southern edge) at zoom 1 should
// be equal in magnitude with opposite signs.
const north = tileToLat(0, 1);
const south = tileToLat(2, 1);
assert.ok(Math.abs(north + south) < 1e-9);
});
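// ---------------------------------------------------------------------------
// The assertions above match the standard slippy-map (Web Mercator)
// tile-to-coordinate formulas; the module under test presumably implements
// them along these lines (an inference, not the shipped code):
// ---------------------------------------------------------------------------

```javascript
// Hypothetical reconstruction inferred from the assertions above.
function tileToLon(x, z) {
  // Each zoom level splits the world into 2^z columns spanning 360 degrees.
  return (x / 2 ** z) * 360 - 180;
}

function tileToLat(y, z) {
  // Inverse Mercator: y = 0 maps to ~85.0511 degrees N, the projection clamp.
  const n = Math.PI - (2 * Math.PI * y) / 2 ** z;
  return (180 / Math.PI) * Math.atan(Math.sinh(n));
}
```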
@@ -0,0 +1,119 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { buildNeighborTooltipHtml, buildTraceTooltipHtml } from '../tooltip-html.js';
// ---------------------------------------------------------------------------
// buildTraceTooltipHtml
// ---------------------------------------------------------------------------
test('buildTraceTooltipHtml returns empty string for non-arrays', () => {
assert.equal(buildTraceTooltipHtml(null), '');
assert.equal(buildTraceTooltipHtml(undefined), '');
assert.equal(buildTraceTooltipHtml({}), '');
});
test('buildTraceTooltipHtml returns empty string when fewer than two hops are supplied', () => {
assert.equal(buildTraceTooltipHtml([]), '');
assert.equal(buildTraceTooltipHtml([{ short_name: 'A', node_id: '!a' }]), '');
});
test('buildTraceTooltipHtml emits a content fragment with arrows between hops', () => {
const html = buildTraceTooltipHtml([
{ short_name: 'AAA', node_id: '!a' },
{ short_name: 'BBB', node_id: '!b' },
]);
assert.ok(html.includes('trace-tooltip__content'));
assert.ok(html.includes('trace-tooltip__arrow'));
// One arrow between two badges.
const arrowCount = (html.match(/trace-tooltip__arrow/g) || []).length;
assert.equal(arrowCount, 1);
});
test('buildTraceTooltipHtml falls back to node_id when short name is missing', () => {
const html = buildTraceTooltipHtml([
{ node_id: '!a' },
{ node_id: '!b' },
]);
// The badge should reference the node_id.
assert.ok(html.includes('!a'));
assert.ok(html.includes('!b'));
});
test('buildTraceTooltipHtml filters out malformed entries', () => {
const html = buildTraceTooltipHtml([
null,
{ short_name: 'AAA', node_id: '!a' },
'not an object',
{ short_name: 'BBB', node_id: '!b' },
]);
// Two valid entries → exactly one arrow.
const arrowCount = (html.match(/trace-tooltip__arrow/g) || []).length;
assert.equal(arrowCount, 1);
});
test('buildTraceTooltipHtml returns empty string when every entry is malformed', () => {
assert.equal(buildTraceTooltipHtml([null, 'x', 1]), '');
});
// ---------------------------------------------------------------------------
// buildNeighborTooltipHtml
// ---------------------------------------------------------------------------
test('buildNeighborTooltipHtml returns empty string for falsy segments', () => {
assert.equal(buildNeighborTooltipHtml(null), '');
assert.equal(buildNeighborTooltipHtml(undefined), '');
});
test('buildNeighborTooltipHtml emits source → target HTML', () => {
const html = buildNeighborTooltipHtml({
sourceShortName: 'AAA',
targetShortName: 'BBB',
sourceNode: { node_id: '!a', long_name: 'Alpha' },
targetNode: { node_id: '!b', long_name: 'Beta' },
sourceRole: 'CLIENT',
targetRole: 'CLIENT',
});
assert.ok(html.includes('trace-tooltip__content'));
assert.ok(html.includes('trace-tooltip__arrow'));
assert.ok(html.includes('Alpha'));
assert.ok(html.includes('Beta'));
});
test('buildNeighborTooltipHtml falls back to node short_name fields', () => {
const html = buildNeighborTooltipHtml({
sourceNode: { short_name: 'AAA', node_id: '!a' },
targetNode: { short_name: 'BBB', node_id: '!b' },
});
assert.ok(html.includes('trace-tooltip__arrow'));
});
test('buildNeighborTooltipHtml falls back to node_id when no short name is present', () => {
const html = buildNeighborTooltipHtml({
sourceNode: { node_id: '!a' },
targetNode: { node_id: '!b' },
});
assert.ok(html.includes('!a'));
assert.ok(html.includes('!b'));
});
test('buildNeighborTooltipHtml returns empty string when either side has no short name', () => {
assert.equal(buildNeighborTooltipHtml({ sourceNode: { node_id: '!a' } }), '');
assert.equal(buildNeighborTooltipHtml({ targetNode: { node_id: '!b' } }), '');
});
@@ -0,0 +1,36 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Stable numeric limits shared between ``main.js`` and the helpers extracted
* into ``main/`` submodules.
*
* @module main/constants
*/
import { SNAPSHOT_WINDOW } from '../snapshot-aggregator.js';
/** Maximum number of node rows requested from the API. */
export const NODE_LIMIT = 1000;
/** Maximum number of trace rows requested from the API. */
export const TRACE_LIMIT = 200;
/** Maximum age (seconds) for traces displayed on the map. */
export const TRACE_MAX_AGE_SECONDS = 28 * 24 * 60 * 60;
/** Snapshot row limit: how many rows we request to build a richer aggregate (equals SNAPSHOT_WINDOW). */
export const SNAPSHOT_LIMIT = SNAPSHOT_WINDOW;
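As a minimal illustration of how a consumer might apply `TRACE_MAX_AGE_SECONDS`, the helper below drops traces older than the 28-day display window. The `traces` array and its epoch-seconds `timestamp` field are assumptions for this sketch, not part of `main/constants`.

```javascript
// 28-day window, mirroring TRACE_MAX_AGE_SECONDS in main/constants.
const TRACE_MAX_AGE_SECONDS = 28 * 24 * 60 * 60;

// Keep only traces whose age (in seconds) is within the display window.
// `timestamp` as epoch seconds is a hypothetical shape for illustration.
function filterRecentTraces(traces, nowSeconds) {
  return traces.filter((t) => nowSeconds - t.timestamp <= TRACE_MAX_AGE_SECONDS);
}
```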

Some files were not shown because too many files have changed in this diff.