Compare commits

...

8 Commits

Author SHA1 Message Date
l5y 73e161f432 web: fix liveness of api data hydration bug (#783)
* web: fix liveness of api data hydration bug

* web: address review comments
2026-05-03 13:05:37 +02:00
l5y 7b38f92b2d web: refactor 6/7 node page (#777)
* web: refactor 6/7 node page

* web: address node-page refactor review and close coverage gaps

Fix the concurrency cap in fetchNodeDetailsIntoIndex so it actually
limits in-flight requests.  The previous implementation built each
fetch as an immediately-invoked async function, so all N fetches
started the moment the loop ran; the slicing-then-Promise.all step
only changed when settlement was observed, not when work began.
Replace the IIFE-then-batch pattern with a worker pool: a fixed-size
set of worker promises iterates a shared queue and only pulls the next
identifier once the previous fetch settles.
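
The worker-pool shape described above can be sketched language-neutrally. Below is a minimal asyncio version (Python rather than the browser JS of the actual change, and every name here is hypothetical) whose point is the same: no fetch starts until one of the fixed-size set of workers is free, so concurrency is bounded by pool size rather than by when settlements are observed.

```python
import asyncio

async def fetch_all(ids, fetch, concurrency=4):
    """Fetch every id with at most `concurrency` requests in flight.

    Unlike starting all coroutines up front and batching the awaits,
    no fetch begins until a worker pulls its id from the shared queue.
    """
    queue = list(ids)  # shared queue; workers pull from the front
    results = {}

    async def worker():
        while queue:
            node_id = queue.pop(0)  # next identifier...
            # ...fetched only after this worker's previous fetch settled
            results[node_id] = await fetch(node_id)

    # fixed-size pool: `concurrency` workers, not len(ids) tasks
    await asyncio.gather(*(worker() for _ in range(concurrency)))
    return results

async def main():
    peak = active = 0

    async def fetch(i):
        nonlocal peak, active
        active += 1
        peak = max(peak, active)  # observe in-flight count
        await asyncio.sleep(0)    # yield so other workers can run
        active -= 1
        return i * 2

    return await fetch_all(range(10), fetch, concurrency=3), peak

results, peak = asyncio.run(main())
assert peak <= 3  # the cap genuinely limits in-flight work
```

The explicit concurrency-cap assertion mirrors the kind of test the commit message says was added for the worker-pool branches.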

Reduce cross-module coupling around the role-aware short-name badge
by extracting renderRoleAwareBadge into a new badge.js module that
single-node-table, messages, detail-html, and traces import directly,
so the neighbour module is no longer pulled in by four non-neighbour
callers.  Tighten applyDetails in role-index.js by hoisting the
ternary into a single key binding and dropping the redundant
instanceof Map guard.

Close the patch-coverage gap reported by Codecov: add tests for
parameter-validation paths in bootstrap (parseReferencePayload,
normalizeNodeReference, fetchNodeDetailHtml, initializeNodeDetailPage),
the worker-pool branches in role-index (no-fetch, empty queue, 404,
non-success responses, and an explicit concurrency-cap assertion),
the badge fallback path, the nested-neighbor seedNeighborRoleIndex
branches, the renderNeighborBadge metadata-merge and short-name
fallback paths, the empty-trace and empty-chart short-circuits, and
single-node-table validation.  All ten node-page submodules now
report 100% line coverage.
2026-05-02 23:05:36 +02:00
l5y 1041e06644 data: refactor 4/7 interfaces (#775)
* data: refactor 4/7 interfaces

* data: address PR #775 review feedback

Fix the two CI test regressions caused by the package split:
- ``factory._load_ble_interface`` no longer keeps a stale module-level
  ``BLEInterface`` cache that survived ``monkeypatch`` teardown across
  tests. The package-level attribute is now the single cache; the
  ``factory.py`` global was removed.  This unblocks
  ``test_load_ble_interface_sets_global``.
- ``interfaces/__init__.py`` re-resolves ``SerialInterface`` and
  ``TCPInterface`` from ``meshtastic.*`` at package-load time so that a
  test that pops ``data.mesh_ingestor.interfaces`` from ``sys.modules``
  and re-imports picks up the freshly registered classes rather than
  whatever a cached ``factory.py`` first resolved.  This unblocks
  ``test_interfaces_patch_handles_preimported_serial``.
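
The ``sys.modules`` gotcha behind the second fix can be reproduced with a toy provider/consumer pair (synthetic module names — this is the generic pattern, not the project's code): a package that resolves a class at load time only picks up a freshly registered class when the cached module is popped from ``sys.modules`` and re-executed.

```python
import sys
import types

# A fake "provider" module whose attribute the consumer caches at load time.
provider = types.ModuleType("provider")
provider.Interface = "v1"
sys.modules["provider"] = provider

# The consumer resolves provider.Interface once, when its body executes.
consumer_src = "import provider\nInterface = provider.Interface\n"
consumer = types.ModuleType("consumer")
exec(consumer_src, consumer.__dict__)
sys.modules["consumer"] = consumer
assert consumer.Interface == "v1"

# A test registers a replacement class, evicts the cached consumer,
# and re-imports; re-executing the body picks up the fresh value.
provider.Interface = "v2"
sys.modules.pop("consumer")
fresh = types.ModuleType("consumer")
exec(consumer_src, fresh.__dict__)
sys.modules["consumer"] = fresh
assert fresh.Interface == "v2"
```

Had the consumer kept a second, module-level cache that the re-import did not refresh, the stale ``"v1"`` would survive exactly as the stale ``BLEInterface`` did.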

Restore 100% patch coverage on the interfaces subpackage by:
- Adding tests for previously uncovered, testable paths:
  ``_extract_host_node_id(None)``, ``_ensure_channel_metadata``,
  ``_normalise_nodeinfo_packet`` (None input + dict-conversion fallback),
  ``_resolve_lora_message`` (radio_section paths), ``_modem_preset``
  (preset attr fallback + unparseable value), ``_camelcase_enum_name``
  separator-only input, ``_region_frequency`` no-digit enum name,
  ``_ensure_radio_metadata`` unresolvable-message path, plus the
  unknown-section recursive branch of ``_candidate_node_id``.
- Marking genuinely unreachable defensive branches with
  ``pragma: no cover`` (BLE receive loop body, upstream API regression
  guards, patch re-entry guard, unreachable ``NoAvailableMeshInterface``
  fallback).
2026-05-02 22:59:52 +02:00
l5y e04fab5b19 web: refactor 7/7 main js (#778)
* web: refactor 7/7 main js

* web: refactor 7/7 main js

* web: address review feedback on 7/7 main.js refactor

* Consolidate the duplicate ./main/format-utils.js import block in
  main.js so all symbols come from a single, alphabetised import
  statement (review item: "Important — Duplicate format-utils.js
  import block").
* Replace the leftover stale JSDoc atop +createOfflineTileLayer+ with
  one clear "do not inline" DI block, and likewise expand the
  +fetchMessages+ wrapper docstring so future readers see the shim's
  purpose without hunting for the implementation (review nit:
  "thin wrappers ... worth a one-line JSDoc").
* Add per-module unit tests under
  public/assets/js/app/main/__tests__/ covering every previously-
  uncovered branch in the 9 modules codecov flagged: tile-coords,
  sort-comparators, fullscreen-helpers, format-utils, data-fetchers,
  data-merge, tooltip-html, long-link-router, and offline-tile-layer.
  This drives the codecov patch percentage on PR #778 from 78.99%
  to ~100% on the new modules and unblocks the codecov/patch gate.

JS suite: 1,114 tests, 0 failures.
2026-05-02 22:35:34 +02:00
l5y d1d0225197 web: refactor 5/7 node page charts (#776)
* web: refactor 5/7 node page charts

* web: address review feedback on node-page-charts split

* Drop the local stringOrNull/numberOrNull copies from node-page.js
  and import them from ./value-helpers.js so the shared module's
  stated dedup actually happens (review issue #1).  The two locals
  were byte-identical to the new shared module.
* Split the display-only formatters out of
  node-page-charts/format-utils.js into a sibling
  node-page-charts/display-formatters.js so format-utils.js carries
  only chart concerns (review issue #2).  The barrel
  node-page-charts.js re-exports both files so existing callers and
  tests keep working unchanged.
* Inline +fmtCurrent+ in node-page-charts/specs.js and drop the
  sideways import from short-info-telemetry.js so node-page-charts/
  no longer depends on an unrelated module (review issue #3).
* Add a dedicated value-helpers.test.js pinning the contract of
  +numberOrNull+ and +stringOrNull+ so they stop relying on
  transitive coverage from the chart suite (review issue #5).
2026-05-02 22:16:18 +02:00
l5y 0fbff32535 web: refactor 2/7 federation (#773)
* web: refactor 2/7 federation

* web: close federation coverage gaps and apply review nits

Address Codecov patch coverage feedback by adding rspec examples for
the 51 lines flagged across the new federation shards (announce,
crawl, validation, http_client, self_instance, instance_metrics,
announcer_threads, lifecycle, signature). Per-shard line coverage in
the federation directory is now 100%.

Apply two review-comment changes: rename the awkwardly-named
http_client_get.rb to instance_fetcher.rb (matching its semantic
role rather than the HTTP verb), and declare PotatoMesh::App::Federation
explicitly in the federation.rb manifest so the namespace is owned by
this file rather than implicitly created by whichever shard happens to
load first.
2026-05-02 22:12:20 +02:00
l5y 03caf391e7 web: refactor 1/7 data processing (#772)
* web: refactor 1/7 data processing

* web: close coverage gaps in data_processing submodules

Bring every file under lib/potato_mesh/application/data_processing/ to
100% line coverage so codecov/patch passes on the 1/7 refactor PR. The
gap was a relocation of pre-existing untested branches; closing them
here keeps the subsequent refactor PRs in the series unblocked.

* Add unit tests covering canonical sender/recipient overrides,
  reply_id/emoji updates on existing rows, and the rare INSERT
  ConstraintException recovery path inside +insert_message+.
* Cover the non-canonical reporter and per-neighbour resolution
  branches in +insert_neighbors+.
* Cover the SQLException rescue in +upsert_ingestor+, the
  fallback_num branch in +touch_node_last_seen+, the limit fallback
  in +read_json_body+, the unrecognised-type branch in
  +store_decrypted_payload+, the +power+ telemetry_type fallback,
  the default-coercion path in +resolve_numeric_metric+, and the
  numeric/bare-hex paths in +canonical_node_parts+ and
  +coerce_trace_node_id+.

Drop dead code surfaced while pinning behaviour:

* +clear_encrypted+ in +insert_message+ has been initialised to
  +false+ and never reassigned since #633 dropped the
  decrypted-text override; remove it and the four dependent
  branches.
* The +rescue ArgumentError; nil+ tails in
  +identity.resolve_node_num+ and +traces.coerce_trace_node_id+ are
  unreachable because every +Integer(...)+ call inside is guarded by
  a regex pre-check.

Add a comment to the +data_processing.rb+ shim explaining that the
+require_relative+ list is ordered by dependency rather than
alphabetically, addressing review nit #5.
2026-05-02 22:08:21 +02:00
l5y f6aff3bdb8 data: refactor 3/7 protocols (#774)
* data: refactor 3/7 protocols

* data: address PR #774 review feedback

- Rewrite the parents[4] path comment in protocols/meshcore/debug_log.py
  to clearly explain why the index changed from parents[3] (the original
  pre-split index) without contradicting the code.
- Add tests covering the six lines flagged uncovered by codecov:
  * _process_self_info host-position branch (handlers.py:78)
  * on_contact_msg early-return for missing text/sender_ts (handlers.py:278)
  * close() RuntimeError swallow when loop closes mid-call (interface.py:155-156)
  * _run_meshcore wrapper around _ensure_channel_names failure (runner.py:131-132)

Restores 100% patch coverage on the meshcore package.
2026-05-02 22:05:43 +02:00
116 changed files with 17307 additions and 10415 deletions
+22
@@ -145,3 +145,25 @@ Heartbeat payload:
All collection GET endpoints (`/api/nodes`, `/api/messages`, `/api/positions`, `/api/telemetry`, `/api/traces`, `/api/neighbors`, `/api/ingestors`) accept an optional `?protocol=<value>` query parameter. When present, only records whose `protocol` column matches the given value are returned. The `protocol` field is included in all GET responses.
### GET endpoint time windows
Every read endpoint enforces a server-side rolling-window floor on the data it returns. The window is fixed per route and **cannot be widened by the caller** — explicit `?since=<unix_seconds>` is treated as `MAX(since, floor)`, so a `since` older than the floor is silently clamped to the floor. Pass a `since` newer than the floor when you want to be more restrictive (incremental refresh).
| Route | Floor (default) | Notes |
| --- | --- | --- |
| `GET /api/nodes` | 7 days | filtered by `nodes.last_heard` |
| `GET /api/messages` | 7 days | filtered by `messages.rx_time` |
| `GET /api/positions` | 7 days | filtered by `COALESCE(rx_time, position_time)` |
| `GET /api/telemetry` | 7 days | filtered by `COALESCE(rx_time, telemetry_time)` |
| `GET /api/instances` | 7 days | filtered by `instances.last_update_time` |
| `GET /api/neighbors` | **28 days** | sparse data; widened to keep slow scrapes visible |
| `GET /api/traces` | **28 days** | sparse data; same rationale |
| `GET /api/ingestors` | **28 days** | sparse heartbeats; same rationale |
| `GET /api/.../:id` (per-id lookup) | **28 days** | every per-id route uses the extended window so callers can backfill historical context for a specific node/conversation that has dropped out of the bulk view. The `since` clamp still applies. |
| `GET /api/telemetry/aggregated` | caller-controlled | `?windowSeconds=<N>` sets the bucket window; when omitted it defaults to 86 400 (1 day). Bounded by `MAX_QUERY_LIMIT` on bucket count, not by a hard floor. |
| `GET /api/stats` | n/a | reports counts at fixed `hour`/`day`/`week`/`month` activity buckets. |
Federation peers should not assume an unbounded historical window: a peer that requests `/api/messages?since=0` from a partner expecting "everything" will only ever receive the last seven days. To pull older state, request the per-id endpoint (28 days) for the relevant nodes.
The constants live in `web/lib/potato_mesh/config.rb` (`week_seconds`, `four_weeks_seconds`).
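
The clamp rule above can be sketched as follows (the function name and shape are illustrative, not the server's actual Ruby code; floor values come from the table):

```python
WEEK_SECONDS = 7 * 24 * 3600
FOUR_WEEKS_SECONDS = 4 * WEEK_SECONDS

def effective_since(requested_since, floor_seconds, now):
    """Clamp a caller-supplied ?since= to the route's rolling-window floor.

    The floor is the absolute timestamp `now - floor_seconds`; a `since`
    older than that is silently raised to the floor, while a newer one
    is honoured (callers can narrow the window, never widen it).
    """
    floor_ts = now - floor_seconds
    if requested_since is None:
        return floor_ts
    return max(requested_since, floor_ts)

now = 1_000_000_000
# since=0 ("give me everything") is clamped to the 7-day floor:
assert effective_since(0, WEEK_SECONDS, now) == now - WEEK_SECONDS
# a recent since narrows the window and passes through unchanged:
assert effective_since(now - 3600, WEEK_SECONDS, now) == now - 3600
```

This is why a federation peer requesting `/api/messages?since=0` only ever sees the last seven days.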
-980
@@ -1,980 +0,0 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mesh interface discovery helpers for interacting with Meshtastic hardware."""

from __future__ import annotations

import contextlib
import importlib
import ipaddress
import math
import re
import sys
import urllib.parse
from collections.abc import Mapping
from typing import TYPE_CHECKING, Any
try:  # pragma: no cover - dependency optional in tests
    import meshtastic  # type: ignore
except Exception:  # pragma: no cover - dependency optional in tests
    meshtastic = None  # type: ignore[assignment]

from . import channels, config, serialization
from .connection import (
    BLE_ADDRESS_RE,
    DEFAULT_TCP_PORT,
    DEFAULT_SERIAL_PATTERNS,
    default_serial_targets,
    parse_ble_target,
)
def _ensure_mapping(value) -> Mapping | None:
    """Return ``value`` as a mapping when conversion is possible."""
    if isinstance(value, Mapping):
        return value
    if hasattr(value, "__dict__") and isinstance(value.__dict__, Mapping):
        return value.__dict__
    with contextlib.suppress(Exception):
        converted = serialization._node_to_dict(value)
        if isinstance(converted, Mapping):
            return converted
    return None


def _is_nodeish_identifier(value: Any) -> bool:
    """Return ``True`` when ``value`` resembles a Meshtastic node identifier."""
    if isinstance(value, (int, float)):
        return False
    if not isinstance(value, str):
        return False
    trimmed = value.strip()
    if not trimmed:
        return False
    if trimmed.startswith("^"):
        return True
    if trimmed.startswith("!"):
        trimmed = trimmed[1:]
    elif trimmed.lower().startswith("0x"):
        trimmed = trimmed[2:]
    elif not re.search(r"[a-fA-F]", trimmed):
        # Bare decimal strings should not be treated as node ids when labelled "id".
        return False
    return bool(re.fullmatch(r"[0-9a-fA-F]{1,8}", trimmed))
def _candidate_node_id(mapping: Mapping | None) -> str | None:
    """Extract a canonical node identifier from ``mapping`` when present."""
    if mapping is None:
        return None
    node_keys = (
        "fromId",
        "from_id",
        "from",
        "nodeId",
        "node_id",
        "nodeNum",
        "node_num",
        "num",
        "userId",
        "user_id",
    )
    for key in node_keys:
        with contextlib.suppress(Exception):
            node_id = serialization._canonical_node_id(mapping.get(key))
            if node_id:
                return node_id
    with contextlib.suppress(Exception):
        value = mapping.get("id")
        if _is_nodeish_identifier(value):
            node_id = serialization._canonical_node_id(value)
            if node_id:
                return node_id
    user_section = _ensure_mapping(mapping.get("user"))
    if user_section is not None:
        for key in ("userId", "user_id", "num", "nodeNum", "node_num"):
            with contextlib.suppress(Exception):
                node_id = serialization._canonical_node_id(user_section.get(key))
                if node_id:
                    return node_id
        with contextlib.suppress(Exception):
            user_id_value = user_section.get("id")
            if _is_nodeish_identifier(user_id_value):
                node_id = serialization._canonical_node_id(user_id_value)
                if node_id:
                    return node_id
    decoded_section = _ensure_mapping(mapping.get("decoded"))
    if decoded_section is not None:
        node_id = _candidate_node_id(decoded_section)
        if node_id:
            return node_id
    payload_section = _ensure_mapping(mapping.get("payload"))
    if payload_section is not None:
        node_id = _candidate_node_id(payload_section)
        if node_id:
            return node_id
    for key in ("packet", "meta", "info"):
        node_id = _candidate_node_id(_ensure_mapping(mapping.get(key)))
        if node_id:
            return node_id
    for value in mapping.values():
        if isinstance(value, (list, tuple)):
            for item in value:
                node_id = _candidate_node_id(_ensure_mapping(item))
                if node_id:
                    return node_id
        else:
            node_id = _candidate_node_id(_ensure_mapping(value))
            if node_id:
                return node_id
    return None
def _extract_host_node_id(iface) -> str | None:
    """Return the canonical node identifier for the connected host device.

    Searches a sequence of well-known attribute names (``myInfo``,
    ``my_node_info``, etc.) on ``iface`` for a mapping that contains a
    recognisable node identifier, then falls back to the raw ``myNodeNum``
    integer attribute.

    Parameters:
        iface: Live Meshtastic interface object, or any object that exposes
            node-identity attributes in one of the expected forms.

    Returns:
        A canonical ``!xxxxxxxx`` node identifier, or ``None`` when no
        identifiable host node information is available.
    """
    if iface is None:
        return None

    def _as_mapping(candidate) -> Mapping | None:
        mapping = _ensure_mapping(candidate)
        if mapping is not None:
            return mapping
        if callable(candidate):
            with contextlib.suppress(Exception):
                return _ensure_mapping(candidate())
        return None

    candidates: list[Mapping] = []
    for attr in ("myInfo", "my_node_info", "myNodeInfo", "my_node", "localNode"):
        mapping = _as_mapping(getattr(iface, attr, None))
        if mapping is None:
            continue
        candidates.append(mapping)
        nested_info = _ensure_mapping(mapping.get("info"))
        if nested_info:
            candidates.append(nested_info)
    for mapping in candidates:
        node_id = _candidate_node_id(mapping)
        if node_id:
            return node_id
        for key in ("myNodeNum", "my_node_num", "myNodeId", "my_node_id"):
            node_id = serialization._canonical_node_id(mapping.get(key))
            if node_id:
                return node_id
    node_id = serialization._canonical_node_id(getattr(iface, "myNodeNum", None))
    if node_id:
        return node_id
    return None


def _normalise_nodeinfo_packet(packet) -> dict | None:
    """Return a dictionary view of ``packet`` with a guaranteed ``id`` when known."""
    mapping = _ensure_mapping(packet)
    if mapping is None:
        return None
    try:
        normalised: dict = dict(mapping)
    except Exception:
        try:
            normalised = {key: mapping[key] for key in mapping}
        except Exception:
            return None
    node_id = _candidate_node_id(normalised)
    if node_id and normalised.get("id") != node_id:
        normalised["id"] = node_id
    return normalised


if TYPE_CHECKING:  # pragma: no cover - import only used for type checking
    from meshtastic.ble_interface import BLEInterface as _BLEInterface

BLEInterface = None
def _patch_meshtastic_nodeinfo_handler() -> None:
    """Ensure Meshtastic nodeinfo packets always include an ``id`` field."""
    module = sys.modules.get("meshtastic", meshtastic)
    if module is None:
        with contextlib.suppress(Exception):
            module = importlib.import_module("meshtastic")
    if module is None:
        return
    globals()["meshtastic"] = module
    original = getattr(module, "_onNodeInfoReceive", None)
    if not callable(original):
        return
    mesh_interface_module = getattr(module, "mesh_interface", None)
    if mesh_interface_module is None:
        with contextlib.suppress(Exception):
            mesh_interface_module = importlib.import_module("meshtastic.mesh_interface")
    # Replace the module-level handler only once; the sentinel attribute prevents
    # re-wrapping if _patch_meshtastic_nodeinfo_handler() is called again after
    # the interface module is reloaded or re-imported.
    if not getattr(original, "_potato_mesh_safe_wrapper", False):
        module._onNodeInfoReceive = _build_safe_nodeinfo_callback(original)
    _patch_nodeinfo_handler_class(mesh_interface_module, module)


def _build_safe_nodeinfo_callback(original):
    """Return a wrapper that injects a missing ``id`` before dispatching."""

    def _safe_on_node_info_receive(iface, packet):  # type: ignore[override]
        normalised = _normalise_nodeinfo_packet(packet)
        if normalised is not None:
            packet = normalised
        try:
            return original(iface, packet)
        except KeyError as exc:  # pragma: no cover - defensive only
            if exc.args and exc.args[0] == "id":
                return None
            raise

    _safe_on_node_info_receive._potato_mesh_safe_wrapper = True  # type: ignore[attr-defined]
    return _safe_on_node_info_receive


def _update_nodeinfo_handler_aliases(original, replacement) -> None:
    """Ensure Meshtastic modules reference the patched ``NodeInfoHandler``."""
    for module_name, module in list(sys.modules.items()):
        if not module_name.startswith("meshtastic"):
            continue
        existing = getattr(module, "NodeInfoHandler", None)
        if existing is original:
            setattr(module, "NodeInfoHandler", replacement)
def _patch_nodeinfo_handler_class(
    mesh_interface_module, meshtastic_module=None
) -> None:
    """Wrap ``NodeInfoHandler.onReceive`` to normalise packets before callbacks."""
    if mesh_interface_module is None:
        return
    handler_class = getattr(mesh_interface_module, "NodeInfoHandler", None)
    if handler_class is None:
        return
    if getattr(handler_class, "_potato_mesh_safe_wrapper", False):
        return
    original_on_receive = getattr(handler_class, "onReceive", None)
    if not callable(original_on_receive):
        return

    class _SafeNodeInfoHandler(handler_class):  # type: ignore[misc]
        """Subclass that guards against missing node identifiers."""

        def onReceive(self, iface, packet):  # type: ignore[override]
            """Normalise ``packet`` before dispatching to the parent handler.

            Injects a canonical ``id`` field when one can be inferred from the
            packet's other fields, then delegates to the original
            ``NodeInfoHandler.onReceive``. A ``KeyError`` on ``"id"`` is
            suppressed because some firmware versions omit the field entirely.

            Parameters:
                iface: The Meshtastic interface that received the packet.
                packet: Raw nodeinfo packet dict, possibly lacking an ``id``
                    key.

            Returns:
                The return value of the parent handler, or ``None`` when a
                missing ``"id"`` key would otherwise raise.
            """
            normalised = _normalise_nodeinfo_packet(packet)
            if normalised is not None:
                packet = normalised
            try:
                return super().onReceive(iface, packet)
            except KeyError as exc:  # pragma: no cover - defensive only
                if exc.args and exc.args[0] == "id":
                    return None
                raise

    _SafeNodeInfoHandler.__name__ = handler_class.__name__
    _SafeNodeInfoHandler.__qualname__ = getattr(
        handler_class, "__qualname__", handler_class.__name__
    )
    _SafeNodeInfoHandler.__module__ = getattr(
        handler_class, "__module__", mesh_interface_module.__name__
    )
    _SafeNodeInfoHandler.__doc__ = getattr(
        handler_class, "__doc__", _SafeNodeInfoHandler.__doc__
    )
    _SafeNodeInfoHandler._potato_mesh_safe_wrapper = True  # type: ignore[attr-defined]
    setattr(mesh_interface_module, "NodeInfoHandler", _SafeNodeInfoHandler)
    if meshtastic_module is None:
        meshtastic_module = globals().get("meshtastic")
    if meshtastic_module is not None:
        existing_top = getattr(meshtastic_module, "NodeInfoHandler", None)
        if existing_top is handler_class:
            setattr(meshtastic_module, "NodeInfoHandler", _SafeNodeInfoHandler)
    _update_nodeinfo_handler_aliases(handler_class, _SafeNodeInfoHandler)


_patch_meshtastic_nodeinfo_handler()
try:  # pragma: no cover - optional dependency may be unavailable
    from meshtastic.serial_interface import SerialInterface  # type: ignore
except Exception:  # pragma: no cover - optional dependency may be unavailable
    SerialInterface = None  # type: ignore[assignment]

try:  # pragma: no cover - optional dependency may be unavailable
    from meshtastic.tcp_interface import TCPInterface  # type: ignore
except Exception:  # pragma: no cover - optional dependency may be unavailable
    TCPInterface = None  # type: ignore[assignment]


def _patch_meshtastic_ble_receive_loop() -> None:
    """Prevent ``UnboundLocalError`` crashes in Meshtastic's BLE reader."""
    try:
        from meshtastic import ble_interface as _ble_interface_module  # type: ignore
    except Exception:  # pragma: no cover - dependency optional in tests
        return
    ble_class = getattr(_ble_interface_module, "BLEInterface", None)
    if ble_class is None:
        return
    original = getattr(ble_class, "_receiveFromRadioImpl", None)
    if not callable(original):
        return
    if getattr(original, "_potato_mesh_safe_wrapper", False):
        return
    FROMRADIO_UUID = getattr(_ble_interface_module, "FROMRADIO_UUID", None)
    BleakDBusError = getattr(_ble_interface_module, "BleakDBusError", ())
    BleakError = getattr(_ble_interface_module, "BleakError", ())
    logger = getattr(_ble_interface_module, "logger", None)
    time = getattr(_ble_interface_module, "time", None)
    if not FROMRADIO_UUID or logger is None or time is None:
        return

    def _safe_receive_from_radio(self):  # type: ignore[override]
        while self._want_receive:
            if self.should_read:
                self.should_read = False
                retries: int = 0
                while self._want_receive:
                    if self.client is None:
                        logger.debug("BLE client is None, shutting down")
                        self._want_receive = False
                        continue
                    payload: bytes = b""
                    try:
                        payload = bytes(self.client.read_gatt_char(FROMRADIO_UUID))
                    except BleakDBusError as exc:
                        logger.debug("Device disconnected, shutting down %s", exc)
                        self._want_receive = False
                        payload = b""
                    except BleakError as exc:
                        if "Not connected" in str(exc):
                            logger.debug("Device disconnected, shutting down %s", exc)
                            self._want_receive = False
                            payload = b""
                        else:
                            raise ble_class.BLEError("Error reading BLE") from exc
                    if not payload:
                        if not self._want_receive:
                            break
                        if retries < 5:
                            time.sleep(0.1)
                            retries += 1
                            continue
                        break
                    logger.debug("FROMRADIO read: %s", payload.hex())
                    self._handleFromRadio(payload)
            else:
                time.sleep(0.01)

    _safe_receive_from_radio._potato_mesh_safe_wrapper = True  # type: ignore[attr-defined]
    ble_class._receiveFromRadioImpl = _safe_receive_from_radio


_patch_meshtastic_ble_receive_loop()
def _has_field(message: Any, field_name: str) -> bool:
    """Return ``True`` when ``message`` advertises ``field_name`` via ``HasField``."""
    if message is None:
        return False
    has_field = getattr(message, "HasField", None)
    if callable(has_field):
        try:
            return bool(has_field(field_name))
        except Exception:  # pragma: no cover - defensive guard
            return False
    return hasattr(message, field_name)


def _enum_name_from_field(message: Any, field_name: str, value: Any) -> str | None:
    """Return the enum name for ``value`` using ``message`` descriptors."""
    descriptor = getattr(message, "DESCRIPTOR", None)
    if descriptor is None:
        return None
    fields_by_name = getattr(descriptor, "fields_by_name", {})
    field_desc = fields_by_name.get(field_name)
    if field_desc is None:
        return None
    enum_type = getattr(field_desc, "enum_type", None)
    if enum_type is None:
        return None
    enum_values = getattr(enum_type, "values_by_number", {})
    enum_value = enum_values.get(value)
    if enum_value is None:
        return None
    return getattr(enum_value, "name", None)


def _resolve_lora_message(local_config: Any) -> Any | None:
    """Return the LoRa configuration sub-message from ``local_config``."""
    if local_config is None:
        return None
    if _has_field(local_config, "lora"):
        candidate = getattr(local_config, "lora", None)
        if candidate is not None:
            return candidate
    radio_section = getattr(local_config, "radio", None)
    if radio_section is not None:
        if _has_field(radio_section, "lora"):
            return getattr(radio_section, "lora", None)
        if hasattr(radio_section, "lora"):
            return getattr(radio_section, "lora")
    if hasattr(local_config, "lora"):
        return getattr(local_config, "lora")
    return None
# Maps Meshtastic region enum name to (base_freq_MHz, channel_spacing_MHz).
# Values are derived from the Meshtastic firmware RegionInfo tables.
# Used by _computed_channel_frequency to derive the actual radio frequency
# from the region and channel index.
_REGION_CHANNEL_PARAMS: dict[str, tuple[float, float]] = {
    "US": (902.0, 0.25),  # 902–928 MHz; e.g. ch 52 ≈ 915 MHz at 250 kHz spacing
    "EU_433": (433.175, 0.2),
    "EU_868": (869.525, 0.5),  # actual primary ≈ 869.525 MHz, not 868
    "CN": (470.0, 0.2),
    "JP": (920.875, 0.5),
    "ANZ": (916.0, 0.5),
    "KR": (921.9, 0.5),
    "TW": (923.0, 0.5),
    "RU": (868.9, 0.5),
    "IN": (865.0, 0.5),
    "NZ_865": (864.0, 0.5),
    "TH": (920.0, 0.5),
    "LORA_24": (2400.0, 0.5),
    "UA_433": (433.175, 0.2),
    "UA_868": (868.0, 0.5),
    "MY_433": (433.0, 0.2),
    "MY_919": (919.0, 0.5),
    "SG_923": (923.0, 0.5),
    "PH_433": (433.0, 0.2),
    "PH_868": (868.0, 0.5),
    "PH_915": (915.0, 0.5),
    "ANZ_433": (433.0, 0.2),
    "KZ_433": (433.0, 0.2),
    "KZ_863": (863.125, 0.5),
    "NP_865": (865.0, 0.5),
    "BR_902": (902.0, 0.25),
    # IL (Israel) is absent from meshtastic Python lib 2.7.8 protobufs; the
    # enum value is unresolvable at runtime. Operators on IL firmware should
    # set the FREQUENCY environment variable to override.
}


def _computed_channel_frequency(
    enum_name: str | None,
    channel_num: int | None,
) -> int | None:
    """Compute the floor MHz frequency for a known region and channel index.

    Looks up *enum_name* in :data:`_REGION_CHANNEL_PARAMS` and returns
    ``floor(base_freq + channel_num * spacing)``. Returns ``None`` when the
    region is not in the table. A missing or negative *channel_num* is
    treated as 0 so the base frequency is always usable.

    Args:
        enum_name: Region enum name as returned by
            :func:`_enum_name_from_field`, e.g. ``"EU_868"`` or ``"US"``.
        channel_num: Zero-based channel index from the device LoRa config.

    Returns:
        Floored MHz as :class:`int`, or ``None`` if the region is unknown.
    """
    if enum_name is None:
        return None
    params = _REGION_CHANNEL_PARAMS.get(enum_name)
    if params is None:
        return None
    base, spacing = params
    idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
    return math.floor(base + idx * spacing)
def _region_frequency(lora_message: Any) -> int | float | str | None:
    """Derive the LoRa region frequency in MHz or the region label from ``lora_message``.

    Frequency sources are tried in priority order:

    1. ``override_frequency > 0`` — explicit radio override, floored to MHz.
    2. :data:`_REGION_CHANNEL_PARAMS` lookup + ``channel_num`` — actual
       band-plan frequency derived from the device's region and channel index,
       floored to MHz.
    3. Largest digit token ≥ 100 parsed from the region enum name string.
    4. Largest digit token < 100 from the enum name (reversed scan).
    5. Full enum name string, raw integer ≥ 100, or raw string as a label.

    Args:
        lora_message: A LoRa config protobuf message or compatible object.

    Returns:
        An integer MHz frequency, a fallback string label, or ``None``.
    """
    if lora_message is None:
        return None
    # Step 1 — explicit radio override
    override_frequency = getattr(lora_message, "override_frequency", None)
    if override_frequency is not None:
        if isinstance(override_frequency, (int, float)):
            if override_frequency > 0:
                return math.floor(override_frequency)
        elif override_frequency:
            return override_frequency
    region_value = getattr(lora_message, "region", None)
    if region_value is None:
        return None
    enum_name = _enum_name_from_field(lora_message, "region", region_value)
    # Step 2 — lookup table + channel offset (actual band-plan frequency)
    if enum_name:
        channel_num = getattr(lora_message, "channel_num", None)
        computed = _computed_channel_frequency(enum_name, channel_num)
        if computed is not None:
            return computed
    # Steps 3–5 — parse digits from enum name (fallback for unknown regions)
    if enum_name:
        digits = re.findall(r"\d+", enum_name)
        for token in digits:
            try:
                freq = int(token)
            except ValueError:  # pragma: no cover - regex guarantees digits
                continue
            if freq >= 100:
                return freq
        for token in reversed(digits):
            try:
                return int(token)
            except ValueError:  # pragma: no cover - defensive only
                continue
        return enum_name
    if isinstance(region_value, int) and region_value >= 100:
        return region_value
    if isinstance(region_value, str) and region_value:
        return region_value
    return None


def _camelcase_enum_name(name: str | None) -> str | None:
    """Convert ``name`` from ``SCREAMING_SNAKE`` to ``CamelCase``."""
    if not name:
        return None
    parts = re.split(r"[^0-9A-Za-z]+", name.strip())
    camel_parts = [part.capitalize() for part in parts if part]
    if not camel_parts:
        return None
    return "".join(camel_parts)
def _modem_preset(lora_message: Any) -> str | None:
    """Return the CamelCase modem preset configured on ``lora_message``."""
    if lora_message is None:
        return None
    descriptor = getattr(lora_message, "DESCRIPTOR", None)
    fields_by_name = getattr(descriptor, "fields_by_name", {}) if descriptor else {}
    if "modem_preset" in fields_by_name:
        preset_field = "modem_preset"
    elif "preset" in fields_by_name:
        preset_field = "preset"
    elif hasattr(lora_message, "modem_preset"):
        preset_field = "modem_preset"
    elif hasattr(lora_message, "preset"):
        preset_field = "preset"
    else:
        return None
    preset_value = getattr(lora_message, preset_field, None)
    if preset_value is None:
        return None
    enum_name = _enum_name_from_field(lora_message, preset_field, preset_value)
    if isinstance(enum_name, str) and enum_name:
        return _camelcase_enum_name(enum_name)
    if isinstance(preset_value, str) and preset_value:
        return _camelcase_enum_name(preset_value)
    return None


def _ensure_radio_metadata(iface: Any) -> None:
    """Populate cached LoRa metadata by inspecting ``iface`` when available."""
    if iface is None:
        return
    try:
        wait_for_config = getattr(iface, "waitForConfig", None)
        if callable(wait_for_config):
            wait_for_config()
    except Exception:  # pragma: no cover - hardware dependent guard
        pass
    local_node = getattr(iface, "localNode", None)
    local_config = getattr(local_node, "localConfig", None) if local_node else None
    lora_message = _resolve_lora_message(local_config)
    if lora_message is None:
        return
    frequency = _region_frequency(lora_message)
    preset = _modem_preset(lora_message)
    updated = False
    if frequency is not None and getattr(config, "LORA_FREQ", None) is None:
        config.LORA_FREQ = frequency
        updated = True
    if preset is not None and getattr(config, "MODEM_PRESET", None) is None:
        config.MODEM_PRESET = preset
        updated = True
    if updated:
        config._debug_log(
            "Captured LoRa radio metadata",
            context="interfaces.ensure_radio_metadata",
            severity="info",
            always=True,
            lora_freq=frequency,
            modem_preset=preset,
        )


def _ensure_channel_metadata(iface: Any) -> None:
    """Capture channel metadata by inspecting ``iface`` once per runtime."""
    if iface is None:
        return
    try:
        channels.capture_from_interface(iface)
    except Exception as exc:  # pragma: no cover - defensive instrumentation
        config._debug_log(
            "Failed to capture channel metadata",
            context="interfaces.ensure_channel_metadata",
            severity="warn",
            error_class=exc.__class__.__name__,
            error_message=str(exc),
        )


_DEFAULT_TCP_TARGET = "http://127.0.0.1"

# Private aliases so that existing internal callers and monkeypatching in
# tests keep working without modification.
_DEFAULT_TCP_PORT = DEFAULT_TCP_PORT  # backward-compat alias
_DEFAULT_SERIAL_PATTERNS = DEFAULT_SERIAL_PATTERNS  # backward-compat alias
_BLE_ADDRESS_RE = BLE_ADDRESS_RE  # backward-compat alias


class _DummySerialInterface:
    """In-memory replacement for ``meshtastic.serial_interface.SerialInterface``."""
def __init__(self) -> None:
self.nodes: dict = {}
def close(self) -> None: # pragma: no cover - nothing to close
"""No-op: the dummy interface holds no resources to release."""
pass
_parse_ble_target = parse_ble_target # backward-compat alias
def _parse_network_target(value: str) -> tuple[str, int] | None:
"""Return ``(host, port)`` when ``value`` is a numeric IP address string.
Only literal IPv4 or IPv6 addresses are accepted, optionally paired with a
port or scheme. Callers that start from hostnames should resolve them to an
address before invoking this helper.
Parameters:
value: Numeric IP literal or URL describing the TCP interface.
Returns:
A ``(host, port)`` tuple or ``None`` when parsing fails.
"""
if not value:
return None
value = value.strip()
if not value:
return None
def _validated_result(host: str | None, port: int | None) -> tuple[str, int] | None:
if not host:
return None
try:
ipaddress.ip_address(host)
except ValueError:
return None
return host, port or _DEFAULT_TCP_PORT
parsed_values = []
if "://" in value:
parsed_values.append(urllib.parse.urlparse(value, scheme="tcp"))
parsed_values.append(urllib.parse.urlparse(f"//{value}", scheme="tcp"))
for parsed in parsed_values:
try:
port = parsed.port
except ValueError:
port = None
result = _validated_result(parsed.hostname, port)
if result:
return result
# For bare "host:port" strings that urlparse may misparse, try a manual
# partition. The `startswith("[")` guard excludes IPv6 bracket notation
# (e.g. "[::1]:8080") because those already succeed via urlparse above.
if value.count(":") == 1 and not value.startswith("["):
host, _, port_text = value.partition(":")
try:
port = int(port_text) if port_text else None
except ValueError:
port = None
result = _validated_result(host, port)
if result:
return result
return _validated_result(value, None)
def _load_ble_interface():
"""Return :class:`meshtastic.ble_interface.BLEInterface` when available.
Returns:
The resolved BLE interface class.
Raises:
RuntimeError: If the BLE dependencies are not installed.
"""
global BLEInterface
if BLEInterface is not None:
return BLEInterface
try:
from meshtastic.ble_interface import BLEInterface as _resolved_interface
except ImportError as exc: # pragma: no cover - exercised in non-BLE envs
raise RuntimeError(
"BLE interface requested but the Meshtastic BLE dependencies are not installed. "
"Install the 'meshtastic[ble]' extra to enable BLE support."
) from exc
BLEInterface = _resolved_interface
try:
import sys
for module_name in ("data.mesh_ingestor", "data.mesh"):
mesh_module = sys.modules.get(module_name)
if mesh_module is not None:
setattr(mesh_module, "BLEInterface", BLEInterface)
except Exception: # pragma: no cover - defensive only
pass
return _resolved_interface
def _create_serial_interface(port: str) -> tuple[object, str]:
"""Return an appropriate mesh interface for ``port``.
Parameters:
port: User-supplied port string which may represent serial, BLE or TCP.
Returns:
``(interface, resolved_target)`` describing the created interface.
"""
port_value = (port or "").strip()
if port_value.lower() in {"", "mock", "none", "null", "disabled"}:
config._debug_log(
"Using dummy serial interface",
context="interfaces.serial",
port=port_value,
)
return _DummySerialInterface(), "mock"
ble_target = _parse_ble_target(port_value)
if ble_target:
# Determine if it's a MAC address or UUID
address_type = "MAC" if ":" in ble_target else "UUID"
config._debug_log(
"Using BLE interface",
context="interfaces.ble",
address=ble_target,
address_type=address_type,
)
return _load_ble_interface()(address=ble_target), ble_target
network_target = _parse_network_target(port_value)
if network_target:
host, tcp_port = network_target
config._debug_log(
"Using TCP interface",
context="interfaces.tcp",
host=host,
port=tcp_port,
)
return (
TCPInterface(hostname=host, portNumber=tcp_port),
f"tcp://{host}:{tcp_port}",
)
config._debug_log(
"Using serial interface",
context="interfaces.serial",
port=port_value,
)
return SerialInterface(devPath=port_value), port_value
class NoAvailableMeshInterface(RuntimeError):
"""Raised when no default mesh interface can be created."""
_default_serial_targets = default_serial_targets # backward-compat alias
def _create_default_interface() -> tuple[object, str]:
"""Attempt to create the default mesh interface, raising on failure.
Returns:
``(interface, resolved_target)`` for the discovered connection.
Raises:
NoAvailableMeshInterface: When no usable connection can be created.
"""
errors: list[tuple[str, Exception]] = []
for candidate in _default_serial_targets():
try:
return _create_serial_interface(candidate)
except Exception as exc: # pragma: no cover - hardware dependent
errors.append((candidate, exc))
config._debug_log(
"Failed to open serial candidate",
context="interfaces.auto_discovery",
target=candidate,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
try:
return _create_serial_interface(_DEFAULT_TCP_TARGET)
except Exception as exc: # pragma: no cover - network dependent
errors.append((_DEFAULT_TCP_TARGET, exc))
config._debug_log(
"Failed to open TCP fallback",
context="interfaces.auto_discovery",
target=_DEFAULT_TCP_TARGET,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if errors:
summary = "; ".join(f"{target}: {error}" for target, error in errors)
raise NoAvailableMeshInterface(
f"no mesh interface available ({summary})"
) from errors[-1][1]
raise NoAvailableMeshInterface("no mesh interface available")
__all__ = [
"BLEInterface",
"NoAvailableMeshInterface",
"_ensure_channel_metadata",
"_ensure_radio_metadata",
"_extract_host_node_id",
"_DummySerialInterface",
"_DEFAULT_TCP_PORT",
"_DEFAULT_TCP_TARGET",
"_create_default_interface",
"_create_serial_interface",
"_default_serial_targets",
"_load_ble_interface",
"_parse_ble_target",
"_parse_network_target",
"SerialInterface",
"TCPInterface",
]
@@ -0,0 +1,108 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mesh interface discovery helpers for interacting with Meshtastic hardware."""
from __future__ import annotations
# The patches subpackage applies meshtastic monkey-patches at import time so
# subsequent calls (and any direct ``import meshtastic`` from elsewhere)
# inherit the safe wrappers. Apply BEFORE pulling in factory.py because
# factory.py imports ``meshtastic.serial_interface`` / ``meshtastic.tcp_interface``
# and those modules transitively load NodeInfoHandler.
from .patches import (
_build_safe_nodeinfo_callback,
_patch_meshtastic_ble_receive_loop,
_patch_meshtastic_nodeinfo_handler,
_patch_nodeinfo_handler_class,
_update_nodeinfo_handler_aliases,
apply_all as _apply_all_patches,
)
_apply_all_patches()
from ._aliases import ( # noqa: E402 - keep grouped with sibling re-exports.
_BLE_ADDRESS_RE,
_DEFAULT_SERIAL_PATTERNS,
_DEFAULT_TCP_PORT,
_default_serial_targets,
_parse_ble_target,
)
from .channels_meta import _ensure_channel_metadata # noqa: E402
from .factory import ( # noqa: E402
NoAvailableMeshInterface,
_DummySerialInterface,
_create_default_interface,
_create_serial_interface,
_load_ble_interface,
)
# Resolve the meshtastic interface classes at package-load time so that
# repeated imports (e.g. tests that pop ``data.mesh_ingestor.interfaces`` from
# ``sys.modules`` and re-import after swapping ``meshtastic.*`` submodules)
# pick up the freshly registered classes rather than whatever a cached
# ``factory.py`` first resolved. ``factory.py`` no longer keeps duplicate
# module-level globals; lookups go through the package surface only.
BLEInterface = None
"""Resolved on demand by :func:`_load_ble_interface` to keep BLE optional."""
try: # pragma: no cover - optional dependency may be unavailable
from meshtastic.serial_interface import (
SerialInterface,
) # noqa: E402 # type: ignore
except Exception: # pragma: no cover - optional dependency may be unavailable
SerialInterface = None # type: ignore[assignment]
try: # pragma: no cover - optional dependency may be unavailable
from meshtastic.tcp_interface import TCPInterface # noqa: E402 # type: ignore
except Exception: # pragma: no cover - optional dependency may be unavailable
TCPInterface = None # type: ignore[assignment]
from .identity import ( # noqa: E402
_candidate_node_id,
_ensure_mapping,
_extract_host_node_id,
_is_nodeish_identifier,
)
from .nodeinfo_normalize import _normalise_nodeinfo_packet # noqa: E402
from .radio import ( # noqa: E402
_REGION_CHANNEL_PARAMS,
_camelcase_enum_name,
_computed_channel_frequency,
_ensure_radio_metadata,
_enum_name_from_field,
_has_field,
_modem_preset,
_region_frequency,
_resolve_lora_message,
)
from .targets import _DEFAULT_TCP_TARGET, _parse_network_target # noqa: E402
__all__ = [
"BLEInterface",
"NoAvailableMeshInterface",
"_ensure_channel_metadata",
"_ensure_radio_metadata",
"_extract_host_node_id",
"_DummySerialInterface",
"_DEFAULT_TCP_PORT",
"_DEFAULT_TCP_TARGET",
"_create_default_interface",
"_create_serial_interface",
"_default_serial_targets",
"_load_ble_interface",
"_parse_ble_target",
"_parse_network_target",
"SerialInterface",
"TCPInterface",
]
@@ -0,0 +1,33 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Backward-compat aliases for renames hidden behind the package barrel."""
from __future__ import annotations
from ..connection import (
BLE_ADDRESS_RE,
DEFAULT_SERIAL_PATTERNS,
DEFAULT_TCP_PORT,
default_serial_targets,
parse_ble_target,
)
# Private aliases so that existing internal callers and monkeypatching in
# tests keep working without modification.
_BLE_ADDRESS_RE = BLE_ADDRESS_RE
_DEFAULT_TCP_PORT = DEFAULT_TCP_PORT
_DEFAULT_SERIAL_PATTERNS = DEFAULT_SERIAL_PATTERNS
_parse_ble_target = parse_ble_target
_default_serial_targets = default_serial_targets
@@ -0,0 +1,39 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""One-shot channel metadata capture from a live Meshtastic interface."""
from __future__ import annotations
from typing import Any
from .. import channels, config
def _ensure_channel_metadata(iface: Any) -> None:
"""Capture channel metadata by inspecting ``iface`` once per runtime."""
if iface is None:
return
try:
channels.capture_from_interface(iface)
except Exception as exc: # pragma: no cover - defensive instrumentation
config._debug_log(
"Failed to capture channel metadata",
context="interfaces.ensure_channel_metadata",
severity="warn",
error_class=exc.__class__.__name__,
error_message=str(exc),
)
@@ -0,0 +1,191 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Build Meshtastic interface objects from caller-supplied target strings."""
from __future__ import annotations
import sys
from typing import TYPE_CHECKING
from .. import config
from ..connection import parse_ble_target
from .targets import _DEFAULT_TCP_TARGET, _parse_network_target
if TYPE_CHECKING: # pragma: no cover - import only used for type checking
from meshtastic.ble_interface import BLEInterface as _BLEInterface
# All cached interface classes live on the parent package
# (``data.mesh_ingestor.interfaces``). Tests set them via
# ``monkeypatch.setattr(mesh, "BLEInterface", ...)`` and the package proxy
# routes those writes through to ``interfaces``; keeping a duplicate global on
# this submodule would cache the wrong value across tests because
# ``monkeypatch`` only restores attributes it set. The ``__init__.py``
# re-resolves ``SerialInterface``/``TCPInterface`` from ``meshtastic.*`` at
# package-load time and assigns them to package-level attributes.
class _DummySerialInterface:
"""In-memory replacement for ``meshtastic.serial_interface.SerialInterface``."""
def __init__(self) -> None:
self.nodes: dict = {}
def close(self) -> None: # pragma: no cover - nothing to close
"""No-op: the dummy interface holds no resources to release."""
pass
class NoAvailableMeshInterface(RuntimeError):
"""Raised when no default mesh interface can be created."""
def _load_ble_interface():
"""Return :class:`meshtastic.ble_interface.BLEInterface` when available.
Returns:
The resolved BLE interface class.
Raises:
RuntimeError: If the BLE dependencies are not installed.
"""
pkg = sys.modules.get("data.mesh_ingestor.interfaces")
pkg_ble = getattr(pkg, "BLEInterface", None) if pkg is not None else None
if pkg_ble is not None:
return pkg_ble
try:
from meshtastic.ble_interface import BLEInterface as _resolved_interface
except ImportError as exc: # pragma: no cover - exercised in non-BLE envs
raise RuntimeError(
"BLE interface requested but the Meshtastic BLE dependencies are not installed. "
"Install the 'meshtastic[ble]' extra to enable BLE support."
) from exc
if pkg is not None:
setattr(pkg, "BLEInterface", _resolved_interface)
for module_name in ("data.mesh_ingestor", "data.mesh"):
mesh_module = sys.modules.get(module_name)
if mesh_module is not None:
setattr(mesh_module, "BLEInterface", _resolved_interface)
return _resolved_interface
def _create_serial_interface(port: str) -> tuple[object, str]:
"""Return an appropriate mesh interface for ``port``.
Parameters:
port: User-supplied port string which may represent serial, BLE or TCP.
Returns:
``(interface, resolved_target)`` describing the created interface.
"""
pkg = sys.modules["data.mesh_ingestor.interfaces"]
port_value = (port or "").strip()
if port_value.lower() in {"", "mock", "none", "null", "disabled"}:
config._debug_log(
"Using dummy serial interface",
context="interfaces.serial",
port=port_value,
)
return _DummySerialInterface(), "mock"
ble_target = parse_ble_target(port_value)
if ble_target:
# Determine if it's a MAC address or UUID
address_type = "MAC" if ":" in ble_target else "UUID"
config._debug_log(
"Using BLE interface",
context="interfaces.ble",
address=ble_target,
address_type=address_type,
)
return _load_ble_interface()(address=ble_target), ble_target
network_target = _parse_network_target(port_value)
if network_target:
host, tcp_port = network_target
config._debug_log(
"Using TCP interface",
context="interfaces.tcp",
host=host,
port=tcp_port,
)
# Resolve via the package so test fakes installed via ``sys.modules``
# patches at ``meshtastic.tcp_interface`` propagate when interfaces
# was imported earlier.
tcp_cls = getattr(pkg, "TCPInterface", None)
return (
tcp_cls(hostname=host, portNumber=tcp_port),
f"tcp://{host}:{tcp_port}",
)
config._debug_log(
"Using serial interface",
context="interfaces.serial",
port=port_value,
)
serial_cls = getattr(pkg, "SerialInterface", None)
return serial_cls(devPath=port_value), port_value
def _create_default_interface() -> tuple[object, str]:
"""Attempt to create the default mesh interface, raising on failure.
Returns:
``(interface, resolved_target)`` for the discovered connection.
Raises:
NoAvailableMeshInterface: When no usable connection can be created.
"""
# Resolve via the package surface so that monkeypatches against the
# backward-compat aliases (``mesh._default_serial_targets``,
# ``mesh._create_serial_interface``) propagate at call time.
pkg = sys.modules["data.mesh_ingestor.interfaces"]
default_serial_targets = pkg._default_serial_targets
create_serial = pkg._create_serial_interface
errors: list[tuple[str, Exception]] = []
for candidate in default_serial_targets():
try:
return create_serial(candidate)
except Exception as exc: # pragma: no cover - hardware dependent
errors.append((candidate, exc))
config._debug_log(
"Failed to open serial candidate",
context="interfaces.auto_discovery",
target=candidate,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
try:
return create_serial(_DEFAULT_TCP_TARGET)
except Exception as exc: # pragma: no cover - network dependent
errors.append((_DEFAULT_TCP_TARGET, exc))
config._debug_log(
"Failed to open TCP fallback",
context="interfaces.auto_discovery",
target=_DEFAULT_TCP_TARGET,
error_class=exc.__class__.__name__,
error_message=str(exc),
)
if errors:
summary = "; ".join(f"{target}: {error}" for target, error in errors)
raise NoAvailableMeshInterface(
f"no mesh interface available ({summary})"
) from errors[-1][1]
raise NoAvailableMeshInterface( # pragma: no cover - defensive only
"no mesh interface available"
)
@@ -0,0 +1,194 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mapping/identifier helpers for Meshtastic interface objects."""
from __future__ import annotations
import contextlib
import re
from collections.abc import Mapping
from typing import Any
from .. import serialization
def _ensure_mapping(value) -> Mapping | None:
"""Return ``value`` as a mapping when conversion is possible."""
if isinstance(value, Mapping):
return value
if hasattr(value, "__dict__") and isinstance(value.__dict__, Mapping):
return value.__dict__
with contextlib.suppress(Exception):
converted = serialization._node_to_dict(value)
if isinstance(converted, Mapping):
return converted
return None
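The coercion above can be demonstrated without the `serialization._node_to_dict` fallback; this sketch covers the two cheap paths (already a mapping, or an object exposing a dict-like `__dict__`):

```python
from collections.abc import Mapping

def ensure_mapping(value):
    """Sketch of _ensure_mapping minus the serialization fallback."""
    if isinstance(value, Mapping):
        return value
    if hasattr(value, "__dict__") and isinstance(value.__dict__, Mapping):
        return value.__dict__
    return None

class NodeLike:
    """Hypothetical stand-in for a Meshtastic node object."""
    def __init__(self):
        self.num = 7
```

Plain ints fail both checks and come back as `None`, while any attribute-bearing object is viewed through its `__dict__`.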
def _is_nodeish_identifier(value: Any) -> bool:
"""Return ``True`` when ``value`` resembles a Meshtastic node identifier."""
if isinstance(value, (int, float)):
return False
if not isinstance(value, str):
return False
trimmed = value.strip()
if not trimmed:
return False
if trimmed.startswith("^"):
return True
if trimmed.startswith("!"):
trimmed = trimmed[1:]
elif trimmed.lower().startswith("0x"):
trimmed = trimmed[2:]
elif not re.search(r"[a-fA-F]", trimmed):
# Bare decimal strings should not be treated as node ids when labelled "id".
return False
return bool(re.fullmatch(r"[0-9a-fA-F]{1,8}", trimmed))
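The classifier above accepts `^`-prefixed broadcast ids, `!`- or `0x`-prefixed hex, and bare hex that contains at least one letter, while rejecting bare decimals. A trimmed copy for illustration (hypothetical name `is_nodeish_identifier`):

```python
import re

def is_nodeish_identifier(value) -> bool:
    """Return True when value resembles a Meshtastic node identifier."""
    if not isinstance(value, str):
        return False
    trimmed = value.strip()
    if not trimmed:
        return False
    if trimmed.startswith("^"):
        return True  # broadcast-style ids such as "^all"
    if trimmed.startswith("!"):
        trimmed = trimmed[1:]
    elif trimmed.lower().startswith("0x"):
        trimmed = trimmed[2:]
    elif not re.search(r"[a-fA-F]", trimmed):
        return False  # bare decimal strings are not node ids
    return bool(re.fullmatch(r"[0-9a-fA-F]{1,8}", trimmed))
```

So `"!a1b2c3d4"` and `"0xdead"` pass, but `"12345"` is rejected as an ambiguous decimal.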
def _candidate_node_id(mapping: Mapping | None) -> str | None:
"""Extract a canonical node identifier from ``mapping`` when present."""
if mapping is None:
return None
node_keys = (
"fromId",
"from_id",
"from",
"nodeId",
"node_id",
"nodeNum",
"node_num",
"num",
"userId",
"user_id",
)
for key in node_keys:
with contextlib.suppress(Exception):
node_id = serialization._canonical_node_id(mapping.get(key))
if node_id:
return node_id
with contextlib.suppress(Exception):
value = mapping.get("id")
if _is_nodeish_identifier(value):
node_id = serialization._canonical_node_id(value)
if node_id:
return node_id
user_section = _ensure_mapping(mapping.get("user"))
if user_section is not None:
for key in ("userId", "user_id", "num", "nodeNum", "node_num"):
with contextlib.suppress(Exception):
node_id = serialization._canonical_node_id(user_section.get(key))
if node_id:
return node_id
with contextlib.suppress(Exception):
user_id_value = user_section.get("id")
if _is_nodeish_identifier(user_id_value):
node_id = serialization._canonical_node_id(user_id_value)
if node_id:
return node_id
decoded_section = _ensure_mapping(mapping.get("decoded"))
if decoded_section is not None:
node_id = _candidate_node_id(decoded_section)
if node_id:
return node_id
payload_section = _ensure_mapping(mapping.get("payload"))
if payload_section is not None:
node_id = _candidate_node_id(payload_section)
if node_id:
return node_id
for key in ("packet", "meta", "info"):
node_id = _candidate_node_id(_ensure_mapping(mapping.get(key)))
if node_id:
return node_id
for value in mapping.values():
if isinstance(value, (list, tuple)):
for item in value:
node_id = _candidate_node_id(_ensure_mapping(item))
if node_id:
return node_id
else:
node_id = _candidate_node_id(_ensure_mapping(value))
if node_id:
return node_id
return None
def _extract_host_node_id(iface) -> str | None:
"""Return the canonical node identifier for the connected host device.
Searches a sequence of well-known attribute names (``myInfo``,
``my_node_info``, etc.) on ``iface`` for a mapping that contains a
recognisable node identifier, then falls back to the raw ``myNodeNum``
integer attribute.
Parameters:
iface: Live Meshtastic interface object, or any object that exposes
node-identity attributes in one of the expected forms.
Returns:
A canonical ``!xxxxxxxx`` node identifier, or ``None`` when no
identifiable host node information is available.
"""
if iface is None:
return None
def _as_mapping(candidate) -> Mapping | None:
mapping = _ensure_mapping(candidate)
if mapping is not None:
return mapping
if callable(candidate):
with contextlib.suppress(Exception):
return _ensure_mapping(candidate())
return None
candidates: list[Mapping] = []
for attr in ("myInfo", "my_node_info", "myNodeInfo", "my_node", "localNode"):
mapping = _as_mapping(getattr(iface, attr, None))
if mapping is None:
continue
candidates.append(mapping)
nested_info = _ensure_mapping(mapping.get("info"))
if nested_info:
candidates.append(nested_info)
for mapping in candidates:
node_id = _candidate_node_id(mapping)
if node_id:
return node_id
for key in ("myNodeNum", "my_node_num", "myNodeId", "my_node_id"):
node_id = serialization._canonical_node_id(mapping.get(key))
if node_id:
return node_id
node_id = serialization._canonical_node_id(getattr(iface, "myNodeNum", None))
if node_id:
return node_id
return None
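The attribute-scan-then-fallback shape of `_extract_host_node_id` can be sketched without the canonicalisation layer. The helper and fake-interface names here are hypothetical, and the sketch returns the raw node number rather than the canonical `!xxxxxxxx` form:

```python
from collections.abc import Mapping

def extract_host_node_num(iface):
    """Scan well-known attributes for a node number, else iface.myNodeNum."""
    for attr in ("myInfo", "my_node_info", "myNodeInfo"):
        candidate = getattr(iface, attr, None)
        if isinstance(candidate, Mapping) and candidate.get("myNodeNum") is not None:
            return candidate["myNodeNum"]
    return getattr(iface, "myNodeNum", None)

class FakeIface:
    """Hypothetical interface exposing myInfo as a plain dict."""
    myInfo = {"myNodeNum": 0xDEADBEEF}
```

An object with none of the expected attributes simply yields `None`.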
@@ -0,0 +1,41 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Inject a canonical ``id`` into Meshtastic nodeinfo packets when missing."""
from __future__ import annotations
from .identity import _candidate_node_id, _ensure_mapping
def _normalise_nodeinfo_packet(packet) -> dict | None:
"""Return a dictionary view of ``packet`` with a guaranteed ``id`` when known."""
mapping = _ensure_mapping(packet)
if mapping is None:
return None
try:
normalised: dict = dict(mapping)
except Exception:
try:
normalised = {key: mapping[key] for key in mapping}
except Exception: # pragma: no cover - both copy strategies failed
return None
node_id = _candidate_node_id(normalised)
if node_id and normalised.get("id") != node_id:
normalised["id"] = node_id
return normalised
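The copy-then-inject behaviour above can be shown end to end with a toy id extractor standing in for `_candidate_node_id` (which in the real module walks nested sections recursively):

```python
def candidate_node_id(mapping):
    """Toy stand-in for _candidate_node_id: check a few well-known keys."""
    for key in ("fromId", "nodeId", "id"):
        value = mapping.get(key)
        if isinstance(value, str) and value.startswith("!"):
            return value
    return None

def normalise_nodeinfo_packet(packet):
    """Sketch of _normalise_nodeinfo_packet: shallow-copy, then inject 'id'."""
    normalised = dict(packet)
    node_id = candidate_node_id(normalised)
    if node_id and normalised.get("id") != node_id:
        normalised["id"] = node_id
    return normalised
```

A packet carrying only `fromId` comes back with a matching top-level `id`, which is exactly what the downstream nodeinfo handler needs.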
@@ -0,0 +1,41 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Runtime monkey-patches applied to the upstream ``meshtastic`` library."""
from __future__ import annotations
from .ble_receive import _patch_meshtastic_ble_receive_loop
from .nodeinfo import (
_build_safe_nodeinfo_callback,
_patch_meshtastic_nodeinfo_handler,
_patch_nodeinfo_handler_class,
_update_nodeinfo_handler_aliases,
)
def apply_all() -> None:
"""Apply every meshtastic monkey-patch in the order required for safety."""
_patch_meshtastic_nodeinfo_handler()
_patch_meshtastic_ble_receive_loop()
__all__ = [
"apply_all",
"_build_safe_nodeinfo_callback",
"_patch_meshtastic_ble_receive_loop",
"_patch_meshtastic_nodeinfo_handler",
"_patch_nodeinfo_handler_class",
"_update_nodeinfo_handler_aliases",
]
@@ -0,0 +1,93 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Patch the upstream Meshtastic BLE receive loop to avoid ``UnboundLocalError``."""
from __future__ import annotations
def _patch_meshtastic_ble_receive_loop() -> None:
"""Prevent ``UnboundLocalError`` crashes in Meshtastic's BLE reader."""
try:
from meshtastic import ble_interface as _ble_interface_module # type: ignore
except Exception: # pragma: no cover - dependency optional in tests
return
ble_class = getattr(_ble_interface_module, "BLEInterface", None)
if ble_class is None: # pragma: no cover - exercised only without BLE class
return
original = getattr(ble_class, "_receiveFromRadioImpl", None)
if not callable(original): # pragma: no cover - upstream API regression guard
return
if getattr(original, "_potato_mesh_safe_wrapper", False):
return
FROMRADIO_UUID = getattr(_ble_interface_module, "FROMRADIO_UUID", None)
BleakDBusError = getattr(_ble_interface_module, "BleakDBusError", ())
BleakError = getattr(_ble_interface_module, "BleakError", ())
logger = getattr(_ble_interface_module, "logger", None)
time = getattr(_ble_interface_module, "time", None)
if ( # pragma: no cover - upstream API regression guard
not FROMRADIO_UUID or logger is None or time is None
):
return
# The receive loop runs on a dedicated thread and only completes against a
# live BLE adapter; the body is hardware-dependent and not unit-testable.
def _safe_receive_from_radio(self): # pragma: no cover - hardware dependent
# type: ignore[override]
while self._want_receive:
if self.should_read:
self.should_read = False
retries: int = 0
while self._want_receive:
if self.client is None:
logger.debug("BLE client is None, shutting down")
self._want_receive = False
continue
payload: bytes = b""
try:
payload = bytes(self.client.read_gatt_char(FROMRADIO_UUID))
except BleakDBusError as exc:
logger.debug("Device disconnected, shutting down %s", exc)
self._want_receive = False
payload = b""
except BleakError as exc:
if "Not connected" in str(exc):
logger.debug("Device disconnected, shutting down %s", exc)
self._want_receive = False
payload = b""
else:
raise ble_class.BLEError("Error reading BLE") from exc
if not payload:
if not self._want_receive:
break
if retries < 5:
time.sleep(0.1)
retries += 1
continue
break
logger.debug("FROMRADIO read: %s", payload.hex())
self._handleFromRadio(payload)
else:
time.sleep(0.01)
_safe_receive_from_radio._potato_mesh_safe_wrapper = True # type: ignore[attr-defined]
ble_class._receiveFromRadioImpl = _safe_receive_from_radio
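The sentinel-guarded monkey-patch pattern used above (and again by the nodeinfo patch below) can be generalised into a tiny sketch; `patch_once` is a hypothetical name, but the `_potato_mesh_safe_wrapper` sentinel matches the attribute the real patches set:

```python
def patch_once(cls, method_name, wrapper_factory):
    """Replace cls.method_name with a wrapped version, at most once.

    Returns True when the patch was applied, False when the method is
    missing, not callable, or already carries the sentinel attribute.
    """
    original = getattr(cls, method_name, None)
    if not callable(original):
        return False
    if getattr(original, "_potato_mesh_safe_wrapper", False):
        return False  # already patched; never double-wrap
    replacement = wrapper_factory(original)
    replacement._potato_mesh_safe_wrapper = True
    setattr(cls, method_name, replacement)
    return True
```

Calling it a second time is a no-op, which is what keeps re-imports of the patches module safe.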
@@ -0,0 +1,164 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Runtime patches that harden Meshtastic's nodeinfo handler against missing ``id`` fields."""
from __future__ import annotations
import contextlib
import importlib
import sys
try: # pragma: no cover - dependency optional in tests
import meshtastic # type: ignore
except Exception: # pragma: no cover - dependency optional in tests
meshtastic = None # type: ignore[assignment]
from ..nodeinfo_normalize import _normalise_nodeinfo_packet
def _patch_meshtastic_nodeinfo_handler() -> None:
"""Ensure Meshtastic nodeinfo packets always include an ``id`` field."""
module = sys.modules.get("meshtastic", meshtastic)
if module is None: # pragma: no cover - re-import fallback for cold caches
with contextlib.suppress(Exception):
module = importlib.import_module("meshtastic")
if module is None: # pragma: no cover - exercised only without meshtastic
return
globals()["meshtastic"] = module
original = getattr(module, "_onNodeInfoReceive", None)
if not callable(original): # pragma: no cover - upstream API regression guard
return
mesh_interface_module = getattr(module, "mesh_interface", None)
if mesh_interface_module is None:
with contextlib.suppress(Exception):
mesh_interface_module = importlib.import_module("meshtastic.mesh_interface")
# Replace the module-level handler only once; the sentinel attribute prevents
# re-wrapping if _patch_meshtastic_nodeinfo_handler() is called again after
# the interface module is reloaded or re-imported.
if not getattr(original, "_potato_mesh_safe_wrapper", False):
module._onNodeInfoReceive = _build_safe_nodeinfo_callback(original)
_patch_nodeinfo_handler_class(mesh_interface_module, module)
def _build_safe_nodeinfo_callback(original):
"""Return a wrapper that injects a missing ``id`` before dispatching."""
def _safe_on_node_info_receive(iface, packet): # type: ignore[override]
normalised = _normalise_nodeinfo_packet(packet)
if normalised is not None:
packet = normalised
try:
return original(iface, packet)
except KeyError as exc: # pragma: no cover - defensive only
if exc.args and exc.args[0] == "id":
return None
raise
_safe_on_node_info_receive._potato_mesh_safe_wrapper = True # type: ignore[attr-defined]
return _safe_on_node_info_receive
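The wrapper's suppression rule is narrow: only a ``KeyError`` whose first argument is ``"id"`` is swallowed; anything else propagates. A sketch that exercises the rule in isolation (the ``original`` callables here are hypothetical stand-ins, not the real Meshtastic handler):

```python
def build_safe_callback(original):
    """Wrap original so a KeyError on "id" returns None instead of raising."""

    def safe(iface, packet):
        try:
            return original(iface, packet)
        except KeyError as exc:
            # Only the known firmware quirk (missing "id") is suppressed.
            if exc.args and exc.args[0] == "id":
                return None
            raise

    safe._potato_mesh_safe_wrapper = True  # re-entry sentinel, as above
    return safe
```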
def _update_nodeinfo_handler_aliases(original, replacement) -> None:
"""Ensure Meshtastic modules reference the patched ``NodeInfoHandler``."""
for module_name, module in list(sys.modules.items()):
if not module_name.startswith("meshtastic"):
continue
existing = getattr(module, "NodeInfoHandler", None)
if existing is original:
setattr(module, "NodeInfoHandler", replacement)
def _patch_nodeinfo_handler_class(
mesh_interface_module, meshtastic_module=None
) -> None:
"""Wrap ``NodeInfoHandler.onReceive`` to normalise packets before callbacks."""
if (
mesh_interface_module is None
): # pragma: no cover - exercised only without meshtastic
return
handler_class = getattr(mesh_interface_module, "NodeInfoHandler", None)
if handler_class is None: # pragma: no cover - upstream API regression guard
return
if getattr(
handler_class, "_potato_mesh_safe_wrapper", False
): # pragma: no cover - re-entry guard
return
original_on_receive = getattr(handler_class, "onReceive", None)
if not callable(
original_on_receive
): # pragma: no cover - upstream API regression guard
return
class _SafeNodeInfoHandler(handler_class): # type: ignore[misc]
"""Subclass that guards against missing node identifiers."""
def onReceive(self, iface, packet): # type: ignore[override]
"""Normalise ``packet`` before dispatching to the parent handler.
Injects a canonical ``id`` field when one can be inferred from the
packet's other fields, then delegates to the original
``NodeInfoHandler.onReceive``. A ``KeyError`` on ``"id"`` is
suppressed because some firmware versions omit the field entirely.
Parameters:
iface: The Meshtastic interface that received the packet.
packet: Raw nodeinfo packet dict, possibly lacking an ``id``
key.
Returns:
The return value of the parent handler, or ``None`` when a
missing ``"id"`` key would otherwise raise.
"""
normalised = _normalise_nodeinfo_packet(packet)
if normalised is not None:
packet = normalised
try:
return super().onReceive(iface, packet)
except KeyError as exc: # pragma: no cover - defensive only
if exc.args and exc.args[0] == "id":
return None
raise
_SafeNodeInfoHandler.__name__ = handler_class.__name__
_SafeNodeInfoHandler.__qualname__ = getattr(
handler_class, "__qualname__", handler_class.__name__
)
_SafeNodeInfoHandler.__module__ = getattr(
handler_class, "__module__", mesh_interface_module.__name__
)
_SafeNodeInfoHandler.__doc__ = getattr(
handler_class, "__doc__", _SafeNodeInfoHandler.__doc__
)
_SafeNodeInfoHandler._potato_mesh_safe_wrapper = True # type: ignore[attr-defined]
setattr(mesh_interface_module, "NodeInfoHandler", _SafeNodeInfoHandler)
if meshtastic_module is None:
meshtastic_module = globals().get("meshtastic")
if meshtastic_module is not None:
existing_top = getattr(meshtastic_module, "NodeInfoHandler", None)
if existing_top is handler_class: # pragma: no cover - top-level re-export
setattr(meshtastic_module, "NodeInfoHandler", _SafeNodeInfoHandler)
_update_nodeinfo_handler_aliases(handler_class, _SafeNodeInfoHandler)
@@ -0,0 +1,292 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""LoRa region/frequency/preset derivation from a Meshtastic config protobuf."""
from __future__ import annotations
import math
import re
from typing import Any
from .. import config
def _has_field(message: Any, field_name: str) -> bool:
"""Return ``True`` when ``message`` advertises ``field_name`` via ``HasField``."""
if message is None:
return False
has_field = getattr(message, "HasField", None)
if callable(has_field):
try:
return bool(has_field(field_name))
except Exception: # pragma: no cover - defensive guard
return False
return hasattr(message, field_name)
def _enum_name_from_field(message: Any, field_name: str, value: Any) -> str | None:
"""Return the enum name for ``value`` using ``message`` descriptors."""
descriptor = getattr(message, "DESCRIPTOR", None)
if descriptor is None:
return None
fields_by_name = getattr(descriptor, "fields_by_name", {})
field_desc = fields_by_name.get(field_name)
if field_desc is None:
return None
enum_type = getattr(field_desc, "enum_type", None)
if enum_type is None:
return None
enum_values = getattr(enum_type, "values_by_number", {})
enum_value = enum_values.get(value)
if enum_value is None:
return None
return getattr(enum_value, "name", None)
def _resolve_lora_message(local_config: Any) -> Any | None:
"""Return the LoRa configuration sub-message from ``local_config``."""
if local_config is None:
return None
if _has_field(local_config, "lora"):
candidate = getattr(local_config, "lora", None)
if candidate is not None:
return candidate
radio_section = getattr(local_config, "radio", None)
if radio_section is not None:
if _has_field(radio_section, "lora"):
return getattr(radio_section, "lora", None)
if hasattr(radio_section, "lora"):
return getattr(radio_section, "lora")
if hasattr(local_config, "lora"):
return getattr(local_config, "lora")
return None
# Maps Meshtastic region enum name to (base_freq_MHz, channel_spacing_MHz).
# Values are derived from the Meshtastic firmware RegionInfo tables.
# Used by _computed_channel_frequency to derive the actual radio frequency
# from the region and channel index.
_REGION_CHANNEL_PARAMS: dict[str, tuple[float, float]] = {
"US": (902.0, 0.25), # 902928 MHz; e.g. ch 52 ≈ 915 MHz at 250 kHz spacing
"EU_433": (433.175, 0.2),
"EU_868": (869.525, 0.5), # actual primary ≈ 869.525 MHz, not 868
"CN": (470.0, 0.2),
"JP": (920.875, 0.5),
"ANZ": (916.0, 0.5),
"KR": (921.9, 0.5),
"TW": (923.0, 0.5),
"RU": (868.9, 0.5),
"IN": (865.0, 0.5),
"NZ_865": (864.0, 0.5),
"TH": (920.0, 0.5),
"LORA_24": (2400.0, 0.5),
"UA_433": (433.175, 0.2),
"UA_868": (868.0, 0.5),
"MY_433": (433.0, 0.2),
"MY_919": (919.0, 0.5),
"SG_923": (923.0, 0.5),
"PH_433": (433.0, 0.2),
"PH_868": (868.0, 0.5),
"PH_915": (915.0, 0.5),
"ANZ_433": (433.0, 0.2),
"KZ_433": (433.0, 0.2),
"KZ_863": (863.125, 0.5),
"NP_865": (865.0, 0.5),
"BR_902": (902.0, 0.25),
# IL (Israel) is absent from meshtastic Python lib 2.7.8 protobufs; the
# enum value is unresolvable at runtime. Operators on IL firmware should
# set the FREQUENCY environment variable to override.
}
def _computed_channel_frequency(
enum_name: str | None,
channel_num: int | None,
) -> int | None:
"""Compute the floor MHz frequency for a known region and channel index.
Looks up *enum_name* in :data:`_REGION_CHANNEL_PARAMS` and returns
``floor(base_freq + channel_num * spacing)``. Returns ``None`` when the
region is not in the table. A missing or negative *channel_num* is
treated as 0 so the base frequency is always usable.
Args:
enum_name: Region enum name as returned by
:func:`_enum_name_from_field`, e.g. ``"EU_868"`` or ``"US"``.
channel_num: Zero-based channel index from the device LoRa config.
Returns:
Floored MHz as :class:`int`, or ``None`` if the region is unknown.
"""
if enum_name is None:
return None
params = _REGION_CHANNEL_PARAMS.get(enum_name)
if params is None:
return None
base, spacing = params
idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
return math.floor(base + idx * spacing)
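The derivation is a single multiply-add followed by a floor. A self-contained sketch with a trimmed copy of the table (two regions only, values copied from the map above):

```python
import math

# Trimmed excerpt of the region table: (base_freq_MHz, spacing_MHz).
REGION_PARAMS = {"US": (902.0, 0.25), "EU_868": (869.525, 0.5)}


def channel_frequency(region, channel_num):
    """floor(base + channel_num * spacing); None for unknown regions."""
    params = REGION_PARAMS.get(region)
    if params is None:
        return None
    base, spacing = params
    # Missing or negative index falls back to 0, so the base is always usable.
    idx = channel_num if isinstance(channel_num, int) and channel_num >= 0 else 0
    return math.floor(base + idx * spacing)
```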
def _region_frequency(lora_message: Any) -> int | float | str | None:
"""Derive the LoRa region frequency in MHz or the region label from ``lora_message``.
Frequency sources are tried in priority order:
1. ``override_frequency > 0`` — explicit radio override, floored to MHz.
2. :data:`_REGION_CHANNEL_PARAMS` lookup + ``channel_num`` — actual
band-plan frequency derived from the device's region and channel index,
floored to MHz.
3. Largest digit token ≥ 100 parsed from the region enum name string.
4. Largest digit token < 100 from the enum name (reversed scan).
5. Full enum name string, raw integer ≥ 100, or raw string as a label.
Args:
lora_message: A LoRa config protobuf message or compatible object.
Returns:
An integer MHz frequency, a fallback string label, or ``None``.
"""
if lora_message is None:
return None
# Step 1 — explicit radio override
override_frequency = getattr(lora_message, "override_frequency", None)
if override_frequency is not None:
if isinstance(override_frequency, (int, float)):
if override_frequency > 0:
return math.floor(override_frequency)
elif override_frequency:
return override_frequency
region_value = getattr(lora_message, "region", None)
if region_value is None:
return None
enum_name = _enum_name_from_field(lora_message, "region", region_value)
# Step 2 — lookup table + channel offset (actual band-plan frequency)
if enum_name:
channel_num = getattr(lora_message, "channel_num", None)
computed = _computed_channel_frequency(enum_name, channel_num)
if computed is not None:
return computed
# Steps 3–5 — parse digits from enum name (fallback for unknown regions)
if enum_name:
digits = re.findall(r"\d+", enum_name)
for token in digits:
try:
freq = int(token)
except ValueError: # pragma: no cover - regex guarantees digits
continue
if freq >= 100:
return freq
for token in reversed(digits):
try:
return int(token)
except ValueError: # pragma: no cover - defensive only
continue
return enum_name
if isinstance(region_value, int) and region_value >= 100:
return region_value
if isinstance(region_value, str) and region_value:
return region_value
return None
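The digit-token fallback in the tail of ``_region_frequency`` can be isolated. A sketch (hypothetical name) mirroring the scan order — first a forward pass for a token ≥ 100, then the last token of any size, then the enum name itself:

```python
import re


def freq_from_enum_name(enum_name):
    """Parse a frequency hint out of a region enum name string."""
    digits = re.findall(r"\d+", enum_name)
    for token in digits:
        if int(token) >= 100:
            return int(token)  # e.g. "EU_868" -> 868
    if digits:
        return int(digits[-1])  # e.g. "LORA_24" -> 24
    return enum_name  # no digits at all: return the label as-is
```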
def _camelcase_enum_name(name: str | None) -> str | None:
"""Convert ``name`` from ``SCREAMING_SNAKE`` to ``CamelCase``."""
if not name:
return None
parts = re.split(r"[^0-9A-Za-z]+", name.strip())
camel_parts = [part.capitalize() for part in parts if part]
if not camel_parts:
return None
return "".join(camel_parts)
def _modem_preset(lora_message: Any) -> str | None:
"""Return the CamelCase modem preset configured on ``lora_message``."""
if lora_message is None:
return None
descriptor = getattr(lora_message, "DESCRIPTOR", None)
fields_by_name = getattr(descriptor, "fields_by_name", {}) if descriptor else {}
if "modem_preset" in fields_by_name:
preset_field = "modem_preset"
elif "preset" in fields_by_name:
preset_field = "preset"
elif hasattr(lora_message, "modem_preset"):
preset_field = "modem_preset"
elif hasattr(lora_message, "preset"):
preset_field = "preset"
else:
return None
preset_value = getattr(lora_message, preset_field, None)
if preset_value is None:
return None
enum_name = _enum_name_from_field(lora_message, preset_field, preset_value)
if isinstance(enum_name, str) and enum_name:
return _camelcase_enum_name(enum_name)
if isinstance(preset_value, str) and preset_value:
return _camelcase_enum_name(preset_value)
return None
def _ensure_radio_metadata(iface: Any) -> None:
"""Populate cached LoRa metadata by inspecting ``iface`` when available."""
if iface is None:
return
try:
wait_for_config = getattr(iface, "waitForConfig", None)
if callable(wait_for_config):
wait_for_config()
except Exception: # pragma: no cover - hardware dependent guard
pass
local_node = getattr(iface, "localNode", None)
local_config = getattr(local_node, "localConfig", None) if local_node else None
lora_message = _resolve_lora_message(local_config)
if lora_message is None:
return
frequency = _region_frequency(lora_message)
preset = _modem_preset(lora_message)
updated = False
if frequency is not None and getattr(config, "LORA_FREQ", None) is None:
config.LORA_FREQ = frequency
updated = True
if preset is not None and getattr(config, "MODEM_PRESET", None) is None:
config.MODEM_PRESET = preset
updated = True
if updated:
config._debug_log(
"Captured LoRa radio metadata",
context="interfaces.ensure_radio_metadata",
severity="info",
always=True,
lora_freq=frequency,
modem_preset=preset,
)
@@ -0,0 +1,84 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Network target parsing helpers for Meshtastic interfaces."""
from __future__ import annotations
import ipaddress
import urllib.parse
from ..connection import DEFAULT_TCP_PORT
_DEFAULT_TCP_TARGET = "http://127.0.0.1"
def _parse_network_target(value: str) -> tuple[str, int] | None:
"""Return ``(host, port)`` when ``value`` is a numeric IP address string.
Only literal IPv4 or IPv6 addresses are accepted, optionally paired with a
port or scheme. Callers that start from hostnames should resolve them to an
address before invoking this helper.
Parameters:
value: Numeric IP literal or URL describing the TCP interface.
Returns:
A ``(host, port)`` tuple or ``None`` when parsing fails.
"""
if not value:
return None
value = value.strip()
if not value:
return None
def _validated_result(host: str | None, port: int | None) -> tuple[str, int] | None:
if not host:
return None
try:
ipaddress.ip_address(host)
except ValueError:
return None
return host, port or DEFAULT_TCP_PORT
parsed_values = []
if "://" in value:
parsed_values.append(urllib.parse.urlparse(value, scheme="tcp"))
parsed_values.append(urllib.parse.urlparse(f"//{value}", scheme="tcp"))
for parsed in parsed_values:
try:
port = parsed.port
except ValueError:
port = None
result = _validated_result(parsed.hostname, port)
if result:
return result
# For bare "host:port" strings that urlparse may misparse, try a manual
# partition. The `startswith("[")` guard excludes IPv6 bracket notation
# (e.g. "[::1]:8080") because those already succeed via urlparse above.
if value.count(":") == 1 and not value.startswith("["):
host, _, port_text = value.partition(":")
try:
port = int(port_text) if port_text else None
except ValueError:
port = None
result = _validated_result(host, port)
if result: # pragma: no cover - urlparse handles all currently-known forms
return result
return _validated_result(value, None)
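The double-parse above exists because ``urlparse`` treats a bare ``host:port`` as a scheme-or-path, not as a netloc; prefixing ``//`` forces netloc parsing. A condensed sketch of the bare-address branch, assuming a hypothetical default port of 4403 in place of ``DEFAULT_TCP_PORT``:

```python
import ipaddress
import urllib.parse


def parse_ip_target(value, default_port=4403):
    """Return (host, port) for a literal IP target like "10.0.0.1:4403"."""
    # "//" prefix makes urlparse put the address into netloc/hostname.
    parsed = urllib.parse.urlparse(f"//{value}", scheme="tcp")
    try:
        port = parsed.port
    except ValueError:  # non-numeric or out-of-range port text
        port = None
    host = parsed.hostname
    if not host:
        return None
    try:
        ipaddress.ip_address(host)  # only numeric IPv4/IPv6 literals pass
    except ValueError:
        return None
    return host, port or default_port
```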
@@ -0,0 +1,170 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MeshCore protocol implementation.
This package defines :class:`MeshcoreProvider`, which satisfies the
:class:`~data.mesh_ingestor.mesh_protocol.MeshProtocol` interface for MeshCore
nodes connected via serial port, BLE, or TCP/IP.
The protocol backend runs MeshCore's ``asyncio`` event loop in a background
daemon thread so that incoming events are dispatched without blocking the
synchronous daemon loop. Received contacts, channel messages, and direct
messages are forwarded to the shared HTTP ingest queue via the same
:mod:`~data.mesh_ingestor.handlers` helpers used by the Meshtastic protocol.
Connection type is detected automatically from the target string:
* **BLE** — MAC address (``AA:BB:CC:DD:EE:FF``) or UUID (macOS format).
* **TCP** — ``host:port`` or ``[ipv6]:port`` (accepts hostnames).
* **Serial** — any other non-empty string (e.g. ``/dev/ttyUSB0``).
* **Auto** — ``None`` or empty: tries serial candidates from
:func:`~data.mesh_ingestor.connection.default_serial_targets`.
Node identities are derived from the first four bytes (eight hex characters)
of each contact's 32-byte public key, formatted as ``!xxxxxxxx`` to match
the canonical node-ID schema used across the system. Ingested
``user.shortName`` is the first two bytes (four hex characters) of the
node ID, not the advertised name.
"""
from __future__ import annotations
# Apply upstream-library patches before any ``MeshCore`` instance is built,
# otherwise the first malformed advertisement dies inside a detached asyncio
# task before our handler can observe it. See
# :mod:`data.mesh_ingestor.protocols._meshcore_patches` for the specific
# upstream bugs covered.
#
# This mutates the upstream class at import time. The blast radius is
# narrow because ``protocols/__init__.py`` exposes this package only through
# a lazy ``__getattr__`` and the daemon resolves it only when
# ``PROTOCOL=meshcore`` is active. Any future diagnostic CLI that imports
# this package will inherit the shim.
from .. import _meshcore_patches as _meshcore_patches
_meshcore_patches.apply()
# Re-expose meshcore-library symbols so existing test imports (and callers
# that prefer a single import surface) keep working unchanged. Submodules
# resolve these names at call time via ``sys.modules`` so monkey-patches
# applied to the package surface during tests propagate.
from meshcore import ( # noqa: E402 - patches must run before this import.
BLEConnection,
EventType,
MeshCore,
SerialConnection,
TCPConnection,
)
# Re-expose the ``data.mesh_ingestor`` modules that tests monkeypatch through
# the meshcore namespace (``_mod.config._debug_log``, ``_mod._ingestors``,
# ``_mod._queue``). Keeping these attributes preserves the call surface of
# the pre-split ``meshcore.py`` module.
from ... import config as config # noqa: E402
from ... import ingestors as _ingestors # noqa: E402
from ... import queue as _queue # noqa: E402
from ...connection import default_serial_targets # noqa: E402
from ._constants import ( # noqa: E402 - keep grouped with sibling re-exports.
_CHANNEL_PROBE_FALLBACK_MAX,
_CONNECT_TIMEOUT_SECS,
_DEFAULT_BAUDRATE,
_MENTION_RE,
_MESHCORE_ADV_TYPE_ROLE,
_MESHCORE_ID_BITS,
_MESHCORE_ID_MASK,
)
from .channels import _ensure_channel_names # noqa: E402
from .connection import ( # noqa: E402
_log_unhandled_loop_exception,
_make_connection,
)
from .debug_log import ( # noqa: E402
_IGNORED_MESSAGE_LOCK,
_IGNORED_MESSAGE_LOG_PATH,
_record_meshcore_message,
_to_json_safe,
)
from .decode import ( # noqa: E402
_contact_to_node_dict,
_derive_modem_preset,
_self_info_to_node_dict,
)
from .handlers import ( # noqa: E402
_make_event_handlers,
_process_contact_update,
_process_contacts,
_process_self_info,
)
from .identity import ( # noqa: E402
_derive_synthetic_node_id,
_meshcore_adv_type_to_role,
_meshcore_node_id,
_meshcore_short_name,
_pubkey_prefix_to_node_id,
)
from .interface import ClosedBeforeConnectedError, _MeshcoreInterface # noqa: E402
from .messages import ( # noqa: E402
_derive_message_id,
_extract_mention_names,
_parse_sender_name,
_synthetic_node_dict,
)
from .position import _store_meshcore_position # noqa: E402
from .provider import MeshcoreProvider # noqa: E402
from .runner import _run_meshcore # noqa: E402
__all__ = [
"BLEConnection",
"ClosedBeforeConnectedError",
"EventType",
"MeshCore",
"MeshcoreProvider",
"SerialConnection",
"TCPConnection",
"_CHANNEL_PROBE_FALLBACK_MAX",
"_CONNECT_TIMEOUT_SECS",
"_DEFAULT_BAUDRATE",
"_IGNORED_MESSAGE_LOCK",
"_IGNORED_MESSAGE_LOG_PATH",
"_MENTION_RE",
"_MESHCORE_ADV_TYPE_ROLE",
"_MESHCORE_ID_BITS",
"_MESHCORE_ID_MASK",
"_MeshcoreInterface",
"_contact_to_node_dict",
"_derive_message_id",
"_derive_modem_preset",
"_derive_synthetic_node_id",
"_ensure_channel_names",
"_extract_mention_names",
"_log_unhandled_loop_exception",
"_make_connection",
"_make_event_handlers",
"_meshcore_adv_type_to_role",
"_meshcore_node_id",
"_meshcore_short_name",
"_parse_sender_name",
"_process_contact_update",
"_process_contacts",
"_process_self_info",
"_pubkey_prefix_to_node_id",
"_record_meshcore_message",
"_run_meshcore",
"_self_info_to_node_dict",
"_store_meshcore_position",
"_synthetic_node_dict",
"_to_json_safe",
]
@@ -0,0 +1,56 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Constants shared across MeshCore submodules.
Hoisted out of the original monolithic ``meshcore.py`` so that submodules can
import only what they need without picking up unrelated side-effects.
"""
from __future__ import annotations
import re
_CONNECT_TIMEOUT_SECS: float = 30.0
"""Seconds to wait for the MeshCore node to respond to the appstart handshake."""
_DEFAULT_BAUDRATE: int = 115200
"""Default baud rate for MeshCore serial connections."""
# MeshCore ``ADV_TYPE_*`` (``AdvertDataHelpers.h``) → ``user.role`` for POST /api/nodes.
_MESHCORE_ADV_TYPE_ROLE: dict[int, str] = {
1: "COMPANION", # ADV_TYPE_CHAT
2: "REPEATER", # ADV_TYPE_REPEATER
3: "ROOM_SERVER", # ADV_TYPE_ROOM_SERVER
4: "SENSOR", # ADV_TYPE_SENSOR
}
_MESHCORE_ID_BITS = 53
"""Width of the synthetic MeshCore message ID, in bits.
53 bits keeps the value within :js:data:`Number.MAX_SAFE_INTEGER`
(``2**53 - 1``) so the JSON ID round-trips through the JavaScript frontend
without precision loss, while giving roughly :math:`2^{26.5}` (~95 million)
distinct messages of birthday-collision headroom.
"""
_MESHCORE_ID_MASK = (1 << _MESHCORE_ID_BITS) - 1
"""Bitmask applied to the SHA-256 prefix to clamp the id to 53 bits."""
# Fallback upper bound for channel index probing when the device query fails
# or returns an older firmware version that omits ``max_channels``.
_CHANNEL_PROBE_FALLBACK_MAX = 32
# Matches @[Name] mention patterns in MeshCore message bodies.
_MENTION_RE = re.compile(r"@\[([^\]]+)\]")
@@ -0,0 +1,86 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Channel-name probing for MeshCore devices."""
from __future__ import annotations
import sys
from ... import config
from ._constants import _CHANNEL_PROBE_FALLBACK_MAX
async def _ensure_channel_names(mc: object) -> None:
"""Probe channel names from the device and populate the channel cache.
Queries the device for its authoritative channel count via
:meth:`~meshcore.MeshCore.commands.send_device_query` (``max_channels``
field of the ``DEVICE_INFO`` response), then iterates every index from 0
through ``max_channels - 1``, requesting each via
:meth:`~meshcore.MeshCore.commands.get_channel`. The responses arrive as
:attr:`~meshcore.EventType.CHANNEL_INFO` events and are registered into
the shared channel cache via :func:`~data.mesh_ingestor.channels.register_channel`.
Falls back to a probe bound of :data:`_CHANNEL_PROBE_FALLBACK_MAX` when the
device query fails or returns an older firmware that omits ``max_channels``.
Probes every index without early-stopping on ``ERROR`` responses, so sparse
configurations (e.g. slots 0 and 5 configured, slots 1–4 empty) are handled
correctly. Only a hard exception (connection loss, timeout) aborts the loop.
Parameters:
mc: Connected :class:`~meshcore.MeshCore` instance.
"""
# Deferred — see _make_event_handlers for the circular-dependency note.
from ... import channels as _channels
# Look up ``EventType`` via the parent package so that test fakes installed
# via ``monkeypatch.setattr(mod, "EventType", ...)`` apply at call time.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
EventType = pkg.EventType
max_idx = _CHANNEL_PROBE_FALLBACK_MAX
try:
dev_evt = await mc.commands.send_device_query()
if dev_evt.type == EventType.DEVICE_INFO:
reported = (dev_evt.payload or {}).get("max_channels")
if isinstance(reported, int) and reported > 0:
max_idx = reported
except Exception as exc:
config._debug_log(
"Device query failed; using fallback channel probe bound",
context="meshcore.channels",
severity="warning",
fallback_max=max_idx,
error=str(exc),
)
for idx in range(max_idx):
try:
evt = await mc.commands.get_channel(idx)
if evt.type == EventType.CHANNEL_INFO:
name = (evt.payload or {}).get("channel_name", "")
if name:
_channels.register_channel(idx, name)
# ERROR response — unconfigured slot; continue to next index
except Exception as exc:
config._debug_log(
"Channel probe failed",
context="meshcore.channels",
severity="warning",
channel_idx=idx,
error=str(exc),
)
break
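The no-early-stop policy can be sketched synchronously (a hypothetical ``get_channel`` callable stands in for the async command; empty slots return ``""``, hard failures raise):

```python
def probe_channels(get_channel, max_idx):
    """Probe every slot up to max_idx; skip empty slots, abort on hard errors."""
    found = {}
    for idx in range(max_idx):
        try:
            name = get_channel(idx)  # "" means an unconfigured slot
        except ConnectionError:
            break  # only a hard failure stops the sweep
        if name:
            found[idx] = name
    return found
```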
@@ -0,0 +1,95 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Connection routing and asyncio exception logging for MeshCore."""
from __future__ import annotations
import asyncio
import sys
from ... import config
from ...connection import parse_ble_target, parse_tcp_target
def _make_connection(target: str, baudrate: int) -> object:
"""Create the appropriate MeshCore connection object for *target*.
Routes to the correct ``meshcore`` connection class based on the target
string format:
* BLE MAC / UUID → :class:`meshcore.BLEConnection`
* ``host:port`` / ``[ipv6]:port`` → :class:`meshcore.TCPConnection`
* anything else → :class:`meshcore.SerialConnection`
Parameters:
target: Resolved, non-empty connection target.
baudrate: Baud rate for serial connections (ignored for BLE/TCP).
Returns:
An unconnected ``meshcore`` connection object.
"""
# Look up connection classes via the parent package so that test fakes
# installed via ``monkeypatch.setattr(mod, "BLEConnection", ...)`` apply.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
ble_addr = parse_ble_target(target)
if ble_addr:
return pkg.BLEConnection(address=ble_addr)
tcp_target = parse_tcp_target(target)
if tcp_target:
host, port = tcp_target
return pkg.TCPConnection(host, port)
return pkg.SerialConnection(target, baudrate)
def _log_unhandled_loop_exception(
loop: asyncio.AbstractEventLoop, context: dict
) -> None:
"""Route asyncio's "unhandled task exception" warnings through our logger.
The upstream ``meshcore`` library spawns detached
``asyncio.create_task`` tasks for every inbound radio frame. When one
of those tasks raises and nobody awaits the future, asyncio's default
handler writes ``Task exception was never retrieved`` to stderr. That
bypasses our structured log pipeline and clutters container logs.
This handler preserves the same information under
``context=asyncio.unhandled`` so operators can grep a single context
instead of scanning stderr.
Parameters:
loop: Event loop that surfaced the exception (unused but required
by the asyncio handler signature).
context: Asyncio exception-context dictionary. Fields we care
about: ``message`` (human summary) and ``exception`` (the raw
exception object, when available).
"""
del loop
exception = context.get("exception")
task = context.get("task")
task_name = None
if task is not None:
# Prefer the friendly ``get_name()``; fall back to ``repr`` for any
# future Task-like object that does not implement it.
get_name = getattr(task, "get_name", None)
task_name = get_name() if callable(get_name) else repr(task)
config._debug_log(
context.get("message") or "Unhandled asyncio task exception",
context="asyncio.unhandled",
severity="error",
always=True,
error_class=type(exception).__name__ if exception else None,
error_message=str(exception) if exception else None,
task=task_name,
)
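The same routing can be demonstrated with a plain ``loop.set_exception_handler``. A sketch, assuming CPython's reference counting finalises the dropped task promptly (the ``gc.collect()`` is a belt-and-braces fallback):

```python
import asyncio
import gc


def install_capture_handler(loop, sink):
    """Route "task exception was never retrieved" reports into sink."""
    def handler(_loop, context):
        exc = context.get("exception")
        sink.append((context.get("message"), str(exc) if exc else None))
    loop.set_exception_handler(handler)


async def _demo(sink):
    install_capture_handler(asyncio.get_running_loop(), sink)

    async def boom():
        raise ValueError("detached failure")

    task = asyncio.ensure_future(boom())
    await asyncio.sleep(0)  # let boom() run and fail
    del task                # drop the last reference without awaiting it


captured = []
loop = asyncio.new_event_loop()
loop.run_until_complete(_demo(captured))
gc.collect()  # triggers Task.__del__ for any straggling finalisers
loop.close()
```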
@@ -0,0 +1,90 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""``DEBUG=1`` capture of unhandled MeshCore frames to ``ignored-meshcore.txt``."""
from __future__ import annotations
import base64
import json
import sys
import threading
from datetime import datetime, timezone
from pathlib import Path
from ... import config
# This file lives one level deeper than the pre-split ``meshcore.py``
# (``data/mesh_ingestor/protocols/meshcore/debug_log.py`` vs.
# ``data/mesh_ingestor/protocols/meshcore.py``), so ``parents[4]`` here
# (meshcore/ → protocols/ → mesh_ingestor/ → data/ → repo root) lands at
# the same repo-root destination as ``parents[3]`` did in the original
# module. The on-disk log path is therefore unchanged after the split.
_IGNORED_MESSAGE_LOG_PATH = Path(__file__).resolve().parents[4] / "ignored-meshcore.txt"
"""Filesystem path that stores raw MeshCore messages when ``DEBUG=1``."""
_IGNORED_MESSAGE_LOCK = threading.Lock()
"""Lock guarding writes to :data:`_IGNORED_MESSAGE_LOG_PATH`."""
def _to_json_safe(value: object) -> object:
"""Recursively convert *value* to a JSON-serialisable form.
Handles the common types present in mesh protocol messages: dicts, lists,
bytes (base64-encoded), and primitives. Anything else is coerced via
``str()``.
"""
if isinstance(value, dict):
return {str(k): _to_json_safe(v) for k, v in value.items()}
if isinstance(value, (list, tuple, set)):
return [_to_json_safe(v) for v in value]
if isinstance(value, bytes):
return base64.b64encode(value).decode("ascii")
if isinstance(value, (str, int, float, bool)) or value is None:
return value
return str(value)
def _record_meshcore_message(message: object, *, source: str) -> None:
"""Persist a MeshCore message to :data:`ignored-meshcore.txt` when ``DEBUG=1``.
When ``DEBUG`` is not set the function returns immediately without any
I/O so that production deployments are not burdened by file writes.
Parameters:
message: The raw message object received from the MeshCore node.
source: A short label describing where the message originated (e.g.
a serial port path or BLE address).
"""
if not config.DEBUG:
return
# Resolve path/lock via the parent package so test monkey-patches at
# ``meshcore._IGNORED_MESSAGE_LOG_PATH`` (and ``_IGNORED_MESSAGE_LOCK``)
# take effect at call time.
pkg = sys.modules.get("data.mesh_ingestor.protocols.meshcore")
log_path = getattr(pkg, "_IGNORED_MESSAGE_LOG_PATH", _IGNORED_MESSAGE_LOG_PATH)
log_lock = getattr(pkg, "_IGNORED_MESSAGE_LOCK", _IGNORED_MESSAGE_LOCK)
timestamp = datetime.now(timezone.utc).isoformat()
entry = {
"message": _to_json_safe(message),
"source": source,
"timestamp": timestamp,
}
payload = json.dumps(entry, ensure_ascii=False, sort_keys=True)
with log_lock:
log_path.parent.mkdir(parents=True, exist_ok=True)
with log_path.open("a", encoding="utf-8") as fh:
fh.write(f"{payload}\n")
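A minimal, self-contained sketch of the resulting log-line format (the helper below mirrors ``_to_json_safe``; the message payload and source label are made-up examples, not real MeshCore data):

```python
import base64
import json
from datetime import datetime, timezone

def to_json_safe(value):
    # Mirrors _to_json_safe above: dicts and sequences recurse, bytes
    # become base64 text, JSON primitives pass through, and anything
    # else is coerced via str().
    if isinstance(value, dict):
        return {str(k): to_json_safe(v) for k, v in value.items()}
    if isinstance(value, (list, tuple, set)):
        return [to_json_safe(v) for v in value]
    if isinstance(value, bytes):
        return base64.b64encode(value).decode("ascii")
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value
    return str(value)

# One JSON object per line, keys sorted, appended to ignored-meshcore.txt.
entry = {
    "message": to_json_safe({"raw": b"\x01\x02", "snr": 7.5}),
    "source": "/dev/ttyUSB0",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
line = json.dumps(entry, ensure_ascii=False, sort_keys=True)
```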
@@ -0,0 +1,110 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Convert MeshCore contact / self-info payloads into ``POST /api/nodes`` dicts."""
from __future__ import annotations
import time
from .identity import (
_meshcore_adv_type_to_role,
_meshcore_node_id,
_meshcore_short_name,
)
def _contact_to_node_dict(contact: dict) -> dict:
"""Convert a MeshCore contact dict to a Meshtastic-ish node dict.
Parameters:
contact: Contact dict from the MeshCore library. Expected keys
include ``public_key``, ``type`` (``ADV_TYPE_*``), ``adv_name``,
``last_advert``, ``adv_lat``, and ``adv_lon``.
Returns:
Node dict compatible with the ``POST /api/nodes`` payload format.
"""
pub_key = contact.get("public_key", "")
node_id = _meshcore_node_id(pub_key)
name = (contact.get("adv_name") or "").strip()
role = _meshcore_adv_type_to_role(contact.get("type"))
node: dict = {
"lastHeard": contact.get("last_advert"),
"protocol": "meshcore",
"user": {
"longName": name,
"shortName": _meshcore_short_name(node_id),
"publicKey": pub_key,
**({"role": role} if role is not None else {}),
},
}
lat = contact.get("adv_lat")
lon = contact.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
pos: dict = {"latitude": lat, "longitude": lon}
last_advert = contact.get("last_advert")
if last_advert is not None:
pos["time"] = last_advert
node["position"] = pos
return node
def _derive_modem_preset(sf: object, bw: object, cr: object) -> str | None:
"""Return a compact radio-parameter string from spreading factor, bandwidth, and coding rate.
Parameters:
sf: Spreading factor (int, e.g. ``12``).
bw: Bandwidth in kHz (int or float, e.g. ``125.0``).
cr: Coding rate denominator (int, e.g. ``5`` meaning 4/5).
Returns:
A string such as ``"SF12/BW125/CR5"``, or ``None`` when any parameter
is absent or zero (meaning the radio config was not reported).
"""
if not sf or not bw or not cr:
return None
return f"SF{int(sf)}/BW{int(bw)}/CR{int(cr)}"
def _self_info_to_node_dict(self_info: dict) -> dict:
"""Convert a MeshCore ``SELF_INFO`` payload to a Meshtastic-ish node dict.
Parameters:
self_info: Payload dict from the ``SELF_INFO`` event. Expected keys
include ``name``, ``public_key``, ``adv_type`` (``ADV_TYPE_*``),
``adv_lat``, and ``adv_lon``.
Returns:
Node dict compatible with the ``POST /api/nodes`` payload format.
"""
name = (self_info.get("name") or "").strip()
pub_key = self_info.get("public_key", "")
node_id = _meshcore_node_id(pub_key)
role = _meshcore_adv_type_to_role(self_info.get("adv_type"))
node: dict = {
"lastHeard": int(time.time()),
"protocol": "meshcore",
"user": {
"longName": name,
"shortName": _meshcore_short_name(node_id),
"publicKey": pub_key,
**({"role": role} if role is not None else {}),
},
}
lat = self_info.get("adv_lat")
lon = self_info.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
node["position"] = {"latitude": lat, "longitude": lon, "time": int(time.time())}
return node
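The preset derivation can be exercised in isolation; this standalone copy mirrors ``_derive_modem_preset`` above (the sample SF/BW/CR values are illustrative):

```python
def derive_modem_preset(sf, bw, cr):
    # Mirrors _derive_modem_preset above: any absent-or-zero component
    # means the radio config was not reported, so no preset is derived.
    if not sf or not bw or not cr:
        return None
    return f"SF{int(sf)}/BW{int(bw)}/CR{int(cr)}"
```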
@@ -0,0 +1,324 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Event-handler closures for MeshCore protocol messages."""
from __future__ import annotations
import time
from ... import config, ingestors as _ingestors
from .decode import _contact_to_node_dict, _derive_modem_preset, _self_info_to_node_dict
from .identity import _derive_synthetic_node_id, _meshcore_node_id
from .interface import _MeshcoreInterface
from .messages import (
_derive_message_id,
_extract_mention_names,
_parse_sender_name,
_synthetic_node_dict,
)
from .position import _store_meshcore_position
def _process_self_info(
payload: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
"""Apply a ``SELF_INFO`` payload: set host_node_id, upsert the host node,
and capture LoRa radio metadata into the shared config cache.
Parameters:
payload: Event payload dict containing at minimum ``public_key`` and
optionally ``name``, ``adv_lat``, ``adv_lon``, ``radio_freq``,
``radio_bw``, ``radio_sf``, ``radio_cr``.
iface: Active interface whose :attr:`host_node_id` will be updated.
handlers: Module reference for :func:`~data.mesh_ingestor.handlers`
functions (passed to avoid circular-import issues).
"""
# Cache the payload so node_snapshot_items / self_node_item can use it later.
iface._self_info_payload = payload
pub_key = payload.get("public_key", "")
node_id = _meshcore_node_id(pub_key)
# Capture radio metadata BEFORE upserting the node so that
# _apply_radio_metadata_to_nodes finds populated values on the very first
# SELF_INFO. Never overwrite a previously cached value.
radio_freq = payload.get("radio_freq")
if radio_freq is not None and getattr(config, "LORA_FREQ", None) is None:
config.LORA_FREQ = radio_freq
modem_preset = _derive_modem_preset(
payload.get("radio_sf"), payload.get("radio_bw"), payload.get("radio_cr")
)
if modem_preset is not None and getattr(config, "MODEM_PRESET", None) is None:
config.MODEM_PRESET = modem_preset
if node_id:
iface.host_node_id = node_id
handlers.register_host_node_id(node_id)
# Queue the ingestor registration BEFORE any node upserts so the web
# backend assigns the correct protocol to all subsequent records.
# Radio metadata (LORA_FREQ, MODEM_PRESET) is captured just above and
# will be included in the heartbeat payload by queue_ingestor_heartbeat.
_ingestors.queue_ingestor_heartbeat(force=True, node_id=node_id)
handlers.upsert_node(node_id, _self_info_to_node_dict(payload))
lat = payload.get("adv_lat")
lon = payload.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
_store_meshcore_position(
node_id, lat, lon, int(time.time()), handlers.host_node_id()
)
config._debug_log(
"MeshCore radio metadata captured",
context="meshcore.self_info.radio",
severity="info",
lora_freq=radio_freq,
modem_preset=modem_preset,
)
handlers._mark_packet_seen()
config._debug_log(
"MeshCore self-info received",
context="meshcore.self_info",
node_id=node_id,
name=payload.get("name"),
)
def _process_contacts(
contacts: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
"""Apply a bulk ``CONTACTS`` payload: update the local snapshot and upsert nodes.
Parameters:
contacts: Mapping of full ``public_key`` hex strings to contact dicts.
iface: Active interface whose contact snapshot will be updated.
handlers: Module reference for :func:`~data.mesh_ingestor.handlers`.
"""
for pub_key, contact in contacts.items():
node_id = _meshcore_node_id(pub_key)
if node_id is None:
continue
iface._update_contact(contact)
handlers.upsert_node(node_id, _contact_to_node_dict(contact))
lat = contact.get("adv_lat")
lon = contact.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
_store_meshcore_position(
node_id,
lat,
lon,
contact.get("last_advert"),
handlers.host_node_id(),
)
handlers._mark_packet_seen()
def _process_contact_update(
contact: dict, iface: _MeshcoreInterface, handlers: object
) -> None:
"""Apply a single ``NEW_CONTACT`` or ``NEXT_CONTACT`` event.
Parameters:
contact: Contact dict containing at minimum ``public_key``.
iface: Active interface whose contact snapshot will be updated.
handlers: Module reference for :func:`~data.mesh_ingestor.handlers`.
"""
pub_key = contact.get("public_key", "")
node_id = _meshcore_node_id(pub_key)
if node_id is None:
return
iface._update_contact(contact)
handlers.upsert_node(node_id, _contact_to_node_dict(contact))
lat = contact.get("adv_lat")
lon = contact.get("adv_lon")
if lat is not None and lon is not None and (lat or lon):
_store_meshcore_position(
node_id,
lat,
lon,
contact.get("last_advert"),
handlers.host_node_id(),
)
handlers._mark_packet_seen()
config._debug_log(
"MeshCore contact updated",
context="meshcore.contact",
node_id=node_id,
name=contact.get("adv_name"),
)
def _make_event_handlers(iface: _MeshcoreInterface, target: str | None) -> dict:
"""Build async callbacks for each relevant MeshCore event type.
All callbacks are closures over *iface* and *target* so they can update
connection state and forward data to the ingest queue without global state.
Parameters:
iface: The active :class:`_MeshcoreInterface` instance.
target: Human-readable connection target for log messages.
Returns:
Mapping of ``EventType`` member name → async callback coroutine.
"""
# Deferred imports to avoid a circular dependency: meshcore is imported by
# protocols/__init__.py which is imported by the top-level mesh_ingestor
# package, while handlers.py and channels.py import from that same package.
from ... import channels as _channels
from ... import handlers as _handlers
async def on_channel_info(evt) -> None:
payload = evt.payload or {}
idx = payload.get("channel_idx")
name = payload.get("channel_name", "")
if idx is not None and name:
_channels.register_channel(idx, name)
async def on_self_info(evt) -> None:
_process_self_info(evt.payload or {}, iface, _handlers)
async def on_contacts(evt) -> None:
_process_contacts(evt.payload or {}, iface, _handlers)
async def on_contact_update(evt) -> None:
_process_contact_update(evt.payload or {}, iface, _handlers)
async def on_channel_msg(evt) -> None:
payload = evt.payload or {}
sender_ts = payload.get("sender_timestamp")
text = payload.get("text")
if sender_ts is None or not text:
return
rx_time = int(time.time())
channel_idx = payload.get("channel_idx", 0)
# MeshCore channel messages carry no sender identifier in the event
# payload. Try to resolve the sender from the "SenderName: body"
# convention embedded in the message text, matched against the known
# contacts roster. When the contacts roster does not yet contain the
# sender, create a synthetic placeholder node so that the message
# receives a stable from_id and the UI can render a badge immediately.
# The web app will migrate messages to the real node ID once the sender
# is seen via a contact advertisement.
sender_name = _parse_sender_name(text)
from_id = iface.lookup_node_id_by_name(sender_name) if sender_name else None
if from_id is None and sender_name:
synthetic_id = _derive_synthetic_node_id(sender_name)
if synthetic_id not in iface._synthetic_node_ids:
_handlers.upsert_node(synthetic_id, _synthetic_node_dict(sender_name))
iface._synthetic_node_ids.add(synthetic_id)
from_id = synthetic_id
# Upsert synthetic placeholder nodes for any @[Name] mentions in the
# message body whose names are not yet in the contacts roster. This
# ensures mention badges resolve even before the mentioned node is seen.
for mention_name in _extract_mention_names(text):
if not iface.lookup_node_id_by_name(mention_name):
mention_id = _derive_synthetic_node_id(mention_name)
if mention_id not in iface._synthetic_node_ids:
_handlers.upsert_node(
mention_id, _synthetic_node_dict(mention_name)
)
iface._synthetic_node_ids.add(mention_id)
# The dedup fingerprint uses the parsed sender name (lowercased and
# stripped) rather than ``from_id``: each ingestor independently
# resolves Alice to either her real ``!aabbccdd`` (when she is in its
# contact roster) or to a synthetic id derived from her name; the
# parsed name lives in the message text itself, so it is identical
# across all receivers regardless of roster state.
sender_identity = (sender_name or "").strip().lower()
packet = {
"id": _derive_message_id(
sender_identity, sender_ts, f"c{channel_idx}", text
),
"rxTime": rx_time,
"rx_time": rx_time,
"from_id": from_id,
"to_id": "^all",
"channel": channel_idx,
"snr": payload.get("SNR"),
"rssi": payload.get("RSSI"),
"protocol": "meshcore",
"decoded": {
"portnum": "TEXT_MESSAGE_APP",
"text": text,
"channel": channel_idx,
},
}
_handlers._mark_packet_seen()
_handlers.store_packet_dict(packet)
config._debug_log(
"MeshCore channel message",
context="meshcore.channel_msg",
channel=channel_idx,
sender=sender_name,
from_id=from_id,
)
async def on_contact_msg(evt) -> None:
payload = evt.payload or {}
sender_ts = payload.get("sender_timestamp")
text = payload.get("text")
if sender_ts is None or not text:
return
rx_time = int(time.time())
pubkey_prefix = payload.get("pubkey_prefix", "")
from_id = iface.lookup_node_id(pubkey_prefix)
# ``pubkey_prefix`` is already a sender-side stable identifier (the
# first six bytes of the sender's public key); ``"dm"`` namespaces
# direct messages so they cannot collide with channel messages that
# happen to share the other components.
packet = {
"id": _derive_message_id(pubkey_prefix or "", sender_ts, "dm", text),
"rxTime": rx_time,
"rx_time": rx_time,
"from_id": from_id,
"to_id": iface.host_node_id,
"channel": 0,
"snr": payload.get("SNR"),
"protocol": "meshcore",
"decoded": {
"portnum": "TEXT_MESSAGE_APP",
"text": text,
"channel": 0,
},
}
_handlers._mark_packet_seen()
_handlers.store_packet_dict(packet)
async def on_disconnected(evt) -> None:
iface.isConnected = False
config._debug_log(
"MeshCore node disconnected",
context="meshcore.disconnect",
target=target or "unknown",
severity="warning",
always=True,
)
return {
"CHANNEL_INFO": on_channel_info,
"SELF_INFO": on_self_info,
"CONTACTS": on_contacts,
"NEW_CONTACT": on_contact_update,
"NEXT_CONTACT": on_contact_update,
"CHANNEL_MSG_RECV": on_channel_msg,
"CONTACT_MSG_RECV": on_contact_msg,
"DISCONNECTED": on_disconnected,
}
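The sender-resolution flow in ``on_channel_msg`` can be sketched end to end. This is a simplified standalone model, not the real handler: ``roster`` stands in for the interface's contact lookup, and the synthetic-id fallback mirrors ``_derive_synthetic_node_id``:

```python
import hashlib

def parse_sender_name(text):
    # "SenderName: body" — only the first colon separates name and body.
    i = text.find(":")
    if i < 0:
        return None
    name = text[:i].strip()
    return name or None

def resolve_from_id(text, roster):
    # roster maps adv_name -> canonical "!xxxxxxxx" id. When the sender
    # is not yet in the roster, fall back to a deterministic synthetic
    # id derived from the name, so the message still gets a stable from_id.
    name = parse_sender_name(text)
    if name is None:
        return None
    node_id = roster.get(name)
    if node_id is not None:
        return node_id
    return "!" + hashlib.sha256(name.encode("utf-8")).hexdigest()[:8]
```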
@@ -0,0 +1,125 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Pure helpers that derive canonical MeshCore node identifiers.
These helpers are deterministic and side-effect-free so they can be imported
from anywhere in the MeshCore package without circular-import concerns.
"""
from __future__ import annotations
import hashlib
from ._constants import _MESHCORE_ADV_TYPE_ROLE
def _meshcore_node_id(public_key_hex: str | None) -> str | None:
"""Derive a canonical ``!xxxxxxxx`` node ID from a MeshCore public key.
Uses the first four bytes (eight hex characters) of the 32-byte public
key, formatted as ``!xxxxxxxx``.
Parameters:
public_key_hex: 64-character lowercase hex string for the node's
public key as returned by the MeshCore library.
Returns:
Canonical ``!xxxxxxxx`` node ID string, or ``None`` when the key is
absent or too short.
"""
if not public_key_hex or len(public_key_hex) < 8:
return None
return "!" + public_key_hex[:8].lower()
def _meshcore_short_name(node_id: str | None) -> str:
"""Derive a four-character short name from a canonical node ID.
Uses the first two bytes (four hex characters) of the ``!xxxxxxxx`` node
ID. This keeps the short name consistent with the node ID itself — if the
node ID is later replaced when the real public key is heard, the short name
will update alongside it.
Parameters:
node_id: Canonical ``!xxxxxxxx`` node ID string (as returned by
:func:`_meshcore_node_id`).
Returns:
Four lowercase hex characters (e.g. ``"cafe"``), or an empty string
when the node ID is missing or too short.
"""
if not node_id:
return ""
raw = node_id.lstrip("!")
if len(raw) < 4:
return ""
return raw[:4].lower()
def _meshcore_adv_type_to_role(adv_type: object) -> str | None:
"""Map MeshCore ``ADV_TYPE_*`` (contact ``type`` / self ``adv_type``) to ingest role.
Values match MeshCore firmware ``AdvertDataHelpers.h`` (``ADV_TYPE_CHAT``,
``ADV_TYPE_REPEATER``, …). Role strings match the MeshCore palette keys
used by the web dashboard (``COMPANION``, ``REPEATER``, …).
Parameters:
        adv_type: Raw type byte from meshcore_py (typically ``int`` 0–4).
Non-integer values (e.g. ``float``, ``None``) are rejected and
return ``None``. Future firmware type codes not yet in the mapping
also return ``None`` until the table is updated.
Returns:
Uppercase role string, or ``None`` when the value is unknown or should
not override the web default (``ADV_TYPE_NONE`` / unrecognised).
"""
if not isinstance(adv_type, int):
return None
return _MESHCORE_ADV_TYPE_ROLE.get(adv_type)
def _derive_synthetic_node_id(long_name: str) -> str:
"""Derive a deterministic synthetic ``!xxxxxxxx`` node ID from a long name.
Uses the first four bytes of SHA-256(UTF-8 encoded name), formatted as
``!xxxxxxxx``. The same long name always produces the same ID across
restarts. The probability of collision with a real public-key-derived ID
is ~1 in 4 billion per pair, which is negligible in practice.
Parameters:
long_name: Node long name used as the hash input.
Returns:
Canonical ``!xxxxxxxx`` node ID string.
"""
return "!" + hashlib.sha256(long_name.encode("utf-8")).hexdigest()[:8]
def _pubkey_prefix_to_node_id(contacts: dict, pubkey_prefix: str) -> str | None:
"""Look up a canonical node ID by six-byte public-key prefix.
Parameters:
contacts: Mapping of full ``public_key`` hex strings to contact dicts.
pubkey_prefix: Twelve-character hex string (six bytes) as used in
MeshCore direct-message events.
Returns:
Canonical ``!xxxxxxxx`` node ID for the first matching contact, or
``None`` when no contact's public key starts with *pubkey_prefix*.
"""
for pub_key in contacts:
if pub_key.startswith(pubkey_prefix):
return _meshcore_node_id(pub_key)
return None
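The identifier derivations are pure string slicing over the key material; this standalone sketch mirrors ``_meshcore_node_id`` and ``_meshcore_short_name`` above (the public key is a made-up example):

```python
def meshcore_node_id(public_key_hex):
    # First four bytes (eight hex chars) of the public key, as "!xxxxxxxx".
    if not public_key_hex or len(public_key_hex) < 8:
        return None
    return "!" + public_key_hex[:8].lower()

def meshcore_short_name(node_id):
    # First two bytes (four hex chars) of the node id itself.
    if not node_id:
        return ""
    raw = node_id.lstrip("!")
    return raw[:4].lower() if len(raw) >= 4 else ""
```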
@@ -0,0 +1,159 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Live MeshCore interface and the connection-stage shutdown sentinel."""
from __future__ import annotations
import asyncio
import threading
from .decode import _contact_to_node_dict
from .identity import _meshcore_node_id, _pubkey_prefix_to_node_id
class ClosedBeforeConnectedError(ConnectionError):
"""Raised when :meth:`_MeshcoreInterface.close` is called while the
connection coroutine is still waiting for the device handshake to complete.
This is a :exc:`ConnectionError` subclass so callers that only handle the
base class continue to work, while callers that need to distinguish a
user-initiated shutdown from a hardware failure can catch this type
specifically.
"""
class _MeshcoreInterface:
"""Live MeshCore interface managing an asyncio event loop in a background thread.
Holds connection state, a thread-safe snapshot of known contacts, and the
handles needed to shut down cleanly when the daemon requests a disconnect.
"""
host_node_id: str | None = None
"""Canonical ``!xxxxxxxx`` identifier for the connected host device."""
def __init__(self, *, target: str | None) -> None:
"""Initialise the interface with the connection *target*."""
self._target = target
self._mc: object | None = None
self._loop: asyncio.AbstractEventLoop | None = None
self._thread: threading.Thread | None = None
self._stop_event: asyncio.Event | None = None
self._contacts_lock = threading.Lock()
self._contacts: dict = {}
self.isConnected: bool = False
# Tracks synthetic node IDs already upserted this session to avoid
# repeating the HTTP POST for every message from the same unknown sender.
# This set is reset on reconnect (because _MeshcoreInterface is recreated),
# which may cause extra upserts after a disconnect — the ON CONFLICT guard
# in the Ruby web app ensures those are idempotent and safe.
self._synthetic_node_ids: set[str] = set()
self._self_info_payload: dict | None = None
"""Most recent SELF_INFO payload received from the device, or ``None``."""
# ------------------------------------------------------------------
# Contact management (called from the asyncio thread)
# ------------------------------------------------------------------
def _update_contact(self, contact: dict) -> None:
"""Thread-safely add or update a contact in the local snapshot.
Parameters:
contact: Contact dict from a ``CONTACTS``, ``NEW_CONTACT``, or
``NEXT_CONTACT`` event.
"""
pub_key = contact.get("public_key")
if pub_key:
with self._contacts_lock:
self._contacts[pub_key] = contact
def contacts_snapshot(self) -> list[tuple[str, dict]]:
"""Return a thread-safe snapshot of all known contacts as node entries.
Returns:
List of ``(canonical_node_id, node_dict)`` pairs, skipping any
contact whose public key cannot be mapped to a valid node ID.
"""
with self._contacts_lock:
items = list(self._contacts.items())
result = []
for pub_key, contact in items:
node_id = _meshcore_node_id(pub_key)
if node_id is not None:
result.append((node_id, _contact_to_node_dict(contact)))
return result
def lookup_node_id(self, pubkey_prefix: str) -> str | None:
"""Return the canonical node ID for the contact matching *pubkey_prefix*.
Parameters:
pubkey_prefix: Twelve-character hex string (six bytes) from a
``CONTACT_MSG_RECV`` event.
Returns:
Canonical ``!xxxxxxxx`` node ID, or ``None`` when no match.
"""
with self._contacts_lock:
return _pubkey_prefix_to_node_id(self._contacts, pubkey_prefix)
def lookup_node_id_by_name(self, adv_name: str) -> str | None:
"""Return the canonical node ID for the contact whose ``adv_name`` matches.
Used to resolve the sender of a MeshCore channel message from the
``"SenderName: body"`` text prefix when no ``pubkey_prefix`` is
available in the event payload. The comparison is case-sensitive
because ``adv_name`` values come verbatim from the MeshCore firmware.
Parameters:
adv_name: Advertised name to look up. Leading and trailing
whitespace is stripped before comparison.
Returns:
Canonical ``!xxxxxxxx`` node ID, or ``None`` when no contact with
that name is known.
"""
name = adv_name.strip() if adv_name else ""
if not name:
return None
with self._contacts_lock:
for pub_key, contact in self._contacts.items():
contact_name = (contact.get("adv_name") or "").strip()
if contact_name == name:
return _meshcore_node_id(pub_key)
return None
# ------------------------------------------------------------------
# Lifecycle
# ------------------------------------------------------------------
def close(self) -> None:
"""Signal the background event loop to stop and wait for the thread.
Safe to call multiple times and from any thread.
"""
self.isConnected = False
loop = self._loop
stop_event = self._stop_event
if loop is not None and not loop.is_closed():
try:
if stop_event is not None:
loop.call_soon_threadsafe(stop_event.set)
else:
loop.call_soon_threadsafe(loop.stop)
except RuntimeError:
pass
thread = self._thread
if thread is not None and thread.is_alive():
thread.join(timeout=5.0)
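The ``close()`` handshake above can be modelled in miniature. This is a self-contained sketch, not the real ``_MeshcoreInterface``: the loop parks on an ``asyncio.Event`` in a daemon thread, and the foreground thread signals shutdown with ``call_soon_threadsafe``, the only safe way to poke a loop owned by another thread:

```python
import asyncio
import threading

loop = asyncio.new_event_loop()
stop_event = None
ready = threading.Event()

async def run_until_stopped():
    global stop_event
    stop_event = asyncio.Event()
    ready.set()  # tell the foreground thread the Event now exists
    await stop_event.wait()

thread = threading.Thread(
    target=loop.run_until_complete, args=(run_until_stopped(),), daemon=True
)
thread.start()

ready.wait(timeout=5.0)
# Setting the event from this thread directly would be unsafe; schedule
# it onto the loop instead, exactly as close() does above.
loop.call_soon_threadsafe(stop_event.set)
thread.join(timeout=5.0)
```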
@@ -0,0 +1,130 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Sender-side fingerprinting and parsing helpers for MeshCore messages."""
from __future__ import annotations
import hashlib
import time
from ._constants import _MENTION_RE, _MESHCORE_ID_MASK
def _derive_message_id(
sender_identity: str,
sender_ts: int,
discriminator: str,
text: str,
) -> int:
"""Derive a stable 53-bit message ID from sender-side MeshCore fields.
MeshCore does not assign firmware-side packet IDs. This function produces
a deterministic 53-bit integer fingerprint of a physical transmission so
that the same packet heard by multiple ingestors collapses to a single
``messages`` row via the ``messages.id`` PRIMARY KEY upsert path. Every
component of the fingerprint is sender-side, ensuring two receivers with
different clocks or roster state still compute the same value.
Parameters:
sender_identity: Stable sender identifier shared across receivers.
For channel messages this is the lowercased+stripped sender name
parsed from the message text via :func:`_parse_sender_name`; for
direct messages it is the sender's MeshCore ``pubkey_prefix``.
Must be a string (use ``""`` when unavailable).
sender_ts: Unix timestamp from the sender's clock (identical across
receivers regardless of receiver-side clock skew).
discriminator: Namespace tag separating message classes that could
otherwise collide. ``"c<N>"`` is reserved for channel messages
on channel ``N``; ``"dm"`` is reserved for direct messages.
text: Message text exactly as transmitted by the sender.
Returns:
A non-negative 53-bit integer suitable for the ``id`` column. The
value is bounded by ``0 <= id <= (1 << 53) - 1`` so it survives the
JSON → JavaScript number round-trip without precision loss.
"""
# The ``v1:`` prefix lets us evolve the fingerprint format (e.g. add a
# channel-secret hash) by bumping to ``v2:`` without colliding with
# existing ids written under the v1 scheme.
fingerprint = f"v1:{sender_identity}:{sender_ts}:{discriminator}:{text}"
digest = hashlib.sha256(fingerprint.encode("utf-8", errors="replace")).digest()
return int.from_bytes(digest[:7], "big") & _MESHCORE_ID_MASK
def _parse_sender_name(text: str) -> str | None:
"""Extract the sender name from a MeshCore channel message text.
MeshCore channel messages use the convention ``"SenderName: body"``.
Only the first colon is treated as the separator; colons that appear in the
body are preserved. The sender name is stripped of leading and trailing
whitespace.
Parameters:
text: Raw message text as stored in the database.
Returns:
Stripped sender name string, or ``None`` when the text does not
contain a colon or the portion before the colon is blank.
"""
colon_idx = text.find(":")
if colon_idx < 0:
return None
name = text[:colon_idx].strip()
return name if name else None
def _extract_mention_names(text: str) -> list[str]:
"""Extract all ``@[Name]`` mention names from a MeshCore message body.
Parameters:
text: Raw message text that may contain ``@[Name]`` mention patterns.
Returns:
List of extracted name strings (may be empty).
"""
return _MENTION_RE.findall(text)
def _synthetic_node_dict(long_name: str) -> dict:
"""Build a synthetic node dict for an unknown MeshCore channel sender.
Synthetic nodes are placeholder entries created when a channel message
arrives from a sender who is not yet in the connected device's contacts
roster. They carry ``role=COMPANION`` (the only role capable of sending
channel messages). The short name is intentionally omitted here — the
Ruby web app derives it at query time via
``meshcore_companion_display_short_name`` for all COMPANION nodes.
When the real contact advertisement is later received, the Ruby web app
detects the matching long name, migrates all messages from the synthetic
node ID to the real one, and removes the placeholder row.
Parameters:
long_name: Sender name parsed from the ``"SenderName: body"`` prefix.
Returns:
Node dict compatible with the ``POST /api/nodes`` payload format,
with ``user.synthetic`` set to ``True``.
"""
return {
"lastHeard": int(time.time()),
"protocol": "meshcore",
"user": {
"longName": long_name,
"shortName": "",
"role": "COMPANION",
"synthetic": True,
},
}
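The fingerprint's cross-receiver stability can be demonstrated standalone. This sketch mirrors ``_derive_message_id``; the mask value is an assumption inferred from the 53-bit docstring (``_MESHCORE_ID_MASK`` is not shown in this diff), and the sender/timestamp/text values are made up:

```python
import hashlib

MESHCORE_ID_MASK = (1 << 53) - 1  # assumed value of _MESHCORE_ID_MASK

def derive_message_id(sender_identity, sender_ts, discriminator, text):
    # Every component is sender-side, so two receivers with different
    # clocks or roster state compute the same id for the same packet.
    fingerprint = f"v1:{sender_identity}:{sender_ts}:{discriminator}:{text}"
    digest = hashlib.sha256(fingerprint.encode("utf-8", errors="replace")).digest()
    return int.from_bytes(digest[:7], "big") & MESHCORE_ID_MASK

# Same transmission heard by two ingestors -> one messages row.
a = derive_message_id("alice", 1714000000, "c0", "Alice: hi all")
b = derive_message_id("alice", 1714000000, "c0", "Alice: hi all")
```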
@@ -0,0 +1,69 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Forward MeshCore advertised positions to ``POST /api/positions``."""
from __future__ import annotations
import hashlib
import time
from ... import queue as _queue
from ...serialization import _iso, _node_num_from_id
def _store_meshcore_position(
node_id: str,
lat: float,
lon: float,
position_time: int | None,
ingestor: str | None,
) -> None:
"""Enqueue a ``POST /api/positions`` for a MeshCore contact's advertised position.
MeshCore does not issue dedicated position packets; position data is embedded
in contact advertisements. A stable pseudo-ID is derived from the node
identity and the position timestamp so repeated advertisements of the same
position are idempotently de-duplicated by the web app's ``ON CONFLICT``
clause.
Parameters:
node_id: Canonical ``!xxxxxxxx`` node identifier.
lat: Latitude in decimal degrees.
lon: Longitude in decimal degrees.
position_time: Unix timestamp from the contact's ``last_advert`` field,
or ``None`` to fall back to the current wall-clock time.
ingestor: Canonical node ID of the host ingestor, or ``None``.
"""
rx_time = int(time.time())
pt = position_time or rx_time
# Stable 63-bit pseudo-ID unique to (node, position_time) so that the web
# app ON CONFLICT clause de-duplicates repeated advertisements of the same
# position without collisions between different nodes.
digest = hashlib.sha256(f"{node_id}:{pt}".encode()).digest()
pos_id = int.from_bytes(digest[:8], "big") & 0x7FFFFFFFFFFFFFFF
node_num = _node_num_from_id(node_id)
payload = {
"id": pos_id,
"rx_time": rx_time,
"rx_iso": _iso(rx_time),
"node_id": node_id,
"node_num": node_num,
"from_id": node_id,
"latitude": lat,
"longitude": lon,
"position_time": pt,
"ingestor": ingestor,
}
_queue._queue_post_json("/api/positions", payload)
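The pseudo-ID derivation above is self-contained enough to sketch in isolation. The helper below mirrors the scheme (the function name is illustrative, not part of the module): same `(node, position_time)` pair always yields the same 63-bit ID, so repeated advertisements collide on purpose while different nodes diverge.

```python
import hashlib

def stable_position_id(node_id: str, position_time: int) -> int:
    # First 8 digest bytes, top bit cleared -> fits a signed 64-bit column.
    digest = hashlib.sha256(f"{node_id}:{position_time}".encode()).digest()
    return int.from_bytes(digest[:8], "big") & 0x7FFFFFFFFFFFFFFF

# Repeated advertisements of the same position map to the same ID;
# a different node at the same timestamp maps elsewhere.
a = stable_position_id("!aabbccdd", 1_700_000_000)
b = stable_position_id("!aabbccdd", 1_700_000_000)
c = stable_position_id("!11223344", 1_700_000_000)
assert a == b and a != c and 0 <= a < 2**63
```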
@@ -0,0 +1,196 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Public ``MeshcoreProvider`` satisfying the :class:`MeshProtocol` interface."""
from __future__ import annotations
import asyncio
import sys
import threading
from ... import config
from ._constants import _CONNECT_TIMEOUT_SECS
from .decode import _self_info_to_node_dict
from .identity import _meshcore_node_id
from .interface import _MeshcoreInterface
class MeshcoreProvider:
"""MeshCore ingestion provider.
Connects to a MeshCore node via serial port, BLE, or TCP/IP. The
connection type is inferred from the target string; see :meth:`connect`
for routing rules.
The provider runs MeshCore's ``asyncio`` event loop in a background daemon
thread. Incoming ``SELF_INFO``, ``CONTACTS``, ``NEW_CONTACT``,
``CHANNEL_MSG_RECV``, and ``CONTACT_MSG_RECV`` events are forwarded to the
HTTP ingest queue via the shared handler functions.
"""
name = "meshcore"
def subscribe(self) -> list[str]:
"""Return subscribed topic names.
MeshCore uses an ``asyncio`` event system rather than a pubsub bus,
so there are no topics to register at startup.
"""
return []
def connect(
self, *, active_candidate: str | None
) -> tuple[object, str | None, str | None]:
"""Connect to a MeshCore node via serial, BLE, or TCP.
Starts an asyncio event loop in a background daemon thread, performs
the MeshCore companion-protocol handshake, and blocks until the node's
self-info is received or the timeout expires.
Connection type is inferred from *active_candidate* (or
:data:`~data.mesh_ingestor.config.CONNECTION`):
* BLE MAC / UUID → :class:`meshcore.BLEConnection`
* ``host:port`` → :class:`meshcore.TCPConnection`
* serial path → :class:`meshcore.SerialConnection`
* ``None`` / empty → first candidate from
:func:`~data.mesh_ingestor.connection.default_serial_targets`
Parameters:
active_candidate: Previously resolved connection target, or
``None`` to fall back to
:data:`~data.mesh_ingestor.config.CONNECTION`.
Returns:
``(iface, resolved_target, next_active_candidate)`` matching the
:class:`~data.mesh_ingestor.provider.Provider` contract.
Raises:
ConnectionError: When the node does not complete the handshake
within :data:`_CONNECT_TIMEOUT_SECS` seconds.
"""
target: str | None = active_candidate or config.CONNECTION
if not target:
# Look up via the package so test fakes installed via
# ``monkeypatch.setattr(mod, "default_serial_targets", ...)`` apply.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
candidates = pkg.default_serial_targets()
target = candidates[0] if candidates else "/dev/ttyACM0"
config._debug_log(
"Connecting to MeshCore node",
context="meshcore.connect",
target=target,
)
iface = _MeshcoreInterface(target=target)
connected_event = threading.Event()
error_holder: list = [None]
# Resolve the runner + asyncio handler via the parent package so test
# fakes installed via ``monkeypatch.setattr(mod, "_run_meshcore", ...)``
# apply at call time.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
def _run_loop() -> None:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# Second line of defence around issue #754: if a detached task
# inside the upstream ``meshcore`` library ever raises an
# exception we do not anticipate in ``_meshcore_patches``, funnel
# it through our logger instead of the default handler (which
# only writes ``Task exception was never retrieved`` to stderr).
loop.set_exception_handler(pkg._log_unhandled_loop_exception)
iface._loop = loop
try:
loop.run_until_complete(
pkg._run_meshcore(iface, target, connected_event, error_holder)
)
finally:
loop.close()
thread = threading.Thread(target=_run_loop, name="meshcore-loop", daemon=True)
iface._thread = thread
thread.start()
if not connected_event.wait(timeout=_CONNECT_TIMEOUT_SECS):
iface.close()
raise ConnectionError(
f"Timed out waiting for MeshCore node at {target!r} "
f"after {_CONNECT_TIMEOUT_SECS:g}s."
)
if error_holder[0] is not None:
iface.close()
raise error_holder[0]
return iface, target, target
def extract_host_node_id(self, iface: object) -> str | None:
"""Return the canonical ``!xxxxxxxx`` host node ID from the interface.
Parameters:
iface: Active :class:`_MeshcoreInterface` returned by
:meth:`connect`.
"""
return getattr(iface, "host_node_id", None)
def self_node_item(self, iface: object) -> tuple[str, dict] | None:
"""Return the ``(node_id, node_dict)`` pair for the host self-node.
Uses the most recently cached ``SELF_INFO`` payload stored on the
interface. Returns ``None`` when no SELF_INFO has been received yet
or when the public key cannot be mapped to a valid node ID.
Parameters:
iface: Active :class:`_MeshcoreInterface` instance.
Returns:
``(canonical_node_id, node_dict)`` tuple or ``None``.
"""
if not isinstance(iface, _MeshcoreInterface):
return None
payload = getattr(iface, "_self_info_payload", None)
if not payload:
return None
node_id = _meshcore_node_id(payload.get("public_key", ""))
if not node_id:
return None
return node_id, _self_info_to_node_dict(payload)
def node_snapshot_items(self, iface: object) -> list[tuple[str, dict]]:
"""Return a snapshot of all known MeshCore contacts as node entries.
Includes the host self-node when a ``SELF_INFO`` payload has already
been received, so the initial snapshot sent by the daemon covers the
local device whenever the background event loop delivered
``SELF_INFO`` before the snapshot was taken.
Parameters:
iface: Active :class:`_MeshcoreInterface` instance. Any other
object type causes an empty list to be returned.
Returns:
List of ``(canonical_node_id, node_dict)`` pairs suitable for
passing to :func:`~data.mesh_ingestor.handlers.upsert_node`.
"""
if not isinstance(iface, _MeshcoreInterface):
return []
items: list[tuple[str, dict]] = list(iface.contacts_snapshot())
self_item = self.self_node_item(iface)
if self_item is not None:
items.append(self_item)
return items
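The handshake pattern `connect()` uses can be reduced to a minimal sketch: run the async connect on a private loop in a daemon thread, and block the calling thread on a `threading.Event` until the coroutine signals success or the timeout expires. All names here are illustrative stand-ins, not the module's API.

```python
import asyncio
import threading

def connect_with_timeout(coro_factory, timeout: float) -> threading.Thread:
    # Block the caller until the background coroutine signals, as
    # MeshcoreProvider.connect() does with connected_event/error_holder.
    connected = threading.Event()
    error: list = [None]

    def _run() -> None:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            loop.run_until_complete(coro_factory(connected, error))
        finally:
            loop.close()

    thread = threading.Thread(target=_run, daemon=True)
    thread.start()
    if not connected.wait(timeout=timeout):
        raise ConnectionError(f"Timed out after {timeout:g}s.")
    if error[0] is not None:
        raise error[0]
    return thread

async def fake_handshake(connected: threading.Event, error: list) -> None:
    # Stand-in for the real companion-protocol handshake.
    connected.set()

thread = connect_with_timeout(fake_handshake, timeout=5.0)
thread.join(timeout=5.0)
assert not thread.is_alive()
```

The real `connect()` additionally re-raises any exception stashed in the error holder and closes the interface on timeout.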
@@ -0,0 +1,152 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Asyncio entry point that drives a MeshCore connection from a worker thread."""
from __future__ import annotations
import asyncio
import sys
import threading
from ... import config
from ._constants import _DEFAULT_BAUDRATE
from .channels import _ensure_channel_names
from .connection import _make_connection
from .handlers import _make_event_handlers
from .interface import ClosedBeforeConnectedError, _MeshcoreInterface
async def _run_meshcore(
iface: _MeshcoreInterface,
target: str,
connected_event: threading.Event,
error_holder: list,
) -> None:
"""Connect to a MeshCore node and keep the event loop running until closed.
This coroutine is the single entry point for the background asyncio thread.
It connects the MeshCore library, registers event handlers, fetches the
initial contact list, starts auto-message polling, and then waits for the
:attr:`_MeshcoreInterface._stop_event` to be set.
Parameters:
iface: Shared interface object for state and contact tracking.
target: Resolved, non-empty connection target (serial, BLE, or TCP).
connected_event: Threading event signalled when the connection
succeeds or fails, to unblock the calling ``connect()`` method.
error_holder: Single-element list; set to the raised exception when
the connection attempt fails so the caller can re-raise it.
"""
# Install early so :meth:`_MeshcoreInterface.close` can signal shutdown with
# ``stop_event.set()`` instead of ``loop.stop()`` while ``connect()`` or the
# ``finally`` disconnect is still running (avoids RuntimeError from
# :meth:`asyncio.loop.run_until_complete`).
stop_event = asyncio.Event()
iface._stop_event = stop_event
# Resolve meshcore-library symbols via the parent package so test fakes
# installed via ``monkeypatch.setattr(mod, "MeshCore", ...)`` apply.
pkg = sys.modules["data.mesh_ingestor.protocols.meshcore"]
MeshCore = pkg.MeshCore
EventType = pkg.EventType
mc = None
try:
cx = _make_connection(target, _DEFAULT_BAUDRATE)
mc = MeshCore(cx)
iface._mc = mc
handlers_map = _make_event_handlers(iface, target)
for event_name, callback in handlers_map.items():
mc.subscribe(EventType[event_name], callback)
_handled_types = frozenset(EventType[n] for n in handlers_map)
# Bookkeeping events that require no action and should not be logged.
_silent_types = frozenset(
{
EventType.CONNECTED,
EventType.ACK,
EventType.OK,
EventType.ERROR,
EventType.NO_MORE_MSGS,
EventType.MESSAGES_WAITING,
EventType.MSG_SENT,
EventType.CURRENT_TIME,
}
)
async def _on_unhandled(evt) -> None:
if evt.type in _handled_types or evt.type in _silent_types:
return
# Look up via the parent package so test fakes installed via
# ``monkeypatch.setattr(mod, "_record_meshcore_message", ...)`` apply.
pkg._record_meshcore_message(
evt.payload,
source=f"{target or 'auto'}:{evt.type.name}",
)
mc.subscribe(None, _on_unhandled)
result = await mc.connect()
if result is None:
raise ConnectionError(
f"MeshCore node at {target!r} did not respond to the appstart "
"handshake. Ensure the device is running MeshCore companion-mode "
"firmware."
)
if stop_event.is_set():
raise ClosedBeforeConnectedError(
"Mesh interface close was requested before the connection could be completed."
)
iface.isConnected = True
connected_event.set()
try:
await mc.ensure_contacts()
except Exception as exc:
config._debug_log(
"Failed to fetch initial contacts",
context="meshcore.contacts",
severity="warning",
always=True,
error=str(exc),
)
try:
await _ensure_channel_names(mc)
except Exception as exc:
config._debug_log(
"Failed to fetch channel names",
context="meshcore.channels",
severity="warning",
error=str(exc),
)
await mc.start_auto_message_fetching()
await stop_event.wait()
except Exception as exc:
if not connected_event.is_set():
error_holder[0] = exc
connected_event.set()
finally:
if mc is not None:
try:
await mc.disconnect()
except Exception:
pass
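The stop-event shutdown that `_run_meshcore` relies on (park on `await stop_event.wait()`, then have another thread request shutdown via `loop.call_soon_threadsafe` rather than `loop.stop()`) can be sketched like this; the helper names are illustrative:

```python
import asyncio
import threading

async def run_until_stopped(holder: dict) -> str:
    # Create the stop event on the loop thread, publish it, then park
    # until another thread requests shutdown -- as _run_meshcore does.
    holder["stop"] = asyncio.Event()
    holder["ready"].set()
    await holder["stop"].wait()
    return "stopped"

def demo() -> str:
    loop = asyncio.new_event_loop()
    holder: dict = {"ready": threading.Event()}
    result: list = [None]

    def _run() -> None:
        asyncio.set_event_loop(loop)
        try:
            result[0] = loop.run_until_complete(run_until_stopped(holder))
        finally:
            loop.close()

    thread = threading.Thread(target=_run, daemon=True)
    thread.start()
    holder["ready"].wait(timeout=5.0)
    # Signal shutdown from the calling thread; run_until_complete then
    # returns normally instead of raising from a stopped loop.
    loop.call_soon_threadsafe(holder["stop"].set)
    thread.join(timeout=5.0)
    return result[0]

assert demo() == "stopped"
```

Setting the event lets `run_until_complete` finish cleanly, which is why `close()` prefers it over `loop.stop()` while `connect()` is still in flight.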
@@ -191,6 +191,15 @@ class TestCandidateNodeId:
result = ifaces._candidate_node_id({"items": [{"fromId": "!aabbccdd"}]})
assert result == "!aabbccdd"
def test_unknown_section_value_scanned(self):
"""Mapping values under arbitrary keys are recursively scanned.
Exercises the ``else`` branch of the values-loop (non-list/tuple value)
when the parent key is not one of the recognised section names.
"""
result = ifaces._candidate_node_id({"misc_section": {"fromId": "!aabbccdd"}})
assert result == "!aabbccdd"
# ---------------------------------------------------------------------------
# _has_field
@@ -448,6 +457,61 @@ class TestRegionFrequency:
)
assert ifaces._region_frequency(msg) == 999
def test_enum_name_without_any_digits_returns_name(self):
"""Enum name with no extractable digits is returned as-is."""
enum_val = SimpleNamespace(name="UNSET")
enum_type = SimpleNamespace(values_by_number={0: enum_val})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"region": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc, override_frequency=None, region=0)
assert ifaces._region_frequency(msg) == "UNSET"
# ---------------------------------------------------------------------------
# _resolve_lora_message
# ---------------------------------------------------------------------------
class TestResolveLoraMessage:
"""Tests for :func:`interfaces._resolve_lora_message`."""
def test_none_returns_none(self):
"""A ``None`` ``local_config`` short-circuits."""
assert ifaces._resolve_lora_message(None) is None
def test_radio_section_lora_via_has_field(self):
"""Resolves ``radio.lora`` when exposed via ``HasField``."""
radio_section = SimpleNamespace(
HasField=lambda name: name == "lora", lora="radio_lora"
)
local_config = SimpleNamespace(HasField=lambda name: False, radio=radio_section)
assert ifaces._resolve_lora_message(local_config) == "radio_lora"
def test_radio_section_lora_via_hasattr(self):
"""Resolves ``radio.lora`` via ``hasattr`` when ``HasField`` is silent.
The ``radio_section`` exposes ``HasField`` returning ``False`` so
``_has_field`` produces ``False`` for ``"lora"``, forcing the
``hasattr`` fallback path to be taken before returning the value.
"""
radio_section = SimpleNamespace(
HasField=lambda name: False, lora="radio_lora_attr"
)
local_config = SimpleNamespace(HasField=lambda name: False, radio=radio_section)
assert ifaces._resolve_lora_message(local_config) == "radio_lora_attr"
def test_local_config_lora_via_hasattr_only(self):
"""Resolves ``local_config.lora`` via ``hasattr`` when no ``HasField`` match."""
local_config = SimpleNamespace(
HasField=lambda name: False, lora="bare_lora", radio=None
)
assert ifaces._resolve_lora_message(local_config) == "bare_lora"
def test_no_lora_anywhere_returns_none(self):
"""No ``lora`` attribute on either section returns ``None``."""
local_config = SimpleNamespace(HasField=lambda name: False, radio=None)
assert ifaces._resolve_lora_message(local_config) is None
# ---------------------------------------------------------------------------
# _camelcase_enum_name
@@ -477,6 +541,10 @@ class TestCamelcaseEnumName:
"""Digits in the name are preserved."""
assert ifaces._camelcase_enum_name("BAND_915") == "Band915"
def test_only_separators_returns_none(self):
"""A string consisting only of separators yields no usable parts."""
assert ifaces._camelcase_enum_name("___") is None
# ---------------------------------------------------------------------------
# _modem_preset
@@ -523,6 +591,30 @@ class TestModemPreset:
msg = SimpleNamespace(DESCRIPTOR=desc, preset=1)
assert ifaces._modem_preset(msg) == "ShortFast"
def test_attr_preset_fallback_when_no_modem_preset(self):
"""Falls back to ``preset`` attribute when ``modem_preset`` is absent.
Exercises the ``hasattr(lora_message, 'preset')`` branch when the
descriptor lacks both fields and the object only exposes ``preset``.
"""
class _PresetOnly:
DESCRIPTOR = None
preset = "LONG_FAST"
assert ifaces._modem_preset(_PresetOnly()) == "LongFast"
def test_unparseable_preset_value_returns_none(self):
"""A non-string, non-enum-resolvable preset value returns None."""
# Field present in descriptor but enum_type lookup yields a non-string
# (e.g., a numeric mapping with no name). ``preset_value`` is also a
# plain int (not a string), so neither name nor string fallback applies.
enum_type = SimpleNamespace(values_by_number={})
field_desc = SimpleNamespace(enum_type=enum_type)
desc = SimpleNamespace(fields_by_name={"modem_preset": field_desc})
msg = SimpleNamespace(DESCRIPTOR=desc, modem_preset=99)
assert ifaces._modem_preset(msg) is None
# ---------------------------------------------------------------------------
# _ensure_radio_metadata caching
@@ -540,6 +632,18 @@ class TestEnsureRadioMetadata:
assert config.LORA_FREQ == original_freq
assert config.MODEM_PRESET == original_preset
def test_unresolvable_lora_message_returns_without_writing(self, monkeypatch):
"""When ``_resolve_lora_message`` returns ``None``, config is left alone."""
monkeypatch.setattr(config, "LORA_FREQ", None)
monkeypatch.setattr(config, "MODEM_PRESET", None)
# ``localConfig`` exists but has no lora/radio, so resolve returns None.
local_config = SimpleNamespace(HasField=lambda name: False, radio=None)
local_node = SimpleNamespace(localConfig=local_config)
iface = SimpleNamespace(localNode=local_node, waitForConfig=lambda: None)
ifaces._ensure_radio_metadata(iface)
assert config.LORA_FREQ is None
assert config.MODEM_PRESET is None
def test_sets_lora_freq_when_not_cached(self, monkeypatch):
"""Populates LORA_FREQ from interface when not yet configured."""
monkeypatch.setattr(config, "LORA_FREQ", None)
@@ -577,3 +681,110 @@ class TestEnsureRadioMetadata:
ifaces._ensure_radio_metadata(iface)
assert config.LORA_FREQ == 433
# ---------------------------------------------------------------------------
# _extract_host_node_id
# ---------------------------------------------------------------------------
class TestExtractHostNodeId:
"""Tests for :func:`interfaces._extract_host_node_id`."""
def test_none_iface_returns_none(self):
"""A ``None`` interface short-circuits without any attribute access."""
assert ifaces._extract_host_node_id(None) is None
# ---------------------------------------------------------------------------
# _ensure_channel_metadata
# ---------------------------------------------------------------------------
class TestEnsureChannelMetadata:
"""Tests for :func:`interfaces._ensure_channel_metadata`."""
def test_none_iface_is_noop(self, monkeypatch):
"""A ``None`` interface short-circuits without invoking ``capture_from_interface``."""
import data.mesh_ingestor.channels as _channels
called: list = []
monkeypatch.setattr(
_channels, "capture_from_interface", lambda iface: called.append(iface)
)
ifaces._ensure_channel_metadata(None)
assert called == []
def test_calls_capture_from_interface(self, monkeypatch):
"""A non-None interface delegates to ``channels.capture_from_interface``."""
import data.mesh_ingestor.channels as _channels
seen: list = []
monkeypatch.setattr(
_channels, "capture_from_interface", lambda iface: seen.append(iface)
)
sentinel = SimpleNamespace(myInfo={})
ifaces._ensure_channel_metadata(sentinel)
assert seen == [sentinel]
# ---------------------------------------------------------------------------
# _normalise_nodeinfo_packet
# ---------------------------------------------------------------------------
class TestNormaliseNodeinfoPacket:
"""Tests for :func:`interfaces._normalise_nodeinfo_packet`."""
def test_non_mapping_returns_none(self):
"""Inputs that ``_ensure_mapping`` cannot coerce return ``None``."""
# int/float values are explicitly rejected by ``_ensure_mapping``.
assert ifaces._normalise_nodeinfo_packet(42) is None
def test_mapping_with_node_id_injects_id_field(self):
"""A valid mapping has the canonical id injected when inferable."""
result = ifaces._normalise_nodeinfo_packet({"fromId": "!aabbccdd"})
assert result is not None
assert result["id"] == "!aabbccdd"
def test_mapping_keeps_existing_id_when_consistent(self):
"""A pre-existing matching ``id`` is left untouched."""
result = ifaces._normalise_nodeinfo_packet(
{"id": "!aabbccdd", "fromId": "!aabbccdd"}
)
assert result == {"id": "!aabbccdd", "fromId": "!aabbccdd"}
def test_dict_conversion_fallback(self):
"""Mapping whose ``dict(...)`` raises falls back to comprehension copy.
Exercises the inner ``except`` branch that copies via
``{key: mapping[key] for key in mapping}`` when ``dict(mapping)`` fails.
Uses a Mapping subclass whose first ``__iter__`` call raises, so the
``dict()`` constructor errors while the comprehension's second
iteration succeeds.
"""
from collections.abc import Mapping as _Mapping
class _RaisingDictMapping(_Mapping):
def __init__(self, payload: dict) -> None:
self._payload = payload
self._first_iter_done = False
def __iter__(self):
if not self._first_iter_done:
self._first_iter_done = True
raise RuntimeError("simulated iteration failure")
yield from self._payload
def __getitem__(self, key):
return self._payload[key]
def __len__(self):
return len(self._payload)
result = ifaces._normalise_nodeinfo_packet(
_RaisingDictMapping({"fromId": "!aabbccdd"})
)
assert result is not None
assert result["fromId"] == "!aabbccdd"
assert result["id"] == "!aabbccdd"
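The copy-with-fallback branch this test exercises follows a common defensive pattern, sketched below with an illustrative name: try the fast `dict()` constructor first, and fall back to a key-by-key comprehension when it raises.

```python
from collections.abc import Mapping

def copy_mapping(mapping: Mapping) -> dict:
    # Prefer the fast dict() constructor; fall back to a key-by-key copy
    # when the constructor raises partway through iteration.
    try:
        return dict(mapping)
    except Exception:
        return {key: mapping[key] for key in mapping}

assert copy_mapping({"fromId": "!aabbccdd"}) == {"fromId": "!aabbccdd"}
```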
@@ -1195,6 +1195,40 @@ def test_interface_close_is_idempotent():
iface.close() # must not raise
def test_interface_close_swallows_runtime_error_from_loop():
"""close() must swallow RuntimeError from loop.call_soon_threadsafe.
A race between the ``loop.is_closed()`` guard and the ``call_soon_threadsafe``
invocation can leave the loop closed by the time we schedule the stop, in
which case asyncio raises ``RuntimeError("Event loop is closed")``. ``close()``
must absorb that error so callers can treat shutdown as best-effort.
"""
iface = _MeshcoreInterface(target=None)
class _RacingLoop:
def is_closed(self):
return False
def call_soon_threadsafe(self, *_a, **_k):
raise RuntimeError("Event loop is closed")
def stop(self): # accessed as ``loop.stop`` arg in the no-stop_event branch
return None
iface._loop = _RacingLoop()
iface._stop_event = types.SimpleNamespace(set=lambda: None)
iface.close() # must not raise
assert iface.isConnected is False
# Same code path with stop_event=None exercises the loop.stop() branch.
iface2 = _MeshcoreInterface(target=None)
iface2._loop = _RacingLoop()
iface2._stop_event = None
iface2.close() # must not raise
assert iface2.isConnected is False
# ---------------------------------------------------------------------------
# _derive_message_id
# ---------------------------------------------------------------------------
@@ -1707,6 +1741,33 @@ def test_on_channel_msg_skips_empty_text(monkeypatch):
assert captured == []
def test_on_contact_msg_skips_when_text_or_sender_ts_missing(monkeypatch):
"""on_contact_msg must early-return when text is empty or sender_ts is None.
Mirrors :func:`test_on_channel_msg_skips_empty_text` for direct messages so
that a malformed CONTACT_MSG_RECV event cannot enqueue an empty packet.
"""
import asyncio
import data.mesh_ingestor as _mesh_pkg
import data.mesh_ingestor.protocols.meshcore as _mod
captured: list = []
stub = _make_stub_handlers_module()
stub.store_packet_dict = lambda pkt: captured.append(pkt)
monkeypatch.setattr(_mod.config, "_debug_log", lambda *_a, **_k: None)
monkeypatch.setattr(_mesh_pkg, "handlers", stub)
iface = _MeshcoreInterface(target=None)
iface.host_node_id = "!deadbeef"
hmap = _make_event_handlers(iface, "/dev/ttyUSB0")
asyncio.run(hmap["CONTACT_MSG_RECV"](_FakeEvt({"sender_timestamp": 1, "text": ""})))
asyncio.run(hmap["CONTACT_MSG_RECV"](_FakeEvt({"text": "hi"}))) # missing ts
asyncio.run(hmap["CONTACT_MSG_RECV"](_FakeEvt({}))) # both missing
assert captured == []
@pytest.mark.filterwarnings("ignore::pytest.PytestUnhandledThreadExceptionWarning")
def test_connect_raises_on_timeout(monkeypatch):
"""connect() raises ConnectionError when connected_event is never signalled.
@@ -2049,6 +2110,68 @@ def test_process_self_info_queues_ingestor_heartbeat_before_upsert(monkeypatch):
], "Ingestor heartbeat must be queued before node upsert"
def test_process_self_info_queues_position_when_advertised(monkeypatch):
"""_process_self_info must POST to /api/positions when adv_lat/adv_lon are set.
Covers the host-node position branch: when the connected radio reports a
GPS-fixed advertisement in its SELF_INFO, the host's own position must be
forwarded to the web backend exactly once per heartbeat.
"""
import data.mesh_ingestor.protocols.meshcore as _mod
monkeypatch.setattr(_mod.config, "_debug_log", lambda *_a, **_k: None)
monkeypatch.setattr(
_mod._ingestors, "queue_ingestor_heartbeat", lambda *_a, **_k: True
)
posted: list = []
monkeypatch.setattr(
_mod._queue,
"_queue_post_json",
lambda route, payload, **_k: posted.append((route, payload)),
)
stub = _make_stub_handlers_module()
stub.host_node_id = lambda: "!ingestor1"
payload = {
"public_key": "aabbccdd" + "00" * 28,
"name": "Host",
"adv_lat": 51.5,
"adv_lon": -0.1,
}
_process_self_info(payload, _MeshcoreInterface(target=None), stub)
position_posts = [p for r, p in posted if r == "/api/positions"]
assert len(position_posts) == 1
assert position_posts[0]["node_id"] == "!aabbccdd"
assert position_posts[0]["latitude"] == pytest.approx(51.5)
assert position_posts[0]["longitude"] == pytest.approx(-0.1)
assert position_posts[0]["ingestor"] == "!ingestor1"
def test_process_self_info_skips_position_when_latlon_absent(monkeypatch):
"""_process_self_info must not POST to /api/positions when lat/lon are absent."""
import data.mesh_ingestor.protocols.meshcore as _mod
monkeypatch.setattr(_mod.config, "_debug_log", lambda *_a, **_k: None)
monkeypatch.setattr(
_mod._ingestors, "queue_ingestor_heartbeat", lambda *_a, **_k: True
)
posted: list = []
monkeypatch.setattr(
_mod._queue,
"_queue_post_json",
lambda route, payload, **_k: posted.append(route),
)
payload = {"public_key": "aabbccdd" + "00" * 28, "name": "Host"}
_process_self_info(
payload, _MeshcoreInterface(target=None), _make_stub_handlers_module()
)
assert "/api/positions" not in posted
# ---------------------------------------------------------------------------
# _derive_modem_preset
# ---------------------------------------------------------------------------
@@ -2987,6 +3110,47 @@ def test_run_meshcore_ensure_contacts_failure_continues(monkeypatch):
assert "warning" in logged
def test_run_meshcore_ensure_channel_names_failure_continues(monkeypatch):
"""_ensure_channel_names raising must log a warning but not abort the connection.
The channel-name probe is best-effort: even when its internal try/except is
bypassed (e.g. a programming error inside ``_ensure_channel_names`` itself
or an exception from a deferred import), the outer ``_run_meshcore`` loop
must catch it so the connection stays alive.
"""
import asyncio
import data.mesh_ingestor.protocols.meshcore as _mod
import data.mesh_ingestor.protocols.meshcore.runner as _runner_mod
logged: list = []
def _capture(*_a, severity=None, **_k):
logged.append(severity)
monkeypatch.setattr(_mod.config, "_debug_log", _capture)
async def _boom(_mc):
raise RuntimeError("synthetic channel probe failure")
# Patch the binding inside runner.py — the module-level ``from .channels
# import _ensure_channel_names`` resolves the name at import time, so
# patching the package attribute alone would not reach the runner.
monkeypatch.setattr(_runner_mod, "_ensure_channel_names", _boom)
fake_mod = _make_fake_meshcore_mod()
_patch_meshcore_mod(monkeypatch, _mod, fake_mod)
iface = _MeshcoreInterface(target=None)
connected_event, error_holder = asyncio.run(
_run_until_connected(iface, "/dev/ttyUSB0", fake_mod, _mod)
)
assert connected_event.is_set()
assert error_holder[0] is None
assert "warning" in logged
def test_run_meshcore_disconnect_exception_suppressed(monkeypatch):
"""disconnect() raising in the finally block must be silently swallowed."""
import asyncio
File diff suppressed because it is too large.
@@ -0,0 +1,51 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Allowed values for the +telemetry_type+ discriminator column.
VALID_TELEMETRY_TYPES = %w[device environment power air_quality].freeze
# Half-window (seconds) for the meshcore content-level message dedup
# in +insert_message+ and the matching one-shot backfill. Set to
# roughly 3× the observed relay-retransmit delta (~10 s), so duplicates
# that differ only by genuine clock skew across co-operating ingestors
# still collapse, while legitimate re-sends ("ack", "ok", "test") at
# least 30 s apart remain distinct rows. See issue #756 and
# ``CONTRACTS.md`` for rationale.
#
# IMPORTANT: widening this value only takes effect at runtime — the
# one-shot backfill in +PotatoMesh::App::Database+ is frozen at
# +MESHCORE_CONTENT_DEDUP_BACKFILL_VERSION+. To re-sweep pre-existing
# rows that newly fall within an expanded window, bump the backfill
# version so the migration re-runs on the next deploy.
MESHCORE_CONTENT_DEDUP_WINDOW_SECONDS = 30
# Coerce a Ruby boolean into a SQLite integer (1/0) while passing through
# any other value unchanged. Used when writing boolean node fields.
#
# @param value [Boolean, Object] value to coerce.
# @return [Integer, Object] 1, 0, or the original value.
def coerce_bool(value)
case value
when true then 1
when false then 0
else value
end
end
end
end
end
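The half-window semantics of `MESHCORE_CONTENT_DEDUP_WINDOW_SECONDS` can be illustrated with a small Python sketch (the constant lives in Ruby; the helper below is only a model of the check, not the `insert_message` implementation): a row is a duplicate when a same-sender, same-text row already exists within 30 seconds.

```python
DEDUP_WINDOW_SECONDS = 30  # half-window, mirroring the Ruby constant above

def is_duplicate(candidate: tuple, existing_rows: list) -> bool:
    """True when a same-(sender, text) row exists within the half-window."""
    sender, text, rx_time = candidate
    return any(
        s == sender and t == text and abs(rx_time - rt) < DEDUP_WINDOW_SECONDS
        for s, t, rt in existing_rows
    )

rows = [("!aabbccdd", "ack", 1000)]
assert is_duplicate(("!aabbccdd", "ack", 1010), rows)      # relay retransmit ~10 s later
assert not is_duplicate(("!aabbccdd", "ack", 1035), rows)  # re-send >= 30 s later
assert not is_duplicate(("!11223344", "ack", 1010), rows)  # different sender
```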
@@ -0,0 +1,273 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Decode and store decrypted payloads in domain-specific tables.
#
# @param db [SQLite3::Database] open database handle.
# @param message [Hash] original message payload.
# @param packet_id [Integer] packet identifier for the message.
# @param decrypted [Hash] decrypted payload metadata.
# @param rx_time [Integer] receive time.
# @param rx_iso [String] ISO 8601 receive timestamp.
# @param from_id [String, nil] canonical sender identifier.
# @param to_id [String, nil] destination identifier.
# @param channel [Integer, nil] channel index.
# @param portnum [Object, nil] port number identifier.
# @param hop_limit [Integer, nil] hop limit value.
# @param snr [Numeric, nil] signal-to-noise ratio.
# @param rssi [Integer, nil] RSSI value.
# @return [void]
def store_decrypted_payload(
db,
message,
packet_id,
decrypted,
rx_time:,
rx_iso:,
from_id:,
to_id:,
channel:,
portnum:,
hop_limit:,
snr:,
rssi:
)
payload_bytes = decrypted[:payload]
return false unless payload_bytes
portnum_value = coerce_integer(portnum || decrypted[:portnum])
return false unless portnum_value
payload_b64 = Base64.strict_encode64(payload_bytes)
supported_ports = [3, 4, 67, 70, 71]
return false unless supported_ports.include?(portnum_value)
decoded = PotatoMesh::App::Meshtastic::PayloadDecoder.decode(
portnum: portnum_value,
payload_b64: payload_b64,
)
return false unless decoded.is_a?(Hash)
return false unless decoded["payload"].is_a?(Hash)
common_payload = {
"id" => packet_id,
"packet_id" => packet_id,
"rx_time" => rx_time,
"rx_iso" => rx_iso,
"from_id" => from_id,
"to_id" => to_id,
"channel" => channel,
"portnum" => portnum_value.to_s,
"hop_limit" => hop_limit,
"snr" => snr,
"rssi" => rssi,
"lora_freq" => coerce_integer(message["lora_freq"] || message["loraFrequency"]),
"modem_preset" => string_or_nil(message["modem_preset"] || message["modemPreset"]),
"payload_b64" => payload_b64,
"ingestor" => string_or_nil(message["ingestor"]),
}
case decoded["type"]
when "POSITION_APP"
payload = common_payload.merge("position" => decoded["payload"])
insert_position(db, payload)
debug_log(
"Stored decrypted position payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
when "NODEINFO_APP"
node_payload = normalize_decrypted_nodeinfo_payload(decoded["payload"])
return false unless valid_decrypted_nodeinfo_payload?(node_payload)
node_id = string_or_nil(node_payload["id"]) || from_id
node_num = coerce_integer(node_payload["num"]) ||
coerce_integer(message["from_num"]) ||
resolve_node_num(from_id, message)
node_id ||= format("!%08x", node_num & 0xFFFFFFFF) if node_num
return false unless node_id
payload = node_payload.merge(
"num" => node_num,
"lastHeard" => coerce_integer(node_payload["lastHeard"] || node_payload["last_heard"]) || rx_time,
"snr" => node_payload.key?("snr") ? node_payload["snr"] : snr,
"lora_freq" => common_payload["lora_freq"],
"modem_preset" => common_payload["modem_preset"],
)
upsert_node(db, node_id, payload)
debug_log(
"Stored decrypted node payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
node_id: node_id,
)
true
when "TELEMETRY_APP"
payload = common_payload.merge("telemetry" => decoded["payload"])
insert_telemetry(db, payload)
debug_log(
"Stored decrypted telemetry payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
when "NEIGHBORINFO_APP"
neighbor_payload = decoded["payload"]
neighbors = neighbor_payload["neighbors"]
neighbors = [] unless neighbors.is_a?(Array)
normalized_neighbors = neighbors.map do |neighbor|
next unless neighbor.is_a?(Hash)
{
"neighbor_id" => neighbor["node_id"] || neighbor["nodeId"] || neighbor["id"],
"snr" => neighbor["snr"],
"rx_time" => neighbor["last_rx_time"],
}.compact
end.compact
return false if normalized_neighbors.empty?
payload = common_payload.merge(
"node_id" => neighbor_payload["node_id"] || from_id,
"neighbors" => normalized_neighbors,
"node_broadcast_interval_secs" => neighbor_payload["node_broadcast_interval_secs"],
"last_sent_by_id" => neighbor_payload["last_sent_by_id"],
)
insert_neighbors(db, payload)
debug_log(
"Stored decrypted neighbor payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
when "TRACEROUTE_APP"
route = decoded["payload"]["route"]
route_back = decoded["payload"]["route_back"]
          hops =
            if route.is_a?(Array)
              route
            elsif route_back.is_a?(Array)
              route_back
            else
              []
            end
          dest = hops.last unless hops.empty?
src_num = coerce_integer(message["from_num"]) || resolve_node_num(from_id, message)
payload = common_payload.merge(
"src" => src_num,
"dest" => dest,
"hops" => hops,
)
insert_trace(db, payload)
debug_log(
"Stored decrypted traceroute payload",
context: "data_processing.store_decrypted_payload",
message_id: packet_id,
portnum: portnum_value,
)
true
else
false
end
end
# Validate decoded NodeInfo payloads before upserting node records.
#
# @param payload [Object] decoded payload candidate.
# @return [Boolean] true when the payload resembles a Meshtastic NodeInfo.
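      #
      # @example Illustrative payloads (hypothetical identifiers)
      #   valid_decrypted_nodeinfo_payload?({ "user" => { "id" => "!1a2b3c4d" } }) # => true
      #   valid_decrypted_nodeinfo_payload?({ "user" => {} })                      # => false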
def valid_decrypted_nodeinfo_payload?(payload)
return false unless payload.is_a?(Hash)
return false if payload.empty?
return false unless payload["user"].is_a?(Hash)
return false if payload.key?("position") && !payload["position"].is_a?(Hash)
return false if payload.key?("deviceMetrics") && !payload["deviceMetrics"].is_a?(Hash)
return false unless nodeinfo_user_has_identifying_fields?(payload["user"])
true
end
# Normalize decoded NodeInfo payload keys for +upsert_node+ compatibility.
#
# The Python decoder preserves protobuf field names, so nested hashes may
# use +snake_case+ keys that +upsert_node+ does not read.
#
# @param payload [Object] decoded NodeInfo payload.
# @return [Hash] normalized payload hash.
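      #
      # @example Snake_case keys gain camelCase aliases (illustrative input)
      #   payload = { "user" => { "short_name" => "AB12" }, "last_heard" => 123 }
      #   normalized = normalize_decrypted_nodeinfo_payload(payload)
      #   normalized["user"]["shortName"] # => "AB12"
      #   normalized["lastHeard"]         # => 123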
def normalize_decrypted_nodeinfo_payload(payload)
return {} unless payload.is_a?(Hash)
user = payload["user"]
normalized_user = user.is_a?(Hash) ? user.dup : nil
if normalized_user
normalized_user["shortName"] ||= normalized_user["short_name"]
normalized_user["longName"] ||= normalized_user["long_name"]
normalized_user["hwModel"] ||= normalized_user["hw_model"]
normalized_user["publicKey"] ||= normalized_user["public_key"]
normalized_user["isUnmessagable"] = normalized_user["is_unmessagable"] if normalized_user.key?("is_unmessagable")
end
metrics = payload["deviceMetrics"] || payload["device_metrics"]
normalized_metrics = metrics.is_a?(Hash) ? metrics.dup : nil
if normalized_metrics
normalized_metrics["batteryLevel"] ||= normalized_metrics["battery_level"]
normalized_metrics["channelUtilization"] ||= normalized_metrics["channel_utilization"]
normalized_metrics["airUtilTx"] ||= normalized_metrics["air_util_tx"]
normalized_metrics["uptimeSeconds"] ||= normalized_metrics["uptime_seconds"]
end
position = payload["position"]
normalized_position = position.is_a?(Hash) ? position.dup : nil
if normalized_position
normalized_position["precisionBits"] ||= normalized_position["precision_bits"]
normalized_position["locationSource"] ||= normalized_position["location_source"]
end
normalized = payload.dup
normalized["user"] = normalized_user if normalized_user
normalized["deviceMetrics"] = normalized_metrics if normalized_metrics
normalized["position"] = normalized_position if normalized_position
normalized["lastHeard"] ||= normalized["last_heard"]
normalized["hopsAway"] ||= normalized["hops_away"]
normalized["isFavorite"] = normalized["is_favorite"] if normalized.key?("is_favorite")
normalized["hwModel"] ||= normalized["hw_model"]
normalized
end
# Validate that a decoded NodeInfo user section contains identifying data.
#
# @param user [Hash] decoded NodeInfo user payload.
# @return [Boolean] true when at least one identifying field is present.
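      #
      # @example Blank strings do not count as identifying (hypothetical values)
      #   nodeinfo_user_has_identifying_fields?({ "longName" => "Alice" }) # => true
      #   nodeinfo_user_has_identifying_fields?({ "id" => "   " })         # => false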
def nodeinfo_user_has_identifying_fields?(user)
identifying_fields = [
user["id"],
user["shortName"],
user["short_name"],
user["longName"],
user["long_name"],
user["macaddr"],
user["hwModel"],
user["hw_model"],
user["publicKey"],
user["public_key"],
]
identifying_fields.any? do |value|
value.is_a?(String) ? !value.strip.empty? : !value.nil?
end
end
end
end
end
@@ -0,0 +1,199 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Resolve the numeric representation of a node identifier from a packet payload.
#
# The +payload["num"]+ field may arrive as an Integer, a decimal string, or
# a hexadecimal string (with or without an +0x+ prefix). When the field is
# absent or ambiguous the method falls back to decoding the hex portion of
# +node_id+.
#
# @param node_id [String, nil] canonical node identifier in +!xxxxxxxx+ form.
# @param payload [Hash] inbound message payload that may carry a +num+ field.
# @return [Integer, nil] resolved 32-bit node number or +nil+ when undecidable.
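      #
      # @example Accepted +num+ encodings (illustrative values)
      #   resolve_node_num(nil, { "num" => 439041101 }) # => 439041101
      #   resolve_node_num(nil, { "num" => "0x1a2b" })  # => 6699
      #   resolve_node_num("!0000ff42", {})             # => 65346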
def resolve_node_num(node_id, payload)
raw = payload["num"]
case raw
when Integer
return raw
when Numeric
return raw.to_i
when String
trimmed = raw.strip
return nil if trimmed.empty?
return Integer(trimmed, 10) if trimmed.match?(/\A[0-9]+\z/)
return Integer(trimmed.delete_prefix("0x").delete_prefix("0X"), 16) if trimmed.match?(/\A0[xX][0-9A-Fa-f]+\z/)
if trimmed.match?(/\A[0-9A-Fa-f]+\z/)
canonical = node_id.is_a?(String) ? node_id.strip : ""
return Integer(trimmed, 16) if canonical.match?(/\A!?[0-9A-Fa-f]+\z/)
end
end
return nil unless node_id.is_a?(String)
hex = node_id.strip
return nil if hex.empty?
hex = hex.delete_prefix("!")
return nil unless hex.match?(/\A[0-9A-Fa-f]+\z/)
Integer(hex, 16)
end
# Derive the canonical triplet for a node reference.
#
# Accepts an Integer node number, a hex string with or without the +!+
# sigil, a decimal numeric string, or a +0x+-prefixed hex string. A
# +fallback_num+ may be provided when +node_ref+ is nil.
#
# @param node_ref [Integer, String, nil] raw node identifier from a packet.
# @param fallback_num [Integer, nil] numeric fallback when +node_ref+ is nil.
# @return [Array(String, Integer, String), nil] tuple of
# +[canonical_id, node_num, short_id]+ or +nil+ when the reference cannot
# be resolved. +canonical_id+ is prefixed with +!+ and zero-padded to
# eight lowercase hex digits. +short_id+ is the upper-case last four
# hex digits used for display.
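      #
      # @example Canonical triplets (illustrative references)
      #   canonical_node_parts("!1a2b3c4d")  # => ["!1a2b3c4d", 439041101, "3C4D"]
      #   canonical_node_parts(255)          # => ["!000000ff", 255, "00FF"]
      #   canonical_node_parts("not-a-node") # => nil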
def canonical_node_parts(node_ref, fallback_num = nil)
fallback = coerce_integer(fallback_num)
hex = nil
num = nil
case node_ref
when Integer
num = node_ref
when Numeric
num = node_ref.to_i
when String
trimmed = node_ref.strip
return nil if trimmed.empty?
if trimmed.start_with?("!")
hex = trimmed.delete_prefix("!")
elsif trimmed.match?(/\A0[xX][0-9A-Fa-f]+\z/)
hex = trimmed[2..].to_s
elsif trimmed.match?(/\A-?\d+\z/)
num = trimmed.to_i
elsif trimmed.match?(/\A[0-9A-Fa-f]+\z/)
hex = trimmed
else
return nil
end
when nil
num = fallback if fallback
else
return nil
end
num ||= fallback if fallback
if hex
begin
num ||= Integer(hex, 16)
rescue ArgumentError
return nil
end
elsif num
return nil if num.negative?
hex = format("%08x", num & 0xFFFFFFFF)
else
return nil
end
return nil if hex.nil? || hex.empty?
begin
parsed = Integer(hex, 16)
rescue ArgumentError
return nil
end
parsed &= 0xFFFFFFFF
canonical_hex = format("%08x", parsed)
short_id = canonical_hex[-4, 4].upcase
["!#{canonical_hex}", parsed, short_id]
end
# Detect whether a node reference resolves to the broadcast address.
#
# @param node_ref [Integer, String, nil] raw node reference.
# @param fallback_num [Integer, nil] optional numeric fallback.
# @return [Boolean] true when the reference matches the broadcast address.
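      #
      # @example Broadcast detection (illustrative values)
      #   broadcast_node_ref?("!ffffffff")     # => true
      #   broadcast_node_ref?(nil, 0xFFFFFFFF) # => true
      #   broadcast_node_ref?("!0000ff42")     # => false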
def broadcast_node_ref?(node_ref, fallback_num = nil)
return true if fallback_num == 0xFFFFFFFF
trimmed = string_or_nil(node_ref)
return false unless trimmed
normalized = trimmed.delete_prefix("!").strip.downcase
normalized == "ffffffff"
end
# Converts a protocol identifier such as +meshtastic+ or +mesh-core+ into
# the display label used in generated node names: capitalised parts joined
# without a separator (e.g. +Meshtastic+, +MeshCore+).
#
# @param protocol [String] protocol identifier.
# @return [String] formatted display label.
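      #
      # @example
      #   protocol_display_label("meshtastic") # => "Meshtastic"
      #   protocol_display_label("mesh-core")  # => "MeshCore"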
def protocol_display_label(protocol)
protocol.split(/[-_]/).map(&:capitalize).join
end
# Returns true if +long_name+ is the synthetic placeholder generated by
# +ensure_unknown_node+ for the given +node_id+ and +protocol+. Such
# names carry no real information and must not overwrite a known name
# already on record.
#
# @param long_name [String, nil] candidate long name.
# @param node_id [String, nil] canonical node identifier.
# @param protocol [String] protocol identifier the placeholder was generated for.
# @return [Boolean] true when the long name is a generic placeholder.
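      #
      # @example Placeholder vs. user-chosen names (illustrative values)
      #   generic_fallback_name?("Meshtastic 3C4D", "!1a2b3c4d", "meshtastic") # => true
      #   generic_fallback_name?("Alice Base", "!1a2b3c4d", "meshtastic")      # => false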
def generic_fallback_name?(long_name, node_id, protocol)
return false unless long_name && !long_name.empty?
parts = canonical_node_parts(node_id)
return false unless parts
short_id = parts[2]
long_name == "#{protocol_display_label(protocol)} #{short_id}"
end
# Resolve a raw node reference to its canonical row in the +nodes+ table.
#
# @param db [SQLite3::Database] open database handle.
# @param node_ref [Object] raw reference (string, integer, or hex string).
# @return [String, nil] canonical +node_id+ or nil when no match exists.
def normalize_node_id(db, node_ref)
return nil if node_ref.nil?
ref_str = node_ref.to_s.strip
return nil if ref_str.empty?
node_id = db.get_first_value("SELECT node_id FROM nodes WHERE node_id = ?", [ref_str])
return node_id if node_id
begin
ref_num = Integer(ref_str, 10)
rescue ArgumentError
return nil
end
db.get_first_value("SELECT node_id FROM nodes WHERE num = ?", [ref_num])
end
end
end
end
@@ -0,0 +1,83 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Insert or update an ingestor heartbeat payload.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] ingestor payload from the collector.
# @return [Boolean] true when persistence succeeded.
def upsert_ingestor(db, payload)
return false unless payload.is_a?(Hash)
parts = canonical_node_parts(payload["node_id"] || payload["id"])
return false unless parts
node_id, = parts
now = Time.now.to_i
start_time = coerce_integer(payload["start_time"] || payload["startTime"]) || now
last_seen_time =
coerce_integer(payload["last_seen_time"] || payload["lastSeenTime"]) || start_time
        # Clamp both timestamps into [0, now] and keep last_seen >= start so
        # skewed collector clocks cannot produce inverted heartbeat windows.
        start_time = 0 if start_time.negative?
        last_seen_time = 0 if last_seen_time.negative?
        start_time = now if start_time > now
        last_seen_time = now if last_seen_time > now
        last_seen_time = start_time if last_seen_time < start_time
version = string_or_nil(payload["version"] || payload["ingestorVersion"])
return false unless version
lora_freq = coerce_integer(payload["lora_freq"])
modem_preset = string_or_nil(payload["modem_preset"])
protocol = string_or_nil(payload["protocol"]) || "meshtastic"
with_busy_retry do
db.execute <<~SQL, [node_id, start_time, last_seen_time, version, lora_freq, modem_preset, protocol]
INSERT INTO ingestors(node_id, start_time, last_seen_time, version, lora_freq, modem_preset, protocol)
VALUES(?,?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
start_time = CASE
WHEN excluded.start_time > ingestors.start_time THEN excluded.start_time
ELSE ingestors.start_time
END,
last_seen_time = CASE
WHEN excluded.last_seen_time > ingestors.last_seen_time THEN excluded.last_seen_time
ELSE ingestors.last_seen_time
END,
version = COALESCE(excluded.version, ingestors.version),
lora_freq = COALESCE(excluded.lora_freq, ingestors.lora_freq),
modem_preset = COALESCE(excluded.modem_preset, ingestors.modem_preset),
protocol = excluded.protocol
SQL
end
true
rescue SQLite3::SQLException => e
warn_log(
"Failed to upsert ingestor record",
context: "data_processing.ingestors",
node_id: node_id,
error_class: e.class.name,
error_message: e.message,
)
false
end
end
end
end
@@ -0,0 +1,494 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Determine whether the canonical sender identifier should override the
# sender supplied by the ingestor. MeshCore packets that include a
# +packet_id+ but no +id+ predate the canonical-id assignment, so we
# prefer the canonical lookup when both are available.
#
# @param message [Hash] inbound message payload.
# @return [Boolean] true when the canonical lookup wins.
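      #
      # @example
      #   prefer_canonical_sender?({ "packet_id" => 42 })             # => true
      #   prefer_canonical_sender?({ "packet_id" => 42, "id" => 42 }) # => false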
def prefer_canonical_sender?(message)
message.is_a?(Hash) && message.key?("packet_id") && !message.key?("id")
end
# Attempt to decrypt an encrypted Meshtastic message payload.
#
# @param message [Hash] message payload supplied by the ingestor.
# @param packet_id [Integer] message packet identifier.
# @param from_id [String, nil] canonical node identifier when available.
# @param from_num [Integer, nil] numeric node identifier when available.
# @param channel_index [Integer, nil] channel hash index.
# @return [Hash, nil] decrypted payload metadata when parsing succeeds.
def decrypt_meshtastic_message(message, packet_id, from_id, from_num, channel_index)
return nil unless message.is_a?(Hash)
cipher_b64 = string_or_nil(message["encrypted"])
return nil unless cipher_b64
if (ENV["RACK_ENV"] == "test" || ENV["APP_ENV"] == "test" || defined?(RSpec)) &&
ENV["MESHTASTIC_PSK_B64"].nil?
return nil
end
node_num = coerce_integer(from_num)
if node_num.nil?
parts = canonical_node_parts(from_id)
node_num = parts[1] if parts
end
return nil unless node_num
psk_b64 = PotatoMesh::Config.meshtastic_psk_b64
data = PotatoMesh::App::Meshtastic::Cipher.decrypt_data(
cipher_b64: cipher_b64,
packet_id: packet_id,
from_id: from_id,
from_num: node_num,
psk_b64: psk_b64,
)
return nil unless data
channel_name = nil
if channel_index.is_a?(Integer)
candidates = PotatoMesh::App::Meshtastic::RainbowTable.channel_names_for(
channel_index,
psk_b64: psk_b64,
)
channel_name = candidates.first if candidates.any?
end
{
text: data[:text],
portnum: data[:portnum],
payload: data[:payload],
channel_name: channel_name,
}
end
# Persist a chat-layer message payload, performing meshcore content
# dedup, decryption, and per-protocol bookkeeping.
#
# @param db [SQLite3::Database] open database handle.
# @param message [Hash] inbound message payload.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
def insert_message(db, message, protocol_cache: nil)
return unless message.is_a?(Hash)
msg_id = coerce_integer(message["id"] || message["packet_id"])
return unless msg_id
now = Time.now.to_i
rx_time = coerce_integer(message["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
rx_iso = string_or_nil(message["rx_iso"])
rx_iso ||= Time.at(rx_time).utc.iso8601
raw_from_id = message["from_id"]
if raw_from_id.nil? || raw_from_id.to_s.strip.empty?
alt_from = message["from"]
raw_from_id = alt_from unless alt_from.nil? || alt_from.to_s.strip.empty?
end
trimmed_from_id = string_or_nil(raw_from_id)
canonical_from_id = string_or_nil(normalize_node_id(db, raw_from_id))
from_id = trimmed_from_id
if canonical_from_id
if from_id.nil?
from_id = canonical_from_id
elsif prefer_canonical_sender?(message)
from_id = canonical_from_id
elsif from_id.start_with?("!") && from_id.casecmp(canonical_from_id) != 0
from_id = canonical_from_id
end
end
if from_id && !from_id.start_with?("^")
canonical_parts = canonical_node_parts(from_id, message["from_num"])
if canonical_parts && !from_id.start_with?("!")
from_id = canonical_parts[0]
message["from_num"] ||= canonical_parts[1]
end
end
sender_present = !from_id.nil? || !coerce_integer(message["from_num"]).nil? || !trimmed_from_id.nil?
raw_to_id = message["to_id"]
raw_to_id = message["to"] if raw_to_id.nil? || raw_to_id.to_s.strip.empty?
trimmed_to_id = string_or_nil(raw_to_id)
canonical_to_id = string_or_nil(normalize_node_id(db, raw_to_id))
to_id = trimmed_to_id
if canonical_to_id
if to_id.nil?
to_id = canonical_to_id
elsif to_id.start_with?("!") && to_id.casecmp(canonical_to_id) != 0
to_id = canonical_to_id
end
end
if to_id && !to_id.start_with?("^")
canonical_parts = canonical_node_parts(to_id, message["to_num"])
if canonical_parts && !to_id.start_with?("!")
to_id = canonical_parts[0]
message["to_num"] ||= canonical_parts[1]
end
end
encrypted = string_or_nil(message["encrypted"])
text = message["text"]
portnum = message["portnum"]
channel_index = coerce_integer(message["channel"] || message["channel_index"] || message["channelIndex"])
decrypted_payload = nil
decrypted_portnum = nil
        if encrypted && (text.nil? || text.to_s.strip.empty?)
          decrypted = decrypt_meshtastic_message(
            message,
            msg_id,
            from_id,
            message["from_num"],
            channel_index,
          )
          if decrypted
            decrypted_payload = decrypted
            decrypted_portnum = decrypted[:portnum]
          end
          # Without plaintext the ingestor-supplied portnum is unreliable;
          # drop it so only the decoded portnum (if any) is persisted.
          portnum = nil
          message.delete("portnum")
        end
lora_freq = coerce_integer(message["lora_freq"] || message["loraFrequency"])
modem_preset = string_or_nil(message["modem_preset"] || message["modemPreset"])
channel_name = string_or_nil(message["channel_name"] || message["channelName"])
reply_id = coerce_integer(message["reply_id"] || message["replyId"])
emoji = string_or_nil(message["emoji"])
ingestor = string_or_nil(message["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
row = [
msg_id,
rx_time,
rx_iso,
from_id,
to_id,
message["channel"],
portnum,
text,
encrypted,
message["snr"],
message["rssi"],
message["hop_limit"],
lora_freq,
modem_preset,
channel_name,
reply_id,
emoji,
ingestor,
protocol,
]
with_busy_retry do
# Meshcore-only content-level dedup (issue #756). The deterministic
# message id (``_derive_message_id`` in the Python ingestor) hashes
# ``sender_timestamp`` among other fields, but the MeshCore library
# has been observed delivering the same physical packet twice with
# a rewritten ``sender_timestamp`` (relay/retransmit behaviour).
# The PK path below cannot catch that — two copies compute two
# different ids — so we add a narrow content+window pre-check here.
#
# Ruby integer ``0`` is truthy, so the ``channel_index`` guard
# passes for the broadcast channel intentionally; we only skip when
# the channel is absent/nil. ``from_id`` + non-empty ``text`` keep
# encrypted or anonymous traffic on the id-PK path.
#
# Known race: the SELECT and the downstream INSERT do not share a
# transaction, so two Puma threads carrying the same content with
# different ids can both pass the pre-check and both insert. The
# deploy-time backfill sweeps the survivors; wrapping the pair in
# ``db.transaction(:immediate)`` is a future tightening if the race
# is ever observed in production.
if protocol == "meshcore" && from_id && channel_index && text && !text.to_s.empty?
# ``channel = ?`` matches the ``channel_index`` bind cleanly
# because the guard above rejects nil; ``to_id`` may legitimately
# be nil (rare meshcore fallback), so it keeps ``IS ?`` for a
# NULL-safe compare.
duplicate_id = db.get_first_value(
<<~SQL,
SELECT id FROM messages
WHERE protocol = 'meshcore'
AND from_id = ?
AND to_id IS ?
AND channel = ?
AND text = ?
AND rx_time BETWEEN ? AND ?
AND id != ?
LIMIT 1
SQL
[from_id, to_id, channel_index, text,
rx_time - MESHCORE_CONTENT_DEDUP_WINDOW_SECONDS,
rx_time + MESHCORE_CONTENT_DEDUP_WINDOW_SECONDS, msg_id],
)
if duplicate_id
debug_log(
"Skipped meshcore message duplicate",
context: "data_processing.insert_message",
new_id: msg_id,
existing_id: duplicate_id,
from_id: from_id,
channel: channel_index,
)
return
end
end
existing = db.get_first_row(
"SELECT from_id, to_id, text, encrypted, lora_freq, modem_preset, channel_name, reply_id, emoji, portnum, ingestor, protocol FROM messages WHERE id = ?",
[msg_id],
)
if existing
updates = {}
existing_text = existing.is_a?(Hash) ? existing["text"] : existing[2]
existing_text_str = existing_text&.to_s
existing_has_text = existing_text_str && !existing_text_str.strip.empty?
existing_from = existing.is_a?(Hash) ? existing["from_id"] : existing[0]
existing_from_str = existing_from&.to_s
return if !sender_present && (existing_from_str.nil? || existing_from_str.strip.empty?)
existing_encrypted = existing.is_a?(Hash) ? existing["encrypted"] : existing[3]
existing_encrypted_str = existing_encrypted&.to_s
decrypted_precedence = text && existing_encrypted_str && !existing_encrypted_str.strip.empty?
if from_id
should_update = existing_from_str.nil? || existing_from_str.strip.empty?
should_update ||= existing_from != from_id
updates["from_id"] = from_id if should_update
end
if to_id
existing_to = existing.is_a?(Hash) ? existing["to_id"] : existing[1]
existing_to_str = existing_to&.to_s
should_update = existing_to_str.nil? || existing_to_str.strip.empty?
should_update ||= existing_to != to_id
updates["to_id"] = to_id if should_update
end
          if decrypted_precedence
            updates["encrypted"] = nil
elsif encrypted && !existing_has_text
should_update = existing_encrypted_str.nil? || existing_encrypted_str.strip.empty?
should_update ||= existing_encrypted != encrypted
updates["encrypted"] = encrypted if should_update
end
if text
should_update = existing_text_str.nil? || existing_text_str.strip.empty?
should_update ||= existing_text != text
updates["text"] = text if should_update
end
if decrypted_precedence
updates["channel"] = message["channel"] if message.key?("channel")
updates["snr"] = message["snr"] if message.key?("snr")
updates["rssi"] = message["rssi"] if message.key?("rssi")
updates["hop_limit"] = message["hop_limit"] if message.key?("hop_limit")
updates["lora_freq"] = lora_freq unless lora_freq.nil?
updates["modem_preset"] = modem_preset if modem_preset
updates["channel_name"] = channel_name if channel_name
updates["rx_time"] = rx_time if rx_time
updates["rx_iso"] = rx_iso if rx_iso
end
if portnum
existing_portnum = existing.is_a?(Hash) ? existing["portnum"] : existing[9]
existing_portnum_str = existing_portnum&.to_s
should_update = existing_portnum_str.nil? || existing_portnum_str.strip.empty?
should_update ||= existing_portnum != portnum
should_update ||= decrypted_precedence
updates["portnum"] = portnum if should_update
end
unless lora_freq.nil?
existing_lora = existing.is_a?(Hash) ? existing["lora_freq"] : existing[4]
updates["lora_freq"] = lora_freq if existing_lora != lora_freq
end
if modem_preset
existing_preset = existing.is_a?(Hash) ? existing["modem_preset"] : existing[5]
existing_preset_str = existing_preset&.to_s
should_update = existing_preset_str.nil? || existing_preset_str.strip.empty?
should_update ||= existing_preset != modem_preset
updates["modem_preset"] = modem_preset if should_update
end
if channel_name
existing_channel = existing.is_a?(Hash) ? existing["channel_name"] : existing[6]
existing_channel_str = existing_channel&.to_s
should_update = existing_channel_str.nil? || existing_channel_str.strip.empty?
should_update ||= existing_channel != channel_name
updates["channel_name"] = channel_name if should_update
end
unless reply_id.nil?
existing_reply = existing.is_a?(Hash) ? existing["reply_id"] : existing[7]
updates["reply_id"] = reply_id if existing_reply != reply_id
end
if emoji
existing_emoji = existing.is_a?(Hash) ? existing["emoji"] : existing[8]
existing_emoji_str = existing_emoji&.to_s
should_update = existing_emoji_str.nil? || existing_emoji_str.strip.empty?
should_update ||= existing_emoji != emoji
updates["emoji"] = emoji if should_update
end
if ingestor
existing_ingestor = existing.is_a?(Hash) ? existing["ingestor"] : existing[10]
existing_ingestor = string_or_nil(existing_ingestor)
updates["ingestor"] = ingestor if existing_ingestor.nil?
end
existing_protocol = existing.is_a?(Hash) ? existing["protocol"] : existing[11]
return if existing_protocol && existing_protocol != "meshtastic" && existing_protocol != protocol
updates["protocol"] = protocol if (existing_protocol.nil? || existing_protocol == "meshtastic") && protocol != "meshtastic"
unless updates.empty?
assignments = updates.keys.map { |column| "#{column} = ?" }.join(", ")
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", updates.values + [msg_id])
end
else
PotatoMesh::App::Prometheus::MESSAGES_TOTAL.increment
begin
db.execute <<~SQL, row
INSERT INTO messages(id,rx_time,rx_iso,from_id,to_id,channel,portnum,text,encrypted,snr,rssi,hop_limit,lora_freq,modem_preset,channel_name,reply_id,emoji,ingestor,protocol)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
SQL
rescue SQLite3::ConstraintException
existing_row = db.get_first_row(
"SELECT text, encrypted, ingestor, protocol FROM messages WHERE id = ?",
[msg_id],
)
existing_text = existing_row.is_a?(Hash) ? existing_row["text"] : existing_row&.[](0)
existing_text_str = existing_text&.to_s
allow_encrypted_update = existing_text_str.nil? || existing_text_str.strip.empty?
existing_encrypted = existing_row.is_a?(Hash) ? existing_row["encrypted"] : existing_row&.[](1)
existing_encrypted_str = existing_encrypted&.to_s
existing_ingestor = existing_row.is_a?(Hash) ? existing_row["ingestor"] : existing_row&.[](2)
existing_ingestor = string_or_nil(existing_ingestor)
existing_fallback_protocol = existing_row.is_a?(Hash) ? existing_row["protocol"] : existing_row&.[](3)
# Guard against cross-protocol contamination in the constraint fallback path,
# mirroring the same guard applied in the primary update path above.
return if existing_fallback_protocol && existing_fallback_protocol != "meshtastic" && existing_fallback_protocol != protocol
decrypted_precedence = text && existing_encrypted_str && !existing_encrypted_str.strip.empty?
fallback_updates = {}
fallback_updates["from_id"] = from_id if from_id
fallback_updates["to_id"] = to_id if to_id
fallback_updates["text"] = text if text
fallback_updates["encrypted"] = encrypted if encrypted && allow_encrypted_update
fallback_updates["portnum"] = portnum if portnum
if decrypted_precedence
fallback_updates["channel"] = message["channel"] if message.key?("channel")
fallback_updates["snr"] = message["snr"] if message.key?("snr")
fallback_updates["rssi"] = message["rssi"] if message.key?("rssi")
fallback_updates["hop_limit"] = message["hop_limit"] if message.key?("hop_limit")
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
fallback_updates["modem_preset"] = modem_preset if modem_preset
fallback_updates["channel_name"] = channel_name if channel_name
fallback_updates["rx_time"] = rx_time if rx_time
fallback_updates["rx_iso"] = rx_iso if rx_iso
else
fallback_updates["lora_freq"] = lora_freq unless lora_freq.nil?
fallback_updates["modem_preset"] = modem_preset if modem_preset
fallback_updates["channel_name"] = channel_name if channel_name
end
fallback_updates["reply_id"] = reply_id unless reply_id.nil?
fallback_updates["emoji"] = emoji if emoji
fallback_updates["ingestor"] = ingestor if ingestor && existing_ingestor.nil?
fallback_updates["protocol"] = protocol if (existing_fallback_protocol.nil? || existing_fallback_protocol == "meshtastic") && protocol != "meshtastic"
unless fallback_updates.empty?
assignments = fallback_updates.keys.map { |column| "#{column} = ?" }.join(", ")
db.execute("UPDATE messages SET #{assignments} WHERE id = ?", fallback_updates.values + [msg_id])
end
end
end
end
stored_decrypted = nil
if decrypted_payload
stored_decrypted = store_decrypted_payload(
db,
message,
msg_id,
decrypted_payload,
rx_time: rx_time,
rx_iso: rx_iso,
from_id: from_id,
to_id: to_id,
channel: message["channel"],
portnum: portnum || decrypted_portnum,
hop_limit: message["hop_limit"],
snr: message["snr"],
rssi: message["rssi"],
)
end
if stored_decrypted && encrypted
with_busy_retry do
db.execute("UPDATE messages SET encrypted = NULL WHERE id = ?", [msg_id])
end
debug_log(
"Cleared encrypted payload after decoding",
context: "data_processing.insert_message",
message_id: msg_id,
portnum: portnum || decrypted_portnum,
)
end
should_touch_message = !stored_decrypted
if should_touch_message
ensure_unknown_node(db, from_id || raw_from_id, message["from_num"], heard_time: rx_time, protocol: protocol)
touch_node_last_seen(
db,
from_id || raw_from_id || message["from_num"],
message["from_num"],
rx_time: rx_time,
source: :message,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
ensure_unknown_node(db, to_id || raw_to_id, message["to_num"], heard_time: rx_time, protocol: protocol) if to_id || raw_to_id
if to_id || raw_to_id || message.key?("to_num")
touch_node_last_seen(
db,
to_id || raw_to_id || message["to_num"],
message["to_num"],
rx_time: rx_time,
source: :message,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
end
end
end
end
end
end
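The fallback-update branch above assembles its UPDATE statement dynamically from whichever columns were actually collected. A minimal standalone sketch of that assembly step — the table, column values, and `msg_id` here are hypothetical:

```ruby
# Only columns with a usable value are collected, mirroring insert_message.
fallback_updates = {}
fallback_updates["portnum"] = "TEXT_MESSAGE_APP"
fallback_updates["channel_name"] = "LongFast"

msg_id = 42 # hypothetical message row id

# Build "col = ?" pairs so the bound-parameter order matches
# fallback_updates.values exactly.
assignments = fallback_updates.keys.map { |column| "#{column} = ?" }.join(", ")
sql = "UPDATE messages SET #{assignments} WHERE id = ?"
params = fallback_updates.values + [msg_id]
```

Because only trusted column names are interpolated and all values stay bound as parameters, the dynamically built statement remains safe to execute.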
@@ -0,0 +1,144 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Persist a neighbours snapshot for a single reporting node.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] inbound NeighborInfo payload.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
def insert_neighbors(db, payload, protocol_cache: nil)
return unless payload.is_a?(Hash)
now = Time.now.to_i
rx_time = coerce_integer(payload["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
raw_node_id = payload["node_id"] || payload["node"] || payload["from_id"]
raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])
canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
if canonical_parts
node_id, node_num, = canonical_parts
else
node_id = string_or_nil(raw_node_id)
canonical = normalize_node_id(db, node_id || raw_node_num)
node_id = canonical if canonical
if node_id&.start_with?("!") && raw_node_num.nil?
begin
node_num = Integer(node_id.delete_prefix("!"), 16)
rescue ArgumentError
node_num = nil
end
else
node_num = raw_node_num
end
end
return unless node_id
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id.start_with?("!")
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
ensure_unknown_node(db, node_id, node_num, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(db, node_id, node_num, rx_time: rx_time, source: :neighborinfo)
neighbor_entries = []
neighbors_payload = payload["neighbors"]
neighbors_list = neighbors_payload.is_a?(Array) ? neighbors_payload : []
neighbors_list.each do |neighbor|
next unless neighbor.is_a?(Hash)
neighbor_ref = neighbor["neighbor_id"] || neighbor["node_id"] || neighbor["nodeId"] || neighbor["id"]
neighbor_num = coerce_integer(
neighbor["neighbor_num"] || neighbor["node_num"] || neighbor["nodeId"] || neighbor["id"],
)
canonical_neighbor = canonical_node_parts(neighbor_ref, neighbor_num)
if canonical_neighbor
neighbor_id, neighbor_num, = canonical_neighbor
else
neighbor_id = string_or_nil(neighbor_ref)
canonical_neighbor_id = normalize_node_id(db, neighbor_id || neighbor_num)
neighbor_id = canonical_neighbor_id if canonical_neighbor_id
if neighbor_id&.start_with?("!") && neighbor_num.nil?
begin
neighbor_num = Integer(neighbor_id.delete_prefix("!"), 16)
rescue ArgumentError
neighbor_num = nil
end
end
end
next unless neighbor_id
neighbor_id = "!#{neighbor_id.delete_prefix("!").downcase}" if neighbor_id.start_with?("!")
entry_rx_time = coerce_integer(neighbor["rx_time"]) || rx_time
entry_rx_time = now if entry_rx_time && entry_rx_time > now
snr = coerce_float(neighbor["snr"])
ensure_unknown_node(db, neighbor_id, neighbor_num, heard_time: entry_rx_time, protocol: protocol)
neighbor_entries << [neighbor_id, snr, entry_rx_time, ingestor, protocol]
end
with_busy_retry do
db.transaction do
if neighbor_entries.empty?
db.execute("DELETE FROM neighbors WHERE node_id = ?", [node_id])
else
expected_neighbors = neighbor_entries.map(&:first).uniq
existing_neighbors = db.execute(
"SELECT neighbor_id FROM neighbors WHERE node_id = ?",
[node_id],
).flatten
stale_neighbors = existing_neighbors - expected_neighbors
stale_neighbors.each_slice(500) do |slice|
placeholders = slice.map { "?" }.join(",")
db.execute(
"DELETE FROM neighbors WHERE node_id = ? AND neighbor_id IN (#{placeholders})",
[node_id] + slice,
)
end
end
neighbor_entries.each do |neighbor_id, snr_value, heard_time, reporter_id, proto|
db.execute(
<<~SQL,
INSERT INTO neighbors(node_id, neighbor_id, snr, rx_time, ingestor, protocol)
VALUES (?, ?, ?, ?, ?, ?)
ON CONFLICT(node_id, neighbor_id) DO UPDATE SET
snr = excluded.snr,
rx_time = excluded.rx_time,
ingestor = COALESCE(NULLIF(neighbors.ingestor,''), excluded.ingestor),
protocol = COALESCE(NULLIF(neighbors.protocol,'meshtastic'), excluded.protocol)
SQL
[node_id, neighbor_id, snr_value, heard_time, reporter_id, proto],
)
end
end
end
end
end
end
end
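The stale-neighbour pruning above can be sketched in isolation: diff the reported snapshot against the stored set, then delete in bounded slices so the `IN` list never exceeds SQLite's bound-parameter limit. The node and neighbour identifiers here are made up:

```ruby
node_id = "!aaaa0000"
expected_neighbors = ["!aaaa0001", "!aaaa0002"]
existing_neighbors = ["!aaaa0001", "!aaaa0003", "!aaaa0004"]

# Array difference keeps only rows no longer present in the snapshot.
stale_neighbors = existing_neighbors - expected_neighbors

# One DELETE per slice of at most 500 ids, with one placeholder per id.
deletes = stale_neighbors.each_slice(500).map do |slice|
  placeholders = slice.map { "?" }.join(",")
  ["DELETE FROM neighbors WHERE node_id = ? AND neighbor_id IN (#{placeholders})",
   [node_id] + slice]
end
```

The 500-row cap stays comfortably below SQLite's default variable limit even after prepending the `node_id` parameter.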
@@ -0,0 +1,570 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Insert a hidden placeholder node when an unknown reference is encountered.
#
# @param db [SQLite3::Database] open database handle.
# @param node_ref [Object] raw node reference from the inbound payload.
# @param fallback_num [Integer, nil] numeric fallback when +node_ref+ is nil.
# @param heard_time [Integer, nil] timestamp to record as +last_heard+/+first_heard+.
# @param protocol [String] protocol identifier for placeholder generation.
# @return [Boolean, nil] true when a row was inserted, false/nil otherwise.
def ensure_unknown_node(db, node_ref, fallback_num = nil, heard_time: nil, protocol: "meshtastic")
parts = canonical_node_parts(node_ref, fallback_num)
return unless parts
node_id, node_num, short_id = parts
return if broadcast_node_ref?(node_id, node_num)
existing = db.get_first_value(
"SELECT 1 FROM nodes WHERE node_id = ? LIMIT 1",
[node_id],
)
return if existing
long_name = "#{protocol_display_label(protocol)} #{short_id}"
default_role = case protocol
when "meshcore" then "COMPANION"
else "CLIENT_HIDDEN"
end
heard_time = coerce_integer(heard_time)
inserted = false
with_busy_retry do
db.execute(
<<~SQL,
INSERT OR IGNORE INTO nodes(node_id,num,short_name,long_name,role,last_heard,first_heard,protocol)
VALUES (?,?,?,?,?,?,?,?)
SQL
[node_id, node_num, short_id, long_name, default_role, heard_time, heard_time, protocol],
)
inserted = db.changes.positive?
end
if inserted
debug_log(
"Created hidden placeholder node",
context: "data_processing.ensure_unknown_node",
node_id: node_id,
reference: node_ref,
fallback: fallback_num,
heard_time: heard_time,
)
end
inserted
end
# Refresh a node's +last_heard+, +first_heard+, +lora_freq+, and
# +modem_preset+ columns from a freshly received packet.
#
# @param db [SQLite3::Database] open database handle.
# @param node_ref [Object] raw node reference.
# @param fallback_num [Integer, nil] numeric fallback when +node_ref+ is nil.
# @param rx_time [Integer, nil] receive timestamp; the method exits early when nil.
# @param source [Symbol, nil] originating subsystem (used for debug logs).
# @param lora_freq [Integer, nil] LoRa frequency; only updated when non-nil.
# @param modem_preset [String, nil] modem preset name; only updated when non-nil.
# @return [Boolean, nil] true when at least one row was updated; nil when no usable timestamp or node reference is available.
def touch_node_last_seen(
db,
node_ref,
fallback_num = nil,
rx_time: nil,
source: nil,
lora_freq: nil,
modem_preset: nil
)
timestamp = coerce_integer(rx_time)
return unless timestamp
node_id = nil
parts = canonical_node_parts(node_ref, fallback_num)
if parts
node_id, node_num = parts
return if broadcast_node_ref?(node_id, node_num)
end
unless node_id
trimmed = string_or_nil(node_ref)
if trimmed
node_id = normalize_node_id(db, trimmed) || trimmed
elsif fallback_num
fallback_parts = canonical_node_parts(fallback_num, nil)
node_id, = fallback_parts if fallback_parts
end
end
return if broadcast_node_ref?(node_id, fallback_num)
return unless node_id
lora_freq = coerce_integer(lora_freq)
modem_preset = string_or_nil(modem_preset)
updated = false
with_busy_retry do
db.execute <<~SQL, [timestamp, timestamp, timestamp, lora_freq, modem_preset, node_id]
UPDATE nodes
SET last_heard = CASE
WHEN COALESCE(last_heard, 0) >= ? THEN last_heard
ELSE ?
END,
first_heard = COALESCE(first_heard, ?),
lora_freq = COALESCE(?, lora_freq),
modem_preset = COALESCE(?, modem_preset)
WHERE node_id = ?
SQL
updated ||= db.changes.positive?
end
if updated
debug_log(
"Updated node last seen timestamp",
context: "data_processing.touch_node_last_seen",
node_id: node_id,
timestamp: timestamp,
source: source || :unknown,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
end
updated
end
# Insert or update a node row from an inbound NodeInfo-style payload.
#
# @param db [SQLite3::Database] open database handle.
# @param node_id [String] canonical node identifier.
# @param n [Hash] node payload extracted from the ingestor.
# @param protocol [String] protocol identifier (default +meshtastic+).
# @return [void]
def upsert_node(db, node_id, n, protocol: "meshtastic")
user = n["user"] || {}
met = n["deviceMetrics"] || {}
pos = n["position"] || {}
# nil when user info absent; COALESCE in the conflict clause preserves
# the stored role rather than overwriting with a default.
role = user["role"]
lh = coerce_integer(n["lastHeard"])
pt = coerce_integer(pos["time"])
now = Time.now.to_i
pt = nil if pt && pt > now
lh = now if lh && lh > now
# 0 is truthy in Ruby — `lh ||= now` won't replace it, leaving the
# 7-day list filter to evaluate `0 >= now-7days` → false (node hidden).
lh = nil if lh && lh <= 0
# position.time = 0 means no GPS fix; skip it as a last_heard anchor
# (would re-introduce the same zero-timestamp exclusion bug for lh).
lh = pt if pt && pt > 0 && (!lh || lh < pt)
lh ||= now
node_num = resolve_node_num(node_id, n)
update_prometheus_metrics(node_id, user, role, met, pos)
lora_freq = coerce_integer(n["lora_freq"] || n["loraFrequency"])
modem_preset = string_or_nil(n["modem_preset"] || n["modemPreset"])
# Synthetic flag: true for placeholder nodes created from channel message
# sender names before the real contact advertisement is received.
synthetic = user["synthetic"] ? 1 : 0
long_name = user["longName"]
# If the incoming long name is a generic placeholder, prefer any real
# name already on record so we never stomp known data with fallback
# text. For new nodes there is nothing to preserve, so the generic
# name is still written via the INSERT VALUES path.
long_name_conflict_sql = if generic_fallback_name?(long_name, node_id, protocol)
# Generic placeholder: keep any real name already on record.
# COALESCE returns nodes.long_name when non-null, otherwise falls
# back to the incoming generic — so brand-new nodes still get it.
"COALESCE(nodes.long_name, excluded.long_name)"
else
# Real name (or nil): use the incoming value, preserving the
# existing name only when the incoming value is nil. A nil
# long_name in the packet carries no information, so falling back
# to what we already have is better than overwriting with NULL.
"COALESCE(excluded.long_name, nodes.long_name)"
end
row = [
node_id,
node_num,
user["shortName"],
long_name,
user["macaddr"],
user["hwModel"] || n["hwModel"],
role,
user["publicKey"],
coerce_bool(user["isUnmessagable"]),
coerce_bool(n["isFavorite"]),
n["hopsAway"],
n["snr"],
lh,
lh,
met["batteryLevel"],
met["voltage"],
met["channelUtilization"],
met["airUtilTx"],
met["uptimeSeconds"],
pt,
pos["locationSource"],
coerce_integer(
pos["precisionBits"] ||
pos["precision_bits"] ||
pos.dig("raw", "precision_bits"),
),
pos["latitude"],
pos["longitude"],
pos["altitude"],
lora_freq,
modem_preset,
protocol,
synthetic,
]
with_busy_retry do
db.transaction do
db.execute(<<~SQL, row)
INSERT INTO nodes(node_id,num,short_name,long_name,macaddr,hw_model,role,public_key,is_unmessagable,is_favorite,
hops_away,snr,last_heard,first_heard,battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,
position_time,location_source,precision_bits,latitude,longitude,altitude,lora_freq,modem_preset,protocol,synthetic)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
num=COALESCE(excluded.num, nodes.num),
short_name=COALESCE(excluded.short_name, nodes.short_name),
long_name=#{long_name_conflict_sql},
macaddr=COALESCE(excluded.macaddr, nodes.macaddr),
hw_model=COALESCE(excluded.hw_model, nodes.hw_model),
role=COALESCE(excluded.role, nodes.role),
public_key=COALESCE(excluded.public_key, nodes.public_key),
is_unmessagable=COALESCE(excluded.is_unmessagable, nodes.is_unmessagable),
is_favorite=excluded.is_favorite, hops_away=excluded.hops_away, snr=excluded.snr, last_heard=excluded.last_heard,
first_heard=COALESCE(nodes.first_heard, excluded.first_heard, excluded.last_heard),
battery_level=excluded.battery_level, voltage=excluded.voltage, channel_utilization=excluded.channel_utilization,
air_util_tx=excluded.air_util_tx, uptime_seconds=excluded.uptime_seconds,
position_time=COALESCE(excluded.position_time, nodes.position_time),
location_source=COALESCE(excluded.location_source, nodes.location_source),
precision_bits=COALESCE(excluded.precision_bits, nodes.precision_bits),
latitude=COALESCE(excluded.latitude, nodes.latitude),
longitude=COALESCE(excluded.longitude, nodes.longitude),
altitude=COALESCE(excluded.altitude, nodes.altitude),
lora_freq=excluded.lora_freq, modem_preset=excluded.modem_preset,
protocol=COALESCE(NULLIF(nodes.protocol,'meshtastic'), excluded.protocol),
synthetic=MIN(COALESCE(excluded.synthetic,1), COALESCE(nodes.synthetic,1))
WHERE COALESCE(excluded.last_heard,0) >= COALESCE(nodes.last_heard,0)
AND NOT (COALESCE(nodes.synthetic,0) = 0 AND excluded.synthetic = 1)
SQL
# Reconcile synthetic placeholder rows with their real counterparts
# whenever a MeshCore node is upserted. Both directions must fire —
# the arrival order of chat messages vs contact advertisements is
# not guaranteed and may differ across co-operating ingestors that
# share this database. See issue #755.
if protocol == "meshcore" && long_name && !long_name.empty?
if synthetic == 0
merge_synthetic_nodes(db, node_id, long_name)
else
merge_into_real_node(db, node_id, long_name)
end
end
end
end
end
# Migrate messages from synthetic placeholder nodes to a newly confirmed
# real node, then remove the placeholders.
#
# Called inside a transaction from +upsert_node+ when a real (non-synthetic)
# MeshCore node with the same +long_name+ is upserted.
#
# Only +messages.from_id+ is migrated. Synthetic nodes are placeholders
# created solely from parsed channel message sender names, so they cannot
# have associated positions, telemetry, neighbors, or traces — those tables
# are intentionally left untouched.
#
# @param db [SQLite3::Database] open database connection.
# @param real_node_id [String] canonical node ID for the real contact.
# @param long_name [String] long name to match against synthetic rows.
# @return [void]
def merge_synthetic_nodes(db, real_node_id, long_name)
# long_name is user-editable and not unique across pubkeys — two real
# meshcore devices can legitimately share the same display name. When
# that happens we cannot tell which real node a given chat-derived
# synthetic was acting as placeholder for, so any merge would risk
# mis-attributing messages. Bail out and leave the synthetic intact.
other_real = db.execute(
"SELECT 1 FROM nodes WHERE long_name = ? AND synthetic = 0 AND protocol = 'meshcore' AND node_id != ? LIMIT 1",
[long_name, real_node_id],
).first
return if other_real
synthetic_ids = db.execute(
"SELECT node_id FROM nodes WHERE long_name = ? AND synthetic = 1 AND protocol = 'meshcore' AND node_id != ?",
[long_name, real_node_id],
).map { |row| row[0] }
synthetic_ids.each do |synthetic_id|
db.execute(
"UPDATE messages SET from_id = ? WHERE from_id = ?",
[real_node_id, synthetic_id],
)
db.execute(
"DELETE FROM nodes WHERE node_id = ? AND synthetic = 1",
[synthetic_id],
)
end
end
# Reverse of +merge_synthetic_nodes+: when a synthetic placeholder is
# upserted for a MeshCore sender whose real contact advertisement has
# already been stored (e.g. by a co-operating ingestor that saw the
# advertisement first), migrate any messages from the synthetic id to the
# real id and drop the synthetic row.
#
# Fixes duplication bug #755 where a chat-derived synthetic node and a
# pubkey-derived real node coexisted because the forward merge only fired
# on real-node upserts and never back-filled late-arriving synthetics.
#
# @param db [SQLite3::Database] open database connection.
# @param synthetic_node_id [String] canonical node ID of the synthetic placeholder being upserted.
# @param long_name [String] long name to match against existing real rows.
# @return [void]
def merge_into_real_node(db, synthetic_node_id, long_name)
# Index by [0] rather than the hash key so this works whether the db
# handle was opened with results_as_hash = true or not.
real_rows = db.execute(
"SELECT node_id FROM nodes WHERE long_name = ? AND synthetic = 0 AND protocol = 'meshcore' AND node_id != ? LIMIT 2",
[long_name, synthetic_node_id],
)
# Ambiguous name: two distinct real meshcore devices share this
# long_name. The synthetic placeholder could legitimately represent
# either, so we cannot pick one without risking mis-attribution. Leave
# the synthetic in place; an operator can resolve the duplicate
# manually.
return if real_rows.length > 1
row = real_rows.first
return unless row
real_node_id = row[0]
return unless real_node_id
db.execute(
"UPDATE messages SET from_id = ? WHERE from_id = ?",
[real_node_id, synthetic_node_id],
)
db.execute(
"DELETE FROM nodes WHERE node_id = ? AND synthetic = 1",
[synthetic_node_id],
)
end
# Update node row columns from a freshly observed position record.
#
# @param db [SQLite3::Database] open database handle.
# @param node_id [String, nil] canonical node identifier.
# @param node_num [Integer, nil] numeric node identifier.
# @param rx_time [Integer, nil] receive time.
# @param position_time [Integer, nil] timestamp from the position payload.
# @param location_source [String, nil] +location_source+ enum value.
# @param precision_bits [Integer, nil] horizontal precision bits.
# @param latitude [Float, nil] decoded latitude.
# @param longitude [Float, nil] decoded longitude.
# @param altitude [Float, nil] decoded altitude.
# @param snr [Float, nil] signal-to-noise ratio.
# @return [void]
def update_node_from_position(db, node_id, node_num, rx_time, position_time, location_source, precision_bits, latitude, longitude, altitude, snr)
num = coerce_integer(node_num)
id = string_or_nil(node_id)
if id&.start_with?("!")
id = "!#{id.delete_prefix("!").downcase}"
end
id ||= format("!%08x", num & 0xFFFFFFFF) if num
return unless id
now = Time.now.to_i
rx = coerce_integer(rx_time) || now
rx = now if rx && rx > now
pos_time = coerce_integer(position_time)
pos_time = nil if pos_time && pos_time > now
last_heard = [rx, pos_time].compact.max || rx
last_heard = now if last_heard && last_heard > now
loc = string_or_nil(location_source)
lat = coerce_float(latitude)
lon = coerce_float(longitude)
alt = coerce_float(altitude)
precision = coerce_integer(precision_bits)
snr_val = coerce_float(snr)
update_prometheus_metrics(node_id, nil, nil, nil, {
"latitude" => lat,
"longitude" => lon,
"altitude" => alt,
})
row = [
id,
num,
last_heard,
last_heard,
pos_time,
loc,
precision,
lat,
lon,
alt,
snr_val,
]
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO nodes(node_id,num,last_heard,first_heard,position_time,location_source,precision_bits,latitude,longitude,altitude,snr)
VALUES (?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(node_id) DO UPDATE SET
num=COALESCE(excluded.num,nodes.num),
snr=COALESCE(excluded.snr,nodes.snr),
last_heard=MAX(COALESCE(nodes.last_heard,0),COALESCE(excluded.last_heard,0)),
first_heard=COALESCE(nodes.first_heard, excluded.first_heard, excluded.last_heard),
position_time=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
THEN excluded.position_time
ELSE nodes.position_time
END,
location_source=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.location_source IS NOT NULL
THEN excluded.location_source
ELSE nodes.location_source
END,
precision_bits=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.precision_bits IS NOT NULL
THEN excluded.precision_bits
ELSE nodes.precision_bits
END,
latitude=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.latitude IS NOT NULL
THEN excluded.latitude
ELSE nodes.latitude
END,
longitude=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.longitude IS NOT NULL
THEN excluded.longitude
ELSE nodes.longitude
END,
altitude=CASE
WHEN COALESCE(excluded.position_time,0) >= COALESCE(nodes.position_time,0)
AND excluded.altitude IS NOT NULL
THEN excluded.altitude
ELSE nodes.altitude
END
SQL
end
end
# Update node columns based on metrics included in a telemetry packet.
#
# @param db [SQLite3::Database] open database handle.
# @param node_id [String, nil] canonical node identifier.
# @param node_num [Integer, nil] numeric node identifier.
# @param rx_time [Integer, nil] receive time used as +last_heard+.
# @param metrics [Hash] decoded telemetry metric map.
# @param lora_freq [Integer, nil] optional LoRa frequency.
# @param modem_preset [String, nil] optional modem preset.
# @param protocol [String] protocol identifier (default +meshtastic+).
# @return [void]
def update_node_from_telemetry(
db,
node_id,
node_num,
rx_time,
metrics = {},
lora_freq: nil,
modem_preset: nil,
protocol: "meshtastic"
)
num = coerce_integer(node_num)
id = string_or_nil(node_id)
if id&.start_with?("!")
id = "!#{id.delete_prefix("!").downcase}"
end
id ||= format("!%08x", num & 0xFFFFFFFF) if num
return unless id
ensure_unknown_node(db, id, num, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(
db,
id,
num,
rx_time: rx_time,
source: :telemetry,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
battery = coerce_float(metrics[:battery_level] || metrics["battery_level"])
voltage = coerce_float(metrics[:voltage] || metrics["voltage"])
channel_util = coerce_float(metrics[:channel_utilization] || metrics["channel_utilization"])
air_util_tx = coerce_float(metrics[:air_util_tx] || metrics["air_util_tx"])
uptime = coerce_integer(metrics[:uptime_seconds] || metrics["uptime_seconds"])
update_prometheus_metrics(node_id, nil, nil, {
"batteryLevel" => battery,
"voltage" => voltage,
"uptimeSeconds" => uptime,
"channelUtilization" => channel_util,
"airUtilTx" => air_util_tx,
}, nil)
assignments = []
params = []
if num
assignments << "num = ?"
params << num
end
metric_updates = {
"battery_level" => battery,
"voltage" => voltage,
"channel_utilization" => channel_util,
"air_util_tx" => air_util_tx,
"uptime_seconds" => uptime,
}
metric_updates.each do |column, value|
next if value.nil?
assignments << "#{column} = ?"
params << value
end
return if assignments.empty?
assignments_sql = assignments.join(", ")
params << id
with_busy_retry do
db.execute("UPDATE nodes SET #{assignments_sql} WHERE node_id = ?", params)
end
end
end
end
end
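The `last_heard` normalisation in `upsert_node` is subtle enough to warrant a standalone sketch. The lambda below mirrors those guards under an assumed fixed clock (the timestamp values are arbitrary):

```ruby
now = 1_700_000_000 # assumed fixed clock for this sketch

normalize_last_heard = lambda do |last_heard, position_time|
  lh = last_heard
  pt = position_time
  pt = nil if pt && pt > now        # future position times are untrustworthy
  lh = now if lh && lh > now        # clamp future last_heard to now
  lh = nil if lh && lh <= 0         # 0 is truthy in Ruby, so drop it explicitly
  lh = pt if pt && pt > 0 && (!lh || lh < pt)
  lh || now
end
```

A zero `last_heard` with no position falls through to `now`, a future value clamps to `now`, and a newer valid position time advances the result.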
@@ -0,0 +1,226 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Persist a position payload, populate the +nodes+ table for newly seen
# senders, and update node rows with the freshest GPS fields.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] inbound position payload.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
def insert_position(db, payload, protocol_cache: nil)
pos_id = coerce_integer(payload["id"] || payload["packet_id"])
return unless pos_id
now = Time.now.to_i
rx_time = coerce_integer(payload["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
rx_iso = string_or_nil(payload["rx_iso"])
rx_iso ||= Time.at(rx_time).utc.iso8601
raw_node_id = payload["node_id"] || payload["from_id"] || payload["from"]
raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])
canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
if canonical_parts
node_id, node_num, = canonical_parts
else
node_id = string_or_nil(raw_node_id)
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
node_id ||= format("!%08x", raw_node_num & 0xFFFFFFFF) if raw_node_num
payload_for_num = payload.is_a?(Hash) ? payload.dup : {}
payload_for_num["num"] ||= raw_node_num if raw_node_num
node_num = resolve_node_num(node_id, payload_for_num)
node_num ||= raw_node_num
canonical = normalize_node_id(db, node_id || node_num)
node_id = canonical if canonical
end
lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
ensure_unknown_node(db, node_id || node_num, node_num, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(
db,
node_id || node_num,
node_num,
rx_time: rx_time,
source: :position,
lora_freq: lora_freq,
modem_preset: modem_preset,
)
to_id = string_or_nil(payload["to_id"] || payload["to"])
position_section = payload["position"].is_a?(Hash) ? payload["position"] : {}
lat = coerce_float(payload["latitude"]) || coerce_float(position_section["latitude"])
lon = coerce_float(payload["longitude"]) || coerce_float(position_section["longitude"])
alt = coerce_float(payload["altitude"]) || coerce_float(position_section["altitude"])
lat ||= begin
lat_i = coerce_integer(position_section["latitudeI"] || position_section["latitude_i"] || position_section.dig("raw", "latitude_i"))
lat_i ? lat_i / 1e7 : nil
end
lon ||= begin
lon_i = coerce_integer(position_section["longitudeI"] || position_section["longitude_i"] || position_section.dig("raw", "longitude_i"))
lon_i ? lon_i / 1e7 : nil
end
alt ||= coerce_float(position_section.dig("raw", "altitude"))
position_time = coerce_integer(
payload["position_time"] ||
position_section["time"] ||
position_section.dig("raw", "time"),
)
location_source = string_or_nil(
payload["location_source"] ||
payload["locationSource"] ||
position_section["location_source"] ||
position_section["locationSource"] ||
position_section.dig("raw", "location_source"),
)
precision_bits = coerce_integer(
payload["precision_bits"] ||
payload["precisionBits"] ||
position_section["precision_bits"] ||
position_section["precisionBits"] ||
position_section.dig("raw", "precision_bits"),
)
sats_in_view = coerce_integer(
payload["sats_in_view"] ||
payload["satsInView"] ||
position_section["sats_in_view"] ||
position_section["satsInView"] ||
position_section.dig("raw", "sats_in_view"),
)
pdop = coerce_float(
payload["pdop"] ||
payload["PDOP"] ||
position_section["pdop"] ||
position_section["PDOP"] ||
position_section.dig("raw", "PDOP") ||
position_section.dig("raw", "pdop"),
)
ground_speed = coerce_float(
payload["ground_speed"] ||
payload["groundSpeed"] ||
position_section["ground_speed"] ||
position_section["groundSpeed"] ||
position_section.dig("raw", "ground_speed"),
)
ground_track = coerce_float(
payload["ground_track"] ||
payload["groundTrack"] ||
position_section["ground_track"] ||
position_section["groundTrack"] ||
position_section.dig("raw", "ground_track"),
)
snr = coerce_float(payload["snr"] || payload["rx_snr"] || payload["rxSnr"])
rssi = coerce_integer(payload["rssi"] || payload["rx_rssi"] || payload["rxRssi"])
hop_limit = coerce_integer(payload["hop_limit"] || payload["hopLimit"])
bitfield = coerce_integer(payload["bitfield"])
payload_b64 = string_or_nil(payload["payload_b64"] || payload["payload"])
payload_b64 ||= string_or_nil(position_section.dig("payload", "__bytes_b64__"))
row = [
pos_id,
node_id,
node_num,
rx_time,
rx_iso,
position_time,
to_id,
lat,
lon,
alt,
location_source,
precision_bits,
sats_in_view,
pdop,
ground_speed,
ground_track,
snr,
rssi,
hop_limit,
bitfield,
payload_b64,
ingestor,
protocol,
]
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO positions(id,node_id,node_num,rx_time,rx_iso,position_time,to_id,latitude,longitude,altitude,location_source,
precision_bits,sats_in_view,pdop,ground_speed,ground_track,snr,rssi,hop_limit,bitfield,payload_b64,ingestor,protocol)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(id) DO UPDATE SET
node_id=COALESCE(excluded.node_id,positions.node_id),
node_num=COALESCE(excluded.node_num,positions.node_num),
rx_time=excluded.rx_time,
rx_iso=excluded.rx_iso,
position_time=COALESCE(excluded.position_time,positions.position_time),
to_id=COALESCE(excluded.to_id,positions.to_id),
latitude=COALESCE(excluded.latitude,positions.latitude),
longitude=COALESCE(excluded.longitude,positions.longitude),
altitude=COALESCE(excluded.altitude,positions.altitude),
location_source=COALESCE(excluded.location_source,positions.location_source),
precision_bits=COALESCE(excluded.precision_bits,positions.precision_bits),
sats_in_view=COALESCE(excluded.sats_in_view,positions.sats_in_view),
pdop=COALESCE(excluded.pdop,positions.pdop),
ground_speed=COALESCE(excluded.ground_speed,positions.ground_speed),
ground_track=COALESCE(excluded.ground_track,positions.ground_track),
snr=COALESCE(excluded.snr,positions.snr),
rssi=COALESCE(excluded.rssi,positions.rssi),
hop_limit=COALESCE(excluded.hop_limit,positions.hop_limit),
bitfield=COALESCE(excluded.bitfield,positions.bitfield),
payload_b64=COALESCE(excluded.payload_b64,positions.payload_b64),
ingestor=COALESCE(NULLIF(positions.ingestor,''), excluded.ingestor),
protocol=COALESCE(NULLIF(positions.protocol,'meshtastic'), excluded.protocol)
SQL
end
update_node_from_position(
db,
node_id,
node_num,
rx_time,
position_time,
location_source,
precision_bits,
lat,
lon,
alt,
snr,
)
end
end
end
end
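The coordinate fallback chain above decodes Meshtastic's fixed-point integer fields (degrees scaled by 1e7) when the float fields are absent, checking the camelCase, snake_case, and nested raw spellings in turn. A minimal sketch with made-up values:

```ruby
position_section = {
  "latitudeI" => 525_200_660,                # camelCase variant
  "raw" => { "longitude_i" => 134_049_540 }, # nested raw snake_case variant
}

# Take the first spelling that yields a value, then scale to degrees.
lat_i = position_section["latitudeI"] ||
  position_section["latitude_i"] ||
  position_section.dig("raw", "latitude_i")
lat = lat_i ? lat_i / 1e7 : nil

lon_i = position_section["longitudeI"] ||
  position_section["longitude_i"] ||
  position_section.dig("raw", "longitude_i")
lon = lon_i ? lon_i / 1e7 : nil
```

Dividing by the float literal `1e7` rather than the integer `10_000_000` guarantees a Float result even for integer inputs.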
@@ -0,0 +1,50 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Look up the protocol registered by a given ingestor node.
#
# @param db [SQLite3::Database] open database handle.
# @param ingestor_node_id [String, nil] the node_id of the reporting ingestor.
# @param cache [Hash, nil] optional per-request memoization hash; pass a shared
# Hash instance across a batch to avoid redundant DB lookups per record.
# @return [String] protocol string; defaults to "meshtastic" when absent or unknown.
def resolve_protocol(db, ingestor_node_id, cache: nil)
return "meshtastic" if ingestor_node_id.nil? || ingestor_node_id.to_s.strip.empty?
return cache[ingestor_node_id] if cache&.key?(ingestor_node_id)
result = db.get_first_value(
"SELECT protocol FROM ingestors WHERE node_id = ? LIMIT 1",
[ingestor_node_id],
) || "meshtastic"
cache[ingestor_node_id] = result if cache
result
end
private :resolve_protocol
end
end
end
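The per-batch memoization in resolve_protocol follows the cache-aside shape. The same pattern can be sketched standalone, with a stand-in `lookup` callable in place of the SQL query (both names here are illustrative, not application code):

```ruby
# Cache-aside sketch: the first call pays for the lookup, later calls with
# the same key and a shared cache Hash are served from memory. Note that a
# nil lookup result is cached as the "meshtastic" default, so unknown
# ingestors are not re-queried within a batch.
def resolve_with_cache(key, cache: nil, &lookup)
  return "meshtastic" if key.nil? || key.to_s.strip.empty?
  return cache[key] if cache&.key?(key)
  result = lookup.call(key) || "meshtastic"
  cache[key] = result if cache
  result
end

queries = 0
cache = {}
lookup = ->(key) { queries += 1; key == "!abcd1234" ? "aprs" : nil }
3.times { resolve_with_cache("!abcd1234", cache: cache, &lookup) }
queries # => 1, the two later calls hit the cache
```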
@@ -0,0 +1,69 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Halt the current request with HTTP 403 unless the request carries a
# bearer token that securely matches +API_TOKEN+.
#
# @return [void]
def require_token!
token = ENV["API_TOKEN"]
provided = request.env["HTTP_AUTHORIZATION"].to_s.sub(/\ABearer\s+/i, "")
halt 403, { error: "Forbidden" }.to_json unless token && !token.empty? && secure_token_match?(token, provided)
end
# Constant-time comparison of two API tokens to mitigate timing attacks.
#
# @param expected [String] expected token from configuration.
# @param provided [String] token supplied by the client.
# @return [Boolean] true when the tokens match in constant time.
def secure_token_match?(expected, provided)
return false unless expected.is_a?(String) && provided.is_a?(String)
expected_bytes = expected.b
provided_bytes = provided.b
return false unless expected_bytes.bytesize == provided_bytes.bytesize
Rack::Utils.secure_compare(expected_bytes, provided_bytes)
rescue Rack::Utils::SecurityError
false
end
# Read the request body up to a configured byte ceiling and halt with HTTP
# 413 when the payload exceeds the limit.
#
# @param limit [Integer, nil] optional override; falls back to
# +PotatoMesh::Config.max_json_body_bytes+ when nil or non-positive.
# @return [String] raw request body.
def read_json_body(limit: nil)
max_bytes = limit || PotatoMesh::Config.max_json_body_bytes
max_bytes = max_bytes.to_i
if max_bytes <= 0
max_bytes = PotatoMesh::Config.max_json_body_bytes
end
body = request.body.read(max_bytes + 1)
body = "" if body.nil?
halt 413, { error: "payload too large" }.to_json if body.bytesize > max_bytes
body
ensure
request.body.rewind if request.body.respond_to?(:rewind)
end
end
end
end
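secure_token_match? rejects length mismatches up front and defers the byte comparison to Rack::Utils.secure_compare. A minimal sketch of what such a constant-time comparison does internally (not Rack's actual implementation):

```ruby
# Constant-time equality sketch: XOR every byte pair and accumulate the
# differences, so the running time does not depend on where the first
# mismatch occurs. Length mismatches are rejected early, as in the
# application code above.
def constant_time_eql?(a, b)
  a = a.b
  b = b.b
  return false unless a.bytesize == b.bytesize
  diff = 0
  a.each_byte.zip(b.each_byte) { |x, y| diff |= x ^ y }
  diff.zero?
end

constant_time_eql?("secret-token", "secret-token") # => true
constant_time_eql?("secret-token", "secret-tokeN") # => false
```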
@@ -0,0 +1,547 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Ordered list of telemetry metric definitions consulted by
# +insert_telemetry+. Each entry is a tuple of
# +[column_name, coercion_type, key_map]+, where +key_map+ specifies the
# candidate field names for each source layer. Hoisted out of the method
# body to keep +insert_telemetry+ scannable; the data is otherwise
# identical to the inline definitions used previously.
TELEMETRY_METRIC_DEFINITIONS = [
[
"battery_level",
:float,
{
payload: %w[battery_level batteryLevel],
telemetry: %w[batteryLevel],
device: %w[battery_level batteryLevel],
environment: %w[battery_level batteryLevel],
},
],
[
"voltage",
:float,
{
payload: %w[voltage],
telemetry: %w[voltage],
device: %w[voltage],
environment: %w[voltage],
},
],
[
"channel_utilization",
:float,
{
payload: %w[channel_utilization channelUtilization],
telemetry: %w[channelUtilization],
device: %w[channel_utilization channelUtilization],
},
],
[
"air_util_tx",
:float,
{
payload: %w[air_util_tx airUtilTx],
telemetry: %w[airUtilTx],
device: %w[air_util_tx airUtilTx],
},
],
[
"uptime_seconds",
:integer,
{
payload: %w[uptime_seconds uptimeSeconds],
telemetry: %w[uptimeSeconds],
device: %w[uptime_seconds uptimeSeconds],
},
],
[
"temperature",
:float,
{
payload: %w[temperature temperatureC tempC],
telemetry: %w[temperature temperatureC tempC],
environment: %w[temperature temperatureC temperature_c tempC],
},
],
[
"relative_humidity",
:float,
{
payload: %w[relative_humidity relativeHumidity humidity],
telemetry: %w[relative_humidity relativeHumidity humidity],
environment: %w[relative_humidity relativeHumidity humidity],
},
],
[
"barometric_pressure",
:float,
{
payload: %w[barometric_pressure barometricPressure pressure],
telemetry: %w[barometric_pressure barometricPressure pressure],
environment: %w[barometric_pressure barometricPressure pressure],
},
],
[
"gas_resistance",
:float,
{
payload: %w[gas_resistance gasResistance],
telemetry: %w[gas_resistance gasResistance],
environment: %w[gas_resistance gasResistance],
},
],
[
"current",
:float,
{
payload: %w[current current_ma currentMa],
telemetry: %w[current current_ma currentMa],
device: %w[current current_ma currentMa],
environment: %w[current],
},
],
[
"iaq",
:integer,
{
payload: %w[iaq iaqIndex iaq_index],
telemetry: %w[iaq iaqIndex iaq_index],
environment: %w[iaq iaqIndex iaq_index],
},
],
[
"distance",
:float,
{
payload: %w[distance range rangeMeters],
telemetry: %w[distance range rangeMeters],
environment: %w[distance range rangeMeters],
},
],
[
"lux",
:float,
{
payload: %w[lux illuminance lightLux],
telemetry: %w[lux illuminance lightLux],
environment: %w[lux illuminance lightLux],
},
],
[
"white_lux",
:float,
{
payload: %w[white_lux whiteLux],
telemetry: %w[white_lux whiteLux],
environment: %w[white_lux whiteLux],
},
],
[
"ir_lux",
:float,
{
payload: %w[ir_lux irLux],
telemetry: %w[ir_lux irLux],
environment: %w[ir_lux irLux],
},
],
[
"uv_lux",
:float,
{
payload: %w[uv_lux uvLux uvIndex],
telemetry: %w[uv_lux uvLux uvIndex],
environment: %w[uv_lux uvLux uvIndex],
},
],
[
"wind_direction",
:integer,
{
payload: %w[wind_direction windDirection],
telemetry: %w[wind_direction windDirection],
environment: %w[wind_direction windDirection],
},
],
[
"wind_speed",
:float,
{
payload: %w[wind_speed windSpeed windSpeedMps],
telemetry: %w[wind_speed windSpeed windSpeedMps],
environment: %w[wind_speed windSpeed windSpeedMps],
},
],
[
"weight",
:float,
{
payload: %w[weight mass],
telemetry: %w[weight mass],
environment: %w[weight mass],
},
],
[
"wind_gust",
:float,
{
payload: %w[wind_gust windGust],
telemetry: %w[wind_gust windGust],
environment: %w[wind_gust windGust],
},
],
[
"wind_lull",
:float,
{
payload: %w[wind_lull windLull],
telemetry: %w[wind_lull windLull],
environment: %w[wind_lull windLull],
},
],
[
"radiation",
:float,
{
payload: %w[radiation radiationLevel],
telemetry: %w[radiation radiationLevel],
environment: %w[radiation radiationLevel],
},
],
[
"rainfall_1h",
:float,
{
payload: %w[rainfall_1h rainfall1h rainfallOneHour],
telemetry: %w[rainfall_1h rainfall1h rainfallOneHour],
environment: %w[rainfall_1h rainfall1h rainfallOneHour],
},
],
[
"rainfall_24h",
:float,
{
payload: %w[rainfall_24h rainfall24h rainfallTwentyFourHour],
telemetry: %w[rainfall_24h rainfall24h rainfallTwentyFourHour],
environment: %w[rainfall_24h rainfall24h rainfallTwentyFourHour],
},
],
[
"soil_moisture",
:integer,
{
payload: %w[soil_moisture soilMoisture],
telemetry: %w[soil_moisture soilMoisture],
environment: %w[soil_moisture soilMoisture],
},
],
[
"soil_temperature",
:float,
{
payload: %w[soil_temperature soilTemperature],
telemetry: %w[soil_temperature soilTemperature],
environment: %w[soil_temperature soilTemperature],
},
],
].freeze
# Resolve a telemetry metric from the provided data sources.
#
# @param key_map [Hash{Symbol=>Array<String>}] ordered mapping of source names to candidate keys.
# @param sources [Hash{Symbol=>Hash}] data structures to search for metric values.
# @param type [Symbol] coercion strategy, +:float+ or +:integer+.
# @return [Numeric, nil] coerced metric value or nil when no candidates exist.
def resolve_numeric_metric(key_map, sources, type)
key_map.each do |source, keys|
next if keys.nil? || keys.empty?
data = sources[source]
next unless data.is_a?(Hash)
keys.each do |name|
next if name.nil?
key = name.to_s
value = if data.key?(key)
data[key]
else
sym_key = key.to_sym
data.key?(sym_key) ? data[sym_key] : nil
end
next if value.nil?
coerced = case type
when :float
coerce_float(value)
when :integer
coerce_integer(value)
else
value
end
return coerced unless coerced.nil?
end
end
nil
end
private :resolve_numeric_metric
# Persist a telemetry packet and refresh the related node row.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] inbound telemetry payload.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
def insert_telemetry(db, payload, protocol_cache: nil)
return unless payload.is_a?(Hash)
telemetry_id = coerce_integer(payload["id"] || payload["packet_id"])
return unless telemetry_id
now = Time.now.to_i
rx_time = coerce_integer(payload["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
rx_iso = string_or_nil(payload["rx_iso"])
rx_iso ||= Time.at(rx_time).utc.iso8601
raw_node_id = payload["node_id"] || payload["from_id"] || payload["from"]
raw_node_num = coerce_integer(payload["node_num"]) || coerce_integer(payload["num"])
canonical_parts = canonical_node_parts(raw_node_id, raw_node_num)
if canonical_parts
node_id, node_num, = canonical_parts
else
node_id = string_or_nil(raw_node_id)
node_id = "!#{node_id.delete_prefix("!").downcase}" if node_id&.start_with?("!")
payload_for_num = payload.dup
payload_for_num["num"] ||= raw_node_num if raw_node_num
node_num = resolve_node_num(node_id, payload_for_num)
node_num ||= raw_node_num
canonical = normalize_node_id(db, node_id || node_num)
node_id = canonical if canonical
end
from_id = string_or_nil(payload["from_id"]) || node_id
to_id = string_or_nil(payload["to_id"] || payload["to"])
telemetry_time = coerce_integer(payload["telemetry_time"] || payload["time"] || payload.dig("telemetry", "time"))
telemetry_time = nil if telemetry_time && telemetry_time > now
channel = coerce_integer(payload["channel"])
portnum = string_or_nil(payload["portnum"])
hop_limit = coerce_integer(payload["hop_limit"] || payload["hopLimit"])
snr = coerce_float(payload["snr"])
rssi = coerce_integer(payload["rssi"])
bitfield = coerce_integer(payload["bitfield"])
payload_b64 = string_or_nil(payload["payload_b64"] || payload["payload"])
lora_freq = coerce_integer(payload["lora_freq"] || payload["loraFrequency"])
modem_preset = string_or_nil(payload["modem_preset"] || payload["modemPreset"])
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
telemetry_section = normalize_json_object(payload["telemetry"])
device_metrics = normalize_json_object(payload["device_metrics"] || payload["deviceMetrics"])
device_metrics ||= normalize_json_object(telemetry_section["deviceMetrics"]) if telemetry_section&.key?("deviceMetrics")
environment_metrics = normalize_json_object(payload["environment_metrics"] || payload["environmentMetrics"])
environment_metrics ||= normalize_json_object(telemetry_section["environmentMetrics"]) if telemetry_section&.key?("environmentMetrics")
power_metrics = normalize_json_object(payload["power_metrics"] || payload["powerMetrics"])
power_metrics ||= normalize_json_object(telemetry_section["powerMetrics"]) if telemetry_section&.key?("powerMetrics")
air_quality_metrics = normalize_json_object(payload["air_quality_metrics"] || payload["airQualityMetrics"])
air_quality_metrics ||= normalize_json_object(telemetry_section["airQualityMetrics"]) if telemetry_section&.key?("airQualityMetrics")
telemetry_type = string_or_nil(payload["telemetry_type"])
telemetry_type = nil unless VALID_TELEMETRY_TYPES.include?(telemetry_type)
telemetry_type ||= if device_metrics&.any?
"device"
elsif environment_metrics&.any?
"environment"
elsif power_metrics&.any?
"power"
elsif air_quality_metrics&.any?
"air_quality"
end
sources = {
payload: payload,
telemetry: telemetry_section,
device: device_metrics,
environment: environment_metrics,
}
metric_values = {}
TELEMETRY_METRIC_DEFINITIONS.each do |column, type, key_map|
value = resolve_numeric_metric(key_map, sources, type)
metric_values[column] = value unless value.nil?
end
battery_level = metric_values["battery_level"]
voltage = metric_values["voltage"]
channel_utilization = metric_values["channel_utilization"]
air_util_tx = metric_values["air_util_tx"]
uptime_seconds = metric_values["uptime_seconds"]
temperature = metric_values["temperature"]
relative_humidity = metric_values["relative_humidity"]
barometric_pressure = metric_values["barometric_pressure"]
gas_resistance = metric_values["gas_resistance"]
current = metric_values["current"]
iaq = metric_values["iaq"]
distance = metric_values["distance"]
lux = metric_values["lux"]
white_lux = metric_values["white_lux"]
ir_lux = metric_values["ir_lux"]
uv_lux = metric_values["uv_lux"]
wind_direction = metric_values["wind_direction"]
wind_speed = metric_values["wind_speed"]
weight = metric_values["weight"]
wind_gust = metric_values["wind_gust"]
wind_lull = metric_values["wind_lull"]
radiation = metric_values["radiation"]
rainfall_1h = metric_values["rainfall_1h"]
rainfall_24h = metric_values["rainfall_24h"]
soil_moisture = metric_values["soil_moisture"]
soil_temperature = metric_values["soil_temperature"]
row = [
telemetry_id,
node_id,
node_num,
from_id,
to_id,
rx_time,
rx_iso,
telemetry_time,
channel,
portnum,
hop_limit,
snr,
rssi,
bitfield,
payload_b64,
battery_level,
voltage,
channel_utilization,
air_util_tx,
uptime_seconds,
temperature,
relative_humidity,
barometric_pressure,
gas_resistance,
current,
iaq,
distance,
lux,
white_lux,
ir_lux,
uv_lux,
wind_direction,
wind_speed,
weight,
wind_gust,
wind_lull,
radiation,
rainfall_1h,
rainfall_24h,
soil_moisture,
soil_temperature,
ingestor,
protocol,
telemetry_type,
]
placeholders = Array.new(row.length, "?").join(",")
with_busy_retry do
db.execute <<~SQL, row
INSERT INTO telemetry(id,node_id,node_num,from_id,to_id,rx_time,rx_iso,telemetry_time,channel,portnum,hop_limit,snr,rssi,bitfield,payload_b64,
battery_level,voltage,channel_utilization,air_util_tx,uptime_seconds,temperature,relative_humidity,barometric_pressure,gas_resistance,current,iaq,distance,lux,white_lux,ir_lux,uv_lux,wind_direction,wind_speed,weight,wind_gust,wind_lull,radiation,rainfall_1h,rainfall_24h,soil_moisture,soil_temperature,ingestor,protocol,telemetry_type)
VALUES (#{placeholders})
ON CONFLICT(id) DO UPDATE SET
node_id=COALESCE(excluded.node_id,telemetry.node_id),
node_num=COALESCE(excluded.node_num,telemetry.node_num),
from_id=COALESCE(excluded.from_id,telemetry.from_id),
to_id=COALESCE(excluded.to_id,telemetry.to_id),
rx_time=excluded.rx_time,
rx_iso=excluded.rx_iso,
telemetry_time=COALESCE(excluded.telemetry_time,telemetry.telemetry_time),
channel=COALESCE(excluded.channel,telemetry.channel),
portnum=COALESCE(excluded.portnum,telemetry.portnum),
hop_limit=COALESCE(excluded.hop_limit,telemetry.hop_limit),
snr=COALESCE(excluded.snr,telemetry.snr),
rssi=COALESCE(excluded.rssi,telemetry.rssi),
bitfield=COALESCE(excluded.bitfield,telemetry.bitfield),
payload_b64=COALESCE(excluded.payload_b64,telemetry.payload_b64),
battery_level=COALESCE(excluded.battery_level,telemetry.battery_level),
voltage=COALESCE(excluded.voltage,telemetry.voltage),
channel_utilization=COALESCE(excluded.channel_utilization,telemetry.channel_utilization),
air_util_tx=COALESCE(excluded.air_util_tx,telemetry.air_util_tx),
uptime_seconds=COALESCE(excluded.uptime_seconds,telemetry.uptime_seconds),
temperature=COALESCE(excluded.temperature,telemetry.temperature),
relative_humidity=COALESCE(excluded.relative_humidity,telemetry.relative_humidity),
barometric_pressure=COALESCE(excluded.barometric_pressure,telemetry.barometric_pressure),
gas_resistance=COALESCE(excluded.gas_resistance,telemetry.gas_resistance),
current=COALESCE(excluded.current,telemetry.current),
iaq=COALESCE(excluded.iaq,telemetry.iaq),
distance=COALESCE(excluded.distance,telemetry.distance),
lux=COALESCE(excluded.lux,telemetry.lux),
white_lux=COALESCE(excluded.white_lux,telemetry.white_lux),
ir_lux=COALESCE(excluded.ir_lux,telemetry.ir_lux),
uv_lux=COALESCE(excluded.uv_lux,telemetry.uv_lux),
wind_direction=COALESCE(excluded.wind_direction,telemetry.wind_direction),
wind_speed=COALESCE(excluded.wind_speed,telemetry.wind_speed),
weight=COALESCE(excluded.weight,telemetry.weight),
wind_gust=COALESCE(excluded.wind_gust,telemetry.wind_gust),
wind_lull=COALESCE(excluded.wind_lull,telemetry.wind_lull),
radiation=COALESCE(excluded.radiation,telemetry.radiation),
rainfall_1h=COALESCE(excluded.rainfall_1h,telemetry.rainfall_1h),
rainfall_24h=COALESCE(excluded.rainfall_24h,telemetry.rainfall_24h),
soil_moisture=COALESCE(excluded.soil_moisture,telemetry.soil_moisture),
soil_temperature=COALESCE(excluded.soil_temperature,telemetry.soil_temperature),
ingestor=COALESCE(NULLIF(telemetry.ingestor,''), excluded.ingestor),
protocol=COALESCE(NULLIF(telemetry.protocol,'meshtastic'), excluded.protocol),
telemetry_type=COALESCE(excluded.telemetry_type,telemetry.telemetry_type)
SQL
end
update_node_from_telemetry(
db,
node_id,
node_num,
rx_time,
{
battery_level: battery_level,
voltage: voltage,
channel_utilization: channel_utilization,
air_util_tx: air_util_tx,
uptime_seconds: uptime_seconds,
},
lora_freq: lora_freq,
modem_preset: modem_preset,
protocol: protocol,
)
end
end
end
end
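resolve_numeric_metric walks the sources in key-map order and accepts both string and symbol keys. A simplified standalone sketch of that layered lookup (it skips the nil-versus-missing distinction and the coerce_float/coerce_integer helpers the real method uses):

```ruby
# Layered metric lookup sketch: consult each source hash in order, trying
# both snake_case and camelCase candidates as string or symbol keys, and
# return the first value that coerces to a Float.
def first_metric(key_map, sources)
  key_map.each do |source, keys|
    data = sources[source]
    next unless data.is_a?(Hash)
    keys.each do |name|
      value = data[name] || data[name.to_sym]
      return Float(value) if value
    end
  end
  nil
end

sources = {
  payload: { "batteryLevel" => nil },
  device: { battery_level: "87.5" },
}
first_metric({ payload: %w[battery_level batteryLevel], device: %w[battery_level batteryLevel] }, sources)
# => 87.5, found in the device-metrics layer after the payload misses
```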
@@ -0,0 +1,130 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module DataProcessing
# Normalize a traceroute hop entry to a numeric node identifier.
#
# @param hop [Object] raw hop entry from the payload.
# @return [Integer, nil] coerced node ID or nil when the value is unusable.
def coerce_trace_node_id(hop)
case hop
when Integer
return hop
when Numeric
return hop.to_i
when String
trimmed = hop.strip
return nil if trimmed.empty?
return Integer(trimmed, 10) if trimmed.match?(/\A-?\d+\z/)
parts = canonical_node_parts(trimmed)
return parts[1] if parts
when Hash
candidate = hop["node_id"] || hop[:node_id] || hop["id"] || hop[:id] || hop["num"] || hop[:num]
return coerce_trace_node_id(candidate)
end
nil
end
# Extract hop identifiers from a traceroute payload, preserving order.
#
# @param hops_value [Object] raw hops array or path collection.
# @return [Array<Integer>] ordered list of coerced hop identifiers.
def normalize_trace_hops(hops_value)
return [] if hops_value.nil?
hop_entries = hops_value.is_a?(Array) ? hops_value : [hops_value]
hop_entries.filter_map { |entry| coerce_trace_node_id(entry) }
end
# Persist a traceroute observation and its hop path.
#
# @param db [SQLite3::Database] open database handle.
# @param payload [Hash] traceroute payload as produced by the ingestor.
# @param protocol_cache [Hash, nil] optional per-batch ingestor protocol cache.
# @return [void]
def insert_trace(db, payload, protocol_cache: nil)
return unless payload.is_a?(Hash)
trace_identifier = coerce_integer(payload["id"] || payload["packet_id"] || payload["packetId"])
trace_identifier ||= coerce_integer(payload["trace_id"])
request_id = coerce_integer(payload["request_id"] || payload["req"])
trace_identifier ||= request_id
now = Time.now.to_i
rx_time = coerce_integer(payload["rx_time"])
rx_time = now if rx_time.nil? || rx_time > now
rx_iso = string_or_nil(payload["rx_iso"]) || Time.at(rx_time).utc.iso8601
metrics = normalize_json_object(payload["metrics"]) || {}
src = coerce_integer(payload["src"] || payload["source"] || payload["from"])
dest = coerce_integer(payload["dest"] || payload["destination"] || payload["to"])
rssi = coerce_integer(payload["rssi"]) || coerce_integer(metrics["rssi"])
snr = coerce_float(payload["snr"]) || coerce_float(metrics["snr"])
elapsed_ms = coerce_integer(
payload["elapsed_ms"] ||
payload["latency_ms"] ||
metrics["elapsed_ms"] ||
metrics["latency_ms"] ||
metrics["latencyMs"],
)
ingestor = string_or_nil(payload["ingestor"])
protocol = resolve_protocol(db, ingestor, cache: protocol_cache)
hops_value = payload.key?("hops") ? payload["hops"] : payload["path"]
hops = normalize_trace_hops(hops_value)
all_nodes = [src, dest, *hops].compact.uniq
all_nodes.each do |node|
ensure_unknown_node(db, node, node, heard_time: rx_time, protocol: protocol)
touch_node_last_seen(db, node, node, rx_time: rx_time, source: :trace)
end
with_busy_retry do
db.execute <<~SQL, [trace_identifier, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor, protocol]
INSERT INTO traces(id, request_id, src, dest, rx_time, rx_iso, rssi, snr, elapsed_ms, ingestor, protocol)
VALUES(?,?,?,?,?,?,?,?,?,?,?)
ON CONFLICT(id) DO UPDATE SET
request_id=COALESCE(excluded.request_id,traces.request_id),
src=COALESCE(excluded.src,traces.src),
dest=COALESCE(excluded.dest,traces.dest),
rx_time=excluded.rx_time,
rx_iso=excluded.rx_iso,
rssi=COALESCE(excluded.rssi,traces.rssi),
snr=COALESCE(excluded.snr,traces.snr),
elapsed_ms=COALESCE(excluded.elapsed_ms,traces.elapsed_ms),
ingestor=COALESCE(NULLIF(traces.ingestor,''), excluded.ingestor),
protocol=COALESCE(NULLIF(traces.protocol,'meshtastic'), excluded.protocol)
SQL
trace_id = trace_identifier || db.last_insert_row_id
return unless trace_id
db.execute("DELETE FROM trace_hops WHERE trace_id = ?", [trace_id])
hops.each_with_index do |hop_id, index|
db.execute(
"INSERT INTO trace_hops(trace_id, hop_index, node_id) VALUES(?,?,?)",
[trace_id, index, hop_id],
)
end
end
end
end
end
end
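coerce_trace_node_id and normalize_trace_hops reduce mixed hop representations — integers, numeric strings, nested hashes — to a flat list of integers. A simplified sketch of that pipeline (it omits the canonical_node_parts resolution the real code applies to "!"-prefixed node IDs):

```ruby
# Hop-normalization sketch: coerce each entry to an Integer where possible
# and silently drop anything unusable via filter_map.
def coerce_hop(hop)
  case hop
  when Integer then hop
  when Numeric then hop.to_i
  when String
    trimmed = hop.strip
    trimmed.match?(/\A-?\d+\z/) ? Integer(trimmed, 10) : nil
  when Hash
    # Recurse into the common identifier keys, as the real method does.
    coerce_hop(hop["num"] || hop[:num] || hop["id"] || hop[:id])
  end
end

def normalize_hops(hops)
  Array(hops).filter_map { |entry| coerce_hop(entry) }
end

normalize_hops([42, "7", { "num" => 9 }, "!hex-id", nil])
# => [42, 7, 9]
```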
File diff suppressed because it is too large
@@ -0,0 +1,231 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Announce the local instance record to a remote federation peer,
# cycling through resolved IP addresses when transport-level failures
# occur.
#
# @param domain [String] remote peer hostname.
# @param payload_json [String] JSON-encoded announcement body.
# @return [Boolean] true when the announcement was accepted.
def announce_instance_to_domain(domain, payload_json)
return false unless domain && !domain.empty?
return false if federation_shutdown_requested?
https_failures = []
published = instance_uri_candidates(domain, "/api/instances").any? do |uri|
break false if federation_shutdown_requested?
begin
response = perform_announce_request(uri, payload_json)
if response.is_a?(Net::HTTPSuccess)
debug_log(
"Published federation announcement",
context: "federation.announce",
target: uri.to_s,
status: response.code,
)
true
else
debug_log(
"Federation announcement failed",
context: "federation.announce",
target: uri.to_s,
status: response.code,
)
false
end
rescue StandardError => e
metadata = {
context: "federation.announce",
target: uri.to_s,
error_class: e.class.name,
error_message: e.message,
}
if uri.scheme == "https" && https_connection_refused?(e)
debug_log(
"HTTPS federation announcement failed, retrying with HTTP",
**metadata,
)
https_failures << metadata
else
warn_log(
"Federation announcement raised exception",
**metadata,
)
end
false
end
end
unless published
https_failures.each do |metadata|
warn_log(
"Federation announcement raised exception",
**metadata,
)
end
end
published
end
# Execute a POST announcement request against the supplied URI, cycling
# through resolved IP addresses on connection-level failures.
#
# @param uri [URI::Generic] target endpoint.
# @param payload_json [String] JSON-encoded announcement body.
# @return [Net::HTTPResponse] the HTTP response from the first reachable address.
# @raise [StandardError] when all addresses fail or a non-retryable error occurs.
def perform_announce_request(uri, payload_json)
remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
addresses = remote_addresses.empty? ? [nil] : remote_addresses
last_error = nil
addresses.each do |address|
break if federation_shutdown_requested?
begin
return perform_single_announce_request(uri, payload_json, ip_address: address&.to_s)
rescue StandardError => e
if connection_refused_or_unreachable?(e)
last_error = e
else
raise
end
end
end
raise(last_error || StandardError.new("all resolved addresses failed"))
end
# Execute a single POST announcement request, optionally pinning the
# connection to a specific IP address.
#
# @param uri [URI::Generic] target endpoint.
# @param payload_json [String] JSON-encoded announcement body.
# @param ip_address [String, nil] resolved IP address to pin the
# connection to, or +nil+ to let {build_remote_http_client} resolve.
# @return [Net::HTTPResponse] the HTTP response.
# @raise [StandardError] when the request fails.
def perform_single_announce_request(uri, payload_json, ip_address: nil)
http = build_remote_http_client(uri, ip_address: ip_address)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Post, uri)
request.body = payload_json
connection.request(request)
end
end
end
# Run the periodic announcement cycle by signing the local payload and
# dispatching it (preferably via the worker pool) to every peer domain.
#
# @return [void]
def announce_instance_to_all_domains
return unless federation_enabled?
return if federation_shutdown_requested?
attributes, signature = ensure_self_instance_record!
payload_json = JSON.generate(instance_announcement_payload(attributes, signature))
domains = federation_target_domains(attributes[:domain])
pool = federation_worker_pool
scheduled = []
domains.each do |domain|
break if federation_shutdown_requested?
if pool
begin
task = pool.schedule do
announce_instance_to_domain(domain, payload_json)
end
scheduled << [domain, task]
next
rescue PotatoMesh::App::WorkerPool::QueueFullError
warn_log(
"Skipped asynchronous federation announcement",
context: "federation.announce",
domain: domain,
reason: "worker queue saturated",
)
rescue PotatoMesh::App::WorkerPool::ShutdownError
warn_log(
"Worker pool unavailable, falling back to synchronous announcement",
context: "federation.announce",
domain: domain,
)
pool = nil
end
end
announce_instance_to_domain(domain, payload_json)
end
wait_for_federation_tasks(scheduled)
unless domains.empty?
debug_log(
"Federation announcement cycle complete",
context: "federation.announce",
targets: domains,
)
end
end
# Wait for scheduled federation tasks to complete while logging failures.
#
# @param scheduled [Array<(String, PotatoMesh::App::WorkerPool::Task)>] pairs of domains and tasks.
# @return [void]
def wait_for_federation_tasks(scheduled)
return if scheduled.empty?
timeout = PotatoMesh::Config.federation_task_timeout_seconds
scheduled.each do |domain, task|
break if federation_shutdown_requested?
begin
task.wait(timeout: timeout)
rescue PotatoMesh::App::WorkerPool::TaskTimeoutError => e
warn_log(
"Federation announcement task timed out",
context: "federation.announce",
domain: domain,
timeout: timeout,
error_class: e.class.name,
error_message: e.message,
)
rescue StandardError => e
warn_log(
"Federation announcement task failed",
context: "federation.announce",
domain: domain,
error_class: e.class.name,
error_message: e.message,
)
end
end
end
end
end
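perform_announce_request cycles through resolved addresses, retries only on connection-level failures, and re-raises the last error when every address fails. The strategy in isolation, with a hypothetical `retryable` predicate standing in for connection_refused_or_unreachable?:

```ruby
# Address-cycling sketch: attempt each candidate address in order; a
# retryable error moves on to the next address, any other error propagates
# immediately, and exhausting the list re-raises the last retryable error.
def try_addresses(addresses, retryable)
  last_error = nil
  addresses.each do |address|
    begin
      return yield(address)
    rescue StandardError => e
      raise unless retryable.call(e)
      last_error = e
    end
  end
  raise(last_error || StandardError.new("all resolved addresses failed"))
end

retryable = ->(e) { e.is_a?(Errno::ECONNREFUSED) }
attempts = []
result = try_addresses(%w[192.0.2.1 192.0.2.2], retryable) do |addr|
  attempts << addr
  raise Errno::ECONNREFUSED if addr == "192.0.2.1"
  "announced via #{addr}"
end
result # => "announced via 192.0.2.2"
```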
@@ -0,0 +1,98 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Spawn the long-running announcer thread that drives periodic federation
# broadcasts.
#
# @return [Thread, nil] the announcer thread, or nil when federation is disabled.
def start_federation_announcer!
# Federation broadcasts must not execute when federation support is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!
existing = settings.federation_thread
return existing if existing&.alive?
thread = Thread.new do
loop do
break unless federation_sleep_with_shutdown(PotatoMesh::Config.federation_announcement_interval)
begin
announce_instance_to_all_domains
rescue StandardError => e
warn_log(
"Federation announcement loop error",
context: "federation.announce",
error_class: e.class.name,
error_message: e.message,
)
end
end
end
thread.name = "potato-mesh-federation" if thread.respond_to?(:name=)
# Allow shutdown even if the announcement loop is still sleeping.
thread.daemon = true if thread.respond_to?(:daemon=)
set(:federation_thread, thread)
thread
end
# Launch a background thread responsible for the first federation broadcast.
#
# @return [Thread, nil] the thread handling the initial announcement.
def start_initial_federation_announcement!
# Skip the initial broadcast entirely when federation is disabled.
return nil unless federation_enabled?
clear_federation_shutdown_request!
ensure_federation_shutdown_hook!
existing = settings.respond_to?(:initial_federation_thread) ? settings.initial_federation_thread : nil
return existing if existing&.alive?
thread = Thread.new do
begin
delay = PotatoMesh::Config.initial_federation_delay_seconds
if delay.positive?
completed = federation_sleep_with_shutdown(delay)
next unless completed
end
next if federation_shutdown_requested?
announce_instance_to_all_domains
rescue StandardError => e
warn_log(
"Initial federation announcement failed",
context: "federation.announce",
error_class: e.class.name,
error_message: e.message,
)
ensure
set(:initial_federation_thread, nil)
end
end
thread.name = "potato-mesh-federation-initial" if thread.respond_to?(:name=)
thread.report_on_exception = false if thread.respond_to?(:report_on_exception=)
# Avoid blocking process shutdown during delayed startup announcements.
thread.daemon = true if thread.respond_to?(:daemon=)
set(:initial_federation_thread, thread)
thread
end
end
end
end
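
The announcer loop above depends on `federation_sleep_with_shutdown` returning `false` once shutdown is requested, so a sleeping thread wakes promptly instead of blocking process exit. A minimal standalone sketch of that interruptible-sleep pattern (class and method names here are illustrative, not the actual PotatoMesh helpers):

```ruby
require "monitor"

# Interruptible sleep: returns true when the full interval elapsed,
# false when shutdown was requested before the deadline.
class ShutdownGate
  def initialize
    @monitor = Monitor.new
    @cond = @monitor.new_cond
    @shutdown = false
  end

  def request_shutdown!
    @monitor.synchronize do
      @shutdown = true
      @cond.broadcast # wake every sleeper immediately
    end
  end

  def sleep_unless_shutdown(seconds)
    deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + seconds
    @monitor.synchronize do
      until @shutdown
        remaining = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
        return true if remaining <= 0
        @cond.wait(remaining)
      end
    end
    false
  end
end
```

Because the condition variable releases the monitor while waiting, `request_shutdown!` can run from another thread and wake the sleeper mid-interval.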
@@ -0,0 +1,369 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Resolve the best matching active-node count from a remote /api/stats payload.
#
# @param payload [Hash, nil] decoded JSON payload from /api/stats.
# @param max_age_seconds [Integer] activity window, in seconds, used to select the matching stats bucket.
# @return [Integer, nil] selected active-node count when available.
def remote_active_node_count_from_stats(payload, max_age_seconds:)
return nil unless payload.is_a?(Hash)
active_nodes = payload["active_nodes"]
return nil unless active_nodes.is_a?(Hash)
age = coerce_integer(max_age_seconds) || 0
key = if age <= 3600
"hour"
elsif age <= 86_400
"day"
elsif age <= PotatoMesh::Config.week_seconds
"week"
else
"month"
end
value = coerce_integer(active_nodes[key])
return nil unless value
[value, 0].max
end
# Parse a remote federation instance payload into canonical attributes.
#
# @param payload [Hash] JSON object describing a remote instance.
# @return [Array(Hash, String, nil), Array(nil, nil, String)] three-element
#   tuple of attribute hash, signature, and nil on success, or two nils plus
#   a failure reason when the payload is invalid.
def remote_instance_attributes_from_payload(payload)
unless payload.is_a?(Hash)
return [nil, nil, "instance payload is not an object"]
end
id = string_or_nil(payload["id"])
return [nil, nil, "missing instance id"] unless id
domain = sanitize_instance_domain(payload["domain"])
return [nil, nil, "missing instance domain"] unless domain
pubkey = sanitize_public_key_pem(payload["pubkey"])
return [nil, nil, "missing instance public key"] unless pubkey
signature = string_or_nil(payload["signature"])
return [nil, nil, "missing instance signature"] unless signature
private_value = if payload.key?("isPrivate")
payload["isPrivate"]
else
payload["is_private"]
end
private_flag = coerce_boolean(private_value)
if private_flag.nil?
numeric_flag = coerce_integer(private_value)
private_flag = !numeric_flag.to_i.zero? if numeric_flag
end
attributes = {
id: id,
domain: domain,
pubkey: pubkey,
name: string_or_nil(payload["name"]),
version: string_or_nil(payload["version"]),
channel: string_or_nil(payload["channel"]),
frequency: string_or_nil(payload["frequency"]),
latitude: coerce_float(payload["latitude"]),
longitude: coerce_float(payload["longitude"]),
last_update_time: coerce_integer(payload["lastUpdateTime"]),
is_private: private_flag,
contact_link: string_or_nil(payload["contactLink"]),
}
[attributes, signature, nil]
rescue StandardError => e
[nil, nil, e.message]
end
# Enqueue a federation crawl for the supplied domain using the worker pool.
#
# @param domain [String] sanitized remote domain to crawl.
# @param per_response_limit [Integer, nil] maximum entries processed per response.
# @param overall_limit [Integer, nil] maximum unique domains visited.
# @return [Boolean] true when the crawl was scheduled successfully.
def enqueue_federation_crawl(domain, per_response_limit:, overall_limit:)
sanitized_domain = sanitize_instance_domain(domain)
unless sanitized_domain
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: "invalid domain",
)
return false
end
return false if federation_shutdown_requested?
application = is_a?(Class) ? self : self.class
pool = application.federation_worker_pool
unless pool
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
reason: "federation disabled",
)
return false
end
claim_result = application.claim_federation_crawl_slot(sanitized_domain)
unless claim_result == :claimed
debug_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: sanitized_domain,
reason: claim_result == :in_flight ? "crawl already in flight" : "recent crawl completed",
)
return false
end
pool.schedule do
db = nil
begin
db = application.open_database
application.ingest_known_instances_from!(
db,
sanitized_domain,
per_response_limit: per_response_limit,
overall_limit: overall_limit,
)
ensure
db&.close
application.release_federation_crawl_slot(sanitized_domain)
end
end
true
rescue PotatoMesh::App::WorkerPool::QueueFullError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker queue saturated")
rescue PotatoMesh::App::WorkerPool::ShutdownError
application.handle_failed_federation_crawl_schedule(sanitized_domain, "worker pool shut down")
end
# Handle a failed crawl schedule attempt without applying cooldown.
#
# @param domain [String] canonical domain that failed to schedule.
# @param reason [String] human-readable failure reason.
# @return [Boolean] always false because scheduling did not succeed.
def handle_failed_federation_crawl_schedule(domain, reason)
release_federation_crawl_slot(domain, record_completion: false)
warn_log(
"Skipped remote instance crawl",
context: "federation.instances",
domain: domain,
reason: reason,
)
false
end
# Recursively ingest federation records exposed by the supplied domain.
#
# @param db [SQLite3::Database] open database connection used for writes.
# @param domain [String] remote domain to crawl for federation records.
# @param visited [Set<String>] domains processed during this crawl.
# @param per_response_limit [Integer, nil] maximum entries processed per response.
# @param overall_limit [Integer, nil] maximum unique domains visited.
# @return [Set<String>] updated set of visited domains.
def ingest_known_instances_from!(
db,
domain,
visited: nil,
per_response_limit: nil,
overall_limit: nil
)
sanitized = sanitize_instance_domain(domain)
return visited || Set.new unless sanitized
return visited || Set.new if federation_shutdown_requested?
visited ||= Set.new
overall_limit ||= PotatoMesh::Config.federation_max_domains_per_crawl
per_response_limit ||= PotatoMesh::Config.federation_max_instances_per_response
if overall_limit && overall_limit.positive? && visited.size >= overall_limit
debug_log(
"Skipped remote instance crawl due to crawl limit",
context: "federation.instances",
domain: sanitized,
limit: overall_limit,
)
return visited
end
return visited if visited.include?(sanitized)
visited << sanitized
payload, metadata = fetch_instance_json(sanitized, "/api/instances")
unless payload.is_a?(Array)
warn_log(
"Failed to load remote federation instances",
context: "federation.instances",
domain: sanitized,
reason: Array(metadata).map(&:to_s).join("; "),
)
return visited
end
processed_entries = 0
recent_cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
payload.each do |entry|
break if federation_shutdown_requested?
if per_response_limit && per_response_limit.positive? && processed_entries >= per_response_limit
debug_log(
"Skipped remote instance entry due to response limit",
context: "federation.instances",
domain: sanitized,
limit: per_response_limit,
)
break
end
if overall_limit && overall_limit.positive? && visited.size >= overall_limit
debug_log(
"Skipped remote instance entry due to crawl limit",
context: "federation.instances",
domain: sanitized,
limit: overall_limit,
)
break
end
processed_entries += 1
attributes, signature, reason = remote_instance_attributes_from_payload(entry)
unless attributes && signature
warn_log(
"Discarded remote instance entry",
context: "federation.instances",
domain: sanitized,
reason: reason || "invalid payload",
)
next
end
if attributes[:is_private]
debug_log(
"Skipped private remote instance",
context: "federation.instances",
domain: attributes[:domain],
)
next
end
unless verify_instance_signature(attributes, signature, attributes[:pubkey])
warn_log(
"Discarded remote instance entry",
context: "federation.instances",
domain: attributes[:domain],
reason: "invalid signature",
)
next
end
attributes[:is_private] = false if attributes[:is_private].nil?
stats_payload, stats_metadata = fetch_instance_json(attributes[:domain], "/api/stats")
stats_count = remote_active_node_count_from_stats(
stats_payload,
max_age_seconds: PotatoMesh::Config.remote_instance_max_node_age,
)
attributes[:nodes_count] = stats_count if stats_count
# Extract per-protocol 24h counts (informational, not signed).
if stats_payload.is_a?(Hash)
mc_day = stats_payload.dig("meshcore", "day")
mt_day = stats_payload.dig("meshtastic", "day")
attributes[:meshcore_nodes_count] = coerce_integer(mc_day) if mc_day
attributes[:meshtastic_nodes_count] = coerce_integer(mt_day) if mt_day
end
nodes_since_path = "/api/nodes?since=#{recent_cutoff}&limit=1000"
nodes_since_window, nodes_since_metadata = fetch_instance_json(attributes[:domain], nodes_since_path)
if stats_count.nil? && attributes[:nodes_count].nil? && nodes_since_window.is_a?(Array)
attributes[:nodes_count] = nodes_since_window.length
end
remote_nodes, node_metadata = fetch_instance_json(attributes[:domain], "/api/nodes")
remote_nodes = nodes_since_window if remote_nodes.nil? && nodes_since_window.is_a?(Array)
if attributes[:nodes_count].nil? && remote_nodes.is_a?(Array)
attributes[:nodes_count] = remote_nodes.length
end
if stats_count.nil? && Array(stats_metadata).any?
debug_log(
"Remote instance /api/stats unavailable; using node list fallback",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(stats_metadata).map(&:to_s).join("; "),
)
end
unless remote_nodes
warn_log(
"Failed to load remote node data",
context: "federation.instances",
domain: attributes[:domain],
reason: Array(node_metadata || nodes_since_metadata).map(&:to_s).join("; "),
)
next
end
fresh, freshness_reason = validate_remote_nodes(remote_nodes)
unless fresh
warn_log(
"Discarded remote instance entry",
context: "federation.instances",
domain: attributes[:domain],
reason: freshness_reason || "stale node data",
)
next
end
begin
upsert_instance_record(db, attributes, signature)
ingest_known_instances_from!(
db,
attributes[:domain],
visited: visited,
per_response_limit: per_response_limit,
overall_limit: overall_limit,
)
rescue ArgumentError => e
warn_log(
"Failed to persist remote instance",
context: "federation.instances",
domain: attributes[:domain],
error_class: e.class.name,
error_message: e.message,
)
end
end
visited
end
end
end
end
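
The bucket selection in `remote_active_node_count_from_stats` picks the smallest stats window that still covers the requested freshness age. A standalone restatement of that mapping (with `WEEK_SECONDS` standing in for `PotatoMesh::Config.week_seconds`, assumed here to be seven days):

```ruby
WEEK_SECONDS = 7 * 86_400

# Map a freshness window to the matching /api/stats active_nodes bucket.
def stats_bucket_for(max_age_seconds)
  age = max_age_seconds.to_i
  if age <= 3600
    "hour"
  elsif age <= 86_400
    "day"
  elsif age <= WEEK_SECONDS
    "week"
  else
    "month"
  end
end
```

The boundaries are inclusive, so an exactly one-hour window still reads the `"hour"` count rather than the broader `"day"` figure.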
@@ -0,0 +1,90 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Initialize shared in-memory state used to deduplicate crawl scheduling.
#
# @return [void]
def initialize_federation_crawl_state!
@federation_crawl_init_mutex ||= Mutex.new
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex
@federation_crawl_init_mutex.synchronize do
return if instance_variable_defined?(:@federation_crawl_mutex) && @federation_crawl_mutex
@federation_crawl_mutex = Mutex.new
@federation_crawl_in_flight = Set.new
@federation_crawl_last_completed_at = {}
end
end
# Retrieve the cooldown period used for duplicate crawl suppression.
#
# @return [Integer] seconds a domain remains in cooldown after completion.
def federation_crawl_cooldown_seconds
PotatoMesh::Config.federation_crawl_cooldown_seconds
end
# Mark a domain crawl as claimed if no active or recent crawl exists.
#
# @param domain [String] canonical domain name.
# @return [Symbol] +:claimed+, +:in_flight+, or +:cooldown+.
def claim_federation_crawl_slot(domain)
initialize_federation_crawl_state!
now = Time.now.to_i
@federation_crawl_mutex.synchronize do
return :in_flight if @federation_crawl_in_flight.include?(domain)
last_completed = @federation_crawl_last_completed_at[domain]
if last_completed && now - last_completed < federation_crawl_cooldown_seconds
return :cooldown
end
@federation_crawl_in_flight << domain
:claimed
end
end
# Release an in-flight crawl claim and record completion timestamp.
#
# @param domain [String] canonical domain name.
# @param record_completion [Boolean] true to apply cooldown tracking.
# @return [void]
def release_federation_crawl_slot(domain, record_completion: true)
return unless domain
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.delete(domain)
@federation_crawl_last_completed_at[domain] = Time.now.to_i if record_completion
end
end
# Clear all in-memory crawl scheduling state.
#
# @return [void]
def clear_federation_crawl_state!
initialize_federation_crawl_state!
@federation_crawl_mutex.synchronize do
@federation_crawl_in_flight.clear
@federation_crawl_last_completed_at.clear
end
end
end
end
end
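
The claim/release slot machinery above deduplicates crawls through two pieces of shared state: an in-flight set and a per-domain completion timestamp that enforces a cooldown. A self-contained sketch of the same pattern (class name and cooldown value are illustrative; the real code reads the cooldown from `PotatoMesh::Config`):

```ruby
require "set"

# Per-domain crawl slots: at most one crawl in flight, plus a cooldown
# after each recorded completion.
class CrawlSlots
  def initialize(cooldown_seconds: 300)
    @mutex = Mutex.new
    @in_flight = Set.new
    @last_completed = {}
    @cooldown = cooldown_seconds
  end

  def claim(domain, now: Time.now.to_i)
    @mutex.synchronize do
      return :in_flight if @in_flight.include?(domain)
      last = @last_completed[domain]
      return :cooldown if last && now - last < @cooldown
      @in_flight << domain
      :claimed
    end
  end

  # record_completion: false mirrors handle_failed_federation_crawl_schedule,
  # which frees the slot without starting a cooldown window.
  def release(domain, record_completion: true, now: Time.now.to_i)
    @mutex.synchronize do
      @in_flight.delete(domain)
      @last_completed[domain] = now if record_completion
    end
  end
end
```

Skipping the completion timestamp on scheduling failures matters: a domain whose crawl never ran should be retryable immediately, not locked out for a full cooldown period.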
@@ -0,0 +1,263 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Determine whether an HTTPS announcement failure should fall back to HTTP.
#
# @param error [StandardError] failure raised while attempting HTTPS.
# @return [Boolean] true when the error corresponds to a refused TCP connection.
def https_connection_refused?(error)
current = error
while current
return true if current.is_a?(Errno::ECONNREFUSED)
current = current.respond_to?(:cause) ? current.cause : nil
end
false
end
# Determine whether an error indicates a transport-level connection
# failure that may succeed on an alternative resolved address.
#
# Connection refusals, host/network unreachable errors, and TCP open
# timeouts signal that the selected IP address cannot be reached but
# do not rule out alternative addresses for the same hostname.
#
# @param error [StandardError] failure raised during the connection attempt.
# @return [Boolean] true when a retry with a different address is warranted.
def connection_refused_or_unreachable?(error)
retryable_classes = [
Errno::ECONNREFUSED,
Errno::EHOSTUNREACH,
Errno::ENETUNREACH,
Errno::ECONNRESET,
Errno::ETIMEDOUT,
Net::OpenTimeout,
]
current = error
while current
return true if retryable_classes.any? { |klass| current.is_a?(klass) }
current = current.respond_to?(:cause) ? current.cause : nil
end
false
end
# Build the HTTPS-then-HTTP URI candidates used to reach a remote peer.
#
# @param domain [String] peer hostname.
# @param path [String] request path (must include leading slash).
# @return [Array<URI::Generic>] ordered list of URI candidates.
def instance_uri_candidates(domain, path)
base = domain
[
URI.parse("https://#{base}#{path}"),
URI.parse("http://#{base}#{path}"),
]
rescue URI::InvalidURIError
[]
end
# Build an HTTP request decorated with the headers required for federation peers.
#
# @param request_class [Class<Net::HTTPRequest>] HTTP request class such as {Net::HTTP::Get}.
# @param uri [URI::Generic] target URI describing the remote endpoint.
# @return [Net::HTTPRequest] configured HTTP request including standard headers.
def build_federation_http_request(request_class, uri)
request = request_class.new(uri)
request["User-Agent"] = federation_user_agent_header
request["Accept"] = "application/json"
request["Content-Type"] = "application/json" if request.request_body_permitted?
request
end
# Compose the User-Agent string used when communicating with federation peers.
#
# @return [String] descriptive identifier for PotatoMesh federation requests.
def federation_user_agent_header
version = app_constant(:APP_VERSION).to_s
version = "unknown" if version.empty?
sanitized_domain = sanitize_instance_domain(app_constant(:INSTANCE_DOMAIN), downcase: true)
base = "PotatoMesh/#{version}"
return base unless sanitized_domain && !sanitized_domain.empty?
"#{base} (+https://#{sanitized_domain})"
end
# Resolve the host component of a remote URI and ensure the destination is
# safe for federation HTTP requests.
#
# The method performs a DNS lookup using Addrinfo to capture every
# available address for the supplied URI host. The resulting addresses are
# converted to {IPAddr} objects for consistent inspection via
# {restricted_ip_address?}. When all resolved addresses fall within
# restricted ranges, the method raises an ArgumentError so callers can
# abort the federation request before contacting the remote endpoint.
#
# @param uri [URI::Generic] remote endpoint candidate.
# @return [Array<IPAddr>] list of resolved, unrestricted IP addresses.
# @raise [ArgumentError] when +uri.host+ is blank or resolves solely to
# restricted addresses.
def resolve_remote_ip_addresses(uri)
host = uri&.host
raise ArgumentError, "URI missing host" unless host
addrinfo_records = Addrinfo.getaddrinfo(host, nil, Socket::AF_UNSPEC, Socket::SOCK_STREAM)
addresses = addrinfo_records.filter_map do |addr|
begin
IPAddr.new(addr.ip_address)
rescue IPAddr::InvalidAddressError
nil
end
end
unique_addresses = addresses.uniq { |ip| [ip.family, ip.to_s] }
unrestricted_addresses = unique_addresses.reject { |ip| restricted_ip_address?(ip) }
if unique_addresses.any? && unrestricted_addresses.empty?
raise ArgumentError, "restricted domain"
end
unrestricted_addresses
end
# Sort resolved addresses so that IPv4 precedes IPv6.
#
# Federation peers with dual-stack DNS may publish addresses where one
# family is unreachable. Placing IPv4 entries first mirrors the
# preference used by {discover_local_ip_address} and improves the
# likelihood that the first connection attempt succeeds.
#
# @param addresses [Array<IPAddr>] resolved IP address list.
# @return [Array<IPAddr>] addresses sorted with IPv4 entries before IPv6.
def sort_addresses_for_connection(addresses)
return addresses if addresses.nil? || addresses.length <= 1
v4, v6 = addresses.partition { |ip| !ip.ipv6? }
v4 + v6
end
# Build an HTTP client configured for communication with a remote instance.
#
# When +ip_address+ is supplied the client is pinned to that specific
# address, bypassing DNS resolution. Callers that iterate over
# multiple resolved addresses should pass each candidate in turn.
#
# @param uri [URI::Generic] target URI describing the remote endpoint.
# @param ip_address [String, nil] explicit IP address to connect to,
# or +nil+ to resolve via DNS and use the first result.
# @return [Net::HTTP] HTTP client ready to execute the request.
def build_remote_http_client(uri, ip_address: nil)
http = Net::HTTP.new(uri.host, uri.port)
if ip_address
http.ipaddr = ip_address if http.respond_to?(:ipaddr=)
else
remote_addresses = resolve_remote_ip_addresses(uri)
if http.respond_to?(:ipaddr=) && remote_addresses.any?
http.ipaddr = remote_addresses.first.to_s
end
end
http.open_timeout = PotatoMesh::Config.remote_instance_http_timeout
http.read_timeout = PotatoMesh::Config.remote_instance_read_timeout
http.use_ssl = uri.scheme == "https"
return http unless http.use_ssl?
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.min_version = :TLS1_2 if http.respond_to?(:min_version=)
store = remote_instance_cert_store
http.cert_store = store if store
callback = remote_instance_verify_callback
http.verify_callback = callback if callback
http
end
# Construct a certificate store that disables strict CRL enforcement.
#
# OpenSSL may fail remote requests when certificate revocation lists are
# unavailable from the issuing authority. The returned store mirrors the
# default system trust store while clearing CRL-related flags so that
# federation announcements still succeed when CRLs cannot be fetched.
#
# @return [OpenSSL::X509::Store, nil] configured store or nil when setup fails.
def remote_instance_cert_store
return @remote_instance_cert_store if defined?(@remote_instance_cert_store) && @remote_instance_cert_store
store = OpenSSL::X509::Store.new
store.set_default_paths
store.flags = 0 if store.respond_to?(:flags=)
@remote_instance_cert_store = store
rescue OpenSSL::X509::StoreError => e
debug_log(
"Failed to initialize certificate store for federation HTTP: #{e.message}",
)
@remote_instance_cert_store = nil
end
# Build a TLS verification callback that tolerates CRL availability failures.
#
# Some certificate authorities publish CRL endpoints that may occasionally be
# unreachable. When OpenSSL cannot download the CRL it raises the
# V_ERR_UNABLE_TO_GET_CRL error which would otherwise cause HTTPS federation
# announcements to abort. The generated callback accepts those specific
# failures while preserving strict verification for all other errors.
#
# @return [Proc, nil] verification callback or nil when creation fails.
def remote_instance_verify_callback
if defined?(@remote_instance_verify_callback) && @remote_instance_verify_callback
return @remote_instance_verify_callback
end
callback = lambda do |preverify_ok, store_context|
return true if preverify_ok
if store_context && crl_unavailable_error?(store_context.error)
debug_log(
"Ignoring TLS CRL retrieval failure during federation request",
context: "federation.announce",
)
true
else
false
end
end
@remote_instance_verify_callback = callback
rescue StandardError => e
debug_log(
"Failed to initialize federation TLS verify callback: #{e.message}",
context: "federation.announce",
)
@remote_instance_verify_callback = nil
end
# Determine whether the supplied OpenSSL verification error corresponds to a
# missing certificate revocation list.
#
# @param error_code [Integer, nil] OpenSSL verification error value.
# @return [Boolean] true when the error should be ignored.
def crl_unavailable_error?(error_code)
allowed_errors = [OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL]
if defined?(OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL_ISSUER)
allowed_errors << OpenSSL::X509::V_ERR_UNABLE_TO_GET_CRL_ISSUER
end
allowed_errors.include?(error_code)
end
end
end
end
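
Both `https_connection_refused?` and `connection_refused_or_unreachable?` walk the `Exception#cause` chain because wrapper errors (such as `InstanceFetchError`) hide the original transport failure. Ruby sets `cause` automatically when an exception is raised inside a `rescue` clause, which is what makes this walk reliable. A minimal sketch of the same check (the function name is illustrative):

```ruby
require "net/http"

RETRYABLE_TRANSPORT_ERRORS = [
  Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Errno::ENETUNREACH,
  Errno::ECONNRESET, Errno::ETIMEDOUT, Net::OpenTimeout,
].freeze

# True when the error, or anything in its cause chain, is a
# transport-level failure worth retrying on another resolved address.
def transport_retryable?(error)
  current = error
  while current
    return true if RETRYABLE_TRANSPORT_ERRORS.any? { |klass| current.is_a?(klass) }
    current = current.cause
  end
  false
end
```

SSL and HTTP-status failures deliberately stay outside the list: retrying another IP for the same host would hit the same certificate or the same application-level error.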
@@ -0,0 +1,136 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Execute a GET request against the supplied federation URI, cycling
# through resolved IP addresses when a transport-level connection
# failure occurs.
#
# DNS resolution is performed once and the resulting addresses are
# sorted with IPv4 first via {sort_addresses_for_connection}. Each
# address is attempted sequentially; when a connection-level error
# (refused, unreachable, timeout) is raised the next address is tried.
# Non-connection errors (SSL failures, HTTP-level errors) are raised
# immediately without trying further addresses.
#
# @param uri [URI::Generic] target endpoint to request.
# @return [String] raw HTTP response body on success.
# @raise [InstanceFetchError] when all addresses are exhausted or a
# non-retryable error occurs.
def perform_instance_http_request(uri)
raise InstanceFetchError, "federation shutdown requested" if federation_shutdown_requested?
remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
addresses = remote_addresses.empty? ? [nil] : remote_addresses
last_error = nil
addresses.each do |address|
break if federation_shutdown_requested?
begin
return perform_single_http_request(uri, ip_address: address&.to_s)
rescue InstanceFetchError => e
if connection_refused_or_unreachable?(e)
last_error = e
else
raise
end
end
end
raise last_error || InstanceFetchError.new("all resolved addresses failed")
rescue ArgumentError => e
raise_instance_fetch_error(e)
end
# Execute a single HTTP GET request against the supplied URI, optionally
# pinning the connection to a specific IP address.
#
# @param uri [URI::Generic] target endpoint.
# @param ip_address [String, nil] resolved IP address to pin the
# connection to, or +nil+ to let {build_remote_http_client} resolve.
# @return [String] raw HTTP response body.
# @raise [InstanceFetchError] when the request fails.
def perform_single_http_request(uri, ip_address: nil)
http = build_remote_http_client(uri, ip_address: ip_address)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Get, uri)
response = connection.request(request)
case response
when Net::HTTPSuccess
response.body
else
raise InstanceFetchError, "unexpected response #{response.code}"
end
end
end
rescue StandardError => e
raise_instance_fetch_error(e)
end
# Build a human-readable error message for a failed instance request.
#
# @param error [StandardError] failure raised while performing the request.
# @return [String] description including the error class when necessary.
def instance_fetch_error_message(error)
message = error.message.to_s.strip
class_name = error.class.name || error.class.to_s
return class_name if message.empty?
message.include?(class_name) ? message : "#{class_name}: #{message}"
end
# Raise an InstanceFetchError that preserves the original context.
#
# @param error [StandardError] failure raised while performing the request.
# @return [void]
def raise_instance_fetch_error(error)
message = instance_fetch_error_message(error)
wrapped = InstanceFetchError.new(message)
wrapped.set_backtrace(error.backtrace)
raise wrapped
end
# Fetch and JSON-decode a federation document from a peer.
#
# @param domain [String] peer hostname.
# @param path [String] request path.
# @return [Array(Object, URI::Generic), Array(nil, Array<String>)] decoded
#   payload plus the successful URI, or +[nil, errors]+ when every candidate fails.
def fetch_instance_json(domain, path)
return [nil, ["federation shutdown requested"]] if federation_shutdown_requested?
errors = []
instance_uri_candidates(domain, path).each do |uri|
break if federation_shutdown_requested?
begin
body = perform_instance_http_request(uri)
return [JSON.parse(body), uri] if body
rescue JSON::ParserError => e
errors << "#{uri}: invalid JSON (#{e.message})"
rescue InstanceFetchError => e
errors << "#{uri}: #{e.message}"
end
end
[nil, errors]
end
end
end
end
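
`fetch_instance_json` tries each URI candidate in order, accumulating one reason string per failure and returning early on the first body that parses as JSON. The shape of that loop can be sketched in isolation (here `fetch_body` is a hypothetical transport hook injected for illustration; the real method calls `perform_instance_http_request`):

```ruby
require "json"

# Try each candidate URI, collect a failure reason per miss, and
# return [payload, uri] for the first successfully parsed body.
def fetch_json(candidates, fetch_body)
  errors = []
  candidates.each do |uri|
    begin
      body = fetch_body.call(uri)
      return [JSON.parse(body), uri] if body
    rescue JSON::ParserError => e
      errors << "#{uri}: invalid JSON (#{e.message})"
    rescue StandardError => e
      errors << "#{uri}: #{e.message}"
    end
  end
  [nil, errors]
end
```

Returning the accumulated error list instead of a bare `nil` lets the caller log one consolidated reason string, which is exactly how the crawl code reports failed domains.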
@@ -0,0 +1,80 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Count the number of nodes active since the supplied timestamp.
#
# @param cutoff [Integer] unix timestamp in seconds.
# @param db [SQLite3::Database, nil] optional open handle to reuse.
# @return [Integer, nil] node count or nil when unavailable.
def active_node_count_since(cutoff, db: nil)
return nil unless cutoff
handle = db || open_database(readonly: true)
count =
with_busy_retry do
handle.get_first_value("SELECT COUNT(*) FROM nodes WHERE last_heard >= ?", cutoff.to_i)
end
Integer(count)
rescue SQLite3::Exception, ArgumentError => e
warn_log(
"Failed to count active nodes",
context: "instances.nodes_count",
error_class: e.class.name,
error_message: e.message,
)
nil
ensure
handle&.close unless db
end
# Count the number of nodes for a specific protocol active since the
# supplied timestamp.
#
# @param cutoff [Integer] unix timestamp in seconds.
# @param protocol [String] protocol name (e.g. "meshcore", "meshtastic").
# @param db [SQLite3::Database, nil] optional open handle to reuse.
# @return [Integer, nil] node count or nil when unavailable.
def active_node_count_since_for_protocol(cutoff, protocol, db: nil)
return nil unless cutoff && protocol
handle = db || open_database(readonly: true)
count =
with_busy_retry do
handle.get_first_value(
"SELECT COUNT(*) FROM nodes WHERE last_heard >= ? AND protocol = ?",
cutoff.to_i,
protocol,
)
end
Integer(count)
rescue SQLite3::Exception, ArgumentError => e
warn_log(
"Failed to count active nodes for protocol",
context: "instances.protocol_nodes_count",
protocol: protocol,
error_class: e.class.name,
error_message: e.message,
)
nil
ensure
handle&.close unless db
end
end
end
end
@@ -0,0 +1,107 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Persist or refresh a remote instance row, evicting any conflicting
# entry that already claimed the same domain.
#
# @param db [SQLite3::Database] open database handle.
# @param attributes [Hash] sanitized instance attributes.
# @param signature [String] base64-encoded signature.
# @return [void]
# @raise [ArgumentError] when the domain is invalid or restricted.
def upsert_instance_record(db, attributes, signature)
sanitized_domain = sanitize_instance_domain(attributes[:domain])
raise ArgumentError, "invalid domain" unless sanitized_domain
ip = ip_from_domain(sanitized_domain)
if ip && restricted_ip_address?(ip)
raise ArgumentError, "restricted domain"
end
normalized_domain = sanitized_domain
existing_id = with_busy_retry do
db.get_first_value(
"SELECT id FROM instances WHERE domain = ?",
normalized_domain,
)
end
if existing_id && existing_id != attributes[:id]
with_busy_retry do
db.execute("DELETE FROM instances WHERE id = ?", existing_id)
end
debug_log(
"Removed conflicting instance by domain",
context: "federation.instances",
domain: normalized_domain,
replaced_id: existing_id,
incoming_id: attributes[:id],
)
end
sql = <<~SQL
INSERT INTO instances (
id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, nodes_count,
meshcore_nodes_count, meshtastic_nodes_count, contact_link, signature
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
domain=excluded.domain,
pubkey=excluded.pubkey,
name=excluded.name,
version=excluded.version,
channel=excluded.channel,
frequency=excluded.frequency,
latitude=excluded.latitude,
longitude=excluded.longitude,
last_update_time=excluded.last_update_time,
is_private=excluded.is_private,
nodes_count=COALESCE(excluded.nodes_count, instances.nodes_count),
meshcore_nodes_count=COALESCE(excluded.meshcore_nodes_count, instances.meshcore_nodes_count),
meshtastic_nodes_count=COALESCE(excluded.meshtastic_nodes_count, instances.meshtastic_nodes_count),
contact_link=excluded.contact_link,
signature=excluded.signature
SQL
nodes_count = coerce_integer(attributes[:nodes_count])
params = [
attributes[:id],
normalized_domain,
attributes[:pubkey],
attributes[:name],
attributes[:version],
attributes[:channel],
attributes[:frequency],
attributes[:latitude],
attributes[:longitude],
attributes[:last_update_time],
attributes[:is_private] ? 1 : 0,
nodes_count,
coerce_integer(attributes[:meshcore_nodes_count]),
coerce_integer(attributes[:meshtastic_nodes_count]),
attributes[:contact_link],
signature,
]
with_busy_retry do
db.execute(sql, params)
end
end
end
end
end
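The COALESCE guards on the three count columns mean a peer that omits its node counts in one announcement does not wipe the previously stored values. A stdlib-only sketch of that merge rule (helper and constant names here are illustrative, not part of the application, and there is no sqlite3 dependency):

```ruby
# Sketch of the ON CONFLICT merge semantics above: for the *_count
# columns, a nil (SQL NULL) incoming value keeps the stored one,
# mirroring COALESCE(excluded.col, instances.col); every other column
# is overwritten by the incoming row.
PRESERVED_ON_NIL = %i[nodes_count meshcore_nodes_count meshtastic_nodes_count].freeze

def merge_instance_row(existing, incoming)
  incoming.each_with_object(existing.dup) do |(key, value), merged|
    next if PRESERVED_ON_NIL.include?(key) && value.nil? # keep stored count
    merged[key] = value
  end
end

existing = { domain: "mesh.example", nodes_count: 42, name: "Old" }
incoming = { domain: "mesh.example", nodes_count: nil, name: "New" }
merge_instance_row(existing, incoming)
# nodes_count survives the nil; name is overwritten
```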
@@ -0,0 +1,196 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Maximum slice (seconds) used by +federation_sleep_with_shutdown+ when
# decomposing a target sleep into shutdown-aware increments.
FEDERATION_SLEEP_SLICE_SECONDS = 0.2
# Retrieve or initialize the worker pool servicing federation jobs.
#
# @return [PotatoMesh::App::WorkerPool, nil] active worker pool or nil when disabled.
def federation_worker_pool
ensure_federation_worker_pool!
end
# Ensure the federation worker pool exists when federation remains enabled.
#
# Threading model: the pool is a fixed-size thread pool backed by a bounded
# queue. A single long-lived announcer thread (started by
# {#start_federation_announcer!}) drives periodic crawl and announcement
# cycles by submitting tasks onto the pool; individual crawl and announce
# jobs then run concurrently on pool threads. The pool is lazily
# instantiated on first use and is memoized on the Sinatra settings object so
# that all requests share the same instance. An +at_exit+ hook
# ({#ensure_federation_shutdown_hook!}) guarantees the pool drains cleanly on
# process termination even when the announcer thread is still alive.
#
# @return [PotatoMesh::App::WorkerPool, nil] active worker pool if created.
def ensure_federation_worker_pool!
return nil unless federation_enabled?
return nil if federation_shutdown_requested?
ensure_federation_shutdown_hook!
existing = settings.respond_to?(:federation_worker_pool) ? settings.federation_worker_pool : nil
return existing if existing&.alive?
pool = PotatoMesh::App::WorkerPool.new(
size: PotatoMesh::Config.federation_worker_pool_size,
max_queue: PotatoMesh::Config.federation_worker_queue_capacity,
task_timeout: PotatoMesh::Config.federation_task_timeout_seconds,
name: "potato-mesh-fed",
)
set(:federation_worker_pool, pool) if respond_to?(:set)
pool
end
# Ensure federation background workers are torn down during process exit.
#
# @return [void]
def ensure_federation_shutdown_hook!
application = is_a?(Class) ? self : self.class
return application.ensure_federation_shutdown_hook! unless application.equal?(self)
installed = if respond_to?(:settings) && settings.respond_to?(:federation_shutdown_hook_installed)
settings.federation_shutdown_hook_installed
else
instance_variable_defined?(:@federation_shutdown_hook_installed) && @federation_shutdown_hook_installed
end
return if installed
if respond_to?(:set) && settings.respond_to?(:federation_shutdown_hook_installed=)
set(:federation_shutdown_hook_installed, true)
else
@federation_shutdown_hook_installed = true
end
at_exit do
begin
application.shutdown_federation_background_work!(timeout: PotatoMesh::Config.federation_shutdown_timeout_seconds)
rescue StandardError
# Suppress shutdown errors during interpreter teardown.
end
end
end
# Check whether federation workers have received a shutdown request.
#
# @return [Boolean] true when stop has been requested.
def federation_shutdown_requested?
return false unless respond_to?(:settings)
return false unless settings.respond_to?(:federation_shutdown_requested)
settings.federation_shutdown_requested == true
end
# Mark federation background work as shutting down.
#
# @return [void]
def request_federation_shutdown!
set(:federation_shutdown_requested, true) if respond_to?(:set)
end
# Clear any previously requested federation shutdown marker.
#
# @return [void]
def clear_federation_shutdown_request!
set(:federation_shutdown_requested, false) if respond_to?(:set)
end
# Sleep in short intervals so federation loops can react to shutdown.
#
# @param seconds [Numeric] target sleep duration.
# @return [Boolean] true when the full delay elapsed without shutdown.
def federation_sleep_with_shutdown(seconds)
remaining = seconds.to_f
while remaining.positive?
return false if federation_shutdown_requested?
slice = [remaining, FEDERATION_SLEEP_SLICE_SECONDS].min
Kernel.sleep(slice)
remaining -= slice
end
!federation_shutdown_requested?
end
# Shutdown and clear the federation worker pool if present.
#
# @return [void]
def shutdown_federation_worker_pool!
existing = settings.respond_to?(:federation_worker_pool) ? settings.federation_worker_pool : nil
return unless existing
begin
existing.shutdown(timeout: PotatoMesh::Config.federation_task_timeout_seconds)
rescue StandardError => e
warn_log(
"Failed to shut down federation worker pool",
context: "federation",
error_class: e.class.name,
error_message: e.message,
)
ensure
set(:federation_worker_pool, nil) if respond_to?(:set)
end
end
# Gracefully terminate federation background loops and worker pool tasks.
#
# @param timeout [Numeric, nil] maximum join time applied per thread.
# @return [void]
def shutdown_federation_background_work!(timeout: nil)
request_federation_shutdown!
timeout_value = timeout || PotatoMesh::Config.federation_shutdown_timeout_seconds
# Drain the worker pool first so federation threads blocked in
# wait_for_federation_tasks unblock promptly instead of waiting
# for each task's individual timeout to expire.
shutdown_federation_worker_pool!
stop_federation_thread!(:initial_federation_thread, timeout: timeout_value)
stop_federation_thread!(:federation_thread, timeout: timeout_value)
clear_federation_crawl_state!
end
# Stop a specific federation thread setting and clear its reference.
#
# @param setting_name [Symbol] settings key storing the thread object.
# @param timeout [Numeric] seconds to wait for clean thread exit.
# @return [void]
def stop_federation_thread!(setting_name, timeout:)
return unless respond_to?(:settings)
return unless settings.respond_to?(setting_name)
thread = settings.public_send(setting_name)
if thread&.alive?
begin
thread.wakeup if thread.respond_to?(:wakeup)
rescue ThreadError
# The thread may not currently be sleeping; continue shutdown.
end
thread.join(timeout)
if thread.alive?
thread.kill
thread.join(0.1)
end
end
set(setting_name, nil) if respond_to?(:set)
end
end
end
end
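The sliced-sleep helper above is the piece that lets long federation delays abort quickly. A standalone sketch of the same pattern, with illustrative names and a shutdown flag passed in as a lambda rather than read from Sinatra settings:

```ruby
# Sleep in short increments and re-check a shutdown predicate between
# slices, so a loop parked in a long delay reacts within one slice
# instead of waiting out the full interval. Returns true only when the
# whole delay elapsed without a shutdown request.
SLICE_SECONDS = 0.05

def sleep_with_shutdown(seconds, shutdown_requested)
  remaining = seconds.to_f
  while remaining.positive?
    return false if shutdown_requested.call
    slice = [remaining, SLICE_SECONDS].min
    sleep(slice)
    remaining -= slice
  end
  !shutdown_requested.call
end

stop = false
flipper = Thread.new { sleep 0.12; stop = true }
completed = sleep_with_shutdown(10, -> { stop })
flipper.join
# completed is false: the flag flipped long before the 10s target
```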
@@ -0,0 +1,76 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Build the ordered list of peer domains the local instance should
# announce itself to. Seed domains take precedence and are followed by
# peers seen in the local +instances+ table within the freshness window.
#
# @param self_domain [String, nil] sanitized local instance domain.
# @return [Array<String>] sanitized, deduplicated peer domains.
def federation_target_domains(self_domain)
normalized_self = sanitize_instance_domain(self_domain)&.downcase
ordered = []
seen = Set.new
PotatoMesh::Config.federation_seed_domains.each do |seed|
sanitized = sanitize_instance_domain(seed)&.downcase
next unless sanitized
next if normalized_self && sanitized == normalized_self
next if seen.include?(sanitized)
ordered << sanitized
seen << sanitized
end
db = open_database(readonly: true)
db.results_as_hash = false
cutoff = Time.now.to_i - PotatoMesh::Config.week_seconds
rows = with_busy_retry do
db.execute(
"SELECT domain, last_update_time FROM instances WHERE domain IS NOT NULL AND TRIM(domain) != ''",
)
end
rows.each do |row|
raw_domain = row[0]
last_update_time = coerce_integer(row[1])
next unless last_update_time && last_update_time >= cutoff
sanitized = sanitize_instance_domain(raw_domain)&.downcase
next unless sanitized
next if normalized_self && sanitized == normalized_self
next if seen.include?(sanitized)
ordered << sanitized
seen << sanitized
end
ordered
rescue SQLite3::Exception
fallback = PotatoMesh::Config.federation_seed_domains.filter_map do |seed|
candidate = sanitize_instance_domain(seed)&.downcase
next if normalized_self && candidate == normalized_self
candidate
end
fallback.uniq
ensure
db&.close
end
end
end
end
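The ordering contract in federation_target_domains (seeds first, then database peers, duplicates and the local domain dropped) can be sketched in isolation. Inputs and the helper name below are illustrative:

```ruby
require "set"

# Seed domains keep their configured order and win ties; database peers
# are appended only when not already seen; the local domain is excluded
# throughout. Set#add? returns nil when the element is already present,
# which doubles as the duplicate check.
def ordered_peer_domains(seeds, db_domains, self_domain)
  seen = Set.new
  (seeds + db_domains).each_with_object([]) do |raw, ordered|
    domain = raw.to_s.strip.downcase
    next if domain.empty? || domain == self_domain
    next unless seen.add?(domain)
    ordered << domain
  end
end

ordered_peer_domains(
  ["Seed.One", "seed.two"],
  ["seed.two", "peer.three", "self.local"],
  "self.local",
)
# => ["seed.one", "seed.two", "peer.three"]
```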
@@ -0,0 +1,200 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Process-wide memo for the most recently emitted self-registration
# decision. Sinatra spins up a fresh app instance per request so a
# plain instance variable would not survive across calls; storing the
# state on the module itself keeps the dedupe stable for the lifetime
# of the worker process.
@self_registration_log_state = { mutex: Mutex.new, last: nil }
# Accessor for the dedupe state used by {#ensure_self_instance_record!}.
#
# @return [Hash{Symbol => Object}] mutable state hash holding +:mutex+ and +:last+.
def self.self_registration_log_state
@self_registration_log_state
end
# Reset the dedupe memo. Intended for tests; production code never
# needs to clear the state because each process starts fresh.
#
# @return [void]
def self.reset_self_registration_log_state!
state = @self_registration_log_state
state[:mutex].synchronize { state[:last] = nil }
end
# Resolve the canonical domain for the running instance.
#
# @return [String, nil] sanitized instance domain or nil outside production.
# @raise [RuntimeError] when the domain cannot be determined in production.
def self_instance_domain
sanitized = sanitize_instance_domain(app_constant(:INSTANCE_DOMAIN))
return sanitized if sanitized
unless production_environment?
debug_log(
"INSTANCE_DOMAIN unavailable; skipping self instance domain",
context: "federation.instances",
app_env: string_or_nil(ENV["APP_ENV"]),
rack_env: string_or_nil(ENV["RACK_ENV"]),
source: app_constant(:INSTANCE_DOMAIN_SOURCE),
)
return nil
end
raise "INSTANCE_DOMAIN could not be determined"
end
# Determine whether the local instance should persist its own record.
#
# @param domain [String, nil] candidate domain for the running instance.
# @return [Array(Boolean, String, nil)] tuple containing a decision flag and an optional reason.
def self_instance_registration_decision(domain)
source = app_constant(:INSTANCE_DOMAIN_SOURCE)
return [false, "INSTANCE_DOMAIN source is #{source}"] unless source == :environment
sanitized = sanitize_instance_domain(domain)
return [false, "INSTANCE_DOMAIN missing or invalid"] unless sanitized
ip = ip_from_domain(sanitized)
if ip && restricted_ip_address?(ip)
return [false, "INSTANCE_DOMAIN resolves to restricted IP"]
end
[true, nil]
end
# Build the canonical attribute hash describing the local instance.
#
# @return [Hash] populated instance attribute hash.
def self_instance_attributes
domain = self_instance_domain
last_update = latest_node_update_timestamp || Time.now.to_i
cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
db = open_database(readonly: true)
nodes_count = active_node_count_since(cutoff, db: db)
mc_count = active_node_count_since_for_protocol(cutoff, "meshcore", db: db)
mt_count = active_node_count_since_for_protocol(cutoff, "meshtastic", db: db)
{
id: app_constant(:SELF_INSTANCE_ID),
domain: domain,
pubkey: app_constant(:INSTANCE_PUBLIC_KEY_PEM),
name: sanitized_site_name,
version: app_constant(:APP_VERSION),
channel: sanitized_channel,
frequency: sanitized_frequency,
latitude: PotatoMesh::Config.map_center_lat,
longitude: PotatoMesh::Config.map_center_lon,
last_update_time: last_update,
is_private: private_mode?,
contact_link: sanitized_contact_link,
nodes_count: nodes_count,
meshcore_nodes_count: mc_count,
meshtastic_nodes_count: mt_count,
}
ensure
db&.close
end
# Sign a canonical instance attribute set with the local private key.
#
# @param attributes [Hash] canonical instance attributes.
# @return [String] base64-encoded RSA-SHA256 signature.
def sign_instance_attributes(attributes)
payload = canonical_instance_payload(attributes)
Base64.strict_encode64(
app_constant(:INSTANCE_PRIVATE_KEY).sign(OpenSSL::Digest::SHA256.new, payload),
)
end
# Compose the JSON-friendly announcement payload sent to peers.
#
# @param attributes [Hash] canonical instance attributes.
# @param signature [String] base64-encoded signature.
# @return [Hash] payload with nil entries removed.
def instance_announcement_payload(attributes, signature)
payload = {
"id" => attributes[:id],
"domain" => attributes[:domain],
"pubkey" => attributes[:pubkey],
"name" => attributes[:name],
"version" => attributes[:version],
"channel" => attributes[:channel],
"frequency" => attributes[:frequency],
"latitude" => attributes[:latitude],
"longitude" => attributes[:longitude],
"lastUpdateTime" => attributes[:last_update_time],
"isPrivate" => attributes[:is_private],
"contactLink" => attributes[:contact_link],
"nodesCount" => attributes[:nodes_count],
"meshcoreNodesCount" => attributes[:meshcore_nodes_count],
"meshtasticNodesCount" => attributes[:meshtastic_nodes_count],
"signature" => signature,
}
payload.reject { |_, value| value.nil? }
end
# Persist the local instance record when registration is allowed.
#
# @return [Array(Hash, String)] tuple of (attributes, signature) suitable
# for direct reuse by the announcer thread.
def ensure_self_instance_record!
attributes = self_instance_attributes
signature = sign_instance_attributes(attributes)
db = nil
allowed, reason = self_instance_registration_decision(attributes[:domain])
# Decisions are stable per process while INSTANCE_DOMAIN_SOURCE
# remains the same; without dedupe, every page navigation that
# rendered the federation banner emitted another copy of the same
# log line. Only emit when the tuple changes so operators still see
# the first decision (and any later flip) without the spam.
sentinel = [allowed, reason, attributes[:domain]]
state = PotatoMesh::App::Federation.self_registration_log_state
should_log = state[:mutex].synchronize do
changed = state[:last] != sentinel
state[:last] = sentinel if changed
changed
end
if allowed
db = open_database
upsert_instance_record(db, attributes, signature)
if should_log
debug_log(
"Registered self instance record",
context: "federation.instances",
domain: attributes[:domain],
instance_id: attributes[:id],
)
end
elsif should_log
debug_log(
"Skipped self instance registration",
context: "federation.instances",
domain: attributes[:domain],
reason: reason,
)
end
[attributes, signature]
ensure
db&.close
end
end
end
end
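The mutex-guarded memo that drives the logging decision above can be lifted into a tiny reusable sketch. The class name and API here are illustrative, not the application's:

```ruby
# Remember the last decision tuple under a mutex and report whether the
# incoming tuple differs. Callers log only on a change, so repeated
# identical decisions stay silent while the first decision and any
# later flip are still observable.
class DecisionDedupe
  def initialize
    @mutex = Mutex.new
    @last = nil
  end

  # Returns true exactly when the tuple differs from the previous call.
  def changed?(tuple)
    @mutex.synchronize do
      changed = @last != tuple
      @last = tuple if changed
      changed
    end
  end
end

dedupe = DecisionDedupe.new
dedupe.changed?([true, nil, "mesh.example"])            # first decision: log it
dedupe.changed?([true, nil, "mesh.example"])            # repeat: suppressed
dedupe.changed?([false, "restricted", "mesh.example"])  # flip: log it again
```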
@@ -0,0 +1,62 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Build the canonical JSON payload that gets signed for instance
# announcements. Keys are emitted in deterministic order and only
# populated when the corresponding attribute is non-nil.
#
# @param attributes [Hash] instance attributes hash.
# @return [String] canonical JSON string suitable for signing.
def canonical_instance_payload(attributes)
data = {}
data["contactLink"] = attributes[:contact_link] if attributes[:contact_link]
data["id"] = attributes[:id] if attributes[:id]
data["domain"] = attributes[:domain] if attributes[:domain]
data["pubkey"] = attributes[:pubkey] if attributes[:pubkey]
data["name"] = attributes[:name] if attributes[:name]
data["version"] = attributes[:version] if attributes[:version]
data["channel"] = attributes[:channel] if attributes[:channel]
data["frequency"] = attributes[:frequency] if attributes[:frequency]
data["latitude"] = attributes[:latitude] unless attributes[:latitude].nil?
data["longitude"] = attributes[:longitude] unless attributes[:longitude].nil?
data["lastUpdateTime"] = attributes[:last_update_time] unless attributes[:last_update_time].nil?
data["isPrivate"] = attributes[:is_private] unless attributes[:is_private].nil?
JSON.generate(data, sort_keys: true)
end
# Verify a base64 RSA-SHA256 signature for an instance attribute set.
#
# @param attributes [Hash] canonical instance attributes.
# @param signature [String, nil] base64-encoded signature bytes.
# @param public_key_pem [String, nil] PEM-encoded RSA public key.
# @return [Boolean] true when the signature validates against the public key.
def verify_instance_signature(attributes, signature, public_key_pem)
return false unless signature && public_key_pem
canonical = canonical_instance_payload(attributes)
signature_bytes = Base64.strict_decode64(signature)
key = OpenSSL::PKey::RSA.new(public_key_pem)
key.verify(OpenSSL::Digest::SHA256.new, signature_bytes, canonical)
rescue ArgumentError, OpenSSL::PKey::PKeyError
false
end
end
end
end
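The sign/verify pair above depends on both sides serializing the attribute hash identically. A self-contained round-trip sketch of that scheme, sorting keys manually (rather than relying on JSON.generate's sort_keys option, so the sketch runs on older json gems) and using pack/unpack for strict base64 to avoid the base64 gem:

```ruby
require "json"
require "openssl"

# Canonicalize: drop nil values and emit keys in deterministic order,
# so signer and verifier produce byte-identical payloads.
def canonical_payload(attributes)
  JSON.generate(attributes.compact.sort.to_h)
end

key = OpenSSL::PKey::RSA.new(2048)
payload = canonical_payload("id" => "abc", "domain" => "mesh.example", "name" => nil)

signature = key.sign(OpenSSL::Digest::SHA256.new, payload)
encoded = [signature].pack("m0")   # strict base64 encode

decoded = encoded.unpack1("m0")    # strict base64 decode
key.public_key.verify(OpenSSL::Digest::SHA256.new, decoded, payload)
# => true; any byte change in payload or signature verifies false
```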
@@ -0,0 +1,107 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
module PotatoMesh
module App
module Federation
# Validate a remote +/.well-known+ document, including signature checks
# against the supplied public key.
#
# @param document [Hash] decoded well-known document.
# @param domain [String] expected sanitized domain.
# @param pubkey [String] expected canonical PEM public key.
# @return [Array(Boolean, String, nil)] tuple containing the validation
# result and an optional human-readable failure reason.
def validate_well_known_document(document, domain, pubkey)
unless document.is_a?(Hash)
return [false, "document is not an object"]
end
remote_pubkey = sanitize_public_key_pem(document["publicKey"])
return [false, "public key missing"] unless remote_pubkey
return [false, "public key mismatch"] unless remote_pubkey == pubkey
remote_domain = string_or_nil(document["domain"])
return [false, "domain missing"] unless remote_domain
return [false, "domain mismatch"] unless remote_domain.casecmp?(domain)
algorithm = string_or_nil(document["signatureAlgorithm"])
unless algorithm&.casecmp?(PotatoMesh::Config.instance_signature_algorithm)
return [false, "unsupported signature algorithm"]
end
signed_payload_b64 = string_or_nil(document["signedPayload"])
signature_b64 = string_or_nil(document["signature"])
return [false, "missing signed payload"] unless signed_payload_b64
return [false, "missing signature"] unless signature_b64
signed_payload = Base64.strict_decode64(signed_payload_b64)
signature = Base64.strict_decode64(signature_b64)
key = OpenSSL::PKey::RSA.new(remote_pubkey)
unless key.verify(OpenSSL::Digest::SHA256.new, signature, signed_payload)
return [false, "invalid well-known signature"]
end
payload = JSON.parse(signed_payload)
unless payload.is_a?(Hash)
return [false, "signed payload is not an object"]
end
payload_domain = string_or_nil(payload["domain"])
payload_pubkey = sanitize_public_key_pem(payload["publicKey"])
return [false, "signed payload domain mismatch"] unless payload_domain&.casecmp?(domain)
return [false, "signed payload public key mismatch"] unless payload_pubkey == pubkey
[true, nil]
rescue ArgumentError, OpenSSL::PKey::PKeyError => e
[false, e.message]
rescue JSON::ParserError => e
[false, "signed payload JSON error: #{e.message}"]
end
# Confirm a remote +/api/nodes+ payload contains a sufficient set of
# recently active nodes.
#
# @param nodes [Object] decoded array of remote node entries.
# @return [Array(Boolean, String, nil)] tuple of (is_fresh, optional reason).
def validate_remote_nodes(nodes)
unless nodes.is_a?(Array)
return [false, "node response is not an array"]
end
if nodes.length < PotatoMesh::Config.remote_instance_min_node_count
return [false, "insufficient nodes"]
end
latest = nodes.filter_map do |node|
next unless node.is_a?(Hash)
last_heard_values = []
last_heard_values << coerce_integer(node["last_heard"])
last_heard_values << coerce_integer(node["lastHeard"])
last_heard_values.compact.max
end.compact.max
return [false, "missing last_heard data"] unless latest
cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
return [false, "node data is stale"] if latest < cutoff
[true, nil]
end
end
end
end
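The freshness rule in validate_remote_nodes reduces to "take the newest last_heard across all entries, in either key spelling, and compare it to the cutoff". A sketch of just that rule, with an illustrative window constant and synthetic node shapes:

```ruby
# Accept a node set only when its newest last_heard (snake_case or
# camelCase key) is within the freshness window. Returns the same
# (flag, optional reason) tuple shape as the validator above.
MAX_NODE_AGE = 28 * 24 * 60 * 60

def nodes_fresh?(nodes, now: Time.now.to_i)
  latest = nodes.filter_map do |node|
    next unless node.is_a?(Hash)
    [node["last_heard"], node["lastHeard"]].compact.map(&:to_i).max
  end.max
  return [false, "missing last_heard data"] unless latest
  return [false, "node data is stale"] if latest < now - MAX_NODE_AGE
  [true, nil]
end

now = Time.now.to_i
nodes_fresh?([{ "lastHeard" => now - 60 }, { "last_heard" => now - 120 }], now: now)
# => [true, nil]
```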
@@ -26,7 +26,12 @@ module PotatoMesh
# @return [Array<Hash>] compacted message rows safe for API responses.
def query_messages(limit, node_ref: nil, include_encrypted: false, since: 0, protocol: nil)
limit = coerce_query_limit(limit)
since_threshold = normalize_since_threshold(since, floor: 0)
now = Time.now.to_i
# Default the chat feed to the same seven-day window the dashboard uses
# for the node table; per-id lookups widen to twenty-eight days so
# historical conversation context remains reachable on demand.
since_floor = node_ref ? now - PotatoMesh::Config.four_weeks_seconds : now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: since_floor)
db = open_database(readonly: true)
db.results_as_hash = true
params = []
@@ -30,8 +30,9 @@ module PotatoMesh
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_floor = node_ref ? 0 : min_rx_time
# Bulk positions follow the seven-day default; per-id lookups widen
# to twenty-eight days for backfill of historical track data.
since_floor = node_ref ? now - PotatoMesh::Config.four_weeks_seconds : now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: since_floor)
where_clauses << "COALESCE(rx_time, position_time, 0) >= ?"
params << since_threshold
@@ -91,9 +92,11 @@ module PotatoMesh
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_floor = node_ref ? 0 : min_rx_time
since_threshold = normalize_since_threshold(since, floor: since_floor)
# Neighbor relationships are reported sporadically and are easy to
# lose between scrapes, so use the twenty-eight-day extended window
# for both bulk and per-id queries.
min_rx_time = now - PotatoMesh::Config.four_weeks_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << since_threshold
@@ -141,7 +144,7 @@ module PotatoMesh
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.trace_neighbor_window_seconds
min_rx_time = now - PotatoMesh::Config.four_weeks_seconds
since_threshold = normalize_since_threshold(since, floor: min_rx_time)
where_clauses << "COALESCE(rx_time, 0) >= ?"
params << since_threshold
@@ -141,8 +141,10 @@ module PotatoMesh
db = open_database(readonly: true)
db.results_as_hash = true
now = Time.now.to_i
min_last_heard = now - PotatoMesh::Config.week_seconds
since_floor = node_ref ? 0 : min_last_heard
# Bulk listings stay on the seven-day window so the dashboard does not
# render stale nodes; per-id lookups widen to twenty-eight days so
# callers can backfill older records that fall outside the bulk floor.
since_floor = node_ref ? now - PotatoMesh::Config.four_weeks_seconds : now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: since_floor)
params = []
where_clauses = []
@@ -227,7 +229,10 @@ module PotatoMesh
db = open_database(readonly: true)
db.results_as_hash = true
now = Time.now.to_i
cutoff = now - PotatoMesh::Config.week_seconds
# Ingestor heartbeats are sparse (one per ingestor per cycle) so widen
# the rolling window to twenty-eight days to keep slow-tick ingestors
# visible in the federation overview.
cutoff = now - PotatoMesh::Config.four_weeks_seconds
since_threshold = normalize_since_threshold(since, floor: cutoff)
where_clauses = ["last_seen_time >= ?"]
params = [since_threshold]
@@ -30,8 +30,9 @@ module PotatoMesh
params = []
where_clauses = []
now = Time.now.to_i
min_rx_time = now - PotatoMesh::Config.week_seconds
since_floor = node_ref ? 0 : min_rx_time
# Bulk telemetry follows the seven-day default; per-id lookups widen
# to twenty-eight days so historical chart data remains reachable.
since_floor = node_ref ? now - PotatoMesh::Config.four_weeks_seconds : now - PotatoMesh::Config.week_seconds
since_threshold = normalize_since_threshold(since, floor: since_floor)
where_clauses << "COALESCE(rx_time, telemetry_time, 0) >= ?"
params << since_threshold
@@ -390,9 +390,21 @@ module PotatoMesh
halt 404 unless federation_enabled?
content_type :json
ensure_self_instance_record!
payload = load_instances_for_api
JSON.generate(payload)
# The federation banner is rendered on every page navigation, which
# caused this endpoint to fire ~7 times in a few seconds while the
# user clicked through the site. Cache the response (including the
# self-record refresh) for a short window so navigation feels free
# without delaying signature/peer updates by more than a few
# seconds. The dedicated announcer thread keeps the underlying
# record fresh on its own cadence regardless of cache hits.
priv = private_mode? ? 1 : 0
cached = PotatoMesh::App::ApiCache.fetch("api:instances:#{priv}", ttl_seconds: 30) do
ensure_self_instance_record!
JSON.generate(load_instances_for_api)
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end
end
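The caching pattern the route relies on (fetch-or-compute with a TTL, a weak ETag derived from the payload, and prefix invalidation) can be sketched in a few lines. This is an illustration of the pattern only, not the application's actual ApiCache implementation:

```ruby
require "digest"

# A key maps to a value, its ETag, and an expiry; on miss or expiry the
# block recomputes and the ETag is rebuilt from the fresh payload.
class TtlCache
  Entry = Struct.new(:value, :etag, :expires_at)

  def initialize
    @store = {}
    @mutex = Mutex.new
  end

  def fetch(key, ttl_seconds:)
    @mutex.synchronize do
      entry = @store[key]
      return { value: entry.value, etag: entry.etag } if entry && entry.expires_at > Time.now
      value = yield
      etag = Digest::SHA256.hexdigest(value)
      @store[key] = Entry.new(value, etag, Time.now + ttl_seconds)
      { value: value, etag: etag }
    end
  end

  # Drop every entry whose key starts with the prefix, so e.g. a new
  # peer registration makes all cached instance listings recompute.
  def invalidate_prefix(prefix)
    @mutex.synchronize { @store.delete_if { |k, _| k.start_with?(prefix) } }
  end
end
```

Usage mirrors the route: `cache.fetch("api:instances:0", ttl_seconds: 30) { JSON.generate(load_instances_for_api) }` on the read path, `cache.invalidate_prefix("api:instances:")` after a write.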
@@ -315,6 +315,10 @@ module PotatoMesh
db = open_database
upsert_instance_record(db, attributes, signature)
# Drop the cached /api/instances payload so the new peer becomes
# visible on the next dashboard refresh instead of after the TTL
# naturally expires.
PotatoMesh::App::ApiCache.invalidate_prefix("api:instances:")
enqueued = enqueue_federation_crawl(
attributes[:domain],
per_response_limit: PotatoMesh::Config.federation_max_instances_per_response,
@@ -181,17 +181,30 @@ module PotatoMesh
DEFAULT_DB_BUSY_RETRY_DELAY
end
# Convenience constant describing the number of seconds in a week.
# Default rolling retention window in seconds.
#
# Used as the freshness floor for every "general" bulk read endpoint
# (nodes, messages, positions, telemetry, and the federation instance
# catalog) and for federation/dashboard activity counters. Sparse data
# is the exception and lives on {#four_weeks_seconds}; per-id lookups
# also widen to the extended window so callers can backfill historical
# context for a single node.
#
# @return [Integer] seconds in seven days.
def week_seconds
7 * 24 * 60 * 60
end
# Rolling retention window in seconds for trace and neighbor API queries.
# Extended rolling retention window in seconds.
#
# Used as the default freshness floor for endpoints whose data is more
# fragile (traces, neighbors, ingestors) and as the floor for every
# +/api/.../:id+ lookup so callers can backfill historical records that
# would otherwise fall outside the seven-day default applied to bulk
# endpoints.
#
# @return [Integer] seconds in twenty-eight days.
def trace_neighbor_window_seconds
def four_weeks_seconds
28 * 24 * 60 * 60
end
@@ -17,7 +17,53 @@
import test from 'node:test';
import assert from 'node:assert/strict';
import { createMessageNodeHydrator } from '../message-node-hydrator.js';
import { createMessageNodeHydrator, MESSAGE_HYDRATION_CONCURRENCY } from '../message-node-hydrator.js';
/**
* Build a fetch double that records the maximum number of simultaneously
* pending lookups so tests can assert the worker-pool cap is honoured.
*
* @param {number} settleDelayMs Milliseconds to keep each lookup pending
* before resolving, giving sibling workers a chance to start.
* @returns {{
* fetchNodeById: (id: string) => Promise<object|null>,
* maxInFlight: () => number,
* totalCalls: () => number,
* }} Helper API exposing the recorded peak concurrency.
*/
function makeConcurrencyProbe(settleDelayMs = 10) {
let inFlight = 0;
let peak = 0;
let total = 0;
return {
async fetchNodeById(id) {
inFlight += 1;
total += 1;
peak = Math.max(peak, inFlight);
try {
await new Promise(resolve => setTimeout(resolve, settleDelayMs));
return { node_id: id, short_name: id.slice(1, 5) };
} finally {
inFlight -= 1;
}
},
maxInFlight: () => peak,
totalCalls: () => total,
};
}
/**
* Build N messages with unique sender identifiers for concurrency tests.
*
* @param {number} count Number of messages to produce.
* @returns {Array<object>} Synthetic message payloads.
*/
function makeUniqueSenderMessages(count) {
return Array.from({ length: count }, (_, index) => ({
from_id: `!sender${index.toString().padStart(4, '0')}`,
text: `m${index}`,
}));
}
/**
* Capture warning invocations produced during a test run.
@@ -78,6 +124,66 @@ test('hydrate fetches missing nodes once and caches the result', async () => {
assert.strictEqual(result[1].node, nodesById.get('!fetch'));
});
test('hydrate caches 404 results so subsequent calls do not refetch dead ids', async () => {
let fetchCalls = 0;
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
fetchCalls += 1;
return null;
},
applyNodeFallback: () => {},
});
const messages = [{ from_id: '!gone', text: 'first' }];
const nodesById = new Map();
await hydrator.hydrate(messages, nodesById);
await hydrator.hydrate([{ from_id: '!gone', text: 'second' }], nodesById);
await hydrator.hydrate([{ from_id: '!gone', text: 'third' }], nodesById);
assert.equal(fetchCalls, 1);
});
test('cached missing entry is overridden when nodesById later resolves the id', async () => {
let fetchCalls = 0;
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
fetchCalls += 1;
return null;
},
applyNodeFallback: () => {},
});
const nodesById = new Map();
await hydrator.hydrate([{ from_id: '!late', text: 'first' }], nodesById);
assert.equal(fetchCalls, 1);
// Bulk /api/nodes refresh resolves the id afterwards.
const lateNode = { node_id: '!late', short_name: 'Late' };
nodesById.set('!late', lateNode);
const result = await hydrator.hydrate([{ from_id: '!late', text: 'second' }], nodesById);
assert.equal(fetchCalls, 1);
assert.strictEqual(result[0].node, lateNode);
});
test('hydrate caches lookup failures alongside 404s', async () => {
let fetchCalls = 0;
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
fetchCalls += 1;
throw new Error('network down');
},
applyNodeFallback: () => {},
logger: { warn() {} },
});
const nodesById = new Map();
await hydrator.hydrate([{ from_id: '!flaky', text: 'a' }], nodesById);
await hydrator.hydrate([{ from_id: '!flaky', text: 'b' }], nodesById);
assert.equal(fetchCalls, 1);
});
test('hydrate falls back to placeholders when lookups fail', async () => {
const logger = new LoggerStub();
let fallbackCalls = 0;
@@ -121,3 +227,125 @@ test('hydrate records warning when fetch rejects', async () => {
assert.ok(logger.messages.length >= 1);
assert.equal(nodesById.has('!warn'), false);
});
test('hydrate caps in-flight lookups at the default concurrency', async () => {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
});
const messages = makeUniqueSenderMessages(MESSAGE_HYDRATION_CONCURRENCY * 3);
await hydrator.hydrate(messages, new Map());
assert.equal(probe.totalCalls(), messages.length);
assert.ok(
probe.maxInFlight() <= MESSAGE_HYDRATION_CONCURRENCY,
`expected <= ${MESSAGE_HYDRATION_CONCURRENCY} concurrent fetches, observed ${probe.maxInFlight()}`,
);
});
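The cap these tests assert can be pictured with a minimal worker-pool sketch. The hydrator's actual internals are not part of this diff, so `runWithPool` and `fetchOne` are illustrative names, not the real API:

```javascript
// Illustrative worker pool: `size` workers drain one shared queue, and each
// worker pulls its next id only after its previous lookup settles, so at
// most `size` lookups are ever in flight. Sketch only; the hydrator's real
// implementation is not shown in this diff.
async function runWithPool(ids, size, fetchOne) {
  const queue = ids.slice();
  async function worker() {
    while (queue.length > 0) {
      const id = queue.shift(); // single-threaded event loop: no race here
      await fetchOne(id);
    }
  }
  const workers = Math.max(1, Math.min(size, queue.length));
  await Promise.all(Array.from({ length: workers }, () => worker()));
}
```

This is the shape the review fix in #777 describes: the queue is shared, so starting a fetch (not merely observing its settlement) is what consumes a worker slot.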
test('hydrate honours a custom concurrency override', async () => {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
concurrency: 2,
});
const messages = makeUniqueSenderMessages(8);
await hydrator.hydrate(messages, new Map());
assert.equal(probe.totalCalls(), 8);
assert.equal(probe.maxInFlight(), 2);
});
test('hydrate serialises lookups when concurrency is one', async () => {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
concurrency: 1,
});
const messages = makeUniqueSenderMessages(4);
await hydrator.hydrate(messages, new Map());
assert.equal(probe.maxInFlight(), 1);
});
test('hydrate falls back to the default cap for invalid concurrency values', async () => {
for (const invalid of [0, -3, Number.NaN, Number.POSITIVE_INFINITY, 'four']) {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
concurrency: invalid,
});
const messages = makeUniqueSenderMessages(MESSAGE_HYDRATION_CONCURRENCY * 2);
await hydrator.hydrate(messages, new Map());
assert.ok(
probe.maxInFlight() <= MESSAGE_HYDRATION_CONCURRENCY,
`concurrency=${String(invalid)} should fall back to default; observed peak ${probe.maxInFlight()}`,
);
}
});
test('factory rejects missing fetch and fallback dependencies', () => {
assert.throws(
() => createMessageNodeHydrator({ applyNodeFallback: () => {} }),
TypeError,
);
assert.throws(
() => createMessageNodeHydrator({ fetchNodeById: async () => null }),
TypeError,
);
});
test('hydrate skips non-object entries and senderless messages', async () => {
let fetchCalls = 0;
const hydrator = createMessageNodeHydrator({
fetchNodeById: async () => {
fetchCalls += 1;
return null;
},
applyNodeFallback: () => {},
});
const senderless = { text: 'no sender' };
const messages = [null, 'not-an-object', senderless];
const result = await hydrator.hydrate(messages, new Map());
assert.equal(fetchCalls, 0);
assert.equal(result.length, 3);
assert.strictEqual(senderless.node, null);
});
test('hydrate dedupes duplicate senders without exceeding the cap', async () => {
const probe = makeConcurrencyProbe();
const hydrator = createMessageNodeHydrator({
fetchNodeById: probe.fetchNodeById,
applyNodeFallback: () => {},
concurrency: 2,
});
// Twenty messages but only four unique senders. After the first lookup
// for a given sender resolves, ``resolveNode`` writes the result into the
// shared ``nodesById`` cache; every later message with the same id is
// bound synchronously from that cache before it ever reaches the worker
// pool, so the total fetch count collapses to the four unique senders.
// (The inflight-promise map only matters when two workers happen to race
// on the same id, which rarely happens at concurrency=2 — the
// ``nodesById`` short-circuit is the dominant mechanism here.)
const senders = ['!aaa', '!bbb', '!ccc', '!ddd'];
const messages = Array.from({ length: 20 }, (_, index) => ({
from_id: senders[index % senders.length],
text: `dup${index}`,
}));
await hydrator.hydrate(messages, new Map());
assert.equal(probe.totalCalls(), senders.length);
assert.ok(probe.maxInFlight() <= 2);
});
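The two dedupe layers the comment in this test distinguishes can be sketched as follows. `makeNodeResolver` is a hypothetical reconstruction; the real `resolveNode` inside the hydrator is not shown in this diff:

```javascript
// Hypothetical sketch of the two dedupe layers described above: the shared
// nodesById cache short-circuits ids that already resolved, and an
// in-flight promise map collapses the rarer case of two workers racing on
// the same id before the first lookup settles.
function makeNodeResolver(fetchNodeById, nodesById) {
  const inFlight = new Map();
  return async function resolveNode(id) {
    if (nodesById.has(id)) return nodesById.get(id); // dominant short-circuit
    if (inFlight.has(id)) return inFlight.get(id); // racing worker reuses the promise
    const pending = fetchNodeById(id)
      .then(node => {
        if (node) nodesById.set(id, node);
        return node;
      })
      .finally(() => inFlight.delete(id));
    inFlight.set(id, pending);
    return pending;
  };
}
```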
@@ -42,9 +42,13 @@ const {
lookupNeighborDetails,
seedNeighborRoleIndex,
buildNeighborRoleIndex,
fetchNodeDetailsIntoIndex,
renderRoleAwareBadge,
collectTraceNodeFetchMap,
buildTraceRoleIndex,
categoriseNeighbors,
renderNeighborBadge,
renderNeighborGroup,
renderNeighborGroups,
renderSingleNodeTable,
classifySnapshot,
@@ -1257,3 +1261,245 @@ test('initializeNodeDetailPage handles missing reference payloads', async () =>
assert.equal(result, false);
assert.equal(element.innerHTML.includes('Node reference unavailable'), true);
});
test('parseReferencePayload returns null for blank or unparseable input', () => {
assert.equal(parseReferencePayload(null), null);
assert.equal(parseReferencePayload(' '), null);
assert.equal(parseReferencePayload('not-json'), null);
assert.equal(parseReferencePayload(JSON.stringify(42)), null);
assert.deepEqual(parseReferencePayload(JSON.stringify({ nodeId: '!a' })), { nodeId: '!a' });
});
test('initializeNodeDetailPage rejects invalid documents and missing identifiers', async () => {
await assert.rejects(
() => initializeNodeDetailPage({ document: null, fetchImpl: async () => ({}) }),
/document with querySelector/,
);
const root = { dataset: { nodeReference: JSON.stringify({}) }, innerHTML: '' };
const documentStub = {
querySelector: selector => (selector === '#nodeDetail' ? root : null),
};
const result = await initializeNodeDetailPage({
document: documentStub,
fetchImpl: async () => ({ ok: true, json: async () => ({}) }),
renderShortHtml: short => `<span>${short}</span>`,
});
assert.equal(result, false);
assert.equal(root.innerHTML.includes('Node identifier missing'), true);
});
test('renderRoleAwareBadge falls back when both shortName and identifier are absent', () => {
const html = renderRoleAwareBadge((short, role) => `<b data-role="${role}">${short}</b>`, {});
assert.equal(html, '<b data-role="CLIENT">?</b>');
});
test('renderRoleAwareBadge invokes default span renderer when renderShortHtml is missing', () => {
const html = renderRoleAwareBadge(null, { shortName: 'AB&CD' });
assert.equal(html.includes('class="short-name"'), true);
assert.equal(html.includes('AB&amp;CD'), true);
});
test('seedNeighborRoleIndex tolerates non-array and non-object entries', () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
assert.equal(seedNeighborRoleIndex(index, null).size, 0);
assert.equal(seedNeighborRoleIndex(index, 'not-an-array').size, 0);
assert.equal(seedNeighborRoleIndex(index, [null, 7, 'string']).size, 0);
});
test('seedNeighborRoleIndex hydrates roles from nested neighbor and node objects', () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
seedNeighborRoleIndex(index, [
{
neighbor: { node_id: '!ally', node_num: 11, role: 'ROUTER', short_name: 'ALLY', long_name: 'Ally Long' },
node: { node_id: '!self', node_num: 22, role: 'CLIENT', short_name: 'SELF', long_name: 'Self Long' },
},
]);
assert.equal(index.byId.get('!ally'), 'ROUTER');
assert.equal(index.byId.get('!self'), 'CLIENT');
assert.equal(index.byNum.get(11), 'ROUTER');
assert.equal(index.byNum.get(22), 'CLIENT');
const allyDetails = lookupNeighborDetails(index, { identifier: '!ally' });
assert.equal(allyDetails.shortName, 'ALLY');
assert.equal(allyDetails.longName, 'Ally Long');
});
test('fetchNodeDetailsIntoIndex skips work when no fetch is reachable', async () => {
const originalFetch = globalThis.fetch;
globalThis.fetch = undefined;
try {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
await fetchNodeDetailsIntoIndex(index, new Map([['x', 'x']]), undefined);
assert.equal(index.byId.size, 0);
} finally {
globalThis.fetch = originalFetch;
}
});
test('fetchNodeDetailsIntoIndex returns immediately for empty or non-Map inputs', async () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
let calls = 0;
const fetchImpl = async () => { calls += 1; return { ok: true, json: async () => ({}) }; };
await fetchNodeDetailsIntoIndex(index, null, fetchImpl);
await fetchNodeDetailsIntoIndex(index, new Map(), fetchImpl);
assert.equal(calls, 0);
});
test('fetchNodeDetailsIntoIndex silently ignores 404 responses without registering', async () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
const fetchImpl = async () => ({ ok: false, status: 404, json: async () => ({}) });
await fetchNodeDetailsIntoIndex(index, new Map([['gone', 'gone']]), fetchImpl);
assert.equal(index.byId.size, 0);
});
test('renderSingleNodeTable returns empty string for invalid inputs', () => {
assert.equal(renderSingleNodeTable(null, () => ''), '');
assert.equal(renderSingleNodeTable({ nodeId: '!a' }, null), '');
assert.equal(renderSingleNodeTable('string-not-object', () => ''), '');
});
test('renderTelemetryCharts returns empty string when no entries fall in the window', () => {
const out = renderTelemetryCharts(makeAggregatedNode([
{ timestamp: '1970-01-01T00:00:00Z' },
]), { now: () => Date.UTC(2026, 0, 1) });
assert.equal(out, '');
});
test('renderTelemetryCharts returns empty string when chart specs produce no markup', () => {
// Aggregated snapshot with valid timestamp but no telemetry fields any chart
// can plot — every chart spec filters its empty series and returns ''.
const node = makeAggregatedNode([
{ rx_time: CHART_NOW_SECONDS - 60, telemetry_type: 'device' },
]);
const out = renderTelemetryCharts(node, { nowMs: CHART_NOW_MS });
assert.equal(out, '');
});
test('renderTracePath returns empty string when fewer than two badges render', () => {
const renderShortHtml = short => `<span>${short}</span>`;
// Single-element path → items.length < 2 → empty result.
assert.equal(renderTracePath(['!only'], renderShortHtml), '');
// Two refs but the renderer yields blanks → filter strips them → items.length < 2.
assert.equal(renderTracePath([{ identifier: '!a' }, { identifier: '!b' }], () => ''), '');
});
test('renderNeighborBadge returns empty string for invalid inputs', () => {
assert.equal(renderNeighborBadge(null, 'heardBy', () => ''), '');
assert.equal(renderNeighborBadge({ neighbor_id: '!a' }, 'weHear', null), '');
// Entry without any identifier in keys → returns ''
assert.equal(renderNeighborBadge({ snr: 5 }, 'weHear', () => ''), '');
});
test('renderNeighborBadge merges role-index metadata into the source object', () => {
const source = {};
const entry = { neighbor_id: '!ally', neighbor: source };
const roleIndex = {
byId: new Map([['!ally', 'ROUTER']]),
byNum: new Map(),
detailsById: new Map([
['!ally', { shortName: 'ALLY', longName: 'Ally Long', role: 'ROUTER' }],
]),
detailsByNum: new Map(),
};
const renderShortHtml = (short, role, long, badgeSource) =>
`<b data-role="${role}" data-long="${long}" data-source-role="${badgeSource.role}">${short}</b>`;
const html = renderNeighborBadge(entry, 'weHear', renderShortHtml, roleIndex);
assert.match(html, /ALLY/);
assert.equal(source.short_name, 'ALLY');
assert.equal(source.long_name, 'Ally Long');
assert.equal(source.role, 'ROUTER');
});
test('renderNeighborBadge derives short name from identifier when no metadata is available', () => {
const html = renderNeighborBadge(
{ neighbor_id: '!abcdef12' },
'weHear',
short => `<span>${short}</span>`,
);
// Last four hex chars of identifier, uppercased.
assert.equal(html, '<span>EF12</span>');
});
test('renderNeighborGroup skips entries that fail to render and returns empty when none survive', () => {
const renderShortHtml = (short, role) => `<span data-role="${role}">${short}</span>`;
// Two entries; only one yields a valid badge.
const html = renderNeighborGroup(
'Heard by',
[
{ node_id: '!peer', node_short_name: 'PEER' },
{ snr: 5 }, // no identifier → renderNeighborBadge returns '' → filtered out.
],
'heardBy',
renderShortHtml,
);
assert.equal(html.includes('PEER'), true);
assert.equal(html.match(/<li>/g).length, 1);
// All entries fail → returns ''.
const empty = renderNeighborGroup('Heard by', [{ snr: 1 }, { snr: 2 }], 'heardBy', renderShortHtml);
assert.equal(empty, '');
});
test('renderTraceroutes returns empty string when no trace path renders content', () => {
const renderShortHtml = short => `<span>${short}</span>`;
// Each trace yields a single-hop path which renderTracePath rejects → no items remain.
assert.equal(renderTraceroutes([{ src: '!a', hops: [], dest: null }], renderShortHtml), '');
});
test('fetchNodeDetailHtml rejects non-object references', async () => {
await assert.rejects(() => fetchNodeDetailHtml(null), TypeError);
await assert.rejects(() => fetchNodeDetailHtml('not-an-object'), TypeError);
});
test('normalizeNodeReference returns null for non-object inputs and references missing both ids', () => {
const { normalizeNodeReference } = __testUtils;
assert.equal(normalizeNodeReference(null), null);
assert.equal(normalizeNodeReference('not-an-object'), null);
assert.equal(normalizeNodeReference({}), null);
assert.deepEqual(normalizeNodeReference({ nodeId: '!a' }), { nodeId: '!a', nodeNum: null });
});
test('fetchNodeDetailsIntoIndex warns and continues when a non-404 response fails', async () => {
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
const fetchImpl = async () => ({ ok: false, status: 503, json: async () => ({}) });
const originalWarn = console.warn;
const messages = [];
console.warn = (...args) => messages.push(args[0]);
try {
await fetchNodeDetailsIntoIndex(index, new Map([['ouch', 'ouch']]), fetchImpl, 'unit-test');
} finally {
console.warn = originalWarn;
}
assert.equal(index.byId.size, 0);
assert.equal(messages.some(msg => typeof msg === 'string' && msg.includes('unit-test')), true);
});
test('fetchNodeDetailsIntoIndex caps in-flight requests at NEIGHBOR_ROLE_FETCH_CONCURRENCY', async () => {
// Eight identifiers, four-wide pool: at most four fetches should be in flight.
const ids = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'];
const fetchIdMap = new Map(ids.map(id => [id, id]));
let inFlight = 0;
let peak = 0;
const release = [];
const fetchImpl = () => {
inFlight += 1;
if (inFlight > peak) peak = inFlight;
return new Promise(resolve => {
release.push(() => {
inFlight -= 1;
resolve({ ok: true, status: 200, json: async () => ({ node_id: '!stub', role: 'CLIENT' }) });
});
});
};
const index = { byId: new Map(), byNum: new Map(), detailsById: new Map(), detailsByNum: new Map() };
const work = fetchNodeDetailsIntoIndex(index, fetchIdMap, fetchImpl);
// Yield so the four workers reach their first await.
await new Promise(resolve => setImmediate(resolve));
assert.equal(peak, 4, `expected concurrency cap of 4, observed peak ${peak}`);
// Drain in two waves; the second wave only starts once the first releases.
release.splice(0, 4).forEach(fn => fn());
await new Promise(resolve => setImmediate(resolve));
release.splice(0).forEach(fn => fn());
await work;
assert.equal(peak, 4);
});
@@ -0,0 +1,92 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { numberOrNull, stringOrNull } from '../value-helpers.js';
// ---------------------------------------------------------------------------
// numberOrNull
// ---------------------------------------------------------------------------
test('numberOrNull passes through finite numbers unchanged', () => {
assert.equal(numberOrNull(42), 42);
assert.equal(numberOrNull(-3.14), -3.14);
assert.equal(numberOrNull(0), 0);
});
test('numberOrNull returns null for non-finite numbers', () => {
assert.equal(numberOrNull(Number.NaN), null);
assert.equal(numberOrNull(Number.POSITIVE_INFINITY), null);
assert.equal(numberOrNull(Number.NEGATIVE_INFINITY), null);
});
test('numberOrNull returns null for null, undefined, and empty string', () => {
assert.equal(numberOrNull(null), null);
assert.equal(numberOrNull(undefined), null);
assert.equal(numberOrNull(''), null);
});
test('numberOrNull coerces numeric strings into numbers', () => {
assert.equal(numberOrNull('42'), 42);
assert.equal(numberOrNull(' -1.5 '), -1.5);
assert.equal(numberOrNull('0'), 0);
});
test('numberOrNull rejects non-numeric strings', () => {
assert.equal(numberOrNull('not a number'), null);
assert.equal(numberOrNull('1.2.3'), null);
});
test('numberOrNull rejects objects and arrays', () => {
assert.equal(numberOrNull({}), null);
assert.equal(numberOrNull([]), 0); // [] slips through: Array#toString gives '', and Number('') is 0.
assert.equal(numberOrNull([1, 2]), null);
});
test('numberOrNull treats booleans as their numeric coercion', () => {
// Number(true) === 1, Number(false) === 0; the documented contract is that
// any value Number() coerces to a finite number passes through.
assert.equal(numberOrNull(true), 1);
assert.equal(numberOrNull(false), 0);
});
// ---------------------------------------------------------------------------
// stringOrNull
// ---------------------------------------------------------------------------
test('stringOrNull returns trimmed strings for non-empty input', () => {
assert.equal(stringOrNull('hello'), 'hello');
assert.equal(stringOrNull(' spaced '), 'spaced');
});
test('stringOrNull returns null for null and undefined', () => {
assert.equal(stringOrNull(null), null);
assert.equal(stringOrNull(undefined), null);
});
test('stringOrNull returns null for the empty string and whitespace-only input', () => {
assert.equal(stringOrNull(''), null);
assert.equal(stringOrNull(' '), null);
assert.equal(stringOrNull('\t\n'), null);
});
test('stringOrNull stringifies non-string inputs', () => {
assert.equal(stringOrNull(42), '42');
assert.equal(stringOrNull(0), '0');
assert.equal(stringOrNull(true), 'true');
});
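Taken together, these tests pin down the helpers' full contract. A minimal implementation consistent with them might look like this; it is a reconstruction, and the shipped `value-helpers.js` may differ in detail:

```javascript
// Hypothetical reference sketch of value-helpers.js, reconstructed from the
// tests above.
function numberOrNull(value) {
  // null, undefined, and '' are explicit misses; everything else goes
  // through Number(), and only finite results pass through.
  if (value === null || value === undefined || value === '') return null;
  const num = Number(value);
  return Number.isFinite(num) ? num : null;
}

function stringOrNull(value) {
  // Stringify, then trim; whitespace-only input collapses to null.
  if (value === null || value === undefined) return null;
  const text = String(value).trim();
  return text === '' ? null : text;
}
```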
File diff suppressed because it is too large
@@ -0,0 +1,346 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
fetchMessages,
fetchNeighbors,
fetchNodeById,
fetchNodes,
fetchPositions,
fetchTelemetry,
fetchTraces,
filterRecentTraces,
resolveSnapshotLimit,
} from '../data-fetchers.js';
import { NODE_LIMIT, SNAPSHOT_LIMIT, TRACE_LIMIT } from '../constants.js';
/**
* Install a temporary global ``fetch`` stub that records every call and
* returns the supplied response. Returns a teardown handle that restores
* the previous binding and exposes the captured call list.
*
* @param {{ ok?: boolean, status?: number, body?: any }|Function} responseOrFn
* Response descriptor or an async function returning one.
* @returns {{ calls: Array<{url: string, init: any}>, restore: Function }}
* Stub control surface.
*/
function withFetchStub(responseOrFn) {
const previous = globalThis.fetch;
const calls = [];
const handler = typeof responseOrFn === 'function'
? responseOrFn
: () => responseOrFn;
globalThis.fetch = async (url, init) => {
calls.push({ url, init });
const descriptor = await handler(url, init);
return {
ok: descriptor.ok ?? true,
status: descriptor.status ?? 200,
json: async () => descriptor.body ?? [],
};
};
return {
calls,
restore() {
if (previous === undefined) {
delete globalThis.fetch;
} else {
globalThis.fetch = previous;
}
},
};
}
// ---------------------------------------------------------------------------
// resolveSnapshotLimit
// ---------------------------------------------------------------------------
test('resolveSnapshotLimit multiplies the requested limit by SNAPSHOT_LIMIT', () => {
assert.equal(resolveSnapshotLimit(10), Math.min(10 * SNAPSHOT_LIMIT, NODE_LIMIT));
});
test('resolveSnapshotLimit caps to maxLimit', () => {
assert.equal(resolveSnapshotLimit(NODE_LIMIT), NODE_LIMIT);
assert.equal(resolveSnapshotLimit(NODE_LIMIT * 2), NODE_LIMIT);
});
test('resolveSnapshotLimit defaults to NODE_LIMIT for invalid input', () => {
assert.equal(resolveSnapshotLimit(null), NODE_LIMIT);
assert.equal(resolveSnapshotLimit(0), NODE_LIMIT);
assert.equal(resolveSnapshotLimit(-5), NODE_LIMIT);
assert.equal(resolveSnapshotLimit(Number.NaN), NODE_LIMIT);
});
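The three tests above fully determine the function's arithmetic. A sketch of that contract, using placeholder values for `SNAPSHOT_LIMIT` and `NODE_LIMIT` rather than the real constants:

```javascript
// Sketch of the contract the tests above pin down; SNAPSHOT_LIMIT and
// NODE_LIMIT are placeholder values here, not the real constants module.
const SNAPSHOT_LIMIT = 3;
const NODE_LIMIT = 100;

function resolveSnapshotLimit(limit, maxLimit = NODE_LIMIT) {
  // Invalid input (null, 0, negatives, NaN) falls back to the hard cap.
  if (!Number.isFinite(limit) || limit <= 0) return maxLimit;
  return Math.min(limit * SNAPSHOT_LIMIT, maxLimit);
}
```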
// ---------------------------------------------------------------------------
// filterRecentTraces
// ---------------------------------------------------------------------------
test('filterRecentTraces returns empty array for non-array input', () => {
assert.deepEqual(filterRecentTraces(null), []);
assert.deepEqual(filterRecentTraces(undefined), []);
assert.deepEqual(filterRecentTraces({}), []);
});
test('filterRecentTraces returns a copy of the input when maxAgeSeconds is non-positive', () => {
const input = [{ rx_time: 1 }, { rx_time: 2 }];
const result = filterRecentTraces(input, 0);
assert.deepEqual(result, input);
assert.notEqual(result, input); // Returns a copy, not the same reference.
const negativeResult = filterRecentTraces(input, -10);
assert.deepEqual(negativeResult, input);
});
test('filterRecentTraces drops traces older than the cutoff', () => {
const nowSeconds = Math.floor(Date.now() / 1000);
const traces = [
{ rx_time: nowSeconds }, // recent
{ rx_time: nowSeconds - 7200 }, // older than 1h
{ rxIso: new Date((nowSeconds - 30) * 1000).toISOString() }, // recent via ISO
];
const filtered = filterRecentTraces(traces, 3600);
assert.equal(filtered.length, 2);
});
test('filterRecentTraces drops traces with no usable timestamp', () => {
const filtered = filterRecentTraces([{ noTime: true }, { rx_time: null }], 3600);
assert.deepEqual(filtered, []);
});
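These four tests imply a filter that accepts either a numeric `rx_time` (seconds) or an ISO `rxIso` string. A hypothetical sketch matching that behaviour, which the shipped `filterRecentTraces` may implement differently:

```javascript
// Hypothetical sketch reconstructed from the tests above: non-arrays yield
// [], a non-positive window returns a shallow copy, and each trace must
// carry a usable timestamp (numeric rx_time in seconds, or an ISO rxIso)
// newer than the cutoff to survive.
function filterRecentTraces(traces, maxAgeSeconds) {
  if (!Array.isArray(traces)) return [];
  if (!(maxAgeSeconds > 0)) return traces.slice(); // copy, never the same reference
  const cutoff = Date.now() / 1000 - maxAgeSeconds;
  return traces.filter(trace => {
    if (!trace || typeof trace !== 'object') return false;
    if (Number.isFinite(trace.rx_time)) return trace.rx_time >= cutoff;
    if (typeof trace.rxIso === 'string') {
      const ms = Date.parse(trace.rxIso);
      return Number.isFinite(ms) && ms / 1000 >= cutoff;
    }
    return false; // no usable timestamp
  });
}
```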
// ---------------------------------------------------------------------------
// fetchNodeById
// ---------------------------------------------------------------------------
test('fetchNodeById returns null for non-string inputs', async () => {
assert.equal(await fetchNodeById(null), null);
assert.equal(await fetchNodeById(42), null);
});
test('fetchNodeById returns null for blank string inputs', async () => {
assert.equal(await fetchNodeById(''), null);
assert.equal(await fetchNodeById(' '), null);
});
test('fetchNodeById returns null on HTTP 404', async () => {
const stub = withFetchStub({ ok: false, status: 404 });
try {
assert.equal(await fetchNodeById('!aabbccdd'), null);
assert.equal(stub.calls.length, 1);
assert.ok(stub.calls[0].url.includes('!aabbccdd'));
} finally {
stub.restore();
}
});
test('fetchNodeById throws on non-OK non-404 responses', async () => {
const stub = withFetchStub({ ok: false, status: 500 });
try {
await assert.rejects(() => fetchNodeById('!aabbccdd'), /HTTP 500/);
} finally {
stub.restore();
}
});
test('fetchNodeById returns parsed payload on success', async () => {
const stub = withFetchStub({ ok: true, status: 200, body: { node_id: '!aabbccdd' } });
try {
const result = await fetchNodeById('!aabbccdd');
assert.deepEqual(result, { node_id: '!aabbccdd' });
} finally {
stub.restore();
}
});
// ---------------------------------------------------------------------------
// fetchNodes / fetchNeighbors / fetchTelemetry / fetchPositions / fetchTraces
// ---------------------------------------------------------------------------
test('fetchNodes appends since when greater than zero', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchNodes(10, 1234);
assert.ok(stub.calls[0].url.includes('since=1234'));
} finally {
stub.restore();
}
});
test('fetchNodes throws on non-OK', async () => {
const stub = withFetchStub({ ok: false, status: 503 });
try {
await assert.rejects(() => fetchNodes(), /HTTP 503/);
} finally {
stub.restore();
}
});
test('fetchNeighbors hits the neighbours endpoint', async () => {
const stub = withFetchStub({ ok: true, body: [{ node_id: '!a' }] });
try {
const result = await fetchNeighbors(50);
assert.ok(stub.calls[0].url.startsWith('/api/neighbors?'));
assert.deepEqual(result, [{ node_id: '!a' }]);
} finally {
stub.restore();
}
});
test('fetchNeighbors propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 502 });
try {
await assert.rejects(() => fetchNeighbors(), /HTTP 502/);
} finally {
stub.restore();
}
});
test('fetchTelemetry hits the telemetry endpoint', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchTelemetry(50, 100);
assert.ok(stub.calls[0].url.startsWith('/api/telemetry?'));
assert.ok(stub.calls[0].url.includes('since=100'));
} finally {
stub.restore();
}
});
test('fetchTelemetry propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 504 });
try {
await assert.rejects(() => fetchTelemetry(), /HTTP 504/);
} finally {
stub.restore();
}
});
test('fetchPositions hits the positions endpoint', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchPositions();
assert.ok(stub.calls[0].url.startsWith('/api/positions?'));
} finally {
stub.restore();
}
});
test('fetchPositions propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 500 });
try {
await assert.rejects(() => fetchPositions(), /HTTP 500/);
} finally {
stub.restore();
}
});
test('fetchTraces filters expired entries', async () => {
const nowSeconds = Math.floor(Date.now() / 1000);
const stub = withFetchStub({
ok: true,
body: [
{ rx_time: nowSeconds },
{ rx_time: nowSeconds - 365 * 24 * 3600 },
],
});
try {
const result = await fetchTraces();
// Only the recent trace should survive.
assert.equal(result.length, 1);
assert.ok(stub.calls[0].url.startsWith('/api/traces?'));
} finally {
stub.restore();
}
});
test('fetchTraces falls back to TRACE_LIMIT on bogus input', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchTraces(Number.NaN);
assert.ok(stub.calls[0].url.includes(`limit=${TRACE_LIMIT}`));
} finally {
stub.restore();
}
});
test('fetchTraces propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 500 });
try {
await assert.rejects(() => fetchTraces(), /HTTP 500/);
} finally {
stub.restore();
}
});
// ---------------------------------------------------------------------------
// fetchMessages
// ---------------------------------------------------------------------------
test('fetchMessages returns [] when chatEnabled is false', async () => {
const stub = withFetchStub({ ok: true, body: [{ id: 1 }] });
try {
const result = await fetchMessages(10, { chatEnabled: false });
assert.deepEqual(result, []);
assert.equal(stub.calls.length, 0);
} finally {
stub.restore();
}
});
test('fetchMessages applies normaliseMessageLimit when provided', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchMessages(999, {
normaliseMessageLimit: () => 25,
chatEnabled: true,
});
assert.ok(stub.calls[0].url.includes('limit=25'));
} finally {
stub.restore();
}
});
test('fetchMessages forwards encrypted=true and since when set', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchMessages(10, { encrypted: true, since: 555 });
assert.ok(stub.calls[0].url.includes('encrypted=true'));
assert.ok(stub.calls[0].url.includes('since=555'));
} finally {
stub.restore();
}
});
test('fetchMessages omits limit normalisation when normaliser is absent', async () => {
const stub = withFetchStub({ ok: true, body: [] });
try {
await fetchMessages(50);
assert.ok(stub.calls[0].url.includes('limit=50'));
} finally {
stub.restore();
}
});
test('fetchMessages propagates HTTP errors', async () => {
const stub = withFetchStub({ ok: false, status: 500 });
try {
await assert.rejects(() => fetchMessages(10), /HTTP 500/);
} finally {
stub.restore();
}
});
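The `fetchMessages` tests above collectively describe its query-building rules. A hedged sketch of that behaviour (the real `data-fetchers.js` implementation is not shown in this diff, and option names beyond those the tests exercise are assumptions):

```javascript
// Hypothetical sketch of fetchMessages, reconstructed from the tests above:
// chat-disabled calls short-circuit without touching the network, an
// optional normaliseMessageLimit hook rewrites the limit, and encrypted /
// since are appended only when set.
async function fetchMessages(limit, options = {}) {
  const {
    chatEnabled = true,
    encrypted = false,
    since = 0,
    normaliseMessageLimit,
  } = options;
  if (!chatEnabled) return []; // chat disabled: no network call at all
  const effectiveLimit = typeof normaliseMessageLimit === 'function'
    ? normaliseMessageLimit(limit)
    : limit;
  const params = new URLSearchParams({ limit: String(effectiveLimit) });
  if (encrypted) params.set('encrypted', 'true');
  if (since) params.set('since', String(since));
  const response = await fetch(`/api/messages?${params}`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
}
```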
@@ -0,0 +1,242 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
buildTelemetryIndex,
mergePositionsIntoNodes,
mergeTelemetryIntoNodes,
} from '../data-merge.js';
// ---------------------------------------------------------------------------
// mergePositionsIntoNodes — early returns
// ---------------------------------------------------------------------------
test('mergePositionsIntoNodes is a no-op when nodes is not an array', () => {
const positions = [{ node_id: '!a', latitude: 1, longitude: 2 }];
// Just assert no throw.
mergePositionsIntoNodes(null, positions);
mergePositionsIntoNodes(undefined, positions);
});
test('mergePositionsIntoNodes is a no-op when positions is not an array', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, null);
mergePositionsIntoNodes(nodes, undefined);
assert.deepEqual(nodes, [{ node_id: '!a' }]);
});
test('mergePositionsIntoNodes is a no-op for empty node arrays', () => {
mergePositionsIntoNodes([], [{ node_id: '!a', latitude: 1, longitude: 2 }]);
});
test('mergePositionsIntoNodes is a no-op when no nodes carry a string node_id', () => {
// Hits the `if (nodesById.size === 0) return;` early exit.
const nodes = [{ node_num: 5 }];
mergePositionsIntoNodes(nodes, [{ node_id: '!a', latitude: 1, longitude: 2 }]);
assert.deepEqual(nodes, [{ node_num: 5 }]);
});
// ---------------------------------------------------------------------------
// mergePositionsIntoNodes — merge logic
// ---------------------------------------------------------------------------
test('mergePositionsIntoNodes copies coordinates when none exist', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 52.5,
longitude: 13.4,
altitude: 100,
position_time: 1700000000,
position_time_iso: '2023-11-14T22:13:20.000Z',
location_source: 'gps',
precision_bits: 24,
}]);
assert.equal(nodes[0].latitude, 52.5);
assert.equal(nodes[0].longitude, 13.4);
assert.equal(nodes[0].altitude, 100);
assert.equal(nodes[0].position_time, 1700000000);
assert.equal(nodes[0].pos_time_iso, '2023-11-14T22:13:20.000Z');
assert.equal(nodes[0].location_source, 'gps');
assert.equal(nodes[0].precision_bits, 24);
});
test('mergePositionsIntoNodes generates an ISO when only numeric position_time is supplied', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 1,
longitude: 2,
position_time: 1700000000,
}]);
assert.equal(nodes[0].pos_time_iso, new Date(1700000000 * 1000).toISOString());
});
test('mergePositionsIntoNodes preserves ISO when numeric position_time is missing', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 1,
longitude: 2,
position_time_iso: '2024-01-01T00:00:00.000Z',
}]);
assert.equal(nodes[0].pos_time_iso, '2024-01-01T00:00:00.000Z');
});
test('mergePositionsIntoNodes ignores incoming positions with non-finite coordinates', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{ node_id: '!a', latitude: 'NaN', longitude: 1 }]);
assert.equal(nodes[0].latitude, undefined);
});
test('mergePositionsIntoNodes only applies the first matching position per node', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [
{ node_id: '!a', latitude: 1, longitude: 2 },
{ node_id: '!a', latitude: 99, longitude: 99 },
]);
assert.equal(nodes[0].latitude, 1);
});
test('mergePositionsIntoNodes skips packets older than the existing snapshot', () => {
const nodes = [{
node_id: '!a',
latitude: 5,
longitude: 5,
position_time: 2000,
}];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 9,
longitude: 9,
position_time: 1000,
}]);
assert.equal(nodes[0].latitude, 5); // unchanged
});
test('mergePositionsIntoNodes accepts strictly newer packets', () => {
const nodes = [{
node_id: '!a',
latitude: 5,
longitude: 5,
position_time: 1000,
}];
mergePositionsIntoNodes(nodes, [{
node_id: '!a',
latitude: 9,
longitude: 9,
position_time: 2000,
}]);
assert.equal(nodes[0].latitude, 9);
});
test('mergePositionsIntoNodes skips entries lacking a node_id', () => {
const nodes = [{ node_id: '!a' }];
mergePositionsIntoNodes(nodes, [{ latitude: 1, longitude: 2 }, null]);
assert.equal(nodes[0].latitude, undefined);
});
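For reference, the freshness-guarded merge these tests pin down can be sketched as follows. This is a hypothetical reconstruction from the assertions above, renamed so it cannot clash with the real import; the production `data-merge.js` may handle more fields (e.g. `location_source`, `precision_bits`) and differ in detail.

```javascript
// Sketch: merge position packets into node records, guarded by position_time.
// Hypothetical reconstruction -- not the production data-merge.js.
function mergePositionsIntoNodesSketch(nodes, positions) {
  if (!Array.isArray(nodes) || !Array.isArray(positions) || nodes.length === 0) return;
  const nodesById = new Map();
  for (const node of nodes) {
    if (node && typeof node.node_id === 'string') nodesById.set(node.node_id, node);
  }
  if (nodesById.size === 0) return;
  for (const pos of positions) {
    if (!pos || typeof pos.node_id !== 'string') continue;
    const node = nodesById.get(pos.node_id);
    if (!node) continue;
    // Ignore packets without finite coordinates.
    if (!Number.isFinite(pos.latitude) || !Number.isFinite(pos.longitude)) continue;
    // Freshness guard: only strictly newer packets replace an existing snapshot.
    if (
      Number.isFinite(node.position_time) &&
      Number.isFinite(pos.position_time) &&
      pos.position_time <= node.position_time
    ) {
      continue;
    }
    node.latitude = pos.latitude;
    node.longitude = pos.longitude;
    if (pos.altitude !== undefined) node.altitude = pos.altitude;
    if (Number.isFinite(pos.position_time)) node.position_time = pos.position_time;
    if (typeof pos.position_time_iso === 'string' && pos.position_time_iso) {
      node.pos_time_iso = pos.position_time_iso;
    } else if (Number.isFinite(pos.position_time)) {
      node.pos_time_iso = new Date(pos.position_time * 1000).toISOString();
    }
    // Only the first matching packet applies; drop the node from the index.
    nodesById.delete(pos.node_id);
  }
}
```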
// ---------------------------------------------------------------------------
// buildTelemetryIndex
// ---------------------------------------------------------------------------
test('buildTelemetryIndex returns empty maps for non-array input', () => {
const { byNodeId, byNodeNum } = buildTelemetryIndex(null);
assert.equal(byNodeId.size, 0);
assert.equal(byNodeNum.size, 0);
});
test('buildTelemetryIndex keeps the freshest entry per node_id', () => {
const { byNodeId } = buildTelemetryIndex([
{ node_id: '!a', rx_time: 100, payload: 'old' },
{ node_id: '!a', rx_time: 200, payload: 'new' },
]);
assert.equal(byNodeId.get('!a').entry.payload, 'new');
});
test('buildTelemetryIndex falls back to telemetry_time when rx_time is absent', () => {
const { byNodeId } = buildTelemetryIndex([
{ node_id: '!a', telemetry_time: 50, payload: 'fallback' },
]);
assert.equal(byNodeId.get('!a').timestamp, 50);
});
test('buildTelemetryIndex indexes by numeric node_num', () => {
const { byNodeNum } = buildTelemetryIndex([
{ node_num: 42, rx_time: 100, payload: 'first' },
]);
assert.ok(byNodeNum.has(42));
});
test('buildTelemetryIndex skips non-object entries', () => {
const { byNodeId, byNodeNum } = buildTelemetryIndex([null, 'string', 5]);
assert.equal(byNodeId.size, 0);
assert.equal(byNodeNum.size, 0);
});
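The indexing behaviour exercised above can be sketched as a single pass that keeps the freshest record per key. A hypothetical reconstruction, renamed to avoid clashing with the import; tie-breaking on equal timestamps is an assumption (the tests only cover strictly different ones).

```javascript
// Sketch: index telemetry entries by node_id and node_num, keeping the entry
// with the freshest timestamp per key. Hypothetical reconstruction.
function buildTelemetryIndexSketch(entries) {
  const byNodeId = new Map();
  const byNodeNum = new Map();
  if (!Array.isArray(entries)) return { byNodeId, byNodeNum };
  for (const entry of entries) {
    if (!entry || typeof entry !== 'object') continue;
    // rx_time wins; telemetry_time is the fallback timestamp.
    const timestamp = Number.isFinite(entry.rx_time) ? entry.rx_time : entry.telemetry_time;
    const record = { entry, timestamp };
    const keepFreshest = (map, key) => {
      const existing = map.get(key);
      if (!existing || (timestamp ?? -Infinity) >= (existing.timestamp ?? -Infinity)) {
        map.set(key, record);
      }
    };
    if (typeof entry.node_id === 'string') keepFreshest(byNodeId, entry.node_id);
    if (Number.isFinite(entry.node_num)) keepFreshest(byNodeNum, entry.node_num);
  }
  return { byNodeId, byNodeNum };
}
```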
// ---------------------------------------------------------------------------
// mergeTelemetryIntoNodes
// ---------------------------------------------------------------------------
test('mergeTelemetryIntoNodes is a no-op when nodes is empty or not an array', () => {
mergeTelemetryIntoNodes([], []);
mergeTelemetryIntoNodes(null, []);
});
test('mergeTelemetryIntoNodes copies metrics when matched by node_id', () => {
const nodes = [{ node_id: '!a' }];
mergeTelemetryIntoNodes(nodes, [{
node_id: '!a',
battery_level: 85,
voltage: 4.1,
rx_time: 100,
telemetry_time: 95,
}]);
assert.equal(nodes[0].battery_level, 85);
assert.equal(nodes[0].voltage, 4.1);
assert.equal(nodes[0].telemetry_time, 95);
assert.equal(nodes[0].telemetry_rx_time, 100);
});
test('mergeTelemetryIntoNodes falls back to node_num lookup', () => {
const nodes = [{ num: 42 }];
mergeTelemetryIntoNodes(nodes, [{
node_num: 42,
temperature: 21.5,
}]);
assert.equal(nodes[0].temperature, 21.5);
});
test('mergeTelemetryIntoNodes ignores nodes that do not match by id or num', () => {
const nodes = [{ node_id: '!a', num: 1 }];
mergeTelemetryIntoNodes(nodes, [{ node_id: '!b', battery_level: 50 }]);
assert.equal(nodes[0].battery_level, undefined);
});
test('mergeTelemetryIntoNodes skips null metric values', () => {
const nodes = [{ node_id: '!a', battery_level: 99 }];
mergeTelemetryIntoNodes(nodes, [{ node_id: '!a', battery_level: null }]);
assert.equal(nodes[0].battery_level, 99);
});
test('mergeTelemetryIntoNodes tolerates non-object entries in the list', () => {
const nodes = [null, undefined, { node_id: '!a' }];
mergeTelemetryIntoNodes(nodes, [{ node_id: '!a', voltage: 3.9 }]);
assert.equal(nodes[2].voltage, 3.9);
});
@@ -0,0 +1,405 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
cssEscape,
fmtCoords,
fmtHw,
formatDate,
formatShortInfoUptime,
formatSnrDisplay,
formatTime,
pad,
parseNodeNumericRef,
pickFirstProperty,
pickNumericProperty,
resolveTimestampSeconds,
shortInfoValueOrDash,
timeAgo,
timeHum,
toFiniteNumber,
} from '../format-utils.js';
// ---------------------------------------------------------------------------
// pad / formatTime / formatDate
// ---------------------------------------------------------------------------
test('pad pads small numbers to two digits', () => {
assert.equal(pad(0), '00');
assert.equal(pad(7), '07');
assert.equal(pad(42), '42');
});
test('formatTime renders HH:MM:SS', () => {
const d = new Date(2026, 0, 1, 9, 5, 7); // Local time.
assert.equal(formatTime(d), '09:05:07');
});
test('formatDate renders YYYY-MM-DD', () => {
const d = new Date(2026, 0, 9); // Jan 9, 2026 local.
assert.equal(formatDate(d), '2026-01-09');
});
// ---------------------------------------------------------------------------
// fmtHw
// ---------------------------------------------------------------------------
test('fmtHw passes through normal values', () => {
assert.equal(fmtHw('TBEAM'), 'TBEAM');
});
test('fmtHw hides the UNSET sentinel', () => {
assert.equal(fmtHw('UNSET'), '');
});
test('fmtHw returns empty string for falsy input', () => {
assert.equal(fmtHw(null), '');
assert.equal(fmtHw(''), '');
assert.equal(fmtHw(undefined), '');
});
// ---------------------------------------------------------------------------
// fmtCoords
// ---------------------------------------------------------------------------
test('fmtCoords formats numbers with default precision 5', () => {
assert.equal(fmtCoords(52.520008), '52.52001');
});
test('fmtCoords accepts a custom precision', () => {
assert.equal(fmtCoords(52.520008, 2), '52.52');
});
test('fmtCoords returns empty string for null, undefined, and empty', () => {
assert.equal(fmtCoords(null), '');
assert.equal(fmtCoords(undefined), '');
assert.equal(fmtCoords(''), '');
});
test('fmtCoords returns empty string for non-numeric input', () => {
assert.equal(fmtCoords('not a number'), '');
});
// ---------------------------------------------------------------------------
// formatSnrDisplay
// ---------------------------------------------------------------------------
test('formatSnrDisplay appends dB suffix with one decimal', () => {
assert.equal(formatSnrDisplay(7.49), '7.5 dB');
assert.equal(formatSnrDisplay(-3), '-3.0 dB');
});
test('formatSnrDisplay returns empty string for null and empty input', () => {
assert.equal(formatSnrDisplay(null), '');
assert.equal(formatSnrDisplay(''), '');
});
test('formatSnrDisplay returns empty string for non-finite input', () => {
assert.equal(formatSnrDisplay('abc'), '');
});
// ---------------------------------------------------------------------------
// timeHum
// ---------------------------------------------------------------------------
test('timeHum returns empty string for falsy input', () => {
assert.equal(timeHum(0), '');
assert.equal(timeHum(null), '');
});
test('timeHum returns 0s for negative durations', () => {
assert.equal(timeHum(-5), '0s');
});
test('timeHum formats sub-minute durations as seconds', () => {
assert.equal(timeHum(45), '45s');
});
test('timeHum formats sub-hour durations as minutes and seconds', () => {
assert.equal(timeHum(125), '2m 5s');
});
test('timeHum formats sub-day durations as hours and minutes', () => {
assert.equal(timeHum(3700), '1h 1m');
});
test('timeHum formats day-scale durations as days and hours', () => {
assert.equal(timeHum(90061), '1d 1h');
});
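The duration bucketing these six tests describe amounts to four threshold checks. A minimal sketch consistent with the assertions above, renamed so it cannot clash with the import; the production formatter may differ in detail. Under the same assumption, `timeAgo` is roughly this function applied to the clamped delta `now - ts`, with the same missing-input guard.

```javascript
// Sketch of the duration bucketing pinned down by the timeHum tests above.
// Hypothetical reconstruction -- not the production format-utils.js.
function timeHumSketch(seconds) {
  if (!seconds) return ''; // 0, null, undefined are treated as "missing"
  const s = Math.max(0, Math.floor(seconds)); // negatives clamp to 0s
  if (s < 60) return `${s}s`;
  if (s < 3600) return `${Math.floor(s / 60)}m ${s % 60}s`;
  if (s < 86400) return `${Math.floor(s / 3600)}h ${Math.floor((s % 3600) / 60)}m`;
  return `${Math.floor(s / 86400)}d ${Math.floor((s % 86400) / 3600)}h`;
}
```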
// ---------------------------------------------------------------------------
// timeAgo
// ---------------------------------------------------------------------------
test('timeAgo returns empty string when the input is missing', () => {
assert.equal(timeAgo(0), '');
assert.equal(timeAgo(null), '');
});
test('timeAgo clamps future timestamps to 0s', () => {
assert.equal(timeAgo(5000, 1000), '0s');
});
test('timeAgo formats sub-minute deltas as seconds', () => {
assert.equal(timeAgo(950, 1000), '50s');
});
test('timeAgo formats sub-hour deltas as minutes and seconds', () => {
assert.equal(timeAgo(875, 1000), '2m 5s');
});
test('timeAgo formats sub-day deltas as hours and minutes', () => {
// Use a non-zero past timestamp; timeAgo treats 0 as "missing" and returns "".
assert.equal(timeAgo(1000, 4700), '1h 1m');
});
test('timeAgo formats day-scale deltas as days and hours', () => {
assert.equal(timeAgo(1000, 91061), '1d 1h');
});
// ---------------------------------------------------------------------------
// toFiniteNumber
// ---------------------------------------------------------------------------
test('toFiniteNumber converts numeric strings', () => {
assert.equal(toFiniteNumber('42'), 42);
});
test('toFiniteNumber returns null for null, undefined, and empty', () => {
assert.equal(toFiniteNumber(null), null);
assert.equal(toFiniteNumber(undefined), null);
assert.equal(toFiniteNumber(''), null);
});
test('toFiniteNumber rejects non-finite values', () => {
assert.equal(toFiniteNumber('abc'), null);
assert.equal(toFiniteNumber(Number.NaN), null);
assert.equal(toFiniteNumber(Number.POSITIVE_INFINITY), null);
});
// ---------------------------------------------------------------------------
// resolveTimestampSeconds
// ---------------------------------------------------------------------------
test('resolveTimestampSeconds prefers a numeric timestamp', () => {
assert.equal(resolveTimestampSeconds(1700000000, '2024-01-01T00:00:00Z'), 1700000000);
});
test('resolveTimestampSeconds falls back to ISO when numeric is missing', () => {
// 2024-01-01T00:00:00Z = 1704067200 seconds.
assert.equal(resolveTimestampSeconds(null, '2024-01-01T00:00:00Z'), 1704067200);
});
test('resolveTimestampSeconds returns null when both inputs are unusable', () => {
assert.equal(resolveTimestampSeconds(null, null), null);
assert.equal(resolveTimestampSeconds(null, ''), null);
assert.equal(resolveTimestampSeconds(null, 'not a date'), null);
});
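The numeric-first, ISO-fallback resolution tested above can be sketched in a few lines. A hypothetical reconstruction, renamed to avoid clashing with the import.

```javascript
// Sketch: prefer the numeric epoch-seconds value, fall back to parsing the
// ISO string, and return null when neither is usable. Hypothetical sketch.
function resolveTimestampSecondsSketch(numeric, iso) {
  if (Number.isFinite(numeric)) return numeric;
  if (typeof iso === 'string' && iso) {
    const ms = Date.parse(iso); // NaN for unparseable strings
    if (Number.isFinite(ms)) return Math.floor(ms / 1000);
  }
  return null;
}
```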
// ---------------------------------------------------------------------------
// cssEscape
// ---------------------------------------------------------------------------
test('cssEscape returns empty string for non-strings and empty input', () => {
assert.equal(cssEscape(''), '');
assert.equal(cssEscape(null), '');
assert.equal(cssEscape(undefined), '');
assert.equal(cssEscape(42), '');
});
test('cssEscape uses window.CSS.escape when available', () => {
const previous = globalThis.window;
globalThis.window = {
CSS: {
escape: value => `escaped(${value})`,
},
};
try {
assert.equal(cssEscape('foo'), 'escaped(foo)');
} finally {
if (previous === undefined) {
delete globalThis.window;
} else {
globalThis.window = previous;
}
}
});
test('cssEscape falls back to manual escaping when window.CSS is unavailable', () => {
const previous = globalThis.window;
delete globalThis.window;
try {
// Underscores and hyphens pass through; everything else is backslash-escaped.
assert.equal(cssEscape('a-b_c'), 'a-b_c');
assert.equal(cssEscape('a:b'), 'a\\:b');
} finally {
if (previous !== undefined) {
globalThis.window = previous;
}
}
});
// ---------------------------------------------------------------------------
// formatShortInfoUptime
// ---------------------------------------------------------------------------
test('formatShortInfoUptime returns empty string for null and empty', () => {
assert.equal(formatShortInfoUptime(null), '');
assert.equal(formatShortInfoUptime(''), '');
});
test('formatShortInfoUptime returns empty string for non-finite input', () => {
assert.equal(formatShortInfoUptime('abc'), '');
});
test('formatShortInfoUptime renders 0s for zero', () => {
assert.equal(formatShortInfoUptime(0), '0s');
});
test('formatShortInfoUptime delegates to timeHum for positive values', () => {
assert.equal(formatShortInfoUptime(125), '2m 5s');
});
// ---------------------------------------------------------------------------
// shortInfoValueOrDash
// ---------------------------------------------------------------------------
test('shortInfoValueOrDash returns the string form of present values', () => {
assert.equal(shortInfoValueOrDash('text'), 'text');
assert.equal(shortInfoValueOrDash(0), '0');
});
test('shortInfoValueOrDash returns em dash for null, undefined, and empty', () => {
assert.equal(shortInfoValueOrDash(null), '—');
assert.equal(shortInfoValueOrDash(undefined), '—');
assert.equal(shortInfoValueOrDash(''), '—');
});
// ---------------------------------------------------------------------------
// pickFirstProperty
// ---------------------------------------------------------------------------
test('pickFirstProperty returns null when sources or keys are not arrays', () => {
assert.equal(pickFirstProperty(null, ['a']), null);
assert.equal(pickFirstProperty([{}], null), null);
});
test('pickFirstProperty returns the first present trimmed string', () => {
const sources = [
{},
{ id: ' ' },
{ id: ' hello ' },
];
assert.equal(pickFirstProperty(sources, ['id']), 'hello');
});
test('pickFirstProperty returns the first non-string value verbatim', () => {
assert.equal(pickFirstProperty([{ count: 5 }], ['count']), 5);
assert.equal(pickFirstProperty([{ flag: false }], ['flag']), false);
});
test('pickFirstProperty skips non-object entries and absent properties', () => {
const sources = [null, 42, { other: 'value' }, { name: 'final' }];
assert.equal(pickFirstProperty(sources, ['name']), 'final');
});
test('pickFirstProperty returns null when no source provides a value', () => {
assert.equal(pickFirstProperty([{ a: null }, { a: '' }], ['a']), null);
});
// ---------------------------------------------------------------------------
// pickNumericProperty
// ---------------------------------------------------------------------------
test('pickNumericProperty returns null when sources or keys are not arrays', () => {
assert.equal(pickNumericProperty(null, ['a']), null);
assert.equal(pickNumericProperty([{}], null), null);
});
test('pickNumericProperty returns the first finite numeric value', () => {
const sources = [
{ value: '' },
{ value: 'abc' },
{ value: '42' },
];
assert.equal(pickNumericProperty(sources, ['value']), 42);
});
test('pickNumericProperty skips non-object entries and missing keys', () => {
const sources = [null, undefined, { other: 1 }, { count: 7 }];
assert.equal(pickNumericProperty(sources, ['count']), 7);
});
test('pickNumericProperty returns null when no candidate is finite', () => {
assert.equal(pickNumericProperty([{ a: 'abc' }, { a: null }], ['a']), null);
});
// ---------------------------------------------------------------------------
// parseNodeNumericRef
// ---------------------------------------------------------------------------
test('parseNodeNumericRef returns null for null and undefined', () => {
assert.equal(parseNodeNumericRef(null), null);
assert.equal(parseNodeNumericRef(undefined), null);
});
test('parseNodeNumericRef passes through finite numbers', () => {
assert.equal(parseNodeNumericRef(42), 42);
});
test('parseNodeNumericRef returns null for non-finite numbers', () => {
assert.equal(parseNodeNumericRef(Number.NaN), null);
assert.equal(parseNodeNumericRef(Number.POSITIVE_INFINITY), null);
});
test('parseNodeNumericRef parses !-prefixed hex strings', () => {
assert.equal(parseNodeNumericRef('!aabbccdd'), 0xaabbccdd);
});
test('parseNodeNumericRef rejects !-prefixed strings with invalid characters', () => {
assert.equal(parseNodeNumericRef('!ZZZ'), null);
});
test('parseNodeNumericRef parses 0x-prefixed hex strings', () => {
assert.equal(parseNodeNumericRef('0x1A'), 0x1a);
});
test('parseNodeNumericRef parses decimal strings', () => {
assert.equal(parseNodeNumericRef('123'), 123);
});
test('parseNodeNumericRef returns null for blank strings', () => {
assert.equal(parseNodeNumericRef(''), null);
assert.equal(parseNodeNumericRef(' '), null);
});
test('parseNodeNumericRef returns null for unparseable strings', () => {
assert.equal(parseNodeNumericRef('not a number'), null);
});
test('parseNodeNumericRef coerces other inputs via Number()', () => {
// Booleans, Date, etc. — anything the global Number() constructor can
// map to a finite number passes through.
assert.equal(parseNodeNumericRef(true), 1);
assert.equal(parseNodeNumericRef(false), 0);
});
test('parseNodeNumericRef returns null for unparseable non-string inputs', () => {
assert.equal(parseNodeNumericRef({}), null);
});
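Taken together, the parser tests above describe three input classes: finite numbers pass through, strings are parsed ("!"-prefixed hex explicitly, "0x" and decimal forms via `Number()`), and everything else is coerced via `Number()`. A hypothetical reconstruction, renamed so it cannot clash with the import.

```javascript
// Sketch of the node-reference parser pinned down by the tests above.
// Hypothetical reconstruction -- not the production format-utils.js.
function parseNodeNumericRefSketch(ref) {
  if (ref === null || ref === undefined) return null;
  if (typeof ref === 'number') return Number.isFinite(ref) ? ref : null;
  if (typeof ref === 'string') {
    const trimmed = ref.trim();
    if (!trimmed) return null; // blank strings are rejected
    if (trimmed.startsWith('!')) {
      const hex = trimmed.slice(1);
      return /^[0-9a-fA-F]+$/.test(hex) ? Number.parseInt(hex, 16) : null;
    }
    const value = Number(trimmed); // handles both "0x1A" and "123"
    return Number.isFinite(value) ? value : null;
  }
  const coerced = Number(ref); // booleans, Dates, ...
  return Number.isFinite(coerced) ? coerced : null;
}
```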
@@ -0,0 +1,152 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { getActiveFullscreenElement, legendClickHandler } from '../fullscreen-helpers.js';
// ---------------------------------------------------------------------------
// getActiveFullscreenElement
// ---------------------------------------------------------------------------
test('getActiveFullscreenElement returns null when document is undefined', () => {
const previousDoc = globalThis.document;
// Node has no document by default, but other tests in the suite may have
// assigned one — clear it explicitly for this case.
delete globalThis.document;
try {
assert.equal(getActiveFullscreenElement(), null);
} finally {
if (previousDoc !== undefined) {
globalThis.document = previousDoc;
}
}
});
test('getActiveFullscreenElement prefers fullscreenElement', () => {
const dummy = { tag: 'std' };
const previousDoc = globalThis.document;
globalThis.document = {
fullscreenElement: dummy,
webkitFullscreenElement: { tag: 'webkit' },
msFullscreenElement: { tag: 'ms' },
};
try {
assert.equal(getActiveFullscreenElement(), dummy);
} finally {
if (previousDoc === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDoc;
}
}
});
test('getActiveFullscreenElement falls back to webkit prefix', () => {
const dummy = { tag: 'webkit' };
const previousDoc = globalThis.document;
globalThis.document = {
fullscreenElement: null,
webkitFullscreenElement: dummy,
msFullscreenElement: null,
};
try {
assert.equal(getActiveFullscreenElement(), dummy);
} finally {
if (previousDoc === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDoc;
}
}
});
test('getActiveFullscreenElement falls back to ms prefix', () => {
const dummy = { tag: 'ms' };
const previousDoc = globalThis.document;
globalThis.document = {
fullscreenElement: null,
webkitFullscreenElement: null,
msFullscreenElement: dummy,
};
try {
assert.equal(getActiveFullscreenElement(), dummy);
} finally {
if (previousDoc === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDoc;
}
}
});
test('getActiveFullscreenElement returns null when no fullscreen owner is set', () => {
const previousDoc = globalThis.document;
globalThis.document = {
fullscreenElement: null,
webkitFullscreenElement: null,
msFullscreenElement: null,
};
try {
assert.equal(getActiveFullscreenElement(), null);
} finally {
if (previousDoc === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDoc;
}
}
});
// ---------------------------------------------------------------------------
// legendClickHandler
// ---------------------------------------------------------------------------
test('legendClickHandler always calls preventDefault and stopPropagation', () => {
let preventCalls = 0;
let stopCalls = 0;
let bodyCalls = 0;
const handler = legendClickHandler(() => {
bodyCalls += 1;
});
const fakeEvent = {
preventDefault: () => {
preventCalls += 1;
},
stopPropagation: () => {
stopCalls += 1;
},
};
handler(fakeEvent);
assert.equal(preventCalls, 1);
assert.equal(stopCalls, 1);
assert.equal(bodyCalls, 1);
});
test('legendClickHandler forwards the event object to the body', () => {
let received = null;
const handler = legendClickHandler(event => {
received = event;
});
const fakeEvent = {
preventDefault() {},
stopPropagation() {},
payload: 'forwarded',
};
handler(fakeEvent);
assert.equal(received, fakeEvent);
});
@@ -0,0 +1,188 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
applyNodeNameFallback,
extractIdentifierFromHref,
getNodeDisplayNameForOverlay,
getNodeIdentifierFromLink,
shouldHandleNodeLongLink,
} from '../long-link-router.js';
// ---------------------------------------------------------------------------
// shouldHandleNodeLongLink
// ---------------------------------------------------------------------------
test('shouldHandleNodeLongLink rejects null and undefined', () => {
assert.equal(shouldHandleNodeLongLink(null), false);
assert.equal(shouldHandleNodeLongLink(undefined), false);
});
test('shouldHandleNodeLongLink rejects elements without a dataset', () => {
assert.equal(shouldHandleNodeLongLink({}), false);
});
test('shouldHandleNodeLongLink honours an explicit nodeDetailLink=false opt-out', () => {
const link = { dataset: { nodeDetailLink: 'false' } };
assert.equal(shouldHandleNodeLongLink(link), false);
});
test('shouldHandleNodeLongLink accepts elements with a permissive dataset', () => {
assert.equal(shouldHandleNodeLongLink({ dataset: {} }), true);
assert.equal(shouldHandleNodeLongLink({ dataset: { nodeDetailLink: 'true' } }), true);
});
// ---------------------------------------------------------------------------
// extractIdentifierFromHref
// ---------------------------------------------------------------------------
test('extractIdentifierFromHref returns empty string for non-string and empty input', () => {
assert.equal(extractIdentifierFromHref(null), '');
assert.equal(extractIdentifierFromHref(undefined), '');
assert.equal(extractIdentifierFromHref(''), '');
assert.equal(extractIdentifierFromHref(42), '');
});
test('extractIdentifierFromHref returns empty string when no /nodes/!… segment is present', () => {
assert.equal(extractIdentifierFromHref('/about'), '');
assert.equal(extractIdentifierFromHref('https://example.com/'), '');
});
test('extractIdentifierFromHref returns the canonical node id for /nodes/!… URIs', () => {
assert.equal(extractIdentifierFromHref('/nodes/!aabbccdd'), '!aabbccdd');
// canonicalNodeIdentifier preserves case; it only ensures the leading "!".
assert.equal(
extractIdentifierFromHref('https://example.com/nodes/!AABBCCDD?ref=1'),
'!AABBCCDD',
);
});
test('extractIdentifierFromHref requires a literal ! in the /nodes/ segment', () => {
// %21 is the URL-encoded form of !, but the matcher only recognises a
// literal "!" in the path, so the encoded form yields the empty string.
assert.equal(extractIdentifierFromHref('/nodes/%21aabbccdd'), '');
// A literal "!" is accepted even when a fragment follows the identifier.
assert.equal(extractIdentifierFromHref('/nodes/!aabbccdd#anchor'), '!aabbccdd');
});
test('extractIdentifierFromHref falls back to the raw match when decoding throws', () => {
// "%E0" is a valid percent escape, but it decodes to a truncated UTF-8
// sequence, so decodeURIComponent raises URIError. The catch branch
// should still canonicalise the un-decoded match.
assert.equal(
extractIdentifierFromHref('/nodes/!aabbccdd%E0'),
'!aabbccdd%E0',
);
});
// ---------------------------------------------------------------------------
// getNodeIdentifierFromLink
// ---------------------------------------------------------------------------
test('getNodeIdentifierFromLink returns empty string for falsy input', () => {
assert.equal(getNodeIdentifierFromLink(null), '');
assert.equal(getNodeIdentifierFromLink(undefined), '');
});
test('getNodeIdentifierFromLink prefers dataset.nodeId when canonical', () => {
const link = { dataset: { nodeId: '!aabbccdd' } };
assert.equal(getNodeIdentifierFromLink(link), '!aabbccdd');
});
test('getNodeIdentifierFromLink falls back to getAttribute("href") when dataset is absent', () => {
const link = {
getAttribute(name) {
return name === 'href' ? '/nodes/!aabbccdd' : null;
},
};
assert.equal(getNodeIdentifierFromLink(link), '!aabbccdd');
});
test('getNodeIdentifierFromLink falls back to the .href property when getAttribute is absent', () => {
const link = { href: '/nodes/!aabbccdd' };
assert.equal(getNodeIdentifierFromLink(link), '!aabbccdd');
});
test('getNodeIdentifierFromLink returns empty string when nothing parses', () => {
assert.equal(getNodeIdentifierFromLink({}), '');
});
// ---------------------------------------------------------------------------
// getNodeDisplayNameForOverlay
// ---------------------------------------------------------------------------
test('getNodeDisplayNameForOverlay returns empty string for non-objects', () => {
assert.equal(getNodeDisplayNameForOverlay(null), '');
assert.equal(getNodeDisplayNameForOverlay(42), '');
});
test('getNodeDisplayNameForOverlay prefers long_name', () => {
const node = { long_name: 'Alpha Long', short_name: 'A', node_id: '!a' };
assert.equal(getNodeDisplayNameForOverlay(node), 'Alpha Long');
});
test('getNodeDisplayNameForOverlay falls back to short_name', () => {
const node = { short_name: 'A', node_id: '!a' };
assert.equal(getNodeDisplayNameForOverlay(node), 'A');
});
test('getNodeDisplayNameForOverlay falls back to node_id when names are absent', () => {
assert.equal(getNodeDisplayNameForOverlay({ node_id: '!a' }), '!a');
});
test('getNodeDisplayNameForOverlay reads camelCase keys too', () => {
assert.equal(getNodeDisplayNameForOverlay({ longName: 'L' }), 'L');
assert.equal(getNodeDisplayNameForOverlay({ shortName: 'S' }), 'S');
});
// ---------------------------------------------------------------------------
// applyNodeNameFallback
// ---------------------------------------------------------------------------
test('applyNodeNameFallback is a no-op for non-objects', () => {
// Just ensure no throw.
applyNodeNameFallback(null);
applyNodeNameFallback(undefined);
});
test('applyNodeNameFallback fills missing names from node_id', () => {
const node = { node_id: '!aabbccdd' };
applyNodeNameFallback(node);
assert.equal(node.short_name, 'ccdd');
assert.equal(node.long_name, 'Meshtastic !aabbccdd');
});
test('applyNodeNameFallback updates camelCase aliases when present', () => {
const node = { node_id: '!aabbccdd', shortName: '', longName: '' };
applyNodeNameFallback(node);
assert.equal(node.shortName, 'ccdd');
assert.equal(node.longName, 'Meshtastic !aabbccdd');
});
test('applyNodeNameFallback leaves existing names untouched', () => {
const node = { node_id: '!aabbccdd', short_name: 'AAA', long_name: 'Alpha' };
applyNodeNameFallback(node);
assert.equal(node.short_name, 'AAA');
assert.equal(node.long_name, 'Alpha');
});
test('applyNodeNameFallback is a no-op when no node_id is available', () => {
const node = {};
applyNodeNameFallback(node);
assert.deepEqual(node, {});
});
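The fallback rules above can be sketched as follows: the last four characters of the id become the short name and "Meshtastic &lt;id&gt;" the long name. A hypothetical reconstruction, renamed to avoid clashing with the import; the snake_case/camelCase interplay (snake_case always filled when missing, camelCase only refreshed when the key already exists) is an assumption drawn from the assertions.

```javascript
// Sketch: derive fallback names from the node id. Hypothetical
// reconstruction -- not the production long-link-router.js.
function applyNodeNameFallbackSketch(node) {
  if (!node || typeof node !== 'object') return;
  const id = typeof node.node_id === 'string' ? node.node_id : '';
  if (!id.startsWith('!')) return; // no usable id: leave the object untouched
  const shortName = id.slice(-4); // last four hex digits
  const longName = `Meshtastic ${id}`;
  if (!node.short_name) node.short_name = shortName;
  if (!node.long_name) node.long_name = longName;
  if ('shortName' in node && !node.shortName) node.shortName = shortName;
  if ('longName' in node && !node.longName) node.longName = longName;
}
```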
@@ -0,0 +1,228 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { createOfflineTileLayer } from '../offline-tile-layer.js';
/**
* Build a minimal Leaflet stub exposing the methods the offline tile layer
* needs (``L.gridLayer``). The returned grid-layer object is otherwise a
* plain bag whose ``createTile`` slot is reassigned by the production code.
*
* @returns {Object} Leaflet-compatible stub.
*/
function makeLeafletStub() {
return {
gridLayer(options) {
return { options, createTile: null };
},
};
}
/**
* Install a minimal ``document`` stub whose ``createElement`` returns objects
* that satisfy the offline tile layer's small DOM contract: canvas elements
* expose a configurable ``getContext`` slot, while plain ``div`` elements
* expose ``style``, ``className`` and ``cloneNode``.
*
* @param {{ canvasContext?: any }} [options] Override the canvas 2D context.
* @returns {{ restore: Function }} Teardown handle.
*/
function withDocumentStub({ canvasContext } = {}) {
const previousDocument = globalThis.document;
globalThis.document = {
createElement(tag) {
if (tag === 'canvas') {
return {
width: 0,
height: 0,
getContext: () => (canvasContext === undefined ? makeRecordingContext() : canvasContext),
};
}
const element = {
tag,
className: '',
style: {},
textContent: '',
cloneNode() {
// Return a shallow copy that retains the recorded properties so
// assertions can inspect what the production code rendered.
return JSON.parse(JSON.stringify({
tag: element.tag,
className: element.className,
style: element.style,
textContent: element.textContent,
}));
},
};
return element;
},
};
return {
restore() {
if (previousDocument === undefined) {
delete globalThis.document;
} else {
globalThis.document = previousDocument;
}
},
};
}
/**
* Build a Canvas 2D context stub that records the calls it receives. The
* tests inspect the call list to ensure the production code follows the
* expected drawing path.
*
* @returns {Object} Recording 2D context.
*/
function makeRecordingContext() {
const calls = [];
const ctx = {
calls,
fillStyle: null,
strokeStyle: null,
lineWidth: 0,
font: '',
textBaseline: '',
textAlign: '',
createLinearGradient(...args) {
calls.push(['createLinearGradient', args]);
return { addColorStop(...stop) { calls.push(['addColorStop', stop]); } };
},
fillRect(...args) {
calls.push(['fillRect', args]);
},
beginPath() {
calls.push(['beginPath']);
},
moveTo(...args) {
calls.push(['moveTo', args]);
},
lineTo(...args) {
calls.push(['lineTo', args]);
},
stroke() {
calls.push(['stroke']);
},
fillText(...args) {
calls.push(['fillText', args]);
},
};
return ctx;
}
// ---------------------------------------------------------------------------
// createOfflineTileLayer — early returns
// ---------------------------------------------------------------------------
test('createOfflineTileLayer returns null when Leaflet is missing', () => {
assert.equal(createOfflineTileLayer(null), null);
assert.equal(createOfflineTileLayer(undefined), null);
});
test('createOfflineTileLayer returns null when Leaflet has no gridLayer factory', () => {
assert.equal(createOfflineTileLayer({}), null);
});
// ---------------------------------------------------------------------------
// createOfflineTileLayer — happy path
// ---------------------------------------------------------------------------
test('createOfflineTileLayer attaches a createTile method on success', () => {
const stub = withDocumentStub();
try {
const layer = createOfflineTileLayer(makeLeafletStub());
assert.ok(layer);
assert.equal(typeof layer.createTile, 'function');
} finally {
stub.restore();
}
});
test('createOfflineTileLayer renders a canvas tile when getContext succeeds', () => {
const stub = withDocumentStub();
try {
const layer = createOfflineTileLayer(makeLeafletStub());
const tile = layer.createTile({ x: 1, y: 1, z: 1 });
// The returned element should be the canvas itself (has getContext).
assert.equal(typeof tile.getContext, 'function');
assert.equal(tile.width, 256);
assert.equal(tile.height, 256);
} finally {
stub.restore();
}
});
test('createOfflineTileLayer falls back to placeholder when canvas getContext returns null', () => {
const stub = withDocumentStub({ canvasContext: null });
// Silence the warn from the fallback branch so test output stays clean.
const previousWarn = console.warn;
console.warn = () => {};
try {
const layer = createOfflineTileLayer(makeLeafletStub());
const tile = layer.createTile({ x: 0, y: 0, z: 0 });
// Fallback is the cloned <div> — no getContext method.
assert.equal(tile.getContext, undefined);
assert.equal(tile.tag, 'div');
assert.equal(tile.className, 'offline-tile-fallback');
} finally {
console.warn = previousWarn;
stub.restore();
}
});
test('createOfflineTileLayer reuses the cached fallback tile across invocations', () => {
const stub = withDocumentStub({ canvasContext: null });
const previousWarn = console.warn;
console.warn = () => {};
try {
const layer = createOfflineTileLayer(makeLeafletStub());
const first = layer.createTile({ x: 0, y: 0, z: 0 });
const second = layer.createTile({ x: 1, y: 0, z: 0 });
// Both calls produce equivalent fallback nodes (same shape).
assert.deepEqual(first, second);
} finally {
console.warn = previousWarn;
stub.restore();
}
});
test('createOfflineTileLayer falls back when the canvas drawing path throws', () => {
// Build a context whose `createLinearGradient` throws to force the
// catch-and-fall-back branch.
const ctx = makeRecordingContext();
ctx.createLinearGradient = () => {
throw new Error('boom');
};
const stub = withDocumentStub({ canvasContext: ctx });
const previousError = console.error;
console.error = () => {};
try {
const layer = createOfflineTileLayer(makeLeafletStub());
const tile = layer.createTile({ x: 0, y: 0, z: 0 });
// Production code logs and returns the fallback element.
assert.equal(tile.getContext, undefined);
assert.equal(tile.tag, 'div');
} finally {
console.error = previousError;
stub.restore();
}
});
@@ -0,0 +1,128 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import {
compareNumber,
compareString,
hasNumberValue,
hasStringValue,
} from '../sort-comparators.js';
// ---------------------------------------------------------------------------
// hasStringValue
// ---------------------------------------------------------------------------
test('hasStringValue returns true for non-empty strings', () => {
assert.equal(hasStringValue('hi'), true);
assert.equal(hasStringValue(' text '), true);
});
test('hasStringValue returns false for null, undefined, and blank input', () => {
assert.equal(hasStringValue(null), false);
assert.equal(hasStringValue(undefined), false);
assert.equal(hasStringValue(''), false);
assert.equal(hasStringValue(' '), false);
});
test('hasStringValue treats numbers as their string form', () => {
assert.equal(hasStringValue(0), true);
assert.equal(hasStringValue(42), true);
});
// ---------------------------------------------------------------------------
// hasNumberValue
// ---------------------------------------------------------------------------
test('hasNumberValue accepts finite numbers', () => {
assert.equal(hasNumberValue(42), true);
assert.equal(hasNumberValue(-1.5), true);
assert.equal(hasNumberValue(0), true);
});
test('hasNumberValue rejects null, undefined, and empty string', () => {
assert.equal(hasNumberValue(null), false);
assert.equal(hasNumberValue(undefined), false);
assert.equal(hasNumberValue(''), false);
});
test('hasNumberValue rejects non-finite numbers and unparseable strings', () => {
assert.equal(hasNumberValue(Number.NaN), false);
assert.equal(hasNumberValue(Number.POSITIVE_INFINITY), false);
assert.equal(hasNumberValue('abc'), false);
});
test('hasNumberValue accepts numeric strings', () => {
assert.equal(hasNumberValue('42'), true);
assert.equal(hasNumberValue(' -1.5 '), true);
});
// ---------------------------------------------------------------------------
// compareString
// ---------------------------------------------------------------------------
test('compareString sorts non-empty values lexicographically', () => {
assert.ok(compareString('alpha', 'beta') < 0);
assert.ok(compareString('beta', 'alpha') > 0);
assert.equal(compareString('alpha', 'alpha'), 0);
});
test('compareString trims surrounding whitespace before comparing', () => {
assert.equal(compareString(' alpha ', 'alpha'), 0);
});
test('compareString sorts blank values to the end', () => {
assert.ok(compareString('alpha', '') < 0);
assert.ok(compareString('', 'alpha') > 0);
});
test('compareString returns 0 when both values are blank', () => {
assert.equal(compareString(null, ''), 0);
assert.equal(compareString('', ' '), 0);
});
test('compareString uses numeric collation for digit-bearing strings', () => {
// localeCompare with { numeric: true } orders "node-2" before "node-10".
assert.ok(compareString('node-2', 'node-10') < 0);
});
// ---------------------------------------------------------------------------
// compareNumber
// ---------------------------------------------------------------------------
test('compareNumber sorts ascending for finite values', () => {
assert.ok(compareNumber(1, 2) < 0);
assert.ok(compareNumber(2, 1) > 0);
assert.equal(compareNumber(1, 1), 0);
});
test('compareNumber accepts numeric strings', () => {
assert.ok(compareNumber('1', '2') < 0);
assert.ok(compareNumber('2', '1') > 0);
});
test('compareNumber pushes invalid values after valid ones', () => {
assert.ok(compareNumber(5, 'not-a-number') < 0);
assert.ok(compareNumber('not-a-number', 5) > 0);
});
test('compareNumber returns 0 when both inputs are unparseable', () => {
assert.equal(compareNumber('abc', 'def'), 0);
// Note: Number(null) === 0, so null would parse as finite; undefined is
// used here because it coerces to NaN and is genuinely unparseable.
assert.equal(compareNumber(undefined, 'abc'), 0);
});
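The behaviour asserted above can be condensed into a hypothetical reimplementation. This is a sketch assuming `localeCompare` numeric collation and plain `Number` coercion; names and details are illustrative, not the actual `sort-comparators.js` module:

```javascript
// Illustrative sketch of comparators consistent with the tests above.
const hasStringValue = (v) => v != null && String(v).trim().length > 0;
const hasNumberValue = (v) =>
  v != null && String(v).trim() !== '' && Number.isFinite(Number(v));

function compareString(a, b) {
  const aHas = hasStringValue(a);
  const bHas = hasStringValue(b);
  if (!aHas && !bHas) return 0;
  if (!aHas) return 1; // blanks sort after non-blank values
  if (!bHas) return -1;
  // { numeric: true } orders "node-2" before "node-10".
  return String(a).trim().localeCompare(String(b).trim(), undefined, { numeric: true });
}

function compareNumber(a, b) {
  const aHas = hasNumberValue(a);
  const bHas = hasNumberValue(b);
  if (!aHas && !bHas) return 0;
  if (!aHas) return 1; // unparseable values sort after finite ones
  if (!bHas) return -1;
  return Number(a) - Number(b);
}
```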
@@ -0,0 +1,49 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { tileToLat, tileToLon } from '../tile-coords.js';
test('tileToLon zero tile at zoom 0 is -180', () => {
assert.equal(tileToLon(0, 0), -180);
});
test('tileToLon centre tile at zoom 1 is 0', () => {
assert.equal(tileToLon(1, 1), 0);
});
test('tileToLon last tile at zoom 2 is 90', () => {
assert.equal(tileToLon(3, 2), 90);
});
test('tileToLat zero tile at zoom 0 is roughly 85.0511', () => {
// Mercator clamp: northernmost projectable latitude.
assert.ok(Math.abs(tileToLat(0, 0) - 85.0511287798066) < 1e-9);
});
test('tileToLat centre tile at zoom 1 is 0', () => {
assert.equal(tileToLat(1, 1), 0);
});
test('tileToLat is symmetric around the equator at zoom 1', () => {
// Tile y=0 (northern edge) and y=2 (southern edge) at zoom 1 should
// be equal in magnitude with opposite signs.
const north = tileToLat(0, 1);
const south = tileToLat(2, 1);
assert.ok(Math.abs(north + south) < 1e-9);
});
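The conversions these tests exercise follow the standard slippy-map tile formulas. A minimal sketch, assuming the conventional 2^z tiling; the real `tile-coords.js` implementation may differ in details:

```javascript
// Each zoom level z splits the world into 2^z equal-width columns.
function tileToLon(x, z) {
  return (x / Math.pow(2, z)) * 360 - 180;
}

// Invert the Web-Mercator projection for the tile's northern edge.
function tileToLat(y, z) {
  const n = Math.PI - (2 * Math.PI * y) / Math.pow(2, z);
  // 0.5 * (e^n - e^-n) is sinh(n); atan(sinh(n)) is the latitude in radians.
  return (180 / Math.PI) * Math.atan(0.5 * (Math.exp(n) - Math.exp(-n)));
}
```

Under these formulas, tile (0, 0) maps to longitude -180 and the Mercator clamp latitude of roughly 85.0511, matching the assertions above.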
@@ -0,0 +1,119 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import test from 'node:test';
import assert from 'node:assert/strict';
import { buildNeighborTooltipHtml, buildTraceTooltipHtml } from '../tooltip-html.js';
// ---------------------------------------------------------------------------
// buildTraceTooltipHtml
// ---------------------------------------------------------------------------
test('buildTraceTooltipHtml returns empty string for non-arrays', () => {
assert.equal(buildTraceTooltipHtml(null), '');
assert.equal(buildTraceTooltipHtml(undefined), '');
assert.equal(buildTraceTooltipHtml({}), '');
});
test('buildTraceTooltipHtml returns empty string when fewer than two hops are supplied', () => {
assert.equal(buildTraceTooltipHtml([]), '');
assert.equal(buildTraceTooltipHtml([{ short_name: 'A', node_id: '!a' }]), '');
});
test('buildTraceTooltipHtml emits a content fragment with arrows between hops', () => {
const html = buildTraceTooltipHtml([
{ short_name: 'AAA', node_id: '!a' },
{ short_name: 'BBB', node_id: '!b' },
]);
assert.ok(html.includes('trace-tooltip__content'));
assert.ok(html.includes('trace-tooltip__arrow'));
// One arrow between two badges.
const arrowCount = (html.match(/trace-tooltip__arrow/g) || []).length;
assert.equal(arrowCount, 1);
});
test('buildTraceTooltipHtml falls back to node_id when short name is missing', () => {
const html = buildTraceTooltipHtml([
{ node_id: '!a' },
{ node_id: '!b' },
]);
// The badge should reference the node_id.
assert.ok(html.includes('!a'));
assert.ok(html.includes('!b'));
});
test('buildTraceTooltipHtml filters out malformed entries', () => {
const html = buildTraceTooltipHtml([
null,
{ short_name: 'AAA', node_id: '!a' },
'not an object',
{ short_name: 'BBB', node_id: '!b' },
]);
// Two valid entries → exactly one arrow.
const arrowCount = (html.match(/trace-tooltip__arrow/g) || []).length;
assert.equal(arrowCount, 1);
});
test('buildTraceTooltipHtml returns empty string when every entry is malformed', () => {
assert.equal(buildTraceTooltipHtml([null, 'x', 1]), '');
});
// ---------------------------------------------------------------------------
// buildNeighborTooltipHtml
// ---------------------------------------------------------------------------
test('buildNeighborTooltipHtml returns empty string for falsy segments', () => {
assert.equal(buildNeighborTooltipHtml(null), '');
assert.equal(buildNeighborTooltipHtml(undefined), '');
});
test('buildNeighborTooltipHtml emits source → target HTML', () => {
const html = buildNeighborTooltipHtml({
sourceShortName: 'AAA',
targetShortName: 'BBB',
sourceNode: { node_id: '!a', long_name: 'Alpha' },
targetNode: { node_id: '!b', long_name: 'Beta' },
sourceRole: 'CLIENT',
targetRole: 'CLIENT',
});
assert.ok(html.includes('trace-tooltip__content'));
assert.ok(html.includes('trace-tooltip__arrow'));
assert.ok(html.includes('Alpha'));
assert.ok(html.includes('Beta'));
});
test('buildNeighborTooltipHtml falls back to node short_name fields', () => {
const html = buildNeighborTooltipHtml({
sourceNode: { short_name: 'AAA', node_id: '!a' },
targetNode: { short_name: 'BBB', node_id: '!b' },
});
assert.ok(html.includes('trace-tooltip__arrow'));
});
test('buildNeighborTooltipHtml falls back to node_id when no short name is present', () => {
const html = buildNeighborTooltipHtml({
sourceNode: { node_id: '!a' },
targetNode: { node_id: '!b' },
});
assert.ok(html.includes('!a'));
assert.ok(html.includes('!b'));
});
test('buildNeighborTooltipHtml returns empty string when either side has no short name', () => {
assert.equal(buildNeighborTooltipHtml({ sourceNode: { node_id: '!a' } }), '');
assert.equal(buildNeighborTooltipHtml({ targetNode: { node_id: '!b' } }), '');
});
@@ -0,0 +1,36 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Stable numeric limits shared between ``main.js`` and the helpers extracted
* into ``main/`` submodules.
*
* @module main/constants
*/
import { SNAPSHOT_WINDOW } from '../snapshot-aggregator.js';
/** Maximum number of node rows requested from the API. */
export const NODE_LIMIT = 1000;
/** Maximum number of trace rows requested from the API. */
export const TRACE_LIMIT = 200;
/** Maximum age (seconds) for traces displayed on the map. */
export const TRACE_MAX_AGE_SECONDS = 28 * 24 * 60 * 60;
/** Snapshot multiplier — how many rows we ask for to build a richer aggregate. */
export const SNAPSHOT_LIMIT = SNAPSHOT_WINDOW;
@@ -0,0 +1,193 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Pure async fetch wrappers for the dashboard JSON API.
*
* Functions accept their own dependencies (chat-enabled flag, message-limit
* normaliser) as parameters so they remain free of closure and DOM state
* and can be unit-tested standalone.
*
* @module main/data-fetchers
*/
import { NODE_LIMIT, SNAPSHOT_LIMIT, TRACE_LIMIT, TRACE_MAX_AGE_SECONDS } from './constants.js';
import { resolveTimestampSeconds } from './format-utils.js';
/**
* Determine how many rows should be requested from the API to build a
* richer snapshot aggregate.
*
* @param {number} requestedLimit Desired number of unique entities.
* @param {number} [maxLimit=NODE_LIMIT] Maximum rows accepted by the API.
* @returns {number} Effective request limit honouring {@link SNAPSHOT_LIMIT}.
*/
export function resolveSnapshotLimit(requestedLimit, maxLimit = NODE_LIMIT) {
const base = Number.isFinite(requestedLimit) && requestedLimit > 0
? Math.floor(requestedLimit)
: maxLimit;
const expanded = base * SNAPSHOT_LIMIT;
const candidate = expanded > base ? expanded : base;
return Math.min(candidate, maxLimit);
}
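A worked sketch of the expansion logic, with the snapshot window passed explicitly since `SNAPSHOT_WINDOW`'s real value lives in `snapshot-aggregator.js`; the window of 3 used here is an illustrative assumption:

```javascript
// Parameterised restatement of the limit expansion for illustration.
function resolveSnapshotLimitWith(requestedLimit, maxLimit, snapshotWindow) {
  const base = Number.isFinite(requestedLimit) && requestedLimit > 0
    ? Math.floor(requestedLimit)
    : maxLimit; // invalid input falls back to the API maximum
  const expanded = base * snapshotWindow;
  // Expand when the multiplier helps, but never exceed the API cap.
  return Math.min(expanded > base ? expanded : base, maxLimit);
}

resolveSnapshotLimitWith(100, 1000, 3); // 300: expanded, still under the cap
resolveSnapshotLimitWith(500, 1000, 3); // 1000: the API cap wins
resolveSnapshotLimitWith(NaN, 1000, 3); // 1000: invalid input uses maxLimit
```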
/**
* Filter trace entries to discard packets older than the configured window.
*
* @param {Array<Object>} traces Trace payloads.
* @param {number} [maxAgeSeconds=TRACE_MAX_AGE_SECONDS] Maximum allowed age in seconds.
* @returns {Array<Object>} Recent trace entries.
*/
export function filterRecentTraces(traces, maxAgeSeconds = TRACE_MAX_AGE_SECONDS) {
if (!Array.isArray(traces)) {
return [];
}
if (!Number.isFinite(maxAgeSeconds) || maxAgeSeconds <= 0) {
return [...traces];
}
const nowSeconds = Math.floor(Date.now() / 1000);
const cutoff = nowSeconds - maxAgeSeconds;
return traces.filter(trace => {
const rxTime = resolveTimestampSeconds(trace?.rx_time ?? trace?.rxTime, trace?.rx_iso ?? trace?.rxIso);
return rxTime != null && rxTime >= cutoff;
});
}
/**
* Fetch the latest nodes from the JSON API.
*
* @param {number} [limit=NODE_LIMIT] Maximum number of records.
* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
* @returns {Promise<Array<Object>>} Parsed node payloads.
*/
export async function fetchNodes(limit = NODE_LIMIT, since = 0) {
const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
let url = `/api/nodes?limit=${effectiveLimit}`;
if (since > 0) url += `&since=${since}`;
const r = await fetch(url, { cache: 'default' });
if (!r.ok) throw new Error('HTTP ' + r.status);
return r.json();
}
/**
* Retrieve a single node record by identifier from the API.
*
* @param {string} nodeId Canonical node identifier.
* @returns {Promise<Object|null>} Parsed node payload or null when absent.
*/
export async function fetchNodeById(nodeId) {
if (typeof nodeId !== 'string') return null;
const trimmed = nodeId.trim();
if (trimmed.length === 0) return null;
const r = await fetch(`/api/nodes/${encodeURIComponent(trimmed)}`, { cache: 'default' });
if (r.status === 404) return null;
if (!r.ok) throw new Error('HTTP ' + r.status);
return r.json();
}
/**
* Fetch recent messages from the JSON API.
*
* @param {number} limit Maximum number of rows.
* @param {{ encrypted?: boolean, since?: number, chatEnabled?: boolean, normaliseMessageLimit?: Function }} options
* Retrieval flags and dependency hooks. When ``chatEnabled`` is false the
* function short-circuits to an empty array without contacting the API.
* @returns {Promise<Array<Object>>} Parsed message payloads.
*/
export async function fetchMessages(limit, options = {}) {
const { chatEnabled = true, normaliseMessageLimit, encrypted = false, since = 0 } = options;
if (!chatEnabled) return [];
const safeLimit = typeof normaliseMessageLimit === 'function'
? normaliseMessageLimit(limit)
: limit;
const params = new URLSearchParams({ limit: String(safeLimit) });
if (encrypted) {
params.set('encrypted', 'true');
}
if (since > 0) {
params.set('since', String(since));
}
const query = params.toString();
const r = await fetch(`/api/messages?${query}`, { cache: 'default' });
if (!r.ok) throw new Error('HTTP ' + r.status);
return r.json();
}
/**
* Fetch neighbour information from the JSON API.
*
* @param {number} [limit=NODE_LIMIT] Maximum number of rows.
* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
* @returns {Promise<Array<Object>>} Parsed neighbour payloads.
*/
export async function fetchNeighbors(limit = NODE_LIMIT, since = 0) {
const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
let url = `/api/neighbors?limit=${effectiveLimit}`;
if (since > 0) url += `&since=${since}`;
const r = await fetch(url, { cache: 'default' });
if (!r.ok) throw new Error('HTTP ' + r.status);
return r.json();
}
/**
* Fetch traceroute observations from the JSON API.
*
* @param {number} [limit=TRACE_LIMIT] Maximum number of records.
* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
* @returns {Promise<Array<Object>>} Parsed trace payloads.
*/
export async function fetchTraces(limit = TRACE_LIMIT, since = 0) {
const safeLimit = Number.isFinite(limit) && limit > 0 ? Math.floor(limit) : TRACE_LIMIT;
const effectiveLimit = Math.min(safeLimit, NODE_LIMIT);
let url = `/api/traces?limit=${effectiveLimit}`;
if (since > 0) url += `&since=${since}`;
const r = await fetch(url, { cache: 'default' });
if (!r.ok) throw new Error('HTTP ' + r.status);
const traces = await r.json();
return filterRecentTraces(traces, TRACE_MAX_AGE_SECONDS);
}
/**
* Fetch telemetry entries from the JSON API.
*
* @param {number} [limit=NODE_LIMIT] Maximum number of rows.
* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
* @returns {Promise<Array<Object>>} Parsed telemetry payloads.
*/
export async function fetchTelemetry(limit = NODE_LIMIT, since = 0) {
const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
let url = `/api/telemetry?limit=${effectiveLimit}`;
if (since > 0) url += `&since=${since}`;
const r = await fetch(url, { cache: 'default' });
if (!r.ok) throw new Error('HTTP ' + r.status);
return r.json();
}
/**
* Fetch position packets from the JSON API.
*
* @param {number} [limit=NODE_LIMIT] Maximum number of rows.
* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
* @returns {Promise<Array<Object>>} Parsed position payloads.
*/
export async function fetchPositions(limit = NODE_LIMIT, since = 0) {
const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
let url = `/api/positions?limit=${effectiveLimit}`;
if (since > 0) url += `&since=${since}`;
const r = await fetch(url, { cache: 'default' });
if (!r.ok) throw new Error('HTTP ' + r.status);
return r.json();
}
@@ -0,0 +1,181 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Pure data-merge helpers that fold position and telemetry packets into
* the node collection without touching any closure or DOM state.
*
* @module main/data-merge
*/
import { resolveTimestampSeconds, toFiniteNumber } from './format-utils.js';
/**
* Merge recent position packets into the node list.
*
* Mutates each node entry in place, updating coordinates / altitude /
* position-time fields when the incoming packet carries a strictly newer
* timestamp.
*
* @param {Array<Object>} nodes Node payloads.
* @param {Array<Object>} positions Position entries.
* @returns {void}
*/
export function mergePositionsIntoNodes(nodes, positions) {
if (!Array.isArray(nodes) || !Array.isArray(positions) || nodes.length === 0) return;
const nodesById = new Map();
for (const node of nodes) {
if (!node || typeof node !== 'object') continue;
const key = typeof node.node_id === 'string' ? node.node_id : null;
if (key) nodesById.set(key, node);
}
if (nodesById.size === 0) return;
const updated = new Set();
for (const pos of positions) {
if (!pos || typeof pos !== 'object') continue;
const nodeId = typeof pos.node_id === 'string' ? pos.node_id : null;
if (!nodeId || updated.has(nodeId)) continue;
const node = nodesById.get(nodeId);
if (!node) continue;
const lat = toFiniteNumber(pos.latitude);
const lon = toFiniteNumber(pos.longitude);
if (lat == null || lon == null) continue;
const currentTimestamp = resolveTimestampSeconds(node.position_time, node.pos_time_iso);
const incomingTimestamp = resolveTimestampSeconds(pos.position_time, pos.position_time_iso);
if (currentTimestamp != null) {
if (incomingTimestamp == null || incomingTimestamp <= currentTimestamp) {
continue;
}
}
updated.add(nodeId);
node.latitude = lat;
node.longitude = lon;
const alt = toFiniteNumber(pos.altitude);
if (alt != null) node.altitude = alt;
const posTime = toFiniteNumber(pos.position_time);
if (posTime != null) {
node.position_time = posTime;
node.pos_time_iso = typeof pos.position_time_iso === 'string' && pos.position_time_iso.length
? pos.position_time_iso
: new Date(posTime * 1000).toISOString();
} else if (typeof pos.position_time_iso === 'string' && pos.position_time_iso.length) {
node.pos_time_iso = pos.position_time_iso;
}
if (pos.location_source != null && pos.location_source !== '') {
node.location_source = pos.location_source;
}
const precision = toFiniteNumber(pos.precision_bits);
if (precision != null) node.precision_bits = precision;
}
}
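The strictly-newer timestamp guard above reduces to a small predicate. The following is a hypothetical distillation for illustration, not part of the module's actual API:

```javascript
// An incoming position only replaces the stored one when its timestamp is
// strictly newer; with nothing stored, any valid position applies.
function shouldApply(currentTs, incomingTs) {
  if (currentTs == null) return true; // nothing recorded yet
  return incomingTs != null && incomingTs > currentTs;
}

shouldApply(null, 100); // true: first observation wins
shouldApply(100, 100);  // false: equal is not strictly newer
shouldApply(100, 99);   // false: stale packet is dropped
```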
/**
* Build a lookup table of telemetry entries keyed by node identifier.
*
* @param {Array<Object>} entries Telemetry payloads.
* @returns {{byNodeId: Map<string, {entry: Object, timestamp: number}>, byNodeNum: Map<number, {entry: Object, timestamp: number}>}}
* Indexed telemetry data.
*/
export function buildTelemetryIndex(entries) {
const byNodeId = new Map();
const byNodeNum = new Map();
if (!Array.isArray(entries)) {
return { byNodeId, byNodeNum };
}
for (const entry of entries) {
if (!entry || typeof entry !== 'object') continue;
const nodeId = typeof entry.node_id === 'string' ? entry.node_id : (typeof entry.nodeId === 'string' ? entry.nodeId : null);
const nodeNumRaw = entry.node_num ?? entry.nodeNum;
const nodeNum = typeof nodeNumRaw === 'number' ? nodeNumRaw : Number(nodeNumRaw);
const rxTime = toFiniteNumber(entry.rx_time ?? entry.rxTime);
const telemetryTime = toFiniteNumber(entry.telemetry_time ?? entry.telemetryTime);
const timestamp = rxTime != null ? rxTime : telemetryTime != null ? telemetryTime : Number.NEGATIVE_INFINITY;
if (nodeId) {
const existing = byNodeId.get(nodeId);
if (!existing || timestamp > existing.timestamp) {
byNodeId.set(nodeId, { entry, timestamp });
}
}
if (Number.isFinite(nodeNum)) {
const existing = byNodeNum.get(nodeNum);
if (!existing || timestamp > existing.timestamp) {
byNodeNum.set(nodeNum, { entry, timestamp });
}
}
}
return { byNodeId, byNodeNum };
}
/**
* Merge telemetry metrics into the node list.
*
* Mutates each node entry in place, copying battery / voltage / channel
* utilisation / environmental fields from the freshest telemetry packet that
* matches by ``node_id`` or ``node_num``.
*
* @param {Array<Object>} nodes Node payloads.
* @param {Array<Object>} telemetryEntries Telemetry data.
* @returns {void}
*/
export function mergeTelemetryIntoNodes(nodes, telemetryEntries) {
if (!Array.isArray(nodes) || !nodes.length) return;
const { byNodeId, byNodeNum } = buildTelemetryIndex(telemetryEntries);
for (const node of nodes) {
if (!node || typeof node !== 'object') continue;
const nodeId = typeof node.node_id === 'string' ? node.node_id : (typeof node.nodeId === 'string' ? node.nodeId : null);
const nodeNumRaw = node.num ?? node.node_num ?? node.nodeNum;
const nodeNum = typeof nodeNumRaw === 'number' ? nodeNumRaw : Number(nodeNumRaw);
let telemetryEntry = null;
if (nodeId && byNodeId.has(nodeId)) {
telemetryEntry = byNodeId.get(nodeId).entry;
} else if (Number.isFinite(nodeNum) && byNodeNum.has(nodeNum)) {
telemetryEntry = byNodeNum.get(nodeNum).entry;
}
if (!telemetryEntry || typeof telemetryEntry !== 'object') continue;
const metrics = {
battery_level: toFiniteNumber(telemetryEntry.battery_level ?? telemetryEntry.batteryLevel),
voltage: toFiniteNumber(telemetryEntry.voltage),
uptime_seconds: toFiniteNumber(telemetryEntry.uptime_seconds ?? telemetryEntry.uptimeSeconds),
channel_utilization: toFiniteNumber(telemetryEntry.channel_utilization ?? telemetryEntry.channelUtilization),
air_util_tx: toFiniteNumber(telemetryEntry.air_util_tx ?? telemetryEntry.airUtilTx),
temperature: toFiniteNumber(telemetryEntry.temperature),
relative_humidity: toFiniteNumber(telemetryEntry.relative_humidity ?? telemetryEntry.relativeHumidity),
barometric_pressure: toFiniteNumber(telemetryEntry.barometric_pressure ?? telemetryEntry.barometricPressure),
};
for (const [key, value] of Object.entries(metrics)) {
if (value == null) continue;
node[key] = value;
}
const telemetryTime = toFiniteNumber(telemetryEntry.telemetry_time ?? telemetryEntry.telemetryTime);
if (telemetryTime != null) {
node.telemetry_time = telemetryTime;
}
const rxTime = toFiniteNumber(telemetryEntry.rx_time ?? telemetryEntry.rxTime);
if (rxTime != null) {
node.telemetry_rx_time = rxTime;
}
}
}
@@ -0,0 +1,53 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Filter-key helpers used to disambiguate role buttons across protocols.
*
* @module main/filter-helpers
*/
import { isMeshcoreProtocol } from '../protocol-helpers.js';
import { getRoleKey } from '../role-helpers.js';
/**
* Canonical protocol token for use in compound filter keys.
*
* Collapses null/absent/unknown protocol values to ``'meshtastic'`` so that
* pre-protocol legacy records land in the Meshtastic filter bucket.
*
* @param {string|null|undefined} protocol Raw protocol value.
* @returns {'meshtastic'|'meshcore'} Normalised protocol token.
*/
export function normalizeFilterProtocol(protocol) {
return isMeshcoreProtocol(protocol) ? 'meshcore' : 'meshtastic';
}
/**
* Build a compound filter key that encodes both protocol and role.
*
* Using compound keys avoids collisions between role names that appear in
* both Meshtastic and MeshCore (e.g. ``SENSOR``, ``REPEATER``). The filter
* set stores these keys so that clicking the MeshCore SENSOR button only
* includes MeshCore SENSOR nodes, not Meshtastic ones.
*
* @param {*} role Raw role value from the API.
* @param {string|null|undefined} protocol Protocol string from the API.
* @returns {string} Compound key in the form ``"<protocol>:<roleKey>"``.
*/
export function makeRoleFilterKey(role, protocol) {
return `${normalizeFilterProtocol(protocol)}:${getRoleKey(role)}`;
}
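// Usage sketch for the compound-key scheme above. The stand-ins below are
// assumptions: the real isMeshcoreProtocol and getRoleKey live in
// ../protocol-helpers.js and ../role-helpers.js and may normalise
// differently; only the key shape is illustrated here.
const isMeshcoreProtocolSketch = p => typeof p === 'string' && p.trim().toLowerCase() === 'meshcore';
const getRoleKeySketch = r => String(r ?? 'UNKNOWN').toUpperCase();
const makeRoleFilterKeySketch = (role, protocol) =>
  `${isMeshcoreProtocolSketch(protocol) ? 'meshcore' : 'meshtastic'}:${getRoleKeySketch(role)}`;
// Same role name, different protocols, distinct filter buckets:
// makeRoleFilterKeySketch('SENSOR', 'meshcore') → 'meshcore:SENSOR'
// makeRoleFilterKeySketch('SENSOR', null)       → 'meshtastic:SENSOR'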
@@ -0,0 +1,282 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Pure formatting helpers used throughout the dashboard.
*
* Extracted from ``main.js`` so that submodules and unit tests can import
* them without dragging in the entire ``initializeApp`` closure. Every
 * function here is deterministic and free of closure and DOM state.

*
* @module main/format-utils
*/
/**
* Pad a numeric value with leading zeros.
*
* @param {number} n Numeric value.
* @returns {string} Padded string.
*/
export function pad(n) {
return String(n).padStart(2, '0');
}
/**
* Format a ``Date`` object as ``HH:MM:SS``.
*
* @param {Date} d Date instance.
* @returns {string} Time string.
*/
export function formatTime(d) {
return pad(d.getHours()) + ':' + pad(d.getMinutes()) + ':' + pad(d.getSeconds());
}
/**
* Format a ``Date`` object as ``YYYY-MM-DD``.
*
* @param {Date} d Date instance.
* @returns {string} Date string.
*/
export function formatDate(d) {
return d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' + pad(d.getDate());
}
/**
* Format hardware model strings for display.
*
* @param {*} v Raw hardware model value.
* @returns {string} Sanitised string.
*/
export function fmtHw(v) {
return v && v !== 'UNSET' ? String(v) : '';
}
/**
* Format coordinate values with a configurable precision.
*
* @param {*} v Raw coordinate value.
* @param {number} [d=5] Decimal precision.
* @returns {string} Formatted coordinate string.
*/
export function fmtCoords(v, d = 5) {
if (v == null || v === '') return '';
const n = Number(v);
return Number.isFinite(n) ? n.toFixed(d) : '';
}
/**
* Format SNR readings with a ``dB`` suffix.
*
* @param {*} value Raw SNR value.
* @returns {string} Formatted SNR string.
*/
export function formatSnrDisplay(value) {
if (value == null || value === '') return '';
const n = Number(value);
if (!Number.isFinite(n)) return '';
return `${n.toFixed(1)} dB`;
}
/**
* Convert a duration in seconds into a human readable string.
*
* @param {number} unixSec Duration in seconds.
* @returns {string} Human readable representation.
*/
export function timeHum(unixSec) {
if (!unixSec) return '';
if (unixSec < 0) return '0s';
if (unixSec < 60) return `${unixSec}s`;
if (unixSec < 3600) return `${Math.floor(unixSec / 60)}m ${Math.floor((unixSec % 60))}s`;
if (unixSec < 86400) return `${Math.floor(unixSec / 3600)}h ${Math.floor((unixSec % 3600) / 60)}m`;
return `${Math.floor(unixSec / 86400)}d ${Math.floor((unixSec % 86400) / 3600)}h`;
}
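// Sample outputs of timeHum; the sketch restates its branches verbatim so
// it runs standalone (values checked against the thresholds above):
function timeHumSketch(unixSec) {
  if (!unixSec) return '';
  if (unixSec < 0) return '0s';
  if (unixSec < 60) return `${unixSec}s`;
  if (unixSec < 3600) return `${Math.floor(unixSec / 60)}m ${Math.floor(unixSec % 60)}s`;
  if (unixSec < 86400) return `${Math.floor(unixSec / 3600)}h ${Math.floor((unixSec % 3600) / 60)}m`;
  return `${Math.floor(unixSec / 86400)}d ${Math.floor((unixSec % 86400) / 3600)}h`;
}
// timeHumSketch(42)    → '42s'
// timeHumSketch(90)    → '1m 30s'
// timeHumSketch(3750)  → '1h 2m'
// timeHumSketch(90061) → '1d 1h'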
/**
* Return a relative time string describing how long ago an event occurred.
*
* @param {number} unixSec Timestamp in seconds.
* @param {number} [nowSec] Reference timestamp.
* @returns {string} Human readable relative time.
*/
export function timeAgo(unixSec, nowSec = Date.now() / 1000) {
if (!unixSec) return '';
const diff = Math.floor(nowSec - Number(unixSec));
if (diff < 0) return '0s';
if (diff < 60) return `${diff}s`;
if (diff < 3600) return `${Math.floor(diff / 60)}m ${Math.floor((diff % 60))}s`;
if (diff < 86400) return `${Math.floor(diff / 3600)}h ${Math.floor((diff % 3600) / 60)}m`;
return `${Math.floor(diff / 86400)}d ${Math.floor((diff % 86400) / 3600)}h`;
}
/**
* Convert arbitrary values to finite numbers when possible.
*
* @param {*} value Raw value.
* @returns {number|null} Finite number or null when conversion fails.
*/
export function toFiniteNumber(value) {
if (value == null || value === '') return null;
const num = typeof value === 'number' ? value : Number(value);
return Number.isFinite(num) ? num : null;
}
/**
* Determine the best-effort timestamp in seconds from numeric or ISO values.
*
* @param {*} numeric Numeric timestamp.
* @param {*} isoString ISO formatted timestamp.
* @returns {number|null} Timestamp in seconds.
*/
export function resolveTimestampSeconds(numeric, isoString) {
const parsedNumeric = toFiniteNumber(numeric);
if (parsedNumeric != null) return parsedNumeric;
if (typeof isoString === 'string' && isoString.length) {
const parsedIso = Date.parse(isoString);
if (Number.isFinite(parsedIso)) {
return parsedIso / 1000;
}
}
return null;
}
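// How the numeric and ISO inputs interact; both helpers restated verbatim
// so the sketch runs standalone:
function toFiniteNumberSketch(value) {
  if (value == null || value === '') return null;
  const num = typeof value === 'number' ? value : Number(value);
  return Number.isFinite(num) ? num : null;
}
function resolveTimestampSecondsSketch(numeric, isoString) {
  const parsedNumeric = toFiniteNumberSketch(numeric);
  if (parsedNumeric != null) return parsedNumeric;
  if (typeof isoString === 'string' && isoString.length) {
    const parsedIso = Date.parse(isoString);
    if (Number.isFinite(parsedIso)) return parsedIso / 1000;
  }
  return null;
}
// Numeric wins over ISO; blank or unparsable inputs fall through to null:
// resolveTimestampSecondsSketch('1700000000', '2024-01-01T00:00:00Z') → 1700000000
// resolveTimestampSecondsSketch(null, '1970-01-01T00:00:10Z')         → 10
// resolveTimestampSecondsSketch('', 'not a date')                     → null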
/**
* Escape a string for safe use as a CSS selector fragment.
*
* Falls back to a manual escape when ``CSS.escape`` is unavailable.
*
* @param {string} value Selector fragment.
* @returns {string} Escaped selector fragment safe for interpolation.
*/
export function cssEscape(value) {
if (typeof value !== 'string' || value.length === 0) {
return '';
}
if (typeof window !== 'undefined' && window.CSS && typeof window.CSS.escape === 'function') {
return window.CSS.escape(value);
}
return value.replace(/[^a-zA-Z0-9_-]/g, chr => `\\${chr}`);
}
/**
* Format uptime values for the short-info overlay.
*
* @param {*} value Raw uptime value.
* @returns {string} Human readable uptime string.
*/
export function formatShortInfoUptime(value) {
if (value == null || value === '') return '';
const num = Number(value);
if (!Number.isFinite(num)) return '';
return num === 0 ? '0s' : timeHum(num);
}
/**
* Format overlay values with an em dash fallback when blank.
*
* @param {*} value Candidate value.
* @returns {string} Formatted value or em dash.
*/
export function shortInfoValueOrDash(value) {
return value != null && value !== '' ? String(value) : '—';
}
/**
* Retrieve the first present property value from a collection of objects.
*
* @param {Array<Object>} sources Candidate objects.
* @param {Array<string>} keys Ordered property names to inspect.
* @returns {*} First present non-blank value or ``null`` when absent.
*/
export function pickFirstProperty(sources, keys) {
if (!Array.isArray(sources) || !Array.isArray(keys)) {
return null;
}
for (const source of sources) {
if (!source || typeof source !== 'object') continue;
for (const key of keys) {
if (!Object.prototype.hasOwnProperty.call(source, key)) continue;
const value = source[key];
if (value == null) continue;
if (typeof value === 'string') {
const trimmed = value.trim();
if (trimmed.length === 0) {
continue;
}
return trimmed;
}
return value;
}
}
return null;
}
/**
* Retrieve the first finite numeric property from candidate objects.
*
* @param {Array<Object>} sources Candidate objects.
* @param {Array<string>} keys Ordered property names to inspect.
* @returns {?number} First finite number when available.
*/
export function pickNumericProperty(sources, keys) {
if (!Array.isArray(sources) || !Array.isArray(keys)) {
return null;
}
for (const source of sources) {
if (!source || typeof source !== 'object') continue;
for (const key of keys) {
if (!Object.prototype.hasOwnProperty.call(source, key)) continue;
const raw = source[key];
if (raw == null || raw === '') continue;
const num = typeof raw === 'number' ? raw : Number(raw);
if (Number.isFinite(num)) {
return num;
}
}
}
return null;
}
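// Lookup-order sketch for pickFirstProperty: sources outrank keys, and
// blank strings are skipped (function restated verbatim so the example
// runs standalone):
function pickFirstPropertySketch(sources, keys) {
  if (!Array.isArray(sources) || !Array.isArray(keys)) return null;
  for (const source of sources) {
    if (!source || typeof source !== 'object') continue;
    for (const key of keys) {
      if (!Object.prototype.hasOwnProperty.call(source, key)) continue;
      const value = source[key];
      if (value == null) continue;
      if (typeof value === 'string') {
        const trimmed = value.trim();
        if (trimmed.length === 0) continue;
        return trimmed;
      }
      return value;
    }
  }
  return null;
}
// A whitespace-only value falls through to the next source, and an earlier
// source can win via a lower-priority key:
// pickFirstPropertySketch([{ name: '  ' }, { name: 'ALPHA' }], ['name', 'label'])          → 'ALPHA'
// pickFirstPropertySketch([{ label: 'fallback' }, { name: 'ALPHA' }], ['name', 'label'])   → 'fallback'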
/**
* Parse a node identifier or numeric reference into a finite number.
*
* @param {*} ref Identifier or numeric reference.
* @returns {number|null} Parsed number or ``null``.
*/
export function parseNodeNumericRef(ref) {
if (ref == null) return null;
if (typeof ref === 'number') {
return Number.isFinite(ref) ? ref : null;
}
if (typeof ref === 'string') {
const trimmed = ref.trim();
if (!trimmed) return null;
if (trimmed.startsWith('!')) {
const hex = trimmed.slice(1);
if (!/^[0-9A-Fa-f]+$/.test(hex)) return null;
const parsedHex = Number.parseInt(hex, 16);
return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
}
if (/^0[xX][0-9A-Fa-f]+$/.test(trimmed)) {
const parsedHex = Number.parseInt(trimmed, 16);
return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
}
const parsed = Number(trimmed);
return Number.isFinite(parsed) ? parsed : null;
}
const parsed = Number(ref);
return Number.isFinite(parsed) ? parsed : null;
}
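// Accepted reference shapes (function restated verbatim for a standalone
// run): bang-prefixed hex ids, 0x-prefixed hex, and plain decimals.
function parseNodeNumericRefSketch(ref) {
  if (ref == null) return null;
  if (typeof ref === 'number') return Number.isFinite(ref) ? ref : null;
  if (typeof ref === 'string') {
    const trimmed = ref.trim();
    if (!trimmed) return null;
    if (trimmed.startsWith('!')) {
      const hex = trimmed.slice(1);
      if (!/^[0-9A-Fa-f]+$/.test(hex)) return null;
      const parsedHex = Number.parseInt(hex, 16);
      return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
    }
    if (/^0[xX][0-9A-Fa-f]+$/.test(trimmed)) {
      const parsedHex = Number.parseInt(trimmed, 16);
      return Number.isFinite(parsedHex) ? parsedHex >>> 0 : null;
    }
    const parsed = Number(trimmed);
    return Number.isFinite(parsed) ? parsed : null;
  }
  const parsed = Number(ref);
  return Number.isFinite(parsed) ? parsed : null;
}
// parseNodeNumericRefSketch('!0000ffff') → 65535  (Meshtastic-style bang id)
// parseNodeNumericRefSketch('0xFF')      → 255
// parseNodeNumericRefSketch(' 42 ')      → 42
// parseNodeNumericRefSketch('!xyz')      → null   (non-hex after the bang)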
@@ -0,0 +1,54 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Pure helpers for the Fullscreen API used by the map fullscreen toggle.
*
* @module main/fullscreen-helpers
*/
/**
* Resolve the element currently being displayed in fullscreen mode.
*
* @returns {Element|null} Active fullscreen element if any.
*/
export function getActiveFullscreenElement() {
if (typeof document === 'undefined') return null;
return (
document.fullscreenElement ||
document.webkitFullscreenElement ||
document.msFullscreenElement ||
null
);
}
/**
* Wrap a legend button click handler so it always calls
* ``preventDefault`` and ``stopPropagation`` before running the body.
*
 * Centralising this avoids repeating the two-line preventDefault /
 * stopPropagation boilerplate in every legend button handler.
*
* @param {function(Event): void} fn Handler body.
* @returns {function(Event): void} Full click listener.
*/
export function legendClickHandler(fn) {
return (event) => {
event.preventDefault();
event.stopPropagation();
fn(event);
};
}
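// Call-order sketch with a minimal stand-in event object (the wrapper is
// restated verbatim; no DOM is needed):
function legendClickHandlerSketch(fn) {
  return (event) => {
    event.preventDefault();
    event.stopPropagation();
    fn(event);
  };
}
const legendCalls = [];
const fakeEvent = {
  preventDefault: () => legendCalls.push('preventDefault'),
  stopPropagation: () => legendCalls.push('stopPropagation'),
};
legendClickHandlerSketch(() => legendCalls.push('body'))(fakeEvent);
// legendCalls → ['preventDefault', 'stopPropagation', 'body']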
@@ -0,0 +1,125 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Helpers used by the long-name link click router and overlay name fallback.
*
* @module main/long-link-router
*/
import { canonicalNodeIdentifier, normalizeNodeNameValue } from '../node-rendering.js';
/**
* Determine whether a long name link should trigger the overlay behaviour.
*
* @param {?Element} link Anchor element.
* @returns {boolean} ``true`` when the link participates in overlays.
*/
export function shouldHandleNodeLongLink(link) {
if (!link || !link.dataset) return false;
if ('nodeDetailLink' in link.dataset && link.dataset.nodeDetailLink === 'false') {
return false;
}
return true;
}
/**
* Extract the canonical identifier from a node detail hyperlink.
*
* @param {string} href Link href attribute.
* @returns {string} Canonical identifier or ``''``.
*/
export function extractIdentifierFromHref(href) {
if (typeof href !== 'string' || href.length === 0) {
return '';
}
const match = href.match(/\/nodes\/(![^/?#]+)/i);
if (!match || !match[1]) {
return '';
}
try {
const decoded = decodeURIComponent(match[1]);
return canonicalNodeIdentifier(decoded) ?? '';
} catch {
return canonicalNodeIdentifier(match[1]) ?? '';
}
}
/**
* Extract the canonical node identifier from the provided link element.
*
* @param {?Element} link Anchor element.
* @returns {string} Canonical node identifier or ``''`` when unavailable.
*/
export function getNodeIdentifierFromLink(link) {
if (!link) return '';
const datasetIdentifier = link.dataset && typeof link.dataset.nodeId === 'string'
? canonicalNodeIdentifier(link.dataset.nodeId)
: null;
if (datasetIdentifier) {
return datasetIdentifier;
}
if (typeof link.getAttribute === 'function') {
const attrHref = link.getAttribute('href');
const canonicalFromAttr = extractIdentifierFromHref(attrHref);
if (canonicalFromAttr) {
return canonicalFromAttr;
}
}
if (typeof link.href === 'string') {
const canonicalFromProperty = extractIdentifierFromHref(link.href);
if (canonicalFromProperty) {
return canonicalFromProperty;
}
}
return '';
}
/**
* Determine the preferred display name for overlay content.
*
* @param {Object} node Node payload.
* @returns {string} Friendly display name.
*/
export function getNodeDisplayNameForOverlay(node) {
if (!node || typeof node !== 'object') return '';
return (
normalizeNodeNameValue(node.long_name ?? node.longName) ||
normalizeNodeNameValue(node.short_name ?? node.shortName) ||
(typeof node.node_id === 'string' ? node.node_id : '')
);
}
/**
* Populate missing node name fields with sensible defaults.
*
* @param {Object} node Node payload.
* @returns {void}
*/
export function applyNodeNameFallback(node) {
if (!node || typeof node !== 'object') return;
const short = normalizeNodeNameValue(node.short_name ?? node.shortName);
const long = normalizeNodeNameValue(node.long_name ?? node.longName);
if (short || long) return;
const nodeId = normalizeNodeNameValue(node.node_id ?? node.nodeId);
if (!nodeId) return;
const fallbackShort = nodeId.slice(-4);
const fallbackLong = `Meshtastic ${nodeId}`;
node.short_name = fallbackShort;
node.long_name = fallbackLong;
if ('shortName' in node) node.shortName = fallbackShort;
if ('longName' in node) node.longName = fallbackLong;
}
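// Fallback-behaviour sketch. normalizeNodeNameValue is not defined in this
// module; the trim-based stand-in below is an assumption, and only the
// slice(-4) / "Meshtastic <id>" rule is taken from the code above.
const normalizeNameSketch = v => (typeof v === 'string' ? v.trim() : '');
function applyNodeNameFallbackSketch(node) {
  if (!node || typeof node !== 'object') return;
  if (normalizeNameSketch(node.short_name) || normalizeNameSketch(node.long_name)) return;
  const nodeId = normalizeNameSketch(node.node_id);
  if (!nodeId) return;
  node.short_name = nodeId.slice(-4);
  node.long_name = `Meshtastic ${nodeId}`;
}
const anonNode = { node_id: '!336a19fc', short_name: '', long_name: '' };
applyNodeNameFallbackSketch(anonNode);
// anonNode.short_name → '19fc', anonNode.long_name → 'Meshtastic !336a19fc'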
@@ -0,0 +1,134 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Offline-fallback Leaflet ``GridLayer`` factory.
*
* Receives the Leaflet global as a parameter so the module remains free of
* implicit closure dependencies while still rendering identical placeholder
* tiles when network basemaps are unavailable.
*
* @module main/offline-tile-layer
*/
import { tileToLat, tileToLon } from './tile-coords.js';
/**
* Create a minimal Leaflet tile layer that renders offline tiles from cache.
*
* @param {Object|null} L Leaflet global, or ``null`` when Leaflet is unavailable.
* @returns {Object|null} Configured tile layer instance, or ``null`` when Leaflet is missing.
*/
export function createOfflineTileLayer(L) {
if (!L || typeof L.gridLayer !== 'function') return null;
const offlineLayer = L.gridLayer({ className: 'map-tiles map-tiles-offline' });
/** @type {HTMLElement|null} */
let cachedOfflineFallbackTile = null;
/**
* Provide a minimal placeholder tile when canvas rendering is not available.
*
* @param {number} size Pixel width and height of the tile.
* @returns {HTMLElement} Cloned fallback element ready for Leaflet consumption.
*/
function getOfflineFallbackTile(size) {
if (!cachedOfflineFallbackTile) {
const placeholder = document.createElement('div');
placeholder.className = 'offline-tile-fallback';
placeholder.style.width = `${size}px`;
placeholder.style.height = `${size}px`;
placeholder.style.backgroundColor = 'rgba(33, 66, 110, 0.92)';
placeholder.style.display = 'flex';
placeholder.style.alignItems = 'center';
placeholder.style.justifyContent = 'center';
placeholder.style.color = 'rgba(255, 255, 255, 0.6)';
placeholder.style.font = 'bold 14px system-ui, sans-serif';
placeholder.style.textTransform = 'uppercase';
placeholder.textContent = 'Offline tile';
cachedOfflineFallbackTile = placeholder;
}
return /** @type {HTMLElement} */ (cachedOfflineFallbackTile.cloneNode(true));
}
/**
* Render a placeholder tile for offline map usage.
*
* @param {{x: number, y: number, z: number}} coords Tile coordinates supplied by Leaflet.
* @returns {HTMLElement} Tile node containing placeholder artwork.
*/
offlineLayer.createTile = coords => {
const size = 256;
const canvas = document.createElement('canvas');
canvas.width = size;
canvas.height = size;
const ctx = canvas.getContext('2d');
if (!ctx) {
console.warn('Canvas 2D context unavailable for offline tile rendering. Using fallback placeholder.');
return getOfflineFallbackTile(size);
}
try {
const gradient = ctx.createLinearGradient(0, 0, size, size);
gradient.addColorStop(0, 'rgba(33, 66, 110, 0.92)');
gradient.addColorStop(1, 'rgba(64, 98, 144, 0.92)');
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, size, size);
ctx.strokeStyle = 'rgba(255,255,255,0.12)';
ctx.lineWidth = 1;
const steps = 4;
for (let i = 1; i < steps; i++) {
const pos = (size / steps) * i;
ctx.beginPath();
ctx.moveTo(pos, 0);
ctx.lineTo(pos, size);
ctx.stroke();
ctx.beginPath();
ctx.moveTo(0, pos);
ctx.lineTo(size, pos);
ctx.stroke();
}
const west = tileToLon(coords.x, coords.z);
const east = tileToLon(coords.x + 1, coords.z);
const north = tileToLat(coords.y, coords.z);
const south = tileToLat(coords.y + 1, coords.z);
ctx.fillStyle = 'rgba(255,255,255,0.7)';
ctx.font = '12px system-ui, sans-serif';
ctx.textBaseline = 'top';
ctx.fillText(`${west.toFixed(1)}°`, 8, 8);
ctx.textBaseline = 'bottom';
ctx.fillText(`${east.toFixed(1)}°`, 8, size - 8);
ctx.textAlign = 'right';
ctx.textBaseline = 'top';
ctx.fillText(`${north.toFixed(1)}°`, size - 8, 8);
ctx.textBaseline = 'bottom';
ctx.fillText(`${south.toFixed(1)}°`, size - 8, size - 8);
ctx.textAlign = 'center';
ctx.textBaseline = 'middle';
ctx.fillStyle = 'rgba(255,255,255,0.35)';
ctx.font = 'bold 22px system-ui, sans-serif';
ctx.fillText('PotatoMesh offline basemap', size / 2, size / 2);
return canvas;
} catch (error) {
console.error('Failed to render offline tile. Falling back to placeholder element.', error);
return getOfflineFallbackTile(size);
}
};
return offlineLayer;
}
@@ -0,0 +1,57 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Protocol-icon ``<img>`` builders shared between the legend and meta-row
* controls.
*
* @module main/protocol-icons
*/
import { MESHTASTIC_ICON_SRC, MESHCORE_ICON_SRC } from '../protocol-helpers.js';
/**
* Build a protocol icon image element with consistent attributes.
*
* Both the legend and the meta-row protocol toggle use this helper so the
* output is identical regardless of insertion method.
*
* @param {string} src Absolute path to the SVG asset.
* @param {string} variantClass BEM modifier class, e.g. ``protocol-icon--meshtastic``.
* @returns {HTMLImageElement} Icon element ready to append.
*/
export function buildProtocolIconImg(src, variantClass) {
const img = document.createElement('img');
img.setAttribute('src', src);
img.setAttribute('alt', '');
img.setAttribute('width', '12');
img.setAttribute('height', '12');
img.setAttribute('aria-hidden', 'true');
img.setAttribute('loading', 'lazy');
img.setAttribute('decoding', 'async');
img.className = `protocol-icon ${variantClass}`;
return img;
}
/** @returns {HTMLImageElement} Meshtastic protocol icon element. */
export function buildMeshtasticIconImg() {
return buildProtocolIconImg(MESHTASTIC_ICON_SRC, 'protocol-icon--meshtastic');
}
/** @returns {HTMLImageElement} MeshCore protocol icon element. */
export function buildMeshcoreIconImg() {
return buildProtocolIconImg(MESHCORE_ICON_SRC, 'protocol-icon--meshcore');
}
@@ -0,0 +1,92 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Render the role-aware short-name badge used by maps, tables, popups, and
* overlay surfaces.
*
* The function is deliberately dependency-free besides shared modules so it
* can be exposed via ``globalThis.PotatoMesh.renderShortHtml`` and consumed by
* the node-detail page without dragging the dashboard's closure state along.
*
* @module main/short-html-renderer
*/
import { escapeHtml } from '../utils.js';
import { collectTelemetryMetrics } from '../short-info-telemetry.js';
import { getRoleColor, getRoleTextColor, normalizeRole } from '../role-helpers.js';
/**
* Render a short name badge with role-based styling.
*
* @param {string} short Short node identifier.
* @param {string} role Node role string.
* @param {string} longName Full node name.
* @param {?Object} nodeData Optional node metadata attached to the badge.
* @returns {string} HTML snippet describing the badge.
*/
export function renderShortHtml(short, role, longName, nodeData = null) {
const safeTitle = longName ? escapeHtml(String(longName)) : '';
const titleAttr = safeTitle ? ` title="${safeTitle}"` : '';
const roleValue = normalizeRole(role != null && role !== '' ? role : (nodeData && nodeData.role));
let infoAttr = '';
if (nodeData && typeof nodeData === 'object') {
const info = {
nodeId: nodeData.node_id ?? nodeData.nodeId ?? '',
nodeNum: nodeData.num ?? nodeData.node_num ?? nodeData.nodeNum ?? null,
shortName: short != null ? String(short) : (nodeData.short_name ?? ''),
longName: nodeData.long_name ?? longName ?? '',
role: roleValue,
hwModel: nodeData.hw_model ?? nodeData.hwModel ?? '',
telemetryTime: nodeData.telemetry_time ?? nodeData.telemetryTime ?? null,
};
Object.assign(info, collectTelemetryMetrics(nodeData));
const attrParts = [` data-node-info="${escapeHtml(JSON.stringify(info))}"`];
const attrNodeIdRaw = info.nodeId != null ? String(info.nodeId).trim() : '';
if (attrNodeIdRaw) {
attrParts.push(` data-node-id="${escapeHtml(attrNodeIdRaw)}"`);
}
const attrNodeNum = Number(info.nodeNum);
if (Number.isFinite(attrNodeNum)) {
attrParts.push(` data-node-num="${escapeHtml(String(attrNodeNum))}"`);
}
infoAttr = attrParts.join('');
}
if (!short) {
return `<span class="short-name" style="background:#ccc"${titleAttr}${infoAttr}>&nbsp;?&nbsp;</span>`;
}
// Pad the label for the badge. For plain-ASCII names that are already
  // 4 characters (Meshtastic always stores exactly 4), no padding is added.
// Shorter names or names containing emoji/non-ASCII get a single space
// on each side — grapheme width varies too much for character-count
// centering to work reliably.
const raw = String(short);
const graphemeCount = typeof Intl !== 'undefined' && Intl.Segmenter
? [...new Intl.Segmenter().segment(raw)].length
: raw.length;
let centred;
if (graphemeCount >= 4) {
centred = raw;
} else {
centred = ` ${raw} `;
}
const padded = escapeHtml(centred).replace(/ /g, '&nbsp;');
const protocol = nodeData?.protocol ?? null;
const color = getRoleColor(roleValue, protocol);
const textColor = getRoleTextColor(roleValue, protocol);
const styleAttr = textColor ? `background:${color};color:${textColor}` : `background:${color}`;
return `<span class="short-name" style="${styleAttr}"${titleAttr}${infoAttr}>${padded}</span>`;
}
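// Grapheme-count sketch for the padding rule above: a 4-char ASCII short
// name stays unpadded, while an emoji counts as one grapheme even though
// its .length is 2 (a surrogate pair), which is why character counts alone
// cannot centre the badge reliably.
function graphemeCountSketch(raw) {
  return typeof Intl !== 'undefined' && Intl.Segmenter
    ? [...new Intl.Segmenter().segment(raw)].length
    : raw.length;
}
// graphemeCountSketch('AB12') → 4   (no padding applied)
// '🥔'.length                 → 2   (UTF-16 code units, not graphemes)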
@@ -0,0 +1,83 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Pure value-presence guards and comparators for the nodes table.
*
* @module main/sort-comparators
*/
/**
* Determine whether a value should count as present when sorting strings.
*
* @param {*} value Candidate value extracted from a node record.
* @returns {boolean} True when the value is a non-empty string.
*/
export function hasStringValue(value) {
if (value == null) return false;
return String(value).trim().length > 0;
}
/**
* Determine whether the provided value can be interpreted as a finite number.
*
* @param {*} value Candidate value extracted from a node record.
* @returns {boolean} True when the value parses to a finite number.
*/
export function hasNumberValue(value) {
if (value == null || value === '') return false;
const num = typeof value === 'number' ? value : Number(value);
return Number.isFinite(num);
}
/**
* Locale-aware comparator for string table values.
*
* @param {*} a First value.
* @param {*} b Second value.
* @returns {number} Comparator result compatible with ``Array.prototype.sort``.
*/
export function compareString(a, b) {
const strA = (a == null ? '' : String(a)).trim();
const strB = (b == null ? '' : String(b)).trim();
const hasA = strA.length > 0;
const hasB = strB.length > 0;
if (!hasA && !hasB) return 0;
if (!hasA) return 1;
if (!hasB) return -1;
return strA.localeCompare(strB, undefined, { numeric: true, sensitivity: 'base' });
}
/**
* Comparator for numeric table values that tolerates string inputs.
*
* @param {*} a First value.
* @param {*} b Second value.
* @returns {number} Comparator result for ``Array.prototype.sort``.
*/
export function compareNumber(a, b) {
const numA = typeof a === 'number' ? a : Number(a);
const numB = typeof b === 'number' ? b : Number(b);
const validA = Number.isFinite(numA);
const validB = Number.isFinite(numB);
if (validA && validB) {
if (numA === numB) return 0;
return numA < numB ? -1 : 1;
}
if (validA) return -1;
if (validB) return 1;
return 0;
}
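// Blank-last ordering sketch for compareString (restated verbatim); the
// numeric collation option makes 'a2' sort before 'a10':
function compareStringSketch(a, b) {
  const strA = (a == null ? '' : String(a)).trim();
  const strB = (b == null ? '' : String(b)).trim();
  const hasA = strA.length > 0;
  const hasB = strB.length > 0;
  if (!hasA && !hasB) return 0;
  if (!hasA) return 1;
  if (!hasB) return -1;
  return strA.localeCompare(strB, undefined, { numeric: true, sensitivity: 'base' });
}
// ['b', '', 'a10', 'a2'].sort(compareStringSketch) → ['a2', 'a10', 'b', '']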
@@ -0,0 +1,44 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Tile-index longitude/latitude conversions for slippy map tiles.
*
* @module main/tile-coords
*/
/**
* Convert a tile X coordinate to longitude degrees.
*
* @param {number} x Tile X index.
* @param {number} z Zoom level.
* @returns {number} Longitude in degrees.
*/
export function tileToLon(x, z) {
return (x / Math.pow(2, z)) * 360 - 180;
}
/**
* Convert a tile Y coordinate to latitude degrees.
*
* @param {number} y Tile Y index.
* @param {number} z Zoom level.
* @returns {number} Latitude in degrees.
*/
export function tileToLat(y, z) {
const n = Math.PI - (2 * Math.PI * y) / Math.pow(2, z);
return (180 / Math.PI) * Math.atan(0.5 * (Math.exp(n) - Math.exp(-n)));
}
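// Boundary check: at zoom 0 the single tile spans the full Web-Mercator
// world, so its corners land at ±180° longitude and ≈±85.0511° latitude
// (both conversions restated verbatim for a standalone run):
function tileToLonSketch(x, z) {
  return (x / Math.pow(2, z)) * 360 - 180;
}
function tileToLatSketch(y, z) {
  const n = Math.PI - (2 * Math.PI * y) / Math.pow(2, z);
  return (180 / Math.PI) * Math.atan(0.5 * (Math.exp(n) - Math.exp(-n)));
}
// tileToLonSketch(0, 0) → -180,   tileToLonSketch(1, 0) → 180
// tileToLatSketch(0, 0) ≈ 85.0511, tileToLatSketch(1, 0) ≈ -85.0511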
@@ -0,0 +1,78 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
 * HTML builders for trace and neighbor map tooltips.
*
* @module main/tooltip-html
*/
import { normalizeNodeNameValue } from '../node-rendering.js';
import { renderShortHtml } from './short-html-renderer.js';
/**
* Build tooltip HTML showing styled short-name badges for a trace path.
*
* @param {Array<Object>} pathNodes Ordered node payloads along the trace.
* @returns {string} HTML fragment or ``''`` when unavailable.
*/
export function buildTraceTooltipHtml(pathNodes) {
if (!Array.isArray(pathNodes) || pathNodes.length < 2) {
return '';
}
const parts = pathNodes
.map(node => {
if (!node || typeof node !== 'object') {
return null;
}
const short = normalizeNodeNameValue(node.short_name ?? node.shortName) || (typeof node.node_id === 'string' ? node.node_id : '');
const long = normalizeNodeNameValue(node.long_name ?? node.longName) || '';
return renderShortHtml(short, node.role, long, node);
})
.filter(Boolean);
if (!parts.length) return '';
const arrow = '<span class="trace-tooltip__arrow" aria-hidden="true">→</span>';
return `<div class="trace-tooltip__content">${parts.join(arrow)}</div>`;
}
/**
* Build tooltip HTML for a neighbor segment showing styled short-name badges.
*
* @param {{sourceNode?: Object, targetNode?: Object, sourceShortName?: string, targetShortName?: string, sourceRole?: string, targetRole?: string}} segment Neighbor segment descriptor.
* @returns {string} HTML fragment or ``''`` when unavailable.
*/
export function buildNeighborTooltipHtml(segment) {
if (!segment) return '';
const sourceNode = segment.sourceNode || null;
const targetNode = segment.targetNode || null;
const sourceShort = normalizeNodeNameValue(
segment.sourceShortName ||
(sourceNode ? sourceNode.short_name ?? sourceNode.shortName : null) ||
(sourceNode && typeof sourceNode.node_id === 'string' ? sourceNode.node_id : '')
);
const targetShort = normalizeNodeNameValue(
segment.targetShortName ||
(targetNode ? targetNode.short_name ?? targetNode.shortName : null) ||
(targetNode && typeof targetNode.node_id === 'string' ? targetNode.node_id : '')
);
if (!sourceShort || !targetShort) return '';
const sourceLong = normalizeNodeNameValue(sourceNode?.long_name ?? sourceNode?.longName) || '';
const targetLong = normalizeNodeNameValue(targetNode?.long_name ?? targetNode?.longName) || '';
const sourceHtml = renderShortHtml(sourceShort, segment.sourceRole, sourceLong, sourceNode || {});
const targetHtml = renderShortHtml(targetShort, segment.targetRole, targetLong, targetNode || {});
const arrow = '<span class="trace-tooltip__arrow" aria-hidden="true">→</span>';
return `<div class="trace-tooltip__content">${sourceHtml}${arrow}${targetHtml}</div>`;
}
@@ -14,29 +14,63 @@
* limitations under the License.
*/
/**
* Default upper bound for in-flight ``/api/nodes/:id`` lookups while the
* hydrator backfills sender metadata. Matches the worker-pool size used by
* ``node-page/role-index.js`` so a thundering herd of cold-load lookups
* cannot overwhelm the server.
*/
export const MESSAGE_HYDRATION_CONCURRENCY = 4;
/**
* Build a hydrator capable of attaching node metadata to chat messages.
*
* @param {{
* fetchNodeById: (nodeId: string) => Promise<object|null>,
* applyNodeFallback: (node: object) => void,
* logger?: { warn?: (message?: any, ...optionalParams: any[]) => void }
* }} options Factory configuration.
* logger?: { warn?: (message?: any, ...optionalParams: any[]) => void },
* concurrency?: number
* }} options Factory configuration. ``concurrency`` overrides the default
* worker-pool size and is primarily intended for unit tests; callers
* should leave it unset in production.
* @returns {{
* hydrate: (messages: Array<object>|null|undefined, nodesById: Map<string, object>) => Promise<Array<object>>
* }} Hydrator API.
*/
export function createMessageNodeHydrator({ fetchNodeById, applyNodeFallback, logger = console }) {
export function createMessageNodeHydrator({
fetchNodeById,
applyNodeFallback,
logger = console,
concurrency = MESSAGE_HYDRATION_CONCURRENCY,
}) {
if (typeof fetchNodeById !== 'function') {
throw new TypeError('fetchNodeById must be a function');
}
if (typeof applyNodeFallback !== 'function') {
throw new TypeError('applyNodeFallback must be a function');
}
// Treat any non-positive or non-finite value as "fall back to default". This
// keeps the hydrator robust against accidental misconfiguration without
// degrading to unbounded parallelism.
const workerCap =
Number.isFinite(concurrency) && concurrency > 0
? Math.floor(concurrency)
: MESSAGE_HYDRATION_CONCURRENCY;
/** @type {Map<string, Promise<object|null>>} */
const inflightLookups = new Map();
// Negative-result cache shared across all ``hydrate()`` invocations on
// this hydrator instance. Without it, every refresh tick would re-issue
// ``/api/nodes/:id`` for senders that the server has already returned
// 404 for once — turning a single dead participant in a busy chat into a
// perpetual per-minute fetch. The set is consulted *after* the fresh
// ``nodesById`` lookup, so a node that registers later (and therefore
// appears in the bulk /api/nodes refresh) immediately wins over a stale
// missing entry without any explicit invalidation.
/** @type {Set<string>} */
const missingNodeIds = new Set();
/**
* Normalise potential node identifiers into canonical strings.
*
@@ -63,6 +97,9 @@ export function createMessageNodeHydrator({ fetchNodeById, applyNodeFallback, lo
if (nodesById instanceof Map && nodesById.has(id)) {
return nodesById.get(id);
}
if (missingNodeIds.has(id)) {
return null;
}
if (inflightLookups.has(id)) {
return inflightLookups.get(id);
}
@@ -77,12 +114,14 @@ export function createMessageNodeHydrator({ fetchNodeById, applyNodeFallback, lo
}
return node;
}
missingNodeIds.add(id);
return null;
})
.catch(error => {
if (logger && typeof logger.warn === 'function') {
logger.warn('message node lookup failed', { nodeId: id, error });
}
missingNodeIds.add(id);
return null;
})
.finally(() => {
@@ -96,6 +135,13 @@ export function createMessageNodeHydrator({ fetchNodeById, applyNodeFallback, lo
/**
* Attach node information to the provided message collection.
*
* Messages whose sender is already in ``nodesById`` are bound synchronously
* and incur no network traffic. Misses are pushed onto a shared queue and
* drained by a fixed worker pool so the number of in-flight
* ``/api/nodes/:id`` requests never exceeds {@link workerCap}. This caps
* the cold-load thundering-herd that would otherwise issue one request per
* unique sender in parallel.
*
* @param {Array<object>|null|undefined} messages Message payloads from the API.
* @param {Map<string, object>} nodesById Lookup table of known nodes.
* @returns {Promise<Array<object>>} Hydrated message entries.
@@ -105,7 +151,7 @@ export function createMessageNodeHydrator({ fetchNodeById, applyNodeFallback, lo
return Array.isArray(messages) ? messages : [];
}
const tasks = [];
const queue = [];
for (const message of messages) {
if (!message || typeof message !== 'object') {
continue;
@@ -127,22 +173,36 @@ export function createMessageNodeHydrator({ fetchNodeById, applyNodeFallback, lo
continue;
}
const task = resolveNode(targetId, nodesById).then(node => {
if (node) {
message.node = node;
} else {
const placeholder = { node_id: targetId };
applyNodeFallback(placeholder);
message.node = placeholder;
}
});
tasks.push(task);
queue.push({ message, targetId });
}
if (tasks.length > 0) {
await Promise.all(tasks);
if (queue.length === 0) {
return messages;
}
// Workers share a monotonically advancing index instead of mutating the
// queue with ``shift()`` — ``Array#shift`` is O(n) and would turn a
// large hydration burst into O(n²). Single-threaded JS makes the
// post-increment atomic with respect to other workers, so no lock or
// existence check is needed.
let cursor = 0;
const workerCount = Math.min(workerCap, queue.length);
const workers = Array.from({ length: workerCount }, async () => {
while (cursor < queue.length) {
const entry = queue[cursor++];
// eslint-disable-next-line no-await-in-loop
const node = await resolveNode(entry.targetId, nodesById);
if (node) {
entry.message.node = node;
} else {
const placeholder = { node_id: entry.targetId };
applyNodeFallback(placeholder);
entry.message.node = placeholder;
}
}
});
await Promise.all(workers);
return messages;
}
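The shared-cursor worker pool used by `hydrate()` can be exercised in isolation. This sketch restates the pattern with an illustrative name (`runPool` is not an export of this module): a fixed number of workers advance one shared index, so in-flight work never exceeds the cap and `Array#shift` is never needed.

```javascript
// Minimal restatement of the shared-cursor worker pool: `cap` workers drain
// `queue` concurrently; the post-increment on `cursor` is atomic because JS
// is single-threaded, so no lock or existence check is needed.
async function runPool(queue, cap, worker) {
  const results = new Array(queue.length);
  let cursor = 0;
  const count = Math.min(cap, queue.length);
  const workers = Array.from({ length: count }, async () => {
    while (cursor < queue.length) {
      const index = cursor++;
      // eslint-disable-next-line no-await-in-loop
      results[index] = await worker(queue[index]);
    }
  });
  await Promise.all(workers);
  return results;
}
```

Each worker only pulls the next item once its previous awaited call settles, which is exactly the property the IIFE-then-`Promise.all` pattern failed to provide.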
File diff suppressed because it is too large
@@ -0,0 +1,47 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Time-window and layout constants used by the node detail telemetry charts.
*
* @module node-page-charts/constants
*/
/** One day expressed in milliseconds. */
export const DAY_MS = 86_400_000;
/** One hour expressed in milliseconds. */
export const HOUR_MS = 3_600_000;
/** Rolling telemetry display window: seven days in milliseconds. */
export const TELEMETRY_WINDOW_MS = DAY_MS * 7;
/**
* Default SVG viewport dimensions (pixels) for telemetry charts.
*
* @type {Readonly<{width: number, height: number}>}
*/
export const DEFAULT_CHART_DIMENSIONS = Object.freeze({ width: 660, height: 360 });
/**
* Default inner margin (pixels) applied to every telemetry chart.
*
* Extra room for secondary axes is added dynamically in
* {@link createChartDimensions}.
*
* @type {Readonly<{top: number, right: number, bottom: number, left: number}>}
*/
export const DEFAULT_CHART_MARGIN = Object.freeze({ top: 28, right: 80, bottom: 64, left: 80 });
@@ -0,0 +1,222 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Display-only formatters used by the node detail page outside the chart
* SVG rendering path (hardware model, coordinates, message timestamps,
* uptimes, batteries, etc.). Sibling to ``format-utils.js`` so chart code
* only carries chart concerns.
*
* @module node-page-charts/display-formatters
*/
import { numberOrNull, stringOrNull } from '../value-helpers.js';
import { padTwo } from './format-utils.js';
/**
* Format a frequency value using MHz units when a numeric reading is
* available. Non-numeric input is passed through unchanged.
*
* @param {*} value Raw frequency value.
* @returns {string|null} Formatted frequency string or ``null``.
*/
export function formatFrequency(value) {
if (value == null || value === '') return null;
const numeric = numberOrNull(value);
if (numeric == null) {
return stringOrNull(value);
}
const abs = Math.abs(numeric);
// Heuristic unit inference: readings at or above 1e6 are treated as Hz and
// those at or above 1e3 as kHz; smaller values are assumed to already be MHz.
if (abs >= 1_000_000) {
return `${(numeric / 1_000_000).toFixed(3)} MHz`;
}
if (abs >= 1_000) {
return `${(numeric / 1_000).toFixed(3)} MHz`;
}
return `${numeric.toFixed(3)} MHz`;
}
/**
* Format a battery reading as a percentage with one decimal place.
*
* @param {*} value Raw battery value.
* @returns {string|null} Formatted percentage or ``null``.
*/
export function formatBattery(value) {
const numeric = numberOrNull(value);
if (numeric == null) return null;
return `${numeric.toFixed(1)}%`;
}
/**
* Format a voltage reading with two decimal places.
*
* @param {*} value Raw voltage value.
* @returns {string|null} Formatted voltage string or ``null``.
*/
export function formatVoltage(value) {
const numeric = numberOrNull(value);
if (numeric == null) return null;
return `${numeric.toFixed(2)} V`;
}
/**
* Convert an uptime reading in seconds to a concise human-readable string.
*
* @param {*} value Raw uptime value.
* @returns {string|null} Formatted uptime string or ``null`` when invalid.
*/
export function formatUptime(value) {
const numeric = numberOrNull(value);
if (numeric == null) return null;
const seconds = Math.floor(numeric);
const parts = [];
const days = Math.floor(seconds / 86_400);
if (days > 0) parts.push(`${days}d`);
const hours = Math.floor((seconds % 86_400) / 3_600);
if (hours > 0) parts.push(`${hours}h`);
const minutes = Math.floor((seconds % 3_600) / 60);
if (minutes > 0) parts.push(`${minutes}m`);
const remainSeconds = seconds % 60;
if (parts.length === 0 || remainSeconds > 0) {
parts.push(`${remainSeconds}s`);
}
return parts.join(' ');
}
/**
* Format a timestamp for the message log as ``YYYY-MM-DD HH:MM`` in the
* local time zone.
*
* @param {*} value Seconds since the epoch.
* @param {string|null} [isoFallback] ISO timestamp to prefer when available.
* @returns {string|null} Formatted timestamp string or ``null``.
*/
export function formatMessageTimestamp(value, isoFallback = null) {
const iso = stringOrNull(isoFallback);
let date = null;
if (iso) {
const candidate = new Date(iso);
if (!Number.isNaN(candidate.getTime())) {
date = candidate;
}
}
if (!date) {
const numeric = numberOrNull(value);
if (numeric == null) return null;
const candidate = new Date(numeric * 1000);
if (Number.isNaN(candidate.getTime())) {
return null;
}
date = candidate;
}
const year = date.getFullYear();
const month = padTwo(date.getMonth() + 1);
const day = padTwo(date.getDate());
const hours = padTwo(date.getHours());
const minutes = padTwo(date.getMinutes());
return `${year}-${month}-${day} ${hours}:${minutes}`;
}
/**
* Format a hardware model string while hiding unset placeholders.
*
* Firmware uses the literal string ``"UNSET"`` for nodes that have not
* reported a hardware model; this helper suppresses that value so the UI
* displays an empty cell instead.
*
* @param {*} value Raw hardware model value.
* @returns {string} Sanitised hardware model string, or empty string.
*/
export function formatHardwareModel(value) {
const text = stringOrNull(value);
if (!text || text.toUpperCase() === 'UNSET') {
return '';
}
return text;
}
/**
* Format a geographic coordinate with consistent decimal precision.
*
* @param {*} value Raw coordinate value.
* @param {number} [precision=5] Number of decimal places.
* @returns {string} Formatted coordinate string, or empty string when invalid.
*/
export function formatCoordinate(value, precision = 5) {
const numeric = numberOrNull(value);
if (numeric == null) return '';
return numeric.toFixed(precision);
}
/**
* Convert an absolute UNIX timestamp into a relative time description.
*
* Returns strings such as ``"42s"``, ``"3m 15s"``, ``"2h 5m"``, ``"1d 3h"``.
*
* @param {*} value Raw timestamp expressed in seconds since the epoch.
* @param {number} [referenceSeconds] Optional reference timestamp in seconds.
* Defaults to ``Date.now() / 1000``.
* @returns {string} Relative time string or empty string when unavailable.
*/
export function formatRelativeSeconds(value, referenceSeconds = Date.now() / 1000) {
const numeric = numberOrNull(value);
if (numeric == null) return '';
const reference = numberOrNull(referenceSeconds);
const base = reference != null ? reference : Date.now() / 1000;
const diff = Math.floor(base - numeric);
const safeDiff = Number.isFinite(diff) ? Math.max(diff, 0) : 0;
if (safeDiff < 60) return `${safeDiff}s`;
if (safeDiff < 3_600) {
const minutes = Math.floor(safeDiff / 60);
const seconds = safeDiff % 60;
return seconds > 0 ? `${minutes}m ${seconds}s` : `${minutes}m`;
}
if (safeDiff < 86_400) {
const hours = Math.floor(safeDiff / 3_600);
const minutes = Math.floor((safeDiff % 3_600) / 60);
return minutes > 0 ? `${hours}h ${minutes}m` : `${hours}h`;
}
const days = Math.floor(safeDiff / 86_400);
const hours = Math.floor((safeDiff % 86_400) / 3_600);
return hours > 0 ? `${days}d ${hours}h` : `${days}d`;
}
/**
* Format a duration expressed in seconds using a compact human-readable form.
*
* @param {*} value Raw duration in seconds.
* @returns {string} Human-readable duration string or empty string.
*/
export function formatDurationSeconds(value) {
const numeric = numberOrNull(value);
if (numeric == null) return '';
const duration = Math.max(Math.floor(numeric), 0);
if (duration < 60) return `${duration}s`;
if (duration < 3_600) {
const minutes = Math.floor(duration / 60);
const seconds = duration % 60;
return seconds > 0 ? `${minutes}m ${seconds}s` : `${minutes}m`;
}
if (duration < 86_400) {
const hours = Math.floor(duration / 3_600);
const minutes = Math.floor((duration % 3_600) / 60);
return minutes > 0 ? `${hours}h ${minutes}m` : `${hours}h`;
}
const days = Math.floor(duration / 86_400);
const hours = Math.floor((duration % 86_400) / 3_600);
return hours > 0 ? `${days}d ${hours}h` : `${days}d`;
}
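As a self-contained illustration of the tiered `s`/`m`/`h`/`d` output these formatters share, here is `formatDurationSeconds` restated with `numberOrNull` reduced to a plain finite-number check (a simplification: the real helper also rejects non-numeric strings differently):

```javascript
// Standalone restatement of formatDurationSeconds: each tier shows at most
// two units (e.g. "1d 3h"), dropping the smaller unit when it is zero.
function formatDurationSeconds(value) {
  const numeric = Number(value);
  if (!Number.isFinite(numeric)) return '';
  const duration = Math.max(Math.floor(numeric), 0);
  if (duration < 60) return `${duration}s`;
  if (duration < 3_600) {
    const minutes = Math.floor(duration / 60);
    const seconds = duration % 60;
    return seconds > 0 ? `${minutes}m ${seconds}s` : `${minutes}m`;
  }
  if (duration < 86_400) {
    const hours = Math.floor(duration / 3_600);
    const minutes = Math.floor((duration % 3_600) / 60);
    return minutes > 0 ? `${hours}h ${minutes}m` : `${hours}h`;
  }
  const days = Math.floor(duration / 86_400);
  const hours = Math.floor((duration % 86_400) / 3_600);
  return hours > 0 ? `${days}d ${hours}h` : `${days}d`;
}
```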
@@ -0,0 +1,252 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Pure formatting helpers used by chart axis renderers and short-info panels.
*
* @module node-page-charts/format-utils
*/
import { numberOrNull, stringOrNull } from '../value-helpers.js';
/**
* Clamp a numeric value between ``min`` and ``max``.
*
* @param {number} value Value to clamp.
* @param {number} min Minimum bound.
* @param {number} max Maximum bound.
* @returns {number} Clamped numeric value.
*/
export function clamp(value, min, max) {
if (!Number.isFinite(value)) return min;
if (value < min) return min;
if (value > max) return max;
return value;
}
/**
* Convert a hex colour string into an ``rgba()`` CSS value.
*
* Supports both 3- and 6-character hex forms, with or without a leading
* ``#`` prefix. Falls back to black (at the requested alpha) on invalid
* input.
*
* @param {string} hex Hex colour string.
* @param {number} [alpha=1] Alpha component in the range [0, 1].
* @returns {string} RGBA CSS colour string.
*/
export function hexToRgba(hex, alpha = 1) {
const normalised = stringOrNull(hex)?.replace(/^#/, '') ?? '';
if (!(normalised.length === 6 || normalised.length === 3)) {
return `rgba(0, 0, 0, ${alpha})`;
}
// Expand shorthand 3-char form to 6 characters.
const expanded = normalised.length === 3
? normalised.split('').map(piece => piece + piece).join('')
: normalised;
const toComponent = (start, end) => parseInt(expanded.slice(start, end), 16);
const r = toComponent(0, 2);
const g = toComponent(2, 4);
const b = toComponent(4, 6);
return `rgba(${r}, ${g}, ${b}, ${alpha})`;
}
/**
* Pad a numeric value to two digits with a leading zero.
*
* Truncates towards zero and works on negative values by taking the absolute.
*
* @param {number} value Numeric value to pad.
* @returns {string} Padded two-character string.
*/
export function padTwo(value) {
return String(Math.trunc(Math.abs(Number(value)))).padStart(2, '0');
}
/**
* Format a timestamp as a zero-padded day-of-month string (local time zone).
*
* Used as the default tick label formatter on the X axis.
*
* @param {number} timestampMs Timestamp expressed in milliseconds.
* @returns {string} Two-digit day string, or empty string when invalid.
*/
export function formatCompactDate(timestampMs) {
const date = new Date(timestampMs);
if (Number.isNaN(date.getTime())) return '';
const day = padTwo(date.getDate());
return day;
}
/**
* Format a gas resistance reading using sensible SI prefixes with the Ω symbol.
*
* @param {number} value Resistance value in Ohms.
* @returns {string} Formatted resistance string, or empty string when invalid.
*/
export function formatGasResistance(value) {
const numeric = numberOrNull(value);
if (numeric == null) return '';
const absValue = Math.abs(numeric);
if (absValue >= 1_000_000) {
return `${(numeric / 1_000_000).toFixed(2)} MΩ`;
}
if (absValue >= 1_000) {
return `${(numeric / 1_000).toFixed(2)} kΩ`;
}
if (absValue >= 100) {
return `${numeric.toFixed(1)} Ω`;
}
return `${numeric.toFixed(0)} Ω`;
}
/**
* Format a data-point value for tooltip display using the series formatter.
*
* Falls back to a plain ``toString()`` when no ``valueFormatter`` is defined.
*
* @param {Object} seriesConfig Series configuration object.
* @param {number} value Numeric data-point value.
* @returns {string} Formatted value string.
*/
export function formatSeriesPointValue(seriesConfig, value) {
const numeric = numberOrNull(value);
if (numeric == null) return '';
if (typeof seriesConfig.valueFormatter === 'function') {
return seriesConfig.valueFormatter(numeric);
}
return numeric.toString();
}
/**
* Format a numeric UNIX timestamp (seconds) as an ISO 8601 string.
*
* When an ISO fallback string is supplied it is returned verbatim.
*
* @param {*} value Raw timestamp value (seconds since the epoch).
* @param {string|null} [isoFallback] ISO-formatted string to prefer.
* @returns {string|null} ISO timestamp string or ``null``.
*/
export function formatTimestamp(value, isoFallback = null) {
const iso = stringOrNull(isoFallback);
if (iso) return iso;
const numeric = numberOrNull(value);
if (numeric == null) return null;
try {
return new Date(numeric * 1000).toISOString();
} catch (error) {
return null;
}
}
/**
* Format an SNR reading with a decibel suffix.
*
* @param {*} value Raw SNR value.
* @returns {string} Formatted SNR string or empty string.
*/
export function formatSnr(value) {
const numeric = numberOrNull(value);
if (numeric == null) return '';
return `${numeric.toFixed(1)} dB`;
}
/**
* Convert a timestamp that may be expressed in seconds or milliseconds into
* milliseconds.
*
* Values greater than 1 trillion are assumed to already be in milliseconds;
* smaller values are multiplied by 1 000.
*
* @param {*} value Candidate timestamp.
* @returns {number|null} Timestamp in milliseconds or ``null``.
*/
export function toTimestampMs(value) {
const numeric = numberOrNull(value);
if (numeric == null) return null;
// Values above 1e12 are already in milliseconds (post-2001 epoch ms timestamps).
if (numeric > 1_000_000_000_000) {
return numeric;
}
return numeric * 1000;
}
/**
* Resolve the canonical telemetry timestamp for a snapshot record.
*
* Checks ISO string fields first, then falls back to numeric candidates,
* handling both snake_case and camelCase field names from the API.
*
* @param {*} snapshot Telemetry snapshot payload.
* @returns {number|null} Timestamp in milliseconds or ``null``.
*/
export function resolveSnapshotTimestamp(snapshot) {
if (!snapshot || typeof snapshot !== 'object') {
return null;
}
const isoCandidate = stringOrNull(
snapshot.rx_iso
?? snapshot.rxIso
?? snapshot.telemetry_time_iso
?? snapshot.telemetryTimeIso
?? snapshot.timestampIso,
);
if (isoCandidate) {
const parsed = new Date(isoCandidate);
if (!Number.isNaN(parsed.getTime())) {
return parsed.getTime();
}
}
const numericCandidates = [
snapshot.rx_time,
snapshot.rxTime,
snapshot.telemetry_time,
snapshot.telemetryTime,
snapshot.timestamp,
snapshot.ts,
];
for (const candidate of numericCandidates) {
const ts = toTimestampMs(candidate);
if (ts != null) {
return ts;
}
}
return null;
}
/**
* Format a tick label using compact units for better chart readability.
*
* Uses ``k``-suffix notation for large logarithmic values; one decimal
* place when the axis range is narrow (≤ 10); integer otherwise.
*
* @param {number} value Tick value.
* @param {Object} axis Axis descriptor containing ``scale``, ``min``, and ``max``.
* @returns {string} Formatted label string.
*/
export function formatAxisTick(value, axis) {
if (!Number.isFinite(value)) return '';
if (axis.scale === 'log') {
if (value >= 1000) {
return `${Math.round(value / 1000)}k`;
}
return `${Math.round(value)}`;
}
if (Math.abs(axis.max - axis.min) <= 10) {
return value.toFixed(1);
}
return Math.round(value).toString();
}
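Two of the helpers above can be restated in a self-contained form to show their heuristics, again with `numberOrNull` reduced to a finite-number check (so, unlike the real helpers, `null` would coerce to `0` here — a deliberate simplification):

```javascript
// Seconds-vs-milliseconds heuristic: 1e12 ms is roughly September 2001, so
// any plausible epoch-ms timestamp exceeds it while epoch-seconds do not.
function toTimestampMs(value) {
  const numeric = Number(value);
  if (!Number.isFinite(numeric)) return null;
  return numeric > 1_000_000_000_000 ? numeric : numeric * 1000;
}

// Tick labels: 'k' suffix on log axes, one decimal on narrow linear ranges.
function formatAxisTick(value, axis) {
  if (!Number.isFinite(value)) return '';
  if (axis.scale === 'log') {
    return value >= 1000 ? `${Math.round(value / 1000)}k` : `${Math.round(value)}`;
  }
  if (Math.abs(axis.max - axis.min) <= 10) return value.toFixed(1);
  return Math.round(value).toString();
}
```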
@@ -0,0 +1,140 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Chart layout helpers: dimensions, axis positioning, and value scaling.
*
* @module node-page-charts/layout
*/
import { DEFAULT_CHART_DIMENSIONS, DEFAULT_CHART_MARGIN } from './constants.js';
import { clamp } from './format-utils.js';
/**
* Compute the layout metrics for the supplied chart specification.
*
* Automatically widens the left/right margins when the spec requests
* secondary axes.
*
* @param {Object} spec Chart specification (must include an ``axes`` array).
* @returns {{
* width: number,
* height: number,
* margin: {top: number, right: number, bottom: number, left: number},
* innerWidth: number,
* innerHeight: number,
* chartTop: number,
* chartBottom: number,
* }} Computed chart dimensions.
*/
export function createChartDimensions(spec) {
const margin = { ...DEFAULT_CHART_MARGIN };
// Widen the left margin when a secondary left axis is present.
if (spec.axes.some(axis => axis.position === 'leftSecondary')) {
margin.left += 36;
}
// Widen the right margin when a secondary right axis is present.
if (spec.axes.some(axis => axis.position === 'rightSecondary')) {
margin.right += 40;
}
const width = DEFAULT_CHART_DIMENSIONS.width;
const height = DEFAULT_CHART_DIMENSIONS.height;
const innerWidth = Math.max(1, width - margin.left - margin.right);
const innerHeight = Math.max(1, height - margin.top - margin.bottom);
return {
width,
height,
margin,
innerWidth,
innerHeight,
chartTop: margin.top,
chartBottom: height - margin.bottom,
};
}
/**
* Compute the horizontal drawing position for an axis descriptor.
*
* Maps position keywords to their SVG X coordinates relative to the chart
* viewport.
*
* @param {string} position Axis position keyword.
* @param {Object} dims Chart dimensions returned by {@link createChartDimensions}.
* @returns {number} X coordinate for the axis baseline.
*/
export function resolveAxisX(position, dims) {
switch (position) {
case 'leftSecondary':
return dims.margin.left - 32;
case 'right':
return dims.width - dims.margin.right;
case 'rightSecondary':
return dims.width - dims.margin.right + 32;
case 'left':
default:
return dims.margin.left;
}
}
/**
* Compute the X coordinate for a timestamp constrained to the rolling window.
*
* Linear interpolation between ``domainStart`` and ``domainEnd``, clamped so
* points never fall outside the chart frame.
*
* @param {number} timestamp Timestamp in milliseconds.
* @param {number} domainStart Start of the window in milliseconds.
* @param {number} domainEnd End of the window in milliseconds.
* @param {Object} dims Chart dimensions.
* @returns {number} X coordinate inside the SVG viewport.
*/
export function scaleTimestamp(timestamp, domainStart, domainEnd, dims) {
const safeStart = Math.min(domainStart, domainEnd);
const safeEnd = Math.max(domainStart, domainEnd);
const span = Math.max(1, safeEnd - safeStart);
const clamped = clamp(timestamp, safeStart, safeEnd);
const ratio = (clamped - safeStart) / span;
return dims.margin.left + ratio * dims.innerWidth;
}
/**
* Convert a value bound to a specific axis into a Y coordinate.
*
* Supports both linear and logarithmic (``scale: 'log'``) axes.
*
* @param {number} value Series value.
* @param {Object} axis Axis descriptor.
* @param {Object} dims Chart dimensions.
* @returns {number} Y coordinate (higher values map to lower Y numbers).
*/
export function scaleValueToAxis(value, axis, dims) {
if (!axis) return dims.chartBottom;
if (axis.scale === 'log') {
// Logarithmic scale: map log10(value) linearly between log10(min) and
// log10(max) so each order of magnitude occupies the same pixel height.
const minLog = Math.log10(axis.min);
const maxLog = Math.log10(axis.max);
const safe = clamp(value, axis.min, axis.max);
const ratio = (Math.log10(safe) - minLog) / (maxLog - minLog);
return dims.chartBottom - ratio * dims.innerHeight;
}
// Linear scale: ratio grows from 0 at axis.min to 1 at axis.max.
// Subtracting from chartBottom inverts the Y axis so higher values appear
// nearer the top of the SVG viewport (lower Y coordinate).
const safe = clamp(value, axis.min, axis.max);
const ratio = (safe - axis.min) / (axis.max - axis.min || 1);
return dims.chartBottom - ratio * dims.innerHeight;
}
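A quick worked example of the linear branch, using the default 660×360 viewport and margins restated inline from `constants.js` (with `clamp` reimplemented so the block stands alone): the axis minimum lands on `chartBottom`, the maximum on `chartTop`, and the midpoint halfway between.

```javascript
// Default dimensions from constants.js, restated inline.
const dims = {
  width: 660,
  height: 360,
  margin: { top: 28, right: 80, bottom: 64, left: 80 },
  innerWidth: 660 - 80 - 80,   // 500
  innerHeight: 360 - 28 - 64,  // 268
  chartTop: 28,
  chartBottom: 360 - 64,       // 296
};

const clamp = (v, min, max) =>
  !Number.isFinite(v) ? min : Math.min(Math.max(v, min), max);

// Linear branch of scaleValueToAxis: ratio 0 at axis.min maps to chartBottom,
// ratio 1 at axis.max maps to chartTop (SVG Y grows downwards).
function scaleValueToAxis(value, axis, dims) {
  const safe = clamp(value, axis.min, axis.max);
  const ratio = (safe - axis.min) / (axis.max - axis.min || 1);
  return dims.chartBottom - ratio * dims.innerHeight;
}
```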
@@ -0,0 +1,190 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Telemetry-snapshot data extraction helpers used to derive series points.
*
* @module node-page-charts/snapshot-data
*/
import { numberOrNull, stringOrNull } from '../value-helpers.js';
/**
* Collect candidate containers that may hold telemetry values for a snapshot.
*
* Handles both flat telemetry rows and nested ``device_metrics`` /
* ``environment_metrics`` sub-objects so that value extraction works
* regardless of the API response shape.
*
* @param {Object} snapshot Telemetry snapshot payload.
* @returns {Array<Object>} Container objects to inspect for telemetry fields.
*/
export function collectSnapshotContainers(snapshot) {
const containers = [];
if (!snapshot || typeof snapshot !== 'object') {
return containers;
}
const seen = new Set();
const enqueue = value => {
if (!value || typeof value !== 'object') return;
if (seen.has(value)) return;
seen.add(value);
containers.push(value);
};
enqueue(snapshot);
// Top-level nested keys that carry metric sub-objects.
const directKeys = [
'device_metrics',
'deviceMetrics',
'environment_metrics',
'environmentMetrics',
'raw',
];
directKeys.forEach(key => {
if (Object.prototype.hasOwnProperty.call(snapshot, key)) {
enqueue(snapshot[key]);
}
});
// Also drill one level into `.raw` for double-nested API shapes.
if (snapshot.raw && typeof snapshot.raw === 'object') {
['device_metrics', 'deviceMetrics', 'environment_metrics', 'environmentMetrics'].forEach(key => {
if (Object.prototype.hasOwnProperty.call(snapshot.raw, key)) {
enqueue(snapshot.raw[key]);
}
});
}
return containers;
}
/**
* Infer the telemetry sub-type for a snapshot.
*
* Uses the stored ``telemetry_type`` field when available. Falls back to
* field-presence heuristics for rows that pre-date the discriminator column.
*
* @param {Object} snapshot Telemetry snapshot payload.
* @returns {string} One of ``'device'``, ``'environment'``, ``'power'``,
* ``'air_quality'``, or ``'unknown'``.
*/
export function classifySnapshot(snapshot) {
if (!snapshot || typeof snapshot !== 'object') return 'unknown';
const stored = stringOrNull(snapshot.telemetry_type);
if (stored) return stored;
// Heuristics for legacy rows — check both flat and nested shapes.
const hasBattery =
snapshot.battery_level != null ||
snapshot.channel_utilization != null ||
snapshot.air_util_tx != null ||
snapshot.uptime_seconds != null ||
snapshot.device_metrics?.battery_level != null ||
snapshot.deviceMetrics?.batteryLevel != null;
if (hasBattery) return 'device';
const hasEnv =
snapshot.temperature != null ||
snapshot.relative_humidity != null ||
snapshot.barometric_pressure != null ||
snapshot.environment_metrics?.temperature != null ||
snapshot.environmentMetrics?.temperature != null;
if (hasEnv) return 'environment';
// device_metrics also carries a `voltage` field (~4.2 V for battery), so a
// device row with `voltage` but none of the four battery-discriminator fields
// above would be misclassified as 'power'. This is consistent with the SQL
// backfill and is negligible in practice (firmware always sends at least
// battery_level or channel_utilization alongside voltage).
if (snapshot.current != null || snapshot.voltage != null) return 'power';
if (snapshot.iaq != null || snapshot.gas_resistance != null) return 'environment';
return 'unknown';
}
/**
* Extract the first numeric telemetry value matching one of the supplied
* field names from any candidate container in the snapshot.
*
* @param {*} snapshot Telemetry payload.
* @param {Array<string>} fields Candidate property names.
* @returns {number|null} Extracted numeric value or ``null``.
*/
export function extractSnapshotValue(snapshot, fields) {
if (!snapshot || typeof snapshot !== 'object' || !Array.isArray(fields)) {
return null;
}
const containers = collectSnapshotContainers(snapshot);
for (const container of containers) {
for (const field of fields) {
if (!Object.prototype.hasOwnProperty.call(container, field)) continue;
const numeric = numberOrNull(container[field]);
if (numeric != null) {
return numeric;
}
}
}
return null;
}
/**
* Build data points for a series constrained to the given time window.
*
* Entries outside ``[domainStart, domainEnd]`` are silently dropped.
*
* @param {Array<{timestamp: number, snapshot: Object}>} entries Telemetry entries.
* @param {Array<string>} fields Candidate metric names.
* @param {number} domainStart Window start in milliseconds.
* @param {number} domainEnd Window end in milliseconds.
* @returns {Array<{timestamp: number, value: number}>} Series points sorted by timestamp.
*/
export function buildSeriesPoints(entries, fields, domainStart, domainEnd) {
const points = [];
entries.forEach(entry => {
if (!entry || typeof entry !== 'object') return;
const value = extractSnapshotValue(entry.snapshot, fields);
if (value == null) return;
if (entry.timestamp < domainStart || entry.timestamp > domainEnd) {
return;
}
points.push({ timestamp: entry.timestamp, value });
});
points.sort((a, b) => a.timestamp - b.timestamp);
return points;
}
/**
* Resolve the effective axis maximum when upper-overflow is enabled.
*
* When ``axis.allowUpperOverflow`` is ``true`` and the observed data exceeds
* the declared maximum, the axis ceiling is raised to the observed peak.
*
* @param {Object} axis Axis descriptor.
* @param {Array<{axisId: string, points: Array<{timestamp: number, value: number}>}>} seriesEntries
* Series entries for the chart.
* @returns {number} Effective axis maximum.
*/
export function resolveAxisMax(axis, seriesEntries) {
if (!axis || axis.allowUpperOverflow !== true) {
return axis?.max;
}
let observedMax = null;
for (const entry of seriesEntries) {
if (!entry || entry.axisId !== axis.id || !Array.isArray(entry.points)) continue;
for (const point of entry.points) {
if (!point || !Number.isFinite(point.value)) continue;
observedMax = observedMax == null ? point.value : Math.max(observedMax, point.value);
}
}
if (observedMax != null && Number.isFinite(axis.max) && observedMax > axis.max) {
return observedMax;
}
return axis.max;
}
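The overflow rule can be exercised in isolation. This sketch mirrors the logic above: the ceiling only moves when overflow is opted in and the observed peak actually exceeds the declared maximum.

```javascript
// Self-contained mirror of resolveAxisMax for illustration.
function sketchResolveAxisMax(axis, seriesEntries) {
  if (!axis || axis.allowUpperOverflow !== true) return axis?.max;
  let observedMax = null;
  for (const entry of seriesEntries) {
    if (!entry || entry.axisId !== axis.id || !Array.isArray(entry.points)) continue;
    for (const point of entry.points) {
      if (!point || !Number.isFinite(point.value)) continue;
      observedMax = observedMax == null ? point.value : Math.max(observedMax, point.value);
    }
  }
  return observedMax != null && Number.isFinite(axis.max) && observedMax > axis.max ? observedMax : axis.max;
}

const axis = { id: 'voltage', max: 6, allowUpperOverflow: true };
sketchResolveAxisMax(axis, [{ axisId: 'voltage', points: [{ value: 7.2 }] }]); // 7.2
sketchResolveAxisMax({ id: 'voltage', max: 6 }, [{ axisId: 'voltage', points: [{ value: 7.2 }] }]); // 6 (overflow not enabled)
```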
@@ -0,0 +1,293 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Telemetry chart specifications driving each node-detail chart.
*
* @module node-page-charts/specs
*/
import { numberOrNull } from '../value-helpers.js';
import { formatGasResistance } from './format-utils.js';
/**
* Format an electrical current reading using ``mA`` for sub-amp magnitudes
* and ``A`` otherwise. Inlined here so the chart specs do not depend on
* the unrelated ``short-info-telemetry`` module.
*
* @param {*} value Raw current value.
* @returns {string} Formatted current string, or empty string when invalid.
*/
function fmtCurrent(value) {
const numeric = numberOrNull(value);
if (numeric == null) return '';
if (Math.abs(numeric) < 1) {
return `${(numeric * 1000).toFixed(1)} mA`;
}
return `${numeric.toFixed(2)} A`;
}
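The sub-amp threshold in action, with `numberOrNull` replaced by a plain finite-number check so the sketch runs standalone:

```javascript
// numberOrNull swapped for a simple typeof/isFinite guard; the real helper
// may also coerce numeric strings.
function sketchFmtCurrent(value) {
  const numeric = typeof value === 'number' && Number.isFinite(value) ? value : null;
  if (numeric == null) return '';
  if (Math.abs(numeric) < 1) return `${(numeric * 1000).toFixed(1)} mA`;
  return `${numeric.toFixed(2)} A`;
}

sketchFmtCurrent(0.25); // "250.0 mA"
sketchFmtCurrent(1.5);  // "1.50 A"
```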
/**
* Telemetry chart definitions describing axes and series metadata.
*
* Each entry drives a separate {@link renderTelemetryChart} call inside
* {@link module:node-page}.renderTelemetryCharts.
*
* @type {ReadonlyArray<Object>}
*/
export const TELEMETRY_CHART_SPECS = Object.freeze([
{
id: 'device-health',
title: 'Device health',
typeFilter: ['device', 'unknown'],
axes: [
{
id: 'battery',
position: 'left',
label: 'Battery (%)',
min: 0,
max: 100,
ticks: 4,
color: '#8856a7',
},
{
id: 'voltage',
position: 'right',
label: 'Voltage (V)',
min: 0,
max: 6,
ticks: 3,
color: '#9ebcda',
allowUpperOverflow: true,
},
],
series: [
{
id: 'battery',
axis: 'battery',
color: '#8856a7',
label: 'Battery level',
legend: 'Battery (%)',
fields: ['battery', 'battery_level', 'batteryLevel'],
valueFormatter: value => `${value.toFixed(1)}%`,
},
{
id: 'voltage',
axis: 'voltage',
color: '#9ebcda',
label: 'Voltage',
legend: 'Voltage (V)',
fields: ['voltage', 'voltageReading'],
valueFormatter: value => `${value.toFixed(2)} V`,
},
],
},
{
id: 'power-sensor',
title: 'Power sensor',
typeFilter: ['power'],
axes: [
{
id: 'voltage',
position: 'left',
label: 'Voltage (V)',
min: 0,
max: 6,
ticks: 3,
color: '#9ebcda',
allowUpperOverflow: true,
},
{
id: 'current',
position: 'right',
label: 'Current (A)',
min: 0,
max: 3,
ticks: 3,
color: '#3182bd',
allowUpperOverflow: true,
},
],
series: [
{
id: 'voltage',
axis: 'voltage',
color: '#9ebcda',
label: 'Voltage',
legend: 'Voltage (V)',
fields: ['voltage', 'voltageReading'],
valueFormatter: value => `${value.toFixed(2)} V`,
},
{
id: 'current',
axis: 'current',
color: '#3182bd',
label: 'Current',
legend: 'Current (A)',
fields: ['current'],
valueFormatter: value => fmtCurrent(value),
},
],
},
{
id: 'channel',
title: 'Channel utilization',
typeFilter: ['device', 'unknown'],
axes: [
{
id: 'channel',
position: 'left',
label: 'Utilization (%)',
min: 0,
max: 100,
ticks: 4,
color: '#2ca25f',
},
],
series: [
{
id: 'channel',
axis: 'channel',
color: '#2ca25f',
label: 'Channel util',
legend: 'Channel utilization (%)',
fields: ['channel_utilization', 'channelUtilization'],
valueFormatter: value => `${value.toFixed(1)}%`,
},
{
id: 'air',
axis: 'channel',
color: '#99d8c9',
label: 'Air util tx',
legend: 'Air util TX (%)',
fields: ['airUtil', 'air_util_tx', 'airUtilTx'],
valueFormatter: value => `${value.toFixed(1)}%`,
},
],
},
{
id: 'environment',
title: 'Environmental telemetry',
typeFilter: ['environment'],
axes: [
{
id: 'temperature',
position: 'left',
label: 'Temperature (°C)',
min: -20,
max: 40,
ticks: 4,
color: '#fc8d59',
allowUpperOverflow: true,
},
{
id: 'humidity',
position: 'left',
label: 'Humidity (%)',
min: 0,
max: 100,
ticks: 4,
color: '#91bfdb',
visible: false,
},
],
series: [
{
id: 'temperature',
axis: 'temperature',
color: '#fc8d59',
label: 'Temperature',
legend: 'Temperature (°C)',
fields: ['temperature', 'temp'],
valueFormatter: value => `${value.toFixed(1)}°C`,
},
{
id: 'humidity',
axis: 'humidity',
color: '#91bfdb',
label: 'Humidity',
legend: 'Humidity (%)',
fields: ['humidity', 'relative_humidity', 'relativeHumidity'],
valueFormatter: value => `${value.toFixed(1)}%`,
},
],
},
{
id: 'airQuality',
title: 'Air quality',
typeFilter: ['environment', 'air_quality'],
axes: [
{
id: 'pressure',
position: 'left',
label: 'Pressure (hPa)',
min: 800,
max: 1_100,
ticks: 4,
color: '#c51b8a',
},
{
id: 'gas',
position: 'right',
label: 'Gas resistance (Ω)',
min: 10,
max: 100_000,
ticks: 5,
color: '#fa9fb5',
scale: 'log',
},
{
id: 'iaq',
position: 'rightSecondary',
label: 'IAQ index',
min: 0,
max: 500,
ticks: 5,
color: '#636363',
allowUpperOverflow: true,
},
],
series: [
{
id: 'pressure',
axis: 'pressure',
color: '#c51b8a',
label: 'Pressure',
legend: 'Pressure (hPa)',
fields: ['pressure', 'barometric_pressure', 'barometricPressure'],
valueFormatter: value => `${value.toFixed(1)} hPa`,
},
{
id: 'gas',
axis: 'gas',
color: '#fa9fb5',
label: 'Gas resistance',
legend: 'Gas resistance (Ω)',
fields: ['gas_resistance', 'gasResistance'],
valueFormatter: value => formatGasResistance(value),
},
{
id: 'iaq',
axis: 'iaq',
color: '#636363',
label: 'IAQ',
legend: 'IAQ index',
fields: ['iaq'],
valueFormatter: value => value.toFixed(0),
},
],
},
]);
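How a spec's `typeFilter` interacts with classification, using a trimmed-down classifier (the real `classifySnapshot` checks more fields than shown here):

```javascript
// Trimmed-down classifier matching the power/environment/unknown tail of
// classifySnapshot; the full function inspects additional device fields.
function sketchClassify(snapshot) {
  if (snapshot.current != null || snapshot.voltage != null) return 'power';
  if (snapshot.iaq != null || snapshot.gas_resistance != null) return 'environment';
  return 'unknown';
}

const typeFilter = ['power']; // e.g. the power-sensor spec
const entries = [
  { snapshot: { voltage: 4.1 } }, // → 'power', kept
  { snapshot: { iaq: 51 } },      // → 'environment', filtered out
];
const selected = entries.filter(e => typeFilter.includes(sketchClassify(e.snapshot)));
// selected contains only the voltage entry
```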
@@ -0,0 +1,263 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* SVG renderers for telemetry series, axes, and full chart figures.
*
* @module node-page-charts/svg-renderers
*/
import { stringOrNull } from '../value-helpers.js';
import { escapeHtml } from '../utils.js';
import { TELEMETRY_WINDOW_MS } from './constants.js';
import {
formatAxisTick,
formatCompactDate,
formatSeriesPointValue,
hexToRgba,
} from './format-utils.js';
import { createChartDimensions, resolveAxisX, scaleTimestamp, scaleValueToAxis } from './layout.js';
import { buildLinearTicks, buildLogTicks, buildMidnightTicks } from './tick-builders.js';
import {
buildSeriesPoints,
classifySnapshot,
resolveAxisMax,
} from './snapshot-data.js';
/**
* Render a telemetry series as SVG circles with an optional translucent
* guide line.
*
* An optional ``lineReducer`` can be supplied to down-sample the point set
* used for the path (the full set is always used for circles).
*
* @param {Object} seriesConfig Series metadata.
* @param {Array<{timestamp: number, value: number}>} points Series data points.
* @param {Object} axis Axis descriptor.
* @param {Object} dims Chart dimensions.
* @param {number} domainStart Window start timestamp.
* @param {number} domainEnd Window end timestamp.
* @param {{ lineReducer?: Function }} [options] Optional rendering overrides.
* @returns {string} SVG markup for the series.
*/
export function renderTelemetrySeries(seriesConfig, points, axis, dims, domainStart, domainEnd, { lineReducer } = {}) {
if (!Array.isArray(points) || points.length === 0) {
return '';
}
const convertPoint = point => {
const cx = scaleTimestamp(point.timestamp, domainStart, domainEnd, dims);
const cy = scaleValueToAxis(point.value, axis, dims);
return { cx, cy, value: point.value };
};
// Build circle elements — one per data point.
const circleEntries = points.map(point => {
const coords = convertPoint(point);
const tooltip = formatSeriesPointValue(seriesConfig, point.value);
const titleMarkup = tooltip ? `<title>${escapeHtml(tooltip)}</title>` : '';
return `<circle class="node-detail__chart-point" cx="${coords.cx.toFixed(2)}" cy="${coords.cy.toFixed(2)}" r="3.2" fill="${seriesConfig.color}" aria-hidden="true">${titleMarkup}</circle>`;
});
// Allow a custom reducer to thin the line path (e.g. LTTB).
const lineSource = typeof lineReducer === 'function' ? lineReducer(points) : points;
const linePoints = Array.isArray(lineSource) && lineSource.length > 0 ? lineSource : points;
const coordinates = linePoints.map(convertPoint);
let line = '';
if (coordinates.length > 1) {
// Build a straight-line interpolation between consecutive data points.
// Circles render at full opacity while the trend line is drawn at 50%
// opacity, so individual readings stay visually dominant over the guide.
const path = coordinates
.map((coord, idx) => `${idx === 0 ? 'M' : 'L'}${coord.cx.toFixed(2)} ${coord.cy.toFixed(2)}`)
.join(' ');
line = `<path class="node-detail__chart-trend" d="${path}" fill="none" stroke="${hexToRgba(seriesConfig.color, 0.5)}" stroke-width="1.5" aria-hidden="true"></path>`;
}
// Render the path before the circles so circles sit on top of the line.
return `${line}${circleEntries.join('')}`;
}
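A `lineReducer` only has to map a points array to a (usually smaller) points array. A minimal stride reducer that keeps every Nth point plus the last serves as an illustration; any down-sampler with the same signature (e.g. LTTB) would slot in the same way:

```javascript
// Illustrative reducer only: thins the guide line to every Nth point while
// renderTelemetrySeries still draws a circle for every reading.
function strideReducer(stride) {
  return points => points.filter((_, idx) => idx % stride === 0 || idx === points.length - 1);
}

const reduced = strideReducer(2)([
  { timestamp: 1, value: 10 },
  { timestamp: 2, value: 11 },
  { timestamp: 3, value: 12 },
  { timestamp: 4, value: 13 },
]);
// keeps indices 0, 2, and the last point (index 3)
```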
/**
* Render a vertical axis with tick marks and a rotated axis label.
*
* Returns an empty string when ``axis.visible === false``.
*
* @param {Object} axis Axis descriptor.
* @param {Object} dims Chart dimensions.
* @returns {string} SVG markup for the Y axis, or empty string.
*/
export function renderYAxis(axis, dims) {
if (!axis || axis.visible === false) {
return '';
}
const x = resolveAxisX(axis.position, dims);
const ticks = axis.scale === 'log'
? buildLogTicks(axis.min, axis.max)
: buildLinearTicks(axis.min, axis.max, axis.ticks);
const isLeftSide = axis.position === 'left' || axis.position === 'leftSecondary';
const tickElements = ticks
.map(value => {
const y = scaleValueToAxis(value, axis, dims);
const tickLength = isLeftSide ? -4 : 4;
const textAnchor = isLeftSide ? 'end' : 'start';
const textOffset = isLeftSide ? -6 : 6;
return `
<g class="node-detail__chart-tick" aria-hidden="true">
<line x1="${x}" y1="${y.toFixed(2)}" x2="${(x + tickLength).toFixed(2)}" y2="${y.toFixed(2)}"></line>
<text x="${(x + textOffset).toFixed(2)}" y="${(y + 3).toFixed(2)}" text-anchor="${textAnchor}" dominant-baseline="middle">${escapeHtml(formatAxisTick(value, axis))}</text>
</g>
`;
})
.join('');
const labelPadding = isLeftSide ? -56 : 56;
const labelX = x + labelPadding;
const labelY = (dims.chartTop + dims.chartBottom) / 2;
const labelTransform = `rotate(-90 ${labelX.toFixed(2)} ${labelY.toFixed(2)})`;
return `
<g class="node-detail__chart-axis node-detail__chart-axis--y" aria-hidden="true">
<line x1="${x}" y1="${dims.chartTop}" x2="${x}" y2="${dims.chartBottom}"></line>
${tickElements}
<text class="node-detail__chart-axis-label" x="${labelX.toFixed(2)}" y="${labelY.toFixed(2)}" text-anchor="middle" dominant-baseline="middle" transform="${labelTransform}">${escapeHtml(axis.label)}</text>
</g>
`;
}
/**
* Render the horizontal time axis with grid lines and date tick labels.
*
* @param {Object} dims Chart dimensions.
* @param {number} domainStart Window start timestamp in milliseconds.
* @param {number} domainEnd Window end timestamp in milliseconds.
* @param {Array<number>} tickTimestamps Tick timestamps to label.
* @param {{ labelFormatter?: Function }} [options] Optional tick label override.
* @returns {string} SVG markup for the X axis.
*/
export function renderXAxis(dims, domainStart, domainEnd, tickTimestamps, { labelFormatter = formatCompactDate } = {}) {
const y = dims.chartBottom;
const ticks = tickTimestamps
.map(ts => {
const x = scaleTimestamp(ts, domainStart, domainEnd, dims);
const labelY = y + 18;
const xStr = x.toFixed(2);
const yStr = labelY.toFixed(2);
const label = labelFormatter(ts);
return `
<g class="node-detail__chart-tick" aria-hidden="true">
<line class="node-detail__chart-grid-line" x1="${xStr}" y1="${dims.chartTop}" x2="${xStr}" y2="${dims.chartBottom}"></line>
<text x="${xStr}" y="${yStr}" text-anchor="end" dominant-baseline="central" transform="rotate(-90 ${xStr} ${yStr})">${escapeHtml(label)}</text>
</g>
`;
})
.join('');
return `
<g class="node-detail__chart-axis node-detail__chart-axis--x" aria-hidden="true">
<line x1="${dims.margin.left}" y1="${y}" x2="${dims.width - dims.margin.right}" y2="${y}"></line>
${ticks}
</g>
`;
}
/**
* Render a single telemetry chart defined by ``spec``.
*
* Returns an empty string when no series data falls within the time window.
* Supports an optional ``chartOptions`` bag for custom window sizes, tick
* builders, tick formatters, line reducers, and aggregation flags.
*
* @param {Object} spec Chart specification from {@link TELEMETRY_CHART_SPECS}.
* @param {Array<{timestamp: number, snapshot: Object}>} entries Telemetry entries.
* @param {number} nowMs Reference timestamp in milliseconds.
* @param {Object} [chartOptions] Optional rendering overrides.
* @returns {string} Rendered chart HTML/SVG markup or empty string.
*/
export function renderTelemetryChart(spec, entries, nowMs, chartOptions = {}) {
const windowMs = Number.isFinite(chartOptions.windowMs) && chartOptions.windowMs > 0 ? chartOptions.windowMs : TELEMETRY_WINDOW_MS;
const timeRangeLabel = stringOrNull(chartOptions.timeRangeLabel) ?? 'Last 7 days';
const domainEnd = nowMs;
const domainStart = nowMs - windowMs;
// When not in aggregated mode, filter entries by the chart's typeFilter.
const effectiveEntries = Array.isArray(spec.typeFilter) && !chartOptions.isAggregated
? entries.filter(e => spec.typeFilter.includes(classifySnapshot(e.snapshot)))
: entries;
const dims = createChartDimensions(spec);
const seriesEntries = spec.series
.map(series => {
const points = buildSeriesPoints(effectiveEntries, series.fields, domainStart, domainEnd);
if (points.length === 0) return null;
return { config: series, axisId: series.axis, points };
})
.filter(entry => entry != null);
if (seriesEntries.length === 0) {
return '';
}
// Apply allowUpperOverflow adjustments to each axis.
const adjustedAxes = spec.axes.map(axis => {
const resolvedMax = resolveAxisMax(axis, seriesEntries);
if (resolvedMax != null && resolvedMax !== axis.max) {
return { ...axis, max: resolvedMax };
}
return axis;
});
const axisMap = new Map(adjustedAxes.map(axis => [axis.id, axis]));
const plottedSeries = seriesEntries
.map(series => {
const axis = axisMap.get(series.axisId);
if (!axis) return null;
return { config: series.config, axis, points: series.points };
})
.filter(entry => entry != null);
if (plottedSeries.length === 0) {
return '';
}
const axesMarkup = adjustedAxes.map(axis => renderYAxis(axis, dims)).join('');
// Allow caller to supply a custom tick builder (e.g. hourly ticks for short windows).
const tickBuilder = typeof chartOptions.xAxisTickBuilder === 'function' ? chartOptions.xAxisTickBuilder : buildMidnightTicks;
const tickFormatter = typeof chartOptions.xAxisTickFormatter === 'function' ? chartOptions.xAxisTickFormatter : formatCompactDate;
const ticks = tickBuilder(nowMs, windowMs);
const xAxisMarkup = renderXAxis(dims, domainStart, domainEnd, ticks, { labelFormatter: tickFormatter });
const seriesMarkup = plottedSeries
.map(series =>
renderTelemetrySeries(series.config, series.points, series.axis, dims, domainStart, domainEnd, {
lineReducer: chartOptions.lineReducer,
}),
)
.join('');
const legendItems = plottedSeries
.map(series => {
const legendLabel = stringOrNull(series.config.legend) ?? series.config.label;
return `
<span class="node-detail__chart-legend-item">
<span class="node-detail__chart-legend-swatch" style="background:${series.config.color}"></span>
<span class="node-detail__chart-legend-text">${escapeHtml(legendLabel)}</span>
</span>
`;
})
.join('');
const legendMarkup = legendItems
? `<div class="node-detail__chart-legend" aria-hidden="true">${legendItems}</div>`
: '';
return `
<figure class="node-detail__chart">
<figcaption class="node-detail__chart-header">
<h4>${escapeHtml(spec.title)}</h4>
<span>${escapeHtml(timeRangeLabel)}</span>
</figcaption>
<svg viewBox="0 0 ${dims.width} ${dims.height}" preserveAspectRatio="xMidYMid meet" role="img" aria-label="${escapeHtml(`${spec.title} (${timeRangeLabel})`)}">
${axesMarkup}
${xAxisMarkup}
${seriesMarkup}
</svg>
${legendMarkup}
</figure>
`;
}
@@ -0,0 +1,109 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Tick generators for telemetry chart X and Y axes.
*
* @module node-page-charts/tick-builders
*/
import { DAY_MS, HOUR_MS, TELEMETRY_WINDOW_MS } from './constants.js';
/**
* Build midnight tick timestamps covering the rolling telemetry window.
*
* Walks backwards in one-day steps from the most recent local midnight
* before ``nowMs`` until the domain start is passed, then reverses the
* array for chronological order.
*
* @param {number} nowMs Reference timestamp in milliseconds.
* @param {number} [windowMs=TELEMETRY_WINDOW_MS] Window size in milliseconds.
* @returns {Array<number>} Midnight timestamps within the window.
*/
export function buildMidnightTicks(nowMs, windowMs = TELEMETRY_WINDOW_MS) {
const ticks = [];
const safeWindow = Number.isFinite(windowMs) && windowMs > 0 ? windowMs : TELEMETRY_WINDOW_MS;
const domainStart = nowMs - safeWindow;
const cursor = new Date(nowMs);
cursor.setHours(0, 0, 0, 0);
for (let ts = cursor.getTime(); ts >= domainStart; ts -= DAY_MS) {
ticks.push(ts);
}
return ticks.reverse();
}
/**
* Build hourly tick timestamps across the provided window.
*
* @param {number} nowMs Reference timestamp in milliseconds.
* @param {number} [windowMs=DAY_MS] Window size in milliseconds.
* @returns {Array<number>} Hourly tick timestamps in chronological order.
*/
export function buildHourlyTicks(nowMs, windowMs = DAY_MS) {
const ticks = [];
const safeWindow = Number.isFinite(windowMs) && windowMs > 0 ? windowMs : DAY_MS;
const domainStart = nowMs - safeWindow;
const cursor = new Date(nowMs);
cursor.setMinutes(0, 0, 0);
for (let ts = cursor.getTime(); ts >= domainStart; ts -= HOUR_MS) {
ticks.push(ts);
}
return ticks.reverse();
}
/**
* Build evenly spaced ticks for linear axes.
*
* @param {number} min Axis minimum.
* @param {number} max Axis maximum.
* @param {number} [count=4] Number of tick segments (produces count+1 values).
* @returns {Array<number>} Tick values including both extrema.
*/
export function buildLinearTicks(min, max, count = 4) {
if (!Number.isFinite(min) || !Number.isFinite(max)) return [];
if (max <= min) return [min];
const segments = Math.max(1, Math.floor(count));
const step = (max - min) / segments;
const ticks = [];
for (let idx = 0; idx <= segments; idx += 1) {
ticks.push(min + step * idx);
}
return ticks;
}
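`buildLinearTicks` has no imports, so it can be exercised verbatim; note the count+1 fencepost values including both extrema:

```javascript
// Reproduced verbatim from the module above.
function buildLinearTicks(min, max, count = 4) {
  if (!Number.isFinite(min) || !Number.isFinite(max)) return [];
  if (max <= min) return [min];
  const segments = Math.max(1, Math.floor(count));
  const step = (max - min) / segments;
  const ticks = [];
  for (let idx = 0; idx <= segments; idx += 1) {
    ticks.push(min + step * idx);
  }
  return ticks;
}

buildLinearTicks(0, 100, 4); // [0, 25, 50, 75, 100]
buildLinearTicks(0, 6, 3);   // [0, 2, 4, 6]
```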
/**
* Build base-10 ticks for logarithmic axes.
*
* Returns one tick per order of magnitude between ``min`` and ``max``,
* plus the raw min/max values when they are not already included.
*
* @param {number} min Minimum domain value (must be > 0).
* @param {number} max Maximum domain value.
* @returns {Array<number>} Tick values distributed across powers of ten.
*/
export function buildLogTicks(min, max) {
if (!Number.isFinite(min) || !Number.isFinite(max) || min <= 0 || max <= min) {
return [];
}
const ticks = [];
const minExp = Math.ceil(Math.log10(min));
const maxExp = Math.floor(Math.log10(max));
for (let exp = minExp; exp <= maxExp; exp += 1) {
ticks.push(10 ** exp);
}
if (!ticks.includes(min)) ticks.unshift(min);
if (!ticks.includes(max)) ticks.push(max);
return ticks;
}
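`buildLogTicks` likewise runs standalone; the raw min and max are appended when they are not themselves powers of ten:

```javascript
// Reproduced verbatim from the module above.
function buildLogTicks(min, max) {
  if (!Number.isFinite(min) || !Number.isFinite(max) || min <= 0 || max <= min) {
    return [];
  }
  const ticks = [];
  const minExp = Math.ceil(Math.log10(min));
  const maxExp = Math.floor(Math.log10(max));
  for (let exp = minExp; exp <= maxExp; exp += 1) {
    ticks.push(10 ** exp);
  }
  if (!ticks.includes(min)) ticks.unshift(min);
  if (!ticks.includes(max)) ticks.push(max);
  return ticks;
}

buildLogTicks(10, 100000); // [10, 100, 1000, 10000, 100000] — the gas axis domain
buildLogTicks(5, 2000);    // [5, 10, 100, 1000, 2000]
```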