mirror of
https://github.com/l5yth/potato-mesh.git
synced 2026-05-14 05:15:51 +02:00
Compare commits
9 Commits
| SHA1 |
|---|
| 13b2ce9067 |
| 5a73e212a3 |
| 07c8e85caa |
| c08b3f2c2d |
| 851b2180dd |
| c175445251 |
| b951dbffeb |
| 10e6c99196 |
| aeb97477f0 |
+65
-35
@@ -5,48 +5,78 @@
## v0.6.0

**Official multi-protocol release.** This version introduces first-class
support for both Meshtastic and MeshCore mesh protocols via the new `PROTOCOL`
environment variable. Key additions since v0.5.12:

* Feat: official MeshCore provider with BLE, TCP, and serial support
* Feat: multi-protocol awareness across web frontend, ingestor, and mobile app
* Enh: surface MeshCore role types and distinguish protocols in the UI

This is a service release of the radio mesh app suite `potato-mesh` v0.6.0, which introduces new features and overhauls the user interface. The most notable change is multi-protocol support, with a **MeshCore** implementation in the ingestor, web app, and frontend.

Demo: <https://potatomesh.net/>

See v0.5.12 below for the full commit history of multi-protocol groundwork.
### Meshcore

To start ingesting MeshCore data into an upgraded potato-mesh web app, simply set `PROTOCOL="meshcore"` in your ingestor configuration.

**Breaking changes — remove deprecated environment variable aliases:**

* Ingestor: remove `POTATOMESH_INSTANCE` env var — use `INSTANCE_DOMAIN` by @l5yth
* Ingestor: remove `PROVIDER` env var — use `PROTOCOL` by @l5yth
* Ingestor: remove `MESH_SERIAL` env var — use `CONNECTION` by @l5yth
* Ingestor: remove `PORT` config alias — use `CONNECTION` by @l5yth
* Docker: give `INSTANCE_DOMAIN` a default of `http://web:41447` in compose by @l5yth
* Chore: bump version to 0.6.0 across web, matrix bridge, and mobile app by @l5yth

### About Pages

The other notable feature is the removal of the "darkmode" and "info" buttons in favor of customizable markdown pages, which allow more flexibility for custom content (info about presets, contact information, etc.); see `/pages/*.md` in the web app ([#723](https://github.com/l5yth/potato-mesh/pull/723)).

### Breaking Variable Changes

The following deprecated environment variables have finally been removed in this release ([#704](https://github.com/l5yth/potato-mesh/pull/704)):

* ~~POTATOMESH_INSTANCE~~ - please use `INSTANCE_DOMAIN`
* ~~MESH_SERIAL~~ and ~~PORT~~ - please use `CONNECTION`
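The renames above are mechanical; a small, hypothetical pre-flight check (not part of potato-mesh — all names below except the variable names themselves are illustrative) can warn operators who still export the removed variables:

```python
import os

# Hypothetical helper: map each removed variable to its replacement,
# per the breaking changes listed above.
_RENAMED = {
    "POTATOMESH_INSTANCE": "INSTANCE_DOMAIN",
    "PROVIDER": "PROTOCOL",
    "MESH_SERIAL": "CONNECTION",
    "PORT": "CONNECTION",
}

def deprecated_env_warnings(environ=os.environ):
    """Return one warning string per removed variable that is still set."""
    return [
        f"{old} was removed; set {new} instead"
        for old, new in _RENAMED.items()
        if old in environ
    ]
```

Running this before starting the ingestor makes an upgrade failure loud instead of silent.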
### Features

* Web: add markdown static pages by @l5yth in <https://github.com/l5yth/potato-mesh/pull/723>
* Data: trace analysis multi ingestor support by @l5yth in <https://github.com/l5yth/potato-mesh/pull/721>
* Web: facelift by @l5yth in <https://github.com/l5yth/potato-mesh/pull/716>
* Web: sort channels by activity not index by @l5yth in <https://github.com/l5yth/potato-mesh/pull/711>
* Data: derive meshcore channel probe bound from device max_channels by @l5yth in <https://github.com/l5yth/potato-mesh/pull/701>
* Web: define meshcore modem presets by @l5yth in <https://github.com/l5yth/potato-mesh/pull/696>
* Data: register meshcore channel mappings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/695>
* Data: provide frequency and modem preset for meshcore by @l5yth in <https://github.com/l5yth/potato-mesh/pull/694>
* Web: distinguish meshcore from meshtastic in frontend by @l5yth in <https://github.com/l5yth/potato-mesh/pull/688>
* [Meshcore] fix: get meshcore protocol icon displaying correctly by @benallfree in <https://github.com/l5yth/potato-mesh/pull/681>

### Fixes

* Web: fix federation for multi protocol by @l5yth in <https://github.com/l5yth/potato-mesh/pull/722>
* Data: fix position time updates by @l5yth in <https://github.com/l5yth/potato-mesh/pull/715>
* Data: fix meshcore ingestor self reporting by @l5yth in <https://github.com/l5yth/potato-mesh/pull/713>
* Web: reference meshcore nodes in chat by @l5yth in <https://github.com/l5yth/potato-mesh/pull/709>
* Web: fix node disappearance role reset by @l5yth in <https://github.com/l5yth/potato-mesh/pull/707>
* Web: protect real node names from fallback by @l5yth in <https://github.com/l5yth/potato-mesh/pull/702>
* Web: add proper short names for meshcore companions by @l5yth in <https://github.com/l5yth/potato-mesh/pull/693>
* Fix: address review comments from PRs #676 and #681 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/689>
* [Meshcore] fix: race condition by @benallfree in <https://github.com/l5yth/potato-mesh/pull/676>

### Chores

* Release: v0.6.0 — remove deprecated env var aliases by @l5yth in <https://github.com/l5yth/potato-mesh/pull/704>
* Chore: prepare codebase for breaking release by @l5yth in <https://github.com/l5yth/potato-mesh/pull/718>
## v0.5.12

This is a service release of the app potato-mesh v0.5.12 which improves performance and stability.

Notably, the frontend went through some graphical tweaks to prepare for an upcoming multi-protocol release (MeshCore, Reticulum, etc.).

* Enh: surface meshcore role types (#680) by @l5yth in <https://github.com/l5yth/potato-mesh/pull/685>
* Chore: refactor codebase before meshcore release by @l5yth in <https://github.com/l5yth/potato-mesh/pull/682>
* [Meshcore] enh: short name should be 1st 4 hex digits of public key by @benallfree in <https://github.com/l5yth/potato-mesh/pull/679>
* Chore: update xcode deps by @benallfree in <https://github.com/l5yth/potato-mesh/pull/674>
* Chore: update mesh.sh to use requirements file by @benallfree in <https://github.com/l5yth/potato-mesh/pull/675>
* Data/meshcore: fix ble and enable tcp by @l5yth in <https://github.com/l5yth/potato-mesh/pull/669>
* Data: handle store_forward and router_heartbeat portnum by @l5yth in <https://github.com/l5yth/potato-mesh/pull/667>
* Feat: implement meshcore provider by @l5yth in <https://github.com/l5yth/potato-mesh/pull/663>
* Ci: update dependabot and codecov settings by @l5yth in <https://github.com/l5yth/potato-mesh/pull/666>
* Web: prepare release by @l5yth in <https://github.com/l5yth/potato-mesh/pull/665>
* App: only query meshtastic provider by @l5yth in <https://github.com/l5yth/potato-mesh/pull/664>
* Data: prepare ingestor for meshcore by @l5yth in <https://github.com/l5yth/potato-mesh/pull/658>
* Web: fix css issues by @l5yth in <https://github.com/l5yth/potato-mesh/pull/659>
* Web: prepare frontend for multi protocol by @l5yth in <https://github.com/l5yth/potato-mesh/pull/657>
* Feat: split device and power-sensor telemetry charts (#643) by @l5yth in <https://github.com/l5yth/potato-mesh/pull/656>
* Web: implement a 'protocol' field across systems by @l5yth in <https://github.com/l5yth/potato-mesh/pull/655>
* Fix upsert clearing node coordinates bug by @l5yth in <https://github.com/l5yth/potato-mesh/pull/654>
* Data: resolve circular dependency of deamon.py by @l5yth in <https://github.com/l5yth/potato-mesh/pull/653>
* Proposal: mesh provider pattern refactor by @benallfree in <https://github.com/l5yth/potato-mesh/pull/651>
* Build(deps): bump rustls-webpki from 0.103.8 to 0.103.10 in /matrix by @dependabot[bot] in <https://github.com/l5yth/potato-mesh/pull/649>
* Build(deps): bump quinn-proto from 0.11.13 to 0.11.14 in /matrix by @dependabot[bot] in <https://github.com/l5yth/potato-mesh/pull/646>

## v0.5.11
@@ -15,11 +15,11 @@
    <key>CFBundlePackageType</key>
    <string>FMWK</string>
    <key>CFBundleShortVersionString</key>
    <string>0.6.0</string>
    <string>0.6.1</string>
    <key>CFBundleSignature</key>
    <string>????</string>
    <key>CFBundleVersion</key>
    <string>0.6.0</string>
    <string>0.6.1</string>
    <key>MinimumOSVersion</key>
    <string>14.0</string>
</dict>

+1
-1
@@ -1,7 +1,7 @@
name: potato_mesh_reader
description: Meshtastic Reader — read-only view for PotatoMesh messages.
publish_to: "none"
version: 0.6.0
version: 0.6.1

environment:
  sdk: ">=3.4.0 <4.0.0"

+1
-1
@@ -18,7 +18,7 @@ The ``data.mesh`` module exposes helpers for reading Meshtastic node and
message information before forwarding it to the accompanying web application.
"""

VERSION = "0.6.0"
VERSION = "0.6.1"
"""Semantic version identifier shared with the dashboard and front-end."""

__version__ = VERSION
@@ -27,6 +27,8 @@ CREATE TABLE IF NOT EXISTS instances (
    last_update_time INTEGER,
    is_private BOOLEAN NOT NULL DEFAULT 0,
    nodes_count INTEGER,
    meshcore_nodes_count INTEGER,
    meshtastic_nodes_count INTEGER,
    contact_link TEXT,
    signature TEXT
);
@@ -16,6 +16,7 @@

from __future__ import annotations

import math
import os
from datetime import datetime, timezone
from typing import Any

@@ -81,6 +82,37 @@ Accepted values are ``meshtastic`` (default) and ``meshcore``.
"""
def _parse_lora_freq_env(raw: str | None) -> float | int | None:
    """Parse the ``FREQUENCY`` environment variable into a numeric LoRa frequency.

    Returns an :class:`int` for whole-number strings (e.g. ``"868"``), a
    :class:`float` for decimal strings (e.g. ``"869.525"``), or ``None`` when
    *raw* is empty, absent, non-numeric, or non-finite (e.g. ``"inf"``).

    Non-numeric labels such as ``"EU_868"`` intentionally return ``None`` so
    that :data:`LORA_FREQ` is left unset and :func:`~interfaces._ensure_radio_metadata`
    can still populate it from the detected radio configuration.

    Parameters:
        raw: Raw value of the ``FREQUENCY`` environment variable.

    Returns:
        Numeric frequency value, or ``None``.
    """
    if not raw:
        return None
    stripped = raw.strip()
    if not stripped:
        return None
    try:
        as_float = float(stripped)
    except ValueError:
        return None
    if not math.isfinite(as_float):
        return None
    return int(as_float) if as_float == int(as_float) else as_float
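A quick sanity check of those parsing rules, using a self-contained copy of the helper (renamed here to avoid implying it is importable as shown):

```python
import math

def parse_lora_freq(raw):
    # Mirror of _parse_lora_freq_env above: int for whole numbers,
    # float for decimals, None for empty/non-numeric/non-finite input.
    if not raw:
        return None
    stripped = raw.strip()
    if not stripped:
        return None
    try:
        as_float = float(stripped)
    except ValueError:
        return None
    if not math.isfinite(as_float):
        return None
    return int(as_float) if as_float == int(as_float) else as_float

print(parse_lora_freq("868"))      # → 868 (an int)
print(parse_lora_freq("869.525"))  # → 869.525 (a float)
print(parse_lora_freq("EU_868"))   # → None (label, left for auto-detection)
print(parse_lora_freq("inf"))      # → None (non-finite)
```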
def _parse_channel_names(raw_value: str | None) -> tuple[str, ...]:
    """Normalise a comma-separated list of channel names.

@@ -221,8 +253,14 @@ API_TOKEN = INSTANCES[0][1] if INSTANCES else os.environ.get("API_TOKEN", "")
ENERGY_SAVING = os.environ.get("ENERGY_SAVING") == "1"
"""When ``True``, enables the ingestor's energy saving mode."""

LORA_FREQ: float | int | str | None = None
"""Frequency of the local node's configured LoRa region in MHz or raw region label."""
LORA_FREQ: float | int | str | None = _parse_lora_freq_env(os.environ.get("FREQUENCY"))
"""Frequency of the local node's configured LoRa region in MHz or raw region label.

Pre-seeded from the ``FREQUENCY`` environment variable when set to a finite
numeric value, allowing operators to override auto-detected values.
Non-numeric or non-finite values are ignored so that auto-detection from the
radio interface can still fill this in.
"""

MODEM_PRESET: str | None = None
"""CamelCase modem preset name reported by the local node."""
@@ -24,7 +24,7 @@ import time

from pubsub import pub

from . import config, handlers, ingestors, interfaces
from . import config, handlers, ingestors, interfaces, queue
from .mesh_protocol import MeshProtocol
from .utils import _retry_dict_snapshot

@@ -488,22 +488,32 @@ def _check_inactivity_reconnect(state: _DaemonState) -> bool:
    ):
        return False

    if (
        state.last_inactivity_reconnect is not None
        and now - state.last_inactivity_reconnect < state.inactivity_reconnect_secs
    ):
        return False
    if state.last_inactivity_reconnect is not None:
        # For explicit disconnects use the shorter max-reconnect-delay window
        # so the daemon reconnects promptly without thrashing. For inactivity-
        # only triggers retain the full inactivity window as the throttle.
        throttle_secs = (
            config._RECONNECT_MAX_DELAY_SECS
            if believed_disconnected
            else state.inactivity_reconnect_secs
        )
        if now - state.last_inactivity_reconnect < throttle_secs:
            return False

    reason = (
        "disconnected"
        if believed_disconnected
        else f"no data for {inactivity_elapsed:.0f}s"
    )
    # Uses the module-level global STATE — acceptable because there is only
    # one queue in production, and in tests this is purely informational.
    queue_depth = len(queue.STATE.queue)
    config._debug_log(
        "Mesh interface inactivity detected",
        context="daemon.interface",
        severity="warn",
        reason=reason,
        queue_depth=queue_depth,
    )
    state.last_inactivity_reconnect = now
    _close_interface(state.iface)
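The throttle choice reduces to a small pure function; a sketch of the window selection, with illustrative constants (the real values live in the project's config, not here):

```python
# Sketch of the reconnect-throttle decision in _check_inactivity_reconnect.
# Both window constants are hypothetical stand-ins, not the project's values.
RECONNECT_MAX_DELAY_SECS = 60       # short window for explicit disconnects
INACTIVITY_RECONNECT_SECS = 600     # full inactivity window otherwise

def should_reconnect(now, last_reconnect, believed_disconnected):
    """Return True when enough time has passed since the last reconnect."""
    if last_reconnect is None:
        return True  # never reconnected before, so no throttle applies
    throttle = (
        RECONNECT_MAX_DELAY_SECS
        if believed_disconnected
        else INACTIVITY_RECONNECT_SECS
    )
    return now - last_reconnect >= throttle

print(should_reconnect(100.0, 50.0, believed_disconnected=True))   # False: 50s < 60s
print(should_reconnect(200.0, 50.0, believed_disconnected=True))   # True: 150s >= 60s
print(should_reconnect(200.0, 50.0, believed_disconnected=False))  # False: 150s < 600s
```

A known-dead link reconnects within a minute, while a merely quiet link waits out the full inactivity window.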
@@ -631,6 +641,17 @@ def main(*, provider: MeshProtocol | None = None) -> None:
        topics=subscribed,
    )

    if not config.INSTANCES and not config.INSTANCE:
        config._debug_log(
            "No INSTANCE_DOMAIN configured — cannot forward data; exiting",
            context="daemon.main",
            severity="error",
            always=True,
        )
        return

    queue._start_queue_drainer(queue.STATE)

    state = _DaemonState(
        provider=provider,
        stop=threading.Event(),

@@ -666,11 +687,7 @@ def main(*, provider: MeshProtocol | None = None) -> None:
    signal.signal(signal.SIGINT, handle_sigint)
    signal.signal(signal.SIGTERM, handle_sigterm)

    instance_label = (
        ", ".join(inst for inst, _ in config.INSTANCES)
        if config.INSTANCES
        else "(no INSTANCE_DOMAIN configured)"
    )
    instance_label = ", ".join(inst for inst, _ in config.INSTANCES)
    config._debug_log(
        "Mesh daemon starting",
        context="daemon.main",
@@ -511,16 +511,96 @@ def _resolve_lora_message(local_config: Any) -> Any | None:
    return None


# Maps Meshtastic region enum name to (base_freq_MHz, channel_spacing_MHz).
# Values are derived from the Meshtastic firmware RegionInfo tables.
# Used by _computed_channel_frequency to derive the actual radio frequency
# from the region and channel index.
_REGION_CHANNEL_PARAMS: dict[str, tuple[float, float]] = {
    "US": (902.0, 0.25),  # 902–928 MHz; e.g. ch 52 ≈ 915 MHz at 250 kHz spacing
    "EU_433": (433.175, 0.2),
    "EU_868": (869.525, 0.5),  # actual primary ≈ 869.525 MHz, not 868
    "CN": (470.0, 0.2),
    "JP": (920.875, 0.5),
    "ANZ": (916.0, 0.5),
    "KR": (921.9, 0.5),
    "TW": (923.0, 0.5),
    "RU": (868.9, 0.5),
    "IN": (865.0, 0.5),
    "NZ_865": (864.0, 0.5),
    "TH": (920.0, 0.5),
    "LORA_24": (2400.0, 0.5),
    "UA_433": (433.175, 0.2),
    "UA_868": (868.0, 0.5),
    "MY_433": (433.0, 0.2),
    "MY_919": (919.0, 0.5),
    "SG_923": (923.0, 0.5),
    "PH_433": (433.0, 0.2),
    "PH_868": (868.0, 0.5),
    "PH_915": (915.0, 0.5),
    "ANZ_433": (433.0, 0.2),
    "KZ_433": (433.0, 0.2),
    "KZ_863": (863.125, 0.5),
    "NP_865": (865.0, 0.5),
    "BR_902": (902.0, 0.25),
    # IL (Israel) is absent from meshtastic Python lib 2.7.8 protobufs; the
    # enum value is unresolvable at runtime. Operators on IL firmware should
    # set the FREQUENCY environment variable to override.
}
def _computed_channel_frequency(
    enum_name: str | None,
    channel_num: int | None,
) -> int | None:
    """Compute the floor MHz frequency for a known region and channel index.

    Looks up *enum_name* in :data:`_REGION_CHANNEL_PARAMS` and returns
    ``floor(base_freq + channel_num * spacing)``. Returns ``None`` when the
    region is not in the table. A missing or negative *channel_num* is
    treated as 0 so the base frequency is always usable.

    Args:
        enum_name: Region enum name as returned by
            :func:`_enum_name_from_field`, e.g. ``"EU_868"`` or ``"US"``.
        channel_num: Zero-based channel index from the device LoRa config.

    Returns:
        Floored MHz as :class:`int`, or ``None`` if the region is unknown.
    """
    if enum_name is None:
        return None
    params = _REGION_CHANNEL_PARAMS.get(enum_name)
    if params is None:
        return None
    base, spacing = params
    idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
    return math.floor(base + idx * spacing)
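Worked examples of that formula, matching the table comments above (a self-contained copy with a trimmed two-region table):

```python
import math

# Trimmed copy of _REGION_CHANNEL_PARAMS, for illustration only.
REGION_PARAMS = {
    "US": (902.0, 0.25),
    "EU_868": (869.525, 0.5),
}

def computed_channel_frequency(enum_name, channel_num):
    # floor(base + index * spacing); unknown region → None, bad index → 0.
    params = REGION_PARAMS.get(enum_name) if enum_name else None
    if params is None:
        return None
    base, spacing = params
    idx = channel_num if (isinstance(channel_num, int) and channel_num >= 0) else 0
    return math.floor(base + idx * spacing)

print(computed_channel_frequency("US", 52))     # → 915 (902 + 52 * 0.25)
print(computed_channel_frequency("EU_868", 0))  # → 869 (floor of 869.525)
print(computed_channel_frequency("IL", 0))      # → None (region not in table)
```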
def _region_frequency(lora_message: Any) -> int | float | str | None:
    """Derive the LoRa region frequency in MHz or the region label from ``lora_message``.

    Numeric override values are floored to the nearest MHz to align with the
    integer frequencies expected elsewhere in the ingestion pipeline.
    Frequency sources are tried in priority order:

    1. ``override_frequency > 0`` — explicit radio override, floored to MHz.
    2. :data:`_REGION_CHANNEL_PARAMS` lookup + ``channel_num`` — actual
       band-plan frequency derived from the device's region and channel index,
       floored to MHz.
    3. Largest digit token ≥ 100 parsed from the region enum name string.
    4. Largest digit token < 100 from the enum name (reversed scan).
    5. Full enum name string, raw integer ≥ 100, or raw string as a label.

    Args:
        lora_message: A LoRa config protobuf message or compatible object.

    Returns:
        An integer MHz frequency, a fallback string label, or ``None``.
    """

    if lora_message is None:
        return None

    # Step 1 — explicit radio override
    override_frequency = getattr(lora_message, "override_frequency", None)
    if override_frequency is not None:
        if isinstance(override_frequency, (int, float)):
@@ -533,6 +613,15 @@ def _region_frequency(lora_message: Any) -> int | float | str | None:
    if region_value is None:
        return None
    enum_name = _enum_name_from_field(lora_message, "region", region_value)

    # Step 2 — lookup table + channel offset (actual band-plan frequency)
    if enum_name:
        channel_num = getattr(lora_message, "channel_num", None)
        computed = _computed_channel_frequency(enum_name, channel_num)
        if computed is not None:
            return computed

    # Steps 3–5 — parse digits from enum name (fallback for unknown regions)
    if enum_name:
        digits = re.findall(r"\d+", enum_name)
        for token in digits:
+295
-21
@@ -83,15 +83,25 @@ _POSITION_POST_PRIORITY = 60
_TELEMETRY_POST_PRIORITY = 70
_DEFAULT_POST_PRIORITY = 90

_MAX_SEND_RETRIES = 3
"""Maximum number of times a failed POST item is re-queued before being dropped."""


@dataclass
class QueueState:
    """Mutable state for the HTTP POST priority queue."""

    lock: threading.Lock = field(default_factory=threading.Lock)
    queue: list[tuple[int, int, str, dict]] = field(default_factory=list)
    # Heap tuple: (priority, counter, path, payload, retries).
    queue: list[tuple[int, int, str, dict, int]] = field(default_factory=list)
    counter: Iterable[int] = field(default_factory=itertools.count)
    active: bool = False
    # Background drain thread. When the drainer is alive, _queue_post_json
    # signals drain_event instead of blocking the caller with HTTP calls.
    drain_event: threading.Event = field(default_factory=threading.Event)
    drainer: threading.Thread | None = None
    # Set to request the drainer thread to exit its loop cleanly.
    shutdown: threading.Event = field(default_factory=threading.Event)


STATE = QueueState()
@@ -102,7 +112,7 @@ def _send_single(
    api_token: str,
    path: str,
    payload: dict,
) -> None:
) -> bool:
    """Transmit a single JSON payload to one instance.

    Parameters:
@@ -110,10 +120,13 @@ def _send_single(
        api_token: Bearer token for this instance (may be empty).
        path: API path relative to the instance root.
        payload: JSON-serialisable body to transmit.

    Returns:
        ``True`` when the request succeeded, ``False`` on failure.
    """

    if not instance:
        return
        return True

    url = f"{instance}{path}"
    data = json.dumps(payload).encode("utf-8")
@@ -139,15 +152,18 @@ def _send_single(
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            resp.read()
    except Exception as exc:  # pragma: no cover - exercised in production
        return True
    except Exception as exc:
        config._debug_log(
            "POST request failed",
            context="queue.post_json",
            severity="warn",
            always=True,
            url=url,
            error_class=exc.__class__.__name__,
            error_message=str(exc),
        )
        return False
def _post_json(
@@ -156,7 +172,7 @@ def _post_json(
    *,
    instance: str | None = None,
    api_token: str | None = None,
) -> None:
) -> bool:
    """Send a JSON payload to one or more configured web API instances.

    When ``instance`` is provided explicitly the payload is sent to that
@@ -169,13 +185,18 @@ def _post_json(
        payload: JSON-serialisable body to transmit.
        instance: Optional single-instance override.
        api_token: Optional token override (only used with ``instance``).

    Returns:
        ``True`` when at least one instance received the payload
        successfully, ``False`` when all targets failed. A missing
        configuration is not a transient failure and returns ``True``
        (retrying would not help).
    """

    if instance is not None:
        if not instance:
            return
            return True
        _send_single(instance, api_token or "", path, payload)
        return
        return _send_single(instance, api_token or "", path, payload)

    targets: tuple[tuple[str, str], ...] = config.INSTANCES
    if not targets:
@@ -183,14 +204,28 @@ def _post_json(
        # config.INSTANCE / config.API_TOKEN directly.
        inst = config.INSTANCE
        if not inst:
            return
        _send_single(inst, api_token or config.API_TOKEN, path, payload)
        return
        try:
            config._debug_log(
                "No target instances configured; discarding payload",
                context="queue.post_json",
                severity="error",
                always=True,
                path=path,
            )
        except Exception:
            pass
        return False
        return _send_single(inst, api_token or config.API_TOKEN, path, payload)

    any_ok = False
    any_attempted = False
    for inst, token in targets:
        if not inst:
            continue
        _send_single(inst, token, path, payload)
        any_attempted = True
        if _send_single(inst, token, path, payload):
            any_ok = True
    return any_ok or not any_attempted
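The return-value contract is easiest to see in a reduced fan-out loop; the `send` callable and target names below are stand-ins for `_send_single` and the configured instances:

```python
def post_to_all(targets, send):
    """Return True when at least one target succeeded, or when none were
    attempted (a missing configuration is not a transient failure)."""
    any_ok = False
    any_attempted = False
    for target in targets:
        if not target:
            continue  # skip empty instance entries
        any_attempted = True
        if send(target):
            any_ok = True
    return any_ok or not any_attempted

# Stand-in sender: "a" fails, "b" succeeds.
ok = {"a": False, "b": True}
print(post_to_all(["a", "b"], ok.get))  # True: one of two targets succeeded
print(post_to_all(["a"], ok.get))       # False: the only target failed
print(post_to_all([], ok.get))          # True: nothing attempted, nothing to retry
```

Only an all-targets failure reports `False`, which is exactly the case worth re-queuing.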
def _enqueue_post_json(
@@ -199,6 +234,7 @@ def _enqueue_post_json(
    priority: int,
    *,
    state: QueueState = STATE,
    retries: int = 0,
) -> None:
    """Store a POST request in the priority queue.

@@ -207,15 +243,17 @@ def _enqueue_post_json(
        payload: JSON-serialisable body.
        priority: Lower values execute first.
        state: Shared queue state, injectable for testing.
        retries: Number of prior failed send attempts for this item.
    """

    with state.lock:
        counter = next(state.counter)
        # Heap tuple: (priority, counter, path, payload). Lower priority
        # values are dequeued first (min-heap semantics). The monotonically
        # increasing counter breaks ties so equal-priority items are processed
        # in FIFO order without comparing the non-orderable payload dict.
        heapq.heappush(state.queue, (priority, counter, path, payload))
        # Heap tuple: (priority, counter, path, payload, retries). Lower
        # priority values are dequeued first (min-heap semantics). The
        # monotonically increasing counter breaks ties so equal-priority
        # items are processed in FIFO order without comparing the
        # non-orderable payload dict.
        heapq.heappush(state.queue, (priority, counter, path, payload, retries))
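The tie-breaking counter matters because dicts are not orderable, so two equal-priority heap tuples must never fall through to comparing payloads. A minimal demonstration of the heap discipline (paths and priority values are illustrative, not the project's):

```python
import heapq
import itertools

queue: list = []
counter = itertools.count()

def enqueue(path, payload, priority, retries=0):
    # (priority, counter, path, payload, retries): the counter guarantees
    # FIFO order among equal priorities without ever comparing payload dicts.
    heapq.heappush(queue, (priority, next(counter), path, payload, retries))

enqueue("/api/telemetry", {"t": 1}, 70)
enqueue("/api/nodes", {"n": 1}, 40)
enqueue("/api/telemetry", {"t": 2}, 70)

order = [heapq.heappop(queue)[2] for _ in range(3)]
print(order)  # → ['/api/nodes', '/api/telemetry', '/api/telemetry']
```

Without the counter, popping the second `/api/telemetry` item would raise `TypeError` when the heap tried to compare the two payload dicts.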
def _drain_post_queue(
@@ -223,6 +261,12 @@
) -> None:
    """Process queued POST requests in priority order.

    When the *send* callable returns ``False`` (transient failure) the item
    is re-queued up to :data:`_MAX_SEND_RETRIES` times. Items exceeding
    the limit are dropped with a warning. Custom *send* callables that
    return ``None`` (the typical test/heartbeat pattern) are never retried
    — the ``result is False`` identity check ensures backward compatibility.

    Parameters:
        state: Queue container holding pending items.
        send: Optional callable used to transmit requests.
@@ -237,13 +281,184 @@ def _drain_post_queue(
        if not state.queue:
            state.active = False
            return
        _priority, _idx, path, payload = heapq.heappop(state.queue)
        send(path, payload)
        item = heapq.heappop(state.queue)

        # Support both 5-tuple (current) and 4-tuple (legacy/test) items.
        if len(item) >= 5:
            priority, _idx, path, payload, retries = item[:5]
        else:
            priority, _idx, path, payload = item[:4]
            retries = 0

        result = send(path, payload)

        # Only retry when the send callable explicitly signals failure
        # (returns False). Custom send callables (tests, heartbeat)
        # return None and must NOT be treated as failures.
        if result is False:
            if retries < _MAX_SEND_RETRIES:
                _enqueue_post_json(
                    path, payload, priority, state=state, retries=retries + 1
                )
            else:
                try:
                    config._debug_log(
                        "Dropping item after max retries",
                        context="queue.drain",
                        severity="warn",
                        always=True,
                        path=path,
                        retries=retries,
                    )
                except Exception:
                    pass
    finally:
        with state.lock:
            state.active = False
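The retry path can be exercised with an always-failing sender: an item is attempted once plus `_MAX_SEND_RETRIES` re-queues, then dropped. A reduced model of that loop (logging omitted; names local to this sketch):

```python
import heapq
import itertools

MAX_SEND_RETRIES = 3
queue: list = []
counter = itertools.count()

def enqueue(path, payload, priority, retries=0):
    heapq.heappush(queue, (priority, next(counter), path, payload, retries))

def drain(send):
    """Drain the queue, re-queuing explicit failures up to the retry cap."""
    attempts, dropped = 0, []
    while queue:
        priority, _idx, path, payload, retries = heapq.heappop(queue)
        attempts += 1
        result = send(path, payload)
        if result is False:  # identity check: None (test senders) never retries
            if retries < MAX_SEND_RETRIES:
                enqueue(path, payload, priority, retries=retries + 1)
            else:
                dropped.append(path)
    return attempts, dropped

enqueue("/api/messages", {"id": 1}, 50)
print(drain(lambda *_: False))  # → (4, ['/api/messages']): 1 try + 3 retries, then drop
print(drain(lambda *_: None))   # → (0, []): queue already empty
```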

_QUEUE_DEPTH_WARNING_THRESHOLD = 100
"""Log a warning when the queue grows past this many items."""


def _queue_drainer_loop(state: QueueState = STATE) -> None:
    """Body of the background queue-drain daemon thread.

    Blocks on :attr:`QueueState.drain_event`, clears it, then empties the
    queue by calling :func:`_drain_post_queue`. The thread is created as a
    daemon so it terminates automatically when the process exits.

    The loop exits cleanly when :attr:`QueueState.shutdown` is set, allowing
    tests (and graceful-shutdown paths) to join the thread instead of leaking
    daemon threads that accumulate across a test run.

    The loop is deliberately hardened so that **no** :class:`Exception` can
    kill the thread. The ``_debug_log`` calls inside the error handler are
    themselves wrapped in ``try/except`` to prevent cascading failures
    (e.g. ``BrokenPipeError`` from ``print()`` to a closed stdout).

    .. note::
        There is a benign race between ``drain_event.clear()`` and the end
        of :func:`_drain_post_queue`: a signal arriving in that window is
        consumed by ``clear()`` but the item is still drained because the
        drain loop empties the queue completely. However, an item enqueued
        *after* the drain loop finds the queue empty and *before*
        ``wait()`` re-blocks will sit until the next ``drain_event.set()``
        call (i.e. the next enqueue). This is acceptable for a best-effort
        ingestor — maximum extra latency equals the inter-packet interval.

    Parameters:
        state: Queue state instance to drain.
    """
|
||||
try:
|
||||
config._debug_log(
|
||||
"Queue drainer thread started",
|
||||
context="queue.drainer",
|
||||
severity="info",
|
||||
always=True,
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
while not state.shutdown.is_set():
|
||||
state.drain_event.wait(timeout=1.0)
|
||||
if state.shutdown.is_set():
|
||||
break
|
||||
state.drain_event.clear()
|
||||
|
||||
depth = len(state.queue)
|
||||
if depth > _QUEUE_DEPTH_WARNING_THRESHOLD:
|
||||
try:
|
||||
config._debug_log(
|
||||
"Queue depth warning",
|
||||
context="queue.drainer",
|
||||
severity="warn",
|
||||
always=True,
|
||||
depth=depth,
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
try:
|
||||
_drain_post_queue(state)
|
||||
except Exception as exc:
|
||||
try:
|
||||
config._debug_log(
|
||||
"Queue drainer error",
|
||||
context="queue.drainer",
|
||||
severity="error",
|
||||
always=True,
|
||||
error_class=exc.__class__.__name__,
|
||||
error_message=str(exc),
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
try:
|
||||
config._debug_log(
|
||||
"Queue drainer thread exiting",
|
||||
context="queue.drainer",
|
||||
severity="info",
|
||||
always=True,
|
||||
)
|
||||
except Exception:
|
||||
pass
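The wait/clear/drain-all shape of that loop can be reduced to a self-contained sketch. This is a hypothetical miniature (plain `deque` instead of the module's priority heap, made-up item names), not the project's actual code:

```python
import threading
import time
from collections import deque

queue: deque = deque()
drained: list = []
drain_event = threading.Event()
shutdown = threading.Event()

def drainer() -> None:
    # Same shape as _queue_drainer_loop: wait -> check shutdown -> clear -> drain all.
    while not shutdown.is_set():
        drain_event.wait(timeout=0.1)
        if shutdown.is_set():
            break
        drain_event.clear()
        while queue:
            drained.append(queue.popleft())

t = threading.Thread(target=drainer, daemon=True)
t.start()

queue.append("item-1")
drain_event.set()   # the producer wakes the drainer instead of posting inline
time.sleep(0.2)     # give the daemon thread time to run

shutdown.set()
drain_event.set()   # unblock wait() so the loop observes the shutdown flag
t.join(timeout=1.0)
```

Setting the event after `shutdown` mirrors the stop path: the `wait(timeout=...)` means a missed wake-up costs at most one timeout interval, which is the "benign race" the docstring describes.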


def _start_queue_drainer(state: QueueState = STATE) -> None:
    """Idempotently start the background queue-drain thread.

    Calling this function when a drainer thread is already alive is a
    no-op. The thread is created as a daemon so it does not prevent
    process exit. The check-and-start is performed under :attr:`state.lock`
    to avoid starting duplicate threads under concurrent callers.

    If items are already in the queue when the drainer is started,
    :attr:`QueueState.drain_event` is signalled immediately so they are not
    stranded waiting for the next packet to arrive.

    Parameters:
        state: Queue state whose :func:`_queue_drainer_loop` to start.
    """
    with state.lock:
        if state.drainer is not None and state.drainer.is_alive():
            return
        # Reset in case the prior thread was stopped or crashed while
        # shutdown was already set.
        state.shutdown.clear()
        t = threading.Thread(
            target=_queue_drainer_loop,
            args=(state,),
            name="queue-drainer",
            daemon=True,
        )
        t.start()
        state.drainer = t
        if state.queue:
            state.drain_event.set()


def _stop_queue_drainer(state: QueueState = STATE, timeout: float = 5.0) -> None:
    """Signal the drainer thread to exit and wait for it to finish.

    Sets :attr:`QueueState.shutdown` and :attr:`QueueState.drain_event` so
    the loop wakes up, observes the shutdown flag, and terminates. After
    joining (up to *timeout* seconds) the drainer reference is cleared.

    Safe to call when no drainer is running (no-op).

    Parameters:
        state: Queue state whose drainer to stop.
        timeout: Maximum seconds to wait for the thread to finish.
    """
    if state.drainer is None or not state.drainer.is_alive():
        return
    state.shutdown.set()
    state.drain_event.set()
    state.drainer.join(timeout=timeout)
    state.drainer = None
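The check-and-start-under-lock idiom that makes `_start_queue_drainer` idempotent can be sketched in isolation. `Worker` and its fields below are hypothetical names, not the project's `QueueState`:

```python
import threading

class Worker:
    """Hypothetical holder mirroring the check-and-start-under-lock pattern."""

    def __init__(self) -> None:
        self.lock = threading.Lock()
        self.thread: threading.Thread | None = None
        self.stop = threading.Event()

    def start(self) -> None:
        # Holding the lock collapses concurrent start() calls into one thread:
        # a second caller sees an alive thread and returns without spawning.
        with self.lock:
            if self.thread is not None and self.thread.is_alive():
                return
            self.thread = threading.Thread(target=self.stop.wait, daemon=True)
            self.thread.start()

w = Worker()
w.start()
first = w.thread
w.start()  # idempotent: no second thread while the first is alive
same = w.thread is first
w.stop.set()
first.join(timeout=1.0)
```

Without the lock, two threads racing through the `is_alive()` check could both spawn a drainer; the lock turns that into a single winner.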


def _queue_post_json(
    path: str,
    payload: dict,
@@ -252,14 +467,32 @@ def _queue_post_json(
    state: QueueState = STATE,
    send: Callable[[str, dict], None] | None = None,
) -> None:
-    """Queue a POST request and start processing if idle.
+    """Queue a POST request and wake the drain thread (or drain inline).

    When a background drainer thread is running (started via
    :func:`_start_queue_drainer`), this function enqueues the item and
    signals :attr:`QueueState.drain_event` without blocking — the drain
    happens on the dedicated thread. This keeps the caller's thread (which
    may be the Meshtastic asyncio I/O thread) free to process serial events.

    When no background drainer is alive the call falls back to a
    synchronous inline drain. This path is used by tests (which pass a
    ``send`` override via :func:`_fresh_state`) and for any standalone use
    without calling :func:`_start_queue_drainer`.

    .. note::
        The background drainer is used **only** when no custom ``send``
        override is provided (i.e. the production ``_post_json`` path).
        Any caller that supplies a custom ``send`` (tests, heartbeat
        helpers) always gets the synchronous inline drain so its transport
        is honoured correctly.

    Parameters:
        path: API path for the request.
        payload: JSON payload to send.
        priority: Scheduling priority where lower values run first.
        state: Queue container used to store pending requests.
-        send: Optional transport override, primarily for tests.
+        send: Optional transport override (synchronous fallback only).
    """

    if send is None:
@@ -279,6 +512,42 @@ def _queue_post_json(
        )

    _enqueue_post_json(path, payload, priority, state=state)

    # Use the background drainer only when it is alive AND no custom send
    # override is in play. A custom send (used by tests and callers such as
    # ingestors.queue_ingestor_heartbeat) must be honoured synchronously
    # because the background drainer always calls _drain_post_queue without
    # a send override.
    #
    # The ``is`` check is intentional: _post_json is a module-level function
    # so identity comparison reliably detects the "no override" default that
    # was assigned at the top of this function.
    if send is _post_json:
        if state.drainer is not None and state.drainer.is_alive():
            state.drain_event.set()
            return

        # The drainer was previously started but has died (e.g. unhandled
        # exception). Restart it so the caller stays non-blocking and the
        # MeshCore asyncio event loop is not stalled by inline HTTP calls.
        if state.drainer is not None:
            try:
                config._debug_log(
                    "Restarting dead queue drainer thread",
                    context="queue.queue_post_json",
                    severity="warn",
                    always=True,
                )
            except Exception:
                pass
            _start_queue_drainer(state)
            # If the restart succeeded, delegate to the background thread.
            if state.drainer is not None and state.drainer.is_alive():
                state.drain_event.set()
                return

    # Synchronous fallback: no drainer was ever started, the restart
    # failed, or a custom send override is in play.
    with state.lock:
        if state.active:
            return
@@ -304,15 +573,20 @@ __all__ = [
    "_CHANNEL_POST_PRIORITY",
    "_DEFAULT_POST_PRIORITY",
    "_INGESTOR_POST_PRIORITY",
    "_MAX_SEND_RETRIES",
    "_MESSAGE_POST_PRIORITY",
    "_NEIGHBOR_POST_PRIORITY",
    "_NODE_POST_PRIORITY",
    "_POSITION_POST_PRIORITY",
    "_QUEUE_DEPTH_WARNING_THRESHOLD",
    "_TRACE_POST_PRIORITY",
    "_TELEMETRY_POST_PRIORITY",
    "_clear_post_queue",
    "_drain_post_queue",
    "_enqueue_post_json",
    "_post_json",
    "_queue_drainer_loop",
    "_queue_post_json",
    "_start_queue_drainer",
    "_stop_queue_drainer",
]

Generated  +3 -3
@@ -969,7 +969,7 @@ checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"

[[package]]
name = "potatomesh-matrix-bridge"
-version = "0.6.0"
+version = "0.6.1"
dependencies = [
 "anyhow",
 "axum",
@@ -1087,9 +1087,9 @@ checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f"

[[package]]
name = "rand"
-version = "0.9.2"
+version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
+checksum = "44c5af06bb1b7d3216d91932aed5265164bf384dc89cd6ba05cf59a35f5f76ea"
dependencies = [
 "rand_chacha",
 "rand_core",

+1 -1
@@ -14,7 +14,7 @@

[package]
name = "potatomesh-matrix-bridge"
-version = "0.6.0"
+version = "0.6.1"
edition = "2021"

[dependencies]

@@ -287,3 +287,85 @@ class TestProtocolValidation:
        # Restore to valid value so subsequent tests work
        monkeypatch.setenv("PROTOCOL", "meshtastic")
        importlib.reload(config)


# ---------------------------------------------------------------------------
# _parse_lora_freq_env
# ---------------------------------------------------------------------------


class TestParseLoraFreqEnv:
    """Tests for :func:`config._parse_lora_freq_env`."""

    def test_none_returns_none(self):
        """None input returns None."""
        assert config._parse_lora_freq_env(None) is None

    def test_empty_string_returns_none(self):
        """Empty string returns None."""
        assert config._parse_lora_freq_env("") is None

    def test_whitespace_only_returns_none(self):
        """Whitespace-only string returns None."""
        assert config._parse_lora_freq_env(" ") is None

    def test_integer_string_returns_int(self):
        """Whole-number string returns int."""
        result = config._parse_lora_freq_env("868")
        assert result == 868
        assert isinstance(result, int)

    def test_float_integer_value_returns_int(self):
        """String like '915.0' (whole float) returns int 915."""
        result = config._parse_lora_freq_env("915.0")
        assert result == 915
        assert isinstance(result, int)

    def test_decimal_string_returns_float(self):
        """Decimal string returns float."""
        result = config._parse_lora_freq_env("869.525")
        assert result == pytest.approx(869.525)
        assert isinstance(result, float)

    def test_non_numeric_label_returns_none(self):
        """Non-numeric string returns None so auto-detection is not blocked."""
        assert config._parse_lora_freq_env("EU_868") is None

    def test_unit_suffixed_string_returns_none(self):
        """String like '915MHz' returns None (not numeric)."""
        assert config._parse_lora_freq_env("915MHz") is None

    def test_inf_returns_none(self):
        """'inf' is non-finite and returns None."""
        assert config._parse_lora_freq_env("inf") is None

    def test_large_exponent_returns_none(self):
        """'1e309' overflows to inf and returns None."""
        assert config._parse_lora_freq_env("1e309") is None

    def test_nan_returns_none(self):
        """'nan' is non-finite and returns None."""
        assert config._parse_lora_freq_env("nan") is None

    def test_whitespace_stripped(self):
        """Leading/trailing whitespace is ignored."""
        assert config._parse_lora_freq_env(" 919 ") == 919

    def test_frequency_env_preseeds_lora_freq(self, monkeypatch):
        """FREQUENCY env var pre-seeds LORA_FREQ at module load."""
        import importlib

        monkeypatch.setenv("FREQUENCY", "915")
        importlib.reload(config)
        assert config.LORA_FREQ == 915
        # Restore
        monkeypatch.delenv("FREQUENCY")
        importlib.reload(config)

    def test_no_frequency_env_leaves_lora_freq_none(self, monkeypatch):
        """Absent FREQUENCY env var leaves LORA_FREQ as None."""
        import importlib

        monkeypatch.delenv("FREQUENCY", raising=False)
        importlib.reload(config)
        assert config.LORA_FREQ is None


@@ -261,6 +261,8 @@ def _configure_common_defaults(
):
    """Set fast configuration defaults shared by daemon integration tests."""

    monkeypatch.setattr(daemon.config, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(daemon.config, "INSTANCE", "http://test")
    monkeypatch.setattr(daemon.config, "SNAPSHOT_SECS", 0)
    monkeypatch.setattr(daemon.config, "_RECONNECT_INITIAL_DELAY_SECS", 0)
    monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 0)
@@ -1089,6 +1091,133 @@ def test_check_inactivity_reconnect_elapsed_triggers(monkeypatch):
    assert result is True


def test_inactivity_reconnect_bypasses_throttle_when_explicitly_disconnected(
    monkeypatch,
):
    """Explicit disconnect reconnects even when last_inactivity_reconnect is recent.

    When isConnected reports False the daemon must not wait the full
    inactivity window before reconnecting. It uses the shorter
    _RECONNECT_MAX_DELAY_SECS window instead.
    """
    state = _make_state(inactivity_reconnect_secs=3600.0)
    state.iface = DummyInterface(is_connected=False)
    state.iface_connected_at = 0.0
    # 61 seconds since last reconnect attempt — outside the 60 s anti-thrash window.
    state.last_inactivity_reconnect = 3589.0

    monkeypatch.setattr(daemon.time, "monotonic", lambda: 3650.0)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
    monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 60.0)
    monkeypatch.setattr(daemon.config, "_debug_log", lambda *_a, **_k: None)
    monkeypatch.setattr(daemon, "_close_interface", lambda iface: None)

    result = daemon._check_inactivity_reconnect(state)
    assert (
        result is True
    ), "Expected reconnect to fire when explicitly disconnected and 61s have elapsed"


def test_inactivity_reconnect_still_throttles_inactivity(monkeypatch):
    """The full inactivity window still throttles reconnects that are not explicit disconnects."""
    state = _make_state(inactivity_reconnect_secs=3600.0)
    # isConnected=True → inactivity-only trigger (no explicit disconnect signal)
    state.iface = DummyInterface(is_connected=True)
    state.iface_connected_at = 0.0
    # now=3700, last_inactivity_reconnect=3691 → 9 s elapsed, well within 3600 s window.
    state.last_inactivity_reconnect = 3691.0

    monkeypatch.setattr(daemon.time, "monotonic", lambda: 3700.0)
    # No recent packet → inactivity_elapsed = 3700 s > inactivity_reconnect_secs (3600 s)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
    monkeypatch.setattr(daemon.config, "_RECONNECT_MAX_DELAY_SECS", 60.0)

    # Even though enough inactive time has passed, last_inactivity_reconnect is
    # only 9 s ago (< 3600 s throttle window) → reconnect is suppressed.
    result = daemon._check_inactivity_reconnect(state)
    assert (
        result is False
    ), "Expected throttle to suppress reconnect when last attempt was 9 s ago"


def test_inactivity_reconnect_logs_queue_depth(monkeypatch):
    """The inactivity reconnect debug log includes the current queue depth."""
    state = _make_state(inactivity_reconnect_secs=30.0)
    state.iface = DummyInterface(is_connected=True)
    state.iface_connected_at = 0.0
    state.last_inactivity_reconnect = None

    monkeypatch.setattr(daemon.time, "monotonic", lambda: 100.0)
    monkeypatch.setattr(daemon.handlers, "last_packet_monotonic", lambda: None)
    monkeypatch.setattr(daemon, "_close_interface", lambda iface: None)

    # Seed the global queue with two dummy items so queue_depth is non-zero.
    from data.mesh_ingestor.queue import STATE, _enqueue_post_json

    _enqueue_post_json("/api/a", {}, 10, state=STATE)
    _enqueue_post_json("/api/b", {}, 20, state=STATE)

    log_kwargs: list[dict] = []
    monkeypatch.setattr(
        daemon.config,
        "_debug_log",
        lambda msg, **kw: log_kwargs.append(kw),
    )

    try:
        result = daemon._check_inactivity_reconnect(state)
        assert result is True
        assert any(
            kw.get("queue_depth") == 2 for kw in log_kwargs
        ), f"Expected queue_depth=2 in log kwargs, got {log_kwargs}"
    finally:
        # Clean up global state so other tests are not affected.
        STATE.queue.clear()


def test_main_exits_early_when_no_instances(monkeypatch):
    """main() returns immediately when no INSTANCE_DOMAIN is configured.

    The queue drainer must NOT be started on the early-exit path.
    """
    monkeypatch.setattr(daemon.config, "INSTANCES", ())
    monkeypatch.setattr(daemon.config, "INSTANCE", "")
    log_msgs: list[str] = []
    monkeypatch.setattr(
        daemon.config,
        "_debug_log",
        lambda msg, **kw: log_msgs.append(msg),
    )
    drainer_calls: list[object] = []
    monkeypatch.setattr(
        daemon.queue,
        "_start_queue_drainer",
        lambda state=None: drainer_calls.append(state),
    )

    provider = _make_minimal_fake_provider("meshtastic")
    daemon.main(provider=provider)

    assert any("no instance_domain" in m.lower() for m in log_msgs)
    assert drainer_calls == [], "Drainer must not start when no instances configured"


def test_main_starts_queue_drainer(monkeypatch):
    """main() calls queue._start_queue_drainer after subscribing."""
    drainer_calls: list[object] = []
    monkeypatch.setattr(
        daemon.queue,
        "_start_queue_drainer",
        lambda state=None: drainer_calls.append(state),
    )

    _patch_daemon_for_fast_exit(monkeypatch)
    provider = _make_minimal_fake_provider("meshtastic")
    daemon.main(provider=provider)

    assert len(drainer_calls) == 1


# ---------------------------------------------------------------------------
# _try_send_self_node
# ---------------------------------------------------------------------------

@@ -227,7 +227,7 @@ def test_region_frequency_and_resolution_helpers():
    assert freq == "915MHz"

    freq = interfaces._region_frequency(LoraMessage(2))
-    assert freq == "US"
+    assert freq == 902  # "US" is in the region lookup table → base 902 MHz

    class StringRegionMessage:
        def __init__(self, region):

@@ -267,6 +267,72 @@ class TestEnumNameFromField:
        assert ifaces._enum_name_from_field(msg, "region", 3) == "US_915"


# ---------------------------------------------------------------------------
# _computed_channel_frequency
# ---------------------------------------------------------------------------


class TestComputedChannelFrequency:
    """Tests for :func:`interfaces._computed_channel_frequency`."""

    def test_none_enum_name_returns_none(self):
        """None enum_name returns None."""
        assert ifaces._computed_channel_frequency(None, 0) is None

    def test_unknown_region_returns_none(self):
        """Enum name not in lookup table returns None."""
        assert ifaces._computed_channel_frequency("UNKNOWN_REGION", 0) is None

    def test_us_channel_0_base_frequency(self):
        """US region, channel 0, returns floor(902.0 + 0*0.25) = 902."""
        assert ifaces._computed_channel_frequency("US", 0) == 902

    def test_us_channel_52_mid_band(self):
        """US region, channel 52, returns floor(902.0 + 52*0.25) = 915."""
        assert ifaces._computed_channel_frequency("US", 52) == 915

    def test_eu_868_channel_0_returns_869(self):
        """EU_868 region, channel 0, returns floor(869.525) = 869, not 868."""
        assert ifaces._computed_channel_frequency("EU_868", 0) == 869

    def test_eu_868_channel_1_returns_870(self):
        """EU_868 region, channel 1, returns floor(869.525 + 0.5) = 870."""
        assert ifaces._computed_channel_frequency("EU_868", 1) == 870

    def test_my_919_channel_0(self):
        """MY_919 region, channel 0, returns floor(919.0) = 919."""
        assert ifaces._computed_channel_frequency("MY_919", 0) == 919

    def test_lora_24_channel_0(self):
        """LORA_24 region, channel 0, returns floor(2400.0) = 2400."""
        assert ifaces._computed_channel_frequency("LORA_24", 0) == 2400

    def test_none_channel_num_defaults_to_zero(self):
        """None channel_num is treated as 0, returning the base frequency."""
        assert ifaces._computed_channel_frequency("ANZ", None) == 916

    def test_negative_channel_num_clamped_to_zero(self):
        """Negative channel_num is clamped to 0, returning the base frequency."""
        assert ifaces._computed_channel_frequency("ANZ", -1) == 916

    def test_result_is_int(self):
        """Return type is int (math.floor result), not float."""
        result = ifaces._computed_channel_frequency("EU_868", 0)
        assert isinstance(result, int)

    def test_nz_865_channel_0(self):
        """NZ_865 region, channel 0, returns floor(864.0) = 864."""
        assert ifaces._computed_channel_frequency("NZ_865", 0) == 864

    def test_br_902_channel_4_spacing_0_25(self):
        """BR_902 region, channel 4, returns floor(902.0 + 4*0.25) = 903."""
        assert ifaces._computed_channel_frequency("BR_902", 4) == 903

    def test_kz_863_channel_0(self):
        """KZ_863 region, channel 0, returns floor(863.125) = 863."""
        assert ifaces._computed_channel_frequency("KZ_863", 0) == 863


# ---------------------------------------------------------------------------
# _region_frequency
# ---------------------------------------------------------------------------
@@ -323,6 +389,65 @@ class TestRegionFrequency:
        msg = SimpleNamespace(DESCRIPTOR=None, override_frequency=None, region="EU433")
        assert ifaces._region_frequency(msg) == "EU433"

    def test_us_enum_lookup_table_used(self):
        """US region with channel_num=0 returns 902 from lookup table, not None."""
        enum_val = SimpleNamespace(name="US")
        enum_type = SimpleNamespace(values_by_number={1: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        msg = SimpleNamespace(
            DESCRIPTOR=desc, override_frequency=None, region=1, channel_num=0
        )
        assert ifaces._region_frequency(msg) == 902

    def test_eu_868_returns_869_not_868(self):
        """EU_868 region returns 869 from lookup table, not 868 parsed from name."""
        enum_val = SimpleNamespace(name="EU_868")
        enum_type = SimpleNamespace(values_by_number={3: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        msg = SimpleNamespace(
            DESCRIPTOR=desc, override_frequency=None, region=3, channel_num=0
        )
        assert ifaces._region_frequency(msg) == 869

    def test_unrecognised_int_falls_through(self):
        """Raw int region with no DESCRIPTOR and value < 100 returns None."""
        msg = SimpleNamespace(DESCRIPTOR=None, override_frequency=None, region=99)
        assert ifaces._region_frequency(msg) is None

    def test_missing_channel_num_attr_uses_base(self):
        """Region in lookup table with no channel_num attribute returns base freq."""
        enum_val = SimpleNamespace(name="MY_919")
        enum_type = SimpleNamespace(values_by_number={17: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        # deliberately no channel_num attribute
        msg = SimpleNamespace(DESCRIPTOR=desc, override_frequency=None, region=17)
        assert ifaces._region_frequency(msg) == 919

    def test_override_takes_priority_over_lookup_table(self):
        """override_frequency takes priority over the lookup table."""
        enum_val = SimpleNamespace(name="EU_868")
        enum_type = SimpleNamespace(values_by_number={3: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        msg = SimpleNamespace(
            DESCRIPTOR=desc, override_frequency=867.3, region=3, channel_num=0
        )
        assert ifaces._region_frequency(msg) == 867

    def test_unknown_enum_name_falls_to_digit_parse(self):
        """Enum name not in lookup table falls through to digit parsing."""
        enum_val = SimpleNamespace(name="FUTURE_999")
        enum_type = SimpleNamespace(values_by_number={99: enum_val})
        field_desc = SimpleNamespace(enum_type=enum_type)
        desc = SimpleNamespace(fields_by_name={"region": field_desc})
        msg = SimpleNamespace(
            DESCRIPTOR=desc, override_frequency=None, region=99, channel_num=0
        )
        assert ifaces._region_frequency(msg) == 999


# ---------------------------------------------------------------------------
# _camelcase_enum_name

+11 -3
@@ -589,10 +589,10 @@ def test_ensure_radio_metadata_extracts_config(mesh_module, capsys):
    first_log = capsys.readouterr().out

    assert iface.wait_calls == 1
-    assert mesh.config.LORA_FREQ == 868
+    assert mesh.config.LORA_FREQ == 869
    assert mesh.config.MODEM_PRESET == "MediumFast"
    assert "Captured LoRa radio metadata" in first_log
-    assert "lora_freq=868" in first_log
+    assert "lora_freq=869" in first_log
    assert "modem_preset='MediumFast'" in first_log

    secondary_lora = make_lora(7, "US_915", 2, "LONG_FAST", preset_field="preset")
@@ -602,7 +602,7 @@ def test_ensure_radio_metadata_extracts_config(mesh_module, capsys):
    second_log = capsys.readouterr().out

    assert second_iface.wait_calls == 1
-    assert mesh.config.LORA_FREQ == 868
+    assert mesh.config.LORA_FREQ == 869
    assert mesh.config.MODEM_PRESET == "MediumFast"
    assert second_log == ""

@@ -1621,6 +1621,8 @@ def test_main_retries_interface_creation(mesh_module, monkeypatch):
            raise RuntimeError("boom")
        return iface, port

    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
    monkeypatch.setattr(mesh, "CONNECTION", "/dev/ttyTEST")
    monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
    monkeypatch.setattr(mesh.threading, "Event", DummyEvent)
@@ -1693,6 +1695,8 @@ def test_main_reconnects_when_connection_event_clears(mesh_module, monkeypatch):
            self._flag = True
            return True

    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
    monkeypatch.setattr(mesh, "CONNECTION", "/dev/ttyTEST")
    monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
    monkeypatch.setattr(mesh.threading, "Event", DummyStopEvent)
@@ -1757,6 +1761,8 @@ def test_main_recreates_interface_after_snapshot_error(mesh_module, monkeypatch)
    def record_upsert(node_id, node):
        upsert_calls.append(node_id)

    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
    monkeypatch.setattr(mesh, "CONNECTION", "/dev/ttyTEST")
    monkeypatch.setattr(mesh, "_create_serial_interface", fake_create)
    monkeypatch.setattr(mesh, "upsert_node", record_upsert)
@@ -1779,6 +1785,8 @@ def test_main_exits_when_defaults_unavailable(mesh_module, monkeypatch):
    def fail_default():
        raise mesh.NoAvailableMeshInterface("no interface available")

    monkeypatch.setattr(mesh, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(mesh, "INSTANCE", "http://test")
    monkeypatch.setattr(mesh, "CONNECTION", None)
    monkeypatch.setattr(mesh, "_create_default_interface", fail_default)
    monkeypatch.setattr(mesh.signal, "signal", lambda *_, **__: None)

@@ -125,6 +125,8 @@ def test_daemon_main_uses_provider_connect(monkeypatch):
        ),
    )

    monkeypatch.setattr(daemon.config, "INSTANCES", (("http://test", ""),))
    monkeypatch.setattr(daemon.config, "INSTANCE", "http://test")
    monkeypatch.setattr(
        daemon.handlers, "register_host_node_id", lambda *_a, **_k: None
    )

+715 -2
@@ -17,6 +17,7 @@ from __future__ import annotations

import sys
import threading
import time
import urllib.error
import urllib.request
from pathlib import Path
@@ -29,13 +30,20 @@ if str(REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(REPO_ROOT))

import data.mesh_ingestor.config as config
import data.mesh_ingestor.queue as _queue_mod
from data.mesh_ingestor.queue import (
    QueueState,
    _clear_post_queue,
    _drain_post_queue,
    _enqueue_post_json,
    _MAX_SEND_RETRIES,
    _post_json,
    _QUEUE_DEPTH_WARNING_THRESHOLD,
    _queue_drainer_loop,
    _queue_post_json,
    _send_single,
    _start_queue_drainer,
    _stop_queue_drainer,
    _CHANNEL_POST_PRIORITY,
    _DEFAULT_POST_PRIORITY,
    _INGESTOR_POST_PRIORITY,
@@ -185,10 +193,11 @@ class TestEnqueuePostJson:
        state = _fresh_state()
        _enqueue_post_json("/api/test", {"k": 1}, 50, state=state)
        assert len(state.queue) == 1
-        priority, _counter, path, payload = state.queue[0]
+        priority, _counter, path, payload, retries = state.queue[0]
        assert priority == 50
        assert path == "/api/test"
        assert payload == {"k": 1}
        assert retries == 0

    def test_heap_ordering(self):
        """Lower priority values are dequeued first (min-heap)."""
@@ -197,7 +206,7 @@ class TestEnqueuePostJson:
        state = _fresh_state()
        _enqueue_post_json("/api/low", {}, 90, state=state)
        _enqueue_post_json("/api/high", {}, 10, state=state)
-        _priority, _counter, path, _payload = heapq.heappop(state.queue)
+        _priority, _counter, path, _payload, _retries = heapq.heappop(state.queue)
        assert path == "/api/high"
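The min-heap ordering these tests exercise relies on tuple comparison: `heapq` pops the smallest `(priority, counter, ...)` entry, and a monotonically increasing counter breaks ties so equal-priority items stay FIFO (and the heap never compares the dict payloads). A standalone sketch with hypothetical paths:

```python
import heapq
import itertools

queue: list = []
counter = itertools.count()

def enqueue(path: str, payload: dict, priority: int) -> None:
    # retries starts at 0, mirroring the 5-tuple entries asserted above.
    heapq.heappush(queue, (priority, next(counter), path, payload, 0))

enqueue("/api/low", {}, 90)
enqueue("/api/high", {}, 10)
enqueue("/api/also-high", {}, 10)

# Pops in (priority, counter) order: both 10s before the 90, FIFO within the 10s.
order = [heapq.heappop(queue)[2] for _ in range(3)]
```

Without the counter, two entries with equal priority would fall through to comparing their dict payloads, which raises `TypeError`; the counter guarantees the comparison always settles on the first two fields.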
|
||||
|
||||
def test_counter_increments(self):
|
||||
@@ -483,3 +492,707 @@ class TestMultiInstanceFanOut:
|
||||
assert len(captured) == 1
|
||||
assert "http://legacy" in captured[0].get_full_url()
|
||||
assert captured[0].get_header("Authorization") == "Bearer tok"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# HTTP failure always-logging
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_http_failure_always_logged(monkeypatch):
|
||||
"""POST failures are logged with always=True regardless of DEBUG mode.
|
||||
|
||||
Operators must be able to see HTTP errors without enabling DEBUG so they
|
||||
can tell whether the ingestor is silently dropping data.
|
||||
"""
|
||||
monkeypatch.setattr(config, "INSTANCES", (("http://localhost", ""),))
|
||||
monkeypatch.setattr(config, "INSTANCE", "http://localhost")
|
||||
monkeypatch.setattr(config, "DEBUG", False)
|
||||
|
||||
log_calls: list[dict] = []
|
||||
original_debug_log = config._debug_log
|
||||
|
||||
def capture_debug_log(msg, **kwargs):
|
||||
log_calls.append(kwargs)
|
||||
original_debug_log(msg, **kwargs)
|
||||
|
||||
monkeypatch.setattr(config, "_debug_log", capture_debug_log)
|
||||
|
||||
def raise_error(req, timeout=None):
|
||||
raise OSError("connection refused")
|
||||
|
||||
with patch("urllib.request.urlopen", raise_error):
|
||||
_send_single("http://localhost", "", "/api/test", {"x": 1})
|
||||
|
||||
assert any(
|
||||
c.get("always") is True for c in log_calls
|
||||
), "Expected at least one _debug_log call with always=True on HTTP failure"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
# Background drain thread
# ---------------------------------------------------------------------------


class TestQueueDrainer:
    """Tests for :func:`_start_queue_drainer` and :func:`_queue_drainer_loop`."""

    def test_start_queue_drainer_starts_thread(self):
        """_start_queue_drainer creates and starts a daemon thread."""
        state = _fresh_state()
        assert state.drainer is None
        _start_queue_drainer(state)
        assert state.drainer is not None
        assert state.drainer.is_alive()
        _stop_queue_drainer(state)

    def test_start_queue_drainer_idempotent(self):
        """Calling _start_queue_drainer twice does not create a second thread."""
        state = _fresh_state()
        _start_queue_drainer(state)
        first_thread = state.drainer
        _start_queue_drainer(state)
        assert state.drainer is first_thread
        _stop_queue_drainer(state)

    def test_queue_drainer_loop_drains_items(self):
        """_queue_drainer_loop drains enqueued items when signalled."""
        state = _fresh_state()
        drained: list[str] = []

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        try:
            _start_queue_drainer(state)
            _enqueue_post_json("/api/drainer-test", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/drainer-test" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/drainer-test" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_queue_post_json_signals_drain_event_with_drainer(self):
        """When a drainer is alive, _queue_post_json signals drain_event instead of blocking."""
        state = _fresh_state()
        drained: list[str] = []

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        try:
            _start_queue_drainer(state)
            # With a live drainer, the call should return immediately
            # (signal only) and the drainer processes the item in the background.
            _queue_post_json("/api/bg-test", {"k": 1}, priority=10, state=state)
            deadline = time.monotonic() + 2.0
            while "/api/bg-test" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/bg-test" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_queue_post_json_falls_back_to_sync_drain_without_drainer(self):
        """When no drainer is running, _queue_post_json drains synchronously."""
        state = _fresh_state()
        # state.drainer is None → synchronous path
        sent: list[str] = []
        _queue_post_json(
            "/api/sync",
            {"v": 1},
            priority=10,
            state=state,
            send=lambda p, d: sent.append(p),
        )
        assert "/api/sync" in sent

    def test_enqueue_during_drain_is_processed(self):
        """Items enqueued while the drainer is mid-drain are still drained.

        Simulates the race where a new item arrives while
        ``_drain_post_queue`` is actively processing. The new item must
        be picked up within the same drain cycle or on the next signal.
        """
        state = _fresh_state()
        drained: list[str] = []
        gate = threading.Event()

        original_post_json = _queue_mod._post_json

        def slow_send(path, payload):
            """Drain the first item slowly, allowing a second enqueue."""
            drained.append(path)
            if path == "/api/first":
                gate.set()

        _queue_mod._post_json = slow_send
        try:
            _start_queue_drainer(state)
            _enqueue_post_json("/api/first", {}, 10, state=state)
            state.drain_event.set()
            # Wait until the drainer has started processing /api/first.
            gate.wait(timeout=2.0)
            # Enqueue a second item while the drainer is active.
            _enqueue_post_json("/api/second", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/second" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/second" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_stop_queue_drainer(self):
        """_stop_queue_drainer signals the thread to exit and joins it."""
        state = _fresh_state()
        _start_queue_drainer(state)
        assert state.drainer is not None
        assert state.drainer.is_alive()
        _stop_queue_drainer(state)
        assert state.drainer is None
        assert state.shutdown.is_set()

    def test_stop_queue_drainer_noop_when_not_running(self):
        """_stop_queue_drainer is safe to call with no drainer."""
        state = _fresh_state()
        _stop_queue_drainer(state)
        assert state.drainer is None

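For orientation, the drainer contract these tests exercise — a daemon thread woken by `drain_event`, stopped via a `shutdown` event, started idempotently — can be sketched as a minimal, self-contained model. All names here (`QueueState`, `drainer_loop`, `start_drainer`) are illustrative stand-ins, not the ingestor's actual implementation:

```python
import heapq
import threading


class QueueState:
    """Minimal stand-in for the module's queue state (illustrative only)."""

    def __init__(self):
        self.queue = []  # heapq of (priority, counter, path, payload) tuples
        self.lock = threading.Lock()
        self.drain_event = threading.Event()
        self.shutdown = threading.Event()
        self.drainer = None


def drainer_loop(state, send):
    """Wait for a signal, drain the priority queue, repeat until shutdown."""
    while not state.shutdown.is_set():
        state.drain_event.wait(timeout=1.0)
        state.drain_event.clear()
        while True:
            with state.lock:
                if not state.queue:
                    break
                _prio, _cnt, path, payload = heapq.heappop(state.queue)
            send(path, payload)  # network I/O happens outside the lock


def start_drainer(state, send):
    """Start the daemon drainer once; idempotent like _start_queue_drainer."""
    if state.drainer is not None and state.drainer.is_alive():
        return
    state.shutdown.clear()
    state.drainer = threading.Thread(
        target=drainer_loop, args=(state, send), daemon=True
    )
    state.drainer.start()
```

The key design point mirrored from the tests: enqueueing threads only push under the lock and set `drain_event`, so they never block on HTTP; the drainer does all sends.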
# ---------------------------------------------------------------------------
# Drainer resilience
# ---------------------------------------------------------------------------


class TestDrainerResilience:
    """Tests verifying the drainer thread cannot be killed by exceptions."""

    def test_drainer_survives_drain_exception(self, monkeypatch):
        """The drainer loop keeps running after _drain_post_queue raises."""
        state = _fresh_state()
        drained: list[str] = []
        call_count = [0]

        original_drain = _queue_mod._drain_post_queue

        def flaky_drain(s, send=None):
            call_count[0] += 1
            if call_count[0] == 1:
                raise RuntimeError("transient drain error")
            original_drain(s, send=send)

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(_queue_mod, "_drain_post_queue", flaky_drain)
        try:
            _start_queue_drainer(state)
            # First signal triggers the RuntimeError; drainer should survive.
            _enqueue_post_json("/api/first", {}, 10, state=state)
            state.drain_event.set()
            time.sleep(0.2)
            assert state.drainer.is_alive(), "Drainer died after drain exception"
            # Second signal should succeed normally.
            _enqueue_post_json("/api/second", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/second" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/second" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_drainer_survives_debug_log_exception(self, monkeypatch):
        """The drainer survives even when _debug_log raises inside the error handler."""
        state = _fresh_state()
        drained: list[str] = []
        call_count = [0]

        original_drain = _queue_mod._drain_post_queue

        def flaky_drain(s, send=None):
            call_count[0] += 1
            if call_count[0] == 1:
                raise RuntimeError("drain error")
            original_drain(s, send=send)

        def broken_log(*args, **kwargs):
            raise BrokenPipeError("stdout closed")

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(_queue_mod, "_drain_post_queue", flaky_drain)
        monkeypatch.setattr(config, "_debug_log", broken_log)
        try:
            _start_queue_drainer(state)
            _enqueue_post_json("/api/first", {}, 10, state=state)
            state.drain_event.set()
            time.sleep(0.2)
            assert state.drainer.is_alive(), "Drainer died after log exception"
            # Restore log so the second drain can proceed.
            monkeypatch.undo()
            _queue_mod._post_json = lambda path, payload: drained.append(path)
            monkeypatch.setattr(_queue_mod, "_drain_post_queue", original_drain)
            _enqueue_post_json("/api/second", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/second" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/second" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_drainer_logs_startup(self, monkeypatch):
        """The drainer logs a startup message."""
        state = _fresh_state()
        log_msgs: list[str] = []
        monkeypatch.setattr(
            config, "_debug_log", lambda msg, **kw: log_msgs.append(msg)
        )
        _start_queue_drainer(state)
        time.sleep(0.1)
        _stop_queue_drainer(state)
        assert any("started" in m.lower() for m in log_msgs)

    def test_drainer_logs_exit(self, monkeypatch):
        """The drainer logs an exit message on clean shutdown."""
        state = _fresh_state()
        log_msgs: list[str] = []
        monkeypatch.setattr(
            config, "_debug_log", lambda msg, **kw: log_msgs.append(msg)
        )
        _start_queue_drainer(state)
        time.sleep(0.1)
        _stop_queue_drainer(state)
        assert any("exiting" in m.lower() for m in log_msgs)

    def test_drainer_logs_depth_warning(self, monkeypatch):
        """A warning is emitted when queue depth exceeds the threshold."""
        state = _fresh_state()
        log_kwargs: list[dict] = []
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda msg, **kw: log_kwargs.append({"msg": msg, **kw}),
        )

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: None
        try:
            for i in range(_QUEUE_DEPTH_WARNING_THRESHOLD + 1):
                _enqueue_post_json(f"/api/{i}", {}, 10, state=state)
            _start_queue_drainer(state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while (
                not any("depth" in e.get("msg", "").lower() for e in log_kwargs)
                and time.monotonic() < deadline
            ):
                time.sleep(0.01)
            assert any("depth" in e.get("msg", "").lower() for e in log_kwargs)
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

# ---------------------------------------------------------------------------
# Retry logic
# ---------------------------------------------------------------------------


class TestRetryLogic:
    """Tests for send failure retry in :func:`_drain_post_queue`."""

    def test_send_single_returns_true_on_success(self, monkeypatch):
        """_send_single returns True when the HTTP call succeeds."""
        with patch("urllib.request.urlopen", lambda req, timeout=None: _FakeResp()):
            assert _send_single("http://localhost", "", "/api/ok", {}) is True

    def test_send_single_returns_false_on_failure(self, monkeypatch):
        """_send_single returns False when the HTTP call fails."""
        monkeypatch.setattr(config, "_debug_log", lambda *a, **kw: None)

        def raise_error(req, timeout=None):
            raise OSError("fail")

        with patch("urllib.request.urlopen", raise_error):
            assert _send_single("http://localhost", "", "/api/fail", {}) is False

    def test_post_json_returns_true_on_success(self, monkeypatch):
        """_post_json returns True when the instance succeeds."""
        monkeypatch.setattr(config, "INSTANCES", (("http://ok", ""),))
        with patch("urllib.request.urlopen", lambda req, timeout=None: _FakeResp()):
            assert _post_json("/api/ok", {}) is True

    def test_post_json_returns_false_when_all_fail(self, monkeypatch):
        """_post_json returns False when all instances fail."""
        monkeypatch.setattr(config, "INSTANCES", (("http://a", ""), ("http://b", "")))
        monkeypatch.setattr(config, "_debug_log", lambda *a, **kw: None)

        def raise_error(req, timeout=None):
            raise OSError("fail")

        with patch("urllib.request.urlopen", raise_error):
            assert _post_json("/api/fail", {}) is False

    def test_post_json_returns_true_when_at_least_one_succeeds(self, monkeypatch):
        """_post_json returns True when at least one instance succeeds."""
        monkeypatch.setattr(
            config, "INSTANCES", (("http://broken", ""), ("http://ok", ""))
        )
        monkeypatch.setattr(config, "_debug_log", lambda *a, **kw: None)

        def selective_urlopen(req, timeout=None):
            if "broken" in req.get_full_url():
                raise OSError("fail")
            return _FakeResp()

        with patch("urllib.request.urlopen", selective_urlopen):
            assert _post_json("/api/mixed", {}) is True

    def test_drain_retries_on_send_failure(self):
        """Items are re-queued and retried when send returns False."""
        state = _fresh_state()
        attempts: list[str] = []
        call_count = [0]

        def flaky_send(path, payload):
            call_count[0] += 1
            attempts.append(path)
            # Fail on first attempt, succeed on retry.
            return call_count[0] > 1

        _enqueue_post_json("/api/retry", {"v": 1}, 10, state=state)
        _drain_post_queue(state, send=flaky_send)
        assert attempts.count("/api/retry") == 2

    def test_drain_drops_after_max_retries(self, monkeypatch):
        """Items are dropped with a warning after exceeding max retries."""
        state = _fresh_state()
        attempts: list[str] = []
        log_kwargs: list[dict] = []
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda msg, **kw: log_kwargs.append({"msg": msg, **kw}),
        )

        def always_fail(path, payload):
            attempts.append(path)
            return False

        _enqueue_post_json("/api/doomed", {}, 10, state=state)
        _drain_post_queue(state, send=always_fail)
        # Initial attempt + _MAX_SEND_RETRIES retries.
        assert attempts.count("/api/doomed") == _MAX_SEND_RETRIES + 1
        assert any("dropping" in e.get("msg", "").lower() for e in log_kwargs)

    def test_drain_no_retry_for_none_return(self):
        """Custom send callables returning None are NOT retried.

        This preserves backward compatibility with test lambdas that do not
        return a boolean.
        """
        state = _fresh_state()
        attempts: list[str] = []

        def custom_send(path, payload):
            attempts.append(path)
            return None

        _enqueue_post_json("/api/once", {}, 10, state=state)
        _drain_post_queue(state, send=custom_send)
        assert attempts.count("/api/once") == 1

    def test_enqueue_with_retries_parameter(self):
        """_enqueue_post_json stores the retry count in the 5th tuple position."""
        state = _fresh_state()
        _enqueue_post_json("/api/r", {}, 10, state=state, retries=2)
        assert len(state.queue) == 1
        assert state.queue[0][4] == 2

    def test_drain_handles_legacy_4_tuple(self):
        """_drain_post_queue handles 4-tuple items without crashing."""
        import heapq

        state = _fresh_state()
        sent: list[str] = []
        # Push a legacy 4-tuple directly.
        with state.lock:
            heapq.heappush(state.queue, (10, 0, "/api/legacy", {"v": 1}))
        _drain_post_queue(state, send=lambda p, d: sent.append(p))
        assert "/api/legacy" in sent

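The retry semantics pinned down by these tests — `False` from `send` re-queues with an incremented retry count, `None` is treated as success for legacy callables, and items are dropped after the cap — can be reproduced in a small self-contained sketch. `MAX_SEND_RETRIES` and `drain_once` are illustrative names; the real `_MAX_SEND_RETRIES` value may differ:

```python
import heapq

MAX_SEND_RETRIES = 3  # illustrative cap; the real _MAX_SEND_RETRIES may differ


def drain_once(queue, send, max_retries=MAX_SEND_RETRIES):
    """Drain a heapq of (priority, counter, path, payload, retries) tuples.

    A send() that returns False re-queues the item until max_retries is
    exceeded; a send() that returns None (legacy callable) counts as success.
    """
    counter = 10**6  # keep re-queued counters distinct from original ones
    while queue:
        prio, _cnt, path, payload, retries = heapq.heappop(queue)
        ok = send(path, payload)
        if ok is False:
            if retries < max_retries:
                counter += 1
                heapq.heappush(queue, (prio, counter, path, payload, retries + 1))
            # else: drop the item after exceeding max retries
```

An always-failing send is attempted `max_retries + 1` times in total (the initial try plus the retries), matching `test_drain_drops_after_max_retries`.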
# ---------------------------------------------------------------------------
# Drainer auto-restart
# ---------------------------------------------------------------------------


class TestDrainerAutoRestart:
    """Tests for automatic drainer thread recovery in :func:`_queue_post_json`."""

    def test_queue_post_json_restarts_dead_drainer(self, monkeypatch):
        """A dead drainer is automatically restarted by _queue_post_json."""
        state = _fresh_state()
        drained: list[str] = []

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(config, "_debug_log", lambda *a, **kw: None)
        try:
            # Start and then kill the drainer.
            _start_queue_drainer(state)
            _stop_queue_drainer(state)
            # _stop_queue_drainer sets drainer=None, so simulate a crash
            # where the Thread object is still present but dead.
            state.drainer = threading.Thread(target=lambda: None, daemon=True)
            state.drainer.start()
            state.drainer.join()  # Dead thread, is_alive()=False

            _queue_post_json("/api/revived", {"v": 1}, priority=10, state=state)
            deadline = time.monotonic() + 2.0
            while "/api/revived" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/revived" in drained
            assert state.drainer is not None
            assert state.drainer.is_alive()
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_queue_post_json_no_restart_when_never_started(self):
        """No drainer is started when state.drainer is None (daemon's job)."""
        state = _fresh_state()
        assert state.drainer is None
        sent: list[str] = []
        _queue_post_json(
            "/api/no-restart",
            {},
            priority=10,
            state=state,
            send=lambda p, d: sent.append(p),
        )
        assert "/api/no-restart" in sent
        assert state.drainer is None

    def test_start_queue_drainer_resets_shutdown(self):
        """_start_queue_drainer clears the shutdown event before starting."""
        state = _fresh_state()
        _start_queue_drainer(state)
        _stop_queue_drainer(state)
        assert state.shutdown.is_set()
        # Re-start should clear shutdown and start a live thread.
        _start_queue_drainer(state)
        assert not state.shutdown.is_set()
        assert state.drainer is not None
        assert state.drainer.is_alive()
        _stop_queue_drainer(state)

# ---------------------------------------------------------------------------
# No-instances warning
# ---------------------------------------------------------------------------


class TestNoInstancesWarning:
    """Tests for the warning log when no target instances are configured."""

    def test_post_json_errors_when_no_instances(self, monkeypatch):
        """An error is logged when INSTANCES and INSTANCE are both empty."""
        monkeypatch.setattr(config, "INSTANCES", ())
        monkeypatch.setattr(config, "INSTANCE", "")
        log_kwargs: list[dict] = []
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda msg, **kw: log_kwargs.append({"msg": msg, **kw}),
        )

        result = _post_json("/api/nowhere", {"v": 1})

        assert result is False
        assert any(
            kw.get("always") is True and kw.get("severity") == "error"
            for kw in log_kwargs
        )

    def test_post_json_survives_log_exception_on_no_instances(self, monkeypatch):
        """_post_json still returns False when logging itself raises."""
        monkeypatch.setattr(config, "INSTANCES", ())
        monkeypatch.setattr(config, "INSTANCE", "")
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(OSError("log broken")),
        )
        assert _post_json("/api/nowhere", {}) is False

# ---------------------------------------------------------------------------
# Defensive exception guard coverage
# ---------------------------------------------------------------------------


class TestDefensiveExceptionGuards:
    """Cover the ``except Exception: pass`` guards wrapping ``_debug_log`` calls.

    These guards ensure that a broken logging backend (e.g. ``BrokenPipeError``
    from ``print()`` to a closed stdout) never crashes the drainer thread or
    drops data.
    """

    def test_drain_drop_log_exception(self, monkeypatch):
        """Max-retries drop path survives a broken _debug_log."""
        state = _fresh_state()
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )

        attempts: list[str] = []

        def always_fail(path, payload):
            attempts.append(path)
            return False

        _enqueue_post_json("/api/fail", {}, 10, state=state)
        # Should not raise even though _debug_log throws on the drop message.
        _drain_post_queue(state, send=always_fail)
        assert attempts.count("/api/fail") == _MAX_SEND_RETRIES + 1

    def test_drainer_startup_log_exception(self, monkeypatch):
        """Drainer thread starts even when the startup log raises."""
        state = _fresh_state()
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )
        _start_queue_drainer(state)
        time.sleep(0.15)
        assert state.drainer is not None
        assert state.drainer.is_alive()
        # Restore log so stop can log cleanly.
        monkeypatch.undo()
        _stop_queue_drainer(state)

    def test_drainer_exit_log_exception(self, monkeypatch):
        """Drainer thread exits cleanly even when the exit log raises."""
        state = _fresh_state()
        _start_queue_drainer(state)
        time.sleep(0.05)
        # Break _debug_log AFTER startup so only the exit log raises.
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )
        _stop_queue_drainer(state)
        assert state.drainer is None

    def test_drainer_depth_warning_log_exception(self, monkeypatch):
        """Drainer survives a broken _debug_log during depth warning."""
        state = _fresh_state()
        drained: list[str] = []

        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        try:
            _start_queue_drainer(state)
            time.sleep(0.05)
            # Break _debug_log so the depth warning raises.
            monkeypatch.setattr(
                config,
                "_debug_log",
                lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
            )
            for i in range(_QUEUE_DEPTH_WARNING_THRESHOLD + 1):
                _enqueue_post_json(f"/api/{i}", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while (
                len(drained) < _QUEUE_DEPTH_WARNING_THRESHOLD + 1
                and time.monotonic() < deadline
            ):
                time.sleep(0.01)
            assert len(drained) == _QUEUE_DEPTH_WARNING_THRESHOLD + 1
        finally:
            _queue_mod._post_json = original_post_json
            monkeypatch.undo()
            _stop_queue_drainer(state)

    def test_drainer_error_handler_log_exception(self, monkeypatch):
        """Drainer survives when both drain and error-log raise."""
        state = _fresh_state()
        call_count = [0]
        original_drain = _queue_mod._drain_post_queue

        def flaky_drain(s, send=None):
            call_count[0] += 1
            if call_count[0] == 1:
                raise RuntimeError("drain boom")
            original_drain(s, send=send)

        drained: list[str] = []
        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(_queue_mod, "_drain_post_queue", flaky_drain)
        # _debug_log raises on the error handler's inner logging call.
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )
        try:
            _start_queue_drainer(state)
            _enqueue_post_json("/api/first", {}, 10, state=state)
            state.drain_event.set()
            time.sleep(0.3)
            assert state.drainer.is_alive()
            # Restore to process an item normally.
            monkeypatch.undo()
            _queue_mod._post_json = lambda path, payload: drained.append(path)
            monkeypatch.setattr(_queue_mod, "_drain_post_queue", original_drain)
            _enqueue_post_json("/api/second", {}, 10, state=state)
            state.drain_event.set()
            deadline = time.monotonic() + 2.0
            while "/api/second" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/second" in drained
        finally:
            _queue_mod._post_json = original_post_json
            _stop_queue_drainer(state)

    def test_restart_warning_log_exception(self, monkeypatch):
        """Drainer restart proceeds even when the restart warning log raises."""
        state = _fresh_state()
        drained: list[str] = []
        original_post_json = _queue_mod._post_json
        _queue_mod._post_json = lambda path, payload: drained.append(path)
        monkeypatch.setattr(
            config,
            "_debug_log",
            lambda *a, **kw: (_ for _ in ()).throw(BrokenPipeError("broken")),
        )
        try:
            # Simulate a crashed drainer (dead Thread, not None).
            state.drainer = threading.Thread(target=lambda: None, daemon=True)
            state.drainer.start()
            state.drainer.join()
            assert not state.drainer.is_alive()

            _queue_post_json("/api/restarted", {"v": 1}, priority=10, state=state)
            deadline = time.monotonic() + 2.0
            while "/api/restarted" not in drained and time.monotonic() < deadline:
                time.sleep(0.01)
            assert "/api/restarted" in drained
        finally:
            _queue_mod._post_json = original_post_json
            monkeypatch.undo()
            _stop_queue_drainer(state)

@@ -57,6 +57,7 @@ require_relative "application/meshtastic/cipher"
 require_relative "application/meshtastic/payload_decoder"
 require_relative "application/data_processing"
 require_relative "application/filesystem"
+require_relative "application/api_cache"
 require_relative "application/pages"
 require_relative "application/instances"
 require_relative "application/routes/api"

@@ -0,0 +1,163 @@
|
||||
# Copyright © 2025-26 l5yth & contributors
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# frozen_string_literal: true
|
||||
|
||||
require "digest"
|
||||
|
||||
module PotatoMesh
|
||||
module App
|
||||
# Thread-safe in-memory cache for serialised API responses.
|
||||
#
|
||||
# Each entry is stored with a monotonic expiration time and a pre-computed
|
||||
# ETag so the route handler can skip recomputing the digest on cache hits.
|
||||
#
|
||||
# The cache is bounded to {MAX_ENTRIES} to prevent unbounded memory growth
|
||||
# from attacker-controlled query parameters. When the limit is reached the
|
||||
# oldest entry by insertion order is evicted (LRU-ish via Ruby hash ordering).
|
||||
#
|
||||
# Invalidation can target a specific prefix (e.g. +"api:nodes:"+) so that an
|
||||
# ingest POST to +/api/messages+ does not flush the neighbors cache.
|
||||
# A single-flight guard coalesces concurrent misses for the same key so only
|
||||
# one thread computes the value while others wait for the result.
|
||||
module ApiCache
|
||||
# Hard cap on the number of cached entries to prevent memory exhaustion.
|
||||
# With the whitelisted protocol values and known limit set, the realistic
|
||||
# key space is ~30 entries. 64 provides generous headroom.
|
||||
MAX_ENTRIES = 64
|
||||
|
||||
@store = {}
|
||||
@inflight = {}
|
||||
@mutex = Mutex.new
|
||||
|
||||
class << self
|
||||
# Retrieve a cached value or compute and store it.
|
||||
#
|
||||
# When multiple threads request the same cold key concurrently only one
|
||||
# executes the block; the others wait for the result (single-flight).
|
||||
#
|
||||
# The returned hash contains both +:value+ (the JSON string) and +:etag+
|
||||
# (pre-computed weak ETag) so callers can set the header without
|
||||
# re-hashing the body.
|
||||
#
|
||||
# @param key [String] cache key incorporating all relevant query
|
||||
# parameters (limit, protocol, etc.).
|
||||
# @param ttl_seconds [Numeric] time-to-live for the cached entry.
|
||||
# @yield Computes the value to cache when the entry is missing or
|
||||
# expired. The block should return the serialised JSON string.
|
||||
# @return [Hash{Symbol => String}] +:value+ and +:etag+ of the response.
|
||||
def fetch(key, ttl_seconds:)
|
||||
now = monotonic_now
|
||||
|
||||
@mutex.synchronize do
|
||||
entry = @store[key]
|
||||
if entry && now < entry[:expires_at]
|
||||
return { value: entry[:value], etag: entry[:etag] }
|
||||
end
|
||||
|
||||
# Single-flight: if another thread is already computing this key,
|
||||
# wait for it to finish and use its result. The loop guards
|
||||
# against spurious wakeups from ConditionVariable#wait.
|
||||
while @inflight.key?(key)
|
||||
cv = @inflight[key]
|
||||
cv.wait(@mutex)
|
||||
entry = @store[key]
|
||||
if entry && monotonic_now < entry[:expires_at]
|
||||
return { value: entry[:value], etag: entry[:etag] }
|
||||
end
|
||||
end
|
||||
|
||||
# Mark this key as in-flight so concurrent requests wait.
@inflight[key] = ConditionVariable.new
end

value = yield
etag = Digest::MD5.hexdigest(value)

@mutex.synchronize do
evict_oldest_if_full
@store[key] = { value: value, etag: etag, expires_at: monotonic_now + ttl_seconds }
cv = @inflight.delete(key)
cv&.broadcast
end

{ value: value, etag: etag }
rescue => e
# On error, unblock any waiters and re-raise.
@mutex.synchronize do
cv = @inflight.delete(key)
cv&.broadcast
end
raise e
end

# Remove entries whose keys start with any of the given prefixes.
#
# Targeted invalidation so that e.g. a messages POST does not flush the
# neighbors or telemetry caches.
#
# @param prefixes [Array<String>] key prefixes to match.
# @return [void]
def invalidate_prefix(*prefixes)
@mutex.synchronize do
@store.reject! do |key, _|
prefixes.any? { |p| key.start_with?(p) }
end
end
end

# Remove all entries from the cache.
#
# @return [void]
def invalidate_all
@mutex.synchronize { @store.clear }
end

# Remove specific entries by exact key.
#
# @param keys [Array<String>] cache keys to evict.
# @return [void]
def invalidate(*keys)
@mutex.synchronize do
keys.each { |k| @store.delete(k) }
end
end

# Return the number of entries currently held in the cache.
#
# @return [Integer] entry count.
def size
@mutex.synchronize { @store.size }
end

private

# Use the monotonic clock so TTL calculations are immune to wall-clock
# adjustments (NTP jumps, DST transitions, etc.).
def monotonic_now
Process.clock_gettime(Process::CLOCK_MONOTONIC)
end

# Evict the oldest entry when the store is at capacity. Ruby hashes
# preserve insertion order, so +first+ is the oldest key.
def evict_oldest_if_full
while @store.size >= MAX_ENTRIES
oldest_key = @store.each_key.first
@store.delete(oldest_key)
end
end
end
end
end
end
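The fetch/invalidate shape above can be reduced to a minimal sketch. This is illustrative only: the class name `TinyCache` is invented here, and it omits the real cache's mutex, single-flight ConditionVariable coordination, and size cap.

```ruby
require "digest"

# Minimal TTL cache mirroring the fetch/invalidate shape above.
# Illustrative sketch: no locking, no single-flight, no size cap.
class TinyCache
  def initialize
    @store = {}
  end

  # Return { value:, etag: }, recomputing via the block once the TTL lapses.
  def fetch(key, ttl_seconds: 15)
    now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    entry = @store[key]
    return { value: entry[:value], etag: entry[:etag] } if entry && entry[:expires_at] > now

    value = yield
    etag = Digest::MD5.hexdigest(value)
    @store[key] = { value: value, etag: etag, expires_at: now + ttl_seconds }
    { value: value, etag: etag }
  end

  # Drop entries whose keys begin with any given prefix.
  def invalidate_prefix(*prefixes)
    @store.reject! { |key, _| prefixes.any? { |p| key.start_with?(p) } }
  end
end
```

Within the TTL the block runs once per key; `invalidate_prefix` then forces the next fetch to recompute, which is the targeted-invalidation behaviour the comments above describe.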
@@ -171,16 +171,20 @@ module PotatoMesh
return if existing

long_name = "#{protocol_display_label(protocol)} #{short_id}"
default_role = case protocol
when "meshcore" then "COMPANION"
else "CLIENT_HIDDEN"
end
heard_time = coerce_integer(heard_time)
inserted = false

with_busy_retry do
db.execute(
<<~SQL,
INSERT OR IGNORE INTO nodes(node_id,num,short_name,long_name,role,last_heard,first_heard)
VALUES (?,?,?,?,?,?,?)
INSERT OR IGNORE INTO nodes(node_id,num,short_name,long_name,role,last_heard,first_heard,protocol)
VALUES (?,?,?,?,?,?,?,?)
SQL
[node_id, node_num, short_id, long_name, "CLIENT_HIDDEN", heard_time, heard_time],
[node_id, node_num, short_id, long_name, default_role, heard_time, heard_time, protocol],
)
inserted = db.changes.positive?
end

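The role defaulting introduced here can be isolated as a one-line case expression. The helper name `default_role_for` is invented for illustration; the mapping itself matches the diff above.

```ruby
# Illustrative helper mirroring the case expression above: MeshCore
# placeholder nodes default to COMPANION, everything else to CLIENT_HIDDEN.
def default_role_for(protocol)
  case protocol
  when "meshcore" then "COMPANION"
  else "CLIENT_HIDDEN"
  end
end
```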
@@ -147,6 +147,14 @@ module PotatoMesh
db.execute("CREATE INDEX IF NOT EXISTS idx_nodes_long_name ON nodes(long_name)")
end
end

# Backfill #747: ensure_unknown_node previously omitted the protocol
# column and hardcoded role=CLIENT_HIDDEN, causing meshcore placeholder
# nodes to be stored as meshtastic/CLIENT_HIDDEN. Fix both in one pass.
if node_columns.include?("protocol")
db.execute("UPDATE nodes SET protocol = 'meshcore' WHERE long_name LIKE 'Meshcore %' AND protocol = 'meshtastic'")
db.execute("UPDATE nodes SET role = 'COMPANION' WHERE protocol = 'meshcore' AND role = 'CLIENT_HIDDEN'")
end
end

message_table_exists = db.get_first_value(
@@ -209,6 +217,17 @@ module PotatoMesh

unless instance_columns.include?("nodes_count")
db.execute("ALTER TABLE instances ADD COLUMN nodes_count INTEGER")
instance_columns << "nodes_count"
end

unless instance_columns.include?("meshcore_nodes_count")
db.execute("ALTER TABLE instances ADD COLUMN meshcore_nodes_count INTEGER")
instance_columns << "meshcore_nodes_count"
end

unless instance_columns.include?("meshtastic_nodes_count")
db.execute("ALTER TABLE instances ADD COLUMN meshtastic_nodes_count INTEGER")
instance_columns << "meshtastic_nodes_count"
end

telemetry_tables =

@@ -63,7 +63,11 @@ module PotatoMesh
def self_instance_attributes
domain = self_instance_domain
last_update = latest_node_update_timestamp || Time.now.to_i
nodes_count = active_node_count_since(Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age)
cutoff = Time.now.to_i - PotatoMesh::Config.remote_instance_max_node_age
db = open_database(readonly: true)
nodes_count = active_node_count_since(cutoff, db: db)
mc_count = active_node_count_since_for_protocol(cutoff, "meshcore", db: db)
mt_count = active_node_count_since_for_protocol(cutoff, "meshtastic", db: db)
{
id: app_constant(:SELF_INSTANCE_ID),
domain: domain,
@@ -78,7 +82,11 @@ module PotatoMesh
is_private: private_mode?,
contact_link: sanitized_contact_link,
nodes_count: nodes_count,
meshcore_nodes_count: mc_count,
meshtastic_nodes_count: mt_count,
}
ensure
db&.close
end

# Count the number of nodes active since the supplied timestamp.
@@ -107,6 +115,39 @@ module PotatoMesh
handle&.close unless db
end

# Count the number of nodes for a specific protocol active since the
# supplied timestamp.
#
# @param cutoff [Integer] unix timestamp in seconds.
# @param protocol [String] protocol name (e.g. "meshcore", "meshtastic").
# @param db [SQLite3::Database, nil] optional open handle to reuse.
# @return [Integer, nil] node count or nil when unavailable.
def active_node_count_since_for_protocol(cutoff, protocol, db: nil)
return nil unless cutoff && protocol

handle = db || open_database(readonly: true)
count =
with_busy_retry do
handle.get_first_value(
"SELECT COUNT(*) FROM nodes WHERE last_heard >= ? AND protocol = ?",
cutoff.to_i,
protocol,
)
end
Integer(count)
rescue SQLite3::Exception, ArgumentError => e
warn_log(
"Failed to count active nodes for protocol",
context: "instances.protocol_nodes_count",
protocol: protocol,
error_class: e.class.name,
error_message: e.message,
)
nil
ensure
handle&.close unless db
end

def sign_instance_attributes(attributes)
payload = canonical_instance_payload(attributes)
Base64.strict_encode64(
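The count helpers reuse a caller-supplied database handle and only close what they opened themselves (`handle&.close unless db`). A self-contained sketch of that idiom, with an invented `FakeConn` standing in for `SQLite3::Database`:

```ruby
# Stand-in for a real connection object; records whether close was called.
class FakeConn
  attr_reader :closed

  def initialize
    @closed = false
  end

  def close
    @closed = true
  end
end

# Open a connection only when the caller did not supply one, and close
# only what we opened, mirroring the handle-reuse pattern above.
def with_connection(db: nil)
  handle = db || FakeConn.new
  yield handle
ensure
  handle&.close unless db
end
```

This is why `self_instance_attributes` can pass one read-only handle through three count calls without it being closed in between.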
@@ -380,6 +421,13 @@ module PotatoMesh
db&.close
end

# Announce the local instance record to a remote federation peer,
# cycling through resolved IP addresses when transport-level failures
# occur.
#
# @param domain [String] remote peer hostname.
# @param payload_json [String] JSON-encoded announcement body.
# @return [Boolean] true when the announcement was accepted.
def announce_instance_to_domain(domain, payload_json)
return false unless domain && !domain.empty?
return false if federation_shutdown_requested?
@@ -390,14 +438,7 @@ module PotatoMesh
break false if federation_shutdown_requested?

begin
http = build_remote_http_client(uri)
response = Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Post, uri)
request.body = payload_json
connection.request(request)
end
end
response = perform_announce_request(uri, payload_json)
if response.is_a?(Net::HTTPSuccess)
debug_log(
"Published federation announcement",
@@ -451,6 +492,55 @@ module PotatoMesh
published
end

# Execute a POST announcement request against the supplied URI, cycling
# through resolved IP addresses on connection-level failures.
#
# @param uri [URI::Generic] target endpoint.
# @param payload_json [String] JSON-encoded announcement body.
# @return [Net::HTTPResponse] the HTTP response from the first reachable address.
# @raise [StandardError] when all addresses fail or a non-retryable error occurs.
def perform_announce_request(uri, payload_json)
remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
addresses = remote_addresses.empty? ? [nil] : remote_addresses

last_error = nil
addresses.each do |address|
break if federation_shutdown_requested?

begin
return perform_single_announce_request(uri, payload_json, ip_address: address&.to_s)
rescue StandardError => e
if connection_refused_or_unreachable?(e)
last_error = e
else
raise
end
end
end

raise(last_error || StandardError.new("all resolved addresses failed"))
end

# Execute a single POST announcement request, optionally pinning the
# connection to a specific IP address.
#
# @param uri [URI::Generic] target endpoint.
# @param payload_json [String] JSON-encoded announcement body.
# @param ip_address [String, nil] resolved IP address to pin the
# connection to, or +nil+ to let {build_remote_http_client} resolve.
# @return [Net::HTTPResponse] the HTTP response.
# @raise [StandardError] when the request fails.
def perform_single_announce_request(uri, payload_json, ip_address: nil)
http = build_remote_http_client(uri, ip_address: ip_address)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Post, uri)
request.body = payload_json
connection.request(request)
end
end
end

# Determine whether an HTTPS announcement failure should fall back to HTTP.
#
# @param error [StandardError] failure raised while attempting HTTPS.
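The address-cycling strategy in `perform_announce_request` can be sketched on its own: try each resolved address in order, remember the last connection-level failure, and only re-raise once every address has been exhausted. The helper name `try_each_address` is invented for illustration.

```ruby
# Try the block against each address in turn; connection-level failures
# move on to the next address, and only the final failure is re-raised.
def try_each_address(addresses)
  last_error = nil
  addresses.each do |address|
    begin
      return yield(address)
    rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH => e
      last_error = e
    end
  end
  raise(last_error || StandardError.new("all resolved addresses failed"))
end
```

Non-connection errors are deliberately not rescued, matching the `raise` branch above: an SSL or HTTP-level failure on one address will not succeed on a sibling address, so retrying would only mask it.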
@@ -466,6 +556,34 @@ module PotatoMesh
false
end

# Determine whether an error indicates a transport-level connection
# failure that may succeed on an alternative resolved address.
#
# Connection refusals, host/network unreachable errors, and TCP open
# timeouts signal that the selected IP address cannot be reached but
# do not rule out alternative addresses for the same hostname.
#
# @param error [StandardError] failure raised during the connection attempt.
# @return [Boolean] true when a retry with a different address is warranted.
def connection_refused_or_unreachable?(error)
retryable_classes = [
Errno::ECONNREFUSED,
Errno::EHOSTUNREACH,
Errno::ENETUNREACH,
Errno::ECONNRESET,
Errno::ETIMEDOUT,
Net::OpenTimeout,
]
current = error
while current
return true if retryable_classes.any? { |klass| current.is_a?(klass) }

current = current.respond_to?(:cause) ? current.cause : nil
end

false
end

def announce_instance_to_all_domains
return unless federation_enabled?
return if federation_shutdown_requested?
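The cause-chain walk matters because Ruby attaches the originating exception to a re-raised wrapper via `Exception#cause`, so a transport error stays detectable even after a higher layer wraps it. A small sketch with a reduced retryable list (names invented for illustration):

```ruby
# Reduced retryable list for illustration; the real method also covers
# unreachable-network errors and Net::OpenTimeout.
RETRYABLE = [Errno::ECONNREFUSED, Errno::ETIMEDOUT].freeze

# Walk the error and its #cause chain looking for a retryable class.
def retryable_error?(error)
  current = error
  while current
    return true if RETRYABLE.any? { |klass| current.is_a?(klass) }
    current = current.cause
  end
  false
end
```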
@@ -664,10 +782,57 @@ module PotatoMesh
[]
end

# Execute a GET request against the supplied federation URI, cycling
# through resolved IP addresses when a transport-level connection
# failure occurs.
#
# DNS resolution is performed once and the resulting addresses are
# sorted with IPv4 first via {sort_addresses_for_connection}. Each
# address is attempted sequentially; when a connection-level error
# (refused, unreachable, timeout) is raised the next address is tried.
# Non-connection errors (SSL failures, HTTP-level errors) are raised
# immediately without trying further addresses.
#
# @param uri [URI::Generic] target endpoint to request.
# @return [String] raw HTTP response body on success.
# @raise [InstanceFetchError] when all addresses are exhausted or a
# non-retryable error occurs.
def perform_instance_http_request(uri)
raise InstanceFetchError, "federation shutdown requested" if federation_shutdown_requested?

http = build_remote_http_client(uri)
remote_addresses = sort_addresses_for_connection(resolve_remote_ip_addresses(uri))
addresses = remote_addresses.empty? ? [nil] : remote_addresses

last_error = nil
addresses.each do |address|
break if federation_shutdown_requested?

begin
return perform_single_http_request(uri, ip_address: address&.to_s)
rescue InstanceFetchError => e
if connection_refused_or_unreachable?(e)
last_error = e
else
raise
end
end
end

raise last_error || InstanceFetchError.new("all resolved addresses failed")
rescue ArgumentError => e
raise_instance_fetch_error(e)
end

# Execute a single HTTP GET request against the supplied URI, optionally
# pinning the connection to a specific IP address.
#
# @param uri [URI::Generic] target endpoint.
# @param ip_address [String, nil] resolved IP address to pin the
# connection to, or +nil+ to let {build_remote_http_client} resolve.
# @return [String] raw HTTP response body.
# @raise [InstanceFetchError] when the request fails.
def perform_single_http_request(uri, ip_address: nil)
http = build_remote_http_client(uri, ip_address: ip_address)
Timeout.timeout(PotatoMesh::Config.remote_instance_request_timeout) do
http.start do |connection|
request = build_federation_http_request(Net::HTTP::Get, uri)
@@ -1097,6 +1262,14 @@ module PotatoMesh
)
attributes[:nodes_count] = stats_count if stats_count

# Extract per-protocol 24h counts (informational, not signed).
if stats_payload.is_a?(Hash)
mc_day = stats_payload.dig("meshcore", "day")
mt_day = stats_payload.dig("meshtastic", "day")
attributes[:meshcore_nodes_count] = coerce_integer(mc_day) if mc_day
attributes[:meshtastic_nodes_count] = coerce_integer(mt_day) if mt_day
end

nodes_since_path = "/api/nodes?since=#{recent_cutoff}&limit=1000"
nodes_since_window, nodes_since_metadata = fetch_instance_json(attributes[:domain], nodes_since_path)
if stats_count.nil? && attributes[:nodes_count].nil? && nodes_since_window.is_a?(Array)
@@ -1197,15 +1370,41 @@ module PotatoMesh
unrestricted_addresses
end

# Sort resolved addresses so that IPv4 precedes IPv6.
#
# Federation peers with dual-stack DNS may publish addresses where one
# family is unreachable. Placing IPv4 entries first mirrors the
# preference used by {discover_local_ip_address} and improves the
# likelihood that the first connection attempt succeeds.
#
# @param addresses [Array<IPAddr>] resolved IP address list.
# @return [Array<IPAddr>] addresses sorted with IPv4 entries before IPv6.
def sort_addresses_for_connection(addresses)
return addresses if addresses.nil? || addresses.length <= 1

v4, v6 = addresses.partition { |ip| !ip.ipv6? }
v4 + v6
end

# Build an HTTP client configured for communication with a remote instance.
#
# When +ip_address+ is supplied the client is pinned to that specific
# address, bypassing DNS resolution. Callers that iterate over
# multiple resolved addresses should pass each candidate in turn.
#
# @param uri [URI::Generic] target URI describing the remote endpoint.
# @param ip_address [String, nil] explicit IP address to connect to,
# or +nil+ to resolve via DNS and use the first result.
# @return [Net::HTTP] HTTP client ready to execute the request.
def build_remote_http_client(uri)
remote_addresses = resolve_remote_ip_addresses(uri)
def build_remote_http_client(uri, ip_address: nil)
http = Net::HTTP.new(uri.host, uri.port)
if http.respond_to?(:ipaddr=) && remote_addresses.any?
http.ipaddr = remote_addresses.first.to_s
if ip_address
http.ipaddr = ip_address if http.respond_to?(:ipaddr=)
else
remote_addresses = resolve_remote_ip_addresses(uri)
if http.respond_to?(:ipaddr=) && remote_addresses.any?
http.ipaddr = remote_addresses.first.to_s
end
end
http.open_timeout = PotatoMesh::Config.remote_instance_http_timeout
http.read_timeout = PotatoMesh::Config.remote_instance_read_timeout
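The IPv4-first ordering is a plain `Enumerable#partition`, which keeps the relative order within each address family stable. Reproduced as a standalone sketch using only the `ipaddr` standard library:

```ruby
require "ipaddr"

# Place IPv4 addresses before IPv6 ones, preserving order within each family.
def sort_addresses_for_connection(addresses)
  return addresses if addresses.nil? || addresses.length <= 1

  v4, v6 = addresses.partition { |ip| !ip.ipv6? }
  v4 + v6
end
```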
@@ -1398,8 +1597,9 @@ module PotatoMesh
sql = <<~SQL
INSERT INTO instances (
id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, nodes_count, contact_link, signature
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
latitude, longitude, last_update_time, is_private, nodes_count,
meshcore_nodes_count, meshtastic_nodes_count, contact_link, signature
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
domain=excluded.domain,
pubkey=excluded.pubkey,
@@ -1412,6 +1612,8 @@ module PotatoMesh
last_update_time=excluded.last_update_time,
is_private=excluded.is_private,
nodes_count=excluded.nodes_count,
meshcore_nodes_count=excluded.meshcore_nodes_count,
meshtastic_nodes_count=excluded.meshtastic_nodes_count,
contact_link=excluded.contact_link,
signature=excluded.signature
SQL
@@ -1430,6 +1632,8 @@ module PotatoMesh
attributes[:last_update_time],
attributes[:is_private] ? 1 : 0,
nodes_count,
coerce_integer(attributes[:meshcore_nodes_count]),
coerce_integer(attributes[:meshtastic_nodes_count]),
attributes[:contact_link],
signature,
]

@@ -144,6 +144,8 @@ module PotatoMesh
"lastUpdateTime" => last_update_time,
"isPrivate" => private_flag,
"nodesCount" => coerce_integer(row["nodes_count"]),
"meshcoreNodesCount" => coerce_integer(row["meshcore_nodes_count"]),
"meshtasticNodesCount" => coerce_integer(row["meshtastic_nodes_count"]),
"contactLink" => string_or_nil(row["contact_link"]),
"signature" => signature,
}
@@ -175,7 +177,8 @@ module PotatoMesh
min_last_update_time = now - PotatoMesh::Config.week_seconds
sql = <<~SQL
SELECT id, domain, pubkey, name, version, channel, frequency,
latitude, longitude, last_update_time, is_private, nodes_count, contact_link, signature
latitude, longitude, last_update_time, is_private, nodes_count,
meshcore_nodes_count, meshtastic_nodes_count, contact_link, signature
FROM instances
WHERE domain IS NOT NULL AND TRIM(domain) != ''
AND pubkey IS NOT NULL AND TRIM(pubkey) != ''

@@ -64,6 +64,12 @@ module PotatoMesh
SQL
params << limit
rows = db.execute(sql, params)

# Batch-resolve all unique from_id values to canonical node_ids in a
# single query instead of issuing 1-2 SELECTs per message row.
raw_from_ids = rows.filter_map { |r| string_or_nil(r["from_id"]&.to_s&.strip) }.uniq
canonical_map = batch_resolve_node_ids(db, raw_from_ids)

rows.each do |r|
r.delete_if { |key, _| key.is_a?(Integer) }
r["reply_id"] = coerce_integer(r["reply_id"]) if r.key?("reply_id")
@@ -81,7 +87,7 @@ module PotatoMesh
)
end

canonical_from_id = string_or_nil(normalize_node_id(db, r["from_id"]))
canonical_from_id = canonical_map[r["from_id"]&.to_s&.strip]
node_id = canonical_from_id || string_or_nil(r["from_id"])

if canonical_from_id

@@ -133,6 +133,57 @@ module PotatoMesh
coerced
end

# Resolve a collection of raw node reference strings to their canonical
# +node_id+ values in a single batch query. This avoids the N+1 pattern
# of calling +normalize_node_id+ once per row.
#
# @param db [SQLite3::Database] open database handle.
# @param refs [Array<String>] raw node identifiers (hex strings or numeric
# strings) to resolve.
# @return [Hash{String => String}] mapping from each input reference to its
# canonical +node_id+, omitting entries that could not be resolved.
def batch_resolve_node_ids(db, refs)
return {} if refs.nil? || refs.empty?

result = {}
string_refs = []
numeric_refs = []

refs.each do |ref|
next if ref.nil? || ref.strip.empty?
string_refs << ref.strip
begin
numeric_refs << Integer(ref.strip, 10)
rescue ArgumentError
# not a numeric reference — skip the numeric branch
end
end

# Batch lookup by node_id (string match)
unless string_refs.empty?
placeholders = Array.new(string_refs.length, "?").join(", ")
rows = db.execute("SELECT node_id FROM nodes WHERE node_id IN (#{placeholders})", string_refs)
rows.each do |row|
nid = row.is_a?(Hash) ? row["node_id"] : row[0]
result[nid] = nid if nid
end
end

# Batch lookup by num (numeric match) for refs not yet resolved
unresolved_numeric = numeric_refs.select { |n| !result.key?(n.to_s) }
unless unresolved_numeric.empty?
placeholders = Array.new(unresolved_numeric.length, "?").join(", ")
rows = db.execute("SELECT node_id, num FROM nodes WHERE num IN (#{placeholders})", unresolved_numeric)
rows.each do |row|
nid = row.is_a?(Hash) ? row["node_id"] : row[0]
num = row.is_a?(Hash) ? row["num"] : row[1]
result[num.to_s] = nid if nid && num
end
end

result
end

# Normalise a caller-supplied timestamp for API pagination windows.
#
# @param since [Object] requested lower bound expressed as seconds since the epoch.
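The batch lookup sizes its `IN` clause to the input so every value is bound rather than interpolated into the SQL string. A standalone sketch of that placeholder construction (the helper name `in_clause_sql` is invented here):

```ruby
# Build a parameterised IN-clause query sized to the value list, as the
# batch resolver above does for node_id and num lookups.
def in_clause_sql(table:, column:, values:)
  placeholders = Array.new(values.length, "?").join(", ")
  "SELECT #{column} FROM #{table} WHERE #{column} IN (#{placeholders})"
end
```

Binding each value positionally is what keeps raw `from_id` strings out of the SQL text.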
@@ -37,7 +37,7 @@ module PotatoMesh
params << since_threshold

if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"], db: db)
return [] unless clause
where_clauses << clause.first
params.concat(clause.last)

@@ -70,11 +70,42 @@ module PotatoMesh
}
end

def node_lookup_clause(node_ref, string_columns:, numeric_columns: [])
# Build a WHERE clause fragment for looking up a node across one or more
# columns. When +numeric_columns+ are provided together with an open +db+
# handle the numeric identifiers are resolved to canonical +node_id+
# strings up-front so the resulting SQL uses only string-column +IN+
# predicates. This avoids an +OR+ across heterogeneous columns which
# prevents SQLite from choosing the optimal index.
#
# @param node_ref [String, Integer, nil] raw node reference from the request.
# @param string_columns [Array<String>] SQL column names holding string identifiers.
# @param numeric_columns [Array<String>] SQL column names holding numeric identifiers.
# @param db [SQLite3::Database, nil] open database handle used to resolve
# numeric IDs to canonical strings. When provided and +numeric_columns+
# is non-empty the numeric branch is folded into the string branch.
# @return [Array(String, Array), nil] SQL fragment and bind parameters, or
# +nil+ when no lookup can be constructed.
def node_lookup_clause(node_ref, string_columns:, numeric_columns: [], db: nil)
tokens = node_reference_tokens(node_ref)
string_values = tokens[:string_values]
numeric_values = tokens[:numeric_values]

# When a database handle is available, resolve numeric identifiers to
# canonical node_id strings so the query can use a single indexed column
# instead of an OR across string and numeric columns.
if db && !numeric_columns.empty? && !numeric_values.empty?
numeric_values.each do |num|
resolved = db.get_first_value("SELECT node_id FROM nodes WHERE num = ? LIMIT 1", [num])
if resolved
string_values << resolved unless string_values.include?(resolved)
end
end
# All numeric values have been folded into string_values; drop the
# numeric branch so the generated SQL avoids an OR.
numeric_columns = []
numeric_values = []
end

clauses = []
params = []

@@ -117,7 +148,7 @@ module PotatoMesh
where_clauses = []

if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["num"])
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["num"], db: db)
return [] unless clause
where_clauses << clause.first
params.concat(clause.last)
@@ -238,7 +269,8 @@ module PotatoMesh
#
# @param now [Integer] reference unix timestamp in seconds.
# @param db [SQLite3::Database, nil] optional open database handle to reuse.
# @return [Hash{String => Integer}] counts keyed by hour/day/week/month.
# @return [Hash{String => Object}] counts keyed by hour/day/week/month plus
# per-protocol breakdowns under "meshcore" and "meshtastic" sub-hashes.
def query_active_node_stats(now: Time.now.to_i, db: nil)
handle = db || open_database(readonly: true)
handle.results_as_hash = true
@@ -247,22 +279,48 @@ module PotatoMesh
day_cutoff = reference_now - 86_400
week_cutoff = reference_now - PotatoMesh::Config.week_seconds
month_cutoff = reference_now - (30 * 24 * 60 * 60)
private_filter = private_mode? ? " AND (role IS NULL OR role <> 'CLIENT_HIDDEN')" : ""
pf = private_mode? ? " AND (role IS NULL OR role <> 'CLIENT_HIDDEN')" : ""
proto = " AND protocol = ?"
sql = <<~SQL
SELECT
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS hour_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS day_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS week_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{private_filter}) AS month_count
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}) AS hour_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}) AS day_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}) AS week_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}) AS month_count,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mc_hour,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mc_day,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mc_week,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mc_month,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mt_hour,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mt_day,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mt_week,
(SELECT COUNT(*) FROM nodes WHERE last_heard >= ?#{pf}#{proto}) AS mt_month
SQL
cutoffs = [hour_cutoff, day_cutoff, week_cutoff, month_cutoff]
# Total counts bind only cutoffs; per-protocol counts bind cutoff + protocol string.
params = cutoffs +
cutoffs.flat_map { |c| [c, "meshcore"] } +
cutoffs.flat_map { |c| [c, "meshtastic"] }
row = with_busy_retry do
handle.get_first_row(sql, [hour_cutoff, day_cutoff, week_cutoff, month_cutoff])
handle.get_first_row(sql, params)
end || {}
{
"hour" => row["hour_count"].to_i,
"day" => row["day_count"].to_i,
"week" => row["week_count"].to_i,
"month" => row["month_count"].to_i,
"meshcore" => {
"hour" => row["mc_hour"].to_i,
"day" => row["mc_day"].to_i,
"week" => row["mc_week"].to_i,
"month" => row["mc_month"].to_i,
},
"meshtastic" => {
"hour" => row["mt_hour"].to_i,
"day" => row["mt_day"].to_i,
"week" => row["mt_week"].to_i,
"month" => row["mt_month"].to_i,
},
}
ensure
handle&.close unless db

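The stats query binds its parameters positionally: four bare cutoffs for the total counts, then a cutoff/protocol pair for each of the eight per-protocol subqueries. Extracted as a standalone sketch (the helper name `stats_query_params` is invented here):

```ruby
# Positional bind parameters for the stats query above: four bare
# cutoffs, then cutoff/protocol pairs for meshcore and meshtastic.
def stats_query_params(cutoffs)
  cutoffs +
    cutoffs.flat_map { |c| [c, "meshcore"] } +
    cutoffs.flat_map { |c| [c, "meshtastic"] }
end
```

Getting this ordering wrong silently skews the counts, since SQLite matches `?` placeholders strictly by position.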
@@ -37,7 +37,7 @@ module PotatoMesh
params << since_threshold

if node_ref
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"])
clause = node_lookup_clause(node_ref, string_columns: ["node_id"], numeric_columns: ["node_num"], db: db)
return [] unless clause
where_clauses << clause.first
params.concat(clause.last)

@@ -18,12 +18,34 @@ module PotatoMesh
module App
module Routes
module Api
# Accepted protocol filter values. Unknown values are discarded to
# prevent attacker-controlled strings from polluting the cache keyspace.
KNOWN_PROTOCOLS = Set.new(%w[meshcore meshtastic]).freeze

# Register read-only API endpoints that expose cached mesh data and
# instance metadata. Invoked by Sinatra during extension registration.
#
# @param app [Sinatra::Base] application instance receiving the routes.
# @return [void]
def self.registered(app)
known_protocols = KNOWN_PROTOCOLS

app.helpers do
# Sanitise the protocol query parameter to a known value.
define_method(:sanitize_protocol) do |raw|
val = raw&.to_s&.strip&.downcase
known_protocols.include?(val) ? val : nil
end

# Set Cache-Control headers appropriate for the current mode.
# Private-mode instances must not allow intermediary caches to
# store responses that may contain filtered data.
define_method(:api_cache_control) do |max_age: 10|
visibility = private_mode? ? :private : :public
cache_control visibility, :must_revalidate, max_age: max_age
end
end

app.before "/api/messages*" do
halt 404 if private_mode?
end
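The allow-list sanitisation above can be reproduced as a standalone sketch: anything outside the known set collapses to `nil` before it can reach a cache key or SQL parameter.

```ruby
require "set"

# Accepted protocol filter values, mirroring the route helper above.
KNOWN_PROTOCOLS = Set.new(%w[meshcore meshtastic]).freeze

# Normalise then allow-list the raw query parameter; unknown input
# (including injection attempts) becomes nil rather than a cache key.
def sanitize_protocol(raw)
  val = raw&.to_s&.strip&.downcase
  KNOWN_PROTOCOLS.include?(val) ? val : nil
end
```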
@@ -63,92 +85,213 @@ module PotatoMesh
|
||||
|
||||
app.get "/api/nodes" do
|
||||
content_type :json
|
||||
limit = [params["limit"]&.to_i || 200, 1000].min
|
||||
query_nodes(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
|
||||
limit = coerce_query_limit(params["limit"])
|
||||
since = params["since"]
|
||||
protocol = sanitize_protocol(params["protocol"])
|
||||
since_val = coerce_integer(since) || 0
|
||||
priv = private_mode? ? 1 : 0
|
||||
|
||||
if since_val > 0
|
||||
json_body = query_nodes(limit, since: since, protocol: protocol).to_json
|
||||
etag Digest::MD5.hexdigest(json_body), kind: :weak
|
||||
api_cache_control
|
||||
json_body
|
||||
else
|
||||
cached = PotatoMesh::App::ApiCache.fetch("api:nodes:#{limit}:#{protocol}:#{priv}", ttl_seconds: 15) do
|
||||
query_nodes(limit, since: since, protocol: protocol).to_json
|
||||
end
|
||||
etag cached[:etag], kind: :weak
|
||||
api_cache_control
|
||||
cached[:value]
|
||||
end
|
||||
end
|
||||
|
||||
app.get "/api/stats" do
|
||||
content_type :json
|
||||
{
|
||||
active_nodes: query_active_node_stats,
|
||||
sampled: false,
|
||||
}.to_json
|
||||
priv = private_mode? ? 1 : 0
|
||||
cached = PotatoMesh::App::ApiCache.fetch("api:stats:#{priv}", ttl_seconds: 15) do
|
||||
stats = query_active_node_stats
|
||||
{
|
||||
active_nodes: {
|
||||
"hour" => stats["hour"], "day" => stats["day"],
|
||||
"week" => stats["week"], "month" => stats["month"],
|
||||
},
|
||||
meshcore: stats["meshcore"],
|
||||
meshtastic: stats["meshtastic"],
|
||||
sampled: false,
|
||||
}.to_json
|
||||
end
|
||||
|
||||
etag cached[:etag], kind: :weak
|
||||
api_cache_control
|
||||
cached[:value]
|
||||
end

app.get "/api/nodes/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
limit = coerce_query_limit(params["limit"])
rows = query_nodes(limit, node_ref: node_ref, since: params["since"])
halt 404, { error: "not found" }.to_json if rows.empty?
rows.first.to_json
json_body = rows.first.to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/ingestors" do
content_type :json
limit = coerce_query_limit(params["limit"])
query_ingestors(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
protocol = sanitize_protocol(params["protocol"])
since = params["since"]
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_ingestors(limit, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:ingestors:#{limit}:#{protocol}", ttl_seconds: 30) do
query_ingestors(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/messages" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
limit = coerce_query_limit(params["limit"])
include_encrypted = coerce_boolean(params["encrypted"]) || false
since = coerce_integer(params["since"])
since = 0 if since.nil? || since.negative?
query_messages(limit, include_encrypted: include_encrypted, since: since, protocol: string_or_nil(params["protocol"])).to_json
protocol = sanitize_protocol(params["protocol"])
enc_key = include_encrypted ? "1" : "0"

if since > 0
json_body = query_messages(limit, include_encrypted: include_encrypted, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:messages:#{limit}:#{enc_key}:#{protocol}", ttl_seconds: 10) do
query_messages(limit, include_encrypted: include_encrypted, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/messages/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
limit = coerce_query_limit(params["limit"])
include_encrypted = coerce_boolean(params["encrypted"]) || false
since = coerce_integer(params["since"])
since = 0 if since.nil? || since.negative?
query_messages(
json_body = query_messages(
limit,
node_ref: node_ref,
include_encrypted: include_encrypted,
since: since,
protocol: string_or_nil(params["protocol"]),
protocol: sanitize_protocol(params["protocol"]),
).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/positions" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_positions(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
since = params["since"]
protocol = sanitize_protocol(params["protocol"])
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_positions(limit, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:positions:#{limit}:#{protocol}", ttl_seconds: 15) do
query_positions(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/positions/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_positions(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
json_body = query_positions(limit, node_ref: node_ref, since: params["since"], protocol: sanitize_protocol(params["protocol"])).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/neighbors" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_neighbors(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
since = params["since"]
protocol = sanitize_protocol(params["protocol"])
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_neighbors(limit, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:neighbors:#{limit}:#{protocol}", ttl_seconds: 30) do
query_neighbors(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/neighbors/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_neighbors(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
json_body = query_neighbors(limit, node_ref: node_ref, since: params["since"], protocol: sanitize_protocol(params["protocol"])).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/telemetry" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_telemetry(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
since = params["since"]
protocol = sanitize_protocol(params["protocol"])
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_telemetry(limit, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:telemetry:#{limit}:#{protocol}", ttl_seconds: 15) do
query_telemetry(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/telemetry/aggregated" do
@@ -179,33 +322,67 @@ module PotatoMesh
halt 400, { error: "bucketSeconds too small for requested window" }.to_json
end

query_telemetry_buckets(
window_seconds: window_seconds,
bucket_seconds: bucket_seconds,
since: params["since"],
).to_json
since = params["since"]
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_telemetry_buckets(window_seconds: window_seconds, bucket_seconds: bucket_seconds, since: since).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control(max_age: 30)
json_body
else
cache_key = "api:telemetry_agg:#{window_seconds}:#{bucket_seconds}"
cached = PotatoMesh::App::ApiCache.fetch(cache_key, ttl_seconds: 60) do
query_telemetry_buckets(window_seconds: window_seconds, bucket_seconds: bucket_seconds, since: since).to_json
end
etag cached[:etag], kind: :weak
api_cache_control(max_age: 30)
cached[:value]
end
end

app.get "/api/telemetry/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_telemetry(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
json_body = query_telemetry(limit, node_ref: node_ref, since: params["since"], protocol: sanitize_protocol(params["protocol"])).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/traces" do
content_type :json
limit = [params["limit"]&.to_i || 200, 1000].min
query_traces(limit, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
since = params["since"]
protocol = sanitize_protocol(params["protocol"])
since_val = coerce_integer(since) || 0

if since_val > 0
json_body = query_traces(limit, since: since, protocol: protocol).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
else
cached = PotatoMesh::App::ApiCache.fetch("api:traces:#{limit}:#{protocol}", ttl_seconds: 30) do
query_traces(limit, since: since, protocol: protocol).to_json
end
etag cached[:etag], kind: :weak
api_cache_control
cached[:value]
end
end

app.get "/api/traces/:id" do
content_type :json
node_ref = string_or_nil(params["id"])
halt 400, { error: "missing node id" }.to_json unless node_ref
limit = [params["limit"]&.to_i || 200, 1000].min
query_traces(limit, node_ref: node_ref, since: params["since"], protocol: string_or_nil(params["protocol"])).to_json
limit = coerce_query_limit(params["limit"])
json_body = query_traces(limit, node_ref: node_ref, since: params["since"], protocol: sanitize_protocol(params["protocol"])).to_json
etag Digest::MD5.hexdigest(json_body), kind: :weak
api_cache_control
json_body
end

app.get "/api/instances" do

@@ -45,6 +45,7 @@ module PotatoMesh
upsert_node(db, node_id, node, protocol: protocol)
end
PotatoMesh::App::Prometheus::NODES_GAUGE.set(query_nodes(1000).length)
PotatoMesh::App::ApiCache.invalidate_prefix("api:nodes:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -65,6 +66,7 @@ module PotatoMesh
messages.each do |msg|
insert_message(db, msg, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:messages:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -84,6 +86,7 @@ module PotatoMesh
db = open_database
stored = upsert_ingestor(db, payload)
halt 400, { error: "invalid payload" }.to_json unless stored
PotatoMesh::App::ApiCache.invalidate_prefix("api:ingestors:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -314,6 +317,7 @@ module PotatoMesh
positions.each do |pos|
insert_position(db, pos, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:positions:", "api:nodes:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -334,6 +338,7 @@ module PotatoMesh
neighbor_payloads.each do |packet|
insert_neighbors(db, packet, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:neighbors:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -354,6 +359,7 @@ module PotatoMesh
telemetry_packets.each do |packet|
insert_telemetry(db, packet, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:telemetry:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close
@@ -374,6 +380,7 @@ module PotatoMesh
trace_packets.each do |packet|
insert_trace(db, packet, protocol_cache: protocol_cache)
end
PotatoMesh::App::ApiCache.invalidate_prefix("api:traces:", "api:stats:")
{ status: "ok" }.to_json
ensure
db&.close

@@ -207,7 +207,7 @@ module PotatoMesh
#
# @return [String] semantic version identifier.
def version_fallback
"0.6.0"
"0.6.1"
end

# Default refresh interval for frontend polling routines.

@@ -1,12 +1,12 @@
{
"name": "potato-mesh",
"version": "0.6.0",
"version": "0.6.1",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "potato-mesh",
"version": "0.6.0",
"version": "0.6.1",
"devDependencies": {
"istanbul-lib-coverage": "^3.2.2",
"istanbul-lib-report": "^3.0.1",

@@ -1,6 +1,6 @@
{
"name": "potato-mesh",
"version": "0.6.0",
"version": "0.6.1",
"type": "module",
"private": true,
"scripts": {

@@ -326,6 +326,14 @@ export function createDomEnvironment(options = {}) {
querySelector() {
return null;
},
querySelectorAll(selector) {
// Delegate to body when available — MockElement.querySelectorAll supports
// class selectors which covers the majority of test-time lookups.
if (document.body && typeof document.body.querySelectorAll === 'function') {
return document.body.querySelectorAll(selector);
}
return [];
},
createElement(tagName) {
return new MockElement(tagName, registry);
},

@@ -0,0 +1,212 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

import test from 'node:test';
import assert from 'node:assert/strict';
import { maxRecordTimestamp, mergeById, mergeByCompositeKey, trimToLimit } from '../incremental-helpers.js';

// ---------------------------------------------------------------------------
// maxRecordTimestamp
// ---------------------------------------------------------------------------

test('maxRecordTimestamp returns 0 for an empty array', () => {
assert.equal(maxRecordTimestamp([]), 0);
});

test('maxRecordTimestamp returns 0 for non-array input', () => {
assert.equal(maxRecordTimestamp(null), 0);
assert.equal(maxRecordTimestamp(undefined), 0);
assert.equal(maxRecordTimestamp('string'), 0);
});

test('maxRecordTimestamp extracts the highest rx_time by default', () => {
const records = [
{ rx_time: 100 },
{ rx_time: 300 },
{ rx_time: 200 },
];
assert.equal(maxRecordTimestamp(records), 300);
});

test('maxRecordTimestamp inspects last_heard by default', () => {
const records = [
{ last_heard: 500 },
{ last_heard: 250 },
];
assert.equal(maxRecordTimestamp(records), 500);
});

test('maxRecordTimestamp returns 0 when records lack timestamp fields', () => {
const records = [{ node_id: '!abc' }, { node_id: '!def' }];
assert.equal(maxRecordTimestamp(records), 0);
});

test('maxRecordTimestamp accepts custom field names', () => {
const records = [
{ telemetry_time: 700, rx_time: 600 },
{ telemetry_time: 800 },
];
assert.equal(maxRecordTimestamp(records, ['telemetry_time']), 800);
});

test('maxRecordTimestamp picks the max across multiple fields', () => {
const records = [
{ rx_time: 100, position_time: 400 },
{ rx_time: 300, position_time: 200 },
];
assert.equal(maxRecordTimestamp(records, ['rx_time', 'position_time']), 400);
});

test('maxRecordTimestamp skips null and non-object entries', () => {
const records = [null, undefined, 42, { rx_time: 10 }];
assert.equal(maxRecordTimestamp(records), 10);
});

test('maxRecordTimestamp ignores non-number timestamp values', () => {
const records = [{ rx_time: 'abc' }, { rx_time: 50 }];
assert.equal(maxRecordTimestamp(records), 50);
});

// ---------------------------------------------------------------------------
// mergeById
// ---------------------------------------------------------------------------

test('mergeById returns existing when incoming is empty', () => {
const existing = [{ id: 1, v: 'a' }];
assert.strictEqual(mergeById(existing, [], 'id'), existing);
assert.strictEqual(mergeById(existing, null, 'id'), existing);
assert.strictEqual(mergeById(existing, undefined, 'id'), existing);
});

test('mergeById deduplicates by keyField keeping the incoming value', () => {
const existing = [
{ id: 1, v: 'old' },
{ id: 2, v: 'keep' },
];
const incoming = [
{ id: 1, v: 'new' },
{ id: 3, v: 'added' },
];
const result = mergeById(existing, incoming, 'id');
assert.equal(result.length, 3);
const byId = Object.fromEntries(result.map(r => [r.id, r.v]));
assert.equal(byId[1], 'new');
assert.equal(byId[2], 'keep');
assert.equal(byId[3], 'added');
});

test('mergeById works with string keys', () => {
const existing = [{ node_id: '!abc', name: 'A' }];
const incoming = [{ node_id: '!abc', name: 'B' }];
const result = mergeById(existing, incoming, 'node_id');
assert.equal(result.length, 1);
assert.equal(result[0].name, 'B');
});

test('mergeById skips items with null or undefined key', () => {
const existing = [{ id: 1, v: 'a' }];
const incoming = [{ v: 'no-id' }, { id: 2, v: 'b' }];
const result = mergeById(existing, incoming, 'id');
assert.equal(result.length, 2);
});

test('mergeById returns all incoming when existing is empty', () => {
const result = mergeById([], [{ id: 1 }, { id: 2 }], 'id');
assert.equal(result.length, 2);
});

// ---------------------------------------------------------------------------
// mergeByCompositeKey
// ---------------------------------------------------------------------------

test('mergeByCompositeKey deduplicates by composite key', () => {
const existing = [
{ node_id: '!a', neighbor_id: '!b', snr: 5 },
{ node_id: '!a', neighbor_id: '!c', snr: 3 },
];
const incoming = [
{ node_id: '!a', neighbor_id: '!b', snr: 8 },
{ node_id: '!a', neighbor_id: '!d', snr: 1 },
];
const result = mergeByCompositeKey(existing, incoming, ['node_id', 'neighbor_id']);
assert.equal(result.length, 3);
const ab = result.find(r => r.neighbor_id === '!b');
assert.equal(ab.snr, 8, 'incoming should overwrite existing for same composite key');
});

test('mergeByCompositeKey returns existing when incoming is empty', () => {
const existing = [{ a: 1, b: 2 }];
assert.strictEqual(mergeByCompositeKey(existing, [], ['a', 'b']), existing);
assert.strictEqual(mergeByCompositeKey(existing, null, ['a', 'b']), existing);
});

test('mergeByCompositeKey handles missing key fields gracefully', () => {
const existing = [{ node_id: '!a' }];
const incoming = [{ node_id: '!a', neighbor_id: '!b' }];
const result = mergeByCompositeKey(existing, incoming, ['node_id', 'neighbor_id']);
assert.equal(result.length, 2, 'different composite keys due to missing field');
});

// ---------------------------------------------------------------------------
// trimToLimit
// ---------------------------------------------------------------------------

test('trimToLimit returns the same array when within limit', () => {
const records = [{ id: 1, rx_time: 100 }, { id: 2, rx_time: 200 }];
const result = trimToLimit(records, 5);
assert.strictEqual(result, records);
});

test('trimToLimit trims to limit keeping newest entries', () => {
const records = [
{ id: 1, rx_time: 100 },
{ id: 2, rx_time: 300 },
{ id: 3, rx_time: 200 },
{ id: 4, rx_time: 400 },
];
const result = trimToLimit(records, 2);
assert.equal(result.length, 2);
const ids = result.map(r => r.id);
assert.ok(ids.includes(4), 'should keep newest (id=4)');
assert.ok(ids.includes(2), 'should keep second newest (id=2)');
});

test('trimToLimit uses custom timestamp field', () => {
const records = [
{ id: 1, last_heard: 100 },
{ id: 2, last_heard: 300 },
{ id: 3, last_heard: 200 },
];
const result = trimToLimit(records, 1, 'last_heard');
assert.equal(result.length, 1);
assert.equal(result[0].id, 2);
});

test('trimToLimit returns input for non-array values', () => {
assert.equal(trimToLimit(null, 10), null);
assert.equal(trimToLimit(undefined, 10), undefined);
});

test('trimToLimit handles records with missing timestamp fields', () => {
const records = [
{ id: 1, rx_time: 100 },
{ id: 2 },
{ id: 3, rx_time: 300 },
];
const result = trimToLimit(records, 2);
assert.equal(result.length, 2);
assert.equal(result[0].id, 3);
});
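The tests above pin down the merge semantics exactly: incoming records win on key collision, keyless items are skipped, and an empty incoming array returns the existing array unchanged. A minimal sketch of a `mergeById` consistent with that behaviour (a hypothetical implementation for illustration, not the project's actual `incremental-helpers.js`):

```javascript
// Merge incoming records into existing ones, deduplicating by keyField.
// Incoming values overwrite existing ones on collision; items whose key
// is null or undefined are skipped entirely.
function mergeById(existing, incoming, keyField) {
  // Preserve reference identity when there is nothing to merge.
  if (!Array.isArray(incoming) || incoming.length === 0) return existing;

  const merged = new Map();
  for (const item of existing) {
    if (item && item[keyField] != null) merged.set(item[keyField], item);
  }
  for (const item of incoming) {
    if (item && item[keyField] != null) merged.set(item[keyField], item);
  }
  return [...merged.values()];
}
```

Because `Map` preserves insertion order and `set` replaces in place, surviving existing records keep their relative order while colliding keys take the incoming payload.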
@@ -0,0 +1,40 @@
/*
* Copyright © 2025-26 l5yth & contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

import test from 'node:test';
import assert from 'node:assert/strict';

import { withApp } from './main-app-test-helpers.js';

// ---------------------------------------------------------------------------
// isAutorefreshPaused
// ---------------------------------------------------------------------------

test('isAutorefreshPaused returns false by default', () => {
withApp((t) => {
assert.equal(t.isAutorefreshPaused(), false);
});
});

// ---------------------------------------------------------------------------
// restartAutoRefresh is safe when called without a timer
// ---------------------------------------------------------------------------

test('restartAutoRefresh does not throw when invoked with refreshMs 0', () => {
withApp((t) => {
assert.doesNotThrow(() => t.restartAutoRefresh());
});
});
@@ -66,7 +66,7 @@ test('makeRoleFilterKey SENSOR and REPEATER produce distinct keys across protoco
// matchesRoleFilter — no active filters
// ---------------------------------------------------------------------------

test('matchesRoleFilter returns true when no filters are active', () => {
test('matchesRoleFilter returns true when no roles are hidden', () => {
withApp((t) => {
t.activeRoleFilters.clear();
assert.equal(t.matchesRoleFilter({ role: 'ROUTER', protocol: 'meshtastic' }), true);
@@ -75,56 +75,56 @@ test('matchesRoleFilter returns true when no filters are active', () => {
});

// ---------------------------------------------------------------------------
// matchesRoleFilter — protocol-aware compound key matching
// matchesRoleFilter — exclusion-set semantics (roles in set are hidden)
// ---------------------------------------------------------------------------

test('matchesRoleFilter matches meshtastic SENSOR filter for meshtastic node', () => {
test('matchesRoleFilter hides meshtastic SENSOR when in exclusion set', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshtastic:SENSOR');
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshtastic' }), true);
});
});

test('matchesRoleFilter does not match meshtastic SENSOR filter for meshcore node', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshtastic:SENSOR');
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshcore' }), false);
});
});

test('matchesRoleFilter matches meshcore SENSOR filter for meshcore node', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshcore:SENSOR');
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshcore' }), true);
});
});

test('matchesRoleFilter does not match meshcore SENSOR filter for meshtastic node', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshcore:SENSOR');
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshtastic' }), false);
});
});

test('matchesRoleFilter matches meshtastic REPEATER filter for meshtastic node', () => {
test('matchesRoleFilter does not hide meshcore SENSOR when meshtastic SENSOR is hidden', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshtastic:REPEATER');
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshtastic' }), true);
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshcore' }), false);
t.activeRoleFilters.add('meshtastic:SENSOR');
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshcore' }), true);
});
});

test('matchesRoleFilter matches meshcore REPEATER filter for meshcore node', () => {
test('matchesRoleFilter hides meshcore SENSOR when in exclusion set', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshcore:SENSOR');
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshcore' }), false);
});
});

test('matchesRoleFilter does not hide meshtastic SENSOR when meshcore SENSOR is hidden', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshcore:SENSOR');
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshtastic' }), true);
});
});

test('matchesRoleFilter hides meshtastic REPEATER but not meshcore REPEATER', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshtastic:REPEATER');
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshtastic' }), false);
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshcore' }), true);
});
});

test('matchesRoleFilter hides meshcore REPEATER but not meshtastic REPEATER', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshcore:REPEATER');
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshcore' }), true);
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshtastic' }), false);
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshcore' }), false);
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshtastic' }), true);
});
});

@@ -132,27 +132,27 @@ test('matchesRoleFilter matches meshcore REPEATER filter for meshcore node', ()
// matchesRoleFilter — null/absent protocol treated as meshtastic
// ---------------------------------------------------------------------------

test('matchesRoleFilter treats null protocol as meshtastic for filter matching', () => {
test('matchesRoleFilter treats null protocol as meshtastic for exclusion', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshtastic:SENSOR');
// null-protocol node should match the meshtastic SENSOR filter
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: null }), true);
// but not the meshcore one
// null-protocol node should be hidden by the meshtastic SENSOR exclusion
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: null }), false);
// but meshcore SENSOR exclusion should not affect null-protocol nodes
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshcore:SENSOR');
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: null }), false);
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: null }), true);
});
});

test('matchesRoleFilter with multiple active filters returns true when any matches', () => {
test('matchesRoleFilter with multiple hidden roles hides only those roles', () => {
withApp((t) => {
t.activeRoleFilters.clear();
t.activeRoleFilters.add('meshtastic:SENSOR');
t.activeRoleFilters.add('meshcore:REPEATER');
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshtastic' }), true);
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshcore' }), true);
assert.equal(t.matchesRoleFilter({ role: 'ROUTER', protocol: 'meshtastic' }), false);
assert.equal(t.matchesRoleFilter({ role: 'SENSOR', protocol: 'meshtastic' }), false);
assert.equal(t.matchesRoleFilter({ role: 'REPEATER', protocol: 'meshcore' }), false);
assert.equal(t.matchesRoleFilter({ role: 'ROUTER', protocol: 'meshtastic' }), true);
});
});
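The updated tests encode exclusion-set semantics: `activeRoleFilters` holds `protocol:ROLE` keys for roles that are hidden, a node matches when its key is absent from the set, and a null or missing protocol defaults to meshtastic. A minimal sketch of that predicate (a standalone function taking the set explicitly, which is a hypothetical signature for illustration, not the app's actual method):

```javascript
// Exclusion-set semantics: hiddenRoles contains "protocol:ROLE" keys for
// roles the user has hidden. A node is visible when its compound key is
// NOT in the set; null/absent protocol falls back to 'meshtastic'.
function matchesRoleFilter(node, hiddenRoles) {
  const protocol = node.protocol ?? 'meshtastic';
  return !hiddenRoles.has(`${protocol}:${node.role}`);
}
```

Keeping the compound key protocol-qualified is what lets a hidden meshtastic SENSOR leave meshcore SENSOR nodes untouched.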
|
||||
|
||||
@@ -348,7 +348,7 @@ test('buildRoleButtons appends one child per palette entry', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#4A7EB4', REPEATER: '#C8D0DC' }, 'meshcore');
    t.buildRoleButtons(col, { SENSOR: '#40749E', REPEATER: '#B8C4D4' }, 'meshcore');
    assert.equal(col.childNodes.length, 2);
  });
});
@@ -357,7 +357,7 @@ test('buildRoleButtons sets dataset.role and dataset.protocol on each button', (
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#4A7EB4' }, 'meshcore');
    t.buildRoleButtons(col, { SENSOR: '#40749E' }, 'meshcore');
    const btn = t.legendRoleButtons.get('meshcore:SENSOR');
    assert.ok(btn, 'button should be in legendRoleButtons');
    assert.equal(btn.dataset.role, 'SENSOR');
@@ -369,7 +369,7 @@ test('buildRoleButtons registers compound keys in legendRoleButtons', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#4A7EB4', REPEATER: '#C8D0DC' }, 'meshcore');
    t.buildRoleButtons(col, { SENSOR: '#40749E', REPEATER: '#B8C4D4' }, 'meshcore');
    assert.ok(t.legendRoleButtons.has('meshcore:SENSOR'));
    assert.ok(t.legendRoleButtons.has('meshcore:REPEATER'));
  });
@@ -380,8 +380,8 @@ test('buildRoleButtons keeps meshtastic and meshcore SENSOR keys distinct', () =
    t.legendRoleButtons.clear();
    const colMc = document.createElement('div');
    const colMt = document.createElement('div');
    t.buildRoleButtons(colMc, { SENSOR: '#4A7EB4' }, 'meshcore');
    t.buildRoleButtons(colMt, { SENSOR: '#B2D880' }, 'meshtastic');
    t.buildRoleButtons(colMc, { SENSOR: '#40749E' }, 'meshcore');
    t.buildRoleButtons(colMt, { SENSOR: '#A8D5BA' }, 'meshtastic');
    assert.ok(t.legendRoleButtons.has('meshcore:SENSOR'));
    assert.ok(t.legendRoleButtons.has('meshtastic:SENSOR'));
    assert.notEqual(
@@ -391,14 +391,14 @@ test('buildRoleButtons keeps meshtastic and meshcore SENSOR keys distinct', () =
  });
});

test('buildRoleButtons sets aria-pressed to false initially', () => {
test('buildRoleButtons sets aria-pressed to true initially (all visible)', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { ROUTER: '#D44E14' }, 'meshtastic');
    t.buildRoleButtons(col, { ROUTER: '#ff0019' }, 'meshtastic');
    const btn = t.legendRoleButtons.get('meshtastic:ROUTER');
    assert.ok(btn, 'button should be in legendRoleButtons');
    assert.equal(btn.getAttribute('aria-pressed'), 'false');
    assert.equal(btn.getAttribute('aria-pressed'), 'true');
  });
});

@@ -406,7 +406,7 @@ test('buildRoleButtons creates swatch child with background color', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { ROUTER: '#D44E14' }, 'meshtastic');
    t.buildRoleButtons(col, { ROUTER: '#ff0019' }, 'meshtastic');
    const btn = t.legendRoleButtons.get('meshtastic:ROUTER');
    // swatch is the first child of the button
    const swatch = btn.childNodes[0];
@@ -419,7 +419,7 @@ test('buildRoleButtons creates label child with role text', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { ROUTER: '#D44E14' }, 'meshtastic');
    t.buildRoleButtons(col, { ROUTER: '#ff0019' }, 'meshtastic');
    const btn = t.legendRoleButtons.get('meshtastic:ROUTER');
    // label is the second child of the button
    const label = btn.childNodes[1];
@@ -432,55 +432,28 @@ test('buildRoleButtons creates label child with role text', () => {
// updateLegendRoleFiltersUI
// ---------------------------------------------------------------------------

test('updateLegendRoleFiltersUI sets aria-pressed true on active role buttons', () => {
test('updateLegendRoleFiltersUI sets aria-pressed false on hidden role buttons', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#4A7EB4' }, 'meshcore');
    t.buildRoleButtons(col, { SENSOR: '#40749E' }, 'meshcore');
    const btn = t.legendRoleButtons.get('meshcore:SENSOR');
    t.activeRoleFilters.clear();
    t.activeRoleFilters.add('meshcore:SENSOR');
    t.updateLegendRoleFiltersUI();
    assert.equal(btn.getAttribute('aria-pressed'), 'true');
  });
});

test('updateLegendRoleFiltersUI sets aria-pressed false on inactive role buttons', () => {
  withApp((t) => {
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#4A7EB4' }, 'meshcore');
    const btn = t.legendRoleButtons.get('meshcore:SENSOR');
    t.activeRoleFilters.clear();
    t.updateLegendRoleFiltersUI();
    assert.equal(btn.getAttribute('aria-pressed'), 'false');
  });
});

test('updateLegendRoleFiltersUI updates protocol button text to Show when hidden', () => {
test('updateLegendRoleFiltersUI sets aria-pressed true on visible role buttons', () => {
  withApp((t) => {
    t.legendProtocolButtons.clear();
    const fakeBtn = document.createElement('button');
    fakeBtn.setAttribute('aria-pressed', 'false');
    t.legendProtocolButtons.set('meshtastic', fakeBtn);
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshtastic');
    t.legendRoleButtons.clear();
    const col = document.createElement('div');
    t.buildRoleButtons(col, { SENSOR: '#40749E' }, 'meshcore');
    const btn = t.legendRoleButtons.get('meshcore:SENSOR');
    t.activeRoleFilters.clear();
    t.updateLegendRoleFiltersUI();
    assert.equal(fakeBtn.getAttribute('aria-pressed'), 'true');
    assert.ok(fakeBtn.textContent.startsWith('Show'));
  });
});

test('updateLegendRoleFiltersUI updates protocol button text to Hide when visible', () => {
  withApp((t) => {
    t.legendProtocolButtons.clear();
    const fakeBtn = document.createElement('button');
    fakeBtn.setAttribute('aria-pressed', 'true');
    t.legendProtocolButtons.set('meshcore', fakeBtn);
    t.hiddenProtocols.clear();
    t.updateLegendRoleFiltersUI();
    assert.equal(fakeBtn.getAttribute('aria-pressed'), 'false');
    assert.ok(fakeBtn.textContent.startsWith('Hide'));
    assert.equal(btn.getAttribute('aria-pressed'), 'true');
  });
});

@@ -490,3 +463,81 @@ test('updateLegendRoleFiltersUI is safe when legendContainer is null', () => {
    assert.doesNotThrow(() => t.updateLegendRoleFiltersUI());
  });
});

// ---------------------------------------------------------------------------
// adjustStatsForHiddenProtocols
// ---------------------------------------------------------------------------

test('adjustStatsForHiddenProtocols returns original stats when nothing is hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    const stats = { hour: 10, day: 50, week: 100, month: 200, meshcore: { hour: 2, day: 10, week: 20, month: 40 }, meshtastic: { hour: 8, day: 40, week: 80, month: 160 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result, stats);
  });
});

test('adjustStatsForHiddenProtocols subtracts meshcore counts when meshcore hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshcore');
    const stats = { hour: 10, day: 50, week: 100, month: 200, meshcore: { hour: 2, day: 10, week: 20, month: 40 }, meshtastic: { hour: 8, day: 40, week: 80, month: 160 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 80);
    assert.equal(result.day, 40);
    assert.equal(result.month, 160);
    assert.equal(result.hour, 8);
  });
});

test('adjustStatsForHiddenProtocols subtracts meshtastic counts when meshtastic hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshtastic');
    const stats = { hour: 10, day: 50, week: 100, month: 200, meshcore: { hour: 2, day: 10, week: 20, month: 40 }, meshtastic: { hour: 8, day: 40, week: 80, month: 160 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 20);
    assert.equal(result.day, 10);
  });
});

test('adjustStatsForHiddenProtocols subtracts both when both hidden', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshcore');
    t.hiddenProtocols.add('meshtastic');
    const stats = { hour: 10, day: 50, week: 100, month: 200, meshcore: { hour: 2, day: 10, week: 20, month: 40 }, meshtastic: { hour: 8, day: 40, week: 80, month: 160 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 0);
    assert.equal(result.day, 0);
  });
});

test('adjustStatsForHiddenProtocols floors at zero', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshcore');
    const stats = { hour: 1, day: 5, week: 10, month: 20, meshcore: { hour: 50, day: 50, week: 50, month: 50 } };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 0);
    assert.equal(result.day, 0);
  });
});

test('adjustStatsForHiddenProtocols handles null stats gracefully', () => {
  withApp((t) => {
    t.hiddenProtocols.add('meshcore');
    assert.equal(t.adjustStatsForHiddenProtocols(null), null);
    assert.equal(t.adjustStatsForHiddenProtocols(undefined), undefined);
  });
});

test('adjustStatsForHiddenProtocols handles missing protocol bucket', () => {
  withApp((t) => {
    t.hiddenProtocols.clear();
    t.hiddenProtocols.add('meshcore');
    const stats = { hour: 10, day: 50, week: 100, month: 200 };
    const result = t.adjustStatsForHiddenProtocols(stats);
    assert.equal(result.week, 100);
  });
});

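Taken together, the tests above pin down the whole contract: identity when nothing is hidden, per-bucket subtraction, a zero floor, null passthrough, and tolerance of missing protocol buckets. A minimal sketch satisfying all of them (the explicit `hiddenProtocols` parameter is an assumption; in the app the set lives on instance state):

```javascript
// Sketch only: one way to implement the behaviour the tests above describe.
function adjustStatsForHiddenProtocols(stats, hiddenProtocols) {
  // Identity cases: no stats, or nothing hidden, return the input unchanged.
  if (!stats || hiddenProtocols.size === 0) return stats;
  const adjusted = { ...stats };
  for (const protocol of hiddenProtocols) {
    const bucket = stats[protocol];
    if (!bucket) continue; // missing bucket leaves the totals untouched
    for (const key of ['hour', 'day', 'week', 'month']) {
      // Subtract the hidden protocol's counts, flooring at zero.
      adjusted[key] = Math.max(0, adjusted[key] - bucket[key]);
    }
  }
  return adjusted;
}
```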
@@ -0,0 +1,240 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import test from 'node:test';
import assert from 'node:assert/strict';
import { createDomEnvironment } from './dom-environment.js';
import { initializeApp } from '../main.js';

/** Minimal config that disables auto-refresh so we control timing. */
const BASE_CONFIG = Object.freeze({
  channel: 'Primary',
  frequency: '915MHz',
  refreshMs: 0,
  refreshIntervalSeconds: 0,
  chatEnabled: true,
  mapCenter: { lat: 0, lon: 0 },
  mapZoom: null,
  maxDistanceKm: 0,
  tileFilters: { light: '', dark: '' },
  instancesFeatureEnabled: false,
  instanceDomain: null,
  snapshotWindowSeconds: 3600,
});

/**
 * Build a stubbed fetch that records every call and responds with canned data.
 *
 * @param {Object} responsesByEndpoint Map of URL prefix to JSON response body.
 * @returns {{ fetch: Function, calls: Array<{ url: string, options: Object }> }}
 */
function buildStubFetch(responsesByEndpoint = {}) {
  const calls = [];

  function stubFetch(url, options = {}) {
    calls.push({ url, options });
    for (const [prefix, body] of Object.entries(responsesByEndpoint)) {
      if (url.includes(prefix)) {
        return Promise.resolve({
          ok: true,
          status: 200,
          json: () => Promise.resolve(
            typeof body === 'function' ? body() : body,
          ),
        });
      }
    }
    return Promise.resolve({
      ok: true,
      status: 200,
      json: () => Promise.resolve([]),
    });
  }

  return { fetch: stubFetch, calls };
}

/**
 * Run test body with a fetch-stubbed app instance.
 *
 * @param {Object} stubResponses Response map for the stub fetch.
 * @param {function(Object): Promise<void>} fn Receives { testUtils, calls }.
 */
async function withStubFetchApp(stubResponses, fn) {
  const env = createDomEnvironment({ includeBody: true });
  const originalFetch = globalThis.fetch;
  const { fetch: stubFetch, calls } = buildStubFetch(stubResponses);
  globalThis.fetch = stubFetch;
  try {
    const { _testUtils } = initializeApp(BASE_CONFIG);
    // Allow the initial refresh() to settle (it is async).
    await new Promise(r => setTimeout(r, 50));
    await fn({ testUtils: _testUtils, calls });
  } finally {
    globalThis.fetch = originalFetch;
    env.cleanup();
  }
}

// ---------------------------------------------------------------------------
// Verify fetch functions append since parameter
// ---------------------------------------------------------------------------

test('first refresh does not include since parameter in fetch URLs', async () => {
  await withStubFetchApp({}, ({ calls }) => {
    const apiCalls = calls.filter(c => c.url.startsWith('/api/'));
    assert.ok(apiCalls.length > 0, 'should have made API calls');
    for (const call of apiCalls) {
      assert.ok(
        !call.url.includes('since='),
        `first refresh should not pass since: ${call.url}`,
      );
    }
  });
});

test('second refresh includes since parameter for endpoints with timestamp data', async () => {
  const now = Math.floor(Date.now() / 1000);
  const stubResponses = {
    '/api/nodes': [{ node_id: '!aabb', last_heard: now, short_name: 'AB', role: 'CLIENT' }],
    '/api/messages': [{ id: 1, rx_time: now, from_id: '!aabb', text: 'hello' }],
    '/api/positions': [{ id: 1, node_id: '!aabb', rx_time: now, latitude: 52.5, longitude: 13.4 }],
    '/api/telemetry': [{ id: 1, node_id: '!aabb', rx_time: now, battery_level: 90 }],
    '/api/neighbors': [{ node_id: '!aabb', neighbor_id: '!ccdd', rx_time: now, snr: 10 }],
    '/api/traces': [{ id: 1, rx_time: now, src: 1, dest: 2 }],
  };

  await withStubFetchApp(stubResponses, async ({ testUtils, calls }) => {
    // Verify first refresh completed without since params
    const firstRoundCalls = [...calls];
    const firstApiCalls = firstRoundCalls.filter(c => c.url.startsWith('/api/'));
    assert.ok(firstApiCalls.length > 0, 'initial refresh should have fired');
    for (const call of firstApiCalls) {
      assert.ok(
        !call.url.includes('since='),
        `first refresh should not pass since: ${call.url}`,
      );
    }

    // Clear call log and trigger a second refresh
    calls.length = 0;
    await testUtils.refresh();
    await new Promise(r => setTimeout(r, 50));

    // Second refresh should include since= on all data endpoints
    const secondApiCalls = calls.filter(c => c.url.startsWith('/api/'));
    assert.ok(secondApiCalls.length > 0, 'second refresh should have fired');

    const nodeCall = secondApiCalls.find(c => c.url.includes('/api/nodes?'));
    assert.ok(nodeCall, 'should have made a nodes call');
    assert.ok(nodeCall.url.includes('since='), `nodes should include since: ${nodeCall.url}`);

    const posCall = secondApiCalls.find(c => c.url.includes('/api/positions?'));
    assert.ok(posCall, 'should have made a positions call');
    assert.ok(posCall.url.includes('since='), `positions should include since: ${posCall.url}`);

    const telCall = secondApiCalls.find(c => c.url.includes('/api/telemetry?'));
    assert.ok(telCall, 'should have made a telemetry call');
    assert.ok(telCall.url.includes('since='), `telemetry should include since: ${telCall.url}`);

    const nbCall = secondApiCalls.find(c => c.url.includes('/api/neighbors?'));
    assert.ok(nbCall, 'should have made a neighbors call');
    assert.ok(nbCall.url.includes('since='), `neighbors should include since: ${nbCall.url}`);

    const trCall = secondApiCalls.find(c => c.url.includes('/api/traces?'));
    assert.ok(trCall, 'should have made a traces call');
    assert.ok(trCall.url.includes('since='), `traces should include since: ${trCall.url}`);

    const msgCalls = secondApiCalls.filter(c => c.url.includes('/api/messages?'));
    assert.ok(msgCalls.length > 0, 'should have made message calls');
    for (const mc of msgCalls) {
      assert.ok(mc.url.includes('since='), `messages should include since: ${mc.url}`);
    }
  });
});

test('second refresh merges incremental data into existing state', async () => {
  const now = Math.floor(Date.now() / 1000);
  let callCount = 0;

  // First call returns node A, second call returns node B
  const stubResponses = {
    '/api/nodes': () => {
      callCount++;
      if (callCount <= 1) {
        return [{ node_id: '!aaaa', last_heard: now, short_name: 'AA', role: 'CLIENT' }];
      }
      return [{ node_id: '!bbbb', last_heard: now + 60, short_name: 'BB', role: 'CLIENT' }];
    },
  };

  await withStubFetchApp(stubResponses, async ({ testUtils, calls }) => {
    // After first refresh, call count should be 1
    assert.ok(callCount >= 1, 'first refresh should have fetched nodes');

    // Trigger second refresh
    calls.length = 0;
    await testUtils.refresh();
    await new Promise(r => setTimeout(r, 50));

    // The second refresh should have merged data
    assert.ok(callCount >= 2, 'second refresh should have fetched nodes again');
  });
});

test('fetch functions use cache: default option', async () => {
  await withStubFetchApp({}, ({ calls }) => {
    const apiCalls = calls.filter(c => c.url.startsWith('/api/'));
    for (const call of apiCalls) {
      assert.equal(
        call.options.cache,
        'default',
        `${call.url} should use cache:default`,
      );
    }
  });
});

test('messages fetch sends encrypted parameter when requested', async () => {
  await withStubFetchApp({}, ({ calls }) => {
    const encryptedCalls = calls.filter(
      c => c.url.includes('/api/messages') && c.url.includes('encrypted=true'),
    );
    assert.ok(encryptedCalls.length > 0, 'should have made encrypted message call');
  });
});

test('since parameter uses a 1-second overlap to avoid missing rows', async () => {
  const now = Math.floor(Date.now() / 1000);
  const stubResponses = {
    '/api/nodes': [{ node_id: '!test', last_heard: now, short_name: 'T', role: 'CLIENT' }],
  };

  await withStubFetchApp(stubResponses, async ({ testUtils, calls }) => {
    calls.length = 0;
    await testUtils.refresh();
    await new Promise(r => setTimeout(r, 50));

    const nodeCall = calls.find(c => c.url.includes('/api/nodes?'));
    assert.ok(nodeCall, 'should have nodes call on second refresh');
    // The since value should be (now - 1) to create the overlap
    const expectedSince = now - 1;
    assert.ok(
      nodeCall.url.includes(`since=${expectedSince}`),
      `expected since=${expectedSince} in URL: ${nodeCall.url}`,
    );
  });
});
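The overlap test above encodes the incremental-fetch rule: the second refresh asks for rows `since` one second before the newest timestamp already seen, so a row written in the same second as the previous fetch is not skipped. A minimal sketch of that URL construction (`buildSinceUrl` and its parameters are illustrative names, not the app's API):

```javascript
// Sketch only: the since-overlap computation the test above asserts.
function buildSinceUrl(endpoint, lastTimestampSeconds) {
  // First refresh has no prior timestamp, so no since parameter at all.
  if (lastTimestampSeconds == null) return endpoint;
  // Subtract one second so a row stored in the same second as the last
  // fetch still shows up on the next incremental refresh; the duplicate
  // row is harmless because refreshes merge by id.
  return `${endpoint}?since=${lastTimestampSeconds - 1}`;
}
```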
@@ -26,24 +26,24 @@ import {

const NOW = 1_700_000_000;

test('computeLocalActiveNodeStats calculates local hour/day/week/month counts', () => {
test('computeLocalActiveNodeStats calculates local hour/day/week/month counts with per-protocol data', () => {
  const nodes = [
    { last_heard: NOW - 60 },
    { last_heard: NOW - 4_000 },
    { last_heard: NOW - 90_000 },
    { last_heard: NOW - (8 * 86_400) },
    { last_heard: NOW - (20 * 86_400) },
    { last_heard: NOW - 60, protocol: 'meshtastic' },
    { last_heard: NOW - 4_000, protocol: 'meshcore' },
    { last_heard: NOW - 90_000, protocol: 'meshtastic' },
    { last_heard: NOW - (8 * 86_400), protocol: 'meshcore' },
    { last_heard: NOW - (20 * 86_400), protocol: 'meshtastic' },
  ];

  const stats = computeLocalActiveNodeStats(nodes, NOW);

  assert.deepEqual(stats, {
    hour: 1,
    day: 2,
    week: 3,
    month: 5,
    sampled: true,
  });
  assert.equal(stats.hour, 1);
  assert.equal(stats.day, 2);
  assert.equal(stats.week, 3);
  assert.equal(stats.month, 5);
  assert.equal(stats.sampled, true);
  assert.deepEqual(stats.meshcore, { hour: 0, day: 1, week: 1, month: 2 });
  assert.deepEqual(stats.meshtastic, { hour: 1, day: 1, week: 2, month: 3 });
});

test('normaliseActiveNodeStatsPayload validates and normalizes API payload', () => {
@@ -57,17 +57,27 @@ test('normaliseActiveNodeStatsPayload validates and normalizes API payload', ()
    sampled: false,
  };

  assert.deepEqual(normaliseActiveNodeStatsPayload(payload), {
    hour: 11,
    day: 22,
    week: 33,
    month: 44,
    sampled: false,
  });
  const result = normaliseActiveNodeStatsPayload(payload);
  assert.equal(result.hour, 11);
  assert.equal(result.day, 22);
  assert.equal(result.week, 33);
  assert.equal(result.month, 44);
  assert.equal(result.sampled, false);

  assert.equal(normaliseActiveNodeStatsPayload({}), null);
});

test('normaliseActiveNodeStatsPayload includes per-protocol buckets when present', () => {
  const result = normaliseActiveNodeStatsPayload({
    active_nodes: { hour: 10, day: 20, week: 30, month: 40 },
    meshcore: { hour: 3, day: 8, week: 12, month: 15 },
    meshtastic: { hour: 7, day: 12, week: 18, month: 25 },
    sampled: false,
  });
  assert.deepEqual(result.meshcore, { hour: 3, day: 8, week: 12, month: 15 });
  assert.deepEqual(result.meshtastic, { hour: 7, day: 12, week: 18, month: 25 });
});

test('normaliseActiveNodeStatsPayload rejects malformed stat values', () => {
  assert.equal(
    normaliseActiveNodeStatsPayload({ active_nodes: { hour: 'x', day: 1, week: 1, month: 1 } }),
@@ -140,8 +150,8 @@ test('fetchActiveNodeStats reuses cached /api/stats response for repeated calls'

test('fetchActiveNodeStats falls back to local counts when stats fetch fails', async () => {
  const nodes = [
    { last_heard: NOW - 120 },
    { last_heard: NOW - (10 * 86_400) },
    { last_heard: NOW - 120, protocol: 'meshtastic' },
    { last_heard: NOW - (10 * 86_400), protocol: 'meshcore' },
  ];
  const fetchImpl = async () => {
    throw new Error('network down');
@@ -149,13 +159,13 @@ test('fetchActiveNodeStats falls back to local counts when stats fetch fails', a

  const stats = await fetchActiveNodeStats({ nodes, nowSeconds: NOW, fetchImpl });

  assert.deepEqual(stats, {
    hour: 1,
    day: 1,
    week: 1,
    month: 2,
    sampled: true,
  });
  assert.equal(stats.hour, 1);
  assert.equal(stats.day, 1);
  assert.equal(stats.week, 1);
  assert.equal(stats.month, 2);
  assert.equal(stats.sampled, true);
  assert.ok(stats.meshcore != null, 'fallback should include meshcore');
  assert.ok(stats.meshtastic != null, 'fallback should include meshtastic');
});

test('fetchActiveNodeStats falls back to local counts on non-OK HTTP responses', async () => {

@@ -21,6 +21,32 @@ import { setupApp, setupAppWithOptions } from './main-app-test-helpers.js';

const NOW = 1_700_000_000;

// ---------------------------------------------------------------------------
// updateTitleCount
// ---------------------------------------------------------------------------

test('updateTitleCount does not throw when title and header elements are absent', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    assert.doesNotThrow(() => {
      testUtils.updateTitleCount({ hour: 5, day: 20, week: 42, month: 100, sampled: false });
    });
  } finally {
    cleanup();
  }
});

test('updateTitleCount handles null and undefined stats gracefully', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    assert.doesNotThrow(() => testUtils.updateTitleCount(null));
    assert.doesNotThrow(() => testUtils.updateTitleCount(undefined));
    assert.doesNotThrow(() => testUtils.updateTitleCount({}));
  } finally {
    cleanup();
  }
});

// ---------------------------------------------------------------------------
// updateLegendProtocolCounts
// ---------------------------------------------------------------------------
@@ -30,10 +56,11 @@ test('updateLegendProtocolCounts returns early when both count elements are null
  try {
    // Default state: meshcoreCountEl and meshtasticCountEl are null — should not throw.
    assert.doesNotThrow(() => {
      testUtils.updateLegendProtocolCounts(
        [{ last_heard: NOW - 100, protocol: 'meshcore' }],
        NOW,
      );
      testUtils.updateLegendProtocolCounts({
        week: 10,
        meshcore: { hour: 1, day: 2, week: 3, month: 4 },
        meshtastic: { hour: 5, day: 6, week: 7, month: 8 },
      });
    });
  } finally {
    cleanup();
@@ -47,13 +74,11 @@ test('updateLegendProtocolCounts sets per-protocol counts when elements are pres
    const mtEl = { textContent: '' };
    testUtils._setProtocolCountElements(mcEl, mtEl);

    const nodes = [
      { last_heard: NOW - 100, protocol: 'meshcore' },
      { last_heard: NOW - 200, protocol: 'meshcore' },
      { last_heard: NOW - 300, protocol: 'meshtastic' },
      { last_heard: NOW - (8 * 86_400) }, // outside 7-day window, should not count
    ];
    testUtils.updateLegendProtocolCounts(nodes, NOW);
    testUtils.updateLegendProtocolCounts({
      week: 3,
      meshcore: { hour: 1, day: 1, week: 2, month: 3 },
      meshtastic: { hour: 0, day: 1, week: 1, month: 2 },
    });

    assert.equal(mcEl.textContent, ' (2)', 'meshcore count should be 2');
    assert.equal(mtEl.textContent, ' (1)', 'meshtastic count should be 1');
@@ -62,21 +87,18 @@ test('updateLegendProtocolCounts sets per-protocol counts when elements are pres
  }
});

test('updateLegendProtocolCounts bins unknown protocols into the meshtastic column', () => {
test('updateLegendProtocolCounts handles missing per-protocol data gracefully', () => {
  const { testUtils, cleanup } = setupApp();
  try {
    const mcEl = { textContent: '' };
    const mtEl = { textContent: '' };
    testUtils._setProtocolCountElements(mcEl, mtEl);

    const nodes = [
      { last_heard: NOW - 100, protocol: 'reticulum' }, // unknown → meshtastic bucket
      { last_heard: NOW - 200, protocol: 'meshcore' },
    ];
    testUtils.updateLegendProtocolCounts(nodes, NOW);
    // Stats without per-protocol breakdowns (e.g. from an old instance).
    testUtils.updateLegendProtocolCounts({ week: 5 });

    assert.equal(mcEl.textContent, ' (1)');
    assert.equal(mtEl.textContent, ' (1)');
    assert.equal(mcEl.textContent, ' (0)');
    assert.equal(mtEl.textContent, ' (0)');
  } finally {
    cleanup();
  }
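This diff moves `updateLegendProtocolCounts` from counting raw node arrays to reading a pre-computed stats payload, while keeping the ` (N)` legend format. A minimal sketch of the new shape, assuming the week bucket drives the legend and zero is shown when a protocol's bucket is missing (`renderProtocolCounts` is a hypothetical standalone name; the real function lives on the app's test utils):

```javascript
// Sketch only: render per-protocol weekly counts from a stats payload into
// legend count elements, matching the " (N)" format asserted above.
function renderProtocolCounts(stats, meshcoreCountEl, meshtasticCountEl) {
  // Nothing to update when neither legend element exists.
  if (!meshcoreCountEl && !meshtasticCountEl) return;
  // Missing or malformed buckets (e.g. stats from an older instance) read as 0.
  const weekOf = (bucket) =>
    bucket && typeof bucket.week === 'number' ? bucket.week : 0;
  if (meshcoreCountEl) meshcoreCountEl.textContent = ` (${weekOf(stats.meshcore)})`;
  if (meshtasticCountEl) meshtasticCountEl.textContent = ` (${weekOf(stats.meshtastic)})`;
}
```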
@@ -88,10 +110,10 @@ test('updateLegendProtocolCounts works when only meshcoreCountEl is present', ()
|
||||
const mcEl = { textContent: '' };
|
||||
testUtils._setProtocolCountElements(mcEl, null);
|
||||
|
||||
testUtils.updateLegendProtocolCounts(
|
||||
[{ last_heard: NOW - 100, protocol: 'meshcore' }],
|
||||
NOW,
|
||||
);
|
||||
testUtils.updateLegendProtocolCounts({
|
||||
week: 5,
|
||||
meshcore: { hour: 0, day: 0, week: 1, month: 2 },
|
||||
});
|
||||
assert.equal(mcEl.textContent, ' (1)');
|
||||
} finally {
|
||||
cleanup();
|
||||
@@ -104,10 +126,10 @@ test('updateLegendProtocolCounts works when only meshtasticCountEl is present',
|
||||
const mtEl = { textContent: '' };
|
||||
testUtils._setProtocolCountElements(null, mtEl);
|
||||
|
||||
testUtils.updateLegendProtocolCounts(
|
||||
[{ last_heard: NOW - 100, protocol: 'meshtastic' }],
|
||||
NOW,
|
||||
);
|
||||
testUtils.updateLegendProtocolCounts({
|
||||
week: 5,
|
||||
meshtastic: { hour: 0, day: 0, week: 1, month: 2 },
|
||||
});
|
||||
assert.equal(mtEl.textContent, ' (1)');
|
||||
} finally {
|
||||
cleanup();
|
||||
@@ -122,53 +144,107 @@ test('updateFooterStats is a no-op when footerActiveNodes element is absent', ()
|
||||
const { testUtils, cleanup } = setupApp();
|
||||
try {
|
||||
assert.doesNotThrow(() => {
|
||||
testUtils.updateFooterStats([{ last_heard: NOW - 100 }], NOW);
|
||||
testUtils.updateFooterStats({ day: 1, week: 2, month: 3, sampled: false });
|
||||
});
|
||||
} finally {
|
||||
cleanup();
|
||||
}
|
||||
});
|
||||
|
||||
test('updateFooterStats populates the active-stats element when present', async () => {
|
||||
test('updateFooterStats populates the active-stats element when present', () => {
|
||||
const { testUtils, env, cleanup } = setupAppWithOptions({
|
||||
extraElements: ['footerActiveNodes'],
|
||||
});
|
||||
try {
|
||||
const el = env.document.getElementById('footerActiveNodes');
|
||||
testUtils.updateFooterStats([{ last_heard: NOW - 100 }], NOW);
|
||||
|
||||
// Drain the microtask queue so the async .then callback executes.
|
||||
await new Promise(resolve => setImmediate(resolve));
|
||||
testUtils.updateFooterStats({ day: 10, week: 20, month: 30, sampled: false });
|
||||
|
||||
assert.ok(
|
||||
el.textContent.includes('/day'),
|
||||
`expected footerActiveNodes to contain "/day", got: ${el.textContent}`,
|
||||
);
|
||||
assert.ok(
|
||||
el.textContent.includes('10/day'),
|
||||
`expected footerActiveNodes to contain "10/day", got: ${el.textContent}`,
|
||||
);
|
||||
} finally {
|
||||
cleanup();
|
||||
}
|
||||
});
|
||||
|
||||
test('updateFooterStats discards stale responses when a newer request is in flight', async () => {
const { testUtils, env, cleanup } = setupAppWithOptions({
extraElements: ['footerActiveNodes'],
});
try {
const el = env.document.getElementById('footerActiveNodes');

// Fire two sequential updates; only the second should be applied.
testUtils.updateFooterStats([{ last_heard: NOW - 100 }], NOW);
testUtils.updateFooterStats([{ last_heard: NOW - 200 }], NOW);

await new Promise(resolve => setImmediate(resolve));

// Either one or neither result lands; the key invariant is no error thrown
// and the element text is a valid stats string or empty.
const text = el.textContent;
assert.ok(
text === '' || text.includes('/day'),
`unexpected footerActiveNodes content: ${text}`,
);
} finally {
cleanup();
}
});

// ---------------------------------------------------------------------------
// applyProtocolVisibility
// ---------------------------------------------------------------------------

test('applyProtocolVisibility hides meshcore column when meshcore week is 0', () => {
const { testUtils, cleanup } = setupApp();
try {
const mcCol = { style: { display: '' } };
const mtCol = { style: { display: '' } };
testUtils._setProtocolColElements(mcCol, mtCol);

testUtils.applyProtocolVisibility({
meshcore: { hour: 0, day: 0, week: 0, month: 0 },
meshtastic: { hour: 1, day: 5, week: 10, month: 20 },
});

assert.equal(mcCol.style.display, 'none', 'meshcore column should be hidden');
assert.equal(mtCol.style.display, '', 'meshtastic column should remain visible');
} finally {
cleanup();
}
});

test('applyProtocolVisibility hides meshtastic column when meshtastic week is 0', () => {
const { testUtils, cleanup } = setupApp();
try {
const mcCol = { style: { display: '' } };
const mtCol = { style: { display: '' } };
testUtils._setProtocolColElements(mcCol, mtCol);

testUtils.applyProtocolVisibility({
meshcore: { hour: 1, day: 5, week: 10, month: 20 },
meshtastic: { hour: 0, day: 0, week: 0, month: 0 },
});

assert.equal(mcCol.style.display, '', 'meshcore column should remain visible');
assert.equal(mtCol.style.display, 'none', 'meshtastic column should be hidden');
} finally {
cleanup();
}
});

test('applyProtocolVisibility shows both columns when both protocols have active nodes', () => {
const { testUtils, cleanup } = setupApp();
try {
const mcCol = { style: { display: 'none' } };
const mtCol = { style: { display: 'none' } };
testUtils._setProtocolColElements(mcCol, mtCol);

testUtils.applyProtocolVisibility({
meshcore: { hour: 1, day: 2, week: 5, month: 10 },
meshtastic: { hour: 2, day: 3, week: 8, month: 15 },
});

assert.equal(mcCol.style.display, '', 'meshcore column should be visible');
assert.equal(mtCol.style.display, '', 'meshtastic column should be visible');
} finally {
cleanup();
}
});

test('applyProtocolVisibility handles missing per-protocol data gracefully', () => {
const { testUtils, cleanup } = setupApp();
try {
const mcCol = { style: { display: '' } };
const mtCol = { style: { display: '' } };
testUtils._setProtocolColElements(mcCol, mtCol);

// No per-protocol data at all — treat as 0.
testUtils.applyProtocolVisibility({ week: 5 });

assert.equal(mcCol.style.display, 'none');
assert.equal(mtCol.style.display, 'none');
} finally {
cleanup();
}

@@ -109,7 +109,7 @@ test('refreshNodeInformation merges telemetry metrics when the base node lacks t

assert.equal(calls.length, 4);
calls.forEach(call => {
assert.deepEqual(call.options, { cache: 'no-store' });
assert.deepEqual(call.options, { cache: 'default' });
});
});

@@ -919,7 +919,7 @@ test('fetchMessages handles HTTP responses and uses defaults', async () => {
};
const messages = await fetchMessages('!node', { fetchImpl });
assert.equal(messages.length, 1);
assert.equal(calls[0].options.cache, 'no-store');
assert.equal(calls[0].options.cache, 'default');
});

test('fetchMessages returns an empty list when the endpoint is missing', async () => {
@@ -1002,7 +1002,7 @@ test('fetchTracesForNode requests traceroutes for the node', async () => {
const traces = await fetchTracesForNode('!abc', { fetchImpl });
assert.equal(traces.length, 1);
assert.equal(calls[0].url.includes('/api/traces/!abc'), true);
assert.equal(calls[0].options.cache, 'no-store');
assert.equal(calls[0].options.cache, 'default');
});

test('fetchTracesForNode returns empty when identifier is missing', async () => {

@@ -62,35 +62,35 @@ test('render priority uses canonical role keys and defaults to zero for unknowns
});

test('render priority is protocol-aware for shared roles', () => {
// SENSOR: meshtastic=2, meshcore=3
// SENSOR: meshtastic=2, meshcore=9
assert.equal(getRoleRenderPriority('SENSOR', 'meshtastic'), 2);
assert.equal(getRoleRenderPriority('SENSOR', 'meshcore'), 3);
assert.equal(getRoleRenderPriority('SENSOR', 'meshcore'), 9);
assert.ok(getRoleRenderPriority('SENSOR', 'meshcore') > getRoleRenderPriority('SENSOR', 'meshtastic'));
// REPEATER: meshtastic=11, meshcore=12
// REPEATER: meshtastic=11, meshcore=3
assert.equal(getRoleRenderPriority('REPEATER', 'meshtastic'), 11);
assert.equal(getRoleRenderPriority('REPEATER', 'meshcore'), 12);
assert.ok(getRoleRenderPriority('REPEATER', 'meshcore') > getRoleRenderPriority('REPEATER', 'meshtastic'));
assert.equal(getRoleRenderPriority('REPEATER', 'meshcore'), 3);
assert.ok(getRoleRenderPriority('REPEATER', 'meshtastic') > getRoleRenderPriority('REPEATER', 'meshcore'));
});

test('render priority meshcore-exclusive roles have defined priorities', () => {
assert.equal(getRoleRenderPriority('COMPANION', 'meshcore'), 7);
assert.equal(getRoleRenderPriority('ROOM_SERVER', 'meshcore'), 9);
assert.equal(getRoleRenderPriority('COMPANION', 'meshcore'), 12);
assert.equal(getRoleRenderPriority('ROOM_SERVER', 'meshcore'), 7);
});

test('render priority respects the full bottom-to-top order', () => {
const order = [
['CLIENT_HIDDEN', null],
['SENSOR', 'meshtastic'],
['SENSOR', 'meshcore'],
['REPEATER', 'meshcore'],
['TRACKER', null],
['CLIENT_MUTE', null],
['CLIENT', null],
['COMPANION', 'meshcore'],
['CLIENT_BASE', null],
['ROOM_SERVER', 'meshcore'],
['CLIENT_BASE', null],
['SENSOR', 'meshcore'],
['ROUTER_LATE', null],
['REPEATER', 'meshtastic'],
['REPEATER', 'meshcore'],
['COMPANION', 'meshcore'],
['ROUTER', null],
['LOST_AND_FOUND', null],
];

@@ -32,39 +32,41 @@ const NOW = 1_700_000_000;

test('computeLocalActiveNodeStats counts nodes within each window', () => {
const nodes = [
{ last_heard: NOW - 60 }, // within hour, day, week, month
{ last_heard: NOW - 4_000 }, // within day, week, month
{ last_heard: NOW - 90_000 }, // within week, month
{ last_heard: NOW - (8 * 86_400) }, // within month only
{ last_heard: NOW - (20 * 86_400) }, // within month only
{ last_heard: NOW - 60, protocol: 'meshtastic' }, // within hour, day, week, month
{ last_heard: NOW - 4_000, protocol: 'meshcore' }, // within day, week, month
{ last_heard: NOW - 90_000, protocol: 'meshtastic' }, // within week, month
{ last_heard: NOW - (8 * 86_400), protocol: 'meshcore' }, // within month only
{ last_heard: NOW - (20 * 86_400), protocol: 'meshtastic' }, // within month only
];

assert.deepEqual(computeLocalActiveNodeStats(nodes, NOW), {
hour: 1,
day: 2,
week: 3,
month: 5,
sampled: true,
});
const result = computeLocalActiveNodeStats(nodes, NOW);
assert.equal(result.hour, 1);
assert.equal(result.day, 2);
assert.equal(result.week, 3);
assert.equal(result.month, 5);
assert.equal(result.sampled, true);
assert.deepEqual(result.meshcore, { hour: 0, day: 1, week: 1, month: 2 });
assert.deepEqual(result.meshtastic, { hour: 1, day: 1, week: 2, month: 3 });
});

test('computeLocalActiveNodeStats returns zero counts for empty node array', () => {
assert.deepEqual(computeLocalActiveNodeStats([], NOW), {
hour: 0,
day: 0,
week: 0,
month: 0,
sampled: true,
});
const result = computeLocalActiveNodeStats([], NOW);
assert.equal(result.hour, 0);
assert.equal(result.day, 0);
assert.equal(result.week, 0);
assert.equal(result.month, 0);
assert.equal(result.sampled, true);
assert.deepEqual(result.meshcore, { hour: 0, day: 0, week: 0, month: 0 });
assert.deepEqual(result.meshtastic, { hour: 0, day: 0, week: 0, month: 0 });
});

test('computeLocalActiveNodeStats handles non-array nodes gracefully', () => {
assert.deepEqual(computeLocalActiveNodeStats(null, NOW), {
hour: 0, day: 0, week: 0, month: 0, sampled: true,
});
assert.deepEqual(computeLocalActiveNodeStats(undefined, NOW), {
hour: 0, day: 0, week: 0, month: 0, sampled: true,
});
const result = computeLocalActiveNodeStats(null, NOW);
assert.equal(result.hour, 0);
assert.deepEqual(result.meshcore, { hour: 0, day: 0, week: 0, month: 0 });
const result2 = computeLocalActiveNodeStats(undefined, NOW);
assert.equal(result2.hour, 0);
assert.deepEqual(result2.meshcore, { hour: 0, day: 0, week: 0, month: 0 });
});

test('computeLocalActiveNodeStats ignores nodes with missing last_heard', () => {
@@ -74,9 +76,10 @@ test('computeLocalActiveNodeStats ignores nodes with missing last_heard', () =>
{ last_heard: undefined },
{ last_heard: 'not-a-number' },
];
assert.deepEqual(computeLocalActiveNodeStats(nodes, NOW), {
hour: 0, day: 0, week: 0, month: 0, sampled: true,
});
const result = computeLocalActiveNodeStats(nodes, NOW);
assert.equal(result.hour, 0);
assert.deepEqual(result.meshcore, { hour: 0, day: 0, week: 0, month: 0 });
assert.deepEqual(result.meshtastic, { hour: 0, day: 0, week: 0, month: 0 });
});

test('computeLocalActiveNodeStats uses Date.now() when nowSeconds is non-finite', () => {
@@ -84,13 +87,27 @@ test('computeLocalActiveNodeStats uses Date.now() when nowSeconds is non-finite'
const result = computeLocalActiveNodeStats([{ last_heard: Date.now() / 1000 - 60 }], NaN);
assert.equal(typeof result.hour, 'number');
assert.ok(result.hour >= 0);
assert.ok(result.meshcore != null);
});

test('computeLocalActiveNodeStats counts nodes exactly at window boundary', () => {
// A node whose last_heard equals exactly now - 3600 is within the hour window (<=).
const nodes = [{ last_heard: NOW - 3600 }];
const nodes = [{ last_heard: NOW - 3600, protocol: 'meshtastic' }];
const result = computeLocalActiveNodeStats(nodes, NOW);
assert.equal(result.hour, 1);
assert.equal(result.meshtastic.hour, 1);
assert.equal(result.meshcore.hour, 0);
});

test('computeLocalActiveNodeStats bins unknown protocols into meshtastic bucket', () => {
const nodes = [
{ last_heard: NOW - 100, protocol: 'reticulum' },
{ last_heard: NOW - 200, protocol: 'meshcore' },
];
const result = computeLocalActiveNodeStats(nodes, NOW);
assert.equal(result.hour, 2);
assert.equal(result.meshcore.hour, 1);
assert.equal(result.meshtastic.hour, 1);
});

// ---------------------------------------------------------------------------
@@ -98,13 +115,47 @@ test('computeLocalActiveNodeStats counts nodes exactly at window boundary', () =
// ---------------------------------------------------------------------------

test('normaliseActiveNodeStatsPayload validates and normalises API payload', () => {
assert.deepEqual(
normaliseActiveNodeStatsPayload({
active_nodes: { hour: '11', day: 22, week: 33, month: 44 },
sampled: false,
}),
{ hour: 11, day: 22, week: 33, month: 44, sampled: false }
);
const result = normaliseActiveNodeStatsPayload({
active_nodes: { hour: '11', day: 22, week: 33, month: 44 },
sampled: false,
});
assert.equal(result.hour, 11);
assert.equal(result.day, 22);
assert.equal(result.week, 33);
assert.equal(result.month, 44);
assert.equal(result.sampled, false);
});

test('normaliseActiveNodeStatsPayload includes per-protocol buckets when present', () => {
const result = normaliseActiveNodeStatsPayload({
active_nodes: { hour: 10, day: 20, week: 30, month: 40 },
meshcore: { hour: 3, day: 8, week: 12, month: 15 },
meshtastic: { hour: 7, day: 12, week: 18, month: 25 },
sampled: false,
});
assert.deepEqual(result.meshcore, { hour: 3, day: 8, week: 12, month: 15 });
assert.deepEqual(result.meshtastic, { hour: 7, day: 12, week: 18, month: 25 });
});

test('normaliseActiveNodeStatsPayload omits per-protocol buckets when absent', () => {
const result = normaliseActiveNodeStatsPayload({
active_nodes: { hour: 1, day: 2, week: 3, month: 4 },
sampled: false,
});
assert.equal(result.meshcore, undefined);
assert.equal(result.meshtastic, undefined);
});

test('normaliseActiveNodeStatsPayload ignores malformed per-protocol buckets', () => {
const result = normaliseActiveNodeStatsPayload({
active_nodes: { hour: 1, day: 2, week: 3, month: 4 },
meshcore: { hour: 'bad', day: 1, week: 1, month: 1 },
meshtastic: 'not-an-object',
sampled: false,
});
assert.equal(result.hour, 1);
assert.equal(result.meshcore, undefined);
assert.equal(result.meshtastic, undefined);
});

test('normaliseActiveNodeStatsPayload returns null for missing active_nodes', () => {
@@ -157,13 +208,22 @@ test('fetchActiveNodeStats returns remote stats when /api/stats succeeds', async
});

test('fetchActiveNodeStats falls back to local counts on network error', async () => {
const nodes = [{ last_heard: NOW - 120 }, { last_heard: NOW - (10 * 86_400) }];
const nodes = [
{ last_heard: NOW - 120, protocol: 'meshtastic' },
{ last_heard: NOW - (10 * 86_400), protocol: 'meshcore' },
];
const stats = await fetchActiveNodeStats({
nodes,
nowSeconds: NOW,
fetchImpl: async () => { throw new Error('network down'); },
});
assert.deepEqual(stats, { hour: 1, day: 1, week: 1, month: 2, sampled: true });
assert.equal(stats.hour, 1);
assert.equal(stats.day, 1);
assert.equal(stats.week, 1);
assert.equal(stats.month, 2);
assert.equal(stats.sampled, true);
assert.ok(stats.meshcore != null, 'fallback should include meshcore');
assert.ok(stats.meshtastic != null, 'fallback should include meshtastic');
});

test('fetchActiveNodeStats falls back to local counts on non-OK status', async () => {

@@ -179,7 +179,7 @@ export async function fetchAggregatedTelemetry({
const bucketSecondsSafe = bucketSecondsCandidate > 0 ? bucketSecondsCandidate : TELEMETRY_BUCKET_SECONDS;
const response = await fetchFn(
`/api/telemetry/aggregated?windowSeconds=${windowSeconds}&bucketSeconds=${bucketSecondsSafe}`,
{ cache: 'no-store' },
{ cache: 'default' },
);
if (!response.ok) {
throw new Error(`Failed to fetch aggregated telemetry (HTTP ${response.status})`);

@@ -23,6 +23,7 @@ import {
import { resolveLegendVisibility } from './map-legend-visibility.js';
import { mergeConfig } from './settings.js';
import { roleColors } from './role-helpers.js';
import { meshcoreIconHtml, meshtasticIconHtml } from './protocol-helpers.js';

/**
* Escape HTML special characters to prevent XSS.
@@ -393,6 +394,18 @@ export async function initializeFederationPage(options = {}) {
hasValue: hasNumberValue,
defaultDirection: 'desc'
},
meshcoreNodesCount: {
getValue: inst => toFiniteNumber(inst.meshcoreNodesCount),
compare: compareNumber,
hasValue: hasNumberValue,
defaultDirection: 'desc'
},
meshtasticNodesCount: {
getValue: inst => toFiniteNumber(inst.meshtasticNodesCount),
compare: compareNumber,
hasValue: hasNumberValue,
defaultDirection: 'desc'
},
latitude: { getValue: inst => toFiniteNumber(inst.latitude), compare: compareNumber, hasValue: hasNumberValue, defaultDirection: 'asc' },
longitude: { getValue: inst => toFiniteNumber(inst.longitude), compare: compareNumber, hasValue: hasNumberValue, defaultDirection: 'asc' },
lastUpdateTime: {
@@ -478,6 +491,10 @@ export async function initializeFederationPage(options = {}) {
const contactHtml = renderContactHtml(instance.contactLink);
const nodesCountValue = toFiniteNumber(instance.nodesCount ?? instance.nodes_count);
const nodesCountText = nodesCountValue == null ? '<em>—</em>' : escapeHtml(String(nodesCountValue));
const mcNodesVal = toFiniteNumber(instance.meshcoreNodesCount);
const mcNodesText = mcNodesVal == null ? '<em>—</em>' : `${meshcoreIconHtml()} ${escapeHtml(String(mcNodesVal))}`;
const mtNodesVal = toFiniteNumber(instance.meshtasticNodesCount);
const mtNodesText = mtNodesVal == null ? '<em>—</em>' : `${meshtasticIconHtml()} ${escapeHtml(String(mtNodesVal))}`;

tr.innerHTML = `
<td class="instances-col instances-col--name">${nameHtml}</td>
@@ -487,6 +504,8 @@ export async function initializeFederationPage(options = {}) {
<td class="instances-col instances-col--channel">${renderContactHtml(instance.channel) || ''}</td>
<td class="instances-col instances-col--frequency">${escapeHtml(instance.frequency || '')}</td>
<td class="instances-col instances-col--nodes mono">${nodesCountText}</td>
<td class="instances-col instances-col--meshcore-nodes mono">${mcNodesText}</td>
<td class="instances-col instances-col--meshtastic-nodes mono">${mtNodesText}</td>
<td class="instances-col instances-col--latitude mono">${fmtCoords(instance.latitude)}</td>
<td class="instances-col instances-col--longitude mono">${fmtCoords(instance.longitude)}</td>
<td class="instances-col instances-col--last-update mono">${timeAgo(instance.lastUpdateTime, nowSec)}</td>

@@ -0,0 +1,108 @@
/*
 * Copyright © 2025-26 l5yth & contributors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/**
 * Extract the maximum timestamp from an array of API records.
 *
 * Inspects the specified fields on each record and returns the highest
 * value found. Returns 0 when the array is empty or contains no valid
 * timestamps.
 *
 * @param {Array<Object>} records API response rows.
 * @param {Array<string>} [fields] Timestamp field names to inspect.
 * @returns {number} Maximum unix timestamp across all records.
 */
export function maxRecordTimestamp(records, fields = ['rx_time', 'last_heard']) {
  let max = 0;
  if (!Array.isArray(records)) return max;
  for (const record of records) {
    if (!record || typeof record !== 'object') continue;
    for (const field of fields) {
      const val = record[field];
      if (typeof val === 'number' && val > max) max = val;
    }
  }
  return max;
}
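For illustration, a runnable sketch of the helper above. The function body is copied from this diff (`export` dropped so it runs standalone); the sample rows are invented:

```javascript
// Copied from incremental-helpers.js in this diff.
function maxRecordTimestamp(records, fields = ['rx_time', 'last_heard']) {
  let max = 0;
  if (!Array.isArray(records)) return max;
  for (const record of records) {
    if (!record || typeof record !== 'object') continue;
    for (const field of fields) {
      const val = record[field];
      if (typeof val === 'number' && val > max) max = val;
    }
  }
  return max;
}

// Invented sample rows mixing both timestamp fields plus junk entries.
const rows = [
  { rx_time: 1_700_000_100 },
  { last_heard: 1_700_000_250 },
  { rx_time: 'not-a-number' },
  null,
];
console.log(maxRecordTimestamp(rows)); // → 1700000250
console.log(maxRecordTimestamp([]));   // → 0
```

Note that non-numeric values and null records are simply skipped, so a partially malformed API response still yields a usable `since` watermark.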

/**
 * Merge incremental rows into an existing collection, deduplicating by a
 * key field. New rows replace existing entries with the same key.
 *
 * @param {Array<Object>} existing Previous full dataset.
 * @param {Array<Object>} incoming New incremental rows.
 * @param {string} keyField Property used for deduplication.
 * @returns {Array<Object>} Merged array.
 */
export function mergeById(existing, incoming, keyField) {
  if (!incoming || incoming.length === 0) return existing;
  const map = new Map();
  for (const item of existing) {
    const key = item[keyField];
    if (key != null) map.set(key, item);
  }
  for (const item of incoming) {
    const key = item[keyField];
    if (key != null) map.set(key, item);
  }
  return Array.from(map.values());
}
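A usage sketch of `mergeById` (function copied from the diff, node IDs invented). Because `Map.set` replaces the value of an existing key without changing its insertion position, an updated row keeps its place in the merged order:

```javascript
// Copied from incremental-helpers.js in this diff.
function mergeById(existing, incoming, keyField) {
  if (!incoming || incoming.length === 0) return existing;
  const map = new Map();
  for (const item of existing) {
    const key = item[keyField];
    if (key != null) map.set(key, item);
  }
  for (const item of incoming) {
    const key = item[keyField];
    if (key != null) map.set(key, item);
  }
  return Array.from(map.values());
}

const existing = [
  { node_id: '!aaa', last_heard: 100 },
  { node_id: '!bbb', last_heard: 200 },
];
const incoming = [
  { node_id: '!bbb', last_heard: 300 }, // replaces the stale !bbb row
  { node_id: '!ccc', last_heard: 400 }, // brand-new node
];
const merged = mergeById(existing, incoming, 'node_id');
console.log(merged.map(n => n.node_id)); // → [ '!aaa', '!bbb', '!ccc' ]
```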

/**
 * Merge incremental rows using a composite key built from multiple fields.
 *
 * Behaves like {@link mergeById} but joins the values of several fields
 * into a single string key so records with a composite primary key (e.g.
 * ``node_id`` + ``neighbor_id``) are deduplicated correctly.
 *
 * @param {Array<Object>} existing Previous full dataset.
 * @param {Array<Object>} incoming New incremental rows.
 * @param {Array<string>} keyFields Properties whose values form the composite key.
 * @returns {Array<Object>} Merged array.
 */
export function mergeByCompositeKey(existing, incoming, keyFields) {
  if (!incoming || incoming.length === 0) return existing;

  function buildKey(item) {
    return keyFields.map(f => String(item[f] ?? '')).join('\0');
  }

  const map = new Map();
  for (const item of existing) {
    map.set(buildKey(item), item);
  }
  for (const item of incoming) {
    map.set(buildKey(item), item);
  }
  return Array.from(map.values());
}
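A sketch of the composite-key variant (function copied from the diff, neighbor data invented). Deduplicating neighbor links by `node_id` alone would collapse all links from one node; the composite key keeps each `(node_id, neighbor_id)` pair distinct:

```javascript
// Copied from incremental-helpers.js in this diff.
function mergeByCompositeKey(existing, incoming, keyFields) {
  if (!incoming || incoming.length === 0) return existing;

  function buildKey(item) {
    return keyFields.map(f => String(item[f] ?? '')).join('\0');
  }

  const map = new Map();
  for (const item of existing) {
    map.set(buildKey(item), item);
  }
  for (const item of incoming) {
    map.set(buildKey(item), item);
  }
  return Array.from(map.values());
}

// Invented neighbor links keyed by (node_id, neighbor_id).
const links = [
  { node_id: '!aaa', neighbor_id: '!bbb', snr: 5 },
  { node_id: '!aaa', neighbor_id: '!ccc', snr: 7 },
];
const updates = [{ node_id: '!aaa', neighbor_id: '!bbb', snr: 9 }];
const result = mergeByCompositeKey(links, updates, ['node_id', 'neighbor_id']);
console.log(result.length); // → 2
```

The `'\0'` joiner is a separator that cannot appear in normal field values, so `('a', 'bc')` and `('ab', 'c')` never collide.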

/**
 * Trim an array to at most ``limit`` entries, keeping the ones with the
 * highest timestamp value. Prevents unbounded growth from incremental
 * merges over a long-running browser tab.
 *
 * @param {Array<Object>} records Merged record array.
 * @param {number} limit Maximum number of entries to retain.
 * @param {string} [tsField] Timestamp field name used for sorting.
 * @returns {Array<Object>} Trimmed array (may be the same reference if
 *   already within the limit).
 */
export function trimToLimit(records, limit, tsField = 'rx_time') {
  if (!Array.isArray(records) || records.length <= limit) return records;
  const sorted = records.slice().sort((a, b) => (b[tsField] || 0) - (a[tsField] || 0));
  return sorted.slice(0, limit);
}
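A usage sketch of `trimToLimit` (function copied from the diff, sample timestamps invented). Sorting descending by timestamp and slicing keeps only the newest entries, and the untrimmed fast path returns the original array reference unchanged:

```javascript
// Copied from incremental-helpers.js in this diff.
function trimToLimit(records, limit, tsField = 'rx_time') {
  if (!Array.isArray(records) || records.length <= limit) return records;
  const sorted = records.slice().sort((a, b) => (b[tsField] || 0) - (a[tsField] || 0));
  return sorted.slice(0, limit);
}

const msgs = [
  { rx_time: 10 },
  { rx_time: 40 },
  { rx_time: 30 },
  { rx_time: 20 },
];
const trimmed = trimToLimit(msgs, 2);
console.log(trimmed.map(m => m.rx_time)); // → [ 40, 30 ]
console.log(trimToLimit(msgs, 10) === msgs); // → true (fast path, same reference)
```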
|
||||
+321
-102
@@ -92,6 +92,7 @@ import {
|
||||
aggregateTelemetrySnapshots,
|
||||
} from './snapshot-aggregator.js';
|
||||
import { normalizeNodeCollection } from './node-snapshot-normalizer.js';
|
||||
import { maxRecordTimestamp, mergeById, mergeByCompositeKey, trimToLimit } from './incremental-helpers.js';
|
||||
import { buildTraceSegments } from './trace-paths.js';
|
||||
import {
|
||||
getRoleColor,
|
||||
@@ -133,6 +134,9 @@ export function initializeApp(config) {
|
||||
const statusEl = document.getElementById('status');
|
||||
const footerActiveNodes = document.getElementById('footerActiveNodes');
|
||||
const refreshBtn = document.getElementById('refreshBtn');
|
||||
const autorefreshToggle = document.getElementById('autorefreshToggle');
|
||||
const protocolToggleMeshcore = document.getElementById('protocolToggleMeshcore');
|
||||
const protocolToggleMeshtastic = document.getElementById('protocolToggleMeshtastic');
|
||||
const filterInput = document.getElementById('filterInput');
|
||||
const filterClearButton = document.getElementById('filterClear');
|
||||
const shortInfoTemplate = document.getElementById('shortInfoOverlayTemplate');
|
||||
@@ -226,6 +230,18 @@ export function initializeApp(config) {
|
||||
applyNodeFallback: applyNodeNameFallback,
|
||||
logger: console,
|
||||
});
|
||||
// Timestamps of the most recent record seen per data type. Used to pass
|
||||
// the ``since`` query parameter on subsequent refreshes so only new/changed
|
||||
// rows are transferred over the wire.
|
||||
let lastNodeTimestamp = 0;
|
||||
let lastMessageTimestamp = 0;
|
||||
let lastPositionTimestamp = 0;
|
||||
let lastTelemetryTimestamp = 0;
|
||||
let lastNeighborTimestamp = 0;
|
||||
let lastTraceTimestamp = 0;
|
||||
/** Whether the very first full fetch has completed. */
|
||||
let initialFetchDone = false;
|
||||
|
||||
const NODE_LIMIT = 1000;
|
||||
const TRACE_LIMIT = 200;
|
||||
const TRACE_MAX_AGE_SECONDS = 28 * 24 * 60 * 60;
|
||||
@@ -248,6 +264,7 @@ export function initializeApp(config) {
|
||||
|
||||
/** @type {ReturnType<typeof setTimeout>|null} */
|
||||
let refreshTimer = null;
|
||||
let autorefreshPaused = false;
|
||||
let activeStatsRequestId = 0;
|
||||
|
||||
/**
|
||||
@@ -453,7 +470,9 @@ export function initializeApp(config) {
|
||||
}
|
||||
// Only arm the timer when a positive interval is configured; a zero or
|
||||
// negative value means auto-refresh is intentionally disabled.
|
||||
if (REFRESH_MS > 0) {
|
||||
// When the user has explicitly paused auto-refresh, skip arming the timer
|
||||
// entirely so no background API requests are made.
|
||||
if (REFRESH_MS > 0 && !autorefreshPaused) {
|
||||
refreshTimer = setInterval(refresh, REFRESH_MS);
|
||||
}
|
||||
}
|
||||
@@ -772,7 +791,7 @@ export function initializeApp(config) {
|
||||
});
|
||||
}
|
||||
|
||||
/** @type {Set<string>} Active compound role-filter keys, each ``"<protocol>:<roleKey>"``. */
|
||||
/** @type {Set<string>} Hidden role compound keys — roles in this set are excluded from display. */
|
||||
const activeRoleFilters = new Set();
|
||||
/** @type {Map<string, HTMLElement>} Compound key → legend button element. */
|
||||
const legendRoleButtons = new Map();
|
||||
@@ -827,8 +846,7 @@ export function initializeApp(config) {
|
||||
return `${normalizeFilterProtocol(protocol)}:${getRoleKey(role)}`;
|
||||
}
|
||||
|
||||
/** @type {Readonly<Record<string,string>>} Display names for protocol tokens. */
|
||||
const PROTOCOL_DISPLAY_NAMES = Object.freeze({ meshtastic: 'Meshtastic', meshcore: 'MeshCore' });
|
||||
|
||||
|
||||
/**
|
||||
* Lazily create the floating map status element used for progress messages.
|
||||
@@ -1314,6 +1332,8 @@ export function initializeApp(config) {
|
||||
let legendToggleControl = null;
|
||||
let meshcoreCountEl = null;
|
||||
let meshtasticCountEl = null;
|
||||
let meshcoreColEl = null;
|
||||
let meshtasticColEl = null;
|
||||
let legendToggleButton = null;
|
||||
let legendVisible = true;
|
||||
|
||||
@@ -1463,18 +1483,14 @@ export function initializeApp(config) {
|
||||
*/
|
||||
function updateLegendRoleFiltersUI() {
|
||||
const hasFilters = activeRoleFilters.size > 0;
|
||||
// legendRoleButtons is keyed by compound key ("protocol:roleKey")
|
||||
// activeRoleFilters is a *hidden-roles* set: roles present in the set are
|
||||
// hidden. Buttons show aria-pressed="true" when the role is *visible*
|
||||
// (i.e. NOT in the hidden set) so that the default all-visible state
|
||||
// highlights every button.
|
||||
legendRoleButtons.forEach((button, compoundKey) => {
|
||||
if (!button) return;
|
||||
const isActive = activeRoleFilters.has(compoundKey);
|
||||
button.setAttribute('aria-pressed', isActive ? 'true' : 'false');
|
||||
});
|
||||
legendProtocolButtons.forEach((button, protocol) => {
|
||||
if (!button) return;
|
||||
const isHidden = hiddenProtocols.has(protocol);
|
||||
const displayName = PROTOCOL_DISPLAY_NAMES[protocol] ?? protocol;
|
||||
button.setAttribute('aria-pressed', isHidden ? 'true' : 'false');
|
||||
button.textContent = isHidden ? `Show ${displayName}` : `Hide ${displayName}`;
|
||||
const isHidden = activeRoleFilters.has(compoundKey);
|
||||
button.setAttribute('aria-pressed', isHidden ? 'false' : 'true');
|
||||
});
|
||||
if (legendContainer) {
|
||||
if (hasFilters || hiddenProtocols.size > 0) {
|
||||
@@ -1483,9 +1499,37 @@ export function initializeApp(config) {
|
||||
legendContainer.removeAttribute('data-has-active-filters');
|
||||
}
|
||||
}
|
||||
updateMetaProtocolToggleUI();
|
||||
updateLegendToggleState();
|
||||
}
|
||||
|
||||
/**
|
||||
* Sync the meta-row protocol toggle buttons with the current
|
||||
* {@link hiddenProtocols} state.
|
||||
*
|
||||
* When a protocol is hidden the button's ``<img>`` receives a greyscale
|
||||
* filter and ``aria-pressed`` is set to ``"true"``.
|
||||
*
|
||||
* @returns {void}
|
||||
*/
|
||||
function updateMetaProtocolToggleUI() {
|
||||
/** @type {Array<{btn: HTMLElement|null, protocol: string, name: string}>} */
|
||||
const toggles = [
|
||||
{ btn: protocolToggleMeshcore, protocol: 'meshcore', name: 'MeshCore' },
|
||||
{ btn: protocolToggleMeshtastic, protocol: 'meshtastic', name: 'Meshtastic' },
|
||||
];
|
||||
toggles.forEach(({ btn, protocol, name }) => {
|
||||
if (!btn) return;
|
||||
const isHidden = hiddenProtocols.has(protocol);
|
||||
btn.setAttribute('aria-pressed', isHidden ? 'true' : 'false');
|
||||
 btn.setAttribute('aria-label', isHidden ? `Show ${name} nodes` : `Hide ${name} nodes`);
 const img = btn.querySelector('.protocol-toggle-icon');
 if (img) {
 img.style.filter = isHidden ? 'grayscale(1) opacity(0.4)' : '';
 }
 });
 }

 /**
 * Toggle the visibility filter for a role+protocol combination.
 *
@@ -1523,7 +1567,7 @@ export function initializeApp(config) {
 item.className = 'legend-item';
 colEl.appendChild(item);
 item.type = 'button';
-item.setAttribute('aria-pressed', 'false');
+item.setAttribute('aria-pressed', 'true');
 item.dataset.role = role;
 item.dataset.protocol = protocol;
 const swatch = document.createElement('span');
@@ -1538,6 +1582,7 @@ export function initializeApp(config) {
 item.addEventListener('click', legendClickHandler(event => {
 const exclusive = event.metaKey || event.ctrlKey;
 if (exclusive) {
+// Ctrl/Cmd+Click: hide only this role (all others become visible).
 activeRoleFilters.clear();
 activeRoleFilters.add(compoundKey);
 updateLegendRoleFiltersUI();
@@ -1588,6 +1633,7 @@ export function initializeApp(config) {

 // --- MeshCore column (left, bottom-aligned) ---
 const meshcoreCol = L.DomUtil.create('div', 'legend-column legend-column--bottom', itemsContainer);
+meshcoreColEl = meshcoreCol;
 const meshcoreColHeader = L.DomUtil.create('div', 'legend-column-header', meshcoreCol);
 meshcoreColHeader.appendChild(buildMeshcoreIconImg());
 const meshcoreColTitle = document.createElement('span');
@@ -1599,6 +1645,7 @@ export function initializeApp(config) {

 // --- Meshtastic column (right) ---
 const meshtasticCol = L.DomUtil.create('div', 'legend-column', itemsContainer);
+meshtasticColEl = meshtasticCol;
 const meshtasticColHeader = L.DomUtil.create('div', 'legend-column-header', meshtasticCol);
 meshtasticColHeader.appendChild(buildMeshtasticIconImg());
 const meshtasticColTitle = document.createElement('span');
@@ -1612,28 +1659,7 @@ export function initializeApp(config) {
 buildRoleButtons(meshcoreCol, meshcoreRoleColors, 'meshcore');
 buildRoleButtons(meshtasticCol, roleColors, 'meshtastic');

-// --- MeshCore column footer: protocol hide toggle ---
-legendProtocolButtons.clear();
-const buildProtocolToggle = (protocol, col) => {
-const displayName = PROTOCOL_DISPLAY_NAMES[protocol] ?? protocol;
-const btn = L.DomUtil.create('button', 'legend-item legend-protocol-toggle', col);
-btn.type = 'button';
-btn.setAttribute('aria-pressed', 'false');
-btn.textContent = `Hide ${displayName}`;
-btn.addEventListener('click', legendClickHandler(() => {
-if (hiddenProtocols.has(protocol)) {
-hiddenProtocols.delete(protocol);
-} else {
-hiddenProtocols.add(protocol);
-}
-updateLegendRoleFiltersUI();
-applyFilter();
-}));
-legendProtocolButtons.set(protocol, btn);
-};
-buildProtocolToggle('meshcore', meshcoreCol);
-
-// --- Meshtastic column: line toggles then protocol hide toggle at bottom ---
+// --- Meshtastic column: line toggles at bottom ---
 neighborLinesToggleButton = L.DomUtil.create('button', 'legend-item legend-toggle-neighbors', meshtasticCol);
 neighborLinesToggleButton.type = 'button';
 neighborLinesToggleButton.addEventListener('click', legendClickHandler(() => {
@@ -1648,9 +1674,6 @@ export function initializeApp(config) {
 }));
 updateTraceLinesToggleState();

-// Hide Meshtastic toggle at the very bottom of the Meshtastic column.
-buildProtocolToggle('meshtastic', meshtasticCol);
-
 updateLegendRoleFiltersUI();

 // --- Clear filters — full-width below the two columns ---
@@ -3249,8 +3272,24 @@ export function initializeApp(config) {
 });

 const enrichedLogEntries = attachNodeContextToLogEntries(logEntries);
+// When a protocol is hidden, exclude its entries from the chat display.
+// Entries without a resolved node are kept; entries with a node but a
+// null/missing protocol are treated as meshtastic (the default protocol).
+const protocolVisibleEntries = hiddenProtocols.size > 0
+? enrichedLogEntries.filter(e => {
+if (!e || !e.node) return true;
+const proto = normalizeFilterProtocol(e.node.protocol);
+return !hiddenProtocols.has(proto);
+})
+: enrichedLogEntries;
+const protocolVisibleChannels = hiddenProtocols.size > 0
+? channels.filter(ch => {
+const proto = ch.protocol ? normalizeFilterProtocol(ch.protocol) : null;
+return !proto || !hiddenProtocols.has(proto);
+})
+: channels;
 const { logEntries: filteredLogEntries, channels: filteredChannels } = filterChatModel(
-{ logEntries: enrichedLogEntries, channels },
+{ logEntries: protocolVisibleEntries, channels: protocolVisibleChannels },
 filterQuery
 );

@@ -3571,11 +3610,14 @@ export function initializeApp(config) {
 * Fetch the latest nodes from the JSON API.
 *
 * @param {number} [limit=NODE_LIMIT] Maximum number of records.
+* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
 * @returns {Promise<Array<Object>>} Parsed node payloads.
 */
-async function fetchNodes(limit = NODE_LIMIT) {
+async function fetchNodes(limit = NODE_LIMIT, since = 0) {
 const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
-const r = await fetch(`/api/nodes?limit=${effectiveLimit}`, { cache: 'no-store' });
+let url = `/api/nodes?limit=${effectiveLimit}`;
+if (since > 0) url += `&since=${since}`;
+const r = await fetch(url, { cache: 'default' });
 if (!r.ok) throw new Error('HTTP ' + r.status);
 return r.json();
 }
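The incremental-fetch pattern introduced above (a `since` parameter added only on refreshes, with a one-second overlap so boundary rows are not missed) can be sketched in isolation. `buildIncrementalUrl` is a hypothetical helper for illustration, not a function from the codebase:

```javascript
// Hypothetical sketch of the incremental-fetch URL pattern used by
// fetchNodes/fetchNeighbors/etc. in the diff above: on the first load the
// full dataset is requested; later refreshes pass `since` with a 1-second
// overlap so rows arriving right at the boundary are not missed.
function buildIncrementalUrl(base, limit, lastTimestamp, initialFetchDone) {
  const since = initialFetchDone ? Math.max(0, lastTimestamp - 1) : 0;
  let url = `${base}?limit=${limit}`;
  if (since > 0) url += `&since=${since}`;
  return url;
}

const firstLoad = buildIncrementalUrl('/api/nodes', 500, 0, false);
// → '/api/nodes?limit=500' (no since parameter on the initial full fetch)
const refresh = buildIncrementalUrl('/api/nodes', 500, 1700000000, true);
// → '/api/nodes?limit=500&since=1699999999' (1-second overlap applied)
```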
@@ -3590,7 +3632,7 @@ export function initializeApp(config) {
 if (typeof nodeId !== 'string') return null;
 const trimmed = nodeId.trim();
 if (trimmed.length === 0) return null;
-const r = await fetch(`/api/nodes/${encodeURIComponent(trimmed)}`, { cache: 'no-store' });
+const r = await fetch(`/api/nodes/${encodeURIComponent(trimmed)}`, { cache: 'default' });
 if (r.status === 404) return null;
 if (!r.ok) throw new Error('HTTP ' + r.status);
 return r.json();
@@ -3600,7 +3642,7 @@ export function initializeApp(config) {
 * Fetch recent messages from the JSON API.
 *
 * @param {number} [limit=NODE_LIMIT] Maximum number of rows.
-* @param {{ encrypted?: boolean }} [options] Optional retrieval flags.
+* @param {{ encrypted?: boolean, since?: number }} [options] Optional retrieval flags.
 * @returns {Promise<Array<Object>>} Parsed message payloads.
 */
 async function fetchMessages(limit = MESSAGE_LIMIT, options = {}) {
@@ -3610,8 +3652,11 @@ export function initializeApp(config) {
 if (options && options.encrypted) {
 params.set('encrypted', 'true');
 }
+if (options && options.since > 0) {
+params.set('since', String(options.since));
+}
 const query = params.toString();
-const r = await fetch(`/api/messages?${query}`, { cache: 'no-store' });
+const r = await fetch(`/api/messages?${query}`, { cache: 'default' });
 if (!r.ok) throw new Error('HTTP ' + r.status);
 return r.json();
 }
@@ -3620,11 +3665,14 @@ export function initializeApp(config) {
 * Fetch neighbour information from the JSON API.
 *
 * @param {number} [limit=NODE_LIMIT] Maximum number of rows.
+* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
 * @returns {Promise<Array<Object>>} Parsed neighbour payloads.
 */
-async function fetchNeighbors(limit = NODE_LIMIT) {
+async function fetchNeighbors(limit = NODE_LIMIT, since = 0) {
 const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
-const r = await fetch(`/api/neighbors?limit=${effectiveLimit}`, { cache: 'no-store' });
+let url = `/api/neighbors?limit=${effectiveLimit}`;
+if (since > 0) url += `&since=${since}`;
+const r = await fetch(url, { cache: 'default' });
 if (!r.ok) throw new Error('HTTP ' + r.status);
 return r.json();
 }
@@ -3633,12 +3681,15 @@ export function initializeApp(config) {
 * Fetch traceroute observations from the JSON API.
 *
 * @param {number} [limit=TRACE_LIMIT] Maximum number of records.
+* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
 * @returns {Promise<Array<Object>>} Parsed trace payloads.
 */
-async function fetchTraces(limit = TRACE_LIMIT) {
+async function fetchTraces(limit = TRACE_LIMIT, since = 0) {
 const safeLimit = Number.isFinite(limit) && limit > 0 ? Math.floor(limit) : TRACE_LIMIT;
 const effectiveLimit = Math.min(safeLimit, NODE_LIMIT);
-const r = await fetch(`/api/traces?limit=${effectiveLimit}`, { cache: 'no-store' });
+let url = `/api/traces?limit=${effectiveLimit}`;
+if (since > 0) url += `&since=${since}`;
+const r = await fetch(url, { cache: 'default' });
 if (!r.ok) throw new Error('HTTP ' + r.status);
 const traces = await r.json();
 return filterRecentTraces(traces, TRACE_MAX_AGE_SECONDS);
@@ -3648,11 +3699,14 @@ export function initializeApp(config) {
 * Fetch telemetry entries from the JSON API.
 *
 * @param {number} [limit=NODE_LIMIT] Maximum number of rows.
+* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
 * @returns {Promise<Array<Object>>} Parsed telemetry payloads.
 */
-async function fetchTelemetry(limit = NODE_LIMIT) {
+async function fetchTelemetry(limit = NODE_LIMIT, since = 0) {
 const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
-const r = await fetch(`/api/telemetry?limit=${effectiveLimit}`, { cache: 'no-store' });
+let url = `/api/telemetry?limit=${effectiveLimit}`;
+if (since > 0) url += `&since=${since}`;
+const r = await fetch(url, { cache: 'default' });
 if (!r.ok) throw new Error('HTTP ' + r.status);
 return r.json();
 }
@@ -3661,11 +3715,14 @@ export function initializeApp(config) {
 * Fetch position packets from the JSON API.
 *
 * @param {number} [limit=NODE_LIMIT] Maximum number of rows.
+* @param {number} [since=0] Unix timestamp; only rows newer than this are returned.
 * @returns {Promise<Array<Object>>} Parsed position payloads.
 */
-async function fetchPositions(limit = NODE_LIMIT) {
+async function fetchPositions(limit = NODE_LIMIT, since = 0) {
 const effectiveLimit = resolveSnapshotLimit(limit, NODE_LIMIT);
-const r = await fetch(`/api/positions?limit=${effectiveLimit}`, { cache: 'no-store' });
+let url = `/api/positions?limit=${effectiveLimit}`;
+if (since > 0) url += `&since=${since}`;
+const r = await fetch(url, { cache: 'default' });
 if (!r.ok) throw new Error('HTTP ' + r.status);
 return r.json();
 }
@@ -3682,6 +3739,7 @@ export function initializeApp(config) {
 return Number.isFinite(num) ? num : null;
 }

+
 /**
 * Determine the best-effort timestamp in seconds from numeric or ISO values.
 *
@@ -4319,7 +4377,7 @@ export function initializeApp(config) {
 function matchesRoleFilter(node) {
 if (!activeRoleFilters.size) return true;
 const compoundKey = makeRoleFilterKey(node && node.role, node && node.protocol);
-return activeRoleFilters.has(compoundKey);
+return !activeRoleFilters.has(compoundKey);
 }

 /**
@@ -4350,6 +4408,31 @@ export function initializeApp(config) {
 filterClearButton.hidden = !hasValue;
 }

+/**
+* Return a copy of the stats object with totals reduced by the counts of
+* any protocols the user has explicitly hidden.
+*
+* Per-protocol sub-objects are left untouched so legend column counts and
+* visibility decisions still use the raw server values.
+*
+* @param {Object|null} stats Normalised stats from ``/api/stats``.
+* @returns {Object|null} Adjusted stats (new object) or the original if nothing is hidden.
+*/
+function adjustStatsForHiddenProtocols(stats) {
+if (!hiddenProtocols.size || !stats) return stats;
+const adjusted = { ...stats };
+for (const protocol of hiddenProtocols) {
+const bucket = stats[protocol];
+if (!bucket || typeof bucket !== 'object') continue;
+for (const key of ['hour', 'day', 'week', 'month']) {
+if (typeof adjusted[key] === 'number' && typeof bucket[key] === 'number') {
+adjusted[key] = Math.max(0, adjusted[key] - bucket[key]);
+}
+}
+}
+return adjusted;
+}
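The subtraction logic of `adjustStatsForHiddenProtocols` can be exercised standalone. The sketch below copies its body into a free function (`adjustStats`, with the hidden set passed as a parameter instead of the module-level `hiddenProtocols`) so the behaviour is easy to check:

```javascript
// Standalone copy of the adjustment above: each hidden protocol's per-window
// counts are deducted from the totals, clamped at zero, while the per-protocol
// sub-objects (used by legend counts) are left untouched.
function adjustStats(stats, hiddenProtocols) {
  if (!hiddenProtocols.size || !stats) return stats;
  const adjusted = { ...stats };
  for (const protocol of hiddenProtocols) {
    const bucket = stats[protocol];
    if (!bucket || typeof bucket !== 'object') continue;
    for (const key of ['hour', 'day', 'week', 'month']) {
      if (typeof adjusted[key] === 'number' && typeof bucket[key] === 'number') {
        adjusted[key] = Math.max(0, adjusted[key] - bucket[key]);
      }
    }
  }
  return adjusted;
}

const stats = { week: 10, meshcore: { week: 3 }, meshtastic: { week: 7 } };
const visible = adjustStats(stats, new Set(['meshcore']));
// visible.week === 7, while visible.meshcore.week is still 3
```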

 /**
 * Apply text and role filters to the node list and re-render outputs.
 *
@@ -4369,11 +4452,21 @@ export function initializeApp(config) {
 const nowSec = Date.now()/1000;
 renderTable(sortedNodes, nowSec);
 renderMap(sortedNodes, nowSec);
-// Title and legend counts are intentionally global — they reflect the whole
-// network, not just the nodes visible under the current filter.
-updateTitleCount(allNodes, nowSec);
-updateLegendProtocolCounts(allNodes, nowSec);
-updateFooterStats(sortedNodes, nowSec);
+// Show an immediate local estimate for the title so it doesn't flicker
+// to (0) while waiting for the async /api/stats response.
+const localStats = computeLocalActiveNodeStats(allNodes, nowSec);
+updateTitleCount(adjustStatsForHiddenProtocols(localStats));
+// Title, legend, footer, and visibility are then corrected by /api/stats
+// which provides the authoritative, uncapped counts.
+const statsRequestId = ++activeStatsRequestId;
+void fetchActiveNodeStats({ nodes: allNodes, nowSeconds: nowSec }).then(stats => {
+if (statsRequestId !== activeStatsRequestId) return;
+const visibleStats = adjustStatsForHiddenProtocols(stats);
+updateTitleCount(visibleStats);
+updateLegendProtocolCounts(stats);
+updateFooterStats(visibleStats);
+applyProtocolVisibility(stats);
+});
 updateSortIndicators();
 // Pass the raw filterQuery (not the normalised form) so the chat log can
 // highlight matching substrings in their original case.
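The `statsRequestId !== activeStatsRequestId` check above is a request-id guard: only the most recently issued `/api/stats` request may update the UI, so a slow earlier response cannot overwrite a newer one. A minimal sketch with hypothetical names (`applyStatsResponse`, `renderedStats`):

```javascript
// Request-id guard sketch: each call claims a fresh id; when a response
// arrives, it is applied only if no newer request has been issued since.
let activeRequestId = 0;
let renderedStats = null;

function applyStatsResponse(fetchStats) {
  const requestId = ++activeRequestId; // claim the latest slot
  return fetchStats().then(stats => {
    if (requestId !== activeRequestId) return; // stale: a newer request superseded us
    renderedStats = stats;
  });
}
```

Even if the first (slow) request resolves after the second (fast) one, the fast request's result wins.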
@@ -4425,49 +4518,102 @@ export function initializeApp(config) {
 if (statusEl) {
 statusEl.textContent = 'refreshing…';
 }
+// On the first load fetch the full dataset; subsequent refreshes pass
+// the ``since`` timestamp so only new/changed rows are transferred.
+// A 1-second overlap avoids missing rows that arrive at the boundary.
+const useSince = initialFetchDone;
+const nodeSince = useSince ? Math.max(0, lastNodeTimestamp - 1) : 0;
+const msgSince = useSince ? Math.max(0, lastMessageTimestamp - 1) : 0;
+const posSince = useSince ? Math.max(0, lastPositionTimestamp - 1) : 0;
+const telSince = useSince ? Math.max(0, lastTelemetryTimestamp - 1) : 0;
+const nbSince = useSince ? Math.max(0, lastNeighborTimestamp - 1) : 0;
+const trSince = useSince ? Math.max(0, lastTraceTimestamp - 1) : 0;
+
 // Secondary fetches are fire-and-forget with individual error handlers so
 // that a failure in one stream (e.g. telemetry) does not abort the whole
 // refresh cycle. Each promise resolves to an empty array on error, which
 // preserves the previous data until the next successful fetch.
-const neighborPromise = fetchNeighbors().catch(err => {
+const neighborPromise = fetchNeighbors(NODE_LIMIT, nbSince).catch(err => {
 console.warn('neighbor refresh failed; continuing without connections', err);
 return [];
 });
-const telemetryPromise = fetchTelemetry().catch(err => {
+const telemetryPromise = fetchTelemetry(NODE_LIMIT, telSince).catch(err => {
 console.warn('telemetry refresh failed; continuing without telemetry', err);
 return [];
 });
-const positionsPromise = fetchPositions().catch(err => {
+const positionsPromise = fetchPositions(NODE_LIMIT, posSince).catch(err => {
 console.warn('position refresh failed; continuing without updates', err);
 return [];
 });
-const tracesPromise = fetchTraces().catch(err => {
+const tracesPromise = fetchTraces(TRACE_LIMIT, trSince).catch(err => {
 console.warn('trace refresh failed; continuing without traceroutes', err);
 return [];
 });
-const encryptedMessagesPromise = fetchMessages(MESSAGE_LIMIT, { encrypted: true }).catch(err => {
+const encryptedMessagesPromise = fetchMessages(MESSAGE_LIMIT, { encrypted: true, since: msgSince }).catch(err => {
 console.warn('encrypted message refresh failed; continuing without encrypted entries', err);
 return [];
 });
 // Fan-out all requests simultaneously; nodes are the primary resource and
 // must succeed for rendering to proceed.
 const [
-nodes,
-positions,
-neighborTuples,
-traceEntries,
-messages,
-telemetryEntries,
-encryptedMessages
+incomingNodes,
+incomingPositions,
+incomingNeighbors,
+incomingTraces,
+incomingMessages,
+incomingTelemetry,
+incomingEncryptedMessages
 ] = await Promise.all([
-fetchNodes(),
+fetchNodes(NODE_LIMIT, nodeSince),
 positionsPromise,
 neighborPromise,
 tracesPromise,
-fetchMessages(MESSAGE_LIMIT),
+fetchMessages(MESSAGE_LIMIT, { since: msgSince }),
 telemetryPromise,
 encryptedMessagesPromise
 ]);

+// Update high-water marks for incremental fetching.
+const incomingNodeTs = maxRecordTimestamp(incomingNodes, ['last_heard']);
+const incomingMsgTs = maxRecordTimestamp(incomingMessages, ['rx_time']);
+const incomingEncMsgTs = maxRecordTimestamp(incomingEncryptedMessages, ['rx_time']);
+const incomingPosTs = maxRecordTimestamp(incomingPositions, ['rx_time', 'position_time']);
+const incomingTelTs = maxRecordTimestamp(incomingTelemetry, ['rx_time', 'telemetry_time']);
+const incomingNbTs = maxRecordTimestamp(incomingNeighbors, ['rx_time']);
+const incomingTrTs = maxRecordTimestamp(incomingTraces, ['rx_time']);
+if (incomingNodeTs > lastNodeTimestamp) lastNodeTimestamp = incomingNodeTs;
+const latestMsgTs = Math.max(incomingMsgTs, incomingEncMsgTs);
+if (latestMsgTs > lastMessageTimestamp) lastMessageTimestamp = latestMsgTs;
+if (incomingPosTs > lastPositionTimestamp) lastPositionTimestamp = incomingPosTs;
+if (incomingTelTs > lastTelemetryTimestamp) lastTelemetryTimestamp = incomingTelTs;
+if (incomingNbTs > lastNeighborTimestamp) lastNeighborTimestamp = incomingNbTs;
+if (incomingTrTs > lastTraceTimestamp) lastTraceTimestamp = incomingTrTs;
+
-// Merge incremental results with existing data. On first load the
-// existing arrays are empty so the merge is effectively a no-op.
+// Merge incremental results with existing data then trim to the
+// configured limits so long-running tabs do not accumulate stale
+// entries beyond what the server would return on a fresh fetch.
+const nodes = useSince ? mergeById(allNodes, incomingNodes, 'node_id') : incomingNodes;
+const positions = useSince
+? trimToLimit(mergeById(allPositionEntries, incomingPositions, 'id'), NODE_LIMIT)
+: incomingPositions;
+const neighborTuples = useSince
+? mergeByCompositeKey(allNeighbors, incomingNeighbors, ['node_id', 'neighbor_id'])
+: incomingNeighbors;
+const telemetryEntries = useSince
+? trimToLimit(mergeById(allTelemetryEntries, incomingTelemetry, 'id'), NODE_LIMIT)
+: incomingTelemetry;
+const traceEntries = useSince
+? trimToLimit(mergeById(allTraces, incomingTraces, 'id'), TRACE_LIMIT)
+: incomingTraces;
+const messages = useSince
+? trimToLimit(mergeById(allMessages, incomingMessages, 'id'), MESSAGE_LIMIT)
+: incomingMessages;
+const encryptedMessages = useSince
+? trimToLimit(mergeById(allEncryptedMessages, incomingEncryptedMessages, 'id'), MESSAGE_LIMIT)
+: incomingEncryptedMessages;
+
 // Collapse per-source snapshot arrays into single merged records; the
 // snapshot window de-duplicates entries from multiple ingestors.
 const aggregatedNodes = aggregateNodeSnapshots(nodes);
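`mergeById` and `trimToLimit` are called above but not defined in this diff; a plausible sketch consistent with how they are used (incoming records win on key conflicts, merged lists are capped at the configured limit) might look like this. Treat the exact semantics, especially which end `trimToLimit` keeps, as an assumption:

```javascript
// Assumed sketch of the merge helpers: later (incoming) records replace
// existing records with the same key.
function mergeById(existing, incoming, key) {
  const byId = new Map(existing.map(r => [r[key], r]));
  for (const r of incoming) byId.set(r[key], r); // incoming wins on conflict
  return [...byId.values()];
}

// Assumed: keep only the last `limit` entries of the merged array.
function trimToLimit(records, limit) {
  return records.slice(-limit);
}

const merged = mergeById(
  [{ id: 1, v: 'a' }, { id: 2, v: 'b' }],
  [{ id: 2, v: 'c' }, { id: 3, v: 'd' }],
  'id'
);
// merged has 3 records; id 2 now carries the incoming value 'c'
```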
@@ -4497,6 +4643,7 @@ export function initializeApp(config) {
 allPositionEntries = aggregatedPositions;
 allNeighbors = aggregatedNeighbors;
 allTraces = Array.isArray(traceEntries) ? traceEntries : [];
+initialFetchDone = true;
 applyFilter();
 if (statusEl) {
 statusEl.textContent = 'updated ' + new Date().toLocaleTimeString();
@@ -4519,18 +4666,64 @@ export function initializeApp(config) {
 refreshBtn.addEventListener('click', refresh);
 }

+// --- Auto-refresh play/pause toggle ---
+if (autorefreshToggle) {
+autorefreshToggle.addEventListener('click', () => {
+autorefreshPaused = !autorefreshPaused;
+if (autorefreshPaused) {
+if (refreshTimer) {
+clearInterval(refreshTimer);
+refreshTimer = null;
+}
+autorefreshToggle.textContent = '\u25B6';
+autorefreshToggle.setAttribute('aria-label', 'Resume auto-refresh');
+autorefreshToggle.setAttribute('aria-pressed', 'true');
+if (statusEl) statusEl.textContent = 'Refresh paused.';
+} else {
+autorefreshToggle.textContent = '\u23F8';
+autorefreshToggle.setAttribute('aria-label', 'Pause auto-refresh');
+autorefreshToggle.setAttribute('aria-pressed', 'false');
+refresh();
+restartAutoRefresh();
+}
+});
+}
+
+// --- Meta-row protocol toggle buttons ---
+/**
+* Wire a meta-row protocol toggle button to the shared
+* {@link hiddenProtocols} set.
+*
+* @param {HTMLElement|null} btn Button element.
+* @param {string} protocol Protocol token (``'meshcore'`` or ``'meshtastic'``).
+* @returns {void}
+*/
+function setupMetaProtocolToggle(btn, protocol) {
+if (!btn) return;
+btn.addEventListener('click', () => {
+if (hiddenProtocols.has(protocol)) {
+hiddenProtocols.delete(protocol);
+} else {
+hiddenProtocols.add(protocol);
+}
+updateMetaProtocolToggleUI();
+updateLegendRoleFiltersUI();
+applyFilter();
+});
+}
+setupMetaProtocolToggle(protocolToggleMeshcore, 'meshcore');
+setupMetaProtocolToggle(protocolToggleMeshtastic, 'meshtastic');
+
 /**
 * Update the page/tab title with the total active-node count for the past 7 days.
 *
-* @param {Array<Object>} nodes All node payloads (unfiltered — counts are global).
-* @param {number} nowSec Reference timestamp.
+* @param {{week: number}} stats Active-node stats from /api/stats.
 * @returns {void}
 */
-function updateTitleCount(nodes, nowSec) {
-const weekAgoSec = nowSec - 7 * 86_400;
-const count = nodes.filter(n => n.last_heard && Number(n.last_heard) >= weekAgoSec).length;
+function updateTitleCount(stats) {
+const count = stats?.week ?? 0;
 const text = `${baseTitle} (${count})`;
-titleEl.textContent = text;
+if (titleEl) titleEl.textContent = text;
 if (headerTitleTextEl) {
 headerTitleTextEl.textContent = text;
 } else if (headerEl) {
@@ -4541,38 +4734,52 @@ export function initializeApp(config) {
 /**
 * Update legend column headers with per-protocol active node counts (7 days).
 *
-* @param {Array<Object>} nodes All node payloads (unfiltered).
-* @param {number} nowSec Reference timestamp.
+* @param {{meshcore?: {week: number}, meshtastic?: {week: number}}} stats Stats from /api/stats.
 * @returns {void}
 */
-function updateLegendProtocolCounts(nodes, nowSec) {
+function updateLegendProtocolCounts(stats) {
 if (!meshcoreCountEl && !meshtasticCountEl) return;
-const weekAgoSec = nowSec - 7 * 86_400;
-const recentNodes = nodes.filter(n => Number.isFinite(Number(n.last_heard)) && Number(n.last_heard) >= weekAgoSec);
-const meshcoreCount = recentNodes.filter(n => n.protocol === 'meshcore').length;
-// Treat any non-meshcore node as Meshtastic until additional protocols are supported.
-const meshtasticCount = recentNodes.filter(n => n.protocol !== 'meshcore').length;
-if (meshcoreCountEl) meshcoreCountEl.textContent = ` (${meshcoreCount})`;
-if (meshtasticCountEl) meshtasticCountEl.textContent = ` (${meshtasticCount})`;
+if (meshcoreCountEl) meshcoreCountEl.textContent = ` (${stats?.meshcore?.week ?? 0})`;
+if (meshtasticCountEl) meshtasticCountEl.textContent = ` (${stats?.meshtastic?.week ?? 0})`;
 }

 /**
 * Update the footer active-node stats element with day/week/month counts.
 *
-* @param {Array<Object>} nodes Node payloads.
-* @param {number} nowSec Reference timestamp.
+* @param {{day: number, week: number, month: number, sampled: boolean}} stats Stats from /api/stats.
 * @returns {void}
 */
-function updateFooterStats(nodes, nowSec) {
-if (!footerActiveNodes) {
-return;
-}
-const requestId = ++activeStatsRequestId;
-void fetchActiveNodeStats({ nodes, nowSeconds: nowSec }).then(stats => {
-if (requestId !== activeStatsRequestId) {
-return;
-}
-footerActiveNodes.textContent = 'Active: ' + formatActiveNodeStatsText({ stats });
+function updateFooterStats(stats) {
+if (!footerActiveNodes) return;
+footerActiveNodes.textContent = 'Active: ' + formatActiveNodeStatsText({ stats });
 }

+/**
+* Hide/show UI elements based on per-protocol activity in the past 7 days.
+*
+* Hides the Charts nav link when meshtastic has no active nodes, and hides
+* legend columns for protocols with zero weekly activity.
+*
+* @param {{meshcore?: {week: number}, meshtastic?: {week: number}}} stats Stats from /api/stats.
+* @returns {void}
+*/
+function applyProtocolVisibility(stats) {
+const meshcoreWeek = stats?.meshcore?.week ?? 0;
+const meshtasticWeek = stats?.meshtastic?.week ?? 0;
+
+// Hide legend columns for protocols with no activity in the past 7 days.
+if (meshcoreColEl) meshcoreColEl.style.display = meshcoreWeek === 0 ? 'none' : '';
+if (meshtasticColEl) meshtasticColEl.style.display = meshtasticWeek === 0 ? 'none' : '';
+
+// Show protocol toggle buttons only when both protocols have weekly
+// activity — filtering is pointless when only one protocol is present.
+const bothActive = meshcoreWeek > 0 && meshtasticWeek > 0;
+if (protocolToggleMeshcore) protocolToggleMeshcore.hidden = !bothActive;
+if (protocolToggleMeshtastic) protocolToggleMeshtastic.hidden = !bothActive;
+
+// Charts is meshtastic-only; hide the nav link when no meshtastic activity.
+document.querySelectorAll('a[href="/charts"]').forEach(el => {
+el.style.display = meshtasticWeek === 0 ? 'none' : '';
+});
+}

@@ -4607,12 +4814,24 @@ export function initializeApp(config) {
 updateTitleCount,
 updateLegendProtocolCounts,
 updateFooterStats,
+applyProtocolVisibility,
+restartAutoRefresh,
+updateMetaProtocolToggleUI,
+adjustStatsForHiddenProtocols,
+/** Whether auto-refresh is currently paused. */
+isAutorefreshPaused: () => autorefreshPaused,
 /** Inject mock count span elements for legend protocol count tests. */
 _setProtocolCountElements(mc, mt) {
 meshcoreCountEl = mc;
 meshtasticCountEl = mt;
 },
+/** Inject mock column elements for protocol visibility tests. */
+_setProtocolColElements(mc, mt) {
+meshcoreColEl = mc;
+meshtasticColEl = mt;
+},
 /** Trigger a manual refresh cycle (test use only). */
 refresh,
 },
 };
 }
@@ -24,7 +24,7 @@ import {
 aggregateTelemetrySnapshots,
 } from './snapshot-aggregator.js';

-const DEFAULT_FETCH_OPTIONS = Object.freeze({ cache: 'no-store' });
+const DEFAULT_FETCH_OPTIONS = Object.freeze({ cache: 'default' });
 const TELEMETRY_LIMIT = 1000;
 const POSITION_LIMIT = SNAPSHOT_WINDOW;
 const NEIGHBOR_LIMIT = 1000;

@@ -24,8 +24,8 @@
 * @module node-page-data
 */

-/** Shared fetch options that disable the browser HTTP cache for all API calls. */
-const DEFAULT_FETCH_OPTIONS = Object.freeze({ cache: 'no-store' });
+/** Shared fetch options for API calls, allowing conditional ETag revalidation. */
+const DEFAULT_FETCH_OPTIONS = Object.freeze({ cache: 'default' });

 /** Maximum number of messages to request from the messages API. */
 const MESSAGE_LIMIT = 50;

@@ -78,7 +78,7 @@ import {
 } from './node-page-charts.js';
 import { fetchMessages, fetchTracesForNode } from './node-page-data.js';

-const DEFAULT_FETCH_OPTIONS = Object.freeze({ cache: 'no-store' });
+const DEFAULT_FETCH_OPTIONS = Object.freeze({ cache: 'default' });
 const MESSAGE_LIMIT = 50;
 const RENDER_WAIT_INTERVAL_MS = 20;
 const RENDER_WAIT_TIMEOUT_MS = 500;
@@ -42,28 +42,28 @@ export const roleIdToName = Object.freeze({
 });

 /**
-* Meshtastic role colour palette — warm yellow-green to burnt-orange gradient
-* that provides higher contrast than the previous blue-tinted palette, making
-* role distinctions more legible on both light and dark map tiles.
+* Meshtastic role colour palette — broad-spectrum blue-green-yellow-red gradient
+* with a distinctive lavender accent for {@link LOST_AND_FOUND}. Adopted from
+* the meshenvy fork to keep visual consistency across instances.
 *
-* Updated from the original blue-cool palette (see PR #657) to improve
-* readability alongside the MeshCore grey-blue palette.
+* The cool-blue low end is differentiated from the MeshCore steel-grey palette
+* by saturation (51 %+ here vs 18 % for MeshCore) and an 8-degree hue offset.
 *
 * Firmware 2.7.10 / Android 2.7.0 roles (see issue #177).
 *
 * @type {Readonly<Record<string, string>>}
 */
 export const roleColors = Object.freeze({
-CLIENT_HIDDEN: '#A8D8B0',
-SENSOR: '#B2D880',
-TRACKER: '#C8D866',
-CLIENT_MUTE: '#DFCF52',
-CLIENT: '#ECC044',
-CLIENT_BASE: '#F0A834',
-REPEATER: '#F08824',
-ROUTER_LATE: '#E86C1C',
-ROUTER: '#D44E14',
-LOST_AND_FOUND: '#C0300C',
+CLIENT_HIDDEN: '#A9CBE8',
+SENSOR: '#A8D5BA',
+TRACKER: '#99e67f',
+CLIENT_MUTE: '#bcef75',
+CLIENT: '#f3ef74',
+CLIENT_BASE: '#fdbf79',
+REPEATER: '#fa997b',
+ROUTER_LATE: '#ff5061',
+ROUTER: '#ff0019',
+LOST_AND_FOUND: '#C3A8E8',
 });

 /**
@@ -77,10 +77,10 @@ export const roleColors = Object.freeze({
 * @type {Readonly<Record<string, string>>}
 */
 export const meshcoreRoleColors = Object.freeze({
-REPEATER: '#C8D0DC',
-ROOM_SERVER: '#8AAAC6',
-SENSOR: '#4A7EB4',
-COMPANION: '#1A5498',
+REPEATER: '#B8C4D4',
+ROOM_SERVER: '#7A9EBC',
+SENSOR: '#40749E',
+COMPANION: '#164A88',
 });

 /**
@@ -144,17 +144,17 @@ export const meshtasticRoleRenderOrder = Object.freeze({
 });

 /**
-* MeshCore-specific render priority overrides. Only roles whose stacking
-* order differs from the Meshtastic palette need to appear here — any role
-* absent from this map falls through to {@link meshtasticRoleRenderOrder}.
+* MeshCore-specific render priority overrides. Bottom-up stacking order:
+* REPEATER → ROOM_SERVER → SENSOR → COMPANION (top), so companion nodes
+* are always visible above infrastructure roles.
 *
 * @type {Readonly<Record<string, number>>}
 */
 export const meshcoreRoleRenderOrder = Object.freeze({
-SENSOR: 3,
-COMPANION: 7,
-ROOM_SERVER: 9,
-REPEATER: 12,
+REPEATER: 3,
+ROOM_SERVER: 7,
+SENSOR: 9,
+COMPANION: 12,
 });
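The new bottom-up stacking order can be applied by sorting markers ascending by render priority, so higher-priority roles are drawn last (on top). The table values below come from the diff; `renderOrderFor` and the fallback parameter are illustrative, modelled on the old comment's note that absent roles fall through to the Meshtastic table:

```javascript
// MeshCore render priorities from the diff above: COMPANION has the highest
// priority, so companion nodes end up drawn on top of infrastructure roles.
const meshcoreRoleRenderOrder = { REPEATER: 3, ROOM_SERVER: 7, SENSOR: 9, COMPANION: 12 };

// Hypothetical lookup: use the MeshCore override, fall back to another
// table (here empty), default to 0 for unknown roles.
function renderOrderFor(role, overrides, fallback) {
  return overrides[role] ?? fallback[role] ?? 0;
}

const roles = ['COMPANION', 'REPEATER', 'SENSOR'];
roles.sort((a, b) =>
  renderOrderFor(a, meshcoreRoleRenderOrder, {}) - renderOrderFor(b, meshcoreRoleRenderOrder, {}));
// → ['REPEATER', 'SENSOR', 'COMPANION']: companions are rendered last, on top
```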

 /**

@@ -26,11 +26,12 @@
 */

 /**
-* Compute active-node counts from a local node array.
+* Compute active-node counts from a local node array, including per-protocol
+* breakdowns for meshcore and meshtastic.
 *
 * @param {Array<Object>} nodes Node payloads.
 * @param {number} nowSeconds Reference timestamp (Unix seconds).
-* @returns {{hour: number, day: number, week: number, month: number, sampled: boolean}} Local count snapshot.
+* @returns {{hour: number, day: number, week: number, month: number, sampled: boolean, meshcore?: Object, meshtastic?: Object}} Local count snapshot.
 */
 export function computeLocalActiveNodeStats(nodes, nowSeconds) {
 const safeNodes = Array.isArray(nodes) ? nodes : [];
@@ -42,20 +43,48 @@ export function computeLocalActiveNodeStats(nodes, nowSeconds) {
 { key: 'month', secs: 30 * 86_400 }
 ];
 const counts = { sampled: true };
+const meshcore = {};
+const meshtastic = {};
 for (const window of windows) {
-counts[window.key] = safeNodes.filter(node => {
+const active = safeNodes.filter(node => {
 const lastHeard = Number(node?.last_heard);
 return Number.isFinite(lastHeard) && referenceNow - lastHeard <= window.secs;
-}).length;
+});
+counts[window.key] = active.length;
+meshcore[window.key] = active.filter(n => n.protocol === 'meshcore').length;
+meshtastic[window.key] = active.filter(n => n.protocol !== 'meshcore').length;
 }
+counts.meshcore = meshcore;
+counts.meshtastic = meshtastic;
 return counts;
 }
|
||||
|
||||
/**
|
||||
* Normalise a per-protocol bucket ({hour, day, week, month}) from the payload.
|
||||
*
|
||||
* @param {*} bucket Candidate object.
|
||||
* @returns {{hour: number, day: number, week: number, month: number}|null} Normalized bucket or null.
|
||||
*/
|
||||
function normaliseProtocolBucket(bucket) {
|
||||
if (!bucket || typeof bucket !== 'object') return null;
|
||||
const hour = Number(bucket.hour);
|
||||
const day = Number(bucket.day);
|
||||
const week = Number(bucket.week);
|
||||
const month = Number(bucket.month);
|
||||
if (![hour, day, week, month].every(Number.isFinite)) return null;
|
||||
return {
|
||||
hour: Math.max(0, Math.trunc(hour)),
|
||||
day: Math.max(0, Math.trunc(day)),
|
||||
week: Math.max(0, Math.trunc(week)),
|
||||
month: Math.max(0, Math.trunc(month)),
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse and validate the ``/api/stats`` payload.
|
||||
*
|
||||
* @param {*} payload Candidate JSON object from the stats endpoint.
|
||||
* @returns {{hour: number, day: number, week: number, month: number, sampled: boolean}|null} Normalized stats or null.
|
||||
* @returns {{hour: number, day: number, week: number, month: number, sampled: boolean, meshcore?: Object, meshtastic?: Object}|null} Normalized stats or null.
|
||||
*/
|
||||
export function normaliseActiveNodeStatsPayload(payload) {
|
||||
const activeNodes = payload && typeof payload === 'object' ? payload.active_nodes : null;
|
||||
@@ -69,13 +98,18 @@ export function normaliseActiveNodeStatsPayload(payload) {
|
||||
if (![hour, day, week, month].every(Number.isFinite)) {
|
||||
return null;
|
||||
}
|
||||
return {
|
||||
const result = {
|
||||
hour: Math.max(0, Math.trunc(hour)),
|
||||
day: Math.max(0, Math.trunc(day)),
|
||||
week: Math.max(0, Math.trunc(week)),
|
||||
month: Math.max(0, Math.trunc(month)),
|
||||
sampled: Boolean(payload.sampled)
|
||||
};
|
||||
const meshcore = normaliseProtocolBucket(payload.meshcore);
|
||||
const meshtastic = normaliseProtocolBucket(payload.meshtastic);
|
||||
if (meshcore) result.meshcore = meshcore;
|
||||
if (meshtastic) result.meshtastic = meshtastic;
|
||||
return result;
|
||||
}
|
||||
|
||||
// Module-level cache state for the remote stats endpoint.
|
||||
@@ -104,7 +138,7 @@ async function fetchRemoteActiveNodeStats(fetchImpl) {
|
||||
|
||||
activeNodeStatsFetchImpl = fetchImpl;
|
||||
activeNodeStatsFetchPromise = (async () => {
|
||||
const response = await fetchImpl('/api/stats', { cache: 'no-store' });
|
||||
const response = await fetchImpl('/api/stats', { cache: 'default' });
|
||||
if (!response?.ok) {
|
||||
throw new Error(`stats HTTP ${response?.status ?? 'unknown'}`);
|
||||
}
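The bucket normalisation in the hunk above can be exercised standalone. The sketch below re-states `normaliseProtocolBucket` outside the module (a hypothetical free-standing copy, for illustration only) to show how malformed buckets collapse to `null` while valid ones are truncated and clamped:

```javascript
// Standalone restatement of the bucket normalisation shown in the diff:
// non-objects and non-finite fields yield null; valid fields are
// truncated to integers and clamped at zero.
function normaliseProtocolBucket(bucket) {
  if (!bucket || typeof bucket !== 'object') return null;
  const fields = ['hour', 'day', 'week', 'month'].map(k => Number(bucket[k]));
  if (!fields.every(Number.isFinite)) return null;
  const [hour, day, week, month] = fields.map(n => Math.max(0, Math.trunc(n)));
  return { hour, day, week, month };
}

console.log(normaliseProtocolBucket({ hour: 5.9, day: -1, week: 5, month: 5 }));
// → { hour: 5, day: 0, week: 5, month: 5 }
console.log(normaliseProtocolBucket({ hour: 'abc', day: 1, week: 1, month: 1 }));
// → null (one non-numeric field rejects the whole bucket)
```

Because a single bad field rejects the whole bucket, a stale or partial payload simply omits the optional `meshcore`/`meshtastic` keys instead of surfacing garbage counts.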
@@ -1595,6 +1595,24 @@ button:not(.chat-tab):not(.sort-button) {
line-height: 1;
}

.protocol-toggle-btn {
width: 32px;
height: 32px;
border: 1px solid #ccc;
background: #fff;
border-radius: 6px;
cursor: pointer;
}

.protocol-toggle-btn[aria-pressed="true"] {
background: #f0f0f0;
}

.protocol-toggle-icon {
display: block;
transition: filter 0.15s ease;
}

button:not(.chat-tab):not(.sort-button):hover {
background: #f6f6f6;
}
@@ -1782,12 +1800,6 @@ input[type="radio"] {
font-weight: 600;
}

.legend-protocol-toggle {
margin-top: 4px;
border-top: 1px solid rgba(0, 0, 0, 0.1);
padding-top: 5px;
}

.legend-swatch {
display: inline-block;
width: 12px;
@@ -2083,6 +2095,19 @@ body.dark button:not(.chat-tab):not(.sort-button):hover {
background: #444;
}

body.dark .protocol-toggle-btn {
background: #333;
border-color: #444;
}

body.dark .protocol-toggle-btn:hover {
background: #444;
}

body.dark .protocol-toggle-btn[aria-pressed="true"] {
background: #2a2a2a;
}

body.dark .sort-button {
background: none;
border: none;
@@ -2129,10 +2154,6 @@ body.dark .legend-item[aria-pressed="true"] {
font-weight: 600;
}

body.dark .legend-protocol-toggle {
border-top-color: rgba(255, 255, 255, 0.15);
}

body.dark .leaflet-popup-content-wrapper,
body.dark .leaflet-popup-tip {
background: #333;

@@ -0,0 +1,165 @@
# Copyright © 2025-26 l5yth & contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# frozen_string_literal: true

require "spec_helper"

RSpec.describe PotatoMesh::App::ApiCache do
after { described_class.invalidate_all }

describe ".fetch" do
it "returns a hash with :value and :etag on cache miss" do
result = described_class.fetch("test:key", ttl_seconds: 60) { "value_a" }
expect(result).to be_a(Hash)
expect(result[:value]).to eq("value_a")
expect(result[:etag]).to match(/\A[0-9a-f]+\z/)
end

it "returns the cached value within the TTL" do
described_class.fetch("test:ttl", ttl_seconds: 60) { "first" }
result = described_class.fetch("test:ttl", ttl_seconds: 60) { "second" }
expect(result[:value]).to eq("first")
end

it "recomputes the value after the TTL expires" do
described_class.fetch("test:expired", ttl_seconds: 0) { "first" }
sleep 0.01
result = described_class.fetch("test:expired", ttl_seconds: 0) { "second" }
expect(result[:value]).to eq("second")
end

it "caches different keys independently" do
described_class.fetch("key:a", ttl_seconds: 60) { "alpha" }
described_class.fetch("key:b", ttl_seconds: 60) { "beta" }

a = described_class.fetch("key:a", ttl_seconds: 60) { "stale" }
b = described_class.fetch("key:b", ttl_seconds: 60) { "stale" }
expect(a[:value]).to eq("alpha")
expect(b[:value]).to eq("beta")
end

it "stores a pre-computed weak ETag matching the value digest" do
result = described_class.fetch("test:etag", ttl_seconds: 60) { '{"ok":true}' }
expected_digest = Digest::MD5.hexdigest('{"ok":true}')
expect(result[:etag]).to eq(expected_digest)
end

it "returns the same ETag on cache hit without recomputing" do
first = described_class.fetch("test:etag-hit", ttl_seconds: 60) { "body" }
second = described_class.fetch("test:etag-hit", ttl_seconds: 60) { "other" }
expect(second[:etag]).to eq(first[:etag])
end
end

describe ".invalidate_all" do
it "clears all cached entries" do
described_class.fetch("inv:x", ttl_seconds: 60) { "x" }
described_class.fetch("inv:y", ttl_seconds: 60) { "y" }
expect(described_class.size).to eq(2)

described_class.invalidate_all
expect(described_class.size).to eq(0)

result = described_class.fetch("inv:x", ttl_seconds: 60) { "fresh" }
expect(result[:value]).to eq("fresh")
end
end

describe ".invalidate" do
it "removes only the specified keys" do
described_class.fetch("sel:a", ttl_seconds: 60) { "a" }
described_class.fetch("sel:b", ttl_seconds: 60) { "b" }
described_class.fetch("sel:c", ttl_seconds: 60) { "c" }

described_class.invalidate("sel:a", "sel:c")
expect(described_class.size).to eq(1)

result = described_class.fetch("sel:b", ttl_seconds: 60) { "stale" }
expect(result[:value]).to eq("b")

result = described_class.fetch("sel:a", ttl_seconds: 60) { "new_a" }
expect(result[:value]).to eq("new_a")
end
end

describe ".invalidate_prefix" do
it "removes entries whose keys start with any of the given prefixes" do
described_class.fetch("api:nodes:200:", ttl_seconds: 60) { "n" }
described_class.fetch("api:nodes:1000:", ttl_seconds: 60) { "n2" }
described_class.fetch("api:messages:200:", ttl_seconds: 60) { "m" }
described_class.fetch("api:stats:0", ttl_seconds: 60) { "s" }

described_class.invalidate_prefix("api:nodes:", "api:stats:")
expect(described_class.size).to eq(1)

result = described_class.fetch("api:messages:200:", ttl_seconds: 60) { "stale" }
expect(result[:value]).to eq("m")
end

it "is a no-op when no keys match" do
described_class.fetch("api:nodes:x", ttl_seconds: 60) { "n" }
described_class.invalidate_prefix("api:telemetry:")
expect(described_class.size).to eq(1)
end
end

describe "MAX_ENTRIES eviction" do
it "evicts the oldest entry when the store exceeds MAX_ENTRIES" do
max = described_class::MAX_ENTRIES
# Fill the cache to capacity
max.times do |i|
described_class.fetch("fill:#{i}", ttl_seconds: 60) { "v#{i}" }
end
expect(described_class.size).to eq(max)

# Adding one more should evict the oldest
described_class.fetch("fill:overflow", ttl_seconds: 60) { "new" }
expect(described_class.size).to eq(max)

# The first entry should have been evicted
result = described_class.fetch("fill:0", ttl_seconds: 60) { "recomputed" }
expect(result[:value]).to eq("recomputed")
end
end

describe ".size" do
it "reports the number of cached entries" do
expect(described_class.size).to eq(0)
described_class.fetch("sz:1", ttl_seconds: 60) { "v" }
expect(described_class.size).to eq(1)
end
end

describe "error handling" do
it "does not cache the value when the block raises" do
expect {
described_class.fetch("err:raise", ttl_seconds: 60) { raise "boom" }
}.to raise_error(RuntimeError, "boom")

expect(described_class.size).to eq(0)
end

it "allows a subsequent fetch after a block error" do
begin
described_class.fetch("err:retry", ttl_seconds: 60) { raise "first" }
rescue RuntimeError
# expected
end

result = described_class.fetch("err:retry", ttl_seconds: 60) { "recovered" }
expect(result[:value]).to eq("recovered")
end
end
end
+98
-3
@@ -496,6 +496,7 @@ RSpec.describe "Potato Mesh Sinatra app" do
ENV.delete("PRIVATE")
allow(Time).to receive(:now).and_return(reference_time)
clear_database
PotatoMesh::App::ApiCache.invalidate_all
end

after do
@@ -3021,7 +3022,7 @@ RSpec.describe "Potato Mesh Sinatra app" do
db.results_as_hash = true
row = db.get_first_row(
<<~SQL,
SELECT short_name, long_name, role, last_heard, first_heard
SELECT short_name, long_name, role, protocol, last_heard, first_heard
FROM nodes
WHERE node_id = ?
SQL
@@ -3031,11 +3032,81 @@ RSpec.describe "Potato Mesh Sinatra app" do
expect(row["short_name"]).to eq("ABCD")
expect(row["long_name"]).to eq("Meshtastic ABCD")
expect(row["role"]).to eq("CLIENT_HIDDEN")
expect(row["protocol"]).to eq("meshtastic")
expect(row["last_heard"]).to eq(reference_time.to_i)
expect(row["first_heard"]).to eq(reference_time.to_i)
end
end

it "stores meshcore protocol and COMPANION role for meshcore nodes" do
with_db do |db|
created = ensure_unknown_node(db, "!abcd1234", nil, heard_time: reference_time.to_i, protocol: "meshcore")
expect(created).to be_truthy
end

with_db(readonly: true) do |db|
db.results_as_hash = true
row = db.get_first_row(
<<~SQL,
SELECT short_name, long_name, role, protocol
FROM nodes
WHERE node_id = ?
SQL
["!abcd1234"],
)

expect(row["short_name"]).to eq("1234")
expect(row["long_name"]).to eq("Meshcore 1234")
expect(row["role"]).to eq("COMPANION")
expect(row["protocol"]).to eq("meshcore")
end
end

it "defaults to meshtastic protocol and CLIENT_HIDDEN role" do
with_db do |db|
created = ensure_unknown_node(db, "!beef0000", nil)
expect(created).to be_truthy
end

with_db(readonly: true) do |db|
db.results_as_hash = true
row = db.get_first_row(
<<~SQL,
SELECT role, protocol
FROM nodes
WHERE node_id = ?
SQL
["!beef0000"],
)

expect(row["role"]).to eq("CLIENT_HIDDEN")
expect(row["protocol"]).to eq("meshtastic")
end
end

it "falls back to CLIENT_HIDDEN for an unknown protocol" do
with_db do |db|
created = ensure_unknown_node(db, "!cafe9999", nil, protocol: "reticulum")
expect(created).to be_truthy
end

with_db(readonly: true) do |db|
db.results_as_hash = true
row = db.get_first_row(
<<~SQL,
SELECT role, protocol, long_name
FROM nodes
WHERE node_id = ?
SQL
["!cafe9999"],
)

expect(row["role"]).to eq("CLIENT_HIDDEN")
expect(row["protocol"]).to eq("reticulum")
expect(row["long_name"]).to eq("Reticulum 9999")
end
end
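Taken together, the meshcore and unknown-protocol examples above fix the placeholder-naming rule: the short name is the trailing four characters of the node id, the long name prefixes that with the capitalised protocol, and only meshcore placeholders get the COMPANION role (everything else stays on CLIENT_HIDDEN). A standalone sketch of that rule, written in JavaScript with a hypothetical helper name, purely to restate what the specs assert:

```javascript
// Sketch of the placeholder defaults the specs above assert for
// ensure_unknown_node: suffix from the node id, "<Protocol> <suffix>"
// long name, and COMPANION only for meshcore nodes.
function placeholderFor(nodeId, protocol = 'meshtastic') {
  const suffix = nodeId.replace(/^!/, '').slice(-4);
  const label = protocol.charAt(0).toUpperCase() + protocol.slice(1);
  return {
    short_name: suffix,
    long_name: `${label} ${suffix}`,
    role: protocol === 'meshcore' ? 'COMPANION' : 'CLIENT_HIDDEN',
    protocol,
  };
}

console.log(placeholderFor('!abcd1234', 'meshcore'));
// → { short_name: '1234', long_name: 'Meshcore 1234', role: 'COMPANION', protocol: 'meshcore' }
```

This mirrors the `"Meshcore 1234"` and `"Reticulum 9999"` expectations above; the earlier `"Meshtastic ABCD"` case is asserted on a node id outside the shown hunk, so this sketch does not claim to cover the meshtastic casing path.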
it "leaves timestamps nil when no receive time is provided" do
with_db do |db|
created = ensure_unknown_node(db, "!1111beef", nil)
@@ -5909,14 +5980,15 @@ RSpec.describe "Potato Mesh Sinatra app" do
end

describe "GET /api/stats" do
it "returns exact SQL-backed activity counts without list-endpoint sampling" do
it "returns exact SQL-backed activity counts with per-protocol breakdowns" do
clear_database
now = reference_time.to_i
allow(Time).to receive(:now).and_return(reference_time)

with_db do |db|
db.transaction
1005.times do |index|
# 1000 meshtastic nodes heard within the hour (protocol defaults to meshtastic)
1000.times do |index|
heard = now - (index % 1800)
node_id = format("!%08x", index + 1)
db.execute(
@@ -5924,10 +5996,21 @@ RSpec.describe "Potato Mesh Sinatra app" do
[node_id, index + 1, "n#{index}", "Node #{index}", "TBEAM", "CLIENT", heard, heard],
)
end
# 5 meshcore nodes heard within the hour
5.times do |index|
heard = now - (index % 1800)
node_id = format("!mc%06x", index + 1)
db.execute(
"INSERT INTO nodes(node_id, num, short_name, long_name, hw_model, role, last_heard, first_heard, protocol) VALUES(?,?,?,?,?,?,?,?,?)",
[node_id, 100_001 + index, "mc#{index}", "MC Node #{index}", "TBEAM", "CLIENT", heard, heard, "meshcore"],
)
end
# 1 meshtastic node heard 2 days ago (week window only)
db.execute(
INSERT_NODE_WITH_METADATA_SQL,
["!week0001", 200_001, "week", "Week Node", "TBEAM", "CLIENT", now - (2 * 86_400), now - (2 * 86_400)],
)
# 1 meshtastic node heard 20 days ago (month window only)
db.execute(
INSERT_NODE_WITH_METADATA_SQL,
["!month001", 200_002, "month", "Month Node", "TBEAM", "CLIENT", now - (20 * 86_400), now - (20 * 86_400)],
@@ -5946,6 +6029,18 @@ RSpec.describe "Potato Mesh Sinatra app" do
"week" => 1006,
"month" => 1007,
)
expect(payload["meshcore"]).to include(
"hour" => 5,
"day" => 5,
"week" => 5,
"month" => 5,
)
expect(payload["meshtastic"]).to include(
"hour" => 1000,
"day" => 1000,
"week" => 1001,
"month" => 1002,
)
end
end


@@ -258,4 +258,53 @@ RSpec.describe PotatoMesh::App::Database do

expect(column_names_for("instances")).to include("contact_link")
end

it "backfills misclassified meshcore placeholder nodes" do
SQLite3::Database.new(PotatoMesh::Config.db_path) do |db|
db.execute(<<~SQL)
CREATE TABLE nodes(
node_id TEXT PRIMARY KEY, num INTEGER, short_name TEXT, long_name TEXT,
role TEXT, last_heard INTEGER, first_heard INTEGER,
protocol TEXT NOT NULL DEFAULT 'meshtastic', synthetic BOOLEAN NOT NULL DEFAULT 0
)
SQL
db.execute("CREATE TABLE messages(id INTEGER PRIMARY KEY)")

# Misclassified meshcore placeholder (bug #747)
db.execute(
"INSERT INTO nodes(node_id, short_name, long_name, role, protocol) VALUES (?, ?, ?, ?, ?)",
["!aabb0001", "0001", "Meshcore 0001", "CLIENT_HIDDEN", "meshtastic"],
)

# Meshcore node where protocol self-healed but role did not
db.execute(
"INSERT INTO nodes(node_id, short_name, long_name, role, protocol) VALUES (?, ?, ?, ?, ?)",
["!aabb0002", "0002", "SomeNode", "CLIENT_HIDDEN", "meshcore"],
)

# Meshtastic node that should remain untouched
db.execute(
"INSERT INTO nodes(node_id, short_name, long_name, role, protocol) VALUES (?, ?, ?, ?, ?)",
["!aabb0003", "0003", "Meshtastic 0003", "CLIENT_HIDDEN", "meshtastic"],
)
end

harness_class.ensure_schema_upgrades

SQLite3::Database.new(PotatoMesh::Config.db_path, readonly: true) do |db|
db.results_as_hash = true

fixed_proto = db.get_first_row("SELECT protocol, role FROM nodes WHERE node_id = '!aabb0001'")
expect(fixed_proto["protocol"]).to eq("meshcore")
expect(fixed_proto["role"]).to eq("COMPANION")

fixed_role = db.get_first_row("SELECT protocol, role FROM nodes WHERE node_id = '!aabb0002'")
expect(fixed_role["protocol"]).to eq("meshcore")
expect(fixed_role["role"]).to eq("COMPANION")

untouched = db.get_first_row("SELECT protocol, role FROM nodes WHERE node_id = '!aabb0003'")
expect(untouched["protocol"]).to eq("meshtastic")
expect(untouched["role"]).to eq("CLIENT_HIDDEN")
end
end
end
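The backfill this spec exercises reduces to a per-row rule: a node whose long name marks it as a Meshcore placeholder but whose protocol is still the meshtastic default gets its protocol corrected, and any meshcore node still carrying the CLIENT_HIDDEN default is promoted to COMPANION. A hypothetical per-row sketch of that rule in JavaScript (the real migration runs as SQL inside `ensure_schema_upgrades`; names here are illustrative):

```javascript
// Per-row restatement of the backfill behaviour asserted above:
// first repair the protocol of "Meshcore ..." placeholders, then
// repair the role of any meshcore node left on the meshtastic default.
function backfillNode(node) {
  const fixed = { ...node };
  if (fixed.long_name.startsWith('Meshcore ') && fixed.protocol === 'meshtastic') {
    fixed.protocol = 'meshcore';
  }
  if (fixed.protocol === 'meshcore' && fixed.role === 'CLIENT_HIDDEN') {
    fixed.role = 'COMPANION';
  }
  return fixed;
}
```

Applying the protocol fix before the role fix is what lets a single pass heal both the fully misclassified row (`!aabb0001`) and the half-healed one (`!aabb0002`) while leaving genuine meshtastic rows untouched.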

+391
-48
@@ -239,6 +239,125 @@ RSpec.describe PotatoMesh::App::Federation do
skip "Net::HTTP#ipaddr accessor unavailable"
end
end

it "pins to the explicit ip_address when provided" do
uri = URI.parse("https://remote.example.com/api")

http = federation_helpers.build_remote_http_client(uri, ip_address: "198.51.100.7")

if http.respond_to?(:ipaddr)
expect(http.ipaddr).to eq("198.51.100.7")
else
skip "Net::HTTP#ipaddr accessor unavailable"
end
expect(Addrinfo).not_to have_received(:getaddrinfo)
end

it "skips DNS resolution when ip_address is provided" do
uri = URI.parse("http://remote.example.com/api")

federation_helpers.build_remote_http_client(uri, ip_address: "198.51.100.7")

expect(Addrinfo).not_to have_received(:getaddrinfo)
end
end

describe ".sort_addresses_for_connection" do
it "returns an empty array unchanged" do
expect(federation_helpers.send(:sort_addresses_for_connection, [])).to eq([])
end

it "returns nil unchanged" do
expect(federation_helpers.send(:sort_addresses_for_connection, nil)).to be_nil
end

it "returns a single-element array unchanged" do
addr = [IPAddr.new("203.0.113.5")]
expect(federation_helpers.send(:sort_addresses_for_connection, addr)).to eq(addr)
end

it "places IPv4 addresses before IPv6" do
v6 = IPAddr.new("2001:db8::1")
v4 = IPAddr.new("203.0.113.5")

result = federation_helpers.send(:sort_addresses_for_connection, [v6, v4])

expect(result).to eq([v4, v6])
end

it "preserves relative order within the same address family" do
v4a = IPAddr.new("203.0.113.1")
v4b = IPAddr.new("203.0.113.2")
v6a = IPAddr.new("2001:db8::1")
v6b = IPAddr.new("2001:db8::2")

result = federation_helpers.send(:sort_addresses_for_connection, [v6a, v4a, v6b, v4b])

expect(result).to eq([v4a, v4b, v6a, v6b])
end

it "preserves order when IPv4 already comes first" do
v4 = IPAddr.new("203.0.113.5")
v6 = IPAddr.new("2001:db8::1")

result = federation_helpers.send(:sort_addresses_for_connection, [v4, v6])

expect(result).to eq([v4, v6])
end
end

describe ".connection_refused_or_unreachable?" do
it "returns true for Errno::ECONNREFUSED" do
expect(federation_helpers.send(:connection_refused_or_unreachable?, Errno::ECONNREFUSED.new)).to be(true)
end

it "returns true for Errno::EHOSTUNREACH" do
expect(federation_helpers.send(:connection_refused_or_unreachable?, Errno::EHOSTUNREACH.new)).to be(true)
end

it "returns true for Errno::ENETUNREACH" do
expect(federation_helpers.send(:connection_refused_or_unreachable?, Errno::ENETUNREACH.new)).to be(true)
end

it "returns true for Errno::ECONNRESET" do
expect(federation_helpers.send(:connection_refused_or_unreachable?, Errno::ECONNRESET.new)).to be(true)
end

it "returns true for Errno::ETIMEDOUT" do
expect(federation_helpers.send(:connection_refused_or_unreachable?, Errno::ETIMEDOUT.new)).to be(true)
end

it "returns true for Net::OpenTimeout" do
expect(federation_helpers.send(:connection_refused_or_unreachable?, Net::OpenTimeout.new)).to be(true)
end

it "returns false for OpenSSL::SSL::SSLError" do
expect(federation_helpers.send(:connection_refused_or_unreachable?, OpenSSL::SSL::SSLError.new("fail"))).to be(false)
end

it "returns false for Net::ReadTimeout" do
expect(federation_helpers.send(:connection_refused_or_unreachable?, Net::ReadTimeout.new)).to be(false)
end

it "returns false for a generic StandardError" do
expect(federation_helpers.send(:connection_refused_or_unreachable?, StandardError.new("nope"))).to be(false)
end

it "walks the cause chain to find a retryable error" do
outer_with_cause = begin
begin
raise Errno::ECONNREFUSED, "refused"
rescue Errno::ECONNREFUSED
raise StandardError, "wrapper"
end
rescue StandardError => e
e
end

expect(outer_with_cause).to be_a(StandardError)
expect(outer_with_cause.cause).to be_a(Errno::ECONNREFUSED)
expect(federation_helpers.send(:connection_refused_or_unreachable?, outer_with_cause)).to be(true)
end
end
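The two helpers above combine into a simple multi-address connection strategy: sort the resolved addresses IPv4-first (stable within each family), then try them in order, moving on only for refused/unreachable-style failures and re-raising everything else. A JavaScript sketch of that strategy under those assumptions (hypothetical names; the real code lives in the Ruby federation helpers):

```javascript
// Stable IPv4-before-IPv6 ordering, preserving relative order within
// each family, mirroring sort_addresses_for_connection above.
// Addresses are plain strings here; IPv6 is detected by the colon.
function sortAddressesForConnection(addresses) {
  if (!addresses || addresses.length < 2) return addresses;
  const v4 = addresses.filter(a => !a.includes(':'));
  const v6 = addresses.filter(a => a.includes(':'));
  return [...v4, ...v6];
}

// Try each address in order; fall through only on connection-level
// failures (the connection_refused_or_unreachable? class of errors),
// re-raise anything else immediately.
function requestOverAddresses(addresses, attempt, isRetryable) {
  let lastError = null;
  for (const addr of sortAddressesForConnection(addresses)) {
    try {
      return attempt(addr);
    } catch (err) {
      if (!isRetryable(err)) throw err;
      lastError = err;
    }
  }
  throw lastError ?? new Error('no addresses to try');
}
```

Treating SSL handshake failures and read timeouts as non-retryable keeps the fallback from masking a misconfigured peer: only errors that mean "this address is unreachable" justify trying the next one.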
|
||||
|
||||
describe ".ingest_known_instances_from!" do
|
||||
@@ -626,6 +745,48 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
expect(row[2]).to eq("sig-2")
|
||||
end
|
||||
end
|
||||
|
||||
it "stores per-protocol node counts for new records" do
|
||||
with_db do |db|
|
||||
attrs = base_attributes.merge(
|
||||
nodes_count: 50,
|
||||
meshcore_nodes_count: 20,
|
||||
meshtastic_nodes_count: 30,
|
||||
)
|
||||
federation_helpers.send(:upsert_instance_record, db, attrs, "sig-1")
|
||||
|
||||
row = db.get_first_row(
|
||||
"SELECT nodes_count, meshcore_nodes_count, meshtastic_nodes_count FROM instances WHERE id = ?",
|
||||
base_attributes[:id],
|
||||
)
|
||||
expect(row[0]).to eq(50)
|
||||
expect(row[1]).to eq(20)
|
||||
expect(row[2]).to eq(30)
|
||||
end
|
||||
end
|
||||
|
||||
it "updates per-protocol node counts on conflict" do
|
||||
with_db do |db|
|
||||
attrs = base_attributes.merge(
|
||||
meshcore_nodes_count: 10,
|
||||
meshtastic_nodes_count: 15,
|
||||
)
|
||||
federation_helpers.send(:upsert_instance_record, db, attrs, "sig-1")
|
||||
|
||||
updated_attrs = base_attributes.merge(
|
||||
meshcore_nodes_count: 25,
|
||||
meshtastic_nodes_count: 40,
|
||||
)
|
||||
federation_helpers.send(:upsert_instance_record, db, updated_attrs, "sig-2")
|
||||
|
||||
row = db.get_first_row(
|
||||
"SELECT meshcore_nodes_count, meshtastic_nodes_count FROM instances WHERE id = ?",
|
||||
base_attributes[:id],
|
||||
)
|
||||
expect(row[0]).to eq(25)
|
||||
expect(row[1]).to eq(40)
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
describe ".federation_user_agent_header" do
|
||||
@@ -660,12 +821,12 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
end
|
||||
end
|
||||
|
||||
describe ".perform_instance_http_request" do
|
||||
describe ".perform_single_http_request" do
|
||||
let(:uri) { URI.parse("https://remote.example.com/api") }
|
||||
let(:http_client) { instance_double(Net::HTTP) }
|
||||
|
||||
before do
|
||||
allow(federation_helpers).to receive(:build_remote_http_client).with(uri).and_return(http_client)
|
||||
allow(federation_helpers).to receive(:build_remote_http_client).and_return(http_client)
|
||||
end
|
||||
|
||||
it "wraps errors that omit a message with the error class name" do
|
||||
@@ -681,7 +842,7 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
allow(http_client).to receive(:start).and_raise(RemoteTcpFailure.new)
|
||||
|
||||
expect do
|
||||
federation_helpers.send(:perform_instance_http_request, uri)
|
||||
federation_helpers.send(:perform_single_http_request, uri)
|
||||
end.to raise_error(PotatoMesh::App::InstanceFetchError, "RemoteTcpFailure")
|
||||
end
|
||||
|
||||
@@ -689,7 +850,7 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
allow(http_client).to receive(:start).and_raise(OpenSSL::SSL::SSLError.new("handshake failed"))
|
||||
|
||||
expect do
|
||||
federation_helpers.send(:perform_instance_http_request, uri)
|
||||
federation_helpers.send(:perform_single_http_request, uri)
|
||||
end.to raise_error(
|
||||
PotatoMesh::App::InstanceFetchError,
|
||||
"OpenSSL::SSL::SSLError: handshake failed",
|
||||
@@ -700,19 +861,10 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
allow(http_client).to receive(:start).and_raise(Net::ReadTimeout.new)
|
||||
|
||||
expect do
|
||||
federation_helpers.send(:perform_instance_http_request, uri)
|
||||
federation_helpers.send(:perform_single_http_request, uri)
|
||||
end.to raise_error(PotatoMesh::App::InstanceFetchError, "Net::ReadTimeout")
|
||||
end
|
||||
|
||||
it "wraps restricted address resolution failures" do
|
||||
allow(federation_helpers).to receive(:build_remote_http_client).and_call_original
|
||||
allow(Addrinfo).to receive(:getaddrinfo).and_return([Addrinfo.ip("127.0.0.1")])
|
||||
|
||||
expect do
|
||||
federation_helpers.send(:perform_instance_http_request, uri)
|
||||
end.to raise_error(PotatoMesh::App::InstanceFetchError, "ArgumentError: restricted domain")
|
||||
end
|
||||
|
||||
it "applies federation headers to instance fetch requests" do
|
||||
connection = instance_double(HTTP_CONNECTION_DOUBLE)
|
||||
success_response = Net::HTTPOK.new("1.1", "200", "OK")
|
||||
@@ -728,7 +880,7 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
success_response
|
||||
end
|
||||
|
||||
result = federation_helpers.send(:perform_instance_http_request, uri)
|
||||
result = federation_helpers.send(:perform_single_http_request, uri)
|
||||
|
||||
expect(result).to eq("{}")
|
||||
expect(captured_request).not_to be_nil
|
||||
@@ -748,12 +900,144 @@ RSpec.describe PotatoMesh::App::Federation do
|
||||
allow(connection).to receive(:request).and_return(failure_response)
|
||||
|
||||
expect do
|
||||
federation_helpers.send(:perform_instance_http_request, uri)
|
||||
federation_helpers.send(:perform_single_http_request, uri)
|
||||
end.to raise_error(
|
||||
PotatoMesh::App::InstanceFetchError,
|
||||
a_string_including("unexpected response 502"),
|
||||
)
|
||||
end
|
||||
|
||||
it "passes ip_address through to build_remote_http_client" do
|
||||
allow(http_client).to receive(:start).and_raise(StandardError.new("test"))
|
||||
|
||||
begin
|
||||
federation_helpers.send(:perform_single_http_request, uri, ip_address: "198.51.100.7")
|
||||
rescue PotatoMesh::App::InstanceFetchError
|
||||
# expected
|
||||
end
|
||||
|
||||
expect(federation_helpers).to have_received(:build_remote_http_client).with(uri, ip_address: "198.51.100.7")
|
||||
end
|
||||
end
|
||||
  describe ".perform_instance_http_request" do
    let(:uri) { URI.parse("https://remote.example.com/api") }
    let(:v4_addr) { IPAddr.new("203.0.113.5") }
    let(:v6_addr) { IPAddr.new("2001:db8::1") }

    before do
      allow(PotatoMesh::Config).to receive(:remote_instance_http_timeout).and_return(5)
      allow(PotatoMesh::Config).to receive(:remote_instance_read_timeout).and_return(10)
      allow(PotatoMesh::Config).to receive(:remote_instance_request_timeout).and_return(15)
    end

    it "succeeds on the first address" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v4_addr])
      allow(federation_helpers).to receive(:perform_single_http_request)
        .with(uri, ip_address: "203.0.113.5")
        .and_return("{}")

      result = federation_helpers.send(:perform_instance_http_request, uri)

      expect(result).to eq("{}")
    end

    it "falls back to the next address on ECONNREFUSED" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v6_addr, v4_addr])
      allow(federation_helpers).to receive(:perform_single_http_request)
        .with(uri, ip_address: "2001:db8::1")
        .and_raise(PotatoMesh::App::InstanceFetchError.new("Errno::ECONNREFUSED: Connection refused"))
      allow(federation_helpers).to receive(:connection_refused_or_unreachable?).and_return(true)
      allow(federation_helpers).to receive(:perform_single_http_request)
        .with(uri, ip_address: "203.0.113.5")
        .and_return('{"ok":true}')

      result = federation_helpers.send(:perform_instance_http_request, uri)

      expect(result).to eq('{"ok":true}')
    end

    it "raises after all addresses fail with connection errors" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v6_addr, v4_addr])
      allow(federation_helpers).to receive(:connection_refused_or_unreachable?).and_return(true)
      allow(federation_helpers).to receive(:perform_single_http_request)
        .and_raise(PotatoMesh::App::InstanceFetchError.new("Errno::ECONNREFUSED: Connection refused"))

      expect do
        federation_helpers.send(:perform_instance_http_request, uri)
      end.to raise_error(PotatoMesh::App::InstanceFetchError, /ECONNREFUSED/)
    end

    it "raises immediately on non-connection errors without trying further addresses" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v6_addr, v4_addr])
      ssl_error = PotatoMesh::App::InstanceFetchError.new("OpenSSL::SSL::SSLError: handshake failed")
      # After sorting, IPv4 (203.0.113.5) is tried first
      allow(federation_helpers).to receive(:perform_single_http_request)
        .with(uri, ip_address: "203.0.113.5")
        .and_raise(ssl_error)
      allow(federation_helpers).to receive(:connection_refused_or_unreachable?).and_return(false)

      expect do
        federation_helpers.send(:perform_instance_http_request, uri)
      end.to raise_error(PotatoMesh::App::InstanceFetchError, /SSLError/)

      expect(federation_helpers).not_to have_received(:perform_single_http_request)
        .with(uri, ip_address: "2001:db8::1")
    end

    it "stops iterating when shutdown is requested" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v6_addr, v4_addr])
      call_count = 0
      allow(federation_helpers).to receive(:perform_single_http_request) do
        call_count += 1
        federation_helpers.request_federation_shutdown!
        raise PotatoMesh::App::InstanceFetchError, "refused"
      end
      allow(federation_helpers).to receive(:connection_refused_or_unreachable?).and_return(true)

      expect do
        federation_helpers.send(:perform_instance_http_request, uri)
      end.to raise_error(PotatoMesh::App::InstanceFetchError)

      expect(call_count).to eq(1)
    end

    it "falls back to address-less request when resolution returns no results" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([])
      allow(federation_helpers).to receive(:perform_single_http_request)
        .with(uri, ip_address: nil)
        .and_return("{}")

      result = federation_helpers.send(:perform_instance_http_request, uri)

      expect(result).to eq("{}")
    end

    it "tries IPv4 addresses before IPv6" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v6_addr, v4_addr])
      call_order = []
      allow(federation_helpers).to receive(:perform_single_http_request) do |_uri, ip_address:|
        call_order << ip_address
        raise PotatoMesh::App::InstanceFetchError, "refused"
      end
      allow(federation_helpers).to receive(:connection_refused_or_unreachable?).and_return(true)

      begin
        federation_helpers.send(:perform_instance_http_request, uri)
      rescue PotatoMesh::App::InstanceFetchError
        # expected
      end

      expect(call_order).to eq(["203.0.113.5", "2001:db8::1"])
    end

    it "wraps restricted address resolution failures" do
      allow(Addrinfo).to receive(:getaddrinfo).and_return([Addrinfo.ip("127.0.0.1")])

      expect do
        federation_helpers.send(:perform_instance_http_request, uri)
      end.to raise_error(PotatoMesh::App::InstanceFetchError, "ArgumentError: restricted domain")
    end
  end
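Taken together, the specs above pin down a small fallback strategy: resolved addresses are tried IPv4-first, iteration continues only on connection-level failures, and an address-less request is the fallback when resolution returns nothing. A minimal standalone sketch of that behaviour (the helper and error names here are hypothetical, not the app's actual API):

```ruby
require "ipaddr"

# Hypothetical stand-in for connection-refused/unreachable errors.
class ConnectionRefused < StandardError; end

# IPv4 candidates come before IPv6 ones; relative order is otherwise kept.
def order_addresses(addrs)
  addrs.partition(&:ipv4?).flatten
end

def request_with_fallback(addrs)
  ordered = order_addresses(addrs)
  # With no resolved addresses, fall back to an address-less request.
  return yield(nil) if ordered.empty?

  last_error = nil
  ordered.each do |addr|
    return yield(addr.to_s)
  rescue ConnectionRefused => e
    # Only connection-level failures move on to the next address;
    # anything else propagates immediately.
    last_error = e
  end
  raise last_error
end

addrs = [IPAddr.new("2001:db8::1"), IPAddr.new("203.0.113.5")]
tried = []
result = request_with_fallback(addrs) do |ip|
  tried << ip
  raise ConnectionRefused, "refused" if ip == "203.0.113.5"
  "ok via #{ip}"
end
puts tried.inspect  # ["203.0.113.5", "2001:db8::1"]
puts result         # ok via 2001:db8::1
```

The IPv4 address is attempted first even though it was resolved second, matching the "tries IPv4 addresses before IPv6" spec.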

  describe ".federation_sleep_with_shutdown" do

@@ -781,11 +1065,84 @@ RSpec.describe PotatoMesh::App::Federation do
    end
  end

  describe ".perform_announce_request" do
    let(:uri) { URI.parse("https://remote.mesh/api/instances") }
    let(:payload) { '{"id":"test"}' }
    let(:v4_addr) { IPAddr.new("203.0.113.5") }
    let(:v6_addr) { IPAddr.new("2001:db8::1") }
    let(:success_response) { Net::HTTPOK.new("1.1", "200", "OK") }

    before do
      allow(success_response).to receive(:code).and_return("200")
    end

    it "succeeds on the first resolved address" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v4_addr])
      allow(federation_helpers).to receive(:perform_single_announce_request)
        .with(uri, payload, ip_address: "203.0.113.5")
        .and_return(success_response)

      result = federation_helpers.send(:perform_announce_request, uri, payload)

      expect(result).to eq(success_response)
    end

    it "falls back to the next address on ECONNREFUSED" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v6_addr, v4_addr])
      # After sorting, IPv4 is tried first
      allow(federation_helpers).to receive(:perform_single_announce_request)
        .with(uri, payload, ip_address: "203.0.113.5")
        .and_raise(Errno::ECONNREFUSED.new("refused"))
      allow(federation_helpers).to receive(:perform_single_announce_request)
        .with(uri, payload, ip_address: "2001:db8::1")
        .and_return(success_response)

      result = federation_helpers.send(:perform_announce_request, uri, payload)

      expect(result).to eq(success_response)
    end

    it "raises after all addresses fail" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v6_addr, v4_addr])
      allow(federation_helpers).to receive(:perform_single_announce_request)
        .and_raise(Errno::ECONNREFUSED.new("refused"))

      expect do
        federation_helpers.send(:perform_announce_request, uri, payload)
      end.to raise_error(Errno::ECONNREFUSED)
    end

    it "raises immediately on non-connection errors" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([v6_addr, v4_addr])
      # After sorting, IPv4 (203.0.113.5) is tried first
      allow(federation_helpers).to receive(:perform_single_announce_request)
        .with(uri, payload, ip_address: "203.0.113.5")
        .and_raise(OpenSSL::SSL::SSLError.new("handshake failed"))

      expect do
        federation_helpers.send(:perform_announce_request, uri, payload)
      end.to raise_error(OpenSSL::SSL::SSLError)

      expect(federation_helpers).not_to have_received(:perform_single_announce_request)
        .with(uri, payload, ip_address: "2001:db8::1")
    end

    it "falls back to address-less request when resolution returns no results" do
      allow(federation_helpers).to receive(:resolve_remote_ip_addresses).and_return([])
      allow(federation_helpers).to receive(:perform_single_announce_request)
        .with(uri, payload, ip_address: nil)
        .and_return(success_response)

      result = federation_helpers.send(:perform_announce_request, uri, payload)

      expect(result).to eq(success_response)
    end
  end

  describe ".announce_instance_to_domain" do
    let(:payload) { "{}" }
    let(:https_uri) { URI.parse("https://remote.mesh/api/instances") }
    let(:http_uri) { URI.parse("http://remote.mesh/api/instances") }
    let(:http_connection) { instance_double(HTTP_CONNECTION_DOUBLE) }
    let(:success_response) { Net::HTTPOK.new("1.1", "200", "OK") }

    before do
@@ -793,15 +1150,12 @@ RSpec.describe PotatoMesh::App::Federation do
    end

    it "retries over HTTP when HTTPS connections are refused" do
      allow(federation_helpers).to receive(:perform_announce_request)
        .with(https_uri, payload)
        .and_raise(Errno::ECONNREFUSED.new("refused"))
      allow(federation_helpers).to receive(:perform_announce_request)
        .with(http_uri, payload)
        .and_return(success_response)

      result = federation_helpers.announce_instance_to_domain("remote.mesh", payload)

@@ -811,14 +1165,12 @@ RSpec.describe PotatoMesh::App::Federation do
    end

    it "logs a warning when HTTPS refusal persists after HTTP fallback" do
      allow(federation_helpers).to receive(:perform_announce_request)
        .with(https_uri, payload)
        .and_raise(Errno::ECONNREFUSED.new("refused"))
      allow(federation_helpers).to receive(:perform_announce_request)
        .with(http_uri, payload)
        .and_raise(SocketError.new("dns failure"))

      result = federation_helpers.announce_instance_to_domain("remote.mesh", payload)

@@ -829,24 +1181,15 @@ RSpec.describe PotatoMesh::App::Federation do
      ).to eq(2)
    end

    it "logs success when the announcement is published" do
      allow(federation_helpers).to receive(:perform_announce_request)
        .with(https_uri, payload)
        .and_return(success_response)

      result = federation_helpers.announce_instance_to_domain("remote.mesh", payload)

      expect(result).to be(true)
      expect(federation_helpers.debug_messages).to include("Published federation announcement")
    end
  end

@@ -176,5 +176,52 @@ RSpec.describe PotatoMesh::App::Instances do
      expect(with_nodes["nodesCount"]).to eq(42)
      expect(zero_nodes["nodesCount"]).to eq(0)
    end

    it "includes per-protocol node counts when present and omits when nil" do
      fixed_time = Time.utc(2025, 2, 3, 8, 0, 0)
      allow(Time).to receive(:now).and_return(fixed_time)

      with_db do |db|
        db.execute(
          <<~SQL,
            INSERT INTO instances (id, domain, pubkey, last_update_time, is_private, nodes_count, meshcore_nodes_count, meshtastic_nodes_count)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?)
          SQL
          [
            "instance-proto-counts",
            "proto.mesh.test",
            PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
            fixed_time.to_i,
            0,
            50,
            20,
            30,
          ],
        )
        db.execute(
          <<~SQL,
            INSERT INTO instances (id, domain, pubkey, last_update_time, is_private, nodes_count)
            VALUES (?, ?, ?, ?, ?, ?)
          SQL
          [
            "instance-no-proto",
            "noproto.mesh.test",
            PotatoMesh::Application::INSTANCE_PUBLIC_KEY_PEM,
            fixed_time.to_i,
            0,
            10,
          ],
        )
      end

      payload = application_class.load_instances_for_api
      with_proto = payload.find { |row| row["domain"] == "proto.mesh.test" }
      without_proto = payload.find { |row| row["domain"] == "noproto.mesh.test" }

      expect(with_proto["meshcoreNodesCount"]).to eq(20)
      expect(with_proto["meshtasticNodesCount"]).to eq(30)
      expect(without_proto.key?("meshcoreNodesCount")).to be(false)
      expect(without_proto.key?("meshtasticNodesCount")).to be(false)
    end
  end
end

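The omit-when-nil contract this spec asserts can be sketched as a tiny serializer. The method name and field handling below are hypothetical illustrations — the real serialization lives in the app's `load_instances_for_api`:

```ruby
# Hypothetical sketch: camelCase count fields are only added to the API
# row when the underlying per-protocol column is non-nil, so consumers can
# distinguish "zero nodes" from "count not reported".
def instance_api_row(record)
  row = {
    "domain" => record[:domain],
    "nodesCount" => record[:nodes_count].to_i,
  }
  row["meshcoreNodesCount"] = record[:meshcore_nodes_count] unless record[:meshcore_nodes_count].nil?
  row["meshtasticNodesCount"] = record[:meshtastic_nodes_count] unless record[:meshtastic_nodes_count].nil?
  row
end

with_counts = instance_api_row(domain: "proto.mesh.test", nodes_count: 50,
                               meshcore_nodes_count: 20, meshtastic_nodes_count: 30)
without_counts = instance_api_row(domain: "noproto.mesh.test", nodes_count: 10,
                                  meshcore_nodes_count: nil, meshtastic_nodes_count: nil)

puts with_counts.key?("meshcoreNodesCount")     # true
puts without_counts.key?("meshcoreNodesCount")  # false
```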
@@ -657,4 +657,94 @@ RSpec.describe PotatoMesh::App::Queries do
      expect(rows.length).to be >= 1
    end
  end

  # ---------------------------------------------------------------------------
  # batch_resolve_node_ids
  # ---------------------------------------------------------------------------
  describe "#batch_resolve_node_ids" do
    before do
      with_db do |db|
        db.execute(
          "INSERT INTO nodes(node_id, num, short_name, last_heard, first_heard, role) VALUES (?,?,?,?,?,?)",
          ["!aabb0001", 0xaabb0001, "N1", now, now, "CLIENT"],
        )
        db.execute(
          "INSERT INTO nodes(node_id, num, short_name, last_heard, first_heard, role) VALUES (?,?,?,?,?,?)",
          ["!aabb0002", 0xaabb0002, "N2", now, now, "CLIENT"],
        )
      end
    end

    it "resolves string node_id references" do
      with_db do |db|
        result = queries.batch_resolve_node_ids(db, ["!aabb0001", "!aabb0002"])
        expect(result["!aabb0001"]).to eq("!aabb0001")
        expect(result["!aabb0002"]).to eq("!aabb0002")
      end
    end

    it "resolves numeric references to canonical node_id" do
      with_db do |db|
        num_str = 0xaabb0001.to_s
        result = queries.batch_resolve_node_ids(db, [num_str])
        expect(result[num_str]).to eq("!aabb0001")
      end
    end

    it "returns an empty hash for empty input" do
      with_db do |db|
        expect(queries.batch_resolve_node_ids(db, [])).to eq({})
        expect(queries.batch_resolve_node_ids(db, nil)).to eq({})
      end
    end

    it "omits references that cannot be resolved" do
      with_db do |db|
        result = queries.batch_resolve_node_ids(db, ["!nonexistent"])
        expect(result).not_to have_key("!nonexistent")
      end
    end
  end

  # ---------------------------------------------------------------------------
  # node_lookup_clause with db parameter
  # ---------------------------------------------------------------------------
  describe "#node_lookup_clause with db" do
    before do
      with_db do |db|
        db.execute(
          "INSERT INTO nodes(node_id, num, short_name, last_heard, first_heard, role) VALUES (?,?,?,?,?,?)",
          ["!deadbeef", 0xdeadbeef, "DB", now, now, "CLIENT"],
        )
      end
    end

    it "folds numeric columns into string columns when db is provided" do
      with_db do |db|
        clause = queries.node_lookup_clause(
          "!deadbeef",
          string_columns: ["node_id"],
          numeric_columns: ["node_num"],
          db: db,
        )
        expect(clause).not_to be_nil
        sql_fragment, _params = clause
        # When db is provided and numeric values are resolved, the OR with
        # node_num should not appear in the SQL.
        expect(sql_fragment).not_to include("node_num")
        expect(sql_fragment).to include("node_id")
      end
    end

    it "falls back to OR when db is not provided" do
      clause = queries.node_lookup_clause(
        0xdeadbeef,
        string_columns: ["node_id"],
        numeric_columns: ["node_num"],
      )
      expect(clause).not_to be_nil
      sql_fragment, _params = clause
      expect(sql_fragment).to include("OR")
    end
  end

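The resolution behaviour the `#batch_resolve_node_ids` specs pin down — string `!hex` references map to themselves, numeric references resolve to the canonical id, and unresolvable references are omitted — can be sketched without a database (the in-memory lookup below is a hypothetical stand-in for the SQL query):

```ruby
# Hypothetical sketch of batch node-id resolution against a known set of
# canonical "!hex" ids instead of the nodes table.
def batch_resolve_node_ids(known_ids, refs)
  # Index canonical ids by their numeric form, e.g. "!aabb0001" => 0xaabb0001.
  by_num = known_ids.to_h { |id| [id.delete_prefix("!").to_i(16), id] }
  Array(refs).each_with_object({}) do |ref, out|
    if known_ids.include?(ref)
      out[ref] = ref                 # canonical string reference
    elsif ref =~ /\A\d+\z/ && by_num.key?(ref.to_i)
      out[ref] = by_num[ref.to_i]    # numeric reference resolves to "!hex"
    end
    # unresolvable references are simply omitted from the result
  end
end

known = ["!aabb0001", "!aabb0002"]
result = batch_resolve_node_ids(known, ["!aabb0001", 0xaabb0001.to_s, "!nonexistent"])
puts result.keys.length  # 2 — the unknown reference is dropped
```

`Array(refs)` also covers the nil-input case the specs check, collapsing it to an empty result.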
@@ -178,8 +178,15 @@
        <button type="button" id="filterClear" class="filter-clear" aria-label="Clear filter" hidden>×</button>
      </div>
    <% end %>
    <button id="protocolToggleMeshcore" type="button" class="icon-button protocol-toggle-btn" aria-label="Hide MeshCore nodes" aria-pressed="false" title="Toggle MeshCore node visibility" hidden>
      <img src="/assets/img/meshcore.svg" alt="" width="20" height="20" aria-hidden="true" class="protocol-toggle-icon" loading="lazy" decoding="async" />
    </button>
    <button id="protocolToggleMeshtastic" type="button" class="icon-button protocol-toggle-btn" aria-label="Hide Meshtastic nodes" aria-pressed="false" title="Toggle Meshtastic node visibility" hidden>
      <img src="/assets/img/meshtastic.svg" alt="" width="20" height="20" aria-hidden="true" class="protocol-toggle-icon" loading="lazy" decoding="async" />
    </button>
    <span id="footerActiveNodes" class="meta-active-nodes"></span>
    <button id="refreshBtn" type="button" title="Fetch latest data now">Refresh</button>
    <button id="autorefreshToggle" type="button" aria-label="Pause auto-refresh" aria-pressed="false" title="Pause or resume automatic data refresh">⏸</button>
    <span id="status" class="refresh-timestamp" aria-live="polite"></span>
  </div>
<% end %>

@@ -24,6 +24,8 @@
      <th class="instances-col instances-col--channel" data-sort-key="channel"><span class="sort-header" role="button" tabindex="0" data-sort-key="channel" data-sort-label="Channel">Channel <span class="sort-indicator" aria-hidden="true"></span></span></th>
      <th class="instances-col instances-col--frequency" data-sort-key="frequency"><span class="sort-header" role="button" tabindex="0" data-sort-key="frequency" data-sort-label="Frequency">Frequency <span class="sort-indicator" aria-hidden="true"></span></span></th>
      <th class="instances-col instances-col--nodes" data-sort-key="nodesCount"><span class="sort-header" role="button" tabindex="0" data-sort-key="nodesCount" data-sort-label="Active Nodes (24h)">Active Nodes (24h) <span class="sort-indicator" aria-hidden="true"></span></span></th>
      <th class="instances-col instances-col--meshcore-nodes" data-sort-key="meshcoreNodesCount"><span class="sort-header" role="button" tabindex="0" data-sort-key="meshcoreNodesCount" data-sort-label="MeshCore (24h)"><img src="/assets/img/meshcore.svg" alt="" width="13" height="13" class="protocol-icon protocol-icon--meshcore" aria-hidden="true" loading="lazy" decoding="async" /> MeshCore (24h) <span class="sort-indicator" aria-hidden="true"></span></span></th>
      <th class="instances-col instances-col--meshtastic-nodes" data-sort-key="meshtasticNodesCount"><span class="sort-header" role="button" tabindex="0" data-sort-key="meshtasticNodesCount" data-sort-label="Meshtastic (24h)"><img src="/assets/img/meshtastic.svg" alt="" width="13" height="13" class="protocol-icon protocol-icon--meshtastic" aria-hidden="true" loading="lazy" decoding="async" /> Meshtastic (24h) <span class="sort-indicator" aria-hidden="true"></span></span></th>
      <th class="instances-col instances-col--latitude" data-sort-key="latitude"><span class="sort-header" role="button" tabindex="0" data-sort-key="latitude" data-sort-label="Latitude">Latitude <span class="sort-indicator" aria-hidden="true"></span></span></th>
      <th class="instances-col instances-col--longitude" data-sort-key="longitude"><span class="sort-header" role="button" tabindex="0" data-sort-key="longitude" data-sort-label="Longitude">Longitude <span class="sort-indicator" aria-hidden="true"></span></span></th>
      <th class="instances-col instances-col--last-update" data-sort-key="lastUpdateTime"><span class="sort-header" role="button" tabindex="0" data-sort-key="lastUpdateTime" data-sort-label="Last Update">Last Update <span class="sort-indicator" aria-hidden="true"></span></span></th>