Compare commits

...

126 Commits

Author SHA1 Message Date
Jack Kingsman 6b81dd3082 Updating changelog + build for 3.12.1 2026-04-19 21:18:26 -07:00
Jack Kingsman cc2b16e53f Test fix 2026-04-19 21:14:38 -07:00
Jack Kingsman 330007e120 Be smarter about web push not being available on snakeoil certs for mobile 2026-04-19 21:10:17 -07:00
Jack Kingsman f5a2a21f11 Fix e2e tests 2026-04-19 20:45:11 -07:00
Jack Kingsman a3e62885d4 Merge pull request #206 from jkingsman/dependabot/uv/uv-2c6491f7af
Bump the uv group across 1 directory with 2 updates
2026-04-19 19:36:12 -07:00
Jack Kingsman dbdd722c48 Merge pull request #207 from jkingsman/channel-mute
Add channel mute
2026-04-19 19:35:52 -07:00
jkingsman c8c8e6b549 Add channel mute 2026-04-19 19:31:26 -07:00
dependabot[bot] b8683e57d8 Bump the uv group across 1 directory with 2 updates
Bumps the uv group with 2 updates in the / directory: [pytest](https://github.com/pytest-dev/pytest) and [requests](https://github.com/psf/requests).


Updates `pytest` from 9.0.2 to 9.0.3
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/9.0.2...9.0.3)

Updates `requests` from 2.32.5 to 2.33.0
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.32.5...v2.33.0)

---
updated-dependencies:
- dependency-name: pytest
  dependency-version: 9.0.3
  dependency-type: direct:development
  dependency-group: uv
- dependency-name: requests
  dependency-version: 2.33.0
  dependency-type: indirect
  dependency-group: uv
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-20 01:44:21 +00:00
Jack Kingsman 491f159463 Merge pull request #205 from jkingsman/dependabot/npm_and_yarn/frontend/npm_and_yarn-916abd5bfa
Bump the npm_and_yarn group across 1 directory with 4 updates
2026-04-19 18:43:06 -07:00
jkingsman ead74e975b Update tests for vitest bump 2026-04-19 18:36:13 -07:00
dependabot[bot] 4fbd245ee4 Bump the npm_and_yarn group across 1 directory with 4 updates
Bumps the npm_and_yarn group with 3 updates in the /frontend directory: [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite), [flatted](https://github.com/WebReflection/flatted) and [picomatch](https://github.com/micromatch/picomatch).


Updates `vite` from 6.4.1 to 6.4.2
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v6.4.2/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v6.4.2/packages/vite)

Updates `esbuild` from 0.21.5 to 0.25.12
- [Release notes](https://github.com/evanw/esbuild/releases)
- [Changelog](https://github.com/evanw/esbuild/blob/main/CHANGELOG-2024.md)
- [Commits](https://github.com/evanw/esbuild/compare/v0.21.5...v0.25.12)

Updates `flatted` from 3.4.0 to 3.4.2
- [Commits](https://github.com/WebReflection/flatted/compare/v3.4.0...v3.4.2)

Updates `picomatch` from 2.3.1 to 2.3.2
- [Release notes](https://github.com/micromatch/picomatch/releases)
- [Changelog](https://github.com/micromatch/picomatch/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/picomatch/compare/2.3.1...2.3.2)

Updates `picomatch` from 4.0.3 to 4.0.4
- [Release notes](https://github.com/micromatch/picomatch/releases)
- [Changelog](https://github.com/micromatch/picomatch/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/picomatch/compare/4.0.3...4.0.4)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 6.4.2
  dependency-type: direct:development
  dependency-group: npm_and_yarn
- dependency-name: esbuild
  dependency-version: 0.25.12
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: flatted
  dependency-version: 3.4.2
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: picomatch
  dependency-version: 2.3.2
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: picomatch
  dependency-version: 4.0.4
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-20 01:15:37 +00:00
Jack Kingsman dc7ec13cc5 Instructions for full monitoring feed 2026-04-19 16:25:18 -07:00
Jack Kingsman cfa2bf575c Correct HA documentation to use the actual node name 2026-04-19 15:11:25 -07:00
Jack Kingsman e9ef68432a Make caps consistent 2026-04-19 14:51:09 -07:00
Jack Kingsman 476adf393f Merge pull request #204 from jkingsman/extended-contact-fetch-timeout
Work better with radios that are flakey around providing current contact load state (BLE?)
2026-04-19 14:12:35 -07:00
Jack Kingsman f7a311d74b Always clear fav flag on a blind load 2026-04-19 01:25:08 -07:00
Jack Kingsman 09f807230b Patch up some vagaries and maintain best-effort loading. 2026-04-19 00:51:45 -07:00
Jack Kingsman c098f9eeb5 Be better about blind loading/auto-evict logging and run-through 2026-04-19 00:27:53 -07:00
Jack Kingsman 05493d06fc Extend contact read timeouts and add circular load/autoevict load mode 2026-04-18 21:06:27 -07:00
Jack Kingsman 6c1b8bd7e9 Phrasing fixups 2026-04-17 12:49:56 -07:00
Jack Kingsman d6e1218888 Updating changelog + build for 3.12.0 2026-04-17 12:21:35 -07:00
Jack Kingsman ad0e398704 Docs improvements 2026-04-17 10:24:45 -07:00
Jack Kingsman 39f5bb2b51 Don't stop on missing wire ack for dm send 2026-04-17 10:04:26 -07:00
Jack Kingsman 5257cb0b1b Go ham on radio clearing in manual mode 2026-04-17 09:38:05 -07:00
Jack Kingsman b1547773c5 Phrasing corrections 2026-04-17 08:56:16 -07:00
Jack Kingsman 71da6841c1 Documentation improvements 2026-04-17 00:38:50 -07:00
Jack Kingsman 6f00e857c2 Suck less at settings UI (help me I'm not a designer) 2026-04-16 23:49:52 -07:00
Jack Kingsman 303becf4b8 Merge pull request #183 from jkingsman/web-push
Add web push
2026-04-16 23:12:39 -07:00
Jack Kingsman 9ab4e7a9b0 Add beta testing note 2026-04-16 23:08:57 -07:00
Jack Kingsman b1020e6e34 Add some QOL improvements to HA integration 2026-04-16 23:03:56 -07:00
Jack Kingsman 87a892fc6e Don't chirp about the time set failures all the time 2026-04-16 21:58:53 -07:00
Jack Kingsman af76546287 Pass 2 2026-04-16 21:44:52 -07:00
Jack Kingsman 31bd4a0744 Add web push 2026-04-16 18:41:19 -07:00
Jack Kingsman 1db724073b Add follow-os light/dark theme. Closes #199. 2026-04-16 18:40:22 -07:00
Jack Kingsman 4783da8f3e Work on some more concurrency fixes re: locks and context managers. Poking at #179. 2026-04-16 18:04:56 -07:00
Jack Kingsman 4b69ec4519 Offer multiple timing windows for repeater telemetry pickup. Closes #192. 2026-04-16 13:55:01 -07:00
Jack Kingsman 8efbbd97bd Add airtime math and per-minute packets-over-uptime display for repeaters. Closes #194. 2026-04-16 13:29:24 -07:00
Jack Kingsman 1437e8e48a Fix issue where last_seen is incremented by events that definitely shouldn't increment it. Fixes #201. 2026-04-16 13:16:07 -07:00
Jack Kingsman 5cd8f7e80f Add local tunable for glittering status dot. Closes #200. 2026-04-16 12:40:30 -07:00
Jack Kingsman e8c50d0b2a Add neater contact + channels. Closes #197. 2026-04-16 12:22:02 -07:00
Jack Kingsman 7f3bb89323 Always expand layer selection and fix up top status bar 2026-04-16 12:18:14 -07:00
Jack Kingsman 5bfdd0880e Support multiple map layers. Closes #193. 2026-04-16 12:15:36 -07:00
Jack Kingsman 0e9bd59b44 Show learned path in routing override. Closes #195. 2026-04-16 11:59:43 -07:00
Jack Kingsman b1cd6e1aa9 Add link to node from map display. Closes #189. 2026-04-16 11:58:39 -07:00
Jack Kingsman 56fc589e0b Move to all PNGs in webmanifest. 2026-04-16 11:44:22 -07:00
Jack Kingsman 64502c4ca2 Fix default URL for map upload. Closes #190. 2026-04-16 11:39:17 -07:00
Jack Kingsman d1f657342a Fix statusbar over slide out panes in PWA. Closes #191. 2026-04-16 11:33:53 -07:00
Jack Kingsman 86a0ac7beb Don't strip outgoing colons on DMs or room servers. Closes #198. 2026-04-15 19:13:29 -07:00
Jack Kingsman 3b7e2737ee Updating changelog + build for 3.11.3 2026-04-12 23:54:44 -07:00
Jack Kingsman 01158ac69f Add screenshots and icons for webmanifest 2026-04-12 23:51:13 -07:00
Jack Kingsman 485df05372 Modify radio contact fill logic to use sent OR received messages as recency queue for loadin selection after favorites 2026-04-12 23:45:43 -07:00
Jack Kingsman e5e9eab935 Updating changelog + build for 3.11.2 2026-04-12 22:44:46 -07:00
Jack Kingsman 33b2d3c260 Unread DMs are ALWAYS at the top. Closes #185. 2026-04-12 22:41:41 -07:00
Jack Kingsman eccbd0bac5 use-credentials on webmanifest fetches so basic auth behaves. Closes #182. 2026-04-12 22:36:08 -07:00
Jack Kingsman 4f54ec2c93 Updating changelog + build for 3.11.1 2026-04-12 20:50:12 -07:00
Jack Kingsman eed38337c8 Add dummy SWer 2026-04-12 19:11:17 -07:00
Jack Kingsman e1ee7fcd24 Add default precision 2026-04-12 18:59:44 -07:00
Jack Kingsman 2756b1ae8d better wrapping around owner label on repeaters 2026-04-12 17:40:37 -07:00
Jack Kingsman ef1d6a5a1a Make all scripts +x 2026-04-12 17:35:54 -07:00
Jack Kingsman 14f42c59fe Use localized units for repeater display 2026-04-12 17:32:07 -07:00
Jack Kingsman b9414e84ee Add LPP/tracked repeater telemetry and HA fanout 2026-04-12 17:23:25 -07:00
Jack Kingsman 95a17ca8ee Merge pull request #174 from jkingsman/ha
HomeAssistant MQTT Integration Module
2026-04-12 15:09:49 -07:00
Jack Kingsman e6cedfbd0b Improve db best practices. Contributes to fixing #179. 2026-04-12 15:08:53 -07:00
Jack Kingsman c3d0af1473 Fix memoization 2026-04-12 15:06:45 -07:00
Jack Kingsman c24e291017 Destroy old discovery topics when the radio key changes 2026-04-12 14:59:41 -07:00
Jack Kingsman d2d009ae79 Autoseed with radio identity 2026-04-12 14:54:36 -07:00
Jack Kingsman d09166df84 HomeAssistant MQTT fanout 2026-04-12 14:36:13 -07:00
Jack Kingsman f2762ab495 Merge pull request #178 from jkingsman/migration-updates
Migration improvements
2026-04-12 14:35:26 -07:00
Jack Kingsman a411562ca7 Filter keys to only search using prefix/beginning. Closes #180 2026-04-12 12:08:30 -07:00
Jack Kingsman cde4d1744e Fix async db handling. Closes #179. 2026-04-12 11:57:37 -07:00
Jack Kingsman 4e73cd39c8 Migration improvements 2 2026-04-11 00:38:47 -07:00
Jack Kingsman 53b341d6fb Make migrations more better 2026-04-10 16:28:03 -07:00
Jack Kingsman 76ac97010e Use non-node20 checkout action 2026-04-10 16:19:21 -07:00
Jack Kingsman 53a4d8186a Updating changelog + build for 3.11.0 2026-04-10 16:12:27 -07:00
Jack Kingsman 70e1669113 Improve test coverage 2026-04-10 16:04:02 -07:00
Jack Kingsman 3b1a292507 Docs updates and be consistent about node >=20 2026-04-10 15:57:47 -07:00
Jack Kingsman 4f19e1ec9a Fix races and stale things 2026-04-10 15:54:03 -07:00
Jack Kingsman 59601bb98e Assume that a same-second same-message same-first-byte-key DM is more likely an echo than them sending the same message, and multi-retry for flood scope restoration 2026-04-10 15:50:45 -07:00
Jack Kingsman f6b0fd21fb Don't consume DM resend attempt on busy radio 2026-04-10 15:46:19 -07:00
Jack Kingsman 8a4858a313 Don't consume DM resend attempt on busy radio 2026-04-10 15:44:50 -07:00
Jack Kingsman 442c2fad20 Fix some frontend display/quality/doc issues 2026-04-10 15:43:08 -07:00
Jack Kingsman 8cc542ce23 Fix same-second same-message collision in room servers with per-sender disambiguation at DB level 2026-04-10 15:36:53 -07:00
Jack Kingsman a7258c120e Merge pull request #177 from YourSandwich/feature/battery-status
Add optional battery display to status bar
2026-04-10 14:55:39 -07:00
Jack Kingsman 8752320f52 Add some tests and move the helpers into their own TS file 2026-04-10 14:53:57 -07:00
Jack Kingsman f9f046a05f Fix inversion of const definition location 2026-04-10 14:51:19 -07:00
Jack Kingsman 390c0624ea IIFE => memo for battery color/styling conversion 2026-04-10 14:49:05 -07:00
YourSandwich 2f55d11b0b Add battery display toggles to Local Configuration 2026-04-10 23:38:29 +02:00
YourSandwich fa0be24990 Add battery indicator to status bar 2026-04-10 23:38:29 +02:00
Jack Kingsman 1e22a21445 Add radio health &c. to fanout bus 2026-04-10 14:31:45 -07:00
YourSandwich e09a3a01f7 Add localStorage helpers for battery display settings 2026-04-10 22:25:17 +02:00
Jack Kingsman 3bd756ee4e Pluck in HA radio stats into the WS fanout endpoint 2026-04-10 12:39:37 -07:00
Jack Kingsman 43c5e0f67d Improve e2e testing posture to make it sliiiightly less unfriendly for others to get working 2026-04-10 11:36:26 -07:00
Jack Kingsman c0fc5fbba2 Add AUR download and test script 2026-04-10 11:30:05 -07:00
Jack Kingsman c7248222dd Updating changelog + build for 3.10.0 2026-04-10 11:16:16 -07:00
Jack Kingsman 1e18a91f12 Merge pull request #172 from YourSandwich/aur-install-instructions
Add Arch Linux (AUR) packaging infrastructure
2026-04-10 10:54:49 -07:00
Jack Kingsman 18db6e4dd8 Make test script executable 2026-04-10 10:49:49 -07:00
Jack Kingsman 2393dadf1b Unload the service on uninstall 2026-04-10 10:48:38 -07:00
Jack Kingsman fd26576e0d Use correct email 2026-04-10 10:47:21 -07:00
Sandwich cb5a76eb5f Replace manual user/group creation with sysusers.d and tmpfiles.d 2026-04-10 19:23:01 +02:00
Jack Kingsman 7f5dde119f Update AGENTS.md 2026-04-10 00:15:57 -07:00
Jack Kingsman 799a721761 Be more defensive about systemd detection 2026-04-10 00:10:53 -07:00
Jack Kingsman 152a584f35 Fix TCP host 2026-04-10 00:10:41 -07:00
Jack Kingsman 5cc0476426 Fix port numbering 2026-04-10 00:06:22 -07:00
Jack Kingsman e468c6c161 Change command palette shortcut 2026-04-09 23:45:16 -07:00
Jack Kingsman e33537018b Fix AUR username 2026-04-09 23:11:02 -07:00
Jack Kingsman 0727793560 Add test script 2026-04-09 23:08:32 -07:00
Jack Kingsman 5c4e04e024 Skip daemon reload if systemctl isn't around 2026-04-09 23:08:26 -07:00
Jack Kingsman 967269ef7d Initial AUR work 2026-04-09 23:08:22 -07:00
Jack Kingsman 1903797d0d Fix broken statistics pane e2e test 2026-04-09 22:30:12 -07:00
Jack Kingsman bb5af5ba82 Bump apprise to 1.9.9. Closes #173. 2026-04-09 17:20:57 -07:00
Sandwich 424da7e232 Add Arch Linux (AUR) install instructions to README
Adds "Install Path 3: Arch Linux (AUR)" section covering both AUR
helper and manual makepkg installation, linking to the published
remoteterm-meshcore AUR package.

Closes #171
2026-04-09 03:51:39 +02:00
Jack Kingsman 159df1ec5b Revert "Add debug lines for fav click"
This reverts commit 8e2e039985.
2026-04-08 16:33:44 -07:00
Jack Kingsman 8e2e039985 Add debug lines for fav click 2026-04-08 16:18:46 -07:00
Jack Kingsman 01c86a486e Add packet feed filters; closes #169. 2026-04-08 14:44:41 -07:00
Jack Kingsman 7d5cfdec26 Add note about startup on windows 2026-04-07 22:07:31 -07:00
Jack Kingsman 5fe0ac0ad4 Be more memory concious on recent contact fetch 2026-04-07 16:41:34 -07:00
Jack Kingsman b98102ccac Add 72hr packet density view 2026-04-07 16:26:01 -07:00
Jack Kingsman a02c3cae9e Updating changelog + build for 3.9.0 2026-04-06 22:10:06 -07:00
Jack Kingsman ca7349a1a8 Add autofocus to text boxes 2026-04-06 21:59:46 -07:00
Jack Kingsman eeaa11b8b0 Fix lint bugs 2026-04-06 20:36:47 -07:00
Jack Kingsman 08eaf090b2 Be more guarded in the radio validity checks (and get outta here, you random repeaters I never favorited!) 2026-04-06 20:34:16 -07:00
Jack Kingsman 2f43420235 Add command palette 2026-04-06 20:27:55 -07:00
Jack Kingsman af74663518 Add guard for favorites sync 2026-04-06 20:12:58 -07:00
Jack Kingsman b7981c0450 Getting all Cal Raleigh up in here 2026-04-06 19:09:48 -07:00
Jack Kingsman 0f4976b9ee Merge pull request #167 from jkingsman/migrate-favorites
Add favorites as contact field (dug)
2026-04-05 22:19:01 -07:00
Jack Kingsman 1991f2515b Support relative URLs. Closes #165. 2026-04-05 22:11:12 -07:00
286 changed files with 22064 additions and 8995 deletions
+2 -2
@@ -11,7 +11,7 @@ jobs:
steps:
- name: Check out repository
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Set up Python
uses: actions/setup-python@v6
@@ -44,7 +44,7 @@ jobs:
steps:
- name: Check out repository
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Set up Node.js
uses: actions/setup-node@v6
+73
@@ -0,0 +1,73 @@
name: Publish AUR package
# Pushes the contents of pkg/aur/ to the remoteterm-meshcore AUR repository
# whenever a GitHub release is published. Can also be triggered manually for
# testing or out-of-band republishes.
#
# Required secrets:
# AUR_SSH_PRIVATE_KEY Private SSH key registered with the AUR maintainer
# account that owns the remoteterm-meshcore package.
# AUR_COMMIT_EMAIL Email used for the AUR git commit identity.
on:
release:
types: [published]
workflow_dispatch:
inputs:
version:
description: 'Version to publish (no v prefix, e.g. 3.9.1)'
required: true
concurrency:
# Serialize publishes so a fast back-to-back release sequence cannot race
# two pushes against the AUR repo. The later one wins by virtue of being
# the final state.
group: publish-aur
cancel-in-progress: false
jobs:
publish-aur:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- name: Resolve version from event
id: version
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
VERSION="${{ inputs.version }}"
else
VERSION="${{ github.event.release.tag_name }}"
fi
VERSION="${VERSION#v}"
echo "version=$VERSION" >> "$GITHUB_OUTPUT"
echo "Publishing AUR package for version $VERSION"
- name: Stamp pkgver into PKGBUILD
run: |
sed -i "s/^pkgver=.*/pkgver=${{ steps.version.outputs.version }}/" pkg/aur/PKGBUILD
sed -i "s/^pkgrel=.*/pkgrel=1/" pkg/aur/PKGBUILD
- name: Publish to AUR
uses: KSXGitHub/github-actions-deploy-aur@v4.1.2
with:
pkgname: remoteterm-meshcore
pkgbuild: pkg/aur/PKGBUILD
assets: |
pkg/aur/remoteterm-meshcore.install
pkg/aur/remoteterm-meshcore.service
pkg/aur/remoteterm-meshcore.sysusers
pkg/aur/remoteterm-meshcore.tmpfiles
pkg/aur/remoteterm.env
commit_username: jackkingsman
commit_email: ${{ secrets.AUR_COMMIT_EMAIL }}
ssh_private_key: ${{ secrets.AUR_SSH_PRIVATE_KEY }}
commit_message: "Update to ${{ steps.version.outputs.version }}"
# Recompute sha256sums from the live release tarball + the bundled
# service/env files. The committed PKGBUILD has SKIP placeholders.
updpkgsums: true
# Validate the PKGBUILD parses and sources download, but skip the
# actual build (which would run uv sync + npm install for several
# minutes of CI time on every release).
test: true
test_flags: --clean --cleanbuild --nodeps --nobuild
+3
@@ -30,3 +30,6 @@ references/
docker-compose.yml
docker-compose.yaml
.docker-certs/
# HA test environment (created by scripts/setup/start_ha_test_env.sh)
ha_test_config/
+31 -4
@@ -179,7 +179,9 @@ Outgoing DMs send once immediately, then may retry up to 2 more times in the bac
ACKs are not a contact-route source. They drive message delivery state and may appear in analytics/detail surfaces, but they do not update `direct_path*` or otherwise influence route selection for future sends.
**Channel messages**: Flood messages echo back through repeaters. Repeats are identified by the database UNIQUE constraint on `(type, conversation_key, text, sender_timestamp)` — when an INSERT hits a duplicate, `_handle_duplicate_message()` in `packet_processor.py` adds the new path and, for outgoing messages only, increments the ack count. Incoming repeats add path data but do not change the ack count. There is no timestamp-windowed matching; deduplication is exact-match only.
**Channel messages**: Flood messages echo back through repeaters. Repeats are identified by the database UNIQUE constraint `idx_messages_dedup_null_safe` on `(type, conversation_key, text, COALESCE(sender_timestamp, 0))` where `type = 'CHAN'` — when an INSERT hits a duplicate, `_handle_duplicate_message()` in `packet_processor.py` adds the new path and, for outgoing messages only, increments the ack count. Incoming repeats add path data but do not change the ack count. There is no timestamp-windowed matching; deduplication is exact-match only.
**Incoming direct messages**: A separate unique index `idx_messages_incoming_priv_dedup` on `(type, conversation_key, text, COALESCE(sender_timestamp, 0), COALESCE(sender_key, ''))` where `type = 'PRIV' AND outgoing = 0` deduplicates incoming DMs. The additional `sender_key` term (added in migration 056) distinguishes room-server posts from different senders that arrive in the same second with identical text.
This message-layer echo/path handling is independent of raw-packet storage deduplication.
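To make the two dedup constraints above concrete, here is a minimal sqlite3 sketch of null-safe partial unique indexes shaped like the ones described. The table columns and DDL are illustrative assumptions; the authoritative definitions live in the project's migrations (e.g. migration 056).

```python
import sqlite3

# Illustrative only: column names follow the documentation above, but the real
# schema and migration DDL may differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (
    id INTEGER PRIMARY KEY,
    type TEXT NOT NULL,                 -- 'CHAN' or 'PRIV'
    conversation_key TEXT NOT NULL,
    text TEXT NOT NULL,
    sender_timestamp INTEGER,
    sender_key TEXT,
    outgoing INTEGER NOT NULL DEFAULT 0
);

-- Channel-message dedup: exact match, with COALESCE so NULL timestamps still
-- participate in uniqueness.
CREATE UNIQUE INDEX idx_messages_dedup_null_safe
    ON messages (type, conversation_key, text, COALESCE(sender_timestamp, 0))
    WHERE type = 'CHAN';

-- Incoming-DM dedup: adds sender_key so same-second identical room-server
-- posts from different senders stay distinct.
CREATE UNIQUE INDEX idx_messages_incoming_priv_dedup
    ON messages (type, conversation_key, text,
                 COALESCE(sender_timestamp, 0), COALESCE(sender_key, ''))
    WHERE type = 'PRIV' AND outgoing = 0;
""")

conn.execute("INSERT INTO messages (type, conversation_key, text, sender_timestamp) "
             "VALUES ('CHAN', 'chan1', 'hello', 1700000000)")
try:
    # A repeater echo of the same channel message trips the unique index; the
    # real code catches this and hands it to _handle_duplicate_message().
    conn.execute("INSERT INTO messages (type, conversation_key, text, sender_timestamp) "
                 "VALUES ('CHAN', 'chan1', 'hello', 1700000000)")
except sqlite3.IntegrityError as exc:
    print("duplicate detected:", exc)
```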
@@ -197,6 +199,7 @@ This message-layer echo/path handling is independent of raw-packet storage dedup
│ ├── event_handlers.py # Radio events
│ ├── decoder.py # Packet decryption
│ ├── websocket.py # Real-time broadcasts
│ ├── push/ # Web Push notification subsystem (VAPID keys, dispatch, send)
│ └── fanout/ # Fanout bus: MQTT, bots, webhooks, Apprise, SQS (see fanout/AGENTS_fanout.md)
├── frontend/ # React frontend
│ ├── AGENTS.md # Frontend documentation
@@ -209,6 +212,7 @@ This message-layer echo/path handling is independent of raw-packet storage dedup
│ │ ├── MapView.tsx # Leaflet map showing node locations
│ │ └── ...
│ └── vite.config.ts
├── pkg/aur/ # AUR package files (PKGBUILD, systemd service, env, install hooks)
├── scripts/ # Quality / release helpers (listing below is representative, not exhaustive)
│ ├── build/
│ │ ├── collect_licenses.sh # Gather third-party license attributions
@@ -216,7 +220,8 @@ This message-layer echo/path handling is independent of raw-packet storage dedup
│ ├── quality/
│ │ ├── all_quality.sh # Repo-standard autofix + validate gate
│ │ ├── e2e.sh # End-to-end test runner
│ │ └── extended_quality.sh # Quality gate plus e2e and Docker matrix
│ │ ├── extended_quality.sh # Quality gate plus e2e and Docker matrix
│ │ └── test_aur_package.sh # Build + install AUR package in Arch Docker containers
│ └── setup/
│ ├── fetch_prebuilt_frontend.py # Download release frontend fallback
│ └── install_service.sh # Install/configure Linux systemd service
@@ -343,6 +348,7 @@ All endpoints are prefixed with `/api` (e.g., `/api/health`).
| POST | `/api/contacts/{public_key}/repeater/radio-settings` | Fetch repeater radio config via CLI |
| POST | `/api/contacts/{public_key}/repeater/advert-intervals` | Fetch advert intervals |
| POST | `/api/contacts/{public_key}/repeater/owner-info` | Fetch owner info |
| GET | `/api/contacts/{public_key}/repeater/telemetry-history` | Stored telemetry history for a repeater (read-only, no radio access) |
| POST | `/api/contacts/{public_key}/room/login` | Log in to a room server |
| POST | `/api/contacts/{public_key}/room/status` | Fetch room-server status telemetry |
| POST | `/api/contacts/{public_key}/room/lpp-telemetry` | Fetch room-server CayenneLPP sensor data |
@@ -371,13 +377,22 @@ All endpoints are prefixed with `/api` (e.g., `/api/health`).
| POST | `/api/settings/favorites/toggle` | Toggle favorite status |
| POST | `/api/settings/blocked-keys/toggle` | Toggle blocked key |
| POST | `/api/settings/blocked-names/toggle` | Toggle blocked name |
| POST | `/api/settings/migrate` | One-time migration from frontend localStorage |
| POST | `/api/settings/tracked-telemetry/toggle` | Toggle tracked telemetry repeater |
| GET | `/api/settings/tracked-telemetry/schedule` | Current telemetry scheduling derivation and next-run-at timestamp |
| GET | `/api/fanout` | List all fanout configs |
| POST | `/api/fanout` | Create new fanout config |
| PATCH | `/api/fanout/{id}` | Update fanout config (triggers module reload) |
| DELETE | `/api/fanout/{id}` | Delete fanout config (stops module) |
| POST | `/api/fanout/bots/disable-until-restart` | Stop bot fanout modules and keep bots disabled until the process restarts |
| GET | `/api/statistics` | Aggregated mesh network statistics |
| GET | `/api/push/vapid-public-key` | VAPID public key for browser push subscription |
| POST | `/api/push/subscribe` | Register/upsert a push subscription |
| GET | `/api/push/subscriptions` | List all push subscriptions |
| PATCH | `/api/push/subscriptions/{id}` | Update subscription label or filter preferences |
| DELETE | `/api/push/subscriptions/{id}` | Delete a push subscription |
| POST | `/api/push/subscriptions/{id}/test` | Send a test push notification |
| GET | `/api/push/conversations` | Global list of push-enabled conversation state keys |
| POST | `/api/push/conversations/toggle` | Add or remove a conversation from the global push list |
| WS | `/api/ws` | Real-time updates |
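As a quick orientation to the endpoints above, here is a hedged `requests` sketch hitting a few of them. The base URL assumes the default port 8000 from the README, and the toggle payload shape is a placeholder — the real schema can be dumped with `scripts/build/dump_api_specs.py`.

```python
import requests

BASE = "http://localhost:8000/api"  # assumed default port; add basic auth if configured

stats = requests.get(f"{BASE}/statistics", timeout=10).json()
vapid = requests.get(f"{BASE}/push/vapid-public-key", timeout=10).json()

# Payload field name is illustrative; check the OpenAPI spec for the real schema.
resp = requests.post(
    f"{BASE}/push/conversations/toggle",
    json={"conversation_key": "example-conversation-key"},
    timeout=10,
)
resp.raise_for_status()
print(stats, vapid, resp.json())
```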
## Key Concepts
@@ -432,6 +447,17 @@ All external integrations are managed through the fanout bus (`app/fanout/`). Ea
Community MQTT forwards raw packets only. Its derived `path` field, when present on direct packets, is a comma-separated list of hop identifiers as reported by the packet format. Token width therefore varies with the packet's path hash mode; it is intentionally not a flat per-byte rendering.
### Web Push Notifications
Web Push is a standalone subsystem (`app/push/`) that sends browser push notifications for incoming messages even when the browser tab is closed. It is **not** a fanout module — it manages its own per-browser subscriptions, while the set of push-enabled conversations is stored once per server instance.
- **Requires HTTPS** (self-signed certificates work) and outbound internet from the server to reach browser push services (Google FCM, Mozilla autopush).
- VAPID key pair is auto-generated on first startup and stored in `app_settings`.
- Each browser subscription is stored in `push_subscriptions` with device identity and delivery state. The set of push-enabled conversations is stored globally in `app_settings.push_conversations`, so all subscribed browsers receive the same configured rooms/DMs.
- `broadcast_event()` in `websocket.py` dispatches to `push_manager.dispatch_message()` alongside fanout for `message` events.
- Expired subscriptions (HTTP 404/410 from push service) are auto-deleted.
- Frontend: service worker (`sw.js`) handles push display and notification click navigation. The `BellRing` icon in `ChatHeader` toggles per-conversation push. Device management lives in Settings > Local.
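For a sense of what one delivery involves under the hood, here is a minimal `pywebpush` sketch (the library this subsystem uses; see its 2.3.0 license entry later in this diff). The subscription payload, key material, and claims are placeholders, and this is not the project's actual dispatch code — that path is `push_manager.dispatch_message()`.

```python
import json
from pywebpush import webpush, WebPushException

# Placeholder subscription as a browser's PushManager would return it; the app
# persists equivalents in the push_subscriptions table.
subscription_info = {
    "endpoint": "https://fcm.googleapis.com/fcm/send/EXAMPLE",
    "keys": {"p256dh": "<client public key>", "auth": "<client auth secret>"},
}

try:
    webpush(
        subscription_info=subscription_info,
        data=json.dumps({"title": "New DM", "body": "hello from the mesh"}),
        vapid_private_key="<VAPID private key, auto-generated into app_settings>",
        vapid_claims={"sub": "mailto:admin@example.com"},
    )
except WebPushException as exc:
    # A 404/410 from the push service means the subscription is gone; the app
    # auto-deletes such rows.
    print("push failed:", exc)
```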
### Server-Side Decryption
The server can decrypt packets using stored keys, both in real-time and for historical packets.
@@ -477,8 +503,9 @@ mc.subscribe(EventType.ACK, handler)
| `MESHCORE_BASIC_AUTH_PASSWORD` | *(none)* | Optional app-wide HTTP Basic auth password; must be set together with `MESHCORE_BASIC_AUTH_USERNAME` |
| `MESHCORE_ENABLE_MESSAGE_POLL_FALLBACK` | `false` | Switch the always-on radio audit task from hourly checks to aggressive 10-second polling; the audit checks both missed message drift and channel-slot cache drift |
| `MESHCORE_FORCE_CHANNEL_SLOT_RECONFIGURE` | `false` | Disable channel-slot reuse and force `set_channel(...)` before every channel send, even on serial/BLE |
| `MESHCORE_LOAD_WITH_AUTOEVICT` | `false` | Enable autoevict contact loading: sets `AUTO_ADD_OVERWRITE_OLDEST` on the radio so adds never fail with TABLE_FULL, skips the removal phase during reconcile, and allows blind loading when `get_contacts` fails. Loaded contacts are not radio-favorited and may be evicted by new adverts when the table is full. |
**Note:** Runtime app settings are stored in the database (`app_settings` table), not environment variables. These include `max_radio_contacts`, `auto_decrypt_dm_on_advert`, `advert_interval`, `last_advert_time`, `favorites`, `last_message_times`, `flood_scope`, `blocked_keys`, `blocked_names`, and `discovery_blocked_types`. `max_radio_contacts` is the configured radio contact capacity baseline used by background maintenance: favorites reload first, non-favorite fill targets about 80% of that value, and full offload/reload triggers around 95% occupancy. They are configured via `GET/PATCH /api/settings`. MQTT, bot, webhook, Apprise, and SQS configs are stored in the `fanout_configs` table, managed via `/api/fanout`. If the radio's channel slots appear unstable or another client is mutating them underneath this app, operators can force the old always-reconfigure send path with `MESHCORE_FORCE_CHANNEL_SLOT_RECONFIGURE=true`.
**Note:** Runtime app settings are stored in the database (`app_settings` table), not environment variables. These include `max_radio_contacts`, `auto_decrypt_dm_on_advert`, `advert_interval`, `last_advert_time`, `last_message_times`, `flood_scope`, `blocked_keys`, `blocked_names`, `discovery_blocked_types`, `tracked_telemetry_repeaters`, `auto_resend_channel`, and `telemetry_interval_hours`. `max_radio_contacts` is the configured radio contact capacity baseline used by background maintenance: favorites reload first, non-favorite fill targets about 80% of that value, and full offload/reload triggers around 95% occupancy. They are configured via `GET/PATCH /api/settings`. MQTT, bot, webhook, Apprise, and SQS configs are stored in the `fanout_configs` table, managed via `/api/fanout`. If the radio's channel slots appear unstable or another client is mutating them underneath this app, operators can force the old always-reconfigure send path with `MESHCORE_FORCE_CHANNEL_SLOT_RECONFIGURE=true`.
Byte-perfect channel retries are user-triggered via `POST /api/messages/channel/{message_id}/resend` and are allowed for 30 seconds after the original send.
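To illustrate the 30-second byte-perfect retry window just described, a small hedged sketch; the message ID and host are placeholders, and the resend endpoint is the one named above.

```python
import requests

BASE = "http://localhost:8000/api"  # assumed default port
message_id = 123                    # placeholder: a channel message sent < 30 s ago

# Byte-perfect channel retry; calls made after the 30-second window are
# expected to be rejected by the server.
resp = requests.post(f"{BASE}/messages/channel/{message_id}/resend", timeout=10)
resp.raise_for_status()
```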
+92 -7
@@ -1,3 +1,92 @@
## [3.12.1] - 2026-04-19
* Feature: Auto-evict/circular-buffer contact load mode (solves potential T-Beam issues)
* Feature: Channel mute
* Misc: HA Documentation improvements
* Misc: Bump deps & update tests
* Misc: Improve warnings around web push in untrusted contexts
## [3.12.0] - 2026-04-17
* Feature: Web Push -- get your mesh notifications on a locked phone or when your browser is closed!
* Feature: Add link to node from map display
* Feature: Map layers
* Feature: Better contact/channel selection for fanout
* Feature: Add glittering status dot option
* Feature: Add airtime math and average packets/min for repeater info displays
* Feature: Offer multiple timing intervals for repeater telemetry autofetch
* Feature: Add ability to follow OS light/dark mode
* Bugfix: Clear 100% of messages from radio in fallback mode; don't stop at 100
* Bugfix: Don't stop DM retry just because the radio did not provide a radio ack on the wire
* Bugfix: Don't strip outgoing colons on DMs or room servers
* Bugfix: Patch statusbar overlap on PWA
* Bugfix: Patch default map upload URL
* Bugfix: Show learned path in routing override
* Bugfix: Centralize on "only means RF heard" for first_seen/last_seen
* Misc: Reduce frequency of time set failure chirping
* Misc: QoL improvements for Home Assistant integration
* Misc: Overhaul settings styling
* Misc: Documentation + tests updates
## [3.11.3] - 2026-04-12
* Bugfix: Add icons and screenshots for webmanifest
* Bugfix: Use incoming DMs, not just outgoing, for recency ranking for preferential radio contact load
## [3.11.2] - 2026-04-12
* Feature: Unread DMs are always at the top of the DM list no matter what
* Bugfix: Webmanifest needs withCredentials
## [3.11.1] - 2026-04-12
* Feature: Home Assistant MQTT fanout
* Feature: Add dummy service worker to enable PWA
* Bugfix: DB connection plurality issues
* Misc: Migration improvements
* Misc: Search keys from beginning
## [3.11.0] - 2026-04-10
* Feature: Radio health and contact data accessible on fanout bus
* Feature: Local node radio stats (voltage etc.) on WS health bus
* Feature: Battery indicator optional in status bar (configured in Local Settings)
* Bugfix: Fix same-second same-message collision in room servers
* Bugfix: Don't consume DM resend attempt if the radio was just busy
* Bugfix: Assume that a same-second same-message same-first-byte-key DM is more likely an echo than them sending the same message
* Bugfix: Multi-retry for flood scope restoration
* Misc: Testing & documentation improvements
## [3.10.0] - 2026-04-10
* Feature: Add Arch AUR package
* Feature: 72hr packet density view in statistics
* Feature: Add warnings for event loop selection for MQTT on Windows startup
* Bugfix: Bump Apprise to 1.9.9 to fix Matrix bug
* Misc: More memory-conscious on recent contact fetch
* Misc: Fix statistics pane e2e test
## [3.9.0] - 2026-04-06
* Feature: Add hop counts to hop-width selection options
* Feature: Show cached repeater telemetry inline in settings
* Feature: Retain recent traces and make them click-to-re-run
* Feature: Autofocus channel/DM textbox on desktop
* Feature: Favorites on the radio are now imported as favorites
* Bugfix: Be clearer on issue identification for missing HTTPS context in channel finder
* Bugfix: Don't use sender timestamp for message sequence display
* Bugfix: Function on subdomains happily
* Misc: Be gentler, room s/cracker/finder/
* Misc: Test and frontend correctness & test fixes
* Misc: Don't repeat clock sync failure logs
* Misc: Make warning in readme clearer about taking over the radio
* Misc: Improve readme phrasings
* Misc: Better y-axis selection for battery read-out
* Misc: Provide clearer warning on docker setup without docker installed
* Misc: Default visualizer stale pruning to on/5 minutes
* Misc: Migrate favorites to better storage pattern
* Misc: Provide dumper script for API + WS interfaces in prep for HA integration
## [3.8.0] - 2026-04-03
* Feature: Per-channel hop width override
@@ -115,7 +204,7 @@
* Bugfix: Fix Apprise duplicate names
* Bugfix: Be better about identity resolution in the stats pane
* Misc: Docs, test, and performance enhancements
* Misc: Don't prompt "Are you sure" when leaving an unedited interation
* Misc: Don't prompt "Are you sure" when leaving an unedited integration
* Misc: Log node time on startup
* Misc: Improve community MQTT error bubble-up
* Misc: Unread DMs always have a red unread counter
@@ -142,7 +231,7 @@
## [3.3.0] - 2026-03-13
* Feature: Use dashed lines to show collapsed ambiguous router results
* Feature: Jump to unred
* Feature: Jump to unread
* Feature: Local channel management to prevent need to reload channel every time
* Feature: Debug endpoint
* Feature: Force-singleton channel management
@@ -205,7 +294,7 @@
* Feature: Massive codebase refactor and overhaul
* Bugfix: Fix packet parsing for trace packets
* Bugfix: Refetch channels on reconnect
* Bugfix: Load All on repeater pane on mobile doesn't etend into lower text
* Bugfix: Load All on repeater pane on mobile doesn't extend into lower text
* Bugfix: Timestamps in logs
* Bugfix: Correct wrong clock sync command
* Misc: Improve bot error bubble up
@@ -222,10 +311,6 @@
* Bugfix: Don't obscure new integration dropdown on session boundary
## [2.7.8] - 2026-03-08
## [2.7.8] - 2026-03-08
* Bugfix: Improve frontend asset resolution and fixup the build/push script
+98 -4
@@ -70,17 +70,111 @@ npm run test:run
npm run build
```
## Quality + Publishing Scripts
<details>
<summary>scripts/quality/</summary>
| Script | Purpose |
|--------|---------|
| `all_quality.sh` | Repo-standard gate: autofix (ruff, eslint, prettier), then pyright, pytest, vitest, and frontend build. Run before finishing any code change. |
| `extended_quality.sh` | `all_quality.sh` plus e2e tests and Docker build matrix. Used for release validation. |
| `e2e.sh` | Thin wrapper that runs Playwright e2e tests from `tests/e2e/`. |
| `docker_ci.sh` | Builds the Docker image and runs a smoke test against it. |
| `test_aur_package.sh` | Builds the AUR package in an Arch container, then installs and boots it in a second container with port 8000 exposed (hang finish). |
| `run_aur_with_radio.sh` | Like `test_aur_package.sh` but passes through the host serial device for testing with a real radio (hang finish). |
</details>
<details>
<summary>scripts/build/</summary>
| Script | Purpose |
|--------|---------|
| `publish.sh` | Full release ceremony: quality gate, version bump, changelog, frontend build, Docker multi-arch push, GitHub release. |
| `release_common.sh` | Shared shell helpers (version validation, formatting) sourced by other build scripts. |
| `package_release_artifact.sh` | Builds the prebuilt-frontend release zip attached to GitHub releases. |
| `push_docker_multiarch.sh` | Builds and pushes multi-arch Docker images (amd64 + arm64). |
| `create_github_release.sh` | Creates a GitHub release with changelog notes and the release artifact. |
| `extract_release_notes.sh` | Extracts the latest version's notes from `CHANGELOG.md` for the release body. |
| `collect_licenses.sh` | Gathers third-party license attributions into `LICENSES.md`. |
| `print_frontend_licenses.cjs` | Helper that extracts frontend npm dependency licenses. |
| `dump_api_specs.py` | Dumps the OpenAPI spec from the running backend (developer utility). |
</details>
## E2E Testing
E2E coverage exists, but it is intentionally not part of the normal development path.
E2E tests exercise the full stack (backend + frontend + real radio hardware) via Playwright.
These tests are only guaranteed to run correctly in a narrow subset of environments; they require a busy mesh with messages arriving constantly, an available auto-detectable radio, and a contact in the test database (which you can provide in `tests/e2e/.tmp/e2e-test.db` after an initial run). E2E tests are generally not necessary to run for normal development work.
> [!WARNING]
> E2E tests are **not part of the normal development path** — most contributors will never need to run them. They exist to catch integration issues that unit tests can't and generally only need to be run by maintainers.
### Hardware requirements
- A MeshCore radio connected via serial (auto-detected, or set `MESHCORE_SERIAL_PORT`)
- The radio must be powered on and past its startup sequence before tests begin
### Running
```bash
cd tests/e2e
npm install
npx playwright test # headless
npx playwright test --headed # you can probably guess
npx playwright install chromium # first time only
npx playwright test # headless
npx playwright test --headed # watch it run
```
The test harness starts its own uvicorn instance on port 8001 with a fresh temporary database. Your development server (port 8000) is unaffected.
### Test tiers
**Most tests (22 of 28) are fully self-contained.** They seed their own data via API calls or direct DB writes and need only a connected radio. These cover messaging, pagination, search, favorites, settings, fanout integrations, historical decryption, and all UI-only views.
**Mesh-traffic tests (tagged `@mesh-traffic`)** wait up to 3 minutes for an incoming message from another node on the network. If no traffic arrives, they fail with an advisory that the failure may be RF conditions, not a bug. These are: `incoming-message` and `packet-feed` (second test only).
**The partner-radio DM ACK test (tagged `@partner-radio`)** validates direct-route learning by sending a DM and waiting for an ACK. It requires a second radio in range that has your test radio in its contacts. Configure the partner node's public key and name via `E2E_PARTNER_RADIO_PUBKEY` and `E2E_PARTNER_RADIO_NAME`.
### Making mesh-traffic tests reliable: the echo bot
The most practical way to guarantee incoming traffic is to run an **echo bot on a second radio** monitoring a known channel. When the test suite starts a `@mesh-traffic` test, it sends a trigger message to that channel. If a bot on another radio is listening, it replies — generating the incoming RF packet the test needs within seconds instead of waiting for organic mesh traffic.
The test suite sends `!echo please give incoming message` to the echo channel (default `#flightless`) at the start of each `@mesh-traffic` test. The trigger message is configurable via `E2E_ECHO_TRIGGER_MESSAGE`.
Setup:
1. Set up a second MeshCore radio within RF range of your test radio
2. Run a RemoteTerm instance on the second radio
3. Configure a bot on the second radio that monitors the echo channel and replies when it sees the trigger. Example bot code:
```python
def bot(sender_name, sender_key, message_text, is_dm,
channel_key, channel_name, sender_timestamp, path):
if "!echo" in message_text.lower():
return f"[ECHO] {message_text}"
return None
```
4. The test suite calls `nudgeEchoBot()` automatically — no manual intervention needed
Without the echo bot, `@mesh-traffic` tests rely on organic traffic from other nodes. In a quiet RF environment they will time out.
### Environment variables
All E2E environment configuration is centralized in `tests/e2e/helpers/env.ts` with defaults that work for the maintainer's test rig. Override via environment variables:
| Variable | Default | Purpose |
|----------|---------|---------|
| `MESHCORE_SERIAL_PORT` | auto-detect | Serial port for the test radio |
| `E2E_ECHO_CHANNEL` | `#flightless` | Channel the echo bot monitors for traffic generation |
| `E2E_ECHO_TRIGGER_MESSAGE` | `!echo please give incoming message` | Message sent to nudge the echo bot |
| `E2E_PARTNER_RADIO_PUBKEY` | *(maintainer's test node)* | 64-char hex public key of a node that will ACK DMs from your radio |
| `E2E_PARTNER_RADIO_NAME` | *(maintainer's test node)* | Display name of that node (used in UI assertions) |
Example for a contributor with their own two-radio setup:
```bash
E2E_ECHO_CHANNEL="#mytest" \
E2E_PARTNER_RADIO_PUBKEY="abcd1234...full64charhexkey..." \
E2E_PARTNER_RADIO_NAME="MyTestNode" \
npx playwright test
```
## Pull Request Expectations
+2 -2
@@ -13,7 +13,7 @@ RUN VITE_COMMIT_HASH=${COMMIT_HASH} npm run build
# Stage 2: Python runtime
FROM python:3.12-slim
FROM python:3.13-slim
ARG COMMIT_HASH=unknown
@@ -22,7 +22,7 @@ WORKDIR /app
ENV COMMIT_HASH=${COMMIT_HASH}
# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
COPY --from=ghcr.io/astral-sh/uv:0.6 /uv /usr/local/bin/uv
# Copy dependency files first for layer caching
COPY pyproject.toml uv.lock ./
+416 -2
@@ -56,7 +56,7 @@ SOFTWARE.
</details>
### apprise (1.9.7) — BSD-2-Clause
### apprise (1.9.9) — BSD-2-Clause
<details>
<summary>Full license text</summary>
@@ -64,7 +64,7 @@ SOFTWARE.
```
BSD 2-Clause License
Copyright (c) 2025, Chris Caron <lead2gold@gmail.com>
Copyright (c) 2026, Chris Caron <lead2gold@gmail.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -647,6 +647,389 @@ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
</details>
### pywebpush (2.3.0) — MPL-2.0
<details>
<summary>Full license text</summary>
```
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
```
</details>
### uvicorn (0.40.0) — BSD-3-Clause
<details>
@@ -1188,6 +1571,37 @@ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLI
</details>
### cmdk (1.1.1) — MIT
<details>
<summary>Full license text</summary>
```
MIT License
Copyright (c) 2022 Paco Coursey
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
</details>
### d3-force (3.0.0) — ISC
<details>
+40 -3
View File
@@ -23,7 +23,7 @@ For advanced setup and troubleshooting see [README_ADVANCED.md](README_ADVANCED.
## Requirements
- Python 3.10+
- Python 3.11+
- Node.js LTS or current (20, 22, 24, 25) if you're not using a prebuilt release
- [UV](https://astral.sh/uv) package manager: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- MeshCore radio connected via USB serial, TCP, or BLE
@@ -83,7 +83,7 @@ Access the app at http://localhost:8000.
Source checkouts expect a normal frontend build in `frontend/dist`.
> [!TIP]
> Running on lightweight hardware, or just do not want to build the frontend locally? From a cloned checkout, run `python3 scripts/setup/fetch_prebuilt_frontend.py` to fetch and unpack a prebuilt frontend into `frontend/prebuilt`, then start the app normally with `uv run uvicorn app.main:app --host 0.0.0.0 --port 8000`.
> Running on lightweight hardware, or just don't want to build the frontend locally? From a cloned checkout, run `python3 scripts/setup/fetch_prebuilt_frontend.py` to fetch and unpack a prebuilt frontend into `frontend/prebuilt`, then start the app normally with `uv run uvicorn app.main:app --host 0.0.0.0 --port 8000`.
> [!NOTE]
> On Linux, you can also install RemoteTerm as a persistent `systemd` service that starts on boot and restarts automatically on failure:
@@ -116,7 +116,9 @@ cp docker-compose.example.yml docker-compose.yml
bash scripts/setup/install_docker.sh
```
Your local `docker-compose.yml` is gitignored so future pulls do not overwrite your Docker settings.
> The interactive generator enables a self-signed (snakeoil) TLS certificate by default. If you accept the default, the app will be served over HTTPS and the generated compose file will include certificate mounts and an SSL command override. Decline if you prefer plain HTTP or plan to terminate TLS externally.
Your local `docker-compose.yml` is gitignored so future pulls don't overwrite your Docker settings.
The guided Docker flow can collect BLE settings, but BLE access from Docker still needs manual compose customization such as Bluetooth passthrough and possibly privileged mode or host networking. If you want the simpler path for BLE, use the regular Python launch flow instead.
@@ -135,6 +137,8 @@ sudo docker compose pull
sudo docker compose up -d
```
> If you switched to a local build (`build: .` instead of `image:`), use `sudo docker compose up -d --build` instead — `pull` only fetches remote images.
The example file and setup script default to the published Docker Hub image. To build locally from your checkout instead, replace:
```yaml
@@ -161,6 +165,29 @@ To stop:
sudo docker compose down
```
## Install Path 3: Arch Linux (AUR)
A [`remoteterm-meshcore`](https://aur.archlinux.org/packages/remoteterm-meshcore) package is available in the AUR. Install it with an AUR helper or build it manually:
```bash
# with an AUR helper
yay -S remoteterm-meshcore
# or manually
git clone https://aur.archlinux.org/remoteterm-meshcore.git
cd remoteterm-meshcore
makepkg -si
```
Configure your radio connection, then start the service:
```bash
sudo vi /etc/remoteterm-meshcore/remoteterm.env
sudo systemctl enable --now remoteterm-meshcore
```
Access the app at http://localhost:8000.
## Standard Environment Variables
Only one transport may be active at a time. If multiple are set, the server will refuse to start.
@@ -199,11 +226,21 @@ $env:MESHCORE_SERIAL_PORT="COM8" # or your COM port
uv run uvicorn app.main:app --host 0.0.0.0 --port 8000
```
> [!WARNING]
> **Windows + MQTT fanout:** Python's default Windows event loop (ProactorEventLoop) is not compatible with the MQTT libraries used by RemoteTerm. If you configure any MQTT integration, add `--loop none` to your uvicorn command:
>
> ```powershell
> uv run uvicorn app.main:app --host 0.0.0.0 --port 8000 --loop none
> ```
>
> If you forget, the app will start normally but MQTT connections will fail and you'll see a toast in the UI with this same guidance.
If you enable Basic Auth, protect the app with HTTPS. HTTP Basic credentials are not safe on plain HTTP. Also note that the app's permissive CORS policy is a deliberate trusted-network tradeoff, so cross-origin browser JavaScript is not a reliable way to use that Basic Auth gate.
## Where To Go Next
- Advanced setup, troubleshooting, HTTPS, systemd, remediation variables, and debug logging: [README_ADVANCED.md](README_ADVANCED.md)
- Home Assistant-specific guidance and entity/sensor naming schemes: [README_HA.md](README_HA.md)
- Contributing, tests, linting, E2E notes, and important AGENTS files: [CONTRIBUTING.md](CONTRIBUTING.md)
- Live API docs after the backend is running: http://localhost:8000/docs
+33
View File
@@ -8,6 +8,7 @@ These are intended for diagnosing or working around radios that behave oddly.
|----------|---------|-------------|
| `MESHCORE_ENABLE_MESSAGE_POLL_FALLBACK` | false | Run aggressive 10-second `get_msg()` fallback polling to check for messages |
| `MESHCORE_FORCE_CHANNEL_SLOT_RECONFIGURE` | false | Disable channel-slot reuse and force `set_channel(...)` before every channel send |
| `MESHCORE_LOAD_WITH_AUTOEVICT` | false | Enable autoevict mode for contact loading (see [Contact Loading Issues](#contact-loading-issues) below) |
| `__CLOWNTOWN_DO_CLOCK_WRAPAROUND` | false | Highly experimental: if the radio clock is ahead of system time, try forcing the clock to `0xFFFFFFFF`, wait for uint32 wraparound, and then retry normal time sync before falling back to reboot |
By default the app relies on radio events plus MeshCore auto-fetch for incoming messages, and also runs a low-frequency hourly audit poll. That audit checks both:
@@ -19,6 +20,38 @@ If the audit finds a mismatch, you'll see an error in the application UI and you
`__CLOWNTOWN_DO_CLOCK_WRAPAROUND=true` is a last-resort clock remediation for nodes whose RTC is stuck in the future and where rescue-mode time setting or GPS-based time is not available. It intentionally relies on the clock rolling past the 32-bit epoch boundary, which is board-specific behavior and may not be safe or effective on all MeshCore targets. Treat it as highly experimental.
## Contact Loading Issues
RemoteTerm loads favorite and recently active contacts onto the radio so that the radio can automatically acknowledge incoming DMs on your behalf. To do this, it first enumerates the radio's existing contact table, then reconciles it with the desired working set.
On BLE connections with many contacts (or radios with large contact tables from organic advertisements), the initial contact enumeration may time out. If this happens, the app will still attempt to load your favorites and recent contacts onto the radio on a best-effort basis, but without a full snapshot of what's already on the radio, some adds may be redundant or fail.
If the radio's contact table is already full (from contacts added by advertisements or another client), the app may not be able to load all desired contacts. In this case you'll see a warning that auto-DM acking may not work for all contacts. To resolve this:
- **Clear the radio's contact table** using another MeshCore client (e.g., the official companion app), then restart RemoteTerm
- **Lower the contact fill target** in Radio Settings to reduce how many contacts the app tries to load
- **Enable autoevict mode** (see below) to let the radio automatically make room
- If you don't need auto-DM acking, you can safely ignore these warnings — **sending and receiving messages is never affected**
### Autoevict Mode
Setting `MESHCORE_LOAD_WITH_AUTOEVICT=true` enables an alternative contact loading strategy that avoids TABLE_FULL errors entirely. On connect, the app enables the radio's `AUTO_ADD_OVERWRITE_OLDEST` preference, which makes the radio automatically evict the oldest non-favorite contact when the contact table is full. This means:
- Contact adds never fail — the radio always makes room by evicting stale contacts
- The app can load contacts even when it can't enumerate the radio's existing contact table (e.g., on slow BLE connections)
- No contact removal step is needed during reconciliation
**Trade-off:** Contacts loaded by the app are not marked as radio-side favorites, so they are eviction candidates if the radio receives a new advertisement while full. In practice, freshly-loaded contacts have a recent `lastmod` timestamp and will be among the last to be evicted. If you disconnect the radio from RemoteTerm and use it standalone, your contacts will not be protected from eviction by newer advertisements.
## Sub-Path Reverse Proxy
RemoteTerm works behind a reverse proxy that serves it under a sub-path (e.g. `/meshcore/` or Home Assistant ingress). All frontend asset and API paths are relative, so they resolve correctly under any prefix.
**Requirements:**
- The proxy must ensure the sub-path URL has a **trailing slash**. If a user visits `/meshcore` (no slash), relative paths break. Most proxies handle this automatically; for Nginx, a `location /meshcore/ { ... }` block (note the trailing slash) does the right thing.
- For correct PWA install behavior, the proxy should forward `X-Forwarded-Prefix` (set to the sub-path, e.g. `/meshcore`) so the web manifest generates correct `start_url` and `scope` values. `X-Forwarded-Proto` and `X-Forwarded-Host` are also respected for origin resolution.
## HTTPS
WebGPU channel-finding requires a secure context when you are not on `localhost`.
+526
View File
@@ -0,0 +1,526 @@
# Home Assistant Integration
RemoteTerm can publish mesh network data to Home Assistant via MQTT Discovery. Devices and entities appear automatically in HA -- no custom component or HACS install needed.
## Prerequisites
- Home Assistant with the [MQTT integration](https://www.home-assistant.io/integrations/mqtt/) configured
- An MQTT broker (e.g. Mosquitto) accessible to both HA and RemoteTerm
- RemoteTerm running and connected to a radio
## Setup
1. In RemoteTerm, go to **Settings > Integrations > Add > Home Assistant MQTT Discovery**
2. Enter your MQTT broker host and port (same broker HA is connected to)
3. Optionally enter broker username/password and TLS settings
4. Select contacts for GPS tracking and repeaters for telemetry (see below)
5. Configure which messages should fire events (scope selector at the bottom)
6. Save and enable
Devices will appear in HA under **Settings > Devices & Services > MQTT** within a few seconds.
## How MeshCore IDs Map Into Home Assistant
RemoteTerm uses each node's public key to derive a stable short identifier for MQTT topics:
- Full public key: `ae92577bae6c4f1d...`
- Node ID: `ae92577bae6c` (the first 12 hex characters, lowercased)
- Example MQTT topic: `meshcore/ae92577bae6c/gps`
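As a quick illustration (a sketch, not code from RemoteTerm itself; the helper name is hypothetical), the node ID is just the lowercased first 12 hex characters of the public key:
```python
# Illustrative only; the helper name is hypothetical, not RemoteTerm's API.
def node_id_from_public_key(public_key_hex: str) -> str:
    """Return the 12-character node ID used in MQTT topics."""
    return public_key_hex.strip().lower()[:12]

node_id = node_id_from_public_key("ae92577bae6c4f1d...")
print(f"meshcore/{node_id}/gps")  # -> meshcore/ae92577bae6c/gps
```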
When this README shows `<node_id>`, it always means that 12-character value. Node IDs appear in:
- MQTT discovery topics under `homeassistant/...`
- Runtime MQTT state topics under your configured prefix, usually `meshcore/...`
**Entity IDs** are different — HA auto-generates them from the device name and entity name, not from the node ID. For example, a radio named "MyRadio" produces entities like `binary_sensor.myradio_connected` and `event.myradio_messages`. A contact named "Alice" produces `device_tracker.alice`. You can find your actual entity IDs in **Settings > Devices & Services > MQTT** in HA, and you can rename them in HA's UI without affecting the integration.
You can also see the MQTT topic IDs in RemoteTerm's Home Assistant integration UI:
- `What gets created in Home Assistant`
- `Published topic summary`
## What Gets Created
### Local Radio Device
Always created. Updates every 60 seconds.
| Entity | Type | Description |
|--------|------|-------------|
| `binary_sensor.<radio_name>_connected` | Connectivity | Radio online/offline |
| `sensor.<radio_name>_noise_floor` | Signal strength | Radio noise floor (dBm) |
### Repeater Devices
One device per tracked repeater selected in the HA integration. Updates when telemetry is collected: on the auto-collect cycle (~8 hours by default, configurable in settings) or when you manually fetch from the repeater dashboard.
Repeaters must first be added to the auto-telemetry tracking list in RemoteTerm's Radio settings section. Only auto-tracked repeaters appear in the HA integration's repeater picker.
| Entity | Type | Unit | Description |
|--------|------|------|-------------|
| `sensor.<repeater_name>_battery_voltage` | Voltage | V | Battery level |
| `sensor.<repeater_name>_noise_floor` | Signal strength | dBm | Local noise floor |
| `sensor.<repeater_name>_last_rssi` | Signal strength | dBm | Last received signal strength |
| `sensor.<repeater_name>_last_snr` | -- | dB | Last signal-to-noise ratio |
| `sensor.<repeater_name>_packets_received` | -- | count | Total packets received |
| `sensor.<repeater_name>_packets_sent` | -- | count | Total packets sent |
| `sensor.<repeater_name>_uptime` | Duration | s | Uptime since last reboot |
If RemoteTerm already has a cached telemetry snapshot for that repeater, it republishes it on startup so HA can populate the sensors immediately instead of waiting for the next collection cycle.
### Contact Device Trackers
One `device_tracker` per tracked contact. Updates passively whenever RemoteTerm hears an advertisement with GPS coordinates from that contact. No radio commands are sent -- it piggybacks on normal mesh traffic.
| Entity | Description |
|--------|-------------|
| `device_tracker.<contact_name>` | GPS position (latitude/longitude) |
### Message Event Entity
A single radio-scoped event entity, `event.<radio_name>_messages`, fires for each message matching your configured scope. Each event carries these attributes:
| Attribute | Example | Description |
|-----------|---------|-------------|
| `event_type` | `message_received` | Always `message_received` |
| `sender_name` | `Alice` | Display name of the sender |
| `sender_key` | `aabbccdd...` | Sender's public key |
| `text` | `hello` | Message body |
| `message_type` | `PRIV` or `CHAN` | Direct message or channel |
| `channel_name` | `#general` | Channel name (null for DMs) |
| `conversation_key` | `aabbccdd...` | Contact key (DM) or channel key |
| `outgoing` | `false` | Whether you sent this message |
## Entity Naming
HA auto-generates entity IDs by slugifying the device name and entity name. For a radio named "My Radio", entities look like `binary_sensor.my_radio_connected` and `event.my_radio_messages`. For a repeater named "Hilltop", `sensor.hilltop_battery_voltage`. For a contact named "Alice", `device_tracker.alice`. You can rename entities in HA's UI without affecting the integration.
MQTT topic paths use the 12-character node ID (first 12 hex characters of the public key). For example:
- Local radio health: `meshcore/<radio_node_id>/health`
- Repeater telemetry: `meshcore/<repeater_node_id>/telemetry`
- Contact GPS: `meshcore/<contact_node_id>/gps`
- Message events: `meshcore/<radio_node_id>/events/message`
## What Appears When
- Always created: the local radio device and its entities
- Created when selected in the HA integration: tracked repeater devices and tracked contact device trackers
- Populated only after data exists: contact GPS trackers need an advert with GPS; repeater sensors need telemetry, although cached repeater telemetry is replayed on startup when available
- Message event entity: always created once the HA integration is enabled for a connected radio
## Common Automations
### Low repeater battery alert
Notify when a tracked repeater's battery drops below a threshold.
**GUI:** Settings > Automations > Create > Numeric state trigger on `sensor.<repeater_name>_battery_voltage`, below `3.8`, action: notification.
**YAML:**
```yaml
automation:
- alias: "Repeater battery low"
trigger:
- platform: numeric_state
entity_id: sensor.hilltop_battery_voltage
below: 3.8
action:
- service: notify.mobile_app_your_phone
data:
title: "Repeater Battery Low"
message: >-
{{ state_attr('sensor.hilltop_battery_voltage', 'friendly_name') }}
is at {{ states('sensor.hilltop_battery_voltage') }}V
```
### Radio offline alert
Notify if the radio has been disconnected for more than 5 minutes.
**GUI:** Settings > Automations > Create > State trigger on `binary_sensor.<radio_name>_connected`, to `off`, for `00:05:00`, action: notification.
**YAML:**
```yaml
automation:
- alias: "Radio offline"
trigger:
- platform: state
entity_id: binary_sensor.myradio_connected
to: "off"
for: "00:05:00"
action:
- service: notify.mobile_app_your_phone
data:
title: "MeshCore Radio Offline"
message: "Radio has been disconnected for 5 minutes"
```
### Alert on any message from a specific room
Trigger when a message arrives in a specific channel. Two approaches:
#### Option A: Scope filtering (fully GUI, no template)
If you only care about one room, set the HA integration's message scope to "Only listed channels" and select that room. Then every event that fires comes from that room.
**GUI:** Settings > Automations > Create > State trigger on `event.<radio_name>_messages`, action: notification.
**YAML:**
```yaml
automation:
- alias: "Emergency channel alert"
trigger:
- platform: state
entity_id: event.myradio_messages
action:
- service: notify.mobile_app_your_phone
data:
title: "Message in #emergency"
message: >-
{{ trigger.to_state.attributes.sender_name }}:
{{ trigger.to_state.attributes.text }}
```
#### Option B: Template condition (multiple rooms, one integration)
Keep scope as "All messages" and filter in the automation. The trigger is GUI, but the condition uses a one-line template.
**GUI:** Settings > Automations > Create > State trigger on `event.<radio_name>_messages` > Add condition > Template > enter the template below.
**YAML:**
```yaml
automation:
- alias: "Emergency channel alert"
trigger:
- platform: state
entity_id: event.myradio_messages
condition:
- condition: template
value_template: >-
{{ trigger.to_state.attributes.channel_name == '#emergency' }}
action:
- service: notify.mobile_app_your_phone
data:
title: "Message in #emergency"
message: >-
{{ trigger.to_state.attributes.sender_name }}:
{{ trigger.to_state.attributes.text }}
```
### Alert on DM from a specific contact
**YAML:**
```yaml
automation:
- alias: "DM from Alice"
trigger:
- platform: state
entity_id: event.myradio_messages
condition:
- condition: template
value_template: >-
{{ trigger.to_state.attributes.message_type == 'PRIV'
and trigger.to_state.attributes.sender_name == 'Alice' }}
action:
- service: notify.mobile_app_your_phone
data:
title: "DM from Alice"
message: "{{ trigger.to_state.attributes.text }}"
```
### Alert on messages containing a keyword
**YAML:**
```yaml
automation:
- alias: "Keyword alert"
trigger:
- platform: state
entity_id: event.myradio_messages
condition:
- condition: template
value_template: >-
{{ 'emergency' in trigger.to_state.attributes.text | lower }}
action:
- service: notify.mobile_app_your_phone
data:
title: "Emergency keyword detected"
message: >-
{{ trigger.to_state.attributes.sender_name }} in
{{ trigger.to_state.attributes.channel_name or 'DM' }}:
{{ trigger.to_state.attributes.text }}
```
### Track a contact on the HA map
No automation needed. Once a contact is selected for GPS tracking, their `device_tracker` entity automatically appears on the HA map. Go to **Settings > Dashboards > Map** (or add a Map card to any dashboard) and the tracked contact will show up when they advertise their GPS position.
### Dashboard card showing repeater battery
Add a sensor card to any dashboard:
```yaml
type: sensor
entity: sensor.hilltop_battery_voltage
name: "Hilltop Repeater Battery"
```
Or an entities card for multiple repeaters:
```yaml
type: entities
title: "Repeater Status"
entities:
- entity: sensor.hilltop_battery_voltage
name: "Hilltop"
- entity: sensor.valley_battery_voltage
name: "Valley"
- entity: sensor.ridge_battery_voltage
name: "Ridge"
```
### Full monitoring dashboard with message feed
This example creates a dashboard with repeater vitals, a live message feed, and a network activity graph. Replace the three slug values below to match your setup — find your entity IDs in **Settings > Devices & Services > MQTT**.
```yaml
# ┌─────────────────────────────────────────────────────┐
# │ Replace these three values to match your entities │
# │ │
# │ radio_slug: the prefix on your radio sensors │
# │ e.g. sensor.MYRADIO_noise_floor │
# │ repeater_slug: the prefix on your repeater sensors │
# │ e.g. sensor.HILLTOP_battery_voltage │
# │ message_event: your message event entity ID │
# │ e.g. event.MYRADIO_messages │
# └─────────────────────────────────────────────────────┘
#
# radio_slug: myradio
# repeater_slug: hilltop
# message_event: event.myradio_messages
```
**Step 1 — Dashboard YAML** (Settings > Dashboards > Add > edit in YAML):
```yaml
views:
- title: MeshCore
icon: mdi:radio-tower
cards:
- type: entities
title: Hilltop — Current # ← repeater name
state_color: true
entities:
- entity: sensor.hilltop_battery_voltage # ← repeater_slug
name: Battery
- entity: sensor.hilltop_noise_floor # ← repeater_slug
name: Noise Floor
- entity: sensor.hilltop_last_rssi # ← repeater_slug
name: Last RSSI
- entity: sensor.hilltop_last_snr # ← repeater_slug
name: Last SNR
- entity: sensor.hilltop_uptime # ← repeater_slug
name: Uptime
- entity: sensor.hilltop_packets_received # ← repeater_slug
name: Packets Rx
- entity: sensor.hilltop_packets_sent # ← repeater_slug
name: Packets Tx
- type: statistics-graph
title: Battery Voltage
entities:
- sensor.hilltop_battery_voltage # ← repeater_slug
stat_types: [mean, min, max]
days_to_show: 7
period: hour
- type: statistics-graph
title: Noise Floor
entities:
- sensor.hilltop_noise_floor # ← repeater_slug
stat_types: [mean, min, max]
days_to_show: 7
period: hour
- type: markdown
title: Message Feed (Last 10)
content: |
{% for i in range(1, 11) %}
{% set msg = states('input_text.meshcore_msg_' ~ i) %}
{% if msg and msg not in ['unknown', '', 'unavailable'] %}
{{ msg }}
{% endif %}
{% endfor %}
{% if states('input_text.meshcore_msg_1') in ['unknown', '', 'unavailable'] %}
*No messages yet.*
{% endif %}
- type: statistics-graph
title: Overall Packets Received
entities:
- sensor.myradio_packets_received # ← radio_slug
stat_types: [change]
days_to_show: 7
period: hour
```
**Step 2 — Message feed helpers**: create 10 text helpers named `MeshCore Msg 1` through `MeshCore Msg 10` (Settings > Helpers > Add > Text). These act as a rolling buffer for the Markdown card above.
**Step 3 — Message feed automation** (Settings > Automations > Create > edit in YAML):
```yaml
alias: MeshCore Message Feed Buffer
description: Rolling buffer of recent mesh messages for dashboard display
mode: queued
max: 10
triggers:
- trigger: state
entity_id: event.myradio_messages # ← message_event
actions:
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_10
data:
value: "{{ states('input_text.meshcore_msg_9') }}"
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_9
data:
value: "{{ states('input_text.meshcore_msg_8') }}"
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_8
data:
value: "{{ states('input_text.meshcore_msg_7') }}"
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_7
data:
value: "{{ states('input_text.meshcore_msg_6') }}"
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_6
data:
value: "{{ states('input_text.meshcore_msg_5') }}"
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_5
data:
value: "{{ states('input_text.meshcore_msg_4') }}"
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_4
data:
value: "{{ states('input_text.meshcore_msg_3') }}"
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_3
data:
value: "{{ states('input_text.meshcore_msg_2') }}"
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_2
data:
value: "{{ states('input_text.meshcore_msg_1') }}"
- action: input_text.set_value
target:
entity_id: input_text.meshcore_msg_1
data:
value: >-
{{ as_timestamp(trigger.to_state.last_changed) |
timestamp_custom('%-I:%M %p') }} |
**{% if trigger.to_state.attributes.channel_name %}{{
trigger.to_state.attributes.channel_name }}{% else %}DM{% endif %}** |
{{ trigger.to_state.attributes.sender_name or 'Unknown' }}:
{{ (trigger.to_state.attributes.text or '')[:180] }}
```
## Troubleshooting
### Devices don't appear in HA
- Verify the MQTT integration is configured in HA (**Settings > Devices & Services > MQTT**) and shows "Connected"
- Verify RemoteTerm's HA integration shows "Connected" (green dot)
- Check that both HA and RemoteTerm are using the same MQTT broker
- Subscribe to discovery topics to verify messages are flowing:
```
mosquitto_sub -h <broker> -t 'homeassistant/#' -v
```
### Stale or duplicate devices
If you see unexpected devices (e.g. a generic "MeshCore Radio" alongside your named radio), clear the stale retained messages:
```
mosquitto_pub -h <broker> -t 'homeassistant/binary_sensor/meshcore_unknown/connected/config' -r -n
mosquitto_pub -h <broker> -t 'homeassistant/sensor/meshcore_unknown/noise_floor/config' -r -n
```
### Repeater sensors show "Unknown" or "Unavailable"
Repeater telemetry only updates when collected. Trigger a manual fetch by opening the repeater's dashboard in RemoteTerm and clicking "Status", or wait for the next auto-collect cycle (~8 hours).
If RemoteTerm already has cached telemetry for that repeater, it republishes the last known values on startup. If the sensors are still unknown or unavailable, it usually means no telemetry has ever been collected for that repeater yet.
### Contact device tracker shows "Unknown"
The contact's GPS position only updates when RemoteTerm hears an advertisement from that node that includes GPS coordinates. If the contact's device doesn't broadcast GPS or hasn't advertised recently, the tracker will show as unknown.
### Entity is "Unavailable"
Radio health entities have a 120-second expiry. If RemoteTerm stops sending health updates (e.g. it's shut down or loses connection to the broker), HA marks the entities as unavailable after 2 minutes. Restart RemoteTerm or check the broker connection.
## Removing the Integration
Disabling or deleting the HA integration in RemoteTerm's settings publishes empty retained messages to all discovery topics, which removes the devices and entities from HA automatically.
## Local Test Environment
For local development, RemoteTerm includes a helper that starts Mosquitto and Home Assistant with MQTT preconfigured:
```bash
./scripts/setup/start_ha_test_env.sh
```
That gives you:
- Home Assistant at `http://localhost:8123`
- Mosquitto at `localhost:1883`
- A pre-created HA MQTT integration using that broker
To watch all MQTT traffic during testing:
```bash
docker exec ha-test-mosquitto mosquitto_sub -h 127.0.0.1 -t '#' -v
```
To stop and clean up:
```bash
./scripts/setup/stop_ha_test_env.sh --clean
```
## MQTT Topics Reference
Runtime/state topics (where data is published):
| Topic | Content | Update frequency |
|-------|---------|-----------------|
| `meshcore/{node_id}/health` | `{"connected": bool, "noise_floor_dbm": int}` | Every 60s |
| `meshcore/{node_id}/telemetry` | `{"battery_volts": float, ...}` | ~8h or manual |
| `meshcore/{node_id}/gps` | `{"latitude": float, "longitude": float, ...}` | On advert |
| `meshcore/{node_id}/events/message` | `{"event_type": "message_received", ...}` | On message |
Discovery topics (entity registration, under `homeassistant/`):
| Pattern | Entity type |
|---------|------------|
| `homeassistant/binary_sensor/meshcore_<node_id>/connected/config` | Radio connectivity |
| `homeassistant/sensor/meshcore_<node_id>/noise_floor/config` | Noise floor sensor |
| `homeassistant/sensor/meshcore_<node_id>/battery_voltage/config` | Repeater battery |
| `homeassistant/sensor/meshcore_<node_id>/*/config` | Other repeater sensors |
| `homeassistant/device_tracker/meshcore_<node_id>/config` | Contact GPS tracker |
| `homeassistant/event/meshcore_<node_id>/messages/config` | Message event entity |
The `{node_id}` is always the first 12 characters of the node's public key, lowercased.
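If you prefer watching these topics programmatically rather than with `mosquitto_sub`, a minimal sketch with paho-mqtt (2.x callback API) might look like the following; the broker host and port are placeholders for your environment:
```python
# Hedged sketch: subscribe to RemoteTerm's runtime and discovery topics.
# Broker host/port are placeholders; adjust for your setup.
import json

import paho.mqtt.client as mqtt


def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe("meshcore/#")        # runtime/state topics
    client.subscribe("homeassistant/#")   # discovery topics


def on_message(client, userdata, msg):
    try:
        payload = json.loads(msg.payload)
    except ValueError:
        payload = msg.payload.decode(errors="replace")
    print(msg.topic, payload)


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_forever()
```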
+49 -14
View File
@@ -27,10 +27,10 @@ app/
├── config.py # Env-driven runtime settings
├── channel_constants.py # Public/default channel constants shared across sync/send logic
├── database.py # SQLite connection + base schema + migration runner
├── migrations.py # Schema migrations (SQLite user_version)
├── migrations/ # Schema migrations (SQLite user_version, per-version modules)
├── models.py # Pydantic request/response models and typed write contracts (for example ContactUpsert)
├── version_info.py # Unified version/build metadata resolution for debug + startup surfaces
├── repository/ # Data access layer (contacts, channels, messages, raw_packets, settings, fanout)
├── repository/ # Data access layer (contacts, channels, messages, raw_packets, settings, fanout, push_subscriptions, repeater_telemetry)
├── services/ # Shared orchestration/domain services
│ ├── messages.py # Shared message creation, dedup, ACK application
│ ├── message_send.py # Direct send, channel send, resend workflows
@@ -40,7 +40,7 @@ app/
│ ├── contact_reconciliation.py # Prefix-claim, sender-key backfill, name-history wiring
│ ├── radio_lifecycle.py # Post-connect setup and reconnect/setup helpers
│ ├── radio_commands.py # Radio config/private-key command workflows
│ ├── radio_noise_floor.py # In-memory local radio noise-floor sampling/history
│ ├── radio_stats.py # In-memory local radio stats sampling and noise-floor history
│ └── radio_runtime.py # Router/dependency seam over the global RadioManager
├── radio.py # RadioManager transport/session state + lock management
├── radio_sync.py # Polling, sync, periodic advertisement loop
@@ -50,8 +50,12 @@ app/
├── events.py # Typed WS event payload serialization
├── websocket.py # WS manager + broadcast helpers
├── security.py # Optional app-wide HTTP Basic auth middleware for HTTP + WS
├── push/ # Web Push notification subsystem
│ ├── vapid.py # VAPID key generation, storage, caching
│ ├── send.py # pywebpush wrapper (async via thread executor)
│ └── manager.py # Push dispatch: filter, build payload, concurrent send
├── fanout/ # Fanout bus: MQTT, bots, webhooks, Apprise, SQS (see fanout/AGENTS_fanout.md)
├── dependencies.py # Shared FastAPI dependency providers
├── telemetry_interval.py # Shared telemetry interval math for tracked-repeater scheduler
├── path_utils.py # Path hex rendering and hop-width helpers
├── region_scope.py # Normalize/validate regional flood-scope values
├── keystore.py # Ephemeral private/public key storage for DM decryption
@@ -66,11 +70,12 @@ app/
├── packets.py
├── read_state.py
├── rooms.py
├── server_control.py
├── server_control.py # Shared helpers for repeater/room CLI flows (not an APIRouter)
├── settings.py
├── fanout.py
├── repeaters.py
├── statistics.py
├── push.py
└── ws.py
```
@@ -135,8 +140,9 @@ app/
### Echo/repeat dedup
- Message uniqueness: `(type, conversation_key, text, sender_timestamp)`.
- Duplicate insert is treated as an echo/repeat: the new path (if any) is appended, and the ACK count is incremented only for outgoing channel messages. Incoming direct messages with the same conversation/text/sender timestamp also collapse onto one stored row, with later observations merging path data instead of creating a second DM.
- Channel message uniqueness (`idx_messages_dedup_null_safe`): `(type, conversation_key, text, COALESCE(sender_timestamp, 0))` where `type = 'CHAN'`.
- Incoming PRIV message uniqueness (`idx_messages_incoming_priv_dedup`): `(type, conversation_key, text, COALESCE(sender_timestamp, 0), COALESCE(sender_key, ''))` where `type = 'PRIV' AND outgoing = 0`. `sender_key` was added in migration 056 to distinguish room-server posts from different senders in the same second.
- Duplicate insert is treated as an echo/repeat: the new path (if any) is appended, and the ACK count is incremented only for outgoing channel messages. Incoming direct messages with the same dedup identity also collapse onto one stored row, with later observations merging path data instead of creating a second DM.
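To make the collapse concrete, here is a hedged sketch of one way the described behavior can be implemented (the `paths` column handling and this helper are illustrative, not the repository's real API); a violation of the partial unique indexes above surfaces as an `IntegrityError` in SQLite:

```python
# Hedged sketch of the echo/repeat collapse described above; the paths column
# handling and this helper are illustrative, not the project's code.
import sqlite3


def insert_or_collapse(conn: sqlite3.Connection, msg: dict) -> None:
    try:
        conn.execute(
            "INSERT INTO messages (type, conversation_key, text, sender_timestamp, paths)"
            " VALUES (?, ?, ?, ?, ?)",
            (msg["type"], msg["conversation_key"], msg["text"],
             msg["sender_timestamp"], msg.get("path")),
        )
    except sqlite3.IntegrityError:
        # One of the partial unique indexes fired: treat as an echo/repeat.
        # Append the new path (if any) instead of storing a second row.
        if msg.get("path"):
            conn.execute(
                "UPDATE messages"
                " SET paths = COALESCE(paths || ',', '') || ?"
                " WHERE type = ? AND conversation_key = ? AND text = ?"
                " AND COALESCE(sender_timestamp, 0) = COALESCE(?, 0)",
                (msg["path"], msg["type"], msg["conversation_key"],
                 msg["text"], msg["sender_timestamp"]),
            )
    conn.commit()
```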
### Raw packet dedup policy
@@ -161,10 +167,23 @@ app/
- All external integrations (MQTT, bots, webhooks, Apprise, SQS) are managed through the fanout bus (`app/fanout/`).
- Configs stored in `fanout_configs` table, managed via `GET/POST/PATCH/DELETE /api/fanout`.
- `broadcast_event()` in `websocket.py` dispatches to the fanout manager for `message` and `raw_packet` events.
- Each integration is a `FanoutModule` with scope-based filtering.
- `broadcast_event()` in `websocket.py` dispatches to the fanout manager for `message`, `raw_packet`, and `contact` events.
- `on_message` and `on_raw` are scope-gated. `on_contact`, `on_telemetry`, and `on_health` are dispatched to all modules unconditionally (modules filter internally).
- Repeater telemetry broadcasts are emitted after `RepeaterTelemetryRepository.record()` in both `radio_sync.py` (auto-collect) and `routers/repeaters.py` (manual fetch).
- The 60-second radio stats sampling loop in `radio_stats.py` dispatches an enriched health snapshot (radio identity + full stats) to all fanout modules after each sample.
- Community MQTT publishes raw packets only, but its derived `path` field for direct packets is emitted as comma-separated hop identifiers, not flat path bytes.
- See `app/fanout/AGENTS_fanout.md` for full architecture details.
- See `app/fanout/AGENTS_fanout.md` for full architecture details and event payload shapes.
### Web Push notifications
Web Push is a standalone subsystem in `app/push/`, separate from the fanout module system. It sends browser push notifications for incoming messages even when the tab is closed.
- **Not a fanout module** — Web Push manages per-browser subscriptions (N browsers, each with its own endpoint and delivery state), unlike fanout which is one-config-to-one-destination.
- **VAPID keys**: auto-generated P-256 key pair on first startup, stored in `app_settings.vapid_private_key` / `vapid_public_key`. Cached in-module by `app/push/vapid.py`.
- **Dispatch**: `broadcast_event()` in `websocket.py` fires `push_manager.dispatch_message(data)` alongside fanout for `message` events. The manager checks the global `app_settings.push_conversations` list, then sends to all currently registered subscriptions via `pywebpush` (run in a thread executor).
- **Stale cleanup**: HTTP 404/410 from the push service triggers immediate subscription deletion.
- **Subscriptions stored** in `push_subscriptions` table with `UNIQUE(endpoint)` for upsert semantics.
- Requires HTTPS (self-signed OK) and outbound internet to reach browser push services.
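For orientation, a hedged sketch of a single per-subscription send along these lines (the function name and surrounding wiring are illustrative; only the `pywebpush` call in a thread executor and the 404/410 stale rule come from the notes above):

```python
# Hedged sketch of one Web Push send; names and wiring are illustrative.
import asyncio
import json

from pywebpush import WebPushException, webpush


async def send_one(subscription_info: dict, payload: dict,
                   vapid_private_key: str, contact: str) -> bool:
    """Return False when the subscription is stale and should be deleted."""
    loop = asyncio.get_running_loop()
    try:
        # pywebpush is blocking, so it runs in a thread executor.
        await loop.run_in_executor(
            None,
            lambda: webpush(
                subscription_info=subscription_info,
                data=json.dumps(payload),
                vapid_private_key=vapid_private_key,
                vapid_claims={"sub": f"mailto:{contact}"},
            ),
        )
        return True
    except WebPushException as exc:
        # HTTP 404/410 from the push service means the browser subscription
        # is gone; per the stale-cleanup rule above, the caller deletes it.
        if exc.response is not None and exc.response.status_code in (404, 410):
            return False
        raise
```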
## API Surface (all under `/api`)
@@ -206,6 +225,7 @@ app/
- `POST /contacts/{public_key}/repeater/radio-settings`
- `POST /contacts/{public_key}/repeater/advert-intervals`
- `POST /contacts/{public_key}/repeater/owner-info`
- `GET /contacts/{public_key}/repeater/telemetry-history` — stored telemetry history for a repeater (read-only, no radio access)
- `POST /contacts/{public_key}/room/login`
- `POST /contacts/{public_key}/room/status`
- `POST /contacts/{public_key}/room/lpp-telemetry`
@@ -244,7 +264,8 @@ app/
- `POST /settings/favorites/toggle`
- `POST /settings/blocked-keys/toggle`
- `POST /settings/blocked-names/toggle`
- `POST /settings/migrate`
- `POST /settings/tracked-telemetry/toggle`
- `GET /settings/tracked-telemetry/schedule` — current telemetry scheduling derivation, interval options, and next-run-at timestamp
### Fanout
- `GET /fanout` — list all fanout configs
@@ -256,6 +277,16 @@ app/
### Statistics
- `GET /statistics` — aggregated mesh network stats (entity counts, message/packet splits, activity windows, busiest channels)
### Push
- `GET /push/vapid-public-key` — VAPID public key for browser `PushManager.subscribe()`
- `POST /push/subscribe` — register/upsert push subscription (keyed by endpoint URL)
- `GET /push/subscriptions` — list all push subscriptions
- `PATCH /push/subscriptions/{id}` — update label or filter preferences
- `DELETE /push/subscriptions/{id}` — delete subscription
- `POST /push/subscriptions/{id}/test` — send test notification
- `GET /push/conversations` — global list of push-enabled conversation state keys
- `POST /push/conversations/toggle` — add or remove a conversation from the global push list
### WebSocket
- `WS /ws`
@@ -286,7 +317,10 @@ Main tables:
- `raw_packets`
- `contact_advert_paths` (recent unique advertisement paths per contact, keyed by contact + path bytes + hop count)
- `contact_name_history` (tracks name changes over time)
- `app_settings`
- `repeater_telemetry_history` (time-series telemetry snapshots for tracked repeaters)
- `fanout_configs` (MQTT, bot, webhook, Apprise, SQS integration configs)
- `push_subscriptions` (Web Push browser subscriptions with delivery metadata; UNIQUE on endpoint)
- `app_settings` (includes `vapid_private_key` and `vapid_public_key` for Web Push VAPID signing)
Contact route state is canonicalized on the backend:
- stored route inputs: `direct_path`, `direct_path_len`, `direct_path_hash_mode`, `direct_path_updated_at`, plus optional `route_override_*`
@@ -301,14 +335,15 @@ Repository writes should prefer typed models such as `ContactUpsert` over ad hoc
`app_settings` fields in active model:
- `max_radio_contacts`
- `favorites`
- `auto_decrypt_dm_on_advert`
- `last_message_times`
- `preferences_migrated`
- `advert_interval`
- `last_advert_time`
- `flood_scope`
- `blocked_keys`, `blocked_names`, `discovery_blocked_types`
- `tracked_telemetry_repeaters`
- `auto_resend_channel`
- `telemetry_interval_hours`
Note: MQTT, community MQTT, and bot configs were migrated to the `fanout_configs` table (migrations 36-38).
+1
View File
@@ -26,6 +26,7 @@ class Settings(BaseSettings):
default=False,
validation_alias="__CLOWNTOWN_DO_CLOCK_WRAPAROUND",
)
load_with_autoevict: bool = False
skip_post_connect_sync: bool = False
basic_auth_username: str = ""
basic_auth_password: str = ""
+101 -6
View File
@@ -1,4 +1,7 @@
import asyncio
import logging
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from pathlib import Path
import aiosqlite
@@ -7,7 +10,7 @@ from app.config import settings
logger = logging.getLogger(__name__)
SCHEMA = """
SCHEMA_TABLES = """
CREATE TABLE IF NOT EXISTS contacts (
public_key TEXT PRIMARY KEY,
name TEXT,
@@ -108,7 +111,8 @@ CREATE TABLE IF NOT EXISTS app_settings (
blocked_names TEXT DEFAULT '[]',
discovery_blocked_types TEXT DEFAULT '[]',
tracked_telemetry_repeaters TEXT DEFAULT '[]',
auto_resend_channel INTEGER DEFAULT 0
auto_resend_channel INTEGER DEFAULT 0,
telemetry_interval_hours INTEGER DEFAULT 8
);
INSERT OR IGNORE INTO app_settings (id) VALUES (1);
@@ -130,13 +134,18 @@ CREATE TABLE IF NOT EXISTS repeater_telemetry_history (
data TEXT NOT NULL,
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
);
"""
# Indexes are created after migrations so that legacy databases have all
# required columns (e.g. sender_key, added by migration 25) before index
# creation runs.
SCHEMA_INDEXES = """
CREATE INDEX IF NOT EXISTS idx_messages_received ON messages(received_at);
CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_dedup_null_safe
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))
WHERE type = 'CHAN';
CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_incoming_priv_dedup
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0), COALESCE(sender_key, ''))
WHERE type = 'PRIV' AND outgoing = 0;
CREATE INDEX IF NOT EXISTS idx_messages_sender_key ON messages(sender_key);
CREATE INDEX IF NOT EXISTS idx_messages_pagination
@@ -159,9 +168,74 @@ CREATE INDEX IF NOT EXISTS idx_repeater_telemetry_pk_ts
class Database:
"""Single-connection aiosqlite wrapper with coroutine-level serialization.
Why the lock: aiosqlite runs one ``sqlite3.Connection`` on a background
worker thread and serializes statement execution there. But SQLite's
``COMMIT`` fails with ``OperationalError: cannot commit transaction -
SQL statements in progress`` whenever *any* cursor on the connection has
a live prepared statement (a ``SELECT`` that returned ``SQLITE_ROW`` but
hasn't been fully consumed or closed). Under concurrent coroutines, one
task's in-flight ``fetchone()`` can still be in ``SQLITE_ROW`` state when
another task's ``commit()`` runs on the worker — triggering the error.
Fix: all DB work goes through ``tx()`` (writes) or ``readonly()`` (reads),
both of which acquire ``self._lock``. The lock is non-reentrant (asyncio
default) by design; nested ``tx()`` calls are a bug. Repository methods
that compose multiple operations factor the raw SQL into private helpers
that take a ``conn`` and don't lock; the public method acquires the lock
once and calls those helpers.
Why reads are also locked: reads must also hold the lock, because a read
in ``SQLITE_ROW`` state is precisely the live statement that breaks a
concurrent writer's commit. Single-connection aiosqlite cannot safely
overlap reads and writes. If we ever split reader/writer connections in
the future, ``readonly()`` becomes the seam to point at the reader pool.
"""
def __init__(self, db_path: str):
self.db_path = db_path
self._connection: aiosqlite.Connection | None = None
self._lock = asyncio.Lock()
@asynccontextmanager
async def tx(self) -> AsyncIterator[aiosqlite.Connection]:
"""Acquire the connection for a write transaction.
Commits on clean exit, rolls back on exception. Callers MUST close
every cursor opened inside the block (use ``async with conn.execute(...)
as cursor:``) so no prepared statement is alive when commit runs.
The lock serializes concurrent writers AND ensures no reader's cursor
is alive during the commit. Nested calls will deadlock factor shared
SQL into helpers that accept ``conn`` and do not re-enter ``tx()``.
"""
async with self._lock:
if self._connection is None:
raise RuntimeError("Database not connected")
conn = self._connection
try:
yield conn
except BaseException:
await conn.rollback()
raise
else:
await conn.commit()
@asynccontextmanager
async def readonly(self) -> AsyncIterator[aiosqlite.Connection]:
"""Acquire the connection for a read. No commit, no rollback.
Locked for the same reason writes are: on a single connection, an
active read statement blocks a concurrent writer's commit. Callers
MUST fully consume or close cursors before the block exits (use
``async with conn.execute(...) as cursor:`` + ``fetchall`` /
``fetchone``; avoid holding a cursor across ``await`` on other IO).
"""
async with self._lock:
if self._connection is None:
raise RuntimeError("Database not connected")
yield self._connection
async def connect(self) -> None:
logger.info("Connecting to database at %s", self.db_path)
@@ -173,6 +247,22 @@ class Database:
# Persists in the DB file but we set it explicitly on every connection.
await self._connection.execute("PRAGMA journal_mode = WAL")
# synchronous = NORMAL is safe with WAL — only the most recent
# transaction can be lost on an OS crash (no corruption risk).
# Reduces fsync overhead vs. the default FULL.
await self._connection.execute("PRAGMA synchronous = NORMAL")
# Retry for up to 5s on lock contention instead of failing instantly.
# Matters when a second connection (e.g. VACUUM) touches the DB.
await self._connection.execute("PRAGMA busy_timeout = 5000")
# Bump page cache to ~64 MB (negative value = KB). Keeps hot pages
# in memory for read-heavy queries (unreads, pagination, search).
await self._connection.execute("PRAGMA cache_size = -64000")
# Keep temp tables and sort spills in memory instead of on disk.
await self._connection.execute("PRAGMA temp_store = MEMORY")
# Incremental auto-vacuum: freed pages are reclaimable via
# PRAGMA incremental_vacuum without a full VACUUM. Must be set before
# the first table is created (for new databases); for existing databases
@@ -185,15 +275,20 @@ class Database:
# constraints, then re-enabled for all subsequent application queries.
await self._connection.execute("PRAGMA foreign_keys = OFF")
await self._connection.executescript(SCHEMA)
await self._connection.executescript(SCHEMA_TABLES)
await self._connection.commit()
logger.debug("Database schema initialized")
logger.debug("Database tables initialized")
# Run any pending migrations
# Run any pending migrations before creating indexes, so that
# legacy databases have all required columns first.
from app.migrations import run_migrations
await run_migrations(self._connection)
await self._connection.executescript(SCHEMA_INDEXES)
await self._connection.commit()
logger.debug("Database indexes initialized")
# Enable FK enforcement for all application queries from this point on.
await self._connection.execute("PRAGMA foreign_keys = ON")
logger.debug("Foreign key enforcement enabled")
+4 -1
View File
@@ -299,8 +299,11 @@ def parse_advertisement(
timestamp = int.from_bytes(payload[32:36], byteorder="little")
flags = payload[100]
# Parse flags
# Parse flags — clamp device_role to valid range (0-4); corrupted
# advertisements can have junk in the lower nibble.
device_role = flags & 0x0F
if device_role > 4:
device_role = 0
has_location = bool(flags & 0x10)
has_feature1 = bool(flags & 0x20)
has_feature2 = bool(flags & 0x40)
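As a worked illustration of the bit layout this parses (not part of the source file), a flags byte of `0x32` decodes like this:

```python
# Worked example of the advertisement flags decode above (illustration only).
flags = 0x32                          # 0b0011_0010
device_role = flags & 0x0F            # 2
if device_role > 4:                   # clamp corrupted values to 0
    device_role = 0
has_location = bool(flags & 0x10)     # True
has_feature1 = bool(flags & 0x20)     # True
has_feature2 = bool(flags & 0x40)     # False
print(device_role, has_location, has_feature1, has_feature2)
```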
+7 -1
View File
@@ -237,7 +237,13 @@ async def on_new_contact(event: "Event") -> None:
logger.debug("New contact: %s", public_key[:12])
contact_upsert = ContactUpsert.from_radio_dict(public_key.lower(), payload, on_radio=False)
contact_upsert.last_seen = int(time.time())
# Intentionally do not set first_seen or last_seen here: NEW_CONTACT
# fires from the radio's stored contact DB, not an RF observation.
# Both first_seen and last_seen are RF-only timestamps — they track
# the first and most recent time we actually heard this pubkey over
# the air (adverts, messages, path updates). Contacts synced from the
# radio's internal DB without any RF activity stay NULL until a real
# RF observation fills them in.
await ContactRepository.upsert(contact_upsert)
promoted_keys = await promote_prefix_contacts_for_contact(
public_key=public_key,
+2 -2
View File
@@ -2,10 +2,10 @@
import json
import logging
from typing import Any, Literal
from typing import Any, Literal, NotRequired
from pydantic import TypeAdapter
from typing_extensions import NotRequired, TypedDict
from typing_extensions import TypedDict
from app.models import Channel, Contact, Message, MessagePath, RawPacketBroadcast
from app.routers.health import HealthResponse
+63 -9
View File
@@ -1,6 +1,6 @@
# Fanout Bus Architecture
The fanout bus is a unified system for dispatching mesh radio events (decoded messages and raw packets) to external integrations. It replaces the previous scattered singleton MQTT publishers with a modular, configurable framework.
The fanout bus is a unified system for dispatching mesh radio events to external integrations. It replaces the previous scattered singleton MQTT publishers with a modular, configurable framework.
## Core Concepts
@@ -8,10 +8,15 @@ The fanout bus is a unified system for dispatching mesh radio events (decoded me
Base class that all integration modules extend:
- `__init__(config_id, config, *, name="")` — constructor; receives the config UUID, the type-specific config dict, and the user-assigned name
- `start()` / `stop()` — async lifecycle (e.g. open/close connections)
- `on_message(data)` — receive decoded messages (DM/channel)
- `on_raw(data)` — receive raw RF packets
- `on_message(data)` — receive decoded messages (scope-gated)
- `on_raw(data)` — receive raw RF packets (scope-gated)
- `on_contact(data)` — receive contact upserts; dispatched to all modules
- `on_telemetry(data)` — receive repeater telemetry snapshots; dispatched to all modules
- `on_health(data)` — receive periodic radio health snapshots; dispatched to all modules
- `status` property (**must override**) — return `"connected"`, `"disconnected"`, or `"error"`
All five event hooks are no-ops by default; modules override only the ones they care about.
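A minimal sketch of a custom module against this contract (the import path and the module's behavior are assumptions for illustration; only the hook names, constructor signature, and `status` values come from the list above):

```python
# Hedged sketch of a minimal module; import path and behavior are assumed.
import logging

from app.fanout.base import FanoutModule  # assumed location of the base class

logger = logging.getLogger(__name__)


class LogOnlyModule(FanoutModule):
    """Logs decoded messages and ignores every other hook."""

    def __init__(self, config_id: str, config: dict, *, name: str = ""):
        super().__init__(config_id, config, name=name)
        self._running = False

    async def start(self) -> None:
        self._running = True

    async def stop(self) -> None:
        self._running = False

    async def on_message(self, data: dict) -> None:
        # data is Message.model_dump(); scope gating already happened upstream.
        logger.info("%s %s: %s", data.get("type"), data.get("sender_name"), data.get("text"))

    @property
    def status(self) -> str:
        return "connected" if self._running else "disconnected"
```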
### FanoutManager (manager.py)
Singleton that owns all active modules and dispatches events:
- `load_from_db()` — startup: load enabled configs, instantiate modules
@@ -19,6 +24,9 @@ Singleton that owns all active modules and dispatches events:
- `remove_config(id)` — delete: stop and remove
- `broadcast_message(data)` — scope-check + dispatch `on_message`
- `broadcast_raw(data)` — scope-check + dispatch `on_raw`
- `broadcast_contact(data)` — dispatch `on_contact` to all modules
- `broadcast_telemetry(data)` — dispatch `on_telemetry` to all modules
- `broadcast_health_fanout(data)` — dispatch `on_health` to all modules
- `stop_all()` — shutdown
- `get_statuses()` — health endpoint data
@@ -33,19 +41,65 @@ Each config has a `scope` JSON blob controlling what events reach it:
```
Community MQTT always enforces `{"messages": "none", "raw_packets": "all"}`.
Scope only gates `on_message` and `on_raw`. The `on_contact`, `on_telemetry`, and `on_health` hooks are dispatched to all modules unconditionally — modules that care about specific contacts or repeaters filter internally based on their own config.
## Event Flow
```
Radio Event -> packet_processor / event_handler
-> broadcast_event("message"|"raw_packet", data, realtime=True)
-> broadcast_event("message"|"raw_packet"|"contact", data, realtime=True)
-> WebSocket broadcast (always)
-> FanoutManager.broadcast_message/raw (only if realtime=True)
-> scope check per module
-> module.on_message / on_raw
-> FanoutManager.broadcast_message/raw/contact (only if realtime=True)
-> scope check per module (message/raw only)
-> module.on_message / on_raw / on_contact
Telemetry collect (radio_sync.py / routers/repeaters.py)
-> RepeaterTelemetryRepository.record(...)
-> FanoutManager.broadcast_telemetry(data)
-> module.on_telemetry (all modules, unconditional)
Health fanout (radio_stats.py, piggybacks on 60s stats sampling loop)
-> FanoutManager.broadcast_health_fanout(data)
-> module.on_health (all modules, unconditional)
```
Setting `realtime=False` (used during historical decryption) skips fanout dispatch entirely.
## Event Payloads
### on_message(data)
`Message.model_dump()` — the full Pydantic message model. Key fields:
- `type` (`"PRIV"` | `"CHAN"`), `conversation_key`, `text`, `sender_name`, `sender_key`
- `outgoing`, `acked`, `paths`, `sender_timestamp`, `received_at`
### on_raw(data)
Raw packet dict from `packet_processor.py`. Key fields:
- `id` (storage row ID), `observation_id` (per-arrival), `raw` (hex), `timestamp`
- `decrypted_info` (optional: `channel_key`, `contact_key`, `text`)
### on_contact(data)
`Contact.model_dump()` — the full Pydantic contact model. Key fields:
- `public_key`, `name`, `type` (0=unknown, 1=client, 2=repeater, 3=room, 4=sensor)
- `lat`, `lon`, `last_seen`, `first_seen`, `on_radio`
### on_telemetry(data)
Repeater telemetry snapshot, broadcast after successful `RepeaterTelemetryRepository.record()`.
Identical shape from both auto-collect (`radio_sync.py`) and manual fetch (`routers/repeaters.py`):
- `public_key`, `name`, `timestamp`
- `battery_volts`, `noise_floor_dbm`, `last_rssi_dbm`, `last_snr_db`
- `packets_received`, `packets_sent`, `airtime_seconds`, `rx_airtime_seconds`
- `uptime_seconds`, `sent_flood`, `sent_direct`, `recv_flood`, `recv_direct`
- `flood_dups`, `direct_dups`, `full_events`, `tx_queue_len`
### on_health(data)
Radio health + stats snapshot, broadcast every 60s by the stats sampling loop in `radio_stats.py`:
- `connected` (bool), `connection_info` (str | None)
- `public_key` (str | None), `name` (str | None)
- `noise_floor_dbm`, `battery_mv`, `uptime_secs` (int | None)
- `last_rssi` (int | None), `last_snr` (float | None)
- `tx_air_secs`, `rx_air_secs` (int | None)
- `packets_recv`, `packets_sent`, `flood_tx`, `direct_tx`, `flood_rx`, `direct_rx` (int | None)
## Current Module Types
### mqtt_private (mqtt_private.py)
@@ -90,8 +144,8 @@ Amazon SQS delivery. Config blob:
- Supports both decoded messages and raw packets via normal scope selection
### map_upload (map_upload.py)
Uploads heard repeater and room-server advertisements to map.meshcore.dev. Config blob:
- `api_url` (optional, default `""`) — upload endpoint; empty falls back to the public map.meshcore.dev API
Uploads heard repeater and room-server advertisements to map.meshcore.io. Config blob:
- `api_url` (optional, default `""`) — upload endpoint; empty falls back to the public map.meshcore.io API
- `dry_run` (bool, default `true`) — when true, logs the payload at INFO level without sending
- `geofence_enabled` (bool, default `false`) — when true, only uploads nodes within `geofence_radius_km` of the radio's own configured lat/lon
- `geofence_radius_km` (float, default `0`) — filter radius in kilometres
+9
View File
@@ -38,6 +38,15 @@ class FanoutModule:
async def on_raw(self, data: dict) -> None:
"""Called for raw RF packets. Override if needed."""
async def on_contact(self, data: dict) -> None:
"""Called for contact upserts (adverts, sync). Override if needed."""
async def on_telemetry(self, data: dict) -> None:
"""Called for repeater telemetry snapshots. Override if needed."""
async def on_health(self, data: dict) -> None:
"""Called for periodic radio health snapshots. Override if needed."""
@property
def status(self) -> str:
"""Return 'connected', 'disconnected', or 'error'."""
+1 -1
View File
@@ -164,7 +164,7 @@ class BotModule(FanoutModule):
),
timeout=BOT_EXECUTION_TIMEOUT,
)
except asyncio.TimeoutError:
except TimeoutError:
logger.warning("Bot '%s' execution timed out", self.name)
return
except Exception:
+1 -1
View File
@@ -538,7 +538,7 @@ class CommunityMqttPublisher(BaseMqttPublisher):
self._version_event.clear()
try:
await asyncio.wait_for(self._version_event.wait(), timeout=30)
except asyncio.TimeoutError:
except TimeoutError:
pass
return False
return True
+35 -1
View File
@@ -31,12 +31,14 @@ def _register_module_types() -> None:
from app.fanout.bot import BotModule
from app.fanout.map_upload import MapUploadModule
from app.fanout.mqtt_community import MqttCommunityModule
from app.fanout.mqtt_ha import MqttHaModule
from app.fanout.mqtt_private import MqttPrivateModule
from app.fanout.sqs import SqsModule
from app.fanout.webhook import WebhookModule
_MODULE_TYPES["mqtt_private"] = MqttPrivateModule
_MODULE_TYPES["mqtt_community"] = MqttCommunityModule
_MODULE_TYPES["mqtt_ha"] = MqttHaModule
_MODULE_TYPES["bot"] = BotModule
_MODULE_TYPES["webhook"] = WebhookModule
_MODULE_TYPES["apprise"] = AppriseModule
@@ -86,6 +88,11 @@ def _scope_matches_raw(scope: dict, _data: dict) -> bool:
return scope.get("raw_packets", "none") == "all"
def _always_match(_scope: dict, _data: dict) -> bool:
"""Match all modules unconditionally (filtering is module-internal)."""
return True
class FanoutManager:
"""Owns all active fanout modules and dispatches events."""
@@ -220,7 +227,7 @@ class FanoutManager:
handler = getattr(module, handler_name)
await asyncio.wait_for(handler(data), timeout=_DISPATCH_TIMEOUT_SECONDS)
self._clear_module_error(config_id)
except asyncio.TimeoutError:
except TimeoutError:
timeout_error = f"{handler_name} timed out after {_DISPATCH_TIMEOUT_SECONDS:.1f}s"
self._set_module_error(config_id, timeout_error)
logger.error(
@@ -270,6 +277,33 @@ class FanoutManager:
log_label="on_raw",
)
async def broadcast_contact(self, data: dict) -> None:
"""Dispatch a contact upsert to all modules."""
await self._dispatch_matching(
data,
matcher=_always_match,
handler_name="on_contact",
log_label="on_contact",
)
async def broadcast_telemetry(self, data: dict) -> None:
"""Dispatch a repeater telemetry snapshot to all modules."""
await self._dispatch_matching(
data,
matcher=_always_match,
handler_name="on_telemetry",
log_label="on_telemetry",
)
async def broadcast_health_fanout(self, data: dict) -> None:
"""Dispatch a radio health snapshot to all modules."""
await self._dispatch_matching(
data,
matcher=_always_match,
handler_name="on_health",
log_label="on_health",
)
async def stop_all(self) -> None:
"""Shutdown all modules."""
for config_id, (module, _) in list(self._modules.items()):
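The new `broadcast_contact` / `broadcast_telemetry` / `broadcast_health_fanout` methods are thin wrappers over `_dispatch_matching` with an always-true matcher, so every active module receives these events while per-module timeouts and error bookkeeping still apply. A hedged sketch of the producer side (the `fanout_manager` name and the snapshot contents are assumptions, not from this diff):

```python
# Hypothetical producer-side call; "fanout_manager" stands in for however
# the application exposes its FanoutManager instance.
async def publish_health_snapshot(fanout_manager, snapshot: dict) -> None:
    # Every module's on_health() runs with the same dict; slow or failing
    # modules are handled inside the manager, not by the caller.
    await fanout_manager.broadcast_health_fanout(snapshot)
```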
+5 -4
View File
@@ -1,6 +1,7 @@
"""Fanout module for uploading heard advert packets to map.meshcore.dev.
"""Fanout module for uploading heard advert packets to map.meshcore.io.
Mirrors the logic of the standalone map.meshcore.dev-uploader project:
Mirrors the logic of the standalone map.meshcore.dev-uploader project
(historical name; the live service is now hosted at map.meshcore.io):
- Listens on raw RF packets via on_raw
- Filters for ADVERT packets, only processes repeaters (role 2) and rooms (role 3)
- Skips nodes with no valid location (lat/lon None)
@@ -16,7 +17,7 @@ the raw hex link.
Config keys
-----------
api_url : str, default ""
Upload endpoint. Empty string falls back to the public map.meshcore.dev API.
Upload endpoint. Empty string falls back to the public map.meshcore.io API.
dry_run : bool, default True
When True, log the payload at INFO level instead of sending it.
geofence_enabled : bool, default False
@@ -46,7 +47,7 @@ from app.services.radio_runtime import radio_runtime
logger = logging.getLogger(__name__)
_DEFAULT_API_URL = "https://map.meshcore.dev/api/v1/uploader/node"
_DEFAULT_API_URL = "https://map.meshcore.io/api/v1/uploader/node"
# Re-upload guard: skip re-uploading a pubkey seen within this window (AU parity)
_REUPLOAD_SECONDS = 3600
+31 -2
View File
@@ -12,6 +12,7 @@ from __future__ import annotations
import asyncio
import json
import logging
import sys
import time
from abc import ABC, abstractmethod
from typing import Any
@@ -195,7 +196,7 @@ class BaseMqttPublisher(ABC):
self._version_event.wait(),
timeout=self._not_configured_timeout,
)
except asyncio.TimeoutError:
except TimeoutError:
continue
except asyncio.CancelledError:
return
@@ -230,7 +231,7 @@ class BaseMqttPublisher(ABC):
self._version_event.clear()
try:
await asyncio.wait_for(self._version_event.wait(), timeout=60)
except asyncio.TimeoutError:
except TimeoutError:
elapsed = time.monotonic() - connect_time
await self._on_periodic_wake(elapsed)
if self._should_break_wait(elapsed):
@@ -252,6 +253,34 @@ class BaseMqttPublisher(ABC):
self._client = None
self._last_error = _format_error_detail(e)
# Windows ProactorEventLoop does not implement add_reader /
# add_writer, which paho-mqtt requires. The failure can
# surface as a direct NotImplementedError (add_writer in
# __aenter__) or as a generic timeout (add_reader fails
# inside an event-loop callback, so paho never hears back).
# Either way, if we're on Windows with Proactor the root
# cause is the same and retrying won't help.
_on_proactor = (
sys.platform == "win32"
and type(asyncio.get_event_loop()).__name__ == "ProactorEventLoop"
)
if _on_proactor:
broadcast_error(
"MQTT unavailable — Windows event loop incompatible",
"The default Windows event loop (ProactorEventLoop) does "
"not support MQTT. Add --loop none to your uvicorn "
"command and restart. See README.md for details.",
)
_broadcast_health()
logger.error(
"%s cannot run: Windows ProactorEventLoop does not "
"implement add_reader/add_writer required by paho-mqtt. "
"Restart uvicorn with '--loop none' to use "
"SelectorEventLoop instead. Giving up (will not retry).",
self._integration_label(),
)
return
title, detail = self._on_error()
broadcast_error(title, detail)
_broadcast_health()
+780
View File
@@ -0,0 +1,780 @@
"""Home Assistant MQTT Discovery fanout module.
Publishes HA-compatible discovery configs and state updates so that mesh
network devices appear natively in Home Assistant via its built-in MQTT
integration. No custom HA component is needed.
Entity types created:
- Local radio: binary_sensor (connectivity) + sensors (noise floor, battery,
uptime, RSSI, SNR, airtime, packet counts)
- Per tracked repeater: sensor entities for telemetry fields
- Per tracked contact: device_tracker for GPS position
- Messages: event entity for scope-matched messages
"""
from __future__ import annotations
import logging
import ssl
from types import SimpleNamespace
from typing import Any
from app.fanout.base import FanoutModule, get_fanout_message_text
from app.fanout.mqtt_base import BaseMqttPublisher
logger = logging.getLogger(__name__)
# ── Repeater telemetry sensor definitions ─────────────────────────────────
_REPEATER_SENSORS: list[dict[str, Any]] = [
{
"field": "battery_volts",
"name": "Battery Voltage",
"object_id": "battery_voltage",
"device_class": "voltage",
"state_class": "measurement",
"unit": "V",
"precision": 2,
},
{
"field": "noise_floor_dbm",
"name": "Noise Floor",
"object_id": "noise_floor",
"device_class": "signal_strength",
"state_class": "measurement",
"unit": "dBm",
"precision": 0,
},
{
"field": "last_rssi_dbm",
"name": "Last RSSI",
"object_id": "last_rssi",
"device_class": "signal_strength",
"state_class": "measurement",
"unit": "dBm",
"precision": 0,
},
{
"field": "last_snr_db",
"name": "Last SNR",
"object_id": "last_snr",
"device_class": None,
"state_class": "measurement",
"unit": "dB",
"precision": 1,
},
{
"field": "packets_received",
"name": "Packets Received",
"object_id": "packets_received",
"device_class": None,
"state_class": "total_increasing",
"unit": None,
"precision": 0,
},
{
"field": "packets_sent",
"name": "Packets Sent",
"object_id": "packets_sent",
"device_class": None,
"state_class": "total_increasing",
"unit": None,
"precision": 0,
},
{
"field": "uptime_seconds",
"name": "Uptime",
"object_id": "uptime",
"device_class": "duration",
"state_class": None,
"unit": "s",
"precision": 0,
},
]
# ── LPP sensor metadata ─────────────────────────────────────────────────
_LPP_HA_META: dict[str, dict[str, Any]] = {
"temperature": {"device_class": "temperature", "unit": "°C", "precision": 1},
"humidity": {"device_class": "humidity", "unit": "%", "precision": 1},
"barometer": {"device_class": "atmospheric_pressure", "unit": "hPa", "precision": 1},
"voltage": {"device_class": "voltage", "unit": "V", "precision": 2},
"current": {"device_class": "current", "unit": "mA", "precision": 1},
"luminosity": {"device_class": "illuminance", "unit": "lux", "precision": 0},
"power": {"device_class": "power", "unit": "W", "precision": 1},
"energy": {"device_class": "energy", "unit": "kWh", "precision": 2},
"distance": {"device_class": "distance", "unit": "mm", "precision": 0},
"concentration": {"device_class": None, "unit": "ppm", "precision": 0},
"direction": {"device_class": None, "unit": "°", "precision": 0},
"altitude": {"device_class": None, "unit": "m", "precision": 1},
}
def _lpp_sensor_key(type_name: str, channel: int) -> str:
"""Build the flat telemetry-payload key for an LPP sensor."""
return f"lpp_{type_name}_ch{channel}"
def _repeater_telemetry_payload(data: dict[str, Any]) -> dict[str, Any]:
"""Build the flat HA state payload for a repeater telemetry snapshot."""
payload: dict[str, Any] = {}
for sensor in _REPEATER_SENSORS:
field = sensor["field"]
if field is not None:
payload[field] = data.get(field)
for sensor in data.get("lpp_sensors", []) or []:
key = _lpp_sensor_key(sensor.get("type_name", "unknown"), sensor.get("channel", 0))
payload[key] = sensor.get("value")
return payload
def _lpp_discovery_configs(
prefix: str,
pub_key: str,
device: dict,
lpp_sensors: list[dict],
state_topic: str,
) -> list[tuple[str, dict]]:
"""Build HA discovery configs for a repeater's LPP sensors."""
configs: list[tuple[str, dict]] = []
for sensor in lpp_sensors:
type_name = sensor.get("type_name", "unknown")
channel = sensor.get("channel", 0)
field = _lpp_sensor_key(type_name, channel)
meta = _LPP_HA_META.get(type_name, {})
nid = _node_id(pub_key)
object_id = field
display = type_name.replace("_", " ").title()
name = f"{display} (Ch {channel})"
cfg: dict[str, Any] = {
"name": name,
"unique_id": f"meshcore_{nid}_{object_id}",
"device": device,
"state_topic": state_topic,
"value_template": "{{ value_json." + field + " }}",
"state_class": "measurement",
"expire_after": 36000,
}
if meta.get("device_class"):
cfg["device_class"] = meta["device_class"]
if meta.get("unit"):
cfg["unit_of_measurement"] = meta["unit"]
if meta.get("precision") is not None:
cfg["suggested_display_precision"] = meta["precision"]
topic = f"homeassistant/sensor/meshcore_{nid}/{object_id}/config"
configs.append((topic, cfg))
return configs
# ── Local radio sensor definitions ────────────────────────────────────────
_RADIO_SENSORS: list[dict[str, Any]] = [
{
"field": "noise_floor_dbm",
"name": "Noise Floor",
"object_id": "noise_floor",
"device_class": "signal_strength",
"state_class": "measurement",
"unit": "dBm",
"precision": 0,
},
{
"field": "battery_volts",
"name": "Battery",
"object_id": "battery",
"device_class": "voltage",
"state_class": "measurement",
"unit": "V",
"precision": 2,
},
{
"field": "uptime_secs",
"name": "Uptime",
"object_id": "uptime",
"device_class": "duration",
"state_class": None,
"unit": "s",
"precision": 0,
},
{
"field": "last_rssi",
"name": "Last RSSI",
"object_id": "last_rssi",
"device_class": "signal_strength",
"state_class": "measurement",
"unit": "dBm",
"precision": 0,
},
{
"field": "last_snr",
"name": "Last SNR",
"object_id": "last_snr",
"device_class": None,
"state_class": "measurement",
"unit": "dB",
"precision": 1,
},
{
"field": "tx_air_secs",
"name": "TX Airtime",
"object_id": "tx_airtime",
"device_class": "duration",
"state_class": "total_increasing",
"unit": "s",
"precision": 0,
},
{
"field": "rx_air_secs",
"name": "RX Airtime",
"object_id": "rx_airtime",
"device_class": "duration",
"state_class": "total_increasing",
"unit": "s",
"precision": 0,
},
{
"field": "packets_recv",
"name": "Packets Received",
"object_id": "packets_received",
"device_class": None,
"state_class": "total_increasing",
"unit": None,
"precision": 0,
},
{
"field": "packets_sent",
"name": "Packets Sent",
"object_id": "packets_sent",
"device_class": None,
"state_class": "total_increasing",
"unit": None,
"precision": 0,
},
]
def _node_id(public_key: str) -> str:
"""Derive a stable, MQTT-safe node identifier from a public key."""
return public_key[:12].lower()
def _device_payload(
public_key: str,
name: str,
model: str,
*,
via_device_key: str | None = None,
) -> dict[str, Any]:
"""Build an HA device registry fragment."""
dev: dict[str, Any] = {
"identifiers": [f"meshcore_{_node_id(public_key)}"],
"name": name or public_key[:12],
"manufacturer": "MeshCore",
"model": model,
}
if via_device_key:
dev["via_device"] = f"meshcore_{_node_id(via_device_key)}"
return dev
# ── MQTT publisher subclass ───────────────────────────────────────────────
class _HaMqttPublisher(BaseMqttPublisher):
"""Thin MQTT lifecycle wrapper for the HA discovery module."""
_backoff_max = 30
_log_prefix = "HA-MQTT"
def __init__(self) -> None:
super().__init__()
self._on_connected_callback: Any = None
def _is_configured(self) -> bool:
s = self._settings
return bool(s and s.broker_host)
def _build_client_kwargs(self, settings: object) -> dict[str, Any]:
s: Any = settings
kw: dict[str, Any] = {
"hostname": s.broker_host,
"port": s.broker_port,
"username": s.username or None,
"password": s.password or None,
}
if s.use_tls:
ctx = ssl.create_default_context()
if s.tls_insecure:
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
kw["tls_context"] = ctx
return kw
def _on_connected(self, settings: object) -> tuple[str, str]:
s: Any = settings
return ("HA MQTT connected", f"{s.broker_host}:{s.broker_port}")
def _on_error(self) -> tuple[str, str]:
return ("HA MQTT connection failure", "Please correct the settings or disable.")
async def _on_connected_async(self, settings: object) -> None:
if self._on_connected_callback:
await self._on_connected_callback()
# ── Discovery config builders ─────────────────────────────────────────────
def _radio_discovery_configs(
prefix: str,
radio_key: str,
radio_name: str,
) -> list[tuple[str, dict]]:
"""Build HA discovery config payloads for the local radio device."""
nid = _node_id(radio_key)
device = _device_payload(radio_key, radio_name, "Radio")
state_topic = f"{prefix}/{nid}/health"
configs: list[tuple[str, dict]] = []
# binary_sensor: connected
configs.append(
(
f"homeassistant/binary_sensor/meshcore_{nid}/connected/config",
{
"name": "Connected",
"unique_id": f"meshcore_{nid}_connected",
"device": device,
"state_topic": state_topic,
"value_template": "{{ 'ON' if value_json.connected else 'OFF' }}",
"device_class": "connectivity",
"payload_on": "ON",
"payload_off": "OFF",
"expire_after": 120,
},
)
)
# sensors from _RADIO_SENSORS (noise floor, battery, uptime, RSSI, etc.)
for sensor in _RADIO_SENSORS:
cfg: dict[str, Any] = {
"name": sensor["name"],
"unique_id": f"meshcore_{nid}_{sensor['object_id']}",
"device": device,
"state_topic": state_topic,
"value_template": "{{ value_json." + sensor["field"] + " }}", # type: ignore[operator]
"expire_after": 120,
}
if sensor["device_class"]:
cfg["device_class"] = sensor["device_class"]
if sensor["state_class"]:
cfg["state_class"] = sensor["state_class"]
if sensor["unit"]:
cfg["unit_of_measurement"] = sensor["unit"]
if sensor.get("precision") is not None:
cfg["suggested_display_precision"] = sensor["precision"]
topic = f"homeassistant/sensor/meshcore_{nid}/{sensor['object_id']}/config"
configs.append((topic, cfg))
return configs
def _repeater_discovery_configs(
prefix: str,
pub_key: str,
name: str,
radio_key: str | None,
) -> list[tuple[str, dict]]:
"""Build HA discovery config payloads for a tracked repeater."""
nid = _node_id(pub_key)
device = _device_payload(pub_key, name, "Repeater", via_device_key=radio_key)
state_topic = f"{prefix}/{nid}/telemetry"
configs: list[tuple[str, dict]] = []
for sensor in _REPEATER_SENSORS:
cfg: dict[str, Any] = {
"name": sensor["name"],
"unique_id": f"meshcore_{nid}_{sensor['object_id']}",
"device": device,
"state_topic": state_topic,
"value_template": "{{ value_json." + sensor["field"] + " }}", # type: ignore[operator]
}
if sensor["device_class"]:
cfg["device_class"] = sensor["device_class"]
if sensor["state_class"]:
cfg["state_class"] = sensor["state_class"]
if sensor["unit"]:
cfg["unit_of_measurement"] = sensor["unit"]
if sensor.get("precision") is not None:
cfg["suggested_display_precision"] = sensor["precision"]
# 10 hours — margin over the 8-hour auto-collect cycle
cfg["expire_after"] = 36000
topic = f"homeassistant/sensor/meshcore_{nid}/{sensor['object_id']}/config"
configs.append((topic, cfg))
return configs
def _contact_tracker_discovery_config(
prefix: str,
pub_key: str,
name: str,
radio_key: str | None,
) -> tuple[str, dict]:
"""Build HA discovery config for a tracked contact's device_tracker."""
nid = _node_id(pub_key)
device = _device_payload(pub_key, name, "Node", via_device_key=radio_key)
topic = f"homeassistant/device_tracker/meshcore_{nid}/config"
cfg: dict[str, Any] = {
"name": name or pub_key[:12],
"unique_id": f"meshcore_{nid}_tracker",
"device": device,
"json_attributes_topic": f"{prefix}/{nid}/gps",
"source_type": "gps",
}
return topic, cfg
def _message_event_discovery_config(
prefix: str, radio_key: str, radio_name: str
) -> tuple[str, dict]:
"""Build HA discovery config for the message event entity."""
nid = _node_id(radio_key)
device = _device_payload(radio_key, radio_name, "Radio")
topic = f"homeassistant/event/meshcore_{nid}/messages/config"
cfg: dict[str, Any] = {
"name": "Messages",
"unique_id": f"meshcore_{nid}_messages",
"device": device,
"state_topic": f"{prefix}/{nid}/events/message",
"event_types": ["message_received"],
}
return topic, cfg
# ── Module class ──────────────────────────────────────────────────────────
def _config_to_settings(config: dict) -> SimpleNamespace:
return SimpleNamespace(
broker_host=config.get("broker_host", ""),
broker_port=config.get("broker_port", 1883),
username=config.get("username", ""),
password=config.get("password", ""),
use_tls=config.get("use_tls", False),
tls_insecure=config.get("tls_insecure", False),
)
class MqttHaModule(FanoutModule):
"""Home Assistant MQTT Discovery fanout module."""
def __init__(self, config_id: str, config: dict, *, name: str = "") -> None:
super().__init__(config_id, config, name=name)
self._publisher = _HaMqttPublisher()
self._publisher.set_integration_name(name or config_id)
self._publisher._on_connected_callback = self._publish_discovery
self._discovery_topics: list[str] = []
self._radio_key: str | None = None
self._radio_name: str | None = None
@property
def _prefix(self) -> str:
return self.config.get("topic_prefix", "meshcore")
@property
def _tracked_contacts(self) -> list[str]:
return self.config.get("tracked_contacts") or []
@property
def _tracked_repeaters(self) -> list[str]:
return self.config.get("tracked_repeaters") or []
# ── Lifecycle ──────────────────────────────────────────────────────
async def start(self) -> None:
self._seed_radio_identity_from_runtime()
settings = _config_to_settings(self.config)
await self._publisher.start(settings)
async def stop(self) -> None:
await self._remove_discovery()
await self._publisher.stop()
self._discovery_topics.clear()
# ── Discovery publishing ──────────────────────────────────────────
async def _publish_discovery(self) -> None:
"""Publish HA discovery configs and one-shot cached repeater state."""
if not self._radio_key:
# Don't publish discovery until we know the radio identity —
# the first health heartbeat will provide it and trigger this.
return
configs: list[tuple[str, dict]] = []
cached_repeater_states: list[tuple[str, dict[str, Any]]] = []
radio_name = self._radio_name or "MeshCore Radio"
configs.extend(_radio_discovery_configs(self._prefix, self._radio_key, radio_name))
# Tracked repeaters — resolve names and LPP sensors from DB best-effort
for pub_key in self._tracked_repeaters:
rname = await self._resolve_contact_name(pub_key)
configs.extend(
_repeater_discovery_configs(self._prefix, pub_key, rname, self._radio_key)
)
latest = await self._resolve_latest_telemetry(pub_key)
latest_data = latest.get("data", {}) if latest else {}
# Dynamic LPP sensor entities from last known telemetry snapshot
lpp_sensors = latest_data.get("lpp_sensors", [])
if lpp_sensors:
nid = _node_id(pub_key)
device = _device_payload(pub_key, rname, "Repeater", via_device_key=self._radio_key)
state_topic = f"{self._prefix}/{nid}/telemetry"
configs.extend(
_lpp_discovery_configs(self._prefix, pub_key, device, lpp_sensors, state_topic)
)
if latest_data:
cached_repeater_states.append(
(
f"{self._prefix}/{_node_id(pub_key)}/telemetry",
_repeater_telemetry_payload(latest_data),
)
)
# Tracked contacts — resolve names from DB best-effort
for pub_key in self._tracked_contacts:
cname = await self._resolve_contact_name(pub_key)
configs.append(
_contact_tracker_discovery_config(self._prefix, pub_key, cname, self._radio_key)
)
# Message event entity (namespaced to this radio)
configs.append(_message_event_discovery_config(self._prefix, self._radio_key, radio_name))
self._discovery_topics = [topic for topic, _ in configs]
for topic, payload in configs:
await self._publisher.publish(topic, payload, retain=True)
for topic, payload in cached_repeater_states:
# Replay cached state after discovery so newly created HA entities
# populate immediately, but do not retain it or HA will treat a
# broker reconnect as fresh telemetry and reset expire_after.
await self._publisher.publish(topic, payload)
logger.info(
"HA MQTT: published %d discovery configs (%d repeaters, %d contacts, %d cached telemetry states)",
len(configs),
len(self._tracked_repeaters),
len(self._tracked_contacts),
len(cached_repeater_states),
)
async def _clear_retained_topics(self, topics: list[str]) -> None:
"""Publish empty retained payloads to remove entries from broker."""
for topic in topics:
try:
if self._publisher._client:
await self._publisher._client.publish(topic, b"", retain=True)
except Exception:
pass # best-effort cleanup
async def _remove_discovery(self) -> None:
"""Publish empty retained payloads to remove all HA entities."""
if not self._publisher.connected or not self._discovery_topics:
return
await self._clear_retained_topics(self._discovery_topics)
@staticmethod
async def _resolve_contact_name(pub_key: str) -> str:
"""Look up a contact's display name, falling back to 12-char prefix."""
try:
from app.repository.contacts import ContactRepository
contact = await ContactRepository.get_by_key(pub_key)
if contact and contact.name:
return contact.name
except Exception:
pass
return pub_key[:12]
@staticmethod
async def _resolve_latest_telemetry(pub_key: str) -> dict | None:
"""Return the most recent telemetry row for a repeater, or None."""
try:
from app.repository.repeater_telemetry import RepeaterTelemetryRepository
return await RepeaterTelemetryRepository.get_latest(pub_key)
except Exception:
pass
return None
def _seed_radio_identity_from_runtime(self) -> None:
"""Best-effort bootstrap from the currently connected radio session."""
try:
from app.services.radio_runtime import radio_runtime
if not radio_runtime.is_connected:
return
mc = radio_runtime.meshcore
self_info = mc.self_info if mc is not None else None
if not isinstance(self_info, dict):
return
pub_key = self_info.get("public_key")
if isinstance(pub_key, str) and pub_key.strip():
self._radio_key = pub_key.strip().lower()
name = self_info.get("name")
if isinstance(name, str) and name.strip():
self._radio_name = name.strip()
except Exception:
logger.debug("HA MQTT: failed to seed radio identity from runtime", exc_info=True)
# ── Event handlers ────────────────────────────────────────────────
async def on_health(self, data: dict) -> None:
if not self._publisher.connected:
return
# Cache radio identity for discovery config generation
pub_key = data.get("public_key")
if pub_key:
new_name = data.get("name")
key_changed = pub_key != self._radio_key
name_changed = new_name and new_name != self._radio_name
if key_changed:
old_key = self._radio_key
old_topics = list(self._discovery_topics)
if old_topics:
await self._clear_retained_topics(old_topics)
self._discovery_topics.clear()
self._radio_key = pub_key
self._radio_name = new_name
# Remove stale discovery entries from the old identity (e.g.
# "unknown" placeholder from before the radio key was known),
# then re-publish with the real identity.
if old_key is not None and not old_topics:
await self._clear_retained_topics(
[t for t, _ in _radio_discovery_configs(self._prefix, old_key, "")]
)
await self._publish_discovery()
elif name_changed:
self._radio_name = new_name
await self._publish_discovery()
# Don't publish health state until we know the radio identity —
# otherwise we create a stale "unknown" device in HA.
if not self._radio_key:
return
nid = _node_id(self._radio_key)
payload: dict[str, Any] = {"connected": data.get("connected", False)}
for sensor in _RADIO_SENSORS:
field = sensor["field"]
if field is not None:
payload[field] = data.get(field)
# Normalize battery from millivolts to volts for consistency with
# repeater battery and the discovery config (unit: V, precision: 2).
battery_mv = data.get("battery_mv")
if battery_mv is not None:
payload["battery_volts"] = battery_mv / 1000.0
await self._publisher.publish(f"{self._prefix}/{nid}/health", payload)
async def on_contact(self, data: dict) -> None:
if not self._publisher.connected:
return
pub_key = data.get("public_key", "")
if pub_key not in self._tracked_contacts:
return
lat = data.get("lat")
lon = data.get("lon")
if lat is None or lon is None or (lat == 0.0 and lon == 0.0):
return
nid = _node_id(pub_key)
await self._publisher.publish(
f"{self._prefix}/{nid}/gps",
{
"latitude": lat,
"longitude": lon,
"gps_accuracy": 0,
"source_type": "gps",
},
)
async def on_telemetry(self, data: dict) -> None:
if not self._publisher.connected:
return
pub_key = data.get("public_key", "")
if pub_key not in self._tracked_repeaters:
return
nid = _node_id(pub_key)
# Publish the full telemetry dict — HA sensors use value_template
# to extract individual fields
payload = _repeater_telemetry_payload(data)
lpp_sensors: list[dict] = data.get("lpp_sensors", [])
rediscover = False
for sensor in lpp_sensors:
# Check if discovery for this sensor has been published yet
key = _lpp_sensor_key(sensor.get("type_name", "unknown"), sensor.get("channel", 0))
expected_topic = f"homeassistant/sensor/meshcore_{nid}/{key}/config"
if expected_topic not in self._discovery_topics:
rediscover = True
# If new LPP sensor types appeared, re-publish discovery *before*
# the state payload so HA already knows the entity when the value arrives.
if rediscover:
await self._publish_discovery()
await self._publisher.publish(f"{self._prefix}/{nid}/telemetry", payload)
async def on_message(self, data: dict) -> None:
if not self._publisher.connected or not self._radio_key:
return
text = get_fanout_message_text(data)
nid = _node_id(self._radio_key)
await self._publisher.publish(
f"{self._prefix}/{nid}/events/message",
{
"event_type": "message_received",
"sender_name": data.get("sender_name", ""),
"sender_key": data.get("sender_key", ""),
"text": text,
"conversation_key": data.get("conversation_key", ""),
"message_type": data.get("type", ""),
"channel_name": data.get("channel_name"),
"outgoing": data.get("outgoing", False),
},
)
# ── Status ────────────────────────────────────────────────────────
@property
def status(self) -> str:
if not self.config.get("broker_host"):
return "disconnected"
if self._publisher.last_error:
return "error"
return "connected" if self._publisher.connected else "disconnected"
@property
def last_error(self) -> str | None:
return self._publisher.last_error
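To make the discovery flow concrete: for a tracked repeater whose node id resolves to `a1b2c3d4e5f6`, the noise-floor sensor from `_repeater_discovery_configs` would be announced with roughly the following topic/payload (retained; the key and name values here are hypothetical, the shape follows the builder above):

```python
# Hypothetical retained HA discovery message for one repeater sensor,
# assembled per _repeater_discovery_configs with the default "meshcore"
# topic prefix. Keys and names are made up for illustration.
topic = "homeassistant/sensor/meshcore_a1b2c3d4e5f6/noise_floor/config"
payload = {
    "name": "Noise Floor",
    "unique_id": "meshcore_a1b2c3d4e5f6_noise_floor",
    "device": {
        "identifiers": ["meshcore_a1b2c3d4e5f6"],
        "name": "Hilltop Repeater",
        "manufacturer": "MeshCore",
        "model": "Repeater",
        "via_device": "meshcore_0011aabbccdd",
    },
    "state_topic": "meshcore/a1b2c3d4e5f6/telemetry",
    "value_template": "{{ value_json.noise_floor_dbm }}",
    "device_class": "signal_strength",
    "state_class": "measurement",
    "unit_of_measurement": "dBm",
    "suggested_display_precision": 0,
    "expire_after": 36000,
}
```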
+79 -11
View File
@@ -38,8 +38,17 @@ def _is_index_file(path: Path, index_file: Path) -> bool:
return path == index_file
def _resolve_request_origin(request: Request) -> str:
"""Resolve the external origin, honoring common reverse-proxy headers."""
def _resolve_request_base(request: Request) -> str:
"""Resolve the external base URL, honoring common reverse-proxy headers.
Returns a URL like ``https://host:8000/meshcore/`` (always trailing-slash)
so callers can append paths directly.
Recognized headers:
- ``X-Forwarded-Proto`` + ``X-Forwarded-Host``: override scheme and host.
- ``X-Forwarded-Prefix`` (or ``X-Forwarded-Path``): sub-path prefix added
by the proxy (e.g. ``/meshcore``).
"""
forwarded_proto = request.headers.get("x-forwarded-proto")
forwarded_host = request.headers.get("x-forwarded-host")
@@ -47,9 +56,20 @@ def _resolve_request_origin(request: Request) -> str:
proto = forwarded_proto.split(",")[0].strip()
host = forwarded_host.split(",")[0].strip()
if proto and host:
return f"{proto}://{host}"
origin = f"{proto}://{host}"
else:
origin = str(request.base_url).rstrip("/")
else:
origin = str(request.base_url).rstrip("/")
return str(request.base_url).rstrip("/")
# Sub-path prefix (e.g. /meshcore) communicated by the reverse proxy
prefix = (
(request.headers.get("x-forwarded-prefix") or request.headers.get("x-forwarded-path") or "")
.strip()
.rstrip("/")
)
return f"{origin}{prefix}/"
def _validate_frontend_dir(frontend_dir: Path, *, log_failures: bool = True) -> tuple[bool, Path]:
@@ -103,32 +123,80 @@ def register_frontend_static_routes(app: FastAPI, frontend_dir: Path) -> bool:
@app.get("/site.webmanifest")
async def serve_webmanifest(request: Request):
"""Serve a dynamic web manifest using the active request origin."""
origin = _resolve_request_origin(request)
"""Serve a dynamic web manifest using the active request base URL."""
base = _resolve_request_base(request)
manifest = {
"name": "RemoteTerm for MeshCore",
"short_name": "RemoteTerm",
"id": f"{origin}/",
"start_url": f"{origin}/",
"scope": f"{origin}/",
"id": base,
"start_url": base,
"scope": base,
"display": "standalone",
"display_override": ["window-controls-overlay", "standalone", "fullscreen"],
"theme_color": "#111419",
"background_color": "#111419",
# Icons are PNG-only on purpose. iOS Safari's manifest parser has
# historically been unreliable with SVG icons, and Android/Chrome
# PWA install flows prefer PNG for the install prompt.
#
# The "any" purpose entries are what iOS and desktop Chrome use
# for the home-screen / install icon. "maskable" entries are
# Android-only (adaptive icon with safe-zone crop); iOS does not
# apply the safe-zone mask, so a maskable-only icon set would
# render with excessive padding.
"icons": [
{
"src": f"{origin}/web-app-manifest-192x192.png",
"src": f"{base}favicon-96x96.png",
"sizes": "96x96",
"type": "image/png",
"purpose": "any",
},
{
"src": f"{base}apple-touch-icon.png",
"sizes": "180x180",
"type": "image/png",
"purpose": "any",
},
{
"src": f"{base}favicon-256x256.png",
"sizes": "256x256",
"type": "image/png",
"purpose": "any",
},
{
"src": f"{base}web-app-manifest-192x192.png",
"sizes": "192x192",
"type": "image/png",
"purpose": "maskable",
},
{
"src": f"{origin}/web-app-manifest-512x512.png",
"src": f"{base}web-app-manifest-512x512.png",
"sizes": "512x512",
"type": "image/png",
"purpose": "maskable",
},
],
"screenshots": [
{
"src": f"{base}screenshot-wide.png",
"sizes": "1367x909",
"type": "image/png",
"form_factor": "wide",
"label": "RemoteTerm desktop view",
},
{
"src": f"{base}screenshot-mobile.png",
"sizes": "1170x2532",
"type": "image/png",
"label": "RemoteTerm mobile view",
},
{
"src": f"{base}screenshot-mobile-2.png",
"sizes": "750x1334",
"type": "image/png",
"label": "RemoteTerm mobile conversation",
},
],
}
return JSONResponse(
manifest,
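To illustrate `_resolve_request_base` above: a request proxied with `X-Forwarded-Proto: https`, `X-Forwarded-Host: mesh.example.com`, and `X-Forwarded-Prefix: /meshcore` resolves to `https://mesh.example.com/meshcore/`, and every manifest URL is built by appending to that base. A stand-alone sketch of the same resolution logic, under illustrative header values (the header names match the diff; the helper name is invented):

```python
# Stand-alone sketch of the base-URL resolution shown above.
def resolve_base(headers: dict[str, str], fallback_base: str) -> str:
    proto = (headers.get("x-forwarded-proto") or "").split(",")[0].strip()
    host = (headers.get("x-forwarded-host") or "").split(",")[0].strip()
    origin = f"{proto}://{host}" if proto and host else fallback_base.rstrip("/")
    prefix = (
        (headers.get("x-forwarded-prefix") or headers.get("x-forwarded-path") or "")
        .strip()
        .rstrip("/")
    )
    return f"{origin}{prefix}/"


# resolve_base(
#     {"x-forwarded-proto": "https",
#      "x-forwarded-host": "mesh.example.com",
#      "x-forwarded-prefix": "/meshcore"},
#     "http://127.0.0.1:8000",
# )  # -> "https://mesh.example.com/meshcore/"
```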
+1 -1
View File
@@ -24,7 +24,7 @@ logger = logging.getLogger(__name__)
NO_EVENT_RECEIVED_GUIDANCE = (
"Radio command channel is unresponsive (no_event_received). Ensure that your firmware is not "
"incompatible, outdated, or wrong-mode (e.g. repeater, not client), and that"
"incompatible, outdated, or wrong-mode (e.g. repeater, not client), and that "
"serial/TCP/BLE connectivity is successful (try another app and see if that one works?). The app cannot proceed because it cannot "
"issue commands to the radio."
)
+50 -4
View File
@@ -1,5 +1,41 @@
import asyncio
import logging
import sys
# ---------------------------------------------------------------------------
# Windows event-loop advisory for MQTT fanout
# ---------------------------------------------------------------------------
# On Windows, uvicorn's default event loop (ProactorEventLoop) does not
# implement add_reader()/add_writer(), which paho-mqtt (via aiomqtt) requires.
# We cannot fix this from inside the app — the loop is already created by the
# time this module is imported. Log a prominent warning so Windows operators
# who want MQTT know to add ``--loop none`` to their uvicorn command.
# ---------------------------------------------------------------------------
if sys.platform == "win32":
import asyncio as _asyncio
_loop = _asyncio.get_event_loop()
_is_proactor = type(_loop).__name__ == "ProactorEventLoop"
if _is_proactor:
print(
"\n" + "!" * 78 + "\n"
" NOTE FOR WINDOWS USERS\n" + "!" * 78 + "\n"
"\n"
" The running event loop is ProactorEventLoop, which is not\n"
" compatible with MQTT fanout (aiomqtt / paho-mqtt).\n"
"\n"
" If you use MQTT integrations, restart with --loop none:\n"
"\n"
" uv run uvicorn app.main:app \033[1m--loop none\033[0m"
" [... other options ...]\n"
"\n"
" Everything else works fine as-is.\n"
"\n" + "!" * 78 + "\n",
file=sys.stderr,
flush=True,
)
del _loop, _is_proactor
import asyncio
from contextlib import asynccontextmanager
from pathlib import Path
@@ -31,6 +67,7 @@ from app.routers import (
health,
messages,
packets,
push,
radio,
read_state,
repeaters,
@@ -40,8 +77,8 @@ from app.routers import (
ws,
)
from app.security import add_optional_basic_auth_middleware
from app.services.radio_noise_floor import start_noise_floor_sampling, stop_noise_floor_sampling
from app.services.radio_runtime import radio_runtime as radio_manager
from app.services.radio_stats import start_radio_stats_sampling, stop_radio_stats_sampling
from app.version_info import get_app_build_info
setup_logging()
@@ -66,13 +103,21 @@ async def lifespan(app: FastAPI):
await db.connect()
logger.info("Database connected")
# Initialize VAPID keys for Web Push (generates on first run)
from app.push.vapid import ensure_vapid_keys
try:
await ensure_vapid_keys()
except Exception:
logger.warning("Failed to initialize VAPID keys for Web Push", exc_info=True)
# Ensure default channels exist in the database even before the radio
# connects. Without this, a fresh or disconnected instance would return
# zero channels from GET /channels until the first successful radio sync.
from app.radio_sync import ensure_default_channels
await ensure_default_channels()
await start_noise_floor_sampling()
await start_radio_stats_sampling()
# Always start connection monitor (even if initial connection failed)
await radio_manager.start_connection_monitor()
@@ -101,7 +146,7 @@ async def lifespan(app: FastAPI):
await radio_manager.stop_connection_monitor()
await stop_background_contact_reconciliation()
await stop_message_polling()
await stop_noise_floor_sampling()
await stop_radio_stats_sampling()
await stop_periodic_advert()
await stop_periodic_sync()
await stop_telemetry_collect()
@@ -149,6 +194,7 @@ app.include_router(packets.router, prefix="/api")
app.include_router(read_state.router, prefix="/api")
app.include_router(settings.router, prefix="/api")
app.include_router(statistics.router, prefix="/api")
app.include_router(push.router, prefix="/api")
app.include_router(ws.router, prefix="/api")
# Serve frontend static files in production
-3309
View File
File diff suppressed because it is too large.
+38
View File
@@ -0,0 +1,38 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add last_read_at column to contacts and channels tables.
This enables server-side read state tracking, replacing the localStorage
approach for consistent read state across devices.
ALTER TABLE ADD COLUMN is safe - it preserves existing data and handles
the "column already exists" case gracefully.
"""
# Add to contacts table
try:
await conn.execute("ALTER TABLE contacts ADD COLUMN last_read_at INTEGER")
logger.debug("Added last_read_at to contacts table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("contacts.last_read_at already exists, skipping")
else:
raise
# Add to channels table
try:
await conn.execute("ALTER TABLE channels ADD COLUMN last_read_at INTEGER")
logger.debug("Added last_read_at to channels table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("channels.last_read_at already exists, skipping")
else:
raise
await conn.commit()
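Each of the new migration files follows the same contract: an idempotent `async def migrate(conn)` that tolerates already-applied changes (duplicate-column and missing-column errors are swallowed) and commits at the end. Purely for illustration, a hypothetical runner (module names and ordering scheme are assumptions; the project's real migration runner is not part of this diff) could apply them in order like so:

```python
# Hypothetical migration-runner sketch; only the `async def migrate(conn)`
# contract comes from the files in this changeset.
import importlib

import aiosqlite

MIGRATIONS = [
    "migrations.m001_add_last_read_at",
    "migrations.m002_drop_decrypt_attempt_columns",
]


async def run_migrations(db_path: str) -> None:
    async with aiosqlite.connect(db_path) as conn:
        for module_name in MIGRATIONS:
            module = importlib.import_module(module_name)
            await module.migrate(conn)  # each migration commits itself
```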
@@ -0,0 +1,32 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop unused decrypt_attempts and last_attempt columns from raw_packets.
These columns were added for a retry-limiting feature that was never implemented.
They are written to but never read, so we can safely remove them.
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
we silently skip (the columns will remain but are harmless).
"""
for column in ["decrypt_attempts", "last_attempt"]:
try:
await conn.execute(f"ALTER TABLE raw_packets DROP COLUMN {column}")
logger.debug("Dropped %s from raw_packets table", column)
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("raw_packets.%s already dropped, skipping", column)
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, %s column will remain", column)
else:
raise
await conn.commit()
@@ -0,0 +1,49 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop the decrypted column and update indexes.
The decrypted column is redundant with message_id - a packet is decrypted
iff message_id IS NOT NULL. We replace the decrypted index with a message_id index.
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
we silently skip the column drop but still update the index.
"""
# First, drop the old index on decrypted (safe even if it doesn't exist)
try:
await conn.execute("DROP INDEX IF EXISTS idx_raw_packets_decrypted")
logger.debug("Dropped idx_raw_packets_decrypted index")
except aiosqlite.OperationalError:
pass # Index didn't exist
# Create new index on message_id for efficient undecrypted packet queries
try:
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_raw_packets_message_id ON raw_packets(message_id)"
)
logger.debug("Created idx_raw_packets_message_id index")
except aiosqlite.OperationalError as e:
if "already exists" not in str(e).lower():
raise
# Try to drop the decrypted column
try:
await conn.execute("ALTER TABLE raw_packets DROP COLUMN decrypted")
logger.debug("Dropped decrypted from raw_packets table")
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("raw_packets.decrypted already dropped, skipping")
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, decrypted column will remain")
else:
raise
await conn.commit()
@@ -0,0 +1,24 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add payload_hash column to raw_packets for deduplication.
This column stores the SHA-256 hash of the packet payload (excluding routing/path info).
It will be used with a unique index to prevent duplicate packets from being stored.
"""
try:
await conn.execute("ALTER TABLE raw_packets ADD COLUMN payload_hash TEXT")
logger.debug("Added payload_hash column to raw_packets table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("raw_packets.payload_hash already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,126 @@
import logging
from hashlib import sha256
import aiosqlite
logger = logging.getLogger(__name__)
def _extract_payload_for_hash(raw_packet: bytes) -> bytes | None:
"""
Extract payload from a raw packet for hashing using canonical framing validation.
Returns the payload bytes, or None if packet is malformed.
"""
from app.path_utils import parse_packet_envelope
envelope = parse_packet_envelope(raw_packet)
return envelope.payload if envelope is not None else None
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Backfill payload_hash for existing packets and remove duplicates.
This may take a while for large databases. Progress is logged.
After backfilling, a unique index is created to prevent future duplicates.
"""
# Get count first
cursor = await conn.execute("SELECT COUNT(*) FROM raw_packets WHERE payload_hash IS NULL")
row = await cursor.fetchone()
total = row[0] if row else 0
if total == 0:
logger.debug("No packets need hash backfill")
else:
logger.info("Backfilling payload hashes for %d packets. This may take a while...", total)
# Process in batches to avoid memory issues
batch_size = 1000
processed = 0
duplicates_deleted = 0
# Track seen hashes to identify duplicates (keep oldest = lowest ID)
seen_hashes: dict[str, int] = {} # hash -> oldest packet ID
# First pass: compute hashes and identify duplicates
cursor = await conn.execute("SELECT id, data FROM raw_packets ORDER BY id ASC")
packets_to_update: list[tuple[str, int]] = [] # (hash, id)
ids_to_delete: list[int] = []
while True:
rows = await cursor.fetchmany(batch_size)
if not rows:
break
for row in rows:
packet_id = row[0]
packet_data = bytes(row[1])
# Extract payload and compute hash
payload = _extract_payload_for_hash(packet_data)
if payload:
payload_hash = sha256(payload).hexdigest()
else:
# For malformed packets, hash the full data
payload_hash = sha256(packet_data).hexdigest()
if payload_hash in seen_hashes:
# Duplicate - mark for deletion (we keep the older one)
ids_to_delete.append(packet_id)
duplicates_deleted += 1
else:
# New hash - keep this packet
seen_hashes[payload_hash] = packet_id
packets_to_update.append((payload_hash, packet_id))
processed += 1
if processed % 10000 == 0:
logger.info("Processed %d/%d packets...", processed, total)
# Second pass: update hashes for packets we're keeping
total_updates = len(packets_to_update)
logger.info("Updating %d packets with hashes...", total_updates)
for idx, (payload_hash, packet_id) in enumerate(packets_to_update, 1):
await conn.execute(
"UPDATE raw_packets SET payload_hash = ? WHERE id = ?",
(payload_hash, packet_id),
)
if idx % 10000 == 0:
logger.info("Updated %d/%d packets...", idx, total_updates)
# Third pass: delete duplicates
if ids_to_delete:
total_deletes = len(ids_to_delete)
logger.info("Removing %d duplicate packets...", total_deletes)
deleted_count = 0
# Delete in batches to avoid "too many SQL variables" error
for i in range(0, len(ids_to_delete), 500):
batch = ids_to_delete[i : i + 500]
placeholders = ",".join("?" * len(batch))
await conn.execute(f"DELETE FROM raw_packets WHERE id IN ({placeholders})", batch)
deleted_count += len(batch)
if deleted_count % 10000 < 500: # Log roughly every 10k
logger.info("Removed %d/%d duplicates...", deleted_count, total_deletes)
await conn.commit()
logger.info(
"Hash backfill complete: %d packets updated, %d duplicates removed",
len(packets_to_update),
duplicates_deleted,
)
# Create unique index on payload_hash (this enforces uniqueness going forward)
try:
await conn.execute(
"CREATE UNIQUE INDEX IF NOT EXISTS idx_raw_packets_payload_hash "
"ON raw_packets(payload_hash)"
)
logger.debug("Created unique index on payload_hash")
except aiosqlite.OperationalError as e:
if "already exists" not in str(e).lower():
raise
await conn.commit()
@@ -0,0 +1,42 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Replace path_len INTEGER column with path TEXT column in messages table.
The path column stores the hex-encoded routing path bytes. Path length can
be derived from the hex string (2 chars per byte = 1 hop).
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
we silently skip the drop (the column will remain but is unused).
"""
# First, add the new path column
try:
await conn.execute("ALTER TABLE messages ADD COLUMN path TEXT")
logger.debug("Added path column to messages table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("messages.path already exists, skipping")
else:
raise
# Try to drop the old path_len column
try:
await conn.execute("ALTER TABLE messages DROP COLUMN path_len")
logger.debug("Dropped path_len from messages table")
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("messages.path_len already dropped, skipping")
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, path_len column will remain")
else:
raise
await conn.commit()
@@ -0,0 +1,96 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
def _extract_path_from_packet(raw_packet: bytes) -> str | None:
"""
Extract path hex string from a raw packet using canonical framing validation.
Returns the path as a hex string, or None if packet is malformed.
"""
from app.path_utils import parse_packet_envelope
envelope = parse_packet_envelope(raw_packet)
return envelope.path.hex() if envelope is not None else None
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Backfill path column for messages that have linked raw_packets.
For each message with a linked raw_packet (via message_id), extract the
path from the raw packet and update the message.
Only updates incoming messages (outgoing=0) since outgoing messages
don't have meaningful path data.
"""
# Get count of messages that need backfill
cursor = await conn.execute(
"""
SELECT COUNT(*)
FROM messages m
JOIN raw_packets rp ON rp.message_id = m.id
WHERE m.path IS NULL AND m.outgoing = 0
"""
)
row = await cursor.fetchone()
total = row[0] if row else 0
if total == 0:
logger.debug("No messages need path backfill")
return
logger.info("Backfilling path for %d messages. This may take a while...", total)
# Process in batches
batch_size = 1000
processed = 0
updated = 0
cursor = await conn.execute(
"""
SELECT m.id, rp.data
FROM messages m
JOIN raw_packets rp ON rp.message_id = m.id
WHERE m.path IS NULL AND m.outgoing = 0
ORDER BY m.id ASC
"""
)
updates: list[tuple[str, int]] = [] # (path, message_id)
while True:
rows = await cursor.fetchmany(batch_size)
if not rows:
break
for row in rows:
message_id = row[0]
packet_data = bytes(row[1])
path_hex = _extract_path_from_packet(packet_data)
if path_hex is not None:
updates.append((path_hex, message_id))
processed += 1
if processed % 10000 == 0:
logger.info("Processed %d/%d messages...", processed, total)
# Apply updates in batches
if updates:
logger.info("Updating %d messages with path data...", len(updates))
for idx, (path_hex, message_id) in enumerate(updates, 1):
await conn.execute(
"UPDATE messages SET path = ? WHERE id = ?",
(path_hex, message_id),
)
updated += 1
if idx % 10000 == 0:
logger.info("Updated %d/%d messages...", idx, len(updates))
await conn.commit()
logger.info("Path backfill complete: %d messages updated", updated)
@@ -0,0 +1,66 @@
import json
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Convert path TEXT column to paths TEXT column storing JSON array.
The new format stores multiple paths as a JSON array of objects:
[{"path": "1A2B", "received_at": 1234567890}, ...]
This enables tracking multiple delivery paths for the same message
(e.g., when a message is received via different repeater routes).
"""
# First, add the new paths column
try:
await conn.execute("ALTER TABLE messages ADD COLUMN paths TEXT")
logger.debug("Added paths column to messages table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("messages.paths already exists, skipping column add")
else:
raise
# Migrate existing path data to paths array format
cursor = await conn.execute(
"SELECT id, path, received_at FROM messages WHERE path IS NOT NULL AND paths IS NULL"
)
rows = list(await cursor.fetchall())
if rows:
logger.info("Converting %d messages from path to paths array format...", len(rows))
for row in rows:
message_id = row[0]
old_path = row[1]
received_at = row[2]
# Convert single path to array format
paths_json = json.dumps([{"path": old_path, "received_at": received_at}])
await conn.execute(
"UPDATE messages SET paths = ? WHERE id = ?",
(paths_json, message_id),
)
logger.info("Converted %d messages to paths array format", len(rows))
# Try to drop the old path column (SQLite 3.35.0+ only)
try:
await conn.execute("ALTER TABLE messages DROP COLUMN path")
logger.debug("Dropped path column from messages table")
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("messages.path already dropped, skipping")
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, path column will remain")
else:
raise
await conn.commit()
@@ -0,0 +1,41 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Create app_settings table for persistent application preferences.
This table stores:
- max_radio_contacts: Configured radio contact capacity baseline for maintenance thresholds
- favorites: JSON array of favorite conversations [{type, id}, ...]
- auto_decrypt_dm_on_advert: Whether to attempt historical DM decryption on new contact
- sidebar_sort_order: 'recent' or 'alpha' for sidebar sorting
- last_message_times: JSON object mapping conversation keys to timestamps
- preferences_migrated: Flag to track if localStorage has been migrated
The table uses a single-row pattern (id=1) for simplicity.
"""
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS app_settings (
id INTEGER PRIMARY KEY CHECK (id = 1),
max_radio_contacts INTEGER DEFAULT 200,
favorites TEXT DEFAULT '[]',
auto_decrypt_dm_on_advert INTEGER DEFAULT 1,
sidebar_sort_order TEXT DEFAULT 'recent',
last_message_times TEXT DEFAULT '{}',
preferences_migrated INTEGER DEFAULT 0
)
"""
)
# Initialize with default row (use only the id column so this works
# regardless of which columns exist — defaults fill the rest).
await conn.execute("INSERT OR IGNORE INTO app_settings (id) VALUES (1)")
await conn.commit()
logger.debug("Created app_settings table with default values")
@@ -0,0 +1,23 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add advert_interval column to app_settings table.
This enables configurable periodic advertisement interval (default 0 = disabled).
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN advert_interval INTEGER DEFAULT 0")
logger.debug("Added advert_interval column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("advert_interval column already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,24 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add last_advert_time column to app_settings table.
This tracks when the last advertisement was sent, ensuring we never
advertise faster than the configured advert_interval.
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN last_advert_time INTEGER DEFAULT 0")
logger.debug("Added last_advert_time column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("last_advert_time column already exists, skipping")
else:
raise
await conn.commit()
+33
View File
@@ -0,0 +1,33 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add bot_enabled and bot_code columns to app_settings table.
This enables user-defined Python code to be executed when messages are received,
allowing for custom bot responses.
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN bot_enabled INTEGER DEFAULT 0")
logger.debug("Added bot_enabled column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("bot_enabled column already exists, skipping")
else:
raise
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN bot_code TEXT DEFAULT ''")
logger.debug("Added bot_code column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("bot_code column already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,76 @@
import json
import logging
import uuid
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Convert single bot_enabled/bot_code to multi-bot format.
Adds a 'bots' TEXT column storing a JSON array of bot configs:
[{"id": "uuid", "name": "Bot 1", "enabled": true, "code": "..."}]
If existing bot_code is non-empty OR bot_enabled is true, migrates
to a single bot named "Bot 1". Otherwise, creates empty array.
Attempts to drop the old bot_enabled and bot_code columns.
"""
# Add new bots column
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN bots TEXT DEFAULT '[]'")
logger.debug("Added bots column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("bots column already exists, skipping")
else:
raise
# Migrate existing bot data
cursor = await conn.execute("SELECT bot_enabled, bot_code FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
if row:
bot_enabled = bool(row[0]) if row[0] is not None else False
bot_code = row[1] or ""
# If there's existing bot data, migrate it
if bot_code.strip() or bot_enabled:
bots = [
{
"id": str(uuid.uuid4()),
"name": "Bot 1",
"enabled": bot_enabled,
"code": bot_code,
}
]
bots_json = json.dumps(bots)
logger.info("Migrating existing bot to multi-bot format: enabled=%s", bot_enabled)
else:
bots_json = "[]"
await conn.execute(
"UPDATE app_settings SET bots = ? WHERE id = 1",
(bots_json,),
)
# Try to drop old columns (SQLite 3.35.0+ only)
for column in ["bot_enabled", "bot_code"]:
try:
await conn.execute(f"ALTER TABLE app_settings DROP COLUMN {column}")
logger.debug("Dropped %s column from app_settings", column)
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("app_settings.%s already dropped, skipping", column)
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, %s column will remain", column)
else:
raise
await conn.commit()
@@ -0,0 +1,152 @@
import json
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Lowercase all contact public keys and related data for case-insensitive matching.
Updates:
- contacts.public_key (PRIMARY KEY) via temp table swap
- messages.conversation_key for PRIV messages
- app_settings.favorites (contact IDs)
- app_settings.last_message_times (contact- prefixed keys)
Handles case collisions by keeping the most-recently-seen contact.
"""
# 1. Lowercase message conversation keys for private messages
try:
await conn.execute(
"UPDATE messages SET conversation_key = lower(conversation_key) WHERE type = 'PRIV'"
)
logger.debug("Lowercased PRIV message conversation_keys")
except aiosqlite.OperationalError as e:
if "no such table" in str(e).lower():
logger.debug("messages table does not exist yet, skipping conversation_key lowercase")
else:
raise
# 2. Check if contacts table exists before proceeding
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if not await cursor.fetchone():
logger.debug("contacts table does not exist yet, skipping key lowercase")
await conn.commit()
return
# 3. Handle contacts table - check for case collisions first
cursor = await conn.execute(
"SELECT lower(public_key) as lk, COUNT(*) as cnt "
"FROM contacts GROUP BY lower(public_key) HAVING COUNT(*) > 1"
)
collisions = list(await cursor.fetchall())
if collisions:
logger.warning(
"Found %d case-colliding contact groups, keeping most-recently-seen",
len(collisions),
)
for row in collisions:
lower_key = row[0]
# Delete all but the most recently seen
await conn.execute(
"""DELETE FROM contacts WHERE public_key IN (
SELECT public_key FROM contacts
WHERE lower(public_key) = ?
ORDER BY COALESCE(last_seen, 0) DESC
LIMIT -1 OFFSET 1
)""",
(lower_key,),
)
# 4. Rebuild contacts with lowercased keys
# Get the actual column names from the table (handles different schema versions)
cursor = await conn.execute("PRAGMA table_info(contacts)")
columns_info = await cursor.fetchall()
all_columns = [col[1] for col in columns_info] # col[1] is column name
# Build column lists, lowering public_key
select_cols = ", ".join(f"lower({c})" if c == "public_key" else c for c in all_columns)
col_defs = []
for col in columns_info:
name, col_type, _notnull, default, pk = col[1], col[2], col[3], col[4], col[5]
parts = [name, col_type or "TEXT"]
if pk:
parts.append("PRIMARY KEY")
if default is not None:
parts.append(f"DEFAULT {default}")
col_defs.append(" ".join(parts))
create_sql = f"CREATE TABLE contacts_new ({', '.join(col_defs)})"
await conn.execute(create_sql)
await conn.execute(f"INSERT INTO contacts_new SELECT {select_cols} FROM contacts")
await conn.execute("DROP TABLE contacts")
await conn.execute("ALTER TABLE contacts_new RENAME TO contacts")
# Recreate the on_radio index (if column exists)
if "on_radio" in all_columns:
await conn.execute("CREATE INDEX IF NOT EXISTS idx_contacts_on_radio ON contacts(on_radio)")
# 5. Lowercase contact IDs in favorites JSON (if app_settings exists)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if not await cursor.fetchone():
await conn.commit()
logger.info("Lowercased all contact public keys (no app_settings table)")
return
cursor = await conn.execute("SELECT favorites FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
if row and row[0]:
try:
favorites = json.loads(row[0])
updated = False
for fav in favorites:
if fav.get("type") == "contact" and fav.get("id"):
new_id = fav["id"].lower()
if new_id != fav["id"]:
fav["id"] = new_id
updated = True
if updated:
await conn.execute(
"UPDATE app_settings SET favorites = ? WHERE id = 1",
(json.dumps(favorites),),
)
logger.debug("Lowercased contact IDs in favorites")
except (json.JSONDecodeError, TypeError):
pass
# 6. Lowercase contact keys in last_message_times JSON
cursor = await conn.execute("SELECT last_message_times FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
if row and row[0]:
try:
times = json.loads(row[0])
new_times = {}
updated = False
for key, val in times.items():
if key.startswith("contact-"):
new_key = "contact-" + key[8:].lower()
if new_key != key:
updated = True
new_times[new_key] = val
else:
new_times[key] = val
if updated:
await conn.execute(
"UPDATE app_settings SET last_message_times = ? WHERE id = 1",
(json.dumps(new_times),),
)
logger.debug("Lowercased contact keys in last_message_times")
except (json.JSONDecodeError, TypeError):
pass
await conn.commit()
logger.info("Lowercased all contact public keys")
@@ -0,0 +1,44 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Fix NULL sender_timestamp values and add null-safe dedup index.
1. Set sender_timestamp = received_at for any messages with NULL sender_timestamp
2. Create a null-safe unique index as belt-and-suspenders protection
"""
# Check if messages table exists
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if not await cursor.fetchone():
logger.debug("messages table does not exist yet, skipping NULL sender_timestamp fix")
await conn.commit()
return
# Backfill NULL sender_timestamps with received_at
cursor = await conn.execute(
"UPDATE messages SET sender_timestamp = received_at WHERE sender_timestamp IS NULL"
)
if cursor.rowcount > 0:
logger.info("Backfilled %d messages with NULL sender_timestamp", cursor.rowcount)
# Try to create null-safe dedup index (may fail if existing duplicates exist)
try:
await conn.execute(
"""CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_dedup_null_safe
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))"""
)
logger.debug("Created null-safe dedup index")
except aiosqlite.IntegrityError:
logger.warning(
"Could not create null-safe dedup index due to existing duplicates - "
"the application-level dedup will handle these"
)
await conn.commit()
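A short sketch (stdlib sqlite3, in-memory, invented rows) of why the COALESCE expression makes the unique index null-safe: bare NULLs compare as distinct under UNIQUE, but COALESCE(sender_timestamp, 0) maps them all to one value, so INSERT OR IGNORE drops the repeat.
import sqlite3
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE messages (type TEXT, conversation_key TEXT, text TEXT, sender_timestamp INTEGER)"
)
db.execute(
    "CREATE UNIQUE INDEX dedup ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))"
)
row = ("CHAN", "chan-1", "hello", None)
db.execute("INSERT OR IGNORE INTO messages VALUES (?, ?, ?, ?)", row)
db.execute("INSERT OR IGNORE INTO messages VALUES (?, ?, ?, ?)", row)  # silently dropped
print(db.execute("SELECT COUNT(*) FROM messages").fetchone()[0])  # 1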
@@ -0,0 +1,26 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add experimental_channel_double_send column to app_settings table.
When enabled, channel sends perform an immediate byte-perfect duplicate send
using the same timestamp bytes.
"""
try:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN experimental_channel_double_send INTEGER DEFAULT 0"
)
logger.debug("Added experimental_channel_double_send column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("experimental_channel_double_send column already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,31 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop experimental_channel_double_send column from app_settings.
This feature is replaced by a user-triggered resend button.
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
we silently skip (the column will remain but is unused).
"""
try:
await conn.execute("ALTER TABLE app_settings DROP COLUMN experimental_channel_double_send")
logger.debug("Dropped experimental_channel_double_send from app_settings")
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("app_settings.experimental_channel_double_send already dropped, skipping")
elif "syntax error" in error_msg or "drop column" in error_msg:
logger.debug(
"SQLite doesn't support DROP COLUMN, "
"experimental_channel_double_send column will remain"
)
else:
raise
await conn.commit()
@@ -0,0 +1,64 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop the UNIQUE constraint on raw_packets.data via table rebuild.
This constraint creates a large autoindex (~30 MB on a 340K-row database) that
stores a complete copy of every raw packet BLOB in a B-tree. Deduplication is
already handled by the unique index on payload_hash, making the data UNIQUE
constraint pure storage overhead.
Requires table recreation since SQLite doesn't support DROP CONSTRAINT.
"""
# Check if the autoindex exists (indicates UNIQUE constraint on data)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='index' "
"AND name='sqlite_autoindex_raw_packets_1'"
)
if not await cursor.fetchone():
logger.debug("raw_packets.data UNIQUE constraint already absent, skipping rebuild")
await conn.commit()
return
logger.info("Rebuilding raw_packets table to remove UNIQUE(data) constraint...")
# Get current columns from the existing table
cursor = await conn.execute("PRAGMA table_info(raw_packets)")
old_cols = {col[1] for col in await cursor.fetchall()}
# Target schema without UNIQUE on data
await conn.execute("""
CREATE TABLE raw_packets_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp INTEGER NOT NULL,
data BLOB NOT NULL,
message_id INTEGER,
payload_hash TEXT,
FOREIGN KEY (message_id) REFERENCES messages(id)
)
""")
# Copy only columns that exist in both old and new tables
new_cols = {"id", "timestamp", "data", "message_id", "payload_hash"}
copy_cols = ", ".join(sorted(c for c in new_cols if c in old_cols))
await conn.execute(
f"INSERT INTO raw_packets_new ({copy_cols}) SELECT {copy_cols} FROM raw_packets"
)
await conn.execute("DROP TABLE raw_packets")
await conn.execute("ALTER TABLE raw_packets_new RENAME TO raw_packets")
# Recreate indexes
await conn.execute(
"CREATE UNIQUE INDEX idx_raw_packets_payload_hash ON raw_packets(payload_hash)"
)
await conn.execute("CREATE INDEX idx_raw_packets_message_id ON raw_packets(message_id)")
await conn.commit()
logger.info("raw_packets table rebuilt without UNIQUE(data) constraint")
@@ -0,0 +1,83 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop the UNIQUE(type, conversation_key, text, sender_timestamp) constraint on messages.
This constraint creates a large autoindex (~13 MB on a 112K-row database) that
stores the full message text in a B-tree. The idx_messages_dedup_null_safe unique
index already provides identical dedup protection; no rows have NULL
sender_timestamp since migration 15 backfilled them all.
INSERT OR IGNORE still works correctly because it checks all unique constraints,
including unique indexes like idx_messages_dedup_null_safe.
Requires table recreation since SQLite doesn't support DROP CONSTRAINT.
"""
# Check if the autoindex exists (indicates UNIQUE constraint)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='index' AND name='sqlite_autoindex_messages_1'"
)
if not await cursor.fetchone():
logger.debug("messages UNIQUE constraint already absent, skipping rebuild")
await conn.commit()
return
logger.info("Rebuilding messages table to remove UNIQUE constraint...")
# Get current columns from the existing table
cursor = await conn.execute("PRAGMA table_info(messages)")
old_cols = {col[1] for col in await cursor.fetchall()}
# Target schema without the UNIQUE table constraint
await conn.execute("""
CREATE TABLE messages_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
type TEXT NOT NULL,
conversation_key TEXT NOT NULL,
text TEXT NOT NULL,
sender_timestamp INTEGER,
received_at INTEGER NOT NULL,
txt_type INTEGER DEFAULT 0,
signature TEXT,
outgoing INTEGER DEFAULT 0,
acked INTEGER DEFAULT 0,
paths TEXT
)
""")
# Copy only columns that exist in both old and new tables
new_cols = {
"id",
"type",
"conversation_key",
"text",
"sender_timestamp",
"received_at",
"txt_type",
"signature",
"outgoing",
"acked",
"paths",
}
copy_cols = ", ".join(sorted(c for c in new_cols if c in old_cols))
await conn.execute(f"INSERT INTO messages_new ({copy_cols}) SELECT {copy_cols} FROM messages")
await conn.execute("DROP TABLE messages")
await conn.execute("ALTER TABLE messages_new RENAME TO messages")
# Recreate indexes
await conn.execute("CREATE INDEX idx_messages_conversation ON messages(type, conversation_key)")
await conn.execute("CREATE INDEX idx_messages_received ON messages(received_at)")
await conn.execute(
"""CREATE UNIQUE INDEX idx_messages_dedup_null_safe
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))"""
)
await conn.commit()
logger.info("messages table rebuilt without UNIQUE constraint")
@@ -0,0 +1,45 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Enable WAL journal mode and incremental auto-vacuum.
WAL (Write-Ahead Logging):
- Faster writes: appends to a WAL file instead of rewriting the main DB
- Concurrent reads during writes (readers don't block writers)
- No journal file create/delete churn on every commit
Incremental auto-vacuum:
- Pages freed by DELETE become reclaimable without a full VACUUM
- Call PRAGMA incremental_vacuum to reclaim on demand
- Less overhead than FULL auto-vacuum (which reorganizes on every commit)
auto_vacuum mode change requires a VACUUM to restructure the file.
The VACUUM is performed before switching to WAL so it runs under the
current journal mode; WAL is then set as the final step.
"""
# Check current auto_vacuum mode
cursor = await conn.execute("PRAGMA auto_vacuum")
row = await cursor.fetchone()
current_auto_vacuum = row[0] if row else 0
if current_auto_vacuum != 2: # 2 = INCREMENTAL
logger.info("Switching auto_vacuum to INCREMENTAL (requires VACUUM)...")
await conn.execute("PRAGMA auto_vacuum = INCREMENTAL")
await conn.execute("VACUUM")
logger.info("VACUUM complete, auto_vacuum set to INCREMENTAL")
else:
logger.debug("auto_vacuum already INCREMENTAL, skipping VACUUM")
# Enable WAL mode (idempotent — returns current mode)
cursor = await conn.execute("PRAGMA journal_mode = WAL")
row = await cursor.fetchone()
mode = row[0] if row else "unknown"
logger.info("Journal mode set to %s", mode)
await conn.commit()
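A sketch of the kind of maintenance task this mode combination enables; it is not part of the migration, and the schedule, page budget, and database path are placeholders.
import asyncio
import aiosqlite
async def maintenance(db_path: str) -> None:
    async with aiosqlite.connect(db_path) as db:
        # Reclaim up to 1000 freelist pages; a no-op when nothing is free.
        await db.execute("PRAGMA incremental_vacuum(1000)")
        # Fold WAL contents back into the main file and truncate the WAL.
        await db.execute("PRAGMA wal_checkpoint(TRUNCATE)")
        await db.commit()
asyncio.run(maintenance("meshcore.db"))  # placeholder path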
@@ -0,0 +1,29 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Enforce minimum 1-hour advert interval.
Any advert_interval between 1 and 3599 is clamped up to 3600 (1 hour).
Zero (disabled) is left unchanged.
"""
# Guard: app_settings table may not exist if running against a very old schema
# (it's created in migration 9). The UPDATE is harmless if the table exists
# but has no rows; it will error if the table itself is missing.
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if await cursor.fetchone() is None:
logger.debug("app_settings table does not exist yet, skipping advert_interval clamp")
return
await conn.execute(
"UPDATE app_settings SET advert_interval = 3600 WHERE advert_interval > 0 AND advert_interval < 3600"
)
await conn.commit()
logger.debug("Clamped advert_interval to minimum 3600 seconds")
@@ -0,0 +1,33 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Create table for recent unique advert paths per repeater.
This keeps path diversity for repeater advertisements without changing the
existing payload-hash raw packet dedup policy.
"""
await conn.execute("""
CREATE TABLE IF NOT EXISTS repeater_advert_paths (
id INTEGER PRIMARY KEY AUTOINCREMENT,
repeater_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(repeater_key, path_hex),
FOREIGN KEY (repeater_key) REFERENCES contacts(public_key)
)
""")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_repeater_advert_paths_recent "
"ON repeater_advert_paths(repeater_key, last_seen DESC)"
)
await conn.commit()
logger.debug("Ensured repeater_advert_paths table and indexes exist")
@@ -0,0 +1,60 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add first_seen column to contacts table.
Backfill strategy:
1. Set first_seen = last_seen for all contacts (baseline).
2. For contacts with PRIV messages, set first_seen = MIN(messages.received_at)
if that timestamp is earlier.
"""
# Guard: skip if contacts table doesn't exist (e.g. partial test schemas)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if not await cursor.fetchone():
return
try:
await conn.execute("ALTER TABLE contacts ADD COLUMN first_seen INTEGER")
logger.debug("Added first_seen to contacts table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("contacts.first_seen already exists, skipping")
else:
raise
# Baseline: set first_seen = last_seen for all contacts
# Check if last_seen column exists (should in production, may not in minimal test schemas)
cursor = await conn.execute("PRAGMA table_info(contacts)")
columns = {row[1] for row in await cursor.fetchall()}
if "last_seen" in columns:
await conn.execute("UPDATE contacts SET first_seen = last_seen WHERE first_seen IS NULL")
# Refine: for contacts with PRIV messages, use earliest message timestamp if earlier
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if await cursor.fetchone():
await conn.execute(
"""
UPDATE contacts SET first_seen = (
SELECT MIN(m.received_at) FROM messages m
WHERE m.type = 'PRIV' AND m.conversation_key = contacts.public_key
)
WHERE EXISTS (
SELECT 1 FROM messages m
WHERE m.type = 'PRIV' AND m.conversation_key = contacts.public_key
AND m.received_at < COALESCE(contacts.first_seen, 9999999999)
)
"""
)
await conn.commit()
logger.debug("Added and backfilled first_seen column")
@@ -0,0 +1,53 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Create contact_name_history table and seed with current contact names.
"""
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS contact_name_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
name TEXT NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
UNIQUE(public_key, name),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_name_history_key "
"ON contact_name_history(public_key, last_seen DESC)"
)
# Seed: one row per contact from current data (skip if contacts table doesn't exist
# or lacks needed columns)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone():
cursor = await conn.execute("PRAGMA table_info(contacts)")
cols = {row[1] for row in await cursor.fetchall()}
if "name" in cols and "public_key" in cols:
first_seen_expr = "first_seen" if "first_seen" in cols else "0"
last_seen_expr = "last_seen" if "last_seen" in cols else "0"
await conn.execute(
f"""
INSERT OR IGNORE INTO contact_name_history (public_key, name, first_seen, last_seen)
SELECT public_key, name,
COALESCE({first_seen_expr}, {last_seen_expr}, 0),
COALESCE({last_seen_expr}, 0)
FROM contacts
WHERE name IS NOT NULL AND name != ''
"""
)
await conn.commit()
logger.debug("Created contact_name_history table and seeded from contacts")
@@ -0,0 +1,124 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add sender_name and sender_key columns to messages table.
Backfill:
- sender_name for CHAN messages: extract from "Name: message" format
- sender_key for CHAN messages: match name to contact (skip ambiguous)
- sender_key for incoming PRIV messages: set to conversation_key
"""
# Guard: skip if messages table doesn't exist
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if not await cursor.fetchone():
return
for column in ["sender_name", "sender_key"]:
try:
await conn.execute(f"ALTER TABLE messages ADD COLUMN {column} TEXT")
logger.debug("Added %s to messages table", column)
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("messages.%s already exists, skipping", column)
else:
raise
# Check which columns the messages table has (may be minimal in test environments)
cursor = await conn.execute("PRAGMA table_info(messages)")
msg_cols = {row[1] for row in await cursor.fetchall()}
# Only backfill if the required columns exist
if "type" in msg_cols and "text" in msg_cols:
# Count messages to backfill for progress reporting
cursor = await conn.execute(
"SELECT COUNT(*) FROM messages WHERE type = 'CHAN' AND sender_name IS NULL"
)
row = await cursor.fetchone()
chan_count = row[0] if row else 0
if chan_count > 0:
logger.info("Backfilling sender_name for %d channel messages...", chan_count)
# Backfill sender_name for CHAN messages from "Name: message" format
# Only extract if colon position is valid (> 1 and < 52, i.e. name is 1-50 chars)
cursor = await conn.execute(
"""
UPDATE messages SET sender_name = SUBSTR(text, 1, INSTR(text, ': ') - 1)
WHERE type = 'CHAN' AND sender_name IS NULL
AND INSTR(text, ': ') > 1 AND INSTR(text, ': ') < 52
"""
)
if cursor.rowcount > 0:
logger.info("Backfilled sender_name for %d channel messages", cursor.rowcount)
# Backfill sender_key for incoming PRIV messages
if "outgoing" in msg_cols and "conversation_key" in msg_cols:
cursor = await conn.execute(
"""
UPDATE messages SET sender_key = conversation_key
WHERE type = 'PRIV' AND outgoing = 0 AND sender_key IS NULL
"""
)
if cursor.rowcount > 0:
logger.info("Backfilled sender_key for %d DM messages", cursor.rowcount)
# Backfill sender_key for CHAN messages: match sender_name to contacts
# Build name->key map, skip ambiguous names (multiple contacts with same name)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone():
cursor = await conn.execute(
"SELECT public_key, name FROM contacts WHERE name IS NOT NULL AND name != ''"
)
rows = await cursor.fetchall()
name_to_keys: dict[str, list[str]] = {}
for row in rows:
name = row["name"]
key = row["public_key"]
if name not in name_to_keys:
name_to_keys[name] = []
name_to_keys[name].append(key)
# Only use unambiguous names (single contact per name)
unambiguous = {n: ks[0] for n, ks in name_to_keys.items() if len(ks) == 1}
if unambiguous:
logger.info(
"Matching sender_key for %d unique contact names...",
len(unambiguous),
)
# Use a temp table for a single bulk UPDATE instead of N individual queries
await conn.execute(
"CREATE TEMP TABLE _name_key_map (name TEXT PRIMARY KEY, public_key TEXT NOT NULL)"
)
await conn.executemany(
"INSERT INTO _name_key_map (name, public_key) VALUES (?, ?)",
list(unambiguous.items()),
)
cursor = await conn.execute(
"""
UPDATE messages SET sender_key = (
SELECT public_key FROM _name_key_map WHERE _name_key_map.name = messages.sender_name
)
WHERE type = 'CHAN' AND sender_key IS NULL
AND sender_name IN (SELECT name FROM _name_key_map)
"""
)
updated = cursor.rowcount
await conn.execute("DROP TABLE _name_key_map")
if updated > 0:
logger.info("Backfilled sender_key for %d channel messages", updated)
# Create index on sender_key for per-contact channel message counts
await conn.execute("CREATE INDEX IF NOT EXISTS idx_messages_sender_key ON messages(sender_key)")
await conn.commit()
logger.debug("Added sender_name and sender_key columns with backfill")
@@ -0,0 +1,81 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Rename repeater_advert_paths to contact_advert_paths with column
repeater_key -> public_key.
Uses table rebuild since ALTER TABLE RENAME COLUMN may not be available
in older SQLite versions.
"""
# Check if old table exists
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='repeater_advert_paths'"
)
if not await cursor.fetchone():
# Already renamed or doesn't exist — ensure new table exists
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS contact_advert_paths (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
logger.debug("contact_advert_paths already exists or old table missing, skipping rename")
return
# Create new table (IF NOT EXISTS in case SCHEMA already created it)
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS contact_advert_paths (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
# Copy data (INSERT OR IGNORE in case of duplicates)
await conn.execute(
"""
INSERT OR IGNORE INTO contact_advert_paths (public_key, path_hex, path_len, first_seen, last_seen, heard_count)
SELECT repeater_key, path_hex, path_len, first_seen, last_seen, heard_count
FROM repeater_advert_paths
"""
)
# Drop old table
await conn.execute("DROP TABLE repeater_advert_paths")
# Create index
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
logger.info("Renamed repeater_advert_paths to contact_advert_paths")
@@ -0,0 +1,36 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Backfill contacts.first_seen from contact_advert_paths where advert path
first_seen is earlier than the contact's current first_seen.
"""
# Guard: skip if either table doesn't exist
for table in ("contacts", "contact_advert_paths"):
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name=?", (table,)
)
if not await cursor.fetchone():
return
await conn.execute(
"""
UPDATE contacts SET first_seen = (
SELECT MIN(cap.first_seen) FROM contact_advert_paths cap
WHERE cap.public_key = contacts.public_key
)
WHERE EXISTS (
SELECT 1 FROM contact_advert_paths cap
WHERE cap.public_key = contacts.public_key
AND cap.first_seen < COALESCE(contacts.first_seen, 9999999999)
)
"""
)
await conn.commit()
logger.debug("Backfilled first_seen from contact_advert_paths")
@@ -0,0 +1,107 @@
import logging
from hashlib import sha256
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Convert payload_hash from 64-char hex TEXT to 32-byte BLOB.
Halves storage for both the column data and its UNIQUE index.
Uses Python bytes.fromhex() for the conversion since SQLite's unhex()
requires 3.41.0+ which may not be available on all deployments.
"""
# Guard: skip if raw_packets table doesn't exist
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='raw_packets'"
)
if not await cursor.fetchone():
logger.debug("raw_packets table does not exist, skipping payload_hash conversion")
await conn.commit()
return
# Check column types — skip if payload_hash doesn't exist or is already BLOB
cursor = await conn.execute("PRAGMA table_info(raw_packets)")
cols = {row[1]: row[2] for row in await cursor.fetchall()}
if "payload_hash" not in cols:
logger.debug("payload_hash column does not exist, skipping conversion")
await conn.commit()
return
if cols["payload_hash"].upper() == "BLOB":
logger.debug("payload_hash is already BLOB, skipping conversion")
await conn.commit()
return
logger.info("Rebuilding raw_packets to convert payload_hash TEXT → BLOB...")
# Create new table with BLOB type
await conn.execute("""
CREATE TABLE raw_packets_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp INTEGER NOT NULL,
data BLOB NOT NULL,
message_id INTEGER,
payload_hash BLOB,
FOREIGN KEY (message_id) REFERENCES messages(id)
)
""")
# Batch-convert rows: read TEXT hashes, convert to bytes, insert into new table
batch_size = 5000
cursor = await conn.execute(
"SELECT id, timestamp, data, message_id, payload_hash FROM raw_packets ORDER BY id"
)
total = 0
while True:
rows = await cursor.fetchmany(batch_size)
if not rows:
break
batch: list[tuple[int, int, bytes, int | None, bytes | None]] = []
for row in rows:
rid, ts, data, mid, ph = row[0], row[1], row[2], row[3], row[4]
if ph is not None and isinstance(ph, str):
try:
ph = bytes.fromhex(ph)
except ValueError:
# Not a valid hex string — hash the value to produce a valid BLOB
ph = sha256(ph.encode()).digest()
batch.append((rid, ts, data, mid, ph))
await conn.executemany(
"INSERT INTO raw_packets_new (id, timestamp, data, message_id, payload_hash) "
"VALUES (?, ?, ?, ?, ?)",
batch,
)
total += len(batch)
if total % 50000 == 0:
logger.info("Converted %d rows...", total)
# Preserve autoincrement sequence
cursor = await conn.execute("SELECT seq FROM sqlite_sequence WHERE name = 'raw_packets'")
seq_row = await cursor.fetchone()
if seq_row is not None:
await conn.execute(
"INSERT OR REPLACE INTO sqlite_sequence (name, seq) VALUES ('raw_packets_new', ?)",
(seq_row[0],),
)
await conn.execute("DROP TABLE raw_packets")
await conn.execute("ALTER TABLE raw_packets_new RENAME TO raw_packets")
# Clean up the sqlite_sequence entry for the old temp name
await conn.execute("DELETE FROM sqlite_sequence WHERE name = 'raw_packets_new'")
# Recreate indexes
await conn.execute(
"CREATE UNIQUE INDEX idx_raw_packets_payload_hash ON raw_packets(payload_hash)"
)
await conn.execute("CREATE INDEX idx_raw_packets_message_id ON raw_packets(message_id)")
await conn.commit()
logger.info("Converted %d payload_hash values from TEXT to BLOB", total)
@@ -0,0 +1,27 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add a covering index for the unread counts query.
The /api/read-state/unreads endpoint runs three queries against messages.
The last-message-times query (GROUP BY type, conversation_key + MAX(received_at))
was doing a full table scan. This covering index lets SQLite resolve the
grouping and MAX entirely from the index without touching the table.
It also improves the unread count queries which filter on outgoing and received_at.
"""
# Guard: table or columns may not exist in partial-schema test setups
cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await cursor.fetchall()}
required = {"type", "conversation_key", "outgoing", "received_at"}
if required <= columns:
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_messages_unread_covering "
"ON messages(type, conversation_key, outgoing, received_at)"
)
await conn.commit()
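A sketch of how to confirm the covering index is being used for the grouped last-message-times query described above; the exact EXPLAIN QUERY PLAN wording varies across SQLite versions, and the database path is a placeholder.
import sqlite3
db = sqlite3.connect("meshcore.db")  # placeholder path
plan = db.execute(
    """
    EXPLAIN QUERY PLAN
    SELECT type, conversation_key, MAX(received_at)
    FROM messages
    GROUP BY type, conversation_key
    """
).fetchall()
for row in plan:
    print(row)  # expect a detail line mentioning COVERING INDEX idx_messages_unread_covering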
@@ -0,0 +1,31 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add a composite index for message pagination and drop the now-redundant
idx_messages_conversation.
The pagination query (ORDER BY received_at DESC, id DESC LIMIT N) hits a
temp B-tree sort without this index. With it, SQLite walks the index in
order and stops after N rows, which is critical for channels with 30K+ messages.
idx_messages_conversation(type, conversation_key) is a strict prefix of
both this index and idx_messages_unread_covering, so SQLite never picks it.
Dropping it saves ~6 MB and one index to maintain per INSERT.
"""
# Guard: table or columns may not exist in partial-schema test setups
cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await cursor.fetchall()}
required = {"type", "conversation_key", "received_at", "id"}
if required <= columns:
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_messages_pagination "
"ON messages(type, conversation_key, received_at DESC, id DESC)"
)
await conn.execute("DROP INDEX IF EXISTS idx_messages_conversation")
await conn.commit()
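A sketch of the pagination query shape the composite index serves: newest-first within a single conversation with a LIMIT, matching the ORDER BY described above. Parameter values and the database path are placeholders.
import sqlite3
db = sqlite3.connect("meshcore.db")  # placeholder path
rows = db.execute(
    """
    SELECT id, text, received_at
    FROM messages
    WHERE type = ? AND conversation_key = ?
    ORDER BY received_at DESC, id DESC
    LIMIT 50
    """,
    ("CHAN", "chan-1"),
).fetchall()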
@@ -0,0 +1,37 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add MQTT configuration columns to app_settings."""
# Guard: app_settings may not exist in partial-schema test setups
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if not await cursor.fetchone():
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await cursor.fetchall()}
new_columns = [
("mqtt_broker_host", "TEXT DEFAULT ''"),
("mqtt_broker_port", "INTEGER DEFAULT 1883"),
("mqtt_username", "TEXT DEFAULT ''"),
("mqtt_password", "TEXT DEFAULT ''"),
("mqtt_use_tls", "INTEGER DEFAULT 0"),
("mqtt_tls_insecure", "INTEGER DEFAULT 0"),
("mqtt_topic_prefix", "TEXT DEFAULT 'meshcore'"),
("mqtt_publish_messages", "INTEGER DEFAULT 0"),
("mqtt_publish_raw_packets", "INTEGER DEFAULT 0"),
]
for col_name, col_def in new_columns:
if col_name not in columns:
await conn.execute(f"ALTER TABLE app_settings ADD COLUMN {col_name} {col_def}")
await conn.commit()
@@ -0,0 +1,33 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add community MQTT configuration columns to app_settings."""
# Guard: app_settings may not exist in partial-schema test setups
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if not await cursor.fetchone():
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await cursor.fetchall()}
new_columns = [
("community_mqtt_enabled", "INTEGER DEFAULT 0"),
("community_mqtt_iata", "TEXT DEFAULT ''"),
("community_mqtt_broker_host", "TEXT DEFAULT 'mqtt-us-v1.letsmesh.net'"),
("community_mqtt_broker_port", "INTEGER DEFAULT 443"),
("community_mqtt_email", "TEXT DEFAULT ''"),
]
for col_name, col_def in new_columns:
if col_name not in columns:
await conn.execute(f"ALTER TABLE app_settings ADD COLUMN {col_name} {col_def}")
await conn.commit()
@@ -0,0 +1,23 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Seed the #remoteterm hashtag channel so new installs have it by default.
Uses INSERT OR IGNORE so it's a no-op if the channel already exists
(e.g. existing users who already added it manually). The channels table
is created by the base schema before migrations run, so it always exists
in production.
"""
try:
await conn.execute(
"INSERT OR IGNORE INTO channels (key, name, is_hashtag, on_radio) VALUES (?, ?, ?, ?)",
("8959AE053F2201801342A1DBDDA184F6", "#remoteterm", 1, 0),
)
await conn.commit()
except Exception:
logger.debug("Skipping #remoteterm seed (channels table not ready)")
@@ -0,0 +1,23 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add flood_scope column to app_settings for outbound region tagging.
Empty string means disabled (no scope set, messages sent unscoped).
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN flood_scope TEXT DEFAULT ''")
await conn.commit()
except Exception as e:
error_msg = str(e).lower()
if "duplicate column" in error_msg:
logger.debug("flood_scope column already exists, skipping")
elif "no such table" in error_msg:
logger.debug("app_settings table not ready, skipping flood_scope migration")
else:
raise
@@ -0,0 +1,36 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add blocked_keys and blocked_names columns to app_settings.
These store JSON arrays of blocked public keys and display names.
Blocking hides messages from the UI but does not affect MQTT or bots.
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN blocked_keys TEXT DEFAULT '[]'")
except Exception as e:
error_msg = str(e).lower()
if "duplicate column" in error_msg:
logger.debug("blocked_keys column already exists, skipping")
elif "no such table" in error_msg:
logger.debug("app_settings table not ready, skipping blocked_keys migration")
else:
raise
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN blocked_names TEXT DEFAULT '[]'")
except Exception as e:
error_msg = str(e).lower()
if "duplicate column" in error_msg:
logger.debug("blocked_names column already exists, skipping")
elif "no such table" in error_msg:
logger.debug("app_settings table not ready, skipping blocked_names migration")
else:
raise
await conn.commit()
@@ -0,0 +1,143 @@
import json
import logging
import uuid
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Create fanout_configs table and migrate existing MQTT settings.
Reads existing MQTT settings from app_settings and creates corresponding
fanout_configs rows. Old columns are NOT dropped (rollback safety).
"""
# 1. Create fanout_configs table
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS fanout_configs (
id TEXT PRIMARY KEY,
type TEXT NOT NULL,
name TEXT NOT NULL,
enabled INTEGER DEFAULT 0,
config TEXT NOT NULL DEFAULT '{}',
scope TEXT NOT NULL DEFAULT '{}',
sort_order INTEGER DEFAULT 0,
created_at INTEGER NOT NULL
)
"""
)
# 2. Read existing MQTT settings
try:
cursor = await conn.execute(
"""
SELECT mqtt_broker_host, mqtt_broker_port, mqtt_username, mqtt_password,
mqtt_use_tls, mqtt_tls_insecure, mqtt_topic_prefix,
mqtt_publish_messages, mqtt_publish_raw_packets,
community_mqtt_enabled, community_mqtt_iata,
community_mqtt_broker_host, community_mqtt_broker_port,
community_mqtt_email
FROM app_settings WHERE id = 1
"""
)
row = await cursor.fetchone()
except Exception:
row = None
if row is None:
await conn.commit()
return
import time
now = int(time.time())
sort_order = 0
# 3. Migrate private MQTT if configured
broker_host = row["mqtt_broker_host"] or ""
if broker_host:
publish_messages = bool(row["mqtt_publish_messages"])
publish_raw = bool(row["mqtt_publish_raw_packets"])
enabled = publish_messages or publish_raw
config = {
"broker_host": broker_host,
"broker_port": row["mqtt_broker_port"] or 1883,
"username": row["mqtt_username"] or "",
"password": row["mqtt_password"] or "",
"use_tls": bool(row["mqtt_use_tls"]),
"tls_insecure": bool(row["mqtt_tls_insecure"]),
"topic_prefix": row["mqtt_topic_prefix"] or "meshcore",
}
scope = {
"messages": "all" if publish_messages else "none",
"raw_packets": "all" if publish_raw else "none",
}
await conn.execute(
"""
INSERT INTO fanout_configs (id, type, name, enabled, config, scope, sort_order, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""",
(
str(uuid.uuid4()),
"mqtt_private",
"Private MQTT",
1 if enabled else 0,
json.dumps(config),
json.dumps(scope),
sort_order,
now,
),
)
sort_order += 1
logger.info("Migrated private MQTT settings to fanout_configs (enabled=%s)", enabled)
# 4. Migrate community MQTT if enabled OR configured (preserve disabled-but-configured)
community_enabled = bool(row["community_mqtt_enabled"])
community_iata = row["community_mqtt_iata"] or ""
community_host = row["community_mqtt_broker_host"] or ""
community_email = row["community_mqtt_email"] or ""
community_has_config = bool(
community_iata
or community_email
or (community_host and community_host != "mqtt-us-v1.letsmesh.net")
)
if community_enabled or community_has_config:
config = {
"broker_host": community_host or "mqtt-us-v1.letsmesh.net",
"broker_port": row["community_mqtt_broker_port"] or 443,
"iata": community_iata,
"email": community_email,
}
scope = {
"messages": "none",
"raw_packets": "all",
}
await conn.execute(
"""
INSERT INTO fanout_configs (id, type, name, enabled, config, scope, sort_order, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""",
(
str(uuid.uuid4()),
"mqtt_community",
"Community MQTT",
1 if community_enabled else 0,
json.dumps(config),
json.dumps(scope),
sort_order,
now,
),
)
logger.info(
"Migrated community MQTT settings to fanout_configs (enabled=%s)", community_enabled
)
await conn.commit()
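A sketch of a hypothetical consumer of the new table: each row carries JSON config and scope blobs, so dispatch can key off the type column. Column and type names come from this migration; the reader itself and the database path are illustrative.
import json
import sqlite3
db = sqlite3.connect("meshcore.db")  # placeholder path
db.row_factory = sqlite3.Row
for row in db.execute(
    "SELECT id, type, name, enabled, config, scope FROM fanout_configs ORDER BY sort_order"
):
    if not row["enabled"]:
        continue
    config = json.loads(row["config"] or "{}")
    scope = json.loads(row["scope"] or "{}")
    print(row["type"], row["name"], config.get("broker_host"), scope.get("messages"))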
@@ -0,0 +1,63 @@
import json
import logging
import uuid
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Migrate bots from app_settings.bots JSON to fanout_configs rows."""
try:
cursor = await conn.execute("SELECT bots FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
except Exception:
row = None
if row is None:
await conn.commit()
return
bots_json = row["bots"] or "[]"
try:
bots = json.loads(bots_json)
except (json.JSONDecodeError, TypeError):
bots = []
if not bots:
await conn.commit()
return
import time
now = int(time.time())
# Use sort_order starting at 200 to place bots after MQTT configs (0-99)
for i, bot in enumerate(bots):
bot_name = bot.get("name") or f"Bot {i + 1}"
bot_enabled = bool(bot.get("enabled", False))
bot_code = bot.get("code", "")
config_blob = json.dumps({"code": bot_code})
scope = json.dumps({"messages": "all", "raw_packets": "none"})
await conn.execute(
"""
INSERT INTO fanout_configs (id, type, name, enabled, config, scope, sort_order, created_at)
VALUES (?, 'bot', ?, ?, ?, ?, ?, ?)
""",
(
str(uuid.uuid4()),
bot_name,
1 if bot_enabled else 0,
config_blob,
scope,
200 + i,
now,
),
)
logger.info("Migrated bot '%s' to fanout_configs (enabled=%s)", bot_name, bot_enabled)
await conn.commit()
@@ -0,0 +1,54 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Drop legacy MQTT, community MQTT, and bots columns from app_settings.
These columns were migrated to fanout_configs in migrations 36 and 37.
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
the columns remain but are harmless (no longer read or written).
"""
# Check if app_settings table exists (some test DBs may not have it)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
columns_to_drop = [
"bots",
"mqtt_broker_host",
"mqtt_broker_port",
"mqtt_username",
"mqtt_password",
"mqtt_use_tls",
"mqtt_tls_insecure",
"mqtt_topic_prefix",
"mqtt_publish_messages",
"mqtt_publish_raw_packets",
"community_mqtt_enabled",
"community_mqtt_iata",
"community_mqtt_broker_host",
"community_mqtt_broker_port",
"community_mqtt_email",
]
for column in columns_to_drop:
try:
await conn.execute(f"ALTER TABLE app_settings DROP COLUMN {column}")
logger.debug("Dropped %s from app_settings", column)
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("app_settings.%s already dropped, skipping", column)
elif "syntax error" in error_msg or "drop column" in error_msg:
logger.debug("SQLite doesn't support DROP COLUMN, %s column will remain", column)
else:
raise
await conn.commit()
@@ -0,0 +1,65 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add contacts.out_path_hash_mode and backfill legacy rows.
Historical databases predate multibyte routing support. Backfill rules:
- contacts with last_path_len = -1 are flood routes -> out_path_hash_mode = -1
- all other existing contacts default to 0 (1-byte legacy hop identifiers)
"""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
column_cursor = await conn.execute("PRAGMA table_info(contacts)")
columns = {row[1] for row in await column_cursor.fetchall()}
added_column = False
try:
await conn.execute(
"ALTER TABLE contacts ADD COLUMN out_path_hash_mode INTEGER NOT NULL DEFAULT 0"
)
added_column = True
logger.debug("Added out_path_hash_mode to contacts table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("contacts.out_path_hash_mode already exists, skipping add")
else:
raise
if "last_path_len" not in columns:
await conn.commit()
return
if added_column:
await conn.execute(
"""
UPDATE contacts
SET out_path_hash_mode = CASE
WHEN last_path_len = -1 THEN -1
ELSE 0
END
"""
)
else:
await conn.execute(
"""
UPDATE contacts
SET out_path_hash_mode = CASE
WHEN last_path_len = -1 THEN -1
ELSE 0
END
WHERE out_path_hash_mode NOT IN (-1, 0, 1, 2)
OR (last_path_len = -1 AND out_path_hash_mode != -1)
"""
)
await conn.commit()
@@ -0,0 +1,82 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(
conn: aiosqlite.Connection,
) -> None:
"""Rebuild contact_advert_paths so uniqueness includes path_len.
Multi-byte routing can produce the same path_hex bytes with a different hop count,
which changes the hop boundaries and therefore the semantic next-hop identity.
"""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contact_advert_paths'"
)
if await cursor.fetchone() is None:
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS contact_advert_paths (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
await conn.execute("DROP INDEX IF EXISTS idx_contact_advert_paths_recent")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
return
await conn.execute(
"""
CREATE TABLE contact_advert_paths_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
await conn.execute(
"""
INSERT INTO contact_advert_paths_new
(public_key, path_hex, path_len, first_seen, last_seen, heard_count)
SELECT
public_key,
path_hex,
path_len,
MIN(first_seen),
MAX(last_seen),
SUM(heard_count)
FROM contact_advert_paths
GROUP BY public_key, path_hex, path_len
"""
)
await conn.execute("DROP TABLE contact_advert_paths")
await conn.execute("ALTER TABLE contact_advert_paths_new RENAME TO contact_advert_paths")
await conn.execute("DROP INDEX IF EXISTS idx_contact_advert_paths_recent")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
@@ -0,0 +1,31 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add nullable routing-override columns to contacts."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
for column_name, column_type in (
("route_override_path", "TEXT"),
("route_override_len", "INTEGER"),
("route_override_hash_mode", "INTEGER"),
):
try:
await conn.execute(f"ALTER TABLE contacts ADD COLUMN {column_name} {column_type}")
logger.debug("Added %s to contacts table", column_name)
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("contacts.%s already exists, skipping", column_name)
else:
raise
await conn.commit()
@@ -0,0 +1,26 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add nullable per-channel flood-scope override column."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='channels'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
try:
await conn.execute("ALTER TABLE channels ADD COLUMN flood_scope_override TEXT")
logger.debug("Added flood_scope_override to channels table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("channels.flood_scope_override already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,31 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Restrict the message dedup index to channel messages."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await cursor.fetchall()}
required_columns = {"type", "conversation_key", "text", "sender_timestamp"}
if not required_columns.issubset(columns):
logger.debug("messages table missing dedup-index columns, skipping migration 43")
await conn.commit()
return
await conn.execute("DROP INDEX IF EXISTS idx_messages_dedup_null_safe")
await conn.execute(
"""CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_dedup_null_safe
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))
WHERE type = 'CHAN'"""
)
await conn.commit()
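A small sketch (stdlib sqlite3, in-memory, invented rows) of the effect of the WHERE type = 'CHAN' restriction: channel rows still dedup under INSERT OR IGNORE, while identical incoming private messages are no longer blocked by this index.
import sqlite3
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE messages (type TEXT, conversation_key TEXT, text TEXT, sender_timestamp INTEGER)"
)
db.execute(
    """CREATE UNIQUE INDEX idx_messages_dedup_null_safe
       ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))
       WHERE type = 'CHAN'"""
)
chan = ("CHAN", "chan-1", "hello", 100)
priv = ("PRIV", "abcd", "hello", 100)
for _ in range(2):
    db.execute("INSERT OR IGNORE INTO messages VALUES (?, ?, ?, ?)", chan)
    db.execute("INSERT OR IGNORE INTO messages VALUES (?, ?, ?, ?)", priv)
print(db.execute("SELECT type, COUNT(*) FROM messages GROUP BY type ORDER BY type").fetchall())
# [('CHAN', 1), ('PRIV', 2)]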
@@ -0,0 +1,157 @@
import json
import logging
import aiosqlite
logger = logging.getLogger(__name__)
def _merge_message_paths(paths_json_values: list[str | None]) -> str | None:
"""Merge multiple message path arrays into one exact-observation list."""
merged: list[dict[str, object]] = []
seen: set[tuple[object | None, object | None, object | None]] = set()
for paths_json in paths_json_values:
if not paths_json:
continue
try:
parsed = json.loads(paths_json)
except (TypeError, json.JSONDecodeError):
continue
if not isinstance(parsed, list):
continue
for entry in parsed:
if not isinstance(entry, dict):
continue
key = (
entry.get("path"),
entry.get("received_at"),
entry.get("path_len"),
)
if key in seen:
continue
seen.add(key)
merged.append(entry)
return json.dumps(merged) if merged else None
async def migrate(conn: aiosqlite.Connection) -> None:
"""Collapse same-contact same-text same-second incoming DMs into one row."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await cursor.fetchall()}
required_columns = {
"id",
"type",
"conversation_key",
"text",
"sender_timestamp",
"received_at",
"paths",
"txt_type",
"signature",
"outgoing",
"acked",
"sender_name",
"sender_key",
}
if not required_columns.issubset(columns):
logger.debug("messages table missing incoming-DM dedup columns, skipping migration 44")
await conn.commit()
return
raw_packets_cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='raw_packets'"
)
raw_packets_exists = await raw_packets_cursor.fetchone() is not None
duplicate_groups_cursor = await conn.execute(
"""
SELECT conversation_key, text,
COALESCE(sender_timestamp, 0) AS normalized_sender_timestamp,
COUNT(*) AS duplicate_count
FROM messages
WHERE type = 'PRIV' AND outgoing = 0
GROUP BY conversation_key, text, COALESCE(sender_timestamp, 0)
HAVING COUNT(*) > 1
"""
)
duplicate_groups = await duplicate_groups_cursor.fetchall()
for group in duplicate_groups:
normalized_sender_timestamp = group["normalized_sender_timestamp"]
rows_cursor = await conn.execute(
"""
SELECT *
FROM messages
WHERE type = 'PRIV' AND outgoing = 0
AND conversation_key = ? AND text = ?
AND COALESCE(sender_timestamp, 0) = ?
ORDER BY id ASC
""",
(
group["conversation_key"],
group["text"],
normalized_sender_timestamp,
),
)
rows = list(await rows_cursor.fetchall())
if len(rows) < 2:
continue
keeper = rows[0]
duplicate_ids = [row["id"] for row in rows[1:]]
merged_paths = _merge_message_paths([row["paths"] for row in rows])
merged_received_at = min(row["received_at"] for row in rows)
merged_txt_type = next((row["txt_type"] for row in rows if row["txt_type"] != 0), 0)
merged_signature = next((row["signature"] for row in rows if row["signature"]), None)
merged_sender_name = next((row["sender_name"] for row in rows if row["sender_name"]), None)
merged_sender_key = next((row["sender_key"] for row in rows if row["sender_key"]), None)
merged_acked = max(int(row["acked"] or 0) for row in rows)
await conn.execute(
"""
UPDATE messages
SET received_at = ?, paths = ?, txt_type = ?, signature = ?,
acked = ?, sender_name = ?, sender_key = ?
WHERE id = ?
""",
(
merged_received_at,
merged_paths,
merged_txt_type,
merged_signature,
merged_acked,
merged_sender_name,
merged_sender_key,
keeper["id"],
),
)
if raw_packets_exists:
for duplicate_id in duplicate_ids:
await conn.execute(
"UPDATE raw_packets SET message_id = ? WHERE message_id = ?",
(keeper["id"], duplicate_id),
)
placeholders = ",".join("?" for _ in duplicate_ids)
await conn.execute(
f"DELETE FROM messages WHERE id IN ({placeholders})",
duplicate_ids,
)
await conn.execute("DROP INDEX IF EXISTS idx_messages_incoming_priv_dedup")
await conn.execute(
"""CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_incoming_priv_dedup
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))
WHERE type = 'PRIV' AND outgoing = 0"""
)
await conn.commit()
@@ -0,0 +1,136 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Replace legacy contact route columns with canonical direct-route columns."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(contacts)")
columns = {row[1] for row in await cursor.fetchall()}
target_columns = {
"public_key",
"name",
"type",
"flags",
"direct_path",
"direct_path_len",
"direct_path_hash_mode",
"direct_path_updated_at",
"route_override_path",
"route_override_len",
"route_override_hash_mode",
"last_advert",
"lat",
"lon",
"last_seen",
"on_radio",
"last_contacted",
"first_seen",
"last_read_at",
}
if (
target_columns.issubset(columns)
and "last_path" not in columns
and "out_path_hash_mode" not in columns
):
await conn.commit()
return
await conn.execute(
"""
CREATE TABLE contacts_new (
public_key TEXT PRIMARY KEY,
name TEXT,
type INTEGER DEFAULT 0,
flags INTEGER DEFAULT 0,
direct_path TEXT,
direct_path_len INTEGER,
direct_path_hash_mode INTEGER,
direct_path_updated_at INTEGER,
route_override_path TEXT,
route_override_len INTEGER,
route_override_hash_mode INTEGER,
last_advert INTEGER,
lat REAL,
lon REAL,
last_seen INTEGER,
on_radio INTEGER DEFAULT 0,
last_contacted INTEGER,
first_seen INTEGER,
last_read_at INTEGER
)
"""
)
select_expr = {
"public_key": "public_key",
"name": "NULL",
"type": "0",
"flags": "0",
"direct_path": "NULL",
"direct_path_len": "NULL",
"direct_path_hash_mode": "NULL",
"direct_path_updated_at": "NULL",
"route_override_path": "NULL",
"route_override_len": "NULL",
"route_override_hash_mode": "NULL",
"last_advert": "NULL",
"lat": "NULL",
"lon": "NULL",
"last_seen": "NULL",
"on_radio": "0",
"last_contacted": "NULL",
"first_seen": "NULL",
"last_read_at": "NULL",
}
for name in ("name", "type", "flags"):
if name in columns:
select_expr[name] = name
if "direct_path" in columns:
select_expr["direct_path"] = "direct_path"
if "direct_path_len" in columns:
select_expr["direct_path_len"] = "direct_path_len"
if "direct_path_hash_mode" in columns:
select_expr["direct_path_hash_mode"] = "direct_path_hash_mode"
for name in (
"route_override_path",
"route_override_len",
"route_override_hash_mode",
"last_advert",
"lat",
"lon",
"last_seen",
"on_radio",
"last_contacted",
"first_seen",
"last_read_at",
):
if name in columns:
select_expr[name] = name
ordered_columns = list(select_expr.keys())
await conn.execute(
f"""
INSERT INTO contacts_new ({", ".join(ordered_columns)})
SELECT {", ".join(select_expr[name] for name in ordered_columns)}
FROM contacts
"""
)
await conn.execute("DROP TABLE contacts")
await conn.execute("ALTER TABLE contacts_new RENAME TO contacts")
await conn.commit()
@@ -0,0 +1,93 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Move uniquely resolvable orphan contact child rows onto full contacts, drop the rest."""
existing_tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
existing_tables = {row[0] for row in await existing_tables_cursor.fetchall()}
if "contacts" not in existing_tables:
await conn.commit()
return
child_tables = [
table
for table in ("contact_name_history", "contact_advert_paths")
if table in existing_tables
]
if not child_tables:
await conn.commit()
return
orphan_keys: set[str] = set()
for table in child_tables:
cursor = await conn.execute(
f"""
SELECT DISTINCT child.public_key
FROM {table} child
LEFT JOIN contacts c ON c.public_key = child.public_key
WHERE c.public_key IS NULL
"""
)
orphan_keys.update(row[0] for row in await cursor.fetchall())
for orphan_key in sorted(orphan_keys, key=len, reverse=True):
match_cursor = await conn.execute(
"""
SELECT public_key
FROM contacts
WHERE length(public_key) = 64
AND public_key LIKE ? || '%'
ORDER BY public_key
""",
(orphan_key.lower(),),
)
matches = [row[0] for row in await match_cursor.fetchall()]
resolved_key = matches[0] if len(matches) == 1 else None
if resolved_key is not None:
if "contact_name_history" in child_tables:
await conn.execute(
"""
INSERT INTO contact_name_history (public_key, name, first_seen, last_seen)
SELECT ?, name, first_seen, last_seen
FROM contact_name_history
WHERE public_key = ?
ON CONFLICT(public_key, name) DO UPDATE SET
first_seen = MIN(contact_name_history.first_seen, excluded.first_seen),
last_seen = MAX(contact_name_history.last_seen, excluded.last_seen)
""",
(resolved_key, orphan_key),
)
if "contact_advert_paths" in child_tables:
await conn.execute(
"""
INSERT INTO contact_advert_paths
(public_key, path_hex, path_len, first_seen, last_seen, heard_count)
SELECT ?, path_hex, path_len, first_seen, last_seen, heard_count
FROM contact_advert_paths
WHERE public_key = ?
ON CONFLICT(public_key, path_hex, path_len) DO UPDATE SET
first_seen = MIN(contact_advert_paths.first_seen, excluded.first_seen),
last_seen = MAX(contact_advert_paths.last_seen, excluded.last_seen),
heard_count = contact_advert_paths.heard_count + excluded.heard_count
""",
(resolved_key, orphan_key),
)
if "contact_name_history" in child_tables:
await conn.execute(
"DELETE FROM contact_name_history WHERE public_key = ?",
(orphan_key,),
)
if "contact_advert_paths" in child_tables:
await conn.execute(
"DELETE FROM contact_advert_paths WHERE public_key = ?",
(orphan_key,),
)
await conn.commit()
@@ -0,0 +1,39 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add indexes used by the statistics endpoint's time-windowed scans."""
cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
tables = {row[0] for row in await cursor.fetchall()}
if "raw_packets" in tables:
cursor = await conn.execute("PRAGMA table_info(raw_packets)")
raw_packet_columns = {row[1] for row in await cursor.fetchall()}
if "timestamp" in raw_packet_columns:
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_raw_packets_timestamp ON raw_packets(timestamp)"
)
if "contacts" in tables:
cursor = await conn.execute("PRAGMA table_info(contacts)")
contact_columns = {row[1] for row in await cursor.fetchall()}
if {"type", "last_seen"}.issubset(contact_columns):
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contacts_type_last_seen ON contacts(type, last_seen)"
)
if "messages" in tables:
cursor = await conn.execute("PRAGMA table_info(messages)")
message_columns = {row[1] for row in await cursor.fetchall()}
if {"type", "received_at", "conversation_key"}.issubset(message_columns):
await conn.execute(
"""
CREATE INDEX IF NOT EXISTS idx_messages_type_received_conversation
ON messages(type, received_at, conversation_key)
"""
)
await conn.commit()
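For context, a minimal sketch of the kind of time-windowed scan these indexes are meant to serve; the exact statistics queries live elsewhere, so the cutoff math here is an assumption based only on the index definitions above:

import time
import aiosqlite

async def packets_last_24h(conn: aiosqlite.Connection) -> int:
    # A simple range scan served by idx_raw_packets_timestamp.
    cutoff = int(time.time()) - 24 * 3600
    cursor = await conn.execute(
        "SELECT COUNT(*) FROM raw_packets WHERE timestamp >= ?", (cutoff,)
    )
    row = await cursor.fetchone()
    return row[0] if row else 0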
@@ -0,0 +1,27 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add discovery_blocked_types column to app_settings.
Stores a JSON array of integer contact type codes (1=Client, 2=Repeater,
3=Room, 4=Sensor) whose advertisements should not create new contacts.
Empty list means all types are accepted.
"""
try:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN discovery_blocked_types TEXT DEFAULT '[]'"
)
except Exception as e:
error_msg = str(e).lower()
if "duplicate column" in error_msg:
logger.debug("discovery_blocked_types column already exists, skipping")
elif "no such table" in error_msg:
logger.debug("app_settings table not ready, skipping discovery_blocked_types migration")
else:
raise
await conn.commit()
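A minimal sketch of how a consumer of this column might use it; the helper name is hypothetical, and only the storage format (a JSON array of integer type codes, empty meaning "accept all") comes from the docstring above:

import json

def is_discovery_blocked(discovery_blocked_types: str | None, contact_type: int) -> bool:
    # Empty or missing list means all contact types are accepted.
    try:
        blocked = json.loads(discovery_blocked_types or "[]")
    except ValueError:
        blocked = []
    if not isinstance(blocked, list):
        blocked = []
    return contact_type in blocked

# e.g. is_discovery_blocked('[2, 4]', 2) -> True (repeater adverts ignored)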

@@ -0,0 +1,158 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Rebuild FK tables with CASCADE/SET NULL and clean orphaned rows.
SQLite cannot ALTER existing FK constraints, so each table is rebuilt.
Orphaned child rows are cleaned up before the rebuild to ensure the
INSERT...SELECT into the new table (which has enforced FKs) succeeds.
"""
import shutil
from pathlib import Path
# Back up the database before table rebuilds (skip for in-memory DBs).
cursor = await conn.execute("PRAGMA database_list")
db_row = await cursor.fetchone()
db_path = db_row[2] if db_row else ""
if db_path and db_path != ":memory:" and Path(db_path).exists():
backup_path = db_path + ".pre-fk-migration.bak"
for suffix in ("", "-wal", "-shm"):
src = Path(db_path + suffix)
if src.exists():
shutil.copy2(str(src), backup_path + suffix)
logger.info("Database backed up to %s before FK migration", backup_path)
# --- Phase 1: clean orphans (guard each table's existence) ---
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
existing_tables = {row[0] for row in await tables_cursor.fetchall()}
if "contact_advert_paths" in existing_tables and "contacts" in existing_tables:
await conn.execute(
"DELETE FROM contact_advert_paths "
"WHERE public_key NOT IN (SELECT public_key FROM contacts)"
)
if "contact_name_history" in existing_tables and "contacts" in existing_tables:
await conn.execute(
"DELETE FROM contact_name_history "
"WHERE public_key NOT IN (SELECT public_key FROM contacts)"
)
if "raw_packets" in existing_tables and "messages" in existing_tables:
# Guard: message_id column may not exist on very old schemas
col_cursor = await conn.execute("PRAGMA table_info(raw_packets)")
raw_cols = {row[1] for row in await col_cursor.fetchall()}
if "message_id" in raw_cols:
await conn.execute(
"UPDATE raw_packets SET message_id = NULL WHERE message_id IS NOT NULL "
"AND message_id NOT IN (SELECT id FROM messages)"
)
await conn.commit()
logger.debug("Cleaned orphaned child rows before FK rebuild")
# --- Phase 2: rebuild raw_packets with ON DELETE SET NULL ---
# Skip if raw_packets doesn't have message_id (pre-migration-18 schema)
raw_has_message_id = False
if "raw_packets" in existing_tables:
col_cursor2 = await conn.execute("PRAGMA table_info(raw_packets)")
raw_has_message_id = "message_id" in {row[1] for row in await col_cursor2.fetchall()}
if raw_has_message_id:
# Dynamically build column list based on what the old table actually has,
# since very old schemas may lack payload_hash (added in migration 28).
col_cursor3 = await conn.execute("PRAGMA table_info(raw_packets)")
old_cols = [row[1] for row in await col_cursor3.fetchall()]
new_col_defs = [
"id INTEGER PRIMARY KEY AUTOINCREMENT",
"timestamp INTEGER NOT NULL",
"data BLOB NOT NULL",
"message_id INTEGER",
]
copy_cols = ["id", "timestamp", "data", "message_id"]
if "payload_hash" in old_cols:
new_col_defs.append("payload_hash BLOB")
copy_cols.append("payload_hash")
new_col_defs.append("FOREIGN KEY (message_id) REFERENCES messages(id) ON DELETE SET NULL")
cols_sql = ", ".join(new_col_defs)
copy_sql = ", ".join(copy_cols)
await conn.execute(f"CREATE TABLE raw_packets_fk ({cols_sql})")
await conn.execute(
f"INSERT INTO raw_packets_fk ({copy_sql}) SELECT {copy_sql} FROM raw_packets"
)
await conn.execute("DROP TABLE raw_packets")
await conn.execute("ALTER TABLE raw_packets_fk RENAME TO raw_packets")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_raw_packets_message_id ON raw_packets(message_id)"
)
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_raw_packets_timestamp ON raw_packets(timestamp)"
)
if "payload_hash" in old_cols:
await conn.execute(
"CREATE UNIQUE INDEX IF NOT EXISTS idx_raw_packets_payload_hash ON raw_packets(payload_hash)"
)
await conn.commit()
logger.debug("Rebuilt raw_packets with ON DELETE SET NULL")
# --- Phase 3: rebuild contact_advert_paths with ON DELETE CASCADE ---
if "contact_advert_paths" in existing_tables:
await conn.execute(
"""
CREATE TABLE contact_advert_paths_fk (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
)
"""
)
await conn.execute(
"INSERT INTO contact_advert_paths_fk (id, public_key, path_hex, path_len, first_seen, last_seen, heard_count) "
"SELECT id, public_key, path_hex, path_len, first_seen, last_seen, heard_count FROM contact_advert_paths"
)
await conn.execute("DROP TABLE contact_advert_paths")
await conn.execute("ALTER TABLE contact_advert_paths_fk RENAME TO contact_advert_paths")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
logger.debug("Rebuilt contact_advert_paths with ON DELETE CASCADE")
# --- Phase 4: rebuild contact_name_history with ON DELETE CASCADE ---
if "contact_name_history" in existing_tables:
await conn.execute(
"""
CREATE TABLE contact_name_history_fk (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
name TEXT NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
UNIQUE(public_key, name),
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
)
"""
)
await conn.execute(
"INSERT INTO contact_name_history_fk (id, public_key, name, first_seen, last_seen) "
"SELECT id, public_key, name, first_seen, last_seen FROM contact_name_history"
)
await conn.execute("DROP TABLE contact_name_history")
await conn.execute("ALTER TABLE contact_name_history_fk RENAME TO contact_name_history")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_name_history_key "
"ON contact_name_history(public_key, last_seen DESC)"
)
await conn.commit()
logger.debug("Rebuilt contact_name_history with ON DELETE CASCADE")
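One way to sanity-check a rebuild like this after the fact (not part of the migration itself) is SQLite's built-in foreign_key_check pragma; a small sketch, assuming the same aiosqlite connection:

import aiosqlite

async def assert_no_fk_violations(conn: aiosqlite.Connection) -> None:
    # PRAGMA foreign_key_check reports one row per child row whose parent is missing;
    # an empty result means every FK in the rebuilt tables resolves.
    cursor = await conn.execute("PRAGMA foreign_key_check")
    violations = await cursor.fetchall()
    if violations:
        raise RuntimeError(f"{len(violations)} foreign key violation(s) after FK rebuild")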
@@ -0,0 +1,27 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Create repeater_telemetry_history table for JSON-blob telemetry snapshots."""
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS repeater_telemetry_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
timestamp INTEGER NOT NULL,
data TEXT NOT NULL,
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
)
"""
)
await conn.execute(
"""
CREATE INDEX IF NOT EXISTS idx_repeater_telemetry_pk_ts
ON repeater_telemetry_history (public_key, timestamp)
"""
)
await conn.commit()
@@ -0,0 +1,24 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Remove vestigial sidebar_sort_order column from app_settings."""
col_cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await col_cursor.fetchall()}
if "sidebar_sort_order" in columns:
try:
await conn.execute("ALTER TABLE app_settings DROP COLUMN sidebar_sort_order")
await conn.commit()
except Exception as e:
error_msg = str(e).lower()
if "syntax error" in error_msg or "drop column" in error_msg:
logger.debug(
"SQLite doesn't support DROP COLUMN, sidebar_sort_order column will remain"
)
await conn.commit()
else:
raise
@@ -0,0 +1,21 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add nullable per-channel path hash mode override column."""
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
if "channels" not in {row[0] for row in await tables_cursor.fetchall()}:
await conn.commit()
return
try:
await conn.execute("ALTER TABLE channels ADD COLUMN path_hash_mode_override INTEGER")
await conn.commit()
except Exception as e:
if "duplicate column" in str(e).lower():
await conn.commit()
else:
raise
@@ -0,0 +1,20 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add tracked_telemetry_repeaters JSON list column to app_settings."""
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
if "app_settings" not in {row[0] for row in await tables_cursor.fetchall()}:
await conn.commit()
return
col_cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await col_cursor.fetchall()}
if "tracked_telemetry_repeaters" not in columns:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN tracked_telemetry_repeaters TEXT DEFAULT '[]'"
)
await conn.commit()
@@ -0,0 +1,20 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add auto_resend_channel boolean column to app_settings."""
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
if "app_settings" not in {row[0] for row in await tables_cursor.fetchall()}:
await conn.commit()
return
col_cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await col_cursor.fetchall()}
if "auto_resend_channel" not in columns:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN auto_resend_channel INTEGER DEFAULT 0"
)
await conn.commit()
@@ -0,0 +1,93 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Move favorites from app_settings JSON blob to per-entity boolean columns.
1. Add ``favorite`` column to contacts and channels tables.
2. Backfill from the ``app_settings.favorites`` JSON array.
3. Drop the ``favorites`` column from app_settings.
"""
import json as _json
# --- Add columns ---
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
existing_tables = {row[0] for row in await tables_cursor.fetchall()}
for table in ("contacts", "channels"):
if table not in existing_tables:
continue
col_cursor = await conn.execute(f"PRAGMA table_info({table})")
columns = {row[1] for row in await col_cursor.fetchall()}
if "favorite" not in columns:
await conn.execute(f"ALTER TABLE {table} ADD COLUMN favorite INTEGER DEFAULT 0")
await conn.commit()
# --- Backfill from JSON ---
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
if "app_settings" not in {row[0] for row in await tables_cursor.fetchall()}:
await conn.commit()
return
col_cursor = await conn.execute("PRAGMA table_info(app_settings)")
settings_columns = {row[1] for row in await col_cursor.fetchall()}
if "favorites" not in settings_columns:
await conn.commit()
return
cursor = await conn.execute("SELECT favorites FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
if row and row[0]:
try:
favorites = _json.loads(row[0])
except (ValueError, TypeError):
favorites = []
contact_keys = []
channel_keys = []
for fav in favorites:
if not isinstance(fav, dict):
continue
fav_type = fav.get("type")
fav_id = fav.get("id")
if not fav_id:
continue
if fav_type == "contact":
contact_keys.append(fav_id)
elif fav_type == "channel":
channel_keys.append(fav_id)
if contact_keys:
placeholders = ",".join("?" for _ in contact_keys)
await conn.execute(
f"UPDATE contacts SET favorite = 1 WHERE public_key IN ({placeholders})",
contact_keys,
)
if channel_keys:
placeholders = ",".join("?" for _ in channel_keys)
await conn.execute(
f"UPDATE channels SET favorite = 1 WHERE key IN ({placeholders})",
channel_keys,
)
if contact_keys or channel_keys:
logger.info(
"Backfilled %d contact favorite(s) and %d channel favorite(s) from app_settings",
len(contact_keys),
len(channel_keys),
)
await conn.commit()
# --- Drop the JSON column ---
try:
await conn.execute("ALTER TABLE app_settings DROP COLUMN favorites")
await conn.commit()
except Exception as e:
error_msg = str(e).lower()
if "syntax error" in error_msg or "drop column" in error_msg:
logger.debug("SQLite doesn't support DROP COLUMN; favorites column will remain unused")
await conn.commit()
else:
raise
@@ -0,0 +1,43 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add sender_key to the incoming PRIV dedup index.
Room-server posts are stored as PRIV messages sharing one conversation_key
(the room contact). Without sender_key in the uniqueness constraint, two
different room participants sending identical text in the same clock second
collide and the second message is silently dropped.
Adding COALESCE(sender_key, '') is strictly more permissive: no existing
rows can conflict, so the migration only needs to rebuild the index.
"""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
# The index references type, conversation_key, sender_timestamp, outgoing,
# and sender_key. Some migration tests create minimal messages tables that
# lack these columns. Skip gracefully when the schema is too old.
col_cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await col_cursor.fetchall()}
required = {"type", "conversation_key", "sender_timestamp", "outgoing", "sender_key"}
if not required.issubset(columns):
await conn.commit()
return
await conn.execute("DROP INDEX IF EXISTS idx_messages_incoming_priv_dedup")
await conn.execute(
"""CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_incoming_priv_dedup
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0),
COALESCE(sender_key, ''))
WHERE type = 'PRIV' AND outgoing = 0"""
)
await conn.commit()
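To illustrate the behavior change, here is a small standalone sketch using a minimal stand-in for the messages table (only the columns named by the index; the real schema has many more):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (type TEXT, conversation_key TEXT, text TEXT, "
    "sender_timestamp INTEGER, sender_key TEXT, outgoing INTEGER)"
)
conn.execute(
    "CREATE UNIQUE INDEX idx_messages_incoming_priv_dedup "
    "ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0), "
    "COALESCE(sender_key, '')) "
    "WHERE type = 'PRIV' AND outgoing = 0"
)
# Two room participants post identical text in the same clock second.
conn.execute(
    "INSERT INTO messages VALUES ('PRIV', 'room-key', 'hello', 1700000000, 'aa11', 0)"
)
conn.execute(
    "INSERT INTO messages VALUES ('PRIV', 'room-key', 'hello', 1700000000, 'bb22', 0)"
)
# Without sender_key in the index, the second INSERT would have raised IntegrityError.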
@@ -0,0 +1,22 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add telemetry_interval_hours integer column to app_settings."""
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
if "app_settings" not in {row[0] for row in await tables_cursor.fetchall()}:
await conn.commit()
return
col_cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await col_cursor.fetchall()}
if "telemetry_interval_hours" not in columns:
# Default to 8 hours, matching the previous hard-coded interval
# so existing users see no behavior change until they opt in.
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN telemetry_interval_hours INTEGER DEFAULT 8"
)
await conn.commit()
@@ -0,0 +1,49 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add Web Push support: VAPID keys, push subscriptions table, and global conversation list."""
# VAPID key pair + global push conversation list in app_settings
table_check = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if await table_check.fetchone():
cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await cursor.fetchall()}
if "vapid_private_key" not in columns:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN vapid_private_key TEXT DEFAULT ''"
)
if "vapid_public_key" not in columns:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN vapid_public_key TEXT DEFAULT ''"
)
if "push_conversations" not in columns:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN push_conversations TEXT DEFAULT '[]'"
)
# Push subscriptions — one row per browser/device
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS push_subscriptions (
id TEXT PRIMARY KEY,
endpoint TEXT NOT NULL,
p256dh TEXT NOT NULL,
auth TEXT NOT NULL,
label TEXT NOT NULL DEFAULT '',
created_at INTEGER NOT NULL,
last_success_at INTEGER,
failure_count INTEGER DEFAULT 0,
UNIQUE(endpoint)
)
"""
)
await conn.commit()
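A hedged sketch of how a browser PushSubscription might map onto this table; the helper and conflict handling are assumptions, but the row shape follows the schema above and the standard PushSubscription.toJSON() structure:

import time
import uuid
import aiosqlite

async def store_subscription(conn: aiosqlite.Connection, sub_json: dict, label: str = "") -> str:
    # sub_json is the browser's PushSubscription.toJSON() shape:
    # {"endpoint": ..., "keys": {"p256dh": ..., "auth": ...}}
    sub_id = str(uuid.uuid4())
    await conn.execute(
        "INSERT INTO push_subscriptions (id, endpoint, p256dh, auth, label, created_at) "
        "VALUES (?, ?, ?, ?, ?, ?) "
        "ON CONFLICT(endpoint) DO UPDATE SET p256dh = excluded.p256dh, auth = excluded.auth",
        (
            sub_id,
            sub_json["endpoint"],
            sub_json["keys"]["p256dh"],
            sub_json["keys"]["auth"],
            label,
            int(time.time()),
        ),
    )
    await conn.commit()
    return sub_id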
@@ -0,0 +1,23 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add muted column to channels table."""
table_check = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='channels'"
)
if not await table_check.fetchone():
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(channels)")
columns = {row[1] for row in await cursor.fetchall()}
if "muted" not in columns:
await conn.execute("ALTER TABLE channels ADD COLUMN muted INTEGER DEFAULT 0")
await conn.commit()
@@ -0,0 +1,66 @@
"""
Database migrations using SQLite's user_version pragma.
Migrations run automatically on startup. The user_version pragma tracks
which migrations have been applied (defaults to 0 for existing databases).
Each migration lives in its own file: ``_NNN_description.py``, exposing an
``async def migrate(conn)`` entry point. The runner auto-discovers files by
numeric prefix and executes them in order.
This approach is safe for existing users - their databases have user_version=0,
so all migrations run in order on first startup after upgrade.
"""
import importlib
import logging
import pkgutil
import re
import aiosqlite
logger = logging.getLogger(__name__)
async def get_version(conn: aiosqlite.Connection) -> int:
"""Get current schema version from SQLite user_version pragma."""
cursor = await conn.execute("PRAGMA user_version")
row = await cursor.fetchone()
return row[0] if row else 0
async def set_version(conn: aiosqlite.Connection, version: int) -> None:
"""Set schema version using SQLite user_version pragma."""
await conn.execute(f"PRAGMA user_version = {version}")
async def run_migrations(conn: aiosqlite.Connection) -> int:
"""
Run all pending migrations.
Returns the number of migrations applied.
"""
version = await get_version(conn)
applied = 0
for module_info in sorted(pkgutil.iter_modules(__path__), key=lambda m: m.name):
match = re.match(r"_(\d+)_", module_info.name)
if not match:
continue
num = int(match.group(1))
if num <= version:
continue
logger.info("Applying migration %d: %s", num, module_info.name)
mod = importlib.import_module(f"{__name__}.{module_info.name}")
await mod.migrate(conn)
await set_version(conn, num)
applied += 1
if applied > 0:
logger.info(
"Applied %d migration(s), schema now at version %d", applied, await get_version(conn)
)
else:
logger.debug("Schema up to date at version %d", version)
return applied
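The application presumably calls run_migrations() right after opening the connection; a minimal sketch of that wiring, where the import path and pragma setup are assumptions rather than the project's actual startup code:

import aiosqlite

from app.db import migrations  # assumed package path for this migrations module

async def open_database(db_path: str) -> aiosqlite.Connection:
    conn = await aiosqlite.connect(db_path)
    conn.row_factory = aiosqlite.Row
    await conn.execute("PRAGMA foreign_keys = ON")
    await migrations.run_migrations(conn)
    return conn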
@@ -4,6 +4,10 @@ from pydantic import BaseModel, Field
from app.path_utils import normalize_contact_route, normalize_route_override
# Valid MeshCore contact types: 0=unknown, 1=client, 2=repeater, 3=room, 4=sensor.
# Corrupted radio data can produce values outside this range.
_VALID_CONTACT_TYPES = frozenset({0, 1, 2, 3, 4})
class ContactRoute(BaseModel):
"""A normalized contact route."""
@@ -59,16 +63,30 @@ class ContactUpsert(BaseModel):
-1 if radio_data.get("out_path_len", -1) == -1 else 0,
),
)
# Clamp invalid contact types to 0 (unknown) — corrupted radio data
# can produce values like 111 or 240 that break downstream branching.
raw_type = radio_data.get("type", 0)
contact_type = raw_type if raw_type in _VALID_CONTACT_TYPES else 0
# Null out impossible coordinates — the contact is still ingested,
# but garbage lat/lon (e.g. 1953.7) is discarded rather than stored.
lat = radio_data.get("adv_lat")
lon = radio_data.get("adv_lon")
if lat is not None and not (-90 <= lat <= 90):
lat = None
if lon is not None and not (-180 <= lon <= 180):
lon = None
return cls(
public_key=public_key,
name=radio_data.get("adv_name"),
type=radio_data.get("type", 0),
type=contact_type,
flags=radio_data.get("flags", 0),
direct_path=direct_path,
direct_path_len=direct_path_len,
direct_path_hash_mode=direct_path_hash_mode,
lat=radio_data.get("adv_lat"),
lon=radio_data.get("adv_lon"),
lat=lat,
lon=lon,
last_advert=radio_data.get("last_advert"),
on_radio=on_radio,
)
@@ -328,6 +346,7 @@ class Channel(BaseModel):
)
last_read_at: int | None = None # Server-side read state tracking
favorite: bool = False
muted: bool = False
class ChannelMessageCounts(BaseModel):
@@ -824,6 +843,14 @@ class AppSettings(BaseModel):
default_factory=list,
description="Public keys of repeaters opted into periodic telemetry collection (max 8)",
)
telemetry_interval_hours: int = Field(
default=8,
description=(
"User-preferred telemetry collection interval in hours. The backend "
"clamps this up to the shortest legal interval given the number of "
"tracked repeaters so daily checks stay under a 24/day ceiling."
),
)
auto_resend_channel: bool = Field(
default=False,
description=(
@@ -859,13 +886,14 @@ class NoiseFloorHistoryStats(BaseModel):
latest_timestamp: int | None = Field(
default=None, description="Unix timestamp of the most recent sample"
)
supported: bool | None = Field(
default=None,
description="Whether the connected radio appears to support radio stats sampling",
)
samples: list[NoiseFloorSample] = Field(default_factory=list)
class PacketsPerHourBucket(BaseModel):
timestamp: int = Field(description="Unix timestamp at the start of the hour")
count: int = Field(description="Number of packets received in that hour")
class StatisticsResponse(BaseModel):
busiest_channels_24h: list[BusyChannel]
contact_count: int
@@ -881,6 +909,7 @@ class StatisticsResponse(BaseModel):
repeaters_heard: ContactActivityCounts
known_channels_active: ContactActivityCounts
path_hash_width_24h: PathHashWidthStats
packets_per_hour_72h: list[PacketsPerHourBucket]
noise_floor_24h: NoiseFloorHistoryStats
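The telemetry_interval_hours description above implies a clamping rule. Under the assumption that each tracked repeater is polled once per cycle, the shortest legal interval is simply the tracked-repeater count; a hypothetical stand-in for the real clamp_telemetry_interval helper (whose actual signature is not shown here):

def clamp_interval_hours(preferred_hours: int, tracked_count: int) -> int:
    # Hypothetical sketch: tracked_count * (24 / interval) polls per day stays
    # at or below 24 only when interval >= tracked_count.
    if tracked_count <= 0:
        return preferred_hours
    return max(preferred_hours, tracked_count)

# e.g. clamp_interval_hours(2, 5) -> 5; clamp_interval_hours(8, 3) -> 8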
@@ -39,6 +39,7 @@ from app.repository import (
ChannelRepository,
ContactAdvertPathRepository,
ContactRepository,
MessageRepository,
RawPacketRepository,
)
from app.services.contact_reconciliation import (
@@ -645,10 +646,30 @@ async def _process_direct_message(
)
if result is not None:
# Successfully decrypted!
# In the ambiguous direction case (both first bytes match), we
# defaulted to incoming. Check if a matching outgoing message
# already exists — if so, this is actually our own outgoing echo
# and should be treated as such instead of creating a duplicate
# incoming row.
effective_outgoing = is_outgoing
if not is_outgoing and dest_hash == src_hash:
existing_outgoing = await MessageRepository.get_by_content(
msg_type="PRIV",
conversation_key=contact.public_key.lower(),
text=result.message,
sender_timestamp=result.timestamp,
outgoing=True,
)
if existing_outgoing is not None:
effective_outgoing = True
logger.debug(
"Ambiguous DM resolved as outgoing echo (matched existing sent msg %d)",
existing_outgoing.id,
)
logger.debug(
"Decrypted DM %s contact %s: %s",
"to" if is_outgoing else "from",
"to" if effective_outgoing else "from",
contact.name or contact.public_key[:12],
result.message[:50] if result.message else "",
)
@@ -664,7 +685,7 @@ async def _process_direct_message(
path_len=packet_info.path_length if packet_info else None,
rssi=rssi,
snr=snr,
outgoing=is_outgoing,
outgoing=effective_outgoing,
)
return {
@@ -9,6 +9,7 @@ The path_len wire byte is packed as [hash_mode:2][hop_count:6]:
Mode 3 (hash_size=4) is reserved and rejected.
"""
from collections.abc import Iterable
from dataclasses import dataclass
MAX_PATH_SIZE = 64
@@ -246,30 +247,26 @@ def parse_explicit_hop_route(route_text: str) -> tuple[str, int, int]:
return "".join(hops), len(hops), hash_size - 1
async def bucket_path_hash_widths(cursor, *, batch_size: int = 500) -> dict[str, int | float]:
def bucket_path_hash_widths(rows: Iterable) -> dict[str, int | float]:
"""Bucket raw packet rows by hop hash width and return counts + percentages.
*cursor* must be an already-executed async cursor whose rows have a ``data``
*rows* must be an already-fetched list whose elements have a ``data``
column containing raw packet bytes.
"""
single_byte = 0
double_byte = 0
triple_byte = 0
while True:
rows = await cursor.fetchmany(batch_size)
if not rows:
break
for row in rows:
envelope = parse_packet_envelope(bytes(row["data"]))
if envelope is None:
continue
if envelope.hash_size == 1:
single_byte += 1
elif envelope.hash_size == 2:
double_byte += 1
elif envelope.hash_size == 3:
triple_byte += 1
for row in rows:
envelope = parse_packet_envelope(bytes(row["data"]))
if envelope is None:
continue
if envelope.hash_size == 1:
single_byte += 1
elif envelope.hash_size == 2:
double_byte += 1
elif envelope.hash_size == 3:
triple_byte += 1
total = single_byte + double_byte + triple_byte
if total == 0:
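The hash-mode/hop-count packing described at the top of this file can be illustrated in a couple of lines; assuming, as the [hash_mode:2][hop_count:6] layout suggests, that the mode occupies the top two bits:

def unpack_path_len_byte(path_len_byte: int) -> tuple[int, int]:
    # Top two bits select the hash mode (hash size = mode + 1 bytes, mode 3 reserved);
    # the low six bits carry the hop count.
    hash_mode = (path_len_byte >> 6) & 0x03
    hop_count = path_len_byte & 0x3F
    return hash_mode, hop_count

# e.g. unpack_path_len_byte(0x42) -> (1, 2): two hops addressed with 2-byte hashes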
@@ -0,0 +1,182 @@
"""Web Push dispatch manager.
Checks the global push-enabled conversation list (stored in app_settings)
and sends push notifications to ALL registered devices when a matching
incoming message arrives.
"""
import asyncio
import json
import logging
from dataclasses import dataclass
from pywebpush import WebPushException
from app.push.send import send_push
from app.push.vapid import get_vapid_private_key
from app.repository.channels import ChannelRepository
from app.repository.push_subscriptions import PushSubscriptionRepository
from app.repository.settings import AppSettingsRepository
logger = logging.getLogger(__name__)
_SEND_TIMEOUT = 15 # seconds per push send
_VAPID_CLAIMS = {"sub": "mailto:noreply@meshcore.local"}
def _state_key_for_message(data: dict) -> str:
"""Derive the conversation state key from a message event payload."""
msg_type = data.get("type", "")
conversation_key = data.get("conversation_key", "")
if msg_type == "PRIV":
return f"contact-{conversation_key}"
return f"channel-{conversation_key}"
def _build_payload(data: dict) -> str:
"""Build the push notification JSON payload from a message event."""
msg_type = data.get("type", "")
text = data.get("text", "")
sender_name = data.get("sender_name") or ""
channel_name = data.get("channel_name") or ""
if msg_type == "PRIV":
title = f"Message from {sender_name}" if sender_name else "New direct message"
body = text
else:
title = channel_name if channel_name else "Channel message"
body = text
conversation_key = data.get("conversation_key", "")
state_key = _state_key_for_message(data)
if msg_type == "PRIV":
url_hash = f"#contact/{conversation_key}"
else:
url_hash = f"#channel/{conversation_key}"
return json.dumps(
{
"title": title,
"body": body,
# Tag per conversation so different conversations coexist in the
# notification tray, while repeated messages in the same
# conversation replace each other.
"tag": f"meshcore-{state_key}",
"url_hash": url_hash,
}
)
def _subscription_info(sub: dict) -> dict:
"""Build the subscription_info dict that pywebpush expects."""
return {
"endpoint": sub["endpoint"],
"keys": {
"p256dh": sub["p256dh"],
"auth": sub["auth"],
},
}
@dataclass
class _SendResult:
sub_id: str
success: bool = False
expired: bool = False
class PushManager:
async def dispatch_message(self, data: dict) -> None:
"""Send push notifications for a message event to all devices."""
# Don't notify for messages the operator just sent themselves
if data.get("outgoing"):
return
# Check the global conversation list
state_key = _state_key_for_message(data)
try:
push_conversations = await AppSettingsRepository.get_push_conversations()
except Exception:
logger.debug("Push dispatch: failed to load push_conversations", exc_info=True)
return
if state_key not in push_conversations:
return
# Skip muted channels
if data.get("type") == "CHAN" and data.get("conversation_key"):
try:
ch = await ChannelRepository.get_by_key(data["conversation_key"])
if ch and ch.muted:
return
except Exception:
logger.debug("Push dispatch: failed to check channel mute state", exc_info=True)
try:
subs = await PushSubscriptionRepository.get_all()
except Exception:
logger.debug("Push dispatch: failed to load subscriptions", exc_info=True)
return
if not subs:
return
payload = _build_payload(data)
vapid_key = get_vapid_private_key()
if not vapid_key:
logger.debug("Push dispatch: no VAPID key configured, skipping")
return
results = await asyncio.gather(
*(self._send_one(sub, payload, vapid_key) for sub in subs),
return_exceptions=True,
)
# Batch-update all delivery outcomes in one transaction.
success_ids: list[str] = []
failure_ids: list[str] = []
remove_ids: list[str] = []
for r in results:
if isinstance(r, _SendResult):
if r.expired:
remove_ids.append(r.sub_id)
elif r.success:
success_ids.append(r.sub_id)
else:
failure_ids.append(r.sub_id)
if success_ids or failure_ids or remove_ids:
try:
await PushSubscriptionRepository.batch_record_outcomes(
success_ids, failure_ids, remove_ids
)
except Exception:
logger.debug("Push dispatch: failed to record outcomes", exc_info=True)
async def _send_one(self, sub: dict, payload: str, vapid_key: str) -> _SendResult:
sub_id = sub["id"]
result = _SendResult(sub_id=sub_id)
try:
async with asyncio.timeout(_SEND_TIMEOUT):
await send_push(
subscription_info=_subscription_info(sub),
payload=payload,
vapid_private_key=vapid_key,
vapid_claims=_VAPID_CLAIMS,
)
result.success = True
except WebPushException as e:
status = getattr(e, "response", None)
status_code = getattr(status, "status_code", 0) if status else 0
if status_code in (403, 404, 410):
logger.info("Push subscription expired (HTTP %d), removing %s", status_code, sub_id)
result.expired = True
else:
logger.warning("Push send failed for %s: %s", sub_id, e)
except TimeoutError:
logger.warning("Push send timed out for %s", sub_id)
except Exception:
logger.debug("Push send error for %s", sub_id, exc_info=True)
return result
push_manager = PushManager()
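A minimal sketch of dispatching a push for an incoming direct message inside a running app (with a database connection, a stored VAPID key, and registered subscriptions). The import path and field values are illustrative, but the keys mirror what _state_key_for_message and _build_payload read above:

from app.push.manager import push_manager  # module path is an assumption

async def notify_example() -> None:
    await push_manager.dispatch_message(
        {
            "type": "PRIV",
            "conversation_key": "a1b2c3d4",   # contact public key (illustrative)
            "text": "See you at the repeater site",
            "sender_name": "KD9XYZ",
            "outgoing": False,                # outgoing messages are skipped
        }
    )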
@@ -0,0 +1,231 @@
"""Thin wrapper around pywebpush for sending push notifications.
Isolates the pywebpush dependency and runs the synchronous send in
a thread executor to avoid blocking the event loop.
"""
import asyncio
import logging
import socket
from typing import Any, cast
import requests
import urllib3.connection
import urllib3.connectionpool
from pywebpush import webpush
from requests.adapters import HTTPAdapter
from requests.exceptions import ConnectionError as RequestsConnectionError
from requests.exceptions import ConnectTimeout as RequestsConnectTimeout
from urllib3.exceptions import ConnectTimeoutError, NameResolutionError, NewConnectionError
logger = logging.getLogger(__name__)
DEFAULT_TIMEOUT = object()
DEFAULT_PUSH_CONNECT_TIMEOUT_SECONDS = 3
IPV4_FALLBACK_CONNECT_TIMEOUT_SECONDS = 10
DEFAULT_PUSH_READ_TIMEOUT_SECONDS = 10
def _create_ipv4_connection(
address: tuple[str, int],
timeout: float | None | object = DEFAULT_TIMEOUT,
source_address: tuple[str, int] | None = None,
socket_options=None,
) -> socket.socket:
"""Create a socket connection using IPv4 only."""
host, port = address
if host.startswith("["):
host = host.strip("[]")
err: OSError | None = None
for res in socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM):
af, socktype, proto, _, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
if socket_options:
for opt in socket_options:
sock.setsockopt(*opt)
if timeout is not DEFAULT_TIMEOUT:
sock.settimeout(cast(float | None, timeout))
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except OSError as exc:
err = exc
if sock is not None:
sock.close()
if err is not None:
raise err
raise OSError("getaddrinfo returns an empty list")
class IPv4HTTPConnection(urllib3.connection.HTTPConnection):
"""urllib3 HTTP connection that resolves and connects via IPv4 only."""
def _new_conn(self) -> socket.socket:
try:
return _create_ipv4_connection(
(self._dns_host, self.port),
self.timeout,
source_address=self.source_address,
socket_options=self.socket_options,
)
except socket.gaierror as exc:
raise NameResolutionError(self.host, self, exc) from exc
except TimeoutError as exc:
raise ConnectTimeoutError(
self,
f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
) from exc
except OSError as exc:
raise NewConnectionError(self, f"Failed to establish a new connection: {exc}") from exc
class IPv4HTTPSConnection(urllib3.connection.HTTPSConnection):
"""urllib3 HTTPS connection that resolves and connects via IPv4 only."""
def _new_conn(self) -> socket.socket:
try:
return _create_ipv4_connection(
(self._dns_host, self.port),
self.timeout,
source_address=self.source_address,
socket_options=self.socket_options,
)
except socket.gaierror as exc:
raise NameResolutionError(self.host, self, exc) from exc
except TimeoutError as exc:
raise ConnectTimeoutError(
self,
f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
) from exc
except OSError as exc:
raise NewConnectionError(self, f"Failed to establish a new connection: {exc}") from exc
class IPv4HTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
ConnectionCls = cast(Any, IPv4HTTPConnection)
class IPv4HTTPSConnectionPool(urllib3.connectionpool.HTTPSConnectionPool):
ConnectionCls = cast(Any, IPv4HTTPSConnection)
def _configure_pool_manager_for_ipv4(manager: Any) -> None:
manager.pool_classes_by_scheme = manager.pool_classes_by_scheme.copy()
manager.pool_classes_by_scheme["http"] = IPv4HTTPConnectionPool
manager.pool_classes_by_scheme["https"] = IPv4HTTPSConnectionPool
class IPv4HTTPAdapter(HTTPAdapter):
"""requests adapter that uses IPv4-only urllib3 connection pools."""
def init_poolmanager(self, connections, maxsize, block=False, **pool_kwargs):
super().init_poolmanager(connections, maxsize, block=block, **pool_kwargs)
_configure_pool_manager_for_ipv4(self.poolmanager)
def proxy_manager_for(self, *args, **kwargs):
manager = super().proxy_manager_for(*args, **kwargs)
_configure_pool_manager_for_ipv4(manager)
return manager
def _build_default_requests_session() -> requests.Session:
return requests.Session()
def _build_ipv4_requests_session() -> requests.Session:
session = requests.Session()
adapter = IPv4HTTPAdapter()
session.mount("http://", adapter)
session.mount("https://", adapter)
return session
def _send_push_with_session(
*,
subscription_info: dict,
payload: str,
vapid_private_key: str,
vapid_claims: dict,
session: requests.Session,
connect_timeout_seconds: int,
) -> int:
response = webpush(
subscription_info=subscription_info,
data=payload,
vapid_private_key=vapid_private_key,
vapid_claims=vapid_claims,
content_encoding="aes128gcm",
timeout=cast(Any, (connect_timeout_seconds, DEFAULT_PUSH_READ_TIMEOUT_SECONDS)),
requests_session=session,
)
return response.status_code # type: ignore[union-attr]
def _send_push_with_fallback(
subscription_info: dict,
payload: str,
vapid_private_key: str,
vapid_claims: dict,
) -> int:
"""Send using normal dual-stack resolution, then retry with IPv4-only on connect failures."""
session = _build_default_requests_session()
try:
return _send_push_with_session(
subscription_info=subscription_info,
payload=payload,
vapid_private_key=vapid_private_key,
vapid_claims=vapid_claims,
session=session,
connect_timeout_seconds=DEFAULT_PUSH_CONNECT_TIMEOUT_SECONDS,
)
except (RequestsConnectTimeout, RequestsConnectionError) as exc:
logger.info("Push delivery retrying via IPv4 after initial network failure: %s", exc)
finally:
session.close()
session = _build_ipv4_requests_session()
try:
return _send_push_with_session(
subscription_info=subscription_info,
payload=payload,
vapid_private_key=vapid_private_key,
vapid_claims=vapid_claims,
session=session,
connect_timeout_seconds=IPV4_FALLBACK_CONNECT_TIMEOUT_SECONDS,
)
finally:
session.close()
async def send_push(
subscription_info: dict,
payload: str,
vapid_private_key: str,
vapid_claims: dict,
) -> int:
"""Send an encrypted push notification.
Args:
subscription_info: {"endpoint": ..., "keys": {"p256dh": ..., "auth": ...}}
payload: JSON string to encrypt and send
vapid_private_key: base64url-encoded raw EC private key scalar
vapid_claims: {"sub": "mailto:..."} or {"sub": "https://..."}
Returns:
HTTP status code from the push service.
Raises:
WebPushException: on push service error (caller handles 404/410 cleanup).
"""
loop = asyncio.get_running_loop()
return await loop.run_in_executor(
None,
lambda: _send_push_with_fallback(
subscription_info, payload, vapid_private_key, vapid_claims
),
)
@@ -0,0 +1,60 @@
"""VAPID key management for Web Push.
Generates a P-256 key pair on first use and caches it in app_settings
via ``AppSettingsRepository``. The public key is served to browsers
for ``PushManager.subscribe()``.
"""
import base64
import logging
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from py_vapid import Vapid
from app.repository.settings import AppSettingsRepository
logger = logging.getLogger(__name__)
_cached_private_key: str = ""
_cached_public_key: str = ""
async def ensure_vapid_keys() -> tuple[str, str]:
"""Read or generate VAPID keys. Call once at startup after DB connect."""
global _cached_private_key, _cached_public_key
private, public = await AppSettingsRepository.get_vapid_keys()
if private and public:
_cached_private_key = private
_cached_public_key = public
logger.info("VAPID keys loaded from database")
return _cached_private_key, _cached_public_key
# Generate new key pair
vapid = Vapid()
vapid.generate_keys()
# Private key as base64url-encoded raw 32-byte EC scalar — the format
# that pywebpush passes to ``Vapid.from_string()``.
raw_priv = vapid.private_key.private_numbers().private_value.to_bytes(32, "big") # type: ignore[union-attr]
_cached_private_key = base64.urlsafe_b64encode(raw_priv).rstrip(b"=").decode("ascii")
# Public key as uncompressed P-256 point, base64url-encoded (no padding)
# for the browser Push API's applicationServerKey
raw_pub = vapid.public_key.public_bytes(Encoding.X962, PublicFormat.UncompressedPoint) # type: ignore[union-attr]
_cached_public_key = base64.urlsafe_b64encode(raw_pub).rstrip(b"=").decode("ascii")
await AppSettingsRepository.set_vapid_keys(_cached_private_key, _cached_public_key)
logger.info("Generated and stored new VAPID key pair")
return _cached_private_key, _cached_public_key
def get_vapid_public_key() -> str:
"""Return the cached VAPID public key (base64url). Must call ensure_vapid_keys() first."""
return _cached_public_key
def get_vapid_private_key() -> str:
"""Return the cached VAPID private key (base64url). Must call ensure_vapid_keys() first."""
return _cached_private_key
@@ -118,7 +118,7 @@ async def test_serial_device(port: str, baudrate: int, timeout: float = 3.0) ->
return True
return False
except asyncio.TimeoutError:
except TimeoutError:
logger.debug("Device %s timed out", port)
return False
except Exception as e:
@@ -192,6 +192,9 @@ class RadioManager:
if not blocking:
if self._operation_lock.locked():
raise RadioOperationBusyError(f"Radio is busy (operation: {name})")
# In single-threaded asyncio the lock cannot be acquired between the
# check above and the await below (no other coroutine runs until we
# yield). The await returns immediately for an uncontested lock.
await self._operation_lock.acquire()
else:
await self._operation_lock.acquire()
@@ -14,6 +14,7 @@ import logging
import math
import time
from contextlib import asynccontextmanager
from datetime import UTC, datetime, timedelta
from typing import Literal
from meshcore import EventType, MeshCore
@@ -21,7 +22,7 @@ from meshcore import EventType, MeshCore
from app.channel_constants import PUBLIC_CHANNEL_KEY, PUBLIC_CHANNEL_NAME
from app.config import settings
from app.event_handlers import cleanup_expired_acks, on_contact_message
from app.models import Contact, ContactUpsert
from app.models import _VALID_CONTACT_TYPES, Contact, ContactUpsert
from app.radio import RadioOperationBusyError
from app.repository import (
AmbiguousPublicKeyPrefixError,
@@ -36,14 +37,47 @@ from app.services.contact_reconciliation import (
)
from app.services.messages import create_fallback_channel_message
from app.services.radio_runtime import radio_runtime as radio_manager
from app.telemetry_interval import clamp_telemetry_interval
from app.websocket import broadcast_error, broadcast_event
logger = logging.getLogger(__name__)
DEFAULT_MAX_CHANNELS = 40
_GET_CONTACTS_TIMEOUT = 10
AdvertMode = Literal["flood", "zero_hop"]
_AUTO_ADD_OVERWRITE_OLDEST = 0x01
_RADIO_CONTACT_FAVORITE = 0x01
async def _enable_autoevict_on_radio(mc: MeshCore) -> bool:
"""Ensure the radio's AUTO_ADD_OVERWRITE_OLDEST preference bit is set."""
try:
current = await mc.commands.get_autoadd_config()
if current is None or current.type == EventType.ERROR:
logger.warning("Could not read autoadd config from radio: %s", current)
return False
current_flags = current.payload.get("config", 0)
if current_flags & _AUTO_ADD_OVERWRITE_OLDEST:
logger.debug("Radio autoevict already enabled (autoadd_config=0x%02x)", current_flags)
return True
new_flags = current_flags | _AUTO_ADD_OVERWRITE_OLDEST
result = await mc.commands.set_autoadd_config(new_flags)
if result is not None and result.type == EventType.OK:
logger.info(
"Enabled radio autoevict (autoadd_config 0x%02x -> 0x%02x)",
current_flags,
new_flags,
)
return True
else:
logger.warning("Failed to enable radio autoevict: %s", result)
return False
except Exception as exc:
logger.warning("Error enabling radio autoevict: %s", exc)
return False
def _contact_sync_debug_fields(contact: Contact) -> dict[str, object]:
"""Return key contact fields for sync failure diagnostics."""
@@ -159,10 +193,10 @@ MIN_ADVERT_INTERVAL = 3600
# Periodic telemetry collection task handle
_telemetry_collect_task: asyncio.Task | None = None
# Telemetry collection interval (8 hours)
TELEMETRY_COLLECT_INTERVAL = 8 * 3600
# Initial delay before the first telemetry collection cycle (let radio settle)
# Initial delay before the scheduler starts (let radio settle). After this,
# the loop wakes at each UTC top-of-hour and decides whether to run a cycle
# based on the user's telemetry_interval_hours preference, clamped up to
# the shortest-legal interval for the current tracked-repeater count.
TELEMETRY_COLLECT_INITIAL_DELAY = 60
# Counter to pause polling during repeater operations (supports nested pauses)
@@ -237,7 +271,7 @@ async def should_run_full_periodic_sync(mc: MeshCore) -> bool:
capacity = _effective_radio_capacity(app_settings.max_radio_contacts)
refill_target, full_sync_trigger = _compute_radio_contact_limits(capacity)
result = await mc.commands.get_contacts()
result = await mc.commands.get_contacts(timeout=_GET_CONTACTS_TIMEOUT)
if result is None or result.type == EventType.ERROR:
logger.warning("Periodic sync occupancy check failed: %s", result)
return False
@@ -428,6 +462,16 @@ async def ensure_default_channels() -> None:
async def sync_and_offload_all(mc: MeshCore) -> dict:
"""Run fast startup sync, then background contact reconcile."""
autoevict_requested = settings.load_with_autoevict
autoevict = False
if autoevict_requested:
autoevict = await _enable_autoevict_on_radio(mc)
if not autoevict:
logger.warning(
"Autoevict requested but unavailable; falling back to snapshot-based "
"background contact reconcile"
)
# Contact on_radio is legacy/stale metadata. Clear it during the offload/reload
# cycle so old rows stop claiming radio residency we do not actively track.
@@ -439,9 +483,25 @@ async def sync_and_offload_all(mc: MeshCore) -> dict:
# Ensure default channels exist
await ensure_default_channels()
snapshot_failed = "error" in contacts_result
if snapshot_failed and not autoevict:
logger.warning(
"Radio contact snapshot failed — attempting best-effort contact "
"loading without a full picture of what's already on the radio"
)
broadcast_error(
"Could not enumerate radio contacts",
"Loading favorites and recent contacts on a best-effort basis — "
"some adds may be redundant or fail if the radio's contact table "
"is already full. Set MESHCORE_LOAD_WITH_AUTOEVICT=true for more "
"reliable loading without needing to read the radio first. "
"See 'Contact Loading Issues' in the Advanced Setup documentation.",
)
start_background_contact_reconciliation(
initial_radio_contacts=contacts_result.get("radio_contacts", {}),
expected_mc=mc,
autoevict=autoevict,
)
return {
@@ -459,9 +519,8 @@ async def drain_pending_messages(mc: MeshCore) -> int:
Returns the count of messages retrieved.
"""
count = 0
max_iterations = 100 # Safety limit
for _ in range(max_iterations):
while True:
try:
result = await mc.commands.get_msg(timeout=2.0)
@@ -480,7 +539,7 @@ async def drain_pending_messages(mc: MeshCore) -> int:
# Small delay between fetches
await asyncio.sleep(0.1)
except asyncio.TimeoutError:
except TimeoutError:
break
except Exception as e:
logger.warning("Error draining messages: %s", e, exc_info=True)
@@ -518,7 +577,7 @@ async def poll_for_messages(mc: MeshCore) -> int:
# If we got a message, there might be more - drain them
count += await drain_pending_messages(mc)
except asyncio.TimeoutError:
except TimeoutError:
pass
except Exception as e:
logger.warning("Message poll exception: %s", e, exc_info=True)
@@ -853,7 +912,7 @@ async def _attempt_clock_wraparound(mc: MeshCore, *, now: int, observed_radio_ti
return False
async def sync_radio_time(mc: MeshCore) -> bool:
async def sync_radio_time(mc: MeshCore, *, warn_on_failure: bool = True) -> bool:
"""Sync the radio's clock with the system time.
The firmware only accepts forward time adjustments (new >= current).
@@ -868,9 +927,15 @@ async def sync_radio_time(mc: MeshCore) -> bool:
only once; if it doesn't help (hardware RTC persists the wrong time),
the skew is logged as a warning on subsequent syncs.
``warn_on_failure`` controls log severity for rejected/failed sync attempts.
Startup and reconnect setup should leave this enabled so operators see the
initial skew problem. Periodic maintenance syncs pass ``False`` to avoid
repeating the same warning every few minutes after startup.
Returns True if the radio accepted the new time, False otherwise.
"""
global _clock_reboot_attempted # noqa: PLW0603
log_failure = logger.warning if warn_on_failure else logger.debug
try:
now = int(time.time())
preflight_radio_time: int | None = None
@@ -899,7 +964,7 @@ async def sync_radio_time(mc: MeshCore) -> bool:
if radio_time is not None:
delta = radio_time - now
logger.warning(
log_failure(
"Radio rejected time sync: radio clock is %+d seconds "
"(%+.1f hours) from system time (radio=%d, system=%d).",
delta,
@@ -909,7 +974,7 @@ async def sync_radio_time(mc: MeshCore) -> bool:
)
else:
delta = None
logger.warning(
log_failure(
"Radio rejected time sync (set_time returned %s) "
"and get_time query failed; cannot determine clock skew.",
result.type,
@@ -934,14 +999,14 @@ async def sync_radio_time(mc: MeshCore) -> bool:
# reboot, allowing the next post-connect sync to succeed.
if not _clock_reboot_attempted and (delta is None or delta > 30):
_clock_reboot_attempted = True
logger.warning(
log_failure(
"Rebooting radio to reset clock skew. Boards with a "
"volatile RTC will accept the correct time after restart."
)
try:
await mc.commands.reboot()
except Exception:
logger.warning("Reboot command failed", exc_info=True)
log_failure("Reboot command failed", exc_info=True)
elif _clock_reboot_attempted:
logger.debug(
"Clock skew persists after reboot (hardware RTC); ignoring until next session."
@@ -949,7 +1014,7 @@ async def sync_radio_time(mc: MeshCore) -> bool:
return False
except Exception as e:
logger.warning("Failed to sync radio time: %s", e, exc_info=True)
log_failure("Failed to sync radio time: %s", e, exc_info=True)
return False
@@ -969,7 +1034,7 @@ async def _periodic_sync_loop():
) as mc:
if await should_run_full_periodic_sync(mc):
await sync_and_offload_all(mc)
await sync_radio_time(mc)
await sync_radio_time(mc, warn_on_failure=False)
except RadioOperationBusyError:
logger.debug("Skipping periodic sync: radio busy")
except asyncio.CancelledError:
@@ -1038,7 +1103,7 @@ async def sync_contacts_from_radio(mc: MeshCore) -> dict:
synced = 0
try:
result = await mc.commands.get_contacts()
result = await mc.commands.get_contacts(timeout=_GET_CONTACTS_TIMEOUT)
if result is None or result.type == EventType.ERROR:
logger.error(
@@ -1070,8 +1135,14 @@ async def sync_contacts_from_radio(mc: MeshCore) -> dict:
logger.debug("Synced %d contacts from radio snapshot", synced)
# Import radio-favorited contacts into app favorites
radio_fav_keys = [pk for pk, data in contacts.items() if data.get("flags", 0) & 0x01]
# Import radio-favorited contacts into app favorites.
# Only trust the favorite bit on contacts with a valid type (0-4);
# garbled radio data can have junk flags with bit 0 set.
radio_fav_keys = [
pk
for pk, data in contacts.items()
if data.get("flags", 0) & 0x01 and data.get("type", -1) in _VALID_CONTACT_TYPES
]
if radio_fav_keys:
try:
imported = 0
@@ -1095,12 +1166,24 @@ async def _reconcile_radio_contacts_in_background(
*,
initial_radio_contacts: dict[str, dict],
expected_mc: MeshCore,
autoevict: bool = False,
) -> None:
"""Converge radio contacts toward the desired favorites+recents working set."""
"""Converge radio contacts toward the desired favorites+recents working set.
When *autoevict* is ``True`` the removal phase is skipped entirely and the
desired working set is blind-refreshed. Re-adding the full desired list
refreshes each contact's recency on supported firmware, so one successful
full pass converges the radio toward the desired working set without relying
on a stale contact snapshot.
"""
radio_contacts = dict(initial_radio_contacts)
removed = 0
loaded = 0
failed = 0
table_full = False
autoevict_next_index = 0
autoevict_full_pass_retries = 0
_MAX_AUTOEVICT_RETRIES = 3
try:
while True:
@@ -1108,18 +1191,32 @@ async def _reconcile_radio_contacts_in_background(
logger.info("Stopping background contact reconcile: radio transport changed")
break
# Pre-lock snapshot for quick-exit checks; authoritative list is
# re-fetched inside the radio lock below.
selected_contacts = await get_contacts_selected_for_radio_sync()
desired_fill_contacts = [
contact for contact in selected_contacts if len(contact.public_key) >= 64
]
if autoevict:
if not desired_fill_contacts:
logger.info(
"Background contact blind fill complete: no desired contacts selected"
)
break
if autoevict_next_index >= len(desired_fill_contacts):
autoevict_next_index = 0
desired_contacts = {
contact.public_key.lower(): contact
for contact in selected_contacts
if len(contact.public_key) >= 64
contact.public_key.lower(): contact for contact in desired_fill_contacts
}
removable_keys = [key for key in radio_contacts if key not in desired_contacts]
removable_keys = (
[] if autoevict else [key for key in radio_contacts if key not in desired_contacts]
)
missing_contacts = [
contact for key, contact in desired_contacts.items() if key not in radio_contacts
]
if not removable_keys and not missing_contacts:
if not autoevict and not removable_keys and not missing_contacts:
logger.info(
"Background contact reconcile complete: %d contacts on radio working set",
len(radio_contacts),
@@ -1127,6 +1224,8 @@ async def _reconcile_radio_contacts_in_background(
break
progressed = False
autoevict_pass_complete = False
autoevict_pass_failed = False
try:
async with radio_manager.radio_operation(
"background_contact_reconcile",
@@ -1140,100 +1239,232 @@ async def _reconcile_radio_contacts_in_background(
budget = CONTACT_RECONCILE_BATCH_SIZE
selected_contacts = await get_contacts_selected_for_radio_sync()
desired_fill_contacts = [
contact for contact in selected_contacts if len(contact.public_key) >= 64
]
if autoevict and autoevict_next_index >= len(desired_fill_contacts):
autoevict_next_index = 0
desired_contacts = {
contact.public_key.lower(): contact
for contact in selected_contacts
if len(contact.public_key) >= 64
contact.public_key.lower(): contact for contact in desired_fill_contacts
}
for public_key in list(radio_contacts):
if budget <= 0:
break
if public_key in desired_contacts:
continue
remove_payload = (
mc.get_contact_by_key_prefix(public_key[:12])
or radio_contacts.get(public_key)
or {"public_key": public_key}
)
try:
remove_result = await mc.commands.remove_contact(remove_payload)
except Exception as exc:
failed += 1
budget -= 1
logger.warning(
"Error removing contact %s during background reconcile: %s",
public_key[:12],
exc,
)
continue
budget -= 1
if remove_result.type == EventType.OK:
radio_contacts.pop(public_key, None)
_evict_removed_contact_from_library_cache(mc, public_key)
removed += 1
progressed = True
else:
failed += 1
logger.warning(
"Failed to remove contact %s during background reconcile: %s",
public_key[:12],
remove_result.payload,
)
if budget > 0:
for public_key, contact in desired_contacts.items():
if not autoevict:
for public_key in list(radio_contacts):
if budget <= 0:
break
if public_key in radio_contacts:
continue
if mc.get_contact_by_key_prefix(public_key[:12]):
radio_contacts[public_key] = {"public_key": public_key}
if public_key in desired_contacts:
continue
remove_payload = (
mc.get_contact_by_key_prefix(public_key[:12])
or radio_contacts.get(public_key)
or {"public_key": public_key}
)
try:
add_payload = contact.to_radio_dict()
add_result = await mc.commands.add_contact(add_payload)
remove_result = await mc.commands.remove_contact(remove_payload)
except Exception as exc:
failed += 1
budget -= 1
logger.warning(
"Error adding contact %s during background reconcile: %s",
"Error removing contact %s during background reconcile: %s",
public_key[:12],
exc,
exc_info=True,
)
continue
budget -= 1
if add_result.type == EventType.OK:
radio_contacts[public_key] = add_payload
loaded += 1
if remove_result.type == EventType.OK:
radio_contacts.pop(public_key, None)
_evict_removed_contact_from_library_cache(mc, public_key)
removed += 1
progressed = True
else:
failed += 1
reason = add_result.payload
hint = ""
if reason is None:
hint = (
" (no response from radio — if this repeats, check for "
"serial port contention from another process or try a "
"power cycle)"
)
logger.warning(
"Failed to add contact %s during background reconcile: %s%s",
"Failed to remove contact %s during background reconcile: %s",
public_key[:12],
reason,
hint,
remove_result.payload,
)
if budget > 0:
if autoevict:
# Budget is consumed by the slice bound rather than
# per-operation decrement — autoevict skips the
# removal phase so the full budget is always available.
batch_contacts = desired_fill_contacts[
autoevict_next_index : autoevict_next_index + budget
]
processed_contacts = 0
for contact in batch_contacts:
public_key = contact.public_key.lower()
try:
add_payload = contact.to_radio_dict()
# In autoevict mode, app-loaded contacts should
# remain evictable by the radio even if the
# stored contact record carries the favorite bit.
add_payload["flags"] = (
int(add_payload.get("flags", 0)) & ~_RADIO_CONTACT_FAVORITE
)
add_result = await mc.commands.add_contact(add_payload)
except Exception as exc:
failed += 1
logger.warning(
"Error blind-filling contact %s during background reconcile: %s",
public_key[:12],
exc,
exc_info=True,
)
autoevict_pass_failed = True
processed_contacts += 1
continue
if add_result.type == EventType.OK:
radio_contacts[public_key] = add_payload
loaded += 1
progressed = True
else:
failed += 1
autoevict_pass_failed = True
reason = add_result.payload
if isinstance(reason, dict) and reason.get("error_code") == 3:
logger.warning(
"Radio contact table full — stopping "
"contact reconcile (loaded %d this cycle)",
loaded,
)
table_full = True
break
hint = ""
if reason is None:
hint = (
" (no response from radio — if this repeats, check for "
"serial port contention from another process or try a "
"power cycle)"
)
logger.warning(
"Failed to blind-fill contact %s during background reconcile: %s%s",
public_key[:12],
reason,
hint,
)
processed_contacts += 1
autoevict_next_index += processed_contacts
autoevict_pass_complete = autoevict_next_index >= len(
desired_fill_contacts
)
else:
for public_key, contact in desired_contacts.items():
if budget <= 0:
break
if public_key in radio_contacts:
continue
if mc.get_contact_by_key_prefix(public_key[:12]):
radio_contacts[public_key] = {"public_key": public_key}
continue
try:
add_payload = contact.to_radio_dict()
add_result = await mc.commands.add_contact(add_payload)
except Exception as exc:
failed += 1
budget -= 1
logger.warning(
"Error adding contact %s during background reconcile: %s",
public_key[:12],
exc,
exc_info=True,
)
continue
budget -= 1
if add_result.type == EventType.OK:
radio_contacts[public_key] = add_payload
loaded += 1
progressed = True
else:
failed += 1
reason = add_result.payload
if isinstance(reason, dict) and reason.get("error_code") == 3:
logger.warning(
"Radio contact table full — stopping "
"contact reconcile (loaded %d this cycle)",
loaded,
)
table_full = True
break
hint = ""
if reason is None:
hint = (
" (no response from radio — if this repeats, check for "
"serial port contention from another process or try a "
"power cycle)"
)
logger.warning(
"Failed to add contact %s during background reconcile: %s%s",
public_key[:12],
reason,
hint,
)
except RadioOperationBusyError:
logger.debug("Background contact reconcile yielding: radio busy")
await asyncio.sleep(CONTACT_RECONCILE_BUSY_BACKOFF_SECONDS)
continue
if table_full:
if autoevict:
logger.error(
"We're expecting the radio to be in AUTO_ADD_OVERWRITE_OLDEST mode, "
"so a full-table error means we have no idea what is going on with "
"this radio; it is misbehaving. You should consider DM auto-acking "
"to be unreliable and/or not working for this radio. Sending and "
"receiving messages are not impacted by this error unless other "
"things are broken on your radio."
)
broadcast_error(
"Could not load all desired contacts onto the radio for auto-DM ack",
"Despite having auto-evict enabled, we got a contact-table-full error "
"from your radio. DM auto-ack is likely unavailable.",
)
else:
normal_table_full_message = (
"The radio's contact table is full. Clearing your radio contacts "
"using another client, lowering your contact fill target in "
"settings, or setting MESHCORE_LOAD_WITH_AUTOEVICT=true may "
"relieve this. See 'Contact Loading Issues' in the Advanced "
"README.md"
)
logger.error(
"Contact reconcile hit TABLE_FULL. %s",
normal_table_full_message,
)
broadcast_error(
"Could not load all desired contacts onto the radio for auto-DM ack",
normal_table_full_message,
)
break
if autoevict and autoevict_pass_complete:
if autoevict_pass_failed:
autoevict_full_pass_retries += 1
if autoevict_full_pass_retries >= _MAX_AUTOEVICT_RETRIES:
logger.warning(
"Background contact blind fill giving up after %d full passes "
"with persistent failures (loaded %d, failed %d)",
autoevict_full_pass_retries,
loaded,
failed,
)
break
autoevict_next_index = 0
else:
logger.info(
"Background contact blind fill complete: refreshed %d desired contacts",
len(desired_fill_contacts),
)
break
await asyncio.sleep(CONTACT_RECONCILE_YIELD_SECONDS)
if not progressed:
continue
@@ -1256,6 +1487,7 @@ def start_background_contact_reconciliation(
*,
initial_radio_contacts: dict[str, dict],
expected_mc: MeshCore,
autoevict: bool = False,
) -> None:
"""Start or replace the background contact reconcile task for the current radio."""
global _contact_reconcile_task
@@ -1267,11 +1499,13 @@ def start_background_contact_reconciliation(
_reconcile_radio_contacts_in_background(
initial_radio_contacts=initial_radio_contacts,
expected_mc=expected_mc,
autoevict=autoevict,
)
)
logger.info(
"Started background contact reconcile for %d radio contact(s)",
"Started background contact reconcile for %d radio contact(s)%s",
len(initial_radio_contacts),
" (autoevict mode)" if autoevict else "",
)
@@ -1289,7 +1523,13 @@ async def stop_background_contact_reconciliation() -> None:
async def get_contacts_selected_for_radio_sync() -> list[Contact]:
"""Return the contacts that would be loaded onto the radio right now."""
"""Return the contacts that would be loaded onto the radio right now.
Fill order:
1. Favorites (up to full capacity)
2. Most recently DM-active non-repeaters (sent or received, up to 80% refill target)
3. Most recently advertised non-repeaters (up to 80% refill target)
"""
app_settings = await AppSettingsRepository.get()
max_contacts = _effective_radio_capacity(app_settings.max_radio_contacts)
refill_target, _full_sync_trigger = _compute_radio_contact_limits(max_contacts)
@@ -1309,7 +1549,7 @@ async def get_contacts_selected_for_radio_sync() -> list[Contact]:
break
if len(selected_contacts) < refill_target:
for contact in await ContactRepository.get_recently_contacted_non_repeaters(
for contact in await ContactRepository.get_recently_dm_active_non_repeaters(
limit=max_contacts
):
key = contact.public_key.lower()
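As a rough, illustrative sketch of the selection described above (favorites up to full capacity, then DM-active, then advertised, with non-favorite fill capped at the refill target), the dedupe-and-cap pattern boils down to something like the following; the function and argument names here are illustrative, not taken from the codebase:

def select_for_radio_sync(favorites, dm_active, advertised, max_contacts, refill_target):
    # Illustrative pure-function version; contacts only need a .public_key attribute.
    selected, seen = [], set()

    def take(candidates, cap):
        for contact in candidates:
            if len(selected) >= cap:
                break
            key = contact.public_key.lower()
            if key in seen:
                continue
            seen.add(key)
            selected.append(contact)

    take(favorites, max_contacts)    # 1. favorites, up to full capacity
    take(dm_active, refill_target)   # 2. recently DM-active non-repeaters
    take(advertised, refill_target)  # 3. recently advertised non-repeaters
    return selected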
@@ -1348,8 +1588,8 @@ async def _sync_contacts_to_radio_inner(mc: MeshCore) -> dict:
Fill order is:
1. Favorite contacts
2. Most recently interacted-with non-repeaters
3. Most recently advert-heard non-repeaters without interaction history
2. Most recently DM-active non-repeaters (sent or received)
3. Most recently advert-heard non-repeaters
Favorite contacts are always reloaded first, up to the configured capacity.
Additional non-favorite fill stops at the refill target (80% of capacity).
@@ -1483,8 +1723,8 @@ async def sync_recent_contacts_to_radio(force: bool = False, mc: MeshCore | None
"""
Load contacts to the radio for DM ACK support.
Fill order is favorites, then recently contacted non-repeaters,
then recently advert-heard non-repeaters. Favorites are always reloaded
Fill order is favorites, then recently DM-active non-repeaters (sent or
received), then recently advert-heard non-repeaters. Favorites are always reloaded
up to the configured capacity; additional non-favorite fill stops at the
80% refill target.
Only runs at most once every CONTACT_SYNC_THROTTLE_SECONDS unless forced.
@@ -1578,10 +1818,40 @@ async def _collect_repeater_telemetry(mc: MeshCore, contact: Contact) -> bool:
"full_events": status.get("full_evts", 0),
}
# Best-effort LPP sensor fetch — failure here does not fail the overall
# collection; status telemetry is still recorded without sensor data.
try:
lpp_raw = await mc.commands.req_telemetry_sync(
contact.public_key, timeout=10, min_timeout=5
)
if lpp_raw:
lpp_sensors = []
for entry in lpp_raw:
value = entry.get("value", 0)
# Skip multi-value sensors (GPS, accelerometer, etc.)
if isinstance(value, dict):
continue
lpp_sensors.append(
{
"channel": entry.get("channel", 0),
"type_name": str(entry.get("type", "unknown")),
"value": value,
}
)
if lpp_sensors:
data["lpp_sensors"] = lpp_sensors
except Exception as e:
logger.debug(
"Telemetry collect: LPP sensor fetch failed for %s (non-fatal): %s",
contact.public_key[:12],
e,
)
try:
timestamp = int(time.time())
await RepeaterTelemetryRepository.record(
public_key=contact.public_key,
timestamp=int(time.time()),
timestamp=timestamp,
data=data,
)
logger.info(
@@ -1589,6 +1859,21 @@ async def _collect_repeater_telemetry(mc: MeshCore, contact: Contact) -> bool:
contact.name or contact.public_key[:12],
contact.public_key[:12],
)
# Dispatch to fanout modules (e.g. HA MQTT discovery)
from app.fanout.manager import fanout_manager
asyncio.create_task(
fanout_manager.broadcast_telemetry(
{
"public_key": contact.public_key,
"name": contact.name or contact.public_key[:12],
"timestamp": timestamp,
**data,
}
)
)
return True
except Exception as e:
logger.warning(
@@ -1599,62 +1884,122 @@ async def _collect_repeater_telemetry(mc: MeshCore, contact: Contact) -> bool:
return False
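For orientation, the payload handed to the fanout broadcast above ends up shaped roughly like this; the values are made up for illustration, and only fields visible in this diff are shown:

example_telemetry_payload = {
    "public_key": "a1b2c3d4e5f6",  # full repeater public key (truncated here for illustration)
    "name": "Hilltop Repeater",
    "timestamp": 1760000000,
    "full_events": 0,  # from the repeater status block
    "lpp_sensors": [   # present only when single-value LPP sensors were returned
        {"channel": 1, "type_name": "temperature", "value": 21.5},
    ],
}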
async def _run_telemetry_cycle() -> None:
"""Collect one telemetry sample from every tracked repeater."""
if not radio_manager.is_connected:
logger.debug("Telemetry collect: radio not connected, skipping cycle")
return
app_settings = await AppSettingsRepository.get()
tracked = app_settings.tracked_telemetry_repeaters
if not tracked:
return
logger.info("Telemetry collect: starting cycle for %d repeater(s)", len(tracked))
collected = 0
for pub_key in tracked:
contact = await ContactRepository.get_by_key(pub_key)
if not contact or contact.type != 2:
logger.debug(
"Telemetry collect: skipping %s (not found or not repeater)",
pub_key[:12],
)
continue
try:
async with radio_manager.radio_operation(
"telemetry_collect",
blocking=False,
suspend_auto_fetch=True,
) as mc:
if await _collect_repeater_telemetry(mc, contact):
collected += 1
except RadioOperationBusyError:
logger.debug(
"Telemetry collect: radio busy, skipping %s",
pub_key[:12],
)
logger.info(
"Telemetry collect: cycle complete, %d/%d successful",
collected,
len(tracked),
)
async def _sleep_until_next_utc_top_of_hour() -> None:
"""Sleep until the next UTC top-of-hour (or a minimum of 1 second)."""
now = datetime.now(UTC)
next_top = now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
delay = (next_top - now).total_seconds()
if delay < 1:
delay = 1
await asyncio.sleep(delay)
async def _maybe_run_scheduled_cycle(now: datetime) -> None:
"""Evaluate the modulo gate for the given UTC time and run a cycle if due.
Factored out of the loop so we can also invoke it immediately after the
post-boot initial delay; otherwise a restart within the initial-delay
window before a scheduled boundary would carry the task past that boundary
and skip a due cycle (for users on a 24-hour cadence, that's a full day of
missed telemetry).
"""
app_settings = await AppSettingsRepository.get()
tracked_count = len(app_settings.tracked_telemetry_repeaters)
if tracked_count == 0:
return
effective_hours = clamp_telemetry_interval(app_settings.telemetry_interval_hours, tracked_count)
if effective_hours <= 0:
return
if now.hour % effective_hours != 0:
return
await _run_telemetry_cycle()
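As a self-contained sketch of the gate (the interval clamp itself lives in clamp_telemetry_interval and is not reproduced here), something along these lines captures the behavior:

from datetime import datetime, timezone

def cycle_due(now_utc: datetime, interval_hours: int) -> bool:
    # Fires only at UTC top-of-hour boundaries that land on the interval:
    # interval 8 -> 00:00, 08:00, 16:00 UTC; interval 1 -> every hour.
    return interval_hours > 0 and now_utc.hour % interval_hours == 0

# e.g. with an effective 8-hour interval:
# cycle_due(datetime(2026, 4, 20, 16, 0, tzinfo=timezone.utc), 8) -> True
# cycle_due(datetime(2026, 4, 20, 17, 0, tzinfo=timezone.utc), 8) -> False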
async def _telemetry_collect_loop() -> None:
"""Background task that collects telemetry from tracked repeaters every 8 hours.
"""Background task that runs tracked-repeater telemetry collection.
Runs a first cycle after a short initial delay (so newly tracked repeaters
get a sample promptly), then sleeps the full interval between subsequent cycles.
After an initial post-boot delay we evaluate the modulo gate once
(covers the edge case where the initial delay crossed a scheduled
boundary on restart). Then we wake at every UTC top-of-hour and
evaluate the gate again. A cycle runs only when
``current_utc_hour % effective_interval_hours == 0``, where the
effective interval is the user preference clamped up to the shortest
legal interval for the current tracked-repeater count. This keeps the
total daily check count bounded at ``DAILY_CHECK_CEILING`` (24).
Acquires the radio lock per-repeater (non-blocking) so manual operations can
The loop never updates the stored user preference. If the user picks a
short interval and then adds repeaters that make it illegal, they keep
their pick stored and we silently use the clamped value until they drop
repeaters.
Radio lock is acquired per-repeater (non-blocking) so manual ops can
interleave. Failures are logged and skipped.
"""
first_run = True
try:
await asyncio.sleep(TELEMETRY_COLLECT_INITIAL_DELAY)
except asyncio.CancelledError:
logger.info("Telemetry collect task cancelled before initial delay")
return
# Post-boot boundary check: if the delay carried us into a matching hour
# (or we booted exactly at a matching hour), run now rather than waiting
# another full cycle.
try:
await _maybe_run_scheduled_cycle(datetime.now(UTC))
except asyncio.CancelledError:
logger.info("Telemetry collect task cancelled after initial delay")
return
except Exception as e:
logger.error("Error in post-boot telemetry check: %s", e, exc_info=True)
while True:
try:
delay = TELEMETRY_COLLECT_INITIAL_DELAY if first_run else TELEMETRY_COLLECT_INTERVAL
await asyncio.sleep(delay)
first_run = False
if not radio_manager.is_connected:
logger.debug("Telemetry collect: radio not connected, skipping cycle")
continue
app_settings = await AppSettingsRepository.get()
tracked = app_settings.tracked_telemetry_repeaters
if not tracked:
continue
logger.info("Telemetry collect: starting cycle for %d repeater(s)", len(tracked))
collected = 0
for pub_key in tracked:
contact = await ContactRepository.get_by_key(pub_key)
if not contact or contact.type != 2:
logger.debug(
"Telemetry collect: skipping %s (not found or not repeater)",
pub_key[:12],
)
continue
try:
async with radio_manager.radio_operation(
"telemetry_collect",
blocking=False,
suspend_auto_fetch=True,
) as mc:
if await _collect_repeater_telemetry(mc, contact):
collected += 1
except RadioOperationBusyError:
logger.debug(
"Telemetry collect: radio busy, skipping %s",
pub_key[:12],
)
logger.info(
"Telemetry collect: cycle complete, %d/%d successful",
collected,
len(tracked),
)
await _sleep_until_next_utc_top_of_hour()
await _maybe_run_scheduled_cycle(datetime.now(UTC))
except asyncio.CancelledError:
logger.info("Telemetry collect task cancelled")
@@ -1668,10 +2013,7 @@ def start_telemetry_collect() -> None:
global _telemetry_collect_task
if _telemetry_collect_task is None or _telemetry_collect_task.done():
_telemetry_collect_task = asyncio.create_task(_telemetry_collect_loop())
logger.info(
"Started periodic telemetry collection (interval: %ds)",
TELEMETRY_COLLECT_INTERVAL,
)
logger.info("Started periodic telemetry collection (UTC-hourly scheduler)")
async def stop_telemetry_collect() -> None:
+82 -60
@@ -8,31 +8,33 @@ class ChannelRepository:
@staticmethod
async def upsert(key: str, name: str, is_hashtag: bool = False, on_radio: bool = False) -> None:
"""Upsert a channel. Key is 32-char hex string."""
await db.conn.execute(
"""
INSERT INTO channels (key, name, is_hashtag, on_radio, flood_scope_override)
VALUES (?, ?, ?, ?, NULL)
ON CONFLICT(key) DO UPDATE SET
name = excluded.name,
is_hashtag = excluded.is_hashtag,
on_radio = excluded.on_radio
""",
(key.upper(), name, is_hashtag, on_radio),
)
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute(
"""
INSERT INTO channels (key, name, is_hashtag, on_radio, flood_scope_override)
VALUES (?, ?, ?, ?, NULL)
ON CONFLICT(key) DO UPDATE SET
name = excluded.name,
is_hashtag = excluded.is_hashtag,
on_radio = excluded.on_radio
""",
(key.upper(), name, is_hashtag, on_radio),
):
pass
@staticmethod
async def get_by_key(key: str) -> Channel | None:
"""Get a channel by its key (32-char hex string)."""
cursor = await db.conn.execute(
"""
SELECT key, name, is_hashtag, on_radio, flood_scope_override, path_hash_mode_override, last_read_at, favorite
FROM channels
WHERE key = ?
""",
(key.upper(),),
)
row = await cursor.fetchone()
async with db.readonly() as conn:
async with conn.execute(
"""
SELECT key, name, is_hashtag, on_radio, flood_scope_override, path_hash_mode_override, last_read_at, favorite, muted
FROM channels
WHERE key = ?
""",
(key.upper(),),
) as cursor:
row = await cursor.fetchone()
if row:
return Channel(
key=row["key"],
@@ -43,19 +45,21 @@ class ChannelRepository:
path_hash_mode_override=row["path_hash_mode_override"],
last_read_at=row["last_read_at"],
favorite=bool(row["favorite"]),
muted=bool(row["muted"]),
)
return None
@staticmethod
async def get_all() -> list[Channel]:
cursor = await db.conn.execute(
"""
SELECT key, name, is_hashtag, on_radio, flood_scope_override, path_hash_mode_override, last_read_at, favorite
FROM channels
ORDER BY name
"""
)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(
"""
SELECT key, name, is_hashtag, on_radio, flood_scope_override, path_hash_mode_override, last_read_at, favorite, muted
FROM channels
ORDER BY name
"""
) as cursor:
rows = await cursor.fetchall()
return [
Channel(
key=row["key"],
@@ -66,6 +70,7 @@ class ChannelRepository:
path_hash_mode_override=row["path_hash_mode_override"],
last_read_at=row["last_read_at"],
favorite=bool(row["favorite"]),
muted=bool(row["muted"]),
)
for row in rows
]
@@ -73,21 +78,34 @@ class ChannelRepository:
@staticmethod
async def set_favorite(key: str, value: bool) -> bool:
"""Set or clear the favorite flag for a channel. Returns True if row was found."""
cursor = await db.conn.execute(
"UPDATE channels SET favorite = ? WHERE key = ?",
(1 if value else 0, key.upper()),
)
await db.conn.commit()
return cursor.rowcount > 0
async with db.tx() as conn:
async with conn.execute(
"UPDATE channels SET favorite = ? WHERE key = ?",
(1 if value else 0, key.upper()),
) as cursor:
rowcount = cursor.rowcount
return rowcount > 0
@staticmethod
async def set_muted(key: str, value: bool) -> bool:
"""Set or clear the muted flag for a channel. Returns True if row was found."""
async with db.tx() as conn:
async with conn.execute(
"UPDATE channels SET muted = ? WHERE key = ?",
(1 if value else 0, key.upper()),
) as cursor:
rowcount = cursor.rowcount
return rowcount > 0
@staticmethod
async def delete(key: str) -> None:
"""Delete a channel by key."""
await db.conn.execute(
"DELETE FROM channels WHERE key = ?",
(key.upper(),),
)
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute(
"DELETE FROM channels WHERE key = ?",
(key.upper(),),
):
pass
@staticmethod
async def update_last_read_at(key: str, timestamp: int | None = None) -> bool:
@@ -96,35 +114,39 @@ class ChannelRepository:
Returns True if a row was updated, False if channel not found.
"""
ts = timestamp if timestamp is not None else int(time.time())
cursor = await db.conn.execute(
"UPDATE channels SET last_read_at = ? WHERE key = ?",
(ts, key.upper()),
)
await db.conn.commit()
return cursor.rowcount > 0
async with db.tx() as conn:
async with conn.execute(
"UPDATE channels SET last_read_at = ? WHERE key = ?",
(ts, key.upper()),
) as cursor:
rowcount = cursor.rowcount
return rowcount > 0
@staticmethod
async def update_flood_scope_override(key: str, flood_scope_override: str | None) -> bool:
"""Set or clear a channel's flood-scope override."""
cursor = await db.conn.execute(
"UPDATE channels SET flood_scope_override = ? WHERE key = ?",
(flood_scope_override, key.upper()),
)
await db.conn.commit()
return cursor.rowcount > 0
async with db.tx() as conn:
async with conn.execute(
"UPDATE channels SET flood_scope_override = ? WHERE key = ?",
(flood_scope_override, key.upper()),
) as cursor:
rowcount = cursor.rowcount
return rowcount > 0
@staticmethod
async def update_path_hash_mode_override(key: str, path_hash_mode_override: int | None) -> bool:
"""Set or clear a channel's path hash mode override."""
cursor = await db.conn.execute(
"UPDATE channels SET path_hash_mode_override = ? WHERE key = ?",
(path_hash_mode_override, key.upper()),
)
await db.conn.commit()
return cursor.rowcount > 0
async with db.tx() as conn:
async with conn.execute(
"UPDATE channels SET path_hash_mode_override = ? WHERE key = ?",
(path_hash_mode_override, key.upper()),
) as cursor:
rowcount = cursor.rowcount
return rowcount > 0
@staticmethod
async def mark_all_read(timestamp: int) -> None:
"""Mark all channels as read at the given timestamp."""
await db.conn.execute("UPDATE channels SET last_read_at = ?", (timestamp,))
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute("UPDATE channels SET last_read_at = ?", (timestamp,)):
pass
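For readers unfamiliar with the pattern used throughout these repositories: db.tx() and db.readonly() appear to be async context managers that hand out a connection and handle commit/rollback on exit. A hypothetical minimal shape, purely for orientation and not the project's actual implementation, might look like:

import contextlib
import aiosqlite

class Database:
    def __init__(self, conn: aiosqlite.Connection):
        self.conn = conn

    @contextlib.asynccontextmanager
    async def tx(self):
        # Write transaction: commit on success, roll back on error.
        try:
            yield self.conn
            await self.conn.commit()
        except Exception:
            await self.conn.rollback()
            raise

    @contextlib.asynccontextmanager
    async def readonly(self):
        # Read-only access; nothing to commit.
        yield self.conn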
+467 -349
@@ -61,66 +61,72 @@ class ContactRepository:
)
)
async with db.tx() as conn:
    async with conn.execute(
        """
        INSERT INTO contacts (public_key, name, type, flags, direct_path, direct_path_len,
                              direct_path_hash_mode, direct_path_updated_at,
                              route_override_path, route_override_len,
                              route_override_hash_mode,
                              last_advert, lat, lon, last_seen,
                              on_radio, last_contacted, first_seen)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        ON CONFLICT(public_key) DO UPDATE SET
            name = COALESCE(excluded.name, contacts.name),
            type = CASE WHEN excluded.type = 0 THEN contacts.type ELSE excluded.type END,
            flags = excluded.flags,
            direct_path = COALESCE(excluded.direct_path, contacts.direct_path),
            direct_path_len = COALESCE(excluded.direct_path_len, contacts.direct_path_len),
            direct_path_hash_mode = COALESCE(
                excluded.direct_path_hash_mode, contacts.direct_path_hash_mode
            ),
            direct_path_updated_at = COALESCE(
                excluded.direct_path_updated_at, contacts.direct_path_updated_at
            ),
            route_override_path = COALESCE(
                excluded.route_override_path, contacts.route_override_path
            ),
            route_override_len = COALESCE(
                excluded.route_override_len, contacts.route_override_len
            ),
            route_override_hash_mode = COALESCE(
                excluded.route_override_hash_mode, contacts.route_override_hash_mode
            ),
            last_advert = COALESCE(excluded.last_advert, contacts.last_advert),
            lat = COALESCE(excluded.lat, contacts.lat),
            lon = COALESCE(excluded.lon, contacts.lon),
            last_seen = CASE
                WHEN excluded.last_seen IS NULL THEN contacts.last_seen
                WHEN contacts.last_seen IS NULL THEN excluded.last_seen
                WHEN excluded.last_seen > contacts.last_seen THEN excluded.last_seen
                ELSE contacts.last_seen
            END,
            on_radio = COALESCE(excluded.on_radio, contacts.on_radio),
            last_contacted = COALESCE(excluded.last_contacted, contacts.last_contacted),
            first_seen = COALESCE(contacts.first_seen, excluded.first_seen)
        """,
        (
            contact_row.public_key.lower(),
            contact_row.name,
            contact_row.type,
            contact_row.flags,
            direct_path,
            direct_path_len,
            direct_path_hash_mode,
            contact_row.direct_path_updated_at,
            route_override_path,
            route_override_len,
            route_override_hash_mode,
            contact_row.last_advert,
            contact_row.lat,
            contact_row.lon,
            contact_row.last_seen,
            contact_row.on_radio,
            contact_row.last_contacted,
            contact_row.first_seen,
        ),
    ):
        pass
@staticmethod
def _row_to_contact(row) -> Contact:
@@ -178,10 +184,11 @@ class ContactRepository:
@staticmethod
async def get_by_key(public_key: str) -> Contact | None:
cursor = await db.conn.execute(
"SELECT * FROM contacts WHERE public_key = ?", (public_key.lower(),)
)
row = await cursor.fetchone()
async with db.readonly() as conn:
async with conn.execute(
"SELECT * FROM contacts WHERE public_key = ?", (public_key.lower(),)
) as cursor:
row = await cursor.fetchone()
return ContactRepository._row_to_contact(row) if row else None
@staticmethod
@@ -195,11 +202,12 @@ class ContactRepository:
exact = await ContactRepository.get_by_key(normalized_prefix)
if exact:
return exact
cursor = await db.conn.execute(
"SELECT * FROM contacts WHERE public_key LIKE ? ORDER BY public_key LIMIT 2",
(f"{normalized_prefix}%",),
)
rows = list(await cursor.fetchall())
async with db.readonly() as conn:
async with conn.execute(
"SELECT * FROM contacts WHERE public_key LIKE ? ORDER BY public_key LIMIT 2",
(f"{normalized_prefix}%",),
) as cursor:
rows = list(await cursor.fetchall())
if len(rows) != 1:
return None
return ContactRepository._row_to_contact(rows[0])
@@ -207,11 +215,12 @@ class ContactRepository:
@staticmethod
async def _get_prefix_matches(prefix: str, limit: int = 2) -> list[Contact]:
"""Get contacts matching a key prefix, up to limit."""
cursor = await db.conn.execute(
"SELECT * FROM contacts WHERE public_key LIKE ? ORDER BY public_key LIMIT ?",
(f"{prefix.lower()}%", limit),
)
rows = list(await cursor.fetchall())
async with db.readonly() as conn:
async with conn.execute(
"SELECT * FROM contacts WHERE public_key LIKE ? ORDER BY public_key LIMIT ?",
(f"{prefix.lower()}%", limit),
) as cursor:
rows = list(await cursor.fetchall())
return [ContactRepository._row_to_contact(row) for row in rows]
@staticmethod
@@ -237,8 +246,9 @@ class ContactRepository:
@staticmethod
async def get_by_name(name: str) -> list[Contact]:
"""Get all contacts with the given exact name."""
cursor = await db.conn.execute("SELECT * FROM contacts WHERE name = ?", (name,))
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute("SELECT * FROM contacts WHERE name = ?", (name,)) as cursor:
rows = await cursor.fetchall()
return [ContactRepository._row_to_contact(row) for row in rows]
@staticmethod
@@ -254,8 +264,9 @@ class ContactRepository:
normalized = [p.lower() for p in prefixes]
conditions = " OR ".join(["public_key LIKE ?"] * len(normalized))
params = [f"{p}%" for p in normalized]
cursor = await db.conn.execute(f"SELECT * FROM contacts WHERE {conditions}", params)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(f"SELECT * FROM contacts WHERE {conditions}", params) as cursor:
rows = await cursor.fetchall()
# Group by which prefix each row matches
prefix_to_rows: dict[str, list] = {p: [] for p in normalized}
for row in rows:
@@ -272,41 +283,67 @@ class ContactRepository:
@staticmethod
async def get_all(limit: int = 100, offset: int = 0) -> list[Contact]:
cursor = await db.conn.execute(
"SELECT * FROM contacts ORDER BY COALESCE(name, public_key) LIMIT ? OFFSET ?",
(limit, offset),
)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(
"SELECT * FROM contacts ORDER BY COALESCE(name, public_key) LIMIT ? OFFSET ?",
(limit, offset),
) as cursor:
rows = await cursor.fetchall()
return [ContactRepository._row_to_contact(row) for row in rows]
@staticmethod
async def get_recently_contacted_non_repeaters(limit: int = 200) -> list[Contact]:
"""Get recently interacted-with non-repeater contacts."""
cursor = await db.conn.execute(
"""
SELECT * FROM contacts
WHERE type != 2 AND last_contacted IS NOT NULL AND length(public_key) = 64
ORDER BY last_contacted DESC
LIMIT ?
""",
(limit,),
)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(
"""
SELECT * FROM contacts
WHERE type != 2 AND last_contacted IS NOT NULL AND length(public_key) = 64
ORDER BY last_contacted DESC
LIMIT ?
""",
(limit,),
) as cursor:
rows = await cursor.fetchall()
return [ContactRepository._row_to_contact(row) for row in rows]
@staticmethod
async def get_recently_dm_active_non_repeaters(limit: int = 200) -> list[Contact]:
"""Get non-repeater contacts with the most recent DM activity (sent or received)."""
async with db.readonly() as conn:
async with conn.execute(
"""
SELECT c.*
FROM contacts c
INNER JOIN (
SELECT conversation_key, MAX(received_at) AS last_dm
FROM messages
WHERE type = 'PRIV'
GROUP BY conversation_key
) m ON c.public_key = m.conversation_key
WHERE c.type != 2 AND length(c.public_key) = 64
ORDER BY m.last_dm DESC
LIMIT ?
""",
(limit,),
) as cursor:
rows = await cursor.fetchall()
return [ContactRepository._row_to_contact(row) for row in rows]
@staticmethod
async def get_recently_advertised_non_repeaters(limit: int = 200) -> list[Contact]:
"""Get recently advert-heard non-repeater contacts."""
cursor = await db.conn.execute(
"""
SELECT * FROM contacts
WHERE type != 2 AND last_advert IS NOT NULL AND length(public_key) = 64
ORDER BY last_advert DESC
LIMIT ?
""",
(limit,),
)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(
"""
SELECT * FROM contacts
WHERE type != 2 AND last_advert IS NOT NULL AND length(public_key) = 64
ORDER BY last_advert DESC
LIMIT ?
""",
(limit,),
) as cursor:
rows = await cursor.fetchall()
return [ContactRepository._row_to_contact(row) for row in rows]
@staticmethod
@@ -317,27 +354,44 @@ class ContactRepository:
path_hash_mode: int | None = None,
updated_at: int | None = None,
) -> None:
"""Persist a learned direct route for a contact.
Both callers (the RF PATH packet processor and the firmware PATH_UPDATE
event handler) are RF-backed: firmware ``onContactPathUpdated`` only
fires from ``onContactPathRecv`` during RF PATH packet reception. So
this method also advances ``last_seen`` monotonically. Never moves
``last_seen`` backwards if an out-of-order arrival lands with an older
timestamp.
"""
normalized_path, normalized_path_len, normalized_hash_mode = normalize_contact_route(
path,
path_len,
path_hash_mode,
)
ts = updated_at if updated_at is not None else int(time.time())
await db.conn.execute(
"""UPDATE contacts SET direct_path = ?, direct_path_len = ?,
direct_path_hash_mode = COALESCE(?, direct_path_hash_mode),
direct_path_updated_at = ?,
last_seen = ? WHERE public_key = ?""",
(
normalized_path,
normalized_path_len,
normalized_hash_mode,
ts,
ts,
public_key.lower(),
),
)
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute(
"""UPDATE contacts SET direct_path = ?, direct_path_len = ?,
direct_path_hash_mode = COALESCE(?, direct_path_hash_mode),
direct_path_updated_at = ?,
last_seen = CASE
WHEN last_seen IS NULL THEN ?
WHEN ? > last_seen THEN ?
ELSE last_seen
END
WHERE public_key = ?""",
(
normalized_path,
normalized_path_len,
normalized_hash_mode,
ts,
ts,
ts,
ts,
public_key.lower(),
),
):
pass
@staticmethod
async def set_routing_override(
@@ -351,65 +405,71 @@ class ContactRepository:
path_len,
path_hash_mode,
)
await db.conn.execute(
"""
UPDATE contacts
SET route_override_path = ?, route_override_len = ?, route_override_hash_mode = ?
WHERE public_key = ?
""",
(
normalized_path,
normalized_len,
normalized_hash_mode,
public_key.lower(),
),
)
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute(
"""
UPDATE contacts
SET route_override_path = ?, route_override_len = ?, route_override_hash_mode = ?
WHERE public_key = ?
""",
(
normalized_path,
normalized_len,
normalized_hash_mode,
public_key.lower(),
),
):
pass
@staticmethod
async def clear_routing_override(public_key: str) -> None:
await db.conn.execute(
"""
UPDATE contacts
SET route_override_path = NULL,
route_override_len = NULL,
route_override_hash_mode = NULL
WHERE public_key = ?
""",
(public_key.lower(),),
)
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute(
"""
UPDATE contacts
SET route_override_path = NULL,
route_override_len = NULL,
route_override_hash_mode = NULL
WHERE public_key = ?
""",
(public_key.lower(),),
):
pass
@staticmethod
async def clear_on_radio_except(keep_keys: list[str]) -> None:
"""Set on_radio=False for all contacts NOT in keep_keys."""
if not keep_keys:
await db.conn.execute("UPDATE contacts SET on_radio = 0 WHERE on_radio = 1")
else:
placeholders = ",".join("?" * len(keep_keys))
await db.conn.execute(
f"UPDATE contacts SET on_radio = 0 WHERE on_radio = 1 AND public_key NOT IN ({placeholders})",
keep_keys,
)
await db.conn.commit()
async with db.tx() as conn:
if not keep_keys:
async with conn.execute("UPDATE contacts SET on_radio = 0 WHERE on_radio = 1"):
pass
else:
placeholders = ",".join("?" * len(keep_keys))
async with conn.execute(
f"UPDATE contacts SET on_radio = 0 WHERE on_radio = 1 AND public_key NOT IN ({placeholders})",
keep_keys,
):
pass
@staticmethod
async def get_favorites() -> list[Contact]:
"""Return all contacts marked as favorite."""
cursor = await db.conn.execute(
"SELECT * FROM contacts WHERE favorite = 1 AND LENGTH(public_key) = 64"
)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(
"SELECT * FROM contacts WHERE favorite = 1 AND LENGTH(public_key) = 64"
) as cursor:
rows = await cursor.fetchall()
return [ContactRepository._row_to_contact(row) for row in rows]
@staticmethod
async def set_favorite(public_key: str, value: bool) -> None:
"""Set or clear the favorite flag for a contact."""
await db.conn.execute(
"UPDATE contacts SET favorite = ? WHERE public_key = ?",
(1 if value else 0, public_key.lower()),
)
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute(
"UPDATE contacts SET favorite = ? WHERE public_key = ?",
(1 if value else 0, public_key.lower()),
):
pass
@staticmethod
async def delete(public_key: str) -> None:
@@ -417,18 +477,53 @@ class ContactRepository:
# contact_name_history and contact_advert_paths cascade via FK.
# Messages are intentionally preserved so history re-surfaces
# if the contact is re-added later.
await db.conn.execute("DELETE FROM contacts WHERE public_key = ?", (normalized,))
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute("DELETE FROM contacts WHERE public_key = ?", (normalized,)):
pass
@staticmethod
async def update_last_contacted(public_key: str, timestamp: int | None = None) -> None:
"""Update the last_contacted timestamp for a contact."""
"""Update the last_contacted timestamp for a contact.
``last_contacted`` tracks the most recent direct-conversation activity
with this contact in either direction (incoming or outgoing DM). It is
the field that powers "recent conversations" ordering on the frontend.
It deliberately does not touch ``last_seen``: ``last_seen`` is reserved
for actual RF reception from the contact, and outgoing sends are not
evidence that we heard from them. RF observations from DM ingest update
``last_seen`` via :meth:`touch_last_seen` on incoming DMs only.
"""
ts = timestamp if timestamp is not None else int(time.time())
await db.conn.execute(
"UPDATE contacts SET last_contacted = ?, last_seen = ? WHERE public_key = ?",
(ts, ts, public_key.lower()),
)
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute(
"UPDATE contacts SET last_contacted = ? WHERE public_key = ?",
(ts, public_key.lower()),
):
pass
@staticmethod
async def touch_last_seen(public_key: str, timestamp: int) -> None:
"""Monotonically bump last_seen for a contact from an RF observation.
Never moves last_seen backwards; a no-op if the contact row does not
exist. Use this from packet-ingest paths that have attributed a packet
to a specific contact pubkey (advert, incoming DM, decrypted PATH, etc.).
"""
async with db.tx() as conn:
async with conn.execute(
"""
UPDATE contacts
SET last_seen = CASE
WHEN last_seen IS NULL THEN ?
WHEN ? > last_seen THEN ?
ELSE last_seen
END
WHERE public_key = ?
""",
(timestamp, timestamp, timestamp, public_key.lower()),
):
pass
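A hypothetical ingest-side call site, illustrating the intended use of touch_last_seen (the surrounding handler is not part of this diff):

import time

async def on_incoming_dm(sender_public_key: str) -> None:
    # Hypothetical hook: an incoming DM is RF evidence we heard the contact,
    # so bump last_seen monotonically (never backwards).
    await ContactRepository.touch_last_seen(sender_public_key, int(time.time()))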
@staticmethod
async def update_last_read_at(public_key: str, timestamp: int | None = None) -> bool:
@@ -437,22 +532,25 @@ class ContactRepository:
Returns True if a row was updated, False if contact not found.
"""
ts = timestamp if timestamp is not None else int(time.time())
cursor = await db.conn.execute(
"UPDATE contacts SET last_read_at = ? WHERE public_key = ?",
(ts, public_key.lower()),
)
await db.conn.commit()
return cursor.rowcount > 0
async with db.tx() as conn:
async with conn.execute(
"UPDATE contacts SET last_read_at = ? WHERE public_key = ?",
(ts, public_key.lower()),
) as cursor:
rowcount = cursor.rowcount
return rowcount > 0
@staticmethod
async def promote_prefix_placeholders(full_key: str) -> list[str]:
"""Promote prefix-only placeholder contacts to a resolved full key.
Returns the placeholder public keys that were merged into the full key.
All operations for the promotion happen inside one ``db.tx()`` so
partial promotions never leak to readers between steps.
"""
async def migrate_child_rows(old_key: str, new_key: str) -> None:
await db.conn.execute(
async def migrate_child_rows(conn, old_key: str, new_key: str) -> None:
async with conn.execute(
"""
INSERT INTO contact_name_history (public_key, name, first_seen, last_seen)
SELECT ?, name, first_seen, last_seen
@@ -463,8 +561,9 @@ class ContactRepository:
last_seen = MAX(contact_name_history.last_seen, excluded.last_seen)
""",
(new_key, old_key),
)
await db.conn.execute(
):
pass
async with conn.execute(
"""
INSERT INTO contact_advert_paths
(public_key, path_hex, path_len, first_seen, last_seen, heard_count)
@@ -477,132 +576,138 @@ class ContactRepository:
heard_count = contact_advert_paths.heard_count + excluded.heard_count
""",
(new_key, old_key),
)
await db.conn.execute(
):
pass
async with conn.execute(
"DELETE FROM contact_name_history WHERE public_key = ?",
(old_key,),
)
await db.conn.execute(
):
pass
async with conn.execute(
"DELETE FROM contact_advert_paths WHERE public_key = ?",
(old_key,),
)
):
pass
normalized_full_key = full_key.lower()
promoted_keys: list[str] = []
async with db.tx() as conn:
    async with conn.execute(
        """
        SELECT public_key, last_seen, last_contacted, first_seen, last_read_at
        FROM contacts
        WHERE length(public_key) < 64
        AND ? LIKE public_key || '%'
        ORDER BY length(public_key) DESC, public_key
        """,
        (normalized_full_key,),
    ) as cursor:
        rows = list(await cursor.fetchall())
    if not rows:
        return []
for row in rows:
old_key = row["public_key"]
if old_key == normalized_full_key:
continue
# Merge timestamp metadata from the old prefix contact into the
# full-key contact (which all callers guarantee already exists),
# then delete the prefix placeholder.
await db.conn.execute(
"""
UPDATE contacts
SET last_seen = CASE
WHEN contacts.last_seen IS NULL THEN ?
WHEN ? IS NULL THEN contacts.last_seen
WHEN ? > contacts.last_seen THEN ?
ELSE contacts.last_seen
END,
last_contacted = CASE
WHEN contacts.last_contacted IS NULL THEN ?
WHEN ? IS NULL THEN contacts.last_contacted
WHEN ? > contacts.last_contacted THEN ?
ELSE contacts.last_contacted
END,
first_seen = CASE
WHEN contacts.first_seen IS NULL THEN ?
WHEN ? IS NULL THEN contacts.first_seen
WHEN ? < contacts.first_seen THEN ?
ELSE contacts.first_seen
END,
last_read_at = CASE
WHEN contacts.last_read_at IS NULL THEN ?
WHEN ? IS NULL THEN contacts.last_read_at
WHEN ? > contacts.last_read_at THEN ?
ELSE contacts.last_read_at
END
WHERE public_key = ?
""",
(
row["last_seen"],
row["last_seen"],
row["last_seen"],
row["last_seen"],
row["last_contacted"],
row["last_contacted"],
row["last_contacted"],
row["last_contacted"],
row["first_seen"],
row["first_seen"],
row["first_seen"],
row["first_seen"],
row["last_read_at"],
row["last_read_at"],
row["last_read_at"],
row["last_read_at"],
normalized_full_key,
),
)
await db.conn.execute("DELETE FROM contacts WHERE public_key = ?", (old_key,))
async with conn.execute(
"""
SELECT COUNT(*) AS match_count
FROM contacts
WHERE length(public_key) = 64
AND public_key LIKE ? || '%'
""",
(old_key,),
) as match_cursor:
match_row = await match_cursor.fetchone()
match_count = match_row["match_count"] if match_row is not None else 0
if match_count != 1:
logger.warning(
"Skipping prefix promotion for %s: %d full-key contacts match (expected 1)",
old_key,
match_count,
)
continue
promoted_keys.append(old_key)
await migrate_child_rows(conn, old_key, normalized_full_key)
# Merge timestamp metadata from the old prefix contact into the
# full-key contact (which all callers guarantee already exists),
# then delete the prefix placeholder.
async with conn.execute(
"""
UPDATE contacts
SET last_seen = CASE
WHEN contacts.last_seen IS NULL THEN ?
WHEN ? IS NULL THEN contacts.last_seen
WHEN ? > contacts.last_seen THEN ?
ELSE contacts.last_seen
END,
last_contacted = CASE
WHEN contacts.last_contacted IS NULL THEN ?
WHEN ? IS NULL THEN contacts.last_contacted
WHEN ? > contacts.last_contacted THEN ?
ELSE contacts.last_contacted
END,
first_seen = CASE
WHEN contacts.first_seen IS NULL THEN ?
WHEN ? IS NULL THEN contacts.first_seen
WHEN ? < contacts.first_seen THEN ?
ELSE contacts.first_seen
END,
last_read_at = CASE
WHEN contacts.last_read_at IS NULL THEN ?
WHEN ? IS NULL THEN contacts.last_read_at
WHEN ? > contacts.last_read_at THEN ?
ELSE contacts.last_read_at
END
WHERE public_key = ?
""",
(
row["last_seen"],
row["last_seen"],
row["last_seen"],
row["last_seen"],
row["last_contacted"],
row["last_contacted"],
row["last_contacted"],
row["last_contacted"],
row["first_seen"],
row["first_seen"],
row["first_seen"],
row["first_seen"],
row["last_read_at"],
row["last_read_at"],
row["last_read_at"],
row["last_read_at"],
normalized_full_key,
),
):
pass
async with conn.execute("DELETE FROM contacts WHERE public_key = ?", (old_key,)):
pass
promoted_keys.append(old_key)
await db.conn.commit()
return promoted_keys
@staticmethod
async def mark_all_read(timestamp: int) -> None:
"""Mark all contacts as read at the given timestamp."""
await db.conn.execute("UPDATE contacts SET last_read_at = ?", (timestamp,))
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute("UPDATE contacts SET last_read_at = ?", (timestamp,)):
pass
@staticmethod
async def get_by_pubkey_first_byte(hex_byte: str) -> list[Contact]:
"""Get contacts whose public key starts with the given hex byte (2 chars)."""
cursor = await db.conn.execute(
"SELECT * FROM contacts WHERE substr(public_key, 1, 2) = ?",
(hex_byte.lower(),),
)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(
"SELECT * FROM contacts WHERE substr(public_key, 1, 2) = ?",
(hex_byte.lower(),),
) as cursor:
rows = await cursor.fetchall()
return [ContactRepository._row_to_contact(row) for row in rows]
@@ -641,62 +746,75 @@ class ContactAdvertPathRepository:
normalized_path = path_hex.lower()
path_len = hop_count if hop_count is not None else len(normalized_path) // 2
await db.conn.execute(
"""
INSERT INTO contact_advert_paths
(public_key, path_hex, path_len, first_seen, last_seen, heard_count)
VALUES (?, ?, ?, ?, ?, 1)
ON CONFLICT(public_key, path_hex, path_len) DO UPDATE SET
last_seen = MAX(contact_advert_paths.last_seen, excluded.last_seen),
heard_count = contact_advert_paths.heard_count + 1
""",
(normalized_key, normalized_path, path_len, timestamp, timestamp),
)
async with db.tx() as conn:
async with conn.execute(
"""
INSERT INTO contact_advert_paths
(public_key, path_hex, path_len, first_seen, last_seen, heard_count)
VALUES (?, ?, ?, ?, ?, 1)
ON CONFLICT(public_key, path_hex, path_len) DO UPDATE SET
last_seen = MAX(contact_advert_paths.last_seen, excluded.last_seen),
heard_count = contact_advert_paths.heard_count + 1
""",
(normalized_key, normalized_path, path_len, timestamp, timestamp),
):
pass
# Keep only the N most recent unique paths per contact.
await db.conn.execute(
"""
DELETE FROM contact_advert_paths
WHERE public_key = ?
AND id NOT IN (
SELECT id
FROM contact_advert_paths
WHERE public_key = ?
ORDER BY last_seen DESC, heard_count DESC, path_len ASC, path_hex ASC
LIMIT ?
)
""",
(normalized_key, normalized_key, max_paths),
)
await db.conn.commit()
# Keep only the N most recent unique paths per contact.
async with conn.execute(
"""
DELETE FROM contact_advert_paths
WHERE public_key = ?
AND id NOT IN (
SELECT id
FROM contact_advert_paths
WHERE public_key = ?
ORDER BY last_seen DESC, heard_count DESC, path_len ASC, path_hex ASC
LIMIT ?
)
""",
(normalized_key, normalized_key, max_paths),
):
pass
@staticmethod
async def get_recent_for_contact(public_key: str, limit: int = 10) -> list[ContactAdvertPath]:
cursor = await db.conn.execute(
"""
SELECT path_hex, path_len, first_seen, last_seen, heard_count
FROM contact_advert_paths
WHERE public_key = ?
ORDER BY last_seen DESC, heard_count DESC, path_len ASC, path_hex ASC
LIMIT ?
""",
(public_key.lower(), limit),
)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(
"""
SELECT path_hex, path_len, first_seen, last_seen, heard_count
FROM contact_advert_paths
WHERE public_key = ?
ORDER BY last_seen DESC, heard_count DESC, path_len ASC, path_hex ASC
LIMIT ?
""",
(public_key.lower(), limit),
) as cursor:
rows = await cursor.fetchall()
return [ContactAdvertPathRepository._row_to_path(row) for row in rows]
@staticmethod
async def get_recent_for_all_contacts(
limit_per_contact: int = 10,
) -> list[ContactAdvertPathSummary]:
cursor = await db.conn.execute(
"""
SELECT public_key, path_hex, path_len, first_seen, last_seen, heard_count
FROM contact_advert_paths
ORDER BY public_key ASC, last_seen DESC, heard_count DESC, path_len ASC, path_hex ASC
"""
)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(
"""
SELECT public_key, path_hex, path_len, first_seen, last_seen, heard_count
FROM (
SELECT *,
ROW_NUMBER() OVER (
PARTITION BY public_key
ORDER BY last_seen DESC, heard_count DESC, path_len ASC, path_hex ASC
) AS rn
FROM contact_advert_paths
)
WHERE rn <= ?
ORDER BY public_key ASC, last_seen DESC, heard_count DESC, path_len ASC, path_hex ASC
""",
(limit_per_contact,),
) as cursor:
rows = await cursor.fetchall()
grouped: dict[str, list[ContactAdvertPath]] = {}
for row in rows:
@@ -705,8 +823,6 @@ class ContactAdvertPathRepository:
if paths is None:
paths = []
grouped[key] = paths
if len(paths) >= limit_per_contact:
continue
paths.append(ContactAdvertPathRepository._row_to_path(row))
return [
@@ -720,29 +836,31 @@ class ContactNameHistoryRepository:
@staticmethod
async def record_name(public_key: str, name: str, timestamp: int) -> None:
"""Record a name observation. Upserts: updates last_seen if name already known."""
await db.conn.execute(
"""
INSERT INTO contact_name_history (public_key, name, first_seen, last_seen)
VALUES (?, ?, ?, ?)
ON CONFLICT(public_key, name) DO UPDATE SET
last_seen = MAX(contact_name_history.last_seen, excluded.last_seen)
""",
(public_key.lower(), name, timestamp, timestamp),
)
await db.conn.commit()
async with db.tx() as conn:
async with conn.execute(
"""
INSERT INTO contact_name_history (public_key, name, first_seen, last_seen)
VALUES (?, ?, ?, ?)
ON CONFLICT(public_key, name) DO UPDATE SET
last_seen = MAX(contact_name_history.last_seen, excluded.last_seen)
""",
(public_key.lower(), name, timestamp, timestamp),
):
pass
@staticmethod
async def get_history(public_key: str) -> list[ContactNameHistory]:
cursor = await db.conn.execute(
"""
SELECT name, first_seen, last_seen
FROM contact_name_history
WHERE public_key = ?
ORDER BY last_seen DESC
""",
(public_key.lower(),),
)
rows = await cursor.fetchall()
async with db.readonly() as conn:
async with conn.execute(
"""
SELECT name, first_seen, last_seen
FROM contact_name_history
WHERE public_key = ?
ORDER BY last_seen DESC
""",
(public_key.lower(),),
) as cursor:
rows = await cursor.fetchall()
return [
ContactNameHistory(
name=row["name"], first_seen=row["first_seen"], last_seen=row["last_seen"]

Some files were not shown because too many files have changed in this diff.