Compare commits

817 Commits

Author SHA1 Message Date
Jack Kingsman 4e73cd39c8 Migration improvements 2 2026-04-11 00:38:47 -07:00
Jack Kingsman 53b341d6fb Make migrations more better 2026-04-10 16:28:03 -07:00
Jack Kingsman 76ac97010e Use non-node20 checkout action 2026-04-10 16:19:21 -07:00
Jack Kingsman 53a4d8186a Updating changelog + build for 3.11.0 2026-04-10 16:12:27 -07:00
Jack Kingsman 70e1669113 Improve test coverage 2026-04-10 16:04:02 -07:00
Jack Kingsman 3b1a292507 Docs updates and be consistent about node >=20 2026-04-10 15:57:47 -07:00
Jack Kingsman 4f19e1ec9a Fix races and stale things 2026-04-10 15:54:03 -07:00
Jack Kingsman 59601bb98e Assume that a same-second same-message same-first-byte-key DM is more likely an echo than them sending the same message, and multi-retry for flood scope restoration 2026-04-10 15:50:45 -07:00
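
The heuristic spelled out in this commit message is concrete enough to sketch. A minimal Python illustration, assuming hypothetical names (`StoredDM`, `is_probable_echo`) rather than the project's actual types:

```python
from dataclasses import dataclass

@dataclass
class StoredDM:
    sender_key: bytes  # sender's public key
    text: str
    timestamp: int     # sender-reported epoch seconds

def is_probable_echo(incoming: StoredDM, existing: StoredDM) -> bool:
    """Treat an incoming DM as a radio echo of one already stored when the
    second-resolution timestamp, message text, and first byte of the sender
    key all match; a user re-sending identical text within the same second
    is assumed less likely than an echo, so the duplicate is dropped."""
    return (
        incoming.timestamp == existing.timestamp
        and incoming.text == existing.text
        and incoming.sender_key[:1] == existing.sender_key[:1]
    )
```
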
Jack Kingsman f6b0fd21fb Don't consume DM resend attempt on busy radio 2026-04-10 15:46:19 -07:00
Jack Kingsman 8a4858a313 Don't consume DM resend attempt on busy radio 2026-04-10 15:44:50 -07:00
Jack Kingsman 442c2fad20 Fix some frontend display/quality/doc issues 2026-04-10 15:43:08 -07:00
Jack Kingsman 8cc542ce23 Fix same-second same-message collision in room servers with per-sender disambiguation at DB level 2026-04-10 15:36:53 -07:00
Jack Kingsman a7258c120e Merge pull request #177 from YourSandwich/feature/battery-status
Add optional battery display to status bar
2026-04-10 14:55:39 -07:00
Jack Kingsman 8752320f52 Add some tests and move the helpers into their own TS file 2026-04-10 14:53:57 -07:00
Jack Kingsman f9f046a05f Fix inversion of const definition location 2026-04-10 14:51:19 -07:00
Jack Kingsman 390c0624ea IIFE => memo for battery color/styling conversion 2026-04-10 14:49:05 -07:00
YourSandwich 2f55d11b0b Add battery display toggles to Local Configuration 2026-04-10 23:38:29 +02:00
YourSandwich fa0be24990 Add battery indicator to status bar 2026-04-10 23:38:29 +02:00
Jack Kingsman 1e22a21445 Add radio health &c. to fanout bus 2026-04-10 14:31:45 -07:00
YourSandwich e09a3a01f7 Add localStorage helpers for battery display settings 2026-04-10 22:25:17 +02:00
Jack Kingsman 3bd756ee4e Pluck in HA radio stats into the WS fanout endpoint 2026-04-10 12:39:37 -07:00
Jack Kingsman 43c5e0f67d Improve e2e testing posture to make it sliiiightly less unfriendly for others to get working 2026-04-10 11:36:26 -07:00
Jack Kingsman c0fc5fbba2 Add AUR download and test script 2026-04-10 11:30:05 -07:00
Jack Kingsman c7248222dd Updating changelog + build for 3.10.0 2026-04-10 11:16:16 -07:00
Jack Kingsman 1e18a91f12 Merge pull request #172 from YourSandwich/aur-install-instructions
Add Arch Linux (AUR) packaging infrastructure
2026-04-10 10:54:49 -07:00
Jack Kingsman 18db6e4dd8 Make test script executable 2026-04-10 10:49:49 -07:00
Jack Kingsman 2393dadf1b Unload the service on uninstall 2026-04-10 10:48:38 -07:00
Jack Kingsman fd26576e0d Use correct email 2026-04-10 10:47:21 -07:00
Sandwich cb5a76eb5f Replace manual user/group creation with sysusers.d and tmpfiles.d 2026-04-10 19:23:01 +02:00
Jack Kingsman 7f5dde119f Update AGENTS.md 2026-04-10 00:15:57 -07:00
Jack Kingsman 799a721761 Be more defensive about systemd detection 2026-04-10 00:10:53 -07:00
Jack Kingsman 152a584f35 Fix TCP host 2026-04-10 00:10:41 -07:00
Jack Kingsman 5cc0476426 Fix port numbering 2026-04-10 00:06:22 -07:00
Jack Kingsman e468c6c161 Change command palette shortcut 2026-04-09 23:45:16 -07:00
Jack Kingsman e33537018b Fix AUR username 2026-04-09 23:11:02 -07:00
Jack Kingsman 0727793560 Add test script 2026-04-09 23:08:32 -07:00
Jack Kingsman 5c4e04e024 Skip daemon reload if systemctl isn't around 2026-04-09 23:08:26 -07:00
Jack Kingsman 967269ef7d Initial AUR work 2026-04-09 23:08:22 -07:00
Jack Kingsman 1903797d0d Fix broken statistics pane e2e test 2026-04-09 22:30:12 -07:00
Jack Kingsman bb5af5ba82 Bump apprise to 1.9.9. Closes #173. 2026-04-09 17:20:57 -07:00
Sandwich 424da7e232 Add Arch Linux (AUR) install instructions to README
Adds "Install Path 3: Arch Linux (AUR)" section covering both AUR
helper and manual makepkg installation, linking to the published
remoteterm-meshcore AUR package.

Closes #171
2026-04-09 03:51:39 +02:00
Jack Kingsman 159df1ec5b Revert "Add debug lines for fav click"
This reverts commit 8e2e039985.
2026-04-08 16:33:44 -07:00
Jack Kingsman 8e2e039985 Add debug lines for fav click 2026-04-08 16:18:46 -07:00
Jack Kingsman 01c86a486e Add packet feed filters; closes #169. 2026-04-08 14:44:41 -07:00
Jack Kingsman 7d5cfdec26 Add note about startup on windows 2026-04-07 22:07:31 -07:00
Jack Kingsman 5fe0ac0ad4 Be more memory conscious on recent contact fetch 2026-04-07 16:41:34 -07:00
Jack Kingsman b98102ccac Add 72hr packet density view 2026-04-07 16:26:01 -07:00
Jack Kingsman a02c3cae9e Updating changelog + build for 3.9.0 2026-04-06 22:10:06 -07:00
Jack Kingsman ca7349a1a8 Add autofocus to text boxes 2026-04-06 21:59:46 -07:00
Jack Kingsman eeaa11b8b0 Fix lint bugs 2026-04-06 20:36:47 -07:00
Jack Kingsman 08eaf090b2 Be more guarded in the radio validity checks (and get outta here, you random repeaters I never favorited!) 2026-04-06 20:34:16 -07:00
Jack Kingsman 2f43420235 Add command palette 2026-04-06 20:27:55 -07:00
Jack Kingsman af74663518 Add guard for favorites sync 2026-04-06 20:12:58 -07:00
Jack Kingsman b7981c0450 Getting all Cal Raleigh up in here 2026-04-06 19:09:48 -07:00
Jack Kingsman 0f4976b9ee Merge pull request #167 from jkingsman/migrate-favorites
Add favorites as contact field (dug)
2026-04-05 22:19:01 -07:00
Jack Kingsman 1991f2515b Support relative URLs. Closes #165. 2026-04-05 22:11:12 -07:00
Jack Kingsman a351c86ccb Add favorites as contact field (dug) 2026-04-05 20:50:27 -07:00
Jack Kingsman c2e1a3cbe6 Import radio favorites as favorites 2026-04-05 18:15:04 -07:00
jkingsman c2d1339256 Default stale node pruning for visualizer to ON 2026-04-05 15:55:47 -07:00
jkingsman cb7139a7e1 Always offer basic auth, move docker-not-found warning to the top 2026-04-05 15:41:02 -07:00
Jack Kingsman 6332387704 Define a better y domain for repeater battery voltage 2026-04-05 12:45:52 -07:00
Jack Kingsman 3f2b8e2a1f Refocus CLI textbox after command completion. Closes #164. 2026-04-05 11:55:52 -07:00
Jack Kingsman 40c37745b6 Massage the Readme a bit more 2026-04-05 11:55:31 -07:00
Jack Kingsman 9edac47aa2 Add clearer warning about RemoteTerm taking over the radio and governing contacts/channels loading. Closes #163. 2026-04-05 11:49:57 -07:00
Jack Kingsman 44f8aafb66 Retain recent traces and make them click-to-trace. Closes #160. 2026-04-04 16:43:12 -07:00
Jack Kingsman 9e3805f5d0 Use receipt time not sender time for display 2026-04-04 16:24:36 -07:00
Jack Kingsman 457799d8df Calm down clock skew loggings 2026-04-04 15:31:30 -07:00
Jack Kingsman de3ad2d51f Calm it down on sync logs 2026-04-04 15:10:45 -07:00
Jack Kingsman ad83bc7979 Show telemetry inline 2026-04-04 14:29:31 -07:00
Jack Kingsman 9ebf63491c Have tests use prod regexes 2026-04-04 13:13:37 -07:00
Jack Kingsman b19585db6d Go crazy style on systemd escaping. Closes #159. 2026-04-04 12:24:36 -07:00
Jack Kingsman c28d22379e Be a little gentler; call it a room finder rather than a cracker 2026-04-04 12:06:28 -07:00
Jack Kingsman 1e5ccf6c29 Add clearer issue identification for missing HTTPS context for channel finder 2026-04-04 12:03:07 -07:00
Jack Kingsman 81f5bde287 Add hop counts to width selection 2026-04-03 22:06:00 -07:00
Jack Kingsman c33eb469ac Updating changelog + build for 3.8.0 2026-04-03 19:36:27 -07:00
Jack Kingsman 0fe6584e7a Add packet display to map & add map dark mode 2026-04-03 19:18:22 -07:00
Jack Kingsman 557d79d437 Add packets to general map 2026-04-03 18:57:34 -07:00
Jack Kingsman daff3dcb4a Drop low value tests 2026-04-03 17:55:02 -07:00
Jack Kingsman 77db7287d6 Drop lame imports 2026-04-03 17:51:26 -07:00
Jack Kingsman 67873e8dd9 Drop some duplicated logic and defns 2026-04-03 17:47:44 -07:00
Jack Kingsman e2ddf5f79f Move require connected down into the manager 2026-04-03 17:37:30 -07:00
Jack Kingsman 4a93641f04 Axe some dead code 2026-04-03 17:22:04 -07:00
Jack Kingsman d5922a214b Clear out old migration logic and replace with thin shim for favorites; sort order is lost 2026-04-03 17:15:41 -07:00
Jack Kingsman 7ad1ee26a4 Add RSSI/SNR to received messages. Closes #148. 2026-04-03 15:20:44 -07:00
Jack Kingsman 08238aa464 Add close button to modal. Closes #156 (and modals lol), ish. 2026-04-03 14:54:59 -07:00
Jack Kingsman 1046baf741 Add auto-resend option for not-heard-repeated messages. Closes #154. 2026-04-03 14:43:52 -07:00
Jack Kingsman 42e1b7b5d9 Add canonical style reference. Closes #155. 2026-04-03 14:27:44 -07:00
Jack Kingsman 3ca4f7edf7 Fix missing test failures and patch double declared model 2026-04-03 14:15:19 -07:00
Jack Kingsman 55081d4a2d Add hop width to channel info. Closes #153. 2026-04-03 14:04:35 -07:00
Jack Kingsman be2b2604df Add intervalized repeater metrics collection. Closes #151. 2026-04-03 13:45:39 -07:00
Jack Kingsman 35981d8f8b Be more aggressive about resetting the hop width and warning if that doesn't work. This and the prior work closes #152. 2026-04-03 13:16:43 -07:00
Jack Kingsman 8e998c03ba Add channel path hash width override 2026-04-03 13:05:58 -07:00
Jack Kingsman d802dd4212 Fix table display in primary agents.md 2026-04-02 20:31:54 -07:00
Jack Kingsman 7557eb1fa6 Merge pull request #150 from jkingsman/bugbash-v7
Bugbash v7
2026-04-02 20:20:23 -07:00
Jack Kingsman 6a4af5e602 More complete message lifecycle tests 2026-04-02 20:17:51 -07:00
Jack Kingsman 1895e6a919 Clean up legacy sort order 2026-04-02 20:16:16 -07:00
Jack Kingsman 975bf7f03f Docs, dead code, and schema updates 2026-04-02 19:03:02 -07:00
Jack Kingsman c7d5d3887d Yield radio lock on build repeater ops and use INSERT OR IGNORE instead of check-then-act on packet ops 2026-04-02 18:53:34 -07:00
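
The `INSERT OR IGNORE` change named in this commit is a standard SQLite pattern: let the uniqueness constraint arbitrate instead of a racy check-then-act. A self-contained sketch with a hypothetical `packets` table, not the project's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE packets (
           packet_hash TEXT PRIMARY KEY,  -- dedupe key
           payload     BLOB NOT NULL
       )"""
)

def store_packet(packet_hash: str, payload: bytes) -> bool:
    """Race-free insert: a SELECT-then-INSERT can double-insert when two
    writers check concurrently, so let the PRIMARY KEY constraint decide.
    Returns True only if the row was actually new."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO packets (packet_hash, payload) VALUES (?, ?)",
        (packet_hash, payload),
    )
    conn.commit()
    return cur.rowcount == 1
```
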
Jack Kingsman 5c93d8487e Stop using db ops to do casing; unify on write and then our indices are happy once more 2026-04-02 18:50:56 -07:00
Jack Kingsman 5d2834a9fb Add some tests around cascade deletion behaving now that we have FK pragma turned on 2026-04-02 18:46:37 -07:00
Jack Kingsman cfe485bf29 Be kinder about streaming volume in memory 2026-04-02 18:43:48 -07:00
Jack Kingsman e7f6bd0397 Bump python requirement so as not to hit toml issues 2026-04-02 18:41:03 -07:00
Jack Kingsman 1e7dc6af46 Don't clobber sort order 2026-04-02 18:40:25 -07:00
Jack Kingsman af40cc3c8e Add more recent screenshot 2026-04-02 18:06:29 -07:00
Jack Kingsman 2561b70fed Fix tests for apprise redaction 2026-04-02 18:03:34 -07:00
Jack Kingsman 44f145b646 Updating changelog + build for 3.7.1 2026-04-02 18:01:22 -07:00
Jack Kingsman 55e2dc478d Redact Apprise URLs 2026-04-02 17:59:41 -07:00
Jack Kingsman 0932800e1f Fix lint 2026-04-02 17:38:35 -07:00
Jack Kingsman c333eb25e3 Updating changelog + build for 3.7.0 2026-04-02 17:30:19 -07:00
Jack Kingsman 580aa1cefd Correct TCP port 2026-04-02 13:55:05 -07:00
Jack Kingsman 30de09f71b Merge pull request #126 from maplemesh/gnomeadrift/repeater_telemetry_history
Logging battery voltage history from telemetry
2026-04-02 13:29:44 -07:00
Jack Kingsman 93d31adecd Don't change historical migrations (cruft from rebasing) and don't overwrite data 2026-04-02 13:21:21 -07:00
Jack Kingsman 5f969017f7 Add some tests, make it an actual endpoint (whoops said we didn't need that) and tidy things up a bit 2026-04-02 12:43:42 -07:00
Gnome Adrift 967dd05fad Prune telemetry entries, remove uplot comments, format code 2026-04-02 12:34:00 -07:00
Gnome Adrift c808f0930b Remove automatic telemetry querying, remove battery pane, add telemetry history pane 2026-04-02 12:31:51 -07:00
Gnome Adrift 87df4b4aa1 Fix for telemetry polling 2026-04-02 12:27:18 -07:00
Gnome Adrift 0511d6f69b Make battery history update when fetching telemetry 2026-04-02 12:27:18 -07:00
Gnome Adrift 78b5598f67 First draft of repeater telemetry feature 2026-04-02 12:27:06 -07:00
Jack Kingsman 5e1bdb2cc1 Fix terminals on hash room parsing 2026-04-02 00:23:07 -07:00
Jack Kingsman 4420d44838 Add bulk room add 2026-04-02 00:19:25 -07:00
Jack Kingsman ead1774cd3 Boost sidebar icon color 2026-04-01 22:17:04 -07:00
Jack Kingsman 0d45cbd849 Yolo to FK pragma
Move to fk pragma
2026-04-01 22:09:00 -07:00
Jack Kingsman 456f739f51 Emit correct events, update sender key, and don't let discovery path skip prefix promotion; other misc. fixes 2026-04-01 21:56:51 -07:00
Jack Kingsman 80c6cc44e5 Formatting and linting 2026-04-01 21:39:59 -07:00
Jack Kingsman 35265d8ae8 Back up all available files and remove dead else clause from contact prefix promotion 2026-04-01 21:34:35 -07:00
Jack Kingsman 4a2d7ed100 Move to FK pragma and prep other code points in light of that 2026-04-01 21:22:01 -07:00
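
Several commits in this range (the cascade-deletion tests above, the "FK pragma" work here) revolve around the fact that SQLite ships with foreign-key enforcement off and it must be enabled per connection. A minimal illustration with a hypothetical two-table schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Foreign-key enforcement is opt-in per connection in SQLite;
# this is the pragma the commits refer to.
conn.execute("PRAGMA foreign_keys = ON")

conn.executescript("""
    CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE messages (
        id         INTEGER PRIMARY KEY,
        contact_id INTEGER NOT NULL
                   REFERENCES contacts(id) ON DELETE CASCADE,
        body       TEXT
    );
""")
conn.execute("INSERT INTO contacts (id, name) VALUES (1, 'node-a')")
conn.execute("INSERT INTO messages (contact_id, body) VALUES (1, 'hi')")

# With the pragma on, deleting the parent cascades to child rows
# instead of leaving orphans behind.
conn.execute("DELETE FROM contacts WHERE id = 1")
assert conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0] == 0
```
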
Jack Kingsman 47c4f038fe Reorganize Database settings pane 2026-04-01 17:07:53 -07:00
Jack Kingsman 630ba67ef0 Patch up radio locking and frontend contact delete behavior for bulk contact delete 2026-04-01 16:52:25 -07:00
Jack Kingsman fd1188abcd Make our radio pane less miserable. Closes #145 2026-04-01 16:45:09 -07:00
Jack Kingsman 94513d7177 Move type label to top of bulk delete 2026-04-01 16:40:06 -07:00
Jack Kingsman fbff9821be Add bulk deletion interface 2026-04-01 16:33:05 -07:00
Jack Kingsman 1fd281121b Default auto-dm-decrypt to true 2026-04-01 15:58:15 -07:00
Jack Kingsman 5653a43941 Add new node ingest blocking 2026-04-01 15:57:22 -07:00
Jack Kingsman 7f07aedb8a Make repeaters blockable, and hide from the sidebar 2026-04-01 15:39:40 -07:00
Jack Kingsman e437ce74c6 Surface repeater info pane just like contacts 2026-04-01 14:21:00 -07:00
Jack Kingsman 4ff6d2018a Remove discontinuity on radio limit exceed for contacts 2026-04-01 12:27:10 -07:00
Jack Kingsman 1c634da687 Be more conservative around limits for radio contact adding and don't respect user value if it exceeds radio limits 2026-04-01 12:24:54 -07:00
Jack Kingsman 738c21dd66 Compact debug endpoint. Closes #143. 2026-04-01 12:18:03 -07:00
Jack Kingsman 7d72448ebf Make hop counts collapse to be neater. Closes #144. 2026-04-01 11:42:09 -07:00
Jack Kingsman b4f3d1f14c Add additional info to debug endpoint. Closes #142. 2026-04-01 11:31:20 -07:00
Jack Kingsman 416166b07c Add system arch data to debug output 2026-03-31 23:09:12 -07:00
Jack Kingsman 480798e117 Updating changelog + build for 3.6.7 2026-03-31 23:01:36 -07:00
Jack Kingsman 704a3d8a87 Updating changelog + build for 3.6.6 2026-03-31 22:52:14 -07:00
Jack Kingsman 96e108037c Updating changelog + build for 3.6.5 2026-03-31 22:21:06 -07:00
Jack Kingsman 97aade3632 Format changelog entries with bullets 2026-03-31 22:17:03 -07:00
Jack Kingsman e43584912b Updating changelog + build for 3.6.4 2026-03-31 22:14:58 -07:00
Jack Kingsman fccde36ecb Gentle emphasis on new contact/channel button 2026-03-31 21:57:35 -07:00
Jack Kingsman e631f9b0cc DHCP notes 2026-03-31 21:33:50 -07:00
Jack Kingsman b52431616e Be more sane with by-id aliases 2026-03-31 20:47:10 -07:00
Jack Kingsman 8446d99df1 Be more resistant to colons in the device ID 2026-03-31 20:23:07 -07:00
Jack Kingsman 8e1e913fcd Make all scripts executable 2026-03-31 18:26:04 -07:00
Jack Kingsman b74137dc72 Add trace clear and much better layout. Closes #139. 2026-03-31 16:55:21 -07:00
Jack Kingsman c83f9b0005 Rename Best RSSI to Strongest Neighbor. Closes #136. 2026-03-31 13:10:02 -07:00
Jack Kingsman 9f4737d350 Add hashtag link detection. Closes #134. 2026-03-31 12:55:52 -07:00
Jack Kingsman 29e9a5f701 Be more resilient in noise floor gathering 2026-03-31 12:35:35 -07:00
Jack Kingsman f0f06671cc Make new message button clearer 2026-03-31 12:28:47 -07:00
Jack Kingsman b1595e479c Use the image's full government name 2026-03-30 23:11:37 -07:00
Jack Kingsman 25df69bfbc Add snakeoil certs to docker setup 2026-03-30 22:13:09 -07:00
Jack Kingsman 88140081b9 Updating changelog + build for 3.6.3 2026-03-30 21:54:32 -07:00
Jack Kingsman 4326f57977 Lint fixes 2026-03-30 21:44:26 -07:00
Jack Kingsman 43abcd07b2 Improve DB streaming perf for cracking and statistics 2026-03-30 21:31:59 -07:00
Jack Kingsman 5c60559cb8 Fix memoization on cracker panel 2026-03-30 21:31:47 -07:00
Jack Kingsman 3c0d6a4466 Fix some misc. frontend correctness bugs 2026-03-30 21:29:01 -07:00
Jack Kingsman 7b9d8f6a23 Docs updates 2026-03-30 21:24:36 -07:00
Jack Kingsman 44d6fcac24 Add missing abort controller 2026-03-30 21:15:42 -07:00
Jack Kingsman 788d1cbdca Fix non-repeater traffic during repeater ops dropping messages 2026-03-30 21:13:25 -07:00
Jack Kingsman 26e8150092 Shorten quality script 2026-03-30 21:02:49 -07:00
Jack Kingsman 3a1c2d691b Misc. bug bash 2026-03-30 20:49:09 -07:00
Jack Kingsman 134e8d0d29 Add trace tool. Closes #130. 2026-03-30 19:26:12 -07:00
Jack Kingsman eb1f7ae638 Be more resilient about docker script input management 2026-03-30 17:54:07 -07:00
Jack Kingsman 14ba342160 Add docker install script 2026-03-30 17:09:25 -07:00
Jack Kingsman 7460c3ea9d Add font size slider. Closes #132. 2026-03-30 16:47:24 -07:00
Jack Kingsman 6534946bc7 Simplify installation instructions 2026-03-30 16:26:25 -07:00
Jack Kingsman 4847813ae1 Fix up the slow core query from the stats page. Closes #131. 2026-03-30 15:59:44 -07:00
Jack Kingsman 3f6efaae1d Overhaul script handling. Closes #125. 2026-03-30 15:51:43 -07:00
Jack Kingsman 60f3fa8e36 Add noise floor visualizer to statistics. Closes #129. 2026-03-30 15:31:39 -07:00
Jack Kingsman b42ca44ba7 Add noise floor plumbing 2026-03-30 14:23:01 -07:00
Jack Kingsman d4bbb8a542 Add multibyte trace output. Closes #127. 2026-03-30 12:52:01 -07:00
Jack Kingsman db248302e9 Show node name if we find it in the DB already. Closes #128. 2026-03-30 12:28:26 -07:00
Jack Kingsman 7aa4f76064 Fix clamping on value inputs to allow empty while focused 2026-03-29 22:38:06 -07:00
Jack Kingsman f01e91defc Clean up after release 2026-03-29 22:37:38 -07:00
Jack Kingsman 8ee08ff44a Updating changelog + build for 3.6.2 2026-03-29 19:55:49 -07:00
Jack Kingsman 6d9ea552bd Provide multi-platform docker builds. Closes #119. 2026-03-29 19:34:51 -07:00
Jack Kingsman 2cd71bf086 Fix linting 2026-03-29 19:09:36 -07:00
Jack Kingsman 08d55dec72 Show last error status on integrations. Closes #122. 2026-03-29 18:47:17 -07:00
Jack Kingsman 20532f70a3 Allow map uploader to follow redirects. Closes #123. 2026-03-29 18:15:10 -07:00
Jack Kingsman 659370e1eb Don't cast SNR/RSSI to string. Closes #121. 2026-03-29 18:02:59 -07:00
Jack Kingsman 7151cf3846 Be much, much clearer about room server ops. Closes #78. 2026-03-27 13:01:34 -07:00
Jack Kingsman 6e5256acce Be more flexible about radio offload. Closes #118. 2026-03-27 12:49:01 -07:00
Jack Kingsman 7d27567ae9 Merge pull request #109 from jkingsman/fix-room-server-ordering
Order room server messages by sender timestamp, not packet-receipt time
2026-03-27 10:18:21 -07:00
Jack Kingsman 5f0d042252 Fix time rendering unit issue 2026-03-26 21:32:23 -07:00
Jack Kingsman 6f68dfc609 Deal with non-existent hashes better 2026-03-26 20:36:13 -07:00
Jack Kingsman a32ddda79d Cut down bloat in unreads endpoint 2026-03-26 20:36:04 -07:00
Jack Kingsman ac6a5774af Updating changelog + build for 3.6.1 2026-03-26 19:14:44 -07:00
Jack Kingsman b12e612596 Merge pull request #117 from jkingsman/settings-scroll-fix. Closes #112
More content-paint patchy patchy bs
2026-03-26 18:27:55 -07:00
Jack Kingsman d1499ad75f Merge pull request #116 from kizniche/feat-int-mc-map-auto-uploader
Add automatic mesh map upload (integration/fanout module). Closes #108. Thank you!!
2026-03-26 18:08:34 -07:00
jkingsman 79d5e69ee0 Format + lint 2026-03-26 17:59:59 -07:00
jkingsman 498770bd88 More content-paint patchy patchy bs 2026-03-26 17:30:40 -07:00
jkingsman 1405df6039 Beef up some noopy tests 2026-03-26 17:22:42 -07:00
jkingsman ac5e71d6f2 Validate geofence radius to be positive 2026-03-26 17:20:13 -07:00
jkingsman 650a24a68c Centralize duplicated crypto code 2026-03-26 17:18:28 -07:00
Kizniche 53f122e503 formatting changes to satisfy check 2026-03-26 20:08:42 -04:00
Jack Kingsman bf0533807a Rich install script. Closes #111 2026-03-26 17:04:12 -07:00
jkingsman 094058bad7 Tweak install script 2026-03-26 16:59:53 -07:00
Kizniche efeb047116 Switching to using radio lat/lon, rename Community MQTT to Community Sharing, update AGENTS_fanout.md 2026-03-26 19:55:30 -04:00
jkingsman 88c99e0983 Add note in readme 2026-03-26 16:50:48 -07:00
jkingsman 983a37f68f Idempotentify and remove the explicit setup instructions in the advanced readme 2026-03-26 16:46:27 -07:00
jkingsman bea3495b79 Improve coverage around desktop notifications. Closes #115. 2026-03-26 16:39:38 -07:00
jkingsman 54c24c50d3 Clarify MQTT error logs when persistent 2026-03-26 13:39:08 -07:00
Kizniche b7972f50a8 Fix issues identified in failing checks 2026-03-25 19:06:33 -04:00
Kizniche bab1693c82 Fix freq and BW values, add geofence calc to dry run log 2026-03-25 18:39:27 -04:00
Kyle Gabriel f93844a01b Merge branch 'jkingsman:main' into feat-int-mc-map-auto-uploader 2026-03-25 14:40:59 -04:00
jkingsman 26b740fe3c Fix lint 2026-03-25 08:57:43 -07:00
jkingsman b0f5930e01 Swipe away 2026-03-25 08:46:50 -07:00
jkingsman 5b05fdefa1 Change room finder to be channels not rooms 2026-03-25 08:34:21 -07:00
jkingsman b63153b3a1 Initial swipe work 2026-03-25 08:32:06 -07:00
Jack Kingsman 3c5a832bef Merge pull request #113 from an0key/main
Update Sidebar.tsx
2026-03-25 08:19:04 -07:00
jkingsman fd8bc4b56a First draft of install script 2026-03-25 08:09:55 -07:00
Luke 2d943dedc5 Update Sidebar.tsx 2026-03-25 15:09:32 +00:00
Jack Kingsman 137f41970d Fix some places where we used vh instead of dvh for modal sizing 2026-03-24 21:07:20 -07:00
Jack Kingsman c833f1036b Test scroll fix for mobile browsers 2026-03-24 21:05:29 -07:00
Kyle Gabriel e15e6d83f7 Merge branch 'jkingsman:main' into feat-int-mc-map-auto-uploader 2026-03-24 19:55:14 -04:00
jkingsman 4ead2ffcde Add prebuilt frontend fetch script. Closes #110. 2026-03-24 16:42:49 -07:00
Kizniche f9ca35b3ae Switch from block list to allow list, add test to ensure certain nodes are skipped, fix test 2026-03-24 19:41:25 -04:00
Kizniche 7c4a244e05 Add geofence option 2026-03-24 19:41:25 -04:00
Kyle 6eab75ec7e Add Map Upload Integration and tests 2026-03-24 19:41:18 -04:00
jkingsman caf4bf4eff Fix linting 2026-03-24 16:32:19 -07:00
jkingsman 74e1f49db8 Show hop map in a larger modal. Closes #102. 2026-03-24 16:14:43 -07:00
jkingsman 95c874e643 Order room server messages by sender timestamp, not arrival-at-our-radio timestamp 2026-03-24 15:55:28 -07:00
Jack Kingsman 3b28ebfa49 Fix e2e tests 2026-03-24 14:51:29 -07:00
jkingsman d36c63f6b1 Complete room -> channel rename 2026-03-24 14:02:43 -07:00
jkingsman e8a4f5c349 Make a better integration/fanout selector 2026-03-24 13:48:50 -07:00
jkingsman b022aea71f Adjust phrasing on new-chat modal, and remove the unusable existing-contact screen. Closes #105. 2026-03-24 10:02:39 -07:00
jkingsman 5225a1c766 Don't be so eager on the quality gate 2026-03-24 09:59:37 -07:00
Jack Kingsman 41400c0528 Change page title and favicon for unreads. Green for favorite group chats, red for unread mentions or DMs. Closes #100 WOOOO 2026-03-23 21:36:54 -07:00
Jack Kingsman 07928d930c Clarify phrasing around bot system 2026-03-23 19:32:45 -07:00
Jack Kingsman 26742d0c88 Merge pull request #103 from jkingsman/bot-safety
Bot safety
2026-03-23 18:44:50 -07:00
Jack Kingsman 8b73bef30b More styling 2026-03-23 18:42:09 -07:00
Jack Kingsman 4b583fe337 Rephrasing and add env vars to docker compose 2026-03-23 18:36:55 -07:00
Jack Kingsman e6e7267eb1 Fix mobile modal 2026-03-23 18:33:37 -07:00
Jack Kingsman 36eeeae64d Protect against uncheck race condition 2026-03-23 18:27:42 -07:00
Jack Kingsman 7c988ae3d0 Initial bot safety warning pass 2026-03-23 15:16:04 -07:00
Jack Kingsman 1a0c4833d5 Enrich the error text for notification blockage and mention http/s issues 2026-03-23 09:12:17 -07:00
Jack Kingsman 84c500d018 Add clearer warning on frontend fetching invalid backend 2026-03-22 23:32:52 -07:00
Jack Kingsman 1960a16fb0 Add note about CORS + Basic auth 2026-03-22 23:28:33 -07:00
Jack Kingsman 3580aeda5a Updating changelog + build for 3.6.0 2026-03-22 22:14:55 -07:00
Jack Kingsman bb97b983bb Add room activity to stats view 2026-03-22 22:13:40 -07:00
Jack Kingsman da31b67d54 Add on-receive packet analyzer for canonical copy. Closes #97. 2026-03-22 21:34:41 -07:00
Jack Kingsman d840159f9c Update meshcore_py and remove monkeypatch for serial frame start detection. 2026-03-22 11:06:24 -07:00
Jack Kingsman 9de4158a6c Monkeypatch the meshcore_py lib for frame-start handling 2026-03-21 22:46:59 -07:00
Jack Kingsman 1e21644d74 Swap repeaters and room servers for better ordering, and the less common contact type at the bottom 2026-03-21 13:15:18 -07:00
Jack Kingsman df0ed8452b Add BYOPacket analyzer. Closes #98. 2026-03-20 21:57:07 -07:00
Jack Kingsman d4a5f0f728 Scroll in room server control pane. Closes #99. 2026-03-20 19:43:55 -07:00
Jack Kingsman 3e2c48457d Be more compact about the room server controls 2026-03-20 18:16:29 -07:00
Jack Kingsman d4f518df0c Retry e2e tests once before failing 2026-03-19 21:57:03 -07:00
Jack Kingsman 5213c8c84c Updating changelog + build for 3.5.0 2026-03-19 21:53:45 -07:00
Jack Kingsman 33c2b0c948 Be better about identity resolution for stats view 2026-03-19 21:42:39 -07:00
Jack Kingsman b021a4a8ac Fix e2e tests for contact stuff 2026-03-19 21:39:18 -07:00
Jack Kingsman c74fdec10b Add database information to debug endpoint 2026-03-19 21:17:23 -07:00
Jack Kingsman cf314e02ff Be cleaner about message cache dedupe after trimming inactive convos 2026-03-19 21:03:20 -07:00
Jack Kingsman 8ae600d010 Docs updates 2026-03-19 20:58:49 -07:00
Jack Kingsman fdd82e1f77 Clean up orphaned contact child rows and add foreign key enforcement 2026-03-19 20:56:36 -07:00
Jack Kingsman 9d129260fd Fix up the header collapse to be less terrible 2026-03-19 20:53:24 -07:00
Jack Kingsman 2b80760696 Add DB entry for outgoing inside the radio lock (didn't we just do the opposite?) 2026-03-19 20:43:35 -07:00
Jack Kingsman c2655c1809 Add basic room server support. Closes #78.
Allow basic room server usage
2026-03-19 20:28:44 -07:00
Jack Kingsman cee7103ec6 Don't log on missed login ack and don't make standalone contacts for repeater users 2026-03-19 20:26:10 -07:00
Jack Kingsman d05312c157 Add password-remember + warning on save 2026-03-19 20:10:59 -07:00
Jack Kingsman 5b166c4b66 Add room server 2026-03-19 19:22:40 -07:00
Jack Kingsman dbe2915635 Use metric by default 2026-03-19 17:56:04 -07:00
Jack Kingsman 2337d7b592 Remove Apprise duplicate names. Closes #88. 2026-03-19 17:44:51 -07:00
Jack Kingsman 62080424bb Multi-ack. Closes #81. 2026-03-19 17:30:34 -07:00
Jack Kingsman 1ae76848fe Improve test coverage 2026-03-19 17:19:35 -07:00
Jack Kingsman 45ed430580 Allow favorites to be sorted. Closes #91. 2026-03-19 17:05:34 -07:00
Jack Kingsman 5f8ce16855 Fix spacing around byte display on packet detail. Closes #93. 2026-03-19 16:58:27 -07:00
Jack Kingsman b79249c4a0 Add more realistic hop stats display 2026-03-19 16:49:06 -07:00
Jack Kingsman 85d1a940dc Update meshcore for three byte path failures 2026-03-19 09:57:06 -07:00
Jack Kingsman b85d451e26 Add packet feed clickable packet inspection. Closes #75 again. 2026-03-19 09:49:14 -07:00
Jack Kingsman 41a297c944 GIVE ME SMOOTS. Closes #87. 2026-03-18 22:43:34 -07:00
Jack Kingsman 41d64d86d4 Expand docker version testing coverage. Closes #84. 2026-03-18 22:09:44 -07:00
Jack Kingsman bd336e3ee2 Add fancy metrics view for packet feed. Closes #75. 2026-03-18 22:01:10 -07:00
Jack Kingsman cf585cdf87 Unread DMs are always red. Closes #86. 2026-03-18 21:05:40 -07:00
Jack Kingsman 417a583696 Use proper version formatting. Closes #70. 2026-03-18 20:50:56 -07:00
Jack Kingsman 541dba6a75 Fix migration to not import historical advert path 2026-03-19 03:45:51 +00:00
Jack Kingsman 720b8be64f Add e2e test 2026-03-19 03:45:51 +00:00
Jack Kingsman 2b5083e889 Doc updates 2026-03-19 03:45:51 +00:00
Jack Kingsman 5975006cf7 Dupe code cleanup 2026-03-19 03:45:51 +00:00
Jack Kingsman 69e09378f5 Pass 1 on PATH integration 2026-03-19 03:45:51 +00:00
Jack Kingsman b832239e22 Add zero-hop impulse advert. Closes #83. 2026-03-18 19:59:08 -07:00
Jack Kingsman d8e22ef4af Stop autofocus stealing of the cracker panel. Closes #80. 2026-03-18 17:53:42 -07:00
Jack Kingsman ffc5d75a58 Use the actual remoteterm version for the client_version tag so letsmesh rollups work better 2026-03-18 17:46:10 -07:00
Jack Kingsman 350c85ca6d Behave better around DM dedupe/storage. Closes #77. 2026-03-18 17:40:11 -07:00
Jack Kingsman 4d5f0087cc Fix sidebar ordering for contacts by advert. Closes #69. 2026-03-17 21:04:57 -07:00
Jack Kingsman e33bc553f5 improve MQTT error bubble up and massage communitymqtt + debug etc. for version management. Closes #70 2026-03-17 20:33:09 -07:00
Jack Kingsman 020acbda02 Do better DM retry to align with standard firmware retry (but so that we can track the acks). Closes #73. 2026-03-17 18:12:07 -07:00
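
A rough sketch of the resend-loop shape this commit describes, resending on a fixed schedule while keeping ownership of the ack wait so each attempt can be tracked; `send` and `wait_for_ack` are placeholder callables, not the project's API:

```python
import asyncio
from typing import Awaitable, Callable

async def send_dm_with_retry(
    send: Callable[[], Awaitable[None]],
    wait_for_ack: Callable[[], Awaitable[None]],
    retries: int = 3,
    ack_timeout: float = 12.0,
) -> bool:
    for attempt in range(retries):
        await send()  # transmit (or re-transmit) the DM
        try:
            # Wait for this attempt's ack so it can be tracked per try.
            await asyncio.wait_for(wait_for_ack(), timeout=ack_timeout)
            return True
        except asyncio.TimeoutError:
            continue  # no ack heard; consume one attempt and resend
    return False
```
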
Jack Kingsman d5b8f7d462 Prevent contact status bar jitter. Closes #68. 2026-03-17 16:28:05 -07:00
Jack Kingsman bc16d804e9 Log the node time on startup 2026-03-16 22:44:53 -07:00
Jack Kingsman a0459edf62 Add clowntown clock rollover trick 2026-03-16 22:37:13 -07:00
Jack Kingsman 86170766eb Better no-connection errors 2026-03-16 22:17:54 -07:00
Jack Kingsman 33e1b527bd Don't force save-prompt on unedited integrations 2026-03-16 21:47:04 -07:00
Jack Kingsman 23f9bd216c Add pre-filled letsmesh/meshrank MQTT 2026-03-16 21:43:45 -07:00
Jack Kingsman 35b592d2a7 Auto-reset node if clock is too far ahead to change 2026-03-16 21:24:07 -07:00
Jack Kingsman c215aedc0d Let the radio settings pane open but not be used, rather than disabled 2026-03-16 20:30:34 -07:00
Jack Kingsman 9cd567895b Update the docs 2026-03-16 19:29:09 -07:00
Jack Kingsman c469633a30 Serialize radio disconnect in a lock 2026-03-16 19:25:00 -07:00
Jack Kingsman f8f0b3a8cf Updating changelog + build for 3.4.1 2026-03-16 18:43:34 -07:00
Jack Kingsman 47276dcb6c Improve handling of version info in pre-built bundles 2026-03-16 18:41:19 -07:00
Jack Kingsman 9c06ed62a4 Fix reopen-last behavior with new settings hash 2026-03-16 18:17:21 -07:00
Jack Kingsman e19a8d3395 Update our docs and README.md. Closes #65. 2026-03-16 18:14:31 -07:00
Jack Kingsman b68bfc41d6 Use better behavior on disconnected radio and allow deeplinking into settings. Closes #66. 2026-03-16 17:46:12 -07:00
Jack Kingsman ffb5fa51c1 Finish frontend phase 3 2026-03-16 17:32:27 -07:00
Jack Kingsman 0e4828bf72 Frontend optimization part 2 2026-03-16 17:32:27 -07:00
Jack Kingsman a5d9632a67 Phase 1 of frontend fixup 2026-03-16 17:32:27 -07:00
Jack Kingsman 24747ecd17 Unify our DM ingest 2026-03-16 17:32:27 -07:00
Jack Kingsman dbb8dd4c43 Updating changelog + build for 3.4.0 2026-03-16 15:41:43 -07:00
Jack Kingsman 6c003069d4 Move to pre-built frontend on release only. Closes #62 2026-03-16 15:40:01 -07:00
Jack Kingsman ea5ba3b2a3 Add radio model and stats display. Closes #64 2026-03-16 15:29:21 -07:00
Jack Kingsman 58b34a6a2f Make pagination requests abortable 2026-03-16 15:18:38 -07:00
Jack Kingsman 4277e0c924 Clear keys on radio disconnect and add better error for channel send non-radio response 2026-03-16 15:11:02 -07:00
Jack Kingsman 2f562ce682 Don't reconcile mid history view 2026-03-16 15:03:55 -07:00
Jack Kingsman 370ff115b4 Fix DM collapse on same second send 2026-03-16 14:53:48 -07:00
Jack Kingsman 04733b6a02 Use advert position if we don't have a from-repeater-stats lat/lon 2026-03-16 14:33:04 -07:00
Jack Kingsman 749fb43fd0 Ditch garbage data ingest for lat/lon and extend map. Closes #63 2026-03-16 14:24:58 -07:00
Jack Kingsman 8d7d926762 Fix repeater clock drift-drift on nav-away-come-back 2026-03-16 11:32:40 -07:00
Jack Kingsman c809dad05d Polish off all our gross edges around frontend/backend Public name management 2026-03-15 18:20:43 -07:00
Jack Kingsman c76f230c9f Reduce memo thrash on map update 2026-03-15 18:07:30 -07:00
Jack Kingsman 226dc4f59e use server time for advert freshness 2026-03-15 17:55:47 -07:00
Jack Kingsman 3f50a2ef07 fix e2e sort order test 2026-03-15 17:15:14 -07:00
Jack Kingsman 4a7ea9eb29 Always validate fanout configs on save 2026-03-15 16:27:43 -07:00
Jack Kingsman 29368961fc Tweak send no-response handling 2026-03-15 16:12:17 -07:00
Jack Kingsman 7cb84ea6c7 Fix sidebar sort order. Closes #61 2026-03-15 16:02:09 -07:00
Jack Kingsman 0b1a19164a Fix typo and add better ops for decrypt API calls 2026-03-15 15:47:09 -07:00
Jack Kingsman cf1a55e258 Add prebuilt frontend 2026-03-14 23:05:57 -07:00
Jack Kingsman 0881998e5b Overhaul repeater interaction to better deal with login failure clearly 2026-03-14 22:58:14 -07:00
Jack Kingsman ac65943263 Updating changelog + build for 3.3.0 2026-03-13 22:25:47 -07:00
Jack Kingsman 04b324b711 Don't treat matched-prefix DMs as an ack (as it is an echo-ack for channel messages, not DMs) 2026-03-13 22:17:27 -07:00
Jack Kingsman 5512f9e677 Fix last-advert selection logic for path recency 2026-03-13 22:11:02 -07:00
Jack Kingsman b4962d39f0 Fix up resend logic to be cleaner 2026-03-13 22:07:16 -07:00
Jack Kingsman 39a687da58 Forward whole message to FE on resend so the browser updates 2026-03-13 21:57:19 -07:00
Jack Kingsman f41c7756d3 Prevent same-second outgoing collision now that we can send faster.
Also add pending ack tracking
2026-03-13 21:43:50 -07:00
Jack Kingsman bafea6a172 Fix blocking on DMs (again, but right this time) 2026-03-13 21:10:50 -07:00
Jack Kingsman 68f05075ca Modal-ify the room region override 2026-03-13 18:11:18 -07:00
Jack Kingsman adfb8c930c Move routing override to modal 2026-03-13 18:04:49 -07:00
Jack Kingsman 1299a301c1 Add route discovery 2026-03-13 17:55:17 -07:00
Jack Kingsman 3a4ea8022b Add local node discovery 2026-03-13 17:25:28 -07:00
Jack Kingsman bd19015693 Don't suggest npm ci 2026-03-13 14:46:55 -07:00
Jack Kingsman cb9c9ae289 Docs updates 2026-03-13 11:18:11 -07:00
Jack Kingsman 2369e69e0a Catch channel cache issues on set_channel failure during eviction 2026-03-13 11:09:23 -07:00
Jack Kingsman 9c2b6f0744 Add fallback polling message persistence for channel messages 2026-03-13 11:05:49 -07:00
Jack Kingsman 70d28e53a9 Fix self-node snapping on node addition 2026-03-13 10:42:36 -07:00
Jack Kingsman 96d8d1dc64 Be more strict with SQS region 2026-03-12 23:57:13 -07:00
Jack Kingsman a7ff041a48 Drop out channel hash helper 2026-03-12 23:57:13 -07:00
Jack Kingsman 5a580b9c01 Tighten up error phrasing 2026-03-12 23:57:13 -07:00
Jack Kingsman 0834414ba4 Remove redundant channel listing 2026-03-12 23:57:13 -07:00
Jack Kingsman df538b3aaf Parallelize docker_ci 2026-03-12 23:57:13 -07:00
Jack Kingsman 2710cafb21 Add health endpoint 2026-03-12 23:57:13 -07:00
Jack Kingsman 338f632514 Remove unused endpoint and fix stale slot retry problems 2026-03-12 23:57:13 -07:00
Jack Kingsman 7e1f941760 Add documentation and force-lock-acquisition mode for channel management 2026-03-12 23:57:13 -07:00
Jack Kingsman 87ea2b4675 LRU-based parallel channel storage 2026-03-12 23:57:13 -07:00
Jack Kingsman 5c85a432c8 Phase 1 of manual channel management 2026-03-12 23:57:13 -07:00
Jack Kingsman 22ca5410ee Fix up unread bugs 2026-03-12 22:00:18 -07:00
Jack Kingsman 276e0e09b3 Show dismiss 'X' on the Jump to Unread 2026-03-12 20:23:38 -07:00
Jack Kingsman 1c57e35ba5 Don't collapse ambiguous senders to imply an indirect link between repeaters 2026-03-12 20:13:45 -07:00
Jack Kingsman 358589bd66 Cull a bunch of unused functions 2026-03-12 18:12:27 -07:00
Jack Kingsman 74c13d194c Fix message dual render and get the jump to unread link out of the way on visible unread boundaries. Closes #57. 2026-03-12 16:31:47 -07:00
Jack Kingsman 07fd88a4d6 Make repeater neighbor display need a GPS fix to show map + distance, and fetch before display. Closes #58. 2026-03-12 16:18:52 -07:00
Jack Kingsman 07934093e6 Don't force-insert a node with unknown relationships just because they are the marked recipient of a DM. Closes #44. 2026-03-12 14:30:26 -07:00
Jack Kingsman 3ee4f9d7a2 Do some same name ambiguous + known sibling collapse 2026-03-12 13:10:57 -07:00
Jack Kingsman b81f6ef89e Visualizer overhaul 2026-03-12 12:56:59 -07:00
Jack Kingsman 489950a2f7 Use dashed lines for collapsed ambiguous repeater paths. Closes #44. 2026-03-12 12:11:17 -07:00
Jack Kingsman 08e00373aa Updating changelog + build for 3.2.0 2026-03-12 11:55:56 -07:00
Jack Kingsman 2f0d35748a Deepen contrast on map colors. Fixes #54 2026-03-12 11:39:42 -07:00
Jack Kingsman 7db2974481 Fix bot kwargs and scoot over unread button to middle 2026-03-12 10:58:38 -07:00
Jack Kingsman 0a20929df6 Add conversation unread marker and jump-to-unread button 2026-03-12 10:54:25 -07:00
Jack Kingsman 30f6f95d8e Add path_bytes_per_hop to bot kwargs and encourage kwargs-only going forward 2026-03-12 10:32:33 -07:00
Jack Kingsman 9e8cf56b31 Be clearer about reality of location inclusion. Closes #53 2026-03-12 10:00:00 -07:00
Jack Kingsman fb535298be Fix up search pane and header key alignment regression 2026-03-12 00:35:05 -07:00
Jack Kingsman 1f2903fc2d Add better preview pane and tweak some themes for contrast 2026-03-12 00:06:33 -07:00
Jack Kingsman 6466a5c355 Add node GPS enablement + sourcing. Closes #53. 2026-03-11 22:42:41 -07:00
Jack Kingsman f8e88b3737 Docs tune up 2026-03-11 22:05:59 -07:00
Jack Kingsman a13e241636 Other connected clients get new chans over WS 2026-03-11 22:02:05 -07:00
Jack Kingsman 8c1a58b293 Don't use a stale MC instance 2026-03-11 21:57:59 -07:00
Jack Kingsman bf53e8a4cb Clarify message for fanout 2026-03-11 21:51:22 -07:00
Jack Kingsman c6cd209192 Change direct trace icon 2026-03-11 21:40:09 -07:00
Jack Kingsman e5c7ebb388 Tweak the win95 theme coloring a bit 2026-03-11 21:05:55 -07:00
Jack Kingsman e37632de3f Add dedupe bug the agents keep getting hung up on to errata 2026-03-11 20:58:29 -07:00
Jack Kingsman d36c5e3e32 Deal with reconciliation conflict from colliding WS fetches 2026-03-11 20:52:48 -07:00
Jack Kingsman bc7506b0d9 Some misc frontend cleanup grossness 2026-03-11 20:49:37 -07:00
Jack Kingsman 38c7277c9d More careful guards around channel message key matching on collision 2026-03-11 20:41:31 -07:00
Jack Kingsman 20d0bd92bb Oh god, so much code for such a minor flow. Ambiguous sender manually fetched prefix DMs are now visible. 2026-03-11 20:38:41 -07:00
Jack Kingsman e0df30b5f0 AGENTS.md fixups 2026-03-11 19:44:29 -07:00
Jack Kingsman 83635845b6 Don't sleep in the exception handler 2026-03-11 19:32:54 -07:00
Jack Kingsman 2e705538fd Abort search requests on unmount 2026-03-11 19:26:27 -07:00
Jack Kingsman 4363fd2a73 Track message reconciliation and don't fire on stale returns 2026-03-11 19:20:38 -07:00
Jack Kingsman 5bd3205de5 Stop channels with blocked senders bothering the unread count 2026-03-11 19:11:53 -07:00
Jack Kingsman bcde3bd9d5 Don't construct synthetic objects 2026-03-11 19:04:57 -07:00
Jack Kingsman 15a8c637e4 Do our own tracking for contact-on-radio with more correct return state 2026-03-11 19:01:14 -07:00
Jack Kingsman d38efc0421 Add warning on search for user-key linkage unreliability 2026-03-11 18:45:03 -07:00
Jack Kingsman b311c406da Updating changelog + build for 3.1.1 2026-03-11 18:24:41 -07:00
Jack Kingsman b5e9e4d04c Fix tag notes 2026-03-11 18:23:20 -07:00
Jack Kingsman ce87dd9376 Updating changelog + build for 3.1.0 2026-03-11 18:20:31 -07:00
Jack Kingsman 5273d9139d Use newer workflow steps 2026-03-11 18:19:04 -07:00
Jack Kingsman 04ac3d6ed4 Drop out meshcore_py's autoreconnect logic on connection disable 2026-03-11 18:12:11 -07:00
Jack Kingsman 1a1d3059db Split up runs 2026-03-11 18:06:40 -07:00
Jack Kingsman 633510b7de Bring back in package-lock.json 2026-03-11 17:44:55 -07:00
Jack Kingsman 7f4c1e94fd Add github workflows 2026-03-11 17:37:57 -07:00
Jack Kingsman a06fefb34e New themes 2026-03-11 17:28:12 -07:00
Jack Kingsman 4e0b6a49b0 Add ability to pause radio connection (closes #51) 2026-03-11 17:17:03 -07:00
Jack Kingsman e993009782 True up some UX inconsistencies and have a theme preview pane 2026-03-11 17:03:43 -07:00
Jack Kingsman ad7028e508 Add better search management and operators + contact search quick link 2026-03-11 16:56:09 -07:00
Jack Kingsman ce9bbd1059 Better clarity on sidebar search 2026-03-11 16:22:02 -07:00
Jack Kingsman 0c35601af3 Enrich contact no-key info pane with first-in-use date 2026-03-11 16:19:10 -07:00
Jack Kingsman 93369f8d64 Enrich names-based contact pane a bit 2026-03-11 15:57:29 -07:00
Jack Kingsman e7d1f28076 Add SQS fanout 2026-03-11 14:17:08 -07:00
Jack Kingsman 472b4a5ed2 Better logging output 2026-03-11 13:40:48 -07:00
Jack Kingsman 314e4c7fff True up regional routing icon style 2026-03-11 10:04:01 -07:00
Jack Kingsman 528a94d2bd Add basic auth 2026-03-11 10:02:02 -07:00
Jack Kingsman fa1c086f5f Updating changelog + build for 3.0.0 2026-03-10 21:41:04 -07:00
Jack Kingsman d8bb747152 Reorder themes 2026-03-10 21:27:05 -07:00
Jack Kingsman 18a465fde8 Fix ordering 2026-03-10 21:06:50 -07:00
Jack Kingsman c52e00d2b7 Merge pull request #50 from jkingsman/notifications
Notifications
2026-03-10 20:49:12 -07:00
Jack Kingsman e17d1ba4b4 Move search bar to top level 2026-03-10 19:56:10 -07:00
Jack Kingsman 48a49ce48d Add some new themes 2026-03-10 19:54:06 -07:00
Jack Kingsman 9d1676818f Add lagoon pop 2026-03-10 19:51:10 -07:00
Jack Kingsman b5edd00220 Tweak light mode tools color and icon state 2026-03-10 19:36:04 -07:00
Jack Kingsman d3a7b7ce07 Add light mode toggle 2026-03-10 19:32:22 -07:00
Jack Kingsman 42ca242ee1 Update override badge for region routing 2026-03-10 19:26:31 -07:00
Jack Kingsman 3e7e0669c5 Add bell icon and use better notif icon 2026-03-10 19:04:52 -07:00
Jack Kingsman bee273ab56 Add notifications 2026-03-10 19:03:52 -07:00
Jack Kingsman 1842bcf43e Add new icon size + crush PNGs 2026-03-10 19:03:34 -07:00
Jack Kingsman 7c68973e30 Icon overhaul 2026-03-10 17:43:15 -07:00
Jack Kingsman c9ede1f71f Clearer about advertiser repeat button 2026-03-10 15:49:28 -07:00
Jack Kingsman 42e9628d98 Fix clock sync command 2026-03-10 15:46:34 -07:00
Jack Kingsman 1bf760121d Preserve repeater values when browsing away 2026-03-10 15:40:26 -07:00
Jack Kingsman bb4a601788 Coerce uvicorn logging to better format 2026-03-10 14:58:44 -07:00
Jack Kingsman d0ed3484ce Add hourly sync and crow loudly if it finds a discrepancy 2026-03-10 14:47:18 -07:00
Jack Kingsman 738e0b9815 Don't load full right away 2026-03-10 14:39:40 -07:00
Jack Kingsman 97997e23e8 Drop frequency of contact sync task, make standard polling opt-in only 2026-03-10 14:04:51 -07:00
Jack Kingsman eaee66f836 Add timestamps to logs and stop regen'ing licenses every time 2026-03-10 13:08:26 -07:00
Jack Kingsman 9a99d3f17e Codex Refactor -- Make things more manageable and LLM friendly 2026-03-10 12:26:30 -07:00
Jack Kingsman 73e717fbd8 Fix Load All button height 2026-03-10 09:41:23 -07:00
Jack Kingsman dc87fa42b2 Update AGENTS.md 2026-03-10 00:00:57 -07:00
Jack Kingsman f650e0ab34 Make all scripts executable 2026-03-09 23:55:17 -07:00
Jack Kingsman 39b745f8b0 Compactify some things for LLM wins 2026-03-09 23:53:19 -07:00
Jack Kingsman 18e1408292 Be better about DB insertion shape 2026-03-09 23:42:46 -07:00
Jack Kingsman 3e941a5b20 remove radio dependency fallback shim 2026-03-09 23:29:25 -07:00
Jack Kingsman a000fc88a5 make radio router use runtime seam only 2026-03-09 23:22:56 -07:00
Jack Kingsman def7c8e29e route radio sync through radio runtime 2026-03-09 23:16:17 -07:00
Jack Kingsman 9388e1f506 route startup and fanout through radio runtime 2026-03-09 23:11:57 -07:00
Jack Kingsman 81bdfe09fa extract radio runtime seam 2026-03-09 23:07:34 -07:00
Jack Kingsman 5e94b14b45 Refactor visualizer 2026-03-09 22:20:21 -07:00
Jack Kingsman c3f1a43a80 Be more gentle with frontend typing + go back to fire-and-forget for cracked room creation 2026-03-09 21:51:07 -07:00
Jack Kingsman 3316f00271 extract app shell prop assembly 2026-03-09 21:07:56 -07:00
Jack Kingsman 319b84455b extract conversation navigation state 2026-03-09 20:59:52 -07:00
Jack Kingsman f107dce920 extract frontend app shell 2026-03-09 20:23:24 -07:00
Jack Kingsman ec5b9663b2 Brief interlude -- fix corrupt packet message display 2026-03-09 20:11:13 -07:00
Jack Kingsman 19d7c3c98c extract conversation pane component 2026-03-09 19:41:03 -07:00
Jack Kingsman ae0ef90fe2 extract conversation timeline hook 2026-03-09 19:12:26 -07:00
Jack Kingsman 56e5e0d278 extract frontend conversation actions hook 2026-03-09 18:37:06 -07:00
Jack Kingsman 5d509a88d9 extract frontend realtime state hook 2026-03-09 18:27:01 -07:00
Jack Kingsman 946006bd7f extract radio command service 2026-03-09 18:13:18 -07:00
Jack Kingsman 344cee5508 extract radio lifecycle service 2026-03-09 18:02:58 -07:00
Jack Kingsman 0d671f361d extract message send service 2026-03-09 17:54:44 -07:00
Jack Kingsman 2d781cad56 add typed websocket event contracts 2026-03-09 17:47:31 -07:00
Jack Kingsman 088dcb39d6 extract contact reconciliation service 2026-03-09 17:32:43 -07:00
Jack Kingsman b1e3e71b68 extract dm ack tracker service 2026-03-09 17:03:07 -07:00
Jack Kingsman 557af55ee8 extract backend message lifecycle service 2026-03-09 16:56:23 -07:00
Jack Kingsman 9421c10e8f Refetch channels on reconnect and fix up count-change refresh guard 2026-03-09 16:44:39 -07:00
Jack Kingsman b157ee14e4 Add background-hash-mark addition for region routing
Per https://buymeacoffee.com/ripplebiz/region-filtering:

> After some discussions, and that there is some confusion
around #channels and #regions, it's been decided to drop
the requirement to have the '#' prefix. So, region names
will just be plain alphanumeric (and '-'), with no # prefix.

> For backwards compatibility, the names will internally have
a '#' prepended, but for all client GUI's and command lines,
you generally won't see mention of '#' prefixes. The next
firmware release (v1.12.0) and subsequent Ripple firmware
and Liam's app will have modified UI to remove the '#' requirement.

So, silently add, but don't duplicate, for users who have already
added hashmarks.
2026-03-09 15:24:23 -07:00
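
The normalization described above ("silently add, but don't duplicate") reduces to a one-line check. A sketch, with `normalize_region` as a hypothetical name:

```python
def normalize_region(name: str) -> str:
    """Prepend the internal '#' prefix for backwards compatibility,
    without doubling it for users who already typed the hashmark."""
    name = name.strip()
    return name if name.startswith("#") else f"#{name}"

assert normalize_region("pnw") == "#pnw"
assert normalize_region("#pnw") == "#pnw"  # no duplicate prefix
```
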
Jack Kingsman e03ddcaaa7 Improve correctness of regional traffic repeats 2026-03-09 15:03:18 -07:00
Jack Kingsman 6832516b40 Add improved note about region entry 2026-03-09 14:57:45 -07:00
Jack Kingsman 5bfd9f4af2 Use updated meshcore-decoder library with TRACE patch and fixup frontend routing display on packet list 2026-03-09 12:45:51 -07:00
Jack Kingsman 463a0c9084 Add bolder coloring for mentions in rollups and always use bold on DMs 2026-03-09 11:57:30 -07:00
Jack Kingsman 811c7e7349 Add regional channel routing (closes #42) 2026-03-09 11:09:31 -07:00
Jack Kingsman 0c5b37c07c Add custom pathing (closes #45) 2026-03-09 10:26:01 -07:00
Jack Kingsman 7e384c12bb Fix trace packet handling (closes #44) 2026-03-09 09:23:35 -07:00
Jack Kingsman 48bc8c6337 Improve bot error bubble-up along with a few other spots 2026-03-09 01:18:41 -07:00
Jack Kingsman c3d7b8f79a Improve bot error bubble-up along with a few other spots 2026-03-09 00:41:07 -07:00
Jack Kingsman b5c4413e63 Bump node requirement to 20+ 2026-03-08 23:00:08 -07:00
Jack Kingsman 9fbdbaa174 Updating changelog + build for 2.7.9 2026-03-08 22:18:59 -07:00
Jack Kingsman e99e522573 Fix clipping on integration add drop down 2026-03-08 22:17:32 -07:00
Jack Kingsman 9d806c608b Add contact normalization rather than loading the packed path bytes 2026-03-08 21:01:01 -07:00
Jack Kingsman 5a9489eff1 Updating changelog + build for 2.7.8 2026-03-08 20:47:09 -07:00
Jack Kingsman beb28b1f31 Updating changelog + build for 2.7.8 2026-03-08 20:42:03 -07:00
Jack Kingsman 7d688fa5f8 Move to more stable docker reqs without disrupting windows users 2026-03-08 20:38:33 -07:00
Jack Kingsman 09b68c37ba Better ci scripts 2026-03-08 19:56:58 -07:00
Jack Kingsman df7dbad73d Fix bad file refs in decoder that break npm 10 2026-03-08 19:56:44 -07:00
Jack Kingsman 060fb1ef59 Updating changelog + build for 2.7.1 2026-03-08 18:48:14 -07:00
Jack Kingsman b14e99ff24 Patch a bizarre browser quirk of leaky elements (???) in the packet list 2026-03-08 18:45:07 -07:00
Jack Kingsman 77523c1b15 Patch up to use a published patched meshcore-decoder and add a test script for different node versions 2026-03-08 18:35:58 -07:00
Jack Kingsman 9673b25ab3 yeeeikes fix raw packet feed sorry 2026-03-08 17:38:20 -07:00
Jack Kingsman 2732506f3c Fix historical DM packet length passing and fix up some docs 2026-03-08 17:12:36 -07:00
Jack Kingsman 523fe3e28e Updating changelog + build for 2.7.0 2026-03-08 16:23:23 -07:00
Jack Kingsman 3663db6ed3 Multibyte path support 2026-03-08 14:53:14 -07:00
Jack Kingsman 5832fbd2c9 Fix the meshcore decoder override 2026-03-08 14:52:26 -07:00
Jack Kingsman 655066ed73 Fix playwright tests for new radio status indicator 2026-03-08 14:30:47 -07:00
Jack Kingsman 5cb5c2ad25 Show hop width in the contact info modal 2026-03-08 13:54:07 -07:00
Jack Kingsman 69a6922827 Fix path modal support for multibyte 2026-03-08 13:54:07 -07:00
Jack Kingsman 806252ec7e Move to new meshcore_py version and rip out monkeypatch 2026-03-08 13:54:07 -07:00
Jack Kingsman 2236132df4 Add note about brittle meshcore_py on corrupted advert handling 2026-03-08 13:54:07 -07:00
Jack Kingsman 564cd65496 Update tests with new out_path_hash_mode field and surface error on path hash mode set failure 2026-03-08 13:54:07 -07:00
Jack Kingsman b3625b4937 Add statistics 2026-03-08 13:54:07 -07:00
Jack Kingsman 20af50585b Docs & tests 2026-03-08 13:54:07 -07:00
Jack Kingsman d776f3d09b Add full-decrypt tests for multibyte paths 2026-03-08 13:54:07 -07:00
Jack Kingsman 075debc51b Force validation of path_hash_mode 2026-03-08 13:54:07 -07:00
Jack Kingsman 34318e4814 Use more faithful packet frame parsing 2026-03-08 13:54:07 -07:00
Jack Kingsman 48dab293ae Advert-path uses correct identity for dedupe 2026-03-08 13:54:07 -07:00
Jack Kingsman 76d11b01a7 Actually persist out_path_hash_mode instead of lossily deriving it 2026-03-08 13:54:06 -07:00
Jack Kingsman 69c812cfd4 Ewwww monkeypatch library bug I'm so sorry code gods. Bug reported at https://github.com/meshcore-dev/meshcore_py/issues/65 2026-03-08 13:54:06 -07:00
Jack Kingsman 2257c091e8 Fix visualizer coercion for multibyte hops 2026-03-08 13:54:06 -07:00
Jack Kingsman 55fb2390de Phase 8: Tests & Misc 2026-03-08 13:54:06 -07:00
Jack Kingsman 0b91fb18bd Phase 7: Integration & Migration 2026-03-08 13:54:06 -07:00
Jack Kingsman 8948f2e504 Phase 6: Radio config + path hash mode 2026-03-08 13:54:06 -07:00
Jack Kingsman 5c413bf949 Phase 5: Frontend path rendering 2026-03-08 13:54:06 -07:00
Jack Kingsman b0ffa28e46 Phase 4: Update advert path storage 2026-03-08 13:54:06 -07:00
Jack Kingsman f97c846378 Phase 3: Add path size inference and also bin some stupid migration tests while we're at it 2026-03-08 13:54:06 -07:00
Jack Kingsman e0d7c8a083 Whoops, pin newer meshcore lib 2026-03-08 13:54:06 -07:00
Jack Kingsman 11ce2be5fa Phase 2: Patch up message path metadata 2026-03-08 13:54:06 -07:00
Jack Kingsman 1fc041538e Phase 0.5 & 1: Centralize path utils, multi-hop packet decoding, updated PacketInfo shape 2026-03-08 13:54:06 -07:00
Jack Kingsman 0ac8e97ea2 Put tools in a collapsible 2026-03-08 13:54:01 -07:00
Jack Kingsman e6743d2098 Updating changelog + build for 2.6.1 2026-03-08 12:40:28 -07:00
Jack Kingsman f472ff7cab Fix multibyte meshcore-decoder dep hell 2026-03-08 12:34:47 -07:00
Jack Kingsman 7ac220aee1 Add git 2026-03-08 12:10:04 -07:00
Jack Kingsman 43e38ecc5b Updating changelog + build for 2.6.0 2026-03-08 00:08:42 -08:00
Jack Kingsman 99eddfc2ef Update status bar and boot up more quickly with actual radio status 2026-03-07 23:47:47 -08:00
Jack Kingsman f9eb6ebd98 Fix stale ref issue in strict mode 2026-03-07 21:40:34 -08:00
Jack Kingsman 8f59606867 Fix library name 2026-03-06 22:22:36 -08:00
Jack Kingsman d214da41c6 Change names of community MQTT 2026-03-06 22:14:11 -08:00
Jack Kingsman da22eb5c48 Fanout integration UX overhaul 2026-03-06 21:37:11 -08:00
Jack Kingsman 94546f90a4 Back to main fanout screen on save 2026-03-06 18:20:53 -08:00
Jack Kingsman f82cadb4e1 Show webhooks/apprise summary 2026-03-06 18:19:14 -08:00
Jack Kingsman f60c656566 Add better coverage for alternative community MQTTs. Closes #39 2026-03-06 18:14:28 -08:00
Jack Kingsman b5e2a4c269 Soften the 'always run tests' 2026-03-06 16:26:19 -08:00
Jack Kingsman d4f73d318a Fanout modules!
Overhaul post-message fanout. Closes PR #37.
2026-03-06 16:08:23 -08:00
Jack Kingsman 8ffae50b87 Add some brutal tests for webhooks 2026-03-06 16:06:13 -08:00
Jack Kingsman dd13768a44 Tighten up message broadcast contract 2026-03-06 15:55:04 -08:00
Jack Kingsman 3330028d27 Elevate error logging for message poll loop issues 2026-03-06 15:43:29 -08:00
Jack Kingsman 3144910cd9 Fix regression around direct path DMs 2026-03-06 15:41:47 -08:00
Jack Kingsman 819470cb40 Add missing httpx dep 2026-03-06 15:39:11 -08:00
Jack Kingsman d7d06ec1f8 Remove some dead code and unify param names around not sending for actual real life messages vs. historical decrypt 2026-03-06 14:44:48 -08:00
Jack Kingsman 9d03844371 Improve module lifecycling 2026-03-06 14:44:48 -08:00
Jack Kingsman 929a931ce9 Add channel name to broadcasts 2026-03-06 14:44:48 -08:00
Jack Kingsman cba9835568 Rework more coverage in e2e tests and don't force radio restart + better startup error handling 2026-03-06 14:44:48 -08:00
Jack Kingsman 58daf63d00 Change fanout tab name. 2026-03-06 14:44:48 -08:00
Jack Kingsman bb13d223ca Remove unused IATA regex and redundant community enabled check that's always true 2026-03-06 14:44:48 -08:00
Jack Kingsman 863251d670 Remove dead on_raw method and move json import somewhere not dumb 2026-03-06 14:44:48 -08:00
Jack Kingsman 5e042b7bcc Add health refresh to delete handler and correct concurrency description 2026-03-06 14:44:48 -08:00
Jack Kingsman 4d15c7d894 Add per-config id lock to reload and remove stale comment 2026-03-06 14:44:48 -08:00
Jack Kingsman 22e28a9e5b Add min length to name, 400 on unknown scope, normalize IATA 2026-03-06 14:44:48 -08:00
Jack Kingsman e72c3abd7f Correct sender name and use non-deprecated loop getter 2026-03-06 14:44:48 -08:00
Jack Kingsman 439face70b Tweak fanout display and update docs 2026-03-06 14:44:48 -08:00
Jack Kingsman 55ac9df681 Parallelize fanouts and add module watchdog 2026-03-06 14:44:48 -08:00
Jack Kingsman 5808504ee0 Flesh out the fanout agents a bit more 2026-03-06 14:44:48 -08:00
Jack Kingsman cb4333df4f Fanout hitlist fixes: bugs, quality, tests, webhook HMAC signing 2026-03-06 14:44:48 -08:00
Jack Kingsman 7534f0cc54 Patch some doc issues and do minor bug mop-up (outgoing bot message flagging, unprotected bot endpoint, contact filtering on scope selection, don't drop disabled but configured community endpoints) 2026-03-06 14:44:48 -08:00
Jack Kingsman adfb4addb7 Add MQTT removal migration and fix tests + docs 2026-03-06 14:44:48 -08:00
Jack Kingsman e99fed2e76 Add some test coverage 2026-03-06 14:44:47 -08:00
Jack Kingsman 13fa94acaa Richer saving options + more popping color on disabled integration 2026-03-06 14:44:47 -08:00
Jack Kingsman 418955198f Add Apprise 2026-03-06 14:44:47 -08:00
Jack Kingsman e3e4e0b839 Add webhooks & reformat a bit 2026-03-06 14:44:47 -08:00
Jack Kingsman 5ecb63fde9 Move bots into Fanout & Forwarding 2026-03-06 14:44:47 -08:00
Jack Kingsman 7cd54d14d8 Move to modular fanout bus 2026-03-06 14:41:51 -08:00
Jack Kingsman 93b5bd908a Extend tests and fix docs 2026-03-06 14:41:34 -08:00
Jack Kingsman de30dfe87b Backfill channel sender identity when it's available 2026-03-06 14:36:33 -08:00
Jack Kingsman d5a60d6ca3 Attach contact name to DMs for blocking purposes 2026-03-06 14:28:41 -08:00
Jack Kingsman 8f2d55277f Add clearer messaging around back to chat 2026-03-06 11:40:29 -08:00
Jack Kingsman cdf5c0b81e Some new theme movement 2026-03-06 09:17:22 -08:00
Jack Kingsman ae51755f07 Fix issue with backend restart letting unreads accumulate on open thread 2026-03-05 23:01:04 -08:00
Jack Kingsman 7715732e69 Add sender_key to outgoing and make unread counts respect block list 2026-03-05 10:43:16 -08:00
Jack Kingsman 01a5dc8d93 A11y bug bash 2026-03-05 10:24:22 -08:00
Jack Kingsman c7bd4dd3fc Updating changelog + build for 2.5.0 2026-03-05 00:13:15 -08:00
Jack Kingsman a069af8364 Doc updates 2026-03-04 21:18:07 -08:00
Jack Kingsman 03f4963966 Guard flood scope and be better about blocking 2026-03-04 20:15:44 -08:00
Jack Kingsman e439bc913a Reorganize the settings panes a bit 2026-03-04 19:46:20 -08:00
Jack Kingsman d5fe9c677f Add contact blocking 2026-03-04 18:54:21 -08:00
Jack Kingsman 145609faf9 Add outgoing message region tagging. Closes #35. 2026-03-04 15:42:21 -08:00
Jack Kingsman c2931a266e Add hop display map in pathing modal 2026-03-04 13:03:43 -08:00
Jack Kingsman 9629a35fe1 Don't exclude 7 day stale nodes on Map view when linked in 2026-03-04 12:28:37 -08:00
Jack Kingsman 1ce72ecdf7 Add #remoteterm as default channel 2026-03-04 10:56:31 -08:00
Jack Kingsman e0fb093612 Fix non-cache refresh on unfocused active threads; doc and test improvements 2026-03-04 10:16:17 -08:00
Jack Kingsman 1f37da8d2d s/router/repeater/ 2026-03-03 22:22:08 -08:00
Jack Kingsman a9472870f3 Add 'Show Key' prompt on private rooms to prevent key leakage. Closes #36 2026-03-03 21:39:03 -08:00
Jack Kingsman d6611e8518 Add better logic around node-last-heard timing 2026-03-03 21:24:50 -08:00
Jack Kingsman 6274df7244 Add some color themes 2026-03-03 21:08:19 -08:00
Jack Kingsman 813a47ee14 Shard up frontend variables 2026-03-03 20:07:45 -08:00
Jack Kingsman e0e71180b2 Add global message search and more e2e tests 2026-03-03 19:19:24 -08:00
Jack Kingsman 73a835688d Add channel info box 2026-03-03 17:09:48 -08:00
Jack Kingsman 5d2aaa802b Move controls to top 2026-03-03 16:19:33 -08:00
Jack Kingsman eb78285b8f Add bot disable flow 2026-03-03 15:57:37 -08:00
Jack Kingsman e8538c55ea Fix flaky test selector 2026-03-03 14:02:05 -08:00
Jack Kingsman b1cb531911 Make node recency customizable in the visualizer 2026-03-03 13:52:55 -08:00
Jack Kingsman 8fa37fe6dc Websocket for contact deletion, radio contact deletion flag fix, resent message now appends sender name 2026-03-03 12:43:27 -08:00
Jack Kingsman 73d4647cfc Add node last heard reason to hover 2026-03-03 12:08:20 -08:00
Jack Kingsman 31afb7b9c0 Fix some doc gaps 2026-03-03 12:01:58 -08:00
Jack Kingsman 73f082c06c Fallback DM handler cleanliness (I never use this lol) 2026-03-03 11:59:15 -08:00
Jack Kingsman ea49bdff35 ID tiebreaker for same-second messages 2026-03-03 11:57:53 -08:00
Jack Kingsman 21fd505fb9 Add sender name on outgoing messages 2026-03-03 11:56:18 -08:00
Jack Kingsman 62943f6292 Add why-active reason to visualizer nodes 2026-03-03 11:54:54 -08:00
Jack Kingsman c76b7895dd Clarify MQTT limitation comment 2026-03-03 10:29:04 -08:00
Jack Kingsman 285c90f71e Conform status colors 2026-03-03 10:24:25 -08:00
Jack Kingsman e4662229b4 Shorten client name 2026-03-03 09:40:20 -08:00
Jack Kingsman be21b434cf Get closer to parity with meshcore packet capture 2026-03-03 09:31:24 -08:00
Jack Kingsman 662e84adbe Add community mqtt stats reporting 2026-03-03 09:17:57 -08:00
Jack Kingsman 5a72adc75b Add some QOL features to readiness scripts 2026-03-03 09:17:45 -08:00
Jack Kingsman 707f98d203 Fix packet sidebar retention 2026-03-03 09:05:21 -08:00
Jack Kingsman 4c1d5fb8ec Add status/LWT to community MQTT ingest 2026-03-02 23:10:25 -08:00
Jack Kingsman fb279ccf1a Accessibility overhaul 2026-03-02 20:34:06 -08:00
Jack Kingsman 7d39e726b4 Local serving optimizations and Windows docs updates 2026-03-02 20:19:49 -08:00
Jack Kingsman d9aa67d254 No more package locks 2026-03-02 19:25:40 -08:00
Jack Kingsman 81694e7ab3 Fix an e2e test and add a warning about nodes not showing up on community listings 2026-03-02 19:15:28 -08:00
Jack Kingsman 99f31c8226 Carve out dead code and cruft, and unify repeater status pane 2026-03-02 18:56:18 -08:00
Jack Kingsman f715d72467 Doc and typo hunting 2026-03-02 18:05:45 -08:00
Jack Kingsman f335fc56cc Patch up some missing tests and fix+test channel add not clearing on channel submission without add-another checked 2026-03-02 18:02:53 -08:00
Jack Kingsman d8294a8383 Add more warnings around radio config, stats loading, and packet decrypt (and remove accidentally committed script whoops) 2026-03-02 16:46:18 -08:00
Jack Kingsman 79db09bd15 Don't show mark all as read if there's nothing to read 2026-03-02 16:42:59 -08:00
Jack Kingsman e3fe36dc19 Fix stale closure on existing keys 2026-03-02 16:38:44 -08:00
Jack Kingsman 69584051f5 Don't message fetch on map or visualizer 2026-03-02 16:35:26 -08:00
Jack Kingsman 58ea1d7eb9 Be more protective around stripping at null byte, not after 2026-03-02 16:34:13 -08:00
Jack Kingsman c9776639a0 Commit package-lock 2026-03-02 16:33:08 -08:00
Jack Kingsman d860ea706d Add log line to show if our polling loop actually mops anything up 2026-03-02 15:32:58 -08:00
Jack Kingsman b7976206fc Add some additional documentation notes 2026-03-02 15:28:32 -08:00
Jack Kingsman f73d10328b Add note about let's mesh 2026-03-02 15:05:17 -08:00
Jack Kingsman a8ff2b4133 Updating changelog + build for 2.4.0 2026-03-02 14:54:21 -08:00
Jack Kingsman 09ad642d79 Merge pull request #34 from jkingsman/community-mqtt
Community MQTT (LetsMesh). Closes #33.
2026-03-02 14:42:04 -08:00
Jack Kingsman 9e68544fe9 Add warning to MQTT section 2026-03-02 14:41:49 -08:00
Jack Kingsman f059756064 Clearer labelling and page organization for MQTT 2026-03-02 14:25:44 -08:00
Jack Kingsman 95bacc4caf Split up community broker fields and reformat MQTT config page 2026-03-02 14:24:20 -08:00
Jack Kingsman 2581cc6af7 Show error toast on PK export failure 2026-03-02 14:24:20 -08:00
Jack Kingsman 05df314619 Refactor to combined base for MQTT 2026-03-02 14:24:19 -08:00
Jack Kingsman 00ca4afa8d Add support for community MQTT ingest 2026-03-02 14:24:19 -08:00
Jack Kingsman 2496d70c4b Add kofi link 2026-03-02 11:46:09 -08:00
Jack Kingsman 4b05dc2f41 Add clearer MQTT topics and payload shapes 2026-03-02 11:41:25 -08:00
Jack Kingsman b8cdae8a03 Tag mesh-traffic-reliant tests with a warning 2026-03-02 10:52:48 -08:00
Jack Kingsman 3bad3cb21c Add clearer message for e2e test lags 2026-03-02 10:46:48 -08:00
Jack Kingsman f118d5e222 Add debug log level info 2026-03-02 10:38:40 -08:00
Jack Kingsman e0d87c4df3 Add licenses explicitly (probably should have been doing this for a while; oops! Apologies!) 2026-03-01 22:12:42 -08:00
Jack Kingsman ed83d1b2c4 Add a sign of life to e2e tests 2026-03-01 19:23:56 -08:00
Jack Kingsman d988309a2f Move settings ordering around 2026-03-01 19:11:21 -08:00
Jack Kingsman 7c37133856 Add some info and attribution 2026-03-01 18:53:14 -08:00
Jack Kingsman 0bde67d66c Move build scripts into better places 2026-03-01 18:06:55 -08:00
Jack Kingsman 56d4fa707a Updating changelog + build for 2.3.0 2026-03-01 17:37:21 -08:00
Jack Kingsman a8af9b10f3 Break up repeater and settings into constituent files 2026-03-01 17:34:00 -08:00
Jack Kingsman 18ac86b4c0 Have visualizer remember settings 2026-03-01 16:38:25 -08:00
Jack Kingsman e504f4de33 Test and doc improvements 2026-03-01 14:53:18 -08:00
Jack Kingsman 9c4b049c8d Fix double message render glitch 2026-03-01 14:37:40 -08:00
Jack Kingsman 330c5efb31 Merge pull request #29 from jkingsman/mqtt
MQTT support
2026-03-01 11:11:46 -08:00
Jack Kingsman f993110ec4 Initial mqtt implementation 2026-03-01 11:11:36 -08:00
Jack Kingsman c891a23a41 Drop py3.12 req 2026-02-28 23:16:57 -08:00
Jack Kingsman 1f1c0faccc Don't double-fetch unreads 2026-02-28 21:28:35 -08:00
Jack Kingsman 727ac913de Add more efficient message pagination index to eliminate temporary b-tree indexing 2026-02-28 21:00:16 -08:00
Jack Kingsman a55166989e Improve performance on unread endpoint 2026-02-28 20:10:38 -08:00
Jack Kingsman a2b211a8bc Move prefetch to a better spot 2026-02-28 19:45:40 -08:00
Jack Kingsman 5d90727718 Remove advert info and add arrows between advert path nodes 2026-02-28 17:23:06 -08:00
Jack Kingsman 0ad17c8d1f Optimize build/lint perf 2026-02-28 16:21:52 -08:00
Jack Kingsman 365728be02 Make pathing clearable on click 2026-02-28 15:33:50 -08:00
Jack Kingsman 7cad4a98dd Show repeater path type/length in title bar to match contacts 2026-02-28 13:51:24 -08:00
Jack Kingsman bac4db6b0a Updating changelog + build for 2.2.0 2026-02-28 13:33:49 -08:00
Jack Kingsman 60c0262490 Expand tests with E2E coverage 2026-02-28 13:24:13 -08:00
Jack Kingsman ce99d63701 Reorganize for great victory and move to blob for payload hash 2026-02-27 21:03:34 -08:00
Jack Kingsman fc27361e37 Fix prefetch type glitch 2026-02-27 18:48:39 -08:00
Jack Kingsman dcd473de6c Clear raw packet ref on reconnect 2026-02-27 17:45:17 -08:00
Jack Kingsman 57e6ba534a Improve prefetch safety 2026-02-27 17:14:29 -08:00
Jack Kingsman 17f6a2b8c5 Compact code bloat for message fire and channel sync loops 2026-02-27 17:03:18 -08:00
Jack Kingsman 884972f9e0 Add some tests and improve docs 2026-02-27 16:54:18 -08:00
Jack Kingsman 60455cdd7b Autoreconcile and don't bother with toast 2026-02-27 16:38:08 -08:00
Jack Kingsman 194852ed16 Move to blob storage for payload hashes 2026-02-27 15:46:16 -08:00
Jack Kingsman 6a3510ce2e Misc. doc, test, and qol improvements 2026-02-27 15:17:29 -08:00
Jack Kingsman c40603a36f Cancellation guard on contact info pane 2026-02-27 15:05:22 -08:00
Jack Kingsman 2e8a4fde0a Historical DM decrypts are always incoming 2026-02-27 15:02:56 -08:00
Jack Kingsman 171b4405e5 Move contact sync to pass-the-mc mode 2026-02-27 14:59:52 -08:00
Jack Kingsman c5fd0292b8 Repeater Overhaul 2026-02-27 14:38:17 -08:00
Jack Kingsman 66cbf98b74 Post-merge cleanup, AGENTS.md work, unused endpoints, etc. 2026-02-27 14:36:43 -08:00
Jack Kingsman b3606169fe Add patchup commit 2026-02-27 14:28:17 -08:00
Jack Kingsman d4a2b9fac8 Linting 2026-02-27 14:20:53 -08:00
Jack Kingsman 26fbfcd015 Repeater UI overhaul 2026-02-27 14:20:52 -08:00
Jack Kingsman f4a383082e Merge pull request #28 from jkingsman/contact_info
Contact info pane
2026-02-27 13:47:57 -08:00
Jack Kingsman b91b2d5d7b Contact info pane 2026-02-27 13:45:42 -08:00
Jack Kingsman 24166e92e8 Add continue-on-failure attempts for when contact loading fails. Might help remedy #27, but there's still an issue (maybe radio lag?) 2026-02-26 00:43:32 -08:00
Jack Kingsman f003bda7b2 Don't queue packets while the page is hidden 2026-02-25 17:32:35 -08:00
Jack Kingsman a406e00229 Add local label 2026-02-25 16:18:33 -08:00
Jack Kingsman 56f8b796e6 Add scroll to repeater infobox on visualizer 2026-02-25 16:03:47 -08:00
Jack Kingsman 6ec2350b9a Add scroll to repeater infobox on visualizer 2026-02-25 15:54:12 -08:00
Jack Kingsman 566181faed More e2e tests 2026-02-24 22:33:28 -08:00
Jack Kingsman 27942975e2 Don't short circuit on zero key because claude is useless tonight 2026-02-24 21:50:52 -08:00
Jack Kingsman 1c2fb148bc Misc cruft -- filtering, pagination tests, etc. 2026-02-24 21:15:49 -08:00
Jack Kingsman 684724913f Clear channel name on new channel tab swap 2026-02-24 20:56:27 -08:00
Jack Kingsman 0826030f1c Add errata about 200 return for deletion of nonexistent channel 2026-02-24 20:49:22 -08:00
Jack Kingsman fb11690585 Add errata note about continuous retry 2026-02-24 20:47:59 -08:00
Jack Kingsman 5dcb52914b Catch failed vacuum 2026-02-24 20:47:10 -08:00
Jack Kingsman b4a0b1c515 Add refresh prompt after WS loss 2026-02-24 20:45:47 -08:00
Jack Kingsman 81c166bb8d Do some utterly disgusting MC library munging to deal with contacts coming out of sync 2026-02-24 20:41:21 -08:00
Jack Kingsman 71359e437f Clarify errata and known limitations 2026-02-24 20:37:53 -08:00
Jack Kingsman 932ea6b65d Pause autofetch during poll loop 2026-02-24 20:30:39 -08:00
Jack Kingsman 2757f25eb9 Use radio lock after setup 2026-02-24 20:26:18 -08:00
Jack Kingsman 561c8cf9c0 More code cleanup and optimization 2026-02-24 19:59:46 -08:00
Jack Kingsman 1b76211d53 More code rip out 2026-02-24 19:11:51 -08:00
Jack Kingsman b1a0456a05 Carve out some dead code 2026-02-24 18:40:35 -08:00
Jack Kingsman f7f696bf10 Remove rerender thrashing on setConnected 2026-02-24 18:13:31 -08:00
Jack Kingsman 5c0f3df806 Track advert path and use in mesh visualizer
2026-02-24 15:00:50 -08:00
Jack Kingsman c30ed0b4bc Track advert path and use in mesh visualizer 2026-02-24 14:55:28 -08:00
Jack Kingsman 440ab14d7f Rephrase command channel failure warning 2026-02-24 09:29:51 -08:00
Jack Kingsman c25b21469e Add frontend fallback resolver 2026-02-24 00:18:11 -08:00
Jack Kingsman 17e526697f Add radio event-response-failure message into the logs 2026-02-24 00:10:48 -08:00
Jack Kingsman 27cd3bd710 Updating changelog + build for 2.1.0 2026-02-23 23:51:49 -08:00
Jack Kingsman c0f740d5f9 API version reads from pyproject.toml 2026-02-23 23:38:01 -08:00
Jack Kingsman cc6e788021 Fix typos 2026-02-23 23:34:49 -08:00
Jack Kingsman 033af4027d Update AGENTS.md and add tests for broadcast payload shape 2026-02-23 23:20:35 -08:00
Jack Kingsman cc12128041 Add clearer error handling 2026-02-23 23:16:11 -08:00
Jack Kingsman 4f3d8a7838 Fix stuck post-connect failure state 2026-02-23 23:12:53 -08:00
Jack Kingsman 559935e3d5 Improve some coverage in integration form 2026-02-23 22:38:29 -08:00
Jack Kingsman ecb748b9e3 Drop out crappy tests, and improve quality overall 2026-02-23 22:28:09 -08:00
Jack Kingsman 31bb1e7d22 Move glyph further down for centering 2026-02-23 21:58:47 -08:00
Jack Kingsman 72b66214fa Add tests for MC object handling 2026-02-23 21:52:29 -08:00
Jack Kingsman 2125653978 Correct yet MORE instances of not using a well sourced MC object 2026-02-23 21:46:57 -08:00
Jack Kingsman 31302b4972 Strip out f-string usages in queries. Don't set bad examples! 2026-02-23 21:07:05 -08:00
Jack Kingsman c6a8c3835c Add note for other bug-finder LLMs about local message ID sub-millisecond collisions 2026-02-23 21:04:29 -08:00
Jack Kingsman 4b84f609b7 Fix content type and offset detection 2026-02-23 21:01:48 -08:00
Jack Kingsman a22224980e Reduce WS churn for incoming duplicates that don't affect ack/path list 2026-02-23 20:55:32 -08:00
Jack Kingsman ced0791c05 Add notes about known edge cases to prevent agent repop 2026-02-23 20:45:24 -08:00
Jack Kingsman 47867c50b8 Fix TOCTOU around radio reconnect 2026-02-23 20:42:11 -08:00
Jack Kingsman 1a4f57a03e Fix airtime polling cross-message display 2026-02-23 20:34:14 -08:00
Jack Kingsman 5d7a313c53 Add missing tests and address AGENTS.md gaps 2026-02-23 20:26:57 -08:00
Jack Kingsman b9de3b7dd7 Reduce default poll time and add DM ack clearing to standard poll 2026-02-23 20:00:42 -08:00
Jack Kingsman 7306627ac7 Move to SSoT for message dedup to prevent phantom unreads 2026-02-23 19:52:42 -08:00
Jack Kingsman 1bd31d68d9 Update server-side keystore after key refresh 2026-02-23 19:33:17 -08:00
Jack Kingsman 152eab99db More stable MC object reference and proper radio disconnection detection 2026-02-23 19:11:58 -08:00
Jack Kingsman cba9e20698 Drain before autofetch, fix same-second collisions, and always mc.disconnect() on false/probe failure 2026-02-23 17:33:35 -08:00
Jack Kingsman 619973bdf0 Add prebuilt image to docker-compose 2026-02-23 16:42:41 -08:00
Jack Kingsman ef4c79bc80 Move to hour-resolution adverts 2026-02-23 16:34:34 -08:00
Jack Kingsman 88d5a76081 Better behavior and message tracking around repeater contact on a busy mesh 2026-02-23 15:59:52 -08:00
jkingsman 9193d113fe Tighten up docker compose and docs 2026-02-22 14:02:11 -08:00
Jack Kingsman fd0f901546 Merge pull request #20 from suymur/feature/add-docker-compose
Add Docker Compose support for simplified deployment
2026-02-22 13:43:24 -08:00
Jack Kingsman 40d27dd8d6 Merge branch 'main' into feature/add-docker-compose 2026-02-22 13:43:15 -08:00
Jack Kingsman 54706700ab Remove unused from readme 2026-02-22 13:42:09 -08:00
Jack Kingsman 00aa212049 Add notes about ownership glitches + using prebuilt 2026-02-22 12:45:39 -08:00
Jack Kingsman 7542cc1142 Update README for docker compose 2026-02-22 12:07:37 -08:00
Jack Kingsman d525188cce Change back npm ci and use standard paths + ports 2026-02-22 11:59:58 -08:00
Jack Kingsman d635914d4b Remove unnecessary and clashing rounded border on settings panes 2026-02-22 11:53:05 -08:00
Jack Kingsman e806430a73 Merge pull request #21 from yellowcooln/main
Fix settings page scroll lock at browser zoom levels
2026-02-22 11:47:43 -08:00
Jack Kingsman 2e23733f41 Fix README docker image
Update README.md
2026-02-22 08:49:12 -08:00
Schappi 7e52982399 Update README.md
docker repository has changed...
2026-02-22 15:23:04 +01:00
Jack Kingsman 40dde4647a Correct button alignment 2026-02-21 17:23:46 -08:00
Jack Kingsman 7463f4e032 Move resend button into modal 2026-02-21 17:01:13 -08:00
Yellowcooln a7b5dcc9d8 Adjust class names for SettingsModal layout 2026-02-21 18:01:19 -05:00
Jack Kingsman 1e53fe9515 Better warning phrasing 2026-02-21 09:30:03 -08:00
Jack Kingsman 1477900f6f Linting... 2026-02-21 00:14:49 -08:00
Jack Kingsman 11f07f3501 Add endpoint for deleting raw packets of decrypted messages 2026-02-21 00:11:57 -08:00
Jack Kingsman 6d0505ade6 WAL + incremental vacuum for space happiness 2026-02-21 00:04:27 -08:00
Jack Kingsman 9e3b1d03a9 Drop unnecessary uniqs and indices 2026-02-21 00:00:13 -08:00
Jack Kingsman 9352b272d5 Bug cleanup: legacy hash restoration + duplicated convo router checks 2026-02-20 22:58:34 -08:00
Jack Kingsman c90a30787a Experimental dynamic manifest 2026-02-20 22:49:39 -08:00
Jack Kingsman 2321411ef0 Fix typo and change startup load hash behavior 2026-02-20 17:33:02 -08:00
Jack Kingsman a8a8f6e08b Fix typo and disable autocomplete 2026-02-20 17:26:30 -08:00
Jack Kingsman f9eb46f2ab Remember last used channel when selected 2026-02-20 17:16:05 -08:00
Jack Kingsman 41bf4eb73a Hide character counter for short messages on mobile 2026-02-20 17:15:57 -08:00
suymur e0ca50afc8 Add Docker Compose support for simplified deployment
- Add docker-compose.yaml with service configuration
- Support for multiple transport options (TCP, Serial, BLE)
- Configure standard port mapping (8000:8000)
- Use named volume for portable data persistence
- Update Dockerfile to use npm install for better compatibility

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-20 11:23:16 +01:00
Jack Kingsman d08a113fc8 Updating changelog + build for 2.0.1 2026-02-16 23:06:51 -08:00
Jack Kingsman f490cc756f Fix bug with statistics display on mobile 2026-02-16 23:01:04 -08:00
Jack Kingsman 3232075767 Update README a smidge with new features 2026-02-16 22:36:30 -08:00
Jack Kingsman a9d650ecd4 Update screenshot for 2.0 2026-02-16 22:27:53 -08:00
Jack Kingsman 7c23dcf6d9 Updating changelog + build for 2.0.0 2026-02-16 22:13:19 -08:00
Jack Kingsman 6e4872e25b Full screen mesh visualizer view 2026-02-16 22:08:25 -08:00
Jack Kingsman ef2b22a865 Don't dedupe adverts on payload (since what we care about is the path) 2026-02-16 22:03:04 -08:00
Jack Kingsman a4d8707479 Fix missing padding on collapsed visualizer 2026-02-16 21:02:47 -08:00
Jack Kingsman 0e25bd2281 Fix dedupe for frontend raw packet delivery 2026-02-16 20:46:43 -08:00
Jack Kingsman 56fde32970 Linting goodness 2026-02-16 19:20:11 -08:00
Jack Kingsman 1a59eb89fa Clarify some doc drift 2026-02-16 19:15:27 -08:00
Jack Kingsman 65b74b624b Add missing prefix-message claim in other contact sync spots we missed 2026-02-16 19:14:05 -08:00
Jack Kingsman 95e8bcca08 Clarify packet dedupe policy 2026-02-16 19:11:44 -08:00
Jack Kingsman e8ddba0131 Add radio lock acquire around missing spots, and validate 2026-02-16 19:10:20 -08:00
Jack Kingsman 8ca48cd6bc Use actual pubkey matching for path update, not default, and don't action the serial path update events 2026-02-16 19:06:09 -08:00
Jack Kingsman 72f12d80e5 Fix repeater command timestamp field usage 2026-02-16 18:59:39 -08:00
Jack Kingsman d2f5bd84a8 Make broadcast timestamp match fallback logic used for storage 2026-02-16 18:34:20 -08:00
Jack Kingsman cbe091ad90 Add clarifying comment for missing DM pathing info 2026-02-16 18:32:45 -08:00
Jack Kingsman 1f853aa54e Fix out of order path WS broadcasts overwriting each other 2026-02-16 18:30:27 -08:00
Jack Kingsman 8457799a60 Use contact object on broadcast from DB rather than hand-rolling 2026-02-16 18:26:05 -08:00
Jack Kingsman 591d333970 More missed lowercase key spots 2026-02-16 18:24:25 -08:00
Jack Kingsman 54a03a9467 Add guard for conversation switch mid-message-fetch 2026-02-16 18:23:10 -08:00
Jack Kingsman de7ab37998 Clear and reset only the visualizer, not the packet feed 2026-02-16 18:00:03 -08:00
Jack Kingsman 3042beaf27 Invert UI show/hide logic to be positive not negative 2026-02-16 17:53:41 -08:00
Jack Kingsman d4a7b37fa9 Whoops, linting 2026-02-16 17:49:27 -08:00
Jack Kingsman 6a3a99fe9f Clarify option labelling on visualizer 2026-02-16 17:45:30 -08:00
Jack Kingsman 7d340f19e0 Move orbit option to checkbox 2026-02-16 17:42:19 -08:00
Jack Kingsman 95d806717b Default to heuristic repeater grouping 2026-02-16 17:40:19 -08:00
Jack Kingsman 7f426ece4e Add last-five-min visualizer filtering 2026-02-16 17:39:35 -08:00
Jack Kingsman 7bb0e5e719 Clarify meaning of ack window 2026-02-16 17:36:42 -08:00
Jack Kingsman a495f284ea Remove visualizer shuffle; not needed in 3D (nearly always worse layout) 2026-02-16 17:36:05 -08:00
Jack Kingsman 7df21d03f2 Change visualizer layout 2026-02-16 17:33:20 -08:00
Jack Kingsman be007322d2 Frontend overhaul 2026-02-16 17:28:21 -08:00
Jack Kingsman 58900f7649 Add logo glyph 2026-02-16 16:49:48 -08:00
Jack Kingsman 877649ddc7 Frontend color overhaul 2026-02-16 16:45:05 -08:00
Jack Kingsman 24685038f8 Updating changelog + build for 1.10.0 2026-02-16 16:26:22 -08:00
Jack Kingsman 1e73cbf266 Add orbit to 3D viewer 2026-02-16 16:22:39 -08:00
Jack Kingsman 241f94ceaf Expand node radios in 3D view 2026-02-16 16:17:39 -08:00
Jack Kingsman e157826364 Actually clean up node labels on clear 2026-02-16 16:16:44 -08:00
Jack Kingsman 89d311e4ae Move visualizer to 3D 2026-02-16 15:36:44 -08:00
Jack Kingsman 0d03945b81 Fix turbo resend 2026-02-16 00:37:27 -08:00
Jack Kingsman 945053c20a Add turbo resend for testing 2026-02-15 23:49:50 -08:00
Jack Kingsman 1f3042f360 Add statistics endpoint 2026-02-15 12:54:42 -08:00
Jack Kingsman 3756579f9d Dedupe contacts/sidebar by key not name 2026-02-14 21:46:24 -08:00
Jack Kingsman 6e3cf28577 Fix ack/message race condition where out of sequence acks and messages would cause dropped acks 2026-02-14 21:19:25 -08:00
Jack Kingsman 9afaee24a0 Persist collapse state 2026-02-14 19:10:44 -08:00
Jack Kingsman c91449260d Fix sidebar and test typing 2026-02-14 18:04:43 -08:00
Jack Kingsman 36098f62b8 Merge pull request #18 from rgregg/codex/sidebar-contacts-repeaters-sections
feat(sidebar): add collapsible sections and split repeaters list
2026-02-14 17:58:59 -08:00
Jack Kingsman 8bb408180e Add unread badge at section level 2026-02-14 17:58:35 -08:00
Jack Kingsman b34bc1491a Unify row rendering 2026-02-14 17:50:58 -08:00
Ryan Gregg 4919f551f8 feat(sidebar): add collapsible sections and split repeaters list 2026-02-14 17:50:57 -08:00
Jack Kingsman 5a82d469b4 Add resend button for 30s 2026-02-14 17:37:51 -08:00
Jack Kingsman 7b2d5b817e Fix multi-send message pathing not appearing 2026-02-14 16:49:27 -08:00
Jack Kingsman a598cbbd1a Clearer username styling 2026-02-13 01:29:36 -08:00
Jack Kingsman 76db547f50 Better contrast; happier eyeballs! 2026-02-13 01:26:24 -08:00
Jack Kingsman 1c4d6c07a8 Prefetch all the things! 2026-02-13 00:48:37 -08:00
Jack Kingsman 908a479fa6 Improve perf with reduced fetching, more chunking, and window-level prefetch 2026-02-13 00:43:07 -08:00
Jack Kingsman b14ad71eca Action some lighthouse findings 2026-02-13 00:12:54 -08:00
Jack Kingsman 57d007dec2 Calm down sidebar refreshes with better contact don't-set behavior, unread count checks, and memoized sorting etc. 2026-02-13 00:00:53 -08:00
Jack Kingsman 430b5aaba7 Support incoming/outgoing detection in bots 2026-02-12 23:52:49 -08:00
Jack Kingsman 0fcf6a5653 s/stopped/idle/ on cracker interface 2026-02-12 19:53:25 -08:00
Jack Kingsman 3394183892 Fix outgoing first message top padding missing 2026-02-12 19:26:54 -08:00
607 changed files with 114376 additions and 16608 deletions
+1
@@ -29,6 +29,7 @@ frontend/src/test/
# Docs
*.md
!README.md
!LICENSES.md
# Other
references/
+1
@@ -0,0 +1 @@
frontend/prebuilt/** -diff
+74
@@ -0,0 +1,74 @@
name: All Quality
on:
push:
pull_request:
jobs:
backend-checks:
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- name: Check out repository
uses: actions/checkout@v6
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.12"
- name: Set up uv
uses: astral-sh/setup-uv@v7
with:
enable-cache: true
- name: Install backend dependencies
run: uv sync --dev
- name: Backend lint
run: uv run ruff check app/ tests/
- name: Backend format check
run: uv run ruff format --check app/ tests/
- name: Backend typecheck
run: uv run pyright app/
- name: Backend tests
run: PYTHONPATH=. uv run pytest tests/ -v
frontend-checks:
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- name: Check out repository
uses: actions/checkout@v6
- name: Set up Node.js
uses: actions/setup-node@v6
with:
node-version: "22"
cache: npm
cache-dependency-path: frontend/package-lock.json
- name: Install frontend dependencies
run: npm ci
working-directory: frontend
- name: Frontend lint
run: npm run lint
working-directory: frontend
- name: Frontend format check
run: npm run format:check
working-directory: frontend
- name: Frontend tests
run: npm run test:run
working-directory: frontend
- name: Frontend build
run: npm run build
working-directory: frontend
+73
@@ -0,0 +1,73 @@
name: Publish AUR package
# Pushes the contents of pkg/aur/ to the remoteterm-meshcore AUR repository
# whenever a GitHub release is published. Can also be triggered manually for
# testing or out-of-band republishes.
#
# Required secrets:
# AUR_SSH_PRIVATE_KEY Private SSH key registered with the AUR maintainer
# account that owns the remoteterm-meshcore package.
# AUR_COMMIT_EMAIL Email used for the AUR git commit identity.
on:
release:
types: [published]
workflow_dispatch:
inputs:
version:
description: 'Version to publish (no v prefix, e.g. 3.9.1)'
required: true
concurrency:
# Serialize publishes so a fast back-to-back release sequence cannot race
# two pushes against the AUR repo. The later one wins by virtue of being
# the final state.
group: publish-aur
cancel-in-progress: false
jobs:
publish-aur:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- name: Resolve version from event
id: version
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
VERSION="${{ inputs.version }}"
else
VERSION="${{ github.event.release.tag_name }}"
fi
VERSION="${VERSION#v}"
echo "version=$VERSION" >> "$GITHUB_OUTPUT"
echo "Publishing AUR package for version $VERSION"
- name: Stamp pkgver into PKGBUILD
run: |
sed -i "s/^pkgver=.*/pkgver=${{ steps.version.outputs.version }}/" pkg/aur/PKGBUILD
sed -i "s/^pkgrel=.*/pkgrel=1/" pkg/aur/PKGBUILD
- name: Publish to AUR
uses: KSXGitHub/github-actions-deploy-aur@v4.1.2
with:
pkgname: remoteterm-meshcore
pkgbuild: pkg/aur/PKGBUILD
assets: |
pkg/aur/remoteterm-meshcore.install
pkg/aur/remoteterm-meshcore.service
pkg/aur/remoteterm-meshcore.sysusers
pkg/aur/remoteterm-meshcore.tmpfiles
pkg/aur/remoteterm.env
commit_username: jackkingsman
commit_email: ${{ secrets.AUR_COMMIT_EMAIL }}
ssh_private_key: ${{ secrets.AUR_SSH_PRIVATE_KEY }}
commit_message: "Update to ${{ steps.version.outputs.version }}"
# Recompute sha256sums from the live release tarball + the bundled
# service/env files. The committed PKGBUILD has SKIP placeholders.
updpkgsums: true
# Validate the PKGBUILD parses and sources download, but skip the
# actual build (which would run uv sync + npm install for several
# minutes of CI time on every release).
test: true
test_flags: --clean --cleanbuild --nodeps --nobuild
+12 -1
@@ -2,6 +2,8 @@
__pycache__/
*.py[oc]
build/
!scripts/build/
!scripts/build/**
wheels/
*.egg-info
@@ -12,10 +14,19 @@ frontend/test-results/
# Frontend build output (built from source by end users)
frontend/dist/
frontend/package-lock.json
frontend/prebuilt/
frontend/.eslintcache
# Release artifacts
remoteterm-prebuilt-frontend-v*.zip
# reference libraries
references/
# ancillary LLM files
.claude/
# local Docker compose files
docker-compose.yml
docker-compose.yaml
.docker-certs/
-1
@@ -1 +0,0 @@
3.12
+218 -83
@@ -4,55 +4,54 @@
**NEVER make git commits.** A human must make all commits. You may stage files and prepare commit messages, but do not run `git commit`.
If instructed to "run all tests" or "get ready for a commit" or other summative, work ending directives, make sure you run the following and that they all pass green:
If instructed to "run all tests" or "get ready for a commit" or other summative, work ending directives, run:
```bash
uv run ruff check app/ tests/ --fix # check for python violations
uv run ruff format app/ tests/ # format python
uv run pyright app/ # type check python
PYTHONPATH=. uv run pytest tests/ -v # test python
cd frontend/ # move to frontend directory
npm run lint:fix # fix lint violations
npm run format # format the code
npm run build # run a frontend build
./scripts/quality/all_quality.sh
```
This is the repo's end-to-end quality gate. It runs backend/frontend autofixers first, then type checking, tests, and the standard frontend build. All checks must pass green, and the script may leave formatting/lint edits behind.
## Overview
A web interface for MeshCore mesh radio networks. The backend connects to a MeshCore-compatible radio over Serial, TCP, or BLE and exposes REST/WebSocket APIs. The React frontend provides real-time messaging and radio configuration.
**For detailed component documentation, see:**
**For detailed component documentation, see these primary AGENTS.md files:**
- `app/AGENTS.md` - Backend (FastAPI, database, radio connection, packet decryption)
- `frontend/AGENTS.md` - Frontend (React, state management, WebSocket, components)
- `frontend/src/components/AGENTS.md` - Frontend visualizer feature (a particularly complex and long force-directed graph visualizer component; can skip this file unless you're working on that feature)
Ancillary AGENTS.md files which should generally not be reviewed unless specific work is being performed on those features include:
- `app/fanout/AGENTS_fanout.md` - Fanout bus architecture (MQTT, bots, webhooks, Apprise, SQS)
- `frontend/src/components/visualizer/AGENTS_packet_visualizer.md` - Packet visualizer (force-directed graph, advert-path identity, layout engine)
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Frontend (React)                                                │
│ ┌──────────┐ ┌─────────┐ ┌─────────────┐ ┌──────────────┐       │
│ │ StatusBar│ │ Sidebar │ │ MessageList │ │ MessageInput │       │
│ └──────────┘ └─────────┘ └─────────────┘ └──────────────┘       │
│ ┌─────────────────────────────────────────────────────────┐     │
│ │ CrackerPanel (global collapsible, WebGPU cracking)      │     │
│ └─────────────────────────────────────────────────────────┘     │
│                                                                 │
│ useWebSocket ←──── Real-time updates                            │
│                                                                 │
│ api.ts ←──── REST API calls                                     │
└───────────────────────────┬─────────────────────────────────────┘
                            │ HTTP + WebSocket (/api/*)
┌───────────────────────────┴──────────────────────────────────────┐
│ Backend (FastAPI)                                                │
│ ┌─────────┐  ┌──────────┐  ┌──────────────┐  ┌───────────┐       │
│ │ Routers │→ │ Services │→ │ Repositories │→ │ SQLite DB │       │
│ └─────────┘  └──────────┘  └──────────────┘  └───────────┘       │
│      ↓                                       ┌───────────┐       │
│ ┌──────────────────────────┐ ───────────────→│ WebSocket │       │
│ │ Radio runtime seam +     │                 │ Manager   │       │
│ │ RadioManager lifecycle   │                 └───────────┘       │
│ │ / event adapters         │                                     │
│ └──────────────────────────┘                                     │
└───────────────────────────┬──────────────────────────────────────┘
                            │ Serial / TCP / BLE
                     ┌──────┴──────┐
@@ -78,52 +77,111 @@ A web interface for MeshCore mesh radio networks. The backend connects to a Mesh
- Raw packet feed — a debug/observation tool ("radio aquarium"); interesting to watch or copy packets from, but not critical infrastructure
- Map view — visual display of node locations from advertisements
- Network visualizer — force-directed graph of mesh topology
- Bot system — automated message responses
- Fanout integrations (MQTT, bots, webhooks, Apprise, SQS) — see `app/fanout/AGENTS_fanout.md`
- Read state tracking / mark-all-read — convenience feature for unread badges; no need for transactional atomicity or race-condition hardening
## Error Handling Philosophy
**Background tasks** (WebSocket broadcasts, periodic sync, contact auto-loading, etc.) use fire-and-forget `asyncio.create_task`. Exceptions in these tasks are logged to the backend logs, which is sufficient for debugging. There is no need to track task references or add done-callbacks purely for error visibility. If there's a convenient way to bubble an error to the frontend (e.g., via `broadcast_error` for user-actionable problems), do so, but this is minor and best-effort.
Radio startup/setup is one place where that frontend bubbling is intentional: if post-connect setup hangs past its timeout, the backend both logs the failure and pushes a toast instructing the operator to reboot the radio and restart the server.
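A minimal sketch of that fire-and-forget pattern; the coroutine and names here are illustrative, not the repo's actual helpers:
```python
import asyncio
import logging

logger = logging.getLogger("background")

async def sync_contacts() -> None:
    # Stand-in for a periodic task (WS broadcast, contact auto-load, etc.).
    # Failures are logged inside the coroutine itself, so no caller needs to
    # keep the task reference or attach a done-callback for error visibility.
    try:
        raise RuntimeError("radio went away")  # simulated failure
    except Exception:
        logger.exception("background sync failed")

async def main() -> None:
    asyncio.create_task(sync_contacts())  # fire-and-forget
    await asyncio.sleep(0.1)              # give the demo task time to run

asyncio.run(main())
```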
## Key Design Principles
1. **Store-and-serve**: Backend stores all packets even when no client is connected
2. **Parallel storage**: Messages stored both decrypted (when possible) and as raw packets
3. **Extended capacity**: Server stores contacts/channels beyond radio limits (~350 contacts, ~40 channels)
4. **Real-time updates**: WebSocket pushes events; REST for actions
4. **Real-time updates**: WebSocket pushes events; REST for actions; optional MQTT forwarding
5. **Offline-capable**: Radio operates independently; server syncs when connected
6. **Auto-reconnect**: Background monitor detects disconnection and attempts reconnection
## Code Ethos
- Prefer fewer, stronger modules over many tiny wrapper files.
- Split code only when the new module owns a real invariant, workflow, or contract.
- Avoid "enterprise" indirection layers whose main job is forwarding, renaming, or prop bundling.
- For this repo, "locally dense but semantically obvious" is better than context scattered across many files.
- Use typed contracts at important boundaries such as API payloads, WebSocket events, and repository writes.
- Refactors should be behavior-preserving slices with tests around the moved seam, not aesthetic reshuffles.
## Intentional Security Design Decisions
The following are **deliberate design choices**, not bugs. They are documented in the README with appropriate warnings. Do not "fix" these or flag them as vulnerabilities.
1. **No CORS restrictions**: The backend allows all origins (`allow_origins=["*"]`). This lets users access their radio from any device/origin on their network without configuration hassle.
2. **No authentication or authorization**: There is no login, no API keys, no session management. The app is designed for trusted networks (home LAN, VPN). The README warns users not to expose it to untrusted networks.
3. **Arbitrary bot code execution**: The bot system (`app/bot.py`) executes user-provided Python via `exec()` with full `__builtins__`. This is intentional — bots are a power-user feature for automation. The README explicitly warns that anyone on the network can execute arbitrary code through this.
2. **Minimal optional access control only**: The app has no user accounts, sessions, authorization model, or per-feature permissions. Operators may optionally set `MESHCORE_BASIC_AUTH_USERNAME` and `MESHCORE_BASIC_AUTH_PASSWORD` for app-wide HTTP Basic auth, but this is only a coarse gate and still requires HTTPS plus a trusted network posture.
3. **Arbitrary bot code execution**: The bot system (`app/fanout/bot_exec.py`) executes user-provided Python via `exec()` with full `__builtins__`. This is intentional — bots are a power-user feature for automation. The README explicitly warns that anyone on the network can execute arbitrary code through this. Operators can set `MESHCORE_DISABLE_BOTS=true` to completely disable the bot system at startup — this skips all bot execution, returns 403 on bot settings updates, and shows a disabled message in the frontend.
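A rough sketch of how such a gate can work; the endpoint shape and the `"type"` payload field are assumptions for illustration, not the repo's actual handler:
```python
import os
from fastapi import FastAPI, HTTPException

app = FastAPI()
BOTS_DISABLED = os.environ.get("MESHCORE_DISABLE_BOTS", "false").lower() == "true"

@app.patch("/api/fanout/{config_id}")
async def update_fanout(config_id: int, payload: dict) -> dict:
    # Reject bot config changes outright when the operator disabled bots
    # at startup, matching the documented 403-on-update behavior.
    if BOTS_DISABLED and payload.get("type") == "bot":
        raise HTTPException(status_code=403, detail="Bot system is disabled")
    return {"id": config_id, **payload}
```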
## Intentional Packet Handling Decision
Raw packet handling uses two identities by design:
- **`id` (DB packet row ID)**: storage identity from payload-hash deduplication (path bytes are excluded), so repeated payloads share one stored raw-packet row.
- **`observation_id` (WebSocket only)**: realtime observation identity, unique per RF arrival, so path-diverse repeats are still visible in-session.
Frontend packet-feed consumers should treat `observation_id` as the dedup/render key, while `id` remains the storage reference.
Channel metadata updates may also fan out as `channel` WebSocket events (full `Channel` payload) so clients can reflect local-only channel state such as regional flood-scope overrides without a full refetch.
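A toy sketch of the two raw-packet identities above, assuming a SHA-256 payload hash and a simplified `raw_packets` schema (the real pipeline differs in detail):
```python
import hashlib
import itertools
import sqlite3

_observations = itertools.count(1)  # fresh per RF arrival, never deduped

def store_raw_packet(db: sqlite3.Connection, payload: bytes, path: bytes) -> tuple[int, int]:
    # Storage identity: hash of the payload only; path bytes are excluded,
    # so the same payload heard via different paths shares one stored row.
    payload_hash = hashlib.sha256(payload).hexdigest()
    row = db.execute(
        "SELECT id FROM raw_packets WHERE payload_hash = ?", (payload_hash,)
    ).fetchone()
    if row is None:
        cur = db.execute(
            "INSERT INTO raw_packets (payload_hash, payload) VALUES (?, ?)",
            (payload_hash, payload),
        )
        row = (cur.lastrowid,)
    # Observation identity: unique per arrival, used by the packet feed.
    return row[0], next(_observations)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_packets (id INTEGER PRIMARY KEY, payload_hash TEXT UNIQUE, payload BLOB)")
print(store_raw_packet(db, b"hello", b"\x01"))  # (1, 1)
print(store_raw_packet(db, b"hello", b"\x02"))  # (1, 2): same row, new observation
```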
## Contact Advert Path Memory
To improve repeater disambiguation in the network visualizer, the backend stores recent unique advertisement paths per contact in a dedicated table (`contact_advert_paths`).
- This is independent of raw-packet payload deduplication.
- Paths are keyed per contact + path + hop count, with `heard_count`, `first_seen`, and `last_seen`.
- Only the N most recent unique paths are retained per contact (currently 10).
- See `frontend/src/components/visualizer/AGENTS_packet_visualizer.md` § "Advert-Path Identity Hints" for how the visualizer consumes this data.
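A sketch of the retention behavior under an assumed `contact_key` column name; only the table name, the tracked fields, and the keep-10 rule come from the notes above:
```python
import sqlite3
import time

RETAINED_PATHS = 10  # only the N most recent unique paths kept per contact

def record_advert_path(db: sqlite3.Connection, contact: str, path: bytes, hops: int) -> None:
    now = int(time.time())
    # Keyed per contact + path + hop count; repeats bump heard_count/last_seen.
    db.execute(
        """INSERT INTO contact_advert_paths
               (contact_key, path, hop_count, heard_count, first_seen, last_seen)
           VALUES (?, ?, ?, 1, ?, ?)
           ON CONFLICT(contact_key, path, hop_count) DO UPDATE SET
               heard_count = heard_count + 1, last_seen = excluded.last_seen""",
        (contact, path, hops, now, now),
    )
    # Prune everything but the most recently heard unique paths.
    db.execute(
        """DELETE FROM contact_advert_paths
           WHERE contact_key = ? AND id NOT IN (
               SELECT id FROM contact_advert_paths
               WHERE contact_key = ? ORDER BY last_seen DESC LIMIT ?)""",
        (contact, contact, RETAINED_PATHS),
    )

db = sqlite3.connect(":memory:")
db.execute(
    """CREATE TABLE contact_advert_paths (
           id INTEGER PRIMARY KEY, contact_key TEXT, path BLOB, hop_count INTEGER,
           heard_count INTEGER, first_seen INTEGER, last_seen INTEGER,
           UNIQUE(contact_key, path, hop_count))"""
)
record_advert_path(db, "ab12cd", b"\x05\x09", 2)
record_advert_path(db, "ab12cd", b"\x05\x09", 2)
print(db.execute("SELECT heard_count FROM contact_advert_paths").fetchone())  # (2,)
```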
## Path Hash Modes
MeshCore firmware can encode path hops as 1-byte, 2-byte, or 3-byte identifiers.
- `path_hash_mode` values are `0` = 1-byte, `1` = 2-byte, `2` = 3-byte.
- `GET /api/radio/config` exposes both the current `path_hash_mode` and `path_hash_mode_supported`.
- `PATCH /api/radio/config` may update `path_hash_mode` only when the connected firmware supports it.
- Contact routing now uses canonical route fields: `direct_path`, `direct_path_len`, `direct_path_hash_mode`, plus optional `route_override_*`.
- The contact/API surface also exposes backend-computed `effective_route`, `effective_route_source`, `direct_route`, and `route_override` so send logic and UI do not reimplement precedence rules independently.
- Legacy `last_path`, `last_path_len`, and `out_path_hash_mode` are no longer part of the contact model or API contract.
- Route precedence for direct-message sends is: explicit override, then learned direct route, then flood.
- The learned direct route is sourced from radio contact sync (`out_path`) and PATH/path-discovery updates, matching how firmware updates `ContactInfo.out_path`.
- Advertisement paths are informational only. They are retained in `contact_advert_paths` for the contact pane and visualizer, but they are not used as DM send routes.
- `path_len` in API payloads is always hop count, not byte count. The actual path byte length is `hop_count * hash_size`.
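A small worked example of the hop math and route precedence described above; the function names are illustrative:
```python
HASH_SIZE = {0: 1, 1: 2, 2: 3}  # path_hash_mode -> bytes per hop identifier

def split_hops(path: bytes, path_hash_mode: int) -> list[str]:
    # `path_len` in API payloads is hop count; byte length is hops * hash size.
    size = HASH_SIZE[path_hash_mode]
    if len(path) % size:
        raise ValueError("path bytes do not align with this hash mode")
    return [path[i : i + size].hex() for i in range(0, len(path), size)]

def effective_route(route_override: bytes | None, direct_path: bytes | None):
    # Documented DM precedence: explicit override > learned direct > flood.
    if route_override is not None:
        return ("override", route_override)
    if direct_path is not None:
        return ("direct", direct_path)
    return ("flood", None)

print(split_hops(bytes.fromhex("a1b2c3d4"), 1))    # 2-byte mode -> ['a1b2', 'c3d4']
print(effective_route(None, bytes.fromhex("a1")))  # ('direct', b'\xa1')
```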
## Data Flow
### Incoming Messages
1. Radio receives message → MeshCore library emits event
2. `event_handlers.py` catches event → stores in database
3. `ws_manager` broadcasts to connected clients
1. Radio receives raw bytes → `packet_processor.py` parses, decrypts, deduplicates, and stores in database (primary path via `RX_LOG_DATA` event)
2. `event_handlers.py` handles higher-level events (`CONTACT_MSG_RECV`, `ACK`) as a fallback/supplement
3. `broadcast_event()` in `websocket.py` fans out to both WebSocket clients and MQTT
4. Frontend `useWebSocket` receives → updates React state
### Outgoing Messages
1. User types message → clicks send
2. `api.sendChannelMessage()` → POST to backend
3. Backend calls `radio_manager.meshcore.commands.send_chan_msg()`
3. Backend route delegates to service-layer send orchestration, which acquires the radio lock and calls MeshCore commands
4. Message stored in database with `outgoing=true`
5. For direct messages: ACK tracked; for channel: repeat detection
Direct-message send behavior intentionally mirrors the firmware/library `send_msg_with_retry(...)` flow:
- We push the contact's effective route to the radio via `add_contact(...)` before sending.
- If the initial `MSG_SENT` result includes an expected ACK code, background retries are armed.
- Non-final retry attempts use the effective route (`override > direct > flood`).
- Retry timing follows the radio's `suggested_timeout`.
- The final retry is sent as flood by resetting the path on the radio first, even if an override or direct route exists.
- Path math is always hop-count based; hop bytes are interpreted using the stored `path_hash_mode`.
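A rough asyncio sketch of that retry shape. The injected callables are hypothetical stand-ins for radio commands, and the real code only arms these retries when the initial `MSG_SENT` result includes an expected ACK code:
```python
import asyncio

MAX_RETRIES = 2  # one immediate send, then up to 2 background retries

async def retry_dm(contact, text, suggested_timeout, send_dm, reset_path, is_acked):
    for attempt in range(1, MAX_RETRIES + 1):
        await asyncio.sleep(suggested_timeout)   # radio-suggested pacing
        if await is_acked(contact):
            return                               # ACK is terminal; stop here
        if attempt == MAX_RETRIES:
            await reset_path(contact)            # final attempt goes out flood
        await send_dm(contact, text)

async def main() -> None:
    async def send_dm(contact, text): print("resend:", text)
    async def reset_path(contact): print("path reset -> flood")
    async def is_acked(contact): return False
    await retry_dm({"name": "node-a"}, "hi", 0.01, send_dm, reset_path, is_acked)

asyncio.run(main())
```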
### ACK and Repeat Detection
**Direct messages**: Expected ACK code is tracked. When ACK event arrives, message marked as acked.
**Channel messages**: Flood messages echo back through repeaters. Repeats are identified by the database UNIQUE constraint on `(type, conversation_key, text, sender_timestamp)` — when an INSERT hits a duplicate, `_handle_duplicate_message()` in `packet_processor.py` increments the ack count on the original and adds the new path. There is no timestamp-windowed matching; deduplication is exact-match only.
Outgoing DMs send once immediately, then may retry up to 2 more times in the background only when the initial `MSG_SENT` result includes an expected ACK code and the message remains unacked. Retry timing follows the radio's `suggested_timeout` from `PACKET_MSG_SENT`, and the final retry is sent as flood even when a routing override is configured. DM ACK state is terminal on first ACK: sibling retry ACK codes are cleared so one DM should not accumulate multiple delivery confirmations from different retry attempts.
ACKs are not a contact-route source. They drive message delivery state and may appear in analytics/detail surfaces, but they do not update `direct_path*` or otherwise influence route selection for future sends.
**Channel messages**: Flood messages echo back through repeaters. Repeats are identified by the database UNIQUE constraint on `(type, conversation_key, text, sender_timestamp)` — when an INSERT hits a duplicate, `_handle_duplicate_message()` in `packet_processor.py` adds the new path and, for outgoing messages only, increments the ack count. Incoming repeats add path data but do not change the ack count. There is no timestamp-windowed matching; deduplication is exact-match only.
This message-layer echo/path handling is independent of raw-packet storage deduplication.
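A compact sketch of that duplicate-handling path, using an assumed simplified schema around the documented UNIQUE constraint:
```python
import sqlite3

def store_message(db, type_, convo_key, text, sender_ts, outgoing, path):
    try:
        db.execute(
            "INSERT INTO messages (type, conversation_key, text, sender_timestamp,"
            " outgoing, ack_count, paths) VALUES (?, ?, ?, ?, ?, 0, ?)",
            (type_, convo_key, text, sender_ts, outgoing, path),
        )
    except sqlite3.IntegrityError:
        # Exact-match duplicate: a repeater echo. Always record the new path;
        # bump the ack count only for our own outgoing messages.
        db.execute(
            "UPDATE messages SET paths = paths || ',' || ?,"
            " ack_count = ack_count + (CASE WHEN outgoing THEN 1 ELSE 0 END)"
            " WHERE type = ? AND conversation_key = ? AND text = ?"
            " AND sender_timestamp = ?",
            (path, type_, convo_key, text, sender_ts),
        )

db = sqlite3.connect(":memory:")
db.execute(
    """CREATE TABLE messages (
           type TEXT, conversation_key TEXT, text TEXT, sender_timestamp INTEGER,
           outgoing INTEGER, ack_count INTEGER, paths TEXT,
           UNIQUE(type, conversation_key, text, sender_timestamp))"""
)
store_message(db, "channel", "chan1", "hi all", 1700000000, 1, "ab")
store_message(db, "channel", "chan1", "hi all", 1700000000, 1, "cd")  # echo
print(db.execute("SELECT ack_count, paths FROM messages").fetchone())  # (1, 'ab,cd')
```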
## Directory Structure
@@ -133,14 +191,17 @@ The following are **deliberate design choices**, not bugs. They are documented i
│ ├── AGENTS.md # Backend documentation
│ ├── main.py # App entry, lifespan
│ ├── routers/ # API endpoints
│ ├── repository.py # Database CRUD
│ ├── services/ # Shared backend orchestration/domain services, including radio_runtime access seam
│ ├── packet_processor.py # Raw packet pipeline, dedup, path handling
│ ├── repository/ # Database CRUD (contacts, channels, messages, raw_packets, settings, fanout)
│ ├── event_handlers.py # Radio events
│ ├── decoder.py # Packet decryption
│ ├── websocket.py # Real-time broadcasts
│ └── fanout/ # Fanout bus: MQTT, bots, webhooks, Apprise, SQS (see fanout/AGENTS_fanout.md)
├── frontend/ # React frontend
│ ├── AGENTS.md # Frontend documentation
│ ├── src/
│ │ ├── App.tsx # Main component
│ │ ├── App.tsx # Frontend composition entry (hooks → AppShell)
│ │ ├── api.ts # REST client
│ │ ├── useWebSocket.ts # WebSocket hook
│ │ └── components/
@@ -148,7 +209,21 @@ The following are **deliberate design choices**, not bugs. They are documented i
│ │ ├── MapView.tsx # Leaflet map showing node locations
│ │ └── ...
│ └── vite.config.ts
├── references/meshcore_py/ # MeshCore Python library
├── pkg/aur/ # AUR package files (PKGBUILD, systemd service, env, install hooks)
├── scripts/ # Quality / release helpers (listing below is representative, not exhaustive)
│ ├── build/
│ │ ├── collect_licenses.sh # Gather third-party license attributions
│ │ └── publish.sh # Version bump, changelog, docker build & push
│ ├── quality/
│ │ ├── all_quality.sh # Repo-standard autofix + validate gate
│ │ ├── e2e.sh # End-to-end test runner
│ │ ├── extended_quality.sh # Quality gate plus e2e and Docker matrix
│ │ └── test_aur_package.sh # Build + install AUR package in Arch Docker containers
│ └── setup/
│ ├── fetch_prebuilt_frontend.py # Download release frontend fallback
│ └── install_service.sh # Install/configure Linux systemd service
├── README_ADVANCED.md # Advanced setup, troubleshooting, and service guidance
├── CONTRIBUTING.md # Contributor workflow and testing guidance
├── tests/ # Backend tests (pytest)
├── data/ # SQLite database (runtime)
└── pyproject.toml # Python dependencies
@@ -193,7 +268,7 @@ uv run uvicorn app.main:app --host 0.0.0.0 --port 8000
Access at `http://localhost:8000`. All API routes are prefixed with `/api`.
If `frontend/dist` (or `frontend/dist/index.html`) is missing, backend startup now logs an explicit error and continues serving API routes. In that case, frontend static routes are not mounted until a frontend build is present.
If `frontend/dist` is missing, the backend falls back to `frontend/prebuilt` when present (for example from the release zip artifact). If neither build directory is available, startup logs an explicit error and continues serving API routes without frontend static routes mounted.
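A minimal sketch of that fallback order; the directory names come from the docs, while the helper itself is illustrative:
```python
import logging
from pathlib import Path

logger = logging.getLogger("startup")

def pick_frontend_dir(root: Path) -> Path | None:
    # Documented order: frontend/dist first, then the frontend/prebuilt
    # fallback (e.g. from the release zip); otherwise API-only startup.
    for candidate in (root / "frontend" / "dist", root / "frontend" / "prebuilt"):
        if (candidate / "index.html").is_file():
            return candidate
    logger.error("No frontend build found; serving API routes only")
    return None

print(pick_frontend_dir(Path(".")))
```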
## Testing
@@ -204,12 +279,23 @@ PYTHONPATH=. uv run pytest tests/ -v
```
Key test files:
- `tests/test_decoder.py` - Channel + direct message decryption, key exchange
- `tests/test_keystore.py` - Ephemeral key store
- `tests/test_event_handlers.py` - ACK tracking, repeat detection
- `tests/test_api.py` - API endpoints, read state tracking
- `tests/test_migrations.py` - Database migration system
- `tests/test_frontend_static.py` - Frontend static route registration (missing `dist`/`index.html` handling)
- `tests/test_api.py` - Broad API integration coverage across routers and read-state flows
- `tests/test_packet_pipeline.py` - End-to-end packet processing, decrypt, dedup, and message creation
- `tests/test_event_handlers.py` - ACK tracking, fallback DM handling, and event subscription cleanup
- `tests/test_send_messages.py` - Outgoing DM/channel send workflows, retries, and bot-trigger wiring
- `tests/test_packets_router.py` - Historical decrypt, maintenance, and raw-packet detail endpoints
- `tests/test_repeater_routes.py` - Repeater command/telemetry/trace pane endpoints
- `tests/test_room_routes.py` - Room-server login/status/ACL/telemetry endpoints
- `tests/test_radio_router.py` - Radio config, advert, discovery, trace, and reconnect endpoints
- `tests/test_radio_sync.py` - Radio sync, periodic tasks, contact offload/reload, and pending-message flushes
- `tests/test_fanout.py` - Fanout config CRUD, scope matching, and manager dispatch
- `tests/test_fanout_integration.py` - Integration-module lifecycle and delivery behavior
- `tests/test_statistics.py` - Aggregated mesh/network statistics and noise-floor snapshots
- `tests/test_version_info.py` - Version/build metadata resolution
- `tests/test_websocket.py` - WS manager broadcast and cleanup behavior
- `tests/test_frontend_static.py` - Frontend static route registration and fallback behavior
For the fuller backend inventory, see `app/AGENTS.md`. For frontend-specific suites, see `frontend/AGENTS.md`.
### Frontend (Vitest)
@@ -218,22 +304,9 @@ cd frontend
npm run test:run
```
### Before Completing Changes
### Before Completing Major Changes
**Always run both backend and frontend validation before finishing any changes:**
```bash
# From project root - run backend tests
PYTHONPATH=. uv run pytest tests/ -v
# From project root - run frontend tests and build
cd frontend && npm run test:run && npm run build
```
This catches:
- Type mismatches between frontend and backend (e.g., missing fields in TypeScript interfaces)
- Breaking changes to shared types or API contracts
- Runtime errors that only surface during compilation
**Run `./scripts/quality/all_quality.sh` before finishing major changes that have modified code or tests.** It is the standard repo gate: autofix first, then type checks, tests, and the standard frontend build. This is not necessary for docs-only changes. For minor changes (like wording, color, spacing, etc.), wait until prompted to run the quality gate.
## API Summary
@@ -241,42 +314,72 @@ All endpoints are prefixed with `/api` (e.g., `/api/health`).
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/health` | Connection status |
| GET | `/api/radio/config` | Radio configuration |
| PATCH | `/api/radio/config` | Update name, location, radio params |
| GET | `/api/health` | Connection status, fanout statuses, bots_disabled flag |
| GET | `/api/debug` | Support snapshot: recent logs, live radio probe, contact/channel drift audit, and running version/git info |
| GET | `/api/radio/config` | Radio configuration, including `path_hash_mode`, `path_hash_mode_supported`, advert-location on/off, and `multi_acks_enabled` |
| PATCH | `/api/radio/config` | Update name, location, advert-location on/off, `multi_acks_enabled`, radio params, and `path_hash_mode` when supported |
| PUT | `/api/radio/private-key` | Import private key to radio |
| POST | `/api/radio/advertise` | Send advertisement |
| POST | `/api/radio/advertise` | Send advertisement (`mode`: `flood` or `zero_hop`, default `flood`) |
| POST | `/api/radio/discover` | Run a short mesh discovery sweep for nearby repeaters/sensors |
| POST | `/api/radio/trace` | Send a multi-hop trace loop through known repeaters and back to the local radio |
| POST | `/api/radio/reboot` | Reboot radio or reconnect if disconnected |
| POST | `/api/radio/disconnect` | Disconnect from radio and pause automatic reconnect attempts |
| POST | `/api/radio/reconnect` | Manual radio reconnection |
| GET | `/api/contacts` | List contacts |
| GET | `/api/contacts/{key}` | Get contact by public key or prefix |
| GET | `/api/contacts/analytics` | Unified keyed-or-name contact analytics payload |
| GET | `/api/contacts/repeaters/advert-paths` | List recent unique advert paths for all contacts |
| POST | `/api/contacts` | Create contact (optionally trigger historical DM decrypt) |
| DELETE | `/api/contacts/{key}` | Delete contact |
| POST | `/api/contacts/sync` | Pull from radio |
| POST | `/api/contacts/{key}/add-to-radio` | Push contact to radio |
| POST | `/api/contacts/{key}/remove-from-radio` | Remove contact from radio |
| POST | `/api/contacts/{key}/mark-read` | Mark contact conversation as read |
| POST | `/api/contacts/{key}/telemetry` | Request telemetry from repeater |
| POST | `/api/contacts/{key}/command` | Send CLI command to repeater |
| POST | `/api/contacts/{key}/trace` | Trace route to contact |
| POST | `/api/contacts/bulk-delete` | Delete multiple contacts |
| DELETE | `/api/contacts/{public_key}` | Delete contact |
| POST | `/api/contacts/{public_key}/mark-read` | Mark contact conversation as read |
| POST | `/api/contacts/{public_key}/command` | Send CLI command to repeater |
| POST | `/api/contacts/{public_key}/routing-override` | Set or clear a forced routing override |
| POST | `/api/contacts/{public_key}/trace` | Trace route to contact |
| POST | `/api/contacts/{public_key}/path-discovery` | Discover forward/return paths and persist the learned direct route |
| POST | `/api/contacts/{public_key}/repeater/login` | Log in to a repeater |
| POST | `/api/contacts/{public_key}/repeater/status` | Fetch repeater status telemetry |
| POST | `/api/contacts/{public_key}/repeater/lpp-telemetry` | Fetch CayenneLPP sensor data |
| POST | `/api/contacts/{public_key}/repeater/neighbors` | Fetch repeater neighbors |
| POST | `/api/contacts/{public_key}/repeater/acl` | Fetch repeater ACL |
| POST | `/api/contacts/{public_key}/repeater/node-info` | Fetch repeater name, location, and clock via CLI |
| POST | `/api/contacts/{public_key}/repeater/radio-settings` | Fetch repeater radio config via CLI |
| POST | `/api/contacts/{public_key}/repeater/advert-intervals` | Fetch advert intervals |
| POST | `/api/contacts/{public_key}/repeater/owner-info` | Fetch owner info |
| POST | `/api/contacts/{public_key}/room/login` | Log in to a room server |
| POST | `/api/contacts/{public_key}/room/status` | Fetch room-server status telemetry |
| POST | `/api/contacts/{public_key}/room/lpp-telemetry` | Fetch room-server CayenneLPP sensor data |
| POST | `/api/contacts/{public_key}/room/acl` | Fetch room-server ACL entries |
| GET | `/api/channels` | List channels |
| GET | `/api/channels/{key}` | Get channel by key |
| GET | `/api/channels/{key}/detail` | Comprehensive channel profile (message stats, top senders) |
| POST | `/api/channels` | Create channel |
| POST | `/api/channels/bulk-hashtag` | Create multiple hashtag channels |
| DELETE | `/api/channels/{key}` | Delete channel |
| POST | `/api/channels/sync` | Pull from radio |
| POST | `/api/channels/{key}/flood-scope-override` | Set or clear a per-channel regional flood-scope override |
| POST | `/api/channels/{key}/path-hash-mode-override` | Set or clear a per-channel path hash mode override |
| POST | `/api/channels/{key}/mark-read` | Mark channel as read |
| GET | `/api/messages` | List with filters |
| GET | `/api/messages` | List with filters (`q`, `after`/`after_id` for forward pagination) |
| GET | `/api/messages/around/{id}` | Get messages around a specific message (for jump-to-message) |
| POST | `/api/messages/direct` | Send direct message |
| POST | `/api/messages/channel` | Send channel message |
| POST | `/api/messages/channel/{message_id}/resend` | Resend channel message (default: byte-perfect within 30s; `?new_timestamp=true`: fresh timestamp, no time limit, creates new message row) |
| GET | `/api/packets/undecrypted/count` | Count of undecrypted packets |
| GET | `/api/packets/{packet_id}` | Fetch one stored raw packet by row ID for on-demand inspection |
| POST | `/api/packets/decrypt/historical` | Decrypt stored packets |
| POST | `/api/packets/maintenance` | Delete old packets and vacuum |
| GET | `/api/read-state/unreads` | Server-computed unread counts, mentions, last message times |
| GET | `/api/read-state/unreads` | Server-computed unread counts, mentions, last message times, and `last_read_ats` boundaries |
| POST | `/api/read-state/mark-all-read` | Mark all conversations as read |
| GET | `/api/settings` | Get app settings |
| PATCH | `/api/settings` | Update app settings |
| POST | `/api/settings/favorites/toggle` | Toggle favorite status |
| POST | `/api/settings/migrate` | One-time migration from frontend localStorage |
| POST | `/api/settings/blocked-keys/toggle` | Toggle blocked key |
| POST | `/api/settings/blocked-names/toggle` | Toggle blocked name |
| POST | `/api/settings/tracked-telemetry/toggle` | Toggle tracked telemetry repeater |
| GET | `/api/fanout` | List all fanout configs |
| POST | `/api/fanout` | Create new fanout config |
| PATCH | `/api/fanout/{id}` | Update fanout config (triggers module reload) |
| DELETE | `/api/fanout/{id}` | Delete fanout config (stops module) |
| POST | `/api/fanout/bots/disable-until-restart` | Stop bot fanout modules and keep bots disabled until the process restarts |
| GET | `/api/statistics` | Aggregated mesh network statistics |
| WS | `/api/ws` | Real-time updates |
## Key Concepts
@@ -293,12 +396,15 @@ All endpoints are prefixed with `/api` (e.g., `/api/health`).
- `1` - Client (regular node)
- `2` - Repeater
- `3` - Room
- `4` - Sensor
### Channel Keys
- Stored as 32-character hex string (TEXT PRIMARY KEY)
- Hashtag channels: `SHA256("#name")[:16]` converted to hex
- Custom channels: User-provided or generated
- Channels may also persist `flood_scope_override`; when set, channel sends temporarily switch the radio flood scope to that value for the duration of the send, then restore the global app setting.
- Channels may persist `path_hash_mode_override` (0/1/2); when set, channel sends temporarily switch the radio path hash mode for the duration of the send, then restore the radio default.
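A short sketch of the hashtag derivation above, assuming UTF-8 encoding and truncation of the digest bytes (16 bytes, hex-encoded, gives the 32-character key):
```python
import hashlib

def hashtag_channel_key(name: str) -> str:
    # SHA256 over the literal "#name"; first 16 digest bytes, hex-encoded.
    tag = name if name.startswith("#") else "#" + name
    return hashlib.sha256(tag.encode("utf-8")).digest()[:16].hex()

print(hashtag_channel_key("remoteterm"))  # 32 hex characters
```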
### Message Types
@@ -310,9 +416,9 @@ All endpoints are prefixed with `/api` (e.g., `/api/health`).
Read state (`last_read_at`) is tracked **server-side** for consistency across devices:
- Stored as Unix timestamp in `contacts.last_read_at` and `channels.last_read_at`
- Updated via `POST /api/contacts/{key}/mark-read` and `POST /api/channels/{key}/mark-read`
- Updated via `POST /api/contacts/{public_key}/mark-read` and `POST /api/channels/{key}/mark-read`
- Bulk update via `POST /api/read-state/mark-all-read`
- Aggregated counts via `GET /api/read-state/unreads` (server-side computation)
- Aggregated counts via `GET /api/read-state/unreads` (server-side computation of counts, mention flags, `last_message_times`, and `last_read_ats`)
**State Tracking Keys (Frontend)**: Generated by `getStateKey()` for message times (sidebar sorting):
- Channels: `channel-{channel_key}`
@@ -320,6 +426,14 @@ Read state (`last_read_at`) is tracked **server-side** for consistency across de
**Note:** These are NOT the same as `Message.conversation_key` (the database field).
### Fanout Bus (MQTT, Bots, Webhooks, Apprise, SQS)
All external integrations are managed through the fanout bus (`app/fanout/`). Each integration is a `FanoutModule` with scope-based event filtering, stored in the `fanout_configs` table and managed via `GET/POST/PATCH/DELETE /api/fanout`.
`broadcast_event()` in `websocket.py` dispatches `message` and `raw_packet` events to the fanout manager. See `app/fanout/AGENTS_fanout.md` for full architecture details.
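For orientation, a hedged sketch of the module shape — the real contract lives in `app/fanout/` and `AGENTS_fanout.md`; the class and method names here are illustrative assumptions:
```python
# Illustrative only: names and signatures are assumptions, not the real
# FanoutModule contract from app/fanout/.
class LoggingFanoutModule:
    SCOPES = {"message", "raw_packet"}  # scope-based event filtering

    def __init__(self, config: dict):
        self.config = config  # one row from the fanout_configs table

    async def handle_event(self, event_type: str, payload: dict) -> None:
        if event_type not in self.SCOPES:
            return  # events outside this module's scopes are ignored
        print(f"fanout[{self.config.get('name')}]: {event_type}")
```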
Community MQTT forwards raw packets only. Its derived `path` field, when present on direct packets, is a comma-separated list of hop identifiers as reported by the packet format. Token width therefore varies with the packet's path hash mode; it is intentionally not a flat per-byte rendering.
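For intuition, a sketch of that rendering, assuming the hops arrive already split according to the packet's path hash mode:
```python
def render_path(hops: list[bytes]) -> str:
    # Join hop identifiers as reported; token width follows the packet's
    # path hash mode rather than a flat one-byte-per-token rendering.
    return ",".join(hop.hex() for hop in hops)

print(render_path([b"\x3a", b"\xf1"]))  # "3a,f1" (one-byte hash mode)
print(render_path([b"\x3a\xf1"]))       # "3af1" (multibyte hash mode)
```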
### Server-Side Decryption
The server can decrypt packets using stored keys, both in real-time and for historical packets.
@@ -354,13 +468,34 @@ mc.subscribe(EventType.ACK, handler)
|----------|---------|-------------|
| `MESHCORE_SERIAL_PORT` | auto-detect | Serial port for radio |
| `MESHCORE_TCP_HOST` | *(none)* | TCP host for radio (mutually exclusive with serial/BLE) |
| `MESHCORE_TCP_PORT` | `5000` | TCP port (used with `MESHCORE_TCP_HOST`) |
| `MESHCORE_BLE_ADDRESS` | *(none)* | BLE device address (mutually exclusive with serial/TCP) |
| `MESHCORE_BLE_PIN` | *(required with BLE)* | BLE PIN code |
| `MESHCORE_SERIAL_BAUDRATE` | `115200` | Serial baud rate |
| `MESHCORE_LOG_LEVEL` | `INFO` | Logging level (`DEBUG`/`INFO`/`WARNING`/`ERROR`) |
| `MESHCORE_DATABASE_PATH` | `data/meshcore.db` | SQLite database location |
| `MESHCORE_DISABLE_BOTS` | `false` | Disable bot system entirely (blocks execution and config) |
| `MESHCORE_BASIC_AUTH_USERNAME` | *(none)* | Optional app-wide HTTP Basic auth username; must be set together with `MESHCORE_BASIC_AUTH_PASSWORD` |
| `MESHCORE_BASIC_AUTH_PASSWORD` | *(none)* | Optional app-wide HTTP Basic auth password; must be set together with `MESHCORE_BASIC_AUTH_USERNAME` |
| `MESHCORE_ENABLE_MESSAGE_POLL_FALLBACK` | `false` | Switch the always-on radio audit task from hourly checks to aggressive 10-second polling; the audit checks both missed message drift and channel-slot cache drift |
| `MESHCORE_FORCE_CHANNEL_SLOT_RECONFIGURE` | `false` | Disable channel-slot reuse and force `set_channel(...)` before every channel send, even on serial/BLE |
**Note:** Runtime app settings are stored in the database (`app_settings` table), not environment variables. These include `max_radio_contacts`, `auto_decrypt_dm_on_advert`, `advert_interval`, `last_advert_time`, `last_message_times`, `flood_scope`, `blocked_keys`, `blocked_names`, `discovery_blocked_types`, `tracked_telemetry_repeaters`, and `auto_resend_channel`. `max_radio_contacts` is the configured radio contact capacity baseline used by background maintenance: favorites reload first, non-favorite fill targets about 80% of that value, and full offload/reload triggers around 95% occupancy. They are configured via `GET/PATCH /api/settings`. MQTT, bot, webhook, Apprise, and SQS configs are stored in the `fanout_configs` table, managed via `/api/fanout`. If the radio's channel slots appear unstable or another client is mutating them underneath this app, operators can force the old always-reconfigure send path with `MESHCORE_FORCE_CHANNEL_SLOT_RECONFIGURE=true`.
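A back-of-the-envelope sketch of those maintenance thresholds; the exact rounding used by the maintenance task is an assumption:
```python
def maintenance_thresholds(max_radio_contacts: int) -> tuple[int, int]:
    # Non-favorite fill targets about 80% of capacity; a full offload/reload
    # triggers around 95% occupancy (per the note above).
    return int(max_radio_contacts * 0.80), int(max_radio_contacts * 0.95)

print(maintenance_thresholds(200))  # (160, 190) for a capacity of 200
```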
`experimental_channel_double_send` is an opt-in experimental setting: when enabled, channel sends perform a second byte-perfect resend after a 3-second delay.
Byte-perfect channel retries are user-triggered via `POST /api/messages/channel/{message_id}/resend` and are allowed for 30 seconds after the original send.
**Transport mutual exclusivity:** Only one of `MESHCORE_SERIAL_PORT`, `MESHCORE_TCP_HOST`, or `MESHCORE_BLE_ADDRESS` may be set. If none are set, serial auto-detection is used.
## Errata & Known Non-Issues
### `meshcore_py` advert parsing can crash on malformed/truncated RF log packets
The vendored MeshCore Python reader's `LOG_DATA` advert path assumes the decoded advert payload always contains at least 101 bytes of advert body and reads the flags byte with `pk_buf.read(1)[0]` without a length guard. If a malformed or truncated RF log frame slips through, `MessageReader.handle_rx()` can fail with `IndexError: index out of range` from `meshcore/reader.py` while parsing payload type `0x04` (advert).
This does not indicate database corruption or a message-store bug. It is a parser-hardening gap in `meshcore_py`: the reader does not fully mirror firmware-side packet/path validation before attempting advert decode. The practical effect is usually a one-off asyncio task failure for that packet while later packets continue processing normally.
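The missing guard amounts to something like the following — a hardening sketch, not the vendored reader's actual code:
```python
import io

def read_advert_flags(pk_buf: io.BytesIO) -> int | None:
    # Guarded equivalent of pk_buf.read(1)[0]: a truncated frame yields
    # an empty read, so bail out instead of raising IndexError.
    flags = pk_buf.read(1)
    return flags[0] if flags else None
```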
### Channel-message dedup intentionally treats same-name/same-text/same-second channel sends as indistinguishable because they are
Channel message storage deduplicates on `(type, conversation_key, text, sender_timestamp)`. Reviewers often flag this as "missing sender identity," but for channel messages the stored `text` already includes the displayed sender label (for example `Alice: hello`). That means two different users only collide when they produce the same rendered sender name, the same body text, and the same sender timestamp.
In that case, RemoteTerm usually does not have enough information to distinguish "two independent same-name sends" from "one message observed again as an echo/repeat." Without a reliable sender identity at ingest, treating those packets as the same message is an accepted limitation of the observable data model, not an obvious correctness bug.
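A sketch of that dedup identity, using the column names from the text above:
```python
def channel_dedup_key(msg: dict) -> tuple:
    # `text` already embeds the rendered sender label (e.g. "Alice: hello"),
    # so same-label, same-body, same-second sends collide by design.
    return (msg["type"], msg["conversation_key"], msg["text"], msg["sender_timestamp"])
```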
+507 -100
View File
@@ -1,144 +1,551 @@
## [3.11.0] - 2026-04-10
* Feature: Radio health and contact data accessible on fanout bus
* Feature: Local node radio stats (voltage etc.) on WS health bus
* Feature: Battery indicator optional in status bar (configured in Local Settings)
* Bugfix: Fix same-second same-message collision in room servers
* Bugfix: Don't consume DM resend attempt if the radio was just busy
* Bugfix: Assume that a same-second same-message same-first-byte-key DM is more likely an echo than them sending the same message
* Bugfix: Multi-retry for flood scope restoration
* Misc: Testing & documentation improvements
## [3.10.0] - 2026-04-10
* Feature: Add Arch AUR package
* Feature: 72hr packet density view in statistics
* Feature: Add warnings for event loop selection for MQTT on Windows startup
* Bugfix: Bump Apprise to 1.9.9 to fix Matrix bug
* Misc: More memory-conscious on recent contact fetch
* Misc: Fix statistics pane e2e test
## [3.9.0] - 2026-04-06
* Feature: Add hop counts to hop-width selection options
* Feature: Show cached repeater telemetry inline in settings
* Feature: Retain recent traces and make them click-to-re-run
* Feature: Autofocus channel/DM textbox on desktop
* Feature: Favorites on the radio are now imported as favorites
* Bugfix: Be clearer on issue identification for missing HTTPS context in channel finder
* Bugfix: Don't use sender timestamp for message sequence display
* Bugfix: Function on subdomains happily
* Misc: Be gentler, room s/cracker/finder/
* Misc: Test and frontend correctness & test fixes
* Misc: Don't repeat clock sync failure logs
* Misc: Make warning in readme clearer about taking over the radio
* Misc: Improve readme phrasings
* Misc: Better y-axis selection for battery read-out
* Misc: Provide clearer warning on docker setup without docker installed
* Misc: Default visualizer stale pruning to on/5 minutes
* Misc: Migrate favorites to better storage pattern
* Misc: Provide dumper script for API + WS interfaces for prep for HA integration
## [3.8.0] - 2026-04-03
* Feature: Per-channel hop width override
* Feature: Intervalized repeater telemetry collection
* Feature: Auto-resend option for byte-perfect resends on no repeater echo
* Feature: Attach RSSI/SNR to received packets
* Feature: Add motion packet display to map
* Feature: Map dark mode
* Bugfix: Make DB indices more useful around capitalization
* Misc: Bump required Python to 3.11
* Misc: Performance, documentation, and test improvements
* Misc: More yields during long radio operations
* Misc: Dead code & crufty test removal
* Misc: Remove all but stub frontend favorites migration for very very old versions
## [3.7.1] - 2026-04-02
* Feature: Redact Apprise URLs to prevent sensitive information disclosure
## [3.7.0] - 2026-04-02
* Feature: Repeater battery tracking
* Feature: Repeater info pane just like contacts
* Feature: Make repeaters blockable
* Feature: Add new-node advert blocking
* Feature: Add bulk deletion interface
* Feature: Bulk room add on alt+click of new channel button
* Feature: More info in debug endpoint
* Bugfix: Be more conservative around radio load limits and don't exceed radio-reported capacity
* Misc: Default auto-DM decrypt to true
* Misc: Reorganize some settings panes
* Misc: Enable FK pragma
* Misc: Various performance and correctness fixes
* Misc: Correct TCP default port
## [3.6.7] - 2026-03-31
* Misc: Remove armv7 (for now)
## [3.6.6] - 2026-03-31
* Misc: Please I'm begging for the build scripts to be working now
## [3.6.5] - 2026-03-31
* Bugfix: Maybe fix problem with publish script
## [3.6.4] - 2026-03-31
* Feature: Clarify New Channel/Contact button
* Bugfix: Rename "Best RSSI" to "Strongest Neighbor"
* Bugfix: Improve layout of Trace pane
* Misc: Docker setup improvements
## [3.6.3] - 2026-03-30
* Feature: Add multi-byte trace
* Feature: Show node name on discovered node if we know it
* Feature: Add docker installation script
* Feature: Add historical noise floor to stats
* Feature: Add trace tool
* Bugfix: 100x performance on statistics endpoint with indices and better queries
* Misc: Performance and correctness improvements for backend-of-the-frontend
* Misc: Reorganize scripts
## [3.6.2] - 2026-03-29
* Feature: Be more flexible about timing and volume of full contact offload
* Feature: Improve room server and repeater ops to be much clearer about auth status
* Feature: Show last error status on integrations
* Feature: Push multi-platform docker builds
* Bugfix: Fix advert interval time unit display
* Bugfix: Don't cast RSSI/SNR to string for community MQTT
* Bugfix: Map uploader follows redirect
* Misc: Thin out unnecessary cruft in unreads endpoint
* Misc: Fall back gracefully if linked to an unknown contact
## [3.6.1] - 2026-03-26
* Feature: MeshCore Map integration
* Feature: Add warning screen about bots
* Feature: Favicon reflects unread message state
* Feature: Show hop map in larger modal
* Feature: Add prebuilt frontend install script
* Feature: Add clean service installer script
* Feature: Swipe in to show menu
* Bugfix: Invalid backend API path serves error, not fallback index
* Bugfix: Fix some spacing/page height issues
* Misc: Misc. bugfixes and performance and test improvements
## [3.6.0] - 2026-03-22
* Feature: Add incoming-packet analytics
* Feature: BYOPacket for analysis
* Feature: Add room activity to stats view
* Bugfix: Handle Heltec v3 serial noise
* Misc: Swap repeaters and room servers for better ordering
## [3.5.0] - 2026-03-19
* Feature: Add room server alpha support
* Feature: Add option to force-reset node clock when it's too far ahead
* Feature: DMs auto-retry before resorting to flood
* Feature: Add impulse zero-hop advert
* Feature: Utilize PATH packets to correctly source a contact's route
* Feature: Metrics view on raw packet pane
* Feature: Metric, Imperial, and Smoots are now selectable for distance display
* Feature: Allow favorites to be sorted
* Feature: Add multi-ack support
* Feature: Password-remember checkbox on repeaters + room servers
* Bugfix: Serialize radio disconnect in a lock
* Bugfix: Fix contact bar layout issues
* Bugfix: Fix sidebar ordering for contacts by advert recency
* Bugfix: Fix version reporting in community MQTT
* Bugfix: Fix Apprise duplicate names
* Bugfix: Be better about identity resolution in the stats pane
* Misc: Docs, test, and performance enhancements
* Misc: Don't prompt "Are you sure" when leaving an unedited integration
* Misc: Log node time on startup
* Misc: Improve community MQTT error bubble-up
* Misc: Unread DMs always have a red unread counter
* Misc: Improve information in the debug view to show DB status
## [3.4.1] - 2026-03-16
* Bugfix: Improve handling of version information on prebuilt bundles
* Bugfix: Improve frontend usability on disconnected radio
* Misc: Docs and readme updates
* Misc: Overhaul DM ingest and frontend state handling
## [3.4.0] - 2026-03-16
* Feature: Add radio model and stats display
* Feature: Add prebuilt frontends, then deleted that and moved to prebuilt release artifacts
* Bugfix: Misc. frontend performance and correctness fixes
* Bugfix: Fix same-second same-content DM send collision
* Bugfix: Discard clearly-wrong GPS data
* Bugfix: Prevent repeater clock skew drift on page nav
* Misc: Use repeater's advertised location if we haven't loaded one from repeater admin
* Misc: Don't permit invalid fanout configs to be saved ever
## [3.3.0] - 2026-03-13
* Feature: Use dashed lines to show collapsed ambiguous router results
* Feature: Jump to unread
* Feature: Local channel management to prevent need to reload channel every time
* Feature: Debug endpoint
* Feature: Force-singleton channel management
* Feature: Local node discovery
* Feature: Node routing discovery
* Bugfix: Don't tell users to use npm ci
* Bugfix: Fallback polling dm message persistence
* Bugfix: All native-JS inputs are now modals
* Bugfix: Same-second send collision resolution
* Bugfix: Proper browser updates on resend
* Bugfix: Don't use last-heard when we actually want last-advert for path discovery for nodes
* Bugfix: Don't treat prefix-matching DM echoes as acks like we do for channel messages
* Misc: Visualizer data layer overhaul for future map work
* Misc: Parallelize docker tests
## [3.2.0] - 2026-03-12
* Feature: Improve ambiguous-sender DM handling and visibility
* Feature: Allow for toggling of node GPS broadcast
* Feature: Add path width to bot and move example to full kwargs
* Feature: Improve node map color contrast
* Bugfix: More accurate tracking of contact data
* Bugfix: Misc. frontend performance and bugfixes
* Misc: Clearer warnings on user-key linkage
* Misc: Documentation improvements
## [3.1.1] - 2026-03-11
* Feature: Add basic auth
* Feature: SQS fanout
* Feature: Enrich contact info pane
* Feature: Search operators for node and channel
* Feature: Pause radio connection attempts from Radio settings
* Feature: New themes! What a great use of time!
* Feature: Github workflows runs for validation
* Bugfix: More consistent log format with times
* Bugfix: Patch meshcore_py bluetooth eager reconnection out during pauses
## [3.1.0] - 2026-03-11
* Feature: Add basic auth
* Feature: SQS fanout
* Feature: Enrich contact info pane
* Feature: Search operators for node and channel
* Feature: Pause radio connection attempts from Radio settings
* Feature: New themes! What a great use of time!
* Feature: Github workflows runs for validation
* Bugfix: More consistent log format with times
* Bugfix: Patch meshcore_py bluetooth eager reconnection out during pauses
## [3.0.0] - 2026-03-10
* Feature: Custom regions per-channel
* Feature: Add custom contact pathing
* Feature: Corrupt packets are more clear that they're corrupt
* Feature: Better, faster patterns around background fetching with explicit opt-in for recurring sync if the app detects you need it
* Feature: More consistent icons
* Feature: Add per-channel local notifications
* Feature: New themes
* Feature: Massive codebase refactor and overhaul
* Bugfix: Fix packet parsing for trace packets
* Bugfix: Refetch channels on reconnect
* Bugfix: Load All on repeater pane on mobile doesn't extend into lower text
* Bugfix: Timestamps in logs
* Bugfix: Correct wrong clock sync command
* Misc: Improve bot error bubble up
* Misc: Update to non-lib-included meshcore-decoder version
* Misc: Revise refactors to be more LLM friendly
* Misc: Fix script executability
* Misc: Better logging format with timestamp
* Misc: Repeater advert buttons separate flood and one-hop
* Misc: Preserve repeater pane on navigation away
* Misc: Clearer iconography and coloring for status bar buttons
* Misc: Search bar to top bar
## [2.7.9] - 2026-03-08
* Bugfix: Don't obscure new integration dropdown on session boundary
## [2.7.8] - 2026-03-08
* Bugfix: Improve frontend asset resolution and fixup the build/push script
## [2.7.1] - 2026-03-08
* Bugfix: Fix historical DM packet length passing
* Misc: Follow better inclusion patterns for the patched meshcore-decoder and just publish the dang package
* Misc: Patch a bewildering browser quirk that caused large raw packet lists to extend past the bottom of the page
## [2.7.0] - 2026-03-08
* Feature: Multibyte path support
* Feature: Add multibyte statistics to statistics pane
* Feature: Add path bittage to contact info pane
* Feature: Put tools in a collapsible
## [2.6.1] - 2026-03-08
* Misc: Fix busted docker builds; we don't have a 2.6.0 build sorry
## [2.6.0] - 2026-03-08
* Feature: A11y improvements
* Feature: New themes
* Feature: Backfill channel sender identity when available
* Feature: Modular fanout bus, including Webhooks, more customizable community MQTT, and Apprise
* Bugfix: Unreads now respect blocklist
* Bugfix: Unreads can't accumulate on an open thread
* Bugfix: Channel name in broadcasts
* Bugfix: Add missing httpx dependency
* Bugfix: Improvements to radio startup frontend-blocking time and radio status reporting
* Misc: Improved button signage for app movement
* Misc: Test, performance, and documentation improvements
## [2.5.0] - 2026-03-05
* Feature: Far better accessibility across the app (with far to go)
* Feature: Add community MQTT stats reporting, and improve over a few commits
* Feature: Color schemes and misc. settings reorg
* Feature: Add why-active to filtered nodes
* Feature: Add channel and contact info box
* Feature: Add contact blocking
* Feature: Add potential repeater path map display
* Feature: Add flood scoping/regions
* Feature: Global message search
* Feature: Fully safe bot disable
* Feature: Add default #remoteterm channel (lol sorry I had to)
* Feature: Custom recency pruning in visualizer
* Bugfix: Be more cautious around null byte stripping
* Bugfix: Clear channel-add interface on not-add-another
* Bugfix: Add status/name/MQTT LWT
* Bugfix: Channel deletion propagates over WS
* Bugfix: Show map location for all nodes on link, not 7-day-limited
* Bugfix: Hide private key channel keys by default
* Misc: Logline to show if cleanup loop on non-sync'd meshcore radio links fixes anything
* Misc: Doc, changelog, and test improvements
* Misc: Add, and remove, package lock (sorry Windows users)
* Misc: Don't show mark all as read if not necessary
* Misc: Fix stale closures and misc. frontend perf/correctness improvements
* Misc: Add Windows startup notes
* Misc: E2E expansion + improvement
* Misc: Move around visualizer settings
## [2.4.0] - 2026-03-02
* Feature: Add community MQTT reporting (e.g. LetsMesh.net)
* Misc: Build scripts and library attribution
* Misc: Add sign of life to E2E tests
## [2.3.0] - 2026-03-01
* Feature: Click path description to reset to flood
* Feature: Add MQTT publishing
* Feature: Visualizer remembers settings
* Bugfix: Fix prefetch usage
* Bugfix: Fixed an issue where busy channels can result in double-display of incoming messages
* Misc: Drop py3.12 requirement
* Misc: Performance, documentation, test, and file structure optimizations
* Misc: Add arrows between route nodes on contact info
* Misc: Show repeater path/type in title bar
## [2.2.0] - 2026-02-28
* Feature: Track advert paths and use to disambiguate repeater identity in visualizer
* Feature: Contact info pane
* Feature: Overhaul repeater interface
* Bugfix: Misc. frontend rendering + perf improvements
* Bugfix: Better behavior around radio locking and autofetch/polling
* Bugfix: Clear channel name field on new-channel modal tab change
* Bugfix: Repeater infobox can scroll
* Bugfix: Better handling of historical DM encrypts
* Bugfix: Handle errors if returned in prefetch phase
* Misc: Radio event response failure is logged/surfaced better
* Misc: Improve test coverage and remove dead code
* Misc: Documentation and errata improvements
* Misc: Database storage optimization
## [2.1.0] - 2026-02-23
* Feature: Add ability to remember last-used channel on load
* Feature: Add `docker compose` support (thanks @suymur !)
* Feature: Better-aligned favicon (lol)
* Bugfix: Disable autocomplete on message field
* Bugfix: Legacy hash restoration on page load
* Bugfix: Align resend buttons in pathing modal
* Bugfix: Update README.md (briefly), then docker-compose.yaml, to reflect correct docker image host
* Bugfix: Correct settings pane scroll lock on zoom (thanks @yellowcooln !)
* Bugfix: Improved repeater comms on busy meshes
* Bugfix: Drain before autofetch from radio
* Bugfix: Fix, or document exceptions to, sub-second resolution message failure
* Bugfix: Improved handling of radio connection, disconnection, and connection-aliveness-status
* Bugfix: Force server-side keystore update when radio key changes
* Bugfix: Reduce WS churn for incoming message handling
* Bugfix: Fix content type signalling for irrelevant endpoints
* Bugfix: Handle stuck post-connect failure state
* Misc: Documentation & version parsing improvements
* Misc: Hide char counter on mobile for short messages
* Misc: Typo fixes in docs and settings
* Misc: Add dynamic webmanifest for hosts that can support it
* Misc: Improve DB size via dropping unnecessary uniqs, indices, vacuum, and offering ability to drop historical matches packets
* Misc: Drop weird rounded bounding box for settings
* Misc: Move resend buttons to pathing modal
* Misc: Improved comments around database ownership on *nix systems
* Misc: Move to SSoT for message dedupe on frontend
* Misc: Move DM ack clearing to standard poll, and increase hold time between polling
* Misc: Holistic testing overhaul
## [2.0.1] - 2026-02-16
* Bugfix: Fix missing trigger condition on statistics pane expansion on mobile
## [2.0.0] - 2026-02-16
* Feature: Frontend UX + log overhaul
* Bugfix: Use contact object from DB for broadcast rather than handrolling
* Bugfix: Fix out of order path WS messages overwriting each other
* Bugfix: Make broadcast timestamp match fallback logic used in storage code
* Bugfix: Fix repeater command timestamp selection logic
* Bugfix: Use actual pubkey matching for path update, and don't action serial path update events (use RX packet)
* Bugfix: Add missing radio operation locks in a few spots
* Bugfix: Fix dedupe for frontend raw packet delivery (mesh visualizer much more active now!)
* Bugfix: Less aggressive dedupe for advert packets (we don't care about the payload, we care about the path, duh)
* Misc: Visualizer layout refinement & option labels
## [1.10.0] - 2026-02-16
* Feature: Collapsible sidebar sections with per-section unread badge (thanks @rgregg !)
* Feature: 3D mesh visualizer
* Feature: Statistics pane
* Feature: Support incoming/outgoing indication for bot invocations
* Feature: Quick byte-perfect message resend if you got unlucky with repeats (thanks @rgregg -- we had a parallel implementation but I appreciate your work!)
* Bugfix: Fix top padding on outgoing message
* Bugfix: Frontend performance, appearance, and Lighthouse improvements (prefetches, form labelling, contrast, channel/roomlist changes)
* Bugfix: Multiple-sent messages had path appearing delays until rerender
* Bugfix: Fix ack/message race condition that caused dropped ack displays until rerender
* Misc: Dedupe contacts/rooms by key and not name to prevent name collisions creating unreachable conversations
* Misc: s/stopped/idle/ for room finder
## [1.9.3] - 2026-02-12
* Feature: Upgrade the room finder to support two-word rooms
## [1.9.2] - 2026-02-12
* Feature: Options dialog sucks less
* Bugfix: Move tests to isolated memory DB
* Bugfix: Mention case sensitivity
* Bugfix: Stale header retention on settings page view
* Bugfix: Non-isolated path writing
* Bugfix: Nullable contact fields are now passed as real nulls
* Bugfix: Look at all fields on message reconcile, not just text
* Bugfix: Make mark-all-as-read atomic
* Misc: Purge unused WS handlers from back when we did chans and contacts over WS, not API
* Misc: Massive test and AGENTS.md overhauls and additions
## [1.9.1] - 2026-02-10
* Feature: Contacts and channels use keys, not names
* Bugfix: Fix falsy casting of 0 in lat lon and timing data
* Bugfix: Show message length in bytes, not chars
* Bugfix: Fix phantom unread badges on focused convos
* Misc: Bot invocation to async
* Misc: Use full key, not prefix, where we can
## [1.9.0] - 2026-02-10
* Feature: Favorited contacts are preferentially loaded onto the radio
* Feature: Add recent-message caching for fast switching
* Feature: Add echo paths modal when echo-heard checkbox is clicked
* Feature: Add experimental byte-perfect double-send for bad RF environments to try to punch the message out
* Frontend: Better styling on echo + message path display
* Bugfix: Prevent frontend static file serving path traversal vuln
* Bugfix: Safer prefix-claiming for DMs we don't have the key for
* Bugfix: Prevent injection from mentions with special characters
* Bugfix: Fix repeaters comms showing in wrong channel when repeater operations are in flight and the channel is changed quickly
* Bugfix: App can boot and test without a frontend dir
* Misc: Improve and consistent-ify (?) backend radio operation lock management
* Misc: Frontend performance and safety enhancements
* Misc: Move builds to non-bundled; usage requires building the Frontend
* Misc: Update tests and agent docs
## [1.8.0] - 2026-02-07
* Feature: Single hop ping
* Feature: PWA viewport fixes (thanks @rgregg)
* Feature (?): No frontend distribution; build it yourself ;P
* Bugfix: Fix channel message send race condition (concurrent sends could corrupt shared radio slot)
* Bugfix: Fix TOCTOU race in radio reconnect (duplicate connections under contention)
* Bugfix: Better guarding around reconnection
* Bugfix: Duplicate websocket connection fixes
* Bugfix: Settings tab error cleanliness on tab swap
* Bugfix: Fix path traversal vuln
* UI: Swap visualizer legend ordering (yay prettier)
* Misc: Perf and locking improvements
* Misc: Always flood advertisements
* Misc: Better packet dupe handling
* Misc: Dead code cleanup, test improvements
## [1.7.1] - 2026-02-03
* Feature: Clickable hyperlinks
* Bugfix: More consistent public key normalization
* Bugfix: Use more reliable cursor paging
* Bugfix: Fix null timestamp dedupe failure
* Bugfix: More consistent prefix-based message claiming on key receipt
* Misc: Bot can respond to its own messages
* Misc: Additional tests
* Misc: Remove unneeded message dedupe logic
* Misc: Resync settings after radio settings mutation
## [1.7.0] - 2026-01-27
* Feature: Multi-bot functionality
* Bugfix: Adjust bot code editor display and add line numbers
* Bugfix: Fix clock filtering and contact lookup behavior bugs
* Bugfix: Fix repeater message duplication issue
* Bugfix: Correct outbound message timestamp assignment (affecting outgoing messages seen as incoming)
* UI: Move advertise button to identity tab
* Misc: Clarify fallback functionality for missing private key export in logs
## [1.6.0] - 2026-01-26
* Feature: Visualizer: extract public key from AnonReq, add heuristic repeater disambiguation, add reset button, draggable nodes
* Feature: Customizable advertising interval
* Feature: In-app bot setup
* Bugfix: Force contact onto radio before DM send
* Misc: Remove unused code
## [1.5.0] - 2026-01-19
* Feature: Network visualizer
## [1.4.1] - 2026-01-19
* Feature: Add option to attempt historical DM decrypt on new-contact advertisement (disabled by default)
* Feature: Server-side preference management for favorites, read status, etc.
* UI: More compact hop labelling
* Bugfix: Misc. race conditions and websocket handling
* Bugfix: Reduce fetching cadence by loading all contact data at start to prevent fetches on advertise-driven update
## [1.4.0] - 2026-01-18
* UI: Improve button layout for room searcher
* UI: Improve favicon coloring
* UI: Improve status bar button layout on small screen
* Feature: Show multi-path hop display with distance estimates
* Feature: Search rooms and contacts by key, not just name
* Bugfix: Historical DM decryption now works as expected
* Bugfix: Don't double-set active conversation after addition; wait for backend room name normalization
## [1.3.1] - 2026-01-17
* UI: Rework restart handling
* Feature: Add `dutycyle_start` command to logged-in repeater session to start five min duty cycle tracking
* Bugfix: Improve error message rendering from server-side errors
* UI: Remove octothorpe from channel listing
## [1.3.0] - 2026-01-17
* Feature: Rework database schema to drop unnecessary columns and dedupe payloads at the DB level
* Feature: Massive frontend settings overhaul. It ain't gorgeous but it's easier to navigate.
* Feature: Drop repeater login wait time; vestigial from debugging a different issue
## [1.2.1] - 2026-01-17
@@ -146,27 +553,27 @@ Update: Update meshcore-hashtag-cracker to include sender-identification correct
## [1.2.0] - 2026-01-16
* Feature: Add favorites
## [1.1.0] - 2026-01-14
* Bugfix: Use actual pathing data from advertisements, not just always flood (oops)
* Bugfix: Autosync radio clock periodically to prevent drift (would show up most commonly as issues with repeater comms)
## [1.0.3] - 2026-01-13
* Bugfix: Add missing test management packages
* Improvement: Drop unnecessary repeater timeouts, and retain timeout for login only -- repeater ops are faster AND more reliable!
## [1.0.2] - 2026-01-13
* Improvement: Add delays between router ops to prevent traffic collisions
## [1.0.1] - 2026-01-13
* Bugfixes: Cleaner DB shutdown, radio reconnect contention, packet dedupe garbage removal
## [1.0.0] - 2026-01-13
* Initial full release!
+203
View File
@@ -0,0 +1,203 @@
# Contributing
## Guiding Principles
- In all your interactions with developers, maintainers, and users, be kind.
- Prefer small, comprehensible changes over large sweeping ones. Individual commits should be meaningful atomic chunks of work. Pull requests with many, many commits instead of a phased approach may be declined.
- Pull requests must be fully understood and explicitly endorsed by a human before merge. AI assistance is great, and this repo is optimized for it, but we keep quality by keeping our agents on track to write clear code, useful (not useless) tests, good architecture, and big-picture thinking.
- No pull request should introduce new failing lint, typecheck, test, or build results.
- Every pull request should have an associated issue or discussion thread; a brand new feature appearing first in a PR is an antipattern.
- No truly automated radio traffic. Bot replies are already the practical edge of what this project wants to automate; any kind of traffic that would be intervalized or automated is not what this project is about.
- No ingestion from the internet onto the mesh. This project is a radio client, not a bridge for outside traffic to enter the network. The mesh is strong because it is a radio mesh, not the internet with some weird wireless links.
## Local Development
### Backend
```bash
uv sync
uv run uvicorn app.main:app --reload
```
With an explicit serial port:
```bash
MESHCORE_SERIAL_PORT=/dev/ttyUSB0 uv run uvicorn app.main:app --reload
```
On Windows (PowerShell):
```powershell
uv sync
$env:MESHCORE_SERIAL_PORT="COM8"
uv run uvicorn app.main:app --reload
```
### Frontend
```bash
cd frontend
npm install
npm run dev
```
Run both the backend and `npm run dev` for hot-reloading frontend development.
## Quality Checks
Run the full quality suite before proposing or handing off code changes:
```bash
./scripts/quality/all_quality.sh
```
That runs linting, formatting, type checking, tests, and builds for both backend and frontend.
If you need targeted commands while iterating:
```bash
# backend
uv run ruff check app/ tests/ --fix
uv run ruff format app/ tests/
uv run pyright app/
PYTHONPATH=. uv run pytest tests/ -v
# frontend
cd frontend
npm run lint:fix
npm run format
npm run test:run
npm run build
```
## Quality + Publishing Scripts
<details>
<summary>scripts/quality/</summary>
| Script | Purpose |
|--------|---------|
| `all_quality.sh` | Repo-standard gate: autofix (ruff, eslint, prettier), then pyright, pytest, vitest, and frontend build. Run before finishing any code change. |
| `extended_quality.sh` | `all_quality.sh` plus e2e tests and Docker build matrix. Used for release validation. |
| `e2e.sh` | Thin wrapper that runs Playwright e2e tests from `tests/e2e/`. |
| `docker_ci.sh` | Builds the Docker image and runs a smoke test against it. |
| `test_aur_package.sh` | Builds the AUR package in an Arch container, then installs and boots it in a second container with port 8000 exposed (hang finish). |
| `run_aur_with_radio.sh` | Like `test_aur_package.sh` but passes through the host serial device for testing with a real radio (hang finish). |
</details>
<details>
<summary>scripts/build/</summary>
| Script | Purpose |
|--------|---------|
| `publish.sh` | Full release ceremony: quality gate, version bump, changelog, frontend build, Docker multi-arch push, GitHub release. |
| `release_common.sh` | Shared shell helpers (version validation, formatting) sourced by other build scripts. |
| `package_release_artifact.sh` | Builds the prebuilt-frontend release zip attached to GitHub releases. |
| `push_docker_multiarch.sh` | Builds and pushes multi-arch Docker images (amd64 + arm64). |
| `create_github_release.sh` | Creates a GitHub release with changelog notes and the release artifact. |
| `extract_release_notes.sh` | Extracts the latest version's notes from `CHANGELOG.md` for the release body. |
| `collect_licenses.sh` | Gathers third-party license attributions into `LICENSES.md`. |
| `print_frontend_licenses.cjs` | Helper that extracts frontend npm dependency licenses. |
| `dump_api_specs.py` | Dumps the OpenAPI spec from the running backend (developer utility). |
</details>
## E2E Testing
E2E tests exercise the full stack (backend + frontend + real radio hardware) via Playwright.
> [!WARNING]
> E2E tests are **not part of the normal development path** — most contributors will never need to run them. They exist to catch integration issues that unit tests can't and generally only need to be run by maintainers.
### Hardware requirements
- A MeshCore radio connected via serial (auto-detected, or set `MESHCORE_SERIAL_PORT`)
- The radio must be powered on and past its startup sequence before tests begin
### Running
```bash
cd tests/e2e
npm install
npx playwright install chromium # first time only
npx playwright test # headless
npx playwright test --headed # watch it run
```
The test harness starts its own uvicorn instance on port 8001 with a fresh temporary database. Your development server (port 8000) is unaffected.
### Test tiers
**Most tests (22 of 28) are fully self-contained.** They seed their own data via API calls or direct DB writes and need only a connected radio. These cover messaging, pagination, search, favorites, settings, fanout integrations, historical decryption, and all UI-only views.
**Mesh-traffic tests (tagged `@mesh-traffic`)** wait up to 3 minutes for an incoming message from another node on the network. If no traffic arrives, they fail with an advisory that the failure may be RF conditions, not a bug. These are: `incoming-message` and `packet-feed` (second test only).
**The partner-radio DM ACK test (tagged `@partner-radio`)** validates direct-route learning by sending a DM and waiting for an ACK. It requires a second radio in range that has your test radio in its contacts. Configure the partner node's public key and name via `E2E_PARTNER_RADIO_PUBKEY` and `E2E_PARTNER_RADIO_NAME`.
### Making mesh-traffic tests reliable: the echo bot
The most practical way to guarantee incoming traffic is to run an **echo bot on a second radio** monitoring a known channel. When the test suite starts a `@mesh-traffic` test, it sends a trigger message to that channel. If a bot on another radio is listening, it replies — generating the incoming RF packet the test needs within seconds instead of waiting for organic mesh traffic.
The test suite sends `!echo please give incoming message` to the echo channel (default `#flightless`) at the start of each `@mesh-traffic` test. The trigger message is configurable via `E2E_ECHO_TRIGGER_MESSAGE`.
Setup:
1. Set up a second MeshCore radio within RF range of your test radio
2. Run a RemoteTerm instance on the second radio
3. Configure a bot on the second radio that monitors the echo channel and replies when it sees the trigger. Example bot code:
```python
def bot(sender_name, sender_key, message_text, is_dm,
channel_key, channel_name, sender_timestamp, path):
if "!echo" in message_text.lower():
return f"[ECHO] {message_text}"
return None
```
4. The test suite calls `nudgeEchoBot()` automatically — no manual intervention needed
Without the echo bot, `@mesh-traffic` tests rely on organic traffic from other nodes. In a quiet RF environment they will time out.
### Environment variables
All E2E environment configuration is centralized in `tests/e2e/helpers/env.ts` with defaults that work for the maintainer's test rig. Override via environment variables:
| Variable | Default | Purpose |
|----------|---------|---------|
| `MESHCORE_SERIAL_PORT` | auto-detect | Serial port for the test radio |
| `E2E_ECHO_CHANNEL` | `#flightless` | Channel the echo bot monitors for traffic generation |
| `E2E_ECHO_TRIGGER_MESSAGE` | `!echo please give incoming message` | Message sent to nudge the echo bot |
| `E2E_PARTNER_RADIO_PUBKEY` | *(maintainer's test node)* | 64-char hex public key of a node that will ACK DMs from your radio |
| `E2E_PARTNER_RADIO_NAME` | *(maintainer's test node)* | Display name of that node (used in UI assertions) |
Example for a contributor with their own two-radio setup:
```bash
E2E_ECHO_CHANNEL="#mytest" \
E2E_PARTNER_RADIO_PUBKEY="abcd1234...full64charhexkey..." \
E2E_PARTNER_RADIO_NAME="MyTestNode" \
npx playwright test
```
## Pull Request Expectations
- Keep scope tight.
- Explain why the change is needed.
- Link the issue or discussion where the behavior was agreed on.
- Call out any follow-up work left intentionally undone.
- Do not treat code review as the place where the app's direction is first introduced or debated.
## Notes For Agent-Assisted Work
Before making non-trivial changes, read:
- `./AGENTS.md`
- `./app/AGENTS.md`
- `./frontend/AGENTS.md`
Read these only when working in those areas:
- `./app/fanout/AGENTS_fanout.md`
- `./frontend/src/components/visualizer/AGENTS_packet_visualizer.md`
- Agent output is welcome, but human review is mandatory.
- Agents should start with the AGENTS files above before making architectural changes.
- If a change touches advanced areas like fanout or the visualizer, read the area-specific AGENTS file before editing.
+12 -3
View File
@@ -1,20 +1,26 @@
# Stage 1: Build frontend
FROM node:20-slim AS frontend-builder
ARG COMMIT_HASH=unknown
WORKDIR /build
COPY frontend/package.json frontend/package-lock.json frontend/.npmrc ./
RUN npm ci
COPY frontend/ ./
RUN VITE_COMMIT_HASH=${COMMIT_HASH} npm run build
# Stage 2: Python runtime
FROM python:3.12-slim
ARG COMMIT_HASH=unknown
WORKDIR /app
ENV COMMIT_HASH=${COMMIT_HASH}
# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
@@ -27,6 +33,9 @@ RUN uv sync --frozen --no-dev
# Copy application code
COPY app/ ./app/
# Copy license attributions
COPY LICENSES.md ./
# Copy built frontend from first stage
COPY --from=frontend-builder /build/dist ./frontend/dist
@@ -35,5 +44,5 @@ RUN mkdir -p /app/data
EXPOSE 8000
# Run the application (we retain root for max compatibility)
CMD ["uv", "run", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
+1813
View File
File diff suppressed because it is too large
+150 -161
View File
@@ -2,26 +2,29 @@
Backend server + browser interface for MeshCore mesh radio networks. Connect your radio over Serial, TCP, or BLE, and then you can:
* Send and receive DMs and channel messages
* Cache all received packets, decrypting as you gain keys
* Run multiple Python bots that can analyze messages and respond to DMs and channels
* Monitor unlimited contacts and channels (radio limits don't apply -- packets are decrypted server-side)
* Access your radio remotely over your network or VPN
* Search for hashtag channel names for channels you don't have keys for yet
* Forward packets to MQTT, LetsMesh, MeshRank, SQS, Apprise, etc.
* Use the more recent 1.14 firmwares which support multibyte pathing
* Visualize the mesh as a map or node set, view repeater stats, and more!
For advanced setup and troubleshooting see [README_ADVANCED.md](README_ADVANCED.md). If you plan to contribute, read [CONTRIBUTING.md](CONTRIBUTING.md).
**Warning:** This app is for trusted environments only. _Do not put this on an untrusted network, or open it to the public._ You can optionally set `MESHCORE_BASIC_AUTH_USERNAME` and `MESHCORE_BASIC_AUTH_PASSWORD` for app-wide HTTP Basic auth, but that is only a coarse gate and must be paired with HTTPS. The bots can execute arbitrary Python code which means anyone who gets access to the app can, too. To completely disable the bot system, start the server with `MESHCORE_DISABLE_BOTS=true` — this prevents all bot execution and blocks bot configuration changes via the API. If you need stronger access control, consider using a reverse proxy like Nginx, or extending FastAPI; full access control and user management are outside the scope of this app.
![Screenshot of the application's web interface](app_screenshot.png)
## Disclaimer
This is entirely vibecoded slop -- no warranty of fitness for any purpose. It's been lovingly guided by an engineer with a passion for clean code and good tests, but it's still mostly LLM output, so you may find some bugs.
If extending, have your LLM read the three `AGENTS.md` files: `./AGENTS.md`, `./frontend/AGENTS.md`, and `./app/AGENTS.md`.
> [!WARNING]
> RemoteTerm does *full* management of the radio, meaning that once a radio is connected to RemoteTerm, all contacts/channels will be imported and offloaded to RemoteTerm and the contacts actually synced to the device will be governed by RemoteTerm. This means that RemoteTerm can be a poor fit for users who are looking to swap radios in and out, maintaining radio state (favorites, channels, etc.) irrespective of app usage.
## Requirements
- Python 3.11+
- Node.js LTS or current (20, 22, 24, 25) if you're not using a prebuilt release
- [UV](https://astral.sh/uv) package manager: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- MeshCore radio connected via USB serial, TCP, or BLE
@@ -39,23 +42,29 @@ ls /dev/ttyUSB* /dev/ttyACM*
#######
ls /dev/cu.usbserial-* /dev/cu.usbmodem*
###########
# Windows
###########
# In PowerShell:
Get-CimInstance Win32_SerialPort | Select-Object DeviceID, Caption
######
# WSL2
######
# Run this in an elevated PowerShell (not WSL) window
winget install usbipd
# restart console
# then find device ID
usbipd list
# make device shareable
usbipd bind --busid 3-8 # (or whatever the right ID is)
# attach device to WSL (run this each time you plug in the device)
usbipd attach --wsl --busid 3-8
# device will appear in WSL as /dev/ttyUSB0 or /dev/ttyACM0
```
</details>
## Install Path 1: Clone And Build
**This approach is recommended over Docker due to intermittent serial communications issues I've seen on \*nix systems.**
@@ -63,199 +72,179 @@ usbipd bind --busid 3-8
git clone https://github.com/jkingsman/Remote-Terminal-for-MeshCore.git
cd Remote-Terminal-for-MeshCore
# Install backend dependencies
uv sync
# Build frontend
cd frontend && npm install && npm run build && cd ..
# Run server
uv run uvicorn app.main:app --host 0.0.0.0 --port 8000
```
Access the app at http://localhost:8000.
Source checkouts expect a normal frontend build in `frontend/dist`.
> [!TIP]
> Running on lightweight hardware, or just don't want to build the frontend locally? From a cloned checkout, run `python3 scripts/setup/fetch_prebuilt_frontend.py` to fetch and unpack a prebuilt frontend into `frontend/prebuilt`, then start the app normally with `uv run uvicorn app.main:app --host 0.0.0.0 --port 8000`.
> [!NOTE]
> On Linux, you can also install RemoteTerm as a persistent `systemd` service that starts on boot and restarts automatically on failure:
>
> ```bash
> bash scripts/setup/install_service.sh
> ```
>
> For the full service workflow and post-install operations, see [README_ADVANCED.md](README_ADVANCED.md).
## Install Path 2: Docker
> **Warning:** Docker has intermittent issues with serial event subscriptions. The native method above is more reliable.
Local Docker builds are architecture-native by default. On Apple Silicon Macs and ARM64 Linux hosts such as Raspberry Pi, `docker compose build` / `docker compose up --build` will produce an ARM64 image unless you override the platform.
> **Note:** BLE-in-docker is outside the scope of this README, but the env vars should all still work.
For serial-device passthrough, use rootful Docker. In practice that usually means starting the stack with `sudo docker compose ...` unless your Docker daemon is already configured for rootful access via your user/group. Rootless Docker has been observed to fail on serial-device mappings even when the compose file itself is correct.
Create a local `docker-compose.yml` in one of two ways:
1. Copy the example file and edit it by hand:
```bash
cp docker-compose.example.yml docker-compose.yml
```
2. Or generate one interactively:
```bash
bash scripts/setup/install_docker.sh
```
> The interactive generator enables a self-signed (snakeoil) TLS certificate by default. If you accept the default, the app will be served over HTTPS and the generated compose file will include certificate mounts and an SSL command override. Decline if you prefer plain HTTP or plan to terminate TLS externally.
Your local `docker-compose.yml` is gitignored so future pulls do not overwrite your Docker settings.
The guided Docker flow can collect BLE settings, but BLE access from Docker still needs manual compose customization such as Bluetooth passthrough and possibly privileged mode or host networking. If you want the simpler path for BLE, use the regular Python launch flow instead.
Then customize the local compose file for your transport and launch:
```bash
sudo docker compose up # add -d for background once you validate it's working
```
The database is stored in `./data/` (bind-mounted), so the container shares the same database as the native app.
To rebuild after pulling updates:
```bash
sudo docker compose pull
sudo docker compose up -d
```
> If you switched to a local build (`build: .` instead of `image:`), use `sudo docker compose up -d --build` instead — `pull` only fetches remote images.
The example file and setup script default to the published Docker Hub image. To build locally from your checkout instead, replace:
```yaml
image: docker.io/jkingsman/remoteterm-meshcore:latest
```
with:
```yaml
build: .
```
Then run:
```bash
sudo docker compose up -d --build
```
The container runs as root by default for maximum serial passthrough compatibility across host setups. On Linux, if you switch between native and Docker runs, `./data` can end up root-owned. If you do not need that serial compatibility behavior, you can enable the optional `user: "${UID:-1000}:${GID:-1000}"` line in `docker-compose.yml` to keep ownership aligned with your host user.
To stop:
```bash
sudo docker compose down
```
## Install Path 3: Arch Linux (AUR)
A [`remoteterm-meshcore`](https://aur.archlinux.org/packages/remoteterm-meshcore) package is available in the AUR. Install it with an AUR helper or build it manually:
```bash
# with an AUR helper
yay -S remoteterm-meshcore
# or manually
git clone https://aur.archlinux.org/remoteterm-meshcore.git
cd remoteterm-meshcore
makepkg -si
```
Configure your radio connection, then start the service:
```bash
sudo vi /etc/remoteterm-meshcore/remoteterm.env
sudo systemctl enable --now remoteterm-meshcore
```
Access the app at http://localhost:8000.
## Standard Environment Variables
Only one transport may be active at a time. If multiple are set, the server will refuse to start (a sketch of this check follows the table).
| Variable | Default | Description |
|----------|---------|-------------|
| `MESHCORE_SERIAL_PORT` | (auto-detect) | Serial port path |
| `MESHCORE_SERIAL_BAUDRATE` | `115200` | Serial baud rate |
| `MESHCORE_TCP_HOST` | | TCP host (mutually exclusive with serial/BLE) |
| `MESHCORE_TCP_PORT` | `5000` | TCP port |
| `MESHCORE_BLE_ADDRESS` | | BLE device address (mutually exclusive with serial/TCP) |
| `MESHCORE_BLE_PIN` | | BLE PIN (required when BLE address is set) |
| `MESHCORE_LOG_LEVEL` | `INFO` | `DEBUG`, `INFO`, `WARNING`, `ERROR` |
| `MESHCORE_DATABASE_PATH` | `data/meshcore.db` | SQLite database path |
| `MESHCORE_MAX_RADIO_CONTACTS` | `200` | Max recent contacts to keep on radio for DM ACKs |
| `MESHCORE_DISABLE_BOTS` | `false` | Disable the bot system entirely (blocks execution and config; an intermediate security precaution, but not as strong as Basic Auth) |
| `MESHCORE_BASIC_AUTH_USERNAME` | | Optional app-wide HTTP Basic auth username; must be set together with `MESHCORE_BASIC_AUTH_PASSWORD` |
| `MESHCORE_BASIC_AUTH_PASSWORD` | | Optional app-wide HTTP Basic auth password; must be set together with `MESHCORE_BASIC_AUTH_USERNAME` |
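To make the exclusivity rule concrete, here is a minimal standalone sketch of the kind of check involved. It is illustrative only; the authoritative validation is the pydantic `model_validator` in `app/config.py`, shown later in this diff.
```python
# Illustrative sketch of transport exclusivity; the real check lives in the
# Settings model_validator in app/config.py.
import os

def active_transports() -> list[str]:
    """Return which transports are explicitly configured via env vars."""
    configured = []
    if os.environ.get("MESHCORE_SERIAL_PORT"):
        configured.append("serial")
    if os.environ.get("MESHCORE_TCP_HOST"):
        configured.append("tcp")
    if os.environ.get("MESHCORE_BLE_ADDRESS"):
        configured.append("ble")
    return configured

transports = active_transports()
if len(transports) > 1:
    # Mirrors the server's refusal to start with an ambiguous transport config.
    raise SystemExit(f"Multiple transports configured: {transports}")
```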
## Additional Setup
<details>
<summary>Common Launch Patterns</summary>
```bash
# Serial (explicit port)
MESHCORE_SERIAL_PORT=/dev/ttyUSB0 uv run uvicorn app.main:app --host 0.0.0.0 --port 8000
# TCP
MESHCORE_TCP_HOST=192.168.1.100 MESHCORE_TCP_PORT=5000 uv run uvicorn app.main:app --host 0.0.0.0 --port 8000
# BLE
MESHCORE_BLE_ADDRESS=AA:BB:CC:DD:EE:FF MESHCORE_BLE_PIN=123456 uv run uvicorn app.main:app --host 0.0.0.0 --port 8000
```
On Windows (PowerShell), set environment variables as a separate statement:
```powershell
$env:MESHCORE_SERIAL_PORT="COM8" # or your COM port
uv run uvicorn app.main:app --host 0.0.0.0 --port 8000
```
</details>
<details>
<summary>HTTPS (Required for WebGPU Cracking outside localhost)</summary>
WebGPU requires a secure context. When not on `localhost`, serve over HTTPS:
```bash
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes -subj '/CN=localhost'
uv run uvicorn app.main:app --host 0.0.0.0 --port 8000 --ssl-keyfile=key.pem --ssl-certfile=cert.pem
```
For Docker:
```bash
# generate TLS cert
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes -subj '/CN=localhost'
# run with cert
docker run -d \
  --device=/dev/ttyUSB0 \
  -v remoteterm-data:/app/data \
  -v $(pwd)/cert.pem:/app/cert.pem:ro \
  -v $(pwd)/key.pem:/app/key.pem:ro \
  -p 8000:8000 \
  jkingsman/remote-terminal-for-meshcore:latest \
  uv run uvicorn app.main:app --host 0.0.0.0 --port 8000 --ssl-keyfile=/app/key.pem --ssl-certfile=/app/cert.pem
```
Accept the browser warning, or use [mkcert](https://github.com/FiloSottile/mkcert) for locally-trusted certs.
If you enable Basic Auth, protect the app with HTTPS; HTTP Basic credentials are not safe on plain HTTP. Also note that the app's permissive CORS policy is a deliberate trusted-network tradeoff, so cross-origin browser JavaScript is not a reliable way to use that Basic Auth gate.
</details>
> [!WARNING]
> **Windows + MQTT fanout:** Python's default Windows event loop (ProactorEventLoop) is not compatible with the MQTT libraries used by RemoteTerm. If you configure any MQTT integration, add `--loop none` to your uvicorn command:
>
> ```powershell
> uv run uvicorn app.main:app --host 0.0.0.0 --port 8000 --loop none
> ```
>
> If you forget, the app will start normally but MQTT connections will fail and you'll see a toast in the UI with this same guidance.
<details>
<summary>Systemd Service (Linux)</summary>
```bash
# Create service user
sudo useradd -r -m -s /bin/false remoteterm
# Install to /opt/remoteterm
sudo mkdir -p /opt/remoteterm
sudo cp -r . /opt/remoteterm/
sudo chown -R remoteterm:remoteterm /opt/remoteterm
# Install dependencies
cd /opt/remoteterm
sudo -u remoteterm uv venv
sudo -u remoteterm uv sync
# Build frontend (required for the backend to serve the web UI)
cd /opt/remoteterm/frontend
sudo -u remoteterm npm install
sudo -u remoteterm npm run build
# Install and start service
sudo cp /opt/remoteterm/remoteterm.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now remoteterm
# Check status
sudo systemctl status remoteterm
sudo journalctl -u remoteterm -f
```
Edit `/etc/systemd/system/remoteterm.service` to set `MESHCORE_SERIAL_PORT` if needed.
</details>
<details>
<summary>Testing</summary>
**Backend:**
```bash
PYTHONPATH=. uv run pytest tests/ -v
```
**Frontend:**
```bash
cd frontend
npm run test:run
```
</details>
## Development
### Backend
```bash
uv sync
uv run uvicorn app.main:app --reload
# Or with explicit serial port
MESHCORE_SERIAL_PORT=/dev/ttyUSB0 uv run uvicorn app.main:app --reload
```
### Frontend
```bash
cd frontend
npm install
npm run dev # Dev server at http://localhost:5173 (proxies API to :8000)
npm run build # Production build to dist/
```
Run both the backend and `npm run dev` for hot-reloading frontend development.
### Code Quality & Tests
Please test, lint, format, and quality check your code before PRing or committing. At the least, run a lint + autoformat + pyright check on the backend, and a lint + autoformat on the frontend.
<details>
<summary>But how?</summary>
```bash
# python
uv run ruff check app/ tests/ --fix # lint + auto-fix
uv run ruff format app/ tests/ # format (always writes)
uv run pyright app/ # type checking
PYTHONPATH=. uv run pytest tests/ -v # backend tests
# frontend
cd frontend
npm run lint:fix # ESLint + auto-fix
npm run test:run # run tests
npm run format # prettier (always writes)
npm run build # build the frontend
```
</details>
## API Documentation
With the backend running: http://localhost:8000/docs
If extending, have your LLM read the three `AGENTS.md` files: `./AGENTS.md`, `./frontend/AGENTS.md`, and `./app/AGENTS.md`.
## Where To Go Next
- Advanced setup, troubleshooting, HTTPS, systemd, remediation variables, and debug logging: [README_ADVANCED.md](README_ADVANCED.md)
- Contributing, tests, linting, E2E notes, and important AGENTS files: [CONTRIBUTING.md](CONTRIBUTING.md)
- Live API docs after the backend is running: http://localhost:8000/docs
## Disclaimer
This is developed with very heavy agentic assistance -- there is no warranty of fitness for any purpose. It's been lovingly guided by an engineer with a passion for clean code and good tests, but it's still mostly LLM output, so you may find some bugs.
+80
@@ -0,0 +1,80 @@
# Advanced Setup And Troubleshooting
## Remediation Environment Variables
These are intended for diagnosing or working around radios that behave oddly.
| Variable | Default | Description |
|----------|---------|-------------|
| `MESHCORE_ENABLE_MESSAGE_POLL_FALLBACK` | false | Run aggressive 10-second `get_msg()` fallback polling to check for messages |
| `MESHCORE_FORCE_CHANNEL_SLOT_RECONFIGURE` | false | Disable channel-slot reuse and force `set_channel(...)` before every channel send |
| `__CLOWNTOWN_DO_CLOCK_WRAPAROUND` | false | Highly experimental: if the radio clock is ahead of system time, try forcing the clock to `0xFFFFFFFF`, wait for uint32 wraparound, and then retry normal time sync before falling back to reboot |
By default the app relies on radio events plus MeshCore auto-fetch for incoming messages, and also runs a low-frequency hourly audit poll. That audit checks both:
- whether messages were left on the radio without reaching the app through event subscription
- whether the app's channel-slot expectations still match the radio's actual channel listing
If the audit finds a mismatch, you'll see an error in the application UI and your logs. If you see that warning, or if messages on the radio never show up in the app, try `MESHCORE_ENABLE_MESSAGE_POLL_FALLBACK=true` to switch that task into a more aggressive 10-second safety net. If room sends appear to be using the wrong channel slot, or another client is changing slots underneath this app, try `MESHCORE_FORCE_CHANNEL_SLOT_RECONFIGURE=true` to force the radio to reconfigure the channel slot before every send (this delays each send by roughly 500ms).
`__CLOWNTOWN_DO_CLOCK_WRAPAROUND=true` is a last-resort clock remediation for nodes whose RTC is stuck in the future and where rescue-mode time setting or GPS-based time is not available. It intentionally relies on the clock rolling past the 32-bit epoch boundary, which is board-specific behavior and may not be safe or effective on all MeshCore targets. Treat it as highly experimental.
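For intuition, here is a hypothetical sketch of the wraparound sequence. The `radio` handle and its `get_time`/`set_time` methods are placeholders for illustration, not this app's actual API.
```python
# Hypothetical sketch of the uint32 clock-wraparound remediation; the radio
# object and its methods are placeholders, not RemoteTerm's real interfaces.
import time

UINT32_MAX = 0xFFFFFFFF

def attempt_clock_wraparound(radio, system_now: int) -> None:
    if radio.get_time() <= system_now:
        return  # radio clock is not ahead of system time; nothing to remediate
    # Push the RTC to the top of the 32-bit range so it rolls past zero soon.
    radio.set_time(UINT32_MAX)
    while radio.get_time() >= system_now:
        time.sleep(1)  # wait for the board-specific rollover past the epoch boundary
    # The clock is now behind real time, so a normal time sync can succeed.
    radio.set_time(system_now)
```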
## Sub-Path Reverse Proxy
RemoteTerm works behind a reverse proxy that serves it under a sub-path (e.g. `/meshcore/` or Home Assistant ingress). All frontend asset and API paths are relative, so they resolve correctly under any prefix.
**Requirements:**
- The proxy must ensure the sub-path URL has a **trailing slash**. If a user visits `/meshcore` (no slash), relative paths break. Most proxies handle this automatically; for Nginx, a `location /meshcore/ { ... }` block (note the trailing slash) does the right thing.
- For correct PWA install behavior, the proxy should forward `X-Forwarded-Prefix` (set to the sub-path, e.g. `/meshcore`) so the web manifest generates correct `start_url` and `scope` values. `X-Forwarded-Proto` and `X-Forwarded-Host` are also respected for origin resolution.
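As a rough illustration of how a backend can consume that header, here is a simplified sketch of prefix-aware manifest generation; it is not RemoteTerm's actual manifest code.
```python
# Simplified sketch: derive PWA start_url/scope from X-Forwarded-Prefix.
# Illustrative only; the app's real manifest logic differs.
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/manifest.webmanifest")
async def manifest(request: Request) -> dict:
    # e.g. "/meshcore" when proxied under a sub-path, "" when served at root
    prefix = request.headers.get("x-forwarded-prefix", "").rstrip("/")
    return {
        "name": "RemoteTerm",
        "start_url": f"{prefix}/",  # trailing slash keeps relative paths intact
        "scope": f"{prefix}/",
    }
```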
## HTTPS
WebGPU channel-finding requires a secure context when you are not on `localhost`.
Generate a local cert and start the backend with TLS:
```bash
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes -subj '/CN=localhost'
uv run uvicorn app.main:app --host 0.0.0.0 --port 8000 --ssl-keyfile=key.pem --ssl-certfile=cert.pem
```
For Docker Compose, generate the cert, mount it into the container, and override the launch command:
```yaml
services:
remoteterm:
volumes:
- ./data:/app/data
- ./cert.pem:/app/cert.pem:ro
- ./key.pem:/app/key.pem:ro
command: uv run uvicorn app.main:app --host 0.0.0.0 --port 8000 --ssl-keyfile=/app/key.pem --ssl-certfile=/app/cert.pem
```
Accept the browser warning, or use [mkcert](https://github.com/FiloSottile/mkcert) for locally-trusted certs.
## Systemd Service
On Linux systems, this is the recommended installation method if you want RemoteTerm set up as a persistent systemd service that starts automatically on boot and restarts automatically if it crashes. Run the installer script from the repo root. It runs as your current user, installs from wherever you cloned the repo, and prints a quick-reference cheatsheet when done — no separate service account or path juggling required.
```bash
bash scripts/setup/install_service.sh
```
You can also rerun the script later to change transport, bot, or auth settings. If the service is already running, the installer stops it, rewrites the unit file, reloads systemd, and starts it again with the new configuration.
## Debug Logging And Bug Reports
If you're experiencing issues or opening a bug report, please start the backend with debug logging enabled. Debug mode provides a much more detailed breakdown of radio communication, packet processing, and other internal operations, which makes it significantly easier to diagnose problems.
```bash
MESHCORE_LOG_LEVEL=DEBUG uv run uvicorn app.main:app --host 0.0.0.0 --port 8000
```
You can also navigate to `/api/debug` (or go to Settings -> About -> "Open debug support snapshot" at the bottom). This debug block contains information about the operating environment, expectations around keys and channels, and radio status. It also includes the most recent logs. **Non-log information reveals no keys, channel names, or other privileged information beyond the names of your bots. The logs, however, may contain channel names or keys (but never your private key).** If you do not wish to include this information, copy only up to the `STOP COPYING HERE` marker in the debug body.
## Development Notes
For day-to-day development, see [CONTRIBUTING.md](CONTRIBUTING.md).
Windows note: I've seen an intermittent startup issue like `"Received empty packet: index out of range"` with failed contact sync. I can't figure out why this happens, and it typically resolves on restart. If you can figure out why, I will buy you a virtual six pack (or an IRL one if you're in the PNW). As a former always-Windows-girlie before embracing WSL2, I despise second-classing M$FT users, but I'm just stuck with this one.
+259 -43
@@ -8,38 +8,69 @@ Keep it aligned with `app/` source files and router behavior.
- FastAPI
- aiosqlite
- Pydantic
- MeshCore Python library (`references/meshcore_py`)
- MeshCore Python library (`meshcore` from PyPI)
- PyCryptodome
## Code Ethos
- Prefer strong domain modules over layers of pass-through helpers.
- Split code when the new module owns real policy, not just a nicer name.
- Avoid wrapper services around globals unless they materially improve testability or reduce coupling.
- Keep workflows locally understandable; do not scatter one reasoning unit across several files without a clear contract.
- Typed write/read contracts are preferred over loose dict-shaped repository inputs.
## Backend Map
```text
app/
├── main.py # App startup/lifespan, router registration, static frontend mounting
├── config.py # Env-driven runtime settings
├── channel_constants.py # Public/default channel constants shared across sync/send logic
├── database.py # SQLite connection + base schema + migration runner
├── migrations.py # Schema migrations (SQLite user_version)
├── models.py # Pydantic request/response models
├── repository.py # Data access layer
├── radio.py # RadioManager + auto-reconnect monitor
├── models.py # Pydantic request/response models and typed write contracts (for example ContactUpsert)
├── version_info.py # Unified version/build metadata resolution for debug + startup surfaces
├── repository/ # Data access layer (contacts, channels, messages, raw_packets, settings, fanout)
├── services/ # Shared orchestration/domain services
│ ├── messages.py # Shared message creation, dedup, ACK application
│ ├── message_send.py # Direct send, channel send, resend workflows
│ ├── dm_ingest.py # Shared direct-message ingest / dedup seam for packet + fallback paths
│ ├── dm_ack_apply.py # Shared DM ACK application over pending/buffered ACK state
│ ├── dm_ack_tracker.py # Pending DM ACK state
│ ├── contact_reconciliation.py # Prefix-claim, sender-key backfill, name-history wiring
│ ├── radio_lifecycle.py # Post-connect setup and reconnect/setup helpers
│ ├── radio_commands.py # Radio config/private-key command workflows
│ ├── radio_stats.py # In-memory local radio stats sampling and noise-floor history
│ └── radio_runtime.py # Router/dependency seam over the global RadioManager
├── radio.py # RadioManager transport/session state + lock management
├── radio_sync.py # Polling, sync, periodic advertisement loop
├── decoder.py # Packet parsing/decryption
├── packet_processor.py # Raw packet pipeline, dedup, path handling
├── event_handlers.py # MeshCore event subscriptions and ACK tracking
├── events.py # Typed WS event payload serialization
├── websocket.py # WS manager + broadcast helpers
├── bot.py # Bot execution and outbound bot sends
├── security.py # Optional app-wide HTTP Basic auth middleware for HTTP + WS
├── fanout/ # Fanout bus: MQTT, bots, webhooks, Apprise, SQS (see fanout/AGENTS_fanout.md)
├── dependencies.py # Shared FastAPI dependency providers
├── path_utils.py # Path hex rendering and hop-width helpers
├── region_scope.py # Normalize/validate regional flood-scope values
├── keystore.py # Ephemeral private/public key storage for DM decryption
├── frontend_static.py # Mount/serve built frontend (production)
└── routers/
├── health.py
├── debug.py
├── radio.py
├── contacts.py
├── channels.py
├── messages.py
├── packets.py
├── read_state.py
├── rooms.py
├── server_control.py
├── settings.py
├── fanout.py
├── repeaters.py
├── statistics.py
└── ws.py
```
@@ -49,33 +80,76 @@ app/
1. Radio emits events.
2. `on_rx_log_data` stores raw packet and tries decrypt/pipeline handling.
3. Decrypted messages are inserted into `messages` and broadcast over WS.
4. `CONTACT_MSG_RECV` is a fallback DM path when packet pipeline cannot decrypt.
3. Shared message-domain services create/update `messages` and shape WS payloads.
4. Direct-message storage is centralized in `services/dm_ingest.py`; packet-processor DMs and `CONTACT_MSG_RECV` fallback events both route through that seam.
### Outgoing messages
1. Send endpoints in `routers/messages.py` call MeshCore commands.
2. Message is persisted as outgoing.
1. Send endpoints in `routers/messages.py` validate requests and delegate to `services/message_send.py`.
2. Service-layer send workflows call MeshCore commands, persist outgoing messages, and wire ACK tracking.
3. Endpoint broadcasts WS `message` event so all live clients update.
4. ACK/repeat updates arrive later as `message_acked` events.
5. Channel resend (`POST /messages/channel/{id}/resend`) strips the sender name prefix by exact match against the current radio name (see the sketch below). This assumes the radio name hasn't changed since the original send; name changes require an explicit radio config update and are rare, but the `new_timestamp=true` resend path has no time window, so a mismatch is possible if the name changed before a later resend.
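A minimal sketch of that exact-match strip, assuming the conventional `Name: text` framing of channel messages (the real logic lives in the resend workflow):
```python
# Illustrative: strip "<radio name>: " from a resent channel message only on
# exact prefix match, mirroring the documented resend behavior.
def strip_sender_prefix(text: str, radio_name: str) -> str:
    prefix = f"{radio_name}: "
    return text[len(prefix):] if text.startswith(prefix) else text
```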
### Connection lifecycle
- `RadioManager.start_connection_monitor()` checks health every 5s.
- On reconnect, monitor runs `post_connect_setup()` before broadcasting healthy state.
- Setup includes handler registration, key export, time sync, contact/channel sync, polling/advert tasks.
- `RadioManager.post_connect_setup()` delegates to `services/radio_lifecycle.py`.
- Routers, startup/lifespan code, fanout helpers, and `radio_sync.py` should reach radio state through `services/radio_runtime.py`, not by importing `app.radio.radio_manager` directly.
- Shared reconnect/setup helpers in `services/radio_lifecycle.py` are used by startup, the monitor, and manual reconnect/reboot flows before broadcasting healthy state.
- Setup still includes handler registration, key export, time sync, contact/channel sync, and advertisement tasks. The message-poll task always starts: by default it runs as a low-frequency hourly audit, and `MESHCORE_ENABLE_MESSAGE_POLL_FALLBACK=true` switches it to aggressive 10-second polling. That audit checks both missed-radio-message drift and channel-slot cache drift; cache mismatches are logged, toasted, and the send-slot cache is reset.
- Post-connect setup is timeout-bounded. If initial radio offload/setup hangs too long, the backend logs the failure and broadcasts an `error` toast telling the operator to reboot the radio and restart the server.
## Important Behaviors
### Multibyte routing
- Packet `path_len` values are hop counts, not byte counts.
- Hop width comes from the packet or radio `path_hash_mode`: `0` = 1-byte, `1` = 2-byte, `2` = 3-byte (see the sketch after this list).
- Channel slot count comes from firmware-reported `DEVICE_INFO.max_channels`; do not hardcode `40` when scanning/offloading channel slots.
- Channel sends use a session-local LRU slot cache after startup channel offload clears the radio. Repeated sends to the same channel reuse the loaded slot; new channels fill free slots up to the discovered channel capacity, then evict the least recently used cached channel.
- TCP radios do not reuse cached slot contents. For TCP, channel sends still force `set_channel(...)` before every send because this backend does not have exclusive device access.
- `MESHCORE_FORCE_CHANNEL_SLOT_RECONFIGURE=true` disables slot reuse on all transports and forces the old always-`set_channel(...)` behavior before every channel send.
- Contacts persist canonical direct-route fields (`direct_path`, `direct_path_len`, `direct_path_hash_mode`) so contact sync and outbound DM routing reuse the exact stored hop width instead of inferring from path bytes.
- Direct-route sources are limited to radio contact sync (`out_path`) and PATH/path-discovery updates. This mirrors firmware `onContactPathRecv(...)`, which replaces `ContactInfo.out_path` when a new returned path is heard.
- `route_override_path`, `route_override_len`, and `route_override_hash_mode` take precedence over the learned direct route for radio-bound sends.
- Advertisement paths are stored only in `contact_advert_paths` for analytics/visualization. They are not part of `Contact.to_radio_dict()` or DM route selection.
- `contact_advert_paths` identity is `(public_key, path_hex, path_len)` because the same hex bytes can represent different routes at different hop widths.
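A small illustrative helper for the hop-width mapping above; this is a sketch only, the app's real rendering helpers live in `path_utils.py`.
```python
# Sketch: render a path's hop identifiers given a hop count and hash mode,
# per the documented mapping (mode 0 = 1-byte hops, 1 = 2-byte, 2 = 3-byte).
def render_path(path: bytes, path_len: int, path_hash_mode: int) -> list[str]:
    hop_width = path_hash_mode + 1  # bytes per hop identifier
    assert len(path) >= path_len * hop_width  # path_len counts hops, not bytes
    return [
        path[i * hop_width:(i + 1) * hop_width].hex()
        for i in range(path_len)
    ]
```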
### Read/unread state
- Server is source of truth (`contacts.last_read_at`, `channels.last_read_at`).
- `GET /api/read-state/unreads` returns counts, mention flags, and `last_message_times`.
- `GET /api/read-state/unreads` returns counts, mention flags, `last_message_times`, and `last_read_ats`.
### DM ingest + ACKs
- `services/dm_ingest.py` is the one place that should decide fallback-context resolution, DM dedup/reconciliation, and packet-linked vs. content-based storage behavior.
- `CONTACT_MSG_RECV` is a fallback path, not a parallel source of truth. If you change DM storage behavior, trace both `event_handlers.py` and `packet_processor.py`.
- DM ACK tracking is an in-memory pending/buffered map in `services/dm_ack_tracker.py`, with periodic expiry from `radio_sync.py`.
- Outgoing DMs send once inline, store/broadcast immediately after the first successful `MSG_SENT`, then may retry up to 2 more times in the background only when the initial `MSG_SENT` result includes an expected ACK code and the message remains unacked.
- DM retry timing follows the firmware-provided `suggested_timeout` from `PACKET_MSG_SENT`; do not replace it with a fixed app timeout unless you intentionally want more aggressive duplicate-prone retries.
- Direct-message send behavior is intended to emulate `meshcore_py.commands.send_msg_with_retry(...)` when the radio provides an expected ACK code: stage the effective contact route on the radio, send, wait for ACK, and on the final retry force flood via `reset_path(...)`.
- Non-final DM attempts use the contact's effective route (`override > direct > flood`). The final retry is intentionally sent as flood even when a routing override exists.
- DM ACK state is terminal on first ACK. Retry attempts may register multiple expected ACK codes for the same message, but sibling pending codes are cleared once one ACK wins so a DM should not accrue multiple delivery confirmations from retries.
- ACKs are delivery state, not routing state. Bundled ACKs inside PATH packets still satisfy pending DM sends, but ACK history does not feed contact route learning.
### Echo/repeat dedup
- Message uniqueness: `(type, conversation_key, text, sender_timestamp)`.
- Duplicate insert is treated as an echo/repeat; ACK count/path list is updated.
- Duplicate insert is treated as an echo/repeat: the new path (if any) is appended, and the ACK count is incremented only for outgoing channel messages. Incoming direct messages with the same conversation/text/sender timestamp also collapse onto one stored row, with later observations merging path data instead of creating a second DM.
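Sketched as a Python tuple, the channel identity key looks like this; actual enforcement is via the unique indexes in `database.py` (`idx_messages_dedup_null_safe`, plus the incoming-DM variant that also folds in `sender_key`).
```python
# Sketch of the message identity used for echo/repeat dedup; the real
# enforcement is SQL unique indexes, not application-level tuples.
def channel_identity(msg: dict) -> tuple:
    return (
        msg["type"],                       # 'CHAN' (incoming DMs add sender_key)
        msg["conversation_key"],
        msg["text"],
        msg.get("sender_timestamp") or 0,  # NULL-safe, like COALESCE(..., 0)
    )
```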
### Raw packet dedup policy
- Raw packet storage deduplicates by payload hash (`RawPacketRepository.create`), excluding routing/path bytes.
- Stored packet `id` is therefore a payload identity, not a per-arrival identity.
- Realtime raw-packet WS broadcasts include `observation_id` (unique per RF arrival) in addition to `id`.
- Frontend packet-feed features should key/dedupe by `observation_id`; use `id` only as the storage reference.
- Message-layer repeat handling (`_handle_duplicate_message` + `MessageRepository.add_path`) is separate from raw-packet storage dedup.
### Contact sync throttle
- `sync_recent_contacts_to_radio()` sets `_last_contact_sync = now` before the sync completes.
- This is intentional: if sync fails, the next attempt is still throttled to prevent a retry-storm against a flaky radio. Contacts will resync on the next scheduled cycle or on reconnect.
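A sketch of that ordering with illustrative names (the real implementation lives in the sync code):
```python
# Illustrative: the throttle timestamp is recorded *before* the sync runs, so
# a failed sync cannot trigger an immediate retry storm against a flaky radio.
import time

class SyncState:
    last_contact_sync: float = 0.0

async def sync_recent_contacts_to_radio(state: SyncState, radio) -> None:
    state.last_contact_sync = time.monotonic()  # set first, even if sync fails
    await radio.sync_contacts()  # placeholder call; may raise, next cycle retries
```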
### Periodic advertisement
@@ -83,59 +157,106 @@ app/
- `0` means disabled.
- Last send time tracked in `app_settings.last_advert_time`.
### Fanout bus
- All external integrations (MQTT, bots, webhooks, Apprise, SQS) are managed through the fanout bus (`app/fanout/`).
- Configs stored in `fanout_configs` table, managed via `GET/POST/PATCH/DELETE /api/fanout`.
- `broadcast_event()` in `websocket.py` dispatches to the fanout manager for `message`, `raw_packet`, and `contact` events.
- `on_message` and `on_raw` are scope-gated. `on_contact`, `on_telemetry`, and `on_health` are dispatched to all modules unconditionally (modules filter internally).
- Repeater telemetry broadcasts are emitted after `RepeaterTelemetryRepository.record()` in both `radio_sync.py` (auto-collect) and `routers/repeaters.py` (manual fetch).
- The 60-second radio stats sampling loop in `radio_stats.py` dispatches an enriched health snapshot (radio identity + full stats) to all fanout modules after each sample.
- Community MQTT publishes raw packets only, but its derived `path` field for direct packets is emitted as comma-separated hop identifiers, not flat path bytes.
- See `app/fanout/AGENTS_fanout.md` for full architecture details and event payload shapes.
## API Surface (all under `/api`)
### Health
- `GET /health`
### Debug
- `GET /debug` — support snapshot with recent logs, live radio probe, slot/contact audits, and version/git info
### Radio
- `GET /radio/config`
- `PATCH /radio/config`
- `GET /radio/config` — includes `path_hash_mode`, `path_hash_mode_supported`, advert-location on/off, and `multi_acks_enabled`
- `PATCH /radio/config` — may update `path_hash_mode` (`0..2`) when firmware supports it, and `multi_acks_enabled`
- `PUT /radio/private-key`
- `POST /radio/advertise`
- `POST /radio/advertise` — manual advert send; request body may set `mode` to `flood` or `zero_hop` (defaults to `flood`)
- `POST /radio/discover` — short mesh discovery sweep for nearby repeaters/sensors
- `POST /radio/trace` — send a multi-hop trace loop through known repeaters and back to the local radio
- `POST /radio/disconnect`
- `POST /radio/reboot`
- `POST /radio/reconnect`
### Contacts
- `GET /contacts`
- `GET /contacts/{public_key}`
- `GET /contacts/analytics` — unified keyed-or-name analytics payload
- `GET /contacts/repeaters/advert-paths` — recent advert paths for all contacts
- `POST /contacts`
- `POST /contacts/bulk-delete`
- `DELETE /contacts/{public_key}`
- `POST /contacts/sync`
- `POST /contacts/{public_key}/add-to-radio`
- `POST /contacts/{public_key}/remove-from-radio`
- `POST /contacts/{public_key}/mark-read`
- `POST /contacts/{public_key}/telemetry`
- `POST /contacts/{public_key}/command`
- `POST /contacts/{public_key}/routing-override`
- `POST /contacts/{public_key}/trace`
- `POST /contacts/{public_key}/path-discovery` — discover forward/return paths, persist the learned direct route, and sync it back to the radio best-effort
- `POST /contacts/{public_key}/repeater/login`
- `POST /contacts/{public_key}/repeater/status`
- `POST /contacts/{public_key}/repeater/lpp-telemetry`
- `POST /contacts/{public_key}/repeater/neighbors`
- `POST /contacts/{public_key}/repeater/acl`
- `POST /contacts/{public_key}/repeater/node-info`
- `POST /contacts/{public_key}/repeater/radio-settings`
- `POST /contacts/{public_key}/repeater/advert-intervals`
- `POST /contacts/{public_key}/repeater/owner-info`
- `POST /contacts/{public_key}/room/login`
- `POST /contacts/{public_key}/room/status`
- `POST /contacts/{public_key}/room/lpp-telemetry`
- `POST /contacts/{public_key}/room/acl`
### Channels
- `GET /channels`
- `GET /channels/{key}`
- `GET /channels/{key}/detail`
- `POST /channels`
- `POST /channels/bulk-hashtag`
- `DELETE /channels/{key}`
- `POST /channels/sync`
- `POST /channels/{key}/flood-scope-override`
- `POST /channels/{key}/path-hash-mode-override`
- `POST /channels/{key}/mark-read`
### Messages
- `GET /messages`
- `GET /messages` — list with filters; supports `q` (full-text search), `after`/`after_id` (forward cursor)
- `GET /messages/around/{message_id}` — context messages around a target (for jump-to-message navigation)
- `POST /messages/direct`
- `POST /messages/channel`
- `POST /messages/channel/{message_id}/resend`
### Packets
- `GET /packets/undecrypted/count`
- `GET /packets/{packet_id}` — fetch one stored raw packet by row ID for on-demand inspection
- `POST /packets/decrypt/historical`
- `POST /packets/maintenance`
### Read state
- `GET /read-state/unreads`
- `GET /read-state/unreads` — counts, mention flags, `last_message_times`, and `last_read_ats`
- `POST /read-state/mark-all-read`
### Settings
- `GET /settings`
- `PATCH /settings`
- `POST /settings/favorites/toggle`
- `POST /settings/migrate`
- `POST /settings/blocked-keys/toggle`
- `POST /settings/blocked-names/toggle`
- `POST /settings/tracked-telemetry/toggle`
### Fanout
- `GET /fanout` — list all fanout configs
- `POST /fanout` — create new fanout config
- `PATCH /fanout/{id}` — update fanout config (triggers module reload)
- `DELETE /fanout/{id}` — delete fanout config (stops module)
- `POST /fanout/bots/disable-until-restart` — stop bot modules and keep bots disabled until restart
### Statistics
- `GET /statistics` — aggregated mesh network stats (entity counts, message/packet splits, activity windows, busiest channels)
### WebSocket
- `WS /ws`
@@ -144,39 +265,60 @@ app/
- `health` — radio connection status (broadcast on change, personal on connect)
- `contact` — single contact upsert (from advertisements and radio sync)
- `contact_resolved` — prefix contact reconciled to a full contact row (payload: `{ previous_public_key, contact }`)
- `message` — new message (channel or DM, from packet processor or send endpoints)
- `message_acked` — ACK/echo update for existing message (ack count + paths)
- `raw_packet` — every incoming RF packet (for real-time packet feed UI)
- `error` — toast notification (reconnect failure, missing private key, etc.)
- `contact_deleted` — contact removed from database (payload: `{ public_key }`)
- `channel` — single channel upsert/update (payload: full `Channel`)
- `channel_deleted` — channel removed from database (payload: `{ key }`)
- `error` — toast notification (reconnect failure, missing private key, stuck radio startup, etc.)
- `success` — toast notification (historical decrypt complete, etc.)
Initial WS connect sends `health` only. Contacts/channels are loaded by REST.
Backend WS sends go through typed serialization in `events.py`. Initial WS connect sends `health` only. Contacts/channels are loaded by REST.
Client sends `"ping"` text; server replies `{"type":"pong"}`.
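A hedged client-side example of that handshake using the third-party `websockets` package; the `ws://localhost:8000/api/ws` URL assumes the default port and the `/api` prefix noted above.
```python
# Minimal client heartbeat against the WS endpoint; loops past the initial
# `health` event until the pong frame arrives.
import asyncio
import json
import websockets

async def heartbeat() -> None:
    async with websockets.connect("ws://localhost:8000/api/ws") as ws:
        await ws.send("ping")  # plain text, per the contract above
        while True:
            frame = json.loads(await ws.recv())  # first frame is the health event
            if frame.get("type") == "pong":
                break

asyncio.run(heartbeat())
```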
## Data Model Notes
Main tables:
- `contacts`
- `contacts` (includes `first_seen` for contact age tracking and `direct_path_hash_mode` / `route_override_*` for DM routing)
- `channels`
- `messages`
Includes optional `flood_scope_override` for channel-specific regional sends and optional `path_hash_mode_override` for per-channel path hop width.
- `messages` (includes `sender_name`, `sender_key` for per-contact channel message attribution)
- `raw_packets`
- `contact_advert_paths` (recent unique advertisement paths per contact, keyed by contact + path bytes + hop count)
- `contact_name_history` (tracks name changes over time)
- `repeater_telemetry_history` (time-series telemetry snapshots for tracked repeaters)
- `fanout_configs` (MQTT, bot, webhook, Apprise, SQS integration configs)
- `app_settings`
Contact route state is canonicalized on the backend:
- stored route inputs: `direct_path`, `direct_path_len`, `direct_path_hash_mode`, `direct_path_updated_at`, plus optional `route_override_*`
- computed route surface: `effective_route`, `effective_route_source`, `direct_route`, `route_override`
- removed legacy names: `last_path`, `last_path_len`, `out_path_hash_mode`
Frontend and send paths should consume the canonical route surface rather than reconstructing precedence from raw fields.
Repository writes should prefer typed models such as `ContactUpsert` over ad hoc dict payloads when adding or updating schema-coupled data.
`max_radio_contacts` is the configured radio contact capacity baseline. Favorites reload first, the app refills non-favorite working-set contacts to about 80% of that capacity, and periodic offload triggers once occupancy reaches about 95%.
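In concrete numbers, with the default capacity of 200:
```python
# Worked arithmetic for the documented thresholds (illustrative only).
max_radio_contacts = 200                          # configured capacity baseline
refill_target = int(max_radio_contacts * 0.80)    # non-favorites refill to ~80%
offload_trigger = int(max_radio_contacts * 0.95)  # offload kicks in near ~95%
print(refill_target, offload_trigger)             # 160 190
```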
`app_settings` fields in active model:
- `max_radio_contacts`
- `experimental_channel_double_send`
- `favorites`
- `auto_decrypt_dm_on_advert`
- `sidebar_sort_order`
- `last_message_times`
- `preferences_migrated`
- `advert_interval`
- `last_advert_time`
- `bots`
- `flood_scope`
- `blocked_keys`, `blocked_names`, `discovery_blocked_types`
- `tracked_telemetry_repeaters`
- `auto_resend_channel`
Note: MQTT, community MQTT, and bot configs were migrated to the `fanout_configs` table (migrations 36-38).
## Security Posture (intentional)
- No authn/authz.
- No per-user authn/authz model; optionally, operators may enable app-wide HTTP Basic auth for both HTTP and WS entrypoints.
- No CORS restriction (`*`).
- Bot code executes user-provided Python via `exec()`.
@@ -190,13 +332,87 @@ Run backend tests:
PYTHONPATH=. uv run pytest tests/ -v
```
High-signal suites:
- `tests/test_packet_pipeline.py`
- `tests/test_event_handlers.py`
- `tests/test_send_messages.py`
- `tests/test_radio.py`
- `tests/test_api.py`
- `tests/test_migrations.py`
Test suites:
```text
tests/
├── conftest.py # Shared fixtures
├── test_ack_tracking_wiring.py # DM ACK tracking extraction and wiring
├── test_api.py # REST endpoint integration tests
├── test_block_lists.py # Blocked keys/names filtering across list/search surfaces
├── test_bot.py # Bot execution and sandboxing
├── test_channel_sender_backfill.py # Sender-key backfill uniqueness rules for channel messages
├── test_channels_router.py # Channels router endpoints
├── test_community_mqtt.py # Community MQTT publisher (JWT, packet format, hash, broadcast)
├── test_config.py # Configuration validation
├── test_contact_reconciliation_service.py # Prefix/contact reconciliation service helpers
├── test_contacts_router.py # Contacts router endpoints
├── test_decoder.py # Packet parsing/decryption
├── test_disable_bots.py # MESHCORE_DISABLE_BOTS=true feature
├── test_echo_dedup.py # Echo/repeat deduplication (incl. concurrent)
├── test_fanout.py # Fanout bus CRUD, scope matching, manager dispatch
├── test_fanout_hitlist.py # Fanout-related hitlist regression tests
├── test_fanout_integration.py # Fanout integration tests
├── test_event_handlers.py # ACK tracking, event registration, cleanup
├── test_frontend_static.py # Frontend static file serving
├── test_health_mqtt_status.py # Health endpoint MQTT status field
├── test_http_quality.py # Cache-control / gzip / basic-auth HTTP quality checks
├── test_key_normalization.py # Public key normalization
├── test_keystore.py # Ephemeral keystore
├── test_main_startup.py # App startup and lifespan
├── test_map_upload.py # Map upload fanout module
├── test_message_pagination.py # Cursor-based message pagination
├── test_message_prefix_claim.py # Message prefix claim logic
├── test_mqtt.py # MQTT publisher topic routing and lifecycle
├── test_messages_search.py # Message search, around, forward pagination
├── test_migrations.py # Schema migration system
├── test_packet_pipeline.py # End-to-end packet processing
├── test_packets_router.py # Packets router endpoints (decrypt, maintenance)
├── test_path_utils.py # Path hex rendering helpers
├── test_radio.py # RadioManager, serial detection
├── test_radio_commands_service.py # Radio config/private-key service workflows
├── test_radio_lifecycle_service.py # Reconnect/setup orchestration helpers
├── test_radio_operation.py # radio_operation() context manager
├── test_radio_router.py # Radio router endpoints
├── test_radio_runtime_service.py # radio_runtime seam behavior and helpers
├── test_radio_sync.py # Polling, sync, advertisement
├── test_real_crypto.py # Real cryptographic operations
├── test_repeater_routes.py # Repeater command/telemetry/trace + granular pane endpoints
├── test_repository.py # Data access layer
├── test_room_routes.py # Room-server login/status/telemetry/ACL endpoints
├── test_rx_log_data.py # on_rx_log_data event handler integration
├── test_security.py # Optional Basic Auth middleware / config behavior
├── test_send_messages.py # Outgoing messages, bot triggers, concurrent sends
├── test_settings_router.py # Settings endpoints, advert validation
├── test_statistics.py # Statistics aggregation
├── test_version_info.py # Version/build metadata resolution
├── test_websocket.py # WS manager broadcast/cleanup
└── test_websocket_route.py # WS endpoint lifecycle
```
## Errata & Known Non-Issues
### Sender timestamps are 1-second resolution (protocol constraint)
The MeshCore radio protocol encodes `sender_timestamp` as a 4-byte little-endian integer (Unix seconds). This is a firmware-level wire format — the radio, the Python library (`commands/messaging.py`), and the decoder (`decoder.py`) all read/write exactly 4 bytes. Millisecond Unix timestamps would overflow 4 bytes, so higher resolution is not possible without a firmware change.
**Consequence:** Message dedup still operates at 1-second granularity because the radio protocol only provides second-resolution `sender_timestamp`. Do not attempt to fix this by switching to millisecond timestamps — it will break echo dedup (the echo's 4-byte timestamp won't match the stored value) and overflow `to_bytes(4, "little")`. Incoming DMs now share the same second-resolution content identity tradeoff as channel echoes: same-contact same-text same-second observations collapse onto one stored row.
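A quick demonstration of the 4-byte constraint:
```python
# Why millisecond timestamps cannot work on the wire: the protocol field is a
# 4-byte little-endian unsigned int of Unix *seconds*.
import time

now_s = int(time.time())
wire = now_s.to_bytes(4, "little")               # the 4-byte LE seconds field
assert int.from_bytes(wire, "little") == now_s   # round-trips fine (until 2106)

try:
    (now_s * 1000).to_bytes(4, "little")         # milliseconds do not fit
except OverflowError:
    print("millisecond timestamps overflow the 4-byte wire field")
```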
### Outgoing DM echoes remain undecrypted
When our own outgoing DM is heard back via `RX_LOG_DATA` (self-echo, loopback), `_process_direct_message` passes `our_public_key=None` for the outgoing direction, disabling the outbound hash check in the decoder. The decoder's inbound check (`src_hash == their_first_byte`) fails because the source is us, not the contact — so decryption returns `None`. This is by design: outgoing DMs are stored directly by the send endpoint, so no message is lost.
### Infinite setup retry on connection monitor
When `post_connect_setup()` fails (e.g. `export_and_store_private_key` raises `RuntimeError` because the radio didn't respond), `_setup_complete` is never set to `True`. The connection monitor sees `connected and not setup_complete` and retries every 5 seconds — indefinitely. This is intentional: the radio may be rebooting, waking from sleep, or otherwise temporarily unresponsive. We keep retrying so that setup completes automatically once the radio becomes available, without requiring manual intervention.
### DELETE channel returns 200 for non-existent keys
`DELETE /api/channels/{key}` returns `{"status": "ok"}` even if the key didn't exist. This is intentional — the postcondition is "channel doesn't exist," which is satisfied regardless of whether it existed before. No 404 needed.
### Contact lat/lon 0.0 vs NULL
MeshCore uses `0.0` as the sentinel for "no GPS coordinates" (see `models.py` `to_radio_dict`). The upsert SQL uses `COALESCE(excluded.lat, contacts.lat)`, which preserves existing values when the new value is `NULL` — but `0.0` is not `NULL`, so it overwrites previously valid coordinates. This is intentional: we always want the most recent location data. If a device stops broadcasting GPS, the old coordinates are presumably stale/wrong, so overwriting with "not available" (`0.0`) is the correct behavior.
## Editing Checklist
-298
@@ -1,298 +0,0 @@
"""
Bot execution module for automatic message responses.
This module provides functionality for executing user-defined Python code
in response to incoming messages. The user's code can process message data
and optionally return a response string or a list of strings.
SECURITY WARNING: This executes arbitrary Python code provided by the user.
It should only be enabled on trusted systems where the user understands
the security implications.
"""
import asyncio
import logging
import time
from concurrent.futures import ThreadPoolExecutor
from typing import Any
from fastapi import HTTPException
logger = logging.getLogger(__name__)
# Limit concurrent bot executions to prevent resource exhaustion
_bot_semaphore = asyncio.Semaphore(100)
# Dedicated thread pool for bot execution (separate from default executor)
_bot_executor = ThreadPoolExecutor(max_workers=100, thread_name_prefix="bot_")
# Timeout for bot code execution (seconds)
BOT_EXECUTION_TIMEOUT = 10
# Minimum spacing between bot message sends (seconds)
# This ensures repeaters have time to return to listening mode
BOT_MESSAGE_SPACING = 2.0
# Global state for rate limiting bot sends
_bot_send_lock = asyncio.Lock()
_last_bot_send_time: float = 0.0
def execute_bot_code(
code: str,
sender_name: str | None,
sender_key: str | None,
message_text: str,
is_dm: bool,
channel_key: str | None,
channel_name: str | None,
sender_timestamp: int | None,
path: str | None,
) -> str | list[str] | None:
"""
Execute user-provided bot code with message context.
The code should define a function:
`bot(sender_name, sender_key, message_text, is_dm, channel_key, channel_name, sender_timestamp, path)`
that returns either None (no response), a string (single response message),
or a list of strings (multiple messages sent in order).
Args:
code: Python code defining the bot function
sender_name: Display name of the sender (may be None)
sender_key: 64-char hex public key of sender for DMs, None for channel messages
message_text: The message content
is_dm: True for direct messages, False for channel messages
channel_key: 32-char hex channel key for channel messages, None for DMs
channel_name: Channel name (e.g. "#general" with hash), None for DMs
sender_timestamp: Sender's timestamp from the message (may be None)
path: Hex-encoded routing path (may be None)
Returns:
Response string, list of strings, or None.
Note: This executes arbitrary code. Only use with trusted input.
"""
if not code or not code.strip():
return None
# Build execution namespace with allowed imports
namespace: dict[str, Any] = {
"__builtins__": __builtins__,
}
try:
# Execute the user's code to define the bot function
exec(code, namespace)
except Exception as e:
logger.warning("Bot code compilation failed: %s", e)
return None
# Check if bot function was defined
if "bot" not in namespace or not callable(namespace["bot"]):
logger.debug("Bot code does not define a callable 'bot' function")
return None
bot_func = namespace["bot"]
try:
# Call the bot function with message context
result = bot_func(
sender_name,
sender_key,
message_text,
is_dm,
channel_key,
channel_name,
sender_timestamp,
path,
)
# Validate result
if result is None:
return None
if isinstance(result, str):
return result if result.strip() else None
if isinstance(result, list):
# Filter to non-empty strings only
valid_messages = [msg for msg in result if isinstance(msg, str) and msg.strip()]
return valid_messages if valid_messages else None
logger.debug("Bot function returned unsupported type: %s", type(result))
return None
except Exception as e:
logger.warning("Bot function execution failed: %s", e)
return None
async def process_bot_response(
response: str | list[str],
is_dm: bool,
sender_key: str,
channel_key: str | None,
) -> None:
"""
Send the bot's response message(s) using the existing message sending endpoints.
For DMs, sends a direct message back to the sender.
For channel messages, sends to the same channel.
Bot messages are rate-limited to ensure at least BOT_MESSAGE_SPACING seconds
between sends, giving repeaters time to return to listening mode.
Args:
response: The response text to send, or a list of messages to send in order
is_dm: Whether the original message was a DM
sender_key: Public key of the original sender (for DM replies)
channel_key: Channel key for channel message replies
"""
# Normalize to list for uniform processing
messages = [response] if isinstance(response, str) else response
for message_text in messages:
await _send_single_bot_message(message_text, is_dm, sender_key, channel_key)
async def _send_single_bot_message(
message_text: str,
is_dm: bool,
sender_key: str,
channel_key: str | None,
) -> None:
"""
Send a single bot message with rate limiting.
Args:
message_text: The message text to send
is_dm: Whether the original message was a DM
sender_key: Public key of the original sender (for DM replies)
channel_key: Channel key for channel message replies
"""
global _last_bot_send_time
from app.models import SendChannelMessageRequest, SendDirectMessageRequest
from app.routers.messages import send_channel_message, send_direct_message
# Serialize bot sends and enforce minimum spacing
async with _bot_send_lock:
# Calculate how long since last bot send
now = time.monotonic()
time_since_last = now - _last_bot_send_time
if _last_bot_send_time > 0 and time_since_last < BOT_MESSAGE_SPACING:
wait_time = BOT_MESSAGE_SPACING - time_since_last
logger.debug("Rate limiting bot send, waiting %.2fs", wait_time)
await asyncio.sleep(wait_time)
try:
if is_dm:
logger.info("Bot sending DM reply to %s", sender_key[:12])
request = SendDirectMessageRequest(destination=sender_key, text=message_text)
await send_direct_message(request)
elif channel_key:
logger.info("Bot sending channel reply to %s", channel_key[:8])
request = SendChannelMessageRequest(channel_key=channel_key, text=message_text)
await send_channel_message(request)
else:
logger.warning("Cannot send bot response: no destination")
return # Don't update timestamp if we didn't send
except HTTPException as e:
logger.error("Bot failed to send response: %s", e.detail)
return # Don't update timestamp on failure
except Exception as e:
logger.error("Bot failed to send response: %s", e)
return # Don't update timestamp on failure
# Update last send time after successful send
_last_bot_send_time = time.monotonic()
async def run_bot_for_message(
sender_name: str | None,
sender_key: str | None,
message_text: str,
is_dm: bool,
channel_key: str | None,
channel_name: str | None = None,
sender_timestamp: int | None = None,
path: str | None = None,
is_outgoing: bool = False,
) -> None:
"""
Run all enabled bots for a message (incoming or outgoing).
This is the main entry point called by message handlers after
a message is successfully decrypted and stored. Bots run serially,
and errors in one bot don't prevent others from running.
Args:
sender_name: Display name of the sender
sender_key: 64-char hex public key of sender (DMs only, None for channels)
message_text: The message content
is_dm: True for direct messages, False for channel messages
channel_key: Channel key for channel messages
channel_name: Channel name (e.g. "#general"), None for DMs
sender_timestamp: Sender's timestamp from the message
path: Hex-encoded routing path
is_outgoing: Whether this is our own outgoing message
"""
# Early check if any bots are enabled (will re-check after sleep)
from app.repository import AppSettingsRepository
settings = await AppSettingsRepository.get()
enabled_bots = [b for b in settings.bots if b.enabled and b.code.strip()]
if not enabled_bots:
return
async with _bot_semaphore:
logger.debug(
"Running %d bot(s) for message from %s (is_dm=%s)",
len(enabled_bots),
sender_name or (sender_key[:12] if sender_key else "unknown"),
is_dm,
)
# Wait for the initiating message's retransmissions to propagate through the mesh
await asyncio.sleep(2)
# Re-check settings after sleep (user may have changed bot config)
settings = await AppSettingsRepository.get()
enabled_bots = [b for b in settings.bots if b.enabled and b.code.strip()]
if not enabled_bots:
logger.debug("All bots disabled during wait, skipping")
return
# Run each enabled bot serially
loop = asyncio.get_event_loop()
for bot in enabled_bots:
logger.debug("Executing bot '%s'", bot.name)
try:
response = await asyncio.wait_for(
loop.run_in_executor(
_bot_executor,
execute_bot_code,
bot.code,
sender_name,
sender_key,
message_text,
is_dm,
channel_key,
channel_name,
sender_timestamp,
path,
),
timeout=BOT_EXECUTION_TIMEOUT,
)
except asyncio.TimeoutError:
logger.warning(
"Bot '%s' execution timed out after %ds", bot.name, BOT_EXECUTION_TIMEOUT
)
continue # Continue to next bot
except Exception as e:
logger.warning("Bot '%s' execution error: %s", bot.name, e)
continue # Continue to next bot
# Send response if any
if response:
await process_bot_response(response, is_dm, sender_key or "", channel_key)
+10
@@ -0,0 +1,10 @@
PUBLIC_CHANNEL_KEY = "8B3387E9C5CDEA6AC9E5EDBAA115CD72"
PUBLIC_CHANNEL_NAME = "Public"
def is_public_channel_key(key: str) -> bool:
return key.upper() == PUBLIC_CHANNEL_KEY
def is_public_channel_name(name: str) -> bool:
return name.casefold() == PUBLIC_CHANNEL_NAME.casefold()
+168 -6
@@ -1,7 +1,10 @@
import logging
import logging.config
from collections import deque
from threading import Lock
from typing import Literal
from pydantic import model_validator
from pydantic import Field, model_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
@@ -11,11 +14,21 @@ class Settings(BaseSettings):
serial_port: str = "" # Empty string triggers auto-detection
serial_baudrate: int = 115200
tcp_host: str = ""
tcp_port: int = 4000
tcp_port: int = 5000
ble_address: str = ""
ble_pin: str = ""
log_level: Literal["DEBUG", "INFO", "WARNING", "ERROR"] = "INFO"
database_path: str = "data/meshcore.db"
disable_bots: bool = False
enable_message_poll_fallback: bool = False
force_channel_slot_reconfigure: bool = False
clowntown_do_clock_wraparound: bool = Field(
default=False,
validation_alias="__CLOWNTOWN_DO_CLOCK_WRAPAROUND",
)
skip_post_connect_sync: bool = False
basic_auth_username: str = ""
basic_auth_password: str = ""
@model_validator(mode="after")
def validate_transport_exclusivity(self) -> "Settings":
@@ -33,6 +46,11 @@ class Settings(BaseSettings):
)
if self.ble_address and not self.ble_pin:
raise ValueError("MESHCORE_BLE_PIN is required when MESHCORE_BLE_ADDRESS is set.")
if self.basic_auth_partially_configured:
raise ValueError(
"MESHCORE_BASIC_AUTH_USERNAME and MESHCORE_BASIC_AUTH_PASSWORD "
"must be set together."
)
return self
@property
@@ -43,14 +61,158 @@ class Settings(BaseSettings):
return "ble"
return "serial"
@property
def basic_auth_enabled(self) -> bool:
return bool(self.basic_auth_username and self.basic_auth_password)
@property
def basic_auth_partially_configured(self) -> bool:
any_credentials_set = bool(self.basic_auth_username or self.basic_auth_password)
return any_credentials_set and not self.basic_auth_enabled
settings = Settings()
class _RingBufferLogHandler(logging.Handler):
"""Keep a bounded in-memory tail of formatted log lines."""
def __init__(self, max_lines: int = 1000) -> None:
super().__init__()
self._buffer: deque[str] = deque(maxlen=max_lines)
self._lock = Lock()
def emit(self, record: logging.LogRecord) -> None:
try:
line = self.format(record)
except Exception:
self.handleError(record)
return
with self._lock:
self._buffer.append(line)
def get_lines(self, limit: int = 1000) -> list[str]:
with self._lock:
if limit <= 0:
return []
return list(self._buffer)[-limit:]
def clear(self) -> None:
with self._lock:
self._buffer.clear()
_recent_log_handler = _RingBufferLogHandler(max_lines=1000)
def get_recent_log_lines(limit: int = 1000) -> list[str]:
"""Return recent formatted log lines from the in-memory ring buffer."""
return _recent_log_handler.get_lines(limit)
def clear_recent_log_lines() -> None:
"""Clear the in-memory log ring buffer."""
_recent_log_handler.clear()
class _RepeatSquelch(logging.Filter):
"""Suppress rapid-fire identical messages and emit a summary instead.
Attached to the ``meshcore`` library logger to catch its repeated
"Serial Connection started" lines that flood the log when another
process holds the serial port.
"""
def __init__(self, threshold: int = 3) -> None:
super().__init__()
self._last_msg: str | None = None
self._repeat_count: int = 0
self._threshold = threshold
def filter(self, record: logging.LogRecord) -> bool:
msg = record.getMessage()
if msg == self._last_msg:
self._repeat_count += 1
if self._repeat_count == self._threshold:
record.msg = (
"%s (repeated %d times — possible serial port contention from another process)"
)
record.args = (msg, self._repeat_count)
record.levelno = logging.WARNING
record.levelname = "WARNING"
return True
# Suppress further repeats beyond the threshold
return self._repeat_count < self._threshold
else:
self._last_msg = msg
self._repeat_count = 1
return True
def setup_logging() -> None:
"""Configure logging for the application."""
logging.basicConfig(
level=settings.log_level,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
logging.config.dictConfig(
{
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"default": {
"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
"datefmt": "%Y-%m-%d %H:%M:%S",
},
"uvicorn_access": {
"()": "uvicorn.logging.AccessFormatter",
"fmt": '%(asctime)s - %(name)s - %(levelname)s - %(client_addr)s - "%(request_line)s" %(status_code)s',
"datefmt": "%Y-%m-%d %H:%M:%S",
"use_colors": None,
},
},
"handlers": {
"default": {
"class": "logging.StreamHandler",
"formatter": "default",
},
"uvicorn_access": {
"class": "logging.StreamHandler",
"formatter": "uvicorn_access",
},
},
"root": {
"level": settings.log_level,
"handlers": ["default"],
},
"loggers": {
"uvicorn": {
"level": settings.log_level,
"handlers": ["default"],
"propagate": False,
},
"uvicorn.error": {
"level": settings.log_level,
"handlers": ["default"],
"propagate": False,
},
"uvicorn.access": {
"level": settings.log_level,
"handlers": ["uvicorn_access"],
"propagate": False,
},
},
}
)
_recent_log_handler.setLevel(logging.DEBUG)
_recent_log_handler.setFormatter(
logging.Formatter(
fmt="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
)
for logger_name in ("", "uvicorn", "uvicorn.error", "uvicorn.access"):
target = logging.getLogger(logger_name)
if _recent_log_handler not in target.handlers:
target.addHandler(_recent_log_handler)
# Squelch repeated messages from the meshcore library (e.g. rapid-fire
# "Serial Connection started" when the port is contended).
logging.getLogger("meshcore").addFilter(_RepeatSquelch())
+142 -19
@@ -7,27 +7,39 @@ from app.config import settings
logger = logging.getLogger(__name__)
SCHEMA = """
SCHEMA_TABLES = """
CREATE TABLE IF NOT EXISTS contacts (
public_key TEXT PRIMARY KEY,
name TEXT,
type INTEGER DEFAULT 0,
flags INTEGER DEFAULT 0,
last_path TEXT,
last_path_len INTEGER DEFAULT -1,
direct_path TEXT,
direct_path_len INTEGER,
direct_path_hash_mode INTEGER,
direct_path_updated_at INTEGER,
route_override_path TEXT,
route_override_len INTEGER,
route_override_hash_mode INTEGER,
last_advert INTEGER,
lat REAL,
lon REAL,
last_seen INTEGER,
on_radio INTEGER DEFAULT 0,
last_contacted INTEGER
last_contacted INTEGER,
first_seen INTEGER,
last_read_at INTEGER,
favorite INTEGER DEFAULT 0
);
CREATE TABLE IF NOT EXISTS channels (
key TEXT PRIMARY KEY,
name TEXT NOT NULL,
is_hashtag INTEGER DEFAULT 0,
on_radio INTEGER DEFAULT 0
on_radio INTEGER DEFAULT 0,
flood_scope_override TEXT,
path_hash_mode_override INTEGER,
last_read_at INTEGER,
favorite INTEGER DEFAULT 0
);
CREATE TABLE IF NOT EXISTS messages (
@@ -37,16 +49,18 @@ CREATE TABLE IF NOT EXISTS messages (
text TEXT NOT NULL,
sender_timestamp INTEGER,
received_at INTEGER NOT NULL,
path TEXT,
paths TEXT,
txt_type INTEGER DEFAULT 0,
signature TEXT,
outgoing INTEGER DEFAULT 0,
acked INTEGER DEFAULT 0,
-- Deduplication: identical text + timestamp in the same conversation is treated as a
-- mesh echo/repeat. Second-precision timestamps mean two intentional identical messages
-- within the same second would collide, but this is not feasible in practice — LoRa
-- transmission takes several seconds per message, and the UI clears the input on send.
UNIQUE(type, conversation_key, text, sender_timestamp)
sender_name TEXT,
sender_key TEXT
-- Deduplication: channel echoes/repeats use a content/time unique index so
-- duplicate observations reconcile onto a single stored row. Legacy
-- databases may also gain an incoming-DM content index via migration 44.
-- Enforced via idx_messages_dedup_null_safe (unique index) rather than a table constraint
-- to avoid the storage overhead of SQLite's autoindex duplicating every message text.
);
CREATE TABLE IF NOT EXISTS raw_packets (
@@ -54,15 +68,98 @@ CREATE TABLE IF NOT EXISTS raw_packets (
timestamp INTEGER NOT NULL,
data BLOB NOT NULL,
message_id INTEGER,
payload_hash TEXT,
FOREIGN KEY (message_id) REFERENCES messages(id)
payload_hash BLOB,
FOREIGN KEY (message_id) REFERENCES messages(id) ON DELETE SET NULL
);
CREATE INDEX IF NOT EXISTS idx_messages_conversation ON messages(type, conversation_key);
CREATE TABLE IF NOT EXISTS contact_advert_paths (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS contact_name_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
name TEXT NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
UNIQUE(public_key, name),
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS app_settings (
id INTEGER PRIMARY KEY CHECK (id = 1),
max_radio_contacts INTEGER DEFAULT 200,
favorites TEXT DEFAULT '[]',
auto_decrypt_dm_on_advert INTEGER DEFAULT 1,
last_message_times TEXT DEFAULT '{}',
preferences_migrated INTEGER DEFAULT 0,
advert_interval INTEGER DEFAULT 0,
last_advert_time INTEGER DEFAULT 0,
flood_scope TEXT DEFAULT '',
blocked_keys TEXT DEFAULT '[]',
blocked_names TEXT DEFAULT '[]',
discovery_blocked_types TEXT DEFAULT '[]',
tracked_telemetry_repeaters TEXT DEFAULT '[]',
auto_resend_channel INTEGER DEFAULT 0
);
INSERT OR IGNORE INTO app_settings (id) VALUES (1);
CREATE TABLE IF NOT EXISTS fanout_configs (
id TEXT PRIMARY KEY,
type TEXT NOT NULL,
name TEXT NOT NULL,
enabled INTEGER DEFAULT 0,
config TEXT NOT NULL DEFAULT '{}',
scope TEXT NOT NULL DEFAULT '{}',
sort_order INTEGER DEFAULT 0,
created_at INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS repeater_telemetry_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
timestamp INTEGER NOT NULL,
data TEXT NOT NULL,
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
);
"""
# Indexes are created after migrations so that legacy databases have all
# required columns (e.g. sender_key, added by migration 25) before index
# creation runs.
SCHEMA_INDEXES = """
CREATE INDEX IF NOT EXISTS idx_messages_received ON messages(received_at);
CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_dedup_null_safe
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))
WHERE type = 'CHAN';
CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_incoming_priv_dedup
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0), COALESCE(sender_key, ''))
WHERE type = 'PRIV' AND outgoing = 0;
CREATE INDEX IF NOT EXISTS idx_messages_sender_key ON messages(sender_key);
CREATE INDEX IF NOT EXISTS idx_messages_pagination
ON messages(type, conversation_key, received_at DESC, id DESC);
CREATE INDEX IF NOT EXISTS idx_messages_unread_covering
ON messages(type, conversation_key, outgoing, received_at);
CREATE INDEX IF NOT EXISTS idx_raw_packets_message_id ON raw_packets(message_id);
CREATE INDEX IF NOT EXISTS idx_raw_packets_timestamp ON raw_packets(timestamp);
CREATE UNIQUE INDEX IF NOT EXISTS idx_raw_packets_payload_hash ON raw_packets(payload_hash);
CREATE INDEX IF NOT EXISTS idx_contacts_on_radio ON contacts(on_radio);
CREATE INDEX IF NOT EXISTS idx_contacts_type_last_seen ON contacts(type, last_seen);
CREATE INDEX IF NOT EXISTS idx_messages_type_received_conversation
ON messages(type, received_at, conversation_key);
CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent
ON contact_advert_paths(public_key, last_seen DESC);
CREATE INDEX IF NOT EXISTS idx_contact_name_history_key
ON contact_name_history(public_key, last_seen DESC);
CREATE INDEX IF NOT EXISTS idx_repeater_telemetry_pk_ts
ON repeater_telemetry_history(public_key, timestamp);
"""
@@ -76,15 +173,41 @@ class Database:
Path(self.db_path).parent.mkdir(parents=True, exist_ok=True)
self._connection = await aiosqlite.connect(self.db_path)
self._connection.row_factory = aiosqlite.Row
await self._connection.executescript(SCHEMA)
await self._connection.commit()
logger.debug("Database schema initialized")
# Run any pending migrations
# WAL mode: faster writes, concurrent readers during writes, no journal file churn.
# Persists in the DB file but we set it explicitly on every connection.
await self._connection.execute("PRAGMA journal_mode = WAL")
# Incremental auto-vacuum: freed pages are reclaimable via
# PRAGMA incremental_vacuum without a full VACUUM. Must be set before
# the first table is created (for new databases); for existing databases
# migration 20 handles the one-time VACUUM to restructure the file.
await self._connection.execute("PRAGMA auto_vacuum = INCREMENTAL")
# Foreign key enforcement: must be set per-connection (not persisted).
# Disabled during schema init and migrations to avoid issues with
# historical table-rebuild migrations that may temporarily violate
# constraints, then re-enabled for all subsequent application queries.
await self._connection.execute("PRAGMA foreign_keys = OFF")
await self._connection.executescript(SCHEMA_TABLES)
await self._connection.commit()
logger.debug("Database tables initialized")
# Run any pending migrations before creating indexes, so that
# legacy databases have all required columns first.
from app.migrations import run_migrations
await run_migrations(self._connection)
await self._connection.executescript(SCHEMA_INDEXES)
await self._connection.commit()
logger.debug("Database indexes initialized")
# Enable FK enforcement for all application queries from this point on.
await self._connection.execute("PRAGMA foreign_keys = ON")
logger.debug("Foreign key enforcement enabled")
async def disconnect(self) -> None:
if self._connection:
await self._connection.close()
+159 -82
@@ -58,6 +58,28 @@ class DecryptedDirectMessage:
message: str
dest_hash: str # First byte of destination pubkey as hex
src_hash: str # First byte of sender pubkey as hex
signed_sender_prefix: str | None = None
@property
def txt_type(self) -> int:
return self.flags >> 2
@property
def attempt(self) -> int:
return self.flags & 0x03
@dataclass
class DecryptedPathPayload:
"""Result of decrypting a PATH payload."""
dest_hash: str
src_hash: str
returned_path: bytes
returned_path_len: int
returned_path_hash_mode: int
extra_type: int
extra: bytes
@dataclass
@@ -79,18 +101,14 @@ class PacketInfo:
route_type: RouteType
payload_type: PayloadType
payload_version: int
path_length: int
path: bytes # The routing path (empty if path_length is 0)
path_length: int # Decoded hop count (not the raw wire byte)
path: bytes # The routing path bytes (empty if path_length is 0)
payload: bytes
path_hash_size: int = 1 # Bytes per hop: 1, 2, or 3
def calculate_channel_hash(channel_key: bytes) -> str:
"""
Calculate the channel hash from a 16-byte channel key.
Returns the first byte of SHA256(key) as hex.
"""
hash_bytes = hashlib.sha256(channel_key).digest()
return format(hash_bytes[0], "02x")
def _is_valid_advert_location(lat: float, lon: float) -> bool:
return -90 <= lat <= 90 and -180 <= lon <= 180
def extract_payload(raw_packet: bytes) -> bytes | None:
@@ -100,86 +118,36 @@ def extract_payload(raw_packet: bytes) -> bytes | None:
Packet structure:
- Byte 0: header (route_type, payload_type, version)
- For TRANSPORT routes: bytes 1-4 are transport codes
- Next byte: path_length
- Next path_length bytes: path data
- Next byte: path byte (packed as [hash_mode:2][hop_count:6])
- Next hop_count * hash_size bytes: path data
- Remaining: payload
Returns the payload bytes, or None if packet is malformed.
"""
if len(raw_packet) < 2:
return None
from app.path_utils import parse_packet_envelope
try:
header = raw_packet[0]
route_type = header & 0x03
offset = 1
# Skip transport codes if present (TRANSPORT_FLOOD=0, TRANSPORT_DIRECT=3)
if route_type in (0x00, 0x03):
if len(raw_packet) < offset + 4:
return None
offset += 4
# Get path length
if len(raw_packet) < offset + 1:
return None
path_length = raw_packet[offset]
offset += 1
# Skip path data
if len(raw_packet) < offset + path_length:
return None
offset += path_length
# Rest is payload
return raw_packet[offset:]
except (ValueError, IndexError):
return None
envelope = parse_packet_envelope(raw_packet)
return envelope.payload if envelope is not None else None
def parse_packet(raw_packet: bytes) -> PacketInfo | None:
"""Parse a raw packet and extract basic info."""
if len(raw_packet) < 2:
from app.path_utils import parse_packet_envelope
envelope = parse_packet_envelope(raw_packet)
if envelope is None:
return None
try:
header = raw_packet[0]
route_type = RouteType(header & 0x03)
payload_type = PayloadType((header >> 2) & 0x0F)
payload_version = (header >> 6) & 0x03
offset = 1
# Skip transport codes if present
if route_type in (RouteType.TRANSPORT_FLOOD, RouteType.TRANSPORT_DIRECT):
if len(raw_packet) < offset + 4:
return None
offset += 4
# Get path length
if len(raw_packet) < offset + 1:
return None
path_length = raw_packet[offset]
offset += 1
# Extract path data
if len(raw_packet) < offset + path_length:
return None
path = raw_packet[offset : offset + path_length]
offset += path_length
# Rest is payload
payload = raw_packet[offset:]
return PacketInfo(
route_type=route_type,
payload_type=payload_type,
payload_version=payload_version,
path_length=path_length,
path=path,
payload=payload,
route_type=RouteType(envelope.route_type),
payload_type=PayloadType(envelope.payload_type),
payload_version=envelope.payload_version,
path_length=envelope.hop_count,
path_hash_size=envelope.hash_size,
path=envelope.path,
payload=envelope.payload,
)
except (ValueError, IndexError):
except ValueError:
return None
@@ -282,7 +250,7 @@ def try_decrypt_packet_with_channel_key(
return None
packet_channel_hash = format(packet_info.payload[0], "02x")
expected_hash = calculate_channel_hash(channel_key)
expected_hash = format(hashlib.sha256(channel_key).digest()[0], "02x")
if packet_channel_hash != expected_hash:
return None
@@ -301,7 +269,9 @@ def get_packet_payload_type(raw_packet: bytes) -> PayloadType | None:
return None
def parse_advertisement(payload: bytes) -> ParsedAdvertisement | None:
def parse_advertisement(
payload: bytes, raw_packet: bytes | None = None
) -> ParsedAdvertisement | None:
"""
Parse an advertisement payload.
@@ -327,11 +297,13 @@ def parse_advertisement(payload: bytes) -> ParsedAdvertisement | None:
# Parse fixed-position fields
public_key = payload[0:32].hex()
timestamp = int.from_bytes(payload[32:36], byteorder="little")
# signature = payload[36:100] # Not currently verified
flags = payload[100]
# Parse flags
# Parse flags — clamp device_role to valid range (0-4); corrupted
# advertisements can have junk in the lower nibble.
device_role = flags & 0x0F
if device_role > 4:
device_role = 0
has_location = bool(flags & 0x10)
has_feature1 = bool(flags & 0x20)
has_feature2 = bool(flags & 0x40)
@@ -358,6 +330,16 @@ def parse_advertisement(payload: bytes) -> ParsedAdvertisement | None:
lon_raw = int.from_bytes(payload[offset + 4 : offset + 8], byteorder="little", signed=True)
lat = lat_raw / 1_000_000
lon = lon_raw / 1_000_000
if not _is_valid_advert_location(lat, lon):
packet_hex = (raw_packet if raw_packet is not None else payload).hex().upper()
logger.warning(
"Dropping location data for nonsensical packet -- packet %s implies lat/lon %s/%s. Outta this world!",
packet_hex,
lat,
lon,
)
lat = None
lon = None
offset += 8
# Skip feature fields if present
@@ -528,10 +510,19 @@ def decrypt_direct_message(payload: bytes, shared_secret: bytes) -> DecryptedDir
# Extract message text (UTF-8, null-padded)
message_bytes = decrypted[5:]
signed_sender_prefix: str | None = None
txt_type = flags >> 2
if txt_type == 2:
if len(message_bytes) < 4:
return None
signed_sender_prefix = message_bytes[:4].hex()
message_bytes = message_bytes[4:]
try:
message_text = message_bytes.decode("utf-8")
# Remove null terminator and any padding
message_text = message_text.rstrip("\x00")
# Truncate at first null terminator (consistent with channel message handling)
null_idx = message_text.find("\x00")
if null_idx >= 0:
message_text = message_text[:null_idx]
except UnicodeDecodeError:
return None
@@ -541,6 +532,7 @@ def decrypt_direct_message(payload: bytes, shared_secret: bytes) -> DecryptedDir
message=message_text,
dest_hash=dest_hash,
src_hash=src_hash,
signed_sender_prefix=signed_sender_prefix,
)
@@ -604,3 +596,88 @@ def try_decrypt_dm(
return None
return decrypt_direct_message(packet_info.payload, shared_secret)
def decrypt_path_payload(payload: bytes, shared_secret: bytes) -> DecryptedPathPayload | None:
"""Decrypt a PATH payload using the ECDH shared secret."""
if len(payload) < 4:
return None
dest_hash = format(payload[0], "02x")
src_hash = format(payload[1], "02x")
mac = payload[2:4]
ciphertext = payload[4:]
if len(ciphertext) == 0 or len(ciphertext) % 16 != 0:
return None
calculated_mac = hmac.new(shared_secret, ciphertext, hashlib.sha256).digest()[:2]
if calculated_mac != mac:
return None
try:
cipher = AES.new(shared_secret[:16], AES.MODE_ECB)
decrypted = cipher.decrypt(ciphertext)
except Exception as e:
logger.debug("AES decryption failed for PATH payload: %s", e)
return None
if len(decrypted) < 2:
return None
from app.path_utils import decode_path_byte
packed_len = decrypted[0]
try:
returned_path_len, hash_size = decode_path_byte(packed_len)
except ValueError:
return None
path_byte_len = returned_path_len * hash_size
if len(decrypted) < 1 + path_byte_len + 1:
return None
offset = 1
returned_path = decrypted[offset : offset + path_byte_len]
offset += path_byte_len
extra_type = decrypted[offset] & 0x0F
offset += 1
extra = decrypted[offset:]
return DecryptedPathPayload(
dest_hash=dest_hash,
src_hash=src_hash,
returned_path=returned_path,
returned_path_len=returned_path_len,
returned_path_hash_mode=hash_size - 1,
extra_type=extra_type,
extra=extra,
)
def try_decrypt_path(
raw_packet: bytes,
our_private_key: bytes,
their_public_key: bytes,
our_public_key: bytes,
) -> DecryptedPathPayload | None:
"""Try to decrypt a raw packet as a PATH packet."""
packet_info = parse_packet(raw_packet)
if packet_info is None or packet_info.payload_type != PayloadType.PATH:
return None
if len(packet_info.payload) < 4:
return None
dest_hash = packet_info.payload[0]
src_hash = packet_info.payload[1]
if dest_hash != our_public_key[0] or src_hash != their_public_key[0]:
return None
try:
shared_secret = derive_shared_secret(our_private_key, their_public_key)
except Exception as e:
logger.debug("Failed to derive shared secret for PATH payload: %s", e)
return None
return decrypt_path_payload(packet_info.payload, shared_secret)
-17
@@ -1,17 +0,0 @@
"""Shared dependencies for FastAPI routers."""
from fastapi import HTTPException
from app.radio import radio_manager
def require_connected():
"""Dependency that ensures radio is connected and returns meshcore instance.
Raises HTTPException 503 if radio is not connected.
"""
if getattr(radio_manager, "is_setup_in_progress", False) is True:
raise HTTPException(status_code=503, detail="Radio is initializing")
if not radio_manager.is_connected or radio_manager.meshcore is None:
raise HTTPException(status_code=503, detail="Radio not connected")
return radio_manager.meshcore
+173 -129
@@ -1,13 +1,25 @@
import asyncio
import logging
import time
from typing import TYPE_CHECKING
from meshcore import EventType
from app.models import CONTACT_TYPE_REPEATER, Contact
from app.models import CONTACT_TYPE_ROOM, Contact, ContactUpsert
from app.packet_processor import process_raw_packet
from app.repository import AmbiguousPublicKeyPrefixError, ContactRepository, MessageRepository
from app.repository import (
ContactRepository,
)
from app.services import dm_ack_tracker
from app.services.contact_reconciliation import (
promote_prefix_contacts_for_contact,
record_contact_name_and_reconcile,
)
from app.services.dm_ack_apply import apply_dm_ack_code
from app.services.dm_ingest import (
ingest_fallback_direct_message,
resolve_direct_message_sender_metadata,
resolve_fallback_direct_message_context,
)
from app.websocket import broadcast_event
if TYPE_CHECKING:
@@ -20,31 +32,14 @@ logger = logging.getLogger(__name__)
_active_subscriptions: list["Subscription"] = []
# Track pending ACKs: expected_ack_code -> (message_id, timestamp, timeout_ms)
_pending_acks: dict[str, tuple[int, float, int]] = {}
def track_pending_ack(expected_ack: str, message_id: int, timeout_ms: int) -> bool:
"""Compatibility wrapper for pending DM ACK tracking."""
return dm_ack_tracker.track_pending_ack(expected_ack, message_id, timeout_ms)
def track_pending_ack(expected_ack: str, message_id: int, timeout_ms: int) -> None:
"""Track a pending ACK for a direct message."""
_pending_acks[expected_ack] = (message_id, time.time(), timeout_ms)
logger.debug(
"Tracking pending ACK %s for message %d (timeout %dms)",
expected_ack,
message_id,
timeout_ms,
)
def _cleanup_expired_acks() -> None:
"""Remove expired pending ACKs."""
now = time.time()
expired = []
for code, (_msg_id, created_at, timeout_ms) in _pending_acks.items():
if now - created_at > (timeout_ms / 1000) * 2: # 2x timeout as buffer
expired.append(code)
for code in expired:
del _pending_acks[code]
logger.debug("Expired pending ACK %s", code)
def cleanup_expired_acks() -> None:
"""Compatibility wrapper for expiring stale DM ACK entries."""
dm_ack_tracker.cleanup_expired_acks()
async def on_contact_message(event: "Event") -> None:
@@ -57,8 +52,8 @@ async def on_contact_message(event: "Event") -> None:
2. The packet processor couldn't match the sender to a known contact
The packet processor handles: decryption, storage, broadcast, bot trigger.
This handler only stores if the packet processor didn't already handle it
(detected via INSERT OR IGNORE returning None for duplicates).
This handler adapts CONTACT_MSG_RECV payloads into the shared DM ingest
workflow, which reconciles duplicates against the packet pipeline when possible.
"""
payload = event.payload
@@ -72,94 +67,68 @@ async def on_contact_message(event: "Event") -> None:
sender_pubkey = payload.get("public_key") or payload.get("pubkey_prefix", "")
received_at = int(time.time())
# Look up contact from database - use prefix lookup only if needed
# (get_by_key_or_prefix does exact match first, then prefix fallback)
try:
contact = await ContactRepository.get_by_key_or_prefix(sender_pubkey)
except AmbiguousPublicKeyPrefixError:
logger.warning(
"DM sender prefix '%s' is ambiguous; storing under prefix until full key is known",
sender_pubkey,
)
contact = None
if contact:
sender_pubkey = contact.public_key.lower()
# Promote any prefix-stored messages to this full key
await MessageRepository.claim_prefix_messages(sender_pubkey)
# Skip messages from repeaters - they only send CLI responses, not chat messages.
# CLI responses are handled by the command endpoint and txt_type filter above.
if contact.type == CONTACT_TYPE_REPEATER:
logger.debug(
"Skipping message from repeater %s (not stored in chat history)",
sender_pubkey[:12],
)
return
# Try to create message - INSERT OR IGNORE handles duplicates atomically
# If the packet processor already stored this message, this returns None
msg_id = await MessageRepository.create(
msg_type="PRIV",
text=payload.get("text", ""),
conversation_key=sender_pubkey,
sender_timestamp=payload.get("sender_timestamp") or received_at,
context = await resolve_fallback_direct_message_context(
sender_public_key=sender_pubkey,
received_at=received_at,
path=payload.get("path"),
broadcast_fn=broadcast_event,
contact_repository=ContactRepository,
log=logger,
)
if context.skip_storage:
logger.debug(
"Skipping message from repeater %s (not stored in chat history)",
context.conversation_key[:12],
)
return
# Try to create or reconcile the message via the shared DM ingest service.
ts = payload.get("sender_timestamp")
sender_timestamp = ts if ts is not None else received_at
path = payload.get("path")
path_len = payload.get("path_len")
sender_name = context.sender_name
sender_key = context.sender_key
signature = payload.get("signature")
if (
context.contact is not None
and context.contact.type == CONTACT_TYPE_ROOM
and txt_type == 2
and isinstance(signature, str)
and signature
):
sender_name, sender_key = await resolve_direct_message_sender_metadata(
sender_public_key=signature,
received_at=received_at,
broadcast_fn=broadcast_event,
contact_repository=ContactRepository,
log=logger,
)
message = await ingest_fallback_direct_message(
conversation_key=context.conversation_key,
text=payload.get("text", ""),
sender_timestamp=sender_timestamp,
received_at=received_at,
path=path,
path_len=path_len,
txt_type=txt_type,
signature=payload.get("signature"),
signature=signature,
sender_name=sender_name,
sender_key=sender_key,
broadcast_fn=broadcast_event,
update_last_contacted_key=context.contact.public_key.lower() if context.contact else None,
)
if msg_id is None:
if message is None:
# Already handled by packet processor (or exact duplicate) - nothing more to do
logger.debug("DM from %s already processed by packet processor", sender_pubkey[:12])
logger.debug(
"DM from %s already processed by packet processor", context.conversation_key[:12]
)
return
# If we get here, the packet processor didn't handle this message
# (likely because private key export is not available)
logger.debug("DM from %s handled by event handler (fallback path)", sender_pubkey[:12])
# Build paths array for broadcast
path = payload.get("path")
paths = [{"path": path or "", "received_at": received_at}] if path is not None else None
# Broadcast the new message
broadcast_event(
"message",
{
"id": msg_id,
"type": "PRIV",
"conversation_key": sender_pubkey,
"text": payload.get("text", ""),
"sender_timestamp": payload.get("sender_timestamp"),
"received_at": received_at,
"paths": paths,
"txt_type": txt_type,
"signature": payload.get("signature"),
"outgoing": False,
"acked": 0,
},
)
# Update contact last_contacted (contact was already fetched above)
if contact:
await ContactRepository.update_last_contacted(sender_pubkey, received_at)
# Run bot if enabled
from app.bot import run_bot_for_message
asyncio.create_task(
run_bot_for_message(
sender_name=contact.name if contact else None,
sender_key=sender_pubkey,
message_text=payload.get("text", ""),
is_dm=True,
channel_key=None,
channel_name=None,
sender_timestamp=payload.get("sender_timestamp"),
path=payload.get("path"),
is_outgoing=False,
)
logger.debug(
"DM from %s handled by event handler (fallback path)", context.conversation_key[:12]
)
@@ -189,15 +158,67 @@ async def on_rx_log_data(event: "Event") -> None:
async def on_path_update(event: "Event") -> None:
"""Handle path update events."""
payload = event.payload
logger.debug("Path update for %s", payload.get("pubkey_prefix"))
public_key = str(payload.get("public_key", "")).lower()
pubkey_prefix = str(payload.get("pubkey_prefix", "")).lower()
pubkey_prefix = payload.get("pubkey_prefix", "")
path = payload.get("path", "")
path_len = payload.get("path_len", -1)
contact: Contact | None = None
if public_key:
logger.debug("Path update for %s", public_key[:12])
contact = await ContactRepository.get_by_key(public_key)
elif pubkey_prefix:
# Legacy compatibility: older payloads may only include a prefix.
logger.debug("Path update for prefix %s", pubkey_prefix)
contact = await ContactRepository.get_by_key_prefix(pubkey_prefix)
else:
logger.debug("PATH_UPDATE missing public_key/pubkey_prefix, skipping")
return
existing = await ContactRepository.get_by_key_prefix(pubkey_prefix)
if existing:
await ContactRepository.update_path(existing.public_key, path, path_len)
if not contact:
return
# PATH_UPDATE is a serial control push event from firmware (not an RF packet).
# Current meshcore payloads only include public_key for this event.
# RF route/path bytes are handled via RX_LOG_DATA -> process_raw_packet,
# so if path fields are absent here we treat this as informational only.
path = payload.get("path")
path_len = payload.get("path_len")
path_hash_mode = payload.get("path_hash_mode")
if path is None or path_len is None:
logger.debug(
"PATH_UPDATE for %s has no path payload, skipping DB update", contact.public_key[:12]
)
return
try:
normalized_path_len = int(path_len)
except (TypeError, ValueError):
logger.warning(
"Invalid path_len in PATH_UPDATE for %s: %r", contact.public_key[:12], path_len
)
return
normalized_path_hash_mode: int | None
if path_hash_mode is None:
# Legacy firmware/library payloads only support 1-byte hop hashes.
normalized_path_hash_mode = -1 if normalized_path_len == -1 else 0
else:
try:
normalized_path_hash_mode = int(path_hash_mode)
except (TypeError, ValueError):
logger.warning(
"Invalid path_hash_mode in PATH_UPDATE for %s: %r",
contact.public_key[:12],
path_hash_mode,
)
normalized_path_hash_mode = None
await ContactRepository.update_direct_path(
contact.public_key,
str(path),
normalized_path_len,
normalized_path_hash_mode,
updated_at=int(time.time()),
)
async def on_new_contact(event: "Event") -> None:
@@ -215,13 +236,42 @@ async def on_new_contact(event: "Event") -> None:
logger.debug("New contact: %s", public_key[:12])
contact_data = {
**Contact.from_radio_dict(public_key, payload, on_radio=True),
"last_seen": int(time.time()),
}
await ContactRepository.upsert(contact_data)
contact_upsert = ContactUpsert.from_radio_dict(public_key.lower(), payload, on_radio=False)
contact_upsert.last_seen = int(time.time())
await ContactRepository.upsert(contact_upsert)
promoted_keys = await promote_prefix_contacts_for_contact(
public_key=public_key,
log=logger,
)
broadcast_event("contact", contact_data)
adv_name = payload.get("adv_name")
await record_contact_name_and_reconcile(
public_key=public_key,
contact_name=adv_name,
timestamp=int(time.time()),
log=logger,
)
# Read back from DB so the broadcast includes all fields (last_contacted,
# last_read_at, etc.) matching the REST Contact shape exactly.
db_contact = await ContactRepository.get_by_key(public_key)
broadcast_event(
"contact",
(
db_contact.model_dump()
if db_contact
else Contact(**contact_upsert.model_dump(exclude_none=True)).model_dump()
),
)
if db_contact:
for old_key in promoted_keys:
broadcast_event(
"contact_resolved",
{
"previous_public_key": old_key,
"contact": db_contact.model_dump(),
},
)
async def on_ack(event: "Event") -> None:
@@ -234,15 +284,9 @@ async def on_ack(event: "Event") -> None:
return
logger.debug("Received ACK with code %s", ack_code)
_cleanup_expired_acks()
if ack_code in _pending_acks:
message_id, _, _ = _pending_acks.pop(ack_code)
logger.info("ACK received for message %d", message_id)
ack_count = await MessageRepository.increment_ack_count(message_id)
broadcast_event("message_acked", {"message_id": message_id, "ack_count": ack_count})
matched = await apply_dm_ack_code(ack_code, broadcast_fn=broadcast_event)
if matched:
logger.info("ACK received for code %s", ack_code)
else:
logger.debug("ACK code %s does not match any pending messages", ack_code)
+85
@@ -0,0 +1,85 @@
"""Typed WebSocket event contracts and serialization helpers."""
import json
import logging
from typing import Any, Literal, NotRequired
from pydantic import TypeAdapter
from typing_extensions import TypedDict
from app.models import Channel, Contact, Message, MessagePath, RawPacketBroadcast
from app.routers.health import HealthResponse
logger = logging.getLogger(__name__)
WsEventType = Literal[
"health",
"message",
"contact",
"contact_resolved",
"channel",
"contact_deleted",
"channel_deleted",
"raw_packet",
"message_acked",
"error",
"success",
]
class ContactDeletedPayload(TypedDict):
public_key: str
class ContactResolvedPayload(TypedDict):
previous_public_key: str
contact: Contact
class ChannelDeletedPayload(TypedDict):
key: str
class MessageAckedPayload(TypedDict):
message_id: int
ack_count: int
paths: NotRequired[list[MessagePath]]
packet_id: NotRequired[int | None]
class ToastPayload(TypedDict):
message: str
details: NotRequired[str]
_PAYLOAD_ADAPTERS: dict[WsEventType, TypeAdapter[Any]] = {
"health": TypeAdapter(HealthResponse),
"message": TypeAdapter(Message),
"contact": TypeAdapter(Contact),
"contact_resolved": TypeAdapter(ContactResolvedPayload),
"channel": TypeAdapter(Channel),
"contact_deleted": TypeAdapter(ContactDeletedPayload),
"channel_deleted": TypeAdapter(ChannelDeletedPayload),
"raw_packet": TypeAdapter(RawPacketBroadcast),
"message_acked": TypeAdapter(MessageAckedPayload),
"error": TypeAdapter(ToastPayload),
"success": TypeAdapter(ToastPayload),
}
def dump_ws_event(event_type: str, data: Any) -> str:
"""Serialize a WebSocket event envelope with validation for known event types."""
adapter = _PAYLOAD_ADAPTERS.get(event_type) # type: ignore[arg-type]
if adapter is None:
return json.dumps({"type": event_type, "data": data})
try:
validated = adapter.validate_python(data)
payload = adapter.dump_python(validated, mode="json")
return json.dumps({"type": event_type, "data": payload})
except Exception:
logger.exception(
"Failed to validate WebSocket payload for event %s; falling back to raw JSON envelope",
event_type,
)
return json.dumps({"type": event_type, "data": data})
+365
@@ -0,0 +1,365 @@
# Fanout Bus Architecture
The fanout bus is a unified system for dispatching mesh radio events to external integrations. It replaces the previous scattered singleton MQTT publishers with a modular, configurable framework.
## Core Concepts
### FanoutModule (base.py)
Base class that all integration modules extend:
- `__init__(config_id, config, *, name="")` — constructor; receives the config UUID, the type-specific config dict, and the user-assigned name
- `start()` / `stop()` — async lifecycle (e.g. open/close connections)
- `on_message(data)` — receive decoded messages (scope-gated)
- `on_raw(data)` — receive raw RF packets (scope-gated)
- `on_contact(data)` — receive contact upserts; dispatched to all modules
- `on_telemetry(data)` — receive repeater telemetry snapshots; dispatched to all modules
- `on_health(data)` — receive periodic radio health snapshots; dispatched to all modules
- `status` property (**must override**) — return `"connected"`, `"disconnected"`, or `"error"`
All five event hooks are no-ops by default; modules override only the ones they care about.
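For instance, a module that only consumes decoded messages overrides a single hook (a minimal sketch; `MessageLoggerModule` is a hypothetical example, not a shipped module):

```python
from app.fanout.base import FanoutModule


class MessageLoggerModule(FanoutModule):
    """Hypothetical sketch: log decoded messages, ignore every other event."""

    async def on_message(self, data: dict) -> None:
        # Only this hook is overridden; on_raw/on_contact/on_telemetry/on_health
        # keep their inherited no-op implementations.
        print(f"[{self.name}] {data.get('sender_name')}: {data.get('text')}")

    @property
    def status(self) -> str:
        # Required override; this module holds no connection, so report healthy.
        return "connected"
```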
### FanoutManager (manager.py)
Singleton that owns all active modules and dispatches events:
- `load_from_db()` — startup: load enabled configs, instantiate modules
- `reload_config(id)` — CRUD: stop old, start new
- `remove_config(id)` — delete: stop and remove
- `broadcast_message(data)` — scope-check + dispatch `on_message`
- `broadcast_raw(data)` — scope-check + dispatch `on_raw`
- `broadcast_contact(data)` — dispatch `on_contact` to all modules
- `broadcast_telemetry(data)` — dispatch `on_telemetry` to all modules
- `broadcast_health_fanout(data)` — dispatch `on_health` to all modules
- `stop_all()` — shutdown
- `get_statuses()` — health endpoint data
All modules are constructed uniformly: `cls(config_id, config_blob, name=cfg.get("name", ""))`.
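Because the constructor contract is uniform, the reload path can be pictured roughly as follows (a sketch only; `self._modules` and `_load_config_row` are assumed names, while `_MODULE_TYPES` is the real registry filled by `_register_module_types()`):

```python
async def reload_config(self, config_id: str) -> None:
    # Sketch of the stop-old/start-new lifecycle; the real FanoutManager
    # internals may differ.
    old = self._modules.pop(config_id, None)  # self._modules: assumed attribute
    if old is not None:
        await old.stop()
    cfg = await self._load_config_row(config_id)  # hypothetical DB helper
    if cfg is None or not cfg["enabled"]:
        return
    cls = _MODULE_TYPES[cfg["type"]]
    module = cls(cfg["id"], cfg["config"], name=cfg.get("name", ""))
    await module.start()
    self._modules[config_id] = module
```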
### Scope Matching
Each config has a `scope` JSON blob controlling what events reach it:
```json
{"messages": "all", "raw_packets": "all"}
{"messages": "none", "raw_packets": "all"}
{"messages": {"channels": ["key1"], "contacts": "all"}, "raw_packets": "none"}
```
Community MQTT always enforces `{"messages": "none", "raw_packets": "all"}`.
Scope only gates `on_message` and `on_raw`. The `on_contact`, `on_telemetry`, and `on_health` hooks are dispatched to all modules unconditionally — modules that care about specific contacts or repeaters filter internally based on their own config.
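The gating itself reduces to a small predicate; a sketch of the semantics above (not the actual manager code):

```python
def message_in_scope(scope: dict, message: dict) -> bool:
    """Sketch of the message-scope rules; the real check may differ."""
    rule = scope.get("messages", "none")
    if rule == "all":
        return True
    if not isinstance(rule, dict):  # "none" or anything unrecognized
        return False
    # Selective form: {"channels": [...] or "all", "contacts": [...] or "all"}
    bucket = "channels" if message.get("type") == "CHAN" else "contacts"
    allowed = rule.get(bucket, "none")
    if allowed == "all":
        return True
    return isinstance(allowed, list) and message.get("conversation_key") in allowed
```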
## Event Flow
```
Radio Event -> packet_processor / event_handler
-> broadcast_event("message"|"raw_packet"|"contact", data, realtime=True)
-> WebSocket broadcast (always)
-> FanoutManager.broadcast_message/raw/contact (only if realtime=True)
-> scope check per module (message/raw only)
-> module.on_message / on_raw / on_contact
Telemetry collect (radio_sync.py / routers/repeaters.py)
-> RepeaterTelemetryRepository.record(...)
-> FanoutManager.broadcast_telemetry(data)
-> module.on_telemetry (all modules, unconditional)
Health fanout (radio_stats.py, piggybacks on 60s stats sampling loop)
-> FanoutManager.broadcast_health_fanout(data)
-> module.on_health (all modules, unconditional)
```
Setting `realtime=False` (used during historical decryption) skips fanout dispatch entirely.
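The dispatch seam can be pictured like this (a sketch: `_send_to_websockets` is a hypothetical stand-in, the real `broadcast_event` takes more parameters, and await/scheduling details are omitted):

```python
from app.fanout.manager import fanout_manager


def broadcast_event(event_type: str, data: dict, realtime: bool = True) -> None:
    _send_to_websockets(event_type, data)  # WebSocket broadcast always happens
    if not realtime:
        return  # historical decryption: fanout dispatch skipped entirely
    if event_type == "message":
        fanout_manager.broadcast_message(data)
    elif event_type == "raw_packet":
        fanout_manager.broadcast_raw(data)
    elif event_type == "contact":
        fanout_manager.broadcast_contact(data)
```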
## Event Payloads
### on_message(data)
`Message.model_dump()` — the full Pydantic message model. Key fields:
- `type` (`"PRIV"` | `"CHAN"`), `conversation_key`, `text`, `sender_name`, `sender_key`
- `outgoing`, `acked`, `paths`, `sender_timestamp`, `received_at`
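An illustrative payload (made-up values; field names come from the list above):

```python
example_message = {
    "type": "CHAN",              # or "PRIV" for direct messages
    "conversation_key": "8b3a",  # truncated for illustration; contact pubkey for PRIV
    "text": "Alice: hello mesh",
    "sender_name": "Alice",
    "sender_key": None,
    "outgoing": False,
    "acked": 0,
    "paths": [{"path": "a1b2", "received_at": 1760000000}],
    "sender_timestamp": 1760000000,
    "received_at": 1760000001,
}
```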
### on_raw(data)
Raw packet dict from `packet_processor.py`. Key fields:
- `id` (storage row ID), `observation_id` (per-arrival), `raw` (hex), `timestamp`
- `decrypted_info` (optional: `channel_key`, `contact_key`, `text`)
### on_contact(data)
`Contact.model_dump()` — the full Pydantic contact model. Key fields:
- `public_key`, `name`, `type` (0=unknown, 1=client, 2=repeater, 3=room, 4=sensor)
- `lat`, `lon`, `last_seen`, `first_seen`, `on_radio`
### on_telemetry(data)
Repeater telemetry snapshot, broadcast after successful `RepeaterTelemetryRepository.record()`.
Identical shape from both auto-collect (`radio_sync.py`) and manual fetch (`routers/repeaters.py`):
- `public_key`, `name`, `timestamp`
- `battery_volts`, `noise_floor_dbm`, `last_rssi_dbm`, `last_snr_db`
- `packets_received`, `packets_sent`, `airtime_seconds`, `rx_airtime_seconds`
- `uptime_seconds`, `sent_flood`, `sent_direct`, `recv_flood`, `recv_direct`
- `flood_dups`, `direct_dups`, `full_events`, `tx_queue_len`
### on_health(data)
Radio health + stats snapshot, broadcast every 60s by the stats sampling loop in `radio_stats.py`:
- `connected` (bool), `connection_info` (str | None)
- `public_key` (str | None), `name` (str | None)
- `noise_floor_dbm`, `battery_mv`, `uptime_secs` (int | None)
- `last_rssi` (int | None), `last_snr` (float | None)
- `tx_air_secs`, `rx_air_secs` (int | None)
- `packets_recv`, `packets_sent`, `flood_tx`, `direct_tx`, `flood_rx`, `direct_rx` (int | None)
## Current Module Types
### mqtt_private (mqtt_private.py)
Wraps `MqttPublisher` from `app/fanout/mqtt.py`. Config blob:
- `broker_host`, `broker_port`, `username`, `password`
- `use_tls`, `tls_insecure`, `topic_prefix`
### mqtt_community (mqtt_community.py)
Wraps `CommunityMqttPublisher` from `app/fanout/community_mqtt.py`. Config blob:
- `broker_host`, `broker_port`, `iata`, `email`
- Only publishes raw packets (on_message is a no-op)
- The published `raw` field is always the original packet hex.
- When a direct packet includes a `path` field, it is emitted as comma-separated hop identifiers exactly as the packet reports them. Token width varies with the packet's path hash mode (`1`, `2`, or `3` bytes per hop); there is no legacy flat per-byte companion field.
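Splitting such a path into hop tokens is plain fixed-width slicing (a sketch; the project's actual helper is `split_path_hex` in `app/path_utils.py`):

```python
def hops_from_path_hex(path_hex: str, bytes_per_hop: int) -> list[str]:
    """Sketch: cut a path hex string into per-hop tokens of a fixed width."""
    width = bytes_per_hop * 2  # two hex characters per byte
    return [path_hex[i : i + width] for i in range(0, len(path_hex), width)]


# "aabbcc" -> ["aa", "bb", "cc"] at 1 byte/hop, ["aabbcc"] at 3 bytes/hop
```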
### bot (bot.py)
Wraps bot code execution via `app/fanout/bot_exec.py`. Config blob:
- `code` — Python bot function source code
- Executes in a thread pool with timeout and semaphore concurrency control
- Rate-limits outgoing messages for repeater compatibility
- Channel `message_text` passed to bot code is normalized for human readability by stripping a leading `"{sender_name}: "` prefix when it matches the payload sender.
### webhook (webhook.py)
HTTP webhook delivery. Config blob:
- `url`, `method` (POST/PUT/PATCH)
- `hmac_secret` (optional) — when set, each request includes an HMAC-SHA256 signature of the JSON body
- `hmac_header` (optional, default `X-Webhook-Signature`) — header name for the signature (value format: `sha256=<hex>`)
- `headers` — arbitrary extra headers (JSON object)
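A receiver can check the signature with the standard library (a minimal sketch assuming the default header name and the `sha256=<hex>` value format described above):

```python
import hashlib
import hmac


def verify_webhook_signature(body: bytes, header_value: str, secret: str) -> bool:
    """Recompute HMAC-SHA256 over the raw JSON body and compare."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the mismatch position via timing
    return hmac.compare_digest(expected, header_value)
```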
### apprise (apprise_mod.py)
Push notifications via Apprise library. Config blob:
- `urls` — newline-separated Apprise notification service URLs
- `preserve_identity` — suppress Discord webhook name/avatar override
- `include_path` — include routing path in notification body
- Channel notifications normalize stored message text by stripping a leading `"{sender_name}: "` prefix when it matches the payload sender so alerts do not duplicate the name.
### sqs (sqs.py)
Amazon SQS delivery. Config blob:
- `queue_url` — target queue URL
- `region_name` (optional; inferred from standard AWS SQS queue URLs when omitted), `endpoint_url` (optional)
- `access_key_id`, `secret_access_key`, `session_token` (all optional; blank uses the normal AWS credential chain)
- Publishes a JSON envelope of the form `{"event_type":"message"|"raw_packet","data":...}`
- Supports both decoded messages and raw packets via normal scope selection
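A queue consumer then branches on the envelope (sketch; the handler names are placeholders):

```python
import json


def handle_sqs_body(body: str) -> None:
    envelope = json.loads(body)
    if envelope["event_type"] == "message":
        handle_decoded_message(envelope["data"])  # hypothetical handler
    elif envelope["event_type"] == "raw_packet":
        handle_raw_packet(envelope["data"])       # hypothetical handler
```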
### map_upload (map_upload.py)
Uploads heard repeater and room-server advertisements to map.meshcore.dev. Config blob:
- `api_url` (optional, default `""`) — upload endpoint; empty falls back to the public map.meshcore.dev API
- `dry_run` (bool, default `true`) — when true, logs the payload at INFO level without sending
- `geofence_enabled` (bool, default `false`) — when true, only uploads nodes within `geofence_radius_km` of the radio's own configured lat/lon
- `geofence_radius_km` (float, default `0`) — filter radius in kilometres
Geofence notes:
- The reference center is always the radio's own `adv_lat`/`adv_lon` from `radio_runtime.meshcore.self_info`, read **live at upload time** — no lat/lon is stored in the fanout config itself.
- If the radio's lat/lon is `(0, 0)` or the radio is not connected, the geofence check is silently skipped so uploads continue normally until coordinates are configured.
- Requires the radio to have `ENABLE_PRIVATE_KEY_EXPORT=1` firmware to sign uploads.
- Scope is always `{"messages": "none", "raw_packets": "all"}` — only raw RF packets are processed.
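The distance test itself is ordinary great-circle math; a sketch of a haversine check against `geofence_radius_km` (the module's real filtering code may differ):

```python
import math


def within_geofence(lat1: float, lon1: float, lat2: float, lon2: float,
                    radius_km: float) -> bool:
    """Haversine distance between two lat/lon points, compared to a radius."""
    r = 6371.0  # mean Earth radius in km
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a)) <= radius_km
```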
## Adding a New Integration Type
### Step-by-step checklist
#### 1. Backend module (`app/fanout/my_type.py`)
Create a class extending `FanoutModule`:
```python
from app.fanout.base import FanoutModule
class MyTypeModule(FanoutModule):
def __init__(self, config_id: str, config: dict, *, name: str = "") -> None:
super().__init__(config_id, config, name=name)
# Initialize module-specific state
async def start(self) -> None:
"""Open connections, create clients, etc."""
async def stop(self) -> None:
"""Close connections, clean up resources."""
async def on_message(self, data: dict) -> None:
"""Handle decoded messages. Omit if not needed."""
async def on_raw(self, data: dict) -> None:
"""Handle raw packets. Omit if not needed."""
@property
def status(self) -> str:
"""Required. Return 'connected', 'disconnected', or 'error'."""
...
```
Constructor requirements:
- Must accept `config_id: str, config: dict, *, name: str = ""`
- Must forward `name` to super: `super().__init__(config_id, config, name=name)`
#### 2. Register in manager (`app/fanout/manager.py`)
Add import and mapping in `_register_module_types()`:
```python
from app.fanout.my_type import MyTypeModule
_MODULE_TYPES["my_type"] = MyTypeModule
```
#### 3. Router changes (`app/routers/fanout.py`)
Three changes are always needed, plus an optional fourth:
**a)** Add to `_VALID_TYPES` set:
```python
_VALID_TYPES = {"mqtt_private", "mqtt_community", "bot", "webhook", "apprise", "sqs", "my_type"}
```
**b)** Add a validation function:
```python
def _validate_my_type_config(config: dict) -> None:
"""Validate my_type config blob."""
if not config.get("some_required_field"):
raise HTTPException(status_code=400, detail="some_required_field is required")
```
**c)** Wire validation into both `create_fanout_config` and `update_fanout_config` — add an `elif` to the validation block in each:
```python
elif body.type == "my_type":
_validate_my_type_config(body.config)
```
Note: validation only runs when the config will be enabled (disabled configs are treated as drafts).
**d)** Add scope enforcement in `_enforce_scope()` if the type has fixed scope constraints (e.g. raw_packets always none). Otherwise it falls through to the `mqtt_private` default which allows both messages and raw_packets to be configurable.
#### 4. Frontend editor component (`SettingsFanoutSection.tsx`)
Four changes needed in this single file:
**a)** Add to `TYPE_LABELS` and `TYPE_OPTIONS` at the top:
```tsx
const TYPE_LABELS: Record<string, string> = {
// ... existing entries ...
my_type: 'My Type',
};
const TYPE_OPTIONS = [
// ... existing entries ...
{ value: 'my_type', label: 'My Type' },
];
```
**b)** Create an editor component (follows the same pattern as existing editors):
```tsx
function MyTypeConfigEditor({
config,
scope,
onChange,
onScopeChange,
}: {
config: Record<string, unknown>;
scope: Record<string, unknown>;
onChange: (config: Record<string, unknown>) => void;
onScopeChange: (scope: Record<string, unknown>) => void;
}) {
return (
<div className="space-y-3">
{/* Type-specific config fields */}
<Separator />
<ScopeSelector scope={scope} onChange={onScopeChange} />
</div>
);
}
```
If your type does NOT have user-configurable scope (like bot or community MQTT), omit the `scope`/`onScopeChange` props and the `ScopeSelector`.
The `ScopeSelector` component is defined within the same file. It accepts an optional `showRawPackets` prop:
- **Without `showRawPackets`** (webhook, apprise): shows message scope only (all/only/except — no "none" option since that would make the integration a no-op). A warning appears when the effective selection matches nothing.
- **With `showRawPackets`** (private MQTT): adds a "Forward raw packets" toggle and includes the "No messages" option (valid when raw packets are enabled). The warning appears only when both raw packets and messages are effectively disabled.
**c)** Add default config and scope in `handleAddCreate`:
```tsx
const defaults: Record<string, Record<string, unknown>> = {
// ... existing entries ...
my_type: { some_field: '', other_field: true },
};
const defaultScopes: Record<string, Record<string, unknown>> = {
// ... existing entries ...
my_type: { messages: 'all', raw_packets: 'none' },
};
```
**d)** Wire the editor into the detail view's conditional render block:
```tsx
{editingConfig.type === 'my_type' && (
<MyTypeConfigEditor
config={editConfig}
scope={editScope}
onChange={setEditConfig}
onScopeChange={setEditScope}
/>
)}
```
#### 5. Tests
**Backend integration tests** (`tests/test_fanout_integration.py`):
- Test that a configured + enabled module receives messages via `FanoutManager.broadcast_message`
- Test scope filtering (all, none, selective)
- Test that a disabled module does not receive messages
**Backend unit tests** (`tests/test_fanout_hitlist.py` or a dedicated file):
- Test config validation (required fields, bad values)
- Test module-specific logic in isolation
**Frontend tests** (`frontend/src/test/fanoutSection.test.tsx`):
- The existing suite covers the list/edit/create flow generically. If your editor has special behavior, add specific test cases.
#### Summary of files to touch
| File | Change |
|------|--------|
| `app/fanout/my_type.py` | New module class |
| `app/fanout/manager.py` | Import + register in `_register_module_types()` |
| `app/routers/fanout.py` | `_VALID_TYPES` + validator function + scope enforcement |
| `frontend/.../SettingsFanoutSection.tsx` | `TYPE_LABELS` + `TYPE_OPTIONS` + editor component + defaults + detail view wiring |
| `tests/test_fanout_integration.py` | Integration tests |
## REST API
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/fanout` | List all fanout configs |
| POST | `/api/fanout` | Create new config |
| PATCH | `/api/fanout/{id}` | Update config (triggers module reload) |
| DELETE | `/api/fanout/{id}` | Delete config (stops module) |
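For example, creating a disabled draft config with only the standard library (a sketch; the request schema is assumed from the table above and the `fanout_configs` columns below, and the host/port are placeholders):

```python
import json
import urllib.request

payload = {
    "type": "webhook",
    "name": "My webhook",
    "enabled": False,  # drafts skip config validation until enabled
    "config": {"url": "https://example.com/hook", "method": "POST"},
    "scope": {"messages": "all", "raw_packets": "none"},
}
req = urllib.request.Request(
    "http://localhost:8000/api/fanout",  # placeholder host/port
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```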
## Database
`fanout_configs` table:
- `id` TEXT PRIMARY KEY
- `type`, `name`, `enabled`, `config` (JSON), `scope` (JSON)
- `sort_order`, `created_at`
Migrations:
- **36**: Creates `fanout_configs` table, migrates existing MQTT settings from `app_settings`
- **37**: Migrates bot configs from `app_settings.bots` JSON column into fanout rows
- **38**: Drops legacy `mqtt_*`, `community_mqtt_*`, and `bots` columns from `app_settings`
## Key Files
- `app/fanout/base.py` — FanoutModule base class
- `app/fanout/manager.py` — FanoutManager singleton
- `app/fanout/mqtt_base.py` — BaseMqttPublisher ABC (shared MQTT connection loop)
- `app/fanout/mqtt.py` — MqttPublisher (private MQTT publishing)
- `app/fanout/community_mqtt.py` — CommunityMqttPublisher (community MQTT with JWT auth)
- `app/fanout/mqtt_private.py` — Private MQTT fanout module
- `app/fanout/mqtt_community.py` — Community MQTT fanout module
- `app/fanout/bot.py` — Bot fanout module
- `app/fanout/bot_exec.py` — Bot code execution, response processing, rate limiting
- `app/fanout/webhook.py` — Webhook fanout module
- `app/fanout/apprise_mod.py` — Apprise fanout module
- `app/fanout/sqs.py` — Amazon SQS fanout module
- `app/fanout/map_upload.py` — Map Upload fanout module
- `app/repository/fanout.py` — Database CRUD
- `app/routers/fanout.py` — REST API
`app/websocket.py` — `broadcast_event()` dispatches to fanout
- `frontend/src/components/settings/SettingsFanoutSection.tsx` — UI
+8
@@ -0,0 +1,8 @@
from app.fanout.base import FanoutModule
from app.fanout.manager import FanoutManager, fanout_manager
__all__ = [
"FanoutManager",
"FanoutModule",
"fanout_manager",
]
+129
@@ -0,0 +1,129 @@
"""Fanout module for Apprise push notifications."""
from __future__ import annotations
import asyncio
import logging
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit
from app.fanout.base import FanoutModule, get_fanout_message_text
from app.path_utils import split_path_hex
logger = logging.getLogger(__name__)
def _parse_urls(raw: str) -> list[str]:
"""Split multi-line URL string into individual URLs."""
return [line.strip() for line in raw.splitlines() if line.strip()]
def _normalize_discord_url(url: str) -> str:
"""Add avatar=no to Discord URLs to suppress identity override."""
parts = urlsplit(url)
scheme = parts.scheme.lower()
host = parts.netloc.lower()
is_discord = scheme in ("discord", "discords") or (
scheme in ("http", "https")
and host in ("discord.com", "discordapp.com")
and parts.path.lower().startswith("/api/webhooks/")
)
if not is_discord:
return url
query = dict(parse_qsl(parts.query, keep_blank_values=True))
query["avatar"] = "no"
return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(query), parts.fragment))
def _format_body(data: dict, *, include_path: bool) -> str:
"""Build a human-readable notification body from message data."""
msg_type = data.get("type", "")
text = get_fanout_message_text(data)
sender_name = data.get("sender_name") or "Unknown"
via = ""
if include_path:
paths = data.get("paths")
if paths and isinstance(paths, list) and len(paths) > 0:
first_path = paths[0] if isinstance(paths[0], dict) else {}
path_str = first_path.get("path", "")
path_len = first_path.get("path_len")
else:
path_str = None
path_len = None
if msg_type == "PRIV" and path_str is None:
via = " **via:** [`direct`]"
elif path_str is not None:
path_str = path_str.strip().lower()
if path_str == "":
via = " **via:** [`direct`]"
else:
hop_count = path_len if isinstance(path_len, int) else len(path_str) // 2
hops = split_path_hex(path_str, hop_count)
if hops:
hop_list = ", ".join(f"`{h}`" for h in hops)
via = f" **via:** [{hop_list}]"
if msg_type == "PRIV":
return f"**DM:** {sender_name}: {text}{via}"
channel_name = data.get("channel_name") or data.get("conversation_key", "channel")
return f"**{channel_name}:** {sender_name}: {text}{via}"
def _send_sync(urls_raw: str, body: str, *, preserve_identity: bool) -> bool:
"""Send notification synchronously via Apprise. Returns True on success."""
import apprise as apprise_lib
urls = _parse_urls(urls_raw)
if not urls:
return False
notifier = apprise_lib.Apprise()
for url in urls:
if preserve_identity:
url = _normalize_discord_url(url)
notifier.add(url)
return bool(notifier.notify(title="", body=body))
class AppriseModule(FanoutModule):
"""Sends push notifications via Apprise for incoming messages."""
def __init__(self, config_id: str, config: dict, *, name: str = "") -> None:
super().__init__(config_id, config, name=name)
async def on_message(self, data: dict) -> None:
# Skip outgoing messages — only notify on incoming
if data.get("outgoing"):
return
urls = self.config.get("urls", "")
if not urls or not urls.strip():
return
preserve_identity = self.config.get("preserve_identity", True)
include_path = self.config.get("include_path", True)
body = _format_body(data, include_path=include_path)
try:
success = await asyncio.to_thread(
_send_sync, urls, body, preserve_identity=preserve_identity
)
self._set_last_error(None if success else "Apprise notify returned failure")
if not success:
logger.warning("Apprise notification failed for module %s", self.config_id)
except Exception as exc:
self._set_last_error(str(exc))
logger.exception("Apprise send error for module %s", self.config_id)
@property
def status(self) -> str:
if not self.config.get("urls", "").strip():
return "disconnected"
if self.last_error:
return "error"
return "connected"
+92
@@ -0,0 +1,92 @@
"""Base class for fanout integration modules."""
from __future__ import annotations
def _broadcast_fanout_health() -> None:
"""Push updated fanout status to connected frontend clients."""
from app.services.radio_runtime import radio_runtime as radio_manager
from app.websocket import broadcast_health
broadcast_health(radio_manager.is_connected, radio_manager.connection_info)
class FanoutModule:
"""Base class for all fanout integrations.
Each module wraps a specific integration (MQTT, webhook, etc.) and
receives dispatched messages/packets from the FanoutManager.
Subclasses must override the ``status`` property.
"""
def __init__(self, config_id: str, config: dict, *, name: str = "") -> None:
self.config_id = config_id
self.config = config
self.name = name
self._last_error: str | None = None
async def start(self) -> None:
"""Start the module (e.g. connect to broker). Override for persistent connections."""
async def stop(self) -> None:
"""Stop the module (e.g. disconnect from broker)."""
async def on_message(self, data: dict) -> None:
"""Called for decoded messages (DM/channel). Override if needed."""
async def on_raw(self, data: dict) -> None:
"""Called for raw RF packets. Override if needed."""
async def on_contact(self, data: dict) -> None:
"""Called for contact upserts (adverts, sync). Override if needed."""
async def on_telemetry(self, data: dict) -> None:
"""Called for repeater telemetry snapshots. Override if needed."""
async def on_health(self, data: dict) -> None:
"""Called for periodic radio health snapshots. Override if needed."""
@property
def status(self) -> str:
"""Return 'connected', 'disconnected', or 'error'."""
raise NotImplementedError
@property
def last_error(self) -> str | None:
"""Return the most recent retained operator-facing error, if any."""
return self._last_error
def _set_last_error(self, value: str | None) -> None:
"""Update the retained error and broadcast health when it changes."""
if self._last_error == value:
return
self._last_error = value
_broadcast_fanout_health()
def get_fanout_message_text(data: dict) -> str:
"""Return the best human-readable message body for fanout consumers.
Channel messages are stored with the rendered sender label embedded in the
text (for example ``"Alice: hello"``). Human-facing integrations that also
carry ``sender_name`` should strip that duplicated prefix when it matches
the payload sender exactly.
"""
text = data.get("text", "")
if not isinstance(text, str):
return ""
if data.get("type") != "CHAN":
return text
sender_name = data.get("sender_name")
if not isinstance(sender_name, str) or not sender_name:
return text
prefix, separator, remainder = text.partition(": ")
if separator and prefix == sender_name:
return remainder
return text
+179
@@ -0,0 +1,179 @@
"""Fanout module wrapping bot execution logic."""
from __future__ import annotations
import asyncio
import logging
from app.fanout.base import FanoutModule
logger = logging.getLogger(__name__)
def _derive_path_bytes_per_hop(paths: object, path_value: str | None) -> int | None:
"""Derive hop width from the first serialized message path when possible."""
if not isinstance(path_value, str) or not path_value:
return None
if not isinstance(paths, list) or not paths:
return None
first_path = paths[0]
if not isinstance(first_path, dict):
return None
path_hops = first_path.get("path_len")
if not isinstance(path_hops, int) or path_hops <= 0:
return None
path_hex_chars = len(path_value)
if path_hex_chars % 2 != 0:
return None
path_bytes = path_hex_chars // 2
if path_bytes % path_hops != 0:
return None
hop_width = path_bytes // path_hops
if hop_width not in (1, 2, 3):
return None
return hop_width
class BotModule(FanoutModule):
"""Wraps a single bot's code execution and response routing.
Each BotModule represents one bot configuration. It receives decoded
messages via ``on_message``, executes the bot's Python code in a
background task (after a 2-second settle delay), and sends any response
back through the radio.
"""
def __init__(self, config_id: str, config: dict, *, name: str = "Bot") -> None:
super().__init__(config_id, config, name=name)
self._tasks: set[asyncio.Task] = set()
self._active = True
async def stop(self) -> None:
self._active = False
for task in self._tasks:
task.cancel()
# Wait briefly for tasks to acknowledge cancellation
if self._tasks:
await asyncio.gather(*self._tasks, return_exceptions=True)
self._tasks.clear()
async def on_message(self, data: dict) -> None:
"""Kick off bot execution in a background task so we don't block dispatch."""
task = asyncio.create_task(self._run_for_message(data))
self._tasks.add(task)
task.add_done_callback(self._tasks.discard)
async def _run_for_message(self, data: dict) -> None:
from app.fanout.bot_exec import (
BOT_EXECUTION_TIMEOUT,
execute_bot_code,
process_bot_response,
)
code = self.config.get("code", "")
if not code or not code.strip():
return
msg_type = data.get("type", "")
is_dm = msg_type == "PRIV"
conversation_key = data.get("conversation_key", "")
logger.debug(
"Bot '%s' starting for type=%s conversation=%s outgoing=%s",
self.name,
msg_type or "unknown",
conversation_key[:12] if conversation_key else "(none)",
bool(data.get("outgoing", False)),
)
# Extract bot parameters from broadcast data
if is_dm:
sender_key = data.get("sender_key") or conversation_key
is_outgoing = data.get("outgoing", False)
message_text = data.get("text", "")
channel_key = None
channel_name = None
# Outgoing DMs: sender is us, not the contact
if is_outgoing:
sender_name = None
else:
sender_name = data.get("sender_name")
if sender_name is None:
from app.repository import ContactRepository
contact = await ContactRepository.get_by_key(conversation_key)
sender_name = contact.name if contact else None
else:
sender_key = None
is_outgoing = bool(data.get("outgoing", False))
sender_name = data.get("sender_name")
channel_key = conversation_key
channel_name = data.get("channel_name")
if channel_name is None:
from app.repository import ChannelRepository
channel = await ChannelRepository.get_by_key(conversation_key)
channel_name = channel.name if channel else None
# Strip "sender: " prefix from channel message text
text = data.get("text", "")
if sender_name and text.startswith(f"{sender_name}: "):
message_text = text[len(f"{sender_name}: ") :]
else:
message_text = text
sender_timestamp = data.get("sender_timestamp")
path_value = data.get("path")
paths = data.get("paths")
# Message model serializes paths as list of dicts; extract first path string
if path_value is None and paths and isinstance(paths, list) and len(paths) > 0:
path_value = paths[0].get("path") if isinstance(paths[0], dict) else None
path_bytes_per_hop = _derive_path_bytes_per_hop(paths, path_value)
# Wait for message to settle (allows retransmissions to be deduped)
await asyncio.sleep(2)
# Execute bot code in thread pool with timeout
from app.fanout.bot_exec import _bot_executor, _bot_semaphore
async with _bot_semaphore:
loop = asyncio.get_running_loop()
try:
response = await asyncio.wait_for(
loop.run_in_executor(
_bot_executor,
execute_bot_code,
code,
sender_name,
sender_key,
message_text,
is_dm,
channel_key,
channel_name,
sender_timestamp,
path_value,
is_outgoing,
path_bytes_per_hop,
),
timeout=BOT_EXECUTION_TIMEOUT,
)
except TimeoutError:
logger.warning("Bot '%s' execution timed out", self.name)
return
except Exception:
logger.exception("Bot '%s' execution error", self.name)
return
if response and self._active:
await process_bot_response(response, is_dm, sender_key or "", channel_key)
@property
def status(self) -> str:
return "connected"
+368
@@ -0,0 +1,368 @@
"""
Bot execution module for automatic message responses.
This module provides functionality for executing user-defined Python code
in response to incoming messages. The user's code can process message data
and optionally return a response string or a list of strings.
SECURITY WARNING: This executes arbitrary Python code provided by the user.
It should only be enabled on trusted systems where the user understands
the security implications.
"""
import asyncio
import inspect
import logging
import time
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Any
from fastapi import HTTPException
logger = logging.getLogger(__name__)
# Limit concurrent bot executions to prevent resource exhaustion
_bot_semaphore = asyncio.Semaphore(100)
# Dedicated thread pool for bot execution (separate from default executor)
_bot_executor = ThreadPoolExecutor(max_workers=100, thread_name_prefix="bot_")
# Timeout for bot code execution (seconds)
BOT_EXECUTION_TIMEOUT = 10
# Minimum spacing between bot message sends (seconds)
# This ensures repeaters have time to return to listening mode
BOT_MESSAGE_SPACING = 2.0
# Global state for rate limiting bot sends
_bot_send_lock = asyncio.Lock()
_last_bot_send_time: float = 0.0
@dataclass(frozen=True)
class BotCallPlan:
"""How to call a validated bot() function."""
call_style: str
keyword_args: tuple[str, ...] = ()
def _analyze_bot_signature(bot_func_or_sig) -> BotCallPlan:
"""Validate bot() signature and return a supported call plan."""
try:
sig = (
bot_func_or_sig
if isinstance(bot_func_or_sig, inspect.Signature)
else inspect.signature(bot_func_or_sig)
)
except (ValueError, TypeError) as exc:
raise ValueError("Bot function signature could not be inspected") from exc
params = sig.parameters
param_values = tuple(params.values())
positional_params = [
p
for p in param_values
if p.kind in (inspect.Parameter.POSITIONAL_ONLY, inspect.Parameter.POSITIONAL_OR_KEYWORD)
]
has_varargs = any(p.kind == inspect.Parameter.VAR_POSITIONAL for p in param_values)
has_kwargs = any(p.kind == inspect.Parameter.VAR_KEYWORD for p in param_values)
explicit_optional_names = tuple(
name for name in ("is_outgoing", "path_bytes_per_hop") if name in params
)
unsupported_required_kwonly = [
p.name
for p in param_values
if p.kind == inspect.Parameter.KEYWORD_ONLY
and p.default is inspect.Parameter.empty
and p.name not in {"is_outgoing", "path_bytes_per_hop"}
]
if unsupported_required_kwonly:
raise ValueError(
"Bot function signature is not supported. Unsupported required keyword-only "
"parameters: " + ", ".join(unsupported_required_kwonly)
)
positional_capacity = len(positional_params)
base_args = [object()] * 8
base_keyword_args: dict[str, object] = {
"sender_name": object(),
"sender_key": object(),
"message_text": object(),
"is_dm": object(),
"channel_key": object(),
"channel_name": object(),
"sender_timestamp": object(),
"path": object(),
}
candidate_specs: list[tuple[str, list[object], dict[str, object]]] = []
keyword_args = dict(base_keyword_args)
if has_kwargs or "is_outgoing" in params:
keyword_args["is_outgoing"] = False
if has_kwargs or "path_bytes_per_hop" in params:
keyword_args["path_bytes_per_hop"] = 1
candidate_specs.append(("keyword", [], keyword_args))
if not has_kwargs and explicit_optional_names:
kwargs: dict[str, object] = {}
if has_kwargs or "is_outgoing" in params:
kwargs["is_outgoing"] = False
if has_kwargs or "path_bytes_per_hop" in params:
kwargs["path_bytes_per_hop"] = 1
candidate_specs.append(("mixed_keyword", base_args, kwargs))
if has_varargs or positional_capacity >= 10:
candidate_specs.append(("positional_10", base_args + [False, 1], {}))
if has_varargs or positional_capacity >= 9:
candidate_specs.append(("positional_9", base_args + [False], {}))
if has_varargs or positional_capacity >= 8:
candidate_specs.append(("legacy", base_args, {}))
for call_style, args, kwargs in candidate_specs:
try:
sig.bind(*args, **kwargs)
except TypeError:
continue
if call_style in {"keyword", "mixed_keyword"}:
return BotCallPlan(call_style="keyword", keyword_args=tuple(kwargs.keys()))
return BotCallPlan(call_style=call_style)
raise ValueError(
"Bot function signature is not supported. Use the default bot template as a reference. "
"Supported trailing parameters are: path; path + is_outgoing; "
"path + path_bytes_per_hop; path + is_outgoing + path_bytes_per_hop; "
"or use **kwargs for forward compatibility."
)
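# Illustrative only: a minimal user bot that the validator above accepts.
# Using keyword parameters plus **kwargs keeps the signature forward-compatible
# with context parameters added later (the names match the documented context):
#
#     def bot(message_text, is_dm, sender_name=None, **kwargs):
#         if message_text.strip().lower() == "ping":
#             return "pong"
#         return None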
def execute_bot_code(
code: str,
sender_name: str | None,
sender_key: str | None,
message_text: str,
is_dm: bool,
channel_key: str | None,
channel_name: str | None,
sender_timestamp: int | None,
path: str | None,
is_outgoing: bool = False,
path_bytes_per_hop: int | None = None,
) -> str | list[str] | None:
"""
Execute user-provided bot code with message context.
    The code should define a function
    `bot(sender_name, sender_key, message_text, is_dm, channel_key, channel_name, sender_timestamp, path, is_outgoing, path_bytes_per_hop)`
    (or one using named parameters / `**kwargs`) that returns either None
    (no response), a string (single response message), or a list of strings
    (multiple messages sent in order).
Legacy bot functions with older signatures are detected via inspect and
called without the newer parameters for backward compatibility.
Args:
code: Python code defining the bot function
sender_name: Display name of the sender (may be None)
sender_key: 64-char hex public key of sender for DMs, None for channel messages
message_text: The message content
is_dm: True for direct messages, False for channel messages
channel_key: 32-char hex channel key for channel messages, None for DMs
channel_name: Channel name (e.g. "#general" with hash), None for DMs
sender_timestamp: Sender's timestamp from the message (may be None)
path: Hex-encoded routing path (may be None)
is_outgoing: True if this is our own outgoing message
path_bytes_per_hop: Number of bytes per routing hop (1, 2, or 3), if known
Returns:
Response string, list of strings, or None.
Note: This executes arbitrary code. Only use with trusted input.
"""
if not code or not code.strip():
return None
# Build execution namespace with allowed imports
namespace: dict[str, Any] = {
"__builtins__": __builtins__,
}
try:
# Execute the user's code to define the bot function
exec(code, namespace)
except Exception:
logger.exception("Bot code compilation failed")
return None
# Check if bot function was defined
if "bot" not in namespace or not callable(namespace["bot"]):
logger.debug("Bot code does not define a callable 'bot' function")
return None
bot_func = namespace["bot"]
try:
call_plan = _analyze_bot_signature(bot_func)
except ValueError as exc:
logger.error("%s", exc)
return None
try:
# Call the bot function with appropriate signature
if call_plan.call_style == "positional_10":
result = bot_func(
sender_name,
sender_key,
message_text,
is_dm,
channel_key,
channel_name,
sender_timestamp,
path,
is_outgoing,
path_bytes_per_hop,
)
elif call_plan.call_style == "positional_9":
result = bot_func(
sender_name,
sender_key,
message_text,
is_dm,
channel_key,
channel_name,
sender_timestamp,
path,
is_outgoing,
)
elif call_plan.call_style == "keyword":
keyword_args: dict[str, Any] = {}
if "sender_name" in call_plan.keyword_args:
keyword_args["sender_name"] = sender_name
if "sender_key" in call_plan.keyword_args:
keyword_args["sender_key"] = sender_key
if "message_text" in call_plan.keyword_args:
keyword_args["message_text"] = message_text
if "is_dm" in call_plan.keyword_args:
keyword_args["is_dm"] = is_dm
if "channel_key" in call_plan.keyword_args:
keyword_args["channel_key"] = channel_key
if "channel_name" in call_plan.keyword_args:
keyword_args["channel_name"] = channel_name
if "sender_timestamp" in call_plan.keyword_args:
keyword_args["sender_timestamp"] = sender_timestamp
if "path" in call_plan.keyword_args:
keyword_args["path"] = path
if "is_outgoing" in call_plan.keyword_args:
keyword_args["is_outgoing"] = is_outgoing
if "path_bytes_per_hop" in call_plan.keyword_args:
keyword_args["path_bytes_per_hop"] = path_bytes_per_hop
result = bot_func(**keyword_args)
        elif call_plan.call_style == "mixed_keyword":
            # Eight positional context args plus any supported optional kwargs.
            extra_kwargs: dict[str, Any] = {}
            if "is_outgoing" in call_plan.keyword_args:
                extra_kwargs["is_outgoing"] = is_outgoing
            if "path_bytes_per_hop" in call_plan.keyword_args:
                extra_kwargs["path_bytes_per_hop"] = path_bytes_per_hop
            result = bot_func(
                sender_name,
                sender_key,
                message_text,
                is_dm,
                channel_key,
                channel_name,
                sender_timestamp,
                path,
                **extra_kwargs,
            )
else:
result = bot_func(
sender_name,
sender_key,
message_text,
is_dm,
channel_key,
channel_name,
sender_timestamp,
path,
)
# Validate result
if result is None:
return None
if isinstance(result, str):
return result if result.strip() else None
if isinstance(result, list):
# Filter to non-empty strings only
valid_messages = [msg for msg in result if isinstance(msg, str) and msg.strip()]
return valid_messages if valid_messages else None
logger.debug("Bot function returned unsupported type: %s", type(result))
return None
except Exception:
logger.exception("Bot function execution failed")
return None
async def process_bot_response(
response: str | list[str],
is_dm: bool,
sender_key: str,
channel_key: str | None,
) -> None:
"""
Send the bot's response message(s) using the existing message sending endpoints.
For DMs, sends a direct message back to the sender.
For channel messages, sends to the same channel.
Bot messages are rate-limited to ensure at least BOT_MESSAGE_SPACING seconds
between sends, giving repeaters time to return to listening mode.
Args:
response: The response text to send, or a list of messages to send in order
is_dm: Whether the original message was a DM
sender_key: Public key of the original sender (for DM replies)
channel_key: Channel key for channel message replies
"""
# Normalize to list for uniform processing
messages = [response] if isinstance(response, str) else response
for message_text in messages:
await _send_single_bot_message(message_text, is_dm, sender_key, channel_key)
async def _send_single_bot_message(
message_text: str,
is_dm: bool,
sender_key: str,
channel_key: str | None,
) -> None:
"""
Send a single bot message with rate limiting.
Args:
message_text: The message text to send
is_dm: Whether the original message was a DM
sender_key: Public key of the original sender (for DM replies)
channel_key: Channel key for channel message replies
"""
global _last_bot_send_time
from app.models import SendChannelMessageRequest, SendDirectMessageRequest
from app.routers.messages import send_channel_message, send_direct_message
# Serialize bot sends and enforce minimum spacing
async with _bot_send_lock:
# Calculate how long since last bot send
now = time.monotonic()
time_since_last = now - _last_bot_send_time
if _last_bot_send_time > 0 and time_since_last < BOT_MESSAGE_SPACING:
wait_time = BOT_MESSAGE_SPACING - time_since_last
logger.debug("Rate limiting bot send, waiting %.2fs", wait_time)
await asyncio.sleep(wait_time)
try:
if is_dm:
logger.info("Bot sending DM reply to %s", sender_key[:12])
request = SendDirectMessageRequest(destination=sender_key, text=message_text)
await send_direct_message(request)
elif channel_key:
logger.info("Bot sending channel reply to %s", channel_key[:8])
request = SendChannelMessageRequest(channel_key=channel_key, text=message_text)
await send_channel_message(request)
else:
logger.warning("Cannot send bot response: no destination")
return # Don't update timestamp if we didn't send
except HTTPException as e:
logger.error("Bot failed to send response: %s", e.detail, exc_info=True)
return # Don't update timestamp on failure
except Exception:
logger.exception("Bot failed to send response")
return # Don't update timestamp on failure
# Update last send time after successful send
_last_bot_send_time = time.monotonic()
@@ -0,0 +1,544 @@
"""Community MQTT publisher for sharing raw packets with the MeshCore community.
Publishes raw packet data to mqtt-us-v1.letsmesh.net using the protocol
defined by meshcore-packet-capture (https://github.com/agessaman/meshcore-packet-capture).
Authentication uses Ed25519 JWT tokens signed with the radio's private key.
This module is independent from the private MqttPublisher in app/mqtt.py.
"""
from __future__ import annotations
import asyncio
import base64
import hashlib
import json
import logging
import ssl
import time
from datetime import datetime
from typing import Any, Protocol
import aiomqtt
from app.fanout.mqtt_base import BaseMqttPublisher
from app.keystore import ed25519_sign_expanded
from app.path_utils import parse_packet_envelope, split_path_hex
from app.version_info import get_app_build_info
logger = logging.getLogger(__name__)
_DEFAULT_BROKER = "mqtt-us-v1.letsmesh.net"
_DEFAULT_PORT = 443 # Community protocol uses WSS on port 443 by default
_CLIENT_ID = "RemoteTerm"
# Proactive JWT renewal: reconnect 1 hour before the 24h token expires
_TOKEN_LIFETIME = 86400 # 24 hours (must match _generate_jwt_token exp)
_TOKEN_RENEWAL_THRESHOLD = _TOKEN_LIFETIME - 3600 # 23 hours
# Periodic status republish interval (matches meshcore-packet-capture reference)
_STATS_REFRESH_INTERVAL = 300 # 5 minutes
_STATS_MIN_CACHE_SECS = 60 # Don't re-fetch stats within 60s
# Route type mapping: bottom 2 bits of first byte
_ROUTE_MAP = {0: "F", 1: "F", 2: "D", 3: "T"}
class CommunityMqttSettings(Protocol):
"""Attributes expected on the settings object for the community MQTT publisher."""
community_mqtt_enabled: bool
community_mqtt_broker_host: str
community_mqtt_broker_port: int
community_mqtt_transport: str
community_mqtt_use_tls: bool
community_mqtt_tls_verify: bool
community_mqtt_auth_mode: str
community_mqtt_username: str
community_mqtt_password: str
community_mqtt_iata: str
community_mqtt_email: str
community_mqtt_token_audience: str
def _base64url_encode(data: bytes) -> str:
"""Base64url encode without padding."""
return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")
def _generate_jwt_token(
private_key: bytes,
public_key: bytes,
*,
audience: str = _DEFAULT_BROKER,
email: str = "",
) -> str:
"""Generate a JWT token for community MQTT authentication.
Creates a token with Ed25519 signature using MeshCore's expanded key format.
Token format: header_b64.payload_b64.signature_hex
Optional ``email`` embeds a node-claiming identity so the community
aggregator can associate this radio with an owner.
"""
header = {"alg": "Ed25519", "typ": "JWT"}
now = int(time.time())
pubkey_hex = public_key.hex().upper()
payload: dict[str, object] = {
"publicKey": pubkey_hex,
"iat": now,
"exp": now + _TOKEN_LIFETIME,
"aud": audience,
"owner": pubkey_hex,
"client": _get_client_version(),
}
if email:
payload["email"] = email
header_b64 = _base64url_encode(json.dumps(header, separators=(",", ":")).encode())
payload_b64 = _base64url_encode(json.dumps(payload, separators=(",", ":")).encode())
signing_input = f"{header_b64}.{payload_b64}".encode()
scalar = private_key[:32]
prefix = private_key[32:]
signature = ed25519_sign_expanded(signing_input, scalar, prefix, public_key)
return f"{header_b64}.{payload_b64}.{signature.hex()}"
def _calculate_packet_hash(raw_bytes: bytes) -> str:
"""Calculate packet hash matching MeshCore's Packet::calculatePacketHash().
Parses the packet structure to extract payload type and payload data,
    then hashes: payload_type(1 byte) [+ raw path byte as uint16 LE for TRACE] + payload_data.
Returns first 16 hex characters (uppercase).
"""
if not raw_bytes:
return "0" * 16
try:
envelope = parse_packet_envelope(raw_bytes)
if envelope is None:
return "0" * 16
# Hash: payload_type(1 byte) [+ path_byte as uint16_t LE for TRACE] + payload_data
# IMPORTANT: TRACE hash uses the raw wire byte (not decoded hop count) to match firmware.
hash_obj = hashlib.sha256()
hash_obj.update(bytes([envelope.payload_type]))
if envelope.payload_type == 9: # PAYLOAD_TYPE_TRACE
hash_obj.update(envelope.path_byte.to_bytes(2, byteorder="little"))
hash_obj.update(envelope.payload)
return hash_obj.hexdigest()[:16].upper()
except Exception:
return "0" * 16
def _decode_packet_fields(raw_bytes: bytes) -> tuple[str, str, str, list[str], int | None]:
"""Decode packet fields used by the community uploader payload format.
Returns:
(route_letter, packet_type_str, payload_len_str, path_values, payload_type_int)
"""
# Reference defaults when decode fails
route = "U"
packet_type = "0"
payload_len = "0"
path_values: list[str] = []
payload_type: int | None = None
try:
envelope = parse_packet_envelope(raw_bytes)
if envelope is None or envelope.payload_version != 0:
return route, packet_type, payload_len, path_values, payload_type
payload_type = envelope.payload_type
route = _ROUTE_MAP.get(envelope.route_type, "U")
packet_type = str(payload_type)
payload_len = str(len(envelope.payload))
path_values = split_path_hex(envelope.path.hex(), envelope.hop_count)
return route, packet_type, payload_len, path_values, payload_type
except Exception:
return route, packet_type, payload_len, path_values, payload_type
def _format_raw_packet(data: dict[str, Any], device_name: str, public_key_hex: str) -> dict:
"""Convert a RawPacketBroadcast dict to meshcore-packet-capture format."""
raw_hex = data.get("data", "")
raw_bytes = bytes.fromhex(raw_hex) if raw_hex else b""
route, packet_type, payload_len, path_values, _payload_type = _decode_packet_fields(raw_bytes)
# Reference format uses local "now" timestamp and derived time/date fields.
current_time = datetime.now()
ts_str = current_time.isoformat()
# Keep numeric telemetry numeric so downstream analyzers can ingest it.
# Preserve the existing "Unknown" fallback for missing values.
snr_val = data.get("snr")
rssi_val = data.get("rssi")
snr: float | str = float(snr_val) if snr_val is not None else "Unknown"
rssi: int | str = int(rssi_val) if rssi_val is not None else "Unknown"
packet_hash = _calculate_packet_hash(raw_bytes)
packet = {
"origin": device_name or "MeshCore Device",
"origin_id": public_key_hex.upper(),
"timestamp": ts_str,
"type": "PACKET",
"direction": "rx",
"time": current_time.strftime("%H:%M:%S"),
"date": current_time.strftime("%d/%m/%Y"),
"len": str(len(raw_bytes)),
"packet_type": packet_type,
"route": route,
"payload_len": payload_len,
"raw": raw_hex.upper(),
"SNR": snr,
"RSSI": rssi,
"hash": packet_hash,
}
if route == "D":
packet["path"] = ",".join(path_values)
return packet
def _build_status_topic(settings: CommunityMqttSettings, pubkey_hex: str) -> str:
"""Build the ``meshcore/{IATA}/{PUBKEY}/status`` topic string."""
iata = settings.community_mqtt_iata.upper().strip()
return f"meshcore/{iata}/{pubkey_hex}/status"
def _build_radio_info() -> str:
"""Format the radio parameters string from self_info.
Matches the reference format: ``"freq,bw,sf,cr"`` (comma-separated raw
values). Falls back to ``"0,0,0,0"`` when unavailable.
"""
from app.services.radio_runtime import radio_runtime as radio_manager
try:
if radio_manager.meshcore and radio_manager.meshcore.self_info:
info = radio_manager.meshcore.self_info
freq = info.get("radio_freq", 0)
bw = info.get("radio_bw", 0)
sf = info.get("radio_sf", 0)
cr = info.get("radio_cr", 0)
return f"{freq},{bw},{sf},{cr}"
except Exception:
pass
return "0,0,0,0"
def _get_client_version() -> str:
"""Return the canonical client/version identifier for community MQTT."""
build = get_app_build_info()
commit_hash = build.commit_hash or "unknown"
return f"{_CLIENT_ID}/{build.version}-{commit_hash}"
class CommunityMqttPublisher(BaseMqttPublisher):
"""Manages the community MQTT connection and publishes raw packets."""
_backoff_max = 60
_log_prefix = "Community MQTT"
_not_configured_timeout: float | None = 30
def __init__(self) -> None:
super().__init__()
self._key_unavailable_warned: bool = False
self._cached_device_info: dict[str, str] | None = None
self._cached_stats: dict[str, Any] | None = None
self._stats_supported: bool | None = None
self._last_stats_fetch: float = 0.0
self._last_status_publish: float = 0.0
async def start(self, settings: object) -> None:
self._key_unavailable_warned = False
self._cached_device_info = None
self._cached_stats = None
self._stats_supported = None
self._last_stats_fetch = 0.0
self._last_status_publish = 0.0
await super().start(settings)
def _on_not_configured(self) -> None:
from app.keystore import get_public_key, has_private_key
from app.websocket import broadcast_error
s: CommunityMqttSettings | None = self._settings
auth_mode = getattr(s, "community_mqtt_auth_mode", "token") if s else "token"
if (
s
and auth_mode == "token"
and get_public_key() is not None
and not has_private_key()
and not self._key_unavailable_warned
):
broadcast_error(
"Community MQTT unavailable",
"Radio firmware does not support private key export.",
)
self._key_unavailable_warned = True
def _is_configured(self) -> bool:
"""Check if community MQTT is enabled and keys are available."""
from app.keystore import get_public_key, has_private_key
s: CommunityMqttSettings | None = self._settings
if not s or not s.community_mqtt_enabled:
return False
if get_public_key() is None:
return False
auth_mode = getattr(s, "community_mqtt_auth_mode", "token")
if auth_mode == "token":
return has_private_key()
return True
def _build_client_kwargs(self, settings: object) -> dict[str, Any]:
s: CommunityMqttSettings = settings # type: ignore[assignment]
from app.keystore import get_private_key, get_public_key
from app.services.radio_runtime import radio_runtime as radio_manager
private_key = get_private_key()
public_key = get_public_key()
assert public_key is not None # guaranteed by _pre_connect
pubkey_hex = public_key.hex().upper()
broker_host = s.community_mqtt_broker_host or _DEFAULT_BROKER
broker_port = s.community_mqtt_broker_port or _DEFAULT_PORT
transport = s.community_mqtt_transport or "websockets"
use_tls = bool(s.community_mqtt_use_tls)
tls_verify = bool(s.community_mqtt_tls_verify)
auth_mode = s.community_mqtt_auth_mode or "token"
secure_connection = use_tls and tls_verify
tls_context: ssl.SSLContext | None = None
if use_tls:
tls_context = ssl.create_default_context()
if not tls_verify:
tls_context.check_hostname = False
tls_context.verify_mode = ssl.CERT_NONE
device_name = ""
if radio_manager.meshcore and radio_manager.meshcore.self_info:
device_name = radio_manager.meshcore.self_info.get("name", "")
status_topic = _build_status_topic(s, pubkey_hex)
offline_payload = json.dumps(
{
"status": "offline",
"timestamp": datetime.now().isoformat(),
"origin": device_name or "MeshCore Device",
"origin_id": pubkey_hex,
}
)
kwargs: dict[str, Any] = {
"hostname": broker_host,
"port": broker_port,
"transport": transport,
"tls_context": tls_context,
"will": aiomqtt.Will(status_topic, offline_payload, retain=True),
}
if auth_mode == "token":
assert private_key is not None
token_audience = (s.community_mqtt_token_audience or "").strip() or broker_host
jwt_token = _generate_jwt_token(
private_key,
public_key,
audience=token_audience,
email=(s.community_mqtt_email or "") if secure_connection else "",
)
kwargs["username"] = f"v1_{pubkey_hex}"
kwargs["password"] = jwt_token
elif auth_mode == "password":
kwargs["username"] = s.community_mqtt_username or None
kwargs["password"] = s.community_mqtt_password or None
if transport == "websockets":
kwargs["websocket_path"] = "/"
return kwargs
def _on_connected(self, settings: object) -> tuple[str, str]:
s: CommunityMqttSettings = settings # type: ignore[assignment]
broker_host = s.community_mqtt_broker_host or _DEFAULT_BROKER
broker_port = s.community_mqtt_broker_port or _DEFAULT_PORT
return ("Community MQTT connected", f"{broker_host}:{broker_port}")
async def _fetch_device_info(self) -> dict[str, str]:
"""Fetch firmware model/version from the radio (cached for the connection)."""
if self._cached_device_info is not None:
return self._cached_device_info
from app.radio import RadioDisconnectedError, RadioOperationBusyError
from app.services.radio_runtime import radio_runtime as radio_manager
fallback = {"model": "unknown", "firmware_version": "unknown"}
try:
async with radio_manager.radio_operation(
"community_stats_device_info", blocking=False
) as mc:
event = await mc.commands.send_device_query()
from meshcore.events import EventType
if event.type == EventType.DEVICE_INFO:
fw_ver = event.payload.get("fw ver", 0)
if fw_ver >= 3:
model = event.payload.get("model", "unknown") or "unknown"
ver = event.payload.get("ver", "unknown") or "unknown"
fw_build = event.payload.get("fw_build", "") or ""
fw_str = f"v{ver} (Build: {fw_build})" if fw_build else f"v{ver}"
self._cached_device_info = {
"model": model,
"firmware_version": fw_str,
}
else:
# Old firmware — cache what we can
self._cached_device_info = {
"model": "unknown",
"firmware_version": f"v{fw_ver}" if fw_ver else "unknown",
}
return self._cached_device_info
except (RadioOperationBusyError, RadioDisconnectedError):
pass
except Exception as e:
logger.debug("Community MQTT: device info fetch failed: %s", e)
# Don't cache transient failures — allow retry on next status publish
return fallback
async def _fetch_stats(self) -> dict[str, Any] | None:
"""Fetch core + radio stats from the radio (best-effort, cached)."""
if self._stats_supported is False:
return self._cached_stats
now = time.monotonic()
if (
now - self._last_stats_fetch
) < _STATS_MIN_CACHE_SECS and self._cached_stats is not None:
return self._cached_stats
from app.radio import RadioDisconnectedError, RadioOperationBusyError
from app.services.radio_runtime import radio_runtime as radio_manager
try:
async with radio_manager.radio_operation("community_stats_fetch", blocking=False) as mc:
from meshcore.events import EventType
result: dict[str, Any] = {}
core_event = await mc.commands.get_stats_core()
if core_event.type == EventType.ERROR:
logger.info("Community MQTT: firmware does not support stats commands")
self._stats_supported = False
return self._cached_stats
if core_event.type == EventType.STATS_CORE:
result.update(core_event.payload)
radio_event = await mc.commands.get_stats_radio()
if radio_event.type == EventType.ERROR:
logger.info("Community MQTT: firmware does not support stats commands")
self._stats_supported = False
return self._cached_stats
if radio_event.type == EventType.STATS_RADIO:
result.update(radio_event.payload)
if result:
self._cached_stats = result
self._last_stats_fetch = now
return self._cached_stats
except (RadioOperationBusyError, RadioDisconnectedError):
pass
except Exception as e:
logger.debug("Community MQTT: stats fetch failed: %s", e)
return self._cached_stats
async def _publish_status(
self, settings: CommunityMqttSettings, *, refresh_stats: bool = True
) -> None:
"""Build and publish the enriched retained status message."""
from app.keystore import get_public_key
from app.services.radio_runtime import radio_runtime as radio_manager
public_key = get_public_key()
if public_key is None:
return
pubkey_hex = public_key.hex().upper()
device_name = ""
if radio_manager.meshcore and radio_manager.meshcore.self_info:
device_name = radio_manager.meshcore.self_info.get("name", "")
device_info = await self._fetch_device_info()
stats = await self._fetch_stats() if refresh_stats else self._cached_stats
status_topic = _build_status_topic(settings, pubkey_hex)
payload: dict[str, Any] = {
"status": "online",
"timestamp": datetime.now().isoformat(),
"origin": device_name or "MeshCore Device",
"origin_id": pubkey_hex,
"model": device_info.get("model", "unknown"),
"firmware_version": device_info.get("firmware_version", "unknown"),
"radio": _build_radio_info(),
"client_version": _get_client_version(),
}
if stats:
payload["stats"] = stats
await self.publish(status_topic, payload, retain=True)
self._last_status_publish = time.monotonic()
async def _on_connected_async(self, settings: object) -> None:
"""Publish a retained online status message after connecting."""
await self._publish_status(settings) # type: ignore[arg-type]
async def _on_periodic_wake(self, elapsed: float) -> None:
if not self._settings:
return
now = time.monotonic()
if (now - self._last_status_publish) >= _STATS_REFRESH_INTERVAL:
await self._publish_status(self._settings, refresh_stats=True)
def _on_error(self) -> tuple[str, str]:
return (
"Community MQTT connection failure",
"Check your internet connection or try again later.",
)
def _should_break_wait(self, elapsed: float) -> bool:
if not self.connected:
logger.info("Community MQTT publish failure detected, reconnecting")
return True
s: CommunityMqttSettings | None = self._settings
auth_mode = getattr(s, "community_mqtt_auth_mode", "token") if s else "token"
if auth_mode == "token" and elapsed >= _TOKEN_RENEWAL_THRESHOLD:
logger.info("Community MQTT JWT nearing expiry, reconnecting")
return True
return False
async def _pre_connect(self, settings: object) -> bool:
from app.keystore import get_private_key, get_public_key
s: CommunityMqttSettings = settings # type: ignore[assignment]
auth_mode = s.community_mqtt_auth_mode or "token"
private_key = get_private_key()
public_key = get_public_key()
if public_key is None or (auth_mode == "token" and private_key is None):
# Keys not available yet, wait for settings change or key export
self.connected = False
self._version_event.clear()
try:
await asyncio.wait_for(self._version_event.wait(), timeout=30)
except TimeoutError:
pass
return False
return True
@@ -0,0 +1,372 @@
"""FanoutManager: owns all active fanout modules and dispatches events."""
from __future__ import annotations
import asyncio
import logging
from typing import Any
from app.fanout.base import FanoutModule
logger = logging.getLogger(__name__)
_DISPATCH_TIMEOUT_SECONDS = 30.0
# Type string -> module class mapping
_MODULE_TYPES: dict[str, type] = {}
def _format_error_detail(exc: Exception) -> str:
"""Return a short operator-facing error string."""
message = str(exc).strip()
if message:
return f"{type(exc).__name__}: {message}"
return type(exc).__name__
def _register_module_types() -> None:
"""Lazily populate the type registry to avoid circular imports."""
if _MODULE_TYPES:
return
from app.fanout.apprise_mod import AppriseModule
from app.fanout.bot import BotModule
from app.fanout.map_upload import MapUploadModule
from app.fanout.mqtt_community import MqttCommunityModule
from app.fanout.mqtt_private import MqttPrivateModule
from app.fanout.sqs import SqsModule
from app.fanout.webhook import WebhookModule
_MODULE_TYPES["mqtt_private"] = MqttPrivateModule
_MODULE_TYPES["mqtt_community"] = MqttCommunityModule
_MODULE_TYPES["bot"] = BotModule
_MODULE_TYPES["webhook"] = WebhookModule
_MODULE_TYPES["apprise"] = AppriseModule
_MODULE_TYPES["sqs"] = SqsModule
_MODULE_TYPES["map_upload"] = MapUploadModule
def _matches_filter(filter_value: Any, key: str) -> bool:
"""Check a single filter value (channels or contacts) against a key.
Supported shapes:
"all" -> True
"none" -> False
["key1", "key2"] -> key in list (only listed)
{"except": ["key1", "key2"]} -> key not in list (all except listed)
"""
if filter_value == "all":
return True
if filter_value == "none":
return False
if isinstance(filter_value, list):
return key in filter_value
if isinstance(filter_value, dict) and "except" in filter_value:
return key not in filter_value["except"]
return False
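# Illustrative evaluations:
#   _matches_filter("all", "abc")               -> True
#   _matches_filter(["abc", "def"], "abc")      -> True
#   _matches_filter({"except": ["abc"]}, "abc") -> False
#   _matches_filter("none", "abc")              -> False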
def _scope_matches_message(scope: dict, data: dict) -> bool:
"""Check whether a message event matches the given scope."""
messages = scope.get("messages", "none")
if messages == "all":
return True
if messages == "none":
return False
if isinstance(messages, dict):
msg_type = data.get("type", "")
conversation_key = data.get("conversation_key", "")
if msg_type == "CHAN":
return _matches_filter(messages.get("channels", "none"), conversation_key)
elif msg_type == "PRIV":
return _matches_filter(messages.get("contacts", "none"), conversation_key)
return False
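# Illustrative scope (placeholder keys): every channel except one,
# plus DMs from two specific contacts:
#   {"messages": {"channels": {"except": ["chan_a"]}, "contacts": ["k1", "k2"]}}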
def _scope_matches_raw(scope: dict, _data: dict) -> bool:
"""Check whether a raw packet event matches the given scope."""
return scope.get("raw_packets", "none") == "all"
def _always_match(_scope: dict, _data: dict) -> bool:
"""Match all modules unconditionally (filtering is module-internal)."""
return True
class FanoutManager:
"""Owns all active fanout modules and dispatches events."""
def __init__(self) -> None:
self._modules: dict[str, tuple[FanoutModule, dict]] = {} # id -> (module, scope)
self._restart_locks: dict[str, asyncio.Lock] = {}
self._bots_disabled_until_restart = False
self._module_errors: dict[str, str] = {}
def _broadcast_health_update(self) -> None:
from app.services.radio_runtime import radio_runtime as radio_manager
from app.websocket import broadcast_health
broadcast_health(radio_manager.is_connected, radio_manager.connection_info)
def _set_module_error(self, config_id: str, error: str) -> None:
if self._module_errors.get(config_id) == error:
return
self._module_errors[config_id] = error
self._broadcast_health_update()
def _clear_module_error(self, config_id: str) -> None:
if self._module_errors.pop(config_id, None) is not None:
self._broadcast_health_update()
def get_bots_disabled_source(self) -> str | None:
"""Return why bot modules are unavailable, if at all."""
from app.config import settings as server_settings
if server_settings.disable_bots:
return "env"
if self._bots_disabled_until_restart:
return "until_restart"
return None
def bots_disabled_effective(self) -> bool:
"""Return True when bot modules should be treated as unavailable."""
return self.get_bots_disabled_source() is not None
async def load_from_db(self) -> None:
"""Read enabled fanout_configs and instantiate modules."""
_register_module_types()
from app.repository.fanout import FanoutConfigRepository
configs = await FanoutConfigRepository.get_enabled()
for cfg in configs:
await self._start_module(cfg)
async def _start_module(self, cfg: dict[str, Any]) -> None:
"""Instantiate and start a single module from a config dict."""
config_id = cfg["id"]
config_type = cfg["type"]
config_blob = cfg["config"]
scope = cfg["scope"]
# Skip bot modules when bots are disabled server-wide or until restart.
if config_type == "bot" and self.bots_disabled_effective():
logger.info(
"Skipping bot module %s (bots disabled: %s)",
config_id,
self.get_bots_disabled_source(),
)
return
cls = _MODULE_TYPES.get(config_type)
if cls is None:
logger.warning("Unknown fanout type %r for config %s, skipping", config_type, config_id)
return
try:
module = cls(config_id, config_blob, name=cfg.get("name", ""))
await module.start()
self._modules[config_id] = (module, scope)
self._clear_module_error(config_id)
logger.info(
"Started fanout module %s (type=%s)", cfg.get("name", config_id), config_type
)
except Exception as exc:
logger.exception("Failed to start fanout module %s", config_id)
self._set_module_error(config_id, _format_error_detail(exc))
async def reload_config(self, config_id: str) -> None:
"""Stop old module (if any) and start updated config."""
lock = self._restart_locks.setdefault(config_id, asyncio.Lock())
async with lock:
await self.remove_config(config_id)
from app.repository.fanout import FanoutConfigRepository
cfg = await FanoutConfigRepository.get(config_id)
if cfg is None or not cfg["enabled"]:
return
await self._start_module(cfg)
async def remove_config(self, config_id: str) -> None:
"""Stop and remove a module."""
entry = self._modules.pop(config_id, None)
if entry is not None:
module, _ = entry
try:
await module.stop()
except Exception:
logger.exception("Error stopping fanout module %s", config_id)
self._clear_module_error(config_id)
async def _dispatch_matching(
self,
data: dict,
*,
matcher: Any,
handler_name: str,
log_label: str,
) -> None:
"""Dispatch to all matching modules concurrently."""
tasks = []
for config_id, (module, scope) in list(self._modules.items()):
if matcher(scope, data):
tasks.append(self._run_handler(config_id, module, handler_name, data, log_label))
if tasks:
await asyncio.gather(*tasks)
async def _run_handler(
self,
config_id: str,
module: FanoutModule,
handler_name: str,
data: dict,
log_label: str,
) -> None:
"""Run one module handler with per-module exception isolation."""
try:
handler = getattr(module, handler_name)
await asyncio.wait_for(handler(data), timeout=_DISPATCH_TIMEOUT_SECONDS)
self._clear_module_error(config_id)
except TimeoutError:
timeout_error = f"{handler_name} timed out after {_DISPATCH_TIMEOUT_SECONDS:.1f}s"
self._set_module_error(config_id, timeout_error)
logger.error(
"Fanout %s %s timed out after %.1fs; restarting module",
config_id,
log_label,
_DISPATCH_TIMEOUT_SECONDS,
)
await self._restart_module(config_id, module)
except Exception as exc:
self._set_module_error(config_id, _format_error_detail(exc))
logger.exception("Fanout %s %s error", config_id, log_label)
async def _restart_module(self, config_id: str, module: FanoutModule) -> None:
"""Restart a timed-out module if it is still the active instance."""
lock = self._restart_locks.setdefault(config_id, asyncio.Lock())
async with lock:
entry = self._modules.get(config_id)
if entry is None or entry[0] is not module:
return
try:
await module.stop()
await module.start()
except Exception:
logger.exception("Failed to restart timed-out fanout module %s", config_id)
self._modules.pop(config_id, None)
self._set_module_error(
config_id,
"Module restart failed after timeout",
)
async def broadcast_message(self, data: dict) -> None:
"""Dispatch a decoded message to modules whose scope matches."""
await self._dispatch_matching(
data,
matcher=_scope_matches_message,
handler_name="on_message",
log_label="on_message",
)
async def broadcast_raw(self, data: dict) -> None:
"""Dispatch a raw packet to modules whose scope matches."""
await self._dispatch_matching(
data,
matcher=_scope_matches_raw,
handler_name="on_raw",
log_label="on_raw",
)
async def broadcast_contact(self, data: dict) -> None:
"""Dispatch a contact upsert to all modules."""
await self._dispatch_matching(
data,
matcher=_always_match,
handler_name="on_contact",
log_label="on_contact",
)
async def broadcast_telemetry(self, data: dict) -> None:
"""Dispatch a repeater telemetry snapshot to all modules."""
await self._dispatch_matching(
data,
matcher=_always_match,
handler_name="on_telemetry",
log_label="on_telemetry",
)
async def broadcast_health_fanout(self, data: dict) -> None:
"""Dispatch a radio health snapshot to all modules."""
await self._dispatch_matching(
data,
matcher=_always_match,
handler_name="on_health",
log_label="on_health",
)
async def stop_all(self) -> None:
"""Shutdown all modules."""
for config_id, (module, _) in list(self._modules.items()):
try:
await module.stop()
except Exception:
logger.exception("Error stopping fanout module %s", config_id)
self._modules.clear()
self._restart_locks.clear()
self._module_errors.clear()
def get_statuses(self) -> dict[str, dict[str, str | None]]:
"""Return status info for each active module."""
from app.repository.fanout import _configs_cache
result: dict[str, dict[str, str | None]] = {}
all_ids = set(_configs_cache) | set(self._modules) | set(self._module_errors)
for config_id in all_ids:
info = _configs_cache.get(config_id, {})
if info.get("enabled") is False:
continue
module_entry = self._modules.get(config_id)
module = module_entry[0] if module_entry is not None else None
last_error = module.last_error if module is not None else None
status = module.status if module is not None else "error"
manager_error = self._module_errors.get(config_id)
if manager_error is not None:
status = "error"
last_error = manager_error
elif last_error is not None and status != "error":
status = "error"
if module is None and last_error is None:
continue
result[config_id] = {
"name": info.get("name", config_id),
"type": info.get("type", "unknown"),
"status": status,
"last_error": last_error,
}
return result
async def disable_bots_until_restart(self) -> str:
"""Stop active bot modules and prevent them from starting again until restart."""
source = self.get_bots_disabled_source()
if source == "env":
return source
self._bots_disabled_until_restart = True
from app.repository.fanout import _configs_cache
bot_ids = [
config_id
for config_id in list(self._modules)
if _configs_cache.get(config_id, {}).get("type") == "bot"
]
for config_id in bot_ids:
await self.remove_config(config_id)
return "until_restart"
# Module-level singleton
fanout_manager = FanoutManager()
@@ -0,0 +1,319 @@
"""Fanout module for uploading heard advert packets to map.meshcore.dev.
Mirrors the logic of the standalone map.meshcore.dev-uploader project:
- Listens on raw RF packets via on_raw
- Filters for ADVERT packets, only processes repeaters (role 2) and rooms (role 3)
- Skips nodes with no valid location (lat/lon None)
- Applies per-pubkey rate-limiting (1-hour window, matching the uploader)
- Signs the upload request with the radio's own Ed25519 private key
- POSTs to the map API (or logs in dry-run mode)
Dry-run mode (default: True) logs the full would-be payload at INFO level
without making any HTTP requests. Disable it only after verifying the log
output looks correct — in particular the radio params (freq/bw/sf/cr) and
the raw hex link.
Config keys
-----------
api_url : str, default ""
Upload endpoint. Empty string falls back to the public map.meshcore.dev API.
dry_run : bool, default True
When True, log the payload at INFO level instead of sending it.
geofence_enabled : bool, default False
When True, only upload nodes whose location falls within geofence_radius_km of
the radio's own configured latitude/longitude (read live from the radio at upload
time — no lat/lon is stored in this config). When the radio's lat/lon is not set
(0, 0) or unavailable, the geofence check is silently skipped so uploads continue
normally until coordinates are configured.
geofence_radius_km : float, default 0.0
Radius of the geofence in kilometres. Nodes further than this distance
from the radio's own position are skipped.
"""
from __future__ import annotations
import hashlib
import json
import logging
import math
import httpx
from app.decoder import parse_advertisement, parse_packet
from app.fanout.base import FanoutModule
from app.keystore import ed25519_sign_expanded, get_private_key, get_public_key
from app.services.radio_runtime import radio_runtime
logger = logging.getLogger(__name__)
_DEFAULT_API_URL = "https://map.meshcore.dev/api/v1/uploader/node"
# Re-upload guard: skip re-uploading a pubkey seen within this window
# (matches the reference uploader's 1-hour window)
_REUPLOAD_SECONDS = 3600
# Only upload repeaters (2) and rooms (3). Any other role — including future
# roles not yet defined — is rejected. An allowlist is used rather than a
# blocklist so that new roles cannot accidentally start populating the map.
_ALLOWED_DEVICE_ROLES = {2, 3}
def _get_radio_params() -> dict:
"""Read radio frequency parameters from the connected radio's self_info.
The Python meshcore library returns radio_freq in MHz (e.g. 910.525) and
radio_bw in kHz (e.g. 62.5). These are exactly the units the map API
expects, matching what the JS reference uploader produces after its own
/1000 division on raw integer values. No further scaling is applied here.
"""
try:
mc = radio_runtime.meshcore
if not mc:
return {"freq": 0, "cr": 0, "sf": 0, "bw": 0}
info = mc.self_info
if not isinstance(info, dict):
return {"freq": 0, "cr": 0, "sf": 0, "bw": 0}
freq = info.get("radio_freq", 0) or 0
bw = info.get("radio_bw", 0) or 0
sf = info.get("radio_sf", 0) or 0
cr = info.get("radio_cr", 0) or 0
return {
"freq": freq,
"cr": cr,
"sf": sf,
"bw": bw,
}
except Exception as exc:
logger.debug("MapUpload: could not read radio params: %s", exc)
return {"freq": 0, "cr": 0, "sf": 0, "bw": 0}
_ROLE_NAMES: dict[int, str] = {2: "repeater", 3: "room"}
def _haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
"""Return the great-circle distance in kilometres between two lat/lon points."""
r = 6371.0
phi1, phi2 = math.radians(lat1), math.radians(lat2)
dphi = math.radians(lat2 - lat1)
dlam = math.radians(lon2 - lon1)
a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
return 2 * r * math.asin(math.sqrt(a))
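# Sanity check: one degree of latitude spans roughly 111.2 km, so
# _haversine_km(0.0, 0.0, 1.0, 0.0) is ~111.19.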
class MapUploadModule(FanoutModule):
"""Uploads heard ADVERT packets to the MeshCore community map."""
def __init__(self, config_id: str, config: dict, *, name: str = "") -> None:
super().__init__(config_id, config, name=name)
self._client: httpx.AsyncClient | None = None
# Per-pubkey rate limiting: pubkey_hex -> last_uploaded_advert_timestamp
self._seen: dict[str, int] = {}
async def start(self) -> None:
self._client = httpx.AsyncClient(
timeout=httpx.Timeout(15.0),
follow_redirects=True,
)
self._last_error = None
self._seen.clear()
async def stop(self) -> None:
if self._client:
await self._client.aclose()
self._client = None
self._last_error = None
async def on_raw(self, data: dict) -> None:
if data.get("payload_type") != "ADVERT":
return
raw_hex = data.get("data", "")
if not raw_hex:
return
try:
raw_bytes = bytes.fromhex(raw_hex)
except ValueError:
return
packet_info = parse_packet(raw_bytes)
if packet_info is None:
return
advert = parse_advertisement(packet_info.payload, raw_packet=raw_bytes)
if advert is None:
return
# Advert Ed25519 signature verification is intentionally skipped.
        # The radio validates packets before passing them to RemoteTerm.
# Only process repeaters (2) and rooms (3) — any other role is rejected
if advert.device_role not in _ALLOWED_DEVICE_ROLES:
return
# Skip nodes with no valid location — the decoder already nulls out
# impossible values, so None means either no location flag or bad coords.
if advert.lat is None or advert.lon is None:
logger.debug(
"MapUpload: skipping %s — no valid location",
advert.public_key[:12],
)
return
pubkey = advert.public_key.lower()
# Rate-limit: skip if this pubkey's timestamp hasn't advanced enough
last_seen = self._seen.get(pubkey)
if last_seen is not None:
if last_seen >= advert.timestamp:
logger.debug(
"MapUpload: skipping %s — possible replay (last=%d, advert=%d)",
pubkey[:12],
last_seen,
advert.timestamp,
)
return
if advert.timestamp < last_seen + _REUPLOAD_SECONDS:
logger.debug(
"MapUpload: skipping %s — within 1-hr rate-limit window (delta=%ds)",
pubkey[:12],
advert.timestamp - last_seen,
)
return
await self._upload(
pubkey, advert.timestamp, advert.device_role, raw_hex, advert.lat, advert.lon
)
async def _upload(
self,
pubkey: str,
advert_timestamp: int,
device_role: int,
raw_hex: str,
lat: float,
lon: float,
) -> None:
# Geofence check: if enabled, skip nodes outside the configured radius.
# The reference center is the radio's own lat/lon read live from self_info —
# no coordinates are stored in the fanout config. If the radio lat/lon is
# (0, 0) or unavailable the check is skipped transparently so uploads
# continue normally until the operator sets coordinates in radio settings.
geofence_dist_km: float | None = None
if self.config.get("geofence_enabled"):
try:
mc = radio_runtime.meshcore
sinfo = mc.self_info if mc else None
fence_lat = float((sinfo or {}).get("adv_lat", 0) or 0)
fence_lon = float((sinfo or {}).get("adv_lon", 0) or 0)
except Exception as exc:
logger.debug("MapUpload: could not read radio lat/lon for geofence: %s", exc)
fence_lat = 0.0
fence_lon = 0.0
if fence_lat == 0.0 and fence_lon == 0.0:
logger.debug(
"MapUpload: geofence skipped for %s — radio lat/lon not configured",
pubkey[:12],
)
else:
fence_radius_km = float(self.config.get("geofence_radius_km", 0) or 0)
geofence_dist_km = _haversine_km(fence_lat, fence_lon, lat, lon)
if geofence_dist_km > fence_radius_km:
logger.debug(
"MapUpload: skipping %s — outside geofence (%.2f km > %.2f km)",
pubkey[:12],
geofence_dist_km,
fence_radius_km,
)
return
private_key = get_private_key()
public_key = get_public_key()
if private_key is None or public_key is None:
logger.warning(
"MapUpload: private key not available — cannot sign upload for %s. "
"Ensure radio firmware has ENABLE_PRIVATE_KEY_EXPORT=1.",
pubkey[:12],
)
return
api_url = str(self.config.get("api_url", "") or _DEFAULT_API_URL).strip()
dry_run = bool(self.config.get("dry_run", True))
role_name = _ROLE_NAMES.get(device_role, f"role={device_role}")
params = _get_radio_params()
upload_data = {
"params": params,
"links": [f"meshcore://{raw_hex}"],
}
# Sign: SHA-256 the compact JSON, then Ed25519-sign the hash
json_str = json.dumps(upload_data, separators=(",", ":"))
data_hash = hashlib.sha256(json_str.encode()).digest()
scalar = private_key[:32]
prefix_bytes = private_key[32:]
signature = ed25519_sign_expanded(data_hash, scalar, prefix_bytes, public_key)
request_payload = {
"data": json_str,
"signature": signature.hex(),
"publicKey": public_key.hex(),
}
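        # Illustrative request body (truncated placeholder values): an Ed25519
        # signature is 64 bytes (128 hex chars), the public key 32 bytes (64 hex chars):
        #   {"data": "{\"params\":{...},\"links\":[\"meshcore://AB12...\"]}",
        #    "signature": "9f...e2", "publicKey": "ab...01"}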
if dry_run:
geofence_note = (
f" | geofence: {geofence_dist_km:.2f} km from observer"
if geofence_dist_km is not None
else ""
)
logger.info(
"MapUpload [DRY RUN] %s (%s)%s → would POST to %s\n payload: %s",
pubkey[:12],
role_name,
geofence_note,
api_url,
json.dumps(request_payload, separators=(",", ":")),
)
# Still update _seen so rate-limiting works during dry-run testing
self._seen[pubkey] = advert_timestamp
return
if not self._client:
return
try:
resp = await self._client.post(
api_url,
content=json.dumps(request_payload, separators=(",", ":")),
headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
self._seen[pubkey] = advert_timestamp
self._set_last_error(None)
logger.info(
"MapUpload: uploaded %s (%s) → HTTP %d",
pubkey[:12],
role_name,
resp.status_code,
)
except httpx.HTTPStatusError as exc:
self._set_last_error(f"HTTP {exc.response.status_code}")
logger.warning(
"MapUpload: server returned %d for %s: %s",
exc.response.status_code,
pubkey[:12],
exc.response.text[:200],
)
except httpx.RequestError as exc:
self._set_last_error(str(exc))
logger.warning("MapUpload: request error for %s: %s", pubkey[:12], exc)
@property
def status(self) -> str:
if self._client is None:
return "disconnected"
if self.last_error:
return "error"
return "connected"
@@ -0,0 +1,91 @@
"""MQTT publisher for forwarding mesh network events to an MQTT broker."""
from __future__ import annotations
import logging
import ssl
from typing import Any, Protocol
from app.fanout.mqtt_base import BaseMqttPublisher
logger = logging.getLogger(__name__)
class PrivateMqttSettings(Protocol):
"""Attributes expected on the settings object for the private MQTT publisher."""
mqtt_broker_host: str
mqtt_broker_port: int
mqtt_username: str
mqtt_password: str
mqtt_use_tls: bool
mqtt_tls_insecure: bool
mqtt_publish_messages: bool
mqtt_publish_raw_packets: bool
class MqttPublisher(BaseMqttPublisher):
"""Manages an MQTT connection and publishes mesh network events."""
_backoff_max = 30
_log_prefix = "MQTT"
def _is_configured(self) -> bool:
"""Check if MQTT is configured and has something to publish."""
s: PrivateMqttSettings | None = self._settings
return bool(
s and s.mqtt_broker_host and (s.mqtt_publish_messages or s.mqtt_publish_raw_packets)
)
def _build_client_kwargs(self, settings: object) -> dict[str, Any]:
s: PrivateMqttSettings = settings # type: ignore[assignment]
return {
"hostname": s.mqtt_broker_host,
"port": s.mqtt_broker_port,
"username": s.mqtt_username or None,
"password": s.mqtt_password or None,
"tls_context": self._build_tls_context(s),
}
def _on_connected(self, settings: object) -> tuple[str, str]:
s: PrivateMqttSettings = settings # type: ignore[assignment]
return ("MQTT connected", f"{s.mqtt_broker_host}:{s.mqtt_broker_port}")
def _on_error(self) -> tuple[str, str]:
return ("MQTT connection failure", "Please correct the settings or disable.")
@staticmethod
def _build_tls_context(settings: PrivateMqttSettings) -> ssl.SSLContext | None:
"""Build TLS context from settings, or None if TLS is disabled."""
if not settings.mqtt_use_tls:
return None
ctx = ssl.create_default_context()
if settings.mqtt_tls_insecure:
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
return ctx
def _build_message_topic(prefix: str, data: dict[str, Any]) -> str:
"""Build MQTT topic for a decrypted message."""
msg_type = data.get("type", "")
conversation_key = data.get("conversation_key", "unknown")
if msg_type == "PRIV":
return f"{prefix}/dm:{conversation_key}"
elif msg_type == "CHAN":
return f"{prefix}/gm:{conversation_key}"
return f"{prefix}/message:{conversation_key}"
def _build_raw_packet_topic(prefix: str, data: dict[str, Any]) -> str:
"""Build MQTT topic for a raw packet."""
info = data.get("decrypted_info")
if info and isinstance(info, dict):
contact_key = info.get("contact_key")
channel_key = info.get("channel_key")
if contact_key:
return f"{prefix}/raw/dm:{contact_key}"
if channel_key:
return f"{prefix}/raw/gm:{channel_key}"
return f"{prefix}/raw/unrouted"
@@ -0,0 +1,301 @@
"""Shared base class for MQTT publisher lifecycle management.
Both ``MqttPublisher`` (private broker) and ``CommunityMqttPublisher``
(community aggregator) inherit from ``BaseMqttPublisher``, which owns
the connection-loop skeleton, reconnect/backoff logic, and publish method.
Subclasses override a small set of hooks to control configuration checks,
client construction, toast messages, and optional wait-loop behavior.
"""
from __future__ import annotations
import asyncio
import json
import logging
import sys
import time
from abc import ABC, abstractmethod
from typing import Any
import aiomqtt
logger = logging.getLogger(__name__)
_BACKOFF_MIN = 5
def _format_error_detail(exc: Exception) -> str:
"""Return a short operator-facing error string."""
message = str(exc).strip()
if message:
return message
return type(exc).__name__
def _broadcast_health() -> None:
"""Push updated health (including MQTT status) to all WS clients."""
from app.services.radio_runtime import radio_runtime as radio_manager
from app.websocket import broadcast_health
broadcast_health(radio_manager.is_connected, radio_manager.connection_info)
class BaseMqttPublisher(ABC):
"""Base class for MQTT publishers with shared lifecycle management.
Subclasses implement the abstract hooks to control configuration checks,
client construction, toast messages, and optional wait-loop behavior.
The settings type is duck-typed — each subclass defines a Protocol
describing the attributes it expects (e.g. ``PrivateMqttSettings``,
``CommunityMqttSettings``). Callers pass ``SimpleNamespace`` instances
that satisfy the protocol.
"""
_backoff_max: int = 30
_log_prefix: str = "MQTT"
_not_configured_timeout: float | None = None # None = block forever
def __init__(self) -> None:
self._client: aiomqtt.Client | None = None
self._task: asyncio.Task[None] | None = None
self._settings: Any = None
self._settings_version: int = 0
self._version_event: asyncio.Event = asyncio.Event()
self.connected: bool = False
self.integration_name: str = ""
self._last_error: str | None = None
def set_integration_name(self, name: str) -> None:
"""Attach the configured fanout-module name for operator-facing logs."""
self.integration_name = name.strip()
def _integration_label(self) -> str:
"""Return a concise label for logs, including the configured module name."""
if self.integration_name:
return f"{self._log_prefix} [{self.integration_name}]"
return self._log_prefix
@property
def last_error(self) -> str | None:
"""Return the most recent retained connection/publish error."""
return self._last_error
# ── Lifecycle ──────────────────────────────────────────────────────
async def start(self, settings: object) -> None:
"""Start the background connection loop."""
self._settings = settings
self._last_error = None
self._settings_version += 1
self._version_event.set()
if self._task is None or self._task.done():
self._task = asyncio.create_task(self._connection_loop())
async def stop(self) -> None:
"""Cancel the background task and disconnect."""
if self._task and not self._task.done():
self._task.cancel()
try:
await self._task
except asyncio.CancelledError:
pass
self._task = None
self._client = None
self.connected = False
self._last_error = None
async def restart(self, settings: object) -> None:
"""Called when settings change — stop + start."""
await self.stop()
await self.start(settings)
async def publish(self, topic: str, payload: dict[str, Any], *, retain: bool = False) -> None:
"""Publish a JSON payload. Drops silently if not connected."""
if self._client is None or not self.connected:
return
try:
await self._client.publish(topic, json.dumps(payload), retain=retain)
except Exception as e:
logger.warning(
"%s publish failed on %s. This is usually transient network noise; "
"if it self-resolves and reconnects, it is generally not a concern. Persistent errors may indicate a problem with your network connection or MQTT broker. Original error: %s",
self._integration_label(),
topic,
e,
exc_info=True,
)
self.connected = False
self._last_error = _format_error_detail(e)
# Wake the connection loop so it exits the wait and reconnects
self._settings_version += 1
self._version_event.set()
# ── Abstract hooks ─────────────────────────────────────────────────
@abstractmethod
def _is_configured(self) -> bool:
"""Return True when this publisher should attempt to connect."""
@abstractmethod
def _build_client_kwargs(self, settings: object) -> dict[str, Any]:
"""Return the keyword arguments for ``aiomqtt.Client(...)``."""
@abstractmethod
def _on_connected(self, settings: object) -> tuple[str, str]:
"""Return ``(title, detail)`` for the success toast on connect."""
@abstractmethod
def _on_error(self) -> tuple[str, str]:
"""Return ``(title, detail)`` for the error toast on connect failure."""
# ── Optional hooks ─────────────────────────────────────────────────
def _should_break_wait(self, elapsed: float) -> bool:
"""Return True to break the inner wait (e.g. token expiry)."""
return False
async def _pre_connect(self, settings: object) -> bool:
"""Called before connecting. Return True to proceed, False to retry."""
return True
def _on_not_configured(self) -> None:
"""Called each time the loop finds the publisher not configured."""
return # no-op by default; subclasses may override
async def _on_connected_async(self, settings: object) -> None:
"""Async hook called after connection succeeds (before health broadcast).
Subclasses can override to publish messages immediately after connecting.
"""
return # no-op by default
async def _on_periodic_wake(self, elapsed: float) -> None:
"""Called every ~60s while connected. Subclasses may override."""
return
# ── Connection loop ────────────────────────────────────────────────
async def _connection_loop(self) -> None:
"""Background loop: connect, wait for version change, reconnect on failure."""
from app.websocket import broadcast_error, broadcast_success
backoff = _BACKOFF_MIN
while True:
if not self._is_configured():
self._on_not_configured()
self.connected = False
self._client = None
self._version_event.clear()
try:
if self._not_configured_timeout is None:
await self._version_event.wait()
else:
await asyncio.wait_for(
self._version_event.wait(),
timeout=self._not_configured_timeout,
)
except TimeoutError:
continue
except asyncio.CancelledError:
return
continue
settings = self._settings
assert settings is not None # guaranteed by _is_configured()
version_at_connect = self._settings_version
try:
if not await self._pre_connect(settings):
continue
client_kwargs = self._build_client_kwargs(settings)
connect_time = time.monotonic()
async with aiomqtt.Client(**client_kwargs) as client:
self._client = client
self.connected = True
self._last_error = None
backoff = _BACKOFF_MIN
title, detail = self._on_connected(settings)
broadcast_success(title, detail)
await self._on_connected_async(settings)
_broadcast_health()
# Wait until cancelled or settings version changes.
# The 60s timeout is a housekeeping wake-up; actual connection
# liveness is handled by paho-mqtt's keepalive mechanism.
while self._settings_version == version_at_connect:
self._version_event.clear()
try:
await asyncio.wait_for(self._version_event.wait(), timeout=60)
except TimeoutError:
elapsed = time.monotonic() - connect_time
await self._on_periodic_wake(elapsed)
if self._should_break_wait(elapsed):
break
continue
# async with exited — client is now closed
self._client = None
self.connected = False
_broadcast_health()
except asyncio.CancelledError:
self.connected = False
self._client = None
return
except Exception as e:
self.connected = False
self._client = None
self._last_error = _format_error_detail(e)
# Windows ProactorEventLoop does not implement add_reader /
# add_writer, which paho-mqtt requires. The failure can
# surface as a direct NotImplementedError (add_writer in
# __aenter__) or as a generic timeout (add_reader fails
# inside an event-loop callback, so paho never hears back).
# Either way, if we're on Windows with Proactor the root
# cause is the same and retrying won't help.
_on_proactor = (
sys.platform == "win32"
and type(asyncio.get_running_loop()).__name__ == "ProactorEventLoop"
)
if _on_proactor:
broadcast_error(
"MQTT unavailable — Windows event loop incompatible",
"The default Windows event loop (ProactorEventLoop) does "
"not support MQTT. Add --loop none to your uvicorn "
"command and restart. See README.md for details.",
)
_broadcast_health()
logger.error(
"%s cannot run: Windows ProactorEventLoop does not "
"implement add_reader/add_writer required by paho-mqtt. "
"Restart uvicorn with '--loop none' to use "
"SelectorEventLoop instead. Giving up (will not retry).",
self._integration_label(),
)
return
title, detail = self._on_error()
broadcast_error(title, detail)
_broadcast_health()
logger.warning(
"%s connection error. This is usually transient network noise; "
"if it self-resolves, it is generally not a concern: %s "
"(reconnecting in %ds). If this error persists, check your network connection and MQTT broker status.",
self._integration_label(),
e,
backoff,
exc_info=True,
)
try:
await asyncio.sleep(backoff)
except asyncio.CancelledError:
return
backoff = min(backoff * 2, self._backoff_max)
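# Editor's sketch (hedged): a minimal concrete subclass of the abstract base above.
# The base's class-definition line falls outside this excerpt, so MqttPublisherBase
# and DemoPublisher are hypothetical names; shown commented-out as illustration only.
#
#   class DemoPublisher(MqttPublisherBase):
#       def _is_configured(self) -> bool:
#           return bool(getattr(self._settings, "broker_host", ""))
#       def _build_client_kwargs(self, settings: object) -> dict[str, Any]:
#           # hostname/port are real aiomqtt.Client keyword arguments
#           return {"hostname": getattr(settings, "broker_host"),
#                   "port": getattr(settings, "broker_port", 1883)}
#       def _on_connected(self, settings: object) -> tuple[str, str]:
#           return ("MQTT connected", f"Connected to {getattr(settings, 'broker_host')}")
#       def _on_error(self) -> tuple[str, str]:
#           return ("MQTT connection failed", self.last_error or "unknown error")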
+145
@@ -0,0 +1,145 @@
"""Fanout module wrapping the community MQTT publisher."""
from __future__ import annotations
import logging
import re
import string
from types import SimpleNamespace
from typing import Any
from app.fanout.base import FanoutModule
from app.fanout.community_mqtt import CommunityMqttPublisher, _format_raw_packet
logger = logging.getLogger(__name__)
_IATA_RE = re.compile(r"^[A-Z]{3}$")
_DEFAULT_PACKET_TOPIC_TEMPLATE = "meshcore/{IATA}/{PUBLIC_KEY}/packets"
_TOPIC_TEMPLATE_FIELD_CANONICAL = {
"iata": "IATA",
"public_key": "PUBLIC_KEY",
}
def _normalize_topic_template(topic_template: str) -> str:
"""Normalize packet topic template fields to canonical uppercase placeholders."""
template = topic_template.strip() or _DEFAULT_PACKET_TOPIC_TEMPLATE
parts: list[str] = []
for literal_text, field_name, format_spec, conversion in string.Formatter().parse(template):
parts.append(literal_text)
if field_name is None:
continue
normalized_field = _TOPIC_TEMPLATE_FIELD_CANONICAL.get(field_name.lower())
if normalized_field is None:
raise ValueError(f"Unsupported topic template field(s): {field_name}")
replacement = ["{", normalized_field]
if conversion:
replacement.extend(["!", conversion])
if format_spec:
replacement.extend([":", format_spec])
replacement.append("}")
parts.append("".join(replacement))
return "".join(parts)
def _config_to_settings(config: dict) -> SimpleNamespace:
"""Map a fanout config blob to a settings namespace for the CommunityMqttPublisher."""
return SimpleNamespace(
community_mqtt_enabled=True,
community_mqtt_broker_host=config.get("broker_host", "mqtt-us-v1.letsmesh.net"),
community_mqtt_broker_port=config.get("broker_port", 443),
community_mqtt_transport=config.get("transport", "websockets"),
community_mqtt_use_tls=config.get("use_tls", True),
community_mqtt_tls_verify=config.get("tls_verify", True),
community_mqtt_auth_mode=config.get("auth_mode", "token"),
community_mqtt_username=config.get("username", ""),
community_mqtt_password=config.get("password", ""),
community_mqtt_iata=config.get("iata", ""),
community_mqtt_email=config.get("email", ""),
community_mqtt_token_audience=config.get("token_audience", ""),
)
def _render_packet_topic(topic_template: str, *, iata: str, public_key: str) -> str:
"""Render the configured raw-packet publish topic."""
template = _normalize_topic_template(topic_template)
return template.format(IATA=iata, PUBLIC_KEY=public_key)
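# Illustrative usage of the two helpers above (editor's note; values are hypothetical):
#   _normalize_topic_template("meshcore/{iata}/{public_key}/packets")
#     -> "meshcore/{IATA}/{PUBLIC_KEY}/packets"
#   _render_packet_topic("meshcore/{iata}/{public_key}/packets", iata="SFO", public_key="AB12CD34")
#     -> "meshcore/SFO/AB12CD34/packets"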
class MqttCommunityModule(FanoutModule):
"""Wraps a CommunityMqttPublisher for community packet sharing."""
def __init__(self, config_id: str, config: dict, *, name: str = "") -> None:
super().__init__(config_id, config, name=name)
self._publisher = CommunityMqttPublisher()
self._publisher.set_integration_name(name or config_id)
async def start(self) -> None:
settings = _config_to_settings(self.config)
await self._publisher.start(settings)
async def stop(self) -> None:
await self._publisher.stop()
async def on_message(self, data: dict) -> None:
# Community MQTT only publishes raw packets, not decoded messages.
pass
async def on_raw(self, data: dict) -> None:
if not self._publisher.connected or self._publisher._settings is None:
return
await _publish_community_packet(self._publisher, self.config, data)
@property
def status(self) -> str:
if self._publisher._is_configured():
if self._publisher.last_error:
return "error"
return "connected" if self._publisher.connected else "disconnected"
return "disconnected"
@property
def last_error(self) -> str | None:
return self._publisher.last_error
async def _publish_community_packet(
publisher: CommunityMqttPublisher,
config: dict,
data: dict[str, Any],
) -> None:
"""Format and publish a raw packet to the community broker."""
try:
from app.keystore import get_public_key
from app.services.radio_runtime import radio_runtime as radio_manager
public_key = get_public_key()
if public_key is None:
return
pubkey_hex = public_key.hex().upper()
device_name = ""
if radio_manager.meshcore and radio_manager.meshcore.self_info:
device_name = radio_manager.meshcore.self_info.get("name", "")
packet = _format_raw_packet(data, device_name, pubkey_hex)
iata = config.get("iata", "").upper().strip()
if not _IATA_RE.fullmatch(iata):
logger.debug("Community MQTT: skipping publish — no valid IATA code configured")
return
topic = _render_packet_topic(
str(config.get("topic_template", _DEFAULT_PACKET_TOPIC_TEMPLATE)),
iata=iata,
public_key=pubkey_hex,
)
await publisher.publish(topic, packet)
except Exception as e:
logger.warning("Community MQTT broadcast error: %s", e, exc_info=True)
+68
@@ -0,0 +1,68 @@
"""Fanout module wrapping the private MQTT publisher."""
from __future__ import annotations
import logging
from types import SimpleNamespace
from app.fanout.base import FanoutModule
from app.fanout.mqtt import MqttPublisher, _build_message_topic, _build_raw_packet_topic
logger = logging.getLogger(__name__)
def _config_to_settings(config: dict) -> SimpleNamespace:
"""Map a fanout config blob to a settings namespace for the MqttPublisher."""
return SimpleNamespace(
mqtt_broker_host=config.get("broker_host", ""),
mqtt_broker_port=config.get("broker_port", 1883),
mqtt_username=config.get("username", ""),
mqtt_password=config.get("password", ""),
mqtt_use_tls=config.get("use_tls", False),
mqtt_tls_insecure=config.get("tls_insecure", False),
mqtt_topic_prefix=config.get("topic_prefix", "meshcore"),
mqtt_publish_messages=True,
mqtt_publish_raw_packets=True,
)
class MqttPrivateModule(FanoutModule):
"""Wraps an MqttPublisher instance for private MQTT forwarding."""
def __init__(self, config_id: str, config: dict, *, name: str = "") -> None:
super().__init__(config_id, config, name=name)
self._publisher = MqttPublisher()
self._publisher.set_integration_name(name or config_id)
async def start(self) -> None:
settings = _config_to_settings(self.config)
await self._publisher.start(settings)
async def stop(self) -> None:
await self._publisher.stop()
async def on_message(self, data: dict) -> None:
if not self._publisher.connected or self._publisher._settings is None:
return
prefix = self.config.get("topic_prefix", "meshcore")
topic = _build_message_topic(prefix, data)
await self._publisher.publish(topic, data)
async def on_raw(self, data: dict) -> None:
if not self._publisher.connected or self._publisher._settings is None:
return
prefix = self.config.get("topic_prefix", "meshcore")
topic = _build_raw_packet_topic(prefix, data)
await self._publisher.publish(topic, data)
@property
def status(self) -> str:
if not self.config.get("broker_host"):
return "disconnected"
if self._publisher.last_error:
return "error"
return "connected" if self._publisher.connected else "disconnected"
@property
def last_error(self) -> str | None:
return self._publisher.last_error
+163
@@ -0,0 +1,163 @@
"""Fanout module for Amazon SQS delivery."""
from __future__ import annotations
import asyncio
import hashlib
import json
import logging
from functools import partial
from urllib.parse import urlparse
import boto3
from botocore.exceptions import BotoCoreError, ClientError
from app.fanout.base import FanoutModule
logger = logging.getLogger(__name__)
def _build_payload(data: dict, *, event_type: str) -> str:
"""Serialize a fanout event into a stable JSON envelope."""
return json.dumps(
{
"event_type": event_type,
"data": data,
},
separators=(",", ":"),
sort_keys=True,
)
def _infer_region_from_queue_url(queue_url: str) -> str | None:
"""Infer AWS region from a standard SQS queue URL host when possible."""
host = urlparse(queue_url).hostname or ""
if not host:
return None
parts = host.split(".")
if len(parts) < 4 or parts[0] != "sqs":
return None
if parts[2] != "amazonaws":
return None
if parts[3] not in {"com", "com.cn"}:
return None
region = parts[1].strip()
return region or None
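# Illustrative behavior (editor's note; URLs are hypothetical):
#   _infer_region_from_queue_url("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue")
#     -> "us-east-1"
#   _infer_region_from_queue_url("http://localhost:4566/000000000000/my-queue")
#     -> None  (boto3 falls back to its normal region resolution)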
def _is_fifo_queue(queue_url: str) -> bool:
"""Return True when the configured queue URL points at an SQS FIFO queue."""
return queue_url.rstrip("/").endswith(".fifo")
def _build_message_group_id(data: dict, *, event_type: str) -> str:
"""Choose a stable FIFO group ID from the event identity."""
if event_type == "message":
conversation_key = str(data.get("conversation_key", "")).strip()
if conversation_key:
return f"message-{conversation_key}"
return "message-default"
return "raw-packets"
def _build_message_deduplication_id(data: dict, *, event_type: str, body: str) -> str:
"""Choose a deterministic deduplication ID for FIFO queues."""
if event_type == "message":
message_id = data.get("id")
if isinstance(message_id, int):
return f"message-{message_id}"
else:
observation_id = data.get("observation_id")
if isinstance(observation_id, str) and observation_id.strip():
return f"raw-{observation_id}"
packet_id = data.get("id")
if isinstance(packet_id, int):
return f"raw-{packet_id}"
return hashlib.sha256(body.encode()).hexdigest()
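# Illustrative FIFO attributes (editor's note; event data is hypothetical):
#   _is_fifo_queue("https://sqs.us-east-1.amazonaws.com/123456789012/events.fifo")  -> True
#   _build_message_group_id({"conversation_key": "abc123"}, event_type="message")   -> "message-abc123"
#   _build_message_deduplication_id({"id": 42}, event_type="message", body="{}")    -> "message-42"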
class SqsModule(FanoutModule):
"""Delivers message and raw-packet events to an Amazon SQS queue."""
def __init__(self, config_id: str, config: dict, *, name: str = "") -> None:
super().__init__(config_id, config, name=name)
self._client = None
async def start(self) -> None:
kwargs: dict[str, str] = {}
queue_url = str(self.config.get("queue_url", "")).strip()
region_name = str(self.config.get("region_name", "")).strip()
endpoint_url = str(self.config.get("endpoint_url", "")).strip()
access_key_id = str(self.config.get("access_key_id", "")).strip()
secret_access_key = str(self.config.get("secret_access_key", "")).strip()
session_token = str(self.config.get("session_token", "")).strip()
if not region_name:
region_name = _infer_region_from_queue_url(queue_url) or ""
if region_name:
kwargs["region_name"] = region_name
if endpoint_url:
kwargs["endpoint_url"] = endpoint_url
if access_key_id and secret_access_key:
kwargs["aws_access_key_id"] = access_key_id
kwargs["aws_secret_access_key"] = secret_access_key
if session_token:
kwargs["aws_session_token"] = session_token
self._client = boto3.client("sqs", **kwargs)
self._last_error = None
async def stop(self) -> None:
self._client = None
async def on_message(self, data: dict) -> None:
await self._send(data, event_type="message")
async def on_raw(self, data: dict) -> None:
await self._send(data, event_type="raw_packet")
async def _send(self, data: dict, *, event_type: str) -> None:
if self._client is None:
return
queue_url = str(self.config.get("queue_url", "")).strip()
if not queue_url:
return
body = _build_payload(data, event_type=event_type)
request_kwargs: dict[str, object] = {
"QueueUrl": queue_url,
"MessageBody": body,
"MessageAttributes": {
"event_type": {
"DataType": "String",
"StringValue": event_type,
}
},
}
if _is_fifo_queue(queue_url):
request_kwargs["MessageGroupId"] = _build_message_group_id(data, event_type=event_type)
request_kwargs["MessageDeduplicationId"] = _build_message_deduplication_id(
data, event_type=event_type, body=body
)
try:
await asyncio.to_thread(partial(self._client.send_message, **request_kwargs))
self._set_last_error(None)
except (ClientError, BotoCoreError) as exc:
self._set_last_error(str(exc))
logger.warning("SQS %s send error: %s", self.config_id, exc)
except Exception as exc:
self._set_last_error(str(exc))
logger.exception("Unexpected SQS send error for %s", self.config_id)
@property
def status(self) -> str:
if not str(self.config.get("queue_url", "")).strip():
return "disconnected"
if self.last_error:
return "error"
return "connected"
+83
@@ -0,0 +1,83 @@
"""Fanout module for webhook (HTTP POST) delivery."""
from __future__ import annotations
import hashlib
import hmac
import json
import logging
import httpx
from app.fanout.base import FanoutModule
logger = logging.getLogger(__name__)
class WebhookModule(FanoutModule):
"""Delivers message data to an HTTP endpoint via POST (or configurable method)."""
def __init__(self, config_id: str, config: dict, *, name: str = "") -> None:
super().__init__(config_id, config, name=name)
self._client: httpx.AsyncClient | None = None
async def start(self) -> None:
self._client = httpx.AsyncClient(timeout=httpx.Timeout(10.0))
self._last_error = None
async def stop(self) -> None:
if self._client:
await self._client.aclose()
self._client = None
async def on_message(self, data: dict) -> None:
await self._send(data, event_type="message")
async def _send(self, data: dict, *, event_type: str) -> None:
if not self._client:
return
url = self.config.get("url", "")
if not url:
return
method = self.config.get("method", "POST").upper()
extra_headers = self.config.get("headers", {})
hmac_secret = self.config.get("hmac_secret", "")
hmac_header = self.config.get("hmac_header", "X-Webhook-Signature")
headers = {
"Content-Type": "application/json",
"X-Webhook-Event": event_type,
**extra_headers,
}
body_bytes = json.dumps(data, separators=(",", ":"), sort_keys=True).encode()
if hmac_secret:
sig = hmac.new(hmac_secret.encode(), body_bytes, hashlib.sha256).hexdigest()
headers[hmac_header or "X-Webhook-Signature"] = f"sha256={sig}"
try:
resp = await self._client.request(method, url, content=body_bytes, headers=headers)
resp.raise_for_status()
self._set_last_error(None)
except httpx.HTTPStatusError as exc:
self._set_last_error(f"HTTP {exc.response.status_code}")
logger.warning(
"Webhook %s returned %s for %s",
self.config_id,
exc.response.status_code,
url,
)
except httpx.RequestError as exc:
self._set_last_error(str(exc))
logger.warning("Webhook %s request error: %s", self.config_id, exc)
@property
def status(self) -> str:
if not self.config.get("url"):
return "disconnected"
if self.last_error:
return "error"
return "connected"
+188 -35
@@ -1,49 +1,115 @@
import logging
from pathlib import Path
from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import FileResponse, JSONResponse
from fastapi.staticfiles import StaticFiles
logger = logging.getLogger(__name__)
INDEX_CACHE_CONTROL = "no-store"
ASSET_CACHE_CONTROL = "public, max-age=31536000, immutable"
STATIC_FILE_CACHE_CONTROL = "public, max-age=3600"
FRONTEND_BUILD_INSTRUCTIONS = (
"Run 'cd frontend && npm install && npm run build', "
"or use a release zip that includes frontend/prebuilt."
)
class CacheControlStaticFiles(StaticFiles):
"""StaticFiles variant that adds a fixed Cache-Control header."""
def __init__(self, *args, cache_control: str, **kwargs) -> None:
super().__init__(*args, **kwargs)
self.cache_control = cache_control
def file_response(self, *args, **kwargs):
response = super().file_response(*args, **kwargs)
response.headers["Cache-Control"] = self.cache_control
return response
def _file_response(path: Path, *, cache_control: str) -> FileResponse:
return FileResponse(path, headers={"Cache-Control": cache_control})
def _is_index_file(path: Path, index_file: Path) -> bool:
"""Return True when the requested file is the SPA shell index.html."""
return path == index_file
def _resolve_request_base(request: Request) -> str:
"""Resolve the external base URL, honoring common reverse-proxy headers.
Returns a URL like ``https://host:8000/meshcore/`` (always trailing-slash)
so callers can append paths directly.
Recognized headers:
- ``X-Forwarded-Proto`` + ``X-Forwarded-Host``: override scheme and host.
- ``X-Forwarded-Prefix`` (or ``X-Forwarded-Path``): sub-path prefix added
by the proxy (e.g. ``/meshcore``).
"""
forwarded_proto = request.headers.get("x-forwarded-proto")
forwarded_host = request.headers.get("x-forwarded-host")
if forwarded_proto and forwarded_host:
proto = forwarded_proto.split(",")[0].strip()
host = forwarded_host.split(",")[0].strip()
if proto and host:
origin = f"{proto}://{host}"
else:
origin = str(request.base_url).rstrip("/")
else:
origin = str(request.base_url).rstrip("/")
# Sub-path prefix (e.g. /meshcore) communicated by the reverse proxy
prefix = (
(request.headers.get("x-forwarded-prefix") or request.headers.get("x-forwarded-path") or "")
.strip()
.rstrip("/")
)
# a bare "meshcore" header value still needs its leading slash to form a valid URL
if prefix and not prefix.startswith("/"):
prefix = f"/{prefix}"
return f"{origin}{prefix}/"
def _validate_frontend_dir(frontend_dir: Path, *, log_failures: bool = True) -> tuple[bool, Path]:
"""Resolve and validate a built frontend directory."""
frontend_dir = frontend_dir.resolve()
index_file = frontend_dir / "index.html"
if not frontend_dir.exists():
if log_failures:
logger.error("Frontend build directory not found at %s.", frontend_dir)
return False, frontend_dir
if not frontend_dir.is_dir():
if log_failures:
logger.error("Frontend build path is not a directory: %s.", frontend_dir)
return False, frontend_dir
if not index_file.exists():
if log_failures:
logger.error("Frontend index file not found at %s.", index_file)
return False, frontend_dir
return True, frontend_dir
def register_frontend_static_routes(app: FastAPI, frontend_dir: Path) -> bool:
"""Register frontend static file routes if a built frontend is available.
"""Register frontend static file routes if a built frontend is available."""
valid, frontend_dir = _validate_frontend_dir(frontend_dir)
if not valid:
return False
Returns True when routes are registered, False when frontend files are
missing/incomplete. Missing frontend files are logged but are not fatal.
"""
frontend_dir = frontend_dir.resolve()
index_file = frontend_dir / "index.html"
assets_dir = frontend_dir / "assets"
if not frontend_dir.exists():
logger.error(
"Frontend build directory not found at %s. "
"Run 'cd frontend && npm run build'. API will continue without frontend routes.",
frontend_dir,
)
return False
if not frontend_dir.is_dir():
logger.error(
"Frontend build path is not a directory: %s. "
"API will continue without frontend routes.",
frontend_dir,
)
return False
if not index_file.exists():
logger.error(
"Frontend index file not found at %s. "
"Run 'cd frontend && npm run build'. API will continue without frontend routes.",
index_file,
)
return False
if assets_dir.exists() and assets_dir.is_dir():
app.mount("/assets", StaticFiles(directory=assets_dir), name="assets")
app.mount(
"/assets",
CacheControlStaticFiles(directory=assets_dir, cache_control=ASSET_CACHE_CONTROL),
name="assets",
)
else:
logger.warning(
"Frontend assets directory missing at %s; /assets files will not be served",
@@ -53,11 +119,58 @@ def register_frontend_static_routes(app: FastAPI, frontend_dir: Path) -> bool:
@app.get("/")
async def serve_index():
"""Serve the frontend index.html."""
return _file_response(index_file, cache_control=INDEX_CACHE_CONTROL)
@app.get("/site.webmanifest")
async def serve_webmanifest(request: Request):
"""Serve a dynamic web manifest using the active request base URL."""
base = _resolve_request_base(request)
manifest = {
"name": "RemoteTerm for MeshCore",
"short_name": "RemoteTerm",
"id": base,
"start_url": base,
"scope": base,
"display": "standalone",
"display_override": ["window-controls-overlay", "standalone", "fullscreen"],
"theme_color": "#111419",
"background_color": "#111419",
"icons": [
{
"src": f"{base}web-app-manifest-192x192.png",
"sizes": "192x192",
"type": "image/png",
"purpose": "maskable",
},
{
"src": f"{base}web-app-manifest-512x512.png",
"sizes": "512x512",
"type": "image/png",
"purpose": "maskable",
},
],
}
return JSONResponse(
manifest,
media_type="application/manifest+json",
headers={"Cache-Control": "no-store"},
)
@app.get("/{path:path}")
async def serve_frontend(path: str):
"""Serve frontend files, falling back to index.html for SPA routing."""
if path == "api" or path.startswith("api/"):
return JSONResponse(
status_code=404,
content={
"detail": (
"API endpoint not found. If you are seeing this in response to a "
"frontend request, you may be running a newer frontend with an older "
"backend or vice versa. A full update is suggested."
)
},
)
file_path = (frontend_dir / path).resolve()
try:
file_path.relative_to(frontend_dir)
@@ -65,9 +178,49 @@ def register_frontend_static_routes(app: FastAPI, frontend_dir: Path) -> bool:
raise HTTPException(status_code=404, detail="Not found") from None
if file_path.exists() and file_path.is_file():
cache_control = (
INDEX_CACHE_CONTROL
if _is_index_file(file_path, index_file)
else STATIC_FILE_CACHE_CONTROL
)
return _file_response(file_path, cache_control=cache_control)
return _file_response(index_file, cache_control=INDEX_CACHE_CONTROL)
logger.info("Serving frontend from %s", frontend_dir)
return True
def register_first_available_frontend_static_routes(
app: FastAPI, frontend_dirs: list[Path]
) -> Path | None:
"""Register frontend routes from the first valid build directory."""
for i, candidate in enumerate(frontend_dirs):
valid, resolved_candidate = _validate_frontend_dir(candidate, log_failures=False)
if not valid:
if i < len(frontend_dirs) - 1:
logger.warning("Frontend build at %s was unusable; trying fallback", resolved_candidate)
continue
if register_frontend_static_routes(app, resolved_candidate):
logger.info("Selected frontend build directory %s", resolved_candidate)
return resolved_candidate
logger.error(
"No usable frontend build found. Searched: %s. %s API will continue without frontend routes.",
", ".join(str(path.resolve()) for path in frontend_dirs),
FRONTEND_BUILD_INSTRUCTIONS,
)
return None
def register_frontend_missing_fallback(app: FastAPI) -> None:
"""Register a fallback route that tells the user to build the frontend."""
@app.get("/", include_in_schema=False)
async def frontend_not_built():
return JSONResponse(
status_code=404,
content={"detail": f"Frontend not built. {FRONTEND_BUILD_INSTRUCTIONS}"},
)
+47 -1
@@ -1,14 +1,18 @@
"""
Ephemeral keystore for storing sensitive keys in memory, plus the Ed25519
signing primitive used by fanout modules that need to sign requests with the
radio's own key.
The private key is stored in memory only and is never persisted to disk.
It's exported from the radio on startup and reconnect, then used for
server-side decryption of direct messages.
"""
import hashlib
import logging
from typing import TYPE_CHECKING
import nacl.bindings
from meshcore import EventType
from app.decoder import derive_public_key
@@ -18,11 +22,47 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
NO_EVENT_RECEIVED_GUIDANCE = (
"Radio command channel is unresponsive (no_event_received). Ensure that your firmware is not "
"incompatible, outdated, or wrong-mode (e.g. repeater, not client), and that "
"serial/TCP/BLE connectivity is successful (try another app and see if that one works?). The app cannot proceed because it cannot "
"issue commands to the radio."
)
# Ed25519 group order (L) — used in the expanded signing primitive below
_L = 2**252 + 27742317777372353535851937790883648493
# In-memory storage for the private key and derived public key
_private_key: bytes | None = None
_public_key: bytes | None = None
def ed25519_sign_expanded(message: bytes, scalar: bytes, prefix: bytes, public_key: bytes) -> bytes:
"""Sign a message using MeshCore's expanded Ed25519 key format.
MeshCore stores 64-byte keys as scalar(32) || prefix(32). Standard
Ed25519 libraries expect seed format and would re-SHA-512 the key, so we
perform the signing manually using the already-expanded key material.
Port of meshcore-packet-capture's ed25519_sign_with_expanded_key().
"""
r = int.from_bytes(hashlib.sha512(prefix + message).digest(), "little") % _L
R = nacl.bindings.crypto_scalarmult_ed25519_base_noclamp(r.to_bytes(32, "little"))
k = int.from_bytes(hashlib.sha512(R + public_key + message).digest(), "little") % _L
s = (r + k * int.from_bytes(scalar, "little")) % _L
return R + s.to_bytes(32, "little")
def clear_keys() -> None:
"""Clear any stored private/public key material from memory."""
global _private_key, _public_key
had_key = _private_key is not None or _public_key is not None
_private_key = None
_public_key = None
if had_key:
logger.info("Cleared in-memory keystore")
def set_private_key(key: bytes) -> None:
"""Store the private key in memory and derive the public key.
@@ -91,8 +131,14 @@ async def export_and_store_private_key(mc: "MeshCore") -> bool:
)
return False
else:
reason = result.payload.get("reason") if isinstance(result.payload, dict) else None
if result.type == EventType.ERROR and reason == "no_event_received":
logger.error("%s Raw response: %s", NO_EVENT_RECEIVED_GUIDANCE, result.payload)
raise RuntimeError(NO_EVENT_RECEIVED_GUIDANCE)
logger.error("Failed to export private key: %s", result.payload)
return False
except RuntimeError:
raise
except Exception as e:
logger.error("Error exporting private key: %s", e)
return False
+118 -12
@@ -1,58 +1,146 @@
import logging
import sys
# ---------------------------------------------------------------------------
# Windows event-loop advisory for MQTT fanout
# ---------------------------------------------------------------------------
# On Windows, uvicorn's default event loop (ProactorEventLoop) does not
# implement add_reader()/add_writer(), which paho-mqtt (via aiomqtt) requires.
# We cannot fix this from inside the app — the loop is already created by the
# time this module is imported. Log a prominent warning so Windows operators
# who want MQTT know to add ``--loop none`` to their uvicorn command.
# ---------------------------------------------------------------------------
if sys.platform == "win32":
import asyncio as _asyncio
_loop = _asyncio.get_event_loop()
_is_proactor = type(_loop).__name__ == "ProactorEventLoop"
if _is_proactor:
print(
"\n" + "!" * 78 + "\n"
" NOTE FOR WINDOWS USERS\n" + "!" * 78 + "\n"
"\n"
" The running event loop is ProactorEventLoop, which is not\n"
" compatible with MQTT fanout (aiomqtt / paho-mqtt).\n"
"\n"
" If you use MQTT integrations, restart with --loop none:\n"
"\n"
" uv run uvicorn app.main:app \033[1m--loop none\033[0m"
" [... other options ...]\n"
"\n"
" Everything else works fine as-is.\n"
"\n" + "!" * 78 + "\n",
file=sys.stderr,
flush=True,
)
del _loop, _is_proactor
import asyncio
from contextlib import asynccontextmanager
from pathlib import Path
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.responses import JSONResponse
from app.config import settings as server_settings
from app.config import setup_logging
from app.database import db
from app.frontend_static import (
register_first_available_frontend_static_routes,
register_frontend_missing_fallback,
)
from app.radio import RadioDisconnectedError
from app.radio_sync import (
stop_background_contact_reconciliation,
stop_message_polling,
stop_periodic_advert,
stop_periodic_sync,
stop_telemetry_collect,
)
from app.routers import (
channels,
contacts,
debug,
fanout,
health,
messages,
packets,
radio,
read_state,
repeaters,
rooms,
settings,
statistics,
ws,
)
from app.security import add_optional_basic_auth_middleware
from app.services.radio_runtime import radio_runtime as radio_manager
from app.services.radio_stats import start_radio_stats_sampling, stop_radio_stats_sampling
from app.version_info import get_app_build_info
setup_logging()
logger = logging.getLogger(__name__)
async def _startup_radio_connect_and_setup() -> None:
"""Connect/setup the radio in the background so HTTP serving can start immediately."""
try:
connected = await radio_manager.reconnect_and_prepare(broadcast_on_success=True)
if connected:
logger.info("Connected to radio")
else:
logger.warning("Failed to connect to radio on startup")
except Exception:
logger.exception("Failed to connect to radio on startup")
@asynccontextmanager
async def lifespan(app: FastAPI):
"""Manage database and radio connection lifecycle."""
await db.connect()
logger.info("Database connected")
# Ensure default channels exist in the database even before the radio
# connects. Without this, a fresh or disconnected instance would return
# zero channels from GET /channels until the first successful radio sync.
from app.radio_sync import ensure_default_channels
await ensure_default_channels()
await start_radio_stats_sampling()
# Always start connection monitor (even if initial connection failed)
await radio_manager.start_connection_monitor()
# Start fanout modules (MQTT, etc.) from database configs
from app.fanout.manager import fanout_manager
try:
await fanout_manager.load_from_db()
except Exception:
logger.exception("Failed to start fanout modules")
startup_radio_task = asyncio.create_task(_startup_radio_connect_and_setup())
app.state.startup_radio_task = startup_radio_task
yield
logger.info("Shutting down")
if startup_radio_task and not startup_radio_task.done():
startup_radio_task.cancel()
try:
await startup_radio_task
except asyncio.CancelledError:
pass
await fanout_manager.stop_all()
await radio_manager.stop_connection_monitor()
await stop_background_contact_reconciliation()
await stop_message_polling()
await stop_radio_stats_sampling()
await stop_periodic_advert()
await stop_periodic_sync()
await stop_telemetry_collect()
if radio_manager.meshcore:
await radio_manager.meshcore.stop_auto_message_fetching()
await radio_manager.disconnect()
@@ -62,10 +150,12 @@ async def lifespan(app: FastAPI):
app = FastAPI(
title="RemoteTerm for MeshCore API",
description="API for interacting with MeshCore mesh radio networks",
version="1.9.2",
version=get_app_build_info().version,
lifespan=lifespan,
)
add_optional_basic_auth_middleware(app, server_settings)
app.add_middleware(GZipMiddleware, minimum_size=500)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
@@ -74,17 +164,33 @@ app.add_middleware(
allow_headers=["*"],
)
@app.exception_handler(RadioDisconnectedError)
async def radio_disconnected_handler(request: Request, exc: RadioDisconnectedError):
"""Return 503 when a radio disconnect race occurs during an operation."""
return JSONResponse(status_code=503, content={"detail": "Radio not connected"})
# API routes - all prefixed with /api for production compatibility
app.include_router(health.router, prefix="/api")
app.include_router(debug.router, prefix="/api")
app.include_router(fanout.router, prefix="/api")
app.include_router(radio.router, prefix="/api")
app.include_router(contacts.router, prefix="/api")
app.include_router(repeaters.router, prefix="/api")
app.include_router(rooms.router, prefix="/api")
app.include_router(channels.router, prefix="/api")
app.include_router(messages.router, prefix="/api")
app.include_router(packets.router, prefix="/api")
app.include_router(read_state.router, prefix="/api")
app.include_router(settings.router, prefix="/api")
app.include_router(statistics.router, prefix="/api")
app.include_router(ws.router, prefix="/api")
# Serve frontend static files in production
FRONTEND_DIR = Path(__file__).parent.parent / "frontend" / "dist"
register_frontend_static_routes(app, FRONTEND_DIR)
FRONTEND_DIST_DIR = Path(__file__).parent.parent / "frontend" / "dist"
FRONTEND_PREBUILT_DIR = Path(__file__).parent.parent / "frontend" / "prebuilt"
if not register_first_available_frontend_static_routes(
app, [FRONTEND_DIST_DIR, FRONTEND_PREBUILT_DIR]
):
register_frontend_missing_fallback(app)
-1023
File diff suppressed because it is too large
+38
@@ -0,0 +1,38 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add last_read_at column to contacts and channels tables.
This enables server-side read state tracking, replacing the localStorage
approach for consistent read state across devices.
ALTER TABLE ADD COLUMN preserves existing data; the "duplicate column name"
error is caught below so re-running the migration is safe.
"""
# Add to contacts table
try:
await conn.execute("ALTER TABLE contacts ADD COLUMN last_read_at INTEGER")
logger.debug("Added last_read_at to contacts table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("contacts.last_read_at already exists, skipping")
else:
raise
# Add to channels table
try:
await conn.execute("ALTER TABLE channels ADD COLUMN last_read_at INTEGER")
logger.debug("Added last_read_at to channels table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("channels.last_read_at already exists, skipping")
else:
raise
await conn.commit()
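# Editor's sketch (hedged): the add-column-and-tolerate-duplicates pattern above
# recurs throughout these migrations; a hypothetical shared helper would be:
#
#   async def add_column_if_missing(conn: aiosqlite.Connection, table: str, ddl: str) -> None:
#       try:
#           await conn.execute(f"ALTER TABLE {table} ADD COLUMN {ddl}")
#       except aiosqlite.OperationalError as e:
#           if "duplicate column name" not in str(e).lower():
#               raise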
@@ -0,0 +1,32 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop unused decrypt_attempts and last_attempt columns from raw_packets.
These columns were added for a retry-limiting feature that was never implemented.
They are written to but never read, so we can safely remove them.
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
we silently skip (the columns will remain but are harmless).
"""
for column in ["decrypt_attempts", "last_attempt"]:
try:
await conn.execute(f"ALTER TABLE raw_packets DROP COLUMN {column}")
logger.debug("Dropped %s from raw_packets table", column)
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("raw_packets.%s already dropped, skipping", column)
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, %s column will remain", column)
else:
raise
await conn.commit()
@@ -0,0 +1,49 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop the decrypted column and update indexes.
The decrypted column is redundant with message_id - a packet is decrypted
iff message_id IS NOT NULL. We replace the decrypted index with a message_id index.
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
we silently skip the column drop but still update the index.
"""
# Drop the old index on decrypted; IF EXISTS makes this a no-op when absent
await conn.execute("DROP INDEX IF EXISTS idx_raw_packets_decrypted")
logger.debug("Dropped idx_raw_packets_decrypted index")
# Create new index on message_id for efficient undecrypted packet queries
try:
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_raw_packets_message_id ON raw_packets(message_id)"
)
logger.debug("Created idx_raw_packets_message_id index")
except aiosqlite.OperationalError as e:
if "already exists" not in str(e).lower():
raise
# Try to drop the decrypted column
try:
await conn.execute("ALTER TABLE raw_packets DROP COLUMN decrypted")
logger.debug("Dropped decrypted from raw_packets table")
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("raw_packets.decrypted already dropped, skipping")
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, decrypted column will remain")
else:
raise
await conn.commit()
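# Illustrative query (editor's note): with the decrypted column gone, the same
# information is derived from message_id, e.g.
#   SELECT COUNT(*) FROM raw_packets WHERE message_id IS NULL  -- not yet decrypted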
@@ -0,0 +1,24 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add payload_hash column to raw_packets for deduplication.
This column stores the SHA-256 hash of the packet payload (excluding routing/path info).
It will be used with a unique index to prevent duplicate packets from being stored.
"""
try:
await conn.execute("ALTER TABLE raw_packets ADD COLUMN payload_hash TEXT")
logger.debug("Added payload_hash column to raw_packets table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("raw_packets.payload_hash already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,126 @@
import logging
from hashlib import sha256
import aiosqlite
logger = logging.getLogger(__name__)
def _extract_payload_for_hash(raw_packet: bytes) -> bytes | None:
"""
Extract payload from a raw packet for hashing using canonical framing validation.
Returns the payload bytes, or None if packet is malformed.
"""
from app.path_utils import parse_packet_envelope
envelope = parse_packet_envelope(raw_packet)
return envelope.payload if envelope is not None else None
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Backfill payload_hash for existing packets and remove duplicates.
This may take a while for large databases. Progress is logged.
After backfilling, a unique index is created to prevent future duplicates.
"""
# Get count first
cursor = await conn.execute("SELECT COUNT(*) FROM raw_packets WHERE payload_hash IS NULL")
row = await cursor.fetchone()
total = row[0] if row else 0
if total == 0:
logger.debug("No packets need hash backfill")
else:
logger.info("Backfilling payload hashes for %d packets. This may take a while...", total)
# Process in batches to avoid memory issues
batch_size = 1000
processed = 0
duplicates_deleted = 0
# Track seen hashes to identify duplicates (keep oldest = lowest ID)
seen_hashes: dict[str, int] = {} # hash -> oldest packet ID
# First pass: compute hashes and identify duplicates
cursor = await conn.execute("SELECT id, data FROM raw_packets ORDER BY id ASC")
packets_to_update: list[tuple[str, int]] = [] # (hash, id)
ids_to_delete: list[int] = []
while True:
rows = await cursor.fetchmany(batch_size)
if not rows:
break
for row in rows:
packet_id = row[0]
packet_data = bytes(row[1])
# Extract payload and compute hash
payload = _extract_payload_for_hash(packet_data)
if payload:
payload_hash = sha256(payload).hexdigest()
else:
# For malformed packets, hash the full data
payload_hash = sha256(packet_data).hexdigest()
if payload_hash in seen_hashes:
# Duplicate - mark for deletion (we keep the older one)
ids_to_delete.append(packet_id)
duplicates_deleted += 1
else:
# New hash - keep this packet
seen_hashes[payload_hash] = packet_id
packets_to_update.append((payload_hash, packet_id))
processed += 1
if processed % 10000 == 0:
logger.info("Processed %d/%d packets...", processed, total)
# Second pass: update hashes for packets we're keeping
total_updates = len(packets_to_update)
logger.info("Updating %d packets with hashes...", total_updates)
for idx, (payload_hash, packet_id) in enumerate(packets_to_update, 1):
await conn.execute(
"UPDATE raw_packets SET payload_hash = ? WHERE id = ?",
(payload_hash, packet_id),
)
if idx % 10000 == 0:
logger.info("Updated %d/%d packets...", idx, total_updates)
# Third pass: delete duplicates
if ids_to_delete:
total_deletes = len(ids_to_delete)
logger.info("Removing %d duplicate packets...", total_deletes)
deleted_count = 0
# Delete in batches to avoid "too many SQL variables" error
for i in range(0, len(ids_to_delete), 500):
batch = ids_to_delete[i : i + 500]
placeholders = ",".join("?" * len(batch))
await conn.execute(f"DELETE FROM raw_packets WHERE id IN ({placeholders})", batch)
deleted_count += len(batch)
if deleted_count % 10000 < 500: # Log roughly every 10k
logger.info("Removed %d/%d duplicates...", deleted_count, total_deletes)
await conn.commit()
logger.info(
"Hash backfill complete: %d packets updated, %d duplicates removed",
len(packets_to_update),
duplicates_deleted,
)
# Create unique index on payload_hash (this enforces uniqueness going forward)
try:
await conn.execute(
"CREATE UNIQUE INDEX IF NOT EXISTS idx_raw_packets_payload_hash "
"ON raw_packets(payload_hash)"
)
logger.debug("Created unique index on payload_hash")
except aiosqlite.OperationalError as e:
if "already exists" not in str(e).lower():
raise
await conn.commit()
@@ -0,0 +1,42 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Replace path_len INTEGER column with path TEXT column in messages table.
The path column stores the hex-encoded routing path bytes. Path length can
be derived from the hex string (2 chars per byte = 1 hop).
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
we silently skip the drop (the column will remain but is unused).
"""
# First, add the new path column
try:
await conn.execute("ALTER TABLE messages ADD COLUMN path TEXT")
logger.debug("Added path column to messages table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("messages.path already exists, skipping")
else:
raise
# Try to drop the old path_len column
try:
await conn.execute("ALTER TABLE messages DROP COLUMN path_len")
logger.debug("Dropped path_len from messages table")
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("messages.path_len already dropped, skipping")
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, path_len column will remain")
else:
raise
await conn.commit()
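# Illustrative derivation (editor's note): hop count comes straight from the hex
# path, two hex characters per path byte, e.g.
#   len("1a2b3c") // 2  # -> 3 hops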
@@ -0,0 +1,96 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
def _extract_path_from_packet(raw_packet: bytes) -> str | None:
"""
Extract path hex string from a raw packet using canonical framing validation.
Returns the path as a hex string, or None if packet is malformed.
"""
from app.path_utils import parse_packet_envelope
envelope = parse_packet_envelope(raw_packet)
return envelope.path.hex() if envelope is not None else None
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Backfill path column for messages that have linked raw_packets.
For each message with a linked raw_packet (via message_id), extract the
path from the raw packet and update the message.
Only updates incoming messages (outgoing=0) since outgoing messages
don't have meaningful path data.
"""
# Get count of messages that need backfill
cursor = await conn.execute(
"""
SELECT COUNT(*)
FROM messages m
JOIN raw_packets rp ON rp.message_id = m.id
WHERE m.path IS NULL AND m.outgoing = 0
"""
)
row = await cursor.fetchone()
total = row[0] if row else 0
if total == 0:
logger.debug("No messages need path backfill")
return
logger.info("Backfilling path for %d messages. This may take a while...", total)
# Process in batches
batch_size = 1000
processed = 0
updated = 0
cursor = await conn.execute(
"""
SELECT m.id, rp.data
FROM messages m
JOIN raw_packets rp ON rp.message_id = m.id
WHERE m.path IS NULL AND m.outgoing = 0
ORDER BY m.id ASC
"""
)
updates: list[tuple[str, int]] = [] # (path, message_id)
while True:
rows = await cursor.fetchmany(batch_size)
if not rows:
break
for row in rows:
message_id = row[0]
packet_data = bytes(row[1])
path_hex = _extract_path_from_packet(packet_data)
if path_hex is not None:
updates.append((path_hex, message_id))
processed += 1
if processed % 10000 == 0:
logger.info("Processed %d/%d messages...", processed, total)
# Apply updates in batches
if updates:
logger.info("Updating %d messages with path data...", len(updates))
for idx, (path_hex, message_id) in enumerate(updates, 1):
await conn.execute(
"UPDATE messages SET path = ? WHERE id = ?",
(path_hex, message_id),
)
updated += 1
if idx % 10000 == 0:
logger.info("Updated %d/%d messages...", idx, len(updates))
await conn.commit()
logger.info("Path backfill complete: %d messages updated", updated)
@@ -0,0 +1,66 @@
import json
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Convert path TEXT column to paths TEXT column storing JSON array.
The new format stores multiple paths as a JSON array of objects:
[{"path": "1A2B", "received_at": 1234567890}, ...]
This enables tracking multiple delivery paths for the same message
(e.g., when a message is received via different repeater routes).
"""
# First, add the new paths column
try:
await conn.execute("ALTER TABLE messages ADD COLUMN paths TEXT")
logger.debug("Added paths column to messages table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("messages.paths already exists, skipping column add")
else:
raise
# Migrate existing path data to paths array format
cursor = await conn.execute(
"SELECT id, path, received_at FROM messages WHERE path IS NOT NULL AND paths IS NULL"
)
rows = list(await cursor.fetchall())
if rows:
logger.info("Converting %d messages from path to paths array format...", len(rows))
for row in rows:
message_id = row[0]
old_path = row[1]
received_at = row[2]
# Convert single path to array format
paths_json = json.dumps([{"path": old_path, "received_at": received_at}])
await conn.execute(
"UPDATE messages SET paths = ? WHERE id = ?",
(paths_json, message_id),
)
logger.info("Converted %d messages to paths array format", len(rows))
# Try to drop the old path column (SQLite 3.35.0+ only)
try:
await conn.execute("ALTER TABLE messages DROP COLUMN path")
logger.debug("Dropped path column from messages table")
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("messages.path already dropped, skipping")
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, path column will remain")
else:
raise
await conn.commit()
@@ -0,0 +1,41 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Create app_settings table for persistent application preferences.
This table stores:
- max_radio_contacts: Configured radio contact capacity baseline for maintenance thresholds
- favorites: JSON array of favorite conversations [{type, id}, ...]
- auto_decrypt_dm_on_advert: Whether to attempt historical DM decryption on new contact
- sidebar_sort_order: 'recent' or 'alpha' for sidebar sorting
- last_message_times: JSON object mapping conversation keys to timestamps
- preferences_migrated: Flag to track if localStorage has been migrated
The table uses a single-row pattern (id=1) for simplicity.
"""
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS app_settings (
id INTEGER PRIMARY KEY CHECK (id = 1),
max_radio_contacts INTEGER DEFAULT 200,
favorites TEXT DEFAULT '[]',
auto_decrypt_dm_on_advert INTEGER DEFAULT 1,
sidebar_sort_order TEXT DEFAULT 'recent',
last_message_times TEXT DEFAULT '{}',
preferences_migrated INTEGER DEFAULT 0
)
"""
)
# Initialize with default row (use only the id column so this works
# regardless of which columns exist — defaults fill the rest).
await conn.execute("INSERT OR IGNORE INTO app_settings (id) VALUES (1)")
await conn.commit()
logger.debug("Created app_settings table with default values")
@@ -0,0 +1,23 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add advert_interval column to app_settings table.
This enables configurable periodic advertisement interval (default 0 = disabled).
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN advert_interval INTEGER DEFAULT 0")
logger.debug("Added advert_interval column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("advert_interval column already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,24 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add last_advert_time column to app_settings table.
This tracks when the last advertisement was sent, ensuring we never
advertise faster than the configured advert_interval.
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN last_advert_time INTEGER DEFAULT 0")
logger.debug("Added last_advert_time column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("last_advert_time column already exists, skipping")
else:
raise
await conn.commit()
+33
@@ -0,0 +1,33 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add bot_enabled and bot_code columns to app_settings table.
This enables user-defined Python code to be executed when messages are received,
allowing for custom bot responses.
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN bot_enabled INTEGER DEFAULT 0")
logger.debug("Added bot_enabled column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("bot_enabled column already exists, skipping")
else:
raise
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN bot_code TEXT DEFAULT ''")
logger.debug("Added bot_code column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("bot_code column already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,76 @@
import json
import logging
import uuid
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Convert single bot_enabled/bot_code to multi-bot format.
Adds a 'bots' TEXT column storing a JSON array of bot configs:
[{"id": "uuid", "name": "Bot 1", "enabled": true, "code": "..."}]
If existing bot_code is non-empty OR bot_enabled is true, migrates
to a single bot named "Bot 1". Otherwise, creates empty array.
Attempts to drop the old bot_enabled and bot_code columns.
"""
# Add new bots column
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN bots TEXT DEFAULT '[]'")
logger.debug("Added bots column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("bots column already exists, skipping")
else:
raise
# Migrate existing bot data
cursor = await conn.execute("SELECT bot_enabled, bot_code FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
if row:
bot_enabled = bool(row[0]) if row[0] is not None else False
bot_code = row[1] or ""
# If there's existing bot data, migrate it
if bot_code.strip() or bot_enabled:
bots = [
{
"id": str(uuid.uuid4()),
"name": "Bot 1",
"enabled": bot_enabled,
"code": bot_code,
}
]
bots_json = json.dumps(bots)
logger.info("Migrating existing bot to multi-bot format: enabled=%s", bot_enabled)
else:
bots_json = "[]"
await conn.execute(
"UPDATE app_settings SET bots = ? WHERE id = 1",
(bots_json,),
)
# Try to drop old columns (SQLite 3.35.0+ only)
for column in ["bot_enabled", "bot_code"]:
try:
await conn.execute(f"ALTER TABLE app_settings DROP COLUMN {column}")
logger.debug("Dropped %s column from app_settings", column)
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("app_settings.%s already dropped, skipping", column)
elif "syntax error" in error_msg or "drop column" in error_msg:
# SQLite version doesn't support DROP COLUMN - harmless, column stays
logger.debug("SQLite doesn't support DROP COLUMN, %s column will remain", column)
else:
raise
await conn.commit()
@@ -0,0 +1,152 @@
import json
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Lowercase all contact public keys and related data for case-insensitive matching.
Updates:
- contacts.public_key (PRIMARY KEY) via temp table swap
- messages.conversation_key for PRIV messages
- app_settings.favorites (contact IDs)
- app_settings.last_message_times (contact- prefixed keys)
Handles case collisions by keeping the most-recently-seen contact.
"""
# 1. Lowercase message conversation keys for private messages
try:
await conn.execute(
"UPDATE messages SET conversation_key = lower(conversation_key) WHERE type = 'PRIV'"
)
logger.debug("Lowercased PRIV message conversation_keys")
except aiosqlite.OperationalError as e:
if "no such table" in str(e).lower():
logger.debug("messages table does not exist yet, skipping conversation_key lowercase")
else:
raise
# 2. Check if contacts table exists before proceeding
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if not await cursor.fetchone():
logger.debug("contacts table does not exist yet, skipping key lowercase")
await conn.commit()
return
# 3. Handle contacts table - check for case collisions first
cursor = await conn.execute(
"SELECT lower(public_key) as lk, COUNT(*) as cnt "
"FROM contacts GROUP BY lower(public_key) HAVING COUNT(*) > 1"
)
collisions = list(await cursor.fetchall())
if collisions:
logger.warning(
"Found %d case-colliding contact groups, keeping most-recently-seen",
len(collisions),
)
for row in collisions:
lower_key = row[0]
# Delete all but the most recently seen
await conn.execute(
"""DELETE FROM contacts WHERE public_key IN (
SELECT public_key FROM contacts
WHERE lower(public_key) = ?
ORDER BY COALESCE(last_seen, 0) DESC
LIMIT -1 OFFSET 1
)""",
(lower_key,),
)
# 4. Rebuild contacts with lowercased keys
# Get the actual column names from the table (handles different schema versions)
cursor = await conn.execute("PRAGMA table_info(contacts)")
columns_info = await cursor.fetchall()
all_columns = [col[1] for col in columns_info] # col[1] is column name
# Build column lists, lowering public_key
select_cols = ", ".join(f"lower({c})" if c == "public_key" else c for c in all_columns)
col_defs = []
for col in columns_info:
name, col_type, _notnull, default, pk = col[1], col[2], col[3], col[4], col[5]
parts = [name, col_type or "TEXT"]
if pk:
parts.append("PRIMARY KEY")
if default is not None:
parts.append(f"DEFAULT {default}")
col_defs.append(" ".join(parts))
create_sql = f"CREATE TABLE contacts_new ({', '.join(col_defs)})"
await conn.execute(create_sql)
await conn.execute(f"INSERT INTO contacts_new SELECT {select_cols} FROM contacts")
await conn.execute("DROP TABLE contacts")
await conn.execute("ALTER TABLE contacts_new RENAME TO contacts")
# Recreate the on_radio index (if column exists)
if "on_radio" in all_columns:
await conn.execute("CREATE INDEX IF NOT EXISTS idx_contacts_on_radio ON contacts(on_radio)")
# 5. Lowercase contact IDs in favorites JSON (if app_settings exists)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if not await cursor.fetchone():
await conn.commit()
logger.info("Lowercased all contact public keys (no app_settings table)")
return
cursor = await conn.execute("SELECT favorites FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
if row and row[0]:
try:
favorites = json.loads(row[0])
updated = False
for fav in favorites:
if fav.get("type") == "contact" and fav.get("id"):
new_id = fav["id"].lower()
if new_id != fav["id"]:
fav["id"] = new_id
updated = True
if updated:
await conn.execute(
"UPDATE app_settings SET favorites = ? WHERE id = 1",
(json.dumps(favorites),),
)
logger.debug("Lowercased contact IDs in favorites")
except (json.JSONDecodeError, TypeError):
pass
# 6. Lowercase contact keys in last_message_times JSON
cursor = await conn.execute("SELECT last_message_times FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
if row and row[0]:
try:
times = json.loads(row[0])
new_times = {}
updated = False
for key, val in times.items():
if key.startswith("contact-"):
new_key = "contact-" + key[8:].lower()
if new_key != key:
updated = True
new_times[new_key] = val
else:
new_times[key] = val
if updated:
await conn.execute(
"UPDATE app_settings SET last_message_times = ? WHERE id = 1",
(json.dumps(new_times),),
)
logger.debug("Lowercased contact keys in last_message_times")
except (json.JSONDecodeError, TypeError):
pass
await conn.commit()
logger.info("Lowercased all contact public keys")
@@ -0,0 +1,44 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Fix NULL sender_timestamp values and add null-safe dedup index.
1. Set sender_timestamp = received_at for any messages with NULL sender_timestamp
2. Create a null-safe unique index as belt-and-suspenders protection
"""
# Check if messages table exists
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if not await cursor.fetchone():
logger.debug("messages table does not exist yet, skipping NULL sender_timestamp fix")
await conn.commit()
return
# Backfill NULL sender_timestamps with received_at
cursor = await conn.execute(
"UPDATE messages SET sender_timestamp = received_at WHERE sender_timestamp IS NULL"
)
if cursor.rowcount > 0:
logger.info("Backfilled %d messages with NULL sender_timestamp", cursor.rowcount)
# Try to create null-safe dedup index (may fail if existing duplicates exist)
try:
await conn.execute(
"""CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_dedup_null_safe
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))"""
)
logger.debug("Created null-safe dedup index")
except aiosqlite.IntegrityError:
logger.warning(
"Could not create null-safe dedup index due to existing duplicates - "
"the application-level dedup will handle these"
)
await conn.commit()
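A minimal sketch (using the synchronous sqlite3 module rather than aiosqlite, purely for illustration) of why the COALESCE wrapper matters: in SQL, NULL compares unequal to NULL, so a plain UNIQUE index would admit unlimited NULL-timestamp duplicates, while the expression index collapses them onto 0:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (type TEXT, conversation_key TEXT, "
           "text TEXT, sender_timestamp INTEGER)")
db.execute("""CREATE UNIQUE INDEX dedup
              ON messages(type, conversation_key, text,
                          COALESCE(sender_timestamp, 0))""")
db.execute("INSERT INTO messages VALUES ('PRIV', 'abc', 'hi', NULL)")
try:
    db.execute("INSERT INTO messages VALUES ('PRIV', 'abc', 'hi', NULL)")
except sqlite3.IntegrityError:
    print("duplicate blocked")  # COALESCE(NULL, 0) collides as intended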
@@ -0,0 +1,26 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add experimental_channel_double_send column to app_settings table.
When enabled, channel sends perform an immediate byte-perfect duplicate send
using the same timestamp bytes.
"""
try:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN experimental_channel_double_send INTEGER DEFAULT 0"
)
logger.debug("Added experimental_channel_double_send column to app_settings")
except aiosqlite.OperationalError as e:
if "duplicate column" in str(e).lower():
logger.debug("experimental_channel_double_send column already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,31 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop experimental_channel_double_send column from app_settings.
This feature is replaced by a user-triggered resend button.
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
we silently skip (the column will remain but is unused).
"""
try:
await conn.execute("ALTER TABLE app_settings DROP COLUMN experimental_channel_double_send")
logger.debug("Dropped experimental_channel_double_send from app_settings")
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("app_settings.experimental_channel_double_send already dropped, skipping")
elif "syntax error" in error_msg or "drop column" in error_msg:
logger.debug(
"SQLite doesn't support DROP COLUMN, "
"experimental_channel_double_send column will remain"
)
else:
raise
await conn.commit()
@@ -0,0 +1,64 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop the UNIQUE constraint on raw_packets.data via table rebuild.
This constraint creates a large autoindex (~30 MB on a 340K-row database) that
stores a complete copy of every raw packet BLOB in a B-tree. Deduplication is
already handled by the unique index on payload_hash, making the data UNIQUE
constraint pure storage overhead.
Requires table recreation since SQLite doesn't support DROP CONSTRAINT.
"""
# Check if the autoindex exists (indicates UNIQUE constraint on data)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='index' "
"AND name='sqlite_autoindex_raw_packets_1'"
)
if not await cursor.fetchone():
logger.debug("raw_packets.data UNIQUE constraint already absent, skipping rebuild")
await conn.commit()
return
logger.info("Rebuilding raw_packets table to remove UNIQUE(data) constraint...")
# Get current columns from the existing table
cursor = await conn.execute("PRAGMA table_info(raw_packets)")
old_cols = {col[1] for col in await cursor.fetchall()}
# Target schema without UNIQUE on data
await conn.execute("""
CREATE TABLE raw_packets_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp INTEGER NOT NULL,
data BLOB NOT NULL,
message_id INTEGER,
payload_hash TEXT,
FOREIGN KEY (message_id) REFERENCES messages(id)
)
""")
# Copy only columns that exist in both old and new tables
new_cols = {"id", "timestamp", "data", "message_id", "payload_hash"}
copy_cols = ", ".join(sorted(c for c in new_cols if c in old_cols))
await conn.execute(
f"INSERT INTO raw_packets_new ({copy_cols}) SELECT {copy_cols} FROM raw_packets"
)
await conn.execute("DROP TABLE raw_packets")
await conn.execute("ALTER TABLE raw_packets_new RENAME TO raw_packets")
# Recreate indexes
await conn.execute(
"CREATE UNIQUE INDEX idx_raw_packets_payload_hash ON raw_packets(payload_hash)"
)
await conn.execute("CREATE INDEX idx_raw_packets_message_id ON raw_packets(message_id)")
await conn.commit()
logger.info("raw_packets table rebuilt without UNIQUE(data) constraint")
@@ -0,0 +1,83 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Drop the UNIQUE(type, conversation_key, text, sender_timestamp) constraint on messages.
This constraint creates a large autoindex (~13 MB on a 112K-row database) that
stores the full message text in a B-tree. The idx_messages_dedup_null_safe unique
index already provides identical dedup protection — no rows have NULL
sender_timestamp since migration 15 backfilled them all.
INSERT OR IGNORE still works correctly because it checks all unique constraints,
including unique indexes like idx_messages_dedup_null_safe.
Requires table recreation since SQLite doesn't support DROP CONSTRAINT.
"""
# Check if the autoindex exists (indicates UNIQUE constraint)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='index' AND name='sqlite_autoindex_messages_1'"
)
if not await cursor.fetchone():
logger.debug("messages UNIQUE constraint already absent, skipping rebuild")
await conn.commit()
return
logger.info("Rebuilding messages table to remove UNIQUE constraint...")
# Get current columns from the existing table
cursor = await conn.execute("PRAGMA table_info(messages)")
old_cols = {col[1] for col in await cursor.fetchall()}
# Target schema without the UNIQUE table constraint
await conn.execute("""
CREATE TABLE messages_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
type TEXT NOT NULL,
conversation_key TEXT NOT NULL,
text TEXT NOT NULL,
sender_timestamp INTEGER,
received_at INTEGER NOT NULL,
txt_type INTEGER DEFAULT 0,
signature TEXT,
outgoing INTEGER DEFAULT 0,
acked INTEGER DEFAULT 0,
paths TEXT
)
""")
# Copy only columns that exist in both old and new tables
new_cols = {
"id",
"type",
"conversation_key",
"text",
"sender_timestamp",
"received_at",
"txt_type",
"signature",
"outgoing",
"acked",
"paths",
}
copy_cols = ", ".join(sorted(c for c in new_cols if c in old_cols))
await conn.execute(f"INSERT INTO messages_new ({copy_cols}) SELECT {copy_cols} FROM messages")
await conn.execute("DROP TABLE messages")
await conn.execute("ALTER TABLE messages_new RENAME TO messages")
# Recreate indexes
await conn.execute("CREATE INDEX idx_messages_conversation ON messages(type, conversation_key)")
await conn.execute("CREATE INDEX idx_messages_received ON messages(received_at)")
await conn.execute(
"""CREATE UNIQUE INDEX idx_messages_dedup_null_safe
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))"""
)
await conn.commit()
logger.info("messages table rebuilt without UNIQUE constraint")
@@ -0,0 +1,45 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Enable WAL journal mode and incremental auto-vacuum.
WAL (Write-Ahead Logging):
- Faster writes: appends to a WAL file instead of rewriting the main DB
- Concurrent reads during writes (readers don't block writers)
- No journal file create/delete churn on every commit
Incremental auto-vacuum:
- Pages freed by DELETE become reclaimable without a full VACUUM
- Call PRAGMA incremental_vacuum to reclaim on demand
- Less overhead than FULL auto-vacuum (which reorganizes on every commit)
Changing the auto_vacuum mode requires a VACUUM to restructure the file.
The VACUUM is performed before switching to WAL so it runs under the
current journal mode; WAL is then set as the final step.
"""
# Check current auto_vacuum mode
cursor = await conn.execute("PRAGMA auto_vacuum")
row = await cursor.fetchone()
current_auto_vacuum = row[0] if row else 0
if current_auto_vacuum != 2: # 2 = INCREMENTAL
logger.info("Switching auto_vacuum to INCREMENTAL (requires VACUUM)...")
await conn.execute("PRAGMA auto_vacuum = INCREMENTAL")
await conn.execute("VACUUM")
logger.info("VACUUM complete, auto_vacuum set to INCREMENTAL")
else:
logger.debug("auto_vacuum already INCREMENTAL, skipping VACUUM")
# Enable WAL mode (idempotent — returns current mode)
cursor = await conn.execute("PRAGMA journal_mode = WAL")
row = await cursor.fetchone()
mode = row[0] if row else "unknown"
logger.info("Journal mode set to %s", mode)
await conn.commit()
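With auto_vacuum = INCREMENTAL, freed pages accumulate on the freelist until explicitly reclaimed. A hedged sketch of how a caller might reclaim them on demand (the pragmas are standard SQLite; the maintenance hook itself is an assumption, not part of this diff):

import aiosqlite

async def reclaim_free_pages(conn: aiosqlite.Connection, max_pages: int = 1000) -> None:
    # Frees up to max_pages pages from the freelist back to the OS.
    # PRAGMA incremental_vacuum(N) is a no-op unless auto_vacuum is
    # INCREMENTAL and free pages exist.
    await conn.execute(f"PRAGMA incremental_vacuum({max_pages})")
    cursor = await conn.execute("PRAGMA freelist_count")
    row = await cursor.fetchone()
    print(f"free pages remaining: {row[0] if row else 0}")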
@@ -0,0 +1,29 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Enforce minimum 1-hour advert interval.
Any advert_interval between 1 and 3599 is clamped up to 3600 (1 hour).
Zero (disabled) is left unchanged.
"""
# Guard: app_settings table may not exist if running against a very old schema
# (it's created in migration 9). The UPDATE is harmless if the table exists
# but has no rows, but will error if the table itself is missing.
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if await cursor.fetchone() is None:
logger.debug("app_settings table does not exist yet, skipping advert_interval clamp")
return
await conn.execute(
"UPDATE app_settings SET advert_interval = 3600 WHERE advert_interval > 0 AND advert_interval < 3600"
)
await conn.commit()
logger.debug("Clamped advert_interval to minimum 3600 seconds")
@@ -0,0 +1,33 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Create table for recent unique advert paths per repeater.
This keeps path diversity for repeater advertisements without changing the
existing payload-hash raw packet dedup policy.
"""
await conn.execute("""
CREATE TABLE IF NOT EXISTS repeater_advert_paths (
id INTEGER PRIMARY KEY AUTOINCREMENT,
repeater_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(repeater_key, path_hex),
FOREIGN KEY (repeater_key) REFERENCES contacts(public_key)
)
""")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_repeater_advert_paths_recent "
"ON repeater_advert_paths(repeater_key, last_seen DESC)"
)
await conn.commit()
logger.debug("Ensured repeater_advert_paths table and indexes exist")
@@ -0,0 +1,60 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add first_seen column to contacts table.
Backfill strategy:
1. Set first_seen = last_seen for all contacts (baseline).
2. For contacts with PRIV messages, set first_seen = MIN(messages.received_at)
if that timestamp is earlier.
"""
# Guard: skip if contacts table doesn't exist (e.g. partial test schemas)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if not await cursor.fetchone():
return
try:
await conn.execute("ALTER TABLE contacts ADD COLUMN first_seen INTEGER")
logger.debug("Added first_seen to contacts table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("contacts.first_seen already exists, skipping")
else:
raise
# Baseline: set first_seen = last_seen for all contacts
# Check if last_seen column exists (should in production, may not in minimal test schemas)
cursor = await conn.execute("PRAGMA table_info(contacts)")
columns = {row[1] for row in await cursor.fetchall()}
if "last_seen" in columns:
await conn.execute("UPDATE contacts SET first_seen = last_seen WHERE first_seen IS NULL")
# Refine: for contacts with PRIV messages, use earliest message timestamp if earlier
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if await cursor.fetchone():
await conn.execute(
"""
UPDATE contacts SET first_seen = (
SELECT MIN(m.received_at) FROM messages m
WHERE m.type = 'PRIV' AND m.conversation_key = contacts.public_key
)
WHERE EXISTS (
SELECT 1 FROM messages m
WHERE m.type = 'PRIV' AND m.conversation_key = contacts.public_key
AND m.received_at < COALESCE(contacts.first_seen, 9999999999)
)
"""
)
await conn.commit()
logger.debug("Added and backfilled first_seen column")
@@ -0,0 +1,53 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Create contact_name_history table and seed with current contact names.
"""
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS contact_name_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
name TEXT NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
UNIQUE(public_key, name),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_name_history_key "
"ON contact_name_history(public_key, last_seen DESC)"
)
# Seed: one row per contact from current data (skip if contacts table doesn't exist
# or lacks needed columns)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone():
cursor = await conn.execute("PRAGMA table_info(contacts)")
cols = {row[1] for row in await cursor.fetchall()}
if "name" in cols and "public_key" in cols:
first_seen_expr = "first_seen" if "first_seen" in cols else "0"
last_seen_expr = "last_seen" if "last_seen" in cols else "0"
await conn.execute(
f"""
INSERT OR IGNORE INTO contact_name_history (public_key, name, first_seen, last_seen)
SELECT public_key, name,
COALESCE({first_seen_expr}, {last_seen_expr}, 0),
COALESCE({last_seen_expr}, 0)
FROM contacts
WHERE name IS NOT NULL AND name != ''
"""
)
await conn.commit()
logger.debug("Created contact_name_history table and seeded from contacts")
@@ -0,0 +1,124 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add sender_name and sender_key columns to messages table.
Backfill:
- sender_name for CHAN messages: extract from "Name: message" format
- sender_key for CHAN messages: match name to contact (skip ambiguous)
- sender_key for incoming PRIV messages: set to conversation_key
"""
# Guard: skip if messages table doesn't exist
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if not await cursor.fetchone():
return
for column in ["sender_name", "sender_key"]:
try:
await conn.execute(f"ALTER TABLE messages ADD COLUMN {column} TEXT")
logger.debug("Added %s to messages table", column)
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("messages.%s already exists, skipping", column)
else:
raise
# Check which columns the messages table has (may be minimal in test environments)
cursor = await conn.execute("PRAGMA table_info(messages)")
msg_cols = {row[1] for row in await cursor.fetchall()}
# Only backfill if the required columns exist
if "type" in msg_cols and "text" in msg_cols:
# Count messages to backfill for progress reporting
cursor = await conn.execute(
"SELECT COUNT(*) FROM messages WHERE type = 'CHAN' AND sender_name IS NULL"
)
row = await cursor.fetchone()
chan_count = row[0] if row else 0
if chan_count > 0:
logger.info("Backfilling sender_name for %d channel messages...", chan_count)
# Backfill sender_name for CHAN messages from "Name: message" format
# Only extract if colon position is valid (> 1 and < 52, i.e. name is 1-50 chars)
cursor = await conn.execute(
"""
UPDATE messages SET sender_name = SUBSTR(text, 1, INSTR(text, ': ') - 1)
WHERE type = 'CHAN' AND sender_name IS NULL
AND INSTR(text, ': ') > 1 AND INSTR(text, ': ') < 52
"""
)
if cursor.rowcount > 0:
logger.info("Backfilled sender_name for %d channel messages", cursor.rowcount)
# Backfill sender_key for incoming PRIV messages
if "outgoing" in msg_cols and "conversation_key" in msg_cols:
cursor = await conn.execute(
"""
UPDATE messages SET sender_key = conversation_key
WHERE type = 'PRIV' AND outgoing = 0 AND sender_key IS NULL
"""
)
if cursor.rowcount > 0:
logger.info("Backfilled sender_key for %d DM messages", cursor.rowcount)
# Backfill sender_key for CHAN messages: match sender_name to contacts
# Build name->key map, skip ambiguous names (multiple contacts with same name)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone():
cursor = await conn.execute(
"SELECT public_key, name FROM contacts WHERE name IS NOT NULL AND name != ''"
)
rows = await cursor.fetchall()
name_to_keys: dict[str, list[str]] = {}
for row in rows:
name = row["name"]
key = row["public_key"]
if name not in name_to_keys:
name_to_keys[name] = []
name_to_keys[name].append(key)
# Only use unambiguous names (single contact per name)
unambiguous = {n: ks[0] for n, ks in name_to_keys.items() if len(ks) == 1}
if unambiguous:
logger.info(
"Matching sender_key for %d unique contact names...",
len(unambiguous),
)
# Use a temp table for a single bulk UPDATE instead of N individual queries
await conn.execute(
"CREATE TEMP TABLE _name_key_map (name TEXT PRIMARY KEY, public_key TEXT NOT NULL)"
)
await conn.executemany(
"INSERT INTO _name_key_map (name, public_key) VALUES (?, ?)",
list(unambiguous.items()),
)
cursor = await conn.execute(
"""
UPDATE messages SET sender_key = (
SELECT public_key FROM _name_key_map WHERE _name_key_map.name = messages.sender_name
)
WHERE type = 'CHAN' AND sender_key IS NULL
AND sender_name IN (SELECT name FROM _name_key_map)
"""
)
updated = cursor.rowcount
await conn.execute("DROP TABLE _name_key_map")
if updated > 0:
logger.info("Backfilled sender_key for %d channel messages", updated)
# Create index on sender_key for per-contact channel message counts
await conn.execute("CREATE INDEX IF NOT EXISTS idx_messages_sender_key ON messages(sender_key)")
await conn.commit()
logger.debug("Added sender_name and sender_key columns with backfill")
@@ -0,0 +1,81 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Rename repeater_advert_paths to contact_advert_paths with column
repeater_key -> public_key.
Uses table rebuild since ALTER TABLE RENAME COLUMN may not be available
in older SQLite versions.
"""
# Check if old table exists
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='repeater_advert_paths'"
)
if not await cursor.fetchone():
# Already renamed or doesn't exist — ensure new table exists
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS contact_advert_paths (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
logger.debug("contact_advert_paths already exists or old table missing, skipping rename")
return
# Create new table (IF NOT EXISTS in case SCHEMA already created it)
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS contact_advert_paths (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
# Copy data (INSERT OR IGNORE in case of duplicates)
await conn.execute(
"""
INSERT OR IGNORE INTO contact_advert_paths (public_key, path_hex, path_len, first_seen, last_seen, heard_count)
SELECT repeater_key, path_hex, path_len, first_seen, last_seen, heard_count
FROM repeater_advert_paths
"""
)
# Drop old table
await conn.execute("DROP TABLE repeater_advert_paths")
# Create index
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
logger.info("Renamed repeater_advert_paths to contact_advert_paths")
@@ -0,0 +1,36 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Backfill contacts.first_seen from contact_advert_paths where advert path
first_seen is earlier than the contact's current first_seen.
"""
# Guard: skip if either table doesn't exist
for table in ("contacts", "contact_advert_paths"):
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name=?", (table,)
)
if not await cursor.fetchone():
return
await conn.execute(
"""
UPDATE contacts SET first_seen = (
SELECT MIN(cap.first_seen) FROM contact_advert_paths cap
WHERE cap.public_key = contacts.public_key
)
WHERE EXISTS (
SELECT 1 FROM contact_advert_paths cap
WHERE cap.public_key = contacts.public_key
AND cap.first_seen < COALESCE(contacts.first_seen, 9999999999)
)
"""
)
await conn.commit()
logger.debug("Backfilled first_seen from contact_advert_paths")
@@ -0,0 +1,107 @@
import logging
from hashlib import sha256
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Convert payload_hash from 64-char hex TEXT to 32-byte BLOB.
Halves storage for both the column data and its UNIQUE index.
Uses Python bytes.fromhex() for the conversion since SQLite's unhex()
requires 3.41.0+ which may not be available on all deployments.
"""
# Guard: skip if raw_packets table doesn't exist
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='raw_packets'"
)
if not await cursor.fetchone():
logger.debug("raw_packets table does not exist, skipping payload_hash conversion")
await conn.commit()
return
# Check column types — skip if payload_hash doesn't exist or is already BLOB
cursor = await conn.execute("PRAGMA table_info(raw_packets)")
cols = {row[1]: row[2] for row in await cursor.fetchall()}
if "payload_hash" not in cols:
logger.debug("payload_hash column does not exist, skipping conversion")
await conn.commit()
return
if cols["payload_hash"].upper() == "BLOB":
logger.debug("payload_hash is already BLOB, skipping conversion")
await conn.commit()
return
logger.info("Rebuilding raw_packets to convert payload_hash TEXT → BLOB...")
# Create new table with BLOB type
await conn.execute("""
CREATE TABLE raw_packets_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp INTEGER NOT NULL,
data BLOB NOT NULL,
message_id INTEGER,
payload_hash BLOB,
FOREIGN KEY (message_id) REFERENCES messages(id)
)
""")
# Batch-convert rows: read TEXT hashes, convert to bytes, insert into new table
batch_size = 5000
cursor = await conn.execute(
"SELECT id, timestamp, data, message_id, payload_hash FROM raw_packets ORDER BY id"
)
total = 0
while True:
rows = await cursor.fetchmany(batch_size)
if not rows:
break
batch: list[tuple[int, int, bytes, int | None, bytes | None]] = []
for row in rows:
rid, ts, data, mid, ph = row[0], row[1], row[2], row[3], row[4]
if ph is not None and isinstance(ph, str):
try:
ph = bytes.fromhex(ph)
except ValueError:
# Not a valid hex string — hash the value to produce a valid BLOB
ph = sha256(ph.encode()).digest()
batch.append((rid, ts, data, mid, ph))
await conn.executemany(
"INSERT INTO raw_packets_new (id, timestamp, data, message_id, payload_hash) "
"VALUES (?, ?, ?, ?, ?)",
batch,
)
total += len(batch)
if total % 50000 == 0:
logger.info("Converted %d rows...", total)
# Preserve autoincrement sequence
cursor = await conn.execute("SELECT seq FROM sqlite_sequence WHERE name = 'raw_packets'")
seq_row = await cursor.fetchone()
if seq_row is not None:
await conn.execute(
"INSERT OR REPLACE INTO sqlite_sequence (name, seq) VALUES ('raw_packets_new', ?)",
(seq_row[0],),
)
await conn.execute("DROP TABLE raw_packets")
await conn.execute("ALTER TABLE raw_packets_new RENAME TO raw_packets")
# Clean up the sqlite_sequence entry for the old temp name
await conn.execute("DELETE FROM sqlite_sequence WHERE name = 'raw_packets_new'")
# Recreate indexes
await conn.execute(
"CREATE UNIQUE INDEX idx_raw_packets_payload_hash ON raw_packets(payload_hash)"
)
await conn.execute("CREATE INDEX idx_raw_packets_message_id ON raw_packets(message_id)")
await conn.commit()
logger.info("Converted %d payload_hash values from TEXT to BLOB", total)
@@ -0,0 +1,27 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add a covering index for the unread counts query.
The /api/read-state/unreads endpoint runs three queries against messages.
The last-message-times query (GROUP BY type, conversation_key + MAX(received_at))
was doing a full table scan. This covering index lets SQLite resolve the
grouping and MAX entirely from the index without touching the table.
It also improves the unread count queries which filter on outgoing and received_at.
"""
# Guard: table or columns may not exist in partial-schema test setups
cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await cursor.fetchall()}
required = {"type", "conversation_key", "outgoing", "received_at"}
if required <= columns:
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_messages_unread_covering "
"ON messages(type, conversation_key, outgoing, received_at)"
)
await conn.commit()
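Whether SQLite actually resolves the grouping from the index can be verified with EXPLAIN QUERY PLAN; a sketch, with the query shape assumed from the docstring rather than copied from the endpoint:

import aiosqlite

async def check_unreads_plan(conn: aiosqlite.Connection) -> None:
    cursor = await conn.execute(
        """EXPLAIN QUERY PLAN
           SELECT type, conversation_key, MAX(received_at)
           FROM messages GROUP BY type, conversation_key"""
    )
    for row in await cursor.fetchall():
        # Expect the detail column to mention:
        # USING COVERING INDEX idx_messages_unread_covering
        print(row)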
@@ -0,0 +1,31 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""
Add a composite index for message pagination and drop the now-redundant
idx_messages_conversation.
The pagination query (ORDER BY received_at DESC, id DESC LIMIT N) hits a
temp B-tree sort without this index. With it, SQLite walks the index in
order and stops after N rows — critical for channels with 30K+ messages.
idx_messages_conversation(type, conversation_key) is a strict prefix of
both this index and idx_messages_unread_covering, so SQLite never picks it.
Dropping it saves ~6 MB and removes one index that must be maintained on every INSERT.
"""
# Guard: table or columns may not exist in partial-schema test setups
cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await cursor.fetchall()}
required = {"type", "conversation_key", "received_at", "id"}
if required <= columns:
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_messages_pagination "
"ON messages(type, conversation_key, received_at DESC, id DESC)"
)
await conn.execute("DROP INDEX IF EXISTS idx_messages_conversation")
await conn.commit()
@@ -0,0 +1,37 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add MQTT configuration columns to app_settings."""
# Guard: app_settings may not exist in partial-schema test setups
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if not await cursor.fetchone():
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await cursor.fetchall()}
new_columns = [
("mqtt_broker_host", "TEXT DEFAULT ''"),
("mqtt_broker_port", "INTEGER DEFAULT 1883"),
("mqtt_username", "TEXT DEFAULT ''"),
("mqtt_password", "TEXT DEFAULT ''"),
("mqtt_use_tls", "INTEGER DEFAULT 0"),
("mqtt_tls_insecure", "INTEGER DEFAULT 0"),
("mqtt_topic_prefix", "TEXT DEFAULT 'meshcore'"),
("mqtt_publish_messages", "INTEGER DEFAULT 0"),
("mqtt_publish_raw_packets", "INTEGER DEFAULT 0"),
]
for col_name, col_def in new_columns:
if col_name not in columns:
await conn.execute(f"ALTER TABLE app_settings ADD COLUMN {col_name} {col_def}")
await conn.commit()
@@ -0,0 +1,33 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add community MQTT configuration columns to app_settings."""
# Guard: app_settings may not exist in partial-schema test setups
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if not await cursor.fetchone():
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await cursor.fetchall()}
new_columns = [
("community_mqtt_enabled", "INTEGER DEFAULT 0"),
("community_mqtt_iata", "TEXT DEFAULT ''"),
("community_mqtt_broker_host", "TEXT DEFAULT 'mqtt-us-v1.letsmesh.net'"),
("community_mqtt_broker_port", "INTEGER DEFAULT 443"),
("community_mqtt_email", "TEXT DEFAULT ''"),
]
for col_name, col_def in new_columns:
if col_name not in columns:
await conn.execute(f"ALTER TABLE app_settings ADD COLUMN {col_name} {col_def}")
await conn.commit()
@@ -0,0 +1,23 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Seed the #remoteterm hashtag channel so new installs have it by default.
Uses INSERT OR IGNORE so it's a no-op if the channel already exists
(e.g. existing users who already added it manually). The channels table
is created by the base schema before migrations run, so it always exists
in production.
"""
try:
await conn.execute(
"INSERT OR IGNORE INTO channels (key, name, is_hashtag, on_radio) VALUES (?, ?, ?, ?)",
("8959AE053F2201801342A1DBDDA184F6", "#remoteterm", 1, 0),
)
await conn.commit()
except Exception:
logger.debug("Skipping #remoteterm seed (channels table not ready)")
@@ -0,0 +1,23 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add flood_scope column to app_settings for outbound region tagging.
Empty string means disabled (no scope set, messages sent unscoped).
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN flood_scope TEXT DEFAULT ''")
await conn.commit()
except Exception as e:
error_msg = str(e).lower()
if "duplicate column" in error_msg:
logger.debug("flood_scope column already exists, skipping")
elif "no such table" in error_msg:
logger.debug("app_settings table not ready, skipping flood_scope migration")
else:
raise
@@ -0,0 +1,36 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add blocked_keys and blocked_names columns to app_settings.
These store JSON arrays of blocked public keys and display names.
Blocking hides messages from the UI but does not affect MQTT or bots.
"""
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN blocked_keys TEXT DEFAULT '[]'")
except Exception as e:
error_msg = str(e).lower()
if "duplicate column" in error_msg:
logger.debug("blocked_keys column already exists, skipping")
elif "no such table" in error_msg:
logger.debug("app_settings table not ready, skipping blocked_keys migration")
else:
raise
try:
await conn.execute("ALTER TABLE app_settings ADD COLUMN blocked_names TEXT DEFAULT '[]'")
except Exception as e:
error_msg = str(e).lower()
if "duplicate column" in error_msg:
logger.debug("blocked_names column already exists, skipping")
elif "no such table" in error_msg:
logger.debug("app_settings table not ready, skipping blocked_names migration")
else:
raise
await conn.commit()
@@ -0,0 +1,143 @@
import json
import logging
import uuid
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Create fanout_configs table and migrate existing MQTT settings.
Reads existing MQTT settings from app_settings and creates corresponding
fanout_configs rows. Old columns are NOT dropped (rollback safety).
"""
# 1. Create fanout_configs table
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS fanout_configs (
id TEXT PRIMARY KEY,
type TEXT NOT NULL,
name TEXT NOT NULL,
enabled INTEGER DEFAULT 0,
config TEXT NOT NULL DEFAULT '{}',
scope TEXT NOT NULL DEFAULT '{}',
sort_order INTEGER DEFAULT 0,
created_at INTEGER NOT NULL
)
"""
)
# 2. Read existing MQTT settings
try:
cursor = await conn.execute(
"""
SELECT mqtt_broker_host, mqtt_broker_port, mqtt_username, mqtt_password,
mqtt_use_tls, mqtt_tls_insecure, mqtt_topic_prefix,
mqtt_publish_messages, mqtt_publish_raw_packets,
community_mqtt_enabled, community_mqtt_iata,
community_mqtt_broker_host, community_mqtt_broker_port,
community_mqtt_email
FROM app_settings WHERE id = 1
"""
)
row = await cursor.fetchone()
except Exception:
row = None
if row is None:
await conn.commit()
return
import time
now = int(time.time())
sort_order = 0
# 3. Migrate private MQTT if configured
broker_host = row["mqtt_broker_host"] or ""
if broker_host:
publish_messages = bool(row["mqtt_publish_messages"])
publish_raw = bool(row["mqtt_publish_raw_packets"])
enabled = publish_messages or publish_raw
config = {
"broker_host": broker_host,
"broker_port": row["mqtt_broker_port"] or 1883,
"username": row["mqtt_username"] or "",
"password": row["mqtt_password"] or "",
"use_tls": bool(row["mqtt_use_tls"]),
"tls_insecure": bool(row["mqtt_tls_insecure"]),
"topic_prefix": row["mqtt_topic_prefix"] or "meshcore",
}
scope = {
"messages": "all" if publish_messages else "none",
"raw_packets": "all" if publish_raw else "none",
}
await conn.execute(
"""
INSERT INTO fanout_configs (id, type, name, enabled, config, scope, sort_order, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""",
(
str(uuid.uuid4()),
"mqtt_private",
"Private MQTT",
1 if enabled else 0,
json.dumps(config),
json.dumps(scope),
sort_order,
now,
),
)
sort_order += 1
logger.info("Migrated private MQTT settings to fanout_configs (enabled=%s)", enabled)
# 4. Migrate community MQTT if enabled OR configured (preserve disabled-but-configured)
community_enabled = bool(row["community_mqtt_enabled"])
community_iata = row["community_mqtt_iata"] or ""
community_host = row["community_mqtt_broker_host"] or ""
community_email = row["community_mqtt_email"] or ""
community_has_config = bool(
community_iata
or community_email
or (community_host and community_host != "mqtt-us-v1.letsmesh.net")
)
if community_enabled or community_has_config:
config = {
"broker_host": community_host or "mqtt-us-v1.letsmesh.net",
"broker_port": row["community_mqtt_broker_port"] or 443,
"iata": community_iata,
"email": community_email,
}
scope = {
"messages": "none",
"raw_packets": "all",
}
await conn.execute(
"""
INSERT INTO fanout_configs (id, type, name, enabled, config, scope, sort_order, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""",
(
str(uuid.uuid4()),
"mqtt_community",
"Community MQTT",
1 if community_enabled else 0,
json.dumps(config),
json.dumps(scope),
sort_order,
now,
),
)
logger.info(
"Migrated community MQTT settings to fanout_configs (enabled=%s)", community_enabled
)
await conn.commit()
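A hypothetical sketch of how downstream code might load these rows back (column layout taken from the CREATE TABLE above; the helper itself is an assumption, not shown in this diff):

import json
import aiosqlite

async def load_fanout_configs(conn: aiosqlite.Connection) -> list[dict]:
    cursor = await conn.execute(
        "SELECT id, type, name, enabled, config, scope "
        "FROM fanout_configs ORDER BY sort_order"
    )
    configs = []
    for row in await cursor.fetchall():
        configs.append({
            "id": row[0],
            "type": row[1],
            "name": row[2],
            "enabled": bool(row[3]),
            "config": json.loads(row[4] or "{}"),  # per-connector settings
            "scope": json.loads(row[5] or "{}"),   # what traffic to publish
        })
    return configs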
@@ -0,0 +1,63 @@
import json
import logging
import uuid
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Migrate bots from app_settings.bots JSON to fanout_configs rows."""
try:
cursor = await conn.execute("SELECT bots FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
except Exception:
row = None
if row is None:
await conn.commit()
return
bots_json = row["bots"] or "[]"
try:
bots = json.loads(bots_json)
except (json.JSONDecodeError, TypeError):
bots = []
if not bots:
await conn.commit()
return
import time
now = int(time.time())
# Use sort_order starting at 200 to place bots after MQTT configs (0-99)
for i, bot in enumerate(bots):
bot_name = bot.get("name") or f"Bot {i + 1}"
bot_enabled = bool(bot.get("enabled", False))
bot_code = bot.get("code", "")
config_blob = json.dumps({"code": bot_code})
scope = json.dumps({"messages": "all", "raw_packets": "none"})
await conn.execute(
"""
INSERT INTO fanout_configs (id, type, name, enabled, config, scope, sort_order, created_at)
VALUES (?, 'bot', ?, ?, ?, ?, ?, ?)
""",
(
str(uuid.uuid4()),
bot_name,
1 if bot_enabled else 0,
config_blob,
scope,
200 + i,
now,
),
)
logger.info("Migrated bot '%s' to fanout_configs (enabled=%s)", bot_name, bot_enabled)
await conn.commit()
@@ -0,0 +1,54 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Drop legacy MQTT, community MQTT, and bots columns from app_settings.
These columns were migrated to fanout_configs in migrations 36 and 37.
SQLite 3.35.0+ supports ALTER TABLE DROP COLUMN. For older versions,
the columns remain but are harmless (no longer read or written).
"""
# Check if app_settings table exists (some test DBs may not have it)
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='app_settings'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
columns_to_drop = [
"bots",
"mqtt_broker_host",
"mqtt_broker_port",
"mqtt_username",
"mqtt_password",
"mqtt_use_tls",
"mqtt_tls_insecure",
"mqtt_topic_prefix",
"mqtt_publish_messages",
"mqtt_publish_raw_packets",
"community_mqtt_enabled",
"community_mqtt_iata",
"community_mqtt_broker_host",
"community_mqtt_broker_port",
"community_mqtt_email",
]
for column in columns_to_drop:
try:
await conn.execute(f"ALTER TABLE app_settings DROP COLUMN {column}")
logger.debug("Dropped %s from app_settings", column)
except aiosqlite.OperationalError as e:
error_msg = str(e).lower()
if "no such column" in error_msg:
logger.debug("app_settings.%s already dropped, skipping", column)
elif "syntax error" in error_msg or "drop column" in error_msg:
logger.debug("SQLite doesn't support DROP COLUMN, %s column will remain", column)
else:
raise
await conn.commit()
@@ -0,0 +1,65 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add contacts.out_path_hash_mode and backfill legacy rows.
Historical databases predate multibyte routing support. Backfill rules:
- contacts with last_path_len = -1 are flood routes -> out_path_hash_mode = -1
- all other existing contacts default to 0 (1-byte legacy hop identifiers)
"""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
column_cursor = await conn.execute("PRAGMA table_info(contacts)")
columns = {row[1] for row in await column_cursor.fetchall()}
added_column = False
try:
await conn.execute(
"ALTER TABLE contacts ADD COLUMN out_path_hash_mode INTEGER NOT NULL DEFAULT 0"
)
added_column = True
logger.debug("Added out_path_hash_mode to contacts table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("contacts.out_path_hash_mode already exists, skipping add")
else:
raise
if "last_path_len" not in columns:
await conn.commit()
return
if added_column:
await conn.execute(
"""
UPDATE contacts
SET out_path_hash_mode = CASE
WHEN last_path_len = -1 THEN -1
ELSE 0
END
"""
)
else:
await conn.execute(
"""
UPDATE contacts
SET out_path_hash_mode = CASE
WHEN last_path_len = -1 THEN -1
ELSE 0
END
WHERE out_path_hash_mode NOT IN (-1, 0, 1, 2)
OR (last_path_len = -1 AND out_path_hash_mode != -1)
"""
)
await conn.commit()
@@ -0,0 +1,82 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Rebuild contact_advert_paths so uniqueness includes path_len.
Multi-byte routing can produce the same path_hex bytes with a different hop count,
which changes the hop boundaries and therefore the semantic next-hop identity.
"""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contact_advert_paths'"
)
if await cursor.fetchone() is None:
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS contact_advert_paths (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
await conn.execute("DROP INDEX IF EXISTS idx_contact_advert_paths_recent")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
return
await conn.execute(
"""
CREATE TABLE contact_advert_paths_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key)
)
"""
)
await conn.execute(
"""
INSERT INTO contact_advert_paths_new
(public_key, path_hex, path_len, first_seen, last_seen, heard_count)
SELECT
public_key,
path_hex,
path_len,
MIN(first_seen),
MAX(last_seen),
SUM(heard_count)
FROM contact_advert_paths
GROUP BY public_key, path_hex, path_len
"""
)
await conn.execute("DROP TABLE contact_advert_paths")
await conn.execute("ALTER TABLE contact_advert_paths_new RENAME TO contact_advert_paths")
await conn.execute("DROP INDEX IF EXISTS idx_contact_advert_paths_recent")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
@@ -0,0 +1,31 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add nullable routing-override columns to contacts."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
for column_name, column_type in (
("route_override_path", "TEXT"),
("route_override_len", "INTEGER"),
("route_override_hash_mode", "INTEGER"),
):
try:
await conn.execute(f"ALTER TABLE contacts ADD COLUMN {column_name} {column_type}")
logger.debug("Added %s to contacts table", column_name)
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("contacts.%s already exists, skipping", column_name)
else:
raise
await conn.commit()
@@ -0,0 +1,26 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add nullable per-channel flood-scope override column."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='channels'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
try:
await conn.execute("ALTER TABLE channels ADD COLUMN flood_scope_override TEXT")
logger.debug("Added flood_scope_override to channels table")
except aiosqlite.OperationalError as e:
if "duplicate column name" in str(e).lower():
logger.debug("channels.flood_scope_override already exists, skipping")
else:
raise
await conn.commit()
@@ -0,0 +1,31 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Restrict the message dedup index to channel messages."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await cursor.fetchall()}
required_columns = {"type", "conversation_key", "text", "sender_timestamp"}
if not required_columns.issubset(columns):
logger.debug("messages table missing dedup-index columns, skipping migration 43")
await conn.commit()
return
await conn.execute("DROP INDEX IF EXISTS idx_messages_dedup_null_safe")
await conn.execute(
"""CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_dedup_null_safe
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))
WHERE type = 'CHAN'"""
)
await conn.commit()
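A minimal sqlite3 sketch (illustrative only) of the partial-index semantics: only rows matching the WHERE clause participate in uniqueness, so PRIV rows no longer collide under this index while CHAN rows still do:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (type TEXT, conversation_key TEXT, "
           "text TEXT, sender_timestamp INTEGER)")
db.execute("""CREATE UNIQUE INDEX dedup
              ON messages(type, conversation_key, text,
                          COALESCE(sender_timestamp, 0))
              WHERE type = 'CHAN'""")
db.execute("INSERT INTO messages VALUES ('PRIV', 'k', 'hi', 1)")
db.execute("INSERT INTO messages VALUES ('PRIV', 'k', 'hi', 1)")  # allowed
try:
    db.execute("INSERT INTO messages VALUES ('CHAN', 'k', 'hi', 1)")
    db.execute("INSERT INTO messages VALUES ('CHAN', 'k', 'hi', 1)")
except sqlite3.IntegrityError:
    print("CHAN duplicate still blocked")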
@@ -0,0 +1,157 @@
import json
import logging
import aiosqlite
logger = logging.getLogger(__name__)
def _merge_message_paths(paths_json_values: list[str | None]) -> str | None:
"""Merge multiple message path arrays into one exact-observation list."""
merged: list[dict[str, object]] = []
seen: set[tuple[object | None, object | None, object | None]] = set()
for paths_json in paths_json_values:
if not paths_json:
continue
try:
parsed = json.loads(paths_json)
except (TypeError, json.JSONDecodeError):
continue
if not isinstance(parsed, list):
continue
for entry in parsed:
if not isinstance(entry, dict):
continue
key = (
entry.get("path"),
entry.get("received_at"),
entry.get("path_len"),
)
if key in seen:
continue
seen.add(key)
merged.append(entry)
return json.dumps(merged) if merged else None
async def migrate(conn: aiosqlite.Connection) -> None:
"""Collapse same-contact same-text same-second incoming DMs into one row."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await cursor.fetchall()}
required_columns = {
"id",
"type",
"conversation_key",
"text",
"sender_timestamp",
"received_at",
"paths",
"txt_type",
"signature",
"outgoing",
"acked",
"sender_name",
"sender_key",
}
if not required_columns.issubset(columns):
logger.debug("messages table missing incoming-DM dedup columns, skipping migration 44")
await conn.commit()
return
raw_packets_cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='raw_packets'"
)
raw_packets_exists = await raw_packets_cursor.fetchone() is not None
duplicate_groups_cursor = await conn.execute(
"""
SELECT conversation_key, text,
COALESCE(sender_timestamp, 0) AS normalized_sender_timestamp,
COUNT(*) AS duplicate_count
FROM messages
WHERE type = 'PRIV' AND outgoing = 0
GROUP BY conversation_key, text, COALESCE(sender_timestamp, 0)
HAVING COUNT(*) > 1
"""
)
duplicate_groups = await duplicate_groups_cursor.fetchall()
for group in duplicate_groups:
normalized_sender_timestamp = group["normalized_sender_timestamp"]
rows_cursor = await conn.execute(
"""
SELECT *
FROM messages
WHERE type = 'PRIV' AND outgoing = 0
AND conversation_key = ? AND text = ?
AND COALESCE(sender_timestamp, 0) = ?
ORDER BY id ASC
""",
(
group["conversation_key"],
group["text"],
normalized_sender_timestamp,
),
)
rows = list(await rows_cursor.fetchall())
if len(rows) < 2:
continue
keeper = rows[0]
duplicate_ids = [row["id"] for row in rows[1:]]
merged_paths = _merge_message_paths([row["paths"] for row in rows])
merged_received_at = min(row["received_at"] for row in rows)
merged_txt_type = next((row["txt_type"] for row in rows if row["txt_type"] != 0), 0)
merged_signature = next((row["signature"] for row in rows if row["signature"]), None)
merged_sender_name = next((row["sender_name"] for row in rows if row["sender_name"]), None)
merged_sender_key = next((row["sender_key"] for row in rows if row["sender_key"]), None)
merged_acked = max(int(row["acked"] or 0) for row in rows)
await conn.execute(
"""
UPDATE messages
SET received_at = ?, paths = ?, txt_type = ?, signature = ?,
acked = ?, sender_name = ?, sender_key = ?
WHERE id = ?
""",
(
merged_received_at,
merged_paths,
merged_txt_type,
merged_signature,
merged_acked,
merged_sender_name,
merged_sender_key,
keeper["id"],
),
)
if raw_packets_exists:
for duplicate_id in duplicate_ids:
await conn.execute(
"UPDATE raw_packets SET message_id = ? WHERE message_id = ?",
(keeper["id"], duplicate_id),
)
placeholders = ",".join("?" for _ in duplicate_ids)
await conn.execute(
f"DELETE FROM messages WHERE id IN ({placeholders})",
duplicate_ids,
)
await conn.execute("DROP INDEX IF EXISTS idx_messages_incoming_priv_dedup")
await conn.execute(
"""CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_incoming_priv_dedup
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0))
WHERE type = 'PRIV' AND outgoing = 0"""
)
await conn.commit()
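An illustrative call to _merge_message_paths, showing the dedup-by-(path, received_at, path_len) behavior across two overlapping observation lists (sample values are made up):

a = '[{"path": "11,22", "received_at": 100, "path_len": 2}]'
b = ('[{"path": "11,22", "received_at": 100, "path_len": 2},'
     ' {"path": "33", "received_at": 101, "path_len": 1}]')
print(_merge_message_paths([a, None, b]))
# -> JSON array with both unique observations; the shared one appears once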
@@ -0,0 +1,136 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Replace legacy contact route columns with canonical direct-route columns."""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='contacts'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
cursor = await conn.execute("PRAGMA table_info(contacts)")
columns = {row[1] for row in await cursor.fetchall()}
target_columns = {
"public_key",
"name",
"type",
"flags",
"direct_path",
"direct_path_len",
"direct_path_hash_mode",
"direct_path_updated_at",
"route_override_path",
"route_override_len",
"route_override_hash_mode",
"last_advert",
"lat",
"lon",
"last_seen",
"on_radio",
"last_contacted",
"first_seen",
"last_read_at",
}
if (
target_columns.issubset(columns)
and "last_path" not in columns
and "out_path_hash_mode" not in columns
):
await conn.commit()
return
await conn.execute(
"""
CREATE TABLE contacts_new (
public_key TEXT PRIMARY KEY,
name TEXT,
type INTEGER DEFAULT 0,
flags INTEGER DEFAULT 0,
direct_path TEXT,
direct_path_len INTEGER,
direct_path_hash_mode INTEGER,
direct_path_updated_at INTEGER,
route_override_path TEXT,
route_override_len INTEGER,
route_override_hash_mode INTEGER,
last_advert INTEGER,
lat REAL,
lon REAL,
last_seen INTEGER,
on_radio INTEGER DEFAULT 0,
last_contacted INTEGER,
first_seen INTEGER,
last_read_at INTEGER
)
"""
)
select_expr = {
"public_key": "public_key",
"name": "NULL",
"type": "0",
"flags": "0",
"direct_path": "NULL",
"direct_path_len": "NULL",
"direct_path_hash_mode": "NULL",
"direct_path_updated_at": "NULL",
"route_override_path": "NULL",
"route_override_len": "NULL",
"route_override_hash_mode": "NULL",
"last_advert": "NULL",
"lat": "NULL",
"lon": "NULL",
"last_seen": "NULL",
"on_radio": "0",
"last_contacted": "NULL",
"first_seen": "NULL",
"last_read_at": "NULL",
}
for name in ("name", "type", "flags"):
if name in columns:
select_expr[name] = name
if "direct_path" in columns:
select_expr["direct_path"] = "direct_path"
if "direct_path_len" in columns:
select_expr["direct_path_len"] = "direct_path_len"
if "direct_path_hash_mode" in columns:
select_expr["direct_path_hash_mode"] = "direct_path_hash_mode"
for name in (
"route_override_path",
"route_override_len",
"route_override_hash_mode",
"last_advert",
"lat",
"lon",
"last_seen",
"on_radio",
"last_contacted",
"first_seen",
"last_read_at",
):
if name in columns:
select_expr[name] = name
ordered_columns = list(select_expr.keys())
await conn.execute(
f"""
INSERT INTO contacts_new ({", ".join(ordered_columns)})
SELECT {", ".join(select_expr[name] for name in ordered_columns)}
FROM contacts
"""
)
await conn.execute("DROP TABLE contacts")
await conn.execute("ALTER TABLE contacts_new RENAME TO contacts")
await conn.commit()
@@ -0,0 +1,93 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Move uniquely resolvable orphan contact child rows onto full contacts, drop the rest."""
existing_tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
existing_tables = {row[0] for row in await existing_tables_cursor.fetchall()}
if "contacts" not in existing_tables:
await conn.commit()
return
child_tables = [
table
for table in ("contact_name_history", "contact_advert_paths")
if table in existing_tables
]
if not child_tables:
await conn.commit()
return
orphan_keys: set[str] = set()
for table in child_tables:
cursor = await conn.execute(
f"""
SELECT DISTINCT child.public_key
FROM {table} child
LEFT JOIN contacts c ON c.public_key = child.public_key
WHERE c.public_key IS NULL
"""
)
orphan_keys.update(row[0] for row in await cursor.fetchall())
for orphan_key in sorted(orphan_keys, key=len, reverse=True):
match_cursor = await conn.execute(
"""
SELECT public_key
FROM contacts
WHERE length(public_key) = 64
AND public_key LIKE ? || '%'
ORDER BY public_key
""",
(orphan_key.lower(),),
)
matches = [row[0] for row in await match_cursor.fetchall()]
resolved_key = matches[0] if len(matches) == 1 else None
if resolved_key is not None:
if "contact_name_history" in child_tables:
await conn.execute(
"""
INSERT INTO contact_name_history (public_key, name, first_seen, last_seen)
SELECT ?, name, first_seen, last_seen
FROM contact_name_history
WHERE public_key = ?
ON CONFLICT(public_key, name) DO UPDATE SET
first_seen = MIN(contact_name_history.first_seen, excluded.first_seen),
last_seen = MAX(contact_name_history.last_seen, excluded.last_seen)
""",
(resolved_key, orphan_key),
)
if "contact_advert_paths" in child_tables:
await conn.execute(
"""
INSERT INTO contact_advert_paths
(public_key, path_hex, path_len, first_seen, last_seen, heard_count)
SELECT ?, path_hex, path_len, first_seen, last_seen, heard_count
FROM contact_advert_paths
WHERE public_key = ?
ON CONFLICT(public_key, path_hex, path_len) DO UPDATE SET
first_seen = MIN(contact_advert_paths.first_seen, excluded.first_seen),
last_seen = MAX(contact_advert_paths.last_seen, excluded.last_seen),
heard_count = contact_advert_paths.heard_count + excluded.heard_count
""",
(resolved_key, orphan_key),
)
if "contact_name_history" in child_tables:
await conn.execute(
"DELETE FROM contact_name_history WHERE public_key = ?",
(orphan_key,),
)
if "contact_advert_paths" in child_tables:
await conn.execute(
"DELETE FROM contact_advert_paths WHERE public_key = ?",
(orphan_key,),
)
await conn.commit()
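The adoption rule the loop above implements: an orphan key is kept only when exactly one full 64-char contact key starts with it; zero matches or two-plus matches both mean the orphan rows are dropped rather than guessed at. A standalone sketch of the same rule (hypothetical helper, not part of the migration; assumes contact keys are stored lowercase, as elsewhere in this codebase):
def resolve_orphan(orphan_key: str, contact_keys: set[str]) -> str | None:
    """Return the unique full contact key prefixed by orphan_key, else None."""
    matches = sorted(
        key for key in contact_keys
        if len(key) == 64 and key.startswith(orphan_key.lower())
    )
    return matches[0] if len(matches) == 1 else None

full_a = "ab" + "0" * 62
full_b = "ab" + "1" * 62
assert resolve_orphan("AB00", {full_a, full_b}) == full_a  # unique prefix resolves
assert resolve_orphan("ab", {full_a, full_b}) is None      # ambiguous: dropped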
@@ -0,0 +1,39 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add indexes used by the statistics endpoint's time-windowed scans."""
cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
tables = {row[0] for row in await cursor.fetchall()}
if "raw_packets" in tables:
cursor = await conn.execute("PRAGMA table_info(raw_packets)")
raw_packet_columns = {row[1] for row in await cursor.fetchall()}
if "timestamp" in raw_packet_columns:
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_raw_packets_timestamp ON raw_packets(timestamp)"
)
if "contacts" in tables:
cursor = await conn.execute("PRAGMA table_info(contacts)")
contact_columns = {row[1] for row in await cursor.fetchall()}
if {"type", "last_seen"}.issubset(contact_columns):
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contacts_type_last_seen ON contacts(type, last_seen)"
)
if "messages" in tables:
cursor = await conn.execute("PRAGMA table_info(messages)")
message_columns = {row[1] for row in await cursor.fetchall()}
if {"type", "received_at", "conversation_key"}.issubset(message_columns):
await conn.execute(
"""
CREATE INDEX IF NOT EXISTS idx_messages_type_received_conversation
ON messages(type, received_at, conversation_key)
"""
)
await conn.commit()
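Whether these indexes actually serve the statistics scans can be checked with EXPLAIN QUERY PLAN; a sketch against a throwaway table (stdlib sqlite3, same index name as above):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_packets (id INTEGER PRIMARY KEY, timestamp INTEGER, data BLOB)")
conn.execute("CREATE INDEX idx_raw_packets_timestamp ON raw_packets(timestamp)")
(detail,) = {
    row[3]
    for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM raw_packets WHERE timestamp >= ?", (0,)
    )
}
# The plan should name the index instead of a full table scan.
assert "idx_raw_packets_timestamp" in detail, detail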
@@ -0,0 +1,27 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add discovery_blocked_types column to app_settings.
Stores a JSON array of integer contact type codes (1=Client, 2=Repeater,
3=Room, 4=Sensor) whose advertisements should not create new contacts.
Empty list means all types are accepted.
"""
try:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN discovery_blocked_types TEXT DEFAULT '[]'"
)
except Exception as e:
error_msg = str(e).lower()
if "duplicate column" in error_msg:
logger.debug("discovery_blocked_types column already exists, skipping")
elif "no such table" in error_msg:
logger.debug("app_settings table not ready, skipping discovery_blocked_types migration")
else:
raise
await conn.commit()
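Since the column stores plain JSON text, consuming it is a json.loads plus a membership test; a minimal sketch of the intended gate (helper name hypothetical):
import json

def advert_may_create_contact(contact_type: int, blocked_types_json: str | None) -> bool:
    # NULL/empty column means every type is accepted, per the docstring above.
    blocked = json.loads(blocked_types_json or "[]")
    return contact_type not in blocked

assert advert_may_create_contact(2, "[3, 4]")      # repeater: allowed
assert not advert_may_create_contact(3, "[3, 4]")  # room: blocked
assert advert_may_create_contact(3, None)          # unset column: allowed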
@@ -0,0 +1,158 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Rebuild FK tables with CASCADE/SET NULL and clean orphaned rows.
SQLite cannot ALTER existing FK constraints, so each table is rebuilt.
Orphaned child rows are cleaned up before the rebuild to ensure the
INSERT...SELECT into the new table (which has enforced FKs) succeeds.
"""
import shutil
from pathlib import Path
# Back up the database before table rebuilds (skip for in-memory DBs).
cursor = await conn.execute("PRAGMA database_list")
db_row = await cursor.fetchone()
db_path = db_row[2] if db_row else ""
if db_path and db_path != ":memory:" and Path(db_path).exists():
backup_path = db_path + ".pre-fk-migration.bak"
for suffix in ("", "-wal", "-shm"):
src = Path(db_path + suffix)
if src.exists():
shutil.copy2(str(src), backup_path + suffix)
logger.info("Database backed up to %s before FK migration", backup_path)
# --- Phase 1: clean orphans (guard each table's existence) ---
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
existing_tables = {row[0] for row in await tables_cursor.fetchall()}
if "contact_advert_paths" in existing_tables and "contacts" in existing_tables:
await conn.execute(
"DELETE FROM contact_advert_paths "
"WHERE public_key NOT IN (SELECT public_key FROM contacts)"
)
if "contact_name_history" in existing_tables and "contacts" in existing_tables:
await conn.execute(
"DELETE FROM contact_name_history "
"WHERE public_key NOT IN (SELECT public_key FROM contacts)"
)
if "raw_packets" in existing_tables and "messages" in existing_tables:
# Guard: message_id column may not exist on very old schemas
col_cursor = await conn.execute("PRAGMA table_info(raw_packets)")
raw_cols = {row[1] for row in await col_cursor.fetchall()}
if "message_id" in raw_cols:
await conn.execute(
"UPDATE raw_packets SET message_id = NULL WHERE message_id IS NOT NULL "
"AND message_id NOT IN (SELECT id FROM messages)"
)
await conn.commit()
logger.debug("Cleaned orphaned child rows before FK rebuild")
# --- Phase 2: rebuild raw_packets with ON DELETE SET NULL ---
# Skip if raw_packets doesn't have message_id (pre-migration-18 schema)
raw_has_message_id = False
if "raw_packets" in existing_tables:
col_cursor2 = await conn.execute("PRAGMA table_info(raw_packets)")
raw_has_message_id = "message_id" in {row[1] for row in await col_cursor2.fetchall()}
if raw_has_message_id:
# Dynamically build column list based on what the old table actually has,
# since very old schemas may lack payload_hash (added in migration 28).
col_cursor3 = await conn.execute("PRAGMA table_info(raw_packets)")
old_cols = [row[1] for row in await col_cursor3.fetchall()]
new_col_defs = [
"id INTEGER PRIMARY KEY AUTOINCREMENT",
"timestamp INTEGER NOT NULL",
"data BLOB NOT NULL",
"message_id INTEGER",
]
copy_cols = ["id", "timestamp", "data", "message_id"]
if "payload_hash" in old_cols:
new_col_defs.append("payload_hash BLOB")
copy_cols.append("payload_hash")
new_col_defs.append("FOREIGN KEY (message_id) REFERENCES messages(id) ON DELETE SET NULL")
cols_sql = ", ".join(new_col_defs)
copy_sql = ", ".join(copy_cols)
await conn.execute(f"CREATE TABLE raw_packets_fk ({cols_sql})")
await conn.execute(
f"INSERT INTO raw_packets_fk ({copy_sql}) SELECT {copy_sql} FROM raw_packets"
)
await conn.execute("DROP TABLE raw_packets")
await conn.execute("ALTER TABLE raw_packets_fk RENAME TO raw_packets")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_raw_packets_message_id ON raw_packets(message_id)"
)
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_raw_packets_timestamp ON raw_packets(timestamp)"
)
if "payload_hash" in old_cols:
await conn.execute(
"CREATE UNIQUE INDEX IF NOT EXISTS idx_raw_packets_payload_hash ON raw_packets(payload_hash)"
)
await conn.commit()
logger.debug("Rebuilt raw_packets with ON DELETE SET NULL")
# --- Phase 3: rebuild contact_advert_paths with ON DELETE CASCADE ---
if "contact_advert_paths" in existing_tables:
await conn.execute(
"""
CREATE TABLE contact_advert_paths_fk (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
path_hex TEXT NOT NULL,
path_len INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
heard_count INTEGER NOT NULL DEFAULT 1,
UNIQUE(public_key, path_hex, path_len),
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
)
"""
)
await conn.execute(
"INSERT INTO contact_advert_paths_fk (id, public_key, path_hex, path_len, first_seen, last_seen, heard_count) "
"SELECT id, public_key, path_hex, path_len, first_seen, last_seen, heard_count FROM contact_advert_paths"
)
await conn.execute("DROP TABLE contact_advert_paths")
await conn.execute("ALTER TABLE contact_advert_paths_fk RENAME TO contact_advert_paths")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_advert_paths_recent "
"ON contact_advert_paths(public_key, last_seen DESC)"
)
await conn.commit()
logger.debug("Rebuilt contact_advert_paths with ON DELETE CASCADE")
# --- Phase 4: rebuild contact_name_history with ON DELETE CASCADE ---
if "contact_name_history" in existing_tables:
await conn.execute(
"""
CREATE TABLE contact_name_history_fk (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
name TEXT NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
UNIQUE(public_key, name),
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
)
"""
)
await conn.execute(
"INSERT INTO contact_name_history_fk (id, public_key, name, first_seen, last_seen) "
"SELECT id, public_key, name, first_seen, last_seen FROM contact_name_history"
)
await conn.execute("DROP TABLE contact_name_history")
await conn.execute("ALTER TABLE contact_name_history_fk RENAME TO contact_name_history")
await conn.execute(
"CREATE INDEX IF NOT EXISTS idx_contact_name_history_key "
"ON contact_name_history(public_key, last_seen DESC)"
)
await conn.commit()
logger.debug("Rebuilt contact_name_history with ON DELETE CASCADE")
@@ -0,0 +1,27 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Create repeater_telemetry_history table for JSON-blob telemetry snapshots."""
await conn.execute(
"""
CREATE TABLE IF NOT EXISTS repeater_telemetry_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
public_key TEXT NOT NULL,
timestamp INTEGER NOT NULL,
data TEXT NOT NULL,
FOREIGN KEY (public_key) REFERENCES contacts(public_key) ON DELETE CASCADE
)
"""
)
await conn.execute(
"""
CREATE INDEX IF NOT EXISTS idx_repeater_telemetry_pk_ts
ON repeater_telemetry_history (public_key, timestamp)
"""
)
await conn.commit()
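Usage is append-and-scan: writers dump one JSON blob per poll, and readers walk (public_key, timestamp) backwards, which the index above covers. A sketch (field names inside the blob are illustrative):
import json
import time

import aiosqlite

async def record_snapshot(conn: aiosqlite.Connection, public_key: str, snapshot: dict) -> None:
    await conn.execute(
        "INSERT INTO repeater_telemetry_history (public_key, timestamp, data) VALUES (?, ?, ?)",
        (public_key, int(time.time()), json.dumps(snapshot)),
    )
    await conn.commit()

async def recent_snapshots(conn: aiosqlite.Connection, public_key: str, limit: int = 10) -> list[dict]:
    cursor = await conn.execute(
        "SELECT timestamp, data FROM repeater_telemetry_history"
        " WHERE public_key = ? ORDER BY timestamp DESC LIMIT ?",
        (public_key, limit),
    )
    return [{"timestamp": ts, "data": json.loads(blob)} for ts, blob in await cursor.fetchall()]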
@@ -0,0 +1,24 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Remove vestigial sidebar_sort_order column from app_settings."""
col_cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await col_cursor.fetchall()}
if "sidebar_sort_order" in columns:
try:
await conn.execute("ALTER TABLE app_settings DROP COLUMN sidebar_sort_order")
await conn.commit()
except Exception as e:
error_msg = str(e).lower()
if "syntax error" in error_msg or "drop column" in error_msg:
logger.debug(
"SQLite doesn't support DROP COLUMN, sidebar_sort_order column will remain"
)
await conn.commit()
else:
raise
@@ -0,0 +1,21 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add nullable per-channel path hash mode override column."""
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
if "channels" not in {row[0] for row in await tables_cursor.fetchall()}:
await conn.commit()
return
try:
await conn.execute("ALTER TABLE channels ADD COLUMN path_hash_mode_override INTEGER")
await conn.commit()
except Exception as e:
if "duplicate column" in str(e).lower():
await conn.commit()
else:
raise
@@ -0,0 +1,20 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add tracked_telemetry_repeaters JSON list column to app_settings."""
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
if "app_settings" not in {row[0] for row in await tables_cursor.fetchall()}:
await conn.commit()
return
col_cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await col_cursor.fetchall()}
if "tracked_telemetry_repeaters" not in columns:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN tracked_telemetry_repeaters TEXT DEFAULT '[]'"
)
await conn.commit()
@@ -0,0 +1,20 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add auto_resend_channel boolean column to app_settings."""
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
if "app_settings" not in {row[0] for row in await tables_cursor.fetchall()}:
await conn.commit()
return
col_cursor = await conn.execute("PRAGMA table_info(app_settings)")
columns = {row[1] for row in await col_cursor.fetchall()}
if "auto_resend_channel" not in columns:
await conn.execute(
"ALTER TABLE app_settings ADD COLUMN auto_resend_channel INTEGER DEFAULT 0"
)
await conn.commit()
@@ -0,0 +1,93 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Move favorites from app_settings JSON blob to per-entity boolean columns.
1. Add ``favorite`` column to contacts and channels tables.
2. Backfill from the ``app_settings.favorites`` JSON array.
3. Drop the ``favorites`` column from app_settings.
"""
import json as _json
# --- Add columns ---
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
existing_tables = {row[0] for row in await tables_cursor.fetchall()}
for table in ("contacts", "channels"):
if table not in existing_tables:
continue
col_cursor = await conn.execute(f"PRAGMA table_info({table})")
columns = {row[1] for row in await col_cursor.fetchall()}
if "favorite" not in columns:
await conn.execute(f"ALTER TABLE {table} ADD COLUMN favorite INTEGER DEFAULT 0")
await conn.commit()
# --- Backfill from JSON ---
tables_cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
if "app_settings" not in {row[0] for row in await tables_cursor.fetchall()}:
await conn.commit()
return
col_cursor = await conn.execute("PRAGMA table_info(app_settings)")
settings_columns = {row[1] for row in await col_cursor.fetchall()}
if "favorites" not in settings_columns:
await conn.commit()
return
cursor = await conn.execute("SELECT favorites FROM app_settings WHERE id = 1")
row = await cursor.fetchone()
if row and row[0]:
try:
favorites = _json.loads(row[0])
except (ValueError, TypeError):
favorites = []
contact_keys = []
channel_keys = []
for fav in favorites:
if not isinstance(fav, dict):
continue
fav_type = fav.get("type")
fav_id = fav.get("id")
if not fav_id:
continue
if fav_type == "contact":
contact_keys.append(fav_id)
elif fav_type == "channel":
channel_keys.append(fav_id)
if contact_keys:
placeholders = ",".join("?" for _ in contact_keys)
await conn.execute(
f"UPDATE contacts SET favorite = 1 WHERE public_key IN ({placeholders})",
contact_keys,
)
if channel_keys:
placeholders = ",".join("?" for _ in channel_keys)
await conn.execute(
f"UPDATE channels SET favorite = 1 WHERE key IN ({placeholders})",
channel_keys,
)
if contact_keys or channel_keys:
logger.info(
"Backfilled %d contact favorite(s) and %d channel favorite(s) from app_settings",
len(contact_keys),
len(channel_keys),
)
await conn.commit()
# --- Drop the JSON column ---
try:
await conn.execute("ALTER TABLE app_settings DROP COLUMN favorites")
await conn.commit()
except Exception as e:
error_msg = str(e).lower()
if "syntax error" in error_msg or "drop column" in error_msg:
logger.debug("SQLite doesn't support DROP COLUMN; favorites column will remain unused")
await conn.commit()
else:
raise
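For reference, the legacy blob this backfill consumes is a JSON array of {type, id} objects (shape inferred from the loop above; key values shortened for readability):
import json

legacy_blob = '[{"type": "contact", "id": "abc123"}, {"type": "channel", "id": "def456"}]'
favorites = json.loads(legacy_blob)
contact_keys = [f["id"] for f in favorites if isinstance(f, dict) and f.get("type") == "contact"]
channel_keys = [f["id"] for f in favorites if isinstance(f, dict) and f.get("type") == "channel"]
assert contact_keys == ["abc123"] and channel_keys == ["def456"]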
@@ -0,0 +1,43 @@
import logging
import aiosqlite
logger = logging.getLogger(__name__)
async def migrate(conn: aiosqlite.Connection) -> None:
"""Add sender_key to the incoming PRIV dedup index.
Room-server posts are stored as PRIV messages sharing one conversation_key
(the room contact). Without sender_key in the uniqueness constraint, two
different room participants sending identical text in the same clock second
collide and the second message is silently dropped.
Adding COALESCE(sender_key, '') is strictly more permissive — no existing
rows can conflict — so the migration only needs to rebuild the index.
"""
cursor = await conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='messages'"
)
if await cursor.fetchone() is None:
await conn.commit()
return
# The index references type, conversation_key, sender_timestamp, outgoing,
# and sender_key. Some migration tests create minimal messages tables that
# lack these columns. Skip gracefully when the schema is too old.
col_cursor = await conn.execute("PRAGMA table_info(messages)")
columns = {row[1] for row in await col_cursor.fetchall()}
required = {"type", "conversation_key", "sender_timestamp", "outgoing", "sender_key"}
if not required.issubset(columns):
await conn.commit()
return
await conn.execute("DROP INDEX IF EXISTS idx_messages_incoming_priv_dedup")
await conn.execute(
"""CREATE UNIQUE INDEX IF NOT EXISTS idx_messages_incoming_priv_dedup
ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0),
COALESCE(sender_key, ''))
WHERE type = 'PRIV' AND outgoing = 0"""
)
await conn.commit()
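The effect of widening the index is easy to demonstrate in an in-memory database (sketch: minimal table, same index expression as above):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, type TEXT, conversation_key TEXT,"
    " text TEXT, sender_timestamp INTEGER, sender_key TEXT, outgoing INTEGER)"
)
conn.execute(
    """CREATE UNIQUE INDEX idx_messages_incoming_priv_dedup
           ON messages(type, conversation_key, text, COALESCE(sender_timestamp, 0),
                       COALESCE(sender_key, ''))
           WHERE type = 'PRIV' AND outgoing = 0"""
)
for sender in ("alice", "bob", "bob"):
    conn.execute(
        "INSERT OR IGNORE INTO messages"
        " (type, conversation_key, text, sender_timestamp, sender_key, outgoing)"
        " VALUES ('PRIV', 'roomkey', 'hello', 1700000000, ?, 0)",
        (sender,),
    )
# Two distinct senders survive the same-second identical text; bob's literal
# duplicate is still suppressed by INSERT OR IGNORE.
assert conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0] == 2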
@@ -0,0 +1,66 @@
"""
Database migrations using SQLite's user_version pragma.
Migrations run automatically on startup. The user_version pragma tracks
which migrations have been applied (defaults to 0 for existing databases).
Each migration lives in its own file: ``_NNN_description.py``, exposing an
``async def migrate(conn)`` entry point. The runner auto-discovers files by
numeric prefix and executes them in order.
This approach is safe for existing users: their databases have user_version=0,
so all migrations run in order on first startup after upgrade.
"""
import importlib
import logging
import pkgutil
import re
import aiosqlite
logger = logging.getLogger(__name__)
async def get_version(conn: aiosqlite.Connection) -> int:
"""Get current schema version from SQLite user_version pragma."""
cursor = await conn.execute("PRAGMA user_version")
row = await cursor.fetchone()
return row[0] if row else 0
async def set_version(conn: aiosqlite.Connection, version: int) -> None:
"""Set schema version using SQLite user_version pragma."""
await conn.execute(f"PRAGMA user_version = {version}")
async def run_migrations(conn: aiosqlite.Connection) -> int:
"""
Run all pending migrations.
Returns the number of migrations applied.
"""
version = await get_version(conn)
applied = 0
for module_info in sorted(pkgutil.iter_modules(__path__), key=lambda m: m.name):
match = re.match(r"_(\d+)_", module_info.name)
if not match:
continue
num = int(match.group(1))
if num <= version:
continue
logger.info("Applying migration %d: %s", num, module_info.name)
mod = importlib.import_module(f"{__name__}.{module_info.name}")
await mod.migrate(conn)
await set_version(conn, num)
applied += 1
if applied > 0:
logger.info(
"Applied %d migration(s), schema now at version %d", applied, await get_version(conn)
)
else:
logger.debug("Schema up to date at version %d", version)
return applied
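The version bookkeeping is plain SQLite and can be inspected directly; note that PRAGMA does not accept bound parameters, which is why set_version interpolates the integer (sketch, stdlib driver):
import sqlite3

conn = sqlite3.connect(":memory:")
assert conn.execute("PRAGMA user_version").fetchone()[0] == 0  # fresh and legacy DBs alike
conn.execute("PRAGMA user_version = 42")
assert conn.execute("PRAGMA user_version").fetchone()[0] == 42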
@@ -2,59 +2,218 @@ from typing import Literal
from pydantic import BaseModel, Field
from app.path_utils import normalize_contact_route, normalize_route_override
# Valid MeshCore contact types: 0=unknown, 1=client, 2=repeater, 3=room, 4=sensor.
# Corrupted radio data can produce values outside this range.
_VALID_CONTACT_TYPES = frozenset({0, 1, 2, 3, 4})
class ContactRoute(BaseModel):
"""A normalized contact route."""
path: str = Field(description="Hex-encoded path bytes (empty string for direct/flood)")
path_len: int = Field(description="Hop count (-1=flood, 0=direct, >0=explicit route)")
path_hash_mode: int = Field(
description="Path hash mode (-1=flood, 0=1-byte, 1=2-byte, 2=3-byte hop identifiers)"
)
class ContactUpsert(BaseModel):
"""Typed write contract for contacts persisted to SQLite."""
public_key: str = Field(description="Public key (64-char hex)")
name: str | None = None
type: int = 0
flags: int = 0
direct_path: str | None = None
direct_path_len: int | None = None
direct_path_hash_mode: int | None = None
direct_path_updated_at: int | None = None
route_override_path: str | None = None
route_override_len: int | None = None
route_override_hash_mode: int | None = None
last_advert: int | None = None
lat: float | None = None
lon: float | None = None
last_seen: int | None = None
on_radio: bool | None = None
last_contacted: int | None = None
first_seen: int | None = None
@classmethod
def from_contact(cls, contact: "Contact", **changes) -> "ContactUpsert":
return cls.model_validate(
{
**contact.model_dump(exclude={"last_read_at"}),
**changes,
}
)
@classmethod
def from_radio_dict(
cls, public_key: str, radio_data: dict, on_radio: bool = False
) -> "ContactUpsert":
"""Convert radio contact data to the contact-row write shape."""
direct_path, direct_path_len, direct_path_hash_mode = normalize_contact_route(
radio_data.get("out_path"),
radio_data.get("out_path_len", -1),
radio_data.get(
"out_path_hash_mode",
-1 if radio_data.get("out_path_len", -1) == -1 else 0,
),
)
# Clamp invalid contact types to 0 (unknown) — corrupted radio data
# can produce values like 111 or 240 that break downstream branching.
raw_type = radio_data.get("type", 0)
contact_type = raw_type if raw_type in _VALID_CONTACT_TYPES else 0
# Null out impossible coordinates — the contact is still ingested,
# but garbage lat/lon (e.g. 1953.7) is discarded rather than stored.
lat = radio_data.get("adv_lat")
lon = radio_data.get("adv_lon")
if lat is not None and not (-90 <= lat <= 90):
lat = None
if lon is not None and not (-180 <= lon <= 180):
lon = None
return cls(
public_key=public_key,
name=radio_data.get("adv_name"),
type=contact_type,
flags=radio_data.get("flags", 0),
direct_path=direct_path,
direct_path_len=direct_path_len,
direct_path_hash_mode=direct_path_hash_mode,
lat=lat,
lon=lon,
last_advert=radio_data.get("last_advert"),
on_radio=on_radio,
)
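Given the clamping above, a corrupted advert degrades to safe defaults instead of being rejected outright; an illustrative call (values invented, and it assumes normalize_contact_route tolerates a missing path, as the default arguments suggest):
upsert = ContactUpsert.from_radio_dict(
    public_key="ab" * 32,
    radio_data={"adv_name": "node", "type": 240, "adv_lat": 1953.7, "adv_lon": 12.5},
)
assert upsert.type == 0    # 240 is outside _VALID_CONTACT_TYPES, clamped to unknown
assert upsert.lat is None  # impossible latitude discarded
assert upsert.lon == 12.5  # plausible longitude kept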
class Contact(BaseModel):
public_key: str = Field(description="Public key (64-char hex)")
name: str | None = None
type: int = 0 # 0=unknown, 1=client, 2=repeater, 3=room
type: int = 0 # 0=unknown, 1=client, 2=repeater, 3=room, 4=sensor
flags: int = 0
last_path: str | None = None
last_path_len: int = -1
direct_path: str | None = None
direct_path_len: int = -1
direct_path_hash_mode: int = -1
direct_path_updated_at: int | None = None
route_override_path: str | None = None
route_override_len: int | None = None
route_override_hash_mode: int | None = None
last_advert: int | None = None
lat: float | None = None
lon: float | None = None
last_seen: int | None = None
on_radio: bool = False
favorite: bool = False
last_contacted: int | None = None # Last time we sent/received a message
last_read_at: int | None = None # Server-side read state tracking
first_seen: int | None = None
effective_route: ContactRoute | None = None
effective_route_source: Literal["override", "direct", "flood"] = "flood"
direct_route: ContactRoute | None = None
route_override: ContactRoute | None = None
def model_post_init(self, __context) -> None:
direct_path, direct_path_len, direct_path_hash_mode = normalize_contact_route(
self.direct_path,
self.direct_path_len,
self.direct_path_hash_mode,
)
self.direct_path = direct_path or None
self.direct_path_len = direct_path_len
self.direct_path_hash_mode = direct_path_hash_mode
route_override_path, route_override_len, route_override_hash_mode = (
normalize_route_override(
self.route_override_path,
self.route_override_len,
self.route_override_hash_mode,
)
)
self.route_override_path = route_override_path or None
self.route_override_len = route_override_len
self.route_override_hash_mode = route_override_hash_mode
if (
route_override_path is not None
and route_override_len is not None
and route_override_hash_mode is not None
):
self.route_override = ContactRoute(
path=route_override_path,
path_len=route_override_len,
path_hash_mode=route_override_hash_mode,
)
else:
self.route_override = None
if direct_path_len >= 0:
self.direct_route = ContactRoute(
path=direct_path,
path_len=direct_path_len,
path_hash_mode=direct_path_hash_mode,
)
else:
self.direct_route = None
path, path_len, path_hash_mode = self.effective_route_tuple()
if self.has_route_override():
self.effective_route_source = "override"
elif self.direct_route is not None:
self.effective_route_source = "direct"
else:
self.effective_route_source = "flood"
self.effective_route = ContactRoute(
path=path,
path_len=path_len,
path_hash_mode=path_hash_mode,
)
def has_route_override(self) -> bool:
return self.route_override_len is not None
def effective_route_tuple(self) -> tuple[str, int, int]:
if self.has_route_override():
return normalize_contact_route(
self.route_override_path,
self.route_override_len,
self.route_override_hash_mode,
)
if self.direct_path_len >= 0:
return normalize_contact_route(
self.direct_path,
self.direct_path_len,
self.direct_path_hash_mode,
)
return "", -1, -1
def to_radio_dict(self) -> dict:
"""Convert to the dict format expected by meshcore radio commands.
The radio API uses different field names (adv_name, out_path, etc.)
than our database schema (name, last_path, etc.).
than our database schema (name, direct_path, etc.).
"""
effective_path, effective_path_len, effective_path_hash_mode = self.effective_route_tuple()
return {
"public_key": self.public_key,
"adv_name": self.name or "",
"type": self.type,
"flags": self.flags,
"out_path": self.last_path or "",
"out_path_len": self.last_path_len,
"out_path": effective_path,
"out_path_len": effective_path_len,
"out_path_hash_mode": effective_path_hash_mode,
"adv_lat": self.lat if self.lat is not None else 0.0,
"adv_lon": self.lon if self.lon is not None else 0.0,
"last_advert": self.last_advert if self.last_advert is not None else 0,
}
@staticmethod
def from_radio_dict(public_key: str, radio_data: dict, on_radio: bool = False) -> dict:
"""Convert radio contact data to database format dict.
This is the inverse of to_radio_dict(), used when syncing contacts
from radio to database.
"""
return {
"public_key": public_key,
"name": radio_data.get("adv_name"),
"type": radio_data.get("type", 0),
"flags": radio_data.get("flags", 0),
"last_path": radio_data.get("out_path"),
"last_path_len": radio_data.get("out_path_len", -1),
"lat": radio_data.get("adv_lat"),
"lon": radio_data.get("adv_lon"),
"last_advert": radio_data.get("last_advert"),
"on_radio": on_radio,
}
def to_upsert(self, **changes) -> ContactUpsert:
"""Convert the stored contact to the repository's write contract."""
return ContactUpsert.from_contact(self, **changes)
class CreateContactRequest(BaseModel):
@@ -68,8 +227,108 @@ class CreateContactRequest(BaseModel):
)
class ContactRoutingOverrideRequest(BaseModel):
"""Request to set, force, or clear a contact routing override."""
route: str = Field(
description=(
"Blank clears the override, "
'"-1" forces flood, "0" forces direct, and explicit routes are '
"comma-separated 1/2/3-byte hop hex values"
)
)
# Contact type constants
CONTACT_TYPE_REPEATER = 2
CONTACT_TYPE_ROOM = 3
class ContactAdvertPath(BaseModel):
"""A unique advert path observed for a contact."""
path: str = Field(description="Hex-encoded routing path (empty string for direct)")
path_len: int = Field(description="Number of hops in the path")
next_hop: str | None = Field(
default=None,
description="First hop toward us as a full hop identifier, or null for direct",
)
first_seen: int = Field(description="Unix timestamp of first observation")
last_seen: int = Field(description="Unix timestamp of most recent observation")
heard_count: int = Field(description="Number of times this unique path was heard")
class ContactAdvertPathSummary(BaseModel):
"""Recent unique advertisement paths for a single contact."""
public_key: str = Field(description="Contact public key (64-char hex)")
paths: list[ContactAdvertPath] = Field(
default_factory=list, description="Most recent unique advert paths"
)
class ContactNameHistory(BaseModel):
"""A historical name used by a contact."""
name: str
first_seen: int
last_seen: int
class ContactActiveRoom(BaseModel):
"""A channel where a contact has been active."""
channel_key: str
channel_name: str
message_count: int
class NearestRepeater(BaseModel):
"""A repeater that has relayed a contact's advertisements."""
public_key: str
name: str | None = None
path_len: int
last_seen: int
heard_count: int
class ContactAnalyticsHourlyBucket(BaseModel):
"""A single hourly activity bucket for contact analytics."""
bucket_start: int = Field(description="Unix timestamp for the start of the hour bucket")
last_24h_count: int = 0
last_week_average: float = 0
all_time_average: float = 0
class ContactAnalyticsWeeklyBucket(BaseModel):
"""A single weekly activity bucket for contact analytics."""
bucket_start: int = Field(description="Unix timestamp for the start of the 7-day bucket")
message_count: int = 0
class ContactAnalytics(BaseModel):
"""Unified contact analytics payload for keyed and name-only lookups."""
lookup_type: Literal["contact", "name"]
name: str
contact: Contact | None = None
name_first_seen_at: int | None = None
name_history: list[ContactNameHistory] = Field(default_factory=list)
dm_message_count: int = 0
channel_message_count: int = 0
includes_direct_messages: bool = False
most_active_rooms: list[ContactActiveRoom] = Field(default_factory=list)
advert_paths: list[ContactAdvertPath] = Field(default_factory=list)
advert_frequency: float | None = Field(
default=None,
description="Advert observations per hour (includes multi-path arrivals of same advert)",
)
nearest_repeaters: list[NearestRepeater] = Field(default_factory=list)
hourly_activity: list[ContactAnalyticsHourlyBucket] = Field(default_factory=list)
weekly_activity: list[ContactAnalyticsWeeklyBucket] = Field(default_factory=list)
class Channel(BaseModel):
@@ -77,14 +336,70 @@ class Channel(BaseModel):
name: str
is_hashtag: bool = False
on_radio: bool = False
flood_scope_override: str | None = Field(
default=None,
description="Per-channel outbound flood scope override (null = use global app setting)",
)
path_hash_mode_override: int | None = Field(
default=None,
description="Per-channel path hash mode override (0=1-byte, 1=2-byte, 2=3-byte, null = use radio default)",
)
last_read_at: int | None = None # Server-side read state tracking
favorite: bool = False
class ChannelMessageCounts(BaseModel):
"""Time-windowed message counts for a channel."""
last_1h: int = 0
last_24h: int = 0
last_48h: int = 0
last_7d: int = 0
all_time: int = 0
class ChannelTopSender(BaseModel):
"""A top sender in a channel over the last 24 hours."""
sender_name: str
sender_key: str | None = None
message_count: int
class PathHashWidthStats(BaseModel):
"""Hop byte width distribution for parsed raw packets."""
total_packets: int = 0
single_byte: int = 0
double_byte: int = 0
triple_byte: int = 0
single_byte_pct: float = 0.0
double_byte_pct: float = 0.0
triple_byte_pct: float = 0.0
class ChannelDetail(BaseModel):
"""Comprehensive channel profile data."""
channel: Channel
message_counts: ChannelMessageCounts = Field(default_factory=ChannelMessageCounts)
first_message_at: int | None = None
unique_sender_count: int = 0
top_senders_24h: list[ChannelTopSender] = Field(default_factory=list)
path_hash_width_24h: PathHashWidthStats = Field(default_factory=PathHashWidthStats)
class MessagePath(BaseModel):
"""A single path that a message took to reach us."""
path: str = Field(description="Hex-encoded routing path (2 chars per hop)")
path: str = Field(description="Hex-encoded routing path")
received_at: int = Field(description="Unix timestamp when this path was received")
path_len: int | None = Field(
default=None,
description="Hop count. None = legacy (infer as len(path)//2, i.e. 1-byte hops)",
)
rssi: int | None = Field(default=None, description="Last-hop RSSI in dBm")
snr: float | None = Field(default=None, description="Last-hop SNR in dB")
class Message(BaseModel):
@@ -99,8 +414,27 @@ class Message(BaseModel):
)
txt_type: int = 0
signature: str | None = None
sender_key: str | None = None
outgoing: bool = False
acked: int = 0
sender_name: str | None = None
channel_name: str | None = None
packet_id: int | None = Field(
default=None,
description="Representative raw packet row ID when archival raw bytes exist",
)
class MessagesAroundResponse(BaseModel):
messages: list[Message]
has_older: bool
has_newer: bool
class ResendChannelMessageResponse(BaseModel):
status: str
message_id: int
message: Message | None = None
class RawPacketDecryptedInfo(BaseModel):
@@ -108,6 +442,8 @@ class RawPacketDecryptedInfo(BaseModel):
channel_name: str | None = None
sender: str | None = None
channel_key: str | None = None
contact_key: str | None = None
class RawPacketBroadcast(BaseModel):
@@ -118,6 +454,11 @@ class RawPacketBroadcast(BaseModel):
"""
id: int
observation_id: int = Field(
description=(
"Monotonic per-process ID for this RF observation (distinct from the DB packet row ID)"
)
)
timestamp: int
data: str = Field(description="Hex-encoded packet data")
payload_type: str = Field(description="Packet type name (e.g., GROUP_TEXT, ADVERT)")
@@ -127,6 +468,21 @@ class RawPacketBroadcast(BaseModel):
decrypted_info: RawPacketDecryptedInfo | None = None
class RawPacketDetail(BaseModel):
"""Stored raw-packet detail returned by the packet API."""
id: int
timestamp: int
data: str = Field(description="Hex-encoded packet data")
payload_type: str = Field(description="Packet type name (e.g. GROUP_TEXT, ADVERT)")
snr: float | None = Field(default=None, description="Signal-to-noise ratio in dB if available")
rssi: int | None = Field(
default=None, description="Received signal strength in dBm if available"
)
decrypted: bool = False
decrypted_info: RawPacketDecryptedInfo | None = None
class SendMessageRequest(BaseModel):
text: str = Field(min_length=1)
@@ -141,12 +497,100 @@ class SendChannelMessageRequest(SendMessageRequest):
channel_key: str = Field(description="Channel key (32-char hex)")
class TelemetryRequest(BaseModel):
class RepeaterLoginRequest(BaseModel):
"""Request to log in to a repeater."""
password: str = Field(
default="", description="Repeater password (empty string for no password)"
default="", description="Repeater password (empty string for guest login)"
)
class RepeaterLoginResponse(BaseModel):
"""Response from repeater login."""
status: str = Field(description="Login result status")
authenticated: bool = Field(description="Whether repeater authentication was confirmed")
message: str | None = Field(
default=None,
description="Optional warning or error message when authentication was not confirmed",
)
class RepeaterStatusResponse(BaseModel):
"""Status telemetry from a repeater (single attempt, no retries)."""
battery_volts: float = Field(description="Battery voltage in volts")
tx_queue_len: int = Field(description="Transmit queue length")
noise_floor_dbm: int = Field(description="Noise floor in dBm")
last_rssi_dbm: int = Field(description="Last RSSI in dBm")
last_snr_db: float = Field(description="Last SNR in dB")
packets_received: int = Field(description="Total packets received")
packets_sent: int = Field(description="Total packets sent")
airtime_seconds: int = Field(description="TX airtime in seconds")
rx_airtime_seconds: int = Field(description="RX airtime in seconds")
uptime_seconds: int = Field(description="Uptime in seconds")
sent_flood: int = Field(description="Flood packets sent")
sent_direct: int = Field(description="Direct packets sent")
recv_flood: int = Field(description="Flood packets received")
recv_direct: int = Field(description="Direct packets received")
flood_dups: int = Field(description="Duplicate flood packets")
direct_dups: int = Field(description="Duplicate direct packets")
full_events: int = Field(description="Full event queue count")
telemetry_history: list["TelemetryHistoryEntry"] = Field(
default_factory=list, description="Recent telemetry history snapshots"
)
class RepeaterNodeInfoResponse(BaseModel):
"""Identity/location info from a repeater (small CLI batch)."""
name: str | None = Field(default=None, description="Repeater name")
lat: str | None = Field(default=None, description="Latitude")
lon: str | None = Field(default=None, description="Longitude")
clock_utc: str | None = Field(default=None, description="Repeater clock in UTC")
class RepeaterRadioSettingsResponse(BaseModel):
"""Radio settings from a repeater (radio/config CLI batch)."""
firmware_version: str | None = Field(default=None, description="Firmware version string")
radio: str | None = Field(default=None, description="Radio settings (freq,bw,sf,cr)")
tx_power: str | None = Field(default=None, description="TX power in dBm")
airtime_factor: str | None = Field(default=None, description="Airtime factor")
repeat_enabled: str | None = Field(default=None, description="Repeat mode enabled")
flood_max: str | None = Field(default=None, description="Max flood hops")
class RepeaterAdvertIntervalsResponse(BaseModel):
"""Advertisement intervals from a repeater."""
advert_interval: str | None = Field(default=None, description="Local advert interval")
flood_advert_interval: str | None = Field(default=None, description="Flood advert interval")
class RepeaterOwnerInfoResponse(BaseModel):
"""Owner info and guest password from a repeater."""
owner_info: str | None = Field(default=None, description="Owner info string")
guest_password: str | None = Field(default=None, description="Guest password")
class LppSensor(BaseModel):
"""A single CayenneLPP sensor reading from req_telemetry_sync."""
channel: int = Field(description="LPP channel number")
type_name: str = Field(description="Sensor type name (e.g. temperature, humidity)")
value: float | dict = Field(
description="Scalar value or dict for multi-value sensors (GPS, accel)"
)
class RepeaterLppTelemetryResponse(BaseModel):
"""CayenneLPP sensor telemetry from a repeater."""
sensors: list[LppSensor] = Field(default_factory=list, description="List of sensor readings")
class NeighborInfo(BaseModel):
"""Information about a neighbor seen by a repeater."""
@@ -167,34 +611,18 @@ class AclEntry(BaseModel):
permission_name: str = Field(description="Human-readable permission name")
class TelemetryResponse(BaseModel):
"""Telemetry data from a repeater, formatted for human readability."""
class RepeaterNeighborsResponse(BaseModel):
"""Neighbors list from a repeater."""
pubkey_prefix: str = Field(description="12-char public key prefix")
battery_volts: float = Field(description="Battery voltage in volts")
tx_queue_len: int = Field(description="Transmit queue length")
noise_floor_dbm: int = Field(description="Noise floor in dBm")
last_rssi_dbm: int = Field(description="Last RSSI in dBm")
last_snr_db: float = Field(description="Last SNR in dB")
packets_received: int = Field(description="Total packets received")
packets_sent: int = Field(description="Total packets sent")
airtime_seconds: int = Field(description="TX airtime in seconds")
rx_airtime_seconds: int = Field(description="RX airtime in seconds")
uptime_seconds: int = Field(description="Uptime in seconds")
sent_flood: int = Field(description="Flood packets sent")
sent_direct: int = Field(description="Direct packets sent")
recv_flood: int = Field(description="Flood packets received")
recv_direct: int = Field(description="Direct packets received")
flood_dups: int = Field(description="Duplicate flood packets")
direct_dups: int = Field(description="Duplicate direct packets")
full_events: int = Field(description="Full event queue count")
neighbors: list[NeighborInfo] = Field(
default_factory=list, description="List of neighbors seen by repeater"
)
class RepeaterAclResponse(BaseModel):
"""ACL list from a repeater."""
acl: list[AclEntry] = Field(default_factory=list, description="Access control list")
clock_output: str | None = Field(
default=None, description="Output from 'clock' command (or error message)"
)
class TraceResponse(BaseModel):
@@ -209,6 +637,83 @@ class TraceResponse(BaseModel):
path_len: int = Field(description="Number of hops in the trace path")
class RadioTraceHopRequest(BaseModel):
"""One requested hop in a radio trace path."""
public_key: str | None = Field(
default=None,
description="Full repeater public key when this hop maps to a known repeater",
)
hop_hex: str | None = Field(
default=None,
description="Raw hop hash hex when using a custom repeater prefix",
)
class RadioTraceRequest(BaseModel):
"""Ordered trace path for a radio trace loop."""
hop_hash_bytes: Literal[1, 2, 4] = Field(
default=4,
description="Hash width in bytes for every hop in this trace path",
)
hops: list[RadioTraceHopRequest] = Field(
min_length=1,
description="Ordered repeater hops, using either known repeater keys or custom hop hex",
)
class RadioTraceNode(BaseModel):
"""One resolved node in a radio trace result."""
role: Literal["repeater", "custom", "local"] = Field(description="Node role in the trace")
public_key: str | None = Field(
default=None,
description="Resolved full public key for this node when known",
)
name: str | None = Field(default=None, description="Display name for this node when known")
observed_hash: str | None = Field(
default=None,
description="Observed 4-byte trace hash for this node as hex",
)
snr: float | None = Field(default=None, description="Reported SNR for this node in dB")
class RadioTraceResponse(BaseModel):
"""Resolved multi-hop radio trace result."""
path_len: int = Field(description="Number of hashed nodes returned by the trace response")
timeout_seconds: float = Field(description="Timeout window used while waiting for the trace")
nodes: list[RadioTraceNode] = Field(
default_factory=list,
description="Ordered trace nodes: repeater hops followed by the terminal local radio",
)
class PathDiscoveryRoute(BaseModel):
"""One resolved route returned by contact path discovery."""
path: str = Field(description="Hex-encoded path bytes")
path_len: int = Field(description="Hop count for this route")
path_hash_mode: int = Field(
description="Path hash mode (0=1-byte, 1=2-byte, 2=3-byte hop identifiers)"
)
class PathDiscoveryResponse(BaseModel):
"""Round-trip routing data for a contact path discovery request."""
contact: Contact = Field(
description="Updated contact row after saving the learned forward path"
)
forward_path: PathDiscoveryRoute = Field(
description="Route used from the local radio to the target contact"
)
return_path: PathDiscoveryRoute = Field(
description="Route used from the target contact back to the local radio"
)
class CommandRequest(BaseModel):
"""Request to send a CLI command to a repeater."""
@@ -225,20 +730,50 @@ class CommandResponse(BaseModel):
)
class Favorite(BaseModel):
"""A favorite conversation."""
class RadioDiscoveryRequest(BaseModel):
"""Request to discover nearby mesh nodes from the local radio."""
type: Literal["channel", "contact"] = Field(description="'channel' or 'contact'")
id: str = Field(description="Channel key or contact public key")
target: Literal["repeaters", "sensors", "all"] = Field(
default="all",
description="Which node classes to discover over the mesh",
)
class BotConfig(BaseModel):
"""Configuration for a single bot."""
class RadioDiscoveryResult(BaseModel):
"""One mesh node heard during a discovery sweep."""
id: str = Field(description="UUID for stable identity across renames/reorders")
name: str = Field(description="User-editable name")
enabled: bool = Field(default=False, description="Whether this bot is enabled")
code: str = Field(default="", description="Python code for this bot")
public_key: str = Field(description="Discovered node public key as hex")
name: str | None = Field(
default=None,
description="Known name for this node from contacts DB, if any",
)
node_type: Literal["repeater", "sensor"] = Field(description="Discovered node class")
heard_count: int = Field(default=1, description="How many responses were heard from this node")
local_snr: float | None = Field(
default=None,
description="SNR at which the local radio heard the response (dB)",
)
local_rssi: int | None = Field(
default=None,
description="RSSI at which the local radio heard the response (dBm)",
)
remote_snr: float | None = Field(
default=None,
description="SNR reported by the remote node while hearing our discovery request (dB)",
)
class RadioDiscoveryResponse(BaseModel):
"""Response payload for a mesh discovery sweep."""
target: Literal["repeaters", "sensors", "all"] = Field(
description="Which node classes were requested"
)
duration_seconds: float = Field(description="How long the sweep listened for responses")
results: list[RadioDiscoveryResult] = Field(
default_factory=list,
description="Deduplicated discovery responses heard during the sweep",
)
class UnreadCounts(BaseModel):
@@ -253,6 +788,9 @@ class UnreadCounts(BaseModel):
last_message_times: dict[str, int] = Field(
default_factory=dict, description="Map of stateKey -> last message timestamp"
)
last_read_ats: dict[str, int | None] = Field(
default_factory=dict, description="Map of stateKey -> server-side last_read_at boundary"
)
class AppSettings(BaseModel):
@@ -261,36 +799,18 @@ class AppSettings(BaseModel):
max_radio_contacts: int = Field(
default=200,
description=(
"Maximum contacts to keep on radio for DM ACKs "
"(favorite contacts first, then recent non-repeaters)"
"Configured radio contact capacity used for maintenance thresholds; "
"favorites reload first, then background fill targets about 80% of this value"
),
)
experimental_channel_double_send: bool = Field(
default=False,
description=(
"Experimental: when enabled, channel messages are sent twice with a 3-second delay, "
"reusing the same timestamp bytes"
),
)
favorites: list[Favorite] = Field(
default_factory=list, description="List of favorited conversations"
)
auto_decrypt_dm_on_advert: bool = Field(
default=False,
default=True,
description="Whether to attempt historical DM decryption on new contact advertisement",
)
sidebar_sort_order: Literal["recent", "alpha"] = Field(
default="recent",
description="Sidebar sort order: 'recent' or 'alpha'",
)
last_message_times: dict[str, int] = Field(
default_factory=dict,
description="Map of conversation state keys to last message timestamps",
)
preferences_migrated: bool = Field(
default=False,
description="Whether preferences have been migrated from localStorage",
)
advert_interval: int = Field(
default=0,
description="Periodic advertisement interval in seconds (0 = disabled)",
@@ -299,7 +819,91 @@ class AppSettings(BaseModel):
default=0,
description="Unix timestamp of last advertisement sent (0 = never)",
)
bots: list[BotConfig] = Field(
default_factory=list,
description="List of bot configurations",
flood_scope: str = Field(
default="",
description="Outbound flood scope / region name (empty = disabled, no tagging)",
)
blocked_keys: list[str] = Field(
default_factory=list,
description="Public keys whose messages are hidden from the UI",
)
blocked_names: list[str] = Field(
default_factory=list,
description="Display names whose messages are hidden from the UI",
)
discovery_blocked_types: list[int] = Field(
default_factory=list,
description=(
"Contact type codes (1=Client, 2=Repeater, 3=Room, 4=Sensor) whose "
"advertisements should not create new contacts; existing contacts are still updated"
),
)
tracked_telemetry_repeaters: list[str] = Field(
default_factory=list,
description="Public keys of repeaters opted into periodic telemetry collection (max 8)",
)
auto_resend_channel: bool = Field(
default=False,
description=(
"When enabled, outgoing channel messages that receive no echo within 2 seconds "
"are automatically byte-perfect resent once (within the 30-second dedup window)"
),
)
class BusyChannel(BaseModel):
channel_key: str
channel_name: str
message_count: int
class ContactActivityCounts(BaseModel):
last_hour: int
last_24_hours: int
last_week: int
class NoiseFloorSample(BaseModel):
timestamp: int = Field(description="Unix timestamp of the sampled reading")
noise_floor_dbm: int = Field(description="Noise floor in dBm")
class NoiseFloorHistoryStats(BaseModel):
sample_interval_seconds: int = Field(description="Expected spacing between samples")
coverage_seconds: int = Field(description="How much of the last 24 hours is represented")
latest_noise_floor_dbm: int | None = Field(
default=None, description="Most recent sampled noise floor in dBm"
)
latest_timestamp: int | None = Field(
default=None, description="Unix timestamp of the most recent sample"
)
samples: list[NoiseFloorSample] = Field(default_factory=list)
class PacketsPerHourBucket(BaseModel):
timestamp: int = Field(description="Unix timestamp at the start of the hour")
count: int = Field(description="Number of packets received in that hour")
class StatisticsResponse(BaseModel):
busiest_channels_24h: list[BusyChannel]
contact_count: int
repeater_count: int
channel_count: int
total_packets: int
decrypted_packets: int
undecrypted_packets: int
total_dms: int
total_channel_messages: int
total_outgoing: int
contacts_heard: ContactActivityCounts
repeaters_heard: ContactActivityCounts
known_channels_active: ContactActivityCounts
path_hash_width_24h: PathHashWidthStats
packets_per_hour_72h: list[PacketsPerHourBucket]
noise_floor_24h: NoiseFloorHistoryStats
class TelemetryHistoryEntry(BaseModel):
timestamp: int
data: dict
@@ -15,6 +15,7 @@ are offloaded from the radio to the server.
import asyncio
import logging
import time
from itertools import count
from app.decoder import (
DecryptedDirectMessage,
@@ -25,83 +26,38 @@ from app.decoder import (
parse_packet,
try_decrypt_dm,
try_decrypt_packet_with_channel_key,
try_decrypt_path,
)
from app.keystore import get_private_key, get_public_key, has_private_key
from app.models import CONTACT_TYPE_REPEATER, RawPacketBroadcast, RawPacketDecryptedInfo
from app.models import (
Contact,
ContactUpsert,
RawPacketBroadcast,
RawPacketDecryptedInfo,
)
from app.repository import (
ChannelRepository,
ContactAdvertPathRepository,
ContactRepository,
MessageRepository,
RawPacketRepository,
)
from app.services.contact_reconciliation import (
promote_prefix_contacts_for_contact,
record_contact_name_and_reconcile,
)
from app.services.dm_ack_apply import apply_dm_ack_code
from app.services.messages import (
create_dm_message_from_decrypted as _create_dm_message_from_decrypted,
)
from app.services.messages import (
create_message_from_decrypted as _create_message_from_decrypted,
)
from app.websocket import broadcast_error, broadcast_event
logger = logging.getLogger(__name__)
async def _handle_duplicate_message(
packet_id: int,
msg_type: str,
conversation_key: str,
text: str,
sender_timestamp: int,
path: str | None,
received: int,
) -> None:
"""Handle a duplicate message by updating paths/acks on the existing record.
Called when MessageRepository.create returns None (INSERT OR IGNORE hit a duplicate).
Looks up the existing message, adds the new path, increments ack count for outgoing
messages, and broadcasts the update to clients.
"""
existing_msg = await MessageRepository.get_by_content(
msg_type=msg_type,
conversation_key=conversation_key,
text=text,
sender_timestamp=sender_timestamp,
)
if not existing_msg:
label = "message" if msg_type == "CHAN" else "DM"
logger.warning(
"Duplicate %s for %s but couldn't find existing",
label,
conversation_key[:12],
)
return
logger.debug(
"Duplicate %s for %s (msg_id=%d, outgoing=%s) - adding path",
msg_type,
conversation_key[:12],
existing_msg.id,
existing_msg.outgoing,
)
# Add path if provided
if path is not None:
paths = await MessageRepository.add_path(existing_msg.id, path, received)
else:
# Get current paths for broadcast
paths = existing_msg.paths or []
# Increment ack count for outgoing messages (echo confirmation)
if existing_msg.outgoing:
ack_count = await MessageRepository.increment_ack_count(existing_msg.id)
else:
ack_count = await MessageRepository.get_ack_count(existing_msg.id)
# Broadcast updated paths
broadcast_event(
"message_acked",
{
"message_id": existing_msg.id,
"ack_count": ack_count,
"paths": [p.model_dump() for p in paths] if paths else [],
},
)
# Mark this packet as decrypted
await RawPacketRepository.mark_decrypted(packet_id, existing_msg.id)
_raw_observation_counter = count(1)
async def create_message_from_decrypted(
@@ -112,102 +68,29 @@ async def create_message_from_decrypted(
timestamp: int,
received_at: int | None = None,
path: str | None = None,
path_len: int | None = None,
rssi: int | None = None,
snr: float | None = None,
channel_name: str | None = None,
trigger_bot: bool = True,
realtime: bool = True,
) -> int | None:
"""Create a message record from decrypted channel packet content.
This is the shared logic for storing decrypted channel messages,
used by both real-time packet processing and historical decryption.
Args:
packet_id: ID of the raw packet being processed
channel_key: Hex string channel key
channel_name: Channel name (e.g. "#general"), for bot context
sender: Sender name (will be prefixed to message) or None
message_text: The decrypted message content
timestamp: Sender timestamp from the packet
received_at: When the packet was received (defaults to now)
path: Hex-encoded routing path
trigger_bot: Whether to trigger bot response (False for historical decryption)
Returns the message ID if created, None if duplicate.
"""
received = received_at or int(time.time())
# Format the message text with sender prefix if present
text = f"{sender}: {message_text}" if sender else message_text
# Normalize channel key to uppercase for consistency
channel_key_normalized = channel_key.upper()
# Try to create message - INSERT OR IGNORE handles duplicates atomically
msg_id = await MessageRepository.create(
msg_type="CHAN",
text=text,
conversation_key=channel_key_normalized,
sender_timestamp=timestamp,
received_at=received,
"""Store a decrypted channel message via the shared message service."""
return await _create_message_from_decrypted(
packet_id=packet_id,
channel_key=channel_key,
sender=sender,
message_text=message_text,
timestamp=timestamp,
received_at=received_at,
path=path,
path_len=path_len,
rssi=rssi,
snr=snr,
channel_name=channel_name,
realtime=realtime,
broadcast_fn=broadcast_event,
)
if msg_id is None:
# Duplicate message detected - this happens when:
# 1. Our own outgoing message echoes back (flood routing)
# 2. Same message arrives via multiple paths before first is committed
# In either case, add the path to the existing message.
await _handle_duplicate_message(
packet_id, "CHAN", channel_key_normalized, text, timestamp, path, received
)
return None
logger.info("Stored channel message %d for channel %s", msg_id, channel_key_normalized[:8])
# Mark the raw packet as decrypted
await RawPacketRepository.mark_decrypted(packet_id, msg_id)
# Build paths array for broadcast
# Use "is not None" to include empty string (direct/0-hop messages)
paths = [{"path": path or "", "received_at": received}] if path is not None else None
# Broadcast new message to connected clients
broadcast_event(
"message",
{
"id": msg_id,
"type": "CHAN",
"conversation_key": channel_key_normalized,
"text": text,
"sender_timestamp": timestamp,
"received_at": received,
"paths": paths,
"txt_type": 0,
"signature": None,
"outgoing": False,
"acked": 0,
},
)
# Run bot if enabled (for incoming channel messages, not historical decryption)
if trigger_bot:
from app.bot import run_bot_for_message
asyncio.create_task(
run_bot_for_message(
sender_name=sender,
sender_key=None, # Channel messages don't have a sender public key
message_text=message_text,
is_dm=False,
channel_key=channel_key_normalized,
channel_name=channel_name,
sender_timestamp=timestamp,
path=path,
is_outgoing=False,
)
)
return msg_id
async def create_dm_message_from_decrypted(
packet_id: int,
@@ -216,124 +99,28 @@ async def create_dm_message_from_decrypted(
our_public_key: str | None,
received_at: int | None = None,
path: str | None = None,
path_len: int | None = None,
rssi: int | None = None,
snr: float | None = None,
outgoing: bool = False,
trigger_bot: bool = True,
realtime: bool = True,
) -> int | None:
"""Create a message record from decrypted direct message packet content.
This is the shared logic for storing decrypted direct messages,
used by both real-time packet processing and historical decryption.
Args:
packet_id: ID of the raw packet being processed
decrypted: DecryptedDirectMessage from decoder
their_public_key: The contact's full 64-char public key (conversation_key)
our_public_key: Our public key (to determine direction), or None
received_at: When the packet was received (defaults to now)
path: Hex-encoded routing path
outgoing: Whether this is an outgoing message (we sent it)
trigger_bot: Whether to trigger bot response (False for historical decryption)
Returns the message ID if created, None if duplicate.
"""
# Check if sender is a repeater - repeaters only send CLI responses, not chat messages.
# CLI responses are handled by the command endpoint, not stored in chat history.
contact = await ContactRepository.get_by_key(their_public_key)
if contact and contact.type == CONTACT_TYPE_REPEATER:
logger.debug(
"Skipping message from repeater %s (CLI responses not stored): %s",
their_public_key[:12],
(decrypted.message or "")[:50],
)
return None
received = received_at or int(time.time())
# conversation_key is always the other party's public key
conversation_key = their_public_key.lower()
# Try to create message - INSERT OR IGNORE handles duplicates atomically
msg_id = await MessageRepository.create(
msg_type="PRIV",
text=decrypted.message,
conversation_key=conversation_key,
sender_timestamp=decrypted.timestamp,
received_at=received,
"""Store a decrypted direct message via the shared message service."""
return await _create_dm_message_from_decrypted(
packet_id=packet_id,
decrypted=decrypted,
their_public_key=their_public_key,
our_public_key=our_public_key,
received_at=received_at,
path=path,
path_len=path_len,
rssi=rssi,
snr=snr,
outgoing=outgoing,
realtime=realtime,
broadcast_fn=broadcast_event,
)
async def run_historical_dm_decryption(
private_key_bytes: bytes,
@@ -344,25 +131,28 @@ async def run_historical_dm_decryption(
"""Background task to decrypt historical DM packets with contact's key."""
from app.websocket import broadcast_success
total = 0
decrypted_count = 0
logger.info("Starting historical DM decryption scan for undecrypted TEXT_MESSAGE packets")
# Derive our public key from the private key
our_public_key_bytes = derive_public_key(private_key_bytes)
async for (
packet_id,
packet_data,
packet_timestamp,
) in RawPacketRepository.stream_undecrypted_text_messages():
total += 1
# Note: passing our_public_key=None disables the outbound hash check in
# try_decrypt_dm (only the inbound check src_hash == their_first_byte runs).
# For the 255/256 case where our first byte differs from the contact's,
# outgoing packets fail the inbound check and are skipped — which is correct
# since outgoing DMs are stored directly by the send endpoint.
# For the 1/256 case where bytes match, an outgoing packet may decrypt
# successfully, but the dual-hash direction check below correctly identifies
# it and the DB dedup constraint prevents a duplicate insert.
result = try_decrypt_dm(
packet_data,
private_key_bytes,
@@ -371,14 +161,25 @@ async def run_historical_dm_decryption(
)
if result is not None:
# Determine direction using both hashes (mirrors _process_direct_message
# logic at lines 806-818) to handle the 1/256 case where our first
# public key byte matches the contact's.
src_hash = result.src_hash.lower()
dest_hash = result.dest_hash.lower()
our_first_byte = format(our_public_key_bytes[0], "02x").lower()
if src_hash == our_first_byte and dest_hash != our_first_byte:
outgoing = True
else:
# Incoming, ambiguous (both match), or neither matches.
# Default to incoming — outgoing DMs are stored by the send
# endpoint, so historical decryption only recovers incoming.
outgoing = False
# Extract path from the raw packet for storage
packet_info = parse_packet(packet_data)
path_hex = packet_info.path.hex() if packet_info else None
path_len = packet_info.path_length if packet_info else None
msg_id = await create_dm_message_from_decrypted(
packet_id=packet_id,
@@ -387,13 +188,18 @@ async def run_historical_dm_decryption(
our_public_key=our_public_key_bytes.hex(),
received_at=packet_timestamp,
path=path_hex,
path_len=path_len,
outgoing=outgoing,
trigger_bot=False, # Historical decryption should not trigger bot
realtime=False, # Historical decryption should not trigger fanout
)
if msg_id is not None:
decrypted_count += 1
if total == 0:
logger.info("No undecrypted TEXT_MESSAGE packets to process")
return
logger.info(
"Historical DM decryption complete: %d/%d packets decrypted",
decrypted_count,
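# --- Illustrative sketch (not part of the diff): the 1/256 ambiguity the
# comments above describe. Src/dest "hashes" on the wire are modeled here,
# as in the code, as the first public-key byte in two lowercase hex digits.
def _first_byte_hex(public_key: bytes) -> str:
    return format(public_key[0], "02x")

ours = bytes([0xAB] + [0x00] * 31)
theirs = bytes([0xAB] + [0xFF] * 31)  # ~1/256 of contact pairs share a first byte
# With equal first bytes, both directions produce identical one-byte hashes,
# so hash comparison alone cannot tell an outgoing packet from an incoming
# one; the code defaults to incoming and relies on the DB dedup constraint
# and the outgoing-echo check to resolve the collision.
assert _first_byte_hex(ours) == _first_byte_hex(theirs) == "ab"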
@@ -471,11 +277,13 @@ async def process_raw_packet(
This is the main entry point for all incoming RF packets.
Note: Packets are deduplicated by payload hash in the database. If we receive
a duplicate payload (the same payload heard via a different path), we still broadcast it to
the frontend for realtime packet-feed fidelity. Some payload types are also
intentionally reprocessed on duplicate arrival so message-level dedup/path
merge logic and advert/path-history tracking still see each observation.
"""
ts = timestamp or int(time.time())
observation_id = next(_raw_observation_counter)
packet_id, is_new_packet = await RawPacketRepository.create(raw_bytes, ts)
raw_hex = raw_bytes.hex()
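# --- Illustrative sketch (assumption — RawPacketRepository's internals are
# not shown in this diff): one way create() can yield the (packet_id,
# is_new_packet) pair used above is a UNIQUE index over a payload hash plus
# INSERT OR IGNORE. Per the docstring, dedup keys on the payload, not the
# whole frame, so the same message heard via a different path still counts
# as a duplicate.
import hashlib
import sqlite3

_conn = sqlite3.connect(":memory:")
_conn.execute(
    "CREATE TABLE raw_packets (id INTEGER PRIMARY KEY,"
    " payload_hash TEXT UNIQUE, data BLOB, first_seen INTEGER)"
)

def _create(payload: bytes, ts: int) -> tuple[int, bool]:
    digest = hashlib.sha256(payload).hexdigest()
    cur = _conn.execute(
        "INSERT OR IGNORE INTO raw_packets (payload_hash, data, first_seen)"
        " VALUES (?, ?, ?)",
        (digest, payload, ts),
    )
    if cur.rowcount == 1:  # inserted: first observation of this payload
        return cur.lastrowid, True
    row = _conn.execute(
        "SELECT id FROM raw_packets WHERE payload_hash = ?", (digest,)
    ).fetchone()
    return row[0], False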
@@ -485,6 +293,13 @@ async def process_raw_packet(
payload_type = packet_info.payload_type if packet_info else None
payload_type_name = payload_type.name if payload_type else "Unknown"
if packet_info is None and len(raw_bytes) > 2:
logger.warning(
"Failed to parse %d-byte packet (id=%d); stored undecrypted",
len(raw_bytes),
packet_id,
)
# Log packet arrival at debug level
path_hex = packet_info.path.hex() if packet_info and packet_info.path else ""
logger.debug(
@@ -513,24 +328,33 @@ async def process_raw_packet(
# deduplication in create_message_from_decrypted handles adding paths to existing messages.
# This is more reliable than trying to look up the message via raw packet linking.
if payload_type == PayloadType.GROUP_TEXT:
decrypt_result = await _process_group_text(
raw_bytes, packet_id, ts, packet_info, rssi=rssi, snr=snr
)
if decrypt_result:
result.update(decrypt_result)
elif payload_type == PayloadType.ADVERT:
# Process all advert arrivals (even payload-hash duplicates) so the
# advert-history table retains recent path observations.
await _process_advertisement(raw_bytes, ts, packet_info)
elif payload_type == PayloadType.TEXT_MESSAGE:
# Try to decrypt direct messages using stored private key and known contacts
decrypt_result = await _process_direct_message(
raw_bytes, packet_id, ts, packet_info, rssi=rssi, snr=snr
)
if decrypt_result:
result.update(decrypt_result)
elif payload_type == PayloadType.PATH:
await _process_path_packet(raw_bytes, ts, packet_info)
# Always broadcast raw packet for the packet feed UI (even duplicates)
# This enables the frontend cracker to see all incoming packets in real-time
broadcast_payload = RawPacketBroadcast(
id=packet_id,
observation_id=observation_id,
timestamp=ts,
data=raw_hex,
payload_type=payload_type_name,
@@ -540,6 +364,8 @@ async def process_raw_packet(
decrypted_info=RawPacketDecryptedInfo(
channel_name=result["channel_name"],
sender=result["sender"],
channel_key=result.get("channel_key"),
contact_key=result.get("contact_key"),
)
if result["decrypted"]
else None,
@@ -554,6 +380,8 @@ async def _process_group_text(
packet_id: int,
timestamp: int,
packet_info: PacketInfo | None,
rssi: int | None = None,
snr: float | None = None,
) -> dict | None:
"""
Process a GroupText (channel message) packet.
@@ -589,6 +417,9 @@ async def _process_group_text(
timestamp=decrypted.timestamp,
received_at=timestamp,
path=packet_info.path.hex() if packet_info else None,
path_len=packet_info.path_length if packet_info else None,
rssi=rssi,
snr=snr,
)
return {
@@ -596,6 +427,7 @@ async def _process_group_text(
"channel_name": channel.name,
"sender": decrypted.sender,
"message_id": msg_id, # None if duplicate, msg_id if new
"channel_key": channel.key,
}
# Couldn't decrypt with any known key
@@ -611,7 +443,6 @@ async def _process_advertisement(
Process an advertisement packet.
Extracts contact info and updates the database/broadcasts to clients.
"""
# Parse packet to get path info if not already provided
if packet_info is None:
@@ -620,101 +451,101 @@ async def _process_advertisement(
logger.debug("Failed to parse advertisement packet")
return
advert = parse_advertisement(packet_info.payload, raw_packet=raw_bytes)
if not advert:
logger.debug("Failed to parse advertisement payload")
return
# Extract path info from packet
new_path_len = packet_info.path_length
new_path_hex = packet_info.path.hex() if packet_info.path else ""
# Try to find existing contact
existing = await ContactRepository.get_by_key(advert.public_key.lower())
logger.debug(
"Parsed advertisement from %s: %s (role=%d, lat=%s, lon=%s, path_len=%d)",
"Parsed advertisement from %s: %s (role=%d, lat=%s, lon=%s, advert_path_len=%d)",
advert.public_key[:12],
advert.name,
advert.device_role,
advert.lat,
advert.lon,
new_path_len,
)
# Use device_role from advertisement for contact type (1=Chat, 2=Repeater, 3=Room, 4=Sensor).
# Persist advert freshness fields using the server receive wall clock so
# route selection is not affected by sender clock skew.
contact_type = (
advert.device_role if advert.device_role > 0 else (existing.type if existing else 0)
)
# Check discovery_blocked_types: skip new contacts whose type is blocked.
# Existing contacts are always updated (location, name, last_seen, etc.).
if existing is None and contact_type > 0:
from app.repository import AppSettingsRepository
settings = await AppSettingsRepository.get()
if contact_type in settings.discovery_blocked_types:
logger.debug(
"Skipping new contact %s: type %d is in discovery_blocked_types",
advert.public_key[:12],
contact_type,
)
return
contact_upsert = ContactUpsert(
public_key=advert.public_key.lower(),
name=advert.name,
type=contact_type,
lat=advert.lat,
lon=advert.lon,
last_advert=timestamp,
last_seen=timestamp,
first_seen=timestamp, # COALESCE in upsert preserves existing value
)
# Upsert the contact BEFORE recording advert paths so the parent row
# exists when foreign key enforcement is enabled.
await ContactRepository.upsert(contact_upsert)
# Keep recent unique advert paths for all contacts.
await ContactAdvertPathRepository.record_observation(
public_key=advert.public_key.lower(),
path_hex=new_path_hex,
timestamp=timestamp,
max_paths=10,
hop_count=new_path_len,
)
promoted_keys = await promote_prefix_contacts_for_contact(
public_key=advert.public_key,
log=logger,
)
await record_contact_name_and_reconcile(
public_key=advert.public_key,
contact_name=advert.name,
timestamp=timestamp,
log=logger,
)
# Read back from DB so the broadcast includes all fields (last_contacted,
# last_read_at, flags, on_radio, etc.) matching the REST Contact shape exactly.
db_contact = await ContactRepository.get_by_key(advert.public_key.lower())
if db_contact:
broadcast_event("contact", db_contact.model_dump())
for old_key in promoted_keys:
broadcast_event(
"contact_resolved",
{
"previous_public_key": old_key,
"contact": db_contact.model_dump(),
},
)
else:
broadcast_event(
"contact",
Contact(**contact_upsert.model_dump(exclude_none=True)).model_dump(),
)
# For new contacts, optionally attempt to decrypt any historical DMs we may have stored
# This is controlled by the auto_decrypt_dm_on_advert setting
if existing is None:
@@ -724,20 +555,14 @@ async def _process_advertisement(
if settings.auto_decrypt_dm_on_advert:
await start_historical_dm_decryption(None, advert.public_key.lower(), advert.name)
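# --- Illustrative sketch (assumption — ContactAdvertPathRepository is not
# shown in this diff): record_observation(..., max_paths=10) above implies a
# bounded, recency-ordered set of unique advert paths per contact. An
# in-memory equivalent of that eviction policy:
from collections import OrderedDict

def _record_observation(history: OrderedDict[str, int], path_hex: str,
                        timestamp: int, max_paths: int = 10) -> None:
    # Re-observing a known path refreshes its recency instead of duplicating
    # it; once the cap is exceeded, the oldest observation is evicted.
    history.pop(path_hex, None)
    history[path_hex] = timestamp
    while len(history) > max_paths:
        history.popitem(last=False)

_h: OrderedDict[str, int] = OrderedDict()
for i, p in enumerate(["", "a1", "a1b2", "a1"]):
    _record_observation(_h, p, 1000 + i, max_paths=2)
assert list(_h) == ["a1b2", "a1"]  # "" evicted, "a1" refreshed to newest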
async def _process_direct_message(
raw_bytes: bytes,
packet_id: int,
timestamp: int,
packet_info: PacketInfo | None,
rssi: int | None = None,
snr: float | None = None,
) -> dict | None:
"""
Process a TEXT_MESSAGE (direct message) packet.
@@ -821,10 +646,30 @@ async def _process_direct_message(
)
if result is not None:
# Successfully decrypted!
# In the ambiguous direction case (both first bytes match), we
# defaulted to incoming. Check if a matching outgoing message
# already exists — if so, this is actually our own outgoing echo
# and should be treated as such instead of creating a duplicate
# incoming row.
effective_outgoing = is_outgoing
if not is_outgoing and dest_hash == src_hash:
existing_outgoing = await MessageRepository.get_by_content(
msg_type="PRIV",
conversation_key=contact.public_key.lower(),
text=result.message,
sender_timestamp=result.timestamp,
outgoing=True,
)
if existing_outgoing is not None:
effective_outgoing = True
logger.debug(
"Ambiguous DM resolved as outgoing echo (matched existing sent msg %d)",
existing_outgoing.id,
)
logger.debug(
"Decrypted DM %s contact %s: %s",
"to" if is_outgoing else "from",
"to" if effective_outgoing else "from",
contact.name or contact.public_key[:12],
result.message[:50] if result.message else "",
)
@@ -836,8 +681,11 @@ async def _process_direct_message(
their_public_key=contact.public_key,
our_public_key=our_public_key.hex(),
received_at=timestamp,
path=packet_info.path.hex() if packet_info else None,
path_len=packet_info.path_length if packet_info else None,
rssi=rssi,
snr=snr,
outgoing=effective_outgoing,
)
return {
@@ -845,8 +693,96 @@ async def _process_direct_message(
"contact_name": contact.name,
"sender": contact.name or contact.public_key[:12],
"message_id": msg_id,
"contact_key": contact.public_key,
}
# Couldn't decrypt with any known contact
logger.debug("Could not decrypt DM with any of %d candidate contacts", len(candidate_contacts))
return None
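# --- Illustrative sketch (assumption — MessageRepository.get_by_content is
# not shown in this diff): the ambiguous-echo check above amounts to an
# exact-match lookup on the already-stored outgoing side, e.g.:
#
#   SELECT id FROM messages
#    WHERE type = 'PRIV' AND conversation_key = ? AND text = ?
#      AND sender_timestamp = ? AND outgoing = 1
#    LIMIT 1
#
# The rationale: when both first-key bytes collide, a packet whose plaintext
# and sender timestamp exactly match a message we already sent is far more
# likely our own transmission echoed back by the mesh than the contact
# independently sending identical text in the same second.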
async def _process_path_packet(
raw_bytes: bytes,
timestamp: int,
packet_info: PacketInfo | None,
) -> None:
"""Process a PATH packet and update the learned direct route."""
if not has_private_key():
return
private_key = get_private_key()
our_public_key = get_public_key()
if private_key is None or our_public_key is None:
return
if packet_info is None:
packet_info = parse_packet(raw_bytes)
if packet_info is None or packet_info.payload is None or len(packet_info.payload) < 4:
return
dest_hash = format(packet_info.payload[0], "02x").lower()
src_hash = format(packet_info.payload[1], "02x").lower()
our_first_byte = format(our_public_key[0], "02x").lower()
if dest_hash != our_first_byte:
return
candidate_contacts = await ContactRepository.get_by_pubkey_first_byte(src_hash)
if not candidate_contacts:
logger.debug("No contacts found matching hash %s for PATH decryption", src_hash)
return
for contact in candidate_contacts:
if len(contact.public_key) != 64:
continue
try:
contact_public_key = bytes.fromhex(contact.public_key)
except ValueError:
continue
result = try_decrypt_path(
raw_packet=raw_bytes,
our_private_key=private_key,
their_public_key=contact_public_key,
our_public_key=our_public_key,
)
if result is None:
continue
await ContactRepository.update_direct_path(
contact.public_key,
result.returned_path.hex(),
result.returned_path_len,
result.returned_path_hash_mode,
updated_at=timestamp,
)
if result.extra_type == PayloadType.ACK and len(result.extra) >= 4:
ack_code = result.extra[:4].hex()
matched = await apply_dm_ack_code(ack_code, broadcast_fn=broadcast_event)
if matched:
logger.info(
"Applied bundled PATH ACK for %s via contact %s",
ack_code,
contact.public_key[:12],
)
else:
logger.debug(
"Buffered bundled PATH ACK %s via contact %s",
ack_code,
contact.public_key[:12],
)
elif result.extra_type == PayloadType.RESPONSE and len(result.extra) > 0:
logger.debug(
"Observed bundled PATH RESPONSE from %s (%d bytes)",
contact.public_key[:12],
len(result.extra),
)
refreshed_contact = await ContactRepository.get_by_key(contact.public_key)
if refreshed_contact is not None:
broadcast_event("contact", refreshed_contact.model_dump())
return
logger.debug(
"Could not decrypt PATH packet with any of %d candidate contacts", len(candidate_contacts)
)
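# --- Illustrative sketch (not part of the diff): the bundled-ACK branch
# above keys acknowledgements by the first four bytes of the extra payload,
# rendered as hex — so a PATH packet appears to confirm delivery of the DM
# it answers without needing a separate ACK frame.
_extra = bytes.fromhex("deadbeefcafe01")
_ack_code = _extra[:4].hex()
assert len(_extra) >= 4 and _ack_code == "deadbeef"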

Some files were not shown because too many files have changed in this diff.