Compare commits

...

173 Commits

Author SHA1 Message Date
Kelly
4dc6befeab Update requirements.txt
cleanup
2026-03-25 15:18:29 -07:00
Kelly
219eea5399 Update joke.py 2026-03-24 12:48:56 -07:00
Kelly
c987c1286e Update mesh_bot.py 2026-03-24 11:05:26 -07:00
Kelly
2ebf721bc9 dopewars fix
end of game
2026-03-22 17:24:35 -07:00
Kelly
bdef9a1f08 Update install_service.sh 2026-03-22 15:39:37 -07:00
Kelly
2da56bc31f Update install_service.sh 2026-03-22 15:37:34 -07:00
Kelly
1e3c3b9ea0 Update install_service.sh 2026-03-22 15:37:05 -07:00
Kelly
d01d7ae668 Update install_service.sh 2026-03-22 14:07:03 -07:00
Kelly
b875eed9fd Update install_service.sh 2026-03-22 13:54:40 -07:00
Kelly
e8cd85700c Create install_service.sh 2026-03-20 19:49:22 -07:00
Kelly
91b02fead4 Update README.md 2026-03-20 18:37:40 -07:00
Kelly
cba6fe3ba2 Update bootstrap.sh 2026-03-17 17:18:50 -07:00
Kelly
021efc8c63 Update bootstrap.sh 2026-03-17 17:02:26 -07:00
Kelly
a4b67072cb Update bootstrap.sh 2026-03-17 16:40:46 -07:00
Kelly
f1e1516919 Update bootstrap.sh 2026-03-17 16:18:46 -07:00
Kelly
e675134d08 Create bootstrap.sh 2026-03-17 16:17:15 -07:00
Kelly
655f2bf7e5 Update requirements.txt 2026-03-17 12:19:21 -07:00
Kelly
46cd2a8051 Update README.md 2026-03-16 21:37:10 -07:00
Kelly
fcc4f24ea5 Merge pull request #300 from SpudGunMan/dependabot/github_actions/docker/login-action-9fe7774c8f8ebfade96f0a62aa10f3882309d517
Bump docker/login-action from db14339dbc0a1f0b184157be94b23a2138122354 to 9fe7774c8f8ebfade96f0a62aa10f3882309d517
2026-03-16 19:49:15 -07:00
Kelly
7ddf29ca06 Update requirements.txt 2026-03-16 19:41:26 -07:00
dependabot[bot]
372bc0c5a7 Bump docker/login-action
Bumps [docker/login-action](https://github.com/docker/login-action) from db14339dbc0a1f0b184157be94b23a2138122354 to 9fe7774c8f8ebfade96f0a62aa10f3882309d517.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](db14339dbc...9fe7774c8f)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 9fe7774c8f8ebfade96f0a62aa10f3882309d517
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-16 09:54:07 +00:00
Kelly
b3bcb62f6c Merge pull request #299 from SpudGunMan/dependabot/github_actions/docker/login-action-db14339dbc0a1f0b184157be94b23a2138122354 2026-03-09 06:11:32 -07:00
dependabot[bot]
6fb33dde10 Bump docker/build-push-action from 6.19.2 to 7.0.0
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.19.2 to 7.0.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](10e90e3645...d08e5c354a)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-09 06:11:14 -07:00
dependabot[bot]
744ca772f2 Bump docker/metadata-action from 5.10.0 to 6.0.0
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 5.10.0 to 6.0.0.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Commits](c299e40c65...030e881283)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-09 06:10:50 -07:00
dependabot[bot]
b5e0653839 Bump docker/login-action
Bumps [docker/login-action](https://github.com/docker/login-action) from 3227f5311cb93ffd14d13e65d8cc400d30f4dd8a to db14339dbc0a1f0b184157be94b23a2138122354.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](3227f5311c...db14339dbc)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: db14339dbc0a1f0b184157be94b23a2138122354
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-09 10:02:49 +00:00
Kelly
f7462a498e Update dxspot.py 2026-03-07 18:19:54 -08:00
Kelly
30609c822d Update game_serve.py 2026-03-07 17:40:46 -08:00
Kelly
bbfce73aaa Update game_serve.py 2026-03-07 17:26:10 -08:00
Kelly
4f2cd2caef Update game_serve.py
doh
2026-03-07 17:25:09 -08:00
Kelly
294c09754f Update game_serve.py 2026-03-07 17:24:03 -08:00
Kelly
9b69ca69c4 fix link parsing
This fixes an issue with how the bbslink sync process was handling incoming posts. It was not removing the @fromNode from the end of the body, which was causing it to get appended again during the push process. This would compound with more nodes involved in the sync process

alt to https://github.com/SpudGunMan/meshing-around/pull/296

Co-Authored-By: Amy Nagle <1270500+kabili207@users.noreply.github.com>
2026-03-05 21:26:42 -08:00
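The parsing fix above can be sketched as an idempotent cleanup step. This is only an illustration of the idea — the helper name and the `@fromNode` tag format are assumptions, not the repository's actual code:

```python
import re

def strip_from_node(body: str) -> str:
    # Drop a single trailing "@<node>" tag so the push step can
    # re-append it without tags compounding across sync hops.
    return re.sub(r"\s*@\S+$", "", body)

# Stripping is idempotent, so repeated sync cycles stay stable.
post = "hello mesh @!a1b2c3d4"
once = strip_from_node(post)
twice = strip_from_node(once)
```

Because every node in the sync ring runs the same step, idempotency is what keeps the body from growing on each pass.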
Kelly
290c366cee Merge pull request #295 from SpudGunMan/dependabot/github_actions/actions/attest-build-provenance-4
Bump actions/attest-build-provenance from 3 to 4
2026-03-05 12:51:20 -08:00
dependabot[bot]
a7f0561f09 Bump actions/attest-build-provenance from 3 to 4
Bumps [actions/attest-build-provenance](https://github.com/actions/attest-build-provenance) from 3 to 4.
- [Release notes](https://github.com/actions/attest-build-provenance/releases)
- [Changelog](https://github.com/actions/attest-build-provenance/blob/main/RELEASE.md)
- [Commits](https://github.com/actions/attest-build-provenance/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/attest-build-provenance
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-02 10:10:58 +00:00
Kelly
4496f19605 Update locationdata.py 2026-02-27 14:37:32 -08:00
Kelly
6499a6e619 Merge pull request #278 from peanutyost/main
Added Features to Location module
2026-02-27 14:31:02 -08:00
Kelly
fe1444b025 cleanup Help2 2026-02-27 14:28:17 -08:00
Kelly
7bfbae503a clean help 2026-02-27 14:26:54 -08:00
Kelly
7cfb45d2b1 Update locationdata.py
hope to have resolved this https://github.com/SpudGunMan/meshing-around/issues/285
2026-02-27 12:56:22 -08:00
Kelly
0fb351ef4d fix end
thanks for issue https://github.com/SpudGunMan/meshing-around/issues/291
2026-02-26 20:22:53 -08:00
Kelly
2f6abade80 ARRLFixz
https://www.arrl.org/withdrawn-questions
2026-02-26 20:21:31 -08:00
Kelly
5247f8d9d3 Update locationdata.py 2026-02-26 15:20:18 -08:00
Kelly
b36059183c Update locationdata.py 2026-02-26 15:18:49 -08:00
Kelly
f737e401a5 Update locationdata.py 2026-02-26 15:15:10 -08:00
Kelly
98b5f4fb7f Update scheduler.py 2026-02-26 14:43:55 -08:00
Kelly
17fa03ff9d Merge branch 'main' of https://github.com/SpudGunMan/meshing-around 2026-02-26 14:41:32 -08:00
Kelly
40aaa7202c Update scheduler.py 2026-02-26 14:41:02 -08:00
Kelly
5088397856 Merge pull request #292 from SpudGunMan/dependabot/github_actions/docker/build-push-action-10e90e3645eae34f1e60eeb005ba3a3d33f178e8
Bump docker/build-push-action from 8c1e8f8e5bf845ba3773a14f3967965548a2341e to 10e90e3645eae34f1e60eeb005ba3a3d33f178e8
2026-02-26 14:20:07 -08:00
Kelly
db1c31579c Update install.sh 2026-02-26 14:18:38 -08:00
dependabot[bot]
dcf1b8f3cc Bump docker/build-push-action
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 8c1e8f8e5bf845ba3773a14f3967965548a2341e to 10e90e3645eae34f1e60eeb005ba3a3d33f178e8.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](8c1e8f8e5b...10e90e3645)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 10e90e3645eae34f1e60eeb005ba3a3d33f178e8
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-16 10:09:14 +00:00
Kelly
2a7000a2e6 Update hamtest.py 2026-02-14 16:49:59 -08:00
Kelly
aa0aaed0b5 Merge pull request #288 from SpudGunMan/dependabot/github_actions/docker/build-push-action-8c1e8f8e5bf845ba3773a14f3967965548a2341e 2026-02-04 08:27:55 -08:00
Kelly
9db4dc8ab9 Merge pull request #289 from SpudGunMan/dependabot/github_actions/docker/login-action-3227f5311cb93ffd14d13e65d8cc400d30f4dd8a 2026-02-04 08:27:37 -08:00
dependabot[bot]
85e8f41dca Bump docker/login-action
Bumps [docker/login-action](https://github.com/docker/login-action) from 0567fa5ae8c9a197cb207537dc5cbb43ca3d803f to 3227f5311cb93ffd14d13e65d8cc400d30f4dd8a.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](0567fa5ae8...3227f5311c)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3227f5311cb93ffd14d13e65d8cc400d30f4dd8a
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-02 10:27:43 +00:00
dependabot[bot]
ddb123b759 Bump docker/build-push-action
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 64c9b141502b80dbbd71e008a0130ad330f480f8 to 8c1e8f8e5bf845ba3773a14f3967965548a2341e.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](64c9b14150...8c1e8f8e5b)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 8c1e8f8e5bf845ba3773a14f3967965548a2341e
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-02 10:27:30 +00:00
dependabot[bot]
10afde663e Bump docker/login-action (#286) 2026-01-19 09:09:51 -08:00
dependabot[bot]
c931d13e6e Bump docker/login-action
Bumps [docker/login-action](https://github.com/docker/login-action) from 6862ffc5ab2cdb4405cf318a62a6f4c066e2298b to 916386b00027d425839f8da46d302dab33f5875b.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](6862ffc5ab...916386b000)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 916386b00027d425839f8da46d302dab33f5875b
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-16 16:46:29 -08:00
dependabot[bot]
ba6075b616 Bump docker/build-push-action
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 9e436ba9f2d7bcd1d038c8e55d039d37896ddc5d to 64c9b141502b80dbbd71e008a0130ad330f480f8.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](9e436ba9f2...64c9b14150)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 64c9b141502b80dbbd71e008a0130ad330f480f8
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-16 16:46:09 -08:00
Jacob Morris
68c065825b Adds data persistence loop and shared method to handle all persistence operations 2026-01-16 16:45:46 -08:00
Kelly
213f121807 Update battleship.py 2026-01-13 17:37:26 -08:00
SpudGunMan
530d78482a Update locationdata.py 2026-01-02 16:00:17 -08:00
peanutyost
6c459cd317 added store and return Altitude to location. 2025-12-30 16:47:46 -06:00
peanutyost
626f0dddf7 reverted .gitignore 2025-12-30 08:03:00 -06:00
peanutyost
bb57301b20 reverse some changes that were not needed. 2025-12-30 07:57:46 -06:00
peanutyost
d3adf77896 Cleanup readme 2025-12-30 07:47:37 -06:00
peanutyost
157176acf7 updated the map help command. 2025-12-30 07:42:10 -06:00
peanutyost
4fd35dc004 updated the readme for the map module. 2025-12-30 07:40:23 -06:00
peanutyost
955f7350e9 Refactor location management
- Cleaned up .gitignore to only include necessary entries.
- Updated config.template to include new settings for location management.
- Added SQLite database initialization and management functions in locationdata.py for saving, retrieving, and deleting locations.
- Enhanced map command handling to support saving and listing locations, including public/private visibility controls.
2025-12-29 22:40:15 -06:00
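The location store described in this commit can be sketched with Python's built-in `sqlite3`. The schema and function names here are hypothetical — a minimal illustration of saving and listing locations with a public/private flag, not the module's actual API:

```python
import sqlite3

def init_db(path="locations.db"):
    # Hypothetical schema mirroring the commit description: named
    # locations with an owner, coordinates, and a public flag.
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS locations (
               name TEXT, owner TEXT, lat REAL, lon REAL,
               public INTEGER DEFAULT 0,
               PRIMARY KEY (name, owner))"""
    )
    return con

def save_location(con, name, owner, lat, lon, public=False):
    con.execute("INSERT OR REPLACE INTO locations VALUES (?,?,?,?,?)",
                (name, owner, lat, lon, int(public)))
    con.commit()

def list_visible(con, requester):
    # A user sees their own locations plus anything marked public.
    cur = con.execute(
        "SELECT name, lat, lon FROM locations WHERE owner=? OR public=1",
        (requester,))
    return cur.fetchall()
```

Doing the visibility filtering in the query means private rows never leave the database layer.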
SpudGunMan
09515b9bc0 Update tictactoe.py 2025-12-27 19:40:05 -08:00
SpudGunMan
9b8c9d80c8 Update hamtest.py 2025-12-27 19:40:02 -08:00
SpudGunMan
8ee838f5c6 Update hangman.py 2025-12-27 19:39:34 -08:00
SpudGunMan
757d6d30b8 fix 2025-12-27 19:38:06 -08:00
SpudGunMan
1ee785d388 fix 2025-12-27 19:32:58 -08:00
SpudGunMan
c3284f0a0f fix Turn Counter 2025-12-27 16:21:17 -08:00
SpudGunMan
bdcc479360 enhance
fix turn count at 0
2025-12-27 15:39:27 -08:00
SpudGunMan
b1444b24e4 Update mmind.py 2025-12-27 15:26:21 -08:00
SpudGunMan
aef67da492 alertDuration
customize the check for API alerts
2025-12-27 15:06:56 -08:00
SpudGunMan
b8b8145447 enhance
@SnyderMesh Thanks for Idea
2025-12-26 15:33:11 -08:00
dependabot[bot]
42a4842a5b Bump docker/login-action
Bumps [docker/login-action](https://github.com/docker/login-action) from 28fdb31ff34708d19615a74d67103ddc2ea9725c to 6862ffc5ab2cdb4405cf318a62a6f4c066e2298b.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](28fdb31ff3...6862ffc5ab)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 6862ffc5ab2cdb4405cf318a62a6f4c066e2298b
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-22 11:35:11 -08:00
SpudGunMan
201591d469 fix dopewar replay bug
reference https://github.com/SpudGunMan/meshing-around/issues/274 thanks MJTheis

Co-Authored-By: MJTheis <232630404+mjtheis@users.noreply.github.com>
2025-12-21 18:27:26 -08:00
dependabot[bot]
4ecdc7b108 Bump docker/metadata-action (#273) 2025-12-09 09:16:24 -08:00
Kelly
3f78bf7a67 Merge pull request #272 from SpudGunMan/dependabot/github_actions/actions/checkout-6 2025-12-09 09:15:48 -08:00
dependabot[bot]
8af21b760c Bump actions/checkout from 5 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-24 11:21:13 +00:00
SpudGunMan
ea3ed46e86 Update locationdata.py 2025-11-18 13:08:52 -08:00
SpudGunMan
d78d6acd1e Update locationdata.py 2025-11-17 14:33:55 -08:00
SpudGunMan
e9b483f4e8 Update locationdata.py 2025-11-16 19:30:30 -08:00
SpudGunMan
94660e7993 Update space.py 2025-11-15 06:27:17 -08:00
SpudGunMan
12aeaef250 fix hop doh hops! 2025-11-12 23:01:19 -08:00
Kelly
2a6f76ab5b Merge pull request #268 from SpudGunMan/lab
Lab
2025-11-12 21:58:31 -08:00
SpudGunMan
05df1e1a3c logsoff 2025-11-12 21:58:00 -08:00
SpudGunMan
38131b4180 cleanup 2025-11-12 21:53:27 -08:00
SpudGunMan
397c39b13d relay node 2025-11-12 21:33:36 -08:00
SpudGunMan
af7dfe8a51 Update README.md 2025-11-12 19:53:20 -08:00
SpudGunMan
d5d163aab9 logs on 2025-11-12 19:48:12 -08:00
Kelly
58cc3e4314 Merge pull request #267 from SpudGunMan/lab
Lab
2025-11-12 19:47:11 -08:00
SpudGunMan
3274dfdbc0 logs off 2025-11-12 19:45:27 -08:00
SpudGunMan
84a1a163d3 Update system.py 2025-11-12 19:34:46 -08:00
SpudGunMan
289eb70738 cleanup 2025-11-12 19:17:49 -08:00
SpudGunMan
a6d51e41bf clean 2025-11-12 19:08:41 -08:00
SpudGunMan
a63020bbb7 space
space
2025-11-12 19:02:46 -08:00
SpudGunMan
2416e73fbf hop refactor for new proto 2025-11-12 17:27:09 -08:00
SpudGunMan
f87f34f8bf Update icad_tone.py 2025-11-12 17:16:39 -08:00
SpudGunMan
eaed034d20 Revert "Update icad_tone.py"
This reverts commit 14b876b989.
2025-11-12 17:13:03 -08:00
SpudGunMan
ec9ac1b1fe Revert "Update icad_tone.py"
This reverts commit c79f3cdfbc.
2025-11-12 17:12:59 -08:00
SpudGunMan
e84ce13878 Revert "Update icad_tone.py"
This reverts commit c31947194e.
2025-11-12 17:12:49 -08:00
SpudGunMan
a5fc8aca82 fix hops 2025-11-12 17:11:14 -08:00
SpudGunMan
c31947194e Update icad_tone.py 2025-11-12 16:21:20 -08:00
SpudGunMan
c79f3cdfbc Update icad_tone.py 2025-11-12 16:18:26 -08:00
SpudGunMan
14b876b989 Update icad_tone.py 2025-11-12 16:09:31 -08:00
SpudGunMan
2cc5b23753 Update icad_tone.py 2025-11-12 15:48:19 -08:00
SpudGunMan
a5b0fda3ac Update icad_tone.py 2025-11-12 13:08:30 -08:00
SpudGunMan
9c5c332e01 Update icad_tone.py 2025-11-12 12:27:45 -08:00
SpudGunMan
ac5e96e463 Update icad_tone.py 2025-11-12 12:26:33 -08:00
SpudGunMan
0ce7deb740 Update icad_tone.py 2025-11-12 12:23:04 -08:00
SpudGunMan
a60333318b Update icad_tone.py 2025-11-12 12:19:38 -08:00
SpudGunMan
665acaa904 Update icad_tone.py 2025-11-12 12:10:18 -08:00
SpudGunMan
0aa8bccd04 Update README.md 2025-11-12 12:07:33 -08:00
SpudGunMan
2e5e8a7589 Update README.md 2025-11-12 11:52:48 -08:00
SpudGunMan
e3e6393bad icad_tone_alerts
icad_tone_alerts
2025-11-12 11:50:40 -08:00
SpudGunMan
be38588292 Update system.py 2025-11-12 10:17:32 -08:00
SpudGunMan
14fb3f9cb6 lab logging 2025-11-11 23:58:12 -08:00
Kelly
c40cd86592 Merge pull request #265 from SpudGunMan/lab
Lab
2025-11-11 23:56:01 -08:00
SpudGunMan
69df48957e clear Logs 2025-11-11 23:54:17 -08:00
SpudGunMan
e29573ebc0 Update system.py 2025-11-11 22:24:52 -08:00
SpudGunMan
13b9b75f86 Update system.py 2025-11-11 22:24:44 -08:00
SpudGunMan
0bfe908391 Update system.py 2025-11-11 22:21:14 -08:00
SpudGunMan
5baee422c2 Update system.py 2025-11-11 22:18:22 -08:00
SpudGunMan
38ff05fd40 logs 2025-11-11 21:22:28 -08:00
SpudGunMan
e1def5422a cleanup 2025-11-11 21:13:41 -08:00
SpudGunMan
93031010cb enhance output data for solar report 2025-11-11 21:05:58 -08:00
SpudGunMan
21e614ab8e enhance solar with NOAA radio weather 2025-11-11 19:39:04 -08:00
SpudGunMan
a5322867e3 Update pong_bot.py 2025-11-11 16:58:19 -08:00
SpudGunMan
2863a64ec8 Update mesh_bot.py 2025-11-11 16:57:30 -08:00
SpudGunMan
678fde7b2c logs 2025-11-11 16:45:29 -08:00
Kelly
ec0f9f966c Merge pull request #264 from SpudGunMan/lab
Lab
2025-11-11 16:44:28 -08:00
SpudGunMan
fd114301f6 logs 2025-11-11 16:43:52 -08:00
SpudGunMan
1778cb6feb Update system.py 2025-11-11 16:38:13 -08:00
SpudGunMan
fc7ca37184 Update system.py 2025-11-11 16:35:16 -08:00
SpudGunMan
fe2110ca2b Update system.py 2025-11-11 16:30:56 -08:00
SpudGunMan
179113e83a Update system.py 2025-11-11 14:46:51 -08:00
SpudGunMan
79348be644 debug 2025-11-11 14:12:55 -08:00
SpudGunMan
35c6232b0c enhance 2025-11-11 13:40:48 -08:00
SpudGunMan
2aa7ffb0e8 packet debugging 2025-11-11 13:35:16 -08:00
SpudGunMan
a7060bc516 cleanup 2025-11-11 13:23:55 -08:00
SpudGunMan
998d979d71 batter channel assign 2025-11-11 13:13:06 -08:00
Kelly
cdfb451d67 Merge pull request #262 from SpudGunMan/dependabot/github_actions/docker/metadata-action-8d8c7c12f7b958582a5cb82ba16d5903cb27976a
Bump docker/metadata-action from 032a4b3bda1b716928481836ac5bfe36e1feaad6 to 8d8c7c12f7b958582a5cb82ba16d5903cb27976a
2025-11-11 13:05:16 -08:00
Kelly
994405955a Merge pull request #263 from SpudGunMan/lab
Lab
2025-11-11 13:03:39 -08:00
SpudGunMan
17d92dc78d Update system.py 2025-11-11 13:02:03 -08:00
SpudGunMan
d0e33f943f logs 2025-11-11 12:53:24 -08:00
dependabot[bot]
f55c7311fa Bump docker/metadata-action
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 032a4b3bda1b716928481836ac5bfe36e1feaad6 to 8d8c7c12f7b958582a5cb82ba16d5903cb27976a.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Commits](032a4b3bda...8d8c7c12f7)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-version: 8d8c7c12f7b958582a5cb82ba16d5903cb27976a
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-10 12:24:13 +00:00
SpudGunMan
5f4f832af6 Update install.sh 2025-11-09 17:00:26 -08:00
SpudGunMan
c3e8f4a93e fix install embedded 2025-11-09 16:59:52 -08:00
SpudGunMan
e72b3c191e Update system.py 2025-11-09 16:43:22 -08:00
SpudGunMan
3774b8407b refactor with new API 2025-11-09 16:29:36 -08:00
SpudGunMan
0e074a6885 Update custom_scheduler.template 2025-11-09 15:54:00 -08:00
SpudGunMan
c8800d837f Update config.template 2025-11-09 15:51:52 -08:00
SpudGunMan
289ada1fc0 refactor 2025-11-09 15:45:09 -08:00
SpudGunMan
e64e60358d enhance 2025-11-09 15:33:32 -08:00
SpudGunMan
658fb33b69 enhance log 2025-11-09 15:29:52 -08:00
SpudGunMan
1568d026f2 flood packet work
debug on
2025-11-09 15:23:26 -08:00
SpudGunMan
232bf98efd bug highfly
Highfly: error: unsupported operand type(s) for -: 'str' and 'int'
2025-11-09 15:17:02 -08:00
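The TypeError quoted above comes from doing arithmetic on an altitude that arrives as a string. A defensive cast like the following (a hypothetical helper, not the actual patch) avoids it:

```python
def altitude_delta(reported, baseline=0):
    # Telemetry may deliver altitude as str, int, or float;
    # normalize to int before subtracting, and fail soft on junk.
    try:
        value = int(float(reported))
    except (TypeError, ValueError):
        return None
    return value - baseline
```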
SpudGunMan
3cce938334 ChannelAwarness 2025-11-09 15:00:21 -08:00
SpudGunMan
e6a17d9258 patch DM for welcome and LLM 2025-11-09 13:19:36 -08:00
SpudGunMan
d403e4c8c0 Update game_serve.py 2025-11-09 13:18:36 -08:00
SpudGunMan
ae19c5b83f Update battleship_vid.py 2025-11-09 13:18:17 -08:00
SpudGunMan
532efda9e8 Update config.template 2025-11-08 19:48:23 -08:00
SpudGunMan
d20eab03e9 Update README.md 2025-11-08 18:52:08 -08:00
SpudGunMan
df43b61a0c Update install.sh 2025-11-08 12:13:15 -08:00
SpudGunMan
862347cbec refactor leader fly and speed 2025-11-08 11:37:01 -08:00
SpudGunMan
9c412b8328 leaderboard airspeed 2025-11-08 11:24:00 -08:00
SpudGunMan
c3fcacd64b Update locationdata.py 2025-11-08 11:08:16 -08:00
SpudGunMan
68b5de2950 one is a lonely number 2025-11-08 11:04:16 -08:00
SpudGunMan
f578ba6084 Update yolo_vision.py 2025-11-07 12:44:56 -08:00
SpudGunMan
961bb3abba Tesseract 2025-11-07 12:34:17 -08:00
38 changed files with 2082 additions and 530 deletions


@@ -25,10 +25,10 @@ jobs:
#
steps:
- name: Checkout repository
- uses: actions/checkout@v5
+ uses: actions/checkout@v6
# Uses the `docker/login-action` action to log in to the Container registry registry using the account and password that will publish the packages. Once published, the packages are scoped to the account defined here.
- name: Log in to the Container registry
- uses: docker/login-action@28fdb31ff34708d19615a74d67103ddc2ea9725c
+ uses: docker/login-action@9fe7774c8f8ebfade96f0a62aa10f3882309d517
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
@@ -36,7 +36,7 @@ jobs:
# This step uses [docker/metadata-action](https://github.com/docker/metadata-action#about) to extract tags and labels that will be applied to the specified image. The `id` "meta" allows the output of this step to be referenced in a subsequent step. The `images` value provides the base name for the tags and labels.
- name: Extract metadata (tags, labels) for Docker
id: meta
- uses: docker/metadata-action@032a4b3bda1b716928481836ac5bfe36e1feaad6
+ uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# This step uses the `docker/build-push-action` action to build the image, based on your repository's `Dockerfile`. If the build succeeds, it pushes the image to GitHub Packages.
@@ -44,7 +44,7 @@ jobs:
# It uses the `tags` and `labels` parameters to tag and label the image with the output from the "meta" step.
- name: Build and push Docker image
id: push
- uses: docker/build-push-action@9e436ba9f2d7bcd1d038c8e55d039d37896ddc5d
+ uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294
with:
context: .
push: true
@@ -53,7 +53,7 @@ jobs:
# This step generates an artifact attestation for the image, which is an unforgeable statement about where and how it was built. It increases supply chain security for people who consume the image. For more information, see [Using artifact attestations to establish provenance for builds](/actions/security-guides/using-artifact-attestations-to-establish-provenance-for-builds).
- name: Generate artifact attestation
- uses: actions/attest-build-provenance@v3
+ uses: actions/attest-build-provenance@v4
with:
subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME}}
subject-digest: ${{ steps.push.outputs.digest }}

.gitignore

@@ -28,4 +28,4 @@ modules/custom_scheduler.py
venv/
# Python cache
- __pycache__/
+ __pycache__/


@@ -52,6 +52,7 @@ Mesh Bot is a feature-rich Python bot designed to enhance your [Meshtastic](http
- **Customizable Triggers**: Use proximity events for creative applications like "king of the hill" or 🧭 geocache games by adjusting the alert cycle.
- **High Flying Alerts**: Receive notifications when nodes with high altitude are detected on the mesh.
- **Voice/Command Triggers**: Activate bot functions using keywords or voice commands (see [Voice Commands](#voice-commands-vox) for "Hey Chirpy!" support).
+ - **YOLOv5 alerts**: Use camera modules to detect objects or OCR
### EAS Alerts
- **FEMA iPAWS/EAS Alerts**: Receive Emergency Alerts from FEMA via API on internet-connected nodes.
@@ -72,6 +73,7 @@ Mesh Bot is a feature-rich Python bot designed to enhance your [Meshtastic](http
- **WSJT-X Integration**: Monitor WSJT-X (FT8, FT4, WSPR, etc.) decode messages and forward them to the mesh network with optional callsign filtering.
- **JS8Call Integration**: Monitor JS8Call messages and forward them to the mesh network with optional callsign filtering.
- **Meshages TTS**: The bot can speak mesh messages aloud using [KittenTTS](https://github.com/KittenML/KittenTTS). Enable this feature to have important alerts and messages read out loud on your device—ideal for hands-free operation or accessibility. See [radio.md](modules/radio.md) for setup instructions.
+ - **Offline Tone out Decoder**: Decode fire Tone out and DTMF and action with alerts to mesh
### Asset Tracking, Check-In/Check-Out, and Inventory Management
Advanced check-in/check-out and asset tracking for people and equipment—ideal for accountability, safety monitoring, and logistics (e.g., Radio-Net, FEMA, trailhead groups). Admin approval workflows, GPS location capture, and overdue alerts. The integrated inventory and point-of-sale (POS) system enables item management, sales tracking, cart-based transactions, and daily reporting, for swaps, emergency supply management, and field operations, maker-places.
@@ -100,7 +102,7 @@ Advanced check-in/check-out and asset tracking for people and equipment—ideal
- **Automatic Message Chunking**: Messages over 160 characters are automatically split to ensure reliable delivery across multiple hops.
## Getting Started
- This project is developed on Linux (specifically a Raspberry Pi) but should work on any platform where the [Meshtastic protobuf API](https://meshtastic.org/docs/software/python/cli/) modules are supported, and with any compatible [Meshtastic](https://meshtastic.org/docs/getting-started/) hardware. For pico or low-powered devices, see projects for embedding, armbian or [buildroot](https://github.com/buildroot-meshtastic/buildroot-meshtastic), also see [femtofox](https://github.com/noon92/femtofox) for running on luckfox hardware. If you need a local console consider the [firefly](https://github.com/pdxlocations/firefly) project.
+ This project is developed on Linux (specifically a Raspberry Pi) but should work on any platform where the [Meshtastic protobuf API](https://meshtastic.org/docs/software/python/cli/) modules are supported, and with any compatible [Meshtastic](https://meshtastic.org/docs/getting-started/) hardware, however it is **recomended to use the latest firmware code**. For low-powered devices [mPWRD-OS](https://github.com/SpudGunMan/mPWRD-OS) for running on luckfox hardware. If you need a local console consider the [firefly](https://github.com/pdxlocations/firefly) project.
🥔 Please use responsibly and follow local rulings for such equipment. This project captures packets, logs them, and handles over the air communications which can include PII such as GPS locations.
@@ -172,6 +174,7 @@ For testing and feature ideas on Discord and GitHub, if its stable its thanks to
- **mrpatrick1991**: For OG Docker configurations. 💻
- **A-c0rN**: Assistance with iPAWS and 🚨
- **Mike O'Connell/skrrt**: For [eas_alert_parser](etc/eas_alert_parser.py) enhanced by **sheer.cold**
+ - **dadud**: For idea on [etc/icad_tone.py](etc/icad_tone.py)
- **WH6GXZ nurse dude**: Volcano Alerts 🌋
- **mikecarper**: hamtest, leading to quiz etc.. 📋
- **c.merphy360**: high altitude alerts. 🚀

bootstrap.sh (new executable file)

@@ -0,0 +1,29 @@
#!/usr/bin/env bash
set -e
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$BASE_DIR"
if [[ ! -d "$BASE_DIR/venv" ]]; then
python3 -m venv "$BASE_DIR/venv"
fi
source "$BASE_DIR/venv/bin/activate"
"$BASE_DIR/venv/bin/pip" install -r "$BASE_DIR/requirements.txt"
mkdir -p "$BASE_DIR/data"
cp -Rn "$BASE_DIR/etc/data/." "$BASE_DIR/data/"
if [[ ! -f "$BASE_DIR/config.ini" ]]; then
cp "$BASE_DIR/config.template" "$BASE_DIR/config.ini"
sleep 1
replace="s|type = serial|type = tcp|g"
sed -i.bak "$replace" "$BASE_DIR/config.ini"
replace="s|# hostname = meshtastic.local|hostname = localhost|g"
sed -i.bak "$replace" "$BASE_DIR/config.ini"
rm -f "$BASE_DIR/config.ini.bak"
else
echo "config.ini already exists, leaving it unchanged."
fi
deactivate


@@ -79,31 +79,29 @@ kiwixURL = http://127.0.0.1:8080
# Kiwix library name (e.g., wikipedia_en_100_nopic_2025-09)
kiwixLibraryName = wikipedia_en_100_nopic_2025-09
- # Enable ollama LLM see more at https://ollama.com
+ # Enable LLM local Ollama integration, set true for any LLM support
ollama = False
- # Ollama model to use (defaults to gemma3:270m) gemma2 is good for older SYSTEM prompt
- # ollamaModel = gemma3:latest
- # ollamaModel = gemma2:2b
- # server instance to use (defaults to local machine install)
+ # Ollama server instance to use (defaults to local machine install)
ollamaHostName = http://localhost:11434
- # Produce LLM replies to messages that aren't commands?
- # If False, the LLM only replies to the "ask:" and "askai" commands.
- llmReplyToNonCommands = True
- # if True, the input is sent raw to the LLM, if False uses SYSTEM prompt
- rawLLMQuery = True
- # Enable Wikipedia/Kiwix integration with LLM for RAG (Retrieval Augmented Generation)
- # When enabled, LLM will automatically search Wikipedia/Kiwix and include context in responses
- llmUseWikiContext = False
- # Use OpenWebUI instead of direct Ollama API (enables advanced RAG features)
+ # Use OpenWebUI instead of direct Ollama API / still leave ollama = True
useOpenWebUI = False
# OpenWebUI server URL (e.g., http://localhost:3000)
openWebUIURL = http://localhost:3000
# OpenWebUI API key/token (required when useOpenWebUI is True)
openWebUIAPIKey =
+ # Ollama model to use (defaults to gemma3:270m) gemma2 is good for older SYSTEM prompt
+ # ollamaModel is used for both Ollama and OpenWebUI when useOpenWebUI its just the model name
+ # ollamaModel = gemma3:latest
+ # ollamaModel = gemma2:2b
+ # if True, the query is sent raw to the LLM, if False uses internal SYSTEM prompt
+ rawLLMQuery = True
+ # If False, the LLM only replies to the "ask:" and "askai" commands. otherwise DM's automatically go to LLM
+ llmReplyToNonCommands = True
+ # Enable Wikipedia/Kiwix integration with LLM for RAG (Retrieval Augmented Generation)
+ # When enabled, LLM will automatically search Wikipedia/Kiwix and include context in responses
+ llmUseWikiContext = False
# StoreForward Enabled and Limits
StoreForward = True
StoreLimit = 3
@@ -202,6 +200,12 @@ lat = 48.50
lon = -123.0
fuzzConfigLocation = True
fuzzItAll = False
+ # database file for saved locations
+ locations_db = data/locations.db
+ # if True, only administrators can save public locations
+ public_location_admin_manage = False
+ # if True, only administrators can delete locations
+ delete_public_locations_admins_only = False
# Default to metric units rather than imperial
useMetric = False
@@ -325,6 +329,7 @@ value =
# interval to use when time is not set (e.g. every 2 days)
interval =
# time of day in 24:00 hour format when value is 'day' and interval is not set
+ # Process run :00,:20,:40 try and vary the 20 minute offsets to avoid collision
time =
[radioMon]
@@ -491,4 +496,10 @@ autoBanThreshold = 5
# Throttle value for API requests no ban_hammer
apiThrottleValue = 20
# Timeframe for offenses (in seconds)
- autoBanTimeframe = 3600
+ autoBanTimeframe = 3600
+ [dataPersistence]
+ # Enable or disable the data persistence loop service
+ enabled = True
+ # Interval in seconds for the persistence loop (how often to save data)
+ interval = 300
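These `[dataPersistence]` keys map naturally onto a background save loop (commit 68c065825b adds one). A minimal sketch — the function names are assumptions; only the `interval` semantics come from the config above:

```python
import threading

def start_persistence_loop(save_fn, interval=300):
    # Run save_fn every `interval` seconds on a daemon thread.
    # Event.wait doubles as both the sleep and the shutdown signal.
    stop = threading.Event()

    def loop():
        while not stop.wait(interval):
            save_fn()

    threading.Thread(target=loop, daemon=True).start()
    return stop  # caller invokes stop.set() on shutdown
```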


@@ -97,3 +97,36 @@ Run this script to monitor the camera feed and generate alerts for detected and
---
## icad_tone.py
**Purpose:**
`icad_tone.py` is a utility script for detecting fire and EMS radio tones using the [icad_tone_detection](https://github.com/thegreatcodeholio/icad_tone_detection) library. It analyzes audio from a live stream, soundcard, or WAV file, identifies various tone types (such as two-tone, long tone, hi/low, pulsed, MDC, and DTMF), and writes detected alerts to `alert.txt` for integration with Mesh Bot or Meshtastic.
**Usage:**
Run the script from the command line, specifying a WAV file for offline analysis or configuring it to listen to a stream or soundcard for real-time monitoring.
```sh
python etc/icad_tone.py --wav path/to/file.wav
```
Or, for live monitoring (after setting `HTTP_STREAM_URL` in the script):
```sh
python etc/icad_tone.py
```
**What it does:**
- Loads audio from a stream, soundcard, or WAV file.
- Uses `icad_tone_detection` to analyze audio for tone patterns.
- Prints raw detection results and summaries to the console.
- Writes a summary of detected tones to `alert.txt` (overwriting each time).
- Handles errors and missing dependencies gracefully.
**Configuration:**
- `ALERT_FILE_PATH`: Path to the alert output file (default: `alert.txt`).
- `AUDIO_SOURCE`: Set to `"http"` for streaming or `"soundcard"` for local audio input.
- `HTTP_STREAM_URL`: URL of the audio stream (required if using HTTP source).
- `SAMPLE_RATE`, `INPUT_CHANNELS`, `CHUNK_DURATION`: Audio processing parameters.
**Note:**
- Requires installation of the `icad_tone_detection` dependency.
- Set `HTTP_STREAM_URL` to a valid stream if using HTTP mode.
- Intended for experimental or hobbyist use; may require customization for your workflow.
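On the consuming side, the bot's file watcher picks up `alert.txt`, which `icad_tone.py` overwrites on each detection. A minimal sketch of that handoff (illustrative only; the helper name and demo path are assumptions, not the bot's API):

```python
import os

def read_alert(path="alert.txt"):
    """Return the current alert text, or None if there is no alert."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return f.read().strip() or None

# Demo with a temporary file standing in for alert.txt.
with open("demo_alert.txt", "w") as f:
    f.write("Two-Tone: 852.0Hz/1122.0Hz | Tone A: 1.0s | Tone B: 3.0s\n")
print(read_alert("demo_alert.txt") is not None)  # True
os.remove("demo_alert.txt")
```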


@@ -15,6 +15,7 @@ def setup_custom_schedules(send_message, tell_joke, welcome_message, handle_wxc,
5. Make sure to uncomment (delete the single #) the example schedules down at the end of the file to enable them
Python is sensitive to indentation, so be careful when editing this file.
https://thonny.org is included in the Raspberry Pi image and is a simple IDE for editing Python files.
6. System tasks run every 20 minutes; avoid overlapping schedules to reduce rapid-fire API issues (use offsets like 8:05)
Available functions you can import and use (be sure the modules are enabled in config.ini):
- tell_joke() - Returns a random joke
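The offset advice in step 6 amounts to a time-of-day check that avoids the :00/:20/:40 system-task marks. A tiny stdlib stand-in (the real file wires schedules through `setup_custom_schedules` with helpers like `tell_joke` and `send_message`, which are not used here):

```python
import datetime

def due_at(now, hour, minute):
    """True when `now` matches the HH:MM a custom task is scheduled for."""
    return now.hour == hour and now.minute == minute

# Example: fire at 8:05 to stay off the :00/:20/:40 system-task marks.
now = datetime.datetime(2026, 3, 22, 8, 5)
print(due_at(now, 8, 5))   # True
print(due_at(now, 8, 0))   # False
```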


@@ -959,18 +959,6 @@
"To relay messages between satellites"
]
},
{
"id": "E2A13",
"correct": 1,
"refs": "",
"question": "Which of the following techniques is used by digital satellites to relay messages?",
"answers": [
"Digipeating",
"Store-and-forward",
"Multisatellite relaying",
"Node hopping"
]
},
{
"id": "E2B01",
"correct": 0,
@@ -2495,18 +2483,6 @@
"Utilizing a Class D final amplifier"
]
},
{
"id": "E4D05",
"correct": 0,
"refs": "",
"question": "What transmitter frequencies would create an intermodulation-product signal in a receiver tuned to 146.70 MHz when a nearby station transmits on 146.52 MHz?",
"answers": [
"146.34 MHz and 146.61 MHz",
"146.88 MHz and 146.34 MHz",
"146.10 MHz and 147.30 MHz",
"146.30 MHz and 146.90 MHz"
]
},
{
"id": "E4D06",
"correct": 2,
@@ -3851,18 +3827,6 @@
"Permeability"
]
},
{
"id": "E6D07",
"correct": 3,
"refs": "",
"question": "What is the current that flows in the primary winding of a transformer when there is no load on the secondary winding?",
"answers": [
"Stabilizing current",
"Direct current",
"Excitation current",
"Magnetizing current"
]
},
{
"id": "E6D08",
"correct": 1,


@@ -35,18 +35,6 @@
"12 meters"
]
},
{
"id": "G1A04",
"correct": 3,
"refs": "[97.303(h)]",
"question": "Which of the following amateur bands is restricted to communication only on specific channels, rather than frequency ranges?",
"answers": [
"11 meters",
"12 meters",
"30 meters",
"60 meters"
]
},
{
"id": "G1A05",
"correct": 0,
@@ -347,18 +335,6 @@
"Submit a rule-making proposal to the FCC describing the codes and methods of the technique"
]
},
{
"id": "G1C09",
"correct": 2,
"refs": "[97.313(i)]",
"question": "What is the maximum power limit on the 60-meter band?",
"answers": [
"1500 watts PEP",
"10 watts RMS",
"ERP of 100 watts PEP with respect to a dipole",
"ERP of 100 watts PEP with respect to an isotropic antenna"
]
},
{
"id": "G1C11",
"correct": 3,
@@ -611,18 +587,6 @@
"1500 watts"
]
},
{
"id": "G1E09",
"correct": 0,
"refs": "[97.115]",
"question": "Under what circumstances are messages that are sent via digital modes exempt from Part 97 third-party rules that apply to other modes of communication?",
"answers": [
"Under no circumstances",
"When messages are encrypted",
"When messages are not encrypted",
"When under automatic control"
]
},
{
"id": "G1E10",
"correct": 0,
@@ -4079,18 +4043,6 @@
"All these choices are correct"
]
},
{
"id": "G8C01",
"correct": 2,
"refs": "",
"question": "On what band do amateurs share channels with the unlicensed Wi-Fi service?",
"answers": [
"432 MHz",
"902 MHz",
"2.4 GHz",
"10.7 GHz"
]
},
{
"id": "G8C02",
"correct": 0,

etc/icad_tone.py (new file, 222 lines)

@@ -0,0 +1,222 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# icad_tone.py - uses icad_tone_detection, for fire and EMS tone detection
# https://github.com/thegreatcodeholio/icad_tone_detection
# output to alert.txt for meshing-around bot
# 2025 K7MHI Kelly Keeton
# ---------------------------
# User Configuration Section
# ---------------------------
ALERT_FILE_PATH = "alert.txt" # Path to alert log file, or None to disable logging
AUDIO_SOURCE = "soundcard" # "soundcard" for mic/line-in, "http" for stream
HTTP_STREAM_URL = "" # Set to your stream URL if using "http"
SAMPLE_RATE = 16000 # Audio sample rate (Hz)
INPUT_CHANNELS = 1 # Number of input channels (1=mono)
MIN_SAMPLES = 4096 # Minimum samples per detection window (increase for better accuracy)
STREAM_BUFFER = 32000 # Number of bytes to buffer before detection (for MP3 streams)
INPUT_DEVICE = 0 # Set to device index or name, or None for default
# ---------------------------
import sys
import time
from icad_tone_detection import tone_detect
from pydub import AudioSegment
import requests
import sounddevice as sd
import numpy as np
import argparse
import io
import warnings
warnings.filterwarnings("ignore", message="nperseg = .* is greater than input length")
def write_alert(message):
if ALERT_FILE_PATH:
try:
with open(ALERT_FILE_PATH, "w") as f: # overwrite each time
f.write(message + "\n")
except Exception as e:
print(f"Error writing to alert file: {e}", file=sys.stderr)
def detect_and_alert(audio_data, sample_rate):
try:
result = tone_detect(audio_data, sample_rate)
except Exception as e:
print(f"Detection error: {e}", file=sys.stderr)
return
# Only print if something is detected
if result and any(getattr(result, t, []) for t in [
"two_tone_result", "long_result", "hi_low_result", "pulsed_result", "mdc_result", "dtmf_result"
]):
print("Raw detection result:", result)
# Prepare alert summary for all relevant tone types
summary = []
if hasattr(result, "dtmf_result") and result.dtmf_result:
for dtmf in result.dtmf_result:
summary.append(f"DTMF Digit: {dtmf.get('digit', '?')} | Duration: {dtmf.get('length', '?')}s")
if hasattr(result, "hi_low_result") and result.hi_low_result:
for hl in result.hi_low_result:
summary.append(
f"Hi/Low Alternations: {hl.get('alternations', '?')} | Duration: {hl.get('length', '?')}s"
)
if hasattr(result, "mdc_result") and result.mdc_result:
for mdc in result.mdc_result:
summary.append(
f"MDC UnitID: {mdc.get('unitID', '?')} | Op: {mdc.get('op', '?')} | Duration: {mdc.get('length', '?')}s"
)
if hasattr(result, "pulsed_result") and result.pulsed_result:
for pl in result.pulsed_result:
summary.append(
f"Pulsed Tone: {pl.get('detected', '?')}Hz | Cycles: {pl.get('cycles', '?')} | Duration: {pl.get('length', '?')}s"
)
if hasattr(result, "two_tone_result") and result.two_tone_result:
for tt in result.two_tone_result:
summary.append(
f"Two-Tone: {tt.get('detected', ['?','?'])[0]}Hz/{tt.get('detected', ['?','?'])[1]}Hz | Tone A: {tt.get('tone_a_length', '?')}s | Tone B: {tt.get('tone_b_length', '?')}s"
)
if hasattr(result, "long_result") and result.long_result:
for lt in result.long_result:
summary.append(
f"Long Tone: {lt.get('detected', '?')}Hz | Duration: {lt.get('length', '?')}s"
)
if summary:
write_alert("\n".join(summary))
def get_supported_sample_rate(device, channels=1):
# Try common sample rates
for rate in [44100, 48000, 16000, 8000]:
try:
sd.check_input_settings(device=device, channels=channels, samplerate=rate)
return rate
except Exception:
continue
return None
def main():
global SAMPLE_RATE  # the detected soundcard rate below may override the configured default
print("="*80)
print(" iCAD Tone Decoder for Meshing-Around Booting Up!")
if AUDIO_SOURCE == "soundcard":
try:
if INPUT_DEVICE is not None:
sd.default.device = INPUT_DEVICE
device_info = sd.query_devices(INPUT_DEVICE, kind='input')
else:
device_info = sd.query_devices(sd.default.device, kind='input')
device_name = device_info['name']
# Detect supported sample rate
detected_rate = get_supported_sample_rate(sd.default.device, INPUT_CHANNELS)
if detected_rate:
SAMPLE_RATE = detected_rate
else:
print("No supported sample rate found, using default.", file=sys.stderr)
except Exception:
device_name = "Unknown"
print(f" Mode: Soundcard | Device: {device_name} | Sample Rate: {SAMPLE_RATE} Hz | Channels: {INPUT_CHANNELS}")
elif AUDIO_SOURCE == "http":
print(f" Mode: HTTP Stream | URL: {HTTP_STREAM_URL} | Buffer: {STREAM_BUFFER} bytes")
else:
print(f" Mode: {AUDIO_SOURCE}")
print("="*80)
time.sleep(1)
parser = argparse.ArgumentParser(description="ICAD Tone Detection")
parser.add_argument("--wav", type=str, help="Path to WAV file for detection")
args = parser.parse_args()
if args.wav:
print(f"Processing WAV file: {args.wav}")
try:
audio = AudioSegment.from_file(args.wav)
if audio.channels > 1:
audio = audio.set_channels(1)
print(f"AudioSegment: channels={audio.channels}, frame_rate={audio.frame_rate}, duration={len(audio)}ms")
detect_and_alert(audio, audio.frame_rate)
except Exception as e:
print(f"Error processing WAV file: {e}", file=sys.stderr)
return
print("Starting ICAD Tone Detection...")
if AUDIO_SOURCE == "http":
if not HTTP_STREAM_URL or HTTP_STREAM_URL.startswith("http://your-stream-url-here"):
print("ERROR: Please set a valid HTTP_STREAM_URL or provide a WAV file using --wav option.", file=sys.stderr)
sys.exit(2)
print(f"Listening to HTTP stream: {HTTP_STREAM_URL}")
try:
response = requests.get(HTTP_STREAM_URL, stream=True, timeout=10)
buffer = io.BytesIO()
try:
for chunk in response.iter_content(chunk_size=4096):
buffer.write(chunk)
# Use STREAM_BUFFER for detection window
if buffer.tell() > STREAM_BUFFER:
buffer.seek(0)
audio = AudioSegment.from_file(buffer, format="mp3")
if audio.channels > 1:
audio = audio.set_channels(1)
# --- Simple audio level detection ---
samples = np.array(audio.get_array_of_samples())
if samples.dtype != np.float32:
samples = samples.astype(np.float32) / 32767.0 # Normalize to -1..1
rms = np.sqrt(np.mean(samples**2))
if rms > 0.01:
print(f"Audio detected! RMS: {rms:.3f} ", end='\r')
if rms > 0.5:
print(f"WARNING: Audio too loud! RMS: {rms:.3f} ", end='\r')
# --- End audio level detection ---
detect_and_alert(audio, audio.frame_rate)
buffer = io.BytesIO()
except KeyboardInterrupt:
print("\nStopped by user.")
sys.exit(0)
except requests.exceptions.RequestException as e:
print(f"Connection error: {e}", file=sys.stderr)
sys.exit(3)
except Exception as e:
print(f"Error processing HTTP stream: {e}", file=sys.stderr)
sys.exit(4)
elif AUDIO_SOURCE == "soundcard":
print("Listening to audio device:")
buffer = np.array([], dtype=np.float32)
min_samples = MIN_SAMPLES # Use configured minimum samples
def callback(indata, frames, time_info, status):
nonlocal buffer
try:
samples = indata[:, 0]
buffer = np.concatenate((buffer, samples))
# --- Simple audio level detection ---
rms = np.sqrt(np.mean(samples**2))
if rms > 0.01:
print(f"Audio detected! RMS: {rms:.3f} ", end='\r')
if rms > 0.5:
print(f"WARNING: Audio too loud! RMS: {rms:.3f} ", end='\r')
# --- End audio level detection ---
# Only process when buffer is large enough
while buffer.size >= min_samples:
int_samples = np.int16(buffer[:min_samples] * 32767)
audio = AudioSegment(
data=int_samples.tobytes(),
sample_width=2,
frame_rate=SAMPLE_RATE,
channels=1
)
detect_and_alert(audio, SAMPLE_RATE)
buffer = buffer[min_samples:] # keep remainder for next window
except Exception as e:
print(f"Callback error: {e}", file=sys.stderr)
try:
with sd.InputStream(samplerate=SAMPLE_RATE, channels=INPUT_CHANNELS, dtype='float32', callback=callback):
print("Press Ctrl+C to stop.")
import signal
signal.pause() # Wait for Ctrl+C, keeps CPU usage minimal
except KeyboardInterrupt:
print("Stopped by user.")
except Exception as e:
print(f"Error accessing soundcard: {e}", file=sys.stderr)
sys.exit(5)
else:
print("Unknown AUDIO_SOURCE. Set to 'http' or 'soundcard'.", file=sys.stderr)
sys.exit(6)
if __name__ == "__main__":
main()
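The RMS audio-level check used in both the HTTP and soundcard paths above boils down to normalize-then-root-mean-square, with ~0.01 as the "audio present" threshold. A pure-Python restatement (the script itself uses NumPy):

```python
import math

def rms(int16_samples):
    """RMS of 16-bit samples after normalizing to the -1..1 range."""
    xs = [s / 32767.0 for s in int16_samples]
    return math.sqrt(sum(x * x for x in xs) / len(xs))

print(rms([0, 0, 0, 0]))           # 0.0 — silence
print(round(rms([32767] * 4), 3))  # 1.0 — full scale
```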

etc/install_service.sh (new file, 173 lines)

@@ -0,0 +1,173 @@
#!/usr/bin/env bash
set -euo pipefail
# Install mesh_bot as a systemd service for the current user.
# Defaults:
# - project path: /opt/meshing-around
# - service name: mesh_bot
# - service user: invoking user (SUDO_USER when using sudo)
SERVICE_NAME="mesh_bot"
PROJECT_PATH="/opt/meshing-around"
SERVICE_USER="${SUDO_USER:-${USER:-}}"
SERVICE_GROUP=""
USE_LAUNCH_SH=1
NEED_MESHTASTICD=1
DRY_RUN=0
usage() {
cat <<'EOF'
Usage:
bash etc/install_service.sh [options]
Options:
--project-path PATH Project root path (default: /opt/meshing-around)
--user USER Linux user to run the service as (default: invoking user)
--group GROUP Linux group to run the service as (default: user's primary group)
--direct-python Run python3 mesh_bot.py directly (skip launch.sh)
--no-meshtasticd Do not require meshtasticd.service to be present
--dry-run Print actions without changing the system
-h, --help Show this help
Examples:
sudo bash etc/install_service.sh
sudo bash etc/install_service.sh --project-path /opt/meshing-around --user $USER
EOF
}
log() {
printf '[install_service] %s\n' "$*"
}
die() {
printf '[install_service] ERROR: %s\n' "$*" >&2
exit 1
}
while [[ $# -gt 0 ]]; do
case "$1" in
--project-path)
[[ $# -ge 2 ]] || die "Missing value for --project-path"
PROJECT_PATH="$2"
shift 2
;;
--user)
[[ $# -ge 2 ]] || die "Missing value for --user"
SERVICE_USER="$2"
shift 2
;;
--group)
[[ $# -ge 2 ]] || die "Missing value for --group"
SERVICE_GROUP="$2"
shift 2
;;
--direct-python)
USE_LAUNCH_SH=0
shift
;;
--no-meshtasticd)
NEED_MESHTASTICD=0
shift
;;
--dry-run)
DRY_RUN=1
shift
;;
-h|--help)
usage
exit 0
;;
*)
die "Unknown option: $1"
;;
esac
done
[[ -n "$SERVICE_USER" ]] || die "Could not determine service user. Use --user USER."
[[ "$SERVICE_USER" != "root" ]] || die "Refusing to install service as root. Use --user USER."
if ! id "$SERVICE_USER" >/dev/null 2>&1; then
die "User '$SERVICE_USER' does not exist"
fi
if [[ -z "$SERVICE_GROUP" ]]; then
SERVICE_GROUP="$(id -gn "$SERVICE_USER")"
fi
id -g "$SERVICE_USER" >/dev/null 2>&1 || die "Could not determine group for user '$SERVICE_USER'"
[[ -d "$PROJECT_PATH" ]] || die "Project path not found: $PROJECT_PATH"
[[ -f "$PROJECT_PATH/mesh_bot.py" ]] || die "mesh_bot.py not found in $PROJECT_PATH"
if [[ $USE_LAUNCH_SH -eq 1 ]]; then
[[ -f "$PROJECT_PATH/launch.sh" ]] || die "launch.sh not found in $PROJECT_PATH"
EXEC_START="/usr/bin/bash $PROJECT_PATH/launch.sh mesh"
else
EXEC_START="/usr/bin/python3 $PROJECT_PATH/mesh_bot.py"
fi
if [[ $NEED_MESHTASTICD -eq 1 ]]; then
if ! systemctl list-units --type=service --no-pager --all | grep -q meshtasticd.service; then
die "meshtasticd.service dependency not found. To skip this check, run with the --no-meshtasticd flag."
fi
MESHTASTICD_DEPENDENCY_LINES=$'\nAfter=meshtasticd.service\nRequires=meshtasticd.service'
else
MESHTASTICD_DEPENDENCY_LINES=""
fi
SERVICE_FILE_CONTENT="[Unit]
Description=MESH-BOT
After=network.target${MESHTASTICD_DEPENDENCY_LINES}
[Service]
Type=simple
User=$SERVICE_USER
Group=$SERVICE_GROUP
WorkingDirectory=$PROJECT_PATH
ExecStart=$EXEC_START
KillSignal=SIGINT
Environment=REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
Environment=SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
Environment=PYTHONUNBUFFERED=1
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
"
TARGET_SERVICE_FILE="/etc/systemd/system/$SERVICE_NAME.service"
log "Service user: $SERVICE_USER"
log "Service group: $SERVICE_GROUP"
log "Project path: $PROJECT_PATH"
log "Service file: $TARGET_SERVICE_FILE"
log "ExecStart: $EXEC_START"
if [[ $DRY_RUN -eq 1 ]]; then
log "Dry run mode enabled. Service file content:"
printf '\n%s\n' "$SERVICE_FILE_CONTENT"
exit 0
fi
if [[ $EUID -ne 0 ]]; then
die "This script needs root privileges. Re-run with: sudo bash etc/install_service.sh"
fi
printf '%s' "$SERVICE_FILE_CONTENT" > "$TARGET_SERVICE_FILE"
chmod 644 "$TARGET_SERVICE_FILE"
# Ensure runtime files are writable by the service account.
mkdir -p "$PROJECT_PATH/logs" "$PROJECT_PATH/data"
chown -R "$SERVICE_USER:$SERVICE_GROUP" "$PROJECT_PATH/logs" "$PROJECT_PATH/data"
if [[ -f "$PROJECT_PATH/config.ini" ]]; then
chown "$SERVICE_USER:$SERVICE_GROUP" "$PROJECT_PATH/config.ini"
chmod 664 "$PROJECT_PATH/config.ini"
fi
systemctl daemon-reload
systemctl enable "$SERVICE_NAME.service"
systemctl restart "$SERVICE_NAME.service"
log "Service installed and started."
log "Check status with: sudo systemctl status $SERVICE_NAME.service"
log "View logs with: sudo journalctl -u $SERVICE_NAME.service -f"


@@ -1,9 +1,9 @@
#!/usr/bin/env python3
# YOLOv5 Object Detection with Movement Tracking using Raspberry Pi AI Camera or USB Webcam
# YOLOv5 Requirements: yolo5 https://docs.ultralytics.com/yolov5/quickstart_tutorial/
# PiCamera2 Requirements: picamera2 https://github.com/raspberrypi/picamera2
# PiCamera2 may need `sudo apt install imx500-all` on Raspberry Pi OS
# PiCamera2 Requirements: picamera2 https://github.com/raspberrypi/picamera2 `sudo apt install imx500-all`
# NVIDIA GPU PyTorch: https://developer.nvidia.com/cuda-downloads
# OCR with Tesseract: https://tesseract-ocr.github.io/tessdoc/Installation.html. `sudo apt-get install tesseract-ocr`
# Adjust settings below as needed; intended for meshing-around alert.txt output to Meshtastic
# 2025 K7MHI Kelly Keeton
@@ -16,6 +16,10 @@ MOVEMENT_THRESHOLD = 50 # Pixels to consider as movement (adjust as needed)
IGNORE_STATIONARY = True # Whether to ignore stationary objects in output
ALERT_FUSE_COUNT = 5 # Number of consecutive detections before alerting
ALERT_FILE_PATH = "alert.txt" # e.g., "/opt/meshing-around/alert.txt" or None for no file output
OCR_PROCESSING_ENABLED = True # Whether to perform OCR on detected objects
SAVE_EVIDENCE_IMAGES = True # Whether to save evidence images when OCR text is found in bbox
EVIDENCE_IMAGE_DIR = "." # Change to desired directory, e.g., "/opt/meshing-around/data/images"
EVIDENCE_IMAGE_PATTERN = "evidence_{timestamp}.png"
try:
import torch # YOLOv5 https://docs.ultralytics.com/yolov5/quickstart_tutorial/
@@ -24,7 +28,10 @@ try:
import time
import warnings
import sys
import os
import datetime
if OCR_PROCESSING_ENABLED:
import pytesseract # pip install pytesseract
if PI_CAM:
from picamera2 import Picamera2 # pip install picamera2
@@ -60,10 +67,10 @@ else:
cap.set(cv2.CAP_PROP_FRAME_WIDTH, cam_res[0])
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, cam_res[1])
print("="*40)
print(f" Sentinal Vision 3000 Booting Up!")
print(f" Model: {YOLO_MODEL} | Camera: {CAMERA_TYPE} | Resolution: {RESOLUTION}")
print("="*40)
print("="*80)
print(f" Sentinal Vision 3000 Booting Up!")
print(f" Model: {YOLO_MODEL} | Camera: {CAMERA_TYPE} | Resolution: {RESOLUTION} | OCR: {'Enabled' if OCR_PROCESSING_ENABLED else 'Disabled'}")
print("="*80)
time.sleep(1)
def alert_output(msg, alert_file_path=ALERT_FILE_PATH):
@@ -74,6 +81,22 @@ def alert_output(msg, alert_file_path=ALERT_FILE_PATH):
with open(alert_file_path, "w") as f: # Use "a" to append instead of overwrite
f.write(msg_no_time + "\n")
def extract_text_from_bbox(img, bbox):
try:
cropped = img.crop((bbox[0], bbox[1], bbox[2], bbox[3]))
text = pytesseract.image_to_string(cropped, config="--psm 7")
text_stripped = text.strip()
if text_stripped and SAVE_EVIDENCE_IMAGES:
timestamp = datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
image_path = os.path.join(EVIDENCE_IMAGE_DIR, EVIDENCE_IMAGE_PATTERN.format(timestamp=timestamp))
cropped.save(image_path)
print(f"Saved evidence image: {image_path}")
return f"{text_stripped}"
except Exception as e:
print(f"Error during OCR: {e}")
print("More at https://tesseract-ocr.github.io/tessdoc/Installation.html")
return False
try:
i = 0 # Frame counter if zero will be infinite
system_normal_printed = False # system nominal flag, if true disables printing
@@ -134,23 +157,40 @@ try:
if fuse_counters[obj_id] < ALERT_FUSE_COUNT:
continue # Don't alert yet
# OCR on detected region
bbox = [row['xmin'], row['ymin'], row['xmax'], row['ymax']]
if OCR_PROCESSING_ENABLED:
ocr_text = extract_text_from_bbox(img, bbox)
if prev_x is not None:
delta = x_center - prev_x
if abs(delta) < MOVEMENT_THRESHOLD:
direction = "stationary"
if IGNORE_STATIONARY:
if obj_id not in __builtins__.stationary_reported:
alert_output(f"[{timestamp}] {count} {row['name']} {direction}")
msg = f"[{timestamp}] {count} {row['name']} {direction}"
if OCR_PROCESSING_ENABLED and ocr_text:
msg += f" | OCR: {ocr_text}"
alert_output(msg)
__builtins__.stationary_reported.add(obj_id)
else:
alert_output(f"[{timestamp}] {count} {row['name']} {direction}")
msg = f"[{timestamp}] {count} {row['name']} {direction}"
if OCR_PROCESSING_ENABLED and ocr_text:
msg += f" | OCR: {ocr_text}"
alert_output(msg)
else:
direction = "moving right" if delta > 0 else "moving left"
alert_output(f"[{timestamp}] {count} {row['name']} {direction}")
msg = f"[{timestamp}] {count} {row['name']} {direction}"
if OCR_PROCESSING_ENABLED and ocr_text:
msg += f" | OCR: {ocr_text}"
alert_output(msg)
__builtins__.stationary_reported.discard(obj_id)
else:
direction = "detected"
alert_output(f"[{timestamp}] {count} {row['name']} {direction}")
msg = f"[{timestamp}] {count} {row['name']} {direction}"
if OCR_PROCESSING_ENABLED and ocr_text:
msg += f" | OCR: {ocr_text}"
alert_output(msg)
# Reset fuse counters for objects not detected in this frame
for obj_id in list(__builtins__.fuse_counters.keys()):


@@ -56,6 +56,14 @@ if [[ $NOPE -eq 1 ]]; then
sudo rm -f /etc/systemd/system/ollama.service
sudo rm -rf /usr/local/bin/ollama
sudo rm -rf ~/.ollama
# remove ollama service account if exists
if id ollama &>/dev/null; then
sudo userdel ollama || true
fi
# remove ollama group if exists
if getent group ollama &>/dev/null; then
sudo groupdel ollama || true
fi
echo "Ollama removed."
else
echo "Ollama not removed."
@@ -99,6 +107,18 @@ if [[ ! -w ${program_path} ]]; then
exit 1
fi
# check if we have git and curl installed
if ! command -v git &> /dev/null
then
printf "git not found, trying 'apt-get install git'\n"
sudo apt-get install git
fi
if ! command -v curl &> /dev/null
then
printf "curl not found, trying 'apt-get install curl'\n"
sudo apt-get install curl
fi
# check if we are in /opt/meshing-around
if [[ "$program_path" != "/opt/meshing-around" ]]; then
echo "----------------------------------------------"
@@ -110,6 +130,7 @@ if [[ "$program_path" != "/opt/meshing-around" ]]; then
if [[ $(echo "$move" | grep -i "^y") ]]; then
sudo mv "$program_path" /opt/meshing-around
cd /opt/meshing-around
sudo git config --global --add safe.directory /opt/meshing-around
printf "\nProject moved to /opt/meshing-around.\n"
printf "Please re-run the installer from the new location.\n"
exit 0
@@ -449,8 +470,8 @@ if [[ $(echo "${embedded}" | grep -i "^n") ]]; then
printf "sudo systemctl stop %s.service\n" "$service" >> install_notes.txt
printf "sudo systemctl disable %s.service\n" "$service" >> install_notes.txt
printf "\n older cron statement to run the report generator hourly:\n" >> install_notes.txt
printf "0 * * * * /usr/bin/python3 $program_path/etc/report_generator5.py" >> install_notes.txt
printf " to edit crontab run 'crontab -e'\n" >> install_notes.txt
#printf "0 * * * * /usr/bin/python3 $program_path/etc/report_generator5.py" >> install_notes.txt
#printf " to edit crontab run 'crontab -e'\n" >> install_notes.txt
printf "\nmesh_bot_reporting.timer installed to run daily at 4:20 am\n" >> install_notes.txt
printf "Check timer status: systemctl status mesh_bot_reporting.timer\n" >> install_notes.txt
printf "List all timers: systemctl list-timers\n" >> install_notes.txt
@@ -475,21 +496,6 @@ else
# add service dependency for meshtasticd into service file
#replace="s|After=network.target|After=network.target meshtasticd.service|g"
# Set up the meshing around service
sudo cp /opt/meshing-around/etc/$service.service /etc/systemd/system/$service.service
sudo systemctl daemon-reload
sudo systemctl enable $service.service
sudo systemctl start $service.service
sudo systemctl daemon-reload
# # check if the cron job already exists
# if ! crontab -l | grep -q "$chronjob"; then
# # add the cron job to run the report_generator5.py script
# (crontab -l 2>/dev/null; echo "$chronjob") | crontab -
# printf "\nAdded cron job to run report_generator5.py\n"
# else
# printf "\nCron job already exists, skipping\n"
# fi
# document the service install
printf "Reference following commands:\n\n" > install_notes.txt
printf "sudo systemctl status %s.service\n" "$service" >> install_notes.txt
@@ -500,8 +506,8 @@ else
printf "sudo systemctl stop %s.service\n" "$service" >> install_notes.txt
printf "sudo systemctl disable %s.service\n" "$service" >> install_notes.txt
printf "older crontab to run the report generator hourly:" >> install_notes.txt
printf "0 * * * * /usr/bin/python3 $program_path/etc/report_generator5.py" >> install_notes.txt
printf " to edit crontab run 'crontab -e'" >> install_notes.txt
#printf "0 * * * * /usr/bin/python3 $program_path/etc/report_generator5.py" >> install_notes.txt
#printf " to edit crontab run 'crontab -e'" >> install_notes.txt
printf "\nmesh_bot_reporting.timer installed to run daily at 4:20 am\n" >> install_notes.txt
printf "Check timer status: systemctl status mesh_bot_reporting.timer\n" >> install_notes.txt
printf "List all timers: systemctl list-timers\n" >> install_notes.txt


@@ -1,4 +1,4 @@
#!/usr/bin/python3
#!/usr/bin/env python3
# Meshtastic Autoresponder MESH Bot
# K7MHI Kelly Keeton 2025
try:
@@ -109,7 +109,7 @@ def auto_response(message, snr, rssi, hop, pkiStatus, message_from_id, channel_n
"setsms": lambda: handle_sms( message_from_id, message),
"sitrep": lambda: handle_lheard(message, message_from_id, deviceID, isDM),
"sms:": lambda: handle_sms(message_from_id, message),
"solar": lambda: drap_xray_conditions() + "\n" + solar_conditions(),
"solar": lambda: drap_xray_conditions() + "\n" + solar_conditions() + "\n" + get_noaa_scales_summary(),
"sun": lambda: handle_sun(message_from_id, deviceID, channel_number),
"survey": lambda: surveyHandler(message, message_from_id, deviceID),
"s:": lambda: surveyHandler(message, message_from_id, deviceID),
@@ -287,10 +287,12 @@ def handle_ping(message_from_id, deviceID, message, hop, snr, rssi, isDM, chann
#flood
msg += " [F]"
if (float(snr) != 0 or float(rssi) != 0) and "Hops" not in hop:
if (float(snr) != 0 or float(rssi) != 0) and "Hop" not in hop:
msg += f"\nSNR:{snr} RSSI:{rssi}"
elif "Hops" in hop:
msg += f"\n{hop}🐇 "
elif "Hop" in hop:
# janky: strip the words Gateway, Direct, or MQTT if present
hop = hop.replace("Gateway", "").replace("Direct", "").replace("MQTT", "").strip()
msg += f"\n{hop} "
if "@" in message:
msg = msg + " @" + message.split("@")[1]
@@ -660,7 +662,7 @@ def handle_llm(message_from_id, channel_number, deviceID, message, publicChannel
if not any(node['nodeID'] == message_from_id and node['welcome'] == True for node in seenNodes):
if (channel_number == publicChannel and my_settings.antiSpam) or my_settings.useDMForResponse:
# send via DM
send_message(my_settings.welcome_message, channel_number, message_from_id, deviceID)
send_message(my_settings.welcome_message, 0, message_from_id, deviceID)
else:
# send via channel
send_message(my_settings.welcome_message, channel_number, 0, deviceID)
@@ -693,7 +695,7 @@ def handle_llm(message_from_id, channel_number, deviceID, message, publicChannel
if msg != '':
if (channel_number == publicChannel and my_settings.antiSpam) or my_settings.useDMForResponse:
# send via DM
send_message(msg, channel_number, message_from_id, deviceID)
send_message(msg, 0, message_from_id, deviceID)
else:
# send via channel
send_message(msg, channel_number, 0, deviceID)
@@ -713,6 +715,7 @@ def handle_llm(message_from_id, channel_number, deviceID, message, publicChannel
def handleDopeWars(message, nodeID, rxNode):
global dwPlayerTracker
global dwHighScore
msg = ""
# Find player in tracker
player = next((p for p in dwPlayerTracker if p.get('userID') == nodeID), None)
@@ -723,7 +726,6 @@ def handleDopeWars(message, nodeID, rxNode):
'userID': nodeID,
'last_played': time.time(),
'cmd': 'new',
# ... add other fields as needed ...
}
dwPlayerTracker.append(player)
msg = 'Welcome to 💊Dope Wars💉 You have ' + str(total_days) + ' days to make as much 💰 as possible! '
@@ -736,11 +738,6 @@ def handleDopeWars(message, nodeID, rxNode):
if p.get('userID') == nodeID:
p['last_played'] = time.time()
msg = playDopeWars(nodeID, message)
# if message starts with 'e' (exit), remove player from tracker
if message.lower().startswith('e'):
dwPlayerTracker[:] = [p for p in dwPlayerTracker if p.get('userID') != nodeID]
msg = 'You have exited Dope Wars.'
return msg
def handle_gTnW(chess = False):
@@ -1909,24 +1906,38 @@ def onReceive(packet, interface):
# check if the packet has a channel flag use it ## FIXME needs to be channel hash lookup
if packet.get('channel'):
channel_number = packet.get('channel')
# get channel name from channel number from connected devices
for device in channel_list:
if device["interface_id"] == rxNode:
device_channels = device['channels']
for chan_name, info in device_channels.items():
if info['number'] == channel_number:
channel_name = chan_name
channel_name = "unknown"
try:
res = resolve_channel_name(channel_number, rxNode, interface)
if res:
try:
channel_name, _ = res
except Exception:
channel_name = "unknown"
else:
# Search all interfaces for this channel
cache = build_channel_cache()
found_on_other = None
for device in cache:
for chan_name, info in device.get("channels", {}).items():
if str(info.get('number')) == str(channel_number) or str(info.get('hash')) == str(channel_number):
found_on_other = device.get("interface_id")
found_chan_name = chan_name
break
if found_on_other:
break
# get channel hashes for the interface
device = next((d for d in channel_list if d["interface_id"] == rxNode), None)
if device:
# Find the channel name whose hash matches channel_number
for chan_name, info in device['channels'].items():
if info['hash'] == channel_number:
print(f"Matched channel hash {info['hash']} to channel name {chan_name}")
channel_name = chan_name
break
if found_on_other and found_on_other != rxNode:
logger.debug(
f"System: Received Packet on Channel:{channel_number} ({found_chan_name}) on Interface:{rxNode}, but this channel is configured on Interface:{found_on_other}"
)
except Exception as e:
logger.debug(f"System: channel resolution error: {e}")
#debug channel info
# if "unknown" in str(channel_name):
# logger.debug(f"System: Received Packet on Channel:{channel_number} on Interface:{rxNode}")
# else:
# logger.debug(f"System: Received Packet on Channel:{channel_number} Name:{channel_name} on Interface:{rxNode}")
# check if the packet has a simulator flag
simulator_flag = packet.get('decoded', {}).get('simulator', False)
@@ -2015,25 +2026,32 @@ def onReceive(packet, interface):
else:
hop_count = hop_away
if hop == "" and hop_count > 0:
if hop_count > 0:
# set hop string from calculated hop count
hop = f"{hop_count} Hop" if hop_count == 1 else f"{hop_count} Hops"
if hop_start == hop_limit and "lora" in str(transport_mechanism).lower() and (snr != 0 or rssi != 0):
if hop_start == hop_limit and "lora" in str(transport_mechanism).lower() and (snr != 0 or rssi != 0) and hop_count == 0:
# 2.7+ firmware direct hop over LoRa
hop = "Direct"
if ((hop_start == 0 and hop_limit >= 0) or via_mqtt or ("mqtt" in str(transport_mechanism).lower())):
if via_mqtt or "mqtt" in str(transport_mechanism).lower():
hop = "MQTT"
elif hop == "" and hop_count == 0 and (snr != 0 or rssi != 0):
# came from UDP, but signal info is present, so a gateway was used
hop = "Gateway"
elif "unknown" in str(transport_mechanism).lower() and (snr == 0 and rssi == 0):
# definitely sourced from a UDP-like host
via_mqtt = True
elif "udp" in str(transport_mechanism).lower():
hop = "Gateway"
if hop in ("MQTT", "Gateway") and hop_count > 0:
hop = f"{hop_count} Hops"
hop = f" {hop_count} Hops"
# Add relay node info if present
if packet.get('relayNode') is not None:
relay_val = packet['relayNode']
last_byte = relay_val & 0xFF
if last_byte == 0x00:
hex_val = 'OldFW'
else:
hex_val = f"{last_byte:02X}"
hop += f" Relay:{hex_val}"
if enableHopLogs:
logger.debug(f"System: Packet HopDebugger: hop_away:{hop_away} hop_limit:{hop_limit} hop_start:{hop_start} calculated_hop_count:{hop_count} final_hop_value:{hop} via_mqtt:{via_mqtt} transport_mechanism:{transport_mechanism} Hostname:{rxNodeHostName}")
@@ -2254,8 +2272,11 @@ async def main():
# Create core tasks
tasks.append(asyncio.create_task(start_rx(), name="mesh_rx"))
tasks.append(asyncio.create_task(watchdog(), name="watchdog"))
# Add optional tasks
if my_settings.dataPersistence_enabled:
tasks.append(asyncio.create_task(dataPersistenceLoop(), name="data_persistence"))
if my_settings.file_monitor_enabled:
tasks.append(asyncio.create_task(handleFileWatcher(), name="file_monitor"))


@@ -353,16 +353,15 @@ The system uses SQLite with four tables:
| `howfar` | Distance traveled since last check |
| `howtall` | Calculate height using sun angle |
| `whereami` | Show current location/address |
| `map` | Log/view location data to map.csv |
| `map` | Save/retrieve locations, get headings, manage location database |
Configure in `[location]` section of `config.ini`.
---
## 📍 Map Command
The `map` command allows you to log your current GPS location with a custom description. This is useful for mapping mesh nodes, events, or points of interest.
The `map` command provides a comprehensive location management system that allows you to save, retrieve, and manage locations in a SQLite database. You can save private locations (visible only to you) or public locations (visible to all nodes), get headings and distances to saved locations, and manage your location data.
### Usage
@@ -370,23 +369,117 @@ The `map` command allows you to log your current GPS location with a custom desc
```
map help
```
Displays usage instructions for the map command.
Displays usage instructions for all map commands.
- **Log a Location**
- **Save a Private Location**
```
map <description>
map save <name> [description]
```
Saves your current location as a private location (only visible to your node).
Examples:
```
map save BaseCamp
map save BaseCamp Main base camp location
```
- **Save a Public Location**
```
map save public <name> [description]
```
Saves your current location as a public location (visible to all nodes).
Examples:
```
map save public TrailHead
map save public TrailHead Starting point for hiking trail
```
**Note:** If `public_location_admin_manage = True` in config, only administrators can save public locations.
- **Get Heading to a Location**
```
map <name>
```
Retrieves a saved location and provides heading (bearing) and distance from your current position.
The system prioritizes your private location if both private and public locations exist with the same name.
Example:
```
map Found a new mesh node near the park
map BaseCamp
```
Response includes:
- Location coordinates
- Compass heading (bearing)
- Distance
- Description (if provided)
- **Get Heading to a Public Location**
```
map public <name>
```
Specifically retrieves a public location, even if you have a private location with the same name.
Example:
```
map public BaseCamp
```
- **List All Saved Locations**
```
map list
```
Lists all locations you can access:
- Your private locations (🔒Private)
- All public locations (🌐Public)
Locations are sorted with private locations first, then public locations, both alphabetically by name.
- **Delete a Location**
```
map delete <name>
```
Deletes a location from the database.
**Permission Rules:**
- If `delete_public_locations_admins_only = False` (default):
- Users can delete their own private locations
- Users can delete public locations they created
- Anyone can delete any public location
- If `delete_public_locations_admins_only = True`:
- Only administrators can delete public locations
The system prioritizes deleting your private location if both private and public locations exist with the same name.
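The permission rules above can be summarized in a short sketch (a hypothetical helper, not the bot's actual code; `requester_id`, `loc`, `is_admin`, and `admins_only` are assumed names):

```python
def can_delete(requester_id, loc, is_admin, admins_only=False):
    # loc is assumed to be a dict like {"owner": nodeID, "public": bool}
    if not loc["public"]:
        # private locations: only the owner may delete
        return loc["owner"] == requester_id
    if admins_only:
        # delete_public_locations_admins_only = True: admins only
        return is_admin
    # default: any node may delete a public location
    return True
```

Here `admins_only` mirrors the `delete_public_locations_admins_only` config flag.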
- **Legacy CSV Logging**
```
map log <description>
```
Logs your current location to the legacy CSV file (`data/map_data.csv`) with a description. This is the original map functionality preserved for backward compatibility.
Example:
```
map log Found a new mesh node near the park
```
This will log your current location with the description "Found a new mesh node near the park".
### How It Works
- The bot records your user ID, latitude, longitude, and your description in a CSV file (`data/map_data.csv`).
- If your location data is missing or invalid, you'll receive an error message.
- You can view or process the CSV file later for mapping or analysis.
- **Database Storage:** All locations are stored in a SQLite database (`data/locations.db` by default, configurable via `locations_db` in config.ini).
- **Location Types:**
- **Private Locations:** Only visible to the node that created them
- **Public Locations:** Visible to all nodes
- **Conflict Resolution:** If you try to save a private location with the same name as an existing public location, you'll be notified that a public record with that name already exists.
- **Distance Calculation:** Uses the Haversine formula for accurate distance calculations. Distances less than 0.25 miles are displayed in feet; otherwise in miles (or kilometers if metric is enabled).
- **Heading Calculation:** Provides compass bearing (0-360 degrees) from your current location to the target location.
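As a rough sketch of the two calculations described above (standalone functions for illustration, not the bot's actual implementation):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance via the Haversine formula (Earth radius ~3958.8 mi)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 3958.8 * 2 * math.asin(math.sqrt(a))

def initial_bearing(lat1, lon1, lat2, lon2):
    # Compass bearing (0-360 degrees) from point 1 toward point 2
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    x = math.sin(dlmb) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    return (math.degrees(math.atan2(x, y)) + 360) % 360
```

Per the display rule above, a result under 0.25 miles would be multiplied by 5280 and shown in feet.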
### Configuration
Configure in `[location]` section of `config.ini`:
- `locations_db` - Path to the SQLite database file (default: `data/locations.db`)
- `public_location_admin_manage` - If `True`, only administrators can save public locations (default: `False`)
- `delete_public_locations_admins_only` - If `True`, only administrators can delete public locations (default: `False`)
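Assuming the defaults listed above, a minimal `[location]` section in `config.ini` might look like this (the values shown are the documented defaults, not required settings):

```ini
[location]
locations_db = data/locations.db
public_location_admin_manage = False
delete_public_locations_admins_only = False
```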
**Tip:** Use `map help` at any time to see these instructions in the bot.


@@ -255,7 +255,19 @@ def bbs_sync_posts(input, peerNode, RxNode):
#store the message
subject = input.split("$")[1].split("#")[0]
body = input.split("#")[1]
fromNodeHex = input.split("@")[1]
fromNodeHex = body.split("@")[1]
#validate the fromNodeHex is a valid hex number
try:
int(fromNodeHex, 16)
except ValueError:
logger.error(f"System: Invalid fromNodeHex in bbslink from node {peerNode}: {input}")
fromNodeHex = hex(peerNode)
#validate the subject and body are not empty
if subject.strip() == "" or body.strip() == "":
logger.error(f"System: Empty subject or body in bbslink from node {peerNode}: {input}")
return "System: Invalid bbslink format."
#store the message in the bbsdb
try:
bbs_post_message(subject, body, int(fromNodeHex, 16))
except:


@@ -46,7 +46,7 @@ def handledxcluster(message, nodeID, deviceID):
freq_hz = spot.get('freq', spot.get('frequency', None))
frequency = f"{float(freq_hz)/1e6:.3f} MHz" if freq_hz else "N/A"
mode_val = spot.get('mode', 'N/A')
comment = spot.get('comment', '')
comment = spot.get('comment') or ''
if len(comment) > 111: # Truncate comment to 111 chars
comment = comment[:111] + '...'
sig = spot.get('sig', '')


@@ -6,6 +6,7 @@ import random
import copy
import uuid
import time
from modules.settings import battleshipTracker
OCEAN = "~"
FIRE = "x"


@@ -12,6 +12,9 @@ CELL_SIZE = 40
BOARD_MARGIN = 50
STATUS_WIDTH = 320
latest_battleship_board = None
latest_battleship_meta = None
def draw_board(screen, board, top_left, cell_size, show_ships=False):
font = pygame.font.Font(None, 28)
x0, y0 = top_left
@@ -92,6 +95,11 @@ def battleship_visual_main(game):
pygame.quit()
sys.exit()
def parse_battleship_message(msg):
# Expected payload:
# MBSP|label|timestamp|nodeID|deviceID|sessionID|status|shotsFired|boardType|shipsStatus|boardString
print("Parsing Battleship message:", msg)
# if __name__ == "__main__":
# # Example: create a new game and show the boards
# game = Battleship(vs_ai=True)


@@ -5,6 +5,7 @@ import random
import time
import pickle
from modules.log import logger
from modules.settings import dwPlayerTracker
# Global variables
total_days = 7 # number of days or rotations the player has to play
@@ -382,15 +383,19 @@ def endGameDw(nodeID):
with open('data/dopewar_hs.pkl', 'wb') as file:
pickle.dump(dwHighScore, file)
msg = "You finished with $" + "{:,}".format(cash) + " and beat the high score!🎉💰"
return msg
if cash > starting_cash:
elif cash > starting_cash:
msg = 'You made money! 💵 Up ' + str((cash/starting_cash).__round__()) + 'x! Well done.'
return msg
if cash == starting_cash:
elif cash == starting_cash:
msg = 'You broke even... hope you at least had fun 💉💊'
return msg
if cash < starting_cash:
else:
msg = "You lost money, better go get a real job.💸"
# remove player from all trackers and databases
dwPlayerTracker[:] = [p for p in dwPlayerTracker if p.get('userID') != nodeID]
dwCashDb[:] = [p for p in dwCashDb if p.get('userID') != nodeID]
dwInventoryDb[:] = [p for p in dwInventoryDb if p.get('userID') != nodeID]
dwLocationDb[:] = [p for p in dwLocationDb if p.get('userID') != nodeID]
dwGameDayDb[:] = [p for p in dwGameDayDb if p.get('userID') != nodeID]
return msg
@@ -495,6 +500,11 @@ def playDopeWars(nodeID, cmd):
if dwGameDayDb[i].get('userID') == nodeID:
inGame = True
# Allow ending the game from any state while a session is active.
cmd_normalized = str(cmd).strip().lower()
if inGame and cmd_normalized in ['e', 'end', 'quit', 'exit']:
return endGameDw(nodeID)
if not inGame:
# initialize player in the database
loc = generatelocations()
@@ -605,9 +615,6 @@ def playDopeWars(nodeID, cmd):
# render_game_screen
msg = render_game_screen(nodeID, game_day, total_days, loc_choice, -1, price_list, 0, 'nothing')
return msg
elif 'e' in menu_choice:
msg = endGameDw(nodeID)
return msg
else:
msg = f'example buy:\nb,drug#,qty# or Sell: s,1,10 qty can be (m)ax\n f,p or end'
return msg


@@ -5,6 +5,7 @@ import random
import time
import pickle
from modules.log import logger
from modules.settings import golfTracker
# Clubs setup
driver_distances = list(range(230, 280, 5))


@@ -10,6 +10,7 @@ import json
import random
import os
from modules.log import logger
from modules.settings import hamtestTracker
class HamTest:
def __init__(self):
@@ -135,8 +136,16 @@ class HamTest:
# remove the game[id] from the list
del self.game[id]
# hamtestTracker stores dicts like {"nodeID": nodeID, ...}
for i in range(len(hamtestTracker)):
try:
if hamtestTracker[i].get('nodeID') == id:
hamtestTracker.pop(i)
break
except Exception:
continue
return msg
hamtestTracker = []
hamtest = HamTest()


@@ -3,6 +3,7 @@ from modules.log import logger, getPrettyTime
import os
import json
import random
from modules.settings import hangmanTracker
class Hangman:
WORDS = [


@@ -145,10 +145,9 @@ def tableOfContents():
'file': '📁', 'folder': '📂', 'sports': '🏅', 'athlete': '🏃', 'competition': '🏆', 'race': '🏁', 'tournament': '🏆', 'champion': '🏆', 'medal': '🏅', 'victory': '🏆', 'win': '🏆', 'lose': '😞',
'draw': '🤝', 'team': '👥', 'player': '👤', 'coach': '👨‍🏫', 'referee': '🧑‍⚖️', 'stadium': '🏟️', 'arena': '🏟️', 'field': '🏟️', 'court': '🏟️', 'track': '🏟️', 'gym': '🏋️', 'fitness': '🏋️', 'exercise': '🏋️',
'workout': '🏋️', 'training': '🏋️', 'practice': '🏋️', 'game': '🎮', 'match': '🎮', 'score': '🏅', 'goal': '🥅', 'point': '🏅', 'basket': '🏀', 'home run': '⚾️', 'strike': '🎳', 'spare': '🎳', 'frame': '🎳',
'inning': '⚾️', 'quarter': '🏈', 'half': '🏈', 'overtime': '🏈', 'penalty': '⚽️', 'foul': '', 'timeout': '', 'substitute': '🔄', 'bench': '🪑', 'sideline': '🏟', 'dugout': '⚾️', 'locker room': '🚪', 'shower': '🚿',
'uniform': '👕', 'jersey': '👕', 'cleats': '👟', 'helmet': '⛑️', 'pads': '🛡️', 'gloves': '🧤', 'bat': '⚾️', 'ball': '⚽️', 'puck': '🏒', 'stick': '🏒', 'net': '🥅', 'hoop': '🏀', 'goalpost': '🥅', 'whistle': '🔔',
'scoreboard': '📊', 'fans': '👥', 'crowd': '👥', 'cheer': '📣', 'boo': '😠', 'applause': '👏', 'celebration': '🎉', 'parade': '🎉', 'trophy': '🏆', 'medal': '🏅', 'ribbon': '🎀', 'cup': '🏆', 'championship': '🏆',
'league': '🏆', 'season': '🏆', 'playoffs': '🏆', 'finals': '🏆', 'runner-up': '🥈', 'third place': '🥉', 'snowman': '☃️', 'snowmen': '⛄️'
'inning': '⚾️', 'shower': '🚿', 'uniform': '👕', 'jersey': '👕', 'cleats': '👟', 'helmet': '⛑️', 'pads': '🛡️', 'gloves': '🧤', 'bat': '⚾️', 'ball': '⚽️', 'puck': '🏒', 'stick': '🏒', 'net': '🥅', 'goalpost': '🥅',
'scoreboard': '📊', 'fans': '👥', 'crowd': '👥', 'cheer': '📣', 'boo': '😠', 'applause': '👏', 'celebration': '🎉', 'parade': '🎉', 'trophy': '🏆', 'medal': '🏅', 'ribbon': '🎀',
'third place': '🥉', 'snowman': '☃️', 'snowmen': '⛄️'
}
return wordToEmojiMap


@@ -211,7 +211,7 @@ def compareCodeMMind(secret_code, user_guess, nodeID):
def playGameMMind(diff, secret_code, turn_count, nodeID, message):
msg = ''
won = False
if turn_count <= 10:
if turn_count < 11:
user_guess = getGuessMMind(diff, message, nodeID)
if user_guess == "XXXX":
msg += f"Invalid guess. Please enter 4 valid color letters.\n🔴🟢🔵🔴 is RGBR"
@@ -240,7 +240,7 @@ def playGameMMind(diff, secret_code, turn_count, nodeID, message):
# reset turn count in tracker
for i in range(len(mindTracker)):
if mindTracker[i]['nodeID'] == nodeID:
mindTracker[i]['turns'] = 0
mindTracker[i]['turns'] = 1
mindTracker[i]['secret_code'] = ''
mindTracker[i]['cmd'] = 'new'
@@ -277,6 +277,7 @@ def start_mMind(nodeID, message):
if mindTracker[i]['nodeID'] == nodeID:
mindTracker[i]['cmd'] = 'makeCode'
mindTracker[i]['diff'] = diff
mindTracker[i]['turns'] = 1
# Return color message to player
msg += chooseDifficultyMMind(message.lower()[0])
return msg


@@ -4,6 +4,7 @@
import random
import time
import modules.settings as my_settings
from modules.settings import tictactoeTracker
useSynchCompression = True
if useSynchCompression:
@@ -16,9 +17,14 @@ class TicTacToe:
if getattr(my_settings, "disable_emojis_in_games", False):
self.X = "X"
self.O = "O"
self.digit_emojis = None
else:
self.X = "❌"
self.O = "⭕️"
# Unicode emoji digits 1️⃣-9️⃣
self.digit_emojis = [
"1️⃣", "2️⃣", "3️⃣", "4️⃣", "5️⃣", "6️⃣", "7️⃣", "8️⃣", "9️⃣"
]
self.display_module = display_module
self.game = {}
self.win_lines_3d = self.generate_3d_win_lines()
@@ -73,7 +79,13 @@ class TicTacToe:
row = []
for j in range(3):
cell = b[i*3+j]
row.append(cell if cell != " " else str(i*3+j+1))
if cell != " ":
row.append(cell)
else:
if self.digit_emojis:
row.append(self.digit_emojis[i*3+j])
else:
row.append(str(i*3+j+1))
s += " | ".join(row) + "\n"
return s
return ""
@@ -147,10 +159,24 @@ class TicTacToe:
msg = self.new_game(nodeID, new_mode, g["channel"], g["deviceID"])
return msg
try:
pos = int(input_msg)
except Exception:
return f"Enter a number between 1 and {max_pos}."
# Accept emoji digits as input
pos = None
# Try to match emoji digits if enabled
if self.digit_emojis:
try:
# Remove variation selectors for matching
normalized_input = input_msg.replace("\ufe0f", "")
for idx, emoji in enumerate(self.digit_emojis[:max_pos]):
if normalized_input == emoji.replace("\ufe0f", ""):
pos = idx + 1
break
except Exception:
pass
if pos is None:
try:
pos = int(input_msg)
except Exception:
return f"Enter a number or emoji between 1 and {max_pos}."
if not self.make_move(nodeID, pos):
return f"Invalid move! Pick 1-{max_pos}:"


@@ -4,6 +4,7 @@ import random
import time
import pickle
from modules.log import logger, getPrettyTime
from modules.settings import vpTracker
vpStartingCash = 20
# Define the Card class
@@ -260,6 +261,7 @@ class PlayerVP:
def getLastCmdVp(nodeID):
global vpTracker
last_cmd = ""
for i in range(len(vpTracker)):
if vpTracker[i]['nodeID'] == nodeID:
@@ -267,6 +269,7 @@ def getLastCmdVp(nodeID):
return last_cmd
def setLastCmdVp(nodeID, cmd):
global vpTracker
for i in range(len(vpTracker)):
if vpTracker[i]['nodeID'] == nodeID:
vpTracker[i]['cmd'] = cmd

File diff suppressed because it is too large


@@ -5,6 +5,7 @@ import schedule
from datetime import datetime
from modules.log import logger
from modules.system import send_message
from modules.settings import MOTD, schedulerMotd, schedulerMessage, schedulerChannel, schedulerInterface, schedulerValue, schedulerTime, schedulerInterval
async def run_scheduler_loop(interval=1):
logger.debug(f"System: Scheduler loop started Tasks: {len(schedule.jobs)}, Details:{extract_schedule_fields(schedule.get_jobs())}")
@@ -24,11 +25,12 @@ async def run_scheduler_loop(interval=1):
except asyncio.CancelledError:
logger.debug("System: Scheduler loop cancelled, shutting down.")
def safe_int(val, default=0, type=""):
def safe_int(val, default=0, type=''):
try:
return int(val)
except (ValueError, TypeError):
logger.debug(f"System: Scheduler config {type} error '{val}' to int, using default {default}")
if val != '':
logger.debug(f"System: Scheduler config {type} error '{val}' to int, using default {default}")
return default
def extract_schedule_fields(jobs):


@@ -135,6 +135,10 @@ if 'inventory' not in config:
config['inventory'] = {'enabled': 'False', 'inventory_db': 'data/inventory.db', 'disable_penny': 'False'}
config.write(open(config_file, 'w'))
if 'location' not in config:
config['location'] = {'locations_db': 'data/locations.db', 'public_location_admin_manage': 'False', 'delete_public_locations_admins_only': 'False'}
config.write(open(config_file, 'w'))
# interface1 settings
interface1_type = config['interface'].get('type', 'serial')
port1 = config['interface'].get('port', '')
@@ -323,6 +327,9 @@ try:
coastalForecastDays = config['location'].getint('coastalForecastDays', 3) # default 3 days
# location alerts
alert_duration = config['location'].getint('alertDuration', 20) # default 20 minutes
if alert_duration < 10: # the API calls need throttle time
alert_duration = 10
eAlertBroadcastEnabled = config['location'].getboolean('eAlertBroadcastEnabled', False) # old deprecated name
ipawsAlertEnabled = config['location'].getboolean('ipawsAlertEnabled', False) # default False new ^
# Keep both in sync for backward compatibility
@@ -390,6 +397,11 @@ try:
inventory_db = config['inventory'].get('inventory_db', 'data/inventory.db')
disable_penny = config['inventory'].getboolean('disable_penny', False)
# location mapping
locations_db = config['location'].get('locations_db', 'data/locations.db')
public_location_admin_manage = config['location'].getboolean('public_location_admin_manage', False)
delete_public_locations_admins_only = config['location'].getboolean('delete_public_locations_admins_only', False)
# E-Mail Settings
sysopEmails = config['smtp'].get('sysopEmails', '').split(',')
enableSMTP = config['smtp'].getboolean('enableSMTP', False)
@@ -504,6 +516,10 @@ try:
autoBanThreshold = config['messagingSettings'].getint('autoBanThreshold', 5) # default 5 offenses
autoBanTimeframe = config['messagingSettings'].getint('autoBanTimeframe', 3600) # default 1 hour in seconds
apiThrottleValue = config['messagingSettings'].getint('apiThrottleValue', 20) # default 20 requests
# data persistence settings
dataPersistence_enabled = config.getboolean('dataPersistence', 'enabled', fallback=True) # default True
dataPersistence_interval = config.getint('dataPersistence', 'interval', fallback=300) # default 300 seconds (5 minutes)
except Exception as e:
print(f"System: Error reading config file: {e}")
print("System: Check the config.ini against config.template file for missing sections or values.")


@@ -37,19 +37,27 @@ def hf_band_conditions():
def solar_conditions():
# radio related solar conditions from hamqsl.com
solar_cond = ""
solar_cond = requests.get("https://www.hamqsl.com/solarxml.php", timeout=urlTimeoutSeconds)
if(solar_cond.ok):
solar_xml = xml.dom.minidom.parseString(solar_cond.text)
for i in solar_xml.getElementsByTagName("solardata"):
solar_a_index = i.getElementsByTagName("aindex")[0].childNodes[0].data
solar_k_index = i.getElementsByTagName("kindex")[0].childNodes[0].data
solar_xray = i.getElementsByTagName("xray")[0].childNodes[0].data
solar_flux = i.getElementsByTagName("solarflux")[0].childNodes[0].data
sunspots = i.getElementsByTagName("sunspots")[0].childNodes[0].data
signalnoise = i.getElementsByTagName("signalnoise")[0].childNodes[0].data
solar_cond = "A-Index: " + solar_a_index + "\nK-Index: " + solar_k_index + "\nSunspots: " + sunspots + "\nX-Ray Flux: " + solar_xray + "\nSolar Flux: " + solar_flux + "\nSignal Noise: " + signalnoise
else:
logger.error("Solar: Error fetching solar conditions")
try:
solar_cond = requests.get("https://www.hamqsl.com/solarxml.php", timeout=urlTimeoutSeconds)
if solar_cond.ok:
try:
solar_xml = xml.dom.minidom.parseString(solar_cond.text)
except Exception as e:
logger.error(f"Solar: XML parse error: {e}")
return ERROR_FETCHING_DATA
for i in solar_xml.getElementsByTagName("solardata"):
solar_a_index = i.getElementsByTagName("aindex")[0].childNodes[0].data
solar_k_index = i.getElementsByTagName("kindex")[0].childNodes[0].data
solar_xray = i.getElementsByTagName("xray")[0].childNodes[0].data
solar_flux = i.getElementsByTagName("solarflux")[0].childNodes[0].data
sunspots = i.getElementsByTagName("sunspots")[0].childNodes[0].data
signalnoise = i.getElementsByTagName("signalnoise")[0].childNodes[0].data
solar_cond = "A-Index: " + solar_a_index + "\nK-Index: " + solar_k_index + "\nSunspots: " + sunspots + "\nX-Ray Flux: " + solar_xray + "\nSolar Flux: " + solar_flux + "\nSignal Noise: " + signalnoise
else:
logger.error("Solar: Error fetching solar conditions")
solar_cond = ERROR_FETCHING_DATA
except Exception as e:
logger.error(f"Solar: Exception fetching or parsing: {e}")
solar_cond = ERROR_FETCHING_DATA
return solar_cond
@@ -68,6 +76,77 @@ def drap_xray_conditions():
xray_flux = ERROR_FETCHING_DATA
return xray_flux
def get_noaa_scales_summary():
"""
Show latest observed, 24-hour max, and predicted geomagnetic, storm, and blackout data.
"""
try:
response = requests.get("https://services.swpc.noaa.gov/products/noaa-scales.json", timeout=urlTimeoutSeconds)
if response.ok:
data = response.json()
today = datetime.utcnow().date()
latest_entry = None
latest_dt = None
max_g_today = None
max_g_scale = -1
predicted_g = None
predicted_g_scale = -1
# Find latest observed and 24-hour max for today
for entry in data.values():
date_str = entry.get("DateStamp")
time_str = entry.get("TimeStamp")
if date_str and time_str:
try:
dt = datetime.strptime(f"{date_str} {time_str}", "%Y-%m-%d %H:%M:%S")
g = entry.get("G", {})
g_scale = int(g.get("Scale", -1)) if g.get("Scale") else -1
# Latest observed for today
if dt.date() == today:
if latest_dt is None or dt > latest_dt:
latest_dt = dt
latest_entry = entry
# 24-hour max for today
if g_scale > max_g_scale:
max_g_scale = g_scale
max_g_today = entry
# Predicted (future)
elif dt.date() > today:
if g_scale > predicted_g_scale:
predicted_g_scale = g_scale
predicted_g = entry
except Exception:
continue
def format_entry(label, entry):
if not entry:
return f"{label}: No data"
g = entry.get("G", {})
s = entry.get("S", {})
r = entry.get("R", {})
parts = [f"{label} {g.get('Text', 'N/A')} (G:{g.get('Scale', 'N/A')})"]
# Only show storm if it's happening
if s.get("Text") and s.get("Text") != "none":
parts.append(f"Currently:{s.get('Text')} (S:{s.get('Scale', 'N/A')})")
# Only show blackout if it's not "none" or scale is not 0
if r.get("Text") and r.get("Text") != "none" and r.get("Scale") not in [None, "0", 0]:
parts.append(f"RF Blackout:{r.get('Text')} (R:{r.get('Scale', 'N/A')})")
return "\n".join(parts)
output = []
#output.append(format_entry("Latest Observed", latest_entry))
output.append(format_entry("24hrMax:", max_g_today))
output.append(format_entry("Predicted:", predicted_g))
return "\n".join(output)
else:
return NO_ALERTS
except Exception as e:
logger.warning(f"Error fetching services.swpc.noaa.gov: {e}")
return ERROR_FETCHING_DATA
def get_sun(lat=0, lon=0):
# get sunrise and sunset times using callers location or default
obs = ephem.Observer()


@@ -331,24 +331,6 @@ if ble_count > 1:
logger.critical(f"System: Multiple BLE interfaces detected. Only one BLE interface is allowed. Exiting")
exit()
def xor_hash(data: bytes) -> int:
"""Compute an XOR hash from bytes."""
result = 0
for char in data:
result ^= char
return result
def generate_hash(name: str, key: str) -> int:
"""generate the channel number by hashing the channel name and psk"""
if key == "AQ==":
key = "1PG7OiApB1nwvP+rz05pAQ=="
replaced_key = key.replace("-", "+").replace("_", "/")
key_bytes = base64.b64decode(replaced_key.encode("utf-8"))
h_name = xor_hash(bytes(name, "utf-8"))
h_key = xor_hash(key_bytes)
result: int = h_name ^ h_key
return result
# Initialize interfaces
logger.debug(f"System: Initializing Interfaces")
interface1 = interface2 = interface3 = interface4 = interface5 = interface6 = interface7 = interface8 = interface9 = None
@@ -403,44 +385,90 @@ for i in range(1, 10):
globals()[f'myNodeNum{i}'] = 777
# Fetch channel list from each device
channel_list = []
for i in range(1, 10):
if globals().get(f'interface{i}') and globals().get(f'interface{i}_enabled'):
_channel_cache = None
def build_channel_cache(force_refresh: bool = False):
"""
Build and cache channel_list from interfaces once (or when forced).
"""
global _channel_cache
if _channel_cache is not None and not force_refresh:
return _channel_cache
cache = []
for i in range(1, 10):
if not globals().get(f'interface{i}') or not globals().get(f'interface{i}_enabled'):
continue
try:
node = globals()[f'interface{i}'].getNode('^local')
channels = node.channels
channel_dict = {}
for channel in channels:
if hasattr(channel, 'role') and channel.role:
channel_name = getattr(channel.settings, 'name', '').strip()
channel_number = getattr(channel, 'index', 0)
# Only add channels with a non-empty name
if channel_name:
channel_dict[channel_name] = channel_number
channel_list.append({
"interface_id": i,
"channels": channel_dict
})
logger.debug(f"System: Fetched Channel List from Device{i}")
except Exception as e:
logger.error(f"System: Error fetching channel list from Device{i}: {e}")
# Try to use the node-provided channel/hash table if available
try:
ch_hash_table_raw = node.get_channels_with_hash()
#print(f"System: Device{i} Channel Hash Table: {ch_hash_table_raw}")
except Exception:
logger.warning(f"System: API version outdated; update with `pip3 install --upgrade meshtastic[cli]`")
ch_hash_table_raw = []
# add channel hash to channel_list
for device in channel_list:
interface_id = device["interface_id"]
interface = globals().get(f'interface{interface_id}')
for channel_name, channel_number in device["channels"].items():
psk_base64 = "AQ==" # default PSK
channel_hash = generate_hash(channel_name, psk_base64)
# add hash to the channel entry in channel_list under key 'hash'
for entry in channel_list:
if entry["interface_id"] == interface_id:
entry["channels"][channel_name] = {
"number": channel_number,
"hash": channel_hash
}
channel_dict = {}
# Use the hash table as the source of truth for channels
if isinstance(ch_hash_table_raw, list):
for entry in ch_hash_table_raw:
channel_name = entry.get("name", "").strip()
channel_number = entry.get("index")
ch_hash = entry.get("hash")
role = entry.get("role", "")
# Always add PRIMARY/SECONDARY channels, even if name is empty
if role in ("PRIMARY", "SECONDARY"):
channel_dict[channel_name if channel_name else f"Channel{channel_number}"] = {
"number": channel_number,
"hash": ch_hash
}
elif isinstance(ch_hash_table_raw, dict):
for channel_name, ch_hash in ch_hash_table_raw.items():
channel_dict[channel_name] = {"number": None, "hash": ch_hash}
# Always add the interface, even if no named channels
cache.append({"interface_id": i, "channels": channel_dict})
logger.debug(f"System: Fetched Channel List from Device{i} (cached)")
except Exception as e:
logger.debug(f"System: Error fetching channel list from Device{i}: {e}")
_channel_cache = cache
return _channel_cache
def refresh_channel_cache():
"""Force rebuild of channel cache (call only when channel config changes)."""
return build_channel_cache(force_refresh=True)
channel_list = build_channel_cache()
#print(f"System: Channel Cache Built: {channel_list}")
#### FUN-ctions ####
def resolve_channel_name(channel_number, rxNode=1, interface_obj=None):
"""
Resolve a channel number/hash to its name using cached channel list.
"""
try:
# ensure cache exists (cheap)
cached = build_channel_cache()
# quick search in cache first (no node calls)
for device in cached:
if device.get("interface_id") == rxNode:
device_channels = device.get("channels", {}) or {}
# info is dict: {name: {'number': X, 'hash': Y}}
for chan_name, info in device_channels.items():
try:
if isinstance(info, dict):
if str(info.get('number')) == str(channel_number) or str(info.get('hash')) == str(channel_number):
return (chan_name, info.get('number') or info.get('hash'))
else:
if str(info) == str(channel_number):
return (chan_name, info)
except Exception:
continue
break # stop searching other devices
except Exception as e:
logger.debug(f"System: Error resolving channel name from cache: {e}")
def cleanup_memory():
"""Clean up memory by limiting list sizes and removing stale entries"""
@@ -976,7 +1004,7 @@ def stringSafeCheck(s, fromID=0):
if len(s) > 1000:
return False
# Check for single-character injections
single_injection_chars = [';', '|', '}', '>', ')']
single_injection_chars = [';', '|', '}', '>']
if any(c in s for c in single_injection_chars):
return False # injection character found
# Check for multi-character patterns
@@ -1276,8 +1304,8 @@ def handleAlertBroadcast(deviceID=1):
if should_send_alert("overdue", overdueAlerts, min_interval=300): # 5 minutes interval for overdue alerts
send_message(overdueAlerts, emergency_responder_alert_channel, 0, emergency_responder_alert_interface)
# Only allow API call every 20 minutes
if not (clock.minute % 20 == 0 and clock.second <= 17):
# Only allow API call every alert_duration minutes at xx:00, xx:20, xx:40
if not (clock.minute % alert_duration == 0 and clock.second <= 17):
return False
# Collect alerts
@@ -1320,7 +1348,7 @@ def handleAlertBroadcast(deviceID=1):
def onDisconnect(interface):
# Handle disconnection of the interface
logger.warning(f"System: Abrupt Disconnection of Interface detected")
logger.warning(f"System: Abrupt Disconnection of Interface detected, attempting reconnect...")
interface.close()
# Telemetry Functions
@@ -1466,6 +1494,7 @@ def initializeMeshLeaderboard():
'lowestBattery': {'nodeID': None, 'value': 101, 'timestamp': 0}, # 🪫
'longestUptime': {'nodeID': None, 'value': 0, 'timestamp': 0}, # 🕰️
'fastestSpeed': {'nodeID': None, 'value': 0, 'timestamp': 0}, # 🚓
'fastestAirSpeed': {'nodeID': None, 'value': 0, 'timestamp': 0}, # ✈️
'highestAltitude': {'nodeID': None, 'value': 0, 'timestamp': 0}, # 🚀
'tallestNode': {'nodeID': None, 'value': 0, 'timestamp': 0}, # 🪜
'coldestTemp': {'nodeID': None, 'value': 999, 'timestamp': 0}, # 🥶
@@ -1612,30 +1641,39 @@ def consumeMetadata(packet, rxNode=0, channel=-1):
positionMetadata[nodeID] = {}
for key in position_stats_keys:
positionMetadata[nodeID][key] = position_data.get(key, 0)
# Track fastest speed 🚓
if position_data.get('groundSpeed') is not None:
speed = position_data['groundSpeed']
if speed > meshLeaderboard['fastestSpeed']['value']:
meshLeaderboard['fastestSpeed'] = {'nodeID': nodeID, 'value': speed, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: 🚓 New speed record: {speed} km/h from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
# Track highest altitude 🚀 (also log if over highfly_altitude threshold)
if position_data.get('altitude') is not None:
altitude = position_data['altitude']
if altitude > highfly_altitude:
if altitude > meshLeaderboard['highestAltitude']['value']:
meshLeaderboard['highestAltitude'] = {'nodeID': nodeID, 'value': altitude, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: 🚀 New altitude record: {altitude}m from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
# Track tallest node 🪜 (under the highfly_altitude limit by 100m)
# Track altitude and speed records
if position_data.get('altitude') is not None:
altitude = position_data['altitude']
highflying = altitude > highfly_altitude
# Tallest node (below highfly_altitude - 100m)
if altitude < (highfly_altitude - 100):
if altitude > meshLeaderboard['tallestNode']['value']:
meshLeaderboard['tallestNode'] = {'nodeID': nodeID, 'value': altitude, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: 🪜 New tallest node record: {altitude}m from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
# Highest altitude (above highfly_altitude)
if highflying:
if altitude > meshLeaderboard['highestAltitude']['value']:
meshLeaderboard['highestAltitude'] = {'nodeID': nodeID, 'value': altitude, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: 🚀 New altitude record: {altitude}m from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
# Track speed records
if position_data.get('groundSpeed') is not None:
speed = position_data['groundSpeed']
# Fastest ground speed (not highflying)
if not highflying and speed > meshLeaderboard['fastestSpeed']['value']:
meshLeaderboard['fastestSpeed'] = {'nodeID': nodeID, 'value': speed, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: 🚓 New speed record: {speed} km/h from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
# Fastest air speed (highflying)
elif highflying and speed > meshLeaderboard['fastestAirSpeed']['value']:
meshLeaderboard['fastestAirSpeed'] = {'nodeID': nodeID, 'value': speed, 'timestamp': time.time()}
if logMetaStats:
logger.info(f"System: ✈️ New air speed record: {speed} km/h from NodeID:{nodeID} ShortName:{get_name_from_number(nodeID, 'short', rxNode)}")
# if altitude is over highfly_altitude, log and message for high-flying nodes not in highfly_ignoreList
if position_data.get('altitude', 0) > highfly_altitude and highfly_enabled and str(nodeID) not in highfly_ignoreList and not isNodeBanned(nodeID):
logger.info(f"System: High Altitude {position_data['altitude']}m on Device: {rxNode} Channel: {channel} NodeID:{nodeID} Lat:{position_data.get('latitude', 0)} Lon:{position_data.get('longitude', 0)}")
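The restructured block above routes each position report by a high-fly threshold: tallest-node and ground-speed records apply below it, highest-altitude and air-speed records above it. A self-contained sketch of that routing (the threshold value, function name, and initial board values are placeholders, not the bot's configuration):

```python
import time

HIGHFLY_ALTITUDE = 1000  # metres; illustrative threshold for this sketch

def update_records(board: dict, node_id: int, altitude: float, speed: float) -> dict:
    """Mirror the hunk's split: ground records below the threshold,
    air records above it."""
    now = time.time()
    highflying = altitude > HIGHFLY_ALTITUDE
    # Tallest node: only well below the high-fly threshold (threshold - 100m)
    if not highflying and altitude < (HIGHFLY_ALTITUDE - 100) and altitude > board['tallestNode']['value']:
        board['tallestNode'] = {'nodeID': node_id, 'value': altitude, 'timestamp': now}
    # Highest altitude: only above the threshold
    if highflying and altitude > board['highestAltitude']['value']:
        board['highestAltitude'] = {'nodeID': node_id, 'value': altitude, 'timestamp': now}
    # Speed goes to the ground or air record depending on the same flag
    key = 'fastestAirSpeed' if highflying else 'fastestSpeed'
    if speed > board[key]['value']:
        board[key] = {'nodeID': node_id, 'value': speed, 'timestamp': now}
    return board
```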
@@ -1935,6 +1973,16 @@ def get_mesh_leaderboard(msg, fromID, deviceID):
result += f"🚓 Speed: {value_kmh} km/h {get_name_from_number(nodeID, 'short', 1)}\n"
else:
result += f"🚓 Speed: {value_mph} mph {get_name_from_number(nodeID, 'short', 1)}\n"
# Tallest node
if meshLeaderboard['tallestNode']['nodeID']:
nodeID = meshLeaderboard['tallestNode']['nodeID']
value_m = meshLeaderboard['tallestNode']['value']
value_ft = round(value_m * 3.28084, 0)
if use_metric:
result += f"🪜 Tallest: {int(round(value_m, 0))}m {get_name_from_number(nodeID, 'short', 1)}\n"
else:
result += f"🪜 Tallest: {int(value_ft)}ft {get_name_from_number(nodeID, 'short', 1)}\n"
# Highest altitude
if meshLeaderboard['highestAltitude']['nodeID']:
@@ -1946,15 +1994,15 @@ def get_mesh_leaderboard(msg, fromID, deviceID):
else:
result += f"🚀 Altitude: {int(value_ft)}ft {get_name_from_number(nodeID, 'short', 1)}\n"
# Tallest node
if meshLeaderboard['tallestNode']['nodeID']:
nodeID = meshLeaderboard['tallestNode']['nodeID']
value_m = meshLeaderboard['tallestNode']['value']
value_ft = round(value_m * 3.28084, 0)
# Fastest airspeed
if meshLeaderboard['fastestAirSpeed']['nodeID']:
nodeID = meshLeaderboard['fastestAirSpeed']['nodeID']
value_kmh = round(meshLeaderboard['fastestAirSpeed']['value'], 1)
value_mph = round(value_kmh / 1.60934, 1)
if use_metric:
result += f"🪜 Tallest: {int(round(value_m, 0))}m {get_name_from_number(nodeID, 'short', 1)}\n"
result += f"✈️ Airspeed: {value_kmh} km/h {get_name_from_number(nodeID, 'short', 1)}\n"
else:
result += f"🪜 Tallest: {int(value_ft)}ft {get_name_from_number(nodeID, 'short', 1)}\n"
result += f"✈️ Airspeed: {value_mph} mph {get_name_from_number(nodeID, 'short', 1)}\n"
# Coldest temperature
if meshLeaderboard['coldestTemp']['nodeID']:
@@ -2040,7 +2088,7 @@ def get_mesh_leaderboard(msg, fromID, deviceID):
result = result.strip()
if result == "📊Leaderboard📊\n":
result += "No records yet! Keep meshing! 📡"
result += "No records yet! Keep meshing! 📡\nFirmware 2.7+: `Broadcast Device Metrics` in Telemetry Config needs to be enabled for full use. Ideally not on AQ=="
return result
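The leaderboard output converts units with fixed factors. A sketch of the two conversions exactly as the hunk applies them (helper names are illustrative):

```python
def kmh_to_mph(kmh: float) -> float:
    # Same divisor the leaderboard output uses (1 mph = 1.60934 km/h)
    return round(kmh / 1.60934, 1)

def metres_to_feet(m: float) -> int:
    # Same factor the leaderboard output uses (1 m = 3.28084 ft)
    return int(round(m * 3.28084, 0))
```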
@@ -2377,8 +2425,36 @@ async def watchdog():
load_bbsdm()
load_bbsdb()
def saveAllData():
try:
# Save BBS data if enabled
if bbs_enabled:
save_bbsdb()
save_bbsdm()
logger.debug("Persistence: BBS data saved")
# Save leaderboard data if enabled
if logMetaStats:
saveLeaderboard()
logger.debug("Persistence: Leaderboard data saved")
# Save ban list
save_bbsBanList()
logger.debug("Persistence: Ban list saved")
logger.info("Persistence: Save completed")
except Exception as e:
logger.error(f"Persistence: Save error: {e}")
async def dataPersistenceLoop():
"""Data persistence service loop for periodic data saving"""
logger.debug("Persistence: Loop started")
while True:
await asyncio.sleep(dataPersistence_interval)
saveAllData()
def exit_handler():
# Close the interface and save the BBS messages
# Close the interface and save all data
logger.debug(f"System: Closing Autoresponder")
try:
logger.debug(f"System: Closing Interface1")
@@ -2390,12 +2466,9 @@ def exit_handler():
globals()[f'interface{i}'].close()
except Exception as e:
logger.error(f"System: closing: {e}")
if bbs_enabled:
save_bbsdb()
save_bbsdm()
logger.debug(f"System: BBS Messages Saved")
if logMetaStats:
saveLeaderboard()
saveAllData()
logger.debug(f"System: Exiting")
asyncLoop.stop()
asyncLoop.close()
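The new `dataPersistenceLoop` above is a plain sleep-then-save loop handed to `asyncio.create_task`. A runnable sketch under test-friendly assumptions (a bounded iteration count and a tiny interval stand in for the bot's `dataPersistence_interval`):

```python
import asyncio

async def persistence_loop(save_fn, interval_s: float, stop_after: int):
    """Sleep first, then save — same ordering as the bot's dataPersistenceLoop.
    The real loop runs forever; this sketch stops after stop_after iterations."""
    for _ in range(stop_after):
        await asyncio.sleep(interval_s)
        save_fn()

saves = []
asyncio.run(persistence_loop(lambda: saves.append(1), interval_s=0.01, stop_after=3))
```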


@@ -174,7 +174,9 @@ class TestBot(unittest.TestCase):
self.assertIsInstance(haha, str)
def test_tictactoe_initial_and_move(self):
from games.tictactoe import tictactoe
from games.tictactoe import TicTacToe
# Create an instance (no display module required for tests)
tictactoe = TicTacToe(display_module=None)
user_id = "testuser"
# Start a new game (no move yet)
initial = tictactoe.play(user_id, "")


@@ -1,78 +0,0 @@
# modules/test_checklist.py
import os
import sys
# Add the parent directory to sys.path to allow module imports
parent_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
sys.path.insert(0, parent_path)
import unittest
from unittest.mock import patch
from checklist import process_checklist_command, initialize_checklist_database
import time
class TestProcessChecklistCommand(unittest.TestCase):
def setUp(self):
# Always start with a fresh DB
initialize_checklist_database()
# Patch settings for consistent test behavior
patcher1 = patch('modules.checklist.reverse_in_out', False)
patcher2 = patch('modules.checklist.bbs_ban_list', [])
patcher3 = patch('modules.checklist.bbs_admin_list', ['999'])
self.mock_reverse = patcher1.start()
self.mock_ban = patcher2.start()
self.mock_admin = patcher3.start()
self.addCleanup(patcher1.stop)
self.addCleanup(patcher2.stop)
self.addCleanup(patcher3.stop)
def test_checkin_command(self):
result = process_checklist_command(1, "checkin test note", name="TESTUSER", location=["loc"])
self.assertIn("Checked✅In: TESTUSER", result)
def test_checkout_command(self):
# First checkin
process_checklist_command(1, "checkin test note", name="TESTUSER", location=["loc"])
# Then checkout
result = process_checklist_command(1, "checkout", name="TESTUSER", location=["loc"])
self.assertIn("Checked⌛Out: TESTUSER", result)
def test_checkin_with_interval(self):
result = process_checklist_command(1, "checkin 15 hiking", name="TESTUSER", location=["loc"])
self.assertIn("monitoring every 15min", result)
def test_checkout_all(self):
# Multiple checkins
process_checklist_command(1, "checkin note1", name="TESTUSER", location=["loc"])
process_checklist_command(1, "checkin note2", name="TESTUSER", location=["loc"])
result = process_checklist_command(1, "checkout all", name="TESTUSER", location=["loc"])
self.assertIn("Checked out", result)
self.assertIn("check-ins for TESTUSER", result)
def test_checklistapprove_nonadmin(self):
process_checklist_command(1, "checkin foo", name="FOO", location=["loc"])
result = process_checklist_command(2, "checklistapprove 1", name="NOTADMIN", location=["loc"])
self.assertNotIn("approved", result)
def test_checklistdeny_nonadmin(self):
process_checklist_command(1, "checkin foo", name="FOO", location=["loc"])
result = process_checklist_command(2, "checklistdeny 1", name="NOTADMIN", location=["loc"])
self.assertNotIn("denied", result)
def test_help_command(self):
result = process_checklist_command(1, "checklist ?", name="TESTUSER", location=["loc"])
self.assertIn("Command: checklist", result)
def test_checklist_listing(self):
process_checklist_command(1, "checkin foo", name="FOO", location=["loc"])
result = process_checklist_command(1, "checklist", name="FOO", location=["loc"])
self.assertIsInstance(result, str)
self.assertIn("checked-In", result)
def test_invalid_command(self):
result = process_checklist_command(1, "foobar", name="FOO", location=["loc"])
self.assertEqual(result, "Invalid command.")
if __name__ == "__main__":
unittest.main()


@@ -104,12 +104,12 @@ def handle_ping(message_from_id, deviceID, message, hop, snr, rssi, isDM, chann
#flood
msg += " [F]"
if (float(snr) != 0 or float(rssi) != 0) and "Hops" not in hop:
if (float(snr) != 0 or float(rssi) != 0) and "Hop" not in hop:
msg += f"\nSNR:{snr} RSSI:{rssi}"
elif "Hops" in hop:
msg += f"\n{hop}🐇 "
else:
msg += "\nflood route"
elif "Hop" in hop:
# janky, remove the words Gateway, Direct, or MQTT if present
hop = hop.replace("Gateway", "").replace("Direct", "").replace("MQTT", "").strip()
msg += f"\n{hop} "
if "@" in message:
msg = msg + " @" + message.split("@")[1]
@@ -279,24 +279,38 @@ def onReceive(packet, interface):
# check if the packet has a channel flag use it ## FIXME needs to be channel hash lookup
if packet.get('channel'):
channel_number = packet.get('channel')
# get channel name from channel number from connected devices
for device in channel_list:
if device["interface_id"] == rxNode:
device_channels = device['channels']
for chan_name, info in device_channels.items():
if info['number'] == channel_number:
channel_name = chan_name
channel_name = "unknown"
try:
res = resolve_channel_name(channel_number, rxNode, interface)
if res:
try:
channel_name, _ = res
except Exception:
channel_name = "unknown"
else:
# Search all interfaces for this channel
cache = build_channel_cache()
found_on_other = None
for device in cache:
for chan_name, info in device.get("channels", {}).items():
if str(info.get('number')) == str(channel_number) or str(info.get('hash')) == str(channel_number):
found_on_other = device.get("interface_id")
found_chan_name = chan_name
break
if found_on_other:
break
if found_on_other and found_on_other != rxNode:
logger.debug(
f"System: Received Packet on Channel:{channel_number} ({found_chan_name}) on Interface:{rxNode}, but this channel is configured on Interface:{found_on_other}"
)
except Exception as e:
logger.debug(f"System: channel resolution error: {e}")
# get channel hashes for the interface
device = next((d for d in channel_list if d["interface_id"] == rxNode), None)
if device:
# Find the channel name whose hash matches channel_number
for chan_name, info in device['channels'].items():
if info['hash'] == channel_number:
print(f"Matched channel hash {info['hash']} to channel name {chan_name}")
channel_name = chan_name
break
#debug channel info
# if "unknown" in str(channel_name):
# logger.debug(f"System: Received Packet on Channel:{channel_number} on Interface:{rxNode}")
# else:
# logger.debug(f"System: Received Packet on Channel:{channel_number} Name:{channel_name} on Interface:{rxNode}")
# check if the packet has a simulator flag
simulator_flag = packet.get('decoded', {}).get('simulator', False)
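The fallback path above scans every cached interface for a channel whose configured number or hash matches the packet's channel field. A minimal sketch of that lookup (function name and return shape are illustrative):

```python
def find_channel(cache: list, channel_number):
    """Match a packet's channel field against either the configured channel
    number or its hash, across all cached interfaces.
    Returns (interface_id, channel_name), or (None, None) if no match."""
    for device in cache:
        for chan_name, info in device.get("channels", {}).items():
            if str(info.get('number')) == str(channel_number) or str(info.get('hash')) == str(channel_number):
                return device.get("interface_id"), chan_name
    return None, None
```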
@@ -370,25 +384,32 @@ def onReceive(packet, interface):
else:
hop_count = hop_away
if hop == "" and hop_count > 0:
if hop_count > 0:
# set hop string from calculated hop count
hop = f"{hop_count} Hop" if hop_count == 1 else f"{hop_count} Hops"
if hop_start == hop_limit and "lora" in str(transport_mechanism).lower() and (snr != 0 or rssi != 0):
if hop_start == hop_limit and "lora" in str(transport_mechanism).lower() and (snr != 0 or rssi != 0) and hop_count == 0:
# 2.7+ firmware direct hop over LoRa
hop = "Direct"
if ((hop_start == 0 and hop_limit >= 0) or via_mqtt or ("mqtt" in str(transport_mechanism).lower())):
if via_mqtt or "mqtt" in str(transport_mechanism).lower():
hop = "MQTT"
elif hop == "" and hop_count == 0 and (snr != 0 or rssi != 0):
# this came from a UDP but we had signal info so gateway is used
hop = "Gateway"
elif "unknown" in str(transport_mechanism).lower() and (snr == 0 and rssi == 0):
# we for sure detected this sourced from a UDP like host
via_mqtt = True
elif "udp" in str(transport_mechanism).lower():
hop = "Gateway"
if hop in ("MQTT", "Gateway") and hop_count > 0:
hop = f"{hop_count} Hops"
hop = f" {hop_count} Hops"
# Add relay node info if present
if packet.get('relayNode') is not None:
relay_val = packet['relayNode']
last_byte = relay_val & 0xFF
if last_byte == 0x00:
hex_val = 'FF'
else:
hex_val = f"{last_byte:02X}"
hop += f" (Relay:{hex_val})"
if my_settings.enableHopLogs:
logger.debug(f"System: Packet HopDebugger: hop_away:{hop_away} hop_limit:{hop_limit} hop_start:{hop_start} calculated_hop_count:{hop_count} final_hop_value:{hop} via_mqtt:{via_mqtt} transport_mechanism:{transport_mechanism} Hostname:{rxNodeHostName}")
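The relay annotation added above keeps only the low byte of `relayNode` and renders it as two uppercase hex digits, with `0x00` displayed as `FF` per the hunk's special case. As a sketch (helper name is illustrative):

```python
def relay_suffix(relay_val: int) -> str:
    # Keep only the last byte of the relay node ID
    last_byte = relay_val & 0xFF
    # The hunk maps a zero byte to 'FF' rather than '00'
    hex_val = 'FF' if last_byte == 0x00 else f"{last_byte:02X}"
    return f" (Relay:{hex_val})"
```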
@@ -650,8 +671,11 @@ async def main():
# Create core tasks
tasks.append(asyncio.create_task(start_rx(), name="mesh_rx"))
tasks.append(asyncio.create_task(watchdog(), name="watchdog"))
# Add optional tasks
if my_settings.dataPersistence_enabled:
tasks.append(asyncio.create_task(dataPersistenceLoop(), name="data_persistence"))
if my_settings.file_monitor_enabled:
tasks.append(asyncio.create_task(handleFileWatcher(), name="file_monitor"))


@@ -1,7 +1,6 @@
meshtastic
pubsub
datetime
pyephem
PyPubSub
ephem
requests
maidenhead
beautifulsoup4


@@ -13,4 +13,9 @@ This is not a full turnkey setup for Docker yet?
`docker compose run ollama`
`docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://127.0.0.1:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main`
`docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://127.0.0.1:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main`
### Other Stuff
A cool tool to use with RAG creation with open-webui
- https://github.com/microsoft/markitdown


@@ -18,7 +18,7 @@ try:
from pubsub import pub
from meshtastic.protobuf import mesh_pb2, portnums_pb2
except ImportError:
print("meshtastic API not found. pip install -U meshtastic")
print("meshtastic API not found. pip install -U meshtastic")
exit(1)
try:
@@ -26,10 +26,13 @@ try:
from mudp.encryption import generate_hash
except ImportError:
print("mUDP module not found. pip install -U mudp")
print("If using a venv, run 'source venv/bin/activate' first, then 'pip install -U mudp pygame-ce'")
print("Use 'deactivate' to exit the venv when done")
exit(1)
try:
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from modules.games.tictactoe_vid import handle_tictactoe_payload, ttt_main
from modules.games.battleship_vid import parse_battleship_message
except Exception as e:
print(f"Error importing modules: {e}\nRun this program from the main project directory, e.g. 'python3 script/game_serve.py'")
exit(1)
@@ -130,6 +133,13 @@ def on_private_app(packet: mesh_pb2.MeshPacket, addr=None):
add_seen_message(msg_tuple)
handle_tictactoe_payload(packet_payload, from_id=packet_from_id)
print(f"[Channel: {rx_channel}] [Port: {port_name}] Tic-Tac-Toe Message payload:", packet_payload)
elif packet_payload.startswith("MBSP:"):
packet_payload = packet_payload[5:] # remove 'MBSP:'
msg_tuple = (getattr(packet, 'from', None), packet.to, packet_payload)
if msg_tuple not in seen_messages:
add_seen_message(msg_tuple)
#parse_battleship_message(packet_payload, from_id=packet_from_id)
print(f"[Channel: {rx_channel}] [Port: {port_name}] Battleship Message payload:", packet_payload)
else:
msg_tuple = (getattr(packet, 'from', None), packet.to, packet_payload)
if msg_tuple not in seen_messages:
@@ -169,7 +179,7 @@ def main():
print(r"""
___
/ \
| HOT | Mesh Bot Display Server v0.9.5
| HOT | Mesh Bot Display Server v0.9.5b
| TOT | (aka tot-bot)
\___/